| hash | date | author | commit_message | is_merge | git_diff | type | masked_commit_message |
|---|---|---|---|---|---|---|---|
a7f16a5bbd104790e485ccf3df32eec34ab6949c
|
2019-01-17 17:01:12
|
Xiang Dai
|
helm: remove promtail sa setting (#160)
| false
|
diff --git a/production/helm/templates/promtail/_helpers.tpl b/production/helm/templates/promtail/_helpers.tpl
index a1a5bccefc7c0..dcbe0ea260dbc 100644
--- a/production/helm/templates/promtail/_helpers.tpl
+++ b/production/helm/templates/promtail/_helpers.tpl
@@ -36,7 +36,7 @@ Create the name of the service account
*/}}
{{- define "promtail.serviceAccountName" -}}
{{- if .Values.serviceAccount.create -}}
- {{ default (include "promtail.fullname" .) .Values.serviceAccount.name }}
+ {{ default (include "loki.fullname" .) .Values.serviceAccount.name }}
{{- else -}}
{{ default "default" .Values.serviceAccount.name }}
{{- end -}}
diff --git a/production/helm/templates/promtail/serviceaccount.yaml b/production/helm/templates/promtail/serviceaccount.yaml
deleted file mode 100644
index cce8581e20fad..0000000000000
--- a/production/helm/templates/promtail/serviceaccount.yaml
+++ /dev/null
@@ -1,11 +0,0 @@
-{{- if .Values.serviceAccount.create }}
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- labels:
- app: {{ template "promtail.name" . }}
- chart: {{ .Chart.Name }}-{{ .Chart.Version }}
- heritage: {{ .Release.Service }}
- release: {{ .Release.Name }}
- name: {{ template "promtail.serviceAccountName" . }}
-{{- end }}
\ No newline at end of file
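For reference, the `promtail.serviceAccountName` helper above resolves its name from chart values shaped like the following. This is a hypothetical values.yaml fragment showing only the keys the template reads; it is not taken from the chart itself:
```yaml
serviceAccount:
  # When true, the chart creates the ServiceAccount and, unless a name is
  # given, derives it from the chart's fullname helper.
  create: true
  # Leave empty to fall back to the derived name (or "default" when
  # create is false).
  name: ""
```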
|
helm
|
remove promtail sa setting (#160)
|
9ebc70f406cfac737943f3ac711883e9640b9ba5
|
2024-11-06 15:23:11
|
chengehe
|
chore: add missing backtick (#14712)
| false
|
diff --git a/clients/pkg/logentry/stages/extensions_test.go b/clients/pkg/logentry/stages/extensions_test.go
index 2cbe411c2a032..c9065f12d31f0 100644
--- a/clients/pkg/logentry/stages/extensions_test.go
+++ b/clients/pkg/logentry/stages/extensions_test.go
@@ -122,7 +122,7 @@ func TestCRI_tags(t *testing.T) {
},
expected: []string{
"partial line 1 log finished", // belongs to stream `{foo="bar"}`
- "partial line 2 another full log", // belongs to stream `{foo="bar2"}
+ "partial line 2 another full log", // belongs to stream `{foo="bar2"}`
},
},
{
|
chore
|
add missing backtick (#14712)
|
c8a4afe2ac458698533de26104d93dbdee7d641c
|
2024-12-19 02:18:50
|
Christian Haudum
|
chore: Update ASCII diagram of chunk binary encoding (#15386)
| false
|
diff --git a/docs/sources/operations/storage/_index.md b/docs/sources/operations/storage/_index.md
index 3b6bef94f9dce..f8918f87009a9 100644
--- a/docs/sources/operations/storage/_index.md
+++ b/docs/sources/operations/storage/_index.md
@@ -135,29 +135,56 @@ See the [IBM Cloud Object Storage section](https://grafana.com/docs/loki/<LOKI_V
## Chunk Format
```
- -------------------------------------------------------------------
- | | |
- | MagicNumber(4b) | version(1b) |
- | | |
- -------------------------------------------------------------------
- | block-1 bytes | checksum (4b) |
- -------------------------------------------------------------------
- | block-2 bytes | checksum (4b) |
- -------------------------------------------------------------------
- | block-n bytes | checksum (4b) |
- -------------------------------------------------------------------
- | #blocks (uvarint) |
- -------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
- -------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
- -------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
- -------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) |
- -------------------------------------------------------------------
- | checksum(from #blocks) |
- -------------------------------------------------------------------
- | metasOffset - offset to the point with #blocks |
- -------------------------------------------------------------------
+// Header
++-----------------------------------+
+| Magic Number (uint32, 4 bytes) |
++-----------------------------------+
+| Version (1 byte) |
++-----------------------------------+
+| Encoding (1 byte) |
++-----------------------------------+
+
+// Blocks
++--------------------+----------------------------+
+| block 1 (n bytes) | checksum (uint32, 4 bytes) |
++--------------------+----------------------------+
+| block 1 (n bytes) | checksum (uint32, 4 bytes) |
++--------------------+----------------------------+
+| ... |
++--------------------+----------------------------+
+| block N (n bytes) | checksum (uint32, 4 bytes) |
++--------------------+----------------------------+
+
+// Metas
++------------------------------------------------------------------------------------------------------------------------+
+| #blocks (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| #entries (uvarint) | minTs (uvarint) | maxTs (uvarint) | offset (uvarint) | len (uvarint) | uncompressedSize (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| #entries (uvarint) | minTs (uvarint) | maxTs (uvarint) | offset (uvarint) | len (uvarint) | uncompressedSize (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| ... |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| #entries (uvarint) | minTs (uvarint) | maxTs (uvarint) | offset (uvarint) | len (uvarint) | uncompressedSize (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| checksum (uint32, 4 bytes) |
++------------------------------------------------------------------------------------------------------------------------+
+
+// Structured Metadata
++---------------------------------+
+| #labels (uvarint) |
++---------------+-----------------+
+| len (uvarint) | value (n bytes) |
++---------------+-----------------+
+| ... |
++---------------+-----------------+
+| checksum (uint32, 4 bytes) |
++---------------------------------+
+
+// Footer
++-----------------------+--------------------------+
+| len (uint64, 8 bytes) | offset (uint64, 8 bytes) | // offset to Structured Metadata
++-----------------------+--------------------------+
+| len (uint64, 8 bytes) | offset (uint64, 8 bytes) | // offset to Metas
++-----------------------+--------------------------+
```
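Read alongside the Metas section of the diagram, each per-block meta row is simply six consecutive uvarints. The following is a minimal Go sketch of decoding one row; the struct and function names are illustrative, not Loki's actual decoder, and error handling for truncated input is omitted for brevity:
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// blockMeta mirrors the six per-block fields in the Metas section above.
type blockMeta struct {
	numEntries, minTs, maxTs, offset, length, uncompressedSize uint64
}

// readBlockMeta decodes one meta row of six uvarints and reports how many
// bytes it consumed.
func readBlockMeta(b []byte) (m blockMeta, n int) {
	for _, f := range []*uint64{&m.numEntries, &m.minTs, &m.maxTs, &m.offset, &m.length, &m.uncompressedSize} {
		v, sz := binary.Uvarint(b[n:])
		*f = v
		n += sz
	}
	return m, n
}

func main() {
	// Encode a sample meta row, then decode it back.
	var buf []byte
	for _, v := range []uint64{100, 1710000000, 1710000060, 4096, 2048, 8192} {
		buf = binary.AppendUvarint(buf, v)
	}
	m, n := readBlockMeta(buf)
	fmt.Printf("%+v consumed %d bytes\n", m, n)
}
```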
diff --git a/pkg/chunkenc/README.md b/pkg/chunkenc/README.md
index f697bb107853c..36b3f0ed66622 100644
--- a/pkg/chunkenc/README.md
+++ b/pkg/chunkenc/README.md
@@ -1,28 +1,56 @@
-# Chunk format
+# Chunk v4 format
```
- | | |
- | MagicNumber(4b) | version(1b) |
- | | |
- --------------------------------------------------
- | block-1 bytes | checksum (4b) |
- --------------------------------------------------
- | block-2 bytes | checksum (4b) |
- --------------------------------------------------
- | block-n bytes | checksum (4b) |
- --------------------------------------------------
- | #blocks (uvarint) |
- --------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) | uncompressedSize (uvarint) |
- ------------------------------------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) | uncompressedSize (uvarint) |
- ------------------------------------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) | uncompressedSize (uvarint) |
- ------------------------------------------------------------------------------------------------
- | #entries(uvarint) | mint, maxt (varint) | offset, len (uvarint) | uncompressedSize (uvarint) |
- ------------------------------------------------------------------------------------------------
- | checksum(from #blocks) |
- -------------------------------------------------------------------
- | metasOffset - offset to the point with #blocks |
- --------------------------------------------------
+// Header
++-----------------------------------+
+| Magic Number (uint32, 4 bytes) |
++-----------------------------------+
+| Version (1 byte) |
++-----------------------------------+
+| Encoding (1 byte) |
++-----------------------------------+
+
+// Blocks
++--------------------+----------------------------+
+| block 1 (n bytes) | checksum (uint32, 4 bytes) |
++--------------------+----------------------------+
+| block 1 (n bytes) | checksum (uint32, 4 bytes) |
++--------------------+----------------------------+
+| ... |
++--------------------+----------------------------+
+| block N (n bytes) | checksum (uint32, 4 bytes) |
++--------------------+----------------------------+
+
+// Metas
++------------------------------------------------------------------------------------------------------------------------+
+| #blocks (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| #entries (uvarint) | minTs (uvarint) | maxTs (uvarint) | offset (uvarint) | len (uvarint) | uncompressedSize (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| #entries (uvarint) | minTs (uvarint) | maxTs (uvarint) | offset (uvarint) | len (uvarint) | uncompressedSize (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| ... |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| #entries (uvarint) | minTs (uvarint) | maxTs (uvarint) | offset (uvarint) | len (uvarint) | uncompressedSize (uvarint) |
++--------------------+-----------------+-----------------+------------------+---------------+----------------------------+
+| checksum (uint32, 4 bytes) |
++------------------------------------------------------------------------------------------------------------------------+
+
+// Structured Metadata
++---------------------------------+
+| #labels (uvarint) |
++---------------+-----------------+
+| len (uvarint) | value (n bytes) |
++---------------+-----------------+
+| ... |
++---------------+-----------------+
+| checksum (uint32, 4 bytes) |
++---------------------------------+
+
+// Footer
++-----------------------+--------------------------+
+| len (uint64, 8 bytes) | offset (uint64, 8 bytes) | // offset to Structured Metadata
++-----------------------+--------------------------+
+| len (uint64, 8 bytes) | offset (uint64, 8 bytes) | // offset to Metas
++-----------------------+--------------------------+
```
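The footer makes the chunk self-describing: a reader can locate both index sections from the final 32 bytes alone. Below is a minimal sketch of that lookup, assuming the two (len, offset) pairs are fixed-width and big-endian as drawn; the diagram does not state the byte order, so treat that as an assumption:
```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readFooter decodes the two (len, offset) pairs from the end of a chunk,
// per the Footer section of the diagram. Big-endian fixed-width integers
// are an assumption here, not something the diagram specifies.
func readFooter(chunk []byte) (structMetaLen, structMetaOff, metasLen, metasOff uint64) {
	f := chunk[len(chunk)-32:]
	structMetaLen = binary.BigEndian.Uint64(f[0:8])
	structMetaOff = binary.BigEndian.Uint64(f[8:16])
	metasLen = binary.BigEndian.Uint64(f[16:24])
	metasOff = binary.BigEndian.Uint64(f[24:32])
	return
}

func main() {
	// Build a fake 64-byte chunk whose footer claims both sections start
	// at offset 16 with length 8, just to exercise the decoder.
	chunk := make([]byte, 64)
	f := chunk[32:]
	binary.BigEndian.PutUint64(f[0:8], 8)    // structured metadata len
	binary.BigEndian.PutUint64(f[8:16], 16)  // structured metadata offset
	binary.BigEndian.PutUint64(f[16:24], 8)  // metas len
	binary.BigEndian.PutUint64(f[24:32], 16) // metas offset
	fmt.Println(readFooter(chunk))
}
```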
|
chore
|
Update ASCII diagram of chunk binary encoding (#15386)
|
c0113db4e8c4647188db6477d2ab265eda8dbb6c
|
2024-04-26 23:10:46
|
Jonas L. B
|
fix(promtail): Handle docker logs when a log is split in multiple frames (#12374)
| false
|
diff --git a/LICENSING.md b/LICENSING.md
index 91ea334a41d9b..4b9e6640a30af 100644
--- a/LICENSING.md
+++ b/LICENSING.md
@@ -10,6 +10,7 @@ The following folders and their subfolders are licensed under Apache-2.0:
```
clients/
+pkg/framedstdcopy/
pkg/ingester/wal
pkg/logproto/
pkg/loghttp/
diff --git a/clients/pkg/promtail/promtail.go b/clients/pkg/promtail/promtail.go
index ffe774a405be8..73e52f21703e1 100644
--- a/clients/pkg/promtail/promtail.go
+++ b/clients/pkg/promtail/promtail.go
@@ -184,7 +184,7 @@ func (p *Promtail) reloadConfig(cfg *config.Config) error {
entryHandlers = append(entryHandlers, p.client)
p.entriesFanout = utils.NewFanoutEntryHandler(timeoutUntilFanoutHardStop, entryHandlers...)
- tms, err := targets.NewTargetManagers(p, p.reg, p.logger, cfg.PositionsConfig, p.entriesFanout, cfg.ScrapeConfig, &cfg.TargetConfig, cfg.Global.FileWatch)
+ tms, err := targets.NewTargetManagers(p, p.reg, p.logger, cfg.PositionsConfig, p.entriesFanout, cfg.ScrapeConfig, &cfg.TargetConfig, cfg.Global.FileWatch, &cfg.LimitsConfig)
if err != nil {
return err
}
diff --git a/clients/pkg/promtail/targets/docker/target.go b/clients/pkg/promtail/targets/docker/target.go
index 3ec9d02a022c6..dba33086065ef 100644
--- a/clients/pkg/promtail/targets/docker/target.go
+++ b/clients/pkg/promtail/targets/docker/target.go
@@ -1,10 +1,8 @@
package docker
import (
- "bufio"
"context"
"fmt"
- "io"
"strconv"
"strings"
"sync"
@@ -12,7 +10,6 @@ import (
"github.com/docker/docker/api/types/container"
"github.com/docker/docker/client"
- "github.com/docker/docker/pkg/stdcopy"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/common/model"
@@ -24,6 +21,7 @@ import (
"github.com/grafana/loki/v3/clients/pkg/promtail/positions"
"github.com/grafana/loki/v3/clients/pkg/promtail/targets/target"
+ "github.com/grafana/loki/v3/pkg/framedstdcopy"
"github.com/grafana/loki/v3/pkg/logproto"
)
@@ -36,6 +34,7 @@ type Target struct {
labels model.LabelSet
relabelConfig []*relabel.Config
metrics *Metrics
+ maxLineSize int
cancel context.CancelFunc
client client.APIClient
@@ -53,6 +52,7 @@ func NewTarget(
labels model.LabelSet,
relabelConfig []*relabel.Config,
client client.APIClient,
+ maxLineSize int,
) (*Target, error) {
pos, err := position.Get(positions.CursorKey(containerName))
@@ -73,6 +73,7 @@ func NewTarget(
labels: labels,
relabelConfig: relabelConfig,
metrics: metrics,
+ maxLineSize: maxLineSize,
client: client,
running: atomic.NewBool(false),
@@ -109,22 +110,22 @@ func (t *Target) processLoop(ctx context.Context) {
}
// Start transferring
- rstdout, wstdout := io.Pipe()
- rstderr, wstderr := io.Pipe()
+ cstdout := make(chan []byte)
+ cstderr := make(chan []byte)
t.wg.Add(1)
go func() {
defer func() {
t.wg.Done()
- wstdout.Close()
- wstderr.Close()
+ close(cstdout)
+ close(cstderr)
t.Stop()
}()
var written int64
var err error
if inspectInfo.Config.Tty {
- written, err = io.Copy(wstdout, logs)
+ written, err = framedstdcopy.NoHeaderFramedStdCopy(cstdout, logs)
} else {
- written, err = stdcopy.StdCopy(wstdout, wstderr, logs)
+ written, err = framedstdcopy.FramedStdCopy(cstdout, cstderr, logs)
}
if err != nil {
level.Warn(t.logger).Log("msg", "could not transfer logs", "written", written, "container", t.containerName, "err", err)
@@ -135,8 +136,8 @@ func (t *Target) processLoop(ctx context.Context) {
// Start processing
t.wg.Add(2)
- go t.process(rstdout, "stdout")
- go t.process(rstderr, "stderr")
+ go t.process(cstdout, "stdout")
+ go t.process(cstderr, "stderr")
// Wait until done
<-ctx.Done()
@@ -149,81 +150,120 @@ func (t *Target) processLoop(ctx context.Context) {
func extractTs(line string) (time.Time, string, error) {
pair := strings.SplitN(line, " ", 2)
if len(pair) != 2 {
- return time.Now(), line, fmt.Errorf("Could not find timestamp in '%s'", line)
+ return time.Now(), line, fmt.Errorf("could not find timestamp in '%s'", line)
}
ts, err := time.Parse("2006-01-02T15:04:05.999999999Z07:00", pair[0])
if err != nil {
- return time.Now(), line, fmt.Errorf("Could not parse timestamp from '%s': %w", pair[0], err)
+ return time.Now(), line, fmt.Errorf("could not parse timestamp from '%s': %w", pair[0], err)
}
return ts, pair[1], nil
}
-// https://devmarkpro.com/working-big-files-golang
-func readLine(r *bufio.Reader) (string, error) {
+func (t *Target) process(frames chan []byte, logStream string) {
+ defer func() {
+ t.wg.Done()
+ }()
+
var (
- isPrefix = true
- err error
- line, ln []byte
+ sizeLimit = t.maxLineSize
+ discardRemainingLine = false
+ payloadAcc strings.Builder
+ curTs = time.Now()
)
- for isPrefix && err == nil {
- line, isPrefix, err = r.ReadLine()
- ln = append(ln, line...)
+ // If max_line_size is disabled (set to 0), we can in theory have infinite buffer growth.
+ // We can't guarantee that there's any bound on Docker logs, they could be an infinite stream
+ // without newlines for all we know. To protect promtail from OOM in that case, we introduce
+ // this safety limit into the Docker target, inspired by the default Loki max_line_size value:
+ // https://grafana.com/docs/loki/latest/configure/#limits_config
+ if sizeLimit == 0 {
+ sizeLimit = 256 * 1024
}
- return string(ln), err
-}
-
-func (t *Target) process(r io.Reader, logStream string) {
- defer func() {
- t.wg.Done()
- }()
-
- reader := bufio.NewReader(r)
- for {
- line, err := readLine(reader)
+ for frame := range frames {
+ // Split frame into timestamp and payload
+ ts, payload, err := extractTs(string(frame))
if err != nil {
- if err == io.EOF {
- break
+ if payloadAcc.Len() == 0 {
+ // If we are currently accumulating a line split over multiple frames, we would still expect
+ // timestamps in every frame, but since we don't use those secondary ones, we don't log an error in that case.
+ level.Error(t.logger).Log("msg", "error reading docker log line, skipping line", "err", err)
+ t.metrics.dockerErrors.Inc()
+ continue
}
- level.Error(t.logger).Log("msg", "error reading docker log line, skipping line", "err", err)
- t.metrics.dockerErrors.Inc()
+ ts = curTs
}
- ts, line, err := extractTs(line)
- if err != nil {
- level.Error(t.logger).Log("msg", "could not extract timestamp, skipping line", "err", err)
- t.metrics.dockerErrors.Inc()
+ // If time has changed, we are looking at a new event (although we should have seen a new line..),
+ // so flush the buffer if we have one.
+ if ts != curTs {
+ discardRemainingLine = false
+ if payloadAcc.Len() > 0 {
+ t.handleOutput(logStream, curTs, payloadAcc.String())
+ payloadAcc.Reset()
+ }
+ }
+
+ // Check if we have the end of the event
+ var isEol = strings.HasSuffix(payload, "\n")
+
+ // If we are currently discarding a line (due to size limits), skip ahead, but don't skip the next
+ // frame if we saw the end of the line.
+ if discardRemainingLine {
+ discardRemainingLine = !isEol
continue
}
- // Add all labels from the config, relabel and filter them.
- lb := labels.NewBuilder(nil)
- for k, v := range t.labels {
- lb.Set(string(k), string(v))
+ // Strip newline ending if we have it
+ payload = strings.TrimRight(payload, "\r\n")
+
+ // Fast path: Most log lines are a single frame. If we have a full line in frame and buffer is empty,
+ // then don't use the buffer at all.
+ if payloadAcc.Len() == 0 && isEol {
+ t.handleOutput(logStream, ts, payload)
+ continue
}
- lb.Set(dockerLabelLogStream, logStream)
- processed, _ := relabel.Process(lb.Labels(), t.relabelConfig...)
- filtered := make(model.LabelSet)
- for _, lbl := range processed {
- if strings.HasPrefix(lbl.Name, "__") {
- continue
- }
- filtered[model.LabelName(lbl.Name)] = model.LabelValue(lbl.Value)
+ // Add to buffer
+ payloadAcc.WriteString(payload)
+ curTs = ts
+
+ // Send immediately if line ended or we built a very large event
+ if isEol || payloadAcc.Len() > sizeLimit {
+ discardRemainingLine = !isEol
+ t.handleOutput(logStream, curTs, payloadAcc.String())
+ payloadAcc.Reset()
}
+ }
+}
+
+func (t *Target) handleOutput(logStream string, ts time.Time, payload string) {
+ // Add all labels from the config, relabel and filter them.
+ lb := labels.NewBuilder(nil)
+ for k, v := range t.labels {
+ lb.Set(string(k), string(v))
+ }
+ lb.Set(dockerLabelLogStream, logStream)
+ processed, _ := relabel.Process(lb.Labels(), t.relabelConfig...)
- t.handler.Chan() <- api.Entry{
- Labels: filtered,
- Entry: logproto.Entry{
- Timestamp: ts,
- Line: line,
- },
+ filtered := make(model.LabelSet)
+ for _, lbl := range processed {
+ if strings.HasPrefix(lbl.Name, "__") {
+ continue
}
- t.metrics.dockerEntries.Inc()
- t.positions.Put(positions.CursorKey(t.containerName), ts.Unix())
- t.since = ts.Unix()
+ filtered[model.LabelName(lbl.Name)] = model.LabelValue(lbl.Value)
+ }
+
+ t.handler.Chan() <- api.Entry{
+ Labels: filtered,
+ Entry: logproto.Entry{
+ Timestamp: ts,
+ Line: payload,
+ },
}
+ t.metrics.dockerEntries.Inc()
+ t.positions.Put(positions.CursorKey(t.containerName), ts.Unix())
+ t.since = ts.Unix()
}
// startIfNotRunning starts processing container logs. The operation is idempotent, i.e. the processing cannot be started twice.
diff --git a/clients/pkg/promtail/targets/docker/target_group.go b/clients/pkg/promtail/targets/docker/target_group.go
index b9fd8940824d0..fa6809f4e7bfc 100644
--- a/clients/pkg/promtail/targets/docker/target_group.go
+++ b/clients/pkg/promtail/targets/docker/target_group.go
@@ -36,6 +36,7 @@ type targetGroup struct {
httpClientConfig config.HTTPClientConfig
client client.APIClient
refreshInterval model.Duration
+ maxLineSize int
mtx sync.Mutex
targets map[string]*Target
@@ -120,6 +121,7 @@ func (tg *targetGroup) addTarget(id string, discoveredLabels model.LabelSet) err
discoveredLabels.Merge(tg.defaultLabels),
tg.relabelConfig,
tg.client,
+ tg.maxLineSize,
)
if err != nil {
return err
diff --git a/clients/pkg/promtail/targets/docker/target_test.go b/clients/pkg/promtail/targets/docker/target_test.go
index 9bb5c9bfacd57..11b0a5cb3a24d 100644
--- a/clients/pkg/promtail/targets/docker/target_test.go
+++ b/clients/pkg/promtail/targets/docker/target_test.go
@@ -23,16 +23,23 @@ import (
"github.com/grafana/loki/v3/clients/pkg/promtail/positions"
)
-func Test_DockerTarget(t *testing.T) {
- h := func(w http.ResponseWriter, r *http.Request) {
+type urlContainToPath struct {
+ contains string
+ filePath string
+}
+
+func handlerForPath(t *testing.T, paths []urlContainToPath, tty bool) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
switch path := r.URL.Path; {
case strings.HasSuffix(path, "/logs"):
var filePath string
- if strings.Contains(r.URL.RawQuery, "since=0") {
- filePath = "testdata/flog.log"
- } else {
- filePath = "testdata/flog_after_restart.log"
+ for _, cf := range paths {
+ if strings.Contains(r.URL.RawQuery, cf.contains) {
+ filePath = cf.filePath
+ break
+ }
}
+ assert.NotEmpty(t, filePath, "Did not find appropriate filePath to serve request")
dat, err := os.ReadFile(filePath)
require.NoError(t, err)
_, err = w.Write(dat)
@@ -42,15 +49,19 @@ func Test_DockerTarget(t *testing.T) {
info := types.ContainerJSON{
ContainerJSONBase: &types.ContainerJSONBase{},
Mounts: []types.MountPoint{},
- Config: &container.Config{Tty: false},
+ Config: &container.Config{Tty: tty},
NetworkSettings: &types.NetworkSettings{},
}
err := json.NewEncoder(w).Encode(info)
require.NoError(t, err)
}
- }
+ })
+}
- ts := httptest.NewServer(http.HandlerFunc(h))
+func Test_DockerTarget(t *testing.T) {
+ h := handlerForPath(t, []urlContainToPath{{"since=0", "testdata/flog.log"}, {"", "testdata/flog_after_restart.log"}}, false)
+
+ ts := httptest.NewServer(h)
defer ts.Close()
w := log.NewSyncWriter(os.Stderr)
@@ -74,6 +85,7 @@ func Test_DockerTarget(t *testing.T) {
model.LabelSet{"job": "docker"},
[]*relabel.Config{},
client,
+ 0,
)
require.NoError(t, err)
@@ -105,6 +117,59 @@ func Test_DockerTarget(t *testing.T) {
}, 5*time.Second, 100*time.Millisecond, "Expected log lines after restart were not found within the time limit.")
}
+func doTestPartial(t *testing.T, tty bool) {
+ var filePath string
+ if tty {
+ filePath = "testdata/partial-tty.log"
+ } else {
+ filePath = "testdata/partial.log"
+ }
+ h := handlerForPath(t, []urlContainToPath{{"", filePath}}, tty)
+
+ ts := httptest.NewServer(h)
+ defer ts.Close()
+
+ w := log.NewSyncWriter(os.Stderr)
+ logger := log.NewLogfmtLogger(w)
+ entryHandler := fake.New(func() {})
+ client, err := client.NewClientWithOpts(client.WithHost(ts.URL))
+ require.NoError(t, err)
+
+ ps, err := positions.New(logger, positions.Config{
+ SyncPeriod: 10 * time.Second,
+ PositionsFile: t.TempDir() + "/positions.yml",
+ })
+ require.NoError(t, err)
+
+ target, err := NewTarget(
+ NewMetrics(prometheus.NewRegistry()),
+ logger,
+ entryHandler,
+ ps,
+ "flog",
+ model.LabelSet{"job": "docker"},
+ []*relabel.Config{},
+ client,
+ 0,
+ )
+ require.NoError(t, err)
+
+ expectedLines := []string{strings.Repeat("a", 16385)}
+ assert.EventuallyWithT(t, func(c *assert.CollectT) {
+ assertExpectedLog(c, entryHandler, expectedLines)
+ }, 10*time.Second, 100*time.Millisecond, "Expected log lines were not found within the time limit.")
+
+ target.Stop()
+ entryHandler.Clear()
+}
+
+func Test_DockerTargetPartial(t *testing.T) {
+ doTestPartial(t, false)
+}
+func Test_DockerTargetPartialTty(t *testing.T) {
+ doTestPartial(t, true)
+}
+
// assertExpectedLog will verify that all expectedLines were received, in any order, without duplicates.
func assertExpectedLog(c *assert.CollectT, entryHandler *fake.Client, expectedLines []string) {
logLines := entryHandler.Received()
diff --git a/clients/pkg/promtail/targets/docker/targetmanager.go b/clients/pkg/promtail/targets/docker/targetmanager.go
index 6321705b8f142..73c6eb63776ee 100644
--- a/clients/pkg/promtail/targets/docker/targetmanager.go
+++ b/clients/pkg/promtail/targets/docker/targetmanager.go
@@ -43,6 +43,7 @@ func NewTargetManager(
positions positions.Positions,
pushClient api.EntryHandler,
scrapeConfigs []scrapeconfig.Config,
+ maxLineSize int,
) (*TargetManager, error) {
noopRegistry := util.NoopRegistry{}
noopSdMetrics, err := discovery.CreateAndRegisterSDMetrics(noopRegistry)
@@ -94,6 +95,7 @@ func NewTargetManager(
host: sdConfig.Host,
httpClientConfig: sdConfig.HTTPClientConfig,
refreshInterval: sdConfig.RefreshInterval,
+ maxLineSize: maxLineSize,
}
}
configs[syncerKey] = append(configs[syncerKey], sdConfig)
diff --git a/clients/pkg/promtail/targets/docker/targetmanager_test.go b/clients/pkg/promtail/targets/docker/targetmanager_test.go
index 224e58d5a8930..3e2a3d527a765 100644
--- a/clients/pkg/promtail/targets/docker/targetmanager_test.go
+++ b/clients/pkg/promtail/targets/docker/targetmanager_test.go
@@ -95,6 +95,7 @@ func Test_TargetManager(t *testing.T) {
ps,
entryHandler,
cfgs,
+ 0,
)
require.NoError(t, err)
require.True(t, ta.Ready())
diff --git a/clients/pkg/promtail/targets/docker/testdata/partial-tty.log b/clients/pkg/promtail/targets/docker/testdata/partial-tty.log
new file mode 100644
index 0000000000000..1faa9be510c48
--- /dev/null
+++ b/clients/pkg/promtail/targets/docker/testdata/partial-tty.log
@@ -0,0 +1 @@
+2024-03-27T08:30:08.138460761Z aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa2024-03-27T08:30:08.138460761Z a
diff --git a/clients/pkg/promtail/targets/docker/testdata/partial.log b/clients/pkg/promtail/targets/docker/testdata/partial.log
new file mode 100644
index 0000000000000..2d6742f0a0093
Binary files /dev/null and b/clients/pkg/promtail/targets/docker/testdata/partial.log differ
diff --git a/clients/pkg/promtail/targets/manager.go b/clients/pkg/promtail/targets/manager.go
index 241dd25aaa5cc..b3794b0ed0fec 100644
--- a/clients/pkg/promtail/targets/manager.go
+++ b/clients/pkg/promtail/targets/manager.go
@@ -9,6 +9,7 @@ import (
"github.com/prometheus/client_golang/prometheus"
"github.com/grafana/loki/v3/clients/pkg/promtail/api"
+ "github.com/grafana/loki/v3/clients/pkg/promtail/limit"
"github.com/grafana/loki/v3/clients/pkg/promtail/positions"
"github.com/grafana/loki/v3/clients/pkg/promtail/scrapeconfig"
"github.com/grafana/loki/v3/clients/pkg/promtail/targets/azureeventhubs"
@@ -76,6 +77,7 @@ func NewTargetManagers(
scrapeConfigs []scrapeconfig.Config,
targetConfig *file.Config,
watchConfig file.WatchConfig,
+ limitsConfig *limit.Config,
) (*TargetManagers, error) {
if targetConfig.Stdin {
level.Debug(logger).Log("msg", "configured to read from stdin")
@@ -273,7 +275,7 @@ func NewTargetManagers(
if err != nil {
return nil, err
}
- cfTargetManager, err := docker.NewTargetManager(dockerMetrics, logger, pos, client, scrapeConfigs)
+ cfTargetManager, err := docker.NewTargetManager(dockerMetrics, logger, pos, client, scrapeConfigs, limitsConfig.MaxLineSize.Val())
if err != nil {
return nil, errors.Wrap(err, "failed to make Docker service discovery target manager")
}
diff --git a/docs/sources/send-data/promtail/configuration.md b/docs/sources/send-data/promtail/configuration.md
index 86d7630760608..5928e910b7b10 100644
--- a/docs/sources/send-data/promtail/configuration.md
+++ b/docs/sources/send-data/promtail/configuration.md
@@ -1959,6 +1959,13 @@ or [journald](https://docs.docker.com/config/containers/logging/journald/) loggi
Note that the discovery will not pick up finished containers. That means
Promtail will not scrape the remaining logs from finished containers after a restart.
+The Docker target correctly joins log segments if a long line was split into different frames by Docker.
+To avoid hypothetically unlimited line size and out-of-memory errors in Promtail, this target applies
+a default soft line size limit of 256 kiB corresponding to the default max line size in Loki.
+If the buffer increases above this size, then the line will be sent to output immediately, and the rest
+of the line discarded. To change this behaviour, set `limits_config.max_line_size` to a non-zero value
+to apply a hard limit.
+
The configuration is inherited from [Prometheus' Docker service discovery](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#docker_sd_config).
```yaml
@@ -2084,6 +2091,8 @@ The optional `limits_config` block configures global limits for this instance of
[max_streams: <int> | default = 0]
# Maximum log line byte size allowed without dropping. Example: 256kb, 2M. 0 to disable.
+# If disabled, targets may apply default buffer size safety limits. If a target implements
+# a default limit, this will be documented under the `scrape_configs` entry.
[max_line_size: <int> | default = 0]
# Whether to truncate lines that exceed max_line_size. No effect if max_line_size is disabled
[max_line_size_truncate: <bool> | default = false]
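Putting the two knobs together, a hypothetical Promtail snippet that replaces the Docker target's 256 KiB soft default with an explicit hard limit might look like this (the values are examples only):
```yaml
limits_config:
  # Hard cap on line size; overrides the Docker target's built-in
  # 256 KiB soft default described above.
  max_line_size: 128kb
  # Keep the first 128 KiB instead of dropping oversized lines.
  max_line_size_truncate: true
```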
diff --git a/pkg/framedstdcopy/framedstdcopy.go b/pkg/framedstdcopy/framedstdcopy.go
new file mode 100644
index 0000000000000..43bc03b8a6608
--- /dev/null
+++ b/pkg/framedstdcopy/framedstdcopy.go
@@ -0,0 +1,173 @@
+package framedstdcopy
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "io"
+
+ "github.com/docker/docker/pkg/stdcopy"
+)
+
+const (
+ // From stdcopy
+ stdWriterPrefixLen = 8
+ stdWriterFdIndex = 0
+ stdWriterSizeIndex = 4
+ startingBufLen = 32*1024 + stdWriterPrefixLen + 1
+ maxFrameLen = 16384 + 31 // In practice (undocumented) frame payload can be timestamp + 16k
+)
+
+// FramedStdCopy is a modified version of stdcopy.StdCopy.
+// FramedStdCopy will demultiplex `src` in the same manner as StdCopy, but instead of
+// using io.Writer for outputs, channels are used, since each frame payload may contain
+// its own inner header (notably, timestamps). Frame payloads are not further parsed here,
+// but are passed raw as individual slices through the output channel.
+//
+// FramedStdCopy will read until it hits EOF on `src`. It will then return a nil error.
+// In other words: if `err` is non nil, it indicates a real underlying error.
+//
+// `written` will hold the total number of bytes written to `dstout` and `dsterr`.
+func FramedStdCopy(dstout, dsterr chan []byte, src io.Reader) (written int64, err error) {
+ var (
+ buf = make([]byte, startingBufLen)
+ bufLen = len(buf)
+ nr int
+ er error
+ out chan []byte
+ frameSize int
+ )
+
+ for {
+ // Make sure we have at least a full header
+ for nr < stdWriterPrefixLen {
+ var nr2 int
+ nr2, er = src.Read(buf[nr:])
+ nr += nr2
+ if er == io.EOF {
+ if nr < stdWriterPrefixLen {
+ return written, nil
+ }
+ break
+ }
+ if er != nil {
+ return 0, er
+ }
+ }
+
+ stream := stdcopy.StdType(buf[stdWriterFdIndex])
+ // Check the first byte to know where to write
+ switch stream {
+ case stdcopy.Stdin:
+ fallthrough
+ case stdcopy.Stdout:
+ // Write on stdout
+ out = dstout
+ case stdcopy.Stderr:
+ // Write on stderr
+ out = dsterr
+ case stdcopy.Systemerr:
+ // If we're on Systemerr, we won't write anywhere.
+ // NB: if this code changes later, make sure you don't try to write
+ // to outstream if Systemerr is the stream
+ out = nil
+ default:
+ return 0, fmt.Errorf("Unrecognized input header: %d", buf[stdWriterFdIndex])
+ }
+
+ // Retrieve the size of the frame
+ frameSize = int(binary.BigEndian.Uint32(buf[stdWriterSizeIndex : stdWriterSizeIndex+4]))
+
+ // Check if the buffer is big enough to read the frame.
+ // Extend it if necessary.
+ if frameSize+stdWriterPrefixLen > bufLen {
+ buf = append(buf, make([]byte, frameSize+stdWriterPrefixLen-bufLen+1)...)
+ bufLen = len(buf)
+ }
+
+ // While the amount of bytes read is less than the size of the frame + header, we keep reading
+ for nr < frameSize+stdWriterPrefixLen {
+ var nr2 int
+ nr2, er = src.Read(buf[nr:])
+ nr += nr2
+ if er == io.EOF {
+ if nr < frameSize+stdWriterPrefixLen {
+ return written, nil
+ }
+ break
+ }
+ if er != nil {
+ return 0, er
+ }
+ }
+
+ // we might have an error from the source mixed up in our multiplexed
+ // stream. if we do, return it.
+ if stream == stdcopy.Systemerr {
+ return written, fmt.Errorf("error from daemon in stream: %s", string(buf[stdWriterPrefixLen:frameSize+stdWriterPrefixLen]))
+ }
+
+ // Write the retrieved frame (without header)
+ var newBuf = make([]byte, frameSize)
+ copy(newBuf, buf[stdWriterPrefixLen:])
+ out <- newBuf
+ written += int64(frameSize)
+
+ // Move the rest of the buffer to the beginning
+ copy(buf, buf[frameSize+stdWriterPrefixLen:nr])
+ // Move the index
+ nr -= frameSize + stdWriterPrefixLen
+ }
+}
+
+// Specialized version of FramedStdCopy for when frames have no headers.
+// This will happen for output from a container that has TTY set.
+// In theory this makes it impossible to find the frame boundaries, which also does not matter if timestamps were not requested,
+// but if they were requested, they will still be there at the start of every frame, which might be mid-line.
+// In practice we can find most boundaries by looking for newlines, since these result in a new frame.
+// Otherwise we rely on using the same max frame size as used in practice by docker.
+func NoHeaderFramedStdCopy(dstout chan []byte, src io.Reader) (written int64, err error) {
+ var (
+ buf = make([]byte, 32768)
+ nrLine int
+ nr int
+ nr2 int
+ er error
+ )
+ for {
+ nr2, er = src.Read(buf[nr:])
+ if er == io.EOF && nr2 == 0 {
+ return written, nil
+ } else if er != nil {
+ return written, er
+ }
+ nr += nr2
+
+ // We might have read multiple frames, output all those we find in the buffer
+ for nr > 0 {
+ nrLine = bytes.Index(buf[:nr], []byte("\n")) + 1
+ if nrLine > maxFrameLen {
+ // we found a newline but it's in the next frame (most likely)
+ nrLine = maxFrameLen
+ } else if nrLine < 1 {
+ if nr >= maxFrameLen {
+ nrLine = maxFrameLen
+ } else {
+ // no end of frame found and we don't have enough bytes
+ break
+ }
+ }
+
+ // Write the frame
+ var newBuf = make([]byte, nrLine)
+ copy(newBuf, buf)
+ dstout <- newBuf
+ written += int64(nrLine)
+
+ // Move the rest of the buffer to the beginning
+ copy(buf, buf[nrLine:nr])
+ // Move the index
+ nr -= nrLine
+ }
+ }
+}
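To make the channel-based contract concrete, here is a minimal, self-contained usage sketch of `FramedStdCopy`. It feeds a hand-built multiplexed stream rather than a live container, and the surrounding wiring (wait group, draining goroutines) is illustrative, not taken from the Loki source:
```go
package main

import (
	"bytes"
	"fmt"
	"sync"

	"github.com/docker/docker/pkg/stdcopy"

	"github.com/grafana/loki/v3/pkg/framedstdcopy"
)

func main() {
	// Build a multiplexed stream the way the Docker daemon does, so the
	// example needs no running container.
	var muxed bytes.Buffer
	_, _ = stdcopy.NewStdWriter(&muxed, stdcopy.Stdout).Write([]byte("hello from stdout\n"))
	_, _ = stdcopy.NewStdWriter(&muxed, stdcopy.Stderr).Write([]byte("hello from stderr\n"))

	cstdout := make(chan []byte)
	cstderr := make(chan []byte)

	// Drain both channels; FramedStdCopy blocks on sends, so both streams
	// must be consumed concurrently.
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		defer wg.Done()
		for frame := range cstdout {
			fmt.Printf("stdout frame: %q\n", frame)
		}
	}()
	go func() {
		defer wg.Done()
		for frame := range cstderr {
			fmt.Printf("stderr frame: %q\n", frame)
		}
	}()

	written, err := framedstdcopy.FramedStdCopy(cstdout, cstderr, &muxed)
	close(cstdout)
	close(cstderr)
	wg.Wait()
	fmt.Println("bytes written:", written, "err:", err)
}
```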
diff --git a/pkg/framedstdcopy/framedstdcopy_test.go b/pkg/framedstdcopy/framedstdcopy_test.go
new file mode 100644
index 0000000000000..3d89af61d59f4
--- /dev/null
+++ b/pkg/framedstdcopy/framedstdcopy_test.go
@@ -0,0 +1,269 @@
+package framedstdcopy
+
+import (
+ "bytes"
+ "errors"
+ "io"
+ "strings"
+ "sync"
+ "testing"
+
+ "github.com/docker/docker/pkg/stdcopy"
+)
+
+const (
+ tsPrefix string = "2024-03-14T15:32:05.358979323Z "
+ unprefixedFramePayloadSize int = 16384
+)
+
+func timestamped(bytes []byte) []byte {
+ var ts = []byte(tsPrefix)
+ return append(ts, bytes...)
+}
+
+func getSrcBuffer(stdOutFrames, stdErrFrames [][]byte) (buffer *bytes.Buffer, err error) {
+ buffer = new(bytes.Buffer)
+ dstOut := stdcopy.NewStdWriter(buffer, stdcopy.Stdout)
+ for _, stdOutBytes := range stdOutFrames {
+ _, err = dstOut.Write(timestamped(stdOutBytes))
+ if err != nil {
+ return
+ }
+ }
+ dstErr := stdcopy.NewStdWriter(buffer, stdcopy.Stderr)
+ for _, stdErrBytes := range stdErrFrames {
+ _, err = dstErr.Write(timestamped(stdErrBytes))
+ if err != nil {
+ return
+ }
+ }
+ return
+}
+
+type streamChans struct {
+ out chan []byte
+ err chan []byte
+ outCollected [][]byte
+ errCollected [][]byte
+ wg sync.WaitGroup
+}
+
+func newChans() streamChans {
+ return streamChans{
+ out: make(chan []byte),
+ err: make(chan []byte),
+ outCollected: make([][]byte, 0),
+ errCollected: make([][]byte, 0),
+ }
+}
+
+func (crx *streamChans) collectFrames() {
+ crx.wg.Add(1)
+ outClosed := false
+ errClosed := false
+ for {
+ if outClosed && errClosed {
+ crx.wg.Done()
+ return
+ }
+ select {
+ case bytes, ok := <-crx.out:
+ outClosed = !ok
+ if bytes != nil {
+ crx.outCollected = append(crx.outCollected, bytes)
+ }
+ case bytes, ok := <-crx.err:
+ errClosed = !ok
+ if bytes != nil {
+ crx.errCollected = append(crx.errCollected, bytes)
+ }
+ }
+ }
+}
+func (crx *streamChans) close() {
+ close(crx.out)
+ close(crx.err)
+}
+
+func TestStdCopyWriteAndRead(t *testing.T) {
+ ostr := strings.Repeat("o", unprefixedFramePayloadSize)
+ estr := strings.Repeat("e", unprefixedFramePayloadSize)
+ buffer, err := getSrcBuffer(
+ [][]byte{
+ []byte(ostr),
+ []byte(ostr[:3] + "\n"),
+ },
+ [][]byte{
+ []byte(estr),
+ []byte(estr[:3] + "\n"),
+ },
+ )
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ rx := newChans()
+ go rx.collectFrames()
+ written, err := FramedStdCopy(rx.out, rx.err, buffer)
+ rx.close()
+ rx.wg.Wait()
+ if err != nil {
+ t.Fatal(err)
+ }
+ tslen := len(tsPrefix)
+ expectedTotalWritten := 2*maxFrameLen + 2*(4+tslen)
+ if written != int64(expectedTotalWritten) {
+ t.Fatalf("Expected to have total of %d bytes written, got %d", expectedTotalWritten, written)
+ }
+ if !bytes.Equal(rx.outCollected[0][tslen:maxFrameLen], []byte(ostr)) {
+ t.Fatal("Expected the first out frame to be all 'o'")
+ }
+ if !bytes.Equal(rx.outCollected[1][tslen:tslen+4], []byte("ooo\n")) {
+ t.Fatal("Expected the second out frame to be 'ooo\\n'")
+ }
+ if !bytes.Equal(rx.errCollected[0][tslen:maxFrameLen], []byte(estr)) {
+ t.Fatal("Expected the first err frame to be all 'e'")
+ }
+ if !bytes.Equal(rx.errCollected[1][tslen:tslen+4], []byte("eee\n")) {
+ t.Fatal("Expected the second err frame to be 'eee\\n'")
+ }
+}
+
+type customReader struct {
+ n int
+ err error
+ totalCalls int
+ correctCalls int
+ src *bytes.Buffer
+}
+
+func (f *customReader) Read(buf []byte) (int, error) {
+ f.totalCalls++
+ if f.totalCalls <= f.correctCalls {
+ return f.src.Read(buf)
+ }
+ return f.n, f.err
+}
+
+func TestStdCopyReturnsErrorReadingHeader(t *testing.T) {
+ expectedError := errors.New("error")
+ reader := &customReader{
+ err: expectedError,
+ }
+ discard := newChans()
+ go discard.collectFrames()
+ written, err := FramedStdCopy(discard.out, discard.err, reader)
+ discard.close()
+ if written != 0 {
+ t.Fatalf("Expected 0 bytes read, got %d", written)
+ }
+ if err != expectedError {
+ t.Fatalf("Didn't get expected error")
+ }
+}
+
+func TestStdCopyReturnsErrorReadingFrame(t *testing.T) {
+ expectedError := errors.New("error")
+ stdOutBytes := []byte(strings.Repeat("o", unprefixedFramePayloadSize))
+ stdErrBytes := []byte(strings.Repeat("e", unprefixedFramePayloadSize))
+ buffer, err := getSrcBuffer([][]byte{stdOutBytes}, [][]byte{stdErrBytes})
+ if err != nil {
+ t.Fatal(err)
+ }
+ reader := &customReader{
+ correctCalls: 1,
+ n: stdWriterPrefixLen + 1,
+ err: expectedError,
+ src: buffer,
+ }
+ discard := newChans()
+ go discard.collectFrames()
+ written, err := FramedStdCopy(discard.out, discard.err, reader)
+ discard.close()
+ if written != 0 {
+ t.Fatalf("Expected 0 bytes read, got %d", written)
+ }
+ if err != expectedError {
+ t.Fatalf("Didn't get expected error")
+ }
+}
+
+func TestStdCopyDetectsCorruptedFrame(t *testing.T) {
+ stdOutBytes := []byte(strings.Repeat("o", unprefixedFramePayloadSize))
+ stdErrBytes := []byte(strings.Repeat("e", unprefixedFramePayloadSize))
+ buffer, err := getSrcBuffer([][]byte{stdOutBytes}, [][]byte{stdErrBytes})
+ if err != nil {
+ t.Fatal(err)
+ }
+ reader := &customReader{
+ correctCalls: 1,
+ n: stdWriterPrefixLen + 1,
+ err: io.EOF,
+ src: buffer,
+ }
+ discard := newChans()
+ go discard.collectFrames()
+ written, err := FramedStdCopy(discard.out, discard.err, reader)
+ discard.close()
+ if written != maxFrameLen {
+ t.Fatalf("Expected %d bytes read, got %d", 0, written)
+ }
+ if err != nil {
+ t.Fatal("Didn't get nil error")
+ }
+}
+
+func TestStdCopyWithInvalidInputHeader(t *testing.T) {
+ dst := newChans()
+ go dst.collectFrames()
+ src := strings.NewReader("Invalid input")
+ _, err := FramedStdCopy(dst.out, dst.err, src)
+ dst.close()
+ if err == nil {
+ t.Fatal("FramedStdCopy with invalid input header should fail.")
+ }
+}
+
+func TestStdCopyWithCorruptedPrefix(t *testing.T) {
+ data := []byte{0x01, 0x02, 0x03}
+ src := bytes.NewReader(data)
+ written, err := FramedStdCopy(nil, nil, src)
+ if err != nil {
+ t.Fatalf("FramedStdCopy should not return an error with corrupted prefix.")
+ }
+ if written != 0 {
+ t.Fatalf("FramedStdCopy should have written 0, but has written %d", written)
+ }
+}
+
+// TestStdCopyReturnsErrorFromSystem tests that FramedStdCopy correctly returns an
+// error, when that error is muxed into the Systemerr stream.
+func TestStdCopyReturnsErrorFromSystem(t *testing.T) {
+ // write in the basic messages, just so there's some fluff in there
+ stdOutBytes := []byte(strings.Repeat("o", unprefixedFramePayloadSize))
+ stdErrBytes := []byte(strings.Repeat("e", unprefixedFramePayloadSize))
+ buffer, err := getSrcBuffer([][]byte{stdOutBytes}, [][]byte{stdErrBytes})
+ if err != nil {
+ t.Fatal(err)
+ }
+ // add in an error message on the Systemerr stream
+ systemErrBytes := []byte(strings.Repeat("S", unprefixedFramePayloadSize))
+ systemWriter := stdcopy.NewStdWriter(buffer, stdcopy.Systemerr)
+ _, err = systemWriter.Write(systemErrBytes)
+ if err != nil {
+ t.Fatal(err)
+ }
+
+ // now copy and demux. we should expect an error containing the string we
+ // wrote out
+ discard := newChans()
+ go discard.collectFrames()
+ _, err = FramedStdCopy(discard.out, discard.err, buffer)
+ discard.close()
+ if err == nil {
+ t.Fatal("expected error, got none")
+ }
+ if !strings.Contains(err.Error(), string(systemErrBytes)) {
+ t.Fatal("expected error to contain message")
+ }
+}
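The tests above rely on Docker's stdcopy framing plus a Loki-specific timestamp prefix. As a point of reference, here is a minimal, self-contained sketch of the underlying 8-byte stdcopy header (stream type in byte 0, big-endian payload length in bytes 4-7); the `tsPrefix` handling checked above is layered on top of this by the code under test:

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"

	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	var buf bytes.Buffer
	// stdcopy.NewStdWriter prepends the 8-byte header to every Write.
	w := stdcopy.NewStdWriter(&buf, stdcopy.Stdout)
	_, _ = w.Write([]byte("hello"))

	frame := buf.Bytes()
	fmt.Println("stream type:", frame[0])                            // 1 == stdout
	fmt.Println("payload len:", binary.BigEndian.Uint32(frame[4:8])) // 5
	fmt.Println("payload:", string(frame[8:]))                       // "hello"
}
```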
|
fix
|
Handle docker logs when a log is split in multiple frames (#12374)
|
7f35179cd3fd3627057d916b7f00c92cee400339
|
2024-07-03 21:23:19
|
George Robinson
|
feat: Ingester RF-1 (#13365)
| false
|
diff --git a/cmd/loki/loki-local-config.yaml b/cmd/loki/loki-local-config.yaml
index e2c54d5452790..ade3febc5e27e 100644
--- a/cmd/loki/loki-local-config.yaml
+++ b/cmd/loki/loki-local-config.yaml
@@ -4,6 +4,7 @@ server:
http_listen_port: 3100
grpc_listen_port: 9096
log_level: debug
+ grpc_server_max_concurrent_streams: 1000
common:
instance_addr: 127.0.0.1
@@ -17,6 +18,9 @@ common:
kvstore:
store: inmemory
+ingester_rf1:
+ enabled: false
+
query_range:
results_cache:
cache:
diff --git a/docs/sources/shared/configuration.md b/docs/sources/shared/configuration.md
index 145ab85144a06..1420e4ad373c2 100644
--- a/docs/sources/shared/configuration.md
+++ b/docs/sources/shared/configuration.md
@@ -133,10 +133,279 @@ Pass the `-config.expand-env` flag at the command line to enable this way of set
# the querier.
[ingester_client: <ingester_client>]
+# The ingester_client block configures how the distributor will connect to
+# ingesters. Only appropriate when running all components, the distributor, or
+# the querier.
+[ingester_rf1_client: <ingester_client>]
+
# The ingester block configures the ingester and how the ingester will register
# itself to a key value store.
[ingester: <ingester>]
+ingester_rf1:
+ # Whether the ingester is enabled.
+ # CLI flag: -ingester-rf1.enabled
+ [enabled: <boolean> | default = false]
+
+ # Configures how the lifecycle of the ingester will operate and where it will
+ # register for discovery.
+ lifecycler:
+ ring:
+ kvstore:
+ # Backend storage to use for the ring. Supported values are: consul,
+ # etcd, inmemory, memberlist, multi.
+ # CLI flag: -ingester-rf1.store
+ [store: <string> | default = "consul"]
+
+ # The prefix for the keys in the store. Should end with a /.
+ # CLI flag: -ingester-rf1.prefix
+ [prefix: <string> | default = "collectors/"]
+
+ # Configuration for a Consul client. Only applies if the selected
+ # kvstore is consul.
+ # The CLI flags prefix for this block configuration is: ingester-rf1
+ [consul: <consul>]
+
+ # Configuration for an ETCD v3 client. Only applies if the selected
+ # kvstore is etcd.
+ # The CLI flags prefix for this block configuration is: ingester-rf1
+ [etcd: <etcd>]
+
+ multi:
+ # Primary backend storage used by multi-client.
+ # CLI flag: -ingester-rf1.multi.primary
+ [primary: <string> | default = ""]
+
+ # Secondary backend storage used by multi-client.
+ # CLI flag: -ingester-rf1.multi.secondary
+ [secondary: <string> | default = ""]
+
+ # Mirror writes to secondary store.
+ # CLI flag: -ingester-rf1.multi.mirror-enabled
+ [mirror_enabled: <boolean> | default = false]
+
+ # Timeout for storing value to secondary store.
+ # CLI flag: -ingester-rf1.multi.mirror-timeout
+ [mirror_timeout: <duration> | default = 2s]
+
+ # The heartbeat timeout after which ingesters are skipped for
+ # reads/writes. 0 = never (timeout disabled).
+ # CLI flag: -ingester-rf1.ring.heartbeat-timeout
+ [heartbeat_timeout: <duration> | default = 1m]
+
+ # The number of ingesters to write to and read from.
+ # CLI flag: -ingester-rf1.distributor.replication-factor
+ [replication_factor: <int> | default = 3]
+
+ # True to enable the zone-awareness and replicate ingested samples across
+ # different availability zones.
+ # CLI flag: -ingester-rf1.distributor.zone-awareness-enabled
+ [zone_awareness_enabled: <boolean> | default = false]
+
+ # Comma-separated list of zones to exclude from the ring. Instances in
+ # excluded zones will be filtered out from the ring.
+ # CLI flag: -ingester-rf1.distributor.excluded-zones
+ [excluded_zones: <string> | default = ""]
+
+ # Number of tokens for each ingester.
+ # CLI flag: -ingester-rf1.num-tokens
+ [num_tokens: <int> | default = 128]
+
+ # Period at which to heartbeat to consul. 0 = disabled.
+ # CLI flag: -ingester-rf1.heartbeat-period
+ [heartbeat_period: <duration> | default = 5s]
+
+ # Heartbeat timeout after which instance is assumed to be unhealthy. 0 =
+ # disabled.
+ # CLI flag: -ingester-rf1.heartbeat-timeout
+ [heartbeat_timeout: <duration> | default = 1m]
+
+ # Observe tokens after generating to resolve collisions. Useful when using
+ # gossiping ring.
+ # CLI flag: -ingester-rf1.observe-period
+ [observe_period: <duration> | default = 0s]
+
+ # Period to wait for a claim from another member; will join automatically
+ # after this.
+ # CLI flag: -ingester-rf1.join-after
+ [join_after: <duration> | default = 0s]
+
+ # Minimum duration to wait after the internal readiness checks have passed
+ # but before succeeding the readiness endpoint. This is used to slowdown
+ # deployment controllers (eg. Kubernetes) after an instance is ready and
+ # before they proceed with a rolling update, to give the rest of the cluster
+ # instances enough time to receive ring updates.
+ # CLI flag: -ingester-rf1.min-ready-duration
+ [min_ready_duration: <duration> | default = 15s]
+
+ # Name of network interface to read address from.
+ # CLI flag: -ingester-rf1.lifecycler.interface
+ [interface_names: <list of strings> | default = [<private network interfaces>]]
+
+ # Enable IPv6 support. Required to make use of IP addresses from IPv6
+ # interfaces.
+ # CLI flag: -ingester-rf1.enable-inet6
+ [enable_inet6: <boolean> | default = false]
+
+ # Duration to sleep for before exiting, to ensure metrics are scraped.
+ # CLI flag: -ingester-rf1.final-sleep
+ [final_sleep: <duration> | default = 0s]
+
+ # File path where tokens are stored. If empty, tokens are not stored at
+ # shutdown and restored at startup.
+ # CLI flag: -ingester-rf1.tokens-file-path
+ [tokens_file_path: <string> | default = ""]
+
+ # The availability zone where this instance is running.
+ # CLI flag: -ingester-rf1.availability-zone
+ [availability_zone: <string> | default = ""]
+
+ # Unregister from the ring upon clean shutdown. It can be useful to disable
+ # for rolling restarts with consistent naming in conjunction with
+ # -distributor.extend-writes=false.
+ # CLI flag: -ingester-rf1.unregister-on-shutdown
+ [unregister_on_shutdown: <boolean> | default = true]
+
+ # When enabled the readiness probe succeeds only after all instances are
+ # ACTIVE and healthy in the ring, otherwise only the instance itself is
+ # checked. This option should be disabled if in your cluster multiple
+ # instances can be rolled out simultaneously, otherwise rolling updates may
+ # be slowed down.
+ # CLI flag: -ingester-rf1.readiness-check-ring-health
+ [readiness_check_ring_health: <boolean> | default = true]
+
+ # IP address to advertise in the ring.
+ # CLI flag: -ingester-rf1.lifecycler.addr
+ [address: <string> | default = ""]
+
+ # port to advertise in consul (defaults to server.grpc-listen-port).
+ # CLI flag: -ingester-rf1.lifecycler.port
+ [port: <int> | default = 0]
+
+ # ID to register in the ring.
+ # CLI flag: -ingester-rf1.lifecycler.ID
+ [id: <string> | default = "<hostname>"]
+
+ # How many flushes can happen concurrently from each stream.
+ # CLI flag: -ingester-rf1.concurrent-flushes
+ [concurrent_flushes: <int> | default = 32]
+
+ # How often should the ingester see if there are any blocks to flush. The
+ # first flush check is delayed by a random time up to 0.8x the flush check
+ # period. Additionally, there is +/- 1% jitter added to the interval.
+ # CLI flag: -ingester-rf1.flush-check-period
+ [flush_check_period: <duration> | default = 500ms]
+
+ flush_op_backoff:
+ # Minimum backoff period when a flush fails. Each concurrent flush has its
+ # own backoff, see `ingester.concurrent-flushes`.
+ # CLI flag: -ingester-rf1.flush-op-backoff-min-period
+ [min_period: <duration> | default = 100ms]
+
+ # Maximum backoff period when a flush fails. Each concurrent flush has its
+ # own backoff, see `ingester.concurrent-flushes`.
+ # CLI flag: -ingester-rf1.flush-op-backoff-max-period
+ [max_period: <duration> | default = 1m]
+
+ # Maximum retries for failed flushes.
+ # CLI flag: -ingester-rf1.flush-op-backoff-retries
+ [max_retries: <int> | default = 10]
+
+ # The timeout for an individual flush. Will be retried up to
+ # `flush-op-backoff-retries` times.
+ # CLI flag: -ingester-rf1.flush-op-timeout
+ [flush_op_timeout: <duration> | default = 10m]
+
+ # How long chunks should be retained in-memory after they've been flushed.
+ # CLI flag: -ingester-rf1.chunks-retain-period
+ [chunk_retain_period: <duration> | default = 0s]
+
+ [chunk_idle_period: <duration>]
+
+ # The targeted _uncompressed_ size in bytes of a chunk block. When this
+ # threshold is exceeded, the head block will be cut and compressed inside
+ # the chunk.
+ # CLI flag: -ingester-rf1.chunks-block-size
+ [chunk_block_size: <int> | default = 262144]
+
+ # A target _compressed_ size in bytes for chunks. This is a desired size,
+ # not an exact size; chunks may be slightly bigger or significantly smaller
+ # if they get flushed for other reasons (e.g. chunk_idle_period). A value of
+ # 0 creates chunks with a fixed 10 blocks; a non-zero value will create
+ # chunks with a variable number of blocks to meet the target size.
+ # CLI flag: -ingester-rf1.chunk-target-size
+ [chunk_target_size: <int> | default = 1572864]
+
+ # The algorithm to use for compressing chunk. (none, gzip, lz4-64k, snappy,
+ # lz4-256k, lz4-1M, lz4, flate, zstd)
+ # CLI flag: -ingester-rf1.chunk-encoding
+ [chunk_encoding: <string> | default = "gzip"]
+
+ # The maximum duration of a timeseries chunk in memory. If a timeseries runs
+ # for longer than this, the current chunk will be flushed to the store and a
+ # new chunk created.
+ # CLI flag: -ingester-rf1.max-chunk-age
+ [max_chunk_age: <duration> | default = 2h]
+
+ # Forget about ingesters having heartbeat timestamps older than
+ # `ring.kvstore.heartbeat_timeout`. This is equivalent to clicking on the
+ # `/ring` `forget` button in the UI: the ingester is removed from the ring.
+ # This is a useful setting when you are sure that an unhealthy node won't
+ # return. An example is when not using stateful sets or the equivalent. Use
+ # `memberlist.rejoin_interval` > 0 to handle network partition cases when
+ # using a memberlist.
+ # CLI flag: -ingester-rf1.autoforget-unhealthy
+ [autoforget_unhealthy: <boolean> | default = false]
+
+ # The maximum number of errors a stream will report to the user when a push
+ # fails. 0 to make unlimited.
+ # CLI flag: -ingester-rf1.max-ignored-stream-errors
+ [max_returned_stream_errors: <int> | default = 10]
+
+ # Shard factor used in the ingesters for the in process reverse index. This
+ # MUST be evenly divisible by ALL schema shard factors or Loki will not start.
+ # CLI flag: -ingester-rf1.index-shards
+ [index_shards: <int> | default = 32]
+
+ # Maximum number of dropped streams to keep in memory during tailing.
+ # CLI flag: -ingester-rf1.tailer.max-dropped-streams
+ [max_dropped_streams: <int> | default = 10]
+
+ # Path where the shutdown marker file is stored. If not set and
+ # common.path_prefix is set then common.path_prefix will be used.
+ # CLI flag: -ingester-rf1.shutdown-marker-path
+ [shutdown_marker_path: <string> | default = ""]
+
+ # Interval at which the ingester ownedStreamService checks for changes in the
+ # ring to recalculate owned streams.
+ # CLI flag: -ingester-rf1.owned-streams-check-interval
+ [owned_streams_check_interval: <duration> | default = 30s]
+
+ # Configures how the distributor will connect to the RF-1 ingesters.
+ client_config:
+ # Configures how connections are pooled.
+ pool_config:
+ # How frequently to clean up clients for ingesters that have gone away.
+ # CLI flag: -ingester-rf1.client-cleanup-period
+ [client_cleanup_period: <duration> | default = 15s]
+
+ # Run a health check on each ingester client during periodic cleanup.
+ # CLI flag: -ingester-rf1.health-check-ingesters
+ [health_check_ingesters: <boolean> | default = true]
+
+ # Timeout for the health check.
+ # CLI flag: -ingester-rf1.remote-timeout
+ [remote_timeout: <duration> | default = 1s]
+
+ # The remote request timeout on the client side.
+ # CLI flag: -ingester-rf1.client.timeout
+ [remote_timeout: <duration> | default = 5s]
+
+ # Configures how the gRPC connection to ingesters work as a client.
+ # The CLI flags prefix for this block configuration is:
+ # pattern-ingester.client
+ [grpc_client_config: <grpc_client>]
+
pattern_ingester:
# Whether the pattern ingester is enabled.
# CLI flag: -pattern-ingester.enabled
@@ -303,7 +572,7 @@ pattern_ingester:
# Configures how the gRPC connection to ingesters work as a client.
# The CLI flags prefix for this block configuration is:
- # pattern-ingester.client
+ # bloom-build.builder.grpc
[grpc_client_config: <grpc_client>]
# How many flushes can happen concurrently from each stream.
@@ -370,7 +639,7 @@ bloom_build:
# The grpc_client block configures the gRPC client used to communicate
# between a client and server component in Loki.
# The CLI flags prefix for this block configuration is:
- # bloom-build.builder.grpc
+ # bloom-gateway-client.grpc
[grpc_config: <grpc_client>]
# Hostname (and port) of the bloom planner
@@ -1120,8 +1389,7 @@ client:
# The grpc_client block configures the gRPC client used to communicate between
# a client and server component in Loki.
- # The CLI flags prefix for this block configuration is:
- # bloom-gateway-client.grpc
+ # The CLI flags prefix for this block configuration is: bigtable
[grpc_client_config: <grpc_client>]
results_cache:
@@ -1867,6 +2135,7 @@ Configuration for a Consul client. Only applies if the selected kvstore is `cons
- `compactor.ring`
- `distributor.ring`
- `index-gateway.ring`
+- `ingester-rf1`
- `pattern-ingester`
- `query-scheduler.ring`
- `ruler.ring`
@@ -2087,6 +2356,7 @@ Configuration for an ETCD v3 client. Only applies if the selected kvstore is `et
- `compactor.ring`
- `distributor.ring`
- `index-gateway.ring`
+- `ingester-rf1`
- `pattern-ingester`
- `query-scheduler.ring`
- `ruler.ring`
@@ -2305,20 +2575,18 @@ The `frontend_worker` configures the worker - running within the Loki querier -
# Configures the querier gRPC client used to communicate with the
# query-frontend. This can't be used in conjunction with 'grpc_client_config'.
-# The CLI flags prefix for this block configuration is:
-# querier.frontend-grpc-client
+# The CLI flags prefix for this block configuration is: querier.frontend-client
[query_frontend_grpc_client: <grpc_client>]
# Configures the querier gRPC client used to communicate with the query-frontend
# and with the query-scheduler. This can't be used in conjunction with
# 'query_frontend_grpc_client' or 'query_scheduler_grpc_client'.
-# The CLI flags prefix for this block configuration is: querier.frontend-client
+# The CLI flags prefix for this block configuration is:
+# querier.scheduler-grpc-client
[grpc_client_config: <grpc_client>]
# Configures the querier gRPC client used to communicate with the
# query-scheduler. This can't be used in conjunction with 'grpc_client_config'.
-# The CLI flags prefix for this block configuration is:
-# querier.scheduler-grpc-client
[query_scheduler_grpc_client: <grpc_client>]
```
@@ -2375,6 +2643,7 @@ The `grpc_client` block configures the gRPC client used to communicate between a
- `bloom-gateway-client.grpc`
- `boltdb.shipper.index-gateway-client.grpc`
- `frontend.grpc-client-config`
+- `ingester-rf1.client`
- `ingester.client`
- `pattern-ingester.client`
- `querier.frontend-client`
@@ -2388,82 +2657,82 @@ The `grpc_client` block configures the gRPC client used to communicate between a
```yaml
# gRPC client max receive message size (bytes).
-# CLI flag: -<prefix>.grpc-max-recv-msg-size
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.grpc-max-recv-msg-size
[max_recv_msg_size: <int> | default = 104857600]
# gRPC client max send message size (bytes).
-# CLI flag: -<prefix>.grpc-max-send-msg-size
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.grpc-max-send-msg-size
[max_send_msg_size: <int> | default = 104857600]
# Use compression when sending messages. Supported values are: 'gzip', 'snappy'
# and '' (disable compression)
-# CLI flag: -<prefix>.grpc-compression
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.grpc-compression
[grpc_compression: <string> | default = ""]
# Rate limit for gRPC client; 0 means disabled.
-# CLI flag: -<prefix>.grpc-client-rate-limit
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.grpc-client-rate-limit
[rate_limit: <float> | default = 0]
# Rate limit burst for gRPC client.
-# CLI flag: -<prefix>.grpc-client-rate-limit-burst
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.grpc-client-rate-limit-burst
[rate_limit_burst: <int> | default = 0]
# Enable backoff and retry when we hit rate limits.
-# CLI flag: -<prefix>.backoff-on-ratelimits
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.backoff-on-ratelimits
[backoff_on_ratelimits: <boolean> | default = false]
backoff_config:
# Minimum delay when backing off.
- # CLI flag: -<prefix>.backoff-min-period
+ # CLI flag: -boltdb.shipper.index-gateway-client.grpc.backoff-min-period
[min_period: <duration> | default = 100ms]
# Maximum delay when backing off.
- # CLI flag: -<prefix>.backoff-max-period
+ # CLI flag: -boltdb.shipper.index-gateway-client.grpc.backoff-max-period
[max_period: <duration> | default = 10s]
# Number of times to backoff and retry before failing.
- # CLI flag: -<prefix>.backoff-retries
+ # CLI flag: -boltdb.shipper.index-gateway-client.grpc.backoff-retries
[max_retries: <int> | default = 10]
# Initial stream window size. Values less than the default are not supported and
# are ignored. Setting this to a value other than the default disables the BDP
# estimator.
-# CLI flag: -<prefix>.initial-stream-window-size
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.initial-stream-window-size
[initial_stream_window_size: <int> | default = 63KiB1023B]
# Initial connection window size. Values less than the default are not supported
# and are ignored. Setting this to a value other than the default disables the
# BDP estimator.
-# CLI flag: -<prefix>.initial-connection-window-size
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.initial-connection-window-size
[initial_connection_window_size: <int> | default = 63KiB1023B]
# Enable TLS in the gRPC client. This flag needs to be enabled when any other
# TLS flag is set. If set to false, insecure connection to gRPC server will be
# used.
-# CLI flag: -<prefix>.tls-enabled
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-enabled
[tls_enabled: <boolean> | default = false]
# Path to the client certificate, which will be used for authenticating with the
# server. Also requires the key path to be configured.
-# CLI flag: -<prefix>.tls-cert-path
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-cert-path
[tls_cert_path: <string> | default = ""]
# Path to the key for the client certificate. Also requires the client
# certificate to be configured.
-# CLI flag: -<prefix>.tls-key-path
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-key-path
[tls_key_path: <string> | default = ""]
# Path to the CA certificates to validate server certificate against. If not
# set, the host's root CA certificates are used.
-# CLI flag: -<prefix>.tls-ca-path
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-ca-path
[tls_ca_path: <string> | default = ""]
# Override the expected name on the server certificate.
-# CLI flag: -<prefix>.tls-server-name
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-server-name
[tls_server_name: <string> | default = ""]
# Skip validating server certificate.
-# CLI flag: -<prefix>.tls-insecure-skip-verify
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-insecure-skip-verify
[tls_insecure_skip_verify: <boolean> | default = false]
# Override the default cipher suite list (separated by commas). Allowed values:
@@ -2496,27 +2765,27 @@ backoff_config:
# - TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
# - TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
# - TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
-# CLI flag: -<prefix>.tls-cipher-suites
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-cipher-suites
[tls_cipher_suites: <string> | default = ""]
# Override the default minimum TLS version. Allowed values: VersionTLS10,
# VersionTLS11, VersionTLS12, VersionTLS13
-# CLI flag: -<prefix>.tls-min-version
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.tls-min-version
[tls_min_version: <string> | default = ""]
# The maximum amount of time to establish a connection. A value of 0 means
# default gRPC client connect timeout and backoff.
-# CLI flag: -<prefix>.connect-timeout
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.connect-timeout
[connect_timeout: <duration> | default = 5s]
# Initial backoff delay after first connection failure. Only relevant if
# ConnectTimeout > 0.
-# CLI flag: -<prefix>.connect-backoff-base-delay
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.connect-backoff-base-delay
[connect_backoff_base_delay: <duration> | default = 1s]
# Maximum backoff delay when establishing a connection. Only relevant if
# ConnectTimeout > 0.
-# CLI flag: -<prefix>.connect-backoff-max-delay
+# CLI flag: -boltdb.shipper.index-gateway-client.grpc.connect-backoff-max-delay
[connect_backoff_max_delay: <duration> | default = 5s]
```
@@ -2920,26 +3189,16 @@ The `ingester_client` block configures how the distributor will connect to inges
```yaml
# Configures how connections are pooled.
pool_config:
- # How frequently to clean up clients for ingesters that have gone away.
- # CLI flag: -distributor.client-cleanup-period
- [client_cleanup_period: <duration> | default = 15s]
+ [client_cleanup_period: <duration>]
- # Run a health check on each ingester client during periodic cleanup.
- # CLI flag: -distributor.health-check-ingesters
- [health_check_ingesters: <boolean> | default = true]
+ [health_check_ingesters: <boolean>]
- # How quickly a dead client will be removed after it has been detected to
- # disappear. Set this to a value to allow time for a secondary health check to
- # recover the missing client.
- # CLI flag: -ingester.client.healthcheck-timeout
- [remote_timeout: <duration> | default = 1s]
+ [remote_timeout: <duration>]
-# The remote request timeout on the client side.
-# CLI flag: -ingester.client.timeout
-[remote_timeout: <duration> | default = 5s]
+[remote_timeout: <duration>]
# Configures how the gRPC connection to ingesters work as a client.
-# The CLI flags prefix for this block configuration is: ingester.client
+# The CLI flags prefix for this block configuration is: ingester-rf1.client
[grpc_client_config: <grpc_client>]
```
@@ -5073,7 +5332,8 @@ bigtable:
# The grpc_client block configures the gRPC client used to communicate between
# a client and server component in Loki.
- # The CLI flags prefix for this block configuration is: bigtable
+ # The CLI flags prefix for this block configuration is:
+ # boltdb.shipper.index-gateway-client.grpc
[grpc_client_config: <grpc_client>]
# If enabled, once a tables info is fetched, it is cached.
@@ -5365,7 +5625,7 @@ boltdb_shipper:
# The grpc_client block configures the gRPC client used to communicate
# between a client and server component in Loki.
# The CLI flags prefix for this block configuration is:
- # boltdb.shipper.index-gateway-client.grpc
+ # tsdb.shipper.index-gateway-client.grpc
[grpc_client_config: <grpc_client>]
# Hostname or IP of the Index Gateway gRPC server running in simple mode.
@@ -5420,7 +5680,7 @@ tsdb_shipper:
# The grpc_client block configures the gRPC client used to communicate
# between a client and server component in Loki.
# The CLI flags prefix for this block configuration is:
- # tsdb.shipper.index-gateway-client.grpc
+ # querier.frontend-grpc-client
[grpc_client_config: <grpc_client>]
# Hostname or IP of the Index Gateway gRPC server running in simple mode.
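Pulling the new options together, a minimal sketch of what enabling the RF-1 ingester could look like in a Loki config file. The values are illustrative (mostly the documented defaults), not recommendations:

```yaml
ingester_rf1:
  enabled: true
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 3
  chunk_encoding: snappy
  flush_op_timeout: 10m
```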
diff --git a/pkg/ingester-rf1/clientpool/client.go b/pkg/ingester-rf1/clientpool/client.go
new file mode 100644
index 0000000000000..4407a5856dd6f
--- /dev/null
+++ b/pkg/ingester-rf1/clientpool/client.go
@@ -0,0 +1,104 @@
+package clientpool
+
+import (
+ "flag"
+ "io"
+ "time"
+
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/util/server"
+
+ "github.com/grafana/dskit/grpcclient"
+ "github.com/grafana/dskit/middleware"
+ "github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc"
+ "github.com/opentracing/opentracing-go"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "google.golang.org/grpc"
+ "google.golang.org/grpc/health/grpc_health_v1"
+)
+
+var ingesterClientRequestDuration = promauto.NewHistogramVec(prometheus.HistogramOpts{
+ Name: "loki_ingester_rf1_client_request_duration_seconds",
+ Help: "Time spent doing Ingester RF1 requests.",
+ Buckets: prometheus.ExponentialBuckets(0.001, 4, 6),
+}, []string{"operation", "status_code"})
+
+type HealthAndIngesterClient interface {
+ grpc_health_v1.HealthClient
+ Close() error
+}
+
+type ClosableHealthAndIngesterClient struct {
+ logproto.PusherRF1Client
+ grpc_health_v1.HealthClient
+ io.Closer
+}
+
+// Config for an ingester client.
+type Config struct {
+ PoolConfig PoolConfig `yaml:"pool_config,omitempty" doc:"description=Configures how connections are pooled."`
+ RemoteTimeout time.Duration `yaml:"remote_timeout,omitempty"`
+ GRPCClientConfig grpcclient.Config `yaml:"grpc_client_config" doc:"description=Configures how the gRPC connection to ingesters work as a client."`
+ GRPCUnaryClientInterceptors []grpc.UnaryClientInterceptor `yaml:"-"`
+ GRCPStreamClientInterceptors []grpc.StreamClientInterceptor `yaml:"-"`
+
+ // Internal is used to indicate that this client communicates on behalf of
+ // a machine and not a user. When Internal = true, the client won't attempt
+ // to inject an userid into the context.
+ Internal bool `yaml:"-"`
+}
+
+// RegisterFlags registers flags.
+func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
+ cfg.GRPCClientConfig.RegisterFlagsWithPrefix("ingester-rf1.client", f)
+ cfg.PoolConfig.RegisterFlagsWithPrefix("ingester-rf1.", f)
+
+ f.DurationVar(&cfg.PoolConfig.RemoteTimeout, "ingester-rf1.client.healthcheck-timeout", 1*time.Second, "How quickly a dead client will be removed after it has been detected to disappear. Set this to a value to allow time for a secondary health check to recover the missing client.")
+ f.DurationVar(&cfg.RemoteTimeout, "ingester-rf1.client.timeout", 5*time.Second, "The remote request timeout on the client side.")
+}
+
+// NewClient returns a new ingester client.
+func NewClient(cfg Config, addr string) (HealthAndIngesterClient, error) {
+ opts := []grpc.DialOption{
+ grpc.WithDefaultCallOptions(cfg.GRPCClientConfig.CallOptions()...),
+ }
+
+ dialOpts, err := cfg.GRPCClientConfig.DialOption(instrumentation(&cfg))
+ if err != nil {
+ return nil, err
+ }
+
+ opts = append(opts, dialOpts...)
+ conn, err := grpc.Dial(addr, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return ClosableHealthAndIngesterClient{
+ PusherRF1Client: logproto.NewPusherRF1Client(conn),
+ HealthClient: grpc_health_v1.NewHealthClient(conn),
+ Closer: conn,
+ }, nil
+}
+
+func instrumentation(cfg *Config) ([]grpc.UnaryClientInterceptor, []grpc.StreamClientInterceptor) {
+ var unaryInterceptors []grpc.UnaryClientInterceptor
+ unaryInterceptors = append(unaryInterceptors, cfg.GRPCUnaryClientInterceptors...)
+ unaryInterceptors = append(unaryInterceptors, server.UnaryClientQueryTagsInterceptor)
+ unaryInterceptors = append(unaryInterceptors, otgrpc.OpenTracingClientInterceptor(opentracing.GlobalTracer()))
+ if !cfg.Internal {
+ unaryInterceptors = append(unaryInterceptors, middleware.ClientUserHeaderInterceptor)
+ }
+ unaryInterceptors = append(unaryInterceptors, middleware.UnaryClientInstrumentInterceptor(ingesterClientRequestDuration))
+
+ var streamInterceptors []grpc.StreamClientInterceptor
+ streamInterceptors = append(streamInterceptors, cfg.GRCPStreamClientInterceptors...)
+ streamInterceptors = append(streamInterceptors, server.StreamClientQueryTagsInterceptor)
+ streamInterceptors = append(streamInterceptors, otgrpc.OpenTracingStreamClientInterceptor(opentracing.GlobalTracer()))
+ if !cfg.Internal {
+ streamInterceptors = append(streamInterceptors, middleware.StreamClientUserHeaderInterceptor)
+ }
+ streamInterceptors = append(streamInterceptors, middleware.StreamClientInstrumentInterceptor(ingesterClientRequestDuration))
+
+ return unaryInterceptors, streamInterceptors
+}
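A hedged usage sketch for NewClient above. Registering the flags on a throwaway FlagSet is one way to pick up the documented defaults; the address is made up, and since grpc.Dial connects lazily, NewClient succeeding does not prove the ingester is reachable:

```go
package example

import (
	"flag"
	"log"

	"github.com/grafana/loki/v3/pkg/ingester-rf1/clientpool"
)

func dialRF1() {
	// Register the ingester-rf1.client.* flags and keep their default values.
	var cfg clientpool.Config
	fs := flag.NewFlagSet("example", flag.PanicOnError)
	cfg.RegisterFlags(fs)
	_ = fs.Parse(nil)

	// Hypothetical address; in Loki the pool resolves real ones from the ring.
	c, err := clientpool.NewClient(cfg, "127.0.0.1:9095")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()
}
```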
diff --git a/pkg/ingester-rf1/clientpool/ingester_client_pool.go b/pkg/ingester-rf1/clientpool/ingester_client_pool.go
new file mode 100644
index 0000000000000..7c84f6aa69b1b
--- /dev/null
+++ b/pkg/ingester-rf1/clientpool/ingester_client_pool.go
@@ -0,0 +1,46 @@
+package clientpool
+
+import (
+ "flag"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/grafana/dskit/ring"
+ ring_client "github.com/grafana/dskit/ring/client"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+)
+
+var clients prometheus.Gauge
+
+// PoolConfig is config for creating a Pool.
+type PoolConfig struct {
+ ClientCleanupPeriod time.Duration `yaml:"client_cleanup_period"`
+ HealthCheckIngesters bool `yaml:"health_check_ingesters"`
+ RemoteTimeout time.Duration `yaml:"remote_timeout"`
+}
+
+// RegisterFlags adds the flags required to config this to the given FlagSet.
+func (cfg *PoolConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
+ f.DurationVar(&cfg.ClientCleanupPeriod, prefix+"client-cleanup-period", 15*time.Second, "How frequently to clean up clients for ingesters that have gone away.")
+ f.BoolVar(&cfg.HealthCheckIngesters, prefix+"health-check-ingesters", true, "Run a health check on each ingester client during periodic cleanup.")
+ f.DurationVar(&cfg.RemoteTimeout, prefix+"remote-timeout", 1*time.Second, "Timeout for the health check.")
+}
+
+func NewPool(name string, cfg PoolConfig, ring ring.ReadRing, factory ring_client.PoolFactory, logger log.Logger, metricsNamespace string) *ring_client.Pool {
+ poolCfg := ring_client.PoolConfig{
+ CheckInterval: cfg.ClientCleanupPeriod,
+ HealthCheckEnabled: cfg.HealthCheckIngesters,
+ HealthCheckTimeout: cfg.RemoteTimeout,
+ }
+
+ if clients == nil {
+ clients = promauto.NewGauge(prometheus.GaugeOpts{
+ Namespace: metricsNamespace,
+ Name: "ingester_rf1_clients",
+ Help: "The current number of RF1 ingester clients.",
+ })
+ }
+ // TODO(chaudum): Allow configuration of metric name by the caller.
+ return ring_client.NewPool(name, poolCfg, ring_client.NewRingServiceDiscovery(ring), factory, clients, logger)
+}
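For context, a sketch of how NewClient and NewPool could be wired together, assuming the vendored dskit exposes the ring_client.PoolAddrFunc adapter; the helper name and metrics namespace are hypothetical:

```go
package example

import (
	"github.com/go-kit/log"
	"github.com/grafana/dskit/ring"
	ring_client "github.com/grafana/dskit/ring/client"

	"github.com/grafana/loki/v3/pkg/ingester-rf1/clientpool"
)

// newRF1Pool adapts NewClient into a dskit PoolFactory and hands it to
// NewPool, which discovers ingester addresses from the ring and periodically
// health-checks and evicts stale clients.
func newRF1Pool(cfg clientpool.Config, readRing ring.ReadRing, logger log.Logger) *ring_client.Pool {
	factory := ring_client.PoolAddrFunc(func(addr string) (ring_client.PoolClient, error) {
		return clientpool.NewClient(cfg, addr)
	})
	return clientpool.NewPool("ingester-rf1", cfg.PoolConfig, readRing, factory, logger, "loki")
}
```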
diff --git a/pkg/ingester-rf1/flush.go b/pkg/ingester-rf1/flush.go
new file mode 100644
index 0000000000000..d46619575eeae
--- /dev/null
+++ b/pkg/ingester-rf1/flush.go
@@ -0,0 +1,186 @@
+package ingesterrf1
+
+import (
+ "fmt"
+ "net/http"
+ "time"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/backoff"
+ "github.com/grafana/dskit/ring"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/common/model"
+ "golang.org/x/net/context"
+
+ "github.com/grafana/loki/v3/pkg/chunkenc"
+ "github.com/grafana/loki/v3/pkg/storage/chunk"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
+ "github.com/grafana/loki/v3/pkg/util"
+)
+
+const (
+ // Backoff for retrying 'immediate' flushes. Only counts for queue
+ // position, not wallclock time.
+ flushBackoff = 1 * time.Second
+
+ nameLabel = "__name__"
+ logsValue = "logs"
+
+ flushReasonIdle = "idle"
+ flushReasonMaxAge = "max_age"
+ flushReasonForced = "forced"
+ flushReasonFull = "full"
+ flushReasonSynced = "synced"
+)
+
+// Note: this is called both during WAL replay (zero or more times)
+// and again after replay has completed.
+func (i *Ingester) InitFlushQueues() {
+ i.flushQueuesDone.Add(i.cfg.ConcurrentFlushes)
+ for j := 0; j < i.cfg.ConcurrentFlushes; j++ {
+ i.flushQueues[j] = util.NewPriorityQueue(i.metrics.flushQueueLength)
+ go i.flushLoop(j)
+ }
+}
+
+// Flush implements ring.FlushTransferer
+// Flush triggers a flush of all the chunks and closes the flush queues.
+// Called from the Lifecycler as part of the ingester shutdown.
+func (i *Ingester) Flush() {
+ i.flush()
+}
+
+// TransferOut implements ring.FlushTransferer
+// Noop implementation because ingesters have a WAL now that does not require transferring chunks any more.
+// We return ErrTransferDisabled to indicate that we don't support transfers, and therefore we may flush on shutdown if configured to do so.
+func (i *Ingester) TransferOut(_ context.Context) error {
+ return ring.ErrTransferDisabled
+}
+
+func (i *Ingester) flush() {
+ // TODO: Flush the last chunks
+ // Close the flush queues, to unblock waiting workers.
+ for _, flushQueue := range i.flushQueues {
+ flushQueue.Close()
+ }
+
+ i.flushQueuesDone.Wait()
+ level.Debug(i.logger).Log("msg", "flush queues have drained")
+}
+
+// FlushHandler triggers a flush of all in memory chunks. Mainly used for
+// local testing.
+func (i *Ingester) FlushHandler(w http.ResponseWriter, _ *http.Request) {
+ w.WriteHeader(http.StatusNoContent)
+}
+
+type flushOp struct {
+ from model.Time
+ userID string
+ fp model.Fingerprint
+ immediate bool
+}
+
+func (o *flushOp) Key() string {
+ return fmt.Sprintf("%s-%s-%v", o.userID, o.fp, o.immediate)
+}
+
+func (o *flushOp) Priority() int64 {
+ return -int64(o.from)
+}
+
+func (i *Ingester) flushLoop(j int) {
+ l := log.With(i.logger, "loop", j)
+ defer func() {
+ level.Debug(l).Log("msg", "Ingester.flushLoop() exited")
+ i.flushQueuesDone.Done()
+ }()
+
+ for {
+ o := i.flushQueues[j].Dequeue()
+ if o == nil {
+ return
+ }
+ op := o.(*flushCtx)
+
+ err := i.flushOp(l, op)
+ if err != nil {
+ level.Error(l).Log("msg", "failed to flush", "err", err)
+ // Immediately re-queue another attempt at flushing this segment.
+ // TODO: Add some backoff or something?
+ i.flushQueues[j].Enqueue(op)
+ } else {
+ // Close the channel and trigger all waiting listeners to return
+ // TODO: Figure out how to return an error if we want to?
+ close(op.flushDone)
+ }
+ }
+}
+
+func (i *Ingester) flushOp(l log.Logger, flushCtx *flushCtx) error {
+ ctx, cancelFunc := context.WithCancel(context.Background())
+ defer cancelFunc()
+
+ b := backoff.New(ctx, i.cfg.FlushOpBackoff)
+ for b.Ongoing() {
+ err := i.flushSegment(ctx, flushCtx.segmentWriter)
+ if err == nil {
+ break
+ }
+ level.Error(l).Log("msg", "failed to flush", "retries", b.NumRetries(), "err", err)
+ b.Wait()
+ }
+ return b.Err()
+}
+
+// flushSegment flushes the given segment to the store.
+//
+// If the flush is successful, metrics for this flush are reported.
+// If the flush isn't successful, the operation for this userID is requeued, allowing this and all other
+// unflushed segments another opportunity to be flushed.
+func (i *Ingester) flushSegment(ctx context.Context, ch *wal.SegmentWriter) error {
+ if err := i.store.PutWal(ctx, ch); err != nil {
+ i.metrics.chunksFlushFailures.Inc()
+ return fmt.Errorf("store put chunk: %w", err)
+ }
+ i.metrics.flushedChunksStats.Inc(1)
+ // TODO: report some flush metrics
+ return nil
+}
+
+// reportFlushedChunkStatistics calculates overall statistics of flushed chunks without compromising the flush process.
+func (i *Ingester) reportFlushedChunkStatistics(ch *chunk.Chunk, desc *chunkDesc, sizePerTenant prometheus.Counter, countPerTenant prometheus.Counter, reason string) {
+ byt, err := ch.Encoded()
+ if err != nil {
+ level.Error(i.logger).Log("msg", "failed to encode flushed wire chunk", "err", err)
+ return
+ }
+
+ i.metrics.chunksFlushedPerReason.WithLabelValues(reason).Add(1)
+
+ compressedSize := float64(len(byt))
+ uncompressedSize, ok := chunkenc.UncompressedSize(ch.Data)
+
+ if ok && compressedSize > 0 {
+ i.metrics.chunkCompressionRatio.Observe(float64(uncompressedSize) / compressedSize)
+ }
+
+ utilization := ch.Data.Utilization()
+ i.metrics.chunkUtilization.Observe(utilization)
+ numEntries := desc.chunk.Size()
+ i.metrics.chunkEntries.Observe(float64(numEntries))
+ i.metrics.chunkSize.Observe(compressedSize)
+ sizePerTenant.Add(compressedSize)
+ countPerTenant.Inc()
+
+ boundsFrom, boundsTo := desc.chunk.Bounds()
+ i.metrics.chunkAge.Observe(time.Since(boundsFrom).Seconds())
+ i.metrics.chunkLifespan.Observe(boundsTo.Sub(boundsFrom).Hours())
+
+ i.metrics.flushedChunksBytesStats.Record(compressedSize)
+ i.metrics.flushedChunksLinesStats.Record(float64(numEntries))
+ i.metrics.flushedChunksUtilizationStats.Record(utilization)
+ i.metrics.flushedChunksAgeStats.Record(time.Since(boundsFrom).Seconds())
+ i.metrics.flushedChunksLifespanStats.Record(boundsTo.Sub(boundsFrom).Seconds())
+}
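The retry loop in flushOp above is the standard dskit backoff pattern. A self-contained sketch of the same idea, with limits set to the ingester-rf1.flush-op-backoff-* defaults documented earlier (the helper name is hypothetical):

```go
package example

import (
	"context"
	"time"

	"github.com/grafana/dskit/backoff"
)

// retryFlush mirrors flushOp: attempt the operation, back off exponentially
// between failures, and stop when it succeeds, the context is cancelled, or
// retries are exhausted.
func retryFlush(ctx context.Context, flush func(context.Context) error) error {
	b := backoff.New(ctx, backoff.Config{
		MinBackoff: 100 * time.Millisecond,
		MaxBackoff: time.Minute,
		MaxRetries: 10,
	})
	for b.Ongoing() {
		if err := flush(ctx); err == nil {
			return nil
		}
		b.Wait()
	}
	// Err reports why the loop stopped: context error or retries exhausted.
	return b.Err()
}
```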
diff --git a/pkg/ingester-rf1/ingester.go b/pkg/ingester-rf1/ingester.go
new file mode 100644
index 0000000000000..fef8418945f3c
--- /dev/null
+++ b/pkg/ingester-rf1/ingester.go
@@ -0,0 +1,1081 @@
+package ingesterrf1
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "math/rand"
+ "net/http"
+ "os"
+ "path"
+ "runtime/pprof"
+ "sync"
+ "time"
+
+ ring_client "github.com/grafana/dskit/ring/client"
+ "github.com/opentracing/opentracing-go"
+
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/clientpool"
+ "github.com/grafana/loki/v3/pkg/ingester/index"
+ "github.com/grafana/loki/v3/pkg/loghttp/push"
+ lokilog "github.com/grafana/loki/v3/pkg/logql/log"
+ "github.com/grafana/loki/v3/pkg/storage/types"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
+ util_log "github.com/grafana/loki/v3/pkg/util/log"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/backoff"
+ "github.com/grafana/dskit/modules"
+ "github.com/grafana/dskit/multierror"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/services"
+ "github.com/grafana/dskit/tenant"
+ "github.com/pkg/errors"
+ "github.com/prometheus/client_golang/prometheus"
+ "google.golang.org/grpc/health/grpc_health_v1"
+
+ server_util "github.com/grafana/loki/v3/pkg/util/server"
+
+ "github.com/grafana/loki/v3/pkg/analytics"
+ "github.com/grafana/loki/v3/pkg/chunkenc"
+ "github.com/grafana/loki/v3/pkg/distributor/writefailures"
+ "github.com/grafana/loki/v3/pkg/ingester/client"
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/runtime"
+ "github.com/grafana/loki/v3/pkg/storage"
+ "github.com/grafana/loki/v3/pkg/storage/chunk"
+ "github.com/grafana/loki/v3/pkg/storage/config"
+ "github.com/grafana/loki/v3/pkg/storage/stores"
+ indexstore "github.com/grafana/loki/v3/pkg/storage/stores/index"
+ "github.com/grafana/loki/v3/pkg/util"
+)
+
+const (
+ // RingKey is the key under which we store the ingesters ring in the KVStore.
+ RingKey = "ring"
+
+ shutdownMarkerFilename = "shutdown-requested.txt"
+)
+
+// ErrReadOnly is returned when the ingester is shutting down and a push was
+// attempted.
+var (
+ ErrReadOnly = errors.New("Ingester is shutting down")
+
+ compressionStats = analytics.NewString("ingester_compression")
+ targetSizeStats = analytics.NewInt("ingester_target_size_bytes")
+ activeTenantsStats = analytics.NewInt("ingester_active_tenants")
+)
+
+// Config for an ingester.
+type Config struct {
+ Enabled bool `yaml:"enabled" doc:"description=Whether the ingester is enabled."`
+
+ LifecyclerConfig ring.LifecyclerConfig `yaml:"lifecycler,omitempty" doc:"description=Configures how the lifecycle of the ingester will operate and where it will register for discovery."`
+
+ ConcurrentFlushes int `yaml:"concurrent_flushes"`
+ FlushCheckPeriod time.Duration `yaml:"flush_check_period"`
+ FlushOpBackoff backoff.Config `yaml:"flush_op_backoff"`
+ FlushOpTimeout time.Duration `yaml:"flush_op_timeout"`
+ RetainPeriod time.Duration `yaml:"chunk_retain_period"`
+ MaxChunkIdle time.Duration `yaml:"chunk_idle_period"`
+ BlockSize int `yaml:"chunk_block_size"`
+ TargetChunkSize int `yaml:"chunk_target_size"`
+ ChunkEncoding string `yaml:"chunk_encoding"`
+ parsedEncoding chunkenc.Encoding `yaml:"-"` // placeholder for validated encoding
+ MaxChunkAge time.Duration `yaml:"max_chunk_age"`
+ AutoForgetUnhealthy bool `yaml:"autoforget_unhealthy"`
+
+ MaxReturnedErrors int `yaml:"max_returned_stream_errors"`
+
+ // For testing, you can override the address and ID of this ingester.
+ ingesterClientFactory func(cfg client.Config, addr string) (client.HealthAndIngesterClient, error)
+
+ // Optional wrapper that can be used to modify the behaviour of the ingester
+ Wrapper Wrapper `yaml:"-"`
+
+ IndexShards int `yaml:"index_shards"`
+
+ MaxDroppedStreams int `yaml:"max_dropped_streams"`
+
+ ShutdownMarkerPath string `yaml:"shutdown_marker_path"`
+
+ OwnedStreamsCheckInterval time.Duration `yaml:"owned_streams_check_interval" doc:"description=Interval at which the ingester ownedStreamService checks for changes in the ring to recalculate owned streams."`
+
+ // Tee configs
+ ClientConfig clientpool.Config `yaml:"client_config,omitempty" doc:"description=Configures how the distributor will connect to the RF-1 ingesters."`
+ factory ring_client.PoolFactory `yaml:"-"`
+}
+
+// RegisterFlags registers the flags.
+func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
+ cfg.LifecyclerConfig.RegisterFlagsWithPrefix("ingester-rf1.", f, util_log.Logger)
+ cfg.ClientConfig.RegisterFlags(f)
+
+ f.IntVar(&cfg.ConcurrentFlushes, "ingester-rf1.concurrent-flushes", 32, "How many flushes can happen concurrently from each stream.")
+ f.DurationVar(&cfg.FlushCheckPeriod, "ingester-rf1.flush-check-period", 500*time.Millisecond, "How often should the ingester see if there are any blocks to flush. The first flush check is delayed by a random time up to 0.8x the flush check period. Additionally, there is +/- 1% jitter added to the interval.")
+ f.DurationVar(&cfg.FlushOpBackoff.MinBackoff, "ingester-rf1.flush-op-backoff-min-period", 100*time.Millisecond, "Minimum backoff period when a flush fails. Each concurrent flush has its own backoff, see `ingester.concurrent-flushes`.")
+ f.DurationVar(&cfg.FlushOpBackoff.MaxBackoff, "ingester-rf1.flush-op-backoff-max-period", time.Minute, "Maximum backoff period when a flush fails. Each concurrent flush has its own backoff, see `ingester.concurrent-flushes`.")
+ f.IntVar(&cfg.FlushOpBackoff.MaxRetries, "ingester-rf1.flush-op-backoff-retries", 10, "Maximum retries for failed flushes.")
+ f.DurationVar(&cfg.FlushOpTimeout, "ingester-rf1.flush-op-timeout", 10*time.Minute, "The timeout for an individual flush. Will be retried up to `flush-op-backoff-retries` times.")
+ f.DurationVar(&cfg.RetainPeriod, "ingester-rf1.chunks-retain-period", 0, "How long chunks should be retained in-memory after they've been flushed.")
+ //f.DurationVar(&cfg.MaxChunkIdle, "ingester-rf1.chunks-idle-period", 30*time.Minute, "How long chunks should sit in-memory with no updates before being flushed if they don't hit the max block size. This means that half-empty chunks will still be flushed after a certain period as long as they receive no further activity.")
+ f.IntVar(&cfg.BlockSize, "ingester-rf1.chunks-block-size", 256*1024, "The targeted _uncompressed_ size in bytes of a chunk block. When this threshold is exceeded, the head block will be cut and compressed inside the chunk.")
+ f.IntVar(&cfg.TargetChunkSize, "ingester-rf1.chunk-target-size", 1572864, "A target _compressed_ size in bytes for chunks. This is a desired size, not an exact size; chunks may be slightly bigger or significantly smaller if they get flushed for other reasons (e.g. chunk_idle_period). A value of 0 creates chunks with a fixed 10 blocks; a non-zero value will create chunks with a variable number of blocks to meet the target size.") // 1.5 MB
+ f.StringVar(&cfg.ChunkEncoding, "ingester-rf1.chunk-encoding", chunkenc.EncGZIP.String(), fmt.Sprintf("The algorithm to use for compressing chunk. (%s)", chunkenc.SupportedEncoding()))
+ f.IntVar(&cfg.MaxReturnedErrors, "ingester-rf1.max-ignored-stream-errors", 10, "The maximum number of errors a stream will report to the user when a push fails. 0 to make unlimited.")
+ f.DurationVar(&cfg.MaxChunkAge, "ingester-rf1.max-chunk-age", 2*time.Hour, "The maximum duration of a timeseries chunk in memory. If a timeseries runs for longer than this, the current chunk will be flushed to the store and a new chunk created.")
+ f.BoolVar(&cfg.AutoForgetUnhealthy, "ingester-rf1.autoforget-unhealthy", false, "Forget about ingesters having heartbeat timestamps older than `ring.kvstore.heartbeat_timeout`. This is equivalent to clicking on the `/ring` `forget` button in the UI: the ingester is removed from the ring. This is a useful setting when you are sure that an unhealthy node won't return. An example is when not using stateful sets or the equivalent. Use `memberlist.rejoin_interval` > 0 to handle network partition cases when using a memberlist.")
+ f.IntVar(&cfg.IndexShards, "ingester-rf1.index-shards", index.DefaultIndexShards, "Shard factor used in the ingesters for the in process reverse index. This MUST be evenly divisible by ALL schema shard factors or Loki will not start.")
+ f.IntVar(&cfg.MaxDroppedStreams, "ingester-rf1.tailer.max-dropped-streams", 10, "Maximum number of dropped streams to keep in memory during tailing.")
+ f.StringVar(&cfg.ShutdownMarkerPath, "ingester-rf1.shutdown-marker-path", "", "Path where the shutdown marker file is stored. If not set and common.path_prefix is set then common.path_prefix will be used.")
+ f.DurationVar(&cfg.OwnedStreamsCheckInterval, "ingester-rf1.owned-streams-check-interval", 30*time.Second, "Interval at which the ingester ownedStreamService checks for changes in the ring to recalculate owned streams.")
+ f.BoolVar(&cfg.Enabled, "ingester-rf1.enabled", false, "Flag to enable or disable the usage of the ingester-rf1 component.")
+}
+
+func (cfg *Config) Validate() error {
+ enc, err := chunkenc.ParseEncoding(cfg.ChunkEncoding)
+ if err != nil {
+ return err
+ }
+ cfg.parsedEncoding = enc
+
+ if cfg.FlushOpBackoff.MinBackoff > cfg.FlushOpBackoff.MaxBackoff {
+ return errors.New("invalid flush op min backoff: cannot be larger than max backoff")
+ }
+ if cfg.FlushOpBackoff.MaxRetries <= 0 {
+ return fmt.Errorf("invalid flush op max retries: %d", cfg.FlushOpBackoff.MaxRetries)
+ }
+ if cfg.FlushOpTimeout <= 0 {
+ return fmt.Errorf("invalid flush op timeout: %s", cfg.FlushOpTimeout)
+ }
+
+ return nil
+}
+
+type Wrapper interface {
+ Wrap(wrapped Interface) Interface
+}
+
+// Store is the store interface we need on the ingester.
+type Store interface {
+ stores.ChunkWriter
+ stores.ChunkFetcher
+ storage.SelectStore
+ storage.SchemaConfigProvider
+ indexstore.StatsReader
+}
+
+// Interface is an interface for the Ingester
+type Interface interface {
+ services.Service
+ http.Handler
+
+ logproto.PusherServer
+ //logproto.QuerierServer
+ //logproto.StreamDataServer
+
+ CheckReady(ctx context.Context) error
+ FlushHandler(w http.ResponseWriter, _ *http.Request)
+ GetOrCreateInstance(instanceID string) (*instance, error)
+ ShutdownHandler(w http.ResponseWriter, r *http.Request)
+ PrepareShutdown(w http.ResponseWriter, r *http.Request)
+}
+
+type flushCtx struct {
+ lock *sync.RWMutex
+ flushDone chan struct{}
+ newCtxAvailable chan struct{}
+ segmentWriter *wal.SegmentWriter
+ creationTime time.Time
+}
+
+func (o *flushCtx) Key() string {
+ return fmt.Sprintf("%d", o.creationTime.UnixNano())
+}
+
+func (o *flushCtx) Priority() int64 {
+ return -o.creationTime.UnixNano()
+}
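+
+// Note: assuming util.PriorityQueue dequeues the highest Priority() first,
+// negating the creation time above means older flush contexts are drained
+// before newer ones, i.e. segments flush roughly in creation order.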
+
+// Ingester builds chunks for incoming log streams.
+type Ingester struct {
+ services.Service
+
+ cfg Config
+ logger log.Logger
+
+ clientConfig client.Config
+ tenantConfigs *runtime.TenantConfigs
+
+ shutdownMtx sync.Mutex // Allows processes to grab a lock and prevent a shutdown
+ instancesMtx sync.RWMutex
+ instances map[string]*instance
+ readonly bool
+
+ lifecycler *ring.Lifecycler
+ lifecyclerWatcher *services.FailureWatcher
+
+ store Store
+ periodicConfigs []config.PeriodConfig
+
+ loopDone sync.WaitGroup
+ loopQuit chan struct{}
+ tailersQuit chan struct{}
+
+ // One queue per flush thread. Fingerprint is used to
+ // pick a queue.
+ flushQueues []*util.PriorityQueue
+ flushQueuesDone sync.WaitGroup
+
+ flushCtx *flushCtx
+
+ limiter *Limiter
+
+ // Flag for whether stopping the ingester service should also terminate the
+ // loki process.
+ // This is set when calling the shutdown handler.
+ terminateOnShutdown bool
+
+ // Only used by WAL & flusher to coordinate backpressure during replay.
+ //replayController *replayController
+
+ metrics *ingesterMetrics
+
+ chunkFilter chunk.RequestChunkFilterer
+ extractorWrapper lokilog.SampleExtractorWrapper
+ pipelineWrapper lokilog.PipelineWrapper
+
+ streamRateCalculator *StreamRateCalculator
+
+ writeLogManager *writefailures.Manager
+
+ customStreamsTracker push.UsageTracker
+
+ // recalculateOwnedStreams periodically checks the ring for changes and recalculates owned streams for each instance.
+ readRing ring.ReadRing
+ //recalculateOwnedStreams *recalculateOwnedStreams
+}
+
+// New makes a new Ingester.
+func New(cfg Config, clientConfig client.Config, store Store, limits Limits, configs *runtime.TenantConfigs, registerer prometheus.Registerer, writeFailuresCfg writefailures.Cfg, metricsNamespace string, logger log.Logger, customStreamsTracker push.UsageTracker, readRing ring.ReadRing) (*Ingester, error) {
+ if cfg.ingesterClientFactory == nil {
+ cfg.ingesterClientFactory = client.New
+ }
+ compressionStats.Set(cfg.ChunkEncoding)
+ targetSizeStats.Set(int64(cfg.TargetChunkSize))
+ metrics := newIngesterMetrics(registerer, metricsNamespace)
+
+ segmentWriter, err := wal.NewWalSegmentWriter()
+ if err != nil {
+ return nil, err
+ }
+
+ i := &Ingester{
+ cfg: cfg,
+ logger: logger,
+ clientConfig: clientConfig,
+ tenantConfigs: configs,
+ instances: map[string]*instance{},
+ store: store,
+ periodicConfigs: store.GetSchemaConfigs(),
+ loopQuit: make(chan struct{}),
+ flushQueues: make([]*util.PriorityQueue, cfg.ConcurrentFlushes),
+ tailersQuit: make(chan struct{}),
+ metrics: metrics,
+ //flushOnShutdownSwitch: &OnceSwitch{},
+ terminateOnShutdown: false,
+ streamRateCalculator: NewStreamRateCalculator(),
+ writeLogManager: writefailures.NewManager(logger, registerer, writeFailuresCfg, configs, "ingester_rf1"),
+ customStreamsTracker: customStreamsTracker,
+ readRing: readRing,
+ flushCtx: &flushCtx{
+ lock: &sync.RWMutex{},
+ flushDone: make(chan struct{}),
+ newCtxAvailable: make(chan struct{}),
+ segmentWriter: segmentWriter,
+ },
+ }
+ //i.replayController = newReplayController(metrics, cfg.WAL, &replayFlusher{i})
+
+ // TODO: change flush on shutdown
+ i.lifecycler, err = ring.NewLifecycler(cfg.LifecyclerConfig, i, "ingester-rf1", "ingester-rf1-ring", true, logger, prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer))
+ if err != nil {
+ return nil, err
+ }
+
+ i.lifecyclerWatcher = services.NewFailureWatcher()
+ i.lifecyclerWatcher.WatchService(i.lifecycler)
+
+ // Now that the lifecycler has been created, we can create the limiter
+ // which depends on it.
+ i.limiter = NewLimiter(limits, metrics, i.lifecycler, cfg.LifecyclerConfig.RingConfig.ReplicationFactor)
+
+ i.Service = services.NewBasicService(i.starting, i.running, i.stopping)
+
+ i.setupAutoForget()
+
+ //if i.cfg.ChunkFilterer != nil {
+ // i.SetChunkFilterer(i.cfg.ChunkFilterer)
+ //}
+ //
+ //if i.cfg.PipelineWrapper != nil {
+ // i.SetPipelineWrapper(i.cfg.PipelineWrapper)
+ //}
+ //
+ //if i.cfg.SampleExtractorWrapper != nil {
+ // i.SetExtractorWrapper(i.cfg.SampleExtractorWrapper)
+ //}
+ //
+ //i.recalculateOwnedStreams = newRecalculateOwnedStreams(i.getInstances, i.lifecycler.ID, i.readRing, cfg.OwnedStreamsCheckInterval, util_log.Logger)
+
+ return i, nil
+}
+
+func (i *Ingester) SetChunkFilterer(chunkFilter chunk.RequestChunkFilterer) {
+ i.chunkFilter = chunkFilter
+}
+
+func (i *Ingester) SetExtractorWrapper(wrapper lokilog.SampleExtractorWrapper) {
+ i.extractorWrapper = wrapper
+}
+
+func (i *Ingester) SetPipelineWrapper(wrapper lokilog.PipelineWrapper) {
+ i.pipelineWrapper = wrapper
+}
+
+// setupAutoForget watches the ring status when `AutoForgetUnhealthy` is enabled.
+// When enabled, unhealthy ingesters that reach `ring.kvstore.heartbeat_timeout` are removed from the ring every `HeartbeatPeriod`.
+func (i *Ingester) setupAutoForget() {
+ if !i.cfg.AutoForgetUnhealthy {
+ return
+ }
+
+ go func() {
+ ctx := context.Background()
+ err := i.Service.AwaitRunning(ctx)
+ if err != nil {
+ level.Error(i.logger).Log("msg", fmt.Sprintf("autoforget received error %s, autoforget is disabled", err.Error()))
+ return
+ }
+
+ level.Info(i.logger).Log("msg", fmt.Sprintf("autoforget is enabled and will remove unhealthy instances from the ring after %v with no heartbeat", i.cfg.LifecyclerConfig.RingConfig.HeartbeatTimeout))
+
+ ticker := time.NewTicker(i.cfg.LifecyclerConfig.HeartbeatPeriod)
+ defer ticker.Stop()
+
+ var forgetList []string
+ for range ticker.C {
+ err := i.lifecycler.KVStore.CAS(ctx, RingKey, func(in interface{}) (out interface{}, retry bool, err error) {
+ forgetList = forgetList[:0]
+ if in == nil {
+ return nil, false, nil
+ }
+
+ ringDesc, ok := in.(*ring.Desc)
+ if !ok {
+ level.Warn(i.logger).Log("msg", fmt.Sprintf("autoforget saw a KV store value that was not `ring.Desc`, got `%T`", in))
+ return nil, false, nil
+ }
+
+ for id, ingester := range ringDesc.Ingesters {
+ if !ingester.IsHealthy(ring.Reporting, i.cfg.LifecyclerConfig.RingConfig.HeartbeatTimeout, time.Now()) {
+ if i.lifecycler.ID == id {
+ level.Warn(i.logger).Log("msg", fmt.Sprintf("autoforget has seen our ID `%s` as unhealthy in the ring, network may be partitioned, skip forgeting ingesters this round", id))
+ return nil, false, nil
+ }
+ forgetList = append(forgetList, id)
+ }
+ }
+
+ if len(forgetList) == len(ringDesc.Ingesters)-1 {
+ level.Warn(i.logger).Log("msg", fmt.Sprintf("autoforget have seen %d unhealthy ingesters out of %d, network may be partioned, skip forgeting ingesters this round", len(forgetList), len(ringDesc.Ingesters)))
+ forgetList = forgetList[:0]
+ return nil, false, nil
+ }
+
+ if len(forgetList) > 0 {
+ for _, id := range forgetList {
+ ringDesc.RemoveIngester(id)
+ }
+ return ringDesc, true, nil
+ }
+ return nil, false, nil
+ })
+ if err != nil {
+ level.Warn(i.logger).Log("msg", err)
+ continue
+ }
+
+ for _, id := range forgetList {
+ level.Info(i.logger).Log("msg", fmt.Sprintf("autoforget removed ingester %v from the ring because it was not healthy after %v", id, i.cfg.LifecyclerConfig.RingConfig.HeartbeatTimeout))
+ }
+ i.metrics.autoForgetUnhealthyIngestersTotal.Add(float64(len(forgetList)))
+ }
+ }()
+}
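+
+// Note: the CAS callback above deliberately returns a nil descriptor (no
+// ring update) when our own ID looks unhealthy, or when all but one
+// ingester look unhealthy, since both cases suggest a network partition
+// rather than genuinely dead instances.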
+
+// ServeHTTP implements the ingester-rf1 ring status page.
+func (i *Ingester) ServeHTTP(w http.ResponseWriter, r *http.Request) {
+ i.lifecycler.ServeHTTP(w, r)
+}
+
+func (i *Ingester) starting(ctx context.Context) error {
+ i.InitFlushQueues()
+
+ // pass new context to lifecycler, so that it doesn't stop automatically when Ingester's service context is done
+ err := i.lifecycler.StartAsync(context.Background())
+ if err != nil {
+ return err
+ }
+
+ err = i.lifecycler.AwaitRunning(ctx)
+ if err != nil {
+ return err
+ }
+
+ shutdownMarkerPath := path.Join(i.cfg.ShutdownMarkerPath, shutdownMarkerFilename)
+ shutdownMarker, err := shutdownMarkerExists(shutdownMarkerPath)
+ if err != nil {
+ return errors.Wrap(err, "failed to check ingester shutdown marker")
+ }
+
+ if shutdownMarker {
+ level.Info(i.logger).Log("msg", "detected existing shutdown marker, setting unregister and flush on shutdown", "path", shutdownMarkerPath)
+ i.setPrepareShutdown()
+ }
+
+ //err = i.recalculateOwnedStreams.StartAsync(ctx)
+ //if err != nil {
+ // return fmt.Errorf("can not start recalculate owned streams service: %w", err)
+ //}
+
+	err = i.lifecycler.AwaitRunning(ctx)
+	if err != nil {
+		return fmt.Errorf("can not ensure lifecycler is running: %w", err)
+	}
+
+ // start our loop
+ i.loopDone.Add(1)
+ go i.loop()
+ return nil
+}
+
+func (i *Ingester) running(ctx context.Context) error {
+ var serviceError error
+ select {
+ // wait until service is asked to stop
+ case <-ctx.Done():
+ // stop
+ case err := <-i.lifecyclerWatcher.Chan():
+ serviceError = fmt.Errorf("lifecycler failed: %w", err)
+ }
+
+ // close tailers before stopping our loop
+ //close(i.tailersQuit)
+ //for _, instance := range i.getInstances() {
+ // instance.closeTailers()
+ //}
+
+ close(i.loopQuit)
+ i.loopDone.Wait()
+ return serviceError
+}
+
+// stopping is called when Ingester transitions to Stopping state.
+//
+// At this point, loop no longer runs, but flushers are still running.
+func (i *Ingester) stopping(_ error) error {
+ i.stopIncomingRequests()
+ var errs util.MultiError
+ //errs.Add(i.wal.Stop())
+
+ //if i.flushOnShutdownSwitch.Get() {
+ // i.lifecycler.SetFlushOnShutdown(true)
+ //}
+ errs.Add(services.StopAndAwaitTerminated(context.Background(), i.lifecycler))
+
+ for _, flushQueue := range i.flushQueues {
+ flushQueue.Close()
+ }
+ i.flushQueuesDone.Wait()
+
+ //i.streamRateCalculator.Stop()
+
+ // In case the flag to terminate on shutdown is set or this instance is marked to release its resources,
+ // we need to mark the ingester service as "failed", so Loki will shut down entirely.
+ // The module manager logs the failure `modules.ErrStopProcess` in a special way.
+ if i.terminateOnShutdown && errs.Err() == nil {
+ i.removeShutdownMarkerFile()
+ return modules.ErrStopProcess
+ }
+ return errs.Err()
+}
+
+// stopIncomingRequests is called when ingester is stopping
+func (i *Ingester) stopIncomingRequests() {
+ i.shutdownMtx.Lock()
+ defer i.shutdownMtx.Unlock()
+
+ i.instancesMtx.Lock()
+ defer i.instancesMtx.Unlock()
+
+ i.readonly = true
+}
+
+// removeShutdownMarkerFile removes the shutdown marker if it exists. Any errors are logged.
+func (i *Ingester) removeShutdownMarkerFile() {
+ shutdownMarkerPath := path.Join(i.cfg.ShutdownMarkerPath, shutdownMarkerFilename)
+ exists, err := shutdownMarkerExists(shutdownMarkerPath)
+ if err != nil {
+ level.Error(i.logger).Log("msg", "error checking shutdown marker file exists", "err", err)
+ }
+ if exists {
+ err = removeShutdownMarker(shutdownMarkerPath)
+ if err != nil {
+ level.Error(i.logger).Log("msg", "error removing shutdown marker file", "err", err)
+ }
+ }
+}
+
+func (i *Ingester) loop() {
+ defer i.loopDone.Done()
+
+ // Delay the first flush operation by up to 0.8x the flush time period.
+ // This will ensure that multiple ingesters started at the same time do not
+ // flush at the same time. Flushing at the same time can cause concurrently
+ // writing the same chunk to object storage, which in AWS S3 leads to being
+ // rate limited.
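+	// For example, with the default 30s flush check period the initial delay
+	// is drawn uniformly from [0, 24s).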
+ jitter := time.Duration(rand.Int63n(int64(float64(i.cfg.FlushCheckPeriod.Nanoseconds()) * 0.8)))
+ initialDelay := time.NewTimer(jitter)
+ defer initialDelay.Stop()
+
+ level.Info(i.logger).Log("msg", "sleeping for initial delay before starting periodic flushing", "delay", jitter)
+
+ select {
+ case <-initialDelay.C:
+ // do nothing and continue with flush loop
+ case <-i.loopQuit:
+ // ingester stopped while waiting for initial delay
+ return
+ }
+
+ // Add +/- 20% of flush interval as jitter.
+ // The default flush check period is 30s so max jitter will be 6s.
+ j := i.cfg.FlushCheckPeriod / 5
+ flushTicker := util.NewTickerWithJitter(i.cfg.FlushCheckPeriod, j)
+ defer flushTicker.Stop()
+
+ for {
+ select {
+ case <-flushTicker.C:
+ i.doFlushTick()
+ case <-i.loopQuit:
+ return
+ }
+ }
+}
+
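+// doFlushTick retires the current flushCtx and installs a fresh one: writers
+// blocked on the retired context are released via newCtxAvailable, and the old
+// segment is enqueued for flushing only if it holds data.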
+func (i *Ingester) doFlushTick() {
+ i.flushCtx.lock.Lock()
+
+ //i.logger.Log("msg", "starting periodic flush")
+ // Stop new chunks being written while we swap destinations - we'll never unlock as this flushctx can no longer be used.
+ currentFlushCtx := i.flushCtx
+
+ // APIs become unblocked after resetting flushCtx
+ segmentWriter, err := wal.NewWalSegmentWriter()
+ if err != nil {
+ // TODO: handle this properly
+ panic(err)
+ }
+ i.flushCtx = &flushCtx{
+ lock: &sync.RWMutex{},
+ flushDone: make(chan struct{}),
+ newCtxAvailable: make(chan struct{}),
+ segmentWriter: segmentWriter,
+ }
+ close(currentFlushCtx.newCtxAvailable) // Broadcast to all waiters that they can now fetch a new flushCtx. Small chance of a race but if they re-fetch the old one, they'll just check again immediately.
+ // Flush the finished context in the background & then notify watching API requests
+ // TODO: use multiple flush queues if required
+ // Don't write empty segments if there is nothing to write.
+ if currentFlushCtx.segmentWriter.InputSize() > 0 {
+ i.flushQueues[0].Enqueue(currentFlushCtx)
+ }
+}
+
+// PrepareShutdown will handle the /ingester/prepare_shutdown endpoint.
+//
+// Internally, when triggered, this handler will configure the ingester service to release their resources whenever a SIGTERM is received.
+// Releasing resources meaning flushing data, deleting tokens, and removing itself from the ring.
+//
+// It also creates a file on disk which is used to re-apply the configuration if the
+// ingester crashes and restarts before being permanently shutdown.
+//
+// * `GET` shows the status of this configuration
+// * `POST` enables this configuration
+// * `DELETE` disables this configuration
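+//
+// Example (host and port are illustrative):
+//
+//	curl -X POST http://localhost:3100/ingester/prepare_shutdown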
+func (i *Ingester) PrepareShutdown(w http.ResponseWriter, r *http.Request) {
+ if i.cfg.ShutdownMarkerPath == "" {
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+ shutdownMarkerPath := path.Join(i.cfg.ShutdownMarkerPath, shutdownMarkerFilename)
+
+ switch r.Method {
+ case http.MethodGet:
+ exists, err := shutdownMarkerExists(shutdownMarkerPath)
+ if err != nil {
+ level.Error(i.logger).Log("msg", "unable to check for prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+
+ if exists {
+ util.WriteTextResponse(w, "set")
+ } else {
+ util.WriteTextResponse(w, "unset")
+ }
+ case http.MethodPost:
+ if err := createShutdownMarker(shutdownMarkerPath); err != nil {
+ level.Error(i.logger).Log("msg", "unable to create prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+
+ i.setPrepareShutdown()
+ level.Info(i.logger).Log("msg", "created prepare-shutdown marker file", "path", shutdownMarkerPath)
+
+ w.WriteHeader(http.StatusNoContent)
+ case http.MethodDelete:
+ if err := removeShutdownMarker(shutdownMarkerPath); err != nil {
+ level.Error(i.logger).Log("msg", "unable to remove prepare-shutdown marker file", "path", shutdownMarkerPath, "err", err)
+ w.WriteHeader(http.StatusInternalServerError)
+ return
+ }
+
+ i.unsetPrepareShutdown()
+ level.Info(i.logger).Log("msg", "removed prepare-shutdown marker file", "path", shutdownMarkerPath)
+
+ w.WriteHeader(http.StatusNoContent)
+ default:
+ w.WriteHeader(http.StatusMethodNotAllowed)
+ }
+}
+
+// setPrepareShutdown toggles ingester lifecycler config to prepare for shutdown
+func (i *Ingester) setPrepareShutdown() {
+ level.Info(i.logger).Log("msg", "preparing full ingester shutdown, resources will be released on SIGTERM")
+ i.lifecycler.SetFlushOnShutdown(true)
+ i.lifecycler.SetUnregisterOnShutdown(true)
+ i.terminateOnShutdown = true
+ i.metrics.shutdownMarker.Set(1)
+}
+
+func (i *Ingester) unsetPrepareShutdown() {
+ level.Info(i.logger).Log("msg", "undoing preparation for full ingester shutdown")
+ i.lifecycler.SetFlushOnShutdown(true)
+ i.lifecycler.SetUnregisterOnShutdown(i.cfg.LifecyclerConfig.UnregisterOnShutdown)
+ i.terminateOnShutdown = false
+ i.metrics.shutdownMarker.Set(0)
+}
+
+// createShutdownMarker writes a marker file to disk to indicate that an ingester is
+// going to be scaled down in the future. The presence of this file means that an ingester
+// should flush and upload all data when stopping.
+func createShutdownMarker(p string) error {
+ // Write the file, fsync it, then fsync the containing directory in order to guarantee
+ // it is persisted to disk. From https://man7.org/linux/man-pages/man2/fsync.2.html
+ //
+ // > Calling fsync() does not necessarily ensure that the entry in the
+ // > directory containing the file has also reached disk. For that an
+ // > explicit fsync() on a file descriptor for the directory is also
+ // > needed.
+ file, err := os.Create(p)
+ if err != nil {
+ return err
+ }
+
+ merr := multierror.New()
+ _, err = file.WriteString(time.Now().UTC().Format(time.RFC3339))
+ merr.Add(err)
+ merr.Add(file.Sync())
+ merr.Add(file.Close())
+
+ if err := merr.Err(); err != nil {
+ return err
+ }
+
+ dir, err := os.OpenFile(path.Dir(p), os.O_RDONLY, 0777)
+ if err != nil {
+ return err
+ }
+
+ merr.Add(dir.Sync())
+ merr.Add(dir.Close())
+ return merr.Err()
+}
+
+// removeShutdownMarker removes the shutdown marker file if it exists.
+func removeShutdownMarker(p string) error {
+ err := os.Remove(p)
+ if err != nil && !os.IsNotExist(err) {
+ return err
+ }
+
+ dir, err := os.OpenFile(path.Dir(p), os.O_RDONLY, 0777)
+ if err != nil {
+ return err
+ }
+
+ merr := multierror.New()
+ merr.Add(dir.Sync())
+ merr.Add(dir.Close())
+ return merr.Err()
+}
+
+// shutdownMarkerExists returns true if the shutdown marker file exists, false otherwise
+func shutdownMarkerExists(p string) (bool, error) {
+ s, err := os.Stat(p)
+ if err != nil && os.IsNotExist(err) {
+ return false, nil
+ }
+
+ if err != nil {
+ return false, err
+ }
+
+ return s.Mode().IsRegular(), nil
+}
+
+// ShutdownHandler handles a graceful shutdown of the ingester service and
+// termination of the Loki process.
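+//
+// Query parameters and their defaults are flush=true, delete_ring_tokens=false,
+// and terminate=true. Example (path, host, and port are illustrative):
+//
+//	curl -X POST "http://localhost:3100/ingester/shutdown?flush=true&delete_ring_tokens=true"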
+func (i *Ingester) ShutdownHandler(w http.ResponseWriter, r *http.Request) {
+ // Don't allow calling the shutdown handler multiple times
+ if i.State() != services.Running {
+ w.WriteHeader(http.StatusServiceUnavailable)
+ _, _ = w.Write([]byte("Ingester is stopping or already stopped."))
+ return
+ }
+ params := r.URL.Query()
+ doFlush := util.FlagFromValues(params, "flush", true)
+ doDeleteRingTokens := util.FlagFromValues(params, "delete_ring_tokens", false)
+ doTerminate := util.FlagFromValues(params, "terminate", true)
+ err := i.handleShutdown(doTerminate, doFlush, doDeleteRingTokens)
+
+ // Stopping the module will return the modules.ErrStopProcess error. This is
+ // needed so the Loki process is shut down completely.
+ if err == nil || err == modules.ErrStopProcess {
+ w.WriteHeader(http.StatusNoContent)
+ } else {
+ w.WriteHeader(http.StatusInternalServerError)
+ _, _ = w.Write([]byte(err.Error()))
+ }
+}
+
+// handleShutdown triggers the following operations:
+// - Change the state of ring to stop accepting writes.
+// - optional: Flush all the chunks.
+// - optional: Delete ring tokens file
+// - Unregister from KV store
+// - optional: Terminate process (handled by service manager in loki.go)
+func (i *Ingester) handleShutdown(terminate, flush, del bool) error {
+ i.lifecycler.SetFlushOnShutdown(flush)
+ i.lifecycler.SetClearTokensOnShutdown(del)
+ i.lifecycler.SetUnregisterOnShutdown(true)
+ i.terminateOnShutdown = terminate
+ return services.StopAndAwaitTerminated(context.Background(), i)
+}
+
+// Push implements logproto.Pusher.
+func (i *Ingester) Push(ctx context.Context, req *logproto.PushRequest) (*logproto.PushResponse, error) {
+ instanceID, err := tenant.TenantID(ctx)
+ if err != nil {
+ return nil, err
+ } else if i.readonly {
+ return nil, ErrReadOnly
+ }
+
+ // Set profiling tags
+ defer pprof.SetGoroutineLabels(ctx)
+ ctx = pprof.WithLabels(ctx, pprof.Labels("path", "write", "tenant", instanceID))
+ pprof.SetGoroutineLabels(ctx)
+
+ instance, err := i.GetOrCreateInstance(instanceID)
+ if err != nil {
+ return &logproto.PushResponse{}, err
+ }
+
+ // Fetch a flush context and try to acquire the RLock
+ // The only time the Write Lock is held is when this context is no longer usable and a new one is being created.
+ // In this case, we need to re-read i.flushCtx in order to fetch the new one as soon as it's available.
+	// The newCtxAvailable chan is closed as soon as the new one is available to avoid a busy loop.
+ currentFlushCtx := i.flushCtx
+ for !currentFlushCtx.lock.TryRLock() {
+ select {
+ case <-currentFlushCtx.newCtxAvailable:
+ case <-ctx.Done():
+ return &logproto.PushResponse{}, ctx.Err()
+ }
+ currentFlushCtx = i.flushCtx
+ }
+ err = instance.Push(ctx, req, currentFlushCtx)
+ currentFlushCtx.lock.RUnlock()
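+	// Block until the fetched flushCtx has been flushed (flushDone is closed by
+	// the flush path), so a successful Push is only returned once the segment
+	// containing these entries has been written out.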
+ select {
+ case <-ctx.Done():
+ return &logproto.PushResponse{}, ctx.Err()
+ case <-currentFlushCtx.flushDone:
+ return &logproto.PushResponse{}, err
+ }
+}
+
+// GetStreamRates returns a response containing all streams and their current rate
+// TODO: It might be nice for this to be human readable, eventually: Sort output and return labels, too?
+func (i *Ingester) GetStreamRates(ctx context.Context, _ *logproto.StreamRatesRequest) (*logproto.StreamRatesResponse, error) {
+ if sp := opentracing.SpanFromContext(ctx); sp != nil {
+ sp.LogKV("event", "ingester started to handle GetStreamRates")
+ defer sp.LogKV("event", "ingester finished handling GetStreamRates")
+ }
+
+ // Set profiling tags
+ defer pprof.SetGoroutineLabels(ctx)
+ ctx = pprof.WithLabels(ctx, pprof.Labels("path", "write"))
+ pprof.SetGoroutineLabels(ctx)
+
+ allRates := i.streamRateCalculator.Rates()
+ rates := make([]*logproto.StreamRate, len(allRates))
+ for idx := range allRates {
+ rates[idx] = &allRates[idx]
+ }
+	return &logproto.StreamRatesResponse{StreamRates: rates}, nil
+}
+
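+// GetOrCreateInstance returns the per-tenant instance, creating it on first use
+// with double-checked locking: a read-locked lookup first, then a re-check
+// under the write lock before constructing a new instance.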
+func (i *Ingester) GetOrCreateInstance(instanceID string) (*instance, error) { //nolint:revive
+ inst, ok := i.getInstanceByID(instanceID)
+ if ok {
+ return inst, nil
+ }
+
+ i.instancesMtx.Lock()
+ defer i.instancesMtx.Unlock()
+ inst, ok = i.instances[instanceID]
+ if !ok {
+ var err error
+ inst, err = newInstance(&i.cfg, i.periodicConfigs, instanceID, i.limiter, i.tenantConfigs, i.metrics, i.chunkFilter, i.pipelineWrapper, i.extractorWrapper, i.streamRateCalculator, i.writeLogManager, i.customStreamsTracker)
+ if err != nil {
+ return nil, err
+ }
+ i.instances[instanceID] = inst
+ activeTenantsStats.Set(int64(len(i.instances)))
+ }
+ return inst, nil
+}
+
+// asyncStoreMaxLookBack returns a max look back period only if active index type is one of async index stores like `boltdb-shipper` and `tsdb`.
+// max look back is limited to from time of async store config.
+// It considers previous periodic config's from time if that also has async index type.
+// This is to limit the lookback to only async stores where relevant.
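+//
+// For example, with a boltdb-shipper period starting 2024-06-01 followed by the
+// active tsdb period starting 2024-06-15, the lookback reaches back to
+// 2024-06-01 because the previous period also uses an async index store.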
+func (i *Ingester) asyncStoreMaxLookBack() time.Duration {
+ activePeriodicConfigIndex := config.ActivePeriodConfig(i.periodicConfigs)
+ activePeriodicConfig := i.periodicConfigs[activePeriodicConfigIndex]
+ if activePeriodicConfig.IndexType != types.BoltDBShipperType && activePeriodicConfig.IndexType != types.TSDBType {
+ return 0
+ }
+
+ startTime := activePeriodicConfig.From
+ if activePeriodicConfigIndex != 0 && (i.periodicConfigs[activePeriodicConfigIndex-1].IndexType == types.BoltDBShipperType ||
+ i.periodicConfigs[activePeriodicConfigIndex-1].IndexType == types.TSDBType) {
+ startTime = i.periodicConfigs[activePeriodicConfigIndex-1].From
+ }
+
+ maxLookBack := time.Since(startTime.Time.Time())
+ return maxLookBack
+}
+
+// GetChunkIDs is meant to be used only when using an async store like boltdb-shipper or tsdb.
+func (i *Ingester) GetChunkIDs(ctx context.Context, req *logproto.GetChunkIDsRequest) (*logproto.GetChunkIDsResponse, error) {
+ gcr, err := i.getChunkIDs(ctx, req)
+ err = server_util.ClientGrpcStatusAndError(err)
+ return gcr, err
+}
+
+// GetChunkIDs is meant to be used only when using an async store like boltdb-shipper or tsdb.
+func (i *Ingester) getChunkIDs(ctx context.Context, req *logproto.GetChunkIDsRequest) (*logproto.GetChunkIDsResponse, error) {
+ orgID, err := tenant.TenantID(ctx)
+ if err != nil {
+ return nil, err
+ }
+
+ // Set profiling tags
+ defer pprof.SetGoroutineLabels(ctx)
+ ctx = pprof.WithLabels(ctx, pprof.Labels("path", "read", "type", "chunkIDs", "tenant", orgID))
+ pprof.SetGoroutineLabels(ctx)
+
+ asyncStoreMaxLookBack := i.asyncStoreMaxLookBack()
+ if asyncStoreMaxLookBack == 0 {
+ return &logproto.GetChunkIDsResponse{}, nil
+ }
+
+ reqStart := req.Start
+ reqStart = adjustQueryStartTime(asyncStoreMaxLookBack, reqStart, time.Now())
+
+ // parse the request
+ start, end := util.RoundToMilliseconds(reqStart, req.End)
+ matchers, err := syntax.ParseMatchers(req.Matchers, true)
+ if err != nil {
+ return nil, err
+ }
+
+ // get chunk references
+ chunksGroups, _, err := i.store.GetChunks(ctx, orgID, start, end, chunk.NewPredicate(matchers, nil), nil)
+ if err != nil {
+ return nil, err
+ }
+
+ // todo (Callum) ingester should maybe store the whole schema config?
+ s := config.SchemaConfig{
+ Configs: i.periodicConfigs,
+ }
+
+ // build the response
+ resp := logproto.GetChunkIDsResponse{ChunkIDs: []string{}}
+ for _, chunks := range chunksGroups {
+ for _, chk := range chunks {
+ resp.ChunkIDs = append(resp.ChunkIDs, s.ExternalKey(chk.ChunkRef))
+ }
+ }
+
+ return &resp, nil
+}
+
+// Watch implements grpc_health_v1.HealthCheck.
+func (*Ingester) Watch(*grpc_health_v1.HealthCheckRequest, grpc_health_v1.Health_WatchServer) error {
+ return nil
+}
+
+// CheckReady is used to indicate to k8s when the ingesters are ready for
+// the addition or removal of another ingester. The HTTP readiness handler
+// built on it returns 204 when the ingester is ready, 500 otherwise.
+func (i *Ingester) CheckReady(ctx context.Context) error {
+ if s := i.State(); s != services.Running && s != services.Stopping {
+ return fmt.Errorf("ingester not ready: %v", s)
+ }
+ return i.lifecycler.CheckReady(ctx)
+}
+
+func (i *Ingester) getInstanceByID(id string) (*instance, bool) {
+ i.instancesMtx.RLock()
+ defer i.instancesMtx.RUnlock()
+
+ inst, ok := i.instances[id]
+ return inst, ok
+}
+
+func (i *Ingester) getInstances() []*instance {
+ i.instancesMtx.RLock()
+ defer i.instancesMtx.RUnlock()
+
+ instances := make([]*instance, 0, len(i.instances))
+ for _, instance := range i.instances {
+ instances = append(instances, instance)
+ }
+ return instances
+}
+
+//// Tail logs matching given query
+//func (i *Ingester) Tail(req *logproto.TailRequest, queryServer logproto.Querier_TailServer) error {
+// err := i.tail(req, queryServer)
+// err = server_util.ClientGrpcStatusAndError(err)
+// return err
+//}
+//func (i *Ingester) tail(req *logproto.TailRequest, queryServer logproto.Querier_TailServer) error {
+// select {
+// case <-i.tailersQuit:
+// return errors.New("Ingester is stopping")
+// default:
+// }
+//
+// if req.Plan == nil {
+// parsed, err := syntax.ParseLogSelector(req.Query, true)
+// if err != nil {
+// return err
+// }
+// req.Plan = &plan.QueryPlan{
+// AST: parsed,
+// }
+// }
+//
+// instanceID, err := tenant.TenantID(queryServer.Context())
+// if err != nil {
+// return err
+// }
+//
+// instance, err := i.GetOrCreateInstance(instanceID)
+// if err != nil {
+// return err
+// }
+//
+// expr, ok := req.Plan.AST.(syntax.LogSelectorExpr)
+// if !ok {
+// return fmt.Errorf("unsupported query expression: want (LogSelectorExpr), got (%T)", req.Plan.AST)
+// }
+//
+// tailer, err := newTailer(instanceID, expr, queryServer, i.cfg.MaxDroppedStreams)
+// if err != nil {
+// return err
+// }
+//
+// if err := instance.addNewTailer(queryServer.Context(), tailer); err != nil {
+// return err
+// }
+// tailer.loop()
+// return nil
+//}
+//
+//// TailersCount returns count of active tail requests from a user
+//func (i *Ingester) TailersCount(ctx context.Context, _ *logproto.TailersCountRequest) (*logproto.TailersCountResponse, error) {
+// tcr, err := i.tailersCount(ctx)
+// err = server_util.ClientGrpcStatusAndError(err)
+// return tcr, err
+//}
+//
+//func (i *Ingester) tailersCount(ctx context.Context) (*logproto.TailersCountResponse, error) {
+// instanceID, err := tenant.TenantID(ctx)
+// if err != nil {
+// return nil, err
+// }
+//
+// resp := logproto.TailersCountResponse{}
+//
+// instance, ok := i.getInstanceByID(instanceID)
+// if ok {
+// resp.Count = instance.openTailersCount()
+// }
+//
+// return &resp, nil
+//}
+
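+// adjustQueryStartTime clamps the query start time to now-maxLookBackPeriod.
+// For example, with a 24h lookback, a start time 30h in the past is moved up to
+// now-24h, while a start already inside the window is returned unchanged.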
+func adjustQueryStartTime(maxLookBackPeriod time.Duration, start, now time.Time) time.Time {
+ if maxLookBackPeriod > 0 {
+ oldestStartTime := now.Add(-maxLookBackPeriod)
+ if oldestStartTime.After(start) {
+ return oldestStartTime
+ }
+ }
+ return start
+}
+
+func (i *Ingester) GetDetectedFields(_ context.Context, r *logproto.DetectedFieldsRequest) (*logproto.DetectedFieldsResponse, error) {
+ return &logproto.DetectedFieldsResponse{
+ Fields: []*logproto.DetectedField{
+ {
+ Label: "foo",
+ Type: logproto.DetectedFieldString,
+ Cardinality: 1,
+ },
+ },
+ FieldLimit: r.GetFieldLimit(),
+ }, nil
+}
diff --git a/pkg/ingester-rf1/instance.go b/pkg/ingester-rf1/instance.go
new file mode 100644
index 0000000000000..e5c54549713af
--- /dev/null
+++ b/pkg/ingester-rf1/instance.go
@@ -0,0 +1,299 @@
+package ingesterrf1
+
+import (
+ "context"
+ "fmt"
+ "math"
+ "net/http"
+ "sync"
+
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/httpgrpc"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/v3/pkg/analytics"
+ "github.com/grafana/loki/v3/pkg/chunkenc"
+ "github.com/grafana/loki/v3/pkg/distributor/writefailures"
+ "github.com/grafana/loki/v3/pkg/ingester/index"
+ "github.com/grafana/loki/v3/pkg/loghttp/push"
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql/log"
+ "github.com/grafana/loki/v3/pkg/logql/syntax"
+ "github.com/grafana/loki/v3/pkg/runtime"
+ "github.com/grafana/loki/v3/pkg/storage/chunk"
+ "github.com/grafana/loki/v3/pkg/storage/config"
+ "github.com/grafana/loki/v3/pkg/util/constants"
+ util_log "github.com/grafana/loki/v3/pkg/util/log"
+ "github.com/grafana/loki/v3/pkg/validation"
+)
+
+var (
+ memoryStreams = promauto.NewGaugeVec(prometheus.GaugeOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_memory_streams",
+ Help: "The total number of streams in memory per tenant.",
+ }, []string{"tenant"})
+ memoryStreamsLabelsBytes = promauto.NewGauge(prometheus.GaugeOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_memory_streams_labels_bytes",
+ Help: "Total bytes of labels of the streams in memory.",
+ })
+ streamsCreatedTotal = promauto.NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_streams_created_total",
+ Help: "The total number of streams created per tenant.",
+ }, []string{"tenant"})
+ streamsRemovedTotal = promauto.NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_streams_removed_total",
+ Help: "The total number of streams removed per tenant.",
+ }, []string{"tenant"})
+
+ streamsCountStats = analytics.NewInt("ingester_rf1_streams_count")
+)
+
+type instance struct {
+ cfg *Config
+
+ buf []byte // buffer used to compute fps.
+ streams *streamsMap
+
+ index *index.Multi
+	mapper *FpMapper // use of the mapper no longer needs a mutex because reading from streams is lock-free
+
+ instanceID string
+
+ streamsCreatedTotal prometheus.Counter
+ streamsRemovedTotal prometheus.Counter
+
+ //tailers map[uint32]*tailer
+ tailerMtx sync.RWMutex
+
+ limiter *Limiter
+ streamCountLimiter *streamCountLimiter
+ ownedStreamsSvc *ownedStreamService
+
+ configs *runtime.TenantConfigs
+
+ metrics *ingesterMetrics
+
+ chunkFilter chunk.RequestChunkFilterer
+ pipelineWrapper log.PipelineWrapper
+ extractorWrapper log.SampleExtractorWrapper
+ streamRateCalculator *StreamRateCalculator
+
+ writeFailures *writefailures.Manager
+
+ schemaconfig *config.SchemaConfig
+
+ customStreamsTracker push.UsageTracker
+}
+
+func (i *instance) Push(ctx context.Context, req *logproto.PushRequest, flushCtx *flushCtx) error {
+ rateLimitWholeStream := i.limiter.limits.ShardStreams(i.instanceID).Enabled
+
+ var appendErr error
+ for _, reqStream := range req.Streams {
+ s, _, err := i.streams.LoadOrStoreNew(reqStream.Labels,
+ func() (*stream, error) {
+ s, err := i.createStream(ctx, reqStream)
+ return s, err
+ },
+ func(s *stream) error {
+ return nil
+ },
+ )
+ if err != nil {
+ appendErr = err
+ continue
+ }
+
+ _, appendErr = s.Push(ctx, reqStream.Entries, rateLimitWholeStream, i.customStreamsTracker, flushCtx)
+ }
+ return appendErr
+}
+
+func newInstance(
+ cfg *Config,
+ periodConfigs []config.PeriodConfig,
+ instanceID string,
+ limiter *Limiter,
+ configs *runtime.TenantConfigs,
+ metrics *ingesterMetrics,
+ chunkFilter chunk.RequestChunkFilterer,
+ pipelineWrapper log.PipelineWrapper,
+ extractorWrapper log.SampleExtractorWrapper,
+ streamRateCalculator *StreamRateCalculator,
+ writeFailures *writefailures.Manager,
+ customStreamsTracker push.UsageTracker,
+) (*instance, error) {
+	level.Debug(util_log.Logger).Log("msg", "creating new instance", "tenant", instanceID)
+ invertedIndex, err := index.NewMultiInvertedIndex(periodConfigs, uint32(cfg.IndexShards))
+ if err != nil {
+ return nil, err
+ }
+ streams := newStreamsMap()
+ ownedStreamsSvc := newOwnedStreamService(instanceID, limiter)
+ c := config.SchemaConfig{Configs: periodConfigs}
+ i := &instance{
+ cfg: cfg,
+ streams: streams,
+ buf: make([]byte, 0, 1024),
+ index: invertedIndex,
+ instanceID: instanceID,
+ //
+ streamsCreatedTotal: streamsCreatedTotal.WithLabelValues(instanceID),
+ streamsRemovedTotal: streamsRemovedTotal.WithLabelValues(instanceID),
+ //
+ //tailers: map[uint32]*tailer{},
+ limiter: limiter,
+ streamCountLimiter: newStreamCountLimiter(instanceID, streams.Len, limiter, ownedStreamsSvc),
+ ownedStreamsSvc: ownedStreamsSvc,
+ configs: configs,
+ metrics: metrics,
+ chunkFilter: chunkFilter,
+ pipelineWrapper: pipelineWrapper,
+ extractorWrapper: extractorWrapper,
+
+ streamRateCalculator: streamRateCalculator,
+
+ writeFailures: writeFailures,
+ schemaconfig: &c,
+
+ customStreamsTracker: customStreamsTracker,
+ }
+ i.mapper = NewFPMapper(i.getLabelsFromFingerprint)
+
+ return i, nil
+}
+
+func (i *instance) createStream(ctx context.Context, pushReqStream logproto.Stream) (*stream, error) {
+ // record is only nil when replaying WAL. We don't want to drop data when replaying a WAL after
+ // reducing the stream limits, for instance.
+ var err error
+
+ labels, err := syntax.ParseLabels(pushReqStream.Labels)
+ if err != nil {
+ if i.configs.LogStreamCreation(i.instanceID) {
+ level.Debug(util_log.Logger).Log(
+ "msg", "failed to create stream, failed to parse labels",
+ "org_id", i.instanceID,
+ "err", err,
+ "stream", pushReqStream.Labels,
+ )
+ }
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ }
+
+ if err != nil {
+ return i.onStreamCreationError(ctx, pushReqStream, err, labels)
+ }
+
+ fp := i.getHashForLabels(labels)
+
+ sortedLabels := i.index.Add(logproto.FromLabelsToLabelAdapters(labels), fp)
+
+ chunkfmt, headfmt, err := i.chunkFormatAt(minTs(&pushReqStream))
+ if err != nil {
+ return nil, fmt.Errorf("failed to create stream: %w", err)
+ }
+
+ s := newStream(chunkfmt, headfmt, i.cfg, i.limiter, i.instanceID, fp, sortedLabels, i.limiter.UnorderedWrites(i.instanceID) /*i.streamRateCalculator,*/, i.metrics, i.writeFailures)
+
+ i.onStreamCreated(s)
+
+ return s, nil
+}
+
+// minTs is a helper to return minimum Unix timestamp (as `model.Time`)
+// across all the entries in a given `stream`.
+func minTs(stream *logproto.Stream) model.Time {
+	// NOTE: We choose the `min` timestamp because the chunk is written once and
+	// then added to index buckets that may span different days. It is better for
+	// newer (say v13) indices to reference older (say v12) compatible chunks than vice versa.
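+	// For example, entries at t=9s, 3s, and 5s yield a minTs of 3s.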
+
+ streamMinTs := int64(math.MaxInt64)
+ for _, entry := range stream.Entries {
+ ts := entry.Timestamp.UnixNano()
+ if streamMinTs > ts {
+ streamMinTs = ts
+ }
+ }
+ return model.TimeFromUnixNano(streamMinTs)
+}
+
+// chunkFormatAt returns chunk formats to use at given period of time.
+func (i *instance) chunkFormatAt(at model.Time) (byte, chunkenc.HeadBlockFmt, error) {
+	// NOTE: We choose chunk formats for a stream based on its entries' timestamps.
+	// The rationale is that a single (ingester) instance can run across multiple
+	// schema periods, so choosing the correct periodConfig at stream creation time
+	// is more accurate than choosing it once when the instance starts.
+
+ periodConfig, err := i.schemaconfig.SchemaForTime(at)
+ if err != nil {
+ return 0, 0, err
+ }
+
+ chunkFormat, headblock, err := periodConfig.ChunkFormat()
+ if err != nil {
+ return 0, 0, err
+ }
+
+ return chunkFormat, headblock, nil
+}
+
+func (i *instance) getHashForLabels(ls labels.Labels) model.Fingerprint {
+ var fp uint64
+ fp, i.buf = ls.HashWithoutLabels(i.buf, []string(nil)...)
+ return i.mapper.MapFP(model.Fingerprint(fp), ls)
+}
+
+// Return labels associated with given fingerprint. Used by fingerprint mapper.
+func (i *instance) getLabelsFromFingerprint(fp model.Fingerprint) labels.Labels {
+ s, ok := i.streams.LoadByFP(fp)
+ if !ok {
+ return nil
+ }
+ return s.labels
+}
+
+func (i *instance) onStreamCreationError(ctx context.Context, pushReqStream logproto.Stream, err error, labels labels.Labels) (*stream, error) {
+ if i.configs.LogStreamCreation(i.instanceID) {
+ level.Debug(util_log.Logger).Log(
+ "msg", "failed to create stream, exceeded limit",
+ "org_id", i.instanceID,
+ "err", err,
+ "stream", pushReqStream.Labels,
+ )
+ }
+
+ validation.DiscardedSamples.WithLabelValues(validation.StreamLimit, i.instanceID).Add(float64(len(pushReqStream.Entries)))
+ bytes := 0
+ for _, e := range pushReqStream.Entries {
+ bytes += len(e.Line)
+ }
+ validation.DiscardedBytes.WithLabelValues(validation.StreamLimit, i.instanceID).Add(float64(bytes))
+ if i.customStreamsTracker != nil {
+ i.customStreamsTracker.DiscardedBytesAdd(ctx, i.instanceID, validation.StreamLimit, labels, float64(bytes))
+ }
+ return nil, httpgrpc.Errorf(http.StatusTooManyRequests, validation.StreamLimitErrorMsg, labels, i.instanceID)
+}
+
+func (i *instance) onStreamCreated(s *stream) {
+ memoryStreams.WithLabelValues(i.instanceID).Inc()
+ memoryStreamsLabelsBytes.Add(float64(len(s.labels.String())))
+ i.streamsCreatedTotal.Inc()
+ //i.addTailersToNewStream(s)
+ streamsCountStats.Add(1)
+ i.ownedStreamsSvc.incOwnedStreamCount()
+ if i.configs.LogStreamCreation(i.instanceID) {
+ level.Debug(util_log.Logger).Log(
+ "msg", "successfully created stream",
+ "org_id", i.instanceID,
+ "stream", s.labels.String(),
+ )
+ }
+}
diff --git a/pkg/ingester-rf1/limiter.go b/pkg/ingester-rf1/limiter.go
new file mode 100644
index 0000000000000..1957ed54d9145
--- /dev/null
+++ b/pkg/ingester-rf1/limiter.go
@@ -0,0 +1,226 @@
+package ingesterrf1
+
+import (
+ "fmt"
+ "math"
+ "sync"
+ "time"
+
+ "golang.org/x/time/rate"
+
+ "github.com/grafana/loki/v3/pkg/distributor/shardstreams"
+ "github.com/grafana/loki/v3/pkg/validation"
+)
+
+const (
+ errMaxStreamsPerUserLimitExceeded = "tenant '%v' per-user streams limit exceeded, streams: %d exceeds calculated limit: %d (local limit: %d, global limit: %d, global/ingesters: %d)"
+)
+
+// RingCount is the interface exposed by a ring implementation which allows
+// to count members
+type RingCount interface {
+ HealthyInstancesCount() int
+}
+
+type Limits interface {
+ UnorderedWrites(userID string) bool
+ UseOwnedStreamCount(userID string) bool
+ MaxLocalStreamsPerUser(userID string) int
+ MaxGlobalStreamsPerUser(userID string) int
+ PerStreamRateLimit(userID string) validation.RateLimit
+ ShardStreams(userID string) shardstreams.Config
+}
+
+// Limiter implements primitives to get the maximum number of streams
+// an ingester can handle for a specific tenant
+type Limiter struct {
+ limits Limits
+ ring RingCount
+ replicationFactor int
+ metrics *ingesterMetrics
+
+ mtx sync.RWMutex
+ disabled bool
+}
+
+func (l *Limiter) DisableForWALReplay() {
+ l.mtx.Lock()
+ defer l.mtx.Unlock()
+ l.disabled = true
+ l.metrics.limiterEnabled.Set(0)
+}
+
+func (l *Limiter) Enable() {
+ l.mtx.Lock()
+ defer l.mtx.Unlock()
+ l.disabled = false
+ l.metrics.limiterEnabled.Set(1)
+}
+
+// NewLimiter makes a new limiter
+func NewLimiter(limits Limits, metrics *ingesterMetrics, ring RingCount, replicationFactor int) *Limiter {
+ return &Limiter{
+ limits: limits,
+ ring: ring,
+ replicationFactor: replicationFactor,
+ metrics: metrics,
+ }
+}
+
+func (l *Limiter) UnorderedWrites(userID string) bool {
+ // WAL replay should not discard previously ack'd writes,
+ // so allow out of order writes while the limiter is disabled.
+ // This allows replaying unordered WALs into ordered configurations.
+ if l.disabled {
+ return true
+ }
+ return l.limits.UnorderedWrites(userID)
+}
+
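+// GetStreamCountLimit returns the effective per-tenant stream limit together
+// with the inputs used to derive it. For example, localLimit=500 and
+// globalLimit=900 with 6 ingesters and RF=3 gives adjustedGlobalLimit=450 and
+// calculatedLimit=min(500, 450)=450.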
+func (l *Limiter) GetStreamCountLimit(tenantID string) (calculatedLimit, localLimit, globalLimit, adjustedGlobalLimit int) {
+ // Start by setting the local limit either from override or default
+ localLimit = l.limits.MaxLocalStreamsPerUser(tenantID)
+
+ // We can assume that streams are evenly distributed across ingesters
+ // so we do convert the global limit into a local limit
+ globalLimit = l.limits.MaxGlobalStreamsPerUser(tenantID)
+ adjustedGlobalLimit = l.convertGlobalToLocalLimit(globalLimit)
+
+ // Set the calculated limit to the lesser of the local limit or the new calculated global limit
+ calculatedLimit = l.minNonZero(localLimit, adjustedGlobalLimit)
+
+ // If both the local and global limits are disabled, we just
+ // use the largest int value
+ if calculatedLimit == 0 {
+ calculatedLimit = math.MaxInt32
+ }
+ return
+}
+
+func (l *Limiter) minNonZero(first, second int) int {
+ if first == 0 || (second != 0 && first > second) {
+ return second
+ }
+
+ return first
+}
+
+func (l *Limiter) convertGlobalToLocalLimit(globalLimit int) int {
+ if globalLimit == 0 {
+ return 0
+ }
+ // todo: change to healthyInstancesInZoneCount() once
+ // Given we don't need a super accurate count (ie. when the ingesters
+ // topology changes) and we prefer to always be in favor of the tenant,
+ // we can use a per-ingester limit equal to:
+ // (global limit / number of ingesters) * replication factor
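+	// For example, globalLimit=900 across 6 healthy ingesters with a replication
+	// factor of 3 yields (900/6)*3 = 450 streams per ingester.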
+ numIngesters := l.ring.HealthyInstancesCount()
+
+	// This may happen because the number of ingesters is updated asynchronously.
+	// If it happens, we just temporarily ignore the global limit.
+ if numIngesters > 0 {
+ return int((float64(globalLimit) / float64(numIngesters)) * float64(l.replicationFactor))
+ }
+
+ return 0
+}
+
+type supplier[T any] func() T
+
+type streamCountLimiter struct {
+ tenantID string
+ limiter *Limiter
+ defaultStreamCountSupplier supplier[int]
+ ownedStreamSvc *ownedStreamService
+}
+
+var noopFixedLimitSupplier = func() int {
+ return 0
+}
+
+func newStreamCountLimiter(tenantID string, defaultStreamCountSupplier supplier[int], limiter *Limiter, service *ownedStreamService) *streamCountLimiter {
+ return &streamCountLimiter{
+ tenantID: tenantID,
+ limiter: limiter,
+ defaultStreamCountSupplier: defaultStreamCountSupplier,
+ ownedStreamSvc: service,
+ }
+}
+
+func (l *streamCountLimiter) AssertNewStreamAllowed(tenantID string) error {
+ streamCountSupplier, fixedLimitSupplier := l.getSuppliers(tenantID)
+ calculatedLimit, localLimit, globalLimit, adjustedGlobalLimit := l.getCurrentLimit(tenantID, fixedLimitSupplier)
+ actualStreamsCount := streamCountSupplier()
+ if actualStreamsCount < calculatedLimit {
+ return nil
+ }
+
+ return fmt.Errorf(errMaxStreamsPerUserLimitExceeded, tenantID, actualStreamsCount, calculatedLimit, localLimit, globalLimit, adjustedGlobalLimit)
+}
+
+func (l *streamCountLimiter) getCurrentLimit(tenantID string, fixedLimitSupplier supplier[int]) (calculatedLimit, localLimit, globalLimit, adjustedGlobalLimit int) {
+ calculatedLimit, localLimit, globalLimit, adjustedGlobalLimit = l.limiter.GetStreamCountLimit(tenantID)
+ fixedLimit := fixedLimitSupplier()
+ if fixedLimit > calculatedLimit {
+ calculatedLimit = fixedLimit
+ }
+ return
+}
+
+func (l *streamCountLimiter) getSuppliers(tenant string) (streamCountSupplier, fixedLimitSupplier supplier[int]) {
+ if l.limiter.limits.UseOwnedStreamCount(tenant) {
+ return l.ownedStreamSvc.getOwnedStreamCount, l.ownedStreamSvc.getFixedLimit
+ }
+ return l.defaultStreamCountSupplier, noopFixedLimitSupplier
+}
+
+type RateLimiterStrategy interface {
+ RateLimit(tenant string) validation.RateLimit
+}
+
+func (l *Limiter) RateLimit(tenant string) validation.RateLimit {
+ if l.disabled {
+ return validation.Unlimited
+ }
+
+ return l.limits.PerStreamRateLimit(tenant)
+}
+
+type StreamRateLimiter struct {
+ recheckPeriod time.Duration
+ recheckAt time.Time
+ strategy RateLimiterStrategy
+ tenant string
+ lim *rate.Limiter
+}
+
+func NewStreamRateLimiter(strategy RateLimiterStrategy, tenant string, recheckPeriod time.Duration) *StreamRateLimiter {
+ rl := strategy.RateLimit(tenant)
+ return &StreamRateLimiter{
+ recheckPeriod: recheckPeriod,
+ strategy: strategy,
+ tenant: tenant,
+ lim: rate.NewLimiter(rl.Limit, rl.Burst),
+ }
+}
+
+func (l *StreamRateLimiter) AllowN(at time.Time, n int) bool {
+ now := time.Now()
+ if now.After(l.recheckAt) {
+ l.recheckAt = now.Add(l.recheckPeriod)
+
+ oldLim := l.lim.Limit()
+ oldBurst := l.lim.Burst()
+
+ next := l.strategy.RateLimit(l.tenant)
+
+ if oldLim != next.Limit || oldBurst != next.Burst {
+ // Edge case: rate.Inf doesn't advance nicely when reconfigured.
+ // To simplify, we just create a new limiter after reconfiguration rather
+ // than alter the existing one.
+ l.lim = rate.NewLimiter(next.Limit, next.Burst)
+ }
+ }
+
+ return l.lim.AllowN(at, n)
+}
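+
+// Usage sketch (illustrative): construct one limiter per stream and consult it
+// on every append, e.g.
+//
+//	rl := NewStreamRateLimiter(strategy, "tenant-a", 10*time.Second)
+//	if !rl.AllowN(time.Now(), lineBytes) { /* enforce the per-stream limit */ }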
diff --git a/pkg/ingester-rf1/mapper.go b/pkg/ingester-rf1/mapper.go
new file mode 100644
index 0000000000000..02d7a4c753e98
--- /dev/null
+++ b/pkg/ingester-rf1/mapper.go
@@ -0,0 +1,152 @@
+package ingesterrf1
+
+import (
+ "fmt"
+ "sort"
+ "strings"
+ "sync"
+
+ "github.com/go-kit/log/level"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
+ "go.uber.org/atomic"
+
+ util_log "github.com/grafana/loki/v3/pkg/util/log"
+)
+
+const maxMappedFP = 1 << 20 // About 1M fingerprints reserved for mapping.
+
+var separatorString = string([]byte{model.SeparatorByte})
+
+// FpMapper is used to map fingerprints in order to work around fingerprint
+// collisions.
+type FpMapper struct {
+ // highestMappedFP has to be aligned for atomic operations.
+ highestMappedFP atomic.Uint64
+
+ mtx sync.RWMutex // Protects mappings.
+ // maps original fingerprints to a map of string representations of
+ // metrics to the truly unique fingerprint.
+ mappings map[model.Fingerprint]map[string]model.Fingerprint
+
+ // Returns existing labels for given fingerprint, if any.
+ // Equality check relies on labels.Labels being sorted.
+ fpToLabels func(fingerprint model.Fingerprint) labels.Labels
+}
+
+// NewFPMapper returns an fpMapper ready to use.
+func NewFPMapper(fpToLabels func(fingerprint model.Fingerprint) labels.Labels) *FpMapper {
+ if fpToLabels == nil {
+ panic("nil fpToLabels")
+ }
+
+ return &FpMapper{
+ fpToLabels: fpToLabels,
+ mappings: map[model.Fingerprint]map[string]model.Fingerprint{},
+ }
+}
+
+// MapFP takes a raw fingerprint (as returned by Metrics.FastFingerprint) and
+// returns a truly unique fingerprint. The caller must have locked the raw
+// fingerprint.
+func (m *FpMapper) MapFP(fp model.Fingerprint, metric labels.Labels) model.Fingerprint {
+ // First check if we are in the reserved FP space, in which case this is
+ // automatically a collision that has to be mapped.
+ if fp <= maxMappedFP {
+ return m.maybeAddMapping(fp, metric)
+ }
+
+ // Then check the most likely case: This fp belongs to a series that is
+ // already in memory.
+ s := m.fpToLabels(fp)
+ if s != nil {
+ // FP exists in memory, but is it for the same metric?
+ if labels.Equal(metric, s) {
+ // Yupp. We are done.
+ return fp
+ }
+ // Collision detected!
+ return m.maybeAddMapping(fp, metric)
+ }
+ // Metric is not in memory. Before doing the expensive archive lookup,
+ // check if we have a mapping for this metric in place already.
+ m.mtx.RLock()
+ mappedFPs, fpAlreadyMapped := m.mappings[fp]
+ m.mtx.RUnlock()
+ if fpAlreadyMapped {
+ // We indeed have mapped fp historically.
+ ms := metricToUniqueString(metric)
+ // fp is locked by the caller, so no further locking of
+ // 'collisions' required (it is specific to fp).
+ mappedFP, ok := mappedFPs[ms]
+ if ok {
+ // Historical mapping found, return the mapped FP.
+ return mappedFP
+ }
+ }
+ return fp
+}
+
+// maybeAddMapping is only used internally. It takes a detected collision and
+// adds it to the collisions map if not yet there. In any case, it returns the
+// truly unique fingerprint for the colliding metric.
+func (m *FpMapper) maybeAddMapping(fp model.Fingerprint, collidingMetric labels.Labels) model.Fingerprint {
+ ms := metricToUniqueString(collidingMetric)
+ m.mtx.RLock()
+ mappedFPs, ok := m.mappings[fp]
+ m.mtx.RUnlock()
+ if ok {
+ // fp is locked by the caller, so no further locking required.
+ mappedFP, ok := mappedFPs[ms]
+ if ok {
+ return mappedFP // Existing mapping.
+ }
+ // A new mapping has to be created.
+ mappedFP = m.nextMappedFP()
+ mappedFPs[ms] = mappedFP
+ level.Info(util_log.Logger).Log(
+ "msg", "fingerprint collision detected, mapping to new fingerprint",
+ "old_fp", fp,
+ "new_fp", mappedFP,
+ "metric", ms,
+ )
+ return mappedFP
+ }
+ // This is the first collision for fp.
+ mappedFP := m.nextMappedFP()
+ mappedFPs = map[string]model.Fingerprint{ms: mappedFP}
+ m.mtx.Lock()
+ m.mappings[fp] = mappedFPs
+ m.mtx.Unlock()
+ level.Info(util_log.Logger).Log(
+ "msg", "fingerprint collision detected, mapping to new fingerprint",
+ "old_fp", fp,
+ "new_fp", mappedFP,
+ "metric", collidingMetric,
+ )
+ return mappedFP
+}
+
+func (m *FpMapper) nextMappedFP() model.Fingerprint {
+ mappedFP := model.Fingerprint(m.highestMappedFP.Inc())
+ if mappedFP > maxMappedFP {
+ panic(fmt.Errorf("more than %v fingerprints mapped in collision detection", maxMappedFP))
+ }
+ return mappedFP
+}
+
+// metricToUniqueString turns a metric into a string in a reproducible and
+// unique way, i.e. the same metric will always create the same string, and
+// different metrics will always create different strings. In a way, it is the
+// "ideal" fingerprint function, only that it is more expensive than the
+// FastFingerprint function, and its result is not suitable as a key for maps
+// and indexes as it might become really large, causing a lot of hashing effort
+// in maps and a lot of storage overhead in indexes.
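+//
+// For example, {a="1", b="2"} yields "a<sep>1<sep>b<sep>2", where <sep> stands
+// for the unprintable separatorString byte.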
+func metricToUniqueString(m labels.Labels) string {
+ parts := make([]string, 0, len(m))
+ for _, pair := range m {
+ parts = append(parts, pair.Name+separatorString+pair.Value)
+ }
+ sort.Strings(parts)
+ return strings.Join(parts, separatorString)
+}
diff --git a/pkg/ingester-rf1/metrics.go b/pkg/ingester-rf1/metrics.go
new file mode 100644
index 0000000000000..93291a25ce9ec
--- /dev/null
+++ b/pkg/ingester-rf1/metrics.go
@@ -0,0 +1,297 @@
+package ingesterrf1
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+
+ "github.com/grafana/loki/v3/pkg/analytics"
+ "github.com/grafana/loki/v3/pkg/util/constants"
+ "github.com/grafana/loki/v3/pkg/validation"
+)
+
+type ingesterMetrics struct {
+ checkpointDeleteFail prometheus.Counter
+ checkpointDeleteTotal prometheus.Counter
+ checkpointCreationFail prometheus.Counter
+ checkpointCreationTotal prometheus.Counter
+ checkpointDuration prometheus.Summary
+ checkpointLoggedBytesTotal prometheus.Counter
+
+ walDiskFullFailures prometheus.Counter
+ walReplayActive prometheus.Gauge
+ walReplayDuration prometheus.Gauge
+ walReplaySamplesDropped *prometheus.CounterVec
+ walReplayBytesDropped *prometheus.CounterVec
+ walCorruptionsTotal *prometheus.CounterVec
+ walLoggedBytesTotal prometheus.Counter
+ walRecordsLogged prometheus.Counter
+
+ recoveredStreamsTotal prometheus.Counter
+ recoveredChunksTotal prometheus.Counter
+ recoveredEntriesTotal prometheus.Counter
+ duplicateEntriesTotal prometheus.Counter
+ recoveredBytesTotal prometheus.Counter
+ recoveryBytesInUse prometheus.Gauge
+ recoveryIsFlushing prometheus.Gauge
+
+ limiterEnabled prometheus.Gauge
+
+ autoForgetUnhealthyIngestersTotal prometheus.Counter
+
+ chunkUtilization prometheus.Histogram
+ memoryChunks prometheus.Gauge
+ chunkEntries prometheus.Histogram
+ chunkSize prometheus.Histogram
+ chunkCompressionRatio prometheus.Histogram
+ chunksPerTenant *prometheus.CounterVec
+ chunkSizePerTenant *prometheus.CounterVec
+ chunkAge prometheus.Histogram
+ chunkEncodeTime prometheus.Histogram
+ chunksFlushFailures prometheus.Counter
+ chunksFlushedPerReason *prometheus.CounterVec
+ chunkLifespan prometheus.Histogram
+ flushedChunksStats *analytics.Counter
+ flushedChunksBytesStats *analytics.Statistics
+ flushedChunksLinesStats *analytics.Statistics
+ flushedChunksAgeStats *analytics.Statistics
+ flushedChunksLifespanStats *analytics.Statistics
+ flushedChunksUtilizationStats *analytics.Statistics
+
+ chunksCreatedTotal prometheus.Counter
+ samplesPerChunk prometheus.Histogram
+ blocksPerChunk prometheus.Histogram
+ chunkCreatedStats *analytics.Counter
+
+ // Shutdown marker for ingester scale down
+ shutdownMarker prometheus.Gauge
+
+ flushQueueLength prometheus.Gauge
+}
+
+// setRecoveryBytesInUse bounds the bytes reports to >= 0.
+// TODO(owen-d): we can gain some efficiency by having the flusher never update this after recovery ends.
+func (m *ingesterMetrics) setRecoveryBytesInUse(v int64) {
+ if v < 0 {
+ v = 0
+ }
+ m.recoveryBytesInUse.Set(float64(v))
+}
+
+const (
+ walTypeCheckpoint = "checkpoint"
+ walTypeSegment = "segment"
+
+ duplicateReason = "duplicate"
+)
+
+func newIngesterMetrics(r prometheus.Registerer, metricsNamespace string) *ingesterMetrics {
+ return &ingesterMetrics{
+ walDiskFullFailures: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_disk_full_failures_total",
+ Help: "Total number of wal write failures due to full disk.",
+ }),
+ walReplayActive: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingester_rf1_wal_replay_active",
+ Help: "Whether the WAL is replaying",
+ }),
+ walReplayDuration: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingester_rf1_wal_replay_duration_seconds",
+ Help: "Time taken to replay the checkpoint and the WAL.",
+ }),
+ walReplaySamplesDropped: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_discarded_samples_total",
+ Help: "WAL segment entries discarded during replay",
+ }, []string{validation.ReasonLabel}),
+ walReplayBytesDropped: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_discarded_bytes_total",
+ Help: "WAL segment bytes discarded during replay",
+ }, []string{validation.ReasonLabel}),
+ walCorruptionsTotal: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_corruptions_total",
+ Help: "Total number of WAL corruptions encountered.",
+ }, []string{"type"}),
+ checkpointDeleteFail: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_checkpoint_deletions_failed_total",
+ Help: "Total number of checkpoint deletions that failed.",
+ }),
+ checkpointDeleteTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_checkpoint_deletions_total",
+ Help: "Total number of checkpoint deletions attempted.",
+ }),
+ checkpointCreationFail: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_checkpoint_creations_failed_total",
+ Help: "Total number of checkpoint creations that failed.",
+ }),
+ checkpointCreationTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_checkpoint_creations_total",
+ Help: "Total number of checkpoint creations attempted.",
+ }),
+ checkpointDuration: promauto.With(r).NewSummary(prometheus.SummaryOpts{
+ Name: "loki_ingester_rf1_checkpoint_duration_seconds",
+ Help: "Time taken to create a checkpoint.",
+ Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
+ }),
+ walRecordsLogged: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_records_logged_total",
+ Help: "Total number of WAL records logged.",
+ }),
+ checkpointLoggedBytesTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_checkpoint_logged_bytes_total",
+ Help: "Total number of bytes written to disk for checkpointing.",
+ }),
+ walLoggedBytesTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_logged_bytes_total",
+ Help: "Total number of bytes written to disk for WAL records.",
+ }),
+ recoveredStreamsTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_recovered_streams_total",
+ Help: "Total number of streams recovered from the WAL.",
+ }),
+ recoveredChunksTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_recovered_chunks_total",
+ Help: "Total number of chunks recovered from the WAL checkpoints.",
+ }),
+ recoveredEntriesTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_recovered_entries_total",
+ Help: "Total number of entries recovered from the WAL.",
+ }),
+ duplicateEntriesTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_duplicate_entries_total",
+ Help: "Entries discarded during WAL replay due to existing in checkpoints.",
+ }),
+ recoveredBytesTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_wal_recovered_bytes_total",
+ Help: "Total number of bytes recovered from the WAL.",
+ }),
+ recoveryBytesInUse: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingester_rf1_wal_bytes_in_use",
+ Help: "Total number of bytes in use by the WAL recovery process.",
+ }),
+ recoveryIsFlushing: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingester_rf1_wal_replay_flushing",
+ Help: "Whether the wal replay is in a flushing phase due to backpressure",
+ }),
+ limiterEnabled: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Name: "loki_ingester_rf1_limiter_enabled",
+ Help: "Whether the ingester's limiter is enabled",
+ }),
+ autoForgetUnhealthyIngestersTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Name: "loki_ingester_rf1_autoforget_unhealthy_ingesters_total",
+ Help: "Total number of ingesters automatically forgotten",
+ }),
+ chunkUtilization: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_utilization",
+ Help: "Distribution of stored chunk utilization (when stored).",
+ Buckets: prometheus.LinearBuckets(0, 0.2, 6),
+ }),
+ memoryChunks: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_memory_chunks",
+ Help: "The total number of chunks in memory.",
+ }),
+ chunkEntries: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_entries",
+ Help: "Distribution of stored lines per chunk (when stored).",
+ Buckets: prometheus.ExponentialBuckets(200, 2, 9), // biggest bucket is 200*2^(9-1) = 51200
+ }),
+ chunkSize: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_size_bytes",
+ Help: "Distribution of stored chunk sizes (when stored).",
+ Buckets: prometheus.ExponentialBuckets(20000, 2, 10), // biggest bucket is 20000*2^(10-1) = 10,240,000 (~10.2MB)
+ }),
+ chunkCompressionRatio: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_compression_ratio",
+ Help: "Compression ratio of chunks (when stored).",
+ Buckets: prometheus.LinearBuckets(.75, 2, 10),
+ }),
+ chunksPerTenant: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunks_stored_total",
+ Help: "Total stored chunks per tenant.",
+ }, []string{"tenant"}),
+ chunkSizePerTenant: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_stored_bytes_total",
+ Help: "Total bytes stored in chunks per tenant.",
+ }, []string{"tenant"}),
+ chunkAge: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_age_seconds",
+ Help: "Distribution of chunk ages (when stored).",
+ // with default settings chunks should flush between 5 min and 12 hours
+ // so buckets at 1min, 5min, 10min, 30min, 1hr, 2hr, 4hr, 10hr, 12hr, 16hr
+ Buckets: []float64{60, 300, 600, 1800, 3600, 7200, 14400, 36000, 43200, 57600},
+ }),
+ chunkEncodeTime: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_encode_time_seconds",
+ Help: "Distribution of chunk encode times.",
+ // 10ms to 10s.
+ Buckets: prometheus.ExponentialBuckets(0.01, 4, 6),
+ }),
+ chunksFlushFailures: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunks_flush_failures_total",
+ Help: "Total number of flush failures.",
+ }),
+ chunksFlushedPerReason: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunks_flushed_total",
+ Help: "Total flushed chunks per reason.",
+ }, []string{"reason"}),
+ chunkLifespan: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunk_bounds_hours",
+ Help: "Distribution of chunk end-start durations.",
+ // 1h -> 8hr
+ Buckets: prometheus.LinearBuckets(1, 1, 8),
+ }),
+ flushedChunksStats: analytics.NewCounter("ingester_rf1_flushed_chunks"),
+ flushedChunksBytesStats: analytics.NewStatistics("ingester_rf1_flushed_chunks_bytes"),
+ flushedChunksLinesStats: analytics.NewStatistics("ingester_rf1_flushed_chunks_lines"),
+ flushedChunksAgeStats: analytics.NewStatistics("ingester_rf1_flushed_chunks_age_seconds"),
+ flushedChunksLifespanStats: analytics.NewStatistics("ingester_rf1_flushed_chunks_lifespan_seconds"),
+ flushedChunksUtilizationStats: analytics.NewStatistics("ingester_rf1_flushed_chunks_utilization"),
+ chunksCreatedTotal: promauto.With(r).NewCounter(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Name: "ingester_rf1_chunks_created_total",
+ Help: "The total number of chunks created in the ingester.",
+ }),
+ samplesPerChunk: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Subsystem: "ingester_rf1",
+ Name: "samples_per_chunk",
+ Help: "The number of samples in a chunk.",
+
+ Buckets: prometheus.LinearBuckets(4096, 2048, 6),
+ }),
+ blocksPerChunk: promauto.With(r).NewHistogram(prometheus.HistogramOpts{
+ Namespace: constants.Loki,
+ Subsystem: "ingester_rf1",
+ Name: "blocks_per_chunk",
+ Help: "The number of blocks in a chunk.",
+
+ Buckets: prometheus.ExponentialBuckets(5, 2, 6),
+ }),
+
+ chunkCreatedStats: analytics.NewCounter("ingester_chunk_created"),
+
+ shutdownMarker: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Namespace: constants.Loki,
+ Subsystem: "ingester_rf1",
+ Name: "shutdown_marker",
+ Help: "1 if prepare shutdown has been called, 0 otherwise",
+ }),
+
+ flushQueueLength: promauto.With(r).NewGauge(prometheus.GaugeOpts{
+ Namespace: metricsNamespace,
+ Subsystem: "ingester_rf1",
+ Name: "flush_queue_length",
+ Help: "The total number of series pending in the flush queue.",
+ }),
+ }
+}
diff --git a/pkg/ingester-rf1/owned_streams.go b/pkg/ingester-rf1/owned_streams.go
new file mode 100644
index 0000000000000..196d1265d1ba1
--- /dev/null
+++ b/pkg/ingester-rf1/owned_streams.go
@@ -0,0 +1,74 @@
+package ingesterrf1
+
+import (
+ "sync"
+
+ "github.com/grafana/dskit/services"
+ "go.uber.org/atomic"
+)
+
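+// ownedStreamService tracks the per-tenant counts of owned and not-owned
+// streams and caches the tenant's fixed stream-count limit.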
+type ownedStreamService struct {
+ services.Service
+
+ tenantID string
+ limiter *Limiter
+ fixedLimit *atomic.Int32
+ ownedStreamCount int
+ notOwnedStreamCount int
+ lock sync.RWMutex
+}
+
+func newOwnedStreamService(tenantID string, limiter *Limiter) *ownedStreamService {
+ svc := &ownedStreamService{
+ tenantID: tenantID,
+ limiter: limiter,
+ fixedLimit: atomic.NewInt32(0),
+ }
+
+ svc.updateFixedLimit()
+ return svc
+}
+
+func (s *ownedStreamService) getOwnedStreamCount() int {
+ s.lock.RLock()
+ defer s.lock.RUnlock()
+ return s.ownedStreamCount
+}
+
+func (s *ownedStreamService) updateFixedLimit() {
+ limit, _, _, _ := s.limiter.GetStreamCountLimit(s.tenantID)
+ s.fixedLimit.Store(int32(limit))
+}
+
+func (s *ownedStreamService) getFixedLimit() int {
+ return int(s.fixedLimit.Load())
+}
+
+func (s *ownedStreamService) incOwnedStreamCount() {
+ s.lock.Lock()
+ defer s.lock.Unlock()
+ s.ownedStreamCount++
+}
+
+func (s *ownedStreamService) incNotOwnedStreamCount() {
+ s.lock.Lock()
+ defer s.lock.Unlock()
+ s.notOwnedStreamCount++
+}
+
+func (s *ownedStreamService) decOwnedStreamCount() {
+ s.lock.Lock()
+ defer s.lock.Unlock()
+ if s.notOwnedStreamCount > 0 {
+ s.notOwnedStreamCount--
+ return
+ }
+ s.ownedStreamCount--
+}
+
+func (s *ownedStreamService) resetStreamCounts() {
+ s.lock.Lock()
+ defer s.lock.Unlock()
+ s.ownedStreamCount = 0
+ s.notOwnedStreamCount = 0
+}
diff --git a/pkg/ingester-rf1/ring_client.go b/pkg/ingester-rf1/ring_client.go
new file mode 100644
index 0000000000000..534a7468fddf4
--- /dev/null
+++ b/pkg/ingester-rf1/ring_client.go
@@ -0,0 +1,77 @@
+package ingesterrf1
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/go-kit/log"
+ "github.com/grafana/dskit/ring"
+ ring_client "github.com/grafana/dskit/ring/client"
+ "github.com/grafana/dskit/services"
+ "github.com/prometheus/client_golang/prometheus"
+
+ "github.com/grafana/loki/v3/pkg/ingester-rf1/clientpool"
+)
+
+type RingClient struct {
+ cfg Config
+ logger log.Logger
+
+ services.Service
+ subservices *services.Manager
+ subservicesWatcher *services.FailureWatcher
+ ring *ring.Ring
+ pool *ring_client.Pool
+}
+
+func NewRingClient(
+ cfg Config,
+ metricsNamespace string,
+ registerer prometheus.Registerer,
+ logger log.Logger,
+) (*RingClient, error) {
+ var err error
+ registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
+ ringClient := &RingClient{
+ logger: log.With(logger, "component", "ingester-rf1-client"),
+ cfg: cfg,
+ }
+ ringClient.ring, err = ring.New(cfg.LifecyclerConfig.RingConfig, "ingester-rf1", "ingester-rf1-ring", ringClient.logger, registerer)
+ if err != nil {
+ return nil, err
+ }
+ factory := cfg.factory
+ if factory == nil {
+ factory = ring_client.PoolAddrFunc(func(addr string) (ring_client.PoolClient, error) {
+ return clientpool.NewClient(cfg.ClientConfig, addr)
+ })
+ }
+ ringClient.pool = clientpool.NewPool("ingester-rf1", cfg.ClientConfig.PoolConfig, ringClient.ring, factory, logger, metricsNamespace)
+
+ ringClient.subservices, err = services.NewManager(ringClient.pool, ringClient.ring)
+ if err != nil {
+ return nil, fmt.Errorf("services manager: %w", err)
+ }
+ ringClient.subservicesWatcher = services.NewFailureWatcher()
+ ringClient.subservicesWatcher.WatchManager(ringClient.subservices)
+ ringClient.Service = services.NewBasicService(ringClient.starting, ringClient.running, ringClient.stopping)
+
+ return ringClient, nil
+}
+
+func (q *RingClient) starting(ctx context.Context) error {
+ return services.StartManagerAndAwaitHealthy(ctx, q.subservices)
+}
+
+func (q *RingClient) running(ctx context.Context) error {
+ select {
+ case <-ctx.Done():
+ return nil
+ case err := <-q.subservicesWatcher.Chan():
+ return fmt.Errorf("ingester-rf1 tee subservices failed: %w", err)
+ }
+}
+
+func (q *RingClient) stopping(_ error) error {
+ return services.StopManagerAndAwaitStopped(context.Background(), q.subservices)
+}
diff --git a/pkg/ingester-rf1/stream.go b/pkg/ingester-rf1/stream.go
new file mode 100644
index 0000000000000..932c6244bbf20
--- /dev/null
+++ b/pkg/ingester-rf1/stream.go
@@ -0,0 +1,325 @@
+package ingesterrf1
+
+import (
+ "bytes"
+ "context"
+ "fmt"
+ "net/http"
+ "time"
+
+ "github.com/grafana/dskit/httpgrpc"
+ "github.com/opentracing/opentracing-go"
+ "github.com/pkg/errors"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/v3/pkg/chunkenc"
+ "github.com/grafana/loki/v3/pkg/distributor/writefailures"
+ "github.com/grafana/loki/v3/pkg/loghttp/push"
+ "github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/util/flagext"
+ "github.com/grafana/loki/v3/pkg/validation"
+)
+
+var ErrEntriesExist = errors.New("duplicate push - entries already exist")
+
+type line struct {
+ ts time.Time
+ content string
+}
+
+type stream struct {
+ limiter *StreamRateLimiter
+ cfg *Config
+ tenant string
+	// Not thread-safe; assume accesses to this are locked by caller.
+ fp model.Fingerprint // possibly remapped fingerprint, used in the streams map
+
+ labels labels.Labels
+ labelsString string
+ labelHash uint64
+ labelHashNoShard uint64
+
+ // most recently pushed line. This is used to prevent duplicate pushes.
+ // It also determines chunk synchronization when unordered writes are disabled.
+ lastLine line
+
+ // keeps track of the highest timestamp accepted by the stream.
+ // This is used when unordered writes are enabled to cap the validity window
+ // of accepted writes and for chunk synchronization.
+ highestTs time.Time
+
+ metrics *ingesterMetrics
+
+ //tailers map[uint32]*tailer
+ //tailerMtx sync.RWMutex
+
+ // entryCt is a counter which is incremented on each accepted entry.
+ // This allows us to discard WAL entries during replays which were
+ // already recovered via checkpoints. Historically out of order
+ // errors were used to detect this, but this counter has been
+ // introduced to facilitate removing the ordering constraint.
+ entryCt int64
+
+ unorderedWrites bool
+ //streamRateCalculator *StreamRateCalculator
+
+ writeFailures *writefailures.Manager
+
+ chunkFormat byte
+ chunkHeadBlockFormat chunkenc.HeadBlockFmt
+}
+
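+// chunkDesc holds an in-memory chunk together with its flush state.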
+type chunkDesc struct {
+ chunk *chunkenc.MemChunk
+ closed bool
+ synced bool
+ flushed time.Time
+ reason string
+
+ lastUpdated time.Time
+}
+
+type entryWithError struct {
+ entry *logproto.Entry
+ e error
+}
+
+func newStream(
+ chunkFormat byte,
+ headBlockFmt chunkenc.HeadBlockFmt,
+ cfg *Config,
+ limits RateLimiterStrategy,
+ tenant string,
+ fp model.Fingerprint,
+ labels labels.Labels,
+ unorderedWrites bool,
+ //streamRateCalculator *StreamRateCalculator,
+ metrics *ingesterMetrics,
+ writeFailures *writefailures.Manager,
+) *stream {
+ //hashNoShard, _ := labels.HashWithoutLabels(make([]byte, 0, 1024), ShardLbName)
+ return &stream{
+ limiter: NewStreamRateLimiter(limits, tenant, 10*time.Second),
+ cfg: cfg,
+ fp: fp,
+ labels: labels,
+ labelsString: labels.String(),
+ labelHash: labels.Hash(),
+ //labelHashNoShard: hashNoShard,
+ //tailers: map[uint32]*tailer{},
+ metrics: metrics,
+ tenant: tenant,
+ //streamRateCalculator: streamRateCalculator,
+
+ unorderedWrites: unorderedWrites,
+ writeFailures: writeFailures,
+ chunkFormat: chunkFormat,
+ chunkHeadBlockFormat: headBlockFmt,
+ }
+}
+
+// consumeChunk manually adds a chunk to the stream that was received during
+// ingester chunk transfer.
+// Must hold chunkMtx
+// DEPRECATED: chunk transfers are no longer suggested and remain for compatibility.
+func (s *stream) consumeChunk(_ context.Context, _ *logproto.Chunk) error {
+ return nil
+}
+
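+// Push validates the given entries, appends the accepted ones to the WAL
+// segment carried by flushCtx, and returns the number of bytes added along
+// with an error describing any rejected entries.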
+func (s *stream) Push(
+ ctx context.Context,
+ entries []logproto.Entry,
+	// Whether or not to ingest all at once. It is a per-tenant configuration.
+ rateLimitWholeStream bool,
+
+ usageTracker push.UsageTracker,
+ flushCtx *flushCtx,
+) (int, error) {
+
+ toStore, invalid := s.validateEntries(ctx, entries, rateLimitWholeStream, usageTracker)
+ if rateLimitWholeStream && hasRateLimitErr(invalid) {
+ return 0, errorForFailedEntries(s, invalid, len(entries))
+ }
+
+ bytesAdded, _ := s.storeEntries(ctx, toStore, usageTracker, flushCtx)
+
+ return bytesAdded, errorForFailedEntries(s, invalid, len(entries))
+}
+
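+// errorForFailedEntries returns an error describing the rejected entries;
+// out-of-order and rate-limit failures become 4xx httpgrpc errors listing at
+// most cfg.MaxReturnedErrors entries.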
+func errorForFailedEntries(s *stream, failedEntriesWithError []entryWithError, totalEntries int) error {
+ if len(failedEntriesWithError) == 0 {
+ return nil
+ }
+
+ lastEntryWithErr := failedEntriesWithError[len(failedEntriesWithError)-1]
+ _, ok := lastEntryWithErr.e.(*validation.ErrStreamRateLimit)
+ outOfOrder := chunkenc.IsOutOfOrderErr(lastEntryWithErr.e)
+ if !outOfOrder && !ok {
+ return lastEntryWithErr.e
+ }
+ var statusCode int
+ if outOfOrder {
+ statusCode = http.StatusBadRequest
+ }
+ if ok {
+ statusCode = http.StatusTooManyRequests
+ }
+ // Return a http status 4xx request response with all failed entries.
+ buf := bytes.Buffer{}
+ streamName := s.labelsString
+
+ limitedFailedEntries := failedEntriesWithError
+ if maxIgnore := s.cfg.MaxReturnedErrors; maxIgnore > 0 && len(limitedFailedEntries) > maxIgnore {
+ limitedFailedEntries = limitedFailedEntries[:maxIgnore]
+ }
+
+ for _, entryWithError := range limitedFailedEntries {
+ fmt.Fprintf(&buf,
+ "entry with timestamp %s ignored, reason: '%s',\n",
+ entryWithError.entry.Timestamp.String(), entryWithError.e.Error())
+ }
+
+ fmt.Fprintf(&buf, "user '%s', total ignored: %d out of %d for stream: %s", s.tenant, len(failedEntriesWithError), totalEntries, streamName)
+
+ return httpgrpc.Errorf(statusCode, buf.String())
+}
+
+func hasRateLimitErr(errs []entryWithError) bool {
+ if len(errs) == 0 {
+ return false
+ }
+
+ lastErr := errs[len(errs)-1]
+ _, ok := lastErr.e.(*validation.ErrStreamRateLimit)
+ return ok
+}
+
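+// storeEntries appends the already-validated entries to the WAL segment
+// writer and updates the stream's last-line and highest-timestamp bookkeeping.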
+func (s *stream) storeEntries(ctx context.Context, entries []logproto.Entry, usageTracker push.UsageTracker, flushCtx *flushCtx) (int, []*logproto.Entry) {
+ if sp := opentracing.SpanFromContext(ctx); sp != nil {
+ sp.LogKV("event", "stream started to store entries", "labels", s.labelsString)
+ defer sp.LogKV("event", "stream finished to store entries")
+ }
+
+ var bytesAdded, outOfOrderSamples, outOfOrderBytes int
+
+ storedEntries := make([]*logproto.Entry, 0, len(entries))
+ for i := 0; i < len(entries); i++ {
+ s.entryCt++
+ s.lastLine.ts = entries[i].Timestamp
+ s.lastLine.content = entries[i].Line
+ if s.highestTs.Before(entries[i].Timestamp) {
+ s.highestTs = entries[i].Timestamp
+ }
+
+ bytesAdded += len(entries[i].Line)
+ storedEntries = append(storedEntries, &entries[i])
+ }
+ flushCtx.segmentWriter.Append(s.tenant, s.labels.String(), s.labels, storedEntries)
+ s.reportMetrics(ctx, outOfOrderSamples, outOfOrderBytes, 0, 0, usageTracker)
+ return bytesAdded, storedEntries
+}
+
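+// validateEntries drops duplicates of the last appended line, rate-limited
+// entries, and entries outside the unordered-write validity window, returning
+// the entries to store alongside the per-entry errors.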
+func (s *stream) validateEntries(ctx context.Context, entries []logproto.Entry, rateLimitWholeStream bool, usageTracker push.UsageTracker) ([]logproto.Entry, []entryWithError) {
+
+ var (
+ outOfOrderSamples, outOfOrderBytes int
+ rateLimitedSamples, rateLimitedBytes int
+ validBytes, totalBytes int
+ failedEntriesWithError []entryWithError
+ limit = s.limiter.lim.Limit()
+ lastLine = s.lastLine
+ highestTs = s.highestTs
+ toStore = make([]logproto.Entry, 0, len(entries))
+ )
+
+ for i := range entries {
+ // If this entry matches our last appended line's timestamp and contents,
+ // ignore it.
+ //
+ // This check is done at the stream level so it persists across cut and
+ // flushed chunks.
+ //
+ // NOTE: it's still possible for duplicates to be appended if a stream is
+ // deleted from inactivity.
+ if entries[i].Timestamp.Equal(lastLine.ts) && entries[i].Line == lastLine.content {
+ continue
+ }
+
+ lineBytes := len(entries[i].Line)
+ totalBytes += lineBytes
+
+ now := time.Now()
+ if !rateLimitWholeStream && !s.limiter.AllowN(now, len(entries[i].Line)) {
+ failedEntriesWithError = append(failedEntriesWithError, entryWithError{&entries[i], &validation.ErrStreamRateLimit{RateLimit: flagext.ByteSize(limit), Labels: s.labelsString, Bytes: flagext.ByteSize(lineBytes)}})
+ s.writeFailures.Log(s.tenant, failedEntriesWithError[len(failedEntriesWithError)-1].e)
+ rateLimitedSamples++
+ rateLimitedBytes += lineBytes
+ continue
+ }
+
+ // The validity window for unordered writes is the highest timestamp present minus 1/2 * max-chunk-age.
+ cutoff := highestTs.Add(-s.cfg.MaxChunkAge / 2)
+ if s.unorderedWrites && !highestTs.IsZero() && cutoff.After(entries[i].Timestamp) {
+ failedEntriesWithError = append(failedEntriesWithError, entryWithError{&entries[i], chunkenc.ErrTooFarBehind(entries[i].Timestamp, cutoff)})
+ s.writeFailures.Log(s.tenant, fmt.Errorf("%w for stream %s", failedEntriesWithError[len(failedEntriesWithError)-1].e, s.labels))
+ outOfOrderSamples++
+ outOfOrderBytes += lineBytes
+ continue
+ }
+
+ validBytes += lineBytes
+
+ lastLine.ts = entries[i].Timestamp
+ lastLine.content = entries[i].Line
+ if highestTs.Before(entries[i].Timestamp) {
+ highestTs = entries[i].Timestamp
+ }
+
+ toStore = append(toStore, entries[i])
+ }
+
+ // Each successful call to 'AllowN' advances the limiter. With all-or-nothing
+ // ingestion, the limiter should only be advanced when the whole stream can be
+	// sent.
+ now := time.Now()
+ if rateLimitWholeStream && !s.limiter.AllowN(now, validBytes) {
+ // Report that the whole stream was rate limited
+ rateLimitedSamples = len(toStore)
+ failedEntriesWithError = make([]entryWithError, 0, len(toStore))
+ for i := 0; i < len(toStore); i++ {
+ failedEntriesWithError = append(failedEntriesWithError, entryWithError{&toStore[i], &validation.ErrStreamRateLimit{RateLimit: flagext.ByteSize(limit), Labels: s.labelsString, Bytes: flagext.ByteSize(len(toStore[i].Line))}})
+ rateLimitedBytes += len(toStore[i].Line)
+ }
+ }
+
+ //s.streamRateCalculator.Record(s.tenant, s.labelHash, s.labelHashNoShard, totalBytes)
+ s.reportMetrics(ctx, outOfOrderSamples, outOfOrderBytes, rateLimitedSamples, rateLimitedBytes, usageTracker)
+ return toStore, failedEntriesWithError
+}
+
+func (s *stream) reportMetrics(ctx context.Context, outOfOrderSamples, outOfOrderBytes, rateLimitedSamples, rateLimitedBytes int, usageTracker push.UsageTracker) {
+ if outOfOrderSamples > 0 {
+ name := validation.OutOfOrder
+ if s.unorderedWrites {
+ name = validation.TooFarBehind
+ }
+ validation.DiscardedSamples.WithLabelValues(name, s.tenant).Add(float64(outOfOrderSamples))
+ validation.DiscardedBytes.WithLabelValues(name, s.tenant).Add(float64(outOfOrderBytes))
+ if usageTracker != nil {
+ usageTracker.DiscardedBytesAdd(ctx, s.tenant, name, s.labels, float64(outOfOrderBytes))
+ }
+ }
+ if rateLimitedSamples > 0 {
+ validation.DiscardedSamples.WithLabelValues(validation.StreamRateLimit, s.tenant).Add(float64(rateLimitedSamples))
+ validation.DiscardedBytes.WithLabelValues(validation.StreamRateLimit, s.tenant).Add(float64(rateLimitedBytes))
+ if usageTracker != nil {
+ usageTracker.DiscardedBytesAdd(ctx, s.tenant, validation.StreamRateLimit, s.labels, float64(rateLimitedBytes))
+ }
+ }
+}
+
+func (s *stream) resetCounter() {
+ s.entryCt = 0
+}
diff --git a/pkg/ingester-rf1/stream_rate_calculator.go b/pkg/ingester-rf1/stream_rate_calculator.go
new file mode 100644
index 0000000000000..59272a3fe80a0
--- /dev/null
+++ b/pkg/ingester-rf1/stream_rate_calculator.go
@@ -0,0 +1,131 @@
+package ingesterrf1
+
+import (
+ "sync"
+ "time"
+
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
+const (
+ // defaultStripeSize is the default number of entries to allocate in the
+ // stripeSeries list.
+ defaultStripeSize = 1 << 10
+
+	// The intent is a per-second rate, so this is hard-coded.
+ updateInterval = time.Second
+)
+
+// stripeLock is taken from ruler/storage/wal/series.go
+type stripeLock struct {
+ sync.RWMutex
+ // Padding to avoid multiple locks being on the same cache line.
+ _ [40]byte
+}
+
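+// StreamRateCalculator tracks per-stream ingestion rates, striping samples
+// across shards to reduce lock contention.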
+type StreamRateCalculator struct {
+ size int
+ samples []map[string]map[uint64]logproto.StreamRate
+ locks []stripeLock
+ stopchan chan struct{}
+
+ rateLock sync.RWMutex
+ allRates []logproto.StreamRate
+}
+
+func NewStreamRateCalculator() *StreamRateCalculator {
+ calc := &StreamRateCalculator{
+ size: defaultStripeSize,
+ // Lookup pattern: tenant -> fingerprint -> rate
+ samples: make([]map[string]map[uint64]logproto.StreamRate, defaultStripeSize),
+ locks: make([]stripeLock, defaultStripeSize),
+ stopchan: make(chan struct{}),
+ }
+
+ for i := 0; i < defaultStripeSize; i++ {
+ calc.samples[i] = make(map[string]map[uint64]logproto.StreamRate)
+ }
+
+ go calc.updateLoop()
+
+ return calc
+}
+
+func (c *StreamRateCalculator) updateLoop() {
+ t := time.NewTicker(updateInterval)
+ defer t.Stop()
+
+ for {
+ select {
+ case <-t.C:
+ c.updateRates()
+ case <-c.stopchan:
+ return
+ }
+ }
+}
+
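+// updateRates snapshots and resets every stripe, publishing the aggregated
+// per-stream rates.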
+func (c *StreamRateCalculator) updateRates() {
+ rates := make([]logproto.StreamRate, 0, c.size)
+
+ for i := 0; i < c.size; i++ {
+ c.locks[i].Lock()
+
+ tenantRates := c.samples[i]
+ for _, tenant := range tenantRates {
+ for _, streamRate := range tenant {
+ rates = append(rates, logproto.StreamRate{
+ Tenant: streamRate.Tenant,
+ StreamHash: streamRate.StreamHash,
+ StreamHashNoShard: streamRate.StreamHashNoShard,
+ Rate: streamRate.Rate,
+ Pushes: streamRate.Pushes,
+ })
+ }
+ }
+
+ c.samples[i] = make(map[string]map[uint64]logproto.StreamRate)
+ c.locks[i].Unlock()
+ }
+
+ c.rateLock.Lock()
+ defer c.rateLock.Unlock()
+
+ c.allRates = rates
+}
+
+func (c *StreamRateCalculator) Rates() []logproto.StreamRate {
+ c.rateLock.RLock()
+ defer c.rateLock.RUnlock()
+
+ return c.allRates
+}
+
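+// Record accumulates pushed bytes and push counts for a stream in its stripe;
+// the totals are drained once per updateInterval.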
+func (c *StreamRateCalculator) Record(tenant string, streamHash, streamHashNoShard uint64, bytes int) {
+ i := streamHash & uint64(c.size-1)
+
+ c.locks[i].Lock()
+ defer c.locks[i].Unlock()
+
+ tenantMap := c.getTenant(i, tenant)
+ streamRate := tenantMap[streamHash]
+ streamRate.StreamHash = streamHash
+ streamRate.StreamHashNoShard = streamHashNoShard
+ streamRate.Tenant = tenant
+ streamRate.Rate += int64(bytes)
+ streamRate.Pushes++
+ tenantMap[streamHash] = streamRate
+
+ c.samples[i][tenant] = tenantMap
+}
+
+func (c *StreamRateCalculator) getTenant(idx uint64, tenant string) map[uint64]logproto.StreamRate {
+ if t, ok := c.samples[idx][tenant]; ok {
+ return t
+ }
+ return make(map[uint64]logproto.StreamRate)
+}
+
+func (c *StreamRateCalculator) Stop() {
+ close(c.stopchan)
+}
diff --git a/pkg/ingester-rf1/streams_map.go b/pkg/ingester-rf1/streams_map.go
new file mode 100644
index 0000000000000..ccf0f18a40389
--- /dev/null
+++ b/pkg/ingester-rf1/streams_map.go
@@ -0,0 +1,149 @@
+package ingesterrf1
+
+import (
+ "sync"
+
+ "github.com/prometheus/common/model"
+ "go.uber.org/atomic"
+)
+
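+// streamsMap indexes streams both by labels string and by fingerprint,
+// keeping the two maps consistent under consistencyMtx.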
+type streamsMap struct {
+ consistencyMtx sync.RWMutex // Keep read/write consistency between other fields
+ streams *sync.Map // map[string]*stream
+ streamsByFP *sync.Map // map[model.Fingerprint]*stream
+ streamsCounter *atomic.Int64
+}
+
+func newStreamsMap() *streamsMap {
+ return &streamsMap{
+ consistencyMtx: sync.RWMutex{},
+ streams: &sync.Map{},
+ streamsByFP: &sync.Map{},
+ streamsCounter: atomic.NewInt64(0),
+ }
+}
+
+// Load is lock-free. If usage of the stream is consistency-sensitive, it must be called inside WithRLock at a minimum.
+func (m *streamsMap) Load(key string) (*stream, bool) {
+ return m.load(m.streams, key)
+}
+
+// LoadByFP is lock-free. If usage of the stream is consistency-sensitive, it must be called inside WithRLock at a minimum.
+func (m *streamsMap) LoadByFP(fp model.Fingerprint) (*stream, bool) {
+ return m.load(m.streamsByFP, fp)
+}
+
+// Store must be called inside WithLock
+func (m *streamsMap) Store(key string, s *stream) {
+ m.store(key, s)
+}
+
+// StoreByFP must be called inside WithLock
+func (m *streamsMap) StoreByFP(fp model.Fingerprint, s *stream) {
+ m.store(fp, s)
+}
+
+// Delete must be called inside WithLock
+func (m *streamsMap) Delete(s *stream) bool {
+ _, loaded := m.streams.LoadAndDelete(s.labelsString)
+ if loaded {
+ m.streamsByFP.Delete(s.fp)
+ m.streamsCounter.Dec()
+ return true
+ }
+ return false
+}
+
+// LoadOrStoreNew locks internally; do NOT call it inside WithLock or WithRLock.
+func (m *streamsMap) LoadOrStoreNew(key string, newStreamFn func() (*stream, error), postLoadFn func(*stream) error) (*stream, bool, error) {
+ return m.loadOrStoreNew(m.streams, key, newStreamFn, postLoadFn)
+}
+
+// LoadOrStoreNewByFP locks internally; do NOT call it inside WithLock or WithRLock.
+func (m *streamsMap) LoadOrStoreNewByFP(fp model.Fingerprint, newStreamFn func() (*stream, error), postLoadFn func(*stream) error) (*stream, bool, error) {
+ return m.loadOrStoreNew(m.streamsByFP, fp, newStreamFn, postLoadFn)
+}
+
+// WithLock is a helper function to execute write operations
+func (m *streamsMap) WithLock(fn func()) {
+ m.consistencyMtx.Lock()
+ defer m.consistencyMtx.Unlock()
+ fn()
+}
+
+// WithRLock is a helper function to execute consistency-sensitive read operations.
+// Generally, if a stream loaded from streamsMap will have its chunkMtx locked,
+// chunkMtx.Lock should be called within this function.
+func (m *streamsMap) WithRLock(fn func()) {
+ m.consistencyMtx.RLock()
+ defer m.consistencyMtx.RUnlock()
+ fn()
+}
+
+func (m *streamsMap) ForEach(fn func(s *stream) (bool, error)) error {
+ var c bool
+ var err error
+ m.streams.Range(func(_, value interface{}) bool {
+ c, err = fn(value.(*stream))
+ return c
+ })
+ return err
+}
+
+func (m *streamsMap) Len() int {
+ return int(m.streamsCounter.Load())
+}
+
+func (m *streamsMap) load(mp *sync.Map, key interface{}) (*stream, bool) {
+ if v, ok := mp.Load(key); ok {
+ return v.(*stream), true
+ }
+ return nil, false
+}
+
+func (m *streamsMap) store(key interface{}, s *stream) {
+ if labelsString, ok := key.(string); ok {
+ m.streams.Store(labelsString, s)
+ } else {
+ m.streams.Store(s.labelsString, s)
+ }
+ m.streamsByFP.Store(s.fp, s)
+ m.streamsCounter.Inc()
+}
+
+// newStreamFn: Called if not loaded, with consistencyMtx locked. Must not be nil
+// postLoadFn: Called if loaded, with consistencyMtx read-locked at least. Can be nil
+func (m *streamsMap) loadOrStoreNew(mp *sync.Map, key interface{}, newStreamFn func() (*stream, error), postLoadFn func(*stream) error) (*stream, bool, error) {
+ var s *stream
+ var loaded bool
+ var err error
+ m.WithRLock(func() {
+ if s, loaded = m.load(mp, key); loaded {
+ if postLoadFn != nil {
+ err = postLoadFn(s)
+ }
+ }
+ })
+
+ if loaded {
+ return s, true, err
+ }
+
+ m.WithLock(func() {
+ // Double check
+ if s, loaded = m.load(mp, key); loaded {
+ if postLoadFn != nil {
+ err = postLoadFn(s)
+ }
+ return
+ }
+
+ s, err = newStreamFn()
+ if err != nil {
+ return
+ }
+ m.store(key, s)
+ })
+
+ return s, loaded, err
+}
diff --git a/pkg/ingester-rf1/tee.go b/pkg/ingester-rf1/tee.go
new file mode 100644
index 0000000000000..14d5aa87da3d0
--- /dev/null
+++ b/pkg/ingester-rf1/tee.go
@@ -0,0 +1,88 @@
+package ingesterrf1
+
+import (
+ "context"
+ "errors"
+
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
+ "github.com/grafana/dskit/ring"
+ "github.com/grafana/dskit/user"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+
+ "github.com/grafana/loki/v3/pkg/distributor"
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
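+// Tee duplicates write requests from the distributor to RF1 ingesters.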
+type Tee struct {
+ cfg Config
+ logger log.Logger
+ ringClient *RingClient
+
+ ingesterAppends *prometheus.CounterVec
+}
+
+func NewTee(
+ cfg Config,
+ ringClient *RingClient,
+ metricsNamespace string,
+ registerer prometheus.Registerer,
+ logger log.Logger,
+) (*Tee, error) {
+ registerer = prometheus.WrapRegistererWithPrefix(metricsNamespace+"_", registerer)
+
+ t := &Tee{
+ logger: log.With(logger, "component", "ingester-rf1-tee"),
+ ingesterAppends: promauto.With(registerer).NewCounterVec(prometheus.CounterOpts{
+ Name: "ingester_rf1_appends_total",
+ Help: "The total number of batch appends sent to rf1 ingesters.",
+ }, []string{"ingester", "status"}),
+ cfg: cfg,
+ ringClient: ringClient,
+ }
+
+ return t, nil
+}
+
+// Duplicate implements distributor.Tee, which tees distributor requests to RF1 ingesters.
+func (t *Tee) Duplicate(tenant string, streams []distributor.KeyedStream) {
+ for idx := range streams {
+ go func(stream distributor.KeyedStream) {
+ if err := t.sendStream(tenant, stream); err != nil {
+ level.Error(t.logger).Log("msg", "failed to send stream to ingester-rf1", "err", err)
+ }
+ }(streams[idx])
+ }
+}
+
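+// sendStream pushes the stream to the first RF1 ingester that owns its hash
+// key on the ring.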
+func (t *Tee) sendStream(tenant string, stream distributor.KeyedStream) error {
+ var descs [1]ring.InstanceDesc
+ replicationSet, err := t.ringClient.ring.Get(stream.HashKey, ring.WriteNoExtend, descs[:0], nil, nil)
+ if err != nil {
+ return err
+ }
+ if replicationSet.Instances == nil {
+ return errors.New("no instances found")
+ }
+ addr := replicationSet.Instances[0].Addr
+ client, err := t.ringClient.pool.GetClientFor(addr)
+ if err != nil {
+ return err
+ }
+ req := &logproto.PushRequest{
+ Streams: []logproto.Stream{
+ stream.Stream,
+ },
+ }
+
+ ctx, cancel := context.WithTimeout(user.InjectOrgID(context.Background(), tenant), t.cfg.ClientConfig.RemoteTimeout)
+ defer cancel()
+ _, err = client.(logproto.PusherRF1Client).Push(ctx, req)
+ if err != nil {
+ t.ingesterAppends.WithLabelValues(addr, "fail").Inc()
+ return err
+ }
+ t.ingesterAppends.WithLabelValues(addr, "success").Inc()
+ return nil
+}
diff --git a/pkg/ingester/flush_test.go b/pkg/ingester/flush_test.go
index 69462a3d352a5..a33b9d8ba4ee0 100644
--- a/pkg/ingester/flush_test.go
+++ b/pkg/ingester/flush_test.go
@@ -37,6 +37,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/index/stats"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/sharding"
+ walsegment "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util/constants"
"github.com/grafana/loki/v3/pkg/validation"
)
@@ -432,6 +433,10 @@ func defaultIngesterTestConfig(t testing.TB) Config {
return cfg
}
+func (s *testStore) PutWal(_ context.Context, _ *walsegment.SegmentWriter) error {
+ return nil
+}
+
func (s *testStore) Put(ctx context.Context, chunks []chunk.Chunk) error {
s.mtx.Lock()
defer s.mtx.Unlock()
diff --git a/pkg/ingester/ingester_test.go b/pkg/ingester/ingester_test.go
index 17daa7b3ba580..d9f924352d0f6 100644
--- a/pkg/ingester/ingester_test.go
+++ b/pkg/ingester/ingester_test.go
@@ -2,7 +2,7 @@ package ingester
import (
"fmt"
- math "math"
+ "math"
"net"
"net/http"
"net/http/httptest"
@@ -48,6 +48,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/index/seriesvolume"
"github.com/grafana/loki/v3/pkg/storage/stores/index/stats"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/sharding"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util/constants"
"github.com/grafana/loki/v3/pkg/validation"
)
@@ -435,6 +436,10 @@ type mockStore struct {
chunks map[string][]chunk.Chunk
}
+func (s *mockStore) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return nil
+}
+
func (s *mockStore) Put(ctx context.Context, chunks []chunk.Chunk) error {
s.mtx.Lock()
defer s.mtx.Unlock()
diff --git a/pkg/logproto/ingester-rf1.pb.go b/pkg/logproto/ingester-rf1.pb.go
new file mode 100644
index 0000000000000..c9eb2db42f83f
--- /dev/null
+++ b/pkg/logproto/ingester-rf1.pb.go
@@ -0,0 +1,130 @@
+// Code generated by protoc-gen-gogo. DO NOT EDIT.
+// source: pkg/logproto/ingester-rf1.proto
+
+package logproto
+
+import (
+ context "context"
+ fmt "fmt"
+ _ "github.com/gogo/protobuf/gogoproto"
+ proto "github.com/gogo/protobuf/proto"
+ _ "github.com/gogo/protobuf/types"
+ push "github.com/grafana/loki/pkg/push"
+ grpc "google.golang.org/grpc"
+ codes "google.golang.org/grpc/codes"
+ status "google.golang.org/grpc/status"
+ math "math"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
+
+func init() { proto.RegisterFile("pkg/logproto/ingester-rf1.proto", fileDescriptor_8ef2a56eb3f3c377) }
+
+var fileDescriptor_8ef2a56eb3f3c377 = []byte{
+ // 250 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x92, 0x2f, 0xc8, 0x4e, 0xd7,
+ 0xcf, 0xc9, 0x4f, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0xd7, 0xcf, 0xcc, 0x4b, 0x4f, 0x2d, 0x2e, 0x49,
+ 0x2d, 0xd2, 0x2d, 0x4a, 0x33, 0xd4, 0x03, 0x0b, 0x09, 0x71, 0xc0, 0x24, 0xa5, 0x44, 0xd2, 0xf3,
+ 0xd3, 0xf3, 0x21, 0xea, 0x40, 0x2c, 0x88, 0xbc, 0x94, 0x7c, 0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa,
+ 0x3e, 0x98, 0x97, 0x54, 0x9a, 0xa6, 0x5f, 0x92, 0x99, 0x9b, 0x5a, 0x5c, 0x92, 0x98, 0x5b, 0x00,
+ 0x55, 0x20, 0x8d, 0x62, 0x03, 0x8c, 0x01, 0x95, 0x14, 0x06, 0x49, 0x16, 0x94, 0x16, 0x67, 0x80,
+ 0x09, 0x88, 0xa0, 0x91, 0x0b, 0x17, 0x67, 0x40, 0x69, 0x71, 0x46, 0x6a, 0x51, 0x90, 0x9b, 0xa1,
+ 0x90, 0x39, 0x17, 0x0b, 0x88, 0x23, 0x24, 0xaa, 0x07, 0xd7, 0x0a, 0xe2, 0x07, 0xa5, 0x16, 0x96,
+ 0xa6, 0x16, 0x97, 0x48, 0x89, 0xa1, 0x0b, 0x17, 0x17, 0xe4, 0xe7, 0x15, 0xa7, 0x2a, 0x31, 0x38,
+ 0xc5, 0x5e, 0x78, 0x28, 0xc7, 0x70, 0xe3, 0xa1, 0x1c, 0xc3, 0x87, 0x87, 0x72, 0x8c, 0x0d, 0x8f,
+ 0xe4, 0x18, 0x57, 0x3c, 0x92, 0x63, 0x3c, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07,
+ 0x8f, 0xe4, 0x18, 0x5f, 0x3c, 0x92, 0x63, 0xf8, 0xf0, 0x48, 0x8e, 0x71, 0xc2, 0x63, 0x39, 0x86,
+ 0x0b, 0x8f, 0xe5, 0x18, 0x6e, 0x3c, 0x96, 0x63, 0x88, 0x52, 0x4f, 0xcf, 0x2c, 0xc9, 0x28, 0x4d,
+ 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0x2f, 0x4a, 0x4c, 0x4b, 0xcc, 0x4b, 0xd4, 0xcf, 0xc9, 0xcf,
+ 0xce, 0xd4, 0x2f, 0x33, 0xd6, 0x47, 0xf6, 0x48, 0x12, 0x1b, 0x98, 0x32, 0x06, 0x04, 0x00, 0x00,
+ 0xff, 0xff, 0x29, 0x6e, 0xb8, 0x46, 0x41, 0x01, 0x00, 0x00,
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// PusherRF1Client is the client API for PusherRF1 service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
+type PusherRF1Client interface {
+ Push(ctx context.Context, in *push.PushRequest, opts ...grpc.CallOption) (*push.PushResponse, error)
+}
+
+type pusherRF1Client struct {
+ cc *grpc.ClientConn
+}
+
+func NewPusherRF1Client(cc *grpc.ClientConn) PusherRF1Client {
+ return &pusherRF1Client{cc}
+}
+
+func (c *pusherRF1Client) Push(ctx context.Context, in *push.PushRequest, opts ...grpc.CallOption) (*push.PushResponse, error) {
+ out := new(push.PushResponse)
+ err := c.cc.Invoke(ctx, "/logproto.PusherRF1/Push", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// PusherRF1Server is the server API for PusherRF1 service.
+type PusherRF1Server interface {
+ Push(context.Context, *push.PushRequest) (*push.PushResponse, error)
+}
+
+// UnimplementedPusherRF1Server can be embedded to have forward compatible implementations.
+type UnimplementedPusherRF1Server struct {
+}
+
+func (*UnimplementedPusherRF1Server) Push(ctx context.Context, req *push.PushRequest) (*push.PushResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Push not implemented")
+}
+
+func RegisterPusherRF1Server(s *grpc.Server, srv PusherRF1Server) {
+ s.RegisterService(&_PusherRF1_serviceDesc, srv)
+}
+
+func _PusherRF1_Push_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(push.PushRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(PusherRF1Server).Push(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/logproto.PusherRF1/Push",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(PusherRF1Server).Push(ctx, req.(*push.PushRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+var _PusherRF1_serviceDesc = grpc.ServiceDesc{
+ ServiceName: "logproto.PusherRF1",
+ HandlerType: (*PusherRF1Server)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "Push",
+ Handler: _PusherRF1_Push_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "pkg/logproto/ingester-rf1.proto",
+}
diff --git a/pkg/logproto/ingester-rf1.proto b/pkg/logproto/ingester-rf1.proto
new file mode 100644
index 0000000000000..374d659175bcf
--- /dev/null
+++ b/pkg/logproto/ingester-rf1.proto
@@ -0,0 +1,14 @@
+syntax = "proto3";
+
+package logproto;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/timestamp.proto";
+import "pkg/logproto/logproto.proto";
+import "pkg/push/push.proto";
+
+option go_package = "github.com/grafana/loki/v3/pkg/logproto";
+
+service PusherRF1 {
+ rpc Push(PushRequest) returns (PushResponse) {}
+}
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index 4c8da5de23aea..48deb5151bb57 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -246,6 +246,21 @@ func applyConfigToRings(r, defaults *ConfigWrapper, rc lokiring.RingConfig, merg
r.Ingester.LifecyclerConfig.ObservePeriod = rc.ObservePeriod
}
+ if mergeWithExisting {
+ r.IngesterRF1.LifecyclerConfig.RingConfig.KVStore = rc.KVStore
+ r.IngesterRF1.LifecyclerConfig.HeartbeatPeriod = rc.HeartbeatPeriod
+ r.IngesterRF1.LifecyclerConfig.RingConfig.HeartbeatTimeout = rc.HeartbeatTimeout
+ r.IngesterRF1.LifecyclerConfig.TokensFilePath = rc.TokensFilePath
+ r.IngesterRF1.LifecyclerConfig.RingConfig.ZoneAwarenessEnabled = rc.ZoneAwarenessEnabled
+ r.IngesterRF1.LifecyclerConfig.ID = rc.InstanceID
+ r.IngesterRF1.LifecyclerConfig.InfNames = rc.InstanceInterfaceNames
+ r.IngesterRF1.LifecyclerConfig.Port = rc.InstancePort
+ r.IngesterRF1.LifecyclerConfig.Addr = rc.InstanceAddr
+ r.IngesterRF1.LifecyclerConfig.Zone = rc.InstanceZone
+ r.IngesterRF1.LifecyclerConfig.ListenPort = rc.ListenPort
+ r.IngesterRF1.LifecyclerConfig.ObservePeriod = rc.ObservePeriod
+ }
+
if mergeWithExisting {
r.Pattern.LifecyclerConfig.RingConfig.KVStore = rc.KVStore
r.Pattern.LifecyclerConfig.HeartbeatPeriod = rc.HeartbeatPeriod
@@ -669,6 +684,7 @@ func applyIngesterFinalSleep(cfg *ConfigWrapper) {
func applyIngesterReplicationFactor(cfg *ConfigWrapper) {
cfg.Ingester.LifecyclerConfig.RingConfig.ReplicationFactor = cfg.Common.ReplicationFactor
+ cfg.IngesterRF1.LifecyclerConfig.RingConfig.ReplicationFactor = cfg.Common.ReplicationFactor
}
// applyChunkRetain is used to set chunk retain based on having an index query cache configured
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index 0b2f2a3c91058..ecc3c7dbc4daa 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -11,6 +11,8 @@ import (
rt "runtime"
"time"
+ ingester_rf1 "github.com/grafana/loki/v3/pkg/ingester-rf1"
+
"go.uber.org/atomic"
"github.com/fatih/color"
@@ -87,7 +89,9 @@ type Config struct {
QueryRange queryrange.Config `yaml:"query_range,omitempty"`
Ruler ruler.Config `yaml:"ruler,omitempty"`
IngesterClient ingester_client.Config `yaml:"ingester_client,omitempty"`
+ IngesterRF1Client ingester_client.Config `yaml:"ingester_rf1_client,omitempty"`
Ingester ingester.Config `yaml:"ingester,omitempty"`
+ IngesterRF1 ingester_rf1.Config `yaml:"ingester_rf1,omitempty"`
Pattern pattern.Config `yaml:"pattern_ingester,omitempty"`
IndexGateway indexgateway.Config `yaml:"index_gateway"`
BloomCompactor bloomcompactor.Config `yaml:"bloom_compactor,omitempty" category:"experimental"`
@@ -159,7 +163,9 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
c.CompactorHTTPClient.RegisterFlags(f)
c.CompactorGRPCClient.RegisterFlags(f)
c.IngesterClient.RegisterFlags(f)
+ //c.IngesterRF1Client.RegisterFlags(f)
c.Ingester.RegisterFlags(f)
+ c.IngesterRF1.RegisterFlags(f)
c.StorageConfig.RegisterFlags(f)
c.IndexGateway.RegisterFlags(f)
c.BloomGateway.RegisterFlags(f)
@@ -332,6 +338,8 @@ type Loki struct {
TenantLimits validation.TenantLimits
distributor *distributor.Distributor
Ingester ingester.Interface
+ IngesterRF1 ingester_rf1.Interface
+ IngesterRF1RingClient *ingester_rf1.RingClient
PatternIngester *pattern.Ingester
PatternRingClient pattern.RingClient
Querier querier.Querier
@@ -610,6 +618,15 @@ func (t *Loki) readyHandler(sm *services.Manager, shutdownRequested *atomic.Bool
}
}
+ // Ingester RF1 has a special check that makes sure that it was able to register into the ring,
+ // and that all other ring entries are OK too.
+ if t.IngesterRF1 != nil {
+ if err := t.IngesterRF1.CheckReady(r.Context()); err != nil {
+			http.Error(w, "Ingester RF1 not ready: "+err.Error(), http.StatusServiceUnavailable)
+ return
+ }
+ }
+
// Query Frontend has a special check that makes sure that a querier is attached before it signals
// itself as ready
if t.frontend != nil {
@@ -642,6 +659,8 @@ func (t *Loki) setupModuleManager() error {
mm.RegisterModule(Store, t.initStore, modules.UserInvisibleModule)
mm.RegisterModule(Querier, t.initQuerier)
mm.RegisterModule(Ingester, t.initIngester)
+ mm.RegisterModule(IngesterRF1, t.initIngesterRF1)
+ mm.RegisterModule(IngesterRF1RingClient, t.initIngesterRF1RingClient, modules.UserInvisibleModule)
mm.RegisterModule(IngesterQuerier, t.initIngesterQuerier)
mm.RegisterModule(IngesterGRPCInterceptors, t.initIngesterGRPCInterceptors, modules.UserInvisibleModule)
mm.RegisterModule(QueryFrontendTripperware, t.initQueryFrontendMiddleware, modules.UserInvisibleModule)
@@ -679,8 +698,9 @@ func (t *Loki) setupModuleManager() error {
Overrides: {RuntimeConfig},
OverridesExporter: {Overrides, Server},
TenantConfigs: {RuntimeConfig},
- Distributor: {Ring, Server, Overrides, TenantConfigs, PatternRingClient, Analytics},
+ Distributor: {Ring, Server, Overrides, TenantConfigs, PatternRingClient, IngesterRF1RingClient, Analytics},
Store: {Overrides, IndexGatewayRing},
+ IngesterRF1: {Store, Server, MemberlistKV, TenantConfigs, Analytics},
Ingester: {Store, Server, MemberlistKV, TenantConfigs, Analytics},
Querier: {Store, Ring, Server, IngesterQuerier, PatternRingClient, Overrides, Analytics, CacheGenerationLoader, QuerySchedulerRing},
QueryFrontendTripperware: {Server, Overrides, TenantConfigs},
@@ -697,6 +717,7 @@ func (t *Loki) setupModuleManager() error {
BloomBuilder: {Server, BloomStore, Analytics, Store},
PatternIngester: {Server, MemberlistKV, Analytics},
PatternRingClient: {Server, MemberlistKV, Analytics},
+ IngesterRF1RingClient: {Server, MemberlistKV, Analytics},
IngesterQuerier: {Ring},
QuerySchedulerRing: {Overrides, MemberlistKV},
IndexGatewayRing: {Overrides, MemberlistKV},
@@ -704,10 +725,10 @@ func (t *Loki) setupModuleManager() error {
MemberlistKV: {Server},
Read: {QueryFrontend, Querier},
- Write: {Ingester, Distributor, PatternIngester},
+ Write: {Ingester, IngesterRF1, Distributor, PatternIngester},
Backend: {QueryScheduler, Ruler, Compactor, IndexGateway, BloomGateway, BloomCompactor},
- All: {QueryScheduler, QueryFrontend, Querier, Ingester, PatternIngester, Distributor, Ruler, Compactor},
+ All: {QueryScheduler, QueryFrontend, Querier, Ingester, IngesterRF1, PatternIngester, Distributor, Ruler, Compactor},
}
if t.Cfg.Querier.PerRequestLimitsEnabled {
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 204cecd0ce3ad..2b3fb852918d1 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -15,6 +15,8 @@ import (
"strings"
"time"
+ ingester_rf1 "github.com/grafana/loki/v3/pkg/ingester-rf1"
+
"github.com/NYTimes/gziphandler"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
@@ -102,6 +104,8 @@ const (
Querier string = "querier"
CacheGenerationLoader string = "cache-generation-loader"
Ingester string = "ingester"
+ IngesterRF1 string = "ingester-rf1"
+ IngesterRF1RingClient string = "ingester-rf1-ring-client"
PatternIngester string = "pattern-ingester"
PatternRingClient string = "pattern-ring-client"
IngesterQuerier string = "ingester-querier"
@@ -328,6 +332,13 @@ func (t *Loki) initDistributor() (services.Service, error) {
}
t.Tee = distributor.WrapTee(t.Tee, patternTee)
}
+ if t.Cfg.IngesterRF1.Enabled {
+ rf1Tee, err := ingester_rf1.NewTee(t.Cfg.IngesterRF1, t.IngesterRF1RingClient, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer, util_log.Logger)
+ if err != nil {
+ return nil, err
+ }
+ t.Tee = distributor.WrapTee(t.Tee, rf1Tee)
+ }
var err error
logger := log.With(util_log.Logger, "component", "distributor")
@@ -618,6 +629,67 @@ func (t *Loki) initIngester() (_ services.Service, err error) {
return t.Ingester, nil
}
+func (t *Loki) initIngesterRF1() (_ services.Service, err error) {
+ if !t.Cfg.IngesterRF1.Enabled {
+ return nil, nil
+ }
+
+ logger := log.With(util_log.Logger, "component", "ingester-rf1")
+ t.Cfg.IngesterRF1.LifecyclerConfig.ListenPort = t.Cfg.Server.GRPCListenPort
+
+ if t.Cfg.IngesterRF1.ShutdownMarkerPath == "" && t.Cfg.Common.PathPrefix != "" {
+ t.Cfg.IngesterRF1.ShutdownMarkerPath = t.Cfg.Common.PathPrefix
+ }
+ if t.Cfg.IngesterRF1.ShutdownMarkerPath == "" {
+		level.Warn(util_log.Logger).Log("msg", "The config setting shutdown marker path is not set. The /ingester-rf1/prepare_shutdown endpoint won't work")
+ }
+
+ t.IngesterRF1, err = ingester_rf1.New(t.Cfg.IngesterRF1, t.Cfg.IngesterRF1Client, t.Store, t.Overrides, t.tenantConfigs, prometheus.DefaultRegisterer, t.Cfg.Distributor.WriteFailuresLogging, t.Cfg.MetricsNamespace, logger, t.UsageTracker, t.ring)
+ if err != nil {
+		level.Error(logger).Log("msg", "failed to initialize ingester-rf1", "err", err)
+ return
+ }
+
+ if t.Cfg.IngesterRF1.Wrapper != nil {
+ t.IngesterRF1 = t.Cfg.IngesterRF1.Wrapper.Wrap(t.IngesterRF1)
+ }
+
+	logproto.RegisterPusherRF1Server(t.Server.GRPC, t.IngesterRF1)
+
+ t.Server.HTTP.Path("/ingester-rf1/ring").Methods("GET", "POST").Handler(t.IngesterRF1)
+
+ if t.Cfg.InternalServer.Enable {
+ t.InternalServer.HTTP.Path("/ingester-rf1/ring").Methods("GET", "POST").Handler(t.IngesterRF1)
+ }
+
+ httpMiddleware := middleware.Merge(
+ serverutil.RecoveryHTTPMiddleware,
+ )
+ t.Server.HTTP.Methods("GET", "POST").Path("/flush").Handler(
+ httpMiddleware.Wrap(http.HandlerFunc(t.IngesterRF1.FlushHandler)),
+ )
+ t.Server.HTTP.Methods("POST", "GET", "DELETE").Path("/ingester-rf1/prepare_shutdown").Handler(
+ httpMiddleware.Wrap(http.HandlerFunc(t.IngesterRF1.PrepareShutdown)),
+ )
+ t.Server.HTTP.Methods("POST", "GET").Path("/ingester-rf1/shutdown").Handler(
+ httpMiddleware.Wrap(http.HandlerFunc(t.IngesterRF1.ShutdownHandler)),
+ )
+ return t.IngesterRF1, nil
+}
+
+func (t *Loki) initIngesterRF1RingClient() (_ services.Service, err error) {
+ if !t.Cfg.IngesterRF1.Enabled {
+ return nil, nil
+ }
+ ringClient, err := ingester_rf1.NewRingClient(t.Cfg.IngesterRF1, t.Cfg.MetricsNamespace, prometheus.DefaultRegisterer, util_log.Logger)
+ if err != nil {
+ return nil, err
+ }
+ t.IngesterRF1RingClient = ringClient
+ return ringClient, nil
+}
+
func (t *Loki) initPatternIngester() (_ services.Service, err error) {
if !t.Cfg.Pattern.Enabled {
return nil, nil
diff --git a/pkg/push/push-rf1.pb.go b/pkg/push/push-rf1.pb.go
new file mode 100644
index 0000000000000..7d87f4a1cc718
--- /dev/null
+++ b/pkg/push/push-rf1.pb.go
@@ -0,0 +1,128 @@
+// Code generated by protoc-gen-gogo. DO NOT EDIT.
+// source: pkg/push/push-rf1.proto
+
+package push
+
+import (
+ context "context"
+ fmt "fmt"
+ _ "github.com/gogo/protobuf/gogoproto"
+ proto "github.com/gogo/protobuf/proto"
+ _ "github.com/gogo/protobuf/types"
+ grpc "google.golang.org/grpc"
+ codes "google.golang.org/grpc/codes"
+ status "google.golang.org/grpc/status"
+ math "math"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
+
+func init() { proto.RegisterFile("pkg/push/push-rf1.proto", fileDescriptor_4b1742ccc5fd9087) }
+
+var fileDescriptor_4b1742ccc5fd9087 = []byte{
+ // 232 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x2f, 0xc8, 0x4e, 0xd7,
+ 0x2f, 0x28, 0x2d, 0xce, 0x00, 0x13, 0xba, 0x45, 0x69, 0x86, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9,
+ 0x42, 0x1c, 0x39, 0xf9, 0xe9, 0x60, 0x96, 0x94, 0x48, 0x7a, 0x7e, 0x7a, 0x3e, 0x98, 0xa9, 0x0f,
+ 0x62, 0x41, 0xe4, 0xa5, 0xe4, 0xd3, 0xf3, 0xf3, 0xd3, 0x73, 0x52, 0xf5, 0xc1, 0xbc, 0xa4, 0xd2,
+ 0x34, 0xfd, 0x92, 0xcc, 0xdc, 0xd4, 0xe2, 0x92, 0xc4, 0xdc, 0x02, 0xa8, 0x02, 0x61, 0x14, 0x93,
+ 0x21, 0x82, 0x46, 0x2e, 0x5c, 0x9c, 0x01, 0xa5, 0xc5, 0x19, 0xa9, 0x45, 0x41, 0x6e, 0x86, 0x42,
+ 0xe6, 0x5c, 0x2c, 0x20, 0x8e, 0x90, 0xa8, 0x1e, 0xcc, 0x2e, 0x3d, 0x10, 0x3f, 0x28, 0xb5, 0xb0,
+ 0x34, 0xb5, 0xb8, 0x44, 0x4a, 0x0c, 0x5d, 0xb8, 0xb8, 0x20, 0x3f, 0xaf, 0x38, 0x55, 0x89, 0xc1,
+ 0x29, 0xec, 0xc2, 0x43, 0x39, 0x86, 0x1b, 0x0f, 0xe5, 0x18, 0x3e, 0x3c, 0x94, 0x63, 0x6c, 0x78,
+ 0x24, 0xc7, 0xb8, 0xe2, 0x91, 0x1c, 0xe3, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x3e,
+ 0x78, 0x24, 0xc7, 0xf8, 0xe2, 0x91, 0x1c, 0xc3, 0x87, 0x47, 0x72, 0x8c, 0x13, 0x1e, 0xcb, 0x31,
+ 0x5c, 0x78, 0x2c, 0xc7, 0x70, 0xe3, 0xb1, 0x1c, 0x43, 0x94, 0x42, 0x7a, 0x66, 0x49, 0x46, 0x69,
+ 0x92, 0x5e, 0x72, 0x7e, 0xae, 0x7e, 0x7a, 0x51, 0x62, 0x5a, 0x62, 0x5e, 0xa2, 0x7e, 0x4e, 0x7e,
+ 0x76, 0xa6, 0x3e, 0xcc, 0xa1, 0x49, 0x6c, 0x60, 0xdb, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff,
+ 0x48, 0x19, 0x4c, 0x81, 0x15, 0x01, 0x00, 0x00,
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// PusherRF1Client is the client API for PusherRF1 service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
+type PusherRF1Client interface {
+ Push(ctx context.Context, in *PushRequest, opts ...grpc.CallOption) (*PushResponse, error)
+}
+
+type pusherRF1Client struct {
+ cc *grpc.ClientConn
+}
+
+func NewPusherRF1Client(cc *grpc.ClientConn) PusherRF1Client {
+ return &pusherRF1Client{cc}
+}
+
+func (c *pusherRF1Client) Push(ctx context.Context, in *PushRequest, opts ...grpc.CallOption) (*PushResponse, error) {
+ out := new(PushResponse)
+ err := c.cc.Invoke(ctx, "/logproto.PusherRF1/Push", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// PusherRF1Server is the server API for PusherRF1 service.
+type PusherRF1Server interface {
+ Push(context.Context, *PushRequest) (*PushResponse, error)
+}
+
+// UnimplementedPusherRF1Server can be embedded to have forward compatible implementations.
+type UnimplementedPusherRF1Server struct {
+}
+
+func (*UnimplementedPusherRF1Server) Push(ctx context.Context, req *PushRequest) (*PushResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Push not implemented")
+}
+
+func RegisterPusherRF1Server(s *grpc.Server, srv PusherRF1Server) {
+ s.RegisterService(&_PusherRF1_serviceDesc, srv)
+}
+
+func _PusherRF1_Push_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(PushRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(PusherRF1Server).Push(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/logproto.PusherRF1/Push",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(PusherRF1Server).Push(ctx, req.(*PushRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+var _PusherRF1_serviceDesc = grpc.ServiceDesc{
+ ServiceName: "logproto.PusherRF1",
+ HandlerType: (*PusherRF1Server)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "Push",
+ Handler: _PusherRF1_Push_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "pkg/push/push-rf1.proto",
+}
diff --git a/pkg/push/push-rf1.proto b/pkg/push/push-rf1.proto
new file mode 100644
index 0000000000000..1c5a3e039341d
--- /dev/null
+++ b/pkg/push/push-rf1.proto
@@ -0,0 +1,13 @@
+syntax = "proto3";
+
+package logproto;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/timestamp.proto";
+import "pkg/push/push.proto";
+
+option go_package = "github.com/grafana/loki/pkg/push";
+
+service PusherRF1 {
+ rpc Push(PushRequest) returns (PushResponse) {}
+}
diff --git a/pkg/querier/querier_mock_test.go b/pkg/querier/querier_mock_test.go
index 20c3b9f1b77c2..60d26fee28d2a 100644
--- a/pkg/querier/querier_mock_test.go
+++ b/pkg/querier/querier_mock_test.go
@@ -8,6 +8,7 @@ import (
"time"
"github.com/grafana/loki/v3/pkg/logql/log"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/loghttp"
@@ -339,6 +340,9 @@ func (s *storeMock) GetChunks(ctx context.Context, userID string, from, through
return args.Get(0).([][]chunk.Chunk), args.Get(0).([]*fetcher.Fetcher), args.Error(2)
}
+func (s *storeMock) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return errors.New("storeMock.PutWal() has not been mocked")
+}
func (s *storeMock) Put(_ context.Context, _ []chunk.Chunk) error {
return errors.New("storeMock.Put() has not been mocked")
}
diff --git a/pkg/storage/chunk/client/aws/dynamodb_storage_client.go b/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
index 87fd24e127db0..b70c4269ede23 100644
--- a/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
+++ b/pkg/storage/chunk/client/aws/dynamodb_storage_client.go
@@ -33,6 +33,7 @@ import (
client_util "github.com/grafana/loki/v3/pkg/storage/chunk/client/util"
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/series/index"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util"
"github.com/grafana/loki/v3/pkg/util/log"
"github.com/grafana/loki/v3/pkg/util/math"
@@ -118,6 +119,10 @@ type dynamoDBStorageClient struct {
metrics *dynamoDBMetrics
}
+func (a dynamoDBStorageClient) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return errors.New("not implemented")
+}
+
// NewDynamoDBIndexClient makes a new DynamoDB-backed IndexClient.
func NewDynamoDBIndexClient(cfg DynamoDBConfig, schemaCfg config.SchemaConfig, reg prometheus.Registerer) (index.Client, error) {
return newDynamoDBStorageClient(cfg, schemaCfg, reg)
diff --git a/pkg/storage/chunk/client/cassandra/storage_client.go b/pkg/storage/chunk/client/cassandra/storage_client.go
index d847f9d6b7e2d..732491de2df8a 100644
--- a/pkg/storage/chunk/client/cassandra/storage_client.go
+++ b/pkg/storage/chunk/client/cassandra/storage_client.go
@@ -23,6 +23,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk/client/util"
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/series/index"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
util_log "github.com/grafana/loki/v3/pkg/util/log"
)
@@ -567,6 +568,10 @@ func (s *ObjectClient) reconnectReadSession() error {
return nil
}
+func (s *ObjectClient) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return errors.New("not implemented")
+}
+
// PutChunks implements chunk.ObjectClient.
func (s *ObjectClient) PutChunks(ctx context.Context, chunks []chunk.Chunk) error {
err := s.putChunks(ctx, chunks)
diff --git a/pkg/storage/chunk/client/client.go b/pkg/storage/chunk/client/client.go
index 36b65d40b6c2e..800086c6616be 100644
--- a/pkg/storage/chunk/client/client.go
+++ b/pkg/storage/chunk/client/client.go
@@ -6,6 +6,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk"
"github.com/grafana/loki/v3/pkg/storage/stores/series/index"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
)
var (
@@ -18,6 +19,7 @@ var (
// Client is for storing and retrieving chunks.
type Client interface {
Stop()
+ PutWal(ctx context.Context, writer *wal.SegmentWriter) error
PutChunks(ctx context.Context, chunks []chunk.Chunk) error
GetChunks(ctx context.Context, chunks []chunk.Chunk) ([]chunk.Chunk, error)
DeleteChunk(ctx context.Context, userID, chunkID string) error
diff --git a/pkg/storage/chunk/client/gcp/bigtable_object_client.go b/pkg/storage/chunk/client/gcp/bigtable_object_client.go
index d878bc19bccf0..992e4bff926e0 100644
--- a/pkg/storage/chunk/client/gcp/bigtable_object_client.go
+++ b/pkg/storage/chunk/client/gcp/bigtable_object_client.go
@@ -12,6 +12,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk"
"github.com/grafana/loki/v3/pkg/storage/chunk/client"
"github.com/grafana/loki/v3/pkg/storage/config"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util/math"
)
@@ -83,6 +84,10 @@ func (s *bigtableObjectClient) PutChunks(ctx context.Context, chunks []chunk.Chu
return nil
}
+func (s *bigtableObjectClient) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return errors.New("not implemented")
+}
+
func (s *bigtableObjectClient) GetChunks(ctx context.Context, input []chunk.Chunk) ([]chunk.Chunk, error) {
sp, ctx := ot.StartSpanFromContext(ctx, "GetChunks")
defer sp.Finish()
diff --git a/pkg/storage/chunk/client/grpc/storage_client.go b/pkg/storage/chunk/client/grpc/storage_client.go
index 42ee00507e412..8c1284ba1de49 100644
--- a/pkg/storage/chunk/client/grpc/storage_client.go
+++ b/pkg/storage/chunk/client/grpc/storage_client.go
@@ -9,6 +9,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk"
"github.com/grafana/loki/v3/pkg/storage/config"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
)
type StorageClient struct {
@@ -66,6 +67,10 @@ func (s *StorageClient) PutChunks(ctx context.Context, chunks []chunk.Chunk) err
return nil
}
+func (s *StorageClient) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return errors.New("not implemented")
+}
+
func (s *StorageClient) DeleteChunk(ctx context.Context, _, chunkID string) error {
chunkInfo := &ChunkID{ChunkID: chunkID}
_, err := s.client.DeleteChunks(ctx, chunkInfo)
diff --git a/pkg/storage/chunk/client/metrics.go b/pkg/storage/chunk/client/metrics.go
index 76ca20a1bac5f..5e1ba5b41869b 100644
--- a/pkg/storage/chunk/client/metrics.go
+++ b/pkg/storage/chunk/client/metrics.go
@@ -7,6 +7,7 @@ import (
"github.com/prometheus/client_golang/prometheus/promauto"
"github.com/grafana/loki/v3/pkg/storage/chunk"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util/constants"
)
@@ -60,6 +61,10 @@ func (c MetricsChunkClient) Stop() {
c.Client.Stop()
}
+func (c MetricsChunkClient) PutWal(ctx context.Context, writer *wal.SegmentWriter) error {
+ return c.Client.PutWal(ctx, writer)
+}
+
func (c MetricsChunkClient) PutChunks(ctx context.Context, chunks []chunk.Chunk) error {
if err := c.Client.PutChunks(ctx, chunks); err != nil {
return err
diff --git a/pkg/storage/chunk/client/object_client.go b/pkg/storage/chunk/client/object_client.go
index 7a3b2e40c1663..8a22686fb113a 100644
--- a/pkg/storage/chunk/client/object_client.go
+++ b/pkg/storage/chunk/client/object_client.go
@@ -13,6 +13,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk"
"github.com/grafana/loki/v3/pkg/storage/chunk/client/util"
"github.com/grafana/loki/v3/pkg/storage/config"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
)
// ObjectClient is used to store arbitrary data in Object Store (S3/GCS/Azure/...)
@@ -105,6 +106,15 @@ func (o *client) Stop() {
o.store.Stop()
}
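+
+// PutWal serializes the WAL segment and stores it as a single timestamped
+// object.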
+func (o *client) PutWal(ctx context.Context, segment *wal.SegmentWriter) error {
+ buffer := bytes.NewBuffer(nil)
+ _, err := segment.WriteTo(buffer)
+ if err != nil {
+ return err
+ }
+ return o.store.PutObject(ctx, "wal-segment-"+time.Now().UTC().Format(time.RFC3339Nano), bytes.NewReader(buffer.Bytes()))
+}
+
// PutChunks stores the provided chunks in the configured backend. If multiple errors are
// returned, the last one sequentially will be propagated up.
func (o *client) PutChunks(ctx context.Context, chunks []chunk.Chunk) error {
diff --git a/pkg/storage/store.go b/pkg/storage/store.go
index db4a0a498e17d..2f5d88941f36a 100644
--- a/pkg/storage/store.go
+++ b/pkg/storage/store.go
@@ -7,6 +7,7 @@ import (
"time"
"github.com/grafana/loki/v3/pkg/storage/types"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util/httpreq"
lokilog "github.com/grafana/loki/v3/pkg/logql/log"
@@ -608,3 +609,7 @@ func (f failingChunkWriter) Put(_ context.Context, _ []chunk.Chunk) error {
func (f failingChunkWriter) PutOne(_ context.Context, _, _ model.Time, _ chunk.Chunk) error {
return errWritingChunkUnsupported
}
+
+func (f failingChunkWriter) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return errWritingChunkUnsupported
+}
diff --git a/pkg/storage/stores/composite_store.go b/pkg/storage/stores/composite_store.go
index 834d9602727fc..484d8574f3cb3 100644
--- a/pkg/storage/stores/composite_store.go
+++ b/pkg/storage/stores/composite_store.go
@@ -15,10 +15,16 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/index/stats"
tsdb_index "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/index"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/sharding"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util"
)
+type WalSegmentWriter interface {
+ PutWal(ctx context.Context, writer *wal.SegmentWriter) error
+}
+
type ChunkWriter interface {
+ WalSegmentWriter
Put(ctx context.Context, chunks []chunk.Chunk) error
PutOne(ctx context.Context, from, through model.Time, chunk chunk.Chunk) error
}
@@ -45,6 +51,7 @@ type Store interface {
ChunkWriter
ChunkFetcher
ChunkFetcherProvider
+ WalSegmentWriter
Stop()
}
@@ -88,6 +95,12 @@ func (c *CompositeStore) Stores() []Store {
return stores
}
+func (c CompositeStore) PutWal(ctx context.Context, writer *wal.SegmentWriter) error {
+ // TODO: Understand how to use the forStores method to correctly pick a store for this
+ err := c.stores[0].PutWal(ctx, writer)
+ return err
+}
+
func (c CompositeStore) Put(ctx context.Context, chunks []chunk.Chunk) error {
for _, chunk := range chunks {
err := c.forStores(ctx, chunk.From, chunk.Through, func(innerCtx context.Context, from, through model.Time, store Store) error {
diff --git a/pkg/storage/stores/composite_store_test.go b/pkg/storage/stores/composite_store_test.go
index 28052c528f7c2..064e19ca8bbf9 100644
--- a/pkg/storage/stores/composite_store_test.go
+++ b/pkg/storage/stores/composite_store_test.go
@@ -9,6 +9,7 @@ import (
"github.com/pkg/errors"
"github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/dskit/test"
"github.com/prometheus/common/model"
@@ -23,6 +24,10 @@ import (
type mockStore int
+func (m mockStore) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return nil
+}
+
func (m mockStore) Put(_ context.Context, _ []chunk.Chunk) error {
return nil
}
diff --git a/pkg/storage/stores/series_store_write.go b/pkg/storage/stores/series_store_write.go
index a36ae4510b8e3..2b134472eb2ea 100644
--- a/pkg/storage/stores/series_store_write.go
+++ b/pkg/storage/stores/series_store_write.go
@@ -13,6 +13,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk/fetcher"
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/index"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
"github.com/grafana/loki/v3/pkg/util/constants"
"github.com/grafana/loki/v3/pkg/util/spanlogger"
)
@@ -65,6 +66,10 @@ func (c *Writer) Put(ctx context.Context, chunks []chunk.Chunk) error {
return nil
}
+func (c *Writer) PutWal(ctx context.Context, segment *wal.SegmentWriter) error {
+ return c.fetcher.Client().PutWal(ctx, segment)
+}
+
// PutOne implements Store
func (c *Writer) PutOne(ctx context.Context, from, through model.Time, chk chunk.Chunk) error {
sp, ctx := opentracing.StartSpanFromContext(ctx, "SeriesStore.PutOne")
diff --git a/pkg/storage/stores/series_store_write_test.go b/pkg/storage/stores/series_store_write_test.go
index cac84a17ebfbf..882fbaa00908b 100644
--- a/pkg/storage/stores/series_store_write_test.go
+++ b/pkg/storage/stores/series_store_write_test.go
@@ -13,6 +13,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk"
"github.com/grafana/loki/v3/pkg/storage/chunk/fetcher"
"github.com/grafana/loki/v3/pkg/storage/config"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
)
type mockCache struct {
@@ -55,6 +56,10 @@ type mockChunksClient struct {
called int
}
+func (m *mockChunksClient) PutWal(_ context.Context, _ *wal.SegmentWriter) error {
+ return nil
+}
+
func (m *mockChunksClient) PutChunks(_ context.Context, _ []chunk.Chunk) error {
m.called++
return nil
diff --git a/pkg/storage/util_test.go b/pkg/storage/util_test.go
index 5ef02e74b1caf..b359132408902 100644
--- a/pkg/storage/util_test.go
+++ b/pkg/storage/util_test.go
@@ -26,6 +26,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores"
index_stats "github.com/grafana/loki/v3/pkg/storage/stores/index/stats"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/sharding"
+ "github.com/grafana/loki/v3/pkg/storage/wal"
loki_util "github.com/grafana/loki/v3/pkg/util"
"github.com/grafana/loki/v3/pkg/util/constants"
util_log "github.com/grafana/loki/v3/pkg/util/log"
@@ -185,7 +186,8 @@ func newMockChunkStore(chunkFormat byte, headfmt chunkenc.HeadBlockFmt, streams
return &mockChunkStore{schemas: config.SchemaConfig{}, chunks: chunks, client: &mockChunkStoreClient{chunks: chunks, scfg: config.SchemaConfig{}}}
}
-func (m *mockChunkStore) Put(_ context.Context, _ []chunk.Chunk) error { return nil }
+func (m *mockChunkStore) PutWal(_ context.Context, _ *wal.SegmentWriter) error { return nil }
+func (m *mockChunkStore) Put(_ context.Context, _ []chunk.Chunk) error { return nil }
func (m *mockChunkStore) PutOne(_ context.Context, _, _ model.Time, _ chunk.Chunk) error {
return nil
}
@@ -292,6 +294,7 @@ func (m mockChunkStoreClient) Stop() {
panic("implement me")
}
+func (m mockChunkStoreClient) PutWal(_ context.Context, _ *wal.SegmentWriter) error { return nil }
func (m mockChunkStoreClient) PutChunks(_ context.Context, _ []chunk.Chunk) error {
return nil
}
diff --git a/pkg/storage/wal/segment.go b/pkg/storage/wal/segment.go
index 3e4fb0c2fa302..fa8e42cc94a5d 100644
--- a/pkg/storage/wal/segment.go
+++ b/pkg/storage/wal/segment.go
@@ -10,11 +10,12 @@ import (
"sort"
"sync"
+ "go.uber.org/atomic"
+
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/storage"
"github.com/grafana/loki/v3/pkg/logproto"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
tsdbindex "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb/index"
"github.com/grafana/loki/v3/pkg/storage/wal/chunks"
"github.com/grafana/loki/v3/pkg/storage/wal/index"
@@ -29,12 +30,15 @@ var (
streamSegmentPool = sync.Pool{
New: func() interface{} {
return &streamSegment{
+ lock: &sync.Mutex{},
entries: make([]*logproto.Entry, 0, 4096),
}
},
}
+
// 512 KB - 20 MB
encodedWalSegmentBufferPool = pool.NewBuffer(512*1024, 20*1024*1024, 2)
+ tenantLabel = "__loki_tenant__"
)
func init() {
@@ -46,13 +50,15 @@ type streamID struct {
}
type SegmentWriter struct {
- streams map[streamID]*streamSegment
- buf1 encoding.Encbuf
- inputSize int64
- idxWriter *index.Writer
+ streams map[streamID]*streamSegment
+ buf1 encoding.Encbuf
+ inputSize atomic.Int64
+ idxWriter *index.Writer
+ consistencyMtx *sync.RWMutex
}
type streamSegment struct {
+ lock *sync.Mutex
lbls labels.Labels
entries []*logproto.Entry
tenantID string
@@ -74,36 +80,52 @@ func NewWalSegmentWriter() (*SegmentWriter, error) {
return nil, err
}
return &SegmentWriter{
- streams: make(map[streamID]*streamSegment, 64),
- buf1: encoding.EncWith(make([]byte, 0, 4)),
- idxWriter: idxWriter,
+ streams: make(map[streamID]*streamSegment, 64),
+ buf1: encoding.EncWith(make([]byte, 0, 4)),
+ idxWriter: idxWriter,
+ inputSize: atomic.Int64{},
+ consistencyMtx: &sync.RWMutex{},
}, nil
}
+func (b *SegmentWriter) getOrCreateStream(id streamID, lbls labels.Labels) *streamSegment {
+ b.consistencyMtx.RLock()
+ s, ok := b.streams[id]
+ b.consistencyMtx.RUnlock()
+ if ok {
+ return s
+ }
+ b.consistencyMtx.Lock()
+ defer b.consistencyMtx.Unlock()
+ // Check that another thread has not created the stream while we waited for the write lock
+ s, ok = b.streams[id]
+ if ok {
+ return s
+ }
+ if lbls.Get(tenantLabel) == "" {
+ lbls = labels.NewBuilder(lbls).Set(tenantLabel, id.tenant).Labels()
+ }
+ s = streamSegmentPool.Get().(*streamSegment)
+ s.Reset()
+ s.lbls = lbls
+ s.tenantID = id.tenant
+ b.streams[id] = s
+ return s
+}
+
// Labels are passed as a string, e.g. `{foo="bar",baz="qux"}` or `{foo="foo",baz="foo"}`; labels.Labels => symbols foo, baz, qux
func (b *SegmentWriter) Append(tenantID, labelsString string, lbls labels.Labels, entries []*logproto.Entry) {
if len(entries) == 0 {
return
}
for _, e := range entries {
- b.inputSize += int64(len(e.Line))
+ b.inputSize.Add(int64(len(e.Line)))
}
id := streamID{labels: labelsString, tenant: tenantID}
- s, ok := b.streams[id]
- if !ok {
- if lbls.Get(tsdb.TenantLabel) == "" {
- lbls = labels.NewBuilder(lbls).Set(tsdb.TenantLabel, tenantID).Labels()
- }
- s = streamSegmentPool.Get().(*streamSegment)
- s.Reset()
- s.lbls = lbls
- s.tenantID = tenantID
- s.maxt = entries[len(entries)-1].Timestamp.UnixNano()
- s.entries = append(s.entries, entries...)
- b.streams[id] = s
- return
- }
+ s := b.getOrCreateStream(id, lbls)
+ s.lock.Lock()
+ defer s.lock.Unlock()
for i, e := range entries {
if e.Timestamp.UnixNano() >= s.maxt {
s.entries = append(s.entries, entries[i])
@@ -250,12 +272,13 @@ func (b *SegmentWriter) Reset() {
streamSegmentPool.Put(s)
}
b.streams = make(map[streamID]*streamSegment, 64)
- b.inputSize = 0
+ b.buf1.Reset()
+ b.inputSize.Store(0)
}
func (b *SegmentWriter) ToReader() (io.ReadSeekCloser, error) {
// snappy compression rate is ~5x, but we cannot predict it, so we need to allocate a bigger buffer to avoid extra allocations
- buffer := encodedWalSegmentBufferPool.Get(int(b.inputSize / 3))
+ buffer := encodedWalSegmentBufferPool.Get(int(b.inputSize.Load() / 3))
_, err := b.WriteTo(buffer)
if err != nil {
return nil, fmt.Errorf("failed to write segment to create a reader: %w", err)
@@ -297,7 +320,7 @@ func (e *EncodedSegmentReader) Close() error {
// InputSize returns the total size of the input data written to the writer.
// It doesn't account for timestamps and labels.
func (b *SegmentWriter) InputSize() int64 {
- return b.inputSize
+ return b.inputSize.Load()
}
type SegmentReader struct {
diff --git a/pkg/storage/wal/segment_test.go b/pkg/storage/wal/segment_test.go
index db0e9959ebebe..0e14028bd0531 100644
--- a/pkg/storage/wal/segment_test.go
+++ b/pkg/storage/wal/segment_test.go
@@ -5,6 +5,7 @@ import (
"context"
"fmt"
"sort"
+ "sync"
"testing"
"time"
@@ -14,7 +15,6 @@ import (
"github.com/grafana/loki/v3/pkg/logproto"
"github.com/grafana/loki/v3/pkg/logql/syntax"
- "github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
"github.com/grafana/loki/v3/pkg/storage/wal/testdata"
"github.com/grafana/loki/pkg/push"
@@ -121,7 +121,7 @@ func TestWalSegmentWriter_Append(t *testing.T) {
require.True(t, ok)
lbs, err := syntax.ParseLabels(expected.labels)
require.NoError(t, err)
- lbs = append(lbs, labels.Label{Name: string(tsdb.TenantLabel), Value: expected.tenant})
+ lbs = append(lbs, labels.Label{Name: tenantLabel, Value: expected.tenant})
sort.Sort(lbs)
require.Equal(t, lbs, stream.lbls)
require.Equal(t, expected.entries, stream.entries)
@@ -130,6 +130,163 @@ func TestWalSegmentWriter_Append(t *testing.T) {
}
}
+func BenchmarkConcurrentAppends(t *testing.B) {
+ type appendArgs struct {
+ tenant string
+ labels labels.Labels
+ entries []*push.Entry
+ }
+
+ lbls := []labels.Labels{
+ labels.FromStrings("container", "foo", "namespace", "dev"),
+ labels.FromStrings("container", "bar", "namespace", "staging"),
+ labels.FromStrings("container", "bar", "namespace", "prod"),
+ }
+ characters := "abcdefghijklmnopqrstuvwxyz"
+ tenants := []string{}
+ // 676 unique tenants (26^2)
+ for i := 0; i < len(characters); i++ {
+ for j := 0; j < len(characters); j++ {
+ tenants = append(tenants, string(characters[i])+string(characters[j]))
+ }
+ }
+
+ workChan := make(chan *appendArgs)
+ var wg sync.WaitGroup
+ var w *SegmentWriter
+ for i := 0; i < 100; i++ {
+ wg.Add(1)
+ go func(i int) {
+ for args := range workChan {
+ w.Append(args.tenant, args.labels.String(), args.labels, args.entries)
+ }
+ wg.Done()
+ }(i)
+ }
+
+ t.ResetTimer()
+ for i := 0; i < t.N; i++ {
+ var err error
+ w, err = NewWalSegmentWriter()
+ require.NoError(t, err)
+
+ for _, lbl := range lbls {
+ for _, r := range tenants {
+ for i := 0; i < 10; i++ {
+ workChan <- &appendArgs{
+ tenant: r,
+ labels: lbl,
+ entries: []*push.Entry{
+ {Timestamp: time.Unix(0, int64(i)), Line: fmt.Sprintf("log line %d", i)},
+ },
+ }
+ }
+ }
+ }
+ }
+ close(workChan)
+ wg.Wait()
+}
+
+func TestConcurrentAppends(t *testing.T) {
+ type appendArgs struct {
+ tenant string
+ labels labels.Labels
+ entries []*push.Entry
+ }
+ dst := bytes.NewBuffer(nil)
+
+ w, err := NewWalSegmentWriter()
+ require.NoError(t, err)
+ var wg sync.WaitGroup
+ workChan := make(chan *appendArgs, 100)
+ for i := 0; i < 100; i++ {
+ wg.Add(1)
+ go func(i int) {
+ for args := range workChan {
+ w.Append(args.tenant, args.labels.String(), args.labels, args.entries)
+ }
+ wg.Done()
+ }(i)
+ }
+
+ lbls := []labels.Labels{
+ labels.FromStrings("container", "foo", "namespace", "dev"),
+ labels.FromStrings("container", "bar", "namespace", "staging"),
+ labels.FromStrings("container", "bar", "namespace", "prod"),
+ }
+ characters := "abcdefghijklmnopqrstuvwxyz"
+ tenants := []string{}
+ // 17576 unique tenants (26^3)
+ for i := 0; i < len(characters); i++ {
+ for j := 0; j < len(characters); j++ {
+ for k := 0; k < len(characters); k++ {
+ tenants = append(tenants, string(characters[i])+string(characters[j])+string(characters[k]))
+ }
+ }
+ }
+
+ msgsPerSeries := 10
+ msgsGenerated := 0
+ for _, r := range tenants {
+ for _, lbl := range lbls {
+ for i := 0; i < msgsPerSeries; i++ {
+ msgsGenerated++
+ workChan <- &appendArgs{
+ tenant: r,
+ labels: lbl,
+ entries: []*push.Entry{
+ {Timestamp: time.Unix(0, int64(i)), Line: fmt.Sprintf("log line %d", i)},
+ },
+ }
+ }
+ }
+ }
+ close(workChan)
+ wg.Wait()
+
+ n, err := w.WriteTo(dst)
+ require.NoError(t, err)
+ require.True(t, n > 0)
+
+ r, err := NewReader(dst.Bytes())
+ require.NoError(t, err)
+
+ iter, err := r.Series(context.Background())
+ require.NoError(t, err)
+
+ var expectedSeries, actualSeries []string
+
+ for _, tenant := range tenants {
+ for _, lbl := range lbls {
+ expectedSeries = append(expectedSeries, labels.NewBuilder(lbl).Set(tenantLabel, tenant).Labels().String())
+ }
+ }
+
+ msgsRead := 0
+ for iter.Next() {
+ actualSeries = append(actualSeries, iter.At().String())
+ chk, err := iter.ChunkReader(nil)
+ require.NoError(t, err)
+ // verify all lines
+ var i int
+ for chk.Next() {
+ ts, line := chk.At()
+ require.Equal(t, int64(i), ts)
+ require.Equal(t, fmt.Sprintf("log line %d", i), string(line))
+ msgsRead++
+ i++
+ }
+ require.NoError(t, chk.Err())
+ require.NoError(t, chk.Close())
+ require.Equal(t, msgsPerSeries, i)
+ }
+ require.NoError(t, iter.Err())
+ require.ElementsMatch(t, expectedSeries, actualSeries)
+ require.Equal(t, msgsGenerated, msgsRead)
+ t.Logf("Generated %d messages between %d tenants", msgsGenerated, len(tenants))
+}
+
func TestMultiTenantWrite(t *testing.T) {
w, err := NewWalSegmentWriter()
require.NoError(t, err)
@@ -167,7 +324,7 @@ func TestMultiTenantWrite(t *testing.T) {
for _, tenant := range tenants {
for _, lbl := range lbls {
- expectedSeries = append(expectedSeries, labels.NewBuilder(lbl).Set(tsdb.TenantLabel, tenant).Labels().String())
+ expectedSeries = append(expectedSeries, labels.NewBuilder(lbl).Set(tenantLabel, tenant).Labels().String())
}
}
diff --git a/vendor/github.com/grafana/loki/pkg/push/push-rf1.pb.go b/vendor/github.com/grafana/loki/pkg/push/push-rf1.pb.go
new file mode 100644
index 0000000000000..7d87f4a1cc718
--- /dev/null
+++ b/vendor/github.com/grafana/loki/pkg/push/push-rf1.pb.go
@@ -0,0 +1,128 @@
+// Code generated by protoc-gen-gogo. DO NOT EDIT.
+// source: pkg/push/push-rf1.proto
+
+package push
+
+import (
+ context "context"
+ fmt "fmt"
+ _ "github.com/gogo/protobuf/gogoproto"
+ proto "github.com/gogo/protobuf/proto"
+ _ "github.com/gogo/protobuf/types"
+ grpc "google.golang.org/grpc"
+ codes "google.golang.org/grpc/codes"
+ status "google.golang.org/grpc/status"
+ math "math"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
+
+func init() { proto.RegisterFile("pkg/push/push-rf1.proto", fileDescriptor_4b1742ccc5fd9087) }
+
+var fileDescriptor_4b1742ccc5fd9087 = []byte{
+ // 232 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x12, 0x2f, 0xc8, 0x4e, 0xd7,
+ 0x2f, 0x28, 0x2d, 0xce, 0x00, 0x13, 0xba, 0x45, 0x69, 0x86, 0x7a, 0x05, 0x45, 0xf9, 0x25, 0xf9,
+ 0x42, 0x1c, 0x39, 0xf9, 0xe9, 0x60, 0x96, 0x94, 0x48, 0x7a, 0x7e, 0x7a, 0x3e, 0x98, 0xa9, 0x0f,
+ 0x62, 0x41, 0xe4, 0xa5, 0xe4, 0xd3, 0xf3, 0xf3, 0xd3, 0x73, 0x52, 0xf5, 0xc1, 0xbc, 0xa4, 0xd2,
+ 0x34, 0xfd, 0x92, 0xcc, 0xdc, 0xd4, 0xe2, 0x92, 0xc4, 0xdc, 0x02, 0xa8, 0x02, 0x61, 0x14, 0x93,
+ 0x21, 0x82, 0x46, 0x2e, 0x5c, 0x9c, 0x01, 0xa5, 0xc5, 0x19, 0xa9, 0x45, 0x41, 0x6e, 0x86, 0x42,
+ 0xe6, 0x5c, 0x2c, 0x20, 0x8e, 0x90, 0xa8, 0x1e, 0xcc, 0x2e, 0x3d, 0x10, 0x3f, 0x28, 0xb5, 0xb0,
+ 0x34, 0xb5, 0xb8, 0x44, 0x4a, 0x0c, 0x5d, 0xb8, 0xb8, 0x20, 0x3f, 0xaf, 0x38, 0x55, 0x89, 0xc1,
+ 0x29, 0xec, 0xc2, 0x43, 0x39, 0x86, 0x1b, 0x0f, 0xe5, 0x18, 0x3e, 0x3c, 0x94, 0x63, 0x6c, 0x78,
+ 0x24, 0xc7, 0xb8, 0xe2, 0x91, 0x1c, 0xe3, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x3e,
+ 0x78, 0x24, 0xc7, 0xf8, 0xe2, 0x91, 0x1c, 0xc3, 0x87, 0x47, 0x72, 0x8c, 0x13, 0x1e, 0xcb, 0x31,
+ 0x5c, 0x78, 0x2c, 0xc7, 0x70, 0xe3, 0xb1, 0x1c, 0x43, 0x94, 0x42, 0x7a, 0x66, 0x49, 0x46, 0x69,
+ 0x92, 0x5e, 0x72, 0x7e, 0xae, 0x7e, 0x7a, 0x51, 0x62, 0x5a, 0x62, 0x5e, 0xa2, 0x7e, 0x4e, 0x7e,
+ 0x76, 0xa6, 0x3e, 0xcc, 0xa1, 0x49, 0x6c, 0x60, 0xdb, 0x8c, 0x01, 0x01, 0x00, 0x00, 0xff, 0xff,
+ 0x48, 0x19, 0x4c, 0x81, 0x15, 0x01, 0x00, 0x00,
+}
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ context.Context
+var _ grpc.ClientConn
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the grpc package it is being compiled against.
+const _ = grpc.SupportPackageIsVersion4
+
+// PusherRF1Client is the client API for PusherRF1 service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
+type PusherRF1Client interface {
+ Push(ctx context.Context, in *PushRequest, opts ...grpc.CallOption) (*PushResponse, error)
+}
+
+type pusherRF1Client struct {
+ cc *grpc.ClientConn
+}
+
+func NewPusherRF1Client(cc *grpc.ClientConn) PusherRF1Client {
+ return &pusherRF1Client{cc}
+}
+
+func (c *pusherRF1Client) Push(ctx context.Context, in *PushRequest, opts ...grpc.CallOption) (*PushResponse, error) {
+ out := new(PushResponse)
+ err := c.cc.Invoke(ctx, "/logproto.PusherRF1/Push", in, out, opts...)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// PusherRF1Server is the server API for PusherRF1 service.
+type PusherRF1Server interface {
+ Push(context.Context, *PushRequest) (*PushResponse, error)
+}
+
+// UnimplementedPusherRF1Server can be embedded to have forward compatible implementations.
+type UnimplementedPusherRF1Server struct {
+}
+
+func (*UnimplementedPusherRF1Server) Push(ctx context.Context, req *PushRequest) (*PushResponse, error) {
+ return nil, status.Errorf(codes.Unimplemented, "method Push not implemented")
+}
+
+func RegisterPusherRF1Server(s *grpc.Server, srv PusherRF1Server) {
+ s.RegisterService(&_PusherRF1_serviceDesc, srv)
+}
+
+func _PusherRF1_Push_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
+ in := new(PushRequest)
+ if err := dec(in); err != nil {
+ return nil, err
+ }
+ if interceptor == nil {
+ return srv.(PusherRF1Server).Push(ctx, in)
+ }
+ info := &grpc.UnaryServerInfo{
+ Server: srv,
+ FullMethod: "/logproto.PusherRF1/Push",
+ }
+ handler := func(ctx context.Context, req interface{}) (interface{}, error) {
+ return srv.(PusherRF1Server).Push(ctx, req.(*PushRequest))
+ }
+ return interceptor(ctx, in, info, handler)
+}
+
+var _PusherRF1_serviceDesc = grpc.ServiceDesc{
+ ServiceName: "logproto.PusherRF1",
+ HandlerType: (*PusherRF1Server)(nil),
+ Methods: []grpc.MethodDesc{
+ {
+ MethodName: "Push",
+ Handler: _PusherRF1_Push_Handler,
+ },
+ },
+ Streams: []grpc.StreamDesc{},
+ Metadata: "pkg/push/push-rf1.proto",
+}
diff --git a/vendor/github.com/grafana/loki/pkg/push/push-rf1.proto b/vendor/github.com/grafana/loki/pkg/push/push-rf1.proto
new file mode 100644
index 0000000000000..1c5a3e039341d
--- /dev/null
+++ b/vendor/github.com/grafana/loki/pkg/push/push-rf1.proto
@@ -0,0 +1,13 @@
+syntax = "proto3";
+
+package logproto;
+
+import "gogoproto/gogo.proto";
+import "google/protobuf/timestamp.proto";
+import "pkg/push/push.proto";
+
+option go_package = "github.com/grafana/loki/pkg/push";
+
+service PusherRF1 {
+ rpc Push(PushRequest) returns (PushResponse) {}
+}
|
feat
|
Ingester RF-1 (#13365)
|
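The `getOrCreateStream` hunk in the commit above is a double-checked locking pattern: a read lock covers the common case of an existing stream, and the write lock re-checks the map before inserting. A minimal sketch of the same shape, with hypothetical `registry`/`stream` types rather than the Loki ones:

```go
// Minimal sketch of the read-then-recheck locking used by getOrCreateStream.
// The registry/stream names are illustrative, not the Loki types.
package main

import (
	"fmt"
	"sync"
)

type stream struct{ id string }

type registry struct {
	mtx     sync.RWMutex
	streams map[string]*stream
}

func (r *registry) getOrCreate(id string) *stream {
	// Fast path: most appends hit an existing stream, so try a read lock first.
	r.mtx.RLock()
	s, ok := r.streams[id]
	r.mtx.RUnlock()
	if ok {
		return s
	}
	// Slow path: take the write lock and re-check, because another goroutine
	// may have inserted the stream between RUnlock and Lock.
	r.mtx.Lock()
	defer r.mtx.Unlock()
	if s, ok := r.streams[id]; ok {
		return s
	}
	s = &stream{id: id}
	r.streams[id] = s
	return s
}

func main() {
	r := &registry{streams: map[string]*stream{}}
	fmt.Println(r.getOrCreate("a") == r.getOrCreate("a")) // true: one instance per id
}
```

The same commit swaps the plain `inputSize int64` for `atomic.Int64` for the same reason: once `Append` is called from many goroutines, every shared mutable field needs either a lock or an atomic.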
947a66f35e1e02505290f52f2eee17c3e281cbbc
|
2024-11-12 14:55:11
|
Ashwanth
|
feat(thanos): disable retries when congestion control is enabled (#14867)
| false
|
diff --git a/go.mod b/go.mod
index 3809c5bcd58df..fad94e325aa59 100644
--- a/go.mod
+++ b/go.mod
@@ -137,7 +137,7 @@ require (
github.com/richardartoul/molecule v1.0.0
github.com/schollz/progressbar/v3 v3.17.0
github.com/shirou/gopsutil/v4 v4.24.10
- github.com/thanos-io/objstore v0.0.0-20241105144332-b598dceacb13
+ github.com/thanos-io/objstore v0.0.0-20241111205755-d1dd89d41f97
github.com/twmb/franz-go v1.17.1
github.com/twmb/franz-go/pkg/kadm v1.13.0
github.com/twmb/franz-go/pkg/kfake v0.0.0-20241015013301-cea7aa5d8037
diff --git a/go.sum b/go.sum
index 82d8fbf885d95..ac7fede0d1f79 100644
--- a/go.sum
+++ b/go.sum
@@ -2585,8 +2585,8 @@ github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.480/go.mod
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/cvm v1.0.480/go.mod h1:zaBIuDDs+rC74X8Aog+LSu91GFtHYRYDC196RGTm2jk=
github.com/tencentyun/cos-go-sdk-v5 v0.7.40 h1:W6vDGKCHe4wBACI1d2UgE6+50sJFhRWU4O8IB2ozzxM=
github.com/tencentyun/cos-go-sdk-v5 v0.7.40/go.mod h1:4dCEtLHGh8QPxHEkgq+nFaky7yZxQuYwgSJM87icDaw=
-github.com/thanos-io/objstore v0.0.0-20241105144332-b598dceacb13 h1:PQd6xZs18KGoCZJgL9eyYsrRGzzRwYCr4iXuehZm++w=
-github.com/thanos-io/objstore v0.0.0-20241105144332-b598dceacb13/go.mod h1:/ZMUxFcp/nT6oYV5WslH9k07NU/+86+aibgZRmMMr/4=
+github.com/thanos-io/objstore v0.0.0-20241111205755-d1dd89d41f97 h1:VjG0mwhN1DkncwDHFvrpd12/2TLfgYNRmEQA48ikp+0=
+github.com/thanos-io/objstore v0.0.0-20241111205755-d1dd89d41f97/go.mod h1:vyzFrBXgP+fGNG2FopEGWOO/zrIuoy7zt3LpLeezRsw=
github.com/tidwall/gjson v1.6.0/go.mod h1:P256ACg0Mn+j1RXIDXoss50DeIABTYK1PULOJHhxOls=
github.com/tidwall/match v1.0.1/go.mod h1:LujAq0jyVjBy028G1WhWfIzbpQfMO8bBZ6Tyb0+pL9E=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
diff --git a/pkg/storage/bucket/azure/bucket_client.go b/pkg/storage/bucket/azure/bucket_client.go
index e0910b9dbed22..f7157cf1bca9a 100644
--- a/pkg/storage/bucket/azure/bucket_client.go
+++ b/pkg/storage/bucket/azure/bucket_client.go
@@ -22,16 +22,12 @@ func newBucketClient(cfg Config, name string, logger log.Logger, factory func(lo
bucketConfig.ContainerName = cfg.ContainerName
bucketConfig.MaxRetries = cfg.MaxRetries
bucketConfig.UserAssignedID = cfg.UserAssignedID
+ bucketConfig.HTTPConfig.Transport = cfg.Transport
if cfg.Endpoint != "" {
// azure.DefaultConfig has the default Endpoint, overwrite it only if a different one was explicitly provided.
bucketConfig.Endpoint = cfg.Endpoint
}
- return factory(logger, bucketConfig, name, func(rt http.RoundTripper) http.RoundTripper {
- if cfg.Transport != nil {
- rt = cfg.Transport
- }
- return rt
- })
+ return factory(logger, bucketConfig, name, nil)
}
diff --git a/pkg/storage/bucket/client.go b/pkg/storage/bucket/client.go
index 06f8d128f850d..64338e8f02a18 100644
--- a/pkg/storage/bucket/client.go
+++ b/pkg/storage/bucket/client.go
@@ -4,6 +4,8 @@ import (
"context"
"errors"
"flag"
+ "fmt"
+ "net/http"
"regexp"
"github.com/go-kit/log"
@@ -126,6 +128,44 @@ func (cfg *Config) Validate() error {
return cfg.StorageBackendConfig.Validate()
}
+func (cfg *Config) disableRetries(backend string) error {
+ switch backend {
+ case S3:
+ cfg.S3.MaxRetries = 1
+ case GCS:
+ cfg.GCS.MaxRetries = 1
+ case Azure:
+ cfg.Azure.MaxRetries = 1
+ case Swift:
+ cfg.Swift.MaxRetries = 1
+ case Filesystem:
+ // do nothing
+ default:
+ return fmt.Errorf("cannot disable retries for backend: %s", backend)
+ }
+
+ return nil
+}
+
+func (cfg *Config) configureTransport(backend string, rt http.RoundTripper) error {
+ switch backend {
+ case S3:
+ cfg.S3.HTTP.Transport = rt
+ case GCS:
+ cfg.GCS.Transport = rt
+ case Azure:
+ cfg.Azure.Transport = rt
+ case Swift:
+ cfg.Swift.Transport = rt
+ case Filesystem:
+ // do nothing
+ default:
+ return fmt.Errorf("cannot configure transport for backend: %s", backend)
+ }
+
+ return nil
+}
+
// NewClient creates a new bucket client based on the configured backend
func NewClient(ctx context.Context, backend string, cfg Config, name string, logger log.Logger) (objstore.InstrumentedBucket, error) {
var (
diff --git a/pkg/storage/bucket/gcs/bucket_client.go b/pkg/storage/bucket/gcs/bucket_client.go
index b5a8ce541e1d7..950202ea540e9 100644
--- a/pkg/storage/bucket/gcs/bucket_client.go
+++ b/pkg/storage/bucket/gcs/bucket_client.go
@@ -15,6 +15,7 @@ func NewBucketClient(ctx context.Context, cfg Config, name string, logger log.Lo
bucketConfig.Bucket = cfg.BucketName
bucketConfig.ServiceAccount = cfg.ServiceAccount.String()
bucketConfig.ChunkSizeBytes = cfg.ChunkBufferSize
+ bucketConfig.MaxRetries = cfg.MaxRetries
bucketConfig.HTTPConfig.Transport = cfg.Transport
return gcs.NewBucketWithConfig(ctx, logger, bucketConfig, name, nil)
diff --git a/pkg/storage/bucket/gcs/config.go b/pkg/storage/bucket/gcs/config.go
index a46c5030e4413..23ac4b409137f 100644
--- a/pkg/storage/bucket/gcs/config.go
+++ b/pkg/storage/bucket/gcs/config.go
@@ -12,6 +12,7 @@ type Config struct {
BucketName string `yaml:"bucket_name"`
ServiceAccount flagext.Secret `yaml:"service_account" doc:"description_method=GCSServiceAccountLongDescription"`
ChunkBufferSize int `yaml:"chunk_buffer_size"`
+ MaxRetries int `yaml:"max_retries"`
// Allow upstream callers to inject a round tripper
Transport http.RoundTripper `yaml:"-"`
@@ -27,6 +28,7 @@ func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.StringVar(&cfg.BucketName, prefix+"gcs.bucket-name", "", "GCS bucket name")
f.Var(&cfg.ServiceAccount, prefix+"gcs.service-account", cfg.GCSServiceAccountShortDescription())
f.IntVar(&cfg.ChunkBufferSize, prefix+"gcs.chunk-buffer-size", 0, "The maximum size of the buffer that the GCS client uses for a single PUT request. 0 to disable buffering.")
+ f.IntVar(&cfg.MaxRetries, prefix+"gcs.max-retries", 10, "The maximum number of retries for idempotent operations. Overrides the default gcs storage client behavior if this value is greater than 0. Set this to 1 to disable retries.")
}
func (cfg *Config) GCSServiceAccountShortDescription() string {
diff --git a/pkg/storage/bucket/object_client_adapter.go b/pkg/storage/bucket/object_client_adapter.go
index 93c767819be22..011ad0ed624ad 100644
--- a/pkg/storage/bucket/object_client_adapter.go
+++ b/pkg/storage/bucket/object_client_adapter.go
@@ -2,6 +2,7 @@ package bucket
import (
"context"
+ "fmt"
"io"
"slices"
"strings"
@@ -9,9 +10,13 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/pkg/errors"
+ "github.com/prometheus/client_golang/prometheus"
"github.com/thanos-io/objstore"
"github.com/grafana/loki/v3/pkg/storage/chunk/client"
+ "github.com/grafana/loki/v3/pkg/storage/chunk/client/aws"
+ "github.com/grafana/loki/v3/pkg/storage/chunk/client/gcp"
+ "github.com/grafana/loki/v3/pkg/storage/chunk/client/hedging"
)
type ObjectClientAdapter struct {
@@ -21,9 +26,33 @@ type ObjectClientAdapter struct {
isRetryableErr func(err error) bool
}
-func NewObjectClientAdapter(bucket, hedgedBucket objstore.Bucket, logger log.Logger, opts ...ClientOptions) *ObjectClientAdapter {
- if hedgedBucket == nil {
- hedgedBucket = bucket
+func NewObjectClient(ctx context.Context, backend string, cfg Config, component string, hedgingCfg hedging.Config, disableRetries bool, logger log.Logger) (*ObjectClientAdapter, error) {
+ if disableRetries {
+ if err := cfg.disableRetries(backend); err != nil {
+ return nil, fmt.Errorf("create bucket: %w", err)
+ }
+ }
+
+ bucket, err := NewClient(ctx, backend, cfg, component, logger)
+ if err != nil {
+ return nil, fmt.Errorf("create bucket: %w", err)
+ }
+
+ hedgedBucket := bucket
+ if hedgingCfg.At != 0 {
+ hedgedTransport, err := hedgingCfg.RoundTripperWithRegisterer(nil, prometheus.WrapRegistererWithPrefix("loki_", prometheus.DefaultRegisterer))
+ if err != nil {
+ return nil, fmt.Errorf("create hedged transport: %w", err)
+ }
+
+ if err := cfg.configureTransport(backend, hedgedTransport); err != nil {
+ return nil, fmt.Errorf("create hedged bucket: %w", err)
+ }
+
+ hedgedBucket, err = NewClient(ctx, backend, cfg, component, logger)
+ if err != nil {
+ return nil, fmt.Errorf("create hedged bucket: %w", err)
+ }
}
o := &ObjectClientAdapter{
@@ -37,19 +66,14 @@ func NewObjectClientAdapter(bucket, hedgedBucket objstore.Bucket, logger log.Log
},
}
- for _, opt := range opts {
- opt(o)
+ switch backend {
+ case GCS:
+ o.isRetryableErr = gcp.IsRetryableErr
+ case S3:
+ o.isRetryableErr = aws.IsRetryableErr
}
- return o
-}
-
-type ClientOptions func(*ObjectClientAdapter)
-
-func WithRetryableErrFunc(f func(err error) bool) ClientOptions {
- return func(o *ObjectClientAdapter) {
- o.isRetryableErr = f
- }
+ return o, nil
}
func (o *ObjectClientAdapter) Stop() {
diff --git a/pkg/storage/bucket/object_client_adapter_test.go b/pkg/storage/bucket/object_client_adapter_test.go
index 1ce6de26856bf..341b59566333a 100644
--- a/pkg/storage/bucket/object_client_adapter_test.go
+++ b/pkg/storage/bucket/object_client_adapter_test.go
@@ -6,10 +6,12 @@ import (
"sort"
"testing"
+ "github.com/go-kit/log"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/v3/pkg/storage/bucket/filesystem"
"github.com/grafana/loki/v3/pkg/storage/chunk/client"
+ "github.com/grafana/loki/v3/pkg/storage/chunk/client/hedging"
)
func TestObjectClientAdapter_List(t *testing.T) {
@@ -95,8 +97,12 @@ func TestObjectClientAdapter_List(t *testing.T) {
require.NoError(t, newBucket.Upload(context.Background(), "depply/nested/folder/b", buff))
require.NoError(t, newBucket.Upload(context.Background(), "depply/nested/folder/c", buff))
- client := NewObjectClientAdapter(newBucket, nil, nil)
- client.bucket = newBucket
+ client, err := NewObjectClient(context.Background(), "filesystem", Config{
+ StorageBackendConfig: StorageBackendConfig{
+ Filesystem: config,
+ },
+ }, "test", hedging.Config{}, false, log.NewNopLogger())
+ require.NoError(t, err)
storageObj, storageCommonPref, err := client.List(context.Background(), tt.prefix, tt.delimiter)
if tt.wantErr != nil {
diff --git a/pkg/storage/bucket/s3/bucket_client.go b/pkg/storage/bucket/s3/bucket_client.go
index 5d904d8e5fe9b..381f3436f53d4 100644
--- a/pkg/storage/bucket/s3/bucket_client.go
+++ b/pkg/storage/bucket/s3/bucket_client.go
@@ -82,5 +82,6 @@ func newS3Config(cfg Config) (s3.Config, error) {
Enable: cfg.TraceConfig.Enabled,
},
STSEndpoint: cfg.STSEndpoint,
+ MaxRetries: cfg.MaxRetries,
}, nil
}
diff --git a/pkg/storage/bucket/s3/config.go b/pkg/storage/bucket/s3/config.go
index 792f93f752b32..67c412de6d606 100644
--- a/pkg/storage/bucket/s3/config.go
+++ b/pkg/storage/bucket/s3/config.go
@@ -118,6 +118,7 @@ type Config struct {
PartSize uint64 `yaml:"part_size" category:"experimental"`
SendContentMd5 bool `yaml:"send_content_md5" category:"experimental"`
STSEndpoint string `yaml:"sts_endpoint"`
+ MaxRetries int `yaml:"max_retries"`
SSE SSEConfig `yaml:"sse"`
HTTP HTTPConfig `yaml:"http"`
@@ -146,6 +147,7 @@ func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.Var(newBucketLookupTypeValue(s3.AutoLookup, &cfg.BucketLookupType), prefix+"s3.bucket-lookup-type", fmt.Sprintf("Bucket lookup style type, used to access bucket in S3-compatible service. Default is auto. Supported values are: %s.", strings.Join(supportedBucketLookupTypes, ", ")))
f.BoolVar(&cfg.DualstackEnabled, prefix+"s3.dualstack-enabled", true, "When enabled, direct all AWS S3 requests to the dual-stack IPv4/IPv6 endpoint for the configured region.")
f.StringVar(&cfg.STSEndpoint, prefix+"s3.sts-endpoint", "", "Accessing S3 resources using temporary, secure credentials provided by AWS Security Token Service.")
+ f.IntVar(&cfg.MaxRetries, prefix+"s3.max-retries", 10, "The maximum number of retries for S3 requests that are retryable. Default is 10, set this to 1 to disable retries.")
cfg.SSE.RegisterFlagsWithPrefix(prefix+"s3.sse.", f)
cfg.HTTP.RegisterFlagsWithPrefix(prefix, f)
cfg.TraceConfig.RegisterFlagsWithPrefix(prefix+"s3.trace.", f)
diff --git a/pkg/storage/bucket/swift/bucket_client.go b/pkg/storage/bucket/swift/bucket_client.go
index b36c07e506b87..28f3c922c4254 100644
--- a/pkg/storage/bucket/swift/bucket_client.go
+++ b/pkg/storage/bucket/swift/bucket_client.go
@@ -4,8 +4,8 @@ import (
"github.com/go-kit/log"
"github.com/prometheus/common/model"
"github.com/thanos-io/objstore"
+ "github.com/thanos-io/objstore/exthttp"
"github.com/thanos-io/objstore/providers/swift"
- yaml "gopkg.in/yaml.v2"
)
// NewBucketClient creates a new Swift bucket client
@@ -33,14 +33,9 @@ func NewBucketClient(cfg Config, _ string, logger log.Logger) (objstore.Bucket,
// Hard-coded defaults.
ChunkSize: swift.DefaultConfig.ChunkSize,
UseDynamicLargeObjects: false,
+ HTTPConfig: exthttp.DefaultHTTPConfig,
}
+ bucketConfig.HTTPConfig.Transport = cfg.Transport
- // Thanos currently doesn't support passing the config as is, but expects a YAML,
- // so we're going to serialize it.
- serialized, err := yaml.Marshal(bucketConfig)
- if err != nil {
- return nil, err
- }
-
- return swift.NewContainer(logger, serialized, nil)
+ return swift.NewContainerFromConfig(logger, &bucketConfig, false, nil)
}
diff --git a/pkg/storage/bucket/swift/config.go b/pkg/storage/bucket/swift/config.go
index a30dd7319e8c9..22717efcc8e59 100644
--- a/pkg/storage/bucket/swift/config.go
+++ b/pkg/storage/bucket/swift/config.go
@@ -2,6 +2,7 @@ package swift
import (
"flag"
+ "net/http"
"time"
)
@@ -26,6 +27,9 @@ type Config struct {
MaxRetries int `yaml:"max_retries"`
ConnectTimeout time.Duration `yaml:"connect_timeout"`
RequestTimeout time.Duration `yaml:"request_timeout"`
+
+ // Allow upstream callers to inject a round tripper
+ Transport http.RoundTripper `yaml:"-"`
}
// RegisterFlags registers the flags for Swift storage
diff --git a/pkg/storage/chunk/client/aws/s3_thanos_object_client.go b/pkg/storage/chunk/client/aws/s3_thanos_object_client.go
deleted file mode 100644
index e00ded920d552..0000000000000
--- a/pkg/storage/chunk/client/aws/s3_thanos_object_client.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package aws
-
-import (
- "context"
-
- "github.com/go-kit/log"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/thanos-io/objstore"
-
- "github.com/grafana/loki/v3/pkg/storage/bucket"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client/hedging"
-)
-
-func NewS3ThanosObjectClient(ctx context.Context, cfg bucket.Config, component string, logger log.Logger, hedgingCfg hedging.Config) (client.ObjectClient, error) {
- b, err := newS3ThanosObjectClient(ctx, cfg, component, logger, false, hedgingCfg)
- if err != nil {
- return nil, err
- }
-
- var hedged objstore.Bucket
- if hedgingCfg.At != 0 {
- hedged, err = newS3ThanosObjectClient(ctx, cfg, component, logger, true, hedgingCfg)
- if err != nil {
- return nil, err
- }
- }
-
- o := bucket.NewObjectClientAdapter(b, hedged, logger, bucket.WithRetryableErrFunc(IsRetryableErr))
- return o, nil
-}
-
-func newS3ThanosObjectClient(ctx context.Context, cfg bucket.Config, component string, logger log.Logger, hedging bool, hedgingCfg hedging.Config) (objstore.Bucket, error) {
- if hedging {
- hedgedTrasport, err := hedgingCfg.RoundTripperWithRegisterer(nil, prometheus.WrapRegistererWithPrefix("loki_", prometheus.DefaultRegisterer))
- if err != nil {
- return nil, err
- }
-
- cfg.S3.HTTP.Transport = hedgedTrasport
- }
-
- return bucket.NewClient(ctx, bucket.S3, cfg, component, logger)
-}
diff --git a/pkg/storage/chunk/client/azure/blob_storage_thanos_object_client.go b/pkg/storage/chunk/client/azure/blob_storage_thanos_object_client.go
deleted file mode 100644
index 4bf2137433064..0000000000000
--- a/pkg/storage/chunk/client/azure/blob_storage_thanos_object_client.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package azure
-
-import (
- "context"
-
- "github.com/go-kit/log"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/thanos-io/objstore"
-
- "github.com/grafana/loki/v3/pkg/storage/bucket"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client/hedging"
-)
-
-// NewBlobStorageObjectClient makes a new BlobStorage-backed ObjectClient.
-func NewBlobStorageThanosObjectClient(ctx context.Context, cfg bucket.Config, component string, logger log.Logger, hedgingCfg hedging.Config) (client.ObjectClient, error) {
- b, err := newBlobStorageThanosObjClient(ctx, cfg, component, logger, false, hedgingCfg)
- if err != nil {
- return nil, err
- }
-
- var hedged objstore.Bucket
- if hedgingCfg.At != 0 {
- hedged, err = newBlobStorageThanosObjClient(ctx, cfg, component, logger, true, hedgingCfg)
- if err != nil {
- return nil, err
- }
- }
-
- return bucket.NewObjectClientAdapter(b, hedged, logger), nil
-}
-
-func newBlobStorageThanosObjClient(ctx context.Context, cfg bucket.Config, component string, logger log.Logger, hedging bool, hedgingCfg hedging.Config) (objstore.Bucket, error) {
- if hedging {
- hedgedTrasport, err := hedgingCfg.RoundTripperWithRegisterer(nil, prometheus.WrapRegistererWithPrefix("loki_", prometheus.DefaultRegisterer))
- if err != nil {
- return nil, err
- }
-
- cfg.Azure.Transport = hedgedTrasport
- }
-
- return bucket.NewClient(ctx, bucket.Azure, cfg, component, logger)
-}
diff --git a/pkg/storage/chunk/client/gcp/gcs_thanos_object_client.go b/pkg/storage/chunk/client/gcp/gcs_thanos_object_client.go
deleted file mode 100644
index b4190be2d6943..0000000000000
--- a/pkg/storage/chunk/client/gcp/gcs_thanos_object_client.go
+++ /dev/null
@@ -1,44 +0,0 @@
-package gcp
-
-import (
- "context"
-
- "github.com/go-kit/log"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/thanos-io/objstore"
-
- "github.com/grafana/loki/v3/pkg/storage/bucket"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client"
- "github.com/grafana/loki/v3/pkg/storage/chunk/client/hedging"
-)
-
-func NewGCSThanosObjectClient(ctx context.Context, cfg bucket.Config, component string, logger log.Logger, hedgingCfg hedging.Config) (client.ObjectClient, error) {
- b, err := newGCSThanosObjectClient(ctx, cfg, component, logger, false, hedgingCfg)
- if err != nil {
- return nil, err
- }
-
- var hedged objstore.Bucket
- if hedgingCfg.At != 0 {
- hedged, err = newGCSThanosObjectClient(ctx, cfg, component, logger, true, hedgingCfg)
- if err != nil {
- return nil, err
- }
- }
-
- o := bucket.NewObjectClientAdapter(b, hedged, logger, bucket.WithRetryableErrFunc(IsRetryableErr))
- return o, nil
-}
-
-func newGCSThanosObjectClient(ctx context.Context, cfg bucket.Config, component string, logger log.Logger, hedging bool, hedgingCfg hedging.Config) (objstore.Bucket, error) {
- if hedging {
- hedgedTrasport, err := hedgingCfg.RoundTripperWithRegisterer(nil, prometheus.WrapRegistererWithPrefix("loki_", prometheus.DefaultRegisterer))
- if err != nil {
- return nil, err
- }
-
- cfg.GCS.Transport = hedgedTrasport
- }
-
- return bucket.NewClient(ctx, bucket.GCS, cfg, component, logger)
-}
diff --git a/pkg/storage/factory.go b/pkg/storage/factory.go
index 959b97980aeaa..bc2257a64a876 100644
--- a/pkg/storage/factory.go
+++ b/pkg/storage/factory.go
@@ -615,12 +615,16 @@ func (c *ClientMetrics) Unregister() {
// NewObjectClient makes a new StorageClient with the prefix in the front.
func NewObjectClient(name, component string, cfg Config, clientMetrics ClientMetrics) (client.ObjectClient, error) {
+ if cfg.UseThanosObjstore {
+ return bucket.NewObjectClient(context.Background(), name, cfg.ObjectStore, component, cfg.Hedging, cfg.CongestionControl.Enabled, util_log.Logger)
+ }
+
actual, err := internalNewObjectClient(name, component, cfg, clientMetrics)
if err != nil {
return nil, err
}
- if cfg.UseThanosObjstore || cfg.ObjectPrefix == "" {
+ if cfg.ObjectPrefix == "" {
return actual, nil
} else {
prefix := strings.Trim(cfg.ObjectPrefix, "/") + "/"
@@ -659,9 +663,6 @@ func internalNewObjectClient(storeName, component string, cfg Config, clientMetr
s3Cfg.BackoffConfig.MaxRetries = 1
}
- if cfg.UseThanosObjstore {
- return aws.NewS3ThanosObjectClient(context.Background(), cfg.ObjectStore, component, util_log.Logger, cfg.Hedging)
- }
return aws.NewS3ObjectClient(s3Cfg, cfg.Hedging)
case types.StorageTypeAlibabaCloud:
@@ -691,9 +692,6 @@ func internalNewObjectClient(storeName, component string, cfg Config, clientMetr
if cfg.CongestionControl.Enabled {
gcsCfg.EnableRetries = false
}
- if cfg.UseThanosObjstore {
- return gcp.NewGCSThanosObjectClient(context.Background(), cfg.ObjectStore, component, util_log.Logger, cfg.Hedging)
- }
return gcp.NewGCSObjectClient(context.Background(), gcsCfg, cfg.Hedging)
case types.StorageTypeAzure:
@@ -705,9 +703,6 @@ func internalNewObjectClient(storeName, component string, cfg Config, clientMetr
}
azureCfg = (azure.BlobStorageConfig)(nsCfg)
}
- if cfg.UseThanosObjstore {
- return azure.NewBlobStorageThanosObjectClient(context.Background(), cfg.ObjectStore, component, util_log.Logger, cfg.Hedging)
- }
return azure.NewBlobStorage(&azureCfg, clientMetrics.AzureMetrics, cfg.Hedging)
case types.StorageTypeSwift:
@@ -757,10 +752,6 @@ func internalNewObjectClient(storeName, component string, cfg Config, clientMetr
return ibmcloud.NewCOSObjectClient(cosCfg, cfg.Hedging)
default:
- if cfg.UseThanosObjstore {
- return nil, fmt.Errorf("Unrecognized storage client %v, choose one of: %s", storeName, strings.Join(cfg.ObjectStore.SupportedBackends(), ", "))
- }
-
return nil, fmt.Errorf("Unrecognized storage client %v, choose one of: %v, %v, %v, %v, %v, %v, %v, %v, %v", storeName, types.StorageTypeAWS, types.StorageTypeS3, types.StorageTypeGCS, types.StorageTypeAzure, types.StorageTypeAlibabaCloud, types.StorageTypeSwift, types.StorageTypeBOS, types.StorageTypeCOS, types.StorageTypeFileSystem)
}
}
diff --git a/vendor/github.com/thanos-io/objstore/CHANGELOG.md b/vendor/github.com/thanos-io/objstore/CHANGELOG.md
index 099f83ad9433c..f0904faa198b4 100644
--- a/vendor/github.com/thanos-io/objstore/CHANGELOG.md
+++ b/vendor/github.com/thanos-io/objstore/CHANGELOG.md
@@ -54,6 +54,7 @@ We use *breaking :warning:* to mark changes that are not backward compatible (re
- [#116](https://github.com/thanos-io/objstore/pull/116) Azure: Add new storage_create_container configuration property
- [#128](https://github.com/thanos-io/objstore/pull/128) GCS: Add support for `ChunkSize` for writer.
- [#130](https://github.com/thanos-io/objstore/pull/130) feat: Decouple creating bucket metrics from instrumenting the bucket
+- [#147](https://github.com/thanos-io/objstore/pull/147) feat: Add MaxRetries config to cos, gcs and obs.
- [#150](https://github.com/thanos-io/objstore/pull/150) Add support for roundtripper wrapper.
### Changed
diff --git a/vendor/github.com/thanos-io/objstore/providers/gcs/gcs.go b/vendor/github.com/thanos-io/objstore/providers/gcs/gcs.go
index 1a3edfd221a3b..d54a6782f4215 100644
--- a/vendor/github.com/thanos-io/objstore/providers/gcs/gcs.go
+++ b/vendor/github.com/thanos-io/objstore/providers/gcs/gcs.go
@@ -54,6 +54,11 @@ type Config struct {
// Used as storage.Writer.ChunkSize of https://pkg.go.dev/google.golang.org/cloud/storage#Writer
ChunkSizeBytes int `yaml:"chunk_size_bytes"`
noAuth bool `yaml:"no_auth"`
+
+ // MaxRetries controls the number of retries for idempotent operations.
+ // Overrides the default gcs storage client behavior if this value is greater than 0.
+ // Set this to 1 to disable retries.
+ MaxRetries int `yaml:"max_retries"`
}
// Bucket implements the store.Bucket and shipper.Bucket interfaces against GCS.
@@ -173,6 +178,11 @@ func newBucket(ctx context.Context, logger log.Logger, gc Config, opts []option.
name: gc.Bucket,
chunkSize: gc.ChunkSizeBytes,
}
+
+ if gc.MaxRetries > 0 {
+ bkt.bkt = bkt.bkt.Retryer(storage.WithMaxAttempts(gc.MaxRetries))
+ }
+
return bkt, nil
}
diff --git a/vendor/github.com/thanos-io/objstore/providers/s3/s3.go b/vendor/github.com/thanos-io/objstore/providers/s3/s3.go
index fc8da7b3c10ec..27e82ffbafb33 100644
--- a/vendor/github.com/thanos-io/objstore/providers/s3/s3.go
+++ b/vendor/github.com/thanos-io/objstore/providers/s3/s3.go
@@ -136,6 +136,7 @@ type Config struct {
PartSize uint64 `yaml:"part_size"`
SSEConfig SSEConfig `yaml:"sse_config"`
STSEndpoint string `yaml:"sts_endpoint"`
+ MaxRetries int `yaml:"max_retries"`
}
// SSEConfig deals with the configuration of SSE for Minio. The following options are valid:
@@ -263,6 +264,7 @@ func NewBucketWithConfig(logger log.Logger, config Config, component string, wra
Region: config.Region,
Transport: tpt,
BucketLookup: config.BucketLookupType.MinioType(),
+ MaxRetries: config.MaxRetries,
})
if err != nil {
return nil, errors.Wrap(err, "initialize s3 client")
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 65f61e30d70cb..e1978ed10b5a7 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1582,8 +1582,8 @@ github.com/stretchr/testify/assert
github.com/stretchr/testify/mock
github.com/stretchr/testify/require
github.com/stretchr/testify/suite
-# github.com/thanos-io/objstore v0.0.0-20241105144332-b598dceacb13
-## explicit; go 1.21
+# github.com/thanos-io/objstore v0.0.0-20241111205755-d1dd89d41f97
+## explicit; go 1.22
github.com/thanos-io/objstore
github.com/thanos-io/objstore/exthttp
github.com/thanos-io/objstore/providers/azure
|
feat
|
disable retries when congestion control is enabled (#14867)
|
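The adapter in this commit builds a second bucket client whose HTTP transport is the hedging RoundTripper, then routes reads through it. The wrapping itself is the standard `http.RoundTripper` decorator pattern; a sketch with an illustrative logging transport (a real hedging transport would race a second request after a delay):

```go
// Sketch of injecting a custom http.RoundTripper into a client, mirroring how
// configureTransport swaps in the hedged transport. loggingTransport is a
// stand-in, not the Loki hedging implementation.
package main

import (
	"fmt"
	"net/http"
)

type loggingTransport struct {
	next http.RoundTripper
}

func (t *loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Println("request:", req.URL) // observe every outgoing request
	return t.next.RoundTrip(req)
}

func main() {
	client := &http.Client{
		Transport: &loggingTransport{next: http.DefaultTransport},
	}
	_ = client // hand this client (or just its Transport) to the bucket config
	fmt.Println("transport wrapped")
}
```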
52fec6187fd08fec85573272382ce5448fd57767
|
2024-10-12 01:10:01
|
J Stickler
|
docs: update Helm installation topics (#14466)
| false
|
diff --git a/docs/sources/setup/install/helm/install-microservices/_index.md b/docs/sources/setup/install/helm/install-microservices/_index.md
index 4afca42d10b3e..5fd24d7c32192 100644
--- a/docs/sources/setup/install/helm/install-microservices/_index.md
+++ b/docs/sources/setup/install/helm/install-microservices/_index.md
@@ -51,7 +51,7 @@ It is not recommended to run scalable mode with `filesystem` storage. For the pu
loki:
schemaConfig:
configs:
- - from: 2024-04-01
+ - from: "2024-04-01"
store: tsdb
object_store: s3
schema: v13
@@ -179,7 +179,7 @@ When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `
loki:
schemaConfig:
configs:
- - from: 2024-04-01
+ - from: "2024-04-01"
store: tsdb
object_store: s3
schema: v13
@@ -267,7 +267,7 @@ When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `
loki:
schemaConfig:
configs:
- - from: 2024-04-01
+ - from: "2024-04-01"
store: tsdb
object_store: azure
schema: v13
diff --git a/docs/sources/setup/install/helm/install-scalable/_index.md b/docs/sources/setup/install/helm/install-scalable/_index.md
index a39b6580a90b2..e2ebe0ee316b5 100644
--- a/docs/sources/setup/install/helm/install-scalable/_index.md
+++ b/docs/sources/setup/install/helm/install-scalable/_index.md
@@ -53,7 +53,7 @@ It is not recommended to run scalable mode with `filesystem` storage. For the pu
loki:
schemaConfig:
configs:
- - from: 2024-04-01
+ - from: "2024-04-01"
store: tsdb
object_store: s3
schema: v13
@@ -138,7 +138,7 @@ When deploying Loki using S3 Storage **DO NOT** use the default bucket names; `
loki:
schemaConfig:
configs:
- - from: 2024-04-01
+ - from: "2024-04-01"
store: tsdb
object_store: s3
schema: v13
@@ -218,7 +218,7 @@ bloomGateway:
loki:
schemaConfig:
configs:
- - from: 2024-04-01
+ - from: "2024-04-01"
store: tsdb
object_store: azure
schema: v13
|
docs
|
update Helm installation topics (#14466)
|
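The only change in that docs commit is quoting the schema dates. Unquoted `2024-04-01` is a valid YAML timestamp scalar, so some decoders hand it back as a time value rather than the string the config expects. A small probe, assuming `gopkg.in/yaml.v3` is available (exact resolution behavior varies by YAML library and version):

```go
// Probe how a YAML decoder resolves an unquoted vs. quoted date. With some
// decoders the unquoted form comes back as a timestamp, not a string, which
// is the motivation for quoting "2024-04-01" in the Helm values.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	var unquoted, quoted map[string]interface{}
	if err := yaml.Unmarshal([]byte(`from: 2024-04-01`), &unquoted); err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal([]byte(`from: "2024-04-01"`), &quoted); err != nil {
		panic(err)
	}
	fmt.Printf("unquoted: %T, quoted: %T\n", unquoted["from"], quoted["from"])
}
```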
bdfc86bc3b1f5170f8d181f2e71435250a2461cd
|
2024-08-05 16:16:10
|
Vladyslav Diachenko
|
chore(helm-chart): added SSE config into AWS storage config (#13746)
| false
|
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index 6e3b8f17a9d63..ad24a1ceb348c 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,10 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+## 6.7.4
+
+- [ENHANCEMENT] Allow configuring the SSE section under AWS S3 storage config.
+
## 6.7.3
- [BUGFIX] Removed Helm test binary
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index 641a28a425edb..9e84333dee63f 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -3,7 +3,7 @@ name: loki
description: Helm chart for Grafana Loki and Grafana Enterprise Logs supporting both simple, scalable and distributed modes.
type: application
appVersion: 3.1.0
-version: 6.7.3
+version: 6.7.4
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki
diff --git a/production/helm/loki/README.md b/production/helm/loki/README.md
index 24f84ace97212..701ae4059140c 100644
--- a/production/helm/loki/README.md
+++ b/production/helm/loki/README.md
@@ -1,6 +1,6 @@
# loki
-  
+  
Helm chart for Grafana Loki and Grafana Enterprise Logs supporting both simple, scalable and distributed modes.
diff --git a/production/helm/loki/templates/_helpers.tpl b/production/helm/loki/templates/_helpers.tpl
index 8d4a0a9cb94ef..91b453efa062a 100644
--- a/production/helm/loki/templates/_helpers.tpl
+++ b/production/helm/loki/templates/_helpers.tpl
@@ -239,30 +239,15 @@ s3:
insecure: {{ .insecure }}
{{- with .http_config}}
http_config:
- {{- with .idle_conn_timeout }}
- idle_conn_timeout: {{ . }}
- {{- end}}
- {{- with .response_header_timeout }}
- response_header_timeout: {{ . }}
- {{- end}}
- {{- with .insecure_skip_verify }}
- insecure_skip_verify: {{ . }}
- {{- end}}
- {{- with .ca_file}}
- ca_file: {{ . }}
- {{- end}}
+{{ toYaml . | indent 4 }}
{{- end }}
{{- with .backoff_config}}
backoff_config:
- {{- with .min_period }}
- min_period: {{ . }}
- {{- end}}
- {{- with .max_period }}
- max_period: {{ . }}
- {{- end}}
- {{- with .max_retries }}
- max_retries: {{ . }}
- {{- end}}
+{{ toYaml . | indent 4 }}
+ {{- end }}
+ {{- with .sse }}
+ sse:
+{{ toYaml . | indent 4 }}
{{- end }}
{{- end -}}
@@ -308,35 +293,7 @@ alibabacloud:
{{- else if eq .Values.loki.storage.type "swift" -}}
{{- with .Values.loki.storage.swift }}
swift:
- {{- with .auth_version }}
- auth_version: {{ . }}
- {{- end }}
- auth_url: {{ .auth_url }}
- {{- with .internal }}
- internal: {{ . }}
- {{- end }}
- username: {{ .username }}
- user_domain_name: {{ .user_domain_name }}
- {{- with .user_domain_id }}
- user_domain_id: {{ . }}
- {{- end }}
- {{- with .user_id }}
- user_id: {{ . }}
- {{- end }}
- password: {{ .password }}
- {{- with .domain_id }}
- domain_id: {{ . }}
- {{- end }}
- domain_name: {{ .domain_name }}
- project_id: {{ .project_id }}
- project_name: {{ .project_name }}
- project_domain_id: {{ .project_domain_id }}
- project_domain_name: {{ .project_domain_name }}
- region_name: {{ .region_name }}
- container_name: {{ .container_name }}
- max_retries: {{ .max_retries | default 3 }}
- connect_timeout: {{ .connect_timeout | default "10s" }}
- request_timeout: {{ .request_timeout | default "5s" }}
+{{ toYaml . | indent 2 }}
{{- end -}}
{{- else -}}
{{- with .Values.loki.storage.filesystem }}
|
chore
|
added SSE config into AWS storage config (#13746)
|
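The template refactor in that chart release replaces per-field `{{ with }}` blocks with a single `toYaml . | indent N` pass-through, so new storage options such as `sse` flow into the rendered config without extra plumbing. The same idea in plain Go `text/template`, with simplified `toYaml`/`indent` funcs standing in for Helm's sprig helpers:

```go
// Sketch of the `toYaml . | indent 4` pattern from _helpers.tpl using Go's
// text/template. The toYaml/indent funcs here are simplified stand-ins for
// the Helm template functions, not their exact implementations.
package main

import (
	"os"
	"strings"
	"text/template"

	"gopkg.in/yaml.v3"
)

func main() {
	funcs := template.FuncMap{
		"toYaml": func(v interface{}) string {
			b, err := yaml.Marshal(v)
			if err != nil {
				return ""
			}
			return strings.TrimRight(string(b), "\n")
		},
		"indent": func(n int, s string) string {
			pad := strings.Repeat(" ", n)
			return pad + strings.ReplaceAll(s, "\n", "\n"+pad)
		},
	}
	// Whatever map the user supplies is serialized verbatim, so new fields
	// (like sse) need no per-field template code.
	tpl := template.Must(template.New("sse").Funcs(funcs).Parse(
		"sse:\n{{ toYaml . | indent 4 }}\n"))
	_ = tpl.Execute(os.Stdout, map[string]string{
		"type":       "SSE-KMS",
		"kms_key_id": "alias/example",
	})
}
```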
bd2ee0b64da8b0ffa05fec9302d3c735f7d4358f
|
2024-03-25 19:30:54
|
Paul Rogers
|
test: Update azure-storage-blob-go to include a data race fix (#12325)
| false
|
diff --git a/go.mod b/go.mod
index 3c1bdffad37f1..b9ac027038f56 100644
--- a/go.mod
+++ b/go.mod
@@ -342,7 +342,7 @@ require (
replace github.com/Azure/azure-sdk-for-go => github.com/Azure/azure-sdk-for-go v36.2.0+incompatible
-replace github.com/Azure/azure-storage-blob-go => github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20220216145902-b5e698eff68e
+replace github.com/Azure/azure-storage-blob-go => github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20240322194317-344980fda573
replace github.com/hashicorp/consul => github.com/hashicorp/consul v1.14.5
diff --git a/go.sum b/go.sum
index 23ff165770ff2..203ed3e3c520a 100644
--- a/go.sum
+++ b/go.sum
@@ -250,8 +250,8 @@ github.com/IBM/go-sdk-core/v5 v5.13.1/go.mod h1:pVkN7IGmsSdmR1ZCU4E/cLcCclqRKMYg
github.com/IBM/ibm-cos-sdk-go v1.10.0 h1:/2VIev2/jBei39OqU2+nSZQnoWJ+KtkiSAIDkqsd7uU=
github.com/IBM/ibm-cos-sdk-go v1.10.0/go.mod h1:C8KRTRaoD3CWPPBOa6FCOpdh0ZMlUjKAAA4i3F+Q/sc=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
-github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20220216145902-b5e698eff68e h1:HisBR+gQKIwJqDe1iNVqUDk+GTRE2IZAbl+fLoDKNBs=
-github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20220216145902-b5e698eff68e/go.mod h1:SMqIBi+SuiQH32bvyjngEewEeXoPfKMgWlBDaYf6fck=
+github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20240322194317-344980fda573 h1:DCPjdUAi+jcGnL7iN+A7uNY8xG584oMRuisYh/VE21E=
+github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20240322194317-344980fda573/go.mod h1:SMqIBi+SuiQH32bvyjngEewEeXoPfKMgWlBDaYf6fck=
github.com/Masterminds/goutils v1.1.1 h1:5nUrii3FMTL5diU80unEVvNevw1nH4+ZV4DSLVJLSYI=
github.com/Masterminds/goutils v1.1.1/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver/v3 v3.2.0 h1:3MEsd0SM6jqZojhjLWWeBY+Kcjy9i6MQAeY7YgDP83g=
diff --git a/vendor/github.com/Azure/azure-storage-blob-go/azblob/chunkwriting.go b/vendor/github.com/Azure/azure-storage-blob-go/azblob/chunkwriting.go
index 35e953146b37c..a35601cab8ef7 100644
--- a/vendor/github.com/Azure/azure-storage-blob-go/azblob/chunkwriting.go
+++ b/vendor/github.com/Azure/azure-storage-blob-go/azblob/chunkwriting.go
@@ -188,8 +188,11 @@ func (c *copier) close() error {
// waitForFinish waits for all writes to complete while combining errors from errCh
func (c *copier) waitForFinish() error {
var err error
+ var mu sync.Mutex
done := make(chan struct{})
+ mu.Lock()
go func() {
+ defer mu.Unlock()
// when write latencies are long, several errors might have occurred
// drain them all as we wait for writes to complete.
err = c.drainErrs(done)
@@ -197,6 +200,9 @@ func (c *copier) waitForFinish() error {
c.wg.Wait()
close(done)
+
+ mu.Lock()
+ defer mu.Unlock()
return err
}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 946efbb5afe87..c9087b6e748cf 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -104,7 +104,7 @@ github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/internal/shared
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/pageblob
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service
-# github.com/Azure/azure-storage-blob-go v0.14.0 => github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20220216145902-b5e698eff68e
+# github.com/Azure/azure-storage-blob-go v0.14.0 => github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20240322194317-344980fda573
## explicit; go 1.15
github.com/Azure/azure-storage-blob-go/azblob
# github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1
@@ -2260,7 +2260,7 @@ sigs.k8s.io/structured-merge-diff/v4/value
## explicit; go 1.12
sigs.k8s.io/yaml
# github.com/Azure/azure-sdk-for-go => github.com/Azure/azure-sdk-for-go v36.2.0+incompatible
-# github.com/Azure/azure-storage-blob-go => github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20220216145902-b5e698eff68e
+# github.com/Azure/azure-storage-blob-go => github.com/MasslessParticle/azure-storage-blob-go v0.14.1-0.20240322194317-344980fda573
# github.com/hashicorp/consul => github.com/hashicorp/consul v1.14.5
# github.com/gocql/gocql => github.com/grafana/gocql v0.0.0-20200605141915-ba5dc39ece85
# github.com/hashicorp/memberlist => github.com/grafana/memberlist v0.3.1-0.20220714140823-09ffed8adbbe
|
test
|
Update azure-storage-blob-go to include a data race fix (#12325)
|
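The data-race fix in the record above is a small but instructive pattern: a background goroutine writes a shared `err` while the caller waits, and the mutex, locked before the goroutine starts and released by its `defer`, gives the caller's final read a happens-before edge on the write. A minimal runnable sketch of the same idea, with illustrative names rather than the azblob ones (the real `waitForFinish` also waits on a `sync.WaitGroup` and closes `done` between the two lock acquisitions):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// collectErr runs work in a goroutine and returns its error safely.
// The mutex is held from before the goroutine starts until the goroutine
// has finished writing err, so the final read cannot race the write.
func collectErr(work func() error) error {
	var (
		err error
		mu  sync.Mutex
	)
	mu.Lock()
	go func() {
		defer mu.Unlock() // released only after err is written
		err = work()
	}()

	mu.Lock() // blocks until the goroutine has unlocked
	defer mu.Unlock()
	return err
}

func main() {
	fmt.Println(collectErr(func() error { return errors.New("boom") }))
}
```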
8a2081a39ea1ff0458376196f44324d63be982a5
|
2021-08-05 13:07:24
|
Travis Patterson
|
distributor: Truncate rather than drop log lines (#4051)
| false
|
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index 8dca784902957..86ffec0445e74 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -1781,6 +1781,10 @@ logs in Loki.
# CLI flag: -distributor.max-line-size
[max_line_size: <string> | default = none ]
+# Truncate log lines when they exceed max_line_size.
+# CLI flag: -distributor.max-line-size-truncate
+[max_line_size_truncate: <boolean> | default = false ]
+
# Maximum number of log entries that will be returned for a query.
# CLI flag: -validation.max-entries-limit
[max_entries_limit_per_query: <int> | default = 5000 ]
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 6fc896ac0f51e..a914dae6209a7 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -209,6 +209,9 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
validationContext := d.validator.getValidationContextFor(userID)
for _, stream := range req.Streams {
+ // Truncate first so subsequent steps have consistent line lengths
+ d.truncateLines(validationContext, &stream)
+
stream.Labels, err = d.parseStreamLabels(validationContext, stream.Labels, &stream)
if err != nil {
validationErr = err
@@ -220,6 +223,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
validation.DiscardedBytes.WithLabelValues(validation.InvalidLabels, userID).Add(float64(bytes))
continue
}
+
n := 0
for _, entry := range stream.Entries {
if err := d.validator.ValidateEntry(validationContext, stream.Labels, entry); err != nil {
@@ -236,6 +240,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
if len(stream.Entries) == 0 {
continue
}
+
keys = append(keys, util.TokenFor(userID, stream.Labels))
streams = append(streams, streamTracker{
stream: stream,
@@ -300,6 +305,25 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
}
}
+func (d *Distributor) truncateLines(vContext validationContext, stream *logproto.Stream) {
+ if !vContext.maxLineSizeTruncate {
+ return
+ }
+
+ var truncatedSamples, truncatedBytes int
+ for i, e := range stream.Entries {
+ if maxSize := vContext.maxLineSize; maxSize != 0 && len(e.Line) > maxSize {
+ stream.Entries[i].Line = e.Line[:maxSize]
+
+ truncatedSamples++
+ truncatedBytes = len(e.Line) - maxSize
+ }
+ }
+
+ validation.MutatedSamples.WithLabelValues(validation.LineTooLong, vContext.userID).Add(float64(truncatedSamples))
+ validation.MutatedBytes.WithLabelValues(validation.LineTooLong, vContext.userID).Add(float64(truncatedBytes))
+}
+
// TODO taken from Cortex, see if we can refactor out an usable interface.
func (d *Distributor) sendSamples(ctx context.Context, ingester ring.InstanceDesc, streamTrackers []*streamTracker, pushTracker *pushTracker) {
err := d.sendSamplesErr(ctx, ingester, streamTrackers)
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index d7dfe2220a542..6127ecd7080ed 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -115,6 +115,29 @@ func Test_SortLabelsOnPush(t *testing.T) {
require.Equal(t, `{a="b", buzz="f"}`, ingester.pushed[0].Streams[0].Labels)
}
+func Test_TruncateLogLines(t *testing.T) {
+ setup := func() (*validation.Limits, *mockIngester) {
+ limits := &validation.Limits{}
+ flagext.DefaultValues(limits)
+
+ limits.EnforceMetricName = false
+ limits.MaxLineSize = 5
+ limits.MaxLineSizeTruncate = true
+ return limits, &mockIngester{}
+ }
+
+ t.Run("it truncates lines to MaxLineSize when MaxLineSizeTruncate is true", func(t *testing.T) {
+ limits, ingester := setup()
+
+ d := prepare(t, limits, nil, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ defer services.StopAndAwaitTerminated(context.Background(), d) //nolint:errcheck
+
+ _, err := d.Push(ctx, makeWriteRequest(1, 10))
+ require.NoError(t, err)
+ require.Len(t, ingester.pushed[0].Streams[0].Entries[0].Line, 5)
+ })
+}
+
func Benchmark_SortLabelsOnPush(b *testing.B) {
limits := &validation.Limits{}
flagext.DefaultValues(limits)
@@ -162,6 +185,31 @@ func Benchmark_Push(b *testing.B) {
}
}
+func Benchmark_PushWithLineTruncation(b *testing.B) {
+ limits := &validation.Limits{}
+ flagext.DefaultValues(limits)
+
+ limits.IngestionRateMB = math.MaxInt32
+ limits.MaxLineSizeTruncate = true
+ limits.MaxLineSize = 50
+
+ ingester := &mockIngester{}
+ d := prepare(&testing.T{}, limits, nil, func(addr string) (ring_client.PoolClient, error) { return ingester, nil })
+ defer services.StopAndAwaitTerminated(context.Background(), d) //nolint:errcheck
+ request := makeWriteRequest(100000, 100)
+
+ b.ResetTimer()
+ b.ReportAllocs()
+
+ for n := 0; n < b.N; n++ {
+
+ _, err := d.Push(ctx, request)
+ if err != nil {
+ require.NoError(b, err)
+ }
+ }
+}
+
func TestDistributor_PushIngestionRateLimiter(t *testing.T) {
type testPush struct {
bytes int
diff --git a/pkg/distributor/limits.go b/pkg/distributor/limits.go
index 520cc514fe4a3..559f4b55051c3 100644
--- a/pkg/distributor/limits.go
+++ b/pkg/distributor/limits.go
@@ -5,6 +5,7 @@ import "time"
// Limits is an interface for distributor limits/related configs
type Limits interface {
MaxLineSize(userID string) int
+ MaxLineSizeTruncate(userID string) bool
EnforceMetricName(userID string) bool
MaxLabelNamesPerSeries(userID string) int
MaxLabelNameLength(userID string) int
diff --git a/pkg/distributor/validator.go b/pkg/distributor/validator.go
index 5c6ab2374cfdc..b095bb1fdde80 100644
--- a/pkg/distributor/validator.go
+++ b/pkg/distributor/validator.go
@@ -28,7 +28,9 @@ type validationContext struct {
rejectOldSample bool
rejectOldSampleMaxAge int64
creationGracePeriod int64
- maxLineSize int
+
+ maxLineSize int
+ maxLineSizeTruncate bool
maxLabelNamesPerSeries int
maxLabelNameLength int
@@ -45,6 +47,7 @@ func (v Validator) getValidationContextFor(userID string) validationContext {
rejectOldSampleMaxAge: now.Add(-v.RejectOldSamplesMaxAge(userID)).UnixNano(),
creationGracePeriod: now.Add(v.CreationGracePeriod(userID)).UnixNano(),
maxLineSize: v.MaxLineSize(userID),
+ maxLineSizeTruncate: v.MaxLineSizeTruncate(userID),
maxLabelNamesPerSeries: v.MaxLabelNamesPerSeries(userID),
maxLabelNameLength: v.MaxLabelNameLength(userID),
maxLabelValueLength: v.MaxLabelValueLength(userID),
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index 6f6a27c1ecb4c..03ad3670c0bf9 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -39,6 +39,7 @@ type Limits struct {
CreationGracePeriod model.Duration `yaml:"creation_grace_period" json:"creation_grace_period"`
EnforceMetricName bool `yaml:"enforce_metric_name" json:"enforce_metric_name"`
MaxLineSize flagext.ByteSize `yaml:"max_line_size" json:"max_line_size"`
+ MaxLineSizeTruncate bool `yaml:"max_line_size_truncate" json:"max_line_size_truncate"`
// Ingester enforced limits.
MaxLocalStreamsPerUser int `yaml:"max_streams_per_user" json:"max_streams_per_user"`
@@ -88,6 +89,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.Float64Var(&l.IngestionRateMB, "distributor.ingestion-rate-limit-mb", 4, "Per-user ingestion rate limit in sample size per second. Units in MB.")
f.Float64Var(&l.IngestionBurstSizeMB, "distributor.ingestion-burst-size-mb", 6, "Per-user allowed ingestion burst size (in sample size). Units in MB.")
f.Var(&l.MaxLineSize, "distributor.max-line-size", "maximum line length allowed, i.e. 100mb. Default (0) means unlimited.")
+ f.BoolVar(&l.MaxLineSizeTruncate, "distributor.max-line-size-truncate", false, "Whether to truncate lines that exceed max_line_size")
f.IntVar(&l.MaxLabelNameLength, "validation.max-length-label-name", 1024, "Maximum length accepted for label names")
f.IntVar(&l.MaxLabelValueLength, "validation.max-length-label-value", 2048, "Maximum length accepted for label value. This setting also applies to the metric name")
f.IntVar(&l.MaxLabelNamesPerSeries, "validation.max-label-names-per-series", 30, "Maximum number of label names per series.")
@@ -335,6 +337,11 @@ func (o *Overrides) MaxLineSize(userID string) int {
return o.getOverridesForUser(userID).MaxLineSize.Val()
}
+// MaxLineSizeShouldTruncate returns whether lines longer than max should be truncated.
+func (o *Overrides) MaxLineSizeTruncate(userID string) bool {
+ return o.getOverridesForUser(userID).MaxLineSizeTruncate
+}
+
// MaxEntriesLimitPerQuery returns the limit to number of entries the querier should return per query.
func (o *Overrides) MaxEntriesLimitPerQuery(userID string) int {
return o.getOverridesForUser(userID).MaxEntriesLimitPerQuery
diff --git a/pkg/validation/limits_test.go b/pkg/validation/limits_test.go
index 6a7a3101844ce..9605bbd44f5ef 100644
--- a/pkg/validation/limits_test.go
+++ b/pkg/validation/limits_test.go
@@ -44,6 +44,7 @@ reject_old_samples_max_age: 40s
creation_grace_period: 50s
enforce_metric_name: true
max_line_size: 60
+max_line_size_truncate: true
max_streams_per_user: 70
max_global_streams_per_user: 80
max_chunks_per_query: 90
@@ -76,6 +77,7 @@ per_tenant_override_period: 230s
"creation_grace_period": "50s",
"enforce_metric_name": true,
"max_line_size": 60,
+ "max_line_size_truncate": true,
"max_streams_per_user": 70,
"max_global_streams_per_user": 80,
"max_chunks_per_query": 90,
diff --git a/pkg/validation/validate.go b/pkg/validation/validate.go
index 409f3047c3fb4..c8f2551807f15 100644
--- a/pkg/validation/validate.go
+++ b/pkg/validation/validate.go
@@ -5,7 +5,7 @@ import (
)
const (
- discardReasonLabel = "reason"
+ reasonLabel = "reason"
// InvalidLabels is a reason for discarding log lines which have labels that cannot be parsed.
InvalidLabels = "invalid_labels"
MissingLabels = "missing_labels"
@@ -43,6 +43,26 @@ const (
DuplicateLabelNamesErrorMsg = "stream '%s' has duplicate label name: '%s'"
)
+// MutatedSamples is a metric of the total number of lines mutated, by reason.
+var MutatedSamples = prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Namespace: "loki",
+ Name: "mutated_samples_total",
+ Help: "The total number of samples that have been mutated.",
+ },
+ []string{reasonLabel, "truncated"},
+)
+
+// MutatedBytes is a metric of the total mutated bytes, by reason.
+var MutatedBytes = prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Namespace: "loki",
+ Name: "mutated_bytes_total",
+ Help: "The total number of bytes that have been mutated.",
+ },
+ []string{reasonLabel, "truncated"},
+)
+
// DiscardedBytes is a metric of the total discarded bytes, by reason.
var DiscardedBytes = prometheus.NewCounterVec(
prometheus.CounterOpts{
@@ -50,7 +70,7 @@ var DiscardedBytes = prometheus.NewCounterVec(
Name: "discarded_bytes_total",
Help: "The total number of bytes that were discarded.",
},
- []string{discardReasonLabel, "tenant"},
+ []string{reasonLabel, "tenant"},
)
// DiscardedSamples is a metric of the number of discarded samples, by reason.
@@ -60,7 +80,7 @@ var DiscardedSamples = prometheus.NewCounterVec(
Name: "discarded_samples_total",
Help: "The total number of samples that were discarded.",
},
- []string{discardReasonLabel, "tenant"},
+ []string{reasonLabel, "tenant"},
)
func init() {
|
distributor
|
Truncate rather than drop log lines (#4051)
|
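The distributor change in the record above truncates over-long entries in place before validation and counts the mutations in the new `loki_mutated_samples_total` and `loki_mutated_bytes_total` metrics. A self-contained sketch of the per-entry loop with a simplified `Entry` type; one detail worth noting is that the sketch accumulates the byte count with `+=`, while the diff assigns with `=`, which records only the last truncated entry's overflow:

```go
package main

import "fmt"

type Entry struct{ Line string }

// truncateLines shortens every entry longer than maxSize and reports how
// many entries were mutated and how many bytes were cut. Truncation is
// byte-based, as in the diff, so it can split a multi-byte UTF-8 rune.
func truncateLines(entries []Entry, maxSize int) (samples, bytes int) {
	if maxSize <= 0 {
		return 0, 0 // zero means unlimited
	}
	for i, e := range entries {
		if len(e.Line) > maxSize {
			entries[i].Line = e.Line[:maxSize]
			samples++
			bytes += len(e.Line) - maxSize // e still holds the original line
		}
	}
	return samples, bytes
}

func main() {
	es := []Entry{{"short"}, {"this line is far too long"}}
	s, b := truncateLines(es, 10)
	fmt.Println(s, b, es[1].Line) // 1 15 this line
}
```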
e83b339627b024ff4684d596ed4afb45428b9a46
|
2025-01-21 19:27:06
|
renovate[bot]
|
fix(deps): update module github.com/aws/aws-sdk-go-v2/config to v1.29.1 (#15835)
| false
|
diff --git a/tools/lambda-promtail/go.mod b/tools/lambda-promtail/go.mod
index 2bfdde52595f7..bcd3cad7e8494 100644
--- a/tools/lambda-promtail/go.mod
+++ b/tools/lambda-promtail/go.mod
@@ -5,7 +5,7 @@ go 1.22
require (
github.com/aws/aws-lambda-go v1.47.0
github.com/aws/aws-sdk-go-v2 v1.33.0
- github.com/aws/aws-sdk-go-v2/config v1.29.0
+ github.com/aws/aws-sdk-go-v2/config v1.29.1
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.1
github.com/go-kit/log v0.2.1
github.com/gogo/protobuf v1.3.2
@@ -25,7 +25,7 @@ require (
github.com/alecthomas/units v0.0.0-20240626203959-61d1e3462e30 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 // indirect
- github.com/aws/aws-sdk-go-v2/credentials v1.17.53 // indirect
+ github.com/aws/aws-sdk-go-v2/credentials v1.17.54 // indirect
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.28 // indirect
@@ -35,9 +35,9 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.5.1 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.9 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9 // indirect
- github.com/aws/aws-sdk-go-v2/service/sso v1.24.10 // indirect
- github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9 // indirect
- github.com/aws/aws-sdk-go-v2/service/sts v1.33.8 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sso v1.24.11 // indirect
+ github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.10 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sts v1.33.9 // indirect
github.com/aws/smithy-go v1.22.1 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500 // indirect
diff --git a/tools/lambda-promtail/go.sum b/tools/lambda-promtail/go.sum
index 72367bf3c71e1..3bab390478f6b 100644
--- a/tools/lambda-promtail/go.sum
+++ b/tools/lambda-promtail/go.sum
@@ -52,10 +52,10 @@ github.com/aws/aws-sdk-go-v2 v1.33.0 h1:Evgm4DI9imD81V0WwD+TN4DCwjUMdc94TrduMLbg
github.com/aws/aws-sdk-go-v2 v1.33.0/go.mod h1:P5WJBrYqqbWVaOxgH0X/FYYD47/nooaPOZPlQdmiN2U=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7 h1:lL7IfaFzngfx0ZwUGOZdsFFnQ5uLvR0hWqqhyE7Q9M8=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.7/go.mod h1:QraP0UcVlQJsmHfioCrveWOC1nbiWUl3ej08h4mXWoc=
-github.com/aws/aws-sdk-go-v2/config v1.29.0 h1:Vk/u4jof33or1qAQLdofpjKV7mQQT7DcUpnYx8kdmxY=
-github.com/aws/aws-sdk-go-v2/config v1.29.0/go.mod h1:iXAZK3Gxvpq3tA+B9WaDYpZis7M8KFgdrDPMmHrgbJM=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.53 h1:lwrVhiEDW5yXsuVKlFVUnR2R50zt2DklhOyeLETqDuE=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.53/go.mod h1:CkqM1bIw/xjEpBMhBnvqUXYZbpCFuj6dnCAyDk2AtAY=
+github.com/aws/aws-sdk-go-v2/config v1.29.1 h1:JZhGawAyZ/EuJeBtbQYnaoftczcb2drR2Iq36Wgz4sQ=
+github.com/aws/aws-sdk-go-v2/config v1.29.1/go.mod h1:7bR2YD5euaxBhzt2y/oDkt3uNRb6tjFp98GlTFueRwk=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.54 h1:4UmqeOqJPvdvASZWrKlhzpRahAulBfyTJQUaYy4+hEI=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.54/go.mod h1:RTdfo0P0hbbTxIhmQrOsC/PquBZGabEPnCaxxKRPSnI=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24 h1:5grmdTdMsovn9kPZPI23Hhvp0ZyNm5cRO+IZFIYiAfw=
github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.24/go.mod h1:zqi7TVKTswH3Ozq28PkmBmgzG1tona7mo9G2IJg4Cis=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.28 h1:igORFSiH3bfq4lxKFkTSYDhJEUCYo6C8VKiWJjYwQuQ=
@@ -76,12 +76,12 @@ github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9 h1:2aInXbh02XsbO0
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.9/go.mod h1:dgXS1i+HgWnYkPXqNoPIPKeUsUUYHaUbThC90aDnNiE=
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.1 h1:OzmyfYGiMCOIAq5pa0KWcaZoA9F8FqajOJevh+hhFdY=
github.com/aws/aws-sdk-go-v2/service/s3 v1.73.1/go.mod h1:K+0a0kWDHAUXBH8GvYGS3cQRwIuRjO9bMWUz6vpNCaU=
-github.com/aws/aws-sdk-go-v2/service/sso v1.24.10 h1:DyZUj3xSw3FR3TXSwDhPhuZkkT14QHBiacdbUVcD0Dg=
-github.com/aws/aws-sdk-go-v2/service/sso v1.24.10/go.mod h1:Ro744S4fKiCCuZECXgOi760TiYylUM8ZBf6OGiZzJtY=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9 h1:I1TsPEs34vbpOnR81GIcAq4/3Ud+jRHVGwx6qLQUHLs=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.9/go.mod h1:Fzsj6lZEb8AkTE5S68OhcbBqeWPsR8RnGuKPr8Todl8=
-github.com/aws/aws-sdk-go-v2/service/sts v1.33.8 h1:pqEJQtlKWvnv3B6VRt60ZmsHy3SotlEBvfUBPB1KVcM=
-github.com/aws/aws-sdk-go-v2/service/sts v1.33.8/go.mod h1:f6vjfZER1M17Fokn0IzssOTMT2N8ZSq+7jnNF0tArvw=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.11 h1:kuIyu4fTT38Kj7YCC7ouNbVZSSpqkZ+LzIfhCr6Dg+I=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.11/go.mod h1:Ro744S4fKiCCuZECXgOi760TiYylUM8ZBf6OGiZzJtY=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.10 h1:l+dgv/64iVlQ3WsBbnn+JSbkj01jIi+SM0wYsj3y/hY=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.10/go.mod h1:Fzsj6lZEb8AkTE5S68OhcbBqeWPsR8RnGuKPr8Todl8=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.9 h1:BRVDbewN6VZcwr+FBOszDKvYeXY1kJ+GGMCcpghlw0U=
+github.com/aws/aws-sdk-go-v2/service/sts v1.33.9/go.mod h1:f6vjfZER1M17Fokn0IzssOTMT2N8ZSq+7jnNF0tArvw=
github.com/aws/smithy-go v1.22.1 h1:/HPHZQ0g7f4eUeK6HKglFz8uwVfZKgoI25rb/J+dnro=
github.com/aws/smithy-go v1.22.1/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
|
fix
|
update module github.com/aws/aws-sdk-go-v2/config to v1.29.1 (#15835)
|
10283d9bc16bb1f4916eb6a168c431488c161d54
|
2022-06-15 00:02:45
|
Karen Miller
|
docs: edit the CHANGELOG (#6386)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 9d2cc6eaa6dff..0bbbf592373c3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,11 @@
## Main
+* [6372](https://github.com/grafana/loki/pull/6372) **splitice**: Add support for numbers in JSON fields.
+* [6105](https://github.com/grafana/loki/pull/6105) **rutgerke** Export metrics for the Promtail journal target.
+* [6099](https://github.com/grafana/loki/pull/6099) **cstyan**: Drop lines with malformed JSON in Promtail JSON pipeline stage.
+* [6136](https://github.com/grafana/loki/pull/6136) **periklis**: Add support for alertmanager header authorization.
+* [6102](https://github.com/grafana/loki/pull/6102) **timchenko-a**: Add multi-tenancy support to lambda-promtail.
+* [5971](https://github.com/grafana/loki/pull/5971) **kavirajk**: Extend the `metrics.go` recording of statistics about metadata queries to include labels and series queries.
* [6372](https://github.com/grafana/loki/pull/6372) **splitice**: Add support for numbers in JSON fields
* [6105](https://github.com/grafana/loki/pull/6105) **rutgerke** Export metrics for the promtail journal target
* [6179](https://github.com/grafana/loki/pull/6179) **chaudum**: Add new HTTP endpoint to delete ingester ring token file and shutdown process gracefully
@@ -8,68 +14,69 @@
* [6102](https://github.com/grafana/loki/pull/6102) **timchenko-a**: Add multi-tenancy support to lambda-promtail
* [5971](https://github.com/grafana/loki/pull/5971) **kavirajk**: Record statistics about metadata queries such as labels and series queries in `metrics.go` as well
* [5790](https://github.com/grafana/loki/pull/5790) **chaudum**: Add UDP support for Promtail's syslog target.
-* [5984](https://github.com/grafana/loki/pull/5984) **dannykopping** and **salvacorts**: Querier: prevent unnecessary calls to ingesters.
-* [5943](https://github.com/grafana/loki/pull/5943) **tpaschalis**: Add support for exclusion patterns in Promtail's static_config
+* [5984](https://github.com/grafana/loki/pull/5984) **dannykopping** and **salvacorts**: Improve query performance by preventing unnecessary querying of ingesters when the query data is old enough to be in object storage.
+* [5943](https://github.com/grafana/loki/pull/5943) **tpaschalis**: Add configuration support for excluding configuration files when instantiating Promtail.
* [5879](https://github.com/grafana/loki/pull/5879) **MichelHollands**: Remove lines matching delete request expression when using "filter-and-delete" deletion mode.
* [5899](https://github.com/grafana/loki/pull/5899) **simonswine**: Update go image to 1.17.9.
-* [5888](https://github.com/grafana/loki/pull/5888) **Papawy** Fix common config net interface name overwritten by ring common config
+* [5888](https://github.com/grafana/loki/pull/5888) **Papawy** Fix common configuration block net interface name when overwritten by ring common configuration.
* [5848](https://github.com/grafana/loki/pull/5848) **arcosx**: Add Baidu AI Cloud as a storage backend choice.
* [5799](https://github.com/grafana/loki/pull/5799) **cyriltovena** Fix deduping issues when multiple entries with the same timestamp exist.
-* [5799](https://github.com/grafana/loki/pull/5799) **cyriltovena** Fixes deduping issues when multiple entries exists with the same timestamp.
* [5780](https://github.com/grafana/loki/pull/5780) **simonswine**: Update alpine image to 3.15.4.
* [5715](https://github.com/grafana/loki/pull/5715) **chaudum** Add option to push RFC5424 syslog messages from Promtail in syslog scrape target.
-* [5696](https://github.com/grafana/loki/pull/5696) **paullryan** don't block scraping of new logs from cloudflare within promtail if an error is received from cloudflare about too early logs.
+* [5696](https://github.com/grafana/loki/pull/5696) **paullryan** Don't block scraping of new logs from Cloudflare within Promtail if an error is received from Cloudflare about logs from too early of a time period.
* [5662](https://github.com/grafana/loki/pull/5662) **ssncferreira** **chaudum** Improve performance of instant queries by splitting range into multiple subqueries that are executed in parallel.
* [5685](https://github.com/grafana/loki/pull/5625) **chaudum** Fix bug in push request parser that allowed users to send arbitrary non-string data as "log line".
-* [5707](https://github.com/grafana/loki/pull/5707) **franzwong** Promtail: Rename config name limit_config to limits_config.
+* [5707](https://github.com/grafana/loki/pull/5707) **franzwong** Rename Promtail configuration parameter from `limit_config` to `limits_config`.
* [5626](https://github.com/grafana/loki/pull/5626) **jeschkies** Support multi-tenant select logs and samples queries.
-* [5622](https://github.com/grafana/loki/pull/5622) **chaudum**: Fix bug in query splitter that caused `interval` query parameter to be ignored and therefore returning more logs than expected.
-* [5521](https://github.com/grafana/loki/pull/5521) **cstyan**: Move stream lag configuration to top level clients config struct and refactor stream lag metric, this resolves a bug with duplicate metric collection when a single Promtail binary is running multiple Promtail clients.
-* [5568](https://github.com/grafana/loki/pull/5568) **afayngelerindbx**: Fix canary panics due to concurrent execution of `confirmMissing`
-* [5552](https://github.com/grafana/loki/pull/5552) **jiachengxu**: Loki mixin: add `DiskSpaceUtilizationPanel`
-* [5541](https://github.com/grafana/loki/pull/5541) **bboreham**: Queries: reject very deeply nested regexps which could crash Loki.
-* [5536](https://github.com/grafana/loki/pull/5536) **jiachengxu**: Loki mixin: make labelsSelector in loki chunks dashboards configurable
-* [5535](https://github.com/grafana/loki/pull/5535) **jiachengxu**: Loki mixins: use labels selector for loki chunks dashboard
-* [5507](https://github.com/grafana/loki/pull/5507) **MichelHollands**: Remove extra param in call for inflightRequests metric.
-* [5481](https://github.com/grafana/loki/pull/5481) **MichelHollands**: Add a DeletionMode config variable to specify the delete mode and validate match parameters.
-* [5356](https://github.com/grafana/loki/pull/5356) **jbschami**: Enhance lambda-promtail to support adding extra labels from an environment variable value
-* [5409](https://github.com/grafana/loki/pull/5409) **ldb**: Enable best effort parsing for Syslog messages
-* [5392](https://github.com/grafana/loki/pull/5392) **MichelHollands**: Etcd credentials are parsed as secrets instead of plain text now.
-* [5361](https://github.com/grafana/loki/pull/5361) **ctovena**: Add usage report to grafana.com.
+* [5622](https://github.com/grafana/loki/pull/5622) **chaudum**: Fixed a bug in the query splitter that caused the `interval` query parameter to be ignored and therefore returning more logs than expected.
+* [5521](https://github.com/grafana/loki/pull/5521) **cstyan**: Moved stream lag configuration to the top-level clients configuration structure, and refactored stream lag metric. This resolves a bug with duplicate metric collection when a single Promtail binary is running multiple Promtail clients.
+* [5568](https://github.com/grafana/loki/pull/5568) **afayngelerindbx**: Fix Loki Canary panics that were due to concurrent execution of `confirmMissing`.
+* [5552](https://github.com/grafana/loki/pull/5552) **jiachengxu**: Add `DiskSpaceUtilizationPanel` to the Loki mixin.
+* [5541](https://github.com/grafana/loki/pull/5541) **bboreham**: Return an error for queries that define a nested regular expression that causes a recursion depth of 1000 or greater when evaluated, which would hit a Go implementation limit.
+* [5536](https://github.com/grafana/loki/pull/5536) **jiachengxu**: Make `labelsSelector` in the Loki chunks dashboards configurable in the Loki mixin.
+* [5535](https://github.com/grafana/loki/pull/5535) **jiachengxu**: Use a labels selector for the Loki chunks dashboard in the Loki mixin.
+* [5507](https://github.com/grafana/loki/pull/5507) **MichelHollands**: Eliminate a panic caused by an extra parameter in the call for the `inflightRequests` metric.
+* [5481](https://github.com/grafana/loki/pull/5481) **MichelHollands**: Add a DeletionMode configuration parameter to specify the delete mode and validate match parameters.
+* [5356](https://github.com/grafana/loki/pull/5356) **jbschami**: Enhance lambda-promtail to support adding extra labels from an environment variable value.
+* [5409](https://github.com/grafana/loki/pull/5409) **ldb**: Enable best effort parsing for syslog messages.
+* [5392](https://github.com/grafana/loki/pull/5392) **MichelHollands**: Etcd credentials are parsed as secrets instead of plain text.
+* [5361](https://github.com/grafana/loki/pull/5361) **ctovena**: Send a usage report to grafana.com.
+* [5406](https://github.com/grafana/loki/pull/5406) **ctovena**: Revise the configuration parameters that configure the usage report to grafana.com.
* [5289](https://github.com/grafana/loki/pull/5289) **ctovena**: Fix deduplication bug in queries when mutating labels.
-* [5302](https://github.com/grafana/loki/pull/5302) **MasslessParticle** Update azure blobstore client to use new sdk.
+* [5302](https://github.com/grafana/loki/pull/5302) **MasslessParticle** Update the Azure blobstore client to use a new SDK.
* [5243](https://github.com/grafana/loki/pull/5290) **ssncferreira**: Update Promtail to support duration string formats.
* [5266](https://github.com/grafana/loki/pull/5266) **jeschkies**: Write Promtail position file atomically on Unix.
* [5280](https://github.com/grafana/loki/pull/5280) **jeschkies**: Fix Docker target connection loss.
-* [5243](https://github.com/grafana/loki/pull/5243) **owen-d**: moves `querier.split-queries-by-interval` to limits code only.
+* [5243](https://github.com/grafana/loki/pull/5243) **owen-d**: Move `querier.split-queries-by-interval` to limits code only.
* [5139](https://github.com/grafana/loki/pull/5139) **DylanGuedes**: Drop support for legacy configuration rules format.
-* [5262](https://github.com/grafana/loki/pull/5262) **MichelHollands**: Remove the labelFilter field
+* [5262](https://github.com/grafana/loki/pull/5262) **MichelHollands**: Remove the labelFilter field.
* [4911](https://github.com/grafana/loki/pull/4911) **jeschkies**: Support Docker service discovery in Promtail.
-* [5107](https://github.com/grafana/loki/pull/5107) **chaudum** Fix bug in fluentd plugin that caused log lines containing non UTF-8 characters to be dropped.
-* [5148](https://github.com/grafana/loki/pull/5148) **chaudum** Add periodic task to prune old expired items from the FIFO cache to free up memory.
-* [5187](https://github.com/grafana/loki/pull/5187) **aknuds1** Rename metric `cortex_experimental_features_in_use_total` to `loki_experimental_features_in_use_total` and metric `log_messages_total` to `loki_log_messages_total`.
-* [5170](https://github.com/grafana/loki/pull/5170) **chaudum** Fix deadlock in Promtail caused when targets got removed from a target group by the discovery manager.
-* [5163](https://github.com/grafana/loki/pull/5163) **chaudum** Fix regression in fluentd plugin introduced with #5107 that caused `NoMethodError` when parsing non-string values of log lines.
-* [5144](https://github.com/grafana/loki/pull/5144) **dannykopping** Ruler: fix remote write basic auth credentials.
-* [5091](https://github.com/grafana/loki/pull/5091) **owen-d**: Changes `ingester.concurrent-flushes` default to 32
-* [5031](https://github.com/grafana/loki/pull/5031) **liguozhong**: Promtail: Add global read rate limiting.
-* [4879](https://github.com/grafana/loki/pull/4879) **cyriltovena**: LogQL: add __line__ function to | line_format template.
-* [5081](https://github.com/grafana/loki/pull/5081) **SasSwart**: Add the option to configure memory ballast for Loki
+* [5107](https://github.com/grafana/loki/pull/5107) **chaudum** Fix bug in the fluentd plugin that caused log lines containing non UTF-8 characters to be dropped.
+* [5148](https://github.com/grafana/loki/pull/5148) **chaudum** Added a periodic task to prune old expired items from the FIFO cache, in order to free up memory.
+* [5187](https://github.com/grafana/loki/pull/5187) **aknuds1** Rename metrics `cortex_experimental_features_in_use_total` to `loki_experimental_features_in_use_total` and metric `log_messages_total` to `loki_log_messages_total`.
+* [5170](https://github.com/grafana/loki/pull/5170) **chaudum** Eliminate a deadlock in Promtail caused when targets got removed from a target group by the discovery manager.
+* [5163](https://github.com/grafana/loki/pull/5163) **chaudum** Fixed a regression in the fluentd plugin introduced with PR 5107 that caused `NoMethodError` when parsing non-string values of log lines.
+* [5144](https://github.com/grafana/loki/pull/5144) **dannykopping** Ruler: Fix remote write basic authorization credentials.
+* [5091](https://github.com/grafana/loki/pull/5091) **owen-d**: Changes the `ingester.concurrent-flushes` default to 32.
+* [5031](https://github.com/grafana/loki/pull/5031) **liguozhong**: Added global read rate limiting to Promtail.
+* [4879](https://github.com/grafana/loki/pull/4879) **cyriltovena**: LogQL: Add the `__line__` function to the `| line_format` template.
+* [5081](https://github.com/grafana/loki/pull/5081) **SasSwart**: Add the option to configure memory ballast for Loki.
* [5085](https://github.com/grafana/loki/pull/5085) **aknuds1**: Upgrade Cortex to [e0807c4eb487](https://github.com/cortexproject/cortex/compare/4e9fc3a2b5ab..e0807c4eb487) and Prometheus to [692a54649ed7](https://github.com/prometheus/prometheus/compare/2a3d62ac8456..692a54649ed7)
-* [5067](https://github.com/grafana/loki/pull/5057) **cstyan**: Add a metric to Azure Blob Storage client to track total egress bytes
-* [5065](https://github.com/grafana/loki/pull/5065) **AndreZiviani**: lambda-promtail: Add ability to ingest logs from S3
-* [4950](https://github.com/grafana/loki/pull/4950) **DylanGuedes**: Implement common instance addr/net interface
-* [4949](https://github.com/grafana/loki/pull/4949) **ssncferreira**: Add query `queueTime` metric to statistics and metrics.go
-* [4938](https://github.com/grafana/loki/pull/4938) **DylanGuedes**: Implement ring status page for the distributor
-* [5023](https://github.com/grafana/loki/pull/5023) **ssncferreira**: Move `querier.split-queries-by-interval` to a per-tenant configuration
-* [4993](https://github.com/grafana/loki/pull/4926) **thejosephstevens**: Fix parent of wal and wal_cleaner in loki ruler config docs
-* [4933](https://github.com/grafana/loki/pull/4933) **jeschkies**: Support matchers in series label values query.
-* [4926](https://github.com/grafana/loki/pull/4926) **thejosephstevens**: Fix comment in Loki module loading for accuracy
-* [4920](https://github.com/grafana/loki/pull/4920) **chaudum**: Add `-list-targets` command line flag to list all available run targets
-* [4860](https://github.com/grafana/loki/pull/4860) **cyriltovena**: Add rate limiting and metrics to hedging
-* [4865](https://github.com/grafana/loki/pull/4865) **taisho6339**: Fix duplicate registry.MustRegister call in Promtail Kafka
-* [4845](https://github.com/grafana/loki/pull/4845) **chaudum** Return error responses consistently as JSON
-* [6163](https://github.com/grafana/loki/pull/6163) **jburnham**: LogQL: add `default` sprig template function in logql label/line formatter
+* [5067](https://github.com/grafana/loki/pull/5057) **cstyan**: Add a metric to the Azure Blob Storage client to track total egress bytes.
+* [5065](https://github.com/grafana/loki/pull/5065) **AndreZiviani**: lambda-promtail: Add ability to ingest logs from S3.
+* [4950](https://github.com/grafana/loki/pull/4950) **DylanGuedes**: Implement a common instance addr/net interface.
+* [4949](https://github.com/grafana/loki/pull/4949) **ssncferreira**: Add the query `queueTime` metric to statistics and metrics.go.
+* [4938](https://github.com/grafana/loki/pull/4938) **DylanGuedes**: Implement a ring status page for the distributor.
+* [5023](https://github.com/grafana/loki/pull/5023) **ssncferreira**: Move `querier.split-queries-by-interval` to a per-tenant configuration.
+* [4993](https://github.com/grafana/loki/pull/4926) **thejosephstevens**: Fix the parent of the WAL and wal_cleaner in the Loki ruler configuration docs.
+* [4933](https://github.com/grafana/loki/pull/4933) **jeschkies**: Support matchers in the series label values query.
+* [4926](https://github.com/grafana/loki/pull/4926) **thejosephstevens**: Fix a comment in Loki module loading for accuracy.
+* [4920](https://github.com/grafana/loki/pull/4920) **chaudum**: Add the `-list-targets` command line flag to list all available run targets.
+* [4860](https://github.com/grafana/loki/pull/4860) **cyriltovena**: Add rate limiting and metrics to hedging.
+* [4865](https://github.com/grafana/loki/pull/4865) **taisho6339**: Remove a duplicate `registry.MustRegister` call in Promtail Kafka.
+* [4845](https://github.com/grafana/loki/pull/4845) **chaudum** Return error responses as JSON throughout the code base.
+* [6163](https://github.com/grafana/loki/pull/6163) **jburnham**: LogQL: Add a `default` sprig template function in LogQL label/line formatter.
+
## Unreleased
### All Changes
|
docs
|
edit the CHANGELOG (#6386)
|
358b8965c4fe5e6d95d09aa3166a02213974bd64
|
2023-05-18 17:43:41
|
Dylan Guedes
|
distributor: Add org_id to log error (#9475)
| false
|
diff --git a/pkg/ingester/instance.go b/pkg/ingester/instance.go
index 9c872a5397f70..0ec2a54271690 100644
--- a/pkg/ingester/instance.go
+++ b/pkg/ingester/instance.go
@@ -253,7 +253,7 @@ func (i *instance) createStream(pushReqStream logproto.Stream, record *wal.Recor
bytes += len(e.Line)
}
validation.DiscardedBytes.WithLabelValues(validation.StreamLimit, i.instanceID).Add(float64(bytes))
- return nil, httpgrpc.Errorf(http.StatusTooManyRequests, validation.StreamLimitErrorMsg)
+ return nil, httpgrpc.Errorf(http.StatusTooManyRequests, validation.StreamLimitErrorMsg, i.instanceID)
}
labels, err := syntax.ParseLabels(pushReqStream.Labels)
diff --git a/pkg/validation/validate.go b/pkg/validation/validate.go
index 399f192a47e16..a21825118baeb 100644
--- a/pkg/validation/validate.go
+++ b/pkg/validation/validate.go
@@ -27,7 +27,7 @@ const (
// StreamLimit is a reason for discarding lines when we can't create a new stream
// because the limit of active streams has been reached.
StreamLimit = "stream_limit"
- StreamLimitErrorMsg = "Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased"
+ StreamLimitErrorMsg = "Maximum active stream limit exceeded, reduce the number of active streams (reduce labels or reduce label values), or contact your Loki administrator to see if the limit can be increased, user: '%s'"
// StreamRateLimit is a reason for discarding lines when the streams own rate limit is hit
// rather than the overall ingestion rate limit.
StreamRateLimit = "per_stream_rate_limit"
|
distributor
|
Add org_id to log error (#9475)
|
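The record above turns `StreamLimitErrorMsg` into a format string with a `%s` verb so the tenant ID is embedded in the error text; once the constant carries a verb, every call site must pass the matching argument or the rendered message ends in `%!s(MISSING)`. A minimal sketch of the pattern using plain `fmt` to stay self-contained (`httpgrpc.Errorf` behaves like `fmt.Errorf` with an attached HTTP status code):

```go
package main

import "fmt"

// A shared message constant that now carries a verb for the tenant ID.
const streamLimitErrorMsg = "Maximum active stream limit exceeded, user: '%s'"

func streamLimitError(tenantID string) error {
	return fmt.Errorf(streamLimitErrorMsg, tenantID)
}

func main() {
	fmt.Println(streamLimitError("org-123"))
}
```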
21135269ad3d6206616970f7222f43dfd7f729ee
|
2024-11-25 16:24:53
|
George Robinson
|
fix: use KafkaEndOffset instead of -1 (#15099)
| false
|
diff --git a/pkg/kafka/partition/reader_service.go b/pkg/kafka/partition/reader_service.go
index d9ea75d38f02a..daa3ff2649a59 100644
--- a/pkg/kafka/partition/reader_service.go
+++ b/pkg/kafka/partition/reader_service.go
@@ -120,7 +120,7 @@ func newReaderService(
consumerFactory: consumerFactory,
logger: log.With(logger, "partition", reader.Partition(), "consumer_group", reader.ConsumerGroup()),
metrics: newServiceMetrics(reg),
- lastProcessedOffset: -1,
+ lastProcessedOffset: kafkaEndOffset,
}
// Create the committer
|
fix
|
use KafkaEndOffset instead of -1 (#15099)
|
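The one-line fix in the record above replaces a bare `-1` with the named sentinel the rest of the reader compares against, so the "nothing processed yet" state is defined in one place. The diff does not show how `kafkaEndOffset` is declared; Kafka clients conventionally use `-1` for the end of a partition and `-2` for the start, so the sketch below assumes that convention with illustrative declarations:

```go
package main

import "fmt"

const (
	kafkaStartOffset int64 = -2 // conventional sentinel: start of partition
	kafkaEndOffset   int64 = -1 // conventional sentinel: end of partition
)

type readerService struct {
	lastProcessedOffset int64
}

func newReaderService() *readerService {
	// The named sentinel documents intent at the call site,
	// where a bare -1 would not.
	return &readerService{lastProcessedOffset: kafkaEndOffset}
}

func main() {
	fmt.Println(newReaderService().lastProcessedOffset) // -1
}
```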
bfc289028ec2a7405cfa8356af4fa1cc2e8f3c81
|
2024-11-11 21:22:55
|
Salva Corts
|
fix(blooms): Copy chunks from ForSeries (#14863)
| false
|
diff --git a/pkg/bloombuild/planner/strategies/chunksize.go b/pkg/bloombuild/planner/strategies/chunksize.go
index 91f7223440d74..456183aa62ef5 100644
--- a/pkg/bloombuild/planner/strategies/chunksize.go
+++ b/pkg/bloombuild/planner/strategies/chunksize.go
@@ -155,20 +155,16 @@ func getBlocksMatchingBounds(metas []bloomshipper.Meta, bounds v1.FingerprintBou
return deduped, nil
}
-type seriesWithChunks struct {
- tsdb tsdb.SingleTenantTSDBIdentifier
- fp model.Fingerprint
- chunks []index.ChunkMeta
-}
-
type seriesBatch struct {
- series []seriesWithChunks
+ tsdb tsdb.SingleTenantTSDBIdentifier
+ series []*v1.Series
size uint64
}
-func newSeriesBatch() seriesBatch {
+func newSeriesBatch(tsdb tsdb.SingleTenantTSDBIdentifier) seriesBatch {
return seriesBatch{
- series: make([]seriesWithChunks, 0, 100),
+ tsdb: tsdb,
+ series: make([]*v1.Series, 0, 100),
}
}
@@ -179,31 +175,14 @@ func (b *seriesBatch) Bounds() v1.FingerprintBounds {
// We assume that the series are sorted by fingerprint.
// This is guaranteed since series are iterated in order by the TSDB.
- return v1.NewBounds(b.series[0].fp, b.series[len(b.series)-1].fp)
+ return v1.NewBounds(b.series[0].Fingerprint, b.series[len(b.series)-1].Fingerprint)
}
func (b *seriesBatch) V1Series() []*v1.Series {
- series := make([]*v1.Series, 0, len(b.series))
- for _, s := range b.series {
- res := &v1.Series{
- Fingerprint: s.fp,
- Chunks: make(v1.ChunkRefs, 0, len(s.chunks)),
- }
- for _, chk := range s.chunks {
- res.Chunks = append(res.Chunks, v1.ChunkRef{
- From: model.Time(chk.MinTime),
- Through: model.Time(chk.MaxTime),
- Checksum: chk.Checksum,
- })
- }
-
- series = append(series, res)
- }
-
- return series
+ return b.series
}
-func (b *seriesBatch) Append(s seriesWithChunks, size uint64) {
+func (b *seriesBatch) Append(s *v1.Series, size uint64) {
b.series = append(b.series, s)
b.size += size
}
@@ -217,10 +196,7 @@ func (b *seriesBatch) Size() uint64 {
}
func (b *seriesBatch) TSDB() tsdb.SingleTenantTSDBIdentifier {
- if len(b.series) == 0 {
- return tsdb.SingleTenantTSDBIdentifier{}
- }
- return b.series[0].tsdb
+ return b.tsdb
}
func (s *ChunkSizeStrategy) sizedSeriesIter(
@@ -230,9 +206,14 @@ func (s *ChunkSizeStrategy) sizedSeriesIter(
targetTaskSizeBytes uint64,
) (iter.Iterator[seriesBatch], int, error) {
batches := make([]seriesBatch, 0, 100)
- currentBatch := newSeriesBatch()
+ var currentBatch seriesBatch
for _, idx := range tsdbsWithGaps {
+ if currentBatch.Len() > 0 {
+ batches = append(batches, currentBatch)
+ }
+ currentBatch = newSeriesBatch(idx.tsdbIdentifier)
+
for _, gap := range idx.gaps {
if err := idx.tsdb.ForSeries(
ctx,
@@ -253,14 +234,22 @@ func (s *ChunkSizeStrategy) sizedSeriesIter(
// AND Adding this series to the batch would exceed the target task size.
if currentBatch.Len() > 0 && currentBatch.Size()+seriesSize > targetTaskSizeBytes {
batches = append(batches, currentBatch)
- currentBatch = newSeriesBatch()
+ currentBatch = newSeriesBatch(idx.tsdbIdentifier)
+ }
+
+ res := &v1.Series{
+ Fingerprint: fp,
+ Chunks: make(v1.ChunkRefs, 0, len(chks)),
+ }
+ for _, chk := range chks {
+ res.Chunks = append(res.Chunks, v1.ChunkRef{
+ From: model.Time(chk.MinTime),
+ Through: model.Time(chk.MaxTime),
+ Checksum: chk.Checksum,
+ })
}
- currentBatch.Append(seriesWithChunks{
- tsdb: idx.tsdbIdentifier,
- fp: fp,
- chunks: chks,
- }, seriesSize)
+ currentBatch.Append(res, seriesSize)
return false
}
},
@@ -269,10 +258,10 @@ func (s *ChunkSizeStrategy) sizedSeriesIter(
return nil, 0, err
}
- // Add the last batch for this TSDB if it's not empty.
+ // Add the last batch for this gap if it's not empty.
if currentBatch.Len() > 0 {
batches = append(batches, currentBatch)
- currentBatch = newSeriesBatch()
+ currentBatch = newSeriesBatch(idx.tsdbIdentifier)
}
}
}
|
fix
|
Copy chunks from ForSeries (#14863)
|
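The bloom-planner fix in the record above stops retaining the `[]index.ChunkMeta` slice handed to the `ForSeries` callback and instead builds a fresh `v1.Series` with copied `ChunkRef`s inside the callback, because the index is free to reuse that backing array once the callback returns. The bug class in miniature, with illustrative types:

```go
package main

import "fmt"

type chunkMeta struct{ MinTime, MaxTime int64 }

// forSeries reuses buf across invocations, the way an index iterator might.
func forSeries(fn func(fp uint64, chks []chunkMeta)) {
	buf := make([]chunkMeta, 1)
	for fp := uint64(0); fp < 3; fp++ {
		buf[0] = chunkMeta{int64(fp), int64(fp) + 10}
		fn(fp, buf)
	}
}

func main() {
	var kept [][]chunkMeta
	forSeries(func(fp uint64, chks []chunkMeta) {
		copied := make([]chunkMeta, len(chks))
		copy(copied, chks) // keep a copy, never the callback's slice
		kept = append(kept, copied)
	})
	// Without the copy, every element would alias buf and print {2 12}.
	fmt.Println(kept) // [[{0 10}] [{1 11}] [{2 12}]]
}
```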
8641a026d8a52c901a37a6ee68fdcdaf873e6aec
|
2025-03-05 15:38:52
|
renovate[bot]
|
fix(deps): update dependency @tanstack/react-query to v5.67.1 (main) (#16561)
| false
|
diff --git a/pkg/ui/frontend/package-lock.json b/pkg/ui/frontend/package-lock.json
index 5e980d500e2c1..5139b4d75fe06 100644
--- a/pkg/ui/frontend/package-lock.json
+++ b/pkg/ui/frontend/package-lock.json
@@ -2550,9 +2550,9 @@
"license": "MIT"
},
"node_modules/@tanstack/query-core": {
- "version": "5.66.11",
- "resolved": "https://registry.npmjs.org/@tanstack/query-core/-/query-core-5.66.11.tgz",
- "integrity": "sha512-ZEYxgHUcohj3sHkbRaw0gYwFxjY5O6M3IXOYXEun7E1rqNhsP8fOtqjJTKPZpVHcdIdrmX4lzZctT4+pts0OgA==",
+ "version": "5.67.1",
+ "resolved": "https://registry.npmjs.org/@tanstack/query-core/-/query-core-5.67.1.tgz",
+ "integrity": "sha512-AkFmuukVejyqVIjEQoFhLb3q+xHl7JG8G9cANWTMe3s8iKzD9j1VBSYXgCjy6vm6xM8cUCR9zP2yqWxY9pTWOA==",
"license": "MIT",
"funding": {
"type": "github",
@@ -2570,12 +2570,12 @@
}
},
"node_modules/@tanstack/react-query": {
- "version": "5.66.11",
- "resolved": "https://registry.npmjs.org/@tanstack/react-query/-/react-query-5.66.11.tgz",
- "integrity": "sha512-uPDiQbZScWkAeihmZ9gAm3wOBA1TmLB1KCB1fJ1hIiEKq3dTT+ja/aYM7wGUD+XiEsY4sDSE7p8VIz/21L2Dow==",
+ "version": "5.67.1",
+ "resolved": "https://registry.npmjs.org/@tanstack/react-query/-/react-query-5.67.1.tgz",
+ "integrity": "sha512-fH5u4JLwB6A+wLFdi8wWBWAYoJV5deYif2OveJ26ktAWjU499uvVFS1wPWnyEyq5LvZX1MZInvv9QRaIZANRaQ==",
"license": "MIT",
"dependencies": {
- "@tanstack/query-core": "5.66.11"
+ "@tanstack/query-core": "5.67.1"
},
"funding": {
"type": "github",
|
fix
|
update dependency @tanstack/react-query to v5.67.1 (main) (#16561)
|
d25f8abff91516a76f50272caeabe47692956b5a
|
2023-05-01 17:15:06
|
Mitsuru Kariya
|
helm: Fix gateway proxy_pass settings (#9215)
| false
|
diff --git a/production/helm/loki/templates/_helpers.tpl b/production/helm/loki/templates/_helpers.tpl
index b5a5c63a5ced1..520a73c20f5bb 100644
--- a/production/helm/loki/templates/_helpers.tpl
+++ b/production/helm/loki/templates/_helpers.tpl
@@ -596,78 +596,119 @@ http {
{{- $backendUrl = .Values.gateway.nginxConfig.customBackendUrl }}
{{- end }}
+
+ # Distributor
location = /api/prom/push {
proxy_pass {{ $writeUrl }}$request_uri;
}
+ location = /loki/api/v1/push {
+ proxy_pass {{ $writeUrl }}$request_uri;
+ }
+ location = /distributor/ring {
+ proxy_pass {{ $writeUrl }}$request_uri;
+ }
- location = /api/prom/tail {
- proxy_pass {{ $readUrl }}$request_uri;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "upgrade";
+ # Ingester
+ location = /flush {
+ proxy_pass {{ $writeUrl }}$request_uri;
+ }
+ location ^~ /ingester/ {
+ proxy_pass {{ $writeUrl }}$request_uri;
+ }
+ location = /ingester {
+ internal; # to suppress 301
}
- location ~ /api/prom/.* {
- proxy_pass {{ $readUrl }}$request_uri;
+ # Ring
+ location = /ring {
+ proxy_pass {{ $writeUrl }}$request_uri;
+ }
+
+ # MemberListKV
+ location = /memberlist {
+ proxy_pass {{ $writeUrl }}$request_uri;
}
- location ~ /prometheus/api/v1/alerts.* {
+
+ # Ruler
+ location = /ruler/ring {
proxy_pass {{ $backendUrl }}$request_uri;
}
- location ~ /prometheus/api/v1/rules.* {
+ location = /api/prom/rules {
proxy_pass {{ $backendUrl }}$request_uri;
}
- location ~ /ruler/.* {
+ location ^~ /api/prom/rules/ {
proxy_pass {{ $backendUrl }}$request_uri;
}
-
- location = /loki/api/v1/push {
- proxy_pass {{ $writeUrl }}$request_uri;
+ location = /loki/api/v1/rules {
+ proxy_pass {{ $backendUrl }}$request_uri;
}
-
- location = /loki/api/v1/tail {
- proxy_pass {{ $readUrl }}$request_uri;
- proxy_set_header Upgrade $http_upgrade;
- proxy_set_header Connection "upgrade";
+ location ^~ /loki/api/v1/rules/ {
+ proxy_pass {{ $backendUrl }}$request_uri;
}
-
- location ~ /compactor/.* {
+ location = /prometheus/api/v1/alerts {
proxy_pass {{ $backendUrl }}$request_uri;
}
-
- location ~ /loki/api/v1/delete.* {
+ location = /prometheus/api/v1/rules {
proxy_pass {{ $backendUrl }}$request_uri;
}
- location ~ /distributor/.* {
- proxy_pass {{ $writeUrl }}$request_uri;
+ # Compactor
+ location = /compactor/ring {
+ proxy_pass {{ $backendUrl }}$request_uri;
}
-
- location ~ /ring {
- proxy_pass {{ $writeUrl }}$request_uri;
+ location = /loki/api/v1/delete {
+ proxy_pass {{ $backendUrl }}$request_uri;
}
-
- location ~ /ingester/.* {
- proxy_pass {{ $writeUrl }}$request_uri;
+ location = /loki/api/v1/cache/generation_numbers {
+ proxy_pass {{ $backendUrl }}$request_uri;
}
- location ~ /store-gateway/.* {
+ # IndexGateway
+ location = /indexgateway/ring {
proxy_pass {{ $backendUrl }}$request_uri;
}
- location ~ /query-scheduler/.* {
+ # QueryScheduler
+ location = /scheduler/ring {
proxy_pass {{ $backendUrl }}$request_uri;
}
- location ~ /scheduler/.* {
+
+ {{- if and .Values.enterprise.enabled .Values.enterprise.adminApi.enabled }}
+ # Admin API
+ location ^~ /admin/api/ {
proxy_pass {{ $backendUrl }}$request_uri;
}
+ location = /admin/api {
+ internal; # to suppress 301
+ }
+ {{- end }}
+
- location ~ /loki/api/.* {
+ # QueryFrontend, Querier
+ location = /api/prom/tail {
proxy_pass {{ $readUrl }}$request_uri;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "upgrade";
}
-
- location ~ /admin/api/.* {
- proxy_pass {{ $backendUrl }}$request_uri;
+ location = /loki/api/v1/tail {
+ proxy_pass {{ $readUrl }}$request_uri;
+ proxy_set_header Upgrade $http_upgrade;
+ proxy_set_header Connection "upgrade";
}
+ location ^~ /api/prom/ {
+ proxy_pass {{ $readUrl }}$request_uri;
+ }
+ location = /api/prom {
+ internal; # to suppress 301
+ }
+ location ^~ /loki/api/v1/ {
+ proxy_pass {{ $readUrl }}$request_uri;
+ }
+ location = /loki/api/v1 {
+ internal; # to suppress 301
+ }
+
{{- with .Values.gateway.nginxConfig.serverSnippet }}
{{ . | nindent 4 }}
|
helm
|
Fix gateway proxy_pass settings (#9215)
|
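The rewritten gateway config in the record above leans on three pieces of nginx location mechanics: `location = /path` matches exactly one URI; `location ^~ /path/` is a prefix match that, when it is the longest matching prefix, skips regex locations entirely; and `internal` makes a location unreachable from outside, answered with a 404. The `internal` blocks matter because nginx issues an automatic 301 redirect from `/prefix` to `/prefix/` when only the trailing-slash prefix location is defined with `proxy_pass`; an exact-match internal location for the bare path swallows that redirect, which is what the `# to suppress 301` comments refer to.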
c9e5c7f991abb7a6b24f9155480bea9e74f7c633
|
2024-03-27 22:57:44
|
Owen Diehl
|
chore(blooms): removes bloom-gw & bloom-compactor from all non-microservice targets (#12381)
| false
|
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index 5eab58e357c53..4e2b7df3ae35f 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -663,12 +663,9 @@ func (t *Loki) setupModuleManager() error {
Read: {QueryFrontend, Querier},
Write: {Ingester, Distributor},
- Backend: {QueryScheduler, Ruler, Compactor, IndexGateway, BloomGateway, BloomCompactor},
+ Backend: {QueryScheduler, Ruler, Compactor, IndexGateway},
- // TODO(salvacorts): We added the BloomCompactor component to the `all` target to ease testing.
- // We should remove it before releasing the feature since we don’t think any user running
- // the single binary will benefit from the blooms given their scale in terms of ingested data
- All: {QueryScheduler, QueryFrontend, Querier, Ingester, Distributor, Ruler, Compactor, BloomCompactor},
+ All: {QueryScheduler, QueryFrontend, Querier, Ingester, Distributor, Ruler, Compactor},
}
if t.Cfg.Querier.PerRequestLimitsEnabled {
|
chore
|
removes bloom-gw & bloom-compactor from all non-microservice targets (#12381)
|
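The record above edits the module manager's target map, which binds each named run target to the set of components it starts. With BloomGateway and BloomCompactor dropped from `Backend` and `All`, bloom components now run only as explicit microservice targets; the deleted TODO gives the rationale, namely that single-binary deployments ingest too little data for blooms to be worthwhile.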
44f1d8d7f6b1fb9b7fac32e913bd6517d15bad3f
|
2023-03-27 21:31:19
|
André Águas
|
azure: respect retry config before cancelling the context (#8732)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index b9ab662bc7ac1..3a50ccf4b9bd6 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -65,6 +65,7 @@
* [8448](https://github.com/grafana/loki/pull/8448) **chaudum**: Fix bug in LogQL parser that caused certain queries that contain a vector expression to fail.
* [8448](https://github.com/grafana/loki/pull/8665) **sandeepsukhani**: deletion: fix issue in processing delete requests with tsdb index
* [8753](https://github.com/grafana/loki/pull/8753) **slim-bean** A zero value for retention_period will now disable retention.
+* [8732](https://github.com/grafana/loki/pull/8732) **abaguas**: azure: respect retry config before cancelling the context
##### Changes
diff --git a/pkg/storage/chunk/client/azure/blob_storage_client.go b/pkg/storage/chunk/client/azure/blob_storage_client.go
index 1ab87a631c379..b4d37ea3d4e36 100644
--- a/pkg/storage/chunk/client/azure/blob_storage_client.go
+++ b/pkg/storage/chunk/client/azure/blob_storage_client.go
@@ -219,7 +219,7 @@ func (b *BlobStorage) Stop() {}
func (b *BlobStorage) GetObject(ctx context.Context, objectKey string) (io.ReadCloser, int64, error) {
var cancel context.CancelFunc = func() {}
if b.cfg.RequestTimeout > 0 {
- ctx, cancel = context.WithTimeout(ctx, b.cfg.RequestTimeout)
+ ctx, cancel = context.WithTimeout(ctx, (time.Duration(b.cfg.MaxRetries)*b.cfg.RequestTimeout)+(time.Duration(b.cfg.MaxRetries-1)*b.cfg.MaxRetryDelay)) // timeout only after azure client's built in retries
}
var (
|
azure
|
respect retry config before cancelling the context (#8732)
|
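The Azure fix in the record above widens the `GetObject` context deadline so it budgets for every retry the blob client performs internally, instead of cancelling the whole call after a single `RequestTimeout`: the deadline becomes `MaxRetries*RequestTimeout + (MaxRetries-1)*MaxRetryDelay`, the second term covering the waits between attempts. The arithmetic as a runnable sketch with illustrative values:

```go
package main

import (
	"fmt"
	"time"
)

// overallTimeout mirrors the expression from the diff: one RequestTimeout
// per attempt plus a MaxRetryDelay between consecutive attempts.
func overallTimeout(maxRetries int, requestTimeout, maxRetryDelay time.Duration) time.Duration {
	return time.Duration(maxRetries)*requestTimeout +
		time.Duration(maxRetries-1)*maxRetryDelay
}

func main() {
	// For example, 5 attempts of 30s with up to 500ms between attempts.
	fmt.Println(overallTimeout(5, 30*time.Second, 500*time.Millisecond)) // 2m32s
}
```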
e2ed1c01d484d8ba5a75a9379a075fb92b6960f2
|
2023-10-05 22:19:35
|
Robert Jacob
|
operator: Update tools and dependencies (#10795)
| false
|
diff --git a/.github/workflows/operator.yaml b/.github/workflows/operator.yaml
index 0cb73e51419cc..93100178cb3d8 100644
--- a/.github/workflows/operator.yaml
+++ b/.github/workflows/operator.yaml
@@ -49,9 +49,9 @@ jobs:
id: go
- uses: actions/checkout@v3
- name: Lint
- uses: golangci/[email protected]
+ uses: golangci/golangci-lint-action@v3
with:
- version: v1.53.3
+ version: v1.54.2
args: --timeout=4m
working-directory: ./operator
- name: Check prometheus rules
diff --git a/operator/.bingo/Variables.mk b/operator/.bingo/Variables.mk
index cc141216f58cc..17f66a8c0ac19 100644
--- a/operator/.bingo/Variables.mk
+++ b/operator/.bingo/Variables.mk
@@ -23,11 +23,11 @@ $(BINGO): $(BINGO_DIR)/bingo.mod
@echo "(re)installing $(GOBIN)/bingo-v0.8.0"
@cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=bingo.mod -o=$(GOBIN)/bingo-v0.8.0 "github.com/bwplotka/bingo"
-CONTROLLER_GEN := $(GOBIN)/controller-gen-v0.11.3
+CONTROLLER_GEN := $(GOBIN)/controller-gen-v0.13.0
$(CONTROLLER_GEN): $(BINGO_DIR)/controller-gen.mod
@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
- @echo "(re)installing $(GOBIN)/controller-gen-v0.11.3"
- @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=controller-gen.mod -o=$(GOBIN)/controller-gen-v0.11.3 "sigs.k8s.io/controller-tools/cmd/controller-gen"
+ @echo "(re)installing $(GOBIN)/controller-gen-v0.13.0"
+ @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=controller-gen.mod -o=$(GOBIN)/controller-gen-v0.13.0 "sigs.k8s.io/controller-tools/cmd/controller-gen"
GEN_CRD_API_REFERENCE_DOCS := $(GOBIN)/gen-crd-api-reference-docs-v0.0.3
$(GEN_CRD_API_REFERENCE_DOCS): $(BINGO_DIR)/gen-crd-api-reference-docs.mod
@@ -35,17 +35,17 @@ $(GEN_CRD_API_REFERENCE_DOCS): $(BINGO_DIR)/gen-crd-api-reference-docs.mod
@echo "(re)installing $(GOBIN)/gen-crd-api-reference-docs-v0.0.3"
@cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=gen-crd-api-reference-docs.mod -o=$(GOBIN)/gen-crd-api-reference-docs-v0.0.3 "github.com/ViaQ/gen-crd-api-reference-docs"
-GOFUMPT := $(GOBIN)/gofumpt-v0.4.0
+GOFUMPT := $(GOBIN)/gofumpt-v0.5.0
$(GOFUMPT): $(BINGO_DIR)/gofumpt.mod
@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
- @echo "(re)installing $(GOBIN)/gofumpt-v0.4.0"
- @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=gofumpt.mod -o=$(GOBIN)/gofumpt-v0.4.0 "mvdan.cc/gofumpt"
+ @echo "(re)installing $(GOBIN)/gofumpt-v0.5.0"
+ @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=gofumpt.mod -o=$(GOBIN)/gofumpt-v0.5.0 "mvdan.cc/gofumpt"
-GOLANGCI_LINT := $(GOBIN)/golangci-lint-v1.53.3
+GOLANGCI_LINT := $(GOBIN)/golangci-lint-v1.54.2
$(GOLANGCI_LINT): $(BINGO_DIR)/golangci-lint.mod
@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
- @echo "(re)installing $(GOBIN)/golangci-lint-v1.53.3"
- @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=golangci-lint.mod -o=$(GOBIN)/golangci-lint-v1.53.3 "github.com/golangci/golangci-lint/cmd/golangci-lint"
+ @echo "(re)installing $(GOBIN)/golangci-lint-v1.54.2"
+ @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=golangci-lint.mod -o=$(GOBIN)/golangci-lint-v1.54.2 "github.com/golangci/golangci-lint/cmd/golangci-lint"
HUGO := $(GOBIN)/hugo-v0.80.0
$(HUGO): $(BINGO_DIR)/hugo.mod
@@ -71,11 +71,11 @@ $(JSONNETFMT): $(BINGO_DIR)/jsonnetfmt.mod
@echo "(re)installing $(GOBIN)/jsonnetfmt-v0.20.0"
@cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=jsonnetfmt.mod -o=$(GOBIN)/jsonnetfmt-v0.20.0 "github.com/google/go-jsonnet/cmd/jsonnetfmt"
-KIND := $(GOBIN)/kind-v0.17.0
+KIND := $(GOBIN)/kind-v0.20.0
$(KIND): $(BINGO_DIR)/kind.mod
@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
- @echo "(re)installing $(GOBIN)/kind-v0.17.0"
- @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=kind.mod -o=$(GOBIN)/kind-v0.17.0 "sigs.k8s.io/kind"
+ @echo "(re)installing $(GOBIN)/kind-v0.20.0"
+ @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=kind.mod -o=$(GOBIN)/kind-v0.20.0 "sigs.k8s.io/kind"
KUSTOMIZE := $(GOBIN)/kustomize-v4.5.7
$(KUSTOMIZE): $(BINGO_DIR)/kustomize.mod
@@ -83,15 +83,15 @@ $(KUSTOMIZE): $(BINGO_DIR)/kustomize.mod
@echo "(re)installing $(GOBIN)/kustomize-v4.5.7"
@cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=kustomize.mod -o=$(GOBIN)/kustomize-v4.5.7 "sigs.k8s.io/kustomize/kustomize/v4"
-OPERATOR_SDK := $(GOBIN)/operator-sdk-v1.27.0
+OPERATOR_SDK := $(GOBIN)/operator-sdk-v1.31.0
$(OPERATOR_SDK): $(BINGO_DIR)/operator-sdk.mod
@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
- @echo "(re)installing $(GOBIN)/operator-sdk-v1.27.0"
- @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=operator-sdk.mod -o=$(GOBIN)/operator-sdk-v1.27.0 "github.com/operator-framework/operator-sdk/cmd/operator-sdk"
+ @echo "(re)installing $(GOBIN)/operator-sdk-v1.31.0"
+ @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=operator-sdk.mod -o=$(GOBIN)/operator-sdk-v1.31.0 "github.com/operator-framework/operator-sdk/cmd/operator-sdk"
-PROMTOOL := $(GOBIN)/promtool-v0.42.0
+PROMTOOL := $(GOBIN)/promtool-v0.47.1
$(PROMTOOL): $(BINGO_DIR)/promtool.mod
@# Install binary/ries using Go 1.14+ build command. This is using bwplotka/bingo-controlled, separate go module with pinned dependencies.
- @echo "(re)installing $(GOBIN)/promtool-v0.42.0"
- @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=promtool.mod -o=$(GOBIN)/promtool-v0.42.0 "github.com/prometheus/prometheus/cmd/promtool"
+ @echo "(re)installing $(GOBIN)/promtool-v0.47.1"
+ @cd $(BINGO_DIR) && GOWORK=off $(GO) build -mod=mod -modfile=promtool.mod -o=$(GOBIN)/promtool-v0.47.1 "github.com/prometheus/prometheus/cmd/promtool"
diff --git a/operator/.bingo/controller-gen.mod b/operator/.bingo/controller-gen.mod
index 26c8706ccf8fb..485a7e87ba40d 100644
--- a/operator/.bingo/controller-gen.mod
+++ b/operator/.bingo/controller-gen.mod
@@ -2,4 +2,4 @@ module _ // Auto generated by https://github.com/bwplotka/bingo. DO NOT EDIT
go 1.20
-require sigs.k8s.io/controller-tools v0.11.3 // cmd/controller-gen
+require sigs.k8s.io/controller-tools v0.13.0 // cmd/controller-gen
diff --git a/operator/.bingo/controller-gen.sum b/operator/.bingo/controller-gen.sum
index 5cdb83195b6ff..d1616c04798e8 100644
--- a/operator/.bingo/controller-gen.sum
+++ b/operator/.bingo/controller-gen.sum
@@ -156,6 +156,8 @@ github.com/fatih/color v1.12.0 h1:mRhaKNwANqRgUBGKmnI5ZxEk7QXmjQeCcuYFMX2bfcc=
github.com/fatih/color v1.12.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
+github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
+github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
@@ -182,6 +184,8 @@ github.com/go-logr/logr v1.2.0/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbV
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
+github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/zapr v1.2.3/go.mod h1:eIauM6P8qSvTw5o2ez6UEAfGjQKrxQTl5EoK+Qa2oG4=
github.com/go-openapi/jsonpointer v0.19.2/go.mod h1:3akKfEdA7DF1sugOqz1dVQHBcuDBPKZGEoHC/NkiQRg=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
@@ -202,6 +206,8 @@ github.com/gobuffalo/flect v0.2.5 h1:H6vvsv2an0lalEaCDRThvtBfmg44W/QHXBCYUXf/6S4
github.com/gobuffalo/flect v0.2.5/go.mod h1:1ZyCLIbg0YD7sDkzvFdPoOydPtD8y9JQnrOROolUcM8=
github.com/gobuffalo/flect v0.3.0 h1:erfPWM+K1rFNIQeRPdeEXxo8yFr/PO17lhRnS8FUrtk=
github.com/gobuffalo/flect v0.3.0/go.mod h1:5pf3aGnsvqvCj50AVni7mJJF8ICxGZ8HomberC3pXLE=
+github.com/gobuffalo/flect v1.0.2 h1:eqjPGSo2WmjgY2XlpGwo2NXgL3RucAKo4k4qQMNA5sA=
+github.com/gobuffalo/flect v1.0.2/go.mod h1:A5msMlrHtLqh9umBSnvabjsMrCcCpAyzglnDvkbYKHs=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
@@ -267,6 +273,8 @@ github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0 h1:Hsa8mG0dQ46ij8Sl2AYJDUv1oA9/d6Vk+3LG99Oe02g=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
+github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/martian/v3 v3.0.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
github.com/google/martian/v3 v3.1.0/go.mod h1:y5Zk1BBys9G+gd6Jrk0W3cC1+ELVxBWuIGO+w/tUAp0=
@@ -334,6 +342,8 @@ github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJ
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc=
github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8=
+github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
@@ -375,6 +385,8 @@ github.com/mattn/go-colorable v0.1.8 h1:c1ghPdyEDarC70ftn0y+A/Ee++9zz8ljHG1b13eJ
github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9 h1:sqDoxXbdeALODt0DAeJCVp38ps9ZogZEAXjus69YV3U=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
@@ -383,6 +395,9 @@ github.com/mattn/go-isatty v0.0.12 h1:wuysRhFDzyxgEmMf5xjvJ2M9dZoWAXNNr5LSBS7uHX
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
+github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
@@ -501,6 +516,8 @@ github.com/spf13/cobra v1.4.0 h1:y+wJpx64xcgO1V+RcnwW0LEHxTKRi2ZDPSBjWnrg88Q=
github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
+github.com/spf13/cobra v1.7.0 h1:hyqWnYt1ZQShIddO5kBpj3vu05/++x6tJ6dg8EC572I=
+github.com/spf13/cobra v1.7.0/go.mod h1:uLxZILRyS/50WlhOIKD7W6V5bgeIt+4sICxh6uRMrb0=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
@@ -513,6 +530,7 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -521,6 +539,7 @@ github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
@@ -631,6 +650,8 @@ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVD
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.7.0 h1:LapD9S96VoQRhi/GrNTqeBJFrUjs5UHCAtTlgwA5oZA=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
+golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -688,6 +709,8 @@ golang.org/x/net v0.0.0-20220722155237-a158d28d115b h1:PxfKdU9lEEDYjdIzOtC4qFWgk
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.4.0 h1:Q5QPcMlvfxFTAPV0+07Xz/MpK9NTXu2VDUuy0FeMfaU=
golang.org/x/net v0.4.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
+golang.org/x/net v0.14.0 h1:BONx9s002vGdD9umnlX1Po8vOZmrgH34qlHcD1MfK14=
+golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -797,8 +820,11 @@ golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f h1:v4INt8xihDGvnrfjMDVXGxw9wrfxYyCjk0KbXjhR55s=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0 h1:w8ZOecv6NaNa/zC8944JTU3vz4u6Lagfk4RPQxv92NQ=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
+golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -814,6 +840,8 @@ golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.5.0 h1:OLmvp0KP+FVG99Ct/qFiL/Fhk4zp4QQnZ7b2U+5piUM=
golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc=
+golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -892,6 +920,8 @@ golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.4.0 h1:7mTAgkunk3fr4GAloyyCasadO6h9zSsQZbwvcaIciV4=
golang.org/x/tools v0.4.0/go.mod h1:UE5sM2OK9E/d67R0ANs2xJizIymRP5gJU295PvKXxjQ=
+golang.org/x/tools v0.12.0 h1:YW6HUoUmYBpwSgyaGaZq1fHjrBjX1rlpZ54T6mu2kss=
+golang.org/x/tools v0.12.0/go.mod h1:Sc0INKfu04TlqNoRA1hgpFZbhYXHPr4V5DzpSBTPqQM=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1084,6 +1114,8 @@ k8s.io/api v0.25.0 h1:H+Q4ma2U/ww0iGB78ijZx6DRByPz6/733jIuFpX70e0=
k8s.io/api v0.25.0/go.mod h1:ttceV1GyV1i1rnmvzT3BST08N6nGt+dudGrquzVQWPk=
k8s.io/api v0.26.1 h1:f+SWYiPd/GsiWwVRz+NbFyCgvv75Pk9NK6dlkZgpCRQ=
k8s.io/api v0.26.1/go.mod h1:xd/GBNgR0f707+ATNyPmQ1oyKSgndzXij81FzWGsejg=
+k8s.io/api v0.28.0 h1:3j3VPWmN9tTDI68NETBWlDiA9qOiGJ7sdKeufehBYsM=
+k8s.io/api v0.28.0/go.mod h1:0l8NZJzB0i/etuWnIXcwfIv+xnDOhL3lLW919AWYDuY=
k8s.io/apiextensions-apiserver v0.20.2 h1:rfrMWQ87lhd8EzQWRnbQ4gXrniL/yTRBgYH1x1+BLlo=
k8s.io/apiextensions-apiserver v0.20.2/go.mod h1:F6TXp389Xntt+LUq3vw6HFOLttPa0V8821ogLGwb6Zs=
k8s.io/apiextensions-apiserver v0.23.0 h1:uii8BYmHYiT2ZTAJxmvc3X8UhNYMxl2A0z0Xq3Pm+WY=
@@ -1092,6 +1124,8 @@ k8s.io/apiextensions-apiserver v0.25.0 h1:CJ9zlyXAbq0FIW8CD7HHyozCMBpDSiH7EdrSTC
k8s.io/apiextensions-apiserver v0.25.0/go.mod h1:3pAjZiN4zw7R8aZC5gR0y3/vCkGlAjCazcg1me8iB/E=
k8s.io/apiextensions-apiserver v0.26.1 h1:cB8h1SRk6e/+i3NOrQgSFij1B2S0Y0wDoNl66bn8RMI=
k8s.io/apiextensions-apiserver v0.26.1/go.mod h1:AptjOSXDGuE0JICx/Em15PaoO7buLwTs0dGleIHixSM=
+k8s.io/apiextensions-apiserver v0.28.0 h1:CszgmBL8CizEnj4sj7/PtLGey6Na3YgWyGCPONv7E9E=
+k8s.io/apiextensions-apiserver v0.28.0/go.mod h1:uRdYiwIuu0SyqJKriKmqEN2jThIJPhVmOWETm8ud1VE=
k8s.io/apimachinery v0.20.2 h1:hFx6Sbt1oG0n6DZ+g4bFt5f6BoMkOjKWsQFu077M3Vg=
k8s.io/apimachinery v0.20.2/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
k8s.io/apimachinery v0.23.0 h1:mIfWRMjBuMdolAWJ3Fd+aPTMv3X9z+waiARMpvvb0HQ=
@@ -1100,6 +1134,8 @@ k8s.io/apimachinery v0.25.0 h1:MlP0r6+3XbkUG2itd6vp3oxbtdQLQI94fD5gCS+gnoU=
k8s.io/apimachinery v0.25.0/go.mod h1:qMx9eAk0sZQGsXGu86fab8tZdffHbwUfsvzqKn4mfB0=
k8s.io/apimachinery v0.26.1 h1:8EZ/eGJL+hY/MYCNwhmDzVqq2lPl3N3Bo8rvweJwXUQ=
k8s.io/apimachinery v0.26.1/go.mod h1:tnPmbONNJ7ByJNz9+n9kMjNP8ON+1qoAIIC70lztu74=
+k8s.io/apimachinery v0.28.0 h1:ScHS2AG16UlYWk63r46oU3D5y54T53cVI5mMJwwqFNA=
+k8s.io/apimachinery v0.28.0/go.mod h1:X0xh/chESs2hP9koe+SdIAcXWcQ+RM5hy0ZynB+yEvw=
k8s.io/apiserver v0.20.2/go.mod h1:2nKd93WyMhZx4Hp3RfgH2K5PhwyTrprrkWYnI7id7jA=
k8s.io/apiserver v0.25.0/go.mod h1:BKwsE+PTC+aZK+6OJQDPr0v6uS91/HWxX7evElAH6xo=
k8s.io/client-go v0.20.2/go.mod h1:kH5brqWqp7HDxUFKoEgiI4v8G1xzbe9giaCenUWJzgE=
@@ -1122,6 +1158,8 @@ k8s.io/klog/v2 v2.70.1 h1:7aaoSdahviPmR+XkS7FyxlkkXs6tHISSG03RxleQAVQ=
k8s.io/klog/v2 v2.70.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.80.1 h1:atnLQ121W371wYYFawwYx1aEY2eUfs4l3J72wtgAwV4=
k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
+k8s.io/klog/v2 v2.100.1 h1:7WCHKK6K8fNhTqfBhISHQ97KrnJNFZMcQvKp7gP/tmg=
+k8s.io/klog/v2 v2.100.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1/go.mod h1:C/N6wCaBHeBHkHUesQOQy2/MZqGgMAFPqGsGQLdbZBU=
k8s.io/utils v0.0.0-20201110183641-67b214c5f920 h1:CbnUZsM497iRC5QMVkHwyl8s2tB3g7yaSHkYPkpgelw=
@@ -1133,6 +1171,8 @@ k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed h1:jAne/RjBTyawwAy0utX5eqigAwz/l
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20221107191617-1a15be271d1d h1:0Smp/HP1OH4Rvhe+4B8nWGERtlqAGSftbSbbmm45oFs=
k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20230406110748-d93618cff8a2 h1:qY1Ad8PODbnymg2pRbkyMT/ylpTrCM8P2RJ0yroCyIk=
+k8s.io/utils v0.0.0-20230406110748-d93618cff8a2/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
@@ -1148,10 +1188,14 @@ sigs.k8s.io/controller-tools v0.10.0 h1:0L5DTDTFB67jm9DkfrONgTGmfc/zYow0ZaHyppiz
sigs.k8s.io/controller-tools v0.10.0/go.mod h1:uvr0EW6IsprfB0jpQq6evtKy+hHyHCXNfdWI5ONPx94=
sigs.k8s.io/controller-tools v0.11.3 h1:T1xzLkog9saiyQSLz1XOImu4OcbdXWytc5cmYsBeBiE=
sigs.k8s.io/controller-tools v0.11.3/go.mod h1:qcfX7jfcfYD/b7lAhvqAyTbt/px4GpvN88WKLFFv7p8=
+sigs.k8s.io/controller-tools v0.13.0 h1:NfrvuZ4bxyolhDBt/rCZhDnx3M2hzlhgo5n3Iv2RykI=
+sigs.k8s.io/controller-tools v0.13.0/go.mod h1:5vw3En2NazbejQGCeWKRrE7q4P+CW8/klfVqP8QZkgA=
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 h1:fD1pz4yfdADVNfFmcP2aBEtudwUQ1AlLnRBALr33v3s=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2 h1:iXTIw73aPyC+oRdyqqvVJuloN1p0AC/kzH07hu3NE+k=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
+sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
+sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2 h1:YHQV7Dajm86OuqnIR6zAelnDWBRjo+YhYV9PmGrh1s8=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.1.2 h1:Hr/htKFmJEbtMgS/UD0N+gtgctAqz81t3nu+sPzynno=
diff --git a/operator/.bingo/gofumpt.mod b/operator/.bingo/gofumpt.mod
index 3cec626cc8132..ab80f4e7bae7b 100644
--- a/operator/.bingo/gofumpt.mod
+++ b/operator/.bingo/gofumpt.mod
@@ -2,4 +2,4 @@ module _ // Auto generated by https://github.com/bwplotka/bingo. DO NOT EDIT
go 1.19
-require mvdan.cc/gofumpt v0.4.0
+require mvdan.cc/gofumpt v0.5.0
diff --git a/operator/.bingo/gofumpt.sum b/operator/.bingo/gofumpt.sum
index d1cc1ee3cf8f1..e0cf9bdd8624e 100644
--- a/operator/.bingo/gofumpt.sum
+++ b/operator/.bingo/gofumpt.sum
@@ -2,6 +2,8 @@ github.com/google/go-cmp v0.5.4 h1:L8R9j+yAqZuZjsqh/z+F1NCffTKKLShY6zXTItVIZ8M=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.7 h1:81/ik6ipDQS2aGcBfIN5dHDB36BwrStyeAQquSYCV4o=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
+github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
+github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
@@ -15,6 +17,8 @@ golang.org/x/mod v0.4.0 h1:8pl+sMODzuvGJkmj2W4kZihvVb5mKm8pB/X44PIQHv8=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 h1:kQgndtyPBW/JIYERgdxfwMYh3AVStj88WQTlNDi2a+o=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
+golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
+golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@@ -22,6 +26,8 @@ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20220819030929-7fc1605a5dde h1:ejfdSekXMDxDLbRrJMwUk6KnSLZ2McaUCVcIKM+N6jc=
+golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
+golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -35,6 +41,8 @@ golang.org/x/tools v0.0.0-20210101214203-2dba1e4ea05c h1:dS09fXwOFF9cXBnIzZexIuU
golang.org/x/tools v0.0.0-20210101214203-2dba1e4ea05c/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.10 h1:QjFRCZxdOhBJ/UNgnBZLbNV13DlbnK0quyivTnXJM20=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
+golang.org/x/tools v0.8.0 h1:vSDcovVPld282ceKgDimkRSC8kpaH1dgyc9UMzlt84Y=
+golang.org/x/tools v0.8.0/go.mod h1:JxBZ99ISMI5ViVkT1tr6tdNmXeTrcpVSD3vZ1RsRdN4=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -48,3 +56,5 @@ mvdan.cc/gofumpt v0.3.1 h1:avhhrOmv0IuvQVK7fvwV91oFSGAk5/6Po8GXTzICeu8=
mvdan.cc/gofumpt v0.3.1/go.mod h1:w3ymliuxvzVx8DAutBnVyDqYb1Niy/yCJt/lk821YCE=
mvdan.cc/gofumpt v0.4.0 h1:JVf4NN1mIpHogBj7ABpgOyZc65/UUOkKQFkoURsz4MM=
mvdan.cc/gofumpt v0.4.0/go.mod h1:PljLOHDeZqgS8opHRKLzp2It2VBuSdteAgqUfzMTxlQ=
+mvdan.cc/gofumpt v0.5.0 h1:0EQ+Z56k8tXjj/6TQD25BFNKQXpCvT0rnansIc7Ug5E=
+mvdan.cc/gofumpt v0.5.0/go.mod h1:HBeVDtMKRZpXyxFciAirzdKklDlGu8aAy1wEbH5Y9js=
diff --git a/operator/.bingo/golangci-lint.mod b/operator/.bingo/golangci-lint.mod
index 6275d68be9537..2be0691976f6b 100644
--- a/operator/.bingo/golangci-lint.mod
+++ b/operator/.bingo/golangci-lint.mod
@@ -2,4 +2,4 @@ module _ // Auto generated by https://github.com/bwplotka/bingo. DO NOT EDIT
go 1.20
-require github.com/golangci/golangci-lint v1.53.3 // cmd/golangci-lint
+require github.com/golangci/golangci-lint v1.54.2 // cmd/golangci-lint
diff --git a/operator/.bingo/golangci-lint.sum b/operator/.bingo/golangci-lint.sum
index 0d5190bf56f26..47bdd1f5071e5 100644
--- a/operator/.bingo/golangci-lint.sum
+++ b/operator/.bingo/golangci-lint.sum
@@ -68,20 +68,28 @@ contrib.go.opencensus.io/exporter/stackdriver v0.13.4/go.mod h1:aXENhDJ1Y4lIg4EU
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/4meepo/tagalign v1.2.2 h1:kQeUTkFTaBRtd/7jm8OKJl9iHk0gAO+TDFPHGSna0aw=
github.com/4meepo/tagalign v1.2.2/go.mod h1:Q9c1rYMZJc9dPRkbQPpcBNCLEmY2njbAsXhQOZFE2dE=
+github.com/4meepo/tagalign v1.3.2 h1:1idD3yxlRGV18VjqtDbqYvQ5pXqQS0wO2dn6M3XstvI=
+github.com/4meepo/tagalign v1.3.2/go.mod h1:Q9c1rYMZJc9dPRkbQPpcBNCLEmY2njbAsXhQOZFE2dE=
github.com/Abirdcfly/dupword v0.0.7 h1:z14n0yytA3wNO2gpCD/jVtp/acEXPGmYu0esewpBt6Q=
github.com/Abirdcfly/dupword v0.0.7/go.mod h1:K/4M1kj+Zh39d2aotRwypvasonOyAMH1c/IZJzE0dmk=
github.com/Abirdcfly/dupword v0.0.9 h1:MxprGjKq3yDBICXDgEEsyGirIXfMYXkLNT/agPsE1tk=
github.com/Abirdcfly/dupword v0.0.9/go.mod h1:PzmHVLLZ27MvHSzV7eFmMXSFArWXZPZmfuuziuUrf2g=
github.com/Abirdcfly/dupword v0.0.11 h1:z6v8rMETchZXUIuHxYNmlUAuKuB21PeaSymTed16wgU=
github.com/Abirdcfly/dupword v0.0.11/go.mod h1:wH8mVGuf3CP5fsBTkfWwwwKTjDnVVCxtU8d8rgeVYXA=
+github.com/Abirdcfly/dupword v0.0.12 h1:56NnOyrXzChj07BDFjeRA+IUzSz01jmzEq+G4kEgFhc=
+github.com/Abirdcfly/dupword v0.0.12/go.mod h1:+us/TGct/nI9Ndcbcp3rgNcQzctTj68pq7TcgNpLfdI=
github.com/Antonboom/errname v0.1.7 h1:mBBDKvEYwPl4WFFNwec1CZO096G6vzK9vvDQzAwkako=
github.com/Antonboom/errname v0.1.7/go.mod h1:g0ONh16msHIPgJSGsecu1G/dcF2hlYR/0SddnIAGavU=
github.com/Antonboom/errname v0.1.10 h1:RZ7cYo/GuZqjr1nuJLNe8ZH+a+Jd9DaZzttWzak9Bls=
github.com/Antonboom/errname v0.1.10/go.mod h1:xLeiCIrvVNpUtsN0wxAh05bNIZpqE22/qDMnTBTttiA=
+github.com/Antonboom/errname v0.1.12 h1:oh9ak2zUtsLp5oaEd/erjB4GPu9w19NyoIskZClDcQY=
+github.com/Antonboom/errname v0.1.12/go.mod h1:bK7todrzvlaZoQagP1orKzWXv59X/x0W0Io2XT1Ssro=
github.com/Antonboom/nilnil v0.1.1 h1:PHhrh5ANKFWRBh7TdYmyyq2gyT2lotnvFvvFbylF81Q=
github.com/Antonboom/nilnil v0.1.1/go.mod h1:L1jBqoWM7AOeTD+tSquifKSesRHs4ZdaxvZR+xdJEaI=
github.com/Antonboom/nilnil v0.1.5 h1:X2JAdEVcbPaOom2TUa1FxZ3uyuUlex0XMLGYMemu6l0=
github.com/Antonboom/nilnil v0.1.5/go.mod h1:I24toVuBKhfP5teihGWctrRiPbRKHwZIFOvc6v3HZXk=
+github.com/Antonboom/nilnil v0.1.7 h1:ofgL+BA7vlA1K2wNQOsHzLJ2Pw5B5DpWRLdDAVvvTow=
+github.com/Antonboom/nilnil v0.1.7/go.mod h1:TP+ScQWVEq0eSIxqU8CbdT5DFWoHp0MbP+KMUO1BKYQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/toml v0.4.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.1.0 h1:ksErzDEI1khOiGPgpwuI7x2ebx/uXQNw7xJpn9Eq1+I=
@@ -100,6 +108,8 @@ github.com/Djarvur/go-err113 v0.0.0-20210108212216-aea10b59be24/go.mod h1:4UJr5H
github.com/GaijinEntertainment/go-exhaustruct/v2 v2.2.0 h1:V9xVvhKbLt7unNEGAruK1xXglyc668Pq3Xx0MNTNqpo=
github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0 h1:+r1rSv4gvYn0wmRjC8X7IAzX8QezqtFV9m0MUHFJgts=
github.com/GaijinEntertainment/go-exhaustruct/v2 v2.3.0/go.mod h1:b3g59n2Y+T5xmcxJL+UEG2f8cQploZm1mR/v6BW0mU0=
+github.com/GaijinEntertainment/go-exhaustruct/v3 v3.1.0 h1:3ZBs7LAezy8gh0uECsA6CGU43FF3zsx5f4eah5FxTMA=
+github.com/GaijinEntertainment/go-exhaustruct/v3 v3.1.0/go.mod h1:rZLTje5A9kFBe0pzhpe2TdhRniBF++PRHQuRpR8esVc=
github.com/Masterminds/goutils v1.1.0/go.mod h1:8cTjp+g8YejhMuvIA5y2vz3BpJxksy863GQaJW2MFNU=
github.com/Masterminds/semver v1.4.2/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3QEww=
@@ -145,6 +155,8 @@ github.com/ashanbrown/forbidigo v1.5.1 h1:WXhzLjOlnuDYPYQo/eFlcFMi8X/kLfvWLYu6CS
github.com/ashanbrown/forbidigo v1.5.1/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU=
github.com/ashanbrown/forbidigo v1.5.3 h1:jfg+fkm/snMx+V9FBwsl1d340BV/99kZGv5jN9hBoXk=
github.com/ashanbrown/forbidigo v1.5.3/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU=
+github.com/ashanbrown/forbidigo v1.6.0 h1:D3aewfM37Yb3pxHujIPSpTf6oQk9sc9WZi8gerOIVIY=
+github.com/ashanbrown/forbidigo v1.6.0/go.mod h1:Y8j9jy9ZYAEHXdu723cUlraTqbzjKF1MUyfOKL+AjcU=
github.com/ashanbrown/makezero v1.1.1 h1:iCQ87C0V0vSyO+M9E/FZYbu65auqH0lnsOkf5FcB28s=
github.com/ashanbrown/makezero v1.1.1/go.mod h1:i1bJLCRSCHOcOa9Y6MyF2FTfMZMFdHvxKHxgO5Z1axI=
github.com/aws/aws-sdk-go v1.23.20/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
@@ -179,6 +191,8 @@ github.com/butuzov/ireturn v0.2.0 h1:kCHi+YzC150GE98WFuZQu9yrTn6GEydO2AuPLbTgnO4
github.com/butuzov/ireturn v0.2.0/go.mod h1:Wh6Zl3IMtTpaIKbmwzqi6olnM9ptYQxxVacMsOEFPoc=
github.com/butuzov/mirror v1.1.0 h1:ZqX54gBVMXu78QLoiqdwpl2mgmoOJTk7s4p4o+0avZI=
github.com/butuzov/mirror v1.1.0/go.mod h1:8Q0BdQU6rC6WILDiBM60DBfvV78OLJmMmixe7GF45AE=
+github.com/ccojocar/zxcvbn-go v1.0.1 h1:+sxrANSCj6CdadkcMnvde/GWU1vZiiXRbqYSCalV4/4=
+github.com/ccojocar/zxcvbn-go v1.0.1/go.mod h1:g1qkXtUSvHP8lhHp5GrSmTz6uWALGRMQdw6Qnz/hi60=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
@@ -240,6 +254,8 @@ github.com/daixiang0/gci v0.9.1 h1:jBrwBmBZTDsGsXiaCTLIe9diotp1X4X64zodFrh7l+c=
github.com/daixiang0/gci v0.9.1/go.mod h1:EpVfrztufwVgQRXjnX4zuNinEpLj5OmMjtu/+MB0V0c=
github.com/daixiang0/gci v0.10.1 h1:eheNA3ljF6SxnPD/vE4lCBusVHmV3Rs3dkKvFrJ7MR0=
github.com/daixiang0/gci v0.10.1/go.mod h1:xtHP9N7AHdNvtRNfcx9gwTDfw7FRJx4bZUsiEfiNNAI=
+github.com/daixiang0/gci v0.11.0 h1:XeQbFKkCRxvVyn06EOuNY6LPGBLVuB/W130c8FrnX6A=
+github.com/daixiang0/gci v0.11.0/go.mod h1:xtHP9N7AHdNvtRNfcx9gwTDfw7FRJx4bZUsiEfiNNAI=
github.com/davecgh/go-spew v0.0.0-20161028175848-04cdfd42973b/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -296,6 +312,8 @@ github.com/go-critic/go-critic v0.6.7 h1:1evPrElnLQ2LZtJfmNDzlieDhjnq36SLgNzisx0
github.com/go-critic/go-critic v0.6.7/go.mod h1:fYZUijFdcnxgx6wPjQA2QEjIRaNCT0gO8bhexy6/QmE=
github.com/go-critic/go-critic v0.8.1 h1:16omCF1gN3gTzt4j4J6fKI/HnRojhEp+Eks6EuKw3vw=
github.com/go-critic/go-critic v0.8.1/go.mod h1:kpzXl09SIJX1cr9TB/g/sAG+eFEl7ZS9f9cqvZtyNl0=
+github.com/go-critic/go-critic v0.9.0 h1:Pmys9qvU3pSML/3GEQ2Xd9RZ/ip+aXHKILuxczKGV/U=
+github.com/go-critic/go-critic v0.9.0/go.mod h1:5P8tdXL7m/6qnyG6oRAlYLORvoXH0WDypYgAEmagT40=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@@ -414,6 +432,8 @@ github.com/golangci/golangci-lint v1.53.2 h1:52pgJKXiAuyfcOa8HJPIrZk1oMgpyXeN8TU
github.com/golangci/golangci-lint v1.53.2/go.mod h1:fz9DDC9UABJ7SFHTz0XnkiYzb4su7YpuB9ucsgdUfCk=
github.com/golangci/golangci-lint v1.53.3 h1:CUcRafczT4t1F+mvdkUm6KuOpxUZTl0yWN/rSU6sSMo=
github.com/golangci/golangci-lint v1.53.3/go.mod h1:W4Gg3ONq6p3Jl+0s/h9Gr0j7yEgHJWWZO2bHl2tBUXM=
+github.com/golangci/golangci-lint v1.54.2 h1:oR9zxfWYxt7hFqk6+fw6Enr+E7F0SN2nqHhJYyIb0yo=
+github.com/golangci/golangci-lint v1.54.2/go.mod h1:vnsaCTPKCI2wreL9tv7RkHDwUrz3htLjed6+6UsvcwU=
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0 h1:MfyDlzVjl1hoaPzPD4Gpb/QgoRfSBR0jdhwGyAWwMSA=
github.com/golangci/lint-1 v0.0.0-20191013205115-297bf364a8e0/go.mod h1:66R6K6P6VWk9I95jvqGxkqJxVWGFy9XlDwLwVz1RCFg=
github.com/golangci/maligned v0.0.0-20180506175553-b1d89398deca h1:kNY3/svz5T29MYHubXix4aDDuE3RWHkPvopM/EDv/MA=
@@ -422,6 +442,8 @@ github.com/golangci/misspell v0.3.5 h1:pLzmVdl3VxTOncgzHcvLOKirdvcx/TydsClUQXTeh
github.com/golangci/misspell v0.3.5/go.mod h1:dEbvlSfYbMQDtrpRMQU675gSDLDNa8sCPPChZ7PhiVA=
github.com/golangci/misspell v0.4.0 h1:KtVB/hTK4bbL/S6bs64rYyk8adjmh1BygbBiaAiX+a0=
github.com/golangci/misspell v0.4.0/go.mod h1:W6O/bwV6lGDxUCChm2ykw9NQdd5bYd1Xkjo88UcWyJc=
+github.com/golangci/misspell v0.4.1 h1:+y73iSicVy2PqyX7kmUefHusENlrP9YwuHZHPLGQj/g=
+github.com/golangci/misspell v0.4.1/go.mod h1:9mAN1quEo3DlpbaIKKyEvRxK1pwqR9s/Sea1bJCtlNI=
github.com/golangci/revgrep v0.0.0-20210930125155-c22e5001d4f2 h1:SgM7GDZTxtTTQPU84heOxy34iG5Du7F2jcoZnvp+fXI=
github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6 h1:DIPQnGy2Gv2FSA4B/hh8Q7xx3B7AIDk3DAMeHclH1vQ=
github.com/golangci/revgrep v0.0.0-20220804021717-745bb2f7c2e6/go.mod h1:0AKcRCkMoKvUvlf89F6O7H2LYdhr1zBh736mBItOdRs=
@@ -641,6 +663,8 @@ github.com/kunwardeep/paralleltest v1.0.6 h1:FCKYMF1OF2+RveWlABsdnmsvJrei5aoyZoa
github.com/kunwardeep/paralleltest v1.0.6/go.mod h1:Y0Y0XISdZM5IKm3TREQMZ6iteqn1YuwCsJO/0kL9Zes=
github.com/kunwardeep/paralleltest v1.0.7 h1:2uCk94js0+nVNQoHZNLBkAR1DQJrVzw6T0RMzJn55dQ=
github.com/kunwardeep/paralleltest v1.0.7/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY=
+github.com/kunwardeep/paralleltest v1.0.8 h1:Ul2KsqtzFxTlSU7IP0JusWlLiNqQaloB9vguyjbE558=
+github.com/kunwardeep/paralleltest v1.0.8/go.mod h1:2C7s65hONVqY7Q5Efj5aLzRCNLjw2h4eMc9EcypGjcY=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/kyoh86/exportloopref v0.1.8 h1:5Ry/at+eFdkX9Vsdw3qU4YkvGtzuVfzT4X7S77LoN/M=
github.com/kyoh86/exportloopref v0.1.8/go.mod h1:1tUcJeiioIs7VWe5gcOObrux3lb66+sBqGZrRkMwPgg=
@@ -780,6 +804,8 @@ github.com/nunnatsa/ginkgolinter v0.12.0 h1:seZo112n+lt0gdLJ/Jh70mzvrqbABWFpXd1b
github.com/nunnatsa/ginkgolinter v0.12.0/go.mod h1:dJIGXYXbkBswqa/pIzG0QlVTTDSBMxDoCFwhsl4Uras=
github.com/nunnatsa/ginkgolinter v0.12.1 h1:vwOqb5Nu05OikTXqhvLdHCGcx5uthIYIl0t79UVrERQ=
github.com/nunnatsa/ginkgolinter v0.12.1/go.mod h1:AK8Ab1PypVrcGUusuKD8RDcl2KgsIwvNaaxAlyHSzso=
+github.com/nunnatsa/ginkgolinter v0.13.5 h1:fOsPB4CEZOPkyMqF4B9hoqOpooFWU7vWSVkCSscVpgU=
+github.com/nunnatsa/ginkgolinter v0.13.5/go.mod h1:OBHy4536xtuX3102NM63XRtOyxqZOO02chsaeDWXVO8=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+AU=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
@@ -835,6 +861,8 @@ github.com/polyfloyd/go-errorlint v1.1.0 h1:VKoEFg5yxSgJ2yFPVhxW7oGz+f8/OVcuMeNv
github.com/polyfloyd/go-errorlint v1.1.0/go.mod h1:Uss7Bc/izYG0leCMRx3WVlrpqWedSZk7V/FUQW6VJ6U=
github.com/polyfloyd/go-errorlint v1.4.2 h1:CU+O4181IxFDdPH6t/HT7IiDj1I7zxNi1RIUxYwn8d0=
github.com/polyfloyd/go-errorlint v1.4.2/go.mod h1:k6fU/+fQe38ednoZS51T7gSIGQW1y94d6TkSr35OzH8=
+github.com/polyfloyd/go-errorlint v1.4.4 h1:A9gytp+p6TYqeALTYRoxJESYP8wJRETRX2xzGWFsEBU=
+github.com/polyfloyd/go-errorlint v1.4.4/go.mod h1:ry5NqF7l9Q77V+XqAfUg1zfryrEtyac3G5+WVpIK0xU=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/power-devops/perfstat v0.0.0-20210106213030-5aafc221ea8c/go.mod h1:OmDBASR4679mdNQnz2pUhc2G8CO2JrUAVFDRBDP/hJE=
@@ -872,6 +900,8 @@ github.com/quasilyte/go-ruleguard v0.3.18 h1:sd+abO1PEI9fkYennwzHn9kl3nqP6M5vE7F
github.com/quasilyte/go-ruleguard v0.3.18/go.mod h1:lOIzcYlgxrQ2sGJ735EHXmf/e9MJ516j16K/Ifcttvs=
github.com/quasilyte/go-ruleguard v0.3.19 h1:tfMnabXle/HzOb5Xe9CUZYWXKfkS1KwRmZyPmD9nVcc=
github.com/quasilyte/go-ruleguard v0.3.19/go.mod h1:lHSn69Scl48I7Gt9cX3VrbsZYvYiBYszZOZW4A+oTEw=
+github.com/quasilyte/go-ruleguard v0.4.0 h1:DyM6r+TKL+xbKB4Nm7Afd1IQh9kEUKQs2pboWGKtvQo=
+github.com/quasilyte/go-ruleguard v0.4.0/go.mod h1:Eu76Z/R8IXtViWUIHkE3p8gdH3/PKk1eh3YGfaEof10=
github.com/quasilyte/go-ruleguard/dsl v0.3.0/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU=
github.com/quasilyte/go-ruleguard/dsl v0.3.21/go.mod h1:KeCP03KrjuSO0H1kTuZQCWlQPulDV6YMIXmpQss17rU=
github.com/quasilyte/go-ruleguard/rules v0.0.0-20201231183845-9e62ed36efe1/go.mod h1:7JTjp89EGyU1d6XfBiXihJNG37wB2VRkd125Q1u7Plc=
@@ -922,6 +952,8 @@ github.com/sashamelentyev/usestdlibvars v1.20.0 h1:K6CXjqqtSYSsuyRDDC7Sjn6vTMLiS
github.com/sashamelentyev/usestdlibvars v1.20.0/go.mod h1:0GaP+ecfZMXShS0A94CJn6aEuPRILv8h/VuWI9n1ygg=
github.com/sashamelentyev/usestdlibvars v1.23.0 h1:01h+/2Kd+NblNItNeux0veSL5cBF1jbEOPrEhDzGYq0=
github.com/sashamelentyev/usestdlibvars v1.23.0/go.mod h1:YPwr/Y1LATzHI93CqoPUN/2BzGQ/6N/cl/KwgR0B/aU=
+github.com/sashamelentyev/usestdlibvars v1.24.0 h1:MKNzmXtGh5N0y74Z/CIaJh4GlB364l0K1RUT08WSWAc=
+github.com/sashamelentyev/usestdlibvars v1.24.0/go.mod h1:9cYkq+gYJ+a5W2RPdhfaSCnTVUC1OQP/bSiiBhq3OZE=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/securego/gosec/v2 v2.12.0 h1:CQWdW7ATFpvLSohMVsajscfyHJ5rsGmEXmsNcsDNmAg=
github.com/securego/gosec/v2 v2.13.1 h1:7mU32qn2dyC81MH9L2kefnQyRMUarfDER3iQyMHcjYM=
@@ -930,6 +962,8 @@ github.com/securego/gosec/v2 v2.15.0 h1:v4Ym7FF58/jlykYmmhZ7mTm7FQvN/setNm++0fgI
github.com/securego/gosec/v2 v2.15.0/go.mod h1:VOjTrZOkUtSDt2QLSJmQBMWnvwiQPEjg0l+5juIqGk8=
github.com/securego/gosec/v2 v2.16.0 h1:Pi0JKoasQQ3NnoRao/ww/N/XdynIB9NRYYZT5CyOs5U=
github.com/securego/gosec/v2 v2.16.0/go.mod h1:xvLcVZqUfo4aAQu56TNv7/Ltz6emAOQAEsrZrt7uGlI=
+github.com/securego/gosec/v2 v2.17.0 h1:ZpAStTDKY39insEG9OH6kV3IkhQZPTq9a9eGOLOjcdI=
+github.com/securego/gosec/v2 v2.17.0/go.mod h1:lt+mgC91VSmriVoJLentrMkRCYs+HLTBnUFUBuhV2hc=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c h1:W65qqJCIOVP4jpqPQ0YvHYKwcMEMVWIzWC5iNQQfBTU=
github.com/shazow/go-diff v0.0.0-20160112020656-b6b7b6733b8c/go.mod h1:/PevMnwAxekIXwN8qQyfc5gl2NlkB3CQlkizAbOkeBs=
@@ -1045,6 +1079,8 @@ github.com/tenntenn/modver v1.0.1/go.mod h1:bePIyQPb7UeioSRkw3Q0XeMhYZSMx9B8ePqg
github.com/tenntenn/text/transform v0.0.0-20200319021203-7eef512accb3/go.mod h1:ON8b8w4BN/kE1EOhwT0o+d62W65a6aPw1nouo9LMgyY=
github.com/tetafro/godot v1.4.11 h1:BVoBIqAf/2QdbFmSwAWnaIqDivZdOV0ZRwEm6jivLKw=
github.com/tetafro/godot v1.4.11/go.mod h1:LR3CJpxDVGlYOWn3ZZg1PgNZdTUvzsZWu8xaEohUpn8=
+github.com/tetafro/godot v1.4.14 h1:ScO641OHpf9UpHPk8fCknSuXNMpi4iFlwuWoBs3L+1s=
+github.com/tetafro/godot v1.4.14/go.mod h1:2oVxTBSftRTh4+MVfUaUXR6bn2GDXCaMcOG4Dk3rfio=
github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144 h1:kl4KhGNsJIbDHS9/4U9yQo1UcPQM0kOMJHn29EoH/Ro=
github.com/timakin/bodyclose v0.0.0-20210704033933-f49887972144/go.mod h1:Qimiffbc6q9tBWlVV6x0P9sat/ao1xEkREYPPj9hphk=
github.com/timakin/bodyclose v0.0.0-20221125081123-e39cf3fc478e h1:MV6KaVu/hzByHP0UvJ4HcMGE/8a6A4Rggc/0wx2AvJo=
@@ -1075,12 +1111,16 @@ github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqri
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/ultraware/funlen v0.0.3 h1:5ylVWm8wsNwH5aWo9438pwvsK0QiqVuUrt9bn7S/iLA=
github.com/ultraware/funlen v0.0.3/go.mod h1:Dp4UiAus7Wdb9KUZsYWZEWiRzGuM2kXM1lPbfaF6xhA=
+github.com/ultraware/funlen v0.1.0 h1:BuqclbkY6pO+cvxoq7OsktIXZpgBSkYTQtmwhAK81vI=
+github.com/ultraware/funlen v0.1.0/go.mod h1:XJqmOQja6DpxarLj6Jj1U7JuoS8PvL4nEqDaQhy22p4=
github.com/ultraware/whitespace v0.0.5 h1:hh+/cpIcopyMYbZNVov9iSxvJU3OYQg78Sfaqzi/CzI=
github.com/ultraware/whitespace v0.0.5/go.mod h1:aVMh/gQve5Maj9hQ/hg+F75lr/X5A89uZnzAmWSineA=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/uudashr/gocognit v1.0.6 h1:2Cgi6MweCsdB6kpcVQp7EW4U23iBFQWfTXiWlyp842Y=
github.com/uudashr/gocognit v1.0.6/go.mod h1:nAIUuVBnYU7pcninia3BHOvQkpQCeO76Uscky5BOwcY=
+github.com/uudashr/gocognit v1.0.7 h1:e9aFXgKgUJrQ5+bs61zBigmj7bFJ/5cC6HmMahVzuDo=
+github.com/uudashr/gocognit v1.0.7/go.mod h1:nAIUuVBnYU7pcninia3BHOvQkpQCeO76Uscky5BOwcY=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasthttp v1.30.0/go.mod h1:2rsYD01CKFrjjsvFxx75KlEUNpWNBY9JWD3K/7o2Cus=
github.com/valyala/quicktemplate v1.7.0/go.mod h1:sqKJnoaOF88V07vkO+9FL8fb9uZg/VPSJnLYn+LmLk8=
@@ -1099,6 +1139,8 @@ github.com/ykadowak/zerologlint v0.1.1 h1:CA1+RsGS1DbBn3jJP2jpWfiMJipWdeqJfSY0Gp
github.com/ykadowak/zerologlint v0.1.1/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg=
github.com/ykadowak/zerologlint v0.1.2 h1:Um4P5RMmelfjQqQJKtE8ZW+dLZrXrENeIzWWKw800U4=
github.com/ykadowak/zerologlint v0.1.2/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg=
+github.com/ykadowak/zerologlint v0.1.3 h1:TLy1dTW3Nuc+YE3bYRPToG1Q9Ej78b5UUN6bjbGdxPE=
+github.com/ykadowak/zerologlint v0.1.3/go.mod h1:KaUskqF3e/v59oPmdq1U1DnKcuHokl2/K1U4pmIELKg=
github.com/yudai/gojsondiff v1.0.0/go.mod h1:AY32+k2cwILAkW1fbgxQ5mUmMiZFgLIV+FBNExI05xg=
github.com/yudai/golcs v0.0.0-20170316035057-ecda9a501e82/go.mod h1:lgjkn3NuSvDfVJdfcVVdX+jpBxNmX4rDAzaS45IcYoM=
github.com/yudai/pp v2.0.1+incompatible/go.mod h1:PuxR/8QJ7cyCkFp/aUDS+JY727OFEZkTdatxwunjIkc=
@@ -1114,6 +1156,8 @@ github.com/yusufpapurcu/wmi v1.2.2/go.mod h1:SBZ9tNy3G9/m5Oi98Zks0QjeHVDvuK0qfxQ
gitlab.com/bosi/decorder v0.2.2 h1:LRfb3lP6mZWjUzpMOCLTVjcnl/SqZWBWmKNqQvMocQs=
gitlab.com/bosi/decorder v0.2.3 h1:gX4/RgK16ijY8V+BRQHAySfQAb354T7/xQpDB2n10P0=
gitlab.com/bosi/decorder v0.2.3/go.mod h1:9K1RB5+VPNQYtXtTDAzd2OEftsZb1oV0IrJrzChSdGE=
+gitlab.com/bosi/decorder v0.4.0 h1:HWuxAhSxIvsITcXeP+iIRg9d1cVfvVkmlF7M68GaoDY=
+gitlab.com/bosi/decorder v0.4.0/go.mod h1:xarnteyUoJiOTEldDysquWKTVDCKo2TOIOIibSuWqOg=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.4/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/etcd v0.0.0-20200513171258-e048e166ab9c/go.mod h1:xCI7ZzBfRuGgBXyXO6yfWfDmlWd35khcWpUa4L0xI/k=
@@ -1135,6 +1179,8 @@ go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.tmz.dev/musttag v0.7.0 h1:QfytzjTWGXZmChoX0L++7uQN+yRCPfyFm+whsM+lfGc=
go.tmz.dev/musttag v0.7.0/go.mod h1:oTFPvgOkJmp5kYL02S8+jrH0eLrBIl57rzWeA26zDEM=
+go.tmz.dev/musttag v0.7.2 h1:1J6S9ipDbalBSODNT5jCep8dhZyMr4ttnjQagmGYR5s=
+go.tmz.dev/musttag v0.7.2/go.mod h1:m6q5NiiSKMnQYokefa2xGoyoXnrswCbJ0AWYzf4Zs28=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
@@ -1197,6 +1243,8 @@ golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9 h1:6WHiuFL9FNjg8R
golang.org/x/exp/typeparams v0.0.0-20230203172020-98cc5a0785f9/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/exp/typeparams v0.0.0-20230224173230-c95f2b4c22f2 h1:J74nGeMgeFnYQJN59eFwh06jX/V8g0lB7LWpjSLxtgU=
golang.org/x/exp/typeparams v0.0.0-20230224173230-c95f2b4c22f2/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
+golang.org/x/exp/typeparams v0.0.0-20230307190834-24139beb5833 h1:jWGQJV4niP+CCmFW9ekjA9Zx8vYORzOUH2/Nl5WPuLQ=
+golang.org/x/exp/typeparams v0.0.0-20230307190834-24139beb5833/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -1233,6 +1281,8 @@ golang.org/x/mod v0.8.0 h1:LUYupSeNrTNCGzR/hVBk2NHZO4hXcVaW1k4Qx7rjPx8=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
+golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1340,6 +1390,8 @@ golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.2.0 h1:PUR+T4wwASmuSTYdKjYHI5TD22Wy5ogLU5qZCOLxBrI=
golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
+golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1460,6 +1512,8 @@ golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0 h1:EBmGv8NaZBZTWvrbjNoL6HVt+IVy3QDQpJs7VRIw3tU=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.11.0 h1:eG7RXZHdqOJ1i+0lgLgCpSXAp6M3LYlAo6osgSi0xOM=
+golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
@@ -1485,6 +1539,8 @@ golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/text v0.12.0 h1:k+n5B8goJNdU7hSvEtMUz3d1Q6D/XW4COJSJR6fN0mc=
+golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1602,6 +1658,8 @@ golang.org/x/tools v0.6.0 h1:BOw41kyTf3PuCW1pVQf8+Cyg8pMlkYB1oo9iJ6D/lKM=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.9.3 h1:Gn1I8+64MsuTb/HpH+LmQtNas23LhUVr3rYZ0eKuaMM=
golang.org/x/tools v0.9.3/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc=
+golang.org/x/tools v0.12.0 h1:YW6HUoUmYBpwSgyaGaZq1fHjrBjX1rlpZ54T6mu2kss=
+golang.org/x/tools v0.12.0/go.mod h1:Sc0INKfu04TlqNoRA1hgpFZbhYXHPr4V5DzpSBTPqQM=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1843,6 +1901,8 @@ honnef.co/go/tools v0.4.2 h1:6qXr+R5w+ktL5UkwEbPp+fEvfyoMPche6GkOpGHZcLc=
honnef.co/go/tools v0.4.2/go.mod h1:36ZgoUOrqOk1GxwHhyryEkq8FQWkUO2xGuSMhUCcdvA=
honnef.co/go/tools v0.4.3 h1:o/n5/K5gXqk8Gozvs2cnL0F2S1/g1vcGCAx2vETjITw=
honnef.co/go/tools v0.4.3/go.mod h1:36ZgoUOrqOk1GxwHhyryEkq8FQWkUO2xGuSMhUCcdvA=
+honnef.co/go/tools v0.4.5 h1:YGD4H+SuIOOqsyoLOpZDWcieM28W47/zRO7f+9V3nvo=
+honnef.co/go/tools v0.4.5/go.mod h1:GUV+uIBCLpdf0/v6UhHHG/yzI/z6qPskBeQCjcNB96k=
mvdan.cc/gofumpt v0.3.1 h1:avhhrOmv0IuvQVK7fvwV91oFSGAk5/6Po8GXTzICeu8=
mvdan.cc/gofumpt v0.4.0 h1:JVf4NN1mIpHogBj7ABpgOyZc65/UUOkKQFkoURsz4MM=
mvdan.cc/gofumpt v0.4.0/go.mod h1:PljLOHDeZqgS8opHRKLzp2It2VBuSdteAgqUfzMTxlQ=
diff --git a/operator/.bingo/kind.mod b/operator/.bingo/kind.mod
index 74bb39e21b7b5..38a8faf70b632 100644
--- a/operator/.bingo/kind.mod
+++ b/operator/.bingo/kind.mod
@@ -2,4 +2,4 @@ module _ // Auto generated by https://github.com/bwplotka/bingo. DO NOT EDIT
go 1.20
-require sigs.k8s.io/kind v0.17.0
+require sigs.k8s.io/kind v0.20.0
diff --git a/operator/.bingo/kind.sum b/operator/.bingo/kind.sum
index fc34943193e17..6e1ced3dc4c4b 100644
--- a/operator/.bingo/kind.sum
+++ b/operator/.bingo/kind.sum
@@ -438,6 +438,8 @@ sigs.k8s.io/kind v0.16.0 h1:GFXyyxtPnHFKqXr3ZG8/X0+0K9sl69lejStlPn2WQyM=
sigs.k8s.io/kind v0.16.0/go.mod h1:cKTqagdRyUQmihhBOd+7p43DpOPRn9rHsUC08K1Jbsk=
sigs.k8s.io/kind v0.17.0 h1:CScmGz/wX66puA06Gj8OZb76Wmk7JIjgWf5JDvY7msM=
sigs.k8s.io/kind v0.17.0/go.mod h1:Qqp8AiwOlMZmJWs37Hgs31xcbiYXjtXlRBSftcnZXQk=
+sigs.k8s.io/kind v0.20.0 h1:f0sc3v9mQbGnjBUaqSFST1dwIuiikKVGgoTwpoP33a8=
+sigs.k8s.io/kind v0.20.0/go.mod h1:aBlbxg08cauDgZ612shr017/rZwqd7AS563FvpWKPVs=
sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0 h1:kr/MCeFWJWTwyaHoR9c8EjH9OumOmoF9YGiZd7lFm/Q=
diff --git a/operator/.bingo/kustomize.sum b/operator/.bingo/kustomize.sum
index 17411b529d8ff..247834f4b7f42 100644
--- a/operator/.bingo/kustomize.sum
+++ b/operator/.bingo/kustomize.sum
@@ -17,6 +17,7 @@ github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a h1:idn718Q4
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
+github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
@@ -28,6 +29,7 @@ github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3Ee
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -38,11 +40,14 @@ github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDD
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
+github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/evanphx/json-patch v4.5.0+incompatible h1:ouOWdg56aJriqS0huScTkVXPC5IcNrDCXZ6OoTAWu7M=
github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.11.0+incompatible h1:glyUF9yIYtMHzn8xaKw5rMhdWcwsYV8dZHIq5567/xs=
github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
@@ -52,6 +57,7 @@ github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
+github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI=
github.com/go-openapi/analysis v0.17.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
github.com/go-openapi/analysis v0.18.0/go.mod h1:IowGgpVeD0vNm45So8nr+IcQ3pxVtpRoBWb8PVZO0ik=
@@ -119,12 +125,15 @@ github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4er
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
+github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
+github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
@@ -135,13 +144,16 @@ github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMyw
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
+github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
+github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
@@ -151,6 +163,7 @@ github.com/imdario/mergo v0.3.5 h1:JboBksRwiiAJWvIYJVo46AfV+IAIKZpfrSzVKj42R4Q=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.6 h1:xTNEAn+kxVO7dTZGu0CegyqKZmoWFI0rF8UxjlB2d28=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/inconshreveable/mousetrap v1.0.0 h1:Z8tu5sraLXCXIcARxBp/8cbvlwVa7Z1NHg9XEKhtSvM=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
@@ -208,6 +221,7 @@ github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXP
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@@ -215,6 +229,7 @@ github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7z
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
@@ -222,6 +237,7 @@ github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPx
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
+github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v1.0.0 h1:6m/oheQuQ13N9ks4hubMG6BnvwOeaJrqSPLahSnczz8=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
@@ -255,6 +271,8 @@ github.com/xlab/treeprint v0.0.0-20181112141820-a009c3971eca/go.mod h1:ce1O1j6Ut
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=
github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
+github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
+github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
@@ -270,22 +288,32 @@ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACk
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200324143707-d3edc9973b7e/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b h1:uwuIcX0g4Yl1NC5XAz37xsr2lTtcqevgzYNVt49waME=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
+golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd h1:O7DYs+zxREGLKzKoMQrtrEacpb0ZVXA5rIwylE2Xchk=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -293,6 +321,8 @@ golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -304,6 +334,12 @@ golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20191002063906-3421d5a6bb1c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e h1:fLOSk5Q00efkSvAm+4xcoXD+RRmLmmulPn5I3Y9F2EM=
+golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
@@ -316,20 +352,40 @@ golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGm
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190125232054-d66bd3c5d5a6/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
+golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20201019141844-1ed22bb0c154/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
+google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
+google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGjtUeSXeh4=
+google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
+google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
+google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
@@ -346,6 +402,7 @@ gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.7/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
@@ -355,12 +412,16 @@ gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
k8s.io/gengo v0.0.0-20200413195148-3a45101e95ac/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
+k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
+k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e h1:KLHHjkdQFomZy8+06csTWZ0m1343QqxZhR2LJ1OxCYM=
k8s.io/kube-openapi v0.0.0-20210421082810-95288971da7e/go.mod h1:vHXdDvt9+2spS2Rx9ql3I8tycm3H9FDfdUoIuKCefvw=
k8s.io/kube-openapi v0.0.0-20220401212409-b28bf2818661 h1:nqYOUleKLC/0P1zbU29F5q6aoezM6MOAVz+iyfQbZ5M=
k8s.io/kube-openapi v0.0.0-20220401212409-b28bf2818661/go.mod h1:daOouuuwd9JXpv1L7Y34iV3yf6nxzipkKMWWlqlvK9M=
+k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19Vz2GdbOCyI4qqhc=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
sigs.k8s.io/kustomize/api v0.8.5 h1:bfCXGXDAbFbb/Jv5AhMj2BB8a5VAJuuQ5/KU69WtDjQ=
diff --git a/operator/.bingo/operator-sdk.mod b/operator/.bingo/operator-sdk.mod
index 6588d3f28a89a..774411872e4db 100644
--- a/operator/.bingo/operator-sdk.mod
+++ b/operator/.bingo/operator-sdk.mod
@@ -8,12 +8,4 @@ replace github.com/docker/distribution => github.com/docker/distribution v0.0.0-
replace github.com/mattn/go-sqlite3 => github.com/mattn/go-sqlite3 v1.10.0
-replace go.opentelemetry.io/otel => go.opentelemetry.io/otel v0.20.0
-
-replace go.opentelemetry.io/otel/sdk => go.opentelemetry.io/otel/sdk v0.20.0
-
-replace go.opentelemetry.io/otel/trace => go.opentelemetry.io/otel/trace v0.20.0
-
-replace go.opentelemetry.io/proto/otlp => go.opentelemetry.io/proto/otlp v0.7.0
-
-require github.com/operator-framework/operator-sdk v1.27.0 // cmd/operator-sdk
+require github.com/operator-framework/operator-sdk v1.31.0 // cmd/operator-sdk
diff --git a/operator/.bingo/operator-sdk.sum b/operator/.bingo/operator-sdk.sum
index 658f3e8959b15..f805302708505 100644
--- a/operator/.bingo/operator-sdk.sum
+++ b/operator/.bingo/operator-sdk.sum
@@ -62,6 +62,8 @@ github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX
github.com/Azure/go-ansiterm v0.0.0-20210608223527-2377c96fe795/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
+github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
+github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
@@ -109,6 +111,8 @@ github.com/BurntSushi/toml v1.0.0 h1:dtDWrepsVPfW9H/4y7dDgFc2MBUSeJhlaDtK13CxFlU
github.com/BurntSushi/toml v1.0.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/toml v1.2.0 h1:Rt8g24XnyGTyglgET/PRUNlrUeu9F5L+7FilkXfZgs0=
github.com/BurntSushi/toml v1.2.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
+github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
+github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
@@ -126,11 +130,16 @@ github.com/Masterminds/semver v1.5.0 h1:H65muMkzWKEuNDnfl9d70GUjFniHKHRbFPGBuZ3Q
github.com/Masterminds/semver v1.5.0/go.mod h1:MB6lktGJrhw8PrUyiEoblNEGEQ+RzHPF078ddwwvV3Y=
github.com/Masterminds/semver/v3 v3.1.1 h1:hLg3sBzpNErnxhQtUy/mmLR2I9foDujNK030IGemrRc=
github.com/Masterminds/semver/v3 v3.1.1/go.mod h1:VPu/7SZ7ePZ3QOrcuXROw5FAcLl4a0cBrbBpGY/8hQs=
+github.com/Masterminds/semver/v3 v3.2.0 h1:3MEsd0SM6jqZojhjLWWeBY+Kcjy9i6MQAeY7YgDP83g=
+github.com/Masterminds/semver/v3 v3.2.0/go.mod h1:qvl/7zhW3nngYb5+80sSMF+FG2BjYrf8m9wsX0PNOMQ=
github.com/Masterminds/sprig v2.22.0+incompatible h1:z4yfnGrZ7netVz+0EDJ0Wi+5VZCSYp4Z0m2dk6cEM60=
github.com/Masterminds/sprig v2.22.0+incompatible/go.mod h1:y6hNFY5UBTIWBxnzTeuNhlNS5hqE0NB0E6fgfo2Br3o=
github.com/Masterminds/sprig/v3 v3.2.0/go.mod h1:tWhwTbUTndesPNeF0C900vKoq283u6zp4APT9vaF3SI=
+github.com/Masterminds/sprig/v3 v3.2.1/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
github.com/Masterminds/sprig/v3 v3.2.2 h1:17jRggJu518dr3QaafizSXOjKYp94wKfABxUmyxvxX8=
github.com/Masterminds/sprig/v3 v3.2.2/go.mod h1:UoaO7Yp8KlPnJIYWTFkMaqPUYKTfGFPhxNuwnnxkKlk=
+github.com/Masterminds/sprig/v3 v3.2.3 h1:eL2fZNezLomi0uOLqjQoN6BfsDD+fyLtgbJMAj9n6YA=
+github.com/Masterminds/sprig/v3 v3.2.3/go.mod h1:rXcFaZ2zZbLRJv/xSysmlgIM1u11eBaRMhvYXJNkGuM=
github.com/Masterminds/squirrel v1.5.0 h1:JukIZisrUXadA9pl3rMkjhiamxiB0cXiu+HGp/Y8cY8=
github.com/Masterminds/squirrel v1.5.0/go.mod h1:NNaOrjSoIDfDA40n7sr2tPNZRfjzjA400rg+riTZj10=
github.com/Masterminds/squirrel v1.5.2 h1:UiOEi2ZX4RCSkpiNDQN5kro/XIBpSRk9iTqdIRPzUXE=
@@ -148,6 +157,8 @@ github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOp
github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg=
github.com/Microsoft/go-winio v0.6.0/go.mod h1:cTAf44im0RAYeL23bpB+fzCyDH2MJiz2BO69KH/soAE=
+github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
+github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
github.com/Microsoft/hcsshim v0.9.1/go.mod h1:Y/0uV2jUab5kBI7SQgl62at0AVX7uaruzADAVmxm3eM=
@@ -168,6 +179,7 @@ github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:H
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
+github.com/a8m/expect v1.0.0/go.mod h1:4IwSCMumY49ScypDnjNbYEjgVeqy1/U2cEs3Lat96eA=
github.com/adrg/xdg v0.4.0 h1:RzRqFcjH4nE5C6oTAxhBtoE2IRyjBSa62SCbyPidvls=
github.com/adrg/xdg v0.4.0/go.mod h1:N6ag73EX4wyxeaoeHctc1mas01KZgsj5tYiAIwqJE/E=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
@@ -188,6 +200,8 @@ github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e h1:G
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20210826220005-b48c857c3a0e/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20220418222510-f25a4f6275ed h1:ue9pVfIcP+QMEjfgo/Ez4ZjNZfonGgR6NgjMaJMu1Cg=
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20220418222510-f25a4f6275ed/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
+github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 h1:yL7+Jz0jTC6yykIK/Wh74gnTJnrGr5AyrNMXuA0gves=
+github.com/antlr/antlr4/runtime/Go/antlr v1.4.10/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
@@ -241,6 +255,8 @@ github.com/bugsnag/panicwrap v1.2.0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywR
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
+github.com/cenkalti/backoff/v4 v4.2.0 h1:HN5dHm3WBOgndBH6E8V0q2jIYIR3s9yglV8k/+MN3u4=
+github.com/cenkalti/backoff/v4 v4.2.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/certifi/gocertifi v0.0.0-20180118203423-deb3ae2ef261/go.mod h1:GJKEexRPVJrBSOjoqN5VNOIKJ5Q3RViH6eu3puDRwx4=
@@ -252,6 +268,8 @@ github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
+github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5 h1:7aWHqerlJ41y6FOsEUvknqgXnGmJyJSbjhAWq5pO4F8=
github.com/chai2010/gettext-go v0.0.0-20160711120539-c6fed771bfd5/go.mod h1:/iP1qXHoty45bqomnu2LM+VVyAEdWN+vtSHGlQgyxbw=
github.com/chai2010/gettext-go v1.0.2 h1:1Lwwip6Q2QGsAdl/ZKPCwTe9fe0CjlUbqj5bFNSjIRk=
@@ -308,6 +326,8 @@ github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7 h1:6ejg6Lkk8
github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
github.com/containerd/continuity v0.1.0 h1:UFRRY5JemiAhPZrr/uE0n8fMTLcZsUvySPr1+D7pgr8=
github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
+github.com/containerd/continuity v0.3.0 h1:nisirsYROK15TAMVukJOUyGJjz4BNQJBVsNvAXZJ/eg=
+github.com/containerd/continuity v0.3.0/go.mod h1:wJEAIwKOm/pBZuBd0JmeTvnLquTB1Ag8espWhkykbPM=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
@@ -385,6 +405,8 @@ github.com/docker/cli v20.10.12+incompatible h1:lZlz0uzG+GH+c0plStMUdF/qk3ppmgns
github.com/docker/cli v20.10.12+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v20.10.19+incompatible h1:VKVBUb0KY/bx0FUCrCiNCL8wqgy8VxQli1dtNTn38AE=
github.com/docker/cli v20.10.19+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/cli v20.10.21+incompatible h1:qVkgyYUnOLQ98LtXBrwd/duVqPT2X4SHndOuGsfwyhU=
+github.com/docker/cli v20.10.21+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v0.0.0-20191216044856-a8371794149d h1:jC8tT/S0OGx2cswpeUTn4gOIea8P08lD3VFQT0cOZ50=
github.com/docker/distribution v0.0.0-20191216044856-a8371794149d/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
github.com/docker/distribution v2.7.0+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
@@ -404,6 +426,8 @@ github.com/docker/docker v20.10.14+incompatible h1:+T9/PRYWNDo5SZl5qS1r9Mo/0Q8Aw
github.com/docker/docker v20.10.14+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v20.10.19+incompatible h1:lzEmjivyNHFHMNAFLXORMBXyGIhw/UP4DvJwvyKYq64=
github.com/docker/docker v20.10.19+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v20.10.24+incompatible h1:Ugvxm7a8+Gz6vqQYQQ2W7GYq5EUPaAiuPgIfVyI3dYE=
+github.com/docker/docker v20.10.24+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.6.3 h1:zI2p9+1NQYdnG6sMU26EX4aVGlqbInSQxQXLvzJ4RPQ=
github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.4 h1:axCks+yV+2MR3/kZhAmy07yC56WZ2Pwu/fKWtKuZB0o=
@@ -438,6 +462,8 @@ github.com/emicklei/go-restful v2.9.5+incompatible h1:spTtZBk5DYEvbxMVutUuTyh1Ao
github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
+github.com/emicklei/go-restful/v3 v3.10.1 h1:rc42Y5YTp7Am7CS630D7JmhRjq4UlEUuEKfrDac4bSQ=
+github.com/emicklei/go-restful/v3 v3.10.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/emirpasic/gods v1.12.0/go.mod h1:YfzfFFoVP/catgzJb4IKIqXjX78Ha8FMSDh3ymbK86o=
github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -480,6 +506,8 @@ github.com/fatih/structtag v1.1.0 h1:6j4mUV/ES2duvnAzKMFkN6/A5mCaNYPD3xfbAkLLOF8
github.com/fatih/structtag v1.1.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94=
github.com/felixge/httpsnoop v1.0.1 h1:lvB5Jl89CsZtGIWuTcDM1E/vkVs49/Ml7JJe07l8SPQ=
github.com/felixge/httpsnoop v1.0.1/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk=
+github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/flowstack/go-jsonschema v0.1.1/go.mod h1:yL7fNggx1o8rm9RlgXv7hTBWxdBM0rVwpMwimd3F3N0=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/form3tech-oss/jwt-go v3.2.2+incompatible h1:TcekIExNqud5crz4xD2pavyTgWiPvpYe4Xau31I0PRk=
@@ -489,6 +517,7 @@ github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoD
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
+github.com/frankban/quicktest v1.14.3/go.mod h1:mgiwOwqx65TmIk1wJ6Q7wvnVMocbUorkibMOrVTHZps=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9 h1:hsms1Qyu0jgnwNXIxa+/V/PDsU6CfLf6CNO8H7IWoS4=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
@@ -527,6 +556,8 @@ github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gorp/gorp/v3 v3.0.2 h1:ULqJXIekoqMx29FI5ekXXFoH1dT2Vc8UhnRzBg+Emz4=
github.com/go-gorp/gorp/v3 v3.0.2/go.mod h1:BJ3q1ejpV8cVALtcXvXaXyTOlMmJhWDxTmncaR6rwBY=
+github.com/go-gorp/gorp/v3 v3.0.5 h1:PUjzYdYu3HBOh8LE+UUmRG2P0IRDak9XMeGNvaeq4Ow=
+github.com/go-gorp/gorp/v3 v3.0.5/go.mod h1:dLEjIyyRNiXvNZ8PSmzpt1GsWAUK8kjVhEpjH8TixEw=
github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
@@ -548,6 +579,8 @@ github.com/go-logr/logr v1.2.2 h1:ahHml/yUpnlb96Rp8HCvtYVPY8ZYpxq3g7UYchIYwbs=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-logr/zapr v0.2.0/go.mod h1:qhKdvif7YF5GI9NWEpyxTSSBdGmzkNguibrdCNVPunU=
github.com/go-logr/zapr v0.4.0/go.mod h1:tabnROwaDl0UNxkVeFRbY8bwB37GwRv0P8lg6aAiEnk=
github.com/go-logr/zapr v1.2.0/go.mod h1:Qa4Bsj2Vb+FAVeAKsLD8RLQ+YRJB8YDmOAKxaBQf7Ro=
@@ -633,6 +666,8 @@ github.com/gobuffalo/flect v0.2.5 h1:H6vvsv2an0lalEaCDRThvtBfmg44W/QHXBCYUXf/6S4
github.com/gobuffalo/flect v0.2.5/go.mod h1:1ZyCLIbg0YD7sDkzvFdPoOydPtD8y9JQnrOROolUcM8=
github.com/gobuffalo/flect v0.3.0 h1:erfPWM+K1rFNIQeRPdeEXxo8yFr/PO17lhRnS8FUrtk=
github.com/gobuffalo/flect v0.3.0/go.mod h1:5pf3aGnsvqvCj50AVni7mJJF8ICxGZ8HomberC3pXLE=
+github.com/gobuffalo/flect v1.0.0 h1:eBFmskjXZgAOagiTXJH25Nt5sdFwNRcb8DKZsIsAUQI=
+github.com/gobuffalo/flect v1.0.0/go.mod h1:l9V6xSb4BlXwsxEMj3FVEub2nkdQjWhPvD8XTTlHPQc=
github.com/gobuffalo/here v0.6.0/go.mod h1:wAG085dHOYqUpf+Ap+WOdrPTp5IYcDAs/x7PLa8Y5fM=
github.com/gobuffalo/logger v1.0.1/go.mod h1:2zbswyIUa45I+c+FLXuWl9zSWEiVuthsk8ze5s8JvPs=
github.com/gobuffalo/logger v1.0.6/go.mod h1:J31TBEHR1QLV2683OXTAItYIg8pv2JMHnF/quuAbMjs=
@@ -666,6 +701,8 @@ github.com/golang-jwt/jwt/v4 v4.2.0 h1:besgBTC8w8HjP6NzQdxwKH9Z5oQMZ24ThTrHp3cZ8
github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-migrate/migrate/v4 v4.6.2 h1:LDDOHo/q1W5UDj6PbkxdCv7lv9yunyZHXvxuwDkGo3k=
github.com/golang-migrate/migrate/v4 v4.6.2/go.mod h1:JYi6reN3+Z734VZ0akNuyOJNcrg45ZL7LDBMW3WGJL0=
+github.com/golang-migrate/migrate/v4 v4.16.1 h1:O+0C55RbMN66pWm5MjO6mw0px6usGpY0+bkSGW9zCo0=
+github.com/golang-migrate/migrate/v4 v4.16.1/go.mod h1:qXiwa/3Zeqaltm1MxOCZDYysW/F6folYiBgBG03l9hc=
github.com/golang-sql/civil v0.0.0-20190719163853-cb61b32ac6fe/go.mod h1:8vg3r2VgvsThLBIFL93Qb5yWzgyZWhEmBwUJWevAkK0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
@@ -726,6 +763,8 @@ github.com/google/cel-go v0.10.1 h1:MQBGSZGnDwh7T/un+mzGKOMz3x+4E/GDPprWjDL+1Jg=
github.com/google/cel-go v0.10.1/go.mod h1:U7ayypeSkw23szu4GaQTPJGx66c20mx8JklMSxrmI1w=
github.com/google/cel-go v0.12.5 h1:DmzaiSgoaqGCjtpPQWl26/gND+yRpim56H1jCVev6d8=
github.com/google/cel-go v0.12.5/go.mod h1:Jk7ljRzLBhkmiAwBoUxB1sZSCVBAzkqPF25olK/iRDw=
+github.com/google/cel-go v0.12.6 h1:kjeKudqV0OygrAqA9fX6J55S8gj+Jre2tckIm5RoG4M=
+github.com/google/cel-go v0.12.6/go.mod h1:Jk7ljRzLBhkmiAwBoUxB1sZSCVBAzkqPF25olK/iRDw=
github.com/google/cel-spec v0.6.0/go.mod h1:Nwjgxy5CbjlPrtCWjeDjUyKMl8w41YBYGjsyDdqk0xA=
github.com/google/certificate-transparency-go v1.0.21 h1:Yf1aXowfZ2nuboBsg7iYGLmwsOARdV86pfH3g95wXmE=
github.com/google/certificate-transparency-go v1.0.21/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg=
@@ -746,6 +785,7 @@ github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.6 h1:BKbKCqvP6I+rmFHt06ZmyQtvB8xAkWdhFyr0ZUNZcxQ=
github.com/google/go-cmp v0.5.6/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8/DtOE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-containerregistry v0.5.1/go.mod h1:Ct15B4yir3PLOP5jsy0GNeYVaIZs/MK/Jz5any1wFW0=
@@ -830,6 +870,8 @@ github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0 h1:BZHcxBETFHIdVyhyEfOvn/RdU/QGdLI4y34qQGjGWO0=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0/go.mod h1:hgWBS7lorOAVIJEQMi4ZsPv9hVvWI6+ch50m39Pf2Ks=
github.com/grpc-ecosystem/grpc-health-probe v0.3.2/go.mod h1:izVOQ4RWbjUR6lm4nn+VLJyQ+FyaiGmprEYgI04Gs7U=
github.com/grpc-ecosystem/grpc-health-probe v0.4.11/go.mod h1:Ew6du240dK067iM38yVbni1pLpWUFnuyc0PefrB81Uc=
github.com/h2non/filetype v1.1.1 h1:xvOwnXKAckvtLWsN398qS9QhlxlnVXBjXBydK2/UFB4=
@@ -845,6 +887,8 @@ github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyN
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
@@ -856,6 +900,8 @@ github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iP
github.com/hashicorp/go-multierror v1.0.0 h1:iVjPR7a6H0tWELX5NxNe7bYopibicUzc7uPribsnS6o=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
@@ -886,6 +932,9 @@ github.com/huandu/xstrings v1.3.1 h1:4jgBlKK6tLKFvO8u5pmYjG91cqytmDCDvGh7ECVFfFs
github.com/huandu/xstrings v1.3.1/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/huandu/xstrings v1.3.2 h1:L18LIDzqlW6xN2rEkpdV8+oL/IXWJ1APd+vsdYy4Wdw=
github.com/huandu/xstrings v1.3.2/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/huandu/xstrings v1.3.3/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
+github.com/huandu/xstrings v1.4.0 h1:D17IlohoQq4UcpqD7fDk80P7l+lwAmlFaBHgOipl2FU=
+github.com/huandu/xstrings v1.4.0/go.mod h1:y5/lhBue+AyNmUVz9RLU9xbLR0o4KIIExikq4ovT0aE=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/iancoleman/strcase v0.0.0-20191112232945-16388991a334 h1:VHgatEHNcBFEB7inlalqfNqw65aNkM1lGX2yt3NmbS8=
github.com/iancoleman/strcase v0.0.0-20191112232945-16388991a334/go.mod h1:SK73tn/9oHe+/Y0h39VT4UCxmurVJkR5NA7kMEAOgSE=
@@ -961,6 +1010,8 @@ github.com/kisom/goutils v1.1.0/go.mod h1:+UBTfd78habUYWFbNWTJNG+jNG/i/lGURakr4A
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
github.com/klauspost/compress v1.14.1 h1:hLQYb23E8/fO+1u53d02A97a8UnsddcvYzq4ERRU4ds=
github.com/klauspost/compress v1.14.1/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
+github.com/klauspost/compress v1.16.5 h1:IFV2oUNUzZaz+XyusxpLzpzS8Pt5rh0Z16For/djlyI=
+github.com/klauspost/compress v1.16.5/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
@@ -970,6 +1021,8 @@ github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFB
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
+github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.5/go.mod h1:9r2w37qlBe7rQ6e1fg1S/9xpWHSnaqNdHD3WcMdbPDA=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
@@ -1047,6 +1100,8 @@ github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.16 h1:bq3VjFmv/sOjHtdEhmkEV4x1AJtvUvOJ2PFAZ5+peKQ=
github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/mattn/go-isatty v0.0.17 h1:BTarxUcIeDqL27Mc+vyvdWYSL28zpIhv3RoTdsLMPng=
+github.com/mattn/go-isatty v0.0.17/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
github.com/mattn/go-oci8 v0.0.7/go.mod h1:wjDx6Xm9q7dFtHJvIlrI99JytznLw5wQ4R+9mNXJwGI=
github.com/mattn/go-oci8 v0.1.1/go.mod h1:wjDx6Xm9q7dFtHJvIlrI99JytznLw5wQ4R+9mNXJwGI=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
@@ -1067,6 +1122,8 @@ github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182aff
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/matttproud/golang_protobuf_extensions v1.0.2 h1:hAHbPm5IJGijwng3PWk09JkG9WeqChjprR5s9bBZ+OM=
github.com/matttproud/golang_protobuf_extensions v1.0.2/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
+github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
+github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
github.com/maxbrunsfeld/counterfeiter/v6 v6.2.2/go.mod h1:eD9eIE7cdwcMi9rYluz88Jz2VyhSmden33/aXg4oVIY=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
@@ -1076,6 +1133,7 @@ github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceT
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/cli v1.1.2/go.mod h1:6iaV0fGdElS6dPBx0EApTxHrcWvmJphyh2n8YBLPPZ4=
github.com/mitchellh/cli v1.1.4/go.mod h1:vTLESy5mRhKOs9KDp0/RATawxP1UqBmdrpVRMnpcvKQ=
+github.com/mitchellh/cli v1.1.5/go.mod h1:v8+iFts2sPIKUV1ltktPXMCC8fumSKFItNcD2cLtRR4=
github.com/mitchellh/copystructure v1.0.0/go.mod h1:SNtv71yrdKgLRyLFxmLdkAbkKEFWgYaq1OVrnRcwhnw=
github.com/mitchellh/copystructure v1.1.1 h1:Bp6x9R1Wn16SIz3OfeDr0b7RnCG2OB66Y7PQyC/cvq4=
github.com/mitchellh/copystructure v1.1.1/go.mod h1:EBArHfARyrSWO/+Wyr9zwEkc6XMFB9XyNgFNmRkZZU4=
@@ -1122,6 +1180,8 @@ github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6 h1:dcztxKSvZ4Id8iPpHERQB
github.com/moby/term v0.0.0-20210619224110-3f7ff695adc6/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae h1:O4SWKdcHVCvYqyDV+9CJA1fcDN2L11Bule0iFy3YlAI=
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
+github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
+github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -1152,6 +1212,8 @@ github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxzi
github.com/nats-io/nkeys v0.1.3/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
+github.com/nelsam/hel/v2 v2.3.2/go.mod h1:1ZTGfU2PFTOd5mx22i5O0Lc2GY933lQ2wb/ggy+rL3w=
+github.com/nelsam/hel/v2 v2.3.3/go.mod h1:1ZTGfU2PFTOd5mx22i5O0Lc2GY933lQ2wb/ggy+rL3w=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nkovacs/streamquote v0.0.0-20170412213628-49af9bddb229/go.mod h1:0aYXnNPJ8l7uZxf45rWW1a/uME32OF0rhiYGNQ2oF2E=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
@@ -1199,6 +1261,8 @@ github.com/onsi/gomega v1.18.1 h1:M1GfJqGRrBrrGGsbxzV5dqM2U2ApXefZCQpkukxYRLE=
github.com/onsi/gomega v1.18.1/go.mod h1:0q+aL8jAiMXy9hbwj2mr5GziHiwhAIQpFmmtT5hitRs=
github.com/onsi/gomega v1.20.2 h1:8uQq0zMgLEfa0vRrrBgaJF2gyW9Da9BmfGV+OyUzfkY=
github.com/onsi/gomega v1.20.2/go.mod h1:iYAIXgPSaDHak0LCMA+AWBpIKBr8WZicMxnE8luStNc=
+github.com/onsi/gomega v1.24.2 h1:J/tulyYK6JwBldPViHJReihxxZ+22FHs0piGjQAvoUE=
+github.com/onsi/gomega v1.24.2/go.mod h1:gs3J10IS7Z7r7eXRoNJIrNqU4ToQukCJhFtKrWgHWnk=
github.com/op/go-logging v0.0.0-20160315200505-970db520ece7/go.mod h1:HzydrMdWErDVzsI23lYNej1Htcns9BCg93Dk0bBINWk=
github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
@@ -1216,6 +1280,8 @@ github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 h1:rc3
github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/image-spec v1.1.0-rc2 h1:2zx/Stx4Wc5pIPDvIxHXvXtQFW/7XWJGmnM7r3wg034=
github.com/opencontainers/image-spec v1.1.0-rc2/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
+github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b h1:YWuSjZCQAPM8UUBLkYUk1e+rZcvWHJmFb6i6rM44Xs8=
+github.com/opencontainers/image-spec v1.1.0-rc2.0.20221005185240-3a7f492d3f1b/go.mod h1:3OVijpioIKYWTqjiG0zfF6wvoJ4fAXGbjdZuI2NgsRQ=
github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v0.1.1 h1:GlxAyO6x8rfZYN9Tt0Kti5a/cP41iuiO2yYT0IJGY8Y=
github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
@@ -1251,6 +1317,8 @@ github.com/operator-framework/api v0.15.1-0.20220624132056-decf74800a17 h1:05SuH
github.com/operator-framework/api v0.15.1-0.20220624132056-decf74800a17/go.mod h1:scnY9xqSeCsOdtJtNoHIXd7OtHZ14gj1hkDA4+DlgLY=
github.com/operator-framework/api v0.17.4-0.20221221181915-f1b729684854 h1:/EFqPU6baS6MzfLvIeGlfVV0bDLehlA1jwSxHxL0D9s=
github.com/operator-framework/api v0.17.4-0.20221221181915-f1b729684854/go.mod h1:34tb98EwTN5SZLkgoxwvRkhMJKLHUWHOrrcv1ZwvEeA=
+github.com/operator-framework/api v0.17.4-0.20230223191600-0131a6301e42 h1:d/Pnr19TnmIq3zQ6ebewC+5jt5zqYbRkvYd37YZENQY=
+github.com/operator-framework/api v0.17.4-0.20230223191600-0131a6301e42/go.mod h1:l/cuwtPxkVUY7fzYgdust2m9tlmb8I4pOvbsUufRb24=
github.com/operator-framework/helm-operator-plugins v0.0.9 h1:G5aBY5sPrNXcRiKLpAaBMOYm7q0+qCmk9XWOAL/ZJuc=
github.com/operator-framework/helm-operator-plugins v0.0.10 h1:27o8kDaLY9A3DKp2v6s+cAhebM0gXyfgYVc54x7Vtgc=
github.com/operator-framework/helm-operator-plugins v0.0.12-0.20220613184440-7329cace347f h1:lS/IvqlvEQGIwXE0VlW+mOCmFEXBKywNbGQDrK++r/g=
@@ -1258,6 +1326,8 @@ github.com/operator-framework/helm-operator-plugins v0.0.12-0.20220616200420-1a6
github.com/operator-framework/helm-operator-plugins v0.0.12-0.20220616200420-1a695cb9f6a1/go.mod h1:D7zPPwmIFBqHtWigU2iJiLuZ0v7hOJOb1/VC+/UuBAQ=
github.com/operator-framework/helm-operator-plugins v0.0.12-0.20230109213218-ebfbea851192 h1:s7nLCTlcogFSoG97qjehPKdz7h7OA8UWRFnFPOvjL4g=
github.com/operator-framework/helm-operator-plugins v0.0.12-0.20230109213218-ebfbea851192/go.mod h1:qPilKwZ47jjA8kioD9Yr84jQt6NVzb5iK56H5tkh0rQ=
+github.com/operator-framework/helm-operator-plugins v0.0.12-0.20230413193425-4632388adc61 h1:FPO2hS4HNIU2pzWeX2KusKxqDFeGIURRMkxRtn/i570=
+github.com/operator-framework/helm-operator-plugins v0.0.12-0.20230413193425-4632388adc61/go.mod h1:QpVyiSOKGbWADyNRl7LvMlRuuMGrWXJQdEYyHPQWMUg=
github.com/operator-framework/java-operator-plugins v0.0.0-20210708174638-463fb91f3d5e h1:LMsT59IJqaLn7kD6DnZFy0IouRufXyJHTT+mXQrl9Ps=
github.com/operator-framework/java-operator-plugins v0.0.0-20210708174638-463fb91f3d5e/go.mod h1:sGKGELFkUeRqElcyvyPC89bC76YnCL7MPMa13P0AQcw=
github.com/operator-framework/java-operator-plugins v0.1.0 h1:khkYsrkEG4m+wT+oPjZYmWXo8jd0QQ8E4agSrqrhPhU=
@@ -1269,6 +1339,8 @@ github.com/operator-framework/java-operator-plugins v0.6.0 h1:zVapay2Gm1PKigssVN
github.com/operator-framework/java-operator-plugins v0.6.0/go.mod h1:UnUHAWY203Xw1j6Xpiirp/psJJaSRYcjenc0NH2+aVw=
github.com/operator-framework/java-operator-plugins v0.7.1-0.20221007075838-2e24140314fb h1:lHKsuPfcDwgFFvwwh4OdA9MUyZ+xY82Q/xztJkotUKI=
github.com/operator-framework/java-operator-plugins v0.7.1-0.20221007075838-2e24140314fb/go.mod h1:OpTW9khbip8t1urqW1siXHIaq397P1aOAi/4BbWdgXo=
+github.com/operator-framework/java-operator-plugins v0.7.1-0.20230306190439-0eed476d2b75 h1:mjMid39qs1lEXpIldVmj7sa1wtuZvYge8oHkT0qOY0Y=
+github.com/operator-framework/java-operator-plugins v0.7.1-0.20230306190439-0eed476d2b75/go.mod h1:oQTt35EEUrDY8ca/kRWYz5omWsVhk9Sj78vKlHFqxjM=
github.com/operator-framework/operator-lib v0.5.0/go.mod h1:33Skl0vjauYx3nAS+cSFbHNkX8do7weQ6s5siIV/w1E=
github.com/operator-framework/operator-lib v0.6.0/go.mod h1:2Z32GTTJUz2/f+OKcoJXsVnAyRwcXx7mGmQsdhIAIIE=
github.com/operator-framework/operator-lib v0.11.0/go.mod h1:RpyKhFAoG6DmKTDIwMuO6pI3LRc8IE9rxEYWy476o6g=
@@ -1277,6 +1349,8 @@ github.com/operator-framework/operator-manifest-tools v0.2.1 h1:hD3iyOm2mBItzYhp
github.com/operator-framework/operator-manifest-tools v0.2.1/go.mod h1:C4AmRDIJiM8WVyGyqoUuK3KlloZr7XqaabKMMKKhHtA=
github.com/operator-framework/operator-manifest-tools v0.2.3-0.20220901033859-2a7ce32ef673 h1:D4yF9mVC3JmpBpLVEMSzBlaJqN8D6WeJ0e/+Szv4l5A=
github.com/operator-framework/operator-manifest-tools v0.2.3-0.20220901033859-2a7ce32ef673/go.mod h1:pYXBtryqeokM8MiCtSsGxQUI/vZgcFLMhEI0gkt9KFI=
+github.com/operator-framework/operator-manifest-tools v0.2.3-0.20230227155221-caa8b9e1ab12 h1:PXejNY6ZFU6CutIkowf/ECsuT/xcLAIgmXQxG43SHnY=
+github.com/operator-framework/operator-manifest-tools v0.2.3-0.20230227155221-caa8b9e1ab12/go.mod h1:5OAMYmIkFCiiHfS1r3HcIYu3F/sum38pofSoLZy7Cbw=
github.com/operator-framework/operator-registry v1.17.4 h1:bYoWevurGEUshSMu8QNcImhLuPZJ/a4MbsUuvBjFEzA=
github.com/operator-framework/operator-registry v1.17.4/go.mod h1:k0rWVT23QoN1prs9tX8PHjRVXz6FMZfUJ5EIZSrqh9E=
github.com/operator-framework/operator-registry v1.19.5 h1:2LtfN4hrOn+z4MwQsFtJVkyQocQPV+rDrNoawwnBhPI=
@@ -1285,6 +1359,8 @@ github.com/operator-framework/operator-registry v1.23.0 h1:9bOJbxjjupEUBDhAdjC+n
github.com/operator-framework/operator-registry v1.23.0/go.mod h1:7XM/SlL9ExTWYJiNxkJfJxdqJrOovuuP35MvVctNtDE=
github.com/operator-framework/operator-registry v1.26.3-0.20220930210947-614d6a955dc0 h1:dOC9sDcQwAxMk9BnL9uacOAE+7LpdrOruckovzlzVj0=
github.com/operator-framework/operator-registry v1.26.3-0.20220930210947-614d6a955dc0/go.mod h1:yCJaYiwRDd+Vi+ACtN1QlwRfJB/moStJvtlW+VOyx9o=
+github.com/operator-framework/operator-registry v1.28.0 h1:vtmd2WgJxkx7vuuOxW4k5Le/oo0SfonSeJVMU3rKIfk=
+github.com/operator-framework/operator-registry v1.28.0/go.mod h1:UYw3uaZyHwHgnczLRYmUqMpgRgP2EfkqOsaR+LI+nK8=
github.com/operator-framework/operator-sdk v1.11.0 h1:CpNNSLbrSIR3j1O0YH9CmayxD6fIkuW6KB+m6jWsph8=
github.com/operator-framework/operator-sdk v1.11.0/go.mod h1:fkJPPNnepIgLn9FJDZhXKYrx+LsU8/4TWLT4RF0Whr0=
github.com/operator-framework/operator-sdk v1.13.0 h1:XOW3KMLf8DyXisC/gmltRaq/rBJy45QsBLINOiFDVD0=
@@ -1315,6 +1391,8 @@ github.com/operator-framework/operator-sdk v1.24.1 h1:exm5snGUOPSysHF0z1p/APrTn/
github.com/operator-framework/operator-sdk v1.24.1/go.mod h1:ABeI+hU/bPfO1u09f7Vranx5GHcwdFnAa21IZjTTKxo=
github.com/operator-framework/operator-sdk v1.27.0 h1:Uhnhi88U2jWkagWB2G60qNbTv2ETeIuDUswohEbcfIQ=
github.com/operator-framework/operator-sdk v1.27.0/go.mod h1:DhJvT5akOZNilQu7OVUd8I+LLzaXv+S8VZktElQCtOs=
+github.com/operator-framework/operator-sdk v1.31.0 h1:jnTK3lQ8JkRE0sRV3AdTmNKBZmYZaCiEkPcm3LWGKxE=
+github.com/operator-framework/operator-sdk v1.31.0/go.mod h1:j51dzpQQTMlNxtn5ThSOfRZP7N2iUiGaAPj9uJN5JAo=
github.com/otiai10/copy v1.2.0 h1:HvG945u96iNadPoG2/Ja2+AUJeW5YuFQMixq9yirC+k=
github.com/otiai10/copy v1.2.0/go.mod h1:rrF5dJ5F0t/EWSYODDu4j9/vEeYHMkc8jt0zJChqQWw=
github.com/otiai10/curr v0.0.0-20150429015615-9b4961190c95/go.mod h1:9qAhocn7zKJG+0mI8eUu6xqkFDYS2kb2saOteoSB3cE=
@@ -1355,6 +1433,8 @@ github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZN
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/poy/onpar v0.0.0-20190519213022-ee068f8ea4d1/go.mod h1:nSbFQvMj97ZyhFRSJYtut+msi4sOY6zJDGCdSc+/rZU=
+github.com/poy/onpar v0.0.0-20200406201722-06f95a1c68e8/go.mod h1:nSbFQvMj97ZyhFRSJYtut+msi4sOY6zJDGCdSc+/rZU=
+github.com/poy/onpar v1.1.2/go.mod h1:6X8FLNoxyr9kkmnlqpK6LSoiOtrO6MICtWwEuWkLjzg=
github.com/pquerna/cachecontrol v0.0.0-20171018203845-0dec1b30a021/go.mod h1:prYjPmNq4d1NPVmpShWobRqXY3q7Vp+80DqgxxUrUIA=
github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
@@ -1371,6 +1451,8 @@ github.com/prometheus/client_golang v1.12.1 h1:ZiaPsmm9uiBeaSMRznKsCDNtPCS0T3JVD
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPkyPJ7TPsloU=
github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
+github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
+github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
@@ -1379,6 +1461,8 @@ github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2dUR+/W/M=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
+github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
+github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
@@ -1423,18 +1507,23 @@ github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR
github.com/rogpeppe/go-internal v1.3.2/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.4.0 h1:LUa41nrWTQNGhzdsZ5lTnkwbNjj6rXTdazA1cSdjkOY=
github.com/rogpeppe/go-internal v1.4.0/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
+github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rogpeppe/go-internal v1.8.0/go.mod h1:WmiCO8CzOY8rg0OYDC4/i/2WRWAB6poM+XZ2dLUbcbE=
+github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
github.com/rubenv/sql-migrate v0.0.0-20200616145509-8d140a17f351 h1:HXr/qUllAWv9riaI4zh2eXWKmCSDqVS/XH1MRHLKRwk=
github.com/rubenv/sql-migrate v0.0.0-20200616145509-8d140a17f351/go.mod h1:DCgfY80j8GYL7MLEfvcpSFvjD0L5yZq/aZUJmhZklyg=
github.com/rubenv/sql-migrate v1.1.1 h1:haR5Hn8hbW9/SpAICrXoZqXnywS7Q5WijwkQENPeNWY=
github.com/rubenv/sql-migrate v1.1.1/go.mod h1:/7TZymwxN8VWumcIxw1jjHEcR1djpdkMHQPT4FWdnbQ=
github.com/rubenv/sql-migrate v1.2.0 h1:fOXMPLMd41sK7Tg75SXDec15k3zg5WNV6SjuDRiNfcU=
github.com/rubenv/sql-migrate v1.2.0/go.mod h1:Z5uVnq7vrIrPmHbVFfR4YLHRZquxeHpckCnRq0P/K9Y=
+github.com/rubenv/sql-migrate v1.3.1 h1:Vx+n4Du8X8VTYuXbhNxdEUoh6wiJERA0GlWocR5FrbA=
+github.com/rubenv/sql-migrate v1.3.1/go.mod h1:YzG/Vh82CwyhTFXy+Mf5ahAiiEOpAlHurg+23VEzcsk=
github.com/russross/blackfriday v1.5.2 h1:HyvC0ARfnZBqnXwABFeSZHpKvJHJJfPz81GNueLj0oo=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday v1.6.0 h1:KqfZb0pUVN2lYqZUYRddxF4OR8ZMURnJIG5Y3VRLtww=
github.com/russross/blackfriday v1.6.0/go.mod h1:ti0ldHuxg49ri4ksnFxlkCfN+hvslNlmVHqNRXXJNAY=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
+github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sagikazarmark/crypt v0.3.0/go.mod h1:uD/D+6UF4SrIR1uGEv7bBNkNqLGqUr43MRiaGWX1Nig=
@@ -1463,6 +1552,8 @@ github.com/sirupsen/logrus v1.8.1 h1:dJKuHgqk1NNQlqoA6BTlM1Wf9DOH3NBjQyu0h9+AZZE
github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
+github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
+github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
@@ -1477,6 +1568,8 @@ github.com/spf13/afero v1.6.0 h1:xoax2sJ2DT8S8xA2paPFjDCScCNeWsg75VG0DLRreiY=
github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
github.com/spf13/afero v1.9.2 h1:j49Hj62F0n+DaZ1dDCvhABaPNSGNkt32oRFxI33IEMw=
github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
+github.com/spf13/afero v1.9.3 h1:41FoI0fD7OR7mGcKE/aOiLkGreyf8ifIOQmJANWogMk=
+github.com/spf13/afero v1.9.3/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cast v1.3.1 h1:nFm6S0SMdyzrzcmThSipiEubIDy8WEXKNZ0UOgiRpng=
github.com/spf13/cast v1.3.1/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
@@ -1487,6 +1580,7 @@ github.com/spf13/cast v1.5.0/go.mod h1:SpXXQ5YoyJw6s3/6cMTQuxvgRl3PCJiyaX9p6b155
github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
+github.com/spf13/cobra v0.0.6/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.1/go.mod h1:WnodtKOvamDL/PwE2M4iKs8aMDBZ5Q5klgD3qfVJQMI=
github.com/spf13/cobra v1.1.3 h1:xghbfqPkxzxP3C/f3n5DdpAbdKLj4ZE4BWQI362l53M=
@@ -1499,6 +1593,8 @@ github.com/spf13/cobra v1.4.0 h1:y+wJpx64xcgO1V+RcnwW0LEHxTKRi2ZDPSBjWnrg88Q=
github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
github.com/spf13/cobra v1.6.0 h1:42a0n6jwCot1pUmomAp4T7DeMD+20LFv4Q54pxLf2LI=
github.com/spf13/cobra v1.6.0/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
+github.com/spf13/cobra v1.6.1 h1:o94oiPyS4KD1mPy2fmcYYHHfCxLqYjJOhGsCHFZtEzA=
+github.com/spf13/cobra v1.6.1/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/jwalterweatherman v1.0.0 h1:XHEdyB+EcvlqZamSM4ZOMGlc93t6AcsBEu9Gc1vn7yk=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
@@ -1528,6 +1624,7 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -1539,6 +1636,9 @@ github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMT
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8=
+github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
@@ -1595,6 +1695,7 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.0/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
+github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/go-metrics v0.0.0-20150112132944-c25f46c4b940/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
@@ -1647,28 +1748,48 @@ go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
+go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
+go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/contrib v0.20.0 h1:ubFQUn0VCZ0gPwIoJfBJVpeBlyRMxu8Mm/huKWYd9p0=
go.opentelemetry.io/contrib v0.20.0/go.mod h1:G/EtFaa6qaN7+LxqfIAT3GiZa7Wv5DTBUzl5H4LY0Kc=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0/go.mod h1:oVGt1LRbBOBq1A5BQLlUg9UaU/54aiHw8cgjV3aWZ/E=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0 h1:Q3C9yzW6I9jqEc8sawxzxZmY48fs9u220KXq6d5s3XU=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0/go.mod h1:2AboqHi0CiIZU0qwhtUfCYD1GeUzvvIXWNkhDt7ZMG4=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.0 h1:Ajldaqhxqw/gNzQA45IKFWLdG7jZuXX/wBW1d5qvbUI=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.0/go.mod h1:9NiG9I2aHTKkcxqCILhjtyNA1QEiCjdBACv4IvrFQ+c=
go.opentelemetry.io/otel v0.20.0 h1:eaP0Fqu7SXHwvjiqDq83zImeehOHX8doTvU9AwXON8g=
go.opentelemetry.io/otel v0.20.0/go.mod h1:Y3ugLH2oa81t5QO+Lty+zXf8zC9L26ax4Nzoxm/dooo=
+go.opentelemetry.io/otel v1.14.0 h1:/79Huy8wbf5DnIPhemGB+zEPVwnN6fuQybr/SRXa6hM=
+go.opentelemetry.io/otel v1.14.0/go.mod h1:o4buv+dJzx8rohcUeRmWUZhqupFvzWis188WlggnNeU=
go.opentelemetry.io/otel/exporters/otlp v0.20.0 h1:PTNgq9MRmQqqJY0REVbZFvwkYOA85vbdQU/nVfxDyqg=
go.opentelemetry.io/otel/exporters/otlp v0.20.0/go.mod h1:YIieizyaN77rtLJra0buKiNBOm9XQfkPEKBeuhoMwAM=
+go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0 h1:/fXHZHGvro6MVqV34fJzDhi7sHGpX3Ej/Qjmfn003ho=
+go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.14.0/go.mod h1:UFG7EBMRdXyFstOwH028U0sVf+AvukSGhF0g8+dmNG8=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.14.0 h1:TKf2uAs2ueguzLaxOCBXNpHxfO/aC7PAdDsSH0IbeRQ=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.14.0/go.mod h1:HrbCVv40OOLTABmOn1ZWty6CHXkU8DK/Urc43tHug70=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0 h1:ap+y8RXX3Mu9apKVtOkM6WSFESLM8K3wNQyOU8sWHcc=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.14.0/go.mod h1:5w41DY6S9gZrbjuq6Y+753e96WfPha5IcsOSZTtullM=
go.opentelemetry.io/otel/metric v0.20.0 h1:4kzhXFP+btKm4jwxpjIqjs41A7MakRFUS86bqLHTIw8=
go.opentelemetry.io/otel/metric v0.20.0/go.mod h1:598I5tYlH1vzBjn+BTuhzTCSb/9debfNp6R3s7Pr1eU=
+go.opentelemetry.io/otel/metric v0.31.0 h1:6SiklT+gfWAwWUR0meEMxQBtihpiEs4c+vL9spDTqUs=
+go.opentelemetry.io/otel/metric v0.31.0/go.mod h1:ohmwj9KTSIeBnDBm/ZwH2PSZxZzoOaG2xZeekTRzL5A=
go.opentelemetry.io/otel/oteltest v0.20.0/go.mod h1:L7bgKf9ZB7qCwT9Up7i9/pn0PWIa9FqQ2IQ8LoxiGnw=
go.opentelemetry.io/otel/sdk v0.20.0 h1:JsxtGXd06J8jrnya7fdI/U/MR6yXA5DtbZy+qoHQlr8=
go.opentelemetry.io/otel/sdk v0.20.0/go.mod h1:g/IcepuwNsoiX5Byy2nNV0ySUF1em498m7hBWC279Yc=
+go.opentelemetry.io/otel/sdk v1.14.0 h1:PDCppFRDq8A1jL9v6KMI6dYesaq+DFcDZvjsoGvxGzY=
+go.opentelemetry.io/otel/sdk v1.14.0/go.mod h1:bwIC5TjrNG6QDCHNWvW4HLHtUQ4I+VQDsnjhvyZCALM=
go.opentelemetry.io/otel/sdk/export/metric v0.20.0 h1:c5VRjxCXdQlx1HjzwGdQHzZaVI82b5EbBgOu2ljD92g=
go.opentelemetry.io/otel/sdk/export/metric v0.20.0/go.mod h1:h7RBNMsDJ5pmI1zExLi+bJK+Dr8NQCh0qGhm1KDnNlE=
go.opentelemetry.io/otel/sdk/metric v0.20.0 h1:7ao1wpzHRVKf0OQ7GIxiQJA6X7DLX9o14gmVon7mMK8=
go.opentelemetry.io/otel/sdk/metric v0.20.0/go.mod h1:knxiS8Xd4E/N+ZqKmUPf3gTTZ4/0TjTXukfxjzSTpHE=
go.opentelemetry.io/otel/trace v0.20.0 h1:1DL6EXUdcg95gukhuRRvLDO/4X5THh/5dIV52lqtnbw=
go.opentelemetry.io/otel/trace v0.20.0/go.mod h1:6GjCW8zgDjwGHGa6GkyeB8+/5vjT16gUEi0Nf1iBdgw=
+go.opentelemetry.io/otel/trace v1.14.0 h1:wp2Mmvj41tDsyAJXiWDWpfNsOiIyd38fy85pyKcFq/M=
+go.opentelemetry.io/otel/trace v1.14.0/go.mod h1:8avnQLK+CG77yNLUae4ea2JDQ6iT+gozhnZjy/rw9G8=
go.opentelemetry.io/proto/otlp v0.7.0 h1:rwOQPCuKAKmwGKq2aVNnYIibI6wnV7EvzgfTCzcdGg8=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
+go.opentelemetry.io/proto/otlp v0.19.0 h1:IVN6GR+mhC4s5yfcTbmzHYODqvWAp3ZedA2SJPI1Nnw=
+go.opentelemetry.io/proto/otlp v0.19.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 h1:+FNtrFTmVw0YZGpBGX56XDee331t6JAXeK2bcyhLOOc=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5/go.mod h1:nmDLcffg48OtT/PSW0Hg7FvpRQsQh5OSqIylirxKC7o=
go.starlark.net v0.0.0-20221010140840-6bf6f0955179 h1:Mc5MkF55Iasgq23vSYpL6/l7EJXtlNjzw+8hbMQ/ShY=
@@ -1738,6 +1859,10 @@ golang.org/x/crypto v0.0.0-20220408190544-5352b0902921 h1:iU7T1X1J6yxDr0rda54sWG
golang.org/x/crypto v0.0.0-20220408190544-5352b0902921/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20221012134737-56aed061732a h1:NmSIgad6KjE6VvHciPZuNRTKxGhlPfD6OA87W/PLkqg=
golang.org/x/crypto v0.0.0-20221012134737-56aed061732a/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.3.0/go.mod h1:hebNnKkNXi2UzZN1eVRvBB7co0a+JxK6XbPiWVs/3J4=
+golang.org/x/crypto v0.5.0/go.mod h1:NK/OQwhpMQP3MwtdjgLlYHnH9ebylxKWv3e0fK+mkQU=
+golang.org/x/crypto v0.7.0 h1:AvwMYaRytfdeVt3u6mLaxYtErKYjxA2OXjJ1HHq6t3A=
+golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1783,6 +1908,8 @@ golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3 h1:kQgndtyPBW/JIYERgdx
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
+golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
+golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20170114055629-f2499483f923/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1864,8 +1991,13 @@ golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220407224826-aac1ed45d8e3 h1:EN5+DfgmRMvRUrMGERW2gQl3Vc+Z7ZMnI/xdEpPSf0c=
golang.org/x/net v0.0.0-20220407224826-aac1ed45d8e3/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20221014081412-f15817d10f9b h1:tvrvnPFcdzp294diPnrdZZZ8XUt2Tyj7svb7X52iDuU=
golang.org/x/net v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
+golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
+golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
+golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181106182150-f42d05182288/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1893,6 +2025,8 @@ golang.org/x/oauth2 v0.0.0-20211104180415-d3ed0bb246c8/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783 h1:nt+Q6cXKz4MosCSpnbMtqiQ8Oz0pxTef2B4Vca2lvfk=
golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
+golang.org/x/oauth2 v0.6.0 h1:Lh8GPgSKBfWSwFvtuWOfeI3aAAnbXTSutYxJiOJFgIw=
+golang.org/x/oauth2 v0.6.0/go.mod h1:ycmewcwgD4Rpr3eZJLSB4Kyyljb3qDh40vJ8STE5HKw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1905,8 +2039,11 @@ golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c h1:5KslGYwFpkhGh+Q16bwMP3cOontH8FOep7tGV86Y7SQ=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0 h1:cu5kTvlzcw1Q5S9f5ip1/cpiB4nXvw1XYzFPGgzLUOY=
golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.2.0 h1:PUR+T4wwASmuSTYdKjYHI5TD22Wy5ogLU5qZCOLxBrI=
+golang.org/x/sync v0.2.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20170830134202-bb24a47a89ea/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -2038,13 +2175,19 @@ golang.org/x/sys v0.0.0-20220224120231-95c6836cb0e7 h1:BXxu8t6QN0G1uff4bzZzSkpsa
golang.org/x/sys v0.0.0-20220408201424-a24fb2fb8a0f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150 h1:xHms4gcpe1YE7A3yIllJXP16CMAGuqwO2lX1mTyyRRc=
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220614162138-6c1b26c55098 h1:PgOr27OhUx2IRqGJ2RxAWI4dJQ7bi9cSrB82uzFzfUA=
golang.org/x/sys v0.0.0-20220614162138-6c1b26c55098/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43 h1:OK7RB6t2WQX54srQQYSXMW8dF5C6/8+oA/s5QBmmto4=
golang.org/x/sys v0.0.0-20221013171732-95e765b1cc43/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.8.0 h1:EBmGv8NaZBZTWvrbjNoL6HVt+IVy3QDQpJs7VRIw3tU=
+golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210220032956-6a3ed077a48d h1:SZxvLBoTP5yHO3Frd4z4vrF+DBX9vMVanchswa69toE=
@@ -2055,6 +2198,10 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX
golang.org/x/term v0.0.0-20220526004731-065cf7ba2467/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.0.0-20220919170432-7a66f970e087 h1:tPwmk4vmvVCMdr98VgL4JH+qZxPL8fqlUOHnyOM8N3w=
golang.org/x/term v0.0.0-20220919170432-7a66f970e087/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
+golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.8.0 h1:n5xxQn2i3PC0yLAbjTpNT85q/Kgzcr2gIoX9OrJUols=
+golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -2068,6 +2215,10 @@ golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.3.8 h1:nAL+RVCQ9uMn3vJZbV+MRnydTJFPf8qqY42YiA6MrqY=
golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
+golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.9.0 h1:2sjJmO8cDvYveuX97RDLsxlyUxLl+GHoLxBiRdHllBE=
+golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -2083,6 +2234,8 @@ golang.org/x/time v0.0.0-20220210224613-90d013bbcef8 h1:vVKdlvoWBphwdxWKrFZEuM0k
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220922220347-f3bd1da661af h1:Yx9k8YCG3dvF87UAn2tu2HQLf2dt/eR1bXxpLMWeH+Y=
golang.org/x/time v0.0.0-20220922220347-f3bd1da661af/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
+golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -2137,6 +2290,7 @@ golang.org/x/tools v0.0.0-20200224181240-023911ca70b2/go.mod h1:TB2adYChydJhpapK
golang.org/x/tools v0.0.0-20200227222343-706bc42d1f0d/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200304193943-95d2e580d8eb/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
golang.org/x/tools v0.0.0-20200312045724-11d5b4c81c7d/go.mod h1:o4KQGtdN14AW+yjsvvwRTJJuXz8XRtIHtEnmAXLyFUw=
+golang.org/x/tools v0.0.0-20200313205530-4303120df7d8/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200331025713-a30bf2db82d4/go.mod h1:Sl4aGygMT6LrqrWclx+PTx3U+LnKx/seiNR+3G19Ar8=
golang.org/x/tools v0.0.0-20200501065659-ab2804fb9c9d/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200505023115-26f46d2f7ef8/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
@@ -2176,6 +2330,8 @@ golang.org/x/tools v0.1.11 h1:loJ25fNOEhSXfHrpoGj91eCUThwdNX6u24rO1xnNteY=
golang.org/x/tools v0.1.11/go.mod h1:SgwaegtQh8clINPpECJMqnxLv9I09HLqnW3RMqW0CA4=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
+golang.org/x/tools v0.9.1 h1:8WMNJAz3zrtPmnYC7ISf5dEn3MT0gY7jBJfw27yrrLo=
+golang.org/x/tools v0.9.1/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -2319,6 +2475,8 @@ google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac h1:qSNTkEN+L2mvWcL
google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20221014173430-6e2ab493f96b h1:IOQ/4u8ZSLV+xns0LQxzdAcdOJTDMWB+0shVM8KWXBE=
google.golang.org/genproto v0.0.0-20221014173430-6e2ab493f96b/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM=
+google.golang.org/genproto v0.0.0-20230320184635-7606e756e683 h1:khxVcsk/FhnzxMKOyD+TDGwjbEOpcPuIpmafPGFmhMA=
+google.golang.org/genproto v0.0.0-20230320184635-7606e756e683/go.mod h1:NWraEVixdDnqcqQ30jipen1STv2r/n24Wb7twVTGR4s=
google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
@@ -2364,6 +2522,8 @@ google.golang.org/grpc v1.45.0 h1:NEpgUqV3Z+ZjkqMsxMg11IaDrXY4RY6CQukSGK0uI1M=
google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11+0rQ=
google.golang.org/grpc v1.50.0 h1:fPVVDxY9w++VjTZsYvXWqEf9Rqar/e+9zYfxKK+W+YU=
google.golang.org/grpc v1.50.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
+google.golang.org/grpc v1.53.0 h1:LAv2ds7cmFV/XTS3XG1NneeENYrXGmorPxsBbptIjNc=
+google.golang.org/grpc v1.53.0/go.mod h1:OnIrk0ipVdj4N5d9IUoFUx72/VlD7+jUsHwZgwSMQpw=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v0.0.0-20200709232328-d8193ee9cc3e/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/grpc/examples v0.0.0-20201130180447-c456688b1860/go.mod h1:Ly7ZA/ARzg8fnPU9TyZIxoz33sEUuWX7txiqs8lPTgE=
@@ -2386,6 +2546,8 @@ google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscL
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.29.1 h1:7QBf+IK2gx70Ap/hDsOmam3GE0v9HicjfEdAxE62UoM=
+google.golang.org/protobuf v1.29.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -2451,6 +2613,8 @@ helm.sh/helm/v3 v3.9.0 h1:qDSWViuF6SzZX5s5AB/NVRGWmdao7T5j4S4ebIkMGag=
helm.sh/helm/v3 v3.9.0/go.mod h1:fzZfyslcPAWwSdkXrXlpKexFeE2Dei8N27FFQWt+PN0=
helm.sh/helm/v3 v3.10.3 h1:wL7IUZ7Zyukm5Kz0OUmIFZgKHuAgByCrUcJBtY0kDyw=
helm.sh/helm/v3 v3.10.3/go.mod h1:CXOcs02AYvrlPMWARNYNRgf2rNP7gLJQsi/Ubd4EDrI=
+helm.sh/helm/v3 v3.11.3 h1:n1X5yaQTP5DYywlBOZMl2gX398Gp6YwFp/IAVj6+5D4=
+helm.sh/helm/v3 v3.11.3/go.mod h1:S+sOdQc3BLvt09a9rSlKKVs9x0N/yx+No0y3qFw+FQ8=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -2480,6 +2644,8 @@ k8s.io/api v0.24.2 h1:g518dPU/L7VRLxWfcadQn2OnsiGWVOadTLpdnqgY2OI=
k8s.io/api v0.24.2/go.mod h1:AHqbSkTm6YrQ0ObxjO3Pmp/ubFF/KuM7jU+3khoBsOg=
k8s.io/api v0.25.3 h1:Q1v5UFfYe87vi5H7NU0p4RXC26PPMT8KOpr1TLQbCMQ=
k8s.io/api v0.25.3/go.mod h1:o42gKscFrEVjHdQnyRenACrMtbuJsVdP+WVjqejfzmI=
+k8s.io/api v0.26.2 h1:dM3cinp3PGB6asOySalOZxEG4CZ0IAdJsrYZXE/ovGQ=
+k8s.io/api v0.26.2/go.mod h1:1kjMQsFE+QHPfskEcVNgL3+Hp88B80uj0QtSOlj8itU=
k8s.io/apiextensions-apiserver v0.18.2/go.mod h1:q3faSnRGmYimiocj6cHQ1I3WpLqmDgJFlKL37fC4ZvY=
k8s.io/apiextensions-apiserver v0.20.1/go.mod h1:ntnrZV+6a3dB504qwC5PN/Yg9PBiDNt1EVqbW2kORVk=
k8s.io/apiextensions-apiserver v0.20.6/go.mod h1:qO8YMqeMmZH+lV21LUNzV41vfpoE9QVAJRA+MNqj0mo=
@@ -2500,6 +2666,8 @@ k8s.io/apiextensions-apiserver v0.24.2 h1:/4NEQHKlEz1MlaK/wHT5KMKC9UKYz6NZz6JE6o
k8s.io/apiextensions-apiserver v0.24.2/go.mod h1:e5t2GMFVngUEHUd0wuCJzw8YDwZoqZfJiGOW6mm2hLQ=
k8s.io/apiextensions-apiserver v0.25.3 h1:bfI4KS31w2f9WM1KLGwnwuVlW3RSRPuIsfNF/3HzR0k=
k8s.io/apiextensions-apiserver v0.25.3/go.mod h1:ZJqwpCkxIx9itilmZek7JgfUAM0dnTsA48I4krPqRmo=
+k8s.io/apiextensions-apiserver v0.26.2 h1:/yTG2B9jGY2Q70iGskMf41qTLhL9XeNN2KhI0uDgwko=
+k8s.io/apiextensions-apiserver v0.26.2/go.mod h1:Y7UPgch8nph8mGCuVk0SK83LnS8Esf3n6fUBgew8SH8=
k8s.io/apimachinery v0.18.2/go.mod h1:9SnR/e11v5IbyPCGbvJViimtJ0SwHG4nfZFjU77ftcA=
k8s.io/apimachinery v0.19.2/go.mod h1:DnPGDnARWFvYa3pMHgSxtbZb7gpzzAZ1pTfaUNDVlmA=
k8s.io/apimachinery v0.20.1/go.mod h1:WlLqWAHZGg07AeltaI0MV5uk1Omp8xaN0JGLY6gkRpU=
@@ -2523,6 +2691,8 @@ k8s.io/apimachinery v0.24.2 h1:5QlH9SL2C8KMcrNJPor+LbXVTaZRReml7svPEh4OKDM=
k8s.io/apimachinery v0.24.2/go.mod h1:82Bi4sCzVBdpYjyI4jY6aHX+YCUchUIrZrXKedjd2UM=
k8s.io/apimachinery v0.25.3 h1:7o9ium4uyUOM76t6aunP0nZuex7gDf8VGwkR5RcJnQc=
k8s.io/apimachinery v0.25.3/go.mod h1:jaF9C/iPNM1FuLl7Zuy5b9v+n35HGSh6AQ4HYRkCqwo=
+k8s.io/apimachinery v0.26.2 h1:da1u3D5wfR5u2RpLhE/ZtZS2P7QvDgLZTi9wrNZl/tQ=
+k8s.io/apimachinery v0.26.2/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
k8s.io/apiserver v0.18.2/go.mod h1:Xbh066NqrZO8cbsoenCwyDJ1OSi8Ag8I2lezeHxzwzw=
k8s.io/apiserver v0.20.1/go.mod h1:ro5QHeQkgMS7ZGpvf4tSMx6bBOgPfE+f52KwvXfScaU=
k8s.io/apiserver v0.20.6/go.mod h1:QIJXNt6i6JB+0YQRNcS0hdRHJlMhflFmsBDeSgT1r8Q=
@@ -2543,6 +2713,8 @@ k8s.io/apiserver v0.24.2 h1:orxipm5elPJSkkFNlwH9ClqaKEDJJA3yR2cAAlCnyj4=
k8s.io/apiserver v0.24.2/go.mod h1:pSuKzr3zV+L+MWqsEo0kHHYwCo77AT5qXbFXP2jbvFI=
k8s.io/apiserver v0.25.3 h1:m7+xGuG5+KYAnEsqaFtDyWMkmMMEOFYlu+NlWv5qSBI=
k8s.io/apiserver v0.25.3/go.mod h1:9bT47iM2fzRuhICJpM/RcQR9sqDDfZ7Yw60h0p3JW08=
+k8s.io/apiserver v0.26.2 h1:Pk8lmX4G14hYqJd1poHGC08G03nIHVqdJMR0SD3IH3o=
+k8s.io/apiserver v0.26.2/go.mod h1:GHcozwXgXsPuOJ28EnQ/jXEM9QeG6HT22YxSNmpYNh8=
k8s.io/cli-runtime v0.20.6/go.mod h1:JVERW478qcxWrUjJuWQSqyJeiz9QC4T6jmBznHFBC8w=
k8s.io/cli-runtime v0.21.0 h1:/V2Kkxtf6x5NI2z+Sd/mIrq4FQyQ8jzZAUD6N5RnN7Y=
k8s.io/cli-runtime v0.21.0/go.mod h1:XoaHP93mGPF37MkLbjGVYqg3S1MnsFdKtiA/RZzzxOo=
@@ -2553,6 +2725,8 @@ k8s.io/cli-runtime v0.24.1 h1:IW6L8dRBq+pPTzvXcB+m/hOabzbqXy57Bqo4XxmW7DY=
k8s.io/cli-runtime v0.24.1/go.mod h1:14aVvCTqkA7dNXY51N/6hRY3GUjchyWDOwW84qmR3bs=
k8s.io/cli-runtime v0.25.3 h1:Zs7P7l7db/5J+KDePOVtDlArAa9pZXaDinGWGZl0aM8=
k8s.io/cli-runtime v0.25.3/go.mod h1:InHHsjkyW5hQsILJGpGjeruiDZT/R0OkROQgD6GzxO4=
+k8s.io/cli-runtime v0.26.2 h1:6XcIQOYW1RGNwFgRwejvyUyAojhToPmJLGr0JBMC5jw=
+k8s.io/cli-runtime v0.26.2/go.mod h1:U7sIXX7n6ZB+MmYQsyJratzPeJwgITqrSlpr1a5wM5I=
k8s.io/client-go v0.18.2/go.mod h1:Xcm5wVGXX9HAA2JJ2sSBUn3tCJ+4SVlCbl2MNNv+CIU=
k8s.io/client-go v0.20.1/go.mod h1:/zcHdt1TeWSd5HoUe6elJmHSQ6uLLgp4bIJHVEuy+/Y=
k8s.io/client-go v0.20.6/go.mod h1:nNQMnOvEUEsOzRRFIIkdmYOjAZrC8bgq0ExboWSU1I0=
@@ -2574,6 +2748,8 @@ k8s.io/client-go v0.24.2 h1:CoXFSf8if+bLEbinDqN9ePIDGzcLtqhfd6jpfnwGOFA=
k8s.io/client-go v0.24.2/go.mod h1:zg4Xaoo+umDsfCWr4fCnmLEtQXyCNXCvJuSsglNcV30=
k8s.io/client-go v0.25.3 h1:oB4Dyl8d6UbfDHD8Bv8evKylzs3BXzzufLiO27xuPs0=
k8s.io/client-go v0.25.3/go.mod h1:t39LPczAIMwycjcXkVc+CB+PZV69jQuNx4um5ORDjQA=
+k8s.io/client-go v0.26.2 h1:s1WkVujHX3kTp4Zn4yGNFK+dlDXy1bAAkIl+cFAiuYI=
+k8s.io/client-go v0.26.2/go.mod h1:u5EjOuSyBa09yqqyY7m3abZeovO/7D/WehVVlZ2qcqU=
k8s.io/code-generator v0.18.2/go.mod h1:+UHX5rSbxmR8kzS+FAv7um6dtYrZokQvjHpDSYRVkTc=
k8s.io/code-generator v0.19.7/go.mod h1:lwEq3YnLYb/7uVXLorOJfxg+cUu2oihFhHZ0n9NIla0=
k8s.io/code-generator v0.20.1/go.mod h1:UsqdF+VX4PU2g46NC2JRs4gc+IfrctnwHb76RNbWHJg=
@@ -2608,6 +2784,8 @@ k8s.io/component-base v0.24.2 h1:kwpQdoSfbcH+8MPN4tALtajLDfSfYxBDYlXobNWI6OU=
k8s.io/component-base v0.24.2/go.mod h1:ucHwW76dajvQ9B7+zecZAP3BVqvrHoOxm8olHEg0nmM=
k8s.io/component-base v0.25.3 h1:UrsxciGdrCY03ULT1h/S/gXFCOPnLhUVwSyx+hM/zq4=
k8s.io/component-base v0.25.3/go.mod h1:WYoS8L+IlTZgU7rhAl5Ctpw0WdMxDfCC5dkxcEFa/TI=
+k8s.io/component-base v0.26.2 h1:IfWgCGUDzrD6wLLgXEstJKYZKAFS2kO+rBRi0p3LqcI=
+k8s.io/component-base v0.26.2/go.mod h1:DxbuIe9M3IZPRxPIzhch2m1eT7uFrSBJUBuVCQEBivs=
k8s.io/component-helpers v0.20.6/go.mod h1:d4rFhZS/wxrZCxRiJJiWf1mVGVeMB5/ey3Yv8/rOp78=
k8s.io/component-helpers v0.21.0/go.mod h1:tezqefP7lxfvJyR+0a+6QtVrkZ/wIkyMLK4WcQ3Cj8U=
k8s.io/component-helpers v0.24.0/go.mod h1:Q2SlLm4h6g6lPTC9GMMfzdywfLSvJT2f1hOnnjaWD8c=
@@ -2636,6 +2814,8 @@ k8s.io/klog/v2 v2.60.1 h1:VW25q3bZx9uE3vvdL6M8ezOX79vA2Aq1nEWLqNQclHc=
k8s.io/klog/v2 v2.60.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.80.1 h1:atnLQ121W371wYYFawwYx1aEY2eUfs4l3J72wtgAwV4=
k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
+k8s.io/klog/v2 v2.90.1 h1:m4bYOKall2MmOiRaR1J+We67Do7vm9KiQVlT96lnHUw=
+k8s.io/klog/v2 v2.90.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kube-openapi v0.0.0-20200121204235-bf4fb3bd569c/go.mod h1:GRQhZsXIAJ1xR0C9bd8UpWHZ5plfAS9fzPjJuQ6JL3E=
k8s.io/kube-openapi v0.0.0-20200805222855-6aeccd4b50c6/go.mod h1:UuqjUnNftUyPE5H64/qeyjQoUZhGpeFDVdxjTeEVN2o=
k8s.io/kube-openapi v0.0.0-20201113171705-d219536bb9fd/go.mod h1:WOJ3KddDSol4tAGcJo0Tvi+dK12EcqSLqcWsryKMpfM=
@@ -2657,6 +2837,8 @@ k8s.io/kubectl v0.24.1 h1:gxcjHrnwntV1c+G/BHWVv4Mtk8CQJ0WTraElLBG+ddk=
k8s.io/kubectl v0.24.1/go.mod h1:NzFqQ50B004fHYWOfhHTrAm4TY6oGF5FAAL13LEaeUI=
k8s.io/kubectl v0.25.3 h1:HnWJziEtmsm4JaJiKT33kG0kadx68MXxUE8UEbXnN4U=
k8s.io/kubectl v0.25.3/go.mod h1:glU7PiVj/R6Ud4A9FJdTcJjyzOtCJyc0eO7Mrbh3jlI=
+k8s.io/kubectl v0.26.2 h1:SMPB4j48eVFxsYluBq3VLyqXtE6b72YnszkbTAtFye4=
+k8s.io/kubectl v0.26.2/go.mod h1:KYWOXSwp2BrDn3kPeoU/uKzKtdqvhK1dgZGd0+no4cM=
k8s.io/metrics v0.20.6/go.mod h1:d+OAIaXutom9kGWcBit/M8OkDpIzBKTsm47+KcUt7VI=
k8s.io/metrics v0.21.0/go.mod h1:L3Ji9EGPP1YBbfm9sPfEXSpnj8i24bfQbAFAsW0NueQ=
k8s.io/metrics v0.24.0/go.mod h1:jrLlFGdKl3X+szubOXPG0Lf2aVxuV3QJcbsgVRAM6fI=
@@ -2676,10 +2858,14 @@ k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19V
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20221012122500-cfd413dd9e85 h1:cTdVh7LYu82xeClmfzGtgyspNh6UxpwLWGi8R4sspNo=
k8s.io/utils v0.0.0-20221012122500-cfd413dd9e85/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20230711102312-30195339c3c7 h1:ZgnF1KZsYxWIifwSNZFZgNtWE89WI5yiP5WwlfDoIyc=
+k8s.io/utils v0.0.0-20230711102312-30195339c3c7/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
oras.land/oras-go v1.1.0 h1:tfWM1RT7PzUwWphqHU6ptPU3ZhwVnSw/9nEGf519rYg=
oras.land/oras-go v1.1.0/go.mod h1:1A7vR/0KknT2UkJVWh+xMi95I/AhK8ZrxrnUSmXN0bQ=
oras.land/oras-go v1.2.0 h1:yoKosVIbsPoFMqAIFHTnrmOuafHal+J/r+I5bdbVWu4=
oras.land/oras-go v1.2.0/go.mod h1:pFNs7oHp2dYsYMSS82HaX5l4mpnGO7hbpPN6EWH2ltc=
+oras.land/oras-go v1.2.2 h1:0E9tOHUfrNH7TCDk5KU0jVBEzCqbfdyuVfGmJ7ZeRPE=
+oras.land/oras-go v1.2.2/go.mod h1:Apa81sKoZPpP7CDciE006tSZ0x3Q3+dOoBcMZ/aNxvw=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
@@ -2695,6 +2881,8 @@ sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30 h1:dUk62HQ3ZFhD4
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.30/go.mod h1:fEO7lRTdivWO2qYVCVG7dEADOMo/MLDCVr8So2g88Uw=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.33 h1:LYqFq+6Cj2D0gFfrJvL7iElD4ET6ir3VDdhDdTK7rgc=
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.33/go.mod h1:soWkSNf2tZC7aMibXEqVhCd73GOY5fJikn8qbdzemB0=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.35 h1:+xBL5uTc+BkPBwmMi3vYfUJjq+N3K+H6PXeETwf5cPI=
+sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.35/go.mod h1:WxjusMwXlKzfAs4p9km6XJRndVt2FROgMVCE4cdohFo=
sigs.k8s.io/controller-runtime v0.8.0/go.mod h1:v9Lbj5oX443uR7GXYY46E0EE2o7k2YxQ58GxVNeXSW4=
sigs.k8s.io/controller-runtime v0.9.0/go.mod h1:TgkfvrhhEw3PlI0BRL/5xM+89y3/yc0ZDfdbTl84si8=
sigs.k8s.io/controller-runtime v0.9.2 h1:MnCAsopQno6+hI9SgJHKddzXpmv2wtouZz6931Eax+Q=
@@ -2709,6 +2897,8 @@ sigs.k8s.io/controller-runtime v0.12.2 h1:nqV02cvhbAj7tbt21bpPpTByrXGn2INHRsi39l
sigs.k8s.io/controller-runtime v0.12.2/go.mod h1:qKsk4WE6zW2Hfj0G4v10EnNB2jMG1C+NTb8h+DwCoU0=
sigs.k8s.io/controller-runtime v0.13.0 h1:iqa5RNciy7ADWnIc8QxCbOX5FEKVR3uxVxKHRMc2WIQ=
sigs.k8s.io/controller-runtime v0.13.0/go.mod h1:Zbz+el8Yg31jubvAEyglRZGdLAjplZl+PgtYNI6WNTI=
+sigs.k8s.io/controller-runtime v0.14.5 h1:6xaWFqzT5KuAQ9ufgUaj1G/+C4Y1GRkhrxl+BJ9i+5s=
+sigs.k8s.io/controller-runtime v0.14.5/go.mod h1:WqIdsAY6JBsjfc/CqO0CORmNtoCtE4S6qbPc9s68h+0=
sigs.k8s.io/controller-tools v0.4.1/go.mod h1:G9rHdZMVlBDocIxGkK3jHLWqcTMNvveypYJwrvYKjWU=
sigs.k8s.io/controller-tools v0.6.0 h1:o2Fm1K7CmIp8OVaBtXsWB/ssBAzyoKZPPAGR3VuxaKs=
sigs.k8s.io/controller-tools v0.6.0/go.mod h1:baRMVPrctU77F+rfAuH2uPqW93k6yQnZA2dhUOr7ihc=
@@ -2723,6 +2913,8 @@ sigs.k8s.io/controller-tools v0.9.2 h1:AkTE3QAdz9LS4iD3EJvHyYxBkg/g9fTbgiYsrcsFC
sigs.k8s.io/controller-tools v0.9.2/go.mod h1:NUkn8FTV3Sad3wWpSK7dt/145qfuQ8CKJV6j4jHC5rM=
sigs.k8s.io/controller-tools v0.10.0 h1:0L5DTDTFB67jm9DkfrONgTGmfc/zYow0ZaHyppizU2U=
sigs.k8s.io/controller-tools v0.10.0/go.mod h1:uvr0EW6IsprfB0jpQq6evtKy+hHyHCXNfdWI5ONPx94=
+sigs.k8s.io/controller-tools v0.11.3 h1:T1xzLkog9saiyQSLz1XOImu4OcbdXWytc5cmYsBeBiE=
+sigs.k8s.io/controller-tools v0.11.3/go.mod h1:qcfX7jfcfYD/b7lAhvqAyTbt/px4GpvN88WKLFFv7p8=
sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6 h1:fD1pz4yfdADVNfFmcP2aBEtudwUQ1AlLnRBALr33v3s=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2 h1:kDi4JBNAsJWfz1aEXhO8Jg87JJaPNLh5tIzYHgStQ9Y=
sigs.k8s.io/json v0.0.0-20211208200746-9f7c6b3444d2/go.mod h1:B+TnT182UBxE84DiCz4CVE26eOSDAeYCpfDnC2kdKMY=
@@ -2746,6 +2938,8 @@ sigs.k8s.io/kubebuilder/v3 v3.6.0 h1:S0J4ST871SVG5oObPRfyJVjdoKbIUoQgTIUHDq/YtcQ
sigs.k8s.io/kubebuilder/v3 v3.6.0/go.mod h1:+s6WdvJjIpYRKO+idaeIK5JhbjrZybxh9+K6jK9/Yyc=
sigs.k8s.io/kubebuilder/v3 v3.7.1-0.20221011212440-eff842a46496 h1:IG01Y1Akf8BXvFUHkzrA9jG9GsW4owIzCluWT/KlGZE=
sigs.k8s.io/kubebuilder/v3 v3.7.1-0.20221011212440-eff842a46496/go.mod h1:mKDy5CSm+qbGaDtMA0nZOxbNZktv7z26x29Aoe6Q1OI=
+sigs.k8s.io/kubebuilder/v3 v3.9.1 h1:9JNKRg9GzlLBYwYRx1nQlwha8+Pd9gPyat1lj7T+jZw=
+sigs.k8s.io/kubebuilder/v3 v3.9.1/go.mod h1:Z4boifT/XHIZTVEAIZaPTXqjhuK8Msx2iPYJy8ic6vg=
sigs.k8s.io/kustomize v2.0.3+incompatible h1:JUufWFNlI44MdtnjUqVnvh29rR37PQFzPbLXqhyOyX0=
sigs.k8s.io/kustomize v2.0.3+incompatible/go.mod h1:MkjgH3RdOWrievjo6c9T245dYlB5QeXV4WCbnt/PEpU=
sigs.k8s.io/kustomize/api v0.8.5 h1:bfCXGXDAbFbb/Jv5AhMj2BB8a5VAJuuQ5/KU69WtDjQ=
diff --git a/operator/.bingo/promtool.mod b/operator/.bingo/promtool.mod
index f6c02d00e979d..27f16ae12a12c 100644
--- a/operator/.bingo/promtool.mod
+++ b/operator/.bingo/promtool.mod
@@ -4,7 +4,7 @@ go 1.20
replace k8s.io/klog => github.com/simonpasquier/klog-gokit v0.3.0
-replace k8s.io/klog/v2 => github.com/simonpasquier/klog-gokit/v3 v3.0.0
+replace k8s.io/klog/v2 => github.com/simonpasquier/klog-gokit/v3 v3.3.0
exclude github.com/linode/linodego v1.0.0
@@ -12,4 +12,4 @@ exclude github.com/grpc-ecosystem/grpc-gateway v1.14.7
exclude google.golang.org/api v0.30.0
-require github.com/prometheus/prometheus v0.42.0 // cmd/promtool
+require github.com/prometheus/prometheus v0.47.1 // cmd/promtool
diff --git a/operator/.bingo/promtool.sum b/operator/.bingo/promtool.sum
index ac6c839f937a9..dd24437491736 100644
--- a/operator/.bingo/promtool.sum
+++ b/operator/.bingo/promtool.sum
@@ -56,6 +56,8 @@ cloud.google.com/go/compute v1.7.0 h1:v/k9Eueb8aAJ0vZuxKMrgm6kPhCLZU9HxFU+AFDs9U
cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U=
cloud.google.com/go/compute v1.14.0 h1:hfm2+FfxVmnRlh6LpB7cg1ZNU+5edAHmW679JePztk0=
cloud.google.com/go/compute v1.14.0/go.mod h1:YfLtxrj9sU4Yxv+sXzZkyPjEyPBZfXHUvjxega5vAdo=
+cloud.google.com/go/compute v1.22.0 h1:cB8R6FtUtT1TYGl5R3xuxnW6OUIc/DrT2aiR16TTG7Y=
+cloud.google.com/go/compute v1.22.0/go.mod h1:4tCnrn48xsqlwSAiLf1HXMQk8CONslYbdiEZc9FEIbM=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
cloud.google.com/go/datacatalog v1.5.0/go.mod h1:M7GPLNQeLfWqeIm3iuiruhPzkt65+Bx8dAKvScX8jvs=
@@ -121,18 +123,28 @@ cloud.google.com/go/workflows v1.6.0/go.mod h1:6t9F5h/unJz41YqfBmqSASJSXccBLtD1V
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible h1:HzKLt3kIwMm4KeJYTdx9EbjRYTySD/t8i1Ee/W5EGXw=
github.com/Azure/azure-sdk-for-go v65.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0 h1:8q4SaHjFsClSvuVne0ID/5Ka8u3fcIHyqkLjcFpNRHQ=
+github.com/Azure/azure-sdk-for-go/sdk/azcore v1.7.0/go.mod h1:bjGvMhVMb+EEm3VRNQawDMUyMMjo+S5ewNjflkep/0Q=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0 h1:vcYCAze6p19qBW7MhZybIsqD8sMV8js0NyQM8JDnVtg=
+github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.3.0/go.mod h1:OQeznEEkTZ9OrhHJoDD8ZDq51FHgXjqtP9z6bEwBq9U=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0 h1:sXr+ck84g/ZlZUOZiNELInmMgOsuGwdjjVkEIde0OtY=
+github.com/Azure/azure-sdk-for-go/sdk/internal v1.3.0/go.mod h1:okt5dMMTOFjX/aovMlrjvvXoPMBVSPzk9185BT0+eZM=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs=
github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.11.27/go.mod h1:7l8ybrIdUmGqZMTD0sRtAr8NvbHjfofbf8RSP2q7w7U=
github.com/Azure/go-autorest/autorest v0.11.28 h1:ndAExarwr5Y+GaHE6VCaY1kyS/HwwGGyuimVhWsHOEM=
github.com/Azure/go-autorest/autorest v0.11.28/go.mod h1:MrkzG3Y3AH668QyF9KRk5neJnGgmhQ6krbhR8Q5eMvA=
+github.com/Azure/go-autorest/autorest v0.11.29 h1:I4+HL/JDvErx2LjyzaVxllw2lRDB5/BT2Bm4g20iqYw=
+github.com/Azure/go-autorest/autorest v0.11.29/go.mod h1:ZtEzC4Jy2JDrZLxvWs8LrBWEBycl1hbT1eknI8MtfAs=
github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
github.com/Azure/go-autorest/autorest/adal v0.9.20/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
github.com/Azure/go-autorest/autorest/adal v0.9.21 h1:jjQnVFXPfekaqb8vIsv2G1lxshoW+oGv4MDlhRtnYZk=
github.com/Azure/go-autorest/autorest/adal v0.9.21/go.mod h1:zua7mBUaCc5YnSLKYgGJR/w5ePdMDA6H56upLsHzA9U=
github.com/Azure/go-autorest/autorest/adal v0.9.22 h1:/GblQdIudfEM3AWWZ0mrYJQSd7JS4S/Mbzh6F0ov0Xc=
github.com/Azure/go-autorest/autorest/adal v0.9.22/go.mod h1:XuAbAEUv2Tta//+voMI038TrJBqjKam0me7qR+L8Cmk=
+github.com/Azure/go-autorest/autorest/adal v0.9.23 h1:Yepx8CvFxwNKpH6ja7RZ+sKX+DWYNldbLiALMC3BTz8=
+github.com/Azure/go-autorest/autorest/adal v0.9.23/go.mod h1:5pcMqFkdPhviJdlEy3kC/v1ZLnQl0MH6XA5YCcMhy4c=
github.com/Azure/go-autorest/autorest/date v0.3.0 h1:7gUk1U5M/CQbp9WoqinNzJar+8KY+LPI6wiWrP/myHw=
github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
@@ -145,12 +157,16 @@ github.com/Azure/go-autorest/logger v0.2.1 h1:IG7i4p/mDa2Ce4TRyAO8IHnVhAVF3RFU+Z
github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUMfuitfgcfuo=
github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
+github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0 h1:OBhqkivkhkMqLPymWEppkm7vgPQY2XsHoEkaMQ0AdZY=
+github.com/AzureAD/microsoft-authentication-library-for-go v1.0.0/go.mod h1:kgDmCTgBzIEPFElEF+FK0SdjAor06dRq2Go927dnQ6o=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/Knetic/govaluate v3.0.1-0.20171022003610-9aa49832a739+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/Microsoft/go-winio v0.5.1 h1:aPJp2QD7OOrhO5tQXqQoGSJc+DjDtWTGLOmNyAm6FgY=
github.com/Microsoft/go-winio v0.5.1/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
+github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1 h1:WEQqlqaGbrPkxLJWfBwQmfEAE1Z7ONdDLqrN38tNFfI=
@@ -161,6 +177,8 @@ github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWX
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
github.com/VividCortex/gohistogram v1.0.0/go.mod h1:Pf5mBqqDxYaXu3hDrrU+w6nw50o/4+TcAqDqk/vUH7g=
github.com/afex/hystrix-go v0.0.0-20180502004556-fa1af6a1f4f5/go.mod h1:SkGFH1ia65gfNATL8TAiHDNxPzPdmEL5uirI2Uyuz6c=
+github.com/alecthomas/kingpin/v2 v2.3.2 h1:H0aULhgmSzN8xQ3nX1uxtdlTHYoPLu5AhHxWrKI6ocU=
+github.com/alecthomas/kingpin/v2 v2.3.2/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
@@ -176,6 +194,8 @@ github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hC
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-metrics v0.3.10 h1:FR+drcQStOe+32sYyJYyZ7FIdgoGGBnwLl+flodp8Uo=
github.com/armon/go-metrics v0.3.10/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
+github.com/armon/go-metrics v0.4.1 h1:hR91U9KYmb6bLBYLQjyM+3j+rcd/UhE+G78SFnF8gJA=
+github.com/armon/go-metrics v0.4.1/go.mod h1:E6amYzXo6aW1tqzoZGT755KkbgrJsSdpwZ+3JqfkOG4=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
@@ -184,6 +204,8 @@ github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:l
github.com/asaskevich/govalidator v0.0.0-20200907205600-7a23bdc65eef/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d h1:Byv0BzEl3/e6D5CLfI0j/7hiIEtvGVFPCZ7Ei2oq8iQ=
github.com/asaskevich/govalidator v0.0.0-20210307081110-f21760c49a8d/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
+github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2 h1:DklsrG3dyBCFEj5IhUbnKptjxatkF07cF2ak3yi77so=
+github.com/asaskevich/govalidator v0.0.0-20230301143203-a9d515a09cc2/go.mod h1:WaHUgvxTVq04UNunO+XhnAqY/wQc+bxr74GqbsZ/Jqw=
github.com/aws/aws-lambda-go v1.13.3/go.mod h1:4UKl9IzQMoD+QF79YdCuzCwp8VbmG4VAQwij/eHl5CU=
github.com/aws/aws-sdk-go v1.27.0/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.38.35/go.mod h1:hcU610XS61/+aQV88ixoOzUoG7v3b31pl2zKMmprdro=
@@ -193,12 +215,15 @@ github.com/aws/aws-sdk-go v1.44.102 h1:6tUCTGL2UDbFZae1TLGk8vTgeXuzkb8KbAe2FiAeK
github.com/aws/aws-sdk-go v1.44.102/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
github.com/aws/aws-sdk-go v1.44.187 h1:D5CsRomPnlwDHJCanL2mtaLIcbhjiWxNh5j8zvaWdJA=
github.com/aws/aws-sdk-go v1.44.187/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
+github.com/aws/aws-sdk-go v1.44.302 h1:ST3ko6GrJKn3Xi+nAvxjG3uk/V1pW8KC52WLeIxqqNk=
+github.com/aws/aws-sdk-go v1.44.302/go.mod h1:aVsgQcEevwlmQ7qHE9I3h+dtQgpqhFB+i8Phjh7fkwI=
github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZwxzkQq9wy+g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/buger/jsonparser v1.1.1/go.mod h1:6RYKKt7H4d4+iWqouImQ9R2FZql3VbhNgx27UK13J/0=
github.com/casbin/casbin/v2 v2.1.2/go.mod h1:YcPU1XXisHhLzuxH9coDNf2FbKpjGlbCg3n9yuLkIJQ=
github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff/v4 v4.1.2/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
@@ -233,12 +258,16 @@ github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWH
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20220314180256-7f1daf1720fc h1:PYXxkRUBGUMa5xgMVMDl62vEklZvKpVaxQeN9ie7Hfk=
github.com/cncf/xds/go v0.0.0-20220314180256-7f1daf1720fc/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4 h1:/inchEIKaYC1Akx+H+gqO04wryn5h75LSazbRlnya1k=
+github.com/cncf/xds/go v0.0.0-20230607035331-e9ce68804cb4/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20190809214429-80d97fb3cbaa/go.mod h1:zn76sxSg3SzpJ0PPJaLDCu+Bu0Lg3sKTORVIj19EIF8=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd/v22 v22.4.0 h1:y9YHcjnjynCd/DVbg5j9L/33jQM3MxJlbj/zWskzfGU=
github.com/coreos/go-systemd/v22 v22.4.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
+github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs=
+github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
@@ -247,6 +276,8 @@ github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
+github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dennwc/varint v1.0.0 h1:kGNFFSSw8ToIy3obO/kKr8U9GZYUAxQEVuix4zfDWzE=
github.com/dennwc/varint v1.0.0/go.mod h1:hnItb35rvZvJrbTALZtY/iQfDs48JKRG1RPpgziApxA=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
@@ -256,16 +287,22 @@ github.com/digitalocean/godo v1.84.1 h1:VgPsuxhrO9pUygvij6qOhqXfAkxAsDZYRpmjSDME
github.com/digitalocean/godo v1.84.1/go.mod h1:BPCqvwbjbGqxuUnIKB4EvS/AX7IDnNmt5fwvIkWo+ew=
github.com/digitalocean/godo v1.95.0 h1:S48/byPKui7RHZc1wYEPfRvkcEvToADNb5I3guu95xg=
github.com/digitalocean/godo v1.95.0/go.mod h1:NRpFznZFvhHjBoqZAaOD3khVzsJ3EibzKqFL4R60dmA=
+github.com/digitalocean/godo v1.99.0 h1:gUHO7n9bDaZFWvbzOum4bXE0/09ZuYA9yA8idQHX57E=
+github.com/digitalocean/godo v1.99.0/go.mod h1:SsS2oXo2rznfM/nORlZ/6JaUJZFhmKTib1YhopUc8NA=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
github.com/docker/distribution v2.7.1+incompatible h1:a5mlkVzth6W5A4fOsS3D2EO5BUmsJpcB+cRlLU7cSug=
github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8=
+github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.17+incompatible h1:JYCuMrWaVNophQTOrMMoSwudOVEfcegoZZrleKc1xwE=
github.com/docker/docker v20.10.18+incompatible h1:SN84VYXTBNGn92T/QwIRPlum9zfemfitN7pbsp26WSc=
github.com/docker/docker v20.10.18+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v20.10.23+incompatible h1:1ZQUUYAdh+oylOT85aA2ZcfRp22jmLhoaEcVEfK8dyA=
github.com/docker/docker v20.10.23+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v24.0.4+incompatible h1:s/LVDftw9hjblvqIeTiGYXBCD95nOEEl7qRsRrIOuQI=
+github.com/docker/docker v24.0.4+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
@@ -286,6 +323,8 @@ github.com/emicklei/go-restful/v3 v3.8.0 h1:eCZ8ulSerjdAiaNpF7GxXIE7ZCMo1moN1qX+
github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
+github.com/emicklei/go-restful/v3 v3.10.2 h1:hIovbnmBTLjHXkqEBUz3HGpXZdM7ZrE9fJIZIqlJLqE=
+github.com/emicklei/go-restful/v3 v3.10.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -298,6 +337,8 @@ github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.
github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
github.com/envoyproxy/go-control-plane v0.10.3 h1:xdCVXxEe0Y3FQith+0cj2irwZudqGYvecuLB1HtdexY=
github.com/envoyproxy/go-control-plane v0.10.3/go.mod h1:fJJn/j26vwOu972OllsvAgJJM//w9BV6Fxbg2LuVd34=
+github.com/envoyproxy/go-control-plane v0.11.1 h1:wSUXTlLfiAQRWs2F+p+EKOY9rUyis1MyGqJ2DIk5HpM=
+github.com/envoyproxy/go-control-plane v0.11.1/go.mod h1:uhMcXKCQMEJHiAb0w+YGefQLaTEw+YhGluxZkrTmD0g=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/envoyproxy/protoc-gen-validate v0.6.7 h1:qcZcULcd/abmQg6dwigimCNEyi4gg31M/xaciQlDml8=
github.com/envoyproxy/protoc-gen-validate v0.6.7/go.mod h1:dyJXwwfPK2VSqiB9Klm1J6romD608Ba7Hij42vrOBCo=
@@ -305,13 +346,19 @@ github.com/envoyproxy/protoc-gen-validate v0.6.8 h1:B2cR/FAaiMtYDHv5BQpaqtkjGuWQ
github.com/envoyproxy/protoc-gen-validate v0.6.8/go.mod h1:0ZMblUx0cxNoWRswEEXoj9kHBmqX8pxGweMiyIAfR6A=
github.com/envoyproxy/protoc-gen-validate v0.9.1 h1:PS7VIOgmSVhWUEeZwTe7z7zouA22Cr590PzXKbZHOVY=
github.com/envoyproxy/protoc-gen-validate v0.9.1/go.mod h1:OKNgG7TCp5pF4d6XftA0++PMirau2/yoOwVac3AbF2w=
+github.com/envoyproxy/protoc-gen-validate v1.0.2 h1:QkIBuU5k+x7/QXPvPPnWXWlCdaBFApVqftFV6k087DA=
+github.com/envoyproxy/protoc-gen-validate v1.0.2/go.mod h1:GpiZQP3dDbg4JouG/NNS7QWXpgx6x8QiMKdmN72jogE=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.10.0/go.mod h1:ELkj/draVOlAH/xkhN6mQ50Qd0MPOk5AAr3maGEBuJM=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
+github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
+github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
+github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk=
github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/flowstack/go-jsonschema v0.1.1/go.mod h1:yL7fNggx1o8rm9RlgXv7hTBWxdBM0rVwpMwimd3F3N0=
github.com/franela/goblin v0.0.0-20200105215937-c9ffbefa60db/go.mod h1:7dvUGVsVBjqR7JHJk0brhHOZYGmfBYOrK0ZhYMEtBr4=
github.com/franela/goreq v0.0.0-20171204163338-bcd34c9993f8/go.mod h1:ZhphrRTfi2rbfLwlschooIH4+wKKDR4Pdxhh+TRoA20=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
@@ -337,10 +384,14 @@ github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
+github.com/go-logfmt/logfmt v0.6.0 h1:wGYYu3uicYdqXVgoYbvnkrPVXkuLM1p1ifugDMEdRi4=
+github.com/go-logfmt/logfmt v0.6.0/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.2.3 h1:2DntVwHkVopvECVRSlL5PSo9eG+cAkDCuckLubN+rq0=
github.com/go-logr/logr v1.2.3/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
+github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
+github.com/go-logr/logr v1.2.4/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/analysis v0.21.2 h1:hXFrOYFHUAMQdu6zwAiKKJHJQ8kqZs1ux/ru1P1wLJU=
@@ -353,15 +404,22 @@ github.com/go-openapi/errors v0.20.2 h1:dxy7PGTqEh94zj2E3h1cUmQQWiM1+aeCROfAr02E
github.com/go-openapi/errors v0.20.2/go.mod h1:cM//ZKUKyO06HSwqAelJ5NsEMMcpa6VpXe8DOa1Mi1M=
github.com/go-openapi/errors v0.20.3 h1:rz6kiC84sqNQoqrtulzaL/VERgkoCyB6WdEkc2ujzUc=
github.com/go-openapi/errors v0.20.3/go.mod h1:Z3FlZ4I8jEGxjUK+bugx3on2mIAk4txuAOhlsB1FSgk=
+github.com/go-openapi/errors v0.20.4 h1:unTcVm6PispJsMECE3zWgvG4xTiKda1LIR5rCRWLG6M=
+github.com/go-openapi/errors v0.20.4/go.mod h1:Z3FlZ4I8jEGxjUK+bugx3on2mIAk4txuAOhlsB1FSgk=
github.com/go-openapi/jsonpointer v0.19.3/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
github.com/go-openapi/jsonpointer v0.19.5 h1:gZr+CIYByUqjcgeLXnQu2gHYQC9o73G2XUeOFYEICuY=
github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34VGT37dQOVbmoatg=
+github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
+github.com/go-openapi/jsonpointer v0.20.0 h1:ESKJdU9ASRfaPNOPRx12IUyA1vn3R9GiE3KYD14BXdQ=
+github.com/go-openapi/jsonpointer v0.20.0/go.mod h1:6PGzBjjIIumbLYysB73Klnms1mwnU4G3YHOECG3CedA=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/jsonreference v0.19.6 h1:UBIxjkht+AWIgYzCDSv2GN+E/togfwXUJFRTWhl2Jjs=
github.com/go-openapi/jsonreference v0.19.6/go.mod h1:diGHMEHg2IqXZGKxqyvWdfWU/aim5Dprw5bqpKkTvns=
github.com/go-openapi/jsonreference v0.20.0 h1:MYlu0sBgChmCfJxxUKZ8g1cPWFOB37YSZqewK7OKeyA=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
+github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
+github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/loads v0.21.1 h1:Wb3nVZpdEzDTcly8S4HMkey6fjARRzb7iEaySimlDW0=
github.com/go-openapi/loads v0.21.1/go.mod h1:/DtAMXXneXFjbQMGEtbamCZb+4x7eGwkvZCvBmwUG+g=
github.com/go-openapi/loads v0.21.2 h1:r2a/xFIYeZ4Qd2TnGpWDIQNcP80dIaZgf704za8enro=
@@ -372,11 +430,15 @@ github.com/go-openapi/spec v0.20.4/go.mod h1:faYFR1CvsJZ0mNsmsphTMSoRrNV3TEDoAM7
github.com/go-openapi/spec v0.20.6/go.mod h1:2OpW+JddWPrpXSCIX8eOx7lZ5iyuWj3RYR6VaaBKcWA=
github.com/go-openapi/spec v0.20.7 h1:1Rlu/ZrOCCob0n+JKKJAWhNWMPW8bOZRg8FJaY+0SKI=
github.com/go-openapi/spec v0.20.7/go.mod h1:2OpW+JddWPrpXSCIX8eOx7lZ5iyuWj3RYR6VaaBKcWA=
+github.com/go-openapi/spec v0.20.9 h1:xnlYNQAwKd2VQRRfwTEI0DcK+2cbuvI/0c7jx3gA8/8=
+github.com/go-openapi/spec v0.20.9/go.mod h1:2OpW+JddWPrpXSCIX8eOx7lZ5iyuWj3RYR6VaaBKcWA=
github.com/go-openapi/strfmt v0.21.0/go.mod h1:ZRQ409bWMj+SOgXofQAGTIo2Ebu72Gs+WaRADcS5iNg=
github.com/go-openapi/strfmt v0.21.1/go.mod h1:I/XVKeLc5+MM5oPNN7P6urMOpuLXEcNrCX/rPGuWb0k=
github.com/go-openapi/strfmt v0.21.2/go.mod h1:I/XVKeLc5+MM5oPNN7P6urMOpuLXEcNrCX/rPGuWb0k=
github.com/go-openapi/strfmt v0.21.3 h1:xwhj5X6CjXEZZHMWy1zKJxvW9AfHC9pkyUjLvHtKG7o=
github.com/go-openapi/strfmt v0.21.3/go.mod h1:k+RzNO0Da+k3FrrynSNN8F7n/peCmQQqbbXjtDfvmGg=
+github.com/go-openapi/strfmt v0.21.7 h1:rspiXgNWgeUzhjo1YU01do6qsahtJNByjLVbPLNHb8k=
+github.com/go-openapi/strfmt v0.21.7/go.mod h1:adeGTkxE44sPyLk0JV235VQAO/ZXUr8KAzYjclFs3ew=
github.com/go-openapi/swag v0.19.5/go.mod h1:POnQmlKehdgb5mhVOsnJFsivZCEZ/vjK9gh66Z9tfKk=
github.com/go-openapi/swag v0.19.14/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.19.15/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
@@ -384,16 +446,22 @@ github.com/go-openapi/swag v0.21.1 h1:wm0rhTb5z7qpJRHBdPOMuY4QjVUMbF6/kwoYeRAOrK
github.com/go-openapi/swag v0.21.1/go.mod h1:QYRuS/SOXUCsnplDa677K7+DxSOj6IPNl/eQntq43wQ=
github.com/go-openapi/swag v0.22.3 h1:yMBqmnQ0gyZvEb/+KzuWZOXgllrXT4SADYbvDaXHv/g=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
+github.com/go-openapi/swag v0.22.4 h1:QLMzNJnMGPRNDCbySlcj1x01tzU8/9LTTL9hZZZogBU=
+github.com/go-openapi/swag v0.22.4/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/validate v0.21.0 h1:+Wqk39yKOhfpLqNLEC0/eViCkzM5FVXVqrvt526+wcI=
github.com/go-openapi/validate v0.21.0/go.mod h1:rjnrwK57VJ7A8xqfpAOEKRH8yQSGUriMu5/zuPSQ1hg=
github.com/go-openapi/validate v0.22.0 h1:b0QecH6VslW/TxtpKgzpO1SNG7GU2FsaqKdP1E2T50Y=
github.com/go-openapi/validate v0.22.0/go.mod h1:rjnrwK57VJ7A8xqfpAOEKRH8yQSGUriMu5/zuPSQ1hg=
+github.com/go-openapi/validate v0.22.1 h1:G+c2ub6q47kfX1sOBLwIQwzBVt8qmOAARyo/9Fqs9NU=
+github.com/go-openapi/validate v0.22.1/go.mod h1:rjnrwK57VJ7A8xqfpAOEKRH8yQSGUriMu5/zuPSQ1hg=
github.com/go-playground/assert/v2 v2.0.1/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.13.0/go.mod h1:taPMhCMXrRLJO55olJkUXHZBHCxTMfnGwq/HNwmWNS8=
github.com/go-playground/universal-translator v0.17.0/go.mod h1:UkSxE5sNxxRwHyU+Scu5vgOQjsIJAF8j9muTVoKLVtA=
github.com/go-playground/validator/v10 v10.4.1/go.mod h1:nlOn6nFhuKACm19sB/8EGNn9GlaMV7XkbRSipzJ0Ii4=
github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48 h1:JVrqSeQfdhYRFk24TvhTZWU0q8lfCojxZQFi3Ou7+uY=
github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48/go.mod h1:dZGr0i9PLlaaTD4H/hoZIDjQ+r6xq8mgbRzHZf7f2J8=
+github.com/go-resty/resty/v2 v2.7.0 h1:me+K9p3uhSmXtrBZ4k9jcEAfJmuC8IivWHwaLZwPrFY=
+github.com/go-resty/resty/v2 v2.7.0/go.mod h1:9PWDzw47qPphMRFfhsyk0NnSgvluHcljSMVIq3w7q0I=
github.com/go-sql-driver/mysql v1.4.0/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-stack/stack v1.8.1/go.mod h1:dcoOX6HbPZSZptuspn9bctJ+N/CnF5gGygcUP3XYfe4=
@@ -436,6 +504,8 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69
github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang-jwt/jwt/v4 v4.2.0 h1:besgBTC8w8HjP6NzQdxwKH9Z5oQMZ24ThTrHp3cZ8eU=
github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
+github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
@@ -471,6 +541,8 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/golang/protobuf v1.5.1/go.mod h1:DopwsBzvsk0Fs44TXzsVbJyPhcCPeIwnvohx4u74HPM=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
+github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
+github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
@@ -481,6 +553,8 @@ github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54=
github.com/google/gnostic v0.5.7-v3refs/go.mod h1:73MKFl6jIHelAJNaBGFzt3SPtZULs9dYrGFt8OiIsHQ=
+github.com/google/gnostic v0.6.9 h1:ZK/5VhkoX835RikCHpSUJV9a+S3e1zLh59YnyWeBW+0=
+github.com/google/gnostic v0.6.9/go.mod h1:Nm8234We1lq6iB9OmlgNv3nH91XLLVZHCDayfA3xq+E=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
@@ -529,7 +603,11 @@ github.com/google/pprof v0.0.0-20220829040838-70bd9ae97f40 h1:ykKxL12NZd3JmWZnyq
github.com/google/pprof v0.0.0-20220829040838-70bd9ae97f40/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b h1:8htHrh2bw9c7Idkb7YNac+ZpTqLMjRpI+FWu51ltaQc=
github.com/google/pprof v0.0.0-20230111200839-76d1ae5aea2b/go.mod h1:dDKJzRmX4S37WGHujM7tX//fmj1uioxKzKxz3lo4HJo=
+github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8 h1:n6vlPhxsA+BW/XsS5+uqi7GyzaLa5MH7qlSLBZtRdiA=
+github.com/google/pprof v0.0.0-20230705174524-200ffdc848b8/go.mod h1:Jh3hGz2jkYak8qXPD19ryItVnUgpgeqzdkY/D0EaeuA=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/google/s2a-go v0.1.4 h1:1kZ/sQM3srePvKs3tXAvQzo66XfcReoqFpIpIccE7Oc=
+github.com/google/s2a-go v0.1.4/go.mod h1:Ej+mSEMGRnqRzjc7VtF+jdBwYG5fuJfiZ8ELkjEwM0A=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
@@ -540,6 +618,8 @@ github.com/googleapis/enterprise-certificate-proxy v0.1.0 h1:zO8WHNx/MYiAKJ3d5sp
github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
github.com/googleapis/enterprise-certificate-proxy v0.2.1 h1:RY7tHKZcRlk788d5WSo/e83gOyyy742E8GSs771ySpg=
github.com/googleapis/enterprise-certificate-proxy v0.2.1/go.mod h1:AwSRAtLfXpU5Nm3pW+v7rGDHp09LsPtGY9MduiEsR9k=
+github.com/googleapis/enterprise-certificate-proxy v0.2.5 h1:UR4rDjcgpgEnqpIEvkiqTYKBCKLNmlge2eVjoZfySzM=
+github.com/googleapis/enterprise-certificate-proxy v0.2.5/go.mod h1:RxW0N9901Cko1VOCW3SXCpWP+mlIEkk2tP7jnHy9a3w=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
@@ -552,6 +632,8 @@ github.com/googleapis/gax-go/v2 v2.5.1 h1:kBRZU0PSuI7PspsSb/ChWoVResUcwNVIdpB049
github.com/googleapis/gax-go/v2 v2.5.1/go.mod h1:h6B0KMMFNtI2ddbGJn3T3ZbwkeT6yqEF02fYlzkUCyo=
github.com/googleapis/gax-go/v2 v2.7.0 h1:IcsPKeInNvYi7eqSaDjiZqDDKu5rsmunY0Y1YupQSSQ=
github.com/googleapis/gax-go/v2 v2.7.0/go.mod h1:TEop28CZZQ2y+c0VxMUmu1lV+fQx57QpBWsYpwqHJx8=
+github.com/googleapis/gax-go/v2 v2.12.0 h1:A+gCJKdRfqXkr+BIRGtZLibNXf0m1f9E4HG56etFpas=
+github.com/googleapis/gax-go/v2 v2.12.0/go.mod h1:y+aIqrI5eb1YGMVJfuV3185Ts/D7qKpsEkdD5+I6QGU=
github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gophercloud/gophercloud v0.25.0 h1:C3Oae7y0fUVQGSsBrb3zliAjdX+riCSEh4lNMejFNI4=
@@ -559,6 +641,8 @@ github.com/gophercloud/gophercloud v1.0.0 h1:9nTGx0jizmHxDobe4mck89FyQHVyA3CaXLI
github.com/gophercloud/gophercloud v1.0.0/go.mod h1:Q8fZtyi5zZxPS/j9aj3sSxtvj41AdQMDwyo1myduD5c=
github.com/gophercloud/gophercloud v1.1.1 h1:MuGyqbSxiuVBqkPZ3+Nhbytk1xZxhmfCB2Rg1cJWFWM=
github.com/gophercloud/gophercloud v1.1.1/go.mod h1:aAVqcocTSXh2vYFZ1JTvx4EQmfgzxRcNupUfxZbBNDM=
+github.com/gophercloud/gophercloud v1.5.0 h1:cDN6XFCLKiiqvYpjQLq9AiM7RDRbIC9450WpPH+yvXo=
+github.com/gophercloud/gophercloud v1.5.0/go.mod h1:aAVqcocTSXh2vYFZ1JTvx4EQmfgzxRcNupUfxZbBNDM=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
@@ -585,13 +669,19 @@ github.com/hashicorp/consul/api v1.15.2 h1:3Q/pDqvJ7udgt/60QOOW/p/PeKioQN+ncYzzC
github.com/hashicorp/consul/api v1.15.2/go.mod h1:v6nvB10borjOuIwNRZYPZiHKrTM/AyrGtd0WVVodKM8=
github.com/hashicorp/consul/api v1.18.0 h1:R7PPNzTCeN6VuQNDwwhZWJvzCtGSrNpJqfb22h3yH9g=
github.com/hashicorp/consul/api v1.18.0/go.mod h1:owRRGJ9M5xReDC5nfT8FTJrNAPbT4NM6p/k+d03q2v4=
+github.com/hashicorp/consul/api v1.22.0 h1:ydEvDooB/A0c/xpsBd8GSt7P2/zYPBui4KrNip0xGjE=
+github.com/hashicorp/consul/api v1.22.0/go.mod h1:zHpYgZ7TeYqS6zaszjwSt128OwESRpnhU9aGa6ue3Eg=
github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/consul/sdk v0.11.0/go.mod h1:yPkX5Q6CsxTFMjQQDJwzeNmUUF5NUGGbrDsv9wTb8cw=
github.com/hashicorp/consul/sdk v0.13.0/go.mod h1:0hs/l5fOVhJy/VdcoaNqUSi2AUs95eF5WKtv+EYIQqE=
github.com/hashicorp/cronexpr v1.1.1 h1:NJZDd87hGXjoZBdvyCF9mX4DCq5Wy7+A/w+A7q0wn6c=
github.com/hashicorp/cronexpr v1.1.1/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
+github.com/hashicorp/cronexpr v1.1.2 h1:wG/ZYIKT+RT3QkOdgYc+xsKWVRgnxJ1OJtjjy84fJ9A=
+github.com/hashicorp/cronexpr v1.1.2/go.mod h1:P4wA0KBl9C5q2hABiMO7cp6jcIg96CDh1Efb3g1PWA4=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
+github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
@@ -602,6 +692,8 @@ github.com/hashicorp/go-hclog v0.14.1 h1:nQcJDQwIAGnmoUWp8ubocEX40cCml/17YkF6csQ
github.com/hashicorp/go-hclog v0.14.1/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v0.16.2 h1:K4ev2ib4LdQETX5cSZBG0DVLk1jwGqSPXBjdah3veNs=
github.com/hashicorp/go-hclog v0.16.2/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
+github.com/hashicorp/go-hclog v1.5.0 h1:bI2ocEMgcVlz55Oj1xZNBsVi900c7II+fWDyV9o+13c=
+github.com/hashicorp/go-hclog v1.5.0/go.mod h1:W4Qnvbt70Wk/zYJryRzDRU/4r0kIg0PVHBcfoyhpF5M=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.3.0 h1:8exGP7ego3OmkfksihtSouGMZ+hQrhxx+FVELeXpVPE=
github.com/hashicorp/go-immutable-radix v1.3.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
@@ -616,6 +708,8 @@ github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-retryablehttp v0.7.1 h1:sUiuQAnLlbvmExtFQs72iFW/HXeUn8Z1aJLQ4LJJbTQ=
github.com/hashicorp/go-retryablehttp v0.7.1/go.mod h1:vAew36LZh98gCBJNLH42IQ1ER/9wtLZZ8meHqQvEYWY=
+github.com/hashicorp/go-retryablehttp v0.7.4 h1:ZQgVdpTdAL7WpMIwLzCfbalOcSUdkDZnpUv3/+BxzFA=
+github.com/hashicorp/go-retryablehttp v0.7.4/go.mod h1:Jy/gPYAdjqffZ/yFGCFV2doI5wjtH1ewM9u8iYVjtX8=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
@@ -646,6 +740,8 @@ github.com/hashicorp/nomad/api v0.0.0-20220921012004-ddeeb1040edf h1:l/EZ57iRPNs
github.com/hashicorp/nomad/api v0.0.0-20220921012004-ddeeb1040edf/go.mod h1:Z0U0rpbh4Qlkgqu3iRDcfJBA+r3FgoeD1BfigmZhfzM=
github.com/hashicorp/nomad/api v0.0.0-20230124213148-69fd1a0e4bf7 h1:XOdd3JHyeQnBRxotBo9ibxBFiYGuYhQU25s/YeV2cTU=
github.com/hashicorp/nomad/api v0.0.0-20230124213148-69fd1a0e4bf7/go.mod h1:xYYd4dybIhRhhzDemKx7Ddt8CvCosgrEek8YM7/cF0A=
+github.com/hashicorp/nomad/api v0.0.0-20230718173136-3a687930bd3e h1:sr4lujmn9heD030xx/Pd4B/JSmvRhFzuotNXaaV0WLs=
+github.com/hashicorp/nomad/api v0.0.0-20230718173136-3a687930bd3e/go.mod h1:O23qLAZuCx4htdY9zBaO4cJPXgleSFEdq6D/sezGgYE=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hashicorp/serf v0.9.7 h1:hkdgbqizGQHuU5IPqYM1JdSMV8nKfpuOnZYXssk9muY=
github.com/hashicorp/serf v0.9.7/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
@@ -656,6 +752,8 @@ github.com/hetznercloud/hcloud-go v1.35.3 h1:WCmFAhLRooih2QHAsbCbEdpIHnshQQmrPqs
github.com/hetznercloud/hcloud-go v1.35.3/go.mod h1:mepQwR6va27S3UQthaEPGS86jtzSY9xWL1e9dyxXpgA=
github.com/hetznercloud/hcloud-go v1.39.0 h1:RUlzI458nGnPR6dlcZlrsGXYC1hQlFbKdm8tVtEQQB0=
github.com/hetznercloud/hcloud-go v1.39.0/go.mod h1:mepQwR6va27S3UQthaEPGS86jtzSY9xWL1e9dyxXpgA=
+github.com/hetznercloud/hcloud-go/v2 v2.0.0 h1:Sg1DJ+MAKvbYAqaBaq9tPbwXBS2ckPIaMtVdUjKu+4g=
+github.com/hetznercloud/hcloud-go/v2 v2.0.0/go.mod h1:4iUG2NG8b61IAwNx6UsMWQ6IfIf/i1RsG0BbsKAyR5Q=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
@@ -665,11 +763,15 @@ github.com/ianlancetaylor/demangle v0.0.0-20220319035150-800ac71e25c2/go.mod h1:
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4=
+github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d/go.mod h1:qj24IKcXYK6Iy9ceXlo3Tc+vtHo9lIhSX5JddghvEPo=
github.com/ionos-cloud/sdk-go/v6 v6.1.2 h1:es5R5sVmjHFrYNBbJfAeHF+16GheaJMyc63xWxIAec4=
github.com/ionos-cloud/sdk-go/v6 v6.1.3 h1:vb6yqdpiqaytvreM0bsn2pXw+1YDvEk2RKSmBAQvgDQ=
github.com/ionos-cloud/sdk-go/v6 v6.1.3/go.mod h1:Ox3W0iiEz0GHnfY9e5LmAxwklsxguuNFEUSu0gVRTME=
+github.com/ionos-cloud/sdk-go/v6 v6.1.8 h1:493wE/BkZxJf7x79UCE0cYGPZoqQcPiEBALvt7uVGY0=
+github.com/ionos-cloud/sdk-go/v6 v6.1.8/go.mod h1:EzEgRIDxBELvfoa/uBN0kOQaqovLjUWEB7iW4/Q+t4k=
github.com/jessevdk/go-flags v1.5.0/go.mod h1:Fw0T6WPc1dYxT4mKEZRfG5kJhaTDP9pj1c2EWnYs/m4=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.4.0 h1:BEgLn5cpjn8UN1mAw4NjwDrS35OdebyEtFe+9YPoQUg=
@@ -700,6 +802,8 @@ github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvW
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.13.6/go.mod h1:/3/Vjq9QcHkK5uEr5lBEmyoZ1iFhe47etQ6QUkpK6sk=
+github.com/klauspost/compress v1.16.7 h1:2mk3MPGNzKyxErAw8YaohYh69+pa4sIQSC0fPGCFR9I=
+github.com/klauspost/compress v1.16.7/go.mod h1:ntbaceVETuRiXiv4DpjP66DpAtAGkEQskQzEyD//IeE=
github.com/kolo/xmlrpc v0.0.0-20201022064351-38db28db192b h1:iNjcivnc6lhbvJA3LD622NPrUponluJrBWPIwGG/3Bg=
github.com/kolo/xmlrpc v0.0.0-20220919000247-3377102c83bd h1:b1taQnM42dp3NdiiQwfmM1WyyucHayZSKN5R0PRYWL0=
github.com/kolo/xmlrpc v0.0.0-20220919000247-3377102c83bd/go.mod h1:pcaDhQK0/NJZEvtCO0qQPPropqV0sJOJ6YW7X+9kRwM=
@@ -717,6 +821,7 @@ github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NB
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leodido/go-urn v1.2.0/go.mod h1:+8+nEpDfqqsY+g338gtMEUOtuK+4dEMhiQEgxpxOKII=
github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
@@ -726,6 +831,8 @@ github.com/linode/linodego v1.9.1 h1:29UpEPpYcGFnbwiJW8mbk/bjBZpgd/pv68io2IKTo34
github.com/linode/linodego v1.9.1/go.mod h1:h6AuFR/JpqwwM/vkj7s8KV3iGN8/jxn+zc437F8SZ8w=
github.com/linode/linodego v1.12.0 h1:33mOIrZ+gVva14gyJMKPZ85mQGovAvZCEP1ftgmFBjA=
github.com/linode/linodego v1.12.0/go.mod h1:NJlzvlNtdMRRkXb0oN6UWzUkj6t+IBsyveHgZ5Ppjyk=
+github.com/linode/linodego v1.19.0 h1:n4WJrcr9+30e9JGZ6DI0nZbm5SdAj1kSwvvt/998YUw=
+github.com/linode/linodego v1.19.0/go.mod h1:XZFR+yJ9mm2kwf6itZ6SCpu+6w3KnIevV0Uu5HNWJgQ=
github.com/lyft/protoc-gen-star v0.6.0/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/lyft/protoc-gen-star v0.6.1/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
@@ -743,6 +850,8 @@ github.com/mattn/go-colorable v0.1.8/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
@@ -751,6 +860,9 @@ github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOA
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
+github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
+github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.2-0.20181231171920-c182affec369 h1:I0XW9+e1XWDxdcEniV4rQAIOPUGDq67JSCiRCgGCZLI=
@@ -762,6 +874,8 @@ github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKju
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/miekg/dns v1.1.50 h1:DQUfb9uc6smULcREF09Uc+/Gd46YWqJd5DbpPE9xkcA=
github.com/miekg/dns v1.1.50/go.mod h1:e3IlAVfNqAllflbibAZEWOXOQ+Ynzk/dDozDxY7XnME=
+github.com/miekg/dns v1.1.55 h1:GoQ4hpsj0nFLYe+bWiCToyrBEJXkQfOOIvFGFy0lEgo=
+github.com/miekg/dns v1.1.55/go.mod h1:uInx36IzPl7FYnDcMeVWxj9byh7DutNykX4G9Sj60FY=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -843,6 +957,8 @@ github.com/openzipkin/zipkin-go v0.2.1/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnh
github.com/openzipkin/zipkin-go v0.2.2/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
github.com/ovh/go-ovh v1.3.0 h1:mvZaddk4E4kLcXhzb+cxBsMPYp2pHqiQpWYkInsuZPQ=
github.com/ovh/go-ovh v1.3.0/go.mod h1:AxitLZ5HBRPyUd+Zl60Ajaag+rNTdVXWIkzfrVuTXWA=
+github.com/ovh/go-ovh v1.4.1 h1:VBGa5wMyQtTP7Zb+w97zRCh9sLtM/2YKRyy+MEJmWaM=
+github.com/ovh/go-ovh v1.4.1/go.mod h1:6bL6pPyUT7tBfI0pqOegJgRjgjuO+mOo+MyXd1EEC0M=
github.com/pact-foundation/pact-go v1.0.4/go.mod h1:uExwJY4kCzNPcHRj+hCR/HBbOOIwwtUjcrb0b5/5kLM=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
@@ -852,6 +968,8 @@ github.com/performancecopilot/speed v3.0.0+incompatible/go.mod h1:/CLtqpZ5gBg1M9
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
+github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU=
+github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
@@ -861,6 +979,8 @@ github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZ
github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
+github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/prashantv/gostub v1.1.0/go.mod h1:A5zLQHz7ieHGG7is6LLXLz7I8+3LZzsrV0P1IAHhP5U=
@@ -880,6 +1000,8 @@ github.com/prometheus/client_golang v1.13.0 h1:b71QUfeo5M8gq2+evJdTPfZhYMAU0uKPk
github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
+github.com/prometheus/client_golang v1.16.0 h1:yk/hx9hDbrGHovbci4BY+pRMfSuuat626eFsHb7tmT8=
+github.com/prometheus/client_golang v1.16.0/go.mod h1:Zsulrv/L9oM40tJ7T815tM89lFEugiJ9HzIqaAx4LKc=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -889,6 +1011,8 @@ github.com/prometheus/client_model v0.2.0 h1:uq5h0d+GuxiXLJLNABMgp2qUWDPiLvgCzz2
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
+github.com/prometheus/client_model v0.4.0 h1:5lQXD3cAg1OXBf4Wq03gTrXHeaV0TQvGfUooCfx1yqY=
+github.com/prometheus/client_model v0.4.0/go.mod h1:oMQmHW1/JoDwqLtg57MGgP/Fb1CJEYF2imWWhWtMkYU=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
@@ -901,6 +1025,8 @@ github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/common v0.39.0 h1:oOyhkDq05hPZKItWVBkJ6g6AtGxi+fy7F4JvUV8uhsI=
github.com/prometheus/common v0.39.0/go.mod h1:6XBZ7lYdLCbkAVhwRsWTZn+IN5AB9F/NXd5w0BbEX0Y=
+github.com/prometheus/common v0.44.0 h1:+5BrQJwiBB9xsMygAB3TNvpQKOwlkc25LbISbrdOOfY=
+github.com/prometheus/common v0.44.0/go.mod h1:ofAIvZbQ1e/nugmZGz4/qCb9Ap1VoSTIO7x0VV9VvuY=
github.com/prometheus/common/assets v0.2.0/go.mod h1:D17UVUE12bHbim7HzwUvtqm6gwBEaDQ0F+hIGbFbccI=
github.com/prometheus/common/sigv4 v0.1.0 h1:qoVebwtwwEhS85Czm2dSROY5fTo2PAPEVdDeppTwGX4=
github.com/prometheus/common/sigv4 v0.1.0/go.mod h1:2Jkxxk9yYvCkE5G1sQT7GuEXm57JrvHu9k5YwTjsNtI=
@@ -908,6 +1034,8 @@ github.com/prometheus/exporter-toolkit v0.7.1 h1:c6RXaK8xBVercEeUQ4tRNL8UGWzDHfv
github.com/prometheus/exporter-toolkit v0.7.1/go.mod h1:ZUBIj498ePooX9t/2xtDjeQYwvRpiPP2lh5u4iblj2g=
github.com/prometheus/exporter-toolkit v0.8.2 h1:sbJAfBXQFkG6sUkbwBun8MNdzW9+wd5YfPYofbmj0YM=
github.com/prometheus/exporter-toolkit v0.8.2/go.mod h1:00shzmJL7KxcsabLWcONwpyNEuWhREOnFqZW7vadFS0=
+github.com/prometheus/exporter-toolkit v0.10.0 h1:yOAzZTi4M22ZzVxD+fhy1URTuNRj/36uQJJ5S8IPza8=
+github.com/prometheus/exporter-toolkit v0.10.0/go.mod h1:+sVFzuvV5JDyw+Ih6p3zFxZNVnKQa3x5qPmDSiPu4ZY=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@@ -917,12 +1045,16 @@ github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
+github.com/prometheus/procfs v0.11.0 h1:5EAgkfkMl659uZPbe9AS2N68a7Cc1TJbPEuGzFuRbyk=
+github.com/prometheus/procfs v0.11.0/go.mod h1:nwNm2aOCAYw8uTR/9bWRREkZFxAUcWzPHWJq+XBB/FM=
github.com/prometheus/prometheus v0.38.0 h1:YSiJ5gDZmXnOntPRyHn1wb/6I1Frasj9dw57XowIqeA=
github.com/prometheus/prometheus v0.38.0/go.mod h1:2zHO5FtRhM+iu995gwKIb99EXxjeZEuXpKUTIRq4YI0=
github.com/prometheus/prometheus v0.39.1 h1:abZM6A+sKAv2eKTbRIaHq4amM/nT07MuxRm0+QTaTj0=
github.com/prometheus/prometheus v0.39.1/go.mod h1:GjQjgLhHMc0oo4Ko7qt/yBSJMY4hUoiAZwsYQgjaePA=
github.com/prometheus/prometheus v0.42.0 h1:G769v8covTkOiNckXFIwLx01XE04OE6Fr0JPA0oR2nI=
github.com/prometheus/prometheus v0.42.0/go.mod h1:Pfqb/MLnnR2KK+0vchiaH39jXxvLMBk+3lnIGP4N7Vk=
+github.com/prometheus/prometheus v0.47.1 h1:bd2LiZyxzHn9Oo2Ei4eK2D86vz/L/OiqR1qYo0XmMBo=
+github.com/prometheus/prometheus v0.47.1/go.mod h1:J/bmOSjgH7lFxz2gZhrWEZs2i64vMS+HIuZfmYNhJ/M=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
@@ -939,6 +1071,8 @@ github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9 h1:0roa6gXKgyta64uqh52AQG3wzZX
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.9/go.mod h1:fCa7OJZ/9DRTnOKmxvT6pn+LPWUptQAmHF/SBJUGEcg=
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.12 h1:Aaz4T7dZp7cB2cv7D/tGtRdSMh48sRaDYr7Jh0HV4qQ=
github.com/scaleway/scaleway-sdk-go v1.0.0-beta.12/go.mod h1:fCa7OJZ/9DRTnOKmxvT6pn+LPWUptQAmHF/SBJUGEcg=
+github.com/scaleway/scaleway-sdk-go v1.0.0-beta.20 h1:a9hSJdJcd16e0HoMsnFvaHvxB3pxSD+SC7+CISp7xY0=
+github.com/scaleway/scaleway-sdk-go v1.0.0-beta.20/go.mod h1:fCa7OJZ/9DRTnOKmxvT6pn+LPWUptQAmHF/SBJUGEcg=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/shoenig/test v0.3.1/go.mod h1:xYtyGBC5Q3kzCNyJg/SjgNpfAa2kvmgA0i5+lQso8x0=
github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
@@ -948,6 +1082,8 @@ github.com/simonpasquier/klog-gokit v0.3.0 h1:TkFK21cbwDRS+CiystjqbAiq5ubJcVTk9h
github.com/simonpasquier/klog-gokit v0.3.0/go.mod h1:+SUlDQNrhVtGt2FieaqNftzzk8P72zpWlACateWxA9k=
github.com/simonpasquier/klog-gokit/v3 v3.0.0 h1:J0QrVhAULISHWN05PeXX/xMqJBjnpl2fAuO8uHdQGsA=
github.com/simonpasquier/klog-gokit/v3 v3.0.0/go.mod h1:+WRhGy707Lp2Q4r727m9Oc7FxazOHgW76FIyCr23nus=
+github.com/simonpasquier/klog-gokit/v3 v3.3.0 h1:HMzH999kO5gEgJTaWWO+xjncW5oycspcsBnjn9b853Q=
+github.com/simonpasquier/klog-gokit/v3 v3.3.0/go.mod h1:uSbnWC3T7kt1dQyY9sjv0Ao1SehMAJdVnUNSKhjaDsg=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
@@ -986,10 +1122,14 @@ github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.7.2/go.mod h1:R6va5+xMeoiuVRoj+gSkQ7d3FALtqAAGI1FQKckRals=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
+github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
+github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
@@ -1000,8 +1140,15 @@ github.com/vultr/govultr/v2 v2.17.2/go.mod h1:ZFOKGWmgjytfyjeyAdhQlSWwTjh2ig+X49
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.0.2/go.mod h1:1WAq6h33pAW+iRreB34OORO2Nf7qel3VV3fjBj+hCSs=
github.com/xdg-go/scram v1.1.1/go.mod h1:RaEWvsqvNKKvBPvcKeFjrG2cJqOkHTiyTpzz23ni57g=
+github.com/xdg-go/scram v1.1.2/go.mod h1:RT/sEzTbU5y00aCK8UOx6R7YryM0iF1N2MOmC3kKLN4=
github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM=
github.com/xdg-go/stringprep v1.0.3/go.mod h1:W3f5j4i+9rC0kuIEJL0ky1VpHXQU3ocBgklLGvcBnW8=
+github.com/xdg-go/stringprep v1.0.4/go.mod h1:mPGuuIYwz7CmR2bT9j4GbQqutWS1zV24gijq1dTyGkM=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v1.2.0/go.mod h1:anYRn/JVcOK2ZgGU+IjEV4nwlhoK5sQluxsYJ78Id3Y=
+github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc=
+github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v1.1.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0=
github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA=
@@ -1023,6 +1170,8 @@ go.mongodb.org/mongo-driver v1.10.2 h1:4Wk3cnqOrQCn0P92L3/mmurMxzdvWWs5J9jinAVKD
go.mongodb.org/mongo-driver v1.10.2/go.mod h1:z4XpeoU6w+9Vht+jAFyLgVrD+jGSQQe0+CBWFHNiHt8=
go.mongodb.org/mongo-driver v1.11.0 h1:FZKhBSTydeuffHj9CBjXlR8vQLee1cQyTWYPA6/tqiE=
go.mongodb.org/mongo-driver v1.11.0/go.mod h1:s7p5vEtfbeR1gYi6pnj3c3/urpbLv2T5Sfd6Rp2HBB8=
+go.mongodb.org/mongo-driver v1.12.0 h1:aPx33jmn/rQuJXPQLZQ8NtfPQG8CaqgLThFtqRb0PiE=
+go.mongodb.org/mongo-driver v1.12.0/go.mod h1:AZkxhPnFJUoH7kZlFkVKucV20K387miPfm7oimrSmK0=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
@@ -1035,23 +1184,35 @@ go.opencensus.io v0.23.0 h1:gqCw0LfLxScz8irSi8exQc7fyQ0fKQU/qnC/X8+V/1M=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
+go.opentelemetry.io/collector/pdata v1.0.0-rcv0014 h1:iT5qH0NLmkGeIdDtnBogYDx7L58t6CaWGL378DEo2QY=
+go.opentelemetry.io/collector/pdata v1.0.0-rcv0014/go.mod h1:BRvDrx43kiSoUx3mr7SoA7h9B8+OY99mUK+CZSQFWW4=
+go.opentelemetry.io/collector/semconv v0.81.0 h1:lCYNNo3powDvFIaTPP2jDKIrBiV1T92NK4QgL/aHYXw=
+go.opentelemetry.io/collector/semconv v0.81.0/go.mod h1:TlYPtzvsXyHOgr5eATi43qEMqwSmIziivJB2uctKswo=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.36.0/go.mod h1:14Oo79mRwusSI02L0EfG3Gp1uF3+1wSL+D4zDysxyqs=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.42.0 h1:pginetY7+onl4qN1vl0xW/V/v6OBZ0vVdH+esuJgvmM=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.42.0/go.mod h1:XiYsayHc36K3EByOO6nbAXnAWbrUxdjUROCEeeROOH8=
go.opentelemetry.io/otel v1.9.0 h1:8WZNQFIB2a71LnANS9JeyidJKKGOOremcUtb/OtHISw=
go.opentelemetry.io/otel v1.10.0 h1:Y7DTJMR6zs1xkS/upamJYk0SxxN4C9AqRd77jmZnyY4=
go.opentelemetry.io/otel v1.10.0/go.mod h1:NbvWjCthWHKBEUMpf0/v8ZRZlni86PpGFEMA9pnQSnQ=
go.opentelemetry.io/otel v1.11.2 h1:YBZcQlsVekzFsFbjygXMOXSs6pialIZxcjfO/mBDmR0=
go.opentelemetry.io/otel v1.11.2/go.mod h1:7p4EUV+AqgdlNV9gL97IgUZiVR3yrFXYo53f9BM3tRI=
+go.opentelemetry.io/otel v1.16.0 h1:Z7GVAX/UkAXPKsy94IU+i6thsQS4nb7LviLpnaNeW8s=
+go.opentelemetry.io/otel v1.16.0/go.mod h1:vl0h9NUa1D5s1nv3A5vZOYWn8av4K8Ml6JDeHrT/bx4=
go.opentelemetry.io/otel/exporters/otlp/internal/retry v1.10.0/go.mod h1:78XhIg8Ht9vR4tbLNUhXsiOnE2HOuSeKAiAcoVQEpOY=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.10.0/go.mod h1:Krqnjl22jUJ0HgMzw5eveuCvFDXY4nSYb4F8t5gdrag=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.10.0/go.mod h1:OfUCyyIiDvNXHWpcWgbF+MWvqPZiNa3YDEnivcnYsV0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.10.0/go.mod h1:5WV40MLWwvWlGP7Xm8g3pMcg0pKOUY609qxJn8y7LmM=
go.opentelemetry.io/otel/metric v0.32.0/go.mod h1:PVDNTt297p8ehm949jsIzd+Z2bIZJYQQG/uuHTeWFHY=
+go.opentelemetry.io/otel/metric v1.16.0 h1:RbrpwVG1Hfv85LgnZ7+txXioPDoh6EdbZHo26Q3hqOo=
+go.opentelemetry.io/otel/metric v1.16.0/go.mod h1:QE47cpOmkwipPiefDwo2wDzwJrlfxxNYodqc4xnGCo4=
go.opentelemetry.io/otel/sdk v1.10.0/go.mod h1:vO06iKzD5baltJz1zarxMCNHFpUlUiOy4s65ECtn6kE=
go.opentelemetry.io/otel/trace v1.9.0 h1:oZaCNJUjWcg60VXWee8lJKlqhPbXAPB51URuR47pQYc=
go.opentelemetry.io/otel/trace v1.10.0 h1:npQMbR8o7mum8uF95yFbOEJffhs1sbCOfDh8zAJiH5E=
go.opentelemetry.io/otel/trace v1.10.0/go.mod h1:Sij3YYczqAdz+EhmGhE6TpTxUO5/F/AzrK+kxfGqySM=
go.opentelemetry.io/otel/trace v1.11.2 h1:Xf7hWSF2Glv0DE3MH7fBHvtpSBsjcBUe5MYAmZM/+y0=
go.opentelemetry.io/otel/trace v1.11.2/go.mod h1:4N+yC7QEz7TTsG9BSRLNAa63eg5E06ObSbKPmxQ/pKA=
+go.opentelemetry.io/otel/trace v1.16.0 h1:8JRpaObFoW0pxuVPapkgH8UhHQj+bJW8jJsCZEu5MQs=
+go.opentelemetry.io/otel/trace v1.16.0/go.mod h1:Yt9vYq1SdNz3xdjZZK7wcXv1qv2pwLkqr2QVwea0ef0=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.opentelemetry.io/proto/otlp v0.15.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.opentelemetry.io/proto/otlp v0.19.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
@@ -1061,13 +1222,19 @@ go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ=
go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
+go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
+go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/automaxprocs v1.5.1/go.mod h1:BF4eumQw0P9GtnuxxovUd06vwm1o18oMzFtK66vU6XU=
go.uber.org/goleak v1.1.12 h1:gZAh5/EyT/HQwlpkCy6wTpqfH9H8Lz8zbm3dZh+OyzA=
go.uber.org/goleak v1.1.12/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
go.uber.org/goleak v1.2.0/go.mod h1:XJYK+MuIchqpmGmUSAzotztawfKvYLUIgg7guXrwVUo=
+go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A=
+go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/multierr v1.3.0/go.mod h1:VgVr7evmIr6uPjLBxg28wmKNXyqE9akIJ5XnfpiKl+4=
+go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
+go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.13.0/go.mod h1:zwrFLgMcdUuIBviXEYEH1YKNaOBnKXsx2IPda5bBwHM=
@@ -1090,6 +1257,7 @@ golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5y
golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20211202192323-5770296d904e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.0.0-20220314234659-1baeb1ce4c0b/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220315160706-3147a52a75dd/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa h1:zuSxTR4o9y82ebqCUJYNGJbGPo6sKVl54f/TVDObg1c=
@@ -1097,6 +1265,9 @@ golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0
golang.org/x/crypto v0.0.0-20220829220503-c86fa9a7ed90/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
+golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
+golang.org/x/crypto v0.11.0 h1:6Ewdq3tDic1mg5xRO4milcWCfMVQhI4NkqWWvqejpuA=
+golang.org/x/crypto v0.11.0/go.mod h1:xgJhtzW8F9jGdVFWZESrid1U1bjeNy4zgy5cRr/CIio=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -1109,6 +1280,8 @@ golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EH
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
golang.org/x/exp v0.0.0-20230124195608-d38c7dcee874 h1:kWC3b7j6Fu09SnEBr7P4PuQyM0R6sqyH9R+EjIvT1nQ=
golang.org/x/exp v0.0.0-20230124195608-d38c7dcee874/go.mod h1:CxIveKay+FTh1D0yPZemJVgC/95VzuuOLq5Qi4xnoYc=
+golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1 h1:MGwJjxBy0HJshjDNfLsYO8xppfqWlA5ZT9OhtUUhTNw=
+golang.org/x/exp v0.0.0-20230713183714-613f0c0eb8a1/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -1140,6 +1313,8 @@ golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.7.0 h1:LapD9S96VoQRhi/GrNTqeBJFrUjs5UHCAtTlgwA5oZA=
golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
+golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -1191,8 +1366,10 @@ golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210805182204-aaa1db679c0d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20211029224645-99673261e6eb/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211216030914-fe4d6282115f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
@@ -1212,6 +1389,9 @@ golang.org/x/net v0.0.0-20220920203100-d0c6ba3f52d9/go.mod h1:YDH+HFinaLZZlnHAfS
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.5.0 h1:GyT4nK/YDHSqa1c4753ouYCDajOYKTja9Xb/OHtgvSw=
golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.12.0 h1:cfawfvKITfUsFCeJIHJrbSxpeu/E81khclypR0GVT50=
+golang.org/x/net v0.12.0/go.mod h1:zEVYFnQC7m/vmpQFELhcD1EWkZlX69l4oqgmer6hfKA=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -1240,6 +1420,8 @@ golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1 h1:lxqLZaMad/dJHMFZH0NiNp
golang.org/x/oauth2 v0.0.0-20220909003341-f21342109be1/go.mod h1:h4gKUeWbJ4rQPri7E0u6Gs4e9Ri2zaLxzw5DI5XGrYg=
golang.org/x/oauth2 v0.4.0 h1:NF0gk8LVPg1Ml7SSbGyySuoxdsXitj7TvgvuRxIMc/M=
golang.org/x/oauth2 v0.4.0/go.mod h1:RznEsdpjGAINPTOF0UH/t+xJ75L18YO3Ho6Pyn+uRec=
+golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8=
+golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -1259,6 +1441,8 @@ golang.org/x/sync v0.0.0-20220907140024-f12130a52804 h1:0SH2R3f1b1VmIMG7BXbEZCBU
golang.org/x/sync v0.0.0-20220907140024-f12130a52804/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
+golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -1331,6 +1515,7 @@ golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -1360,6 +1545,7 @@ golang.org/x/sys v0.0.0-20220624220833-87e55d714810/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220808155132-1c4a2a72c664 h1:v1W7bwXHsnLLloWYTVEdvGvA7BHMeBYsPcF0GLDxIRs=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908150016-7ac13a9a928d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8 h1:h+EGohizhe9XlX18rfpa8k8RAc5XyaeamM+0VHRd4lc=
@@ -1367,6 +1553,10 @@ golang.org/x/sys v0.0.0-20220919091848-fb04ddd9f9c8/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.4.0 h1:Zr2JFtRQNX3BCZ8YtxRE9hNJYC8J6I1MVbMg6owUp18=
golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.10.0 h1:SqMFp9UcQJZa+pmYuAKjd9xq1f0j5rLcDIk0mj4qAsA=
+golang.org/x/sys v0.10.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
@@ -1374,6 +1564,9 @@ golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuX
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.4.0 h1:O7UWfv5+A2qiuulQk30kVinPoMtoIPeVaKLEgLpVkvg=
golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
+golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
+golang.org/x/term v0.10.0 h1:3R7pNqamzBraeqj/Tj8qt1aQ2HpmlC+Cx/qL/7hn4/c=
+golang.org/x/term v0.10.0/go.mod h1:lpqdcUyK/oCiQxvxVrppt5ggO2KCZ5QblwqPnfZ6d5o=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -1384,9 +1577,13 @@ golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.6.0 h1:3XmdazWV+ubf7QgHSTWeykHOci5oeekaGJBLkrkaw4k=
golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.11.0 h1:LAntKIrcmeSKERyiOh0XMV39LXS8IE9UL2yP7+f5ij4=
+golang.org/x/text v0.11.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@@ -1469,6 +1666,8 @@ golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.5.0 h1:+bSpV5HIeWkuvgaMfI3UmKRThoTA5ODJTUd8T17NO+4=
golang.org/x/tools v0.5.0/go.mod h1:N+Kgy78s5I24c24dU8OfWNEotWjutIs8SnJvn5IDq+k=
+golang.org/x/tools v0.11.0 h1:EMCa6U9S2LtZXLAMoWiR/R8dAQFRqbAitmbJ2UKhoi8=
+golang.org/x/tools v0.11.0/go.mod h1:anzJrxPjNtfgiYQYirP2CPGzGLxrH2u2QBhn6Bf3qY8=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1525,6 +1724,8 @@ google.golang.org/api v0.96.0 h1:F60cuQPJq7K7FzsxMYHAUJSiXh2oKctHxBMbDygxhfM=
google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
google.golang.org/api v0.108.0 h1:WVBc/faN0DkKtR43Q/7+tPny9ZoLZdIiAyG5Q9vFClg=
google.golang.org/api v0.108.0/go.mod h1:2Ts0XTHNVWxypznxWOYUeI4g3WdP9Pk2Qk58+a/O9MY=
+google.golang.org/api v0.132.0 h1:8t2/+qZ26kAOGSmOiHwVycqVaDg7q3JDILrNi/Z6rvc=
+google.golang.org/api v0.132.0/go.mod h1:AeTBC6GpJnJSRJjktDcPX0QwtS8pGYZOV6MSuSCusw0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1597,6 +1798,7 @@ google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ6
google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211221195035-429b39de9b1c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220107163113-42d7afdf6368/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220207164111-0872dc986b00/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20220218161850-94dd64e39d7c/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
@@ -1637,6 +1839,12 @@ google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006 h1:mmbq5q8M1t7dhkL
google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006/go.mod h1:ht8XFiar2npT/g4vkk7O0WYS1sHOHbdujxbEp7CJWbw=
google.golang.org/genproto v0.0.0-20230124163310-31e0e69b6fc2 h1:O97sLx/Xmb/KIZHB/2/BzofxBs5QmmR0LcihPtllmbc=
google.golang.org/genproto v0.0.0-20230124163310-31e0e69b6fc2/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
+google.golang.org/genproto v0.0.0-20230717213848-3f92550aa753 h1:+VoAg+OKmWaommL56xmZSE2sUK8A7m6SUO7X89F2tbw=
+google.golang.org/genproto v0.0.0-20230717213848-3f92550aa753/go.mod h1:iqkVr8IRpZ53gx1dEnWlCUIEwDWqWARWrbzpasaTNYM=
+google.golang.org/genproto/googleapis/api v0.0.0-20230717213848-3f92550aa753 h1:lCbbUxUDD+DiXx9Q6F/ttL0aAu7N2pz8XnmMm8ZW4NE=
+google.golang.org/genproto/googleapis/api v0.0.0-20230717213848-3f92550aa753/go.mod h1:rsr7RhLuwsDKL7RmgDDCUc6yaGr1iqceVb5Wv6f6YvQ=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20230717213848-3f92550aa753 h1:XUODHrpzJEUeWmVo/jfNTLj0YyVveOo28oE6vkFbkO4=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20230717213848-3f92550aa753/go.mod h1:TUfxEVdsvPg18p6AslUXFoLdpED4oBnGwyqk3dV1XzM=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
@@ -1679,6 +1887,8 @@ google.golang.org/grpc v1.49.0 h1:WTLtQzmQori5FUH25Pq4WT22oCsv8USpQ+F6rqtsmxw=
google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/grpc v1.52.1 h1:2NpOPk5g5Xtb0qebIEs7hNIa++PdtZLo2AQUpc1YnSU=
google.golang.org/grpc v1.52.1/go.mod h1:pu6fVzoFb+NBYNAvQL08ic+lvB2IojljRYuun5vorUY=
+google.golang.org/grpc v1.56.2 h1:fVRFRnXvU+x6C4IlHZewvJOVHoOv1TUuQyoRsYnB4bI=
+google.golang.org/grpc v1.56.2/go.mod h1:I9bI3vqKfayGqPUAwGdOSu7kt6oIJLixfffKrpXqQ9s=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
@@ -1696,6 +1906,8 @@ google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQ
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.31.0 h1:g0LDEJHgrBl9N9r17Ru3sqWhkIx2NB67okBHPwC7hs8=
+google.golang.org/protobuf v1.31.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/alecthomas/kingpin.v2 v2.2.6 h1:jMFz6MfLP0/4fUyZle81rXUoxOBFi19VUFKVDOQfozc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -1712,6 +1924,8 @@ gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.57.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/ini.v1 v1.66.6 h1:LATuAqN/shcYAOkv3wl2L4rkaKqkcgTBQjOyYDvcPKI=
gopkg.in/ini.v1 v1.66.6/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
+gopkg.in/ini.v1 v1.67.0 h1:Dgnx+6+nfE+IfzjUEISNeydPJh9AXNNsWbGP9KzCsOA=
+gopkg.in/ini.v1 v1.67.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/telebot.v3 v3.0.0/go.mod h1:7rExV8/0mDDNu9epSrDm/8j22KLaActH1Tbee6YjzWg=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
@@ -1747,28 +1961,38 @@ k8s.io/api v0.25.1 h1:yL7du50yc93k17nH/Xe9jujAYrcDkI/i5DL1jPz4E3M=
k8s.io/api v0.25.1/go.mod h1:hh4itDvrWSJsmeUc28rIFNri8MatNAAxJjKcQmhX6TU=
k8s.io/api v0.26.1 h1:f+SWYiPd/GsiWwVRz+NbFyCgvv75Pk9NK6dlkZgpCRQ=
k8s.io/api v0.26.1/go.mod h1:xd/GBNgR0f707+ATNyPmQ1oyKSgndzXij81FzWGsejg=
+k8s.io/api v0.27.3 h1:yR6oQXXnUEBWEWcvPWS0jQL575KoAboQPfJAuKNrw5Y=
+k8s.io/api v0.27.3/go.mod h1:C4BNvZnQOF7JA/0Xed2S+aUyJSfTGkGFxLXz9MnpIpg=
k8s.io/apimachinery v0.24.3 h1:hrFiNSA2cBZqllakVYyH/VyEh4B581bQRmqATJSeQTg=
k8s.io/apimachinery v0.25.1 h1:t0XrnmCEHVgJlR2arwO8Awp9ylluDic706WePaYCBTI=
k8s.io/apimachinery v0.25.1/go.mod h1:hqqA1X0bsgsxI6dXsJ4HnNTBOmJNxyPp8dw3u2fSHwA=
k8s.io/apimachinery v0.26.1 h1:8EZ/eGJL+hY/MYCNwhmDzVqq2lPl3N3Bo8rvweJwXUQ=
k8s.io/apimachinery v0.26.1/go.mod h1:tnPmbONNJ7ByJNz9+n9kMjNP8ON+1qoAIIC70lztu74=
+k8s.io/apimachinery v0.27.3 h1:Ubye8oBufD04l9QnNtW05idcOe9Z3GQN8+7PqmuVcUM=
+k8s.io/apimachinery v0.27.3/go.mod h1:XNfZ6xklnMCOGGFNqXG7bUrQCoR04dh/E7FprV6pb+E=
k8s.io/client-go v0.24.3 h1:Nl1840+6p4JqkFWEW2LnMKU667BUxw03REfLAVhuKQY=
k8s.io/client-go v0.25.1 h1:uFj4AJKtE1/ckcSKz8IhgAuZTdRXZDKev8g387ndD58=
k8s.io/client-go v0.25.1/go.mod h1:rdFWTLV/uj2C74zGbQzOsmXPUtMAjSf7ajil4iJUNKo=
k8s.io/client-go v0.26.1 h1:87CXzYJnAMGaa/IDDfRdhTzxk/wzGZ+/HUQpqgVSZXU=
k8s.io/client-go v0.26.1/go.mod h1:IWNSglg+rQ3OcvDkhY6+QLeasV4OYHDjdqeWkDQZwGE=
+k8s.io/client-go v0.27.3 h1:7dnEGHZEJld3lYwxvLl7WoehK6lAq7GvgjxpA3nv1E8=
+k8s.io/client-go v0.27.3/go.mod h1:2MBEKuTo6V1lbKy3z1euEGnhPfGZLKTS9tiJ2xodM48=
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/kube-openapi v0.0.0-20220328201542-3ee0da9b0b42 h1:Gii5eqf+GmIEwGNKQYQClCayuJCe2/4fZUvF7VG99sU=
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1 h1:MQ8BAZPZlWk3S9K4a9NCkIFQtZShWqoha7snGixVgEA=
k8s.io/kube-openapi v0.0.0-20220803162953-67bda5d908f1/go.mod h1:C/N6wCaBHeBHkHUesQOQy2/MZqGgMAFPqGsGQLdbZBU=
k8s.io/kube-openapi v0.0.0-20221207184640-f3cff1453715 h1:tBEbstoM+K0FiBV5KGAKQ0kuvf54v/hwpldiJt69w1s=
k8s.io/kube-openapi v0.0.0-20221207184640-f3cff1453715/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4=
+k8s.io/kube-openapi v0.0.0-20230525220651-2546d827e515 h1:OmK1d0WrkD3IPfkskvroRykOulHVHf0s0ZIFRjyt+UI=
+k8s.io/kube-openapi v0.0.0-20230525220651-2546d827e515/go.mod h1:kzo02I3kQ4BTtEfVLaPbjvCkX97YqGve33wzlb3fofQ=
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9 h1:HNSDgDCrr/6Ly3WEGKZftiE7IY19Vz2GdbOCyI4qqhc=
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed h1:jAne/RjBTyawwAy0utX5eqigAwz/lQhTmy+Hr/Cpue4=
k8s.io/utils v0.0.0-20220728103510-ee6ede2d64ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448 h1:KTgPnR10d5zhztWptI952TNtt/4u5h3IzDXkdIMuo2Y=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
+k8s.io/utils v0.0.0-20230711102312-30195339c3c7 h1:ZgnF1KZsYxWIifwSNZFZgNtWE89WI5yiP5WwlfDoIyc=
+k8s.io/utils v0.0.0-20230711102312-30195339c3c7/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=
@@ -1780,6 +2004,8 @@ sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h6
sigs.k8s.io/structured-merge-diff/v4 v4.2.1 h1:bKCqE9GvQ5tiVHn5rfn1r+yao3aLQEaLzkkmAkf+A6Y=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
+sigs.k8s.io/structured-merge-diff/v4 v4.3.0 h1:UZbZAZfX0wV2zr7YZorDz6GXROfDFj6LvqCRm4VUVKk=
+sigs.k8s.io/structured-merge-diff/v4 v4.3.0/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
sigs.k8s.io/yaml v1.3.0 h1:a2VclLzOGrwOHDiV8EfBGhvjHvP46CtW5j6POvhYGGo=
diff --git a/operator/.bingo/variables.env b/operator/.bingo/variables.env
index 8f5734013f22e..72d37aeec6642 100644
--- a/operator/.bingo/variables.env
+++ b/operator/.bingo/variables.env
@@ -10,13 +10,13 @@ fi
BINGO="${GOBIN}/bingo-v0.8.0"
-CONTROLLER_GEN="${GOBIN}/controller-gen-v0.11.3"
+CONTROLLER_GEN="${GOBIN}/controller-gen-v0.13.0"
GEN_CRD_API_REFERENCE_DOCS="${GOBIN}/gen-crd-api-reference-docs-v0.0.3"
-GOFUMPT="${GOBIN}/gofumpt-v0.4.0"
+GOFUMPT="${GOBIN}/gofumpt-v0.5.0"
-GOLANGCI_LINT="${GOBIN}/golangci-lint-v1.53.3"
+GOLANGCI_LINT="${GOBIN}/golangci-lint-v1.54.2"
HUGO="${GOBIN}/hugo-v0.80.0"
@@ -26,11 +26,11 @@ JSONNET="${GOBIN}/jsonnet-v0.20.0"
JSONNETFMT="${GOBIN}/jsonnetfmt-v0.20.0"
-KIND="${GOBIN}/kind-v0.17.0"
+KIND="${GOBIN}/kind-v0.20.0"
KUSTOMIZE="${GOBIN}/kustomize-v4.5.7"
-OPERATOR_SDK="${GOBIN}/operator-sdk-v1.27.0"
+OPERATOR_SDK="${GOBIN}/operator-sdk-v1.31.0"
-PROMTOOL="${GOBIN}/promtool-v0.42.0"
+PROMTOOL="${GOBIN}/promtool-v0.47.1"
diff --git a/operator/apis/config/v1/zz_generated.deepcopy.go b/operator/apis/config/v1/zz_generated.deepcopy.go
index c85446c21e0b4..ef20274e286eb 100644
--- a/operator/apis/config/v1/zz_generated.deepcopy.go
+++ b/operator/apis/config/v1/zz_generated.deepcopy.go
@@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
-// +build !ignore_autogenerated
// Code generated by controller-gen. DO NOT EDIT.
diff --git a/operator/apis/loki/go.mod b/operator/apis/loki/go.mod
index 964d1bb6ec147..183452e88699f 100644
--- a/operator/apis/loki/go.mod
+++ b/operator/apis/loki/go.mod
@@ -4,8 +4,8 @@ go 1.19
require (
github.com/stretchr/testify v1.8.2
- k8s.io/api v0.26.3
- k8s.io/apimachinery v0.26.3
+ k8s.io/api v0.26.9
+ k8s.io/apimachinery v0.26.9
k8s.io/utils v0.0.0-20230313181309-38a27ef9d749
sigs.k8s.io/controller-runtime v0.14.5
)
@@ -19,8 +19,8 @@ require (
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
- golang.org/x/net v0.7.0 // indirect
- golang.org/x/text v0.7.0 // indirect
+ golang.org/x/net v0.8.0 // indirect
+ golang.org/x/text v0.8.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
diff --git a/operator/apis/loki/go.sum b/operator/apis/loki/go.sum
index 98b5d65fef4a7..be5fe7db01e86 100644
--- a/operator/apis/loki/go.sum
+++ b/operator/apis/loki/go.sum
@@ -45,19 +45,19 @@ golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
-golang.org/x/net v0.7.0 h1:rJrUqqhjsgNp7KqAIc25s9pZnjU7TUcSY7HcVZjdn1g=
-golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
+golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ=
+golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.5.0 h1:MUK/U/4lj1t1oPg0HfuXDN/Z1wv31ZJ/YcPiGccS4DU=
+golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.7.0 h1:4BRB4x83lYWy72KwLD/qYDuTu7q9PjSagHvijDw7cLo=
-golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68=
+golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
@@ -76,10 +76,10 @@ gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-k8s.io/api v0.26.3 h1:emf74GIQMTik01Aum9dPP0gAypL8JTLl/lHa4V9RFSU=
-k8s.io/api v0.26.3/go.mod h1:PXsqwPMXBSBcL1lJ9CYDKy7kIReUydukS5JiRlxC3qE=
-k8s.io/apimachinery v0.26.3 h1:dQx6PNETJ7nODU3XPtrwkfuubs6w7sX0M8n61zHIV/k=
-k8s.io/apimachinery v0.26.3/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
+k8s.io/api v0.26.9 h1:s8Y+G1u2JM55b90+Yo2RVb3PGT/hkWNVPN4idPERxJg=
+k8s.io/api v0.26.9/go.mod h1:W/W4fEWRVzPD36820LlVUQfNBiSbiq0VPWRFJKwzmUg=
+k8s.io/apimachinery v0.26.9 h1:5yAV9cFR7Z4gIorKcAjWnx4uxtxiFsERwq4Pvmx0CCg=
+k8s.io/apimachinery v0.26.9/go.mod h1:qYzLkrQ9lhrZRh0jNKo2cfvf/R1/kQONnSiyB7NUJU0=
k8s.io/klog/v2 v2.80.1 h1:atnLQ121W371wYYFawwYx1aEY2eUfs4l3J72wtgAwV4=
k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/utils v0.0.0-20230313181309-38a27ef9d749 h1:xMMXJlJbsU8w3V5N2FLDQ8YgU8s1EoULdbQBcAeNJkY=
diff --git a/operator/apis/loki/v1/zz_generated.deepcopy.go b/operator/apis/loki/v1/zz_generated.deepcopy.go
index 686294b2185e4..b29ddc93872e1 100644
--- a/operator/apis/loki/v1/zz_generated.deepcopy.go
+++ b/operator/apis/loki/v1/zz_generated.deepcopy.go
@@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
-// +build !ignore_autogenerated
// Code generated by controller-gen. DO NOT EDIT.
@@ -625,7 +624,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -640,7 +640,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -655,7 +656,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -670,7 +672,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -685,7 +688,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -700,7 +704,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -715,7 +720,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -730,7 +736,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -1108,7 +1115,8 @@ func (in PodStatusMap) DeepCopyInto(out *PodStatusMap) {
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
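
The change repeated throughout these generated deep-copy hunks — reading the map value into a fresh local (`inVal := (*in)[key]`) instead of aliasing the range variable `val` — comes from the controller-gen v0.13.0 output and sidesteps a classic Go pitfall: before Go 1.22, a range loop reuses a single loop variable, so any pointer taken to it ends up referring to the last iteration's value. A minimal, self-contained sketch of that hazard (illustrative names only, not part of the diff above):

package main

import "fmt"

func main() {
	src := map[string][]string{"a": {"1"}, "b": {"2"}}

	// Hazardous pattern: &val aliases the single loop variable (pre-Go 1.22),
	// so every stored pointer ends up pointing at the same slice header.
	var aliased []*[]string
	for _, val := range src {
		aliased = append(aliased, &val)
	}

	// Pattern used by the newer generated code: copy the map value into a
	// fresh local per iteration, then take that local's address.
	var copied []*[]string
	for key := range src {
		inVal := src[key]
		copied = append(copied, &inVal)
	}

	fmt.Println(*aliased[0], *aliased[1]) // may print the same value twice on Go < 1.22
	fmt.Println(*copied[0], *copied[1])   // always two distinct values
}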
diff --git a/operator/apis/loki/v1beta1/zz_generated.deepcopy.go b/operator/apis/loki/v1beta1/zz_generated.deepcopy.go
index ca44ac2f35453..caabd66aac67a 100644
--- a/operator/apis/loki/v1beta1/zz_generated.deepcopy.go
+++ b/operator/apis/loki/v1beta1/zz_generated.deepcopy.go
@@ -1,5 +1,4 @@
//go:build !ignore_autogenerated
-// +build !ignore_autogenerated
// Code generated by controller-gen. DO NOT EDIT.
@@ -560,7 +559,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -575,7 +575,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -590,7 +591,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -605,7 +607,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -620,7 +623,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -635,7 +639,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -650,7 +655,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -665,7 +671,8 @@ func (in *LokiStackComponentStatus) DeepCopyInto(out *LokiStackComponentStatus)
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
@@ -967,7 +974,8 @@ func (in PodStatusMap) DeepCopyInto(out *PodStatusMap) {
if val == nil {
(*out)[key] = nil
} else {
- in, out := &val, &outVal
+ inVal := (*in)[key]
+ in, out := &inVal, &outVal
*out = make([]string, len(*in))
copy(*out, *in)
}
diff --git a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
index 0cbe7401f81c0..411b0d1b8d119 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: docker.io/grafana/loki-operator:0.4.0
- createdAt: "2023-09-28T10:58:47Z"
+ createdAt: "2023-10-04T19:21:13Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
index f524144151873..e8ebdc32528f7 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
index a0054ad811c1d..86c36c4eb88b4 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
index 07eac8b94e801..2a827b65e0c63 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
index 2fd20cc1ce5a9..a218c6e076c0e 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
index 6b598c81c1e62..d41b7d8cfa4bc 100644
--- a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: docker.io/grafana/loki-operator:0.4.0
- createdAt: "2023-09-28T10:58:45Z"
+ createdAt: "2023-10-04T19:21:10Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
diff --git a/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
index 2e17035252284..dbb6b869602ee 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
index e3ff543d7a3fe..90ba4f19c5275 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
index 2c6e34792d28a..ec5eb9cc61358 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
index 451e24beb5bf5..689d10a5d6ff4 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-v0.4.0
diff --git a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
index 467494b0836ac..b5996b5f63422 100644
--- a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: quay.io/openshift-logging/loki-operator:0.1.0
- createdAt: "2023-09-28T10:58:49Z"
+ createdAt: "2023-10-04T19:21:16Z"
description: |
The Loki Operator for OCP provides a means for configuring and managing a Loki stack for cluster logging.
## Prerequisites and Requirements
diff --git a/operator/bundle/openshift/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/openshift/manifests/loki.grafana.com_alertingrules.yaml
index cf8b58e603e78..dc3a5038b08fd 100644
--- a/operator/bundle/openshift/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/openshift/manifests/loki.grafana.com_alertingrules.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-0.1.0
diff --git a/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
index cbc862a9ed179..c34ebd59c8fa8 100644
--- a/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-0.1.0
diff --git a/operator/bundle/openshift/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/openshift/manifests/loki.grafana.com_recordingrules.yaml
index 9de09e3a7454d..d896012f81977 100644
--- a/operator/bundle/openshift/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/openshift/manifests/loki.grafana.com_recordingrules.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-0.1.0
diff --git a/operator/bundle/openshift/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/openshift/manifests/loki.grafana.com_rulerconfigs.yaml
index 8277345c2c45c..9900a460b8f91 100644
--- a/operator/bundle/openshift/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/openshift/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -2,7 +2,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
+ controller-gen.kubebuilder.io/version: v0.13.0
creationTimestamp: null
labels:
app.kubernetes.io/instance: loki-operator-0.1.0
diff --git a/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml b/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml
index 8d585b0b89426..0818f611b638d 100644
--- a/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_alertingrules.yaml
@@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
- creationTimestamp: null
+ controller-gen.kubebuilder.io/version: v0.13.0
name: alertingrules.loki.grafana.com
spec:
group: loki.grafana.com
diff --git a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
index fc0e2bacb9737..1acdaa2418eb3 100644
--- a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
@@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
- creationTimestamp: null
+ controller-gen.kubebuilder.io/version: v0.13.0
name: lokistacks.loki.grafana.com
spec:
group: loki.grafana.com
diff --git a/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml b/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml
index 631da1b48152d..fbce6aafef538 100644
--- a/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_recordingrules.yaml
@@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
- creationTimestamp: null
+ controller-gen.kubebuilder.io/version: v0.13.0
name: recordingrules.loki.grafana.com
spec:
group: loki.grafana.com
diff --git a/operator/config/crd/bases/loki.grafana.com_rulerconfigs.yaml b/operator/config/crd/bases/loki.grafana.com_rulerconfigs.yaml
index 534abedfce642..96c57b4733f6a 100644
--- a/operator/config/crd/bases/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_rulerconfigs.yaml
@@ -3,8 +3,7 @@ apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
- controller-gen.kubebuilder.io/version: v0.11.3
- creationTimestamp: null
+ controller-gen.kubebuilder.io/version: v0.13.0
name: rulerconfigs.loki.grafana.com
spec:
group: loki.grafana.com
diff --git a/operator/config/rbac/role.yaml b/operator/config/rbac/role.yaml
index f2df6c8a2cfe0..d7b881ef8e33d 100644
--- a/operator/config/rbac/role.yaml
+++ b/operator/config/rbac/role.yaml
@@ -2,7 +2,6 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
- creationTimestamp: null
name: lokistack-manager
rules:
- nonResourceURLs:
diff --git a/operator/config/webhook/manifests.yaml b/operator/config/webhook/manifests.yaml
index 138a244f8f28c..2e89e5d68117f 100644
--- a/operator/config/webhook/manifests.yaml
+++ b/operator/config/webhook/manifests.yaml
@@ -2,7 +2,6 @@
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
- creationTimestamp: null
name: validating-webhook-configuration
webhooks:
- admissionReviewVersions:
diff --git a/operator/go.mod b/operator/go.mod
index a0f5ec973821f..8f0c033ded3d8 100644
--- a/operator/go.mod
+++ b/operator/go.mod
@@ -1,6 +1,6 @@
module github.com/grafana/loki/operator
-go 1.19
+go 1.20
require (
github.com/ViaQ/logerr/v2 v2.1.0
@@ -19,10 +19,10 @@ require (
github.com/prometheus/prometheus v0.42.0
github.com/stretchr/testify v1.8.4
gopkg.in/yaml.v2 v2.4.0
- k8s.io/api v0.26.3
- k8s.io/apimachinery v0.26.3
- k8s.io/apiserver v0.26.2
- k8s.io/client-go v0.26.2
+ k8s.io/api v0.26.9
+ k8s.io/apimachinery v0.26.9
+ k8s.io/apiserver v0.26.9
+ k8s.io/client-go v0.26.9
k8s.io/utils v0.0.0-20230313181309-38a27ef9d749
sigs.k8s.io/controller-runtime v0.14.5
sigs.k8s.io/yaml v1.3.0
@@ -150,7 +150,7 @@ require (
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
k8s.io/apiextensions-apiserver v0.26.1 // indirect
- k8s.io/component-base v0.26.2 // indirect
+ k8s.io/component-base v0.26.9 // indirect
k8s.io/klog/v2 v2.80.1 // indirect
k8s.io/kube-openapi v0.0.0-20221207184640-f3cff1453715 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
diff --git a/operator/go.sum b/operator/go.sum
index 51e42e5c046cc..19dc9e79aafd8 100644
--- a/operator/go.sum
+++ b/operator/go.sum
@@ -1408,24 +1408,24 @@ honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
k8s.io/api v0.18.3/go.mod h1:UOaMwERbqJMfeeeHc8XJKawj4P9TgDRnViIqqBeH2QA=
-k8s.io/api v0.26.3 h1:emf74GIQMTik01Aum9dPP0gAypL8JTLl/lHa4V9RFSU=
-k8s.io/api v0.26.3/go.mod h1:PXsqwPMXBSBcL1lJ9CYDKy7kIReUydukS5JiRlxC3qE=
+k8s.io/api v0.26.9 h1:s8Y+G1u2JM55b90+Yo2RVb3PGT/hkWNVPN4idPERxJg=
+k8s.io/api v0.26.9/go.mod h1:W/W4fEWRVzPD36820LlVUQfNBiSbiq0VPWRFJKwzmUg=
k8s.io/apiextensions-apiserver v0.18.3/go.mod h1:TMsNGs7DYpMXd+8MOCX8KzPOCx8fnZMoIGB24m03+JE=
k8s.io/apiextensions-apiserver v0.26.1 h1:cB8h1SRk6e/+i3NOrQgSFij1B2S0Y0wDoNl66bn8RMI=
k8s.io/apiextensions-apiserver v0.26.1/go.mod h1:AptjOSXDGuE0JICx/Em15PaoO7buLwTs0dGleIHixSM=
k8s.io/apimachinery v0.18.3/go.mod h1:OaXp26zu/5J7p0f92ASynJa1pZo06YlV9fG7BoWbCko=
-k8s.io/apimachinery v0.26.3 h1:dQx6PNETJ7nODU3XPtrwkfuubs6w7sX0M8n61zHIV/k=
-k8s.io/apimachinery v0.26.3/go.mod h1:ats7nN1LExKHvJ9TmwootT00Yz05MuYqPXEXaVeOy5I=
+k8s.io/apimachinery v0.26.9 h1:5yAV9cFR7Z4gIorKcAjWnx4uxtxiFsERwq4Pvmx0CCg=
+k8s.io/apimachinery v0.26.9/go.mod h1:qYzLkrQ9lhrZRh0jNKo2cfvf/R1/kQONnSiyB7NUJU0=
k8s.io/apiserver v0.18.3/go.mod h1:tHQRmthRPLUtwqsOnJJMoI8SW3lnoReZeE861lH8vUw=
-k8s.io/apiserver v0.26.2 h1:Pk8lmX4G14hYqJd1poHGC08G03nIHVqdJMR0SD3IH3o=
-k8s.io/apiserver v0.26.2/go.mod h1:GHcozwXgXsPuOJ28EnQ/jXEM9QeG6HT22YxSNmpYNh8=
+k8s.io/apiserver v0.26.9 h1:G8D5XIXbhLzqdRY3FajzkKE2lt8hnAW5Vjq67mzEeR8=
+k8s.io/apiserver v0.26.9/go.mod h1:HY2TzNkDgq71jsNLyk61ZoDrpiyvujdY6kHyT9DwvtU=
k8s.io/client-go v0.18.3/go.mod h1:4a/dpQEvzAhT1BbuWW09qvIaGw6Gbu1gZYiQZIi1DMw=
-k8s.io/client-go v0.26.2 h1:s1WkVujHX3kTp4Zn4yGNFK+dlDXy1bAAkIl+cFAiuYI=
-k8s.io/client-go v0.26.2/go.mod h1:u5EjOuSyBa09yqqyY7m3abZeovO/7D/WehVVlZ2qcqU=
+k8s.io/client-go v0.26.9 h1:TGWi/6guEjIgT0Hg871Gsmx0qFuoGyGFjlFedrk7It0=
+k8s.io/client-go v0.26.9/go.mod h1:tU1FZS0bwAmAFyPYpZycUQrQnUMzQ5MHloop7EbX6ow=
k8s.io/code-generator v0.18.3/go.mod h1:TgNEVx9hCyPGpdtCWA34olQYLkh3ok9ar7XfSsr8b6c=
k8s.io/component-base v0.18.3/go.mod h1:bp5GzGR0aGkYEfTj+eTY0AN/vXTgkJdQXjNTTVUaa3k=
-k8s.io/component-base v0.26.2 h1:IfWgCGUDzrD6wLLgXEstJKYZKAFS2kO+rBRi0p3LqcI=
-k8s.io/component-base v0.26.2/go.mod h1:DxbuIe9M3IZPRxPIzhch2m1eT7uFrSBJUBuVCQEBivs=
+k8s.io/component-base v0.26.9 h1:qQVdQgyEIUe8EUkB3EEuQ9l5sgVlG2KgOB519yWEBGw=
+k8s.io/component-base v0.26.9/go.mod h1:3WmW9lH9tbjpuvpAc22cPF/6C3VxCjMxkOU1j2mpzr8=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/gengo v0.0.0-20200114144118-36b2048a9120/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
k8s.io/klog v0.0.0-20181102134211-b9b56d5dfc92/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
diff --git a/operator/internal/manifests/internal/config/build_test.go b/operator/internal/manifests/internal/config/build_test.go
index a7d4a54b41086..2972b15377950 100644
--- a/operator/internal/manifests/internal/config/build_test.go
+++ b/operator/internal/manifests/internal/config/build_test.go
@@ -4349,6 +4349,7 @@ overrides:
require.YAMLEq(t, expCfg, string(cfg))
require.YAMLEq(t, expRCfg, string(rCfg))
}
+
func TestBuild_ConfigAndRuntimeConfig_WithS3SSES3(t *testing.T) {
expCfg := `
---
type: operator
masked_commit_message: Update tools and dependencies (#10795)

hash: 0ab1b28812ec44a9ece076c5144992f2bc69a8a6
date: 2020-04-30 02:20:40
author: Ed Welch
commit_message: loki: Improve logging and add metrics to streams dropped by stream limit (#2012)
is_merge: false
git_diff:
diff --git a/pkg/ingester/instance.go b/pkg/ingester/instance.go
index 7d05eb97608c2..afcbc5d72526f 100644
--- a/pkg/ingester/instance.go
+++ b/pkg/ingester/instance.go
@@ -2,6 +2,7 @@ package ingester
import (
"context"
+ "github.com/grafana/loki/pkg/util/validation"
"net/http"
"sync"
"time"
@@ -129,13 +130,8 @@ func (i *instance) Push(ctx context.Context, req *logproto.PushRequest) error {
var appendErr error
for _, s := range req.Streams {
- labels, err := util.ToClientLabels(s.Labels)
- if err != nil {
- appendErr = err
- continue
- }
- stream, err := i.getOrCreateStream(labels)
+ stream, err := i.getOrCreateStream(s)
if err != nil {
appendErr = err
continue
@@ -153,7 +149,11 @@ func (i *instance) Push(ctx context.Context, req *logproto.PushRequest) error {
return appendErr
}
-func (i *instance) getOrCreateStream(labels []client.LabelAdapter) (*stream, error) {
+func (i *instance) getOrCreateStream(pushReqStream *logproto.Stream) (*stream, error) {
+ labels, err := util.ToClientLabels(pushReqStream.Labels)
+ if err != nil {
+ return nil, httpgrpc.Errorf(http.StatusBadRequest, err.Error())
+ }
rawFp := client.FastFingerprint(labels)
fp := i.mapper.mapFP(rawFp, labels)
@@ -162,8 +162,14 @@ func (i *instance) getOrCreateStream(labels []client.LabelAdapter) (*stream, err
return stream, nil
}
- err := i.limiter.AssertMaxStreamsPerUser(i.instanceID, len(i.streams))
+ err = i.limiter.AssertMaxStreamsPerUser(i.instanceID, len(i.streams))
if err != nil {
+ validation.DiscardedSamples.WithLabelValues(validation.StreamLimit, i.instanceID).Add(float64(len(pushReqStream.Entries)))
+ bytes := 0
+ for _, e := range pushReqStream.Entries {
+ bytes += len(e.Line)
+ }
+ validation.DiscardedBytes.WithLabelValues(validation.StreamLimit, i.instanceID).Add(float64(bytes))
return nil, httpgrpc.Errorf(http.StatusTooManyRequests, err.Error())
}
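
For context on the metric calls added above: `DiscardedSamples` and `DiscardedBytes` are Prometheus counter vectors keyed by discard reason and tenant. A minimal sketch of that pattern using prometheus/client_golang — the metric and helper names here are illustrative, not the exact ones Loki's validation package registers:

package validationsketch

import "github.com/prometheus/client_golang/prometheus"

// Counter vector keyed by discard reason and tenant, mirroring the calls above.
var discardedSamples = prometheus.NewCounterVec(prometheus.CounterOpts{
	Namespace: "loki",
	Name:      "discarded_samples_total",
	Help:      "Total number of log entries discarded, partitioned by reason and tenant.",
}, []string{"reason", "tenant"})

func init() {
	prometheus.MustRegister(discardedSamples)
}

// recordStreamLimitDiscard adds one increment per rejected entry, matching
// how the ingester accounts for a stream that hit the active-stream limit.
func recordStreamLimitDiscard(tenant string, entries []string) {
	discardedSamples.WithLabelValues("stream_limit", tenant).Add(float64(len(entries)))
}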
diff --git a/pkg/ingester/instance_test.go b/pkg/ingester/instance_test.go
index 50bfed5473882..c425672e207f2 100644
--- a/pkg/ingester/instance_test.go
+++ b/pkg/ingester/instance_test.go
@@ -10,8 +10,6 @@ import (
"github.com/prometheus/prometheus/pkg/labels"
- "github.com/grafana/loki/pkg/util"
-
"github.com/grafana/loki/pkg/chunkenc"
"github.com/grafana/loki/pkg/logproto"
@@ -124,15 +122,12 @@ func TestSyncPeriod(t *testing.T) {
result = append(result, logproto.Entry{Timestamp: tt, Line: fmt.Sprintf("hello %d", i)})
tt = tt.Add(time.Duration(1 + rand.Int63n(randomStep.Nanoseconds())))
}
-
- err = inst.Push(context.Background(), &logproto.PushRequest{Streams: []*logproto.Stream{{Labels: lbls, Entries: result}}})
- require.NoError(t, err)
-
- // let's verify results.
- ls, err := util.ToClientLabels(lbls)
+ pr := &logproto.PushRequest{Streams: []*logproto.Stream{{Labels: lbls, Entries: result}}}
+ err = inst.Push(context.Background(), pr)
require.NoError(t, err)
- s, err := inst.getOrCreateStream(ls)
+ // let's verify results
+ s, err := inst.getOrCreateStream(pr.Streams[0])
require.NoError(t, err)
// make sure each chunk spans max 'sync period' time
diff --git a/pkg/ingester/limiter.go b/pkg/ingester/limiter.go
index 5f1b52002754e..382c1c0be70bb 100644
--- a/pkg/ingester/limiter.go
+++ b/pkg/ingester/limiter.go
@@ -8,7 +8,7 @@ import (
)
const (
- errMaxStreamsPerUserLimitExceeded = "per-user streams limit (local: %d global: %d actual local: %d) exceeded"
+ errMaxStreamsPerUserLimitExceeded = "tenant '%v' per-user streams limit exceeded, streams: %d exceeds calculated limit: %d (local limit: %d, global limit: %d, global/ingesters: %d)"
)
// RingCount is the interface exposed by a ring implementation which allows
@@ -37,32 +37,28 @@ func NewLimiter(limits *validation.Overrides, ring RingCount, replicationFactor
// AssertMaxStreamsPerUser ensures limit has not been reached compared to the current
// number of streams in input and returns an error if so.
func (l *Limiter) AssertMaxStreamsPerUser(userID string, streams int) error {
- actualLimit := l.maxStreamsPerUser(userID)
- if streams < actualLimit {
- return nil
- }
-
- localLimit := l.limits.MaxLocalStreamsPerUser(userID)
- globalLimit := l.limits.MaxGlobalStreamsPerUser(userID)
-
- return fmt.Errorf(errMaxStreamsPerUserLimitExceeded, localLimit, globalLimit, actualLimit)
-}
-
-func (l *Limiter) maxStreamsPerUser(userID string) int {
+ // Start by setting the local limit either from override or default
localLimit := l.limits.MaxLocalStreamsPerUser(userID)
// We can assume that streams are evenly distributed across ingesters
// so we do convert the global limit into a local limit
globalLimit := l.limits.MaxGlobalStreamsPerUser(userID)
- localLimit = l.minNonZero(localLimit, l.convertGlobalToLocalLimit(globalLimit))
+ adjustedGlobalLimit := l.convertGlobalToLocalLimit(globalLimit)
+
+ // Set the calculated limit to the lesser of the local limit or the new calculated global limit
+ calculatedLimit := l.minNonZero(localLimit, adjustedGlobalLimit)
// If both the local and global limits are disabled, we just
// use the largest int value
- if localLimit == 0 {
- localLimit = math.MaxInt32
+ if calculatedLimit == 0 {
+ calculatedLimit = math.MaxInt32
+ }
+
+ if streams < calculatedLimit {
+ return nil
}
- return localLimit
+ return fmt.Errorf(errMaxStreamsPerUserLimitExceeded, userID, streams, calculatedLimit, localLimit, globalLimit, adjustedGlobalLimit)
}
func (l *Limiter) convertGlobalToLocalLimit(globalLimit int) int {
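
The hunk above ends at the signature of `convertGlobalToLocalLimit`, whose body is not part of this diff. From the comment ("streams are evenly distributed across ingesters") and the test expectations below — a global limit of 1000 with 10 ingesters yields 100 at replication factor 1 and 300 at replication factor 3 — the conversion presumably looks like this sketch (inferred, not copied from Loki's source):

// Each stream is replicated replicationFactor times across ingesterCount
// ingesters, so a single ingester's share of the global limit is:
func convertGlobalToLocalLimit(globalLimit, ingesterCount, replicationFactor int) int {
	if globalLimit == 0 || ingesterCount == 0 {
		return 0
	}
	// e.g. 1000 global / 10 ingesters * RF 3 = 300 streams per ingester
	return int(float64(globalLimit) / float64(ingesterCount) * float64(replicationFactor))
}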
diff --git a/pkg/ingester/limiter_test.go b/pkg/ingester/limiter_test.go
index c01a06862824d..e43e65d74b205 100644
--- a/pkg/ingester/limiter_test.go
+++ b/pkg/ingester/limiter_test.go
@@ -11,112 +11,86 @@ import (
"github.com/grafana/loki/pkg/util/validation"
)
-func TestLimiter_maxStreamsPerUser(t *testing.T) {
+func TestLimiter_AssertMaxStreamsPerUser(t *testing.T) {
tests := map[string]struct {
maxLocalStreamsPerUser int
maxGlobalStreamsPerUser int
ringReplicationFactor int
ringIngesterCount int
- expected int
+ streams int
+ expected error
}{
+ "both local and global limit are disabled": {
+ maxLocalStreamsPerUser: 0,
+ maxGlobalStreamsPerUser: 0,
+ ringReplicationFactor: 1,
+ ringIngesterCount: 1,
+ streams: 100,
+ expected: nil,
+ },
+ "current number of streams is below the limit": {
+ maxLocalStreamsPerUser: 0,
+ maxGlobalStreamsPerUser: 1000,
+ ringReplicationFactor: 3,
+ ringIngesterCount: 10,
+ streams: 299,
+ expected: nil,
+ },
+ "current number of streams is above the limit": {
+ maxLocalStreamsPerUser: 0,
+ maxGlobalStreamsPerUser: 1000,
+ ringReplicationFactor: 3,
+ ringIngesterCount: 10,
+ streams: 300,
+ expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, "test", 300, 300, 0, 1000, 300),
+ },
"both local and global limits are disabled": {
maxLocalStreamsPerUser: 0,
maxGlobalStreamsPerUser: 0,
ringReplicationFactor: 1,
ringIngesterCount: 1,
- expected: math.MaxInt32,
+ streams: math.MaxInt32 - 1,
+ expected: nil,
},
"only local limit is enabled": {
maxLocalStreamsPerUser: 1000,
maxGlobalStreamsPerUser: 0,
ringReplicationFactor: 1,
ringIngesterCount: 1,
- expected: 1000,
+ streams: 3000,
+ expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, "test", 3000, 1000, 1000, 0, 0),
},
"only global limit is enabled with replication-factor=1": {
maxLocalStreamsPerUser: 0,
maxGlobalStreamsPerUser: 1000,
ringReplicationFactor: 1,
ringIngesterCount: 10,
- expected: 100,
+ streams: 3000,
+ expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, "test", 3000, 100, 0, 1000, 100),
},
"only global limit is enabled with replication-factor=3": {
maxLocalStreamsPerUser: 0,
maxGlobalStreamsPerUser: 1000,
ringReplicationFactor: 3,
ringIngesterCount: 10,
- expected: 300,
+ streams: 3000,
+ expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, "test", 3000, 300, 0, 1000, 300),
},
"both local and global limits are set with local limit < global limit": {
maxLocalStreamsPerUser: 150,
maxGlobalStreamsPerUser: 1000,
ringReplicationFactor: 3,
ringIngesterCount: 10,
- expected: 150,
+ streams: 3000,
+ expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, "test", 3000, 150, 150, 1000, 300),
},
"both local and global limits are set with local limit > global limit": {
maxLocalStreamsPerUser: 500,
maxGlobalStreamsPerUser: 1000,
ringReplicationFactor: 3,
ringIngesterCount: 10,
- expected: 300,
- },
- }
-
- for testName, testData := range tests {
- testData := testData
-
- t.Run(testName, func(t *testing.T) {
- // Mock the ring
- ring := &ringCountMock{count: testData.ringIngesterCount}
-
- // Mock limits
- limits, err := validation.NewOverrides(validation.Limits{
- MaxLocalStreamsPerUser: testData.maxLocalStreamsPerUser,
- MaxGlobalStreamsPerUser: testData.maxGlobalStreamsPerUser,
- }, nil)
- require.NoError(t, err)
-
- limiter := NewLimiter(limits, ring, testData.ringReplicationFactor)
- actual := limiter.maxStreamsPerUser("test")
-
- assert.Equal(t, testData.expected, actual)
- })
- }
-}
-
-func TestLimiter_AssertMaxStreamsPerUser(t *testing.T) {
- tests := map[string]struct {
- maxLocalStreamsPerUser int
- maxGlobalStreamsPerUser int
- ringReplicationFactor int
- ringIngesterCount int
- streams int
- expected error
- }{
- "both local and global limit are disabled": {
- maxLocalStreamsPerUser: 0,
- maxGlobalStreamsPerUser: 0,
- ringReplicationFactor: 1,
- ringIngesterCount: 1,
- streams: 100,
- expected: nil,
- },
- "current number of streams is below the limit": {
- maxLocalStreamsPerUser: 0,
- maxGlobalStreamsPerUser: 1000,
- ringReplicationFactor: 3,
- ringIngesterCount: 10,
- streams: 299,
- expected: nil,
- },
- "current number of streams is above the limit": {
- maxLocalStreamsPerUser: 0,
- maxGlobalStreamsPerUser: 1000,
- ringReplicationFactor: 3,
- ringIngesterCount: 10,
- streams: 300,
- expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, 0, 1000, 300),
+ streams: 3000,
+ expected: fmt.Errorf(errMaxStreamsPerUserLimitExceeded, "test", 3000, 300, 500, 1000, 300),
},
}
diff --git a/pkg/util/validation/validate.go b/pkg/util/validation/validate.go
index 9293a989c17fb..97f54caa15e20 100644
--- a/pkg/util/validation/validate.go
+++ b/pkg/util/validation/validate.go
@@ -10,6 +10,9 @@ const (
RateLimited = "rate_limited"
// LineTooLong is a reason for discarding too long log lines.
LineTooLong = "line_too_long"
+ // StreamLimit is a reason for discarding lines when we can't create a new stream
+ // because the limit of active streams has been reached.
+ StreamLimit = "stream_limit"
)
// DiscardedBytes is a metric of the total discarded bytes, by reason.
type: loki
masked_commit_message: Improve logging and add metrics to streams dropped by stream limit (#2012)

hash: adf08ac32dc6f50b07662469c8fb01648b07a1f4
date: 2024-09-23 18:23:38
author: J Stickler
commit_message: docs: Explore Logs GA (#14198)
is_merge: false
git_diff:
diff --git a/docs/sources/release-notes/v3-2.md b/docs/sources/release-notes/v3-2.md
index ac4eb2aea4b33..94655ecbc690b 100644
--- a/docs/sources/release-notes/v3-2.md
+++ b/docs/sources/release-notes/v3-2.md
@@ -6,7 +6,7 @@ weight: 10
# v3.2
-Grafana Labs and the Loki team are excited to announce the release of Loki 3.2. Here's a summary of new enhancements and important fixes.
+Grafana Labs and the Loki team are excited to announce the release of Loki 3.2. Explore Logs is also now Generally Available. Upgrade to Loki/GEL 3.2 to get the best possible experience with Explore Logs. Here's a summary of new enhancements and important fixes.
For a full list of all changes and fixes, refer to the [CHANGELOG](https://github.com/grafana/loki/blob/release-3.2.x/CHANGELOG.md).
@@ -20,15 +20,15 @@ Key features in Loki 3.2.0 include the following:
- **distributor:** Ignore empty streams in distributor if all entries fail validation ([#13674](https://github.com/grafana/loki/issues/13674)) ([6c4b062](https://github.com/grafana/loki/commit/6c4b0622aa3de44cccb76fe16bb6583bf91cf15c)), and limit to block ingestion until configured date ([#13958](https://github.com/grafana/loki/issues/13958)) ([b5ac6a0](https://github.com/grafana/loki/commit/b5ac6a0258be51a6d6c3a7743e498dc40014b64b)).
-- **Explore Logs** Is now Generally Available (GA). This release includes enhancements to add _extracted suffix to detected fields conflicts ([#13993](https://github.com/grafana/loki/issues/13993)) ([ab1caea](https://github.com/grafana/loki/commit/ab1caea12325b5db777101347acf4f277312adf6)), collect and serve pre-aggregated bytes and counts ([#13020](https://github.com/grafana/loki/issues/13020)) ([467eb1b](https://github.com/grafana/loki/commit/467eb1bb1b08fa69e3d5e40a1e0143f65230ad2b)), and remove cardinality filter ([#13652](https://github.com/grafana/loki/issues/13652)) ([4f534d7](https://github.com/grafana/loki/commit/4f534d7317fa0557251f16b76ebf790f079cf98e)).
+- **Explore Logs** Is now Generally Available (GA). For the best experience, you should be on Grafana 11.2 or later and Loki 3.2. This release includes enhancements to add _extracted suffix to detected fields conflicts ([#13993](https://github.com/grafana/loki/issues/13993)) ([ab1caea](https://github.com/grafana/loki/commit/ab1caea12325b5db777101347acf4f277312adf6)), collect and serve pre-aggregated bytes and counts ([#13020](https://github.com/grafana/loki/issues/13020)) ([467eb1b](https://github.com/grafana/loki/commit/467eb1bb1b08fa69e3d5e40a1e0143f65230ad2b)), and remove cardinality filter ([#13652](https://github.com/grafana/loki/issues/13652)) ([4f534d7](https://github.com/grafana/loki/commit/4f534d7317fa0557251f16b76ebf790f079cf98e)).
-- **Helm:** This release includes updates to the Helm charts to make gateway container port configurable. ([#13294](https://github.com/grafana/loki/issues/13294)) ([05176e4](https://github.com/grafana/loki/commit/05176e445b90597379c268e799b0fb86b8629b9e)) and to support alibabacloud oss in the Helm chart ([#13441](https://github.com/grafana/loki/issues/13441)) ([3ebab6f](https://github.com/grafana/loki/commit/3ebab6f3931841f62ac59e6b09afef98db656c71)). It also includes a **breaking change** to the Helm chart to support distributed mode and 3.0 ([#12067](https://github.com/grafana/loki/issues/12067)).
+- **Helm:** This release includes updates to the Helm charts to make gateway container port configurable. ([#13294](https://github.com/grafana/loki/issues/13294)) ([05176e4](https://github.com/grafana/loki/commit/05176e445b90597379c268e799b0fb86b8629b9e)) and to support alibabacloud oss in the Helm chart ([#13441](https://github.com/grafana/loki/issues/13441)) ([3ebab6f](https://github.com/grafana/loki/commit/3ebab6f3931841f62ac59e6b09afef98db656c71)). It also includes a **breaking change** to the Helm chart to support distributed mode and 3.0 ([#12067](https://github.com/grafana/loki/issues/12067)).
- **ingester:** Ingester Stream Limit Improvements: Ingester stream limits now take into account "owned streams" and periodically update when the Ingester ring is changed. Non-owned streams are now also flushed when this update takes place. The stream limit calculation has also been updated for improved accuracy in multi-zone ingester deployments. ([#13532](https://github.com/grafana/loki/issues/13532)) ([ec34aaa](https://github.com/grafana/loki/commit/ec34aaa1ff2e616ef223631657b63f7dffedd3cc)).
- **lambda-promtail:** Add S3 log parser support for AWS GuardDuty ([#13148](https://github.com/grafana/loki/issues/13148)) ([2d92fff](https://github.com/grafana/loki/commit/2d92fff2aa4dbda5f9f8c18ea19347e1236257af)), build lambda with zip file ([#13787](https://github.com/grafana/loki/issues/13787)) ([9bf08f7](https://github.com/grafana/loki/commit/9bf08f7cc055db1997c439ef8edb11247c4e1d67)), and ensure messages to Kinesis are usable by refactoring parsing of KinesisEvent to match parsing of CWEvents + code cleanup ([#13098](https://github.com/grafana/loki/issues/13098)) ([dbfb19b](https://github.com/grafana/loki/commit/dbfb19be49fb3bc1f2f62613f50370028cbf5552)).
-- **loki:** Add ability to disable AWS S3 dualstack endpoints usage ([#13785](https://github.com/grafana/loki/issues/13785)) ([bb257f5](https://github.com/grafana/loki/commit/bb257f54b33ecb04cbe1786c4efac779d8d28d8c)), not enforce max-query-bytes-read and max-querier-bytes-read in limited tripperware ([#13406](https://github.com/grafana/loki/issues/13406)) ([47f6ea5](https://github.com/grafana/loki/commit/47f6ea53fc4816b259bce4ce4efddee377422d3c)), and upgrade Prometheus ([#13671](https://github.com/grafana/loki/issues/13671)) ([b88583d](https://github.com/grafana/loki/commit/b88583da7d3cc840d4b66698de042773422e334d)).
+- **loki:** Add ability to disable AWS S3 dualstack endpoints usage([#13785](https://github.com/grafana/loki/issues/13785)) ([bb257f5](https://github.com/grafana/loki/commit/bb257f54b33ecb04cbe1786c4efac779d8d28d8c)), not enforce max-query-bytes-read and max-querier-bytes-read in limited tripperware ([#13406](https://github.com/grafana/loki/issues/13406)) ([47f6ea5](https://github.com/grafana/loki/commit/47f6ea53fc4816b259bce4ce4efddee377422d3c)), and upgrade Prometheus ([#13671](https://github.com/grafana/loki/issues/13671)) ([b88583d](https://github.com/grafana/loki/commit/b88583da7d3cc840d4b66698de042773422e334d)).
- **operator:** Add alert for discarded samples ([#13512](https://github.com/grafana/loki/issues/13512)) ([5f2a02f](https://github.com/grafana/loki/commit/5f2a02f14222dab891b7851e8f48052d6c9b594a)), add support for the volume API ([#13369](https://github.com/grafana/loki/issues/13369)) ([d451e23](https://github.com/grafana/loki/commit/d451e23225047a11b4d5d82900cec4a46d6e7b39)), enable leader-election ([#13760](https://github.com/grafana/loki/issues/13760)) ([1ba4bff](https://github.com/grafana/loki/commit/1ba4bff005930b173391df35248e6f58e076fa74)), and update Loki operand to v3.1.0 ([#13422](https://github.com/grafana/loki/issues/13422)) ([cf5f52d](https://github.com/grafana/loki/commit/cf5f52dca0db93847218cdd2c3f4860d983381ae)).
@@ -38,7 +38,7 @@ Key features in Loki 3.2.0 include the following:
Other improvements include the following:
-- **chunks-inspect:** Support structured metadata ([#11506](https://github.com/grafana/loki/issues/11506)) ([1834065](https://github.com/grafana/loki/commit/183406570411a5ad5ceaf32bf07451b8fce608c1)).
+- **chunks-inspect:** Support structured metadata ([#11506](https://github.com/grafana/loki/issues/11506)) ([1834065](https://github.com/grafana/loki/commit/183406570411a5ad5ceaf32bf07451b8fce608c1)).
- **exporter:** Include boolean values in limit exporter ([#13466](https://github.com/grafana/loki/issues/13466)) ([4220737](https://github.com/grafana/loki/commit/4220737a52da7ab6c9346b12d5a5d7bedbcd641d)).
- **mempool:** Replace `sync.Mutex` with `sync.Once` ([#13293](https://github.com/grafana/loki/issues/13293)) ([61a9854](https://github.com/grafana/loki/commit/61a9854eb189e5d2c91528ced10ecf39071df680)).
- **metrics:** Collect duplicate log line metrics ([#13084](https://github.com/grafana/loki/issues/13084)) ([40ee766](https://github.com/grafana/loki/commit/40ee7667244f2e094b5a7199705b4f3dacb7ffaf)).
@@ -92,7 +92,7 @@ Out of an abundance of caution, we advise that users with Loki or Grafana Enterp
- **blooms:** Suppress error from resolving server addresses for blocks ([#13385](https://github.com/grafana/loki/issues/13385)) ([3ac2317](https://github.com/grafana/loki/commit/3ac231728e6bc9d3166684bcb697c78b4fb56fae)).
- **blooms:** Use correct key to populate blockscache at startup ([#13624](https://github.com/grafana/loki/issues/13624)) ([2624a4b](https://github.com/grafana/loki/commit/2624a4bdd43badcd1159b83e26c1b0ff14479ac0)).
- **blooms:** Fix log line for fingerprint not found ([#13555](https://github.com/grafana/loki/issues/13555)) ([aeb23bb](https://github.com/grafana/loki/commit/aeb23bb7fc3d33327060828ddf97cb7da7b3c8f8)).
-- **blooms:** Fix panic in BloomStore initialization ([#13457](https://github.com/grafana/loki/issues/13457)) ([5f4b8fc](https://github.com/grafana/loki/commit/5f4b8fc9e44ac386ef5bfc64dd5f8f47b72f8ef9)).
+- **blooms:** Fix panic in BloomStore initialization ([#13457](https://github.com/grafana/loki/issues/13457)) ([5f4b8fc](https://github.com/grafana/loki/commit/5f4b8fc9e44ac386ef5bfc64dd5f8f47b72f8ef9)).
- **blooms:** Flaky test blockPlansForGaps ([#13743](https://github.com/grafana/loki/issues/13743)) ([37e33d4](https://github.com/grafana/loki/commit/37e33d41b4583626a0384e4eb4c4570d3ef11882)).
- **blooms:** Keep blocks referenced by newer metas ([#13614](https://github.com/grafana/loki/issues/13614)) ([784e7d5](https://github.com/grafana/loki/commit/784e7d562fedec7134c8ed4e2cee8ccb7049e271)).
- **blooms:** Lint issues after merge to main ([#13326](https://github.com/grafana/loki/issues/13326)) ([7e19cc7](https://github.com/grafana/loki/commit/7e19cc7dca8480932b39c87c7c2e296f99318c95)).
@@ -136,7 +136,7 @@ Out of an abundance of caution, we advise that users with Loki or Grafana Enterp
- **detected fields:** Remove query size limit for detected fields ([#13423](https://github.com/grafana/loki/issues/13423)) ([1fa5127](https://github.com/grafana/loki/commit/1fa51277978ead6569e31e908dec7f140dadb90f)).
- **detected labels:** Response when store label values are empty ([#13970](https://github.com/grafana/loki/issues/13970)) ([6f99af6](https://github.com/grafana/loki/commit/6f99af62227f98c7d9de8a5cf480ae792ce6220a)).
- **detected_labels:** Add matchers to get labels from store ([#14012](https://github.com/grafana/loki/issues/14012)) ([25234e8](https://github.com/grafana/loki/commit/25234e83483cb8a974d40b7c80b3d4dd62d6d880)).
-- **detected_labels:** remove limit middleware for `detected_labels` ([#13643](https://github.com/grafana/loki/issues/13643)) ([2642718](https://github.com/grafana/loki/commit/2642718d50569931b71cfc0c9288318ab775ca41)).
+- **detected_labels:** Remove limit middleware for `detected_labels` ([#13643](https://github.com/grafana/loki/issues/13643)) ([2642718](https://github.com/grafana/loki/commit/2642718d50569931b71cfc0c9288318ab775ca41)).
- **docs:** Fixed typo in ruler URL ([#13692](https://github.com/grafana/loki/issues/13692)) ([1476498](https://github.com/grafana/loki/commit/14764989a2c6f01803f0313d8151f7aa20affd4a)).
- **docs:** Remove trailing backtick in verify-config for Loki 3.0 ([#13640](https://github.com/grafana/loki/issues/13640)) ([498f29a](https://github.com/grafana/loki/commit/498f29a66b2dbfeff85454f22d0596d20066a635)).
- **Helm:** Fix HPA ingester typo ([#13158](https://github.com/grafana/loki/issues/13158)) ([4ca9785](https://github.com/grafana/loki/commit/4ca97858d9dc33db7abbe20ca01c6735cb9ce34e)).
@@ -155,7 +155,7 @@ Out of an abundance of caution, we advise that users with Loki or Grafana Enterp
- **ingester:** Remove tenant label tagging from profiles to reduce cardinality ([#13270](https://github.com/grafana/loki/issues/13270)) ([f897758](https://github.com/grafana/loki/commit/f8977587476169197d6da4d7055b97b189808344)).
- **ingester:** Stream ownership check ([#13314](https://github.com/grafana/loki/issues/13314)) ([5ae5b31](https://github.com/grafana/loki/commit/5ae5b31b1f9ffcac9193cfd4ba47a64d911966db)).
- **ingester:** Support multi-zone ingesters when converting global to local limits for streams in limiter.go ([#13321](https://github.com/grafana/loki/issues/13321)) ([e28c15f](https://github.com/grafana/loki/commit/e28c15f56c2aab62eecbaa382055eac99fc3a581)).
-- **ingester:** Update fixed limit once streams ownership re-checked ([#13231](https://github.com/grafana/loki/issues/13231)) ([7ac19f0](https://github.com/grafana/loki/commit/7ac19f00b4f5186b0c38a8dad23cf61e14d071de)).
+- **ingester:** Update fixed limit once streams ownership re-checked ([#13231](https://github.com/grafana/loki/issues/13231)) ([7ac19f0](https://github.com/grafana/loki/commit/7ac19f00b4f5186b0c38a8dad23cf61e14d071de)).
- **LogQL:** AST left circular reference result in out of memory ([#13501](https://github.com/grafana/loki/issues/13501)) ([6dd6b65](https://github.com/grafana/loki/commit/6dd6b65139b3b8d4254f114e99ab8fb3eaa2ae09)).
- **LogQL:** Improve execution speed for queries with label filters ([#13922](https://github.com/grafana/loki/issues/13922)) ([40f4f14](https://github.com/grafana/loki/commit/40f4f1479170a90b39c005292e11a3ec4db4bc34)).
- **LogQL:** Panic when parsing and extracting JSON key values ([#13790](https://github.com/grafana/loki/issues/13790)) ([5ef83a7](https://github.com/grafana/loki/commit/5ef83a741ba515f68343e9dc345fcb8afe921bfd)).
@@ -177,7 +177,7 @@ Out of an abundance of caution, we advise that users with Loki or Grafana Enterp
- **operator:** Support v3.1.0 in OpenShift dashboards ([#13430](https://github.com/grafana/loki/issues/13430)) ([8279d59](https://github.com/grafana/loki/commit/8279d59f145df9c9132aeff9e3d46c738650027c)).
- **operator:** Watch for CredentialsRequests on CCOAuthEnv only ([#13299](https://github.com/grafana/loki/issues/13299)) ([7fc926e](https://github.com/grafana/loki/commit/7fc926e36ea8fca7bd8e9955c8994574535dbbae)).
- **querier:** Add a retry middleware to all the stats handlers ([#13584](https://github.com/grafana/loki/issues/13584)) ([7232795](https://github.com/grafana/loki/commit/7232795e1f5fb1868c83111f5aab72ca0f3d9891)).
-- **querier:** Adjust tailer loop criteria so it is actually re-tested ([#13906](https://github.com/grafana/loki/issues/13906)) ([dabbfd8](https://github.com/grafana/loki/commit/dabbfd81ef5c4f02a255b404ab25edd1eec126cf)).
+- **querier:** Adjust tailer loop criteria so it is actually re-tested ([#13906](https://github.com/grafana/loki/issues/13906)) ([dabbfd8](https://github.com/grafana/loki/commit/dabbfd81ef5c4f02a255b404ab25edd1eec126cf)).
- **querier:** Fix retry code to handle grpc status codes. updated newer stats retries to be wrapped with spans ([#13592](https://github.com/grafana/loki/issues/13592)) ([d3e1edb](https://github.com/grafana/loki/commit/d3e1edbf1102b2f0f4116c3bb1773000d0368dde)).
- **querier:** Fixes span name of serializeRounTripper ([#13541](https://github.com/grafana/loki/issues/13541)) ([4451d56](https://github.com/grafana/loki/commit/4451d56d6b9a9d2eb54ed75d3d2c8fe0db6908eb)).
- **querier:** Remove retries on the stats handlers because they already retry ([#13608](https://github.com/grafana/loki/issues/13608)) ([1008315](https://github.com/grafana/loki/commit/10083159a7e54df4e41efe2fc2e04e267fee1147)).
|
docs
|
Explore Logs GA (#14198)
|
a250055806a1d1566de7560f4022b6c0f6bc78b9
|
2025-02-27 12:56:03
|
Trevor Whitney
|
test: fix flaky multi extractor (#16484)
| false
|
diff --git a/pkg/chunkenc/memchunk_test.go b/pkg/chunkenc/memchunk_test.go
index 27abcdd96467c..be2c2b8bd1f55 100644
--- a/pkg/chunkenc/memchunk_test.go
+++ b/pkg/chunkenc/memchunk_test.go
@@ -248,26 +248,55 @@ func TestBlock(t *testing.T) {
require.NoError(t, sampleIt.Close())
require.Equal(t, len(cases), idx)
t.Run("multi-extractor", func(t *testing.T) {
- t.Skip("TODO(trevor): fix this")
- extractors := []log.StreamSampleExtractor{countExtractor, bytesExtractor}
+ // Wrap extractors in variant extractors so they get a variant index we can use later for differentiating counts and bytes
+ extractors := []log.StreamSampleExtractor{
+ log.NewVariantsStreamSampleExtractorWrapper(0, countExtractor),
+ log.NewVariantsStreamSampleExtractorWrapper(1, bytesExtractor),
+ }
sampleIt = chk.SampleIterator(context.Background(), time.Unix(0, 0), time.Unix(0, math.MaxInt64), extractors...)
idx = 0
+ // variadic arguments can't guarantee order, so we're going to store the expected and actual values
+ // and do an ElementsMatch on them.
+ var actualCounts = make([]float64, 0, len(cases))
+ var actualBytes = make([]float64, 0, len(cases))
+
+ var expectedCounts = make([]float64, 0, len(cases))
+ var expectedBytes = make([]float64, 0, len(cases))
+ for _, c := range cases {
+ expectedCounts = append(expectedCounts, 1.)
+ expectedBytes = append(expectedBytes, c.bytes)
+ }
+
// 2 extractors, expect 2 samples per original timestamp
for sampleIt.Next() {
s := sampleIt.At()
require.Equal(t, cases[idx].ts, s.Timestamp)
- require.Equal(t, 1., s.Value)
require.NotEmpty(t, s.Hash)
+ lbls := sampleIt.Labels()
+ if strings.Contains(lbls, `__variant__="0"`) {
+ actualCounts = append(actualCounts, s.Value)
+ } else {
+ actualBytes = append(actualBytes, s.Value)
+ }
require.True(t, sampleIt.Next())
s = sampleIt.At()
require.Equal(t, cases[idx].ts, s.Timestamp)
- require.Equal(t, cases[idx].bytes, s.Value)
require.NotEmpty(t, s.Hash)
+ lbls = sampleIt.Labels()
+ if strings.Contains(lbls, `__variant__="0"`) {
+ actualCounts = append(actualCounts, s.Value)
+ } else {
+ actualBytes = append(actualBytes, s.Value)
+ }
+
idx++
}
+ require.ElementsMatch(t, expectedCounts, actualCounts)
+ require.ElementsMatch(t, expectedBytes, actualBytes)
+
require.NoError(t, sampleIt.Err())
require.NoError(t, sampleIt.Close())
require.Equal(t, len(cases), idx)
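
The order-insensitive assertion pattern above can be reduced to a minimal standalone sketch: bucket samples by the `__variant__` label, then compare bucket contents ignoring order. The `sample` type, the label strings, and `matchIgnoringOrder` below are illustrative assumptions, not the test's actual types:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// sample is a hypothetical stand-in for the iterator's output; in the real
// test the values come from chk.SampleIterator with two wrapped extractors.
type sample struct {
	labels string
	value  float64
}

// matchIgnoringOrder is a plain-Go equivalent of require.ElementsMatch.
func matchIgnoringOrder(want, got []float64) bool {
	sort.Float64s(want)
	sort.Float64s(got)
	if len(want) != len(got) {
		return false
	}
	for i := range want {
		if want[i] != got[i] {
			return false
		}
	}
	return true
}

func main() {
	// Interleaved samples arrive in no guaranteed order, so route them into
	// per-extractor buckets using the __variant__ label before comparing.
	samples := []sample{
		{labels: `{foo="bar", __variant__="1"}`, value: 42},
		{labels: `{foo="bar", __variant__="0"}`, value: 1},
	}
	var counts, bytes []float64
	for _, s := range samples {
		if strings.Contains(s.labels, `__variant__="0"`) {
			counts = append(counts, s.value)
		} else {
			bytes = append(bytes, s.value)
		}
	}
	fmt.Println(matchIgnoringOrder([]float64{1}, counts)) // true
	fmt.Println(matchIgnoringOrder([]float64{42}, bytes)) // true
}
```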
|
test
|
fix flaky multi extractor (#16484)
|
9d5630d8756d487959cb6eae3c6e047c77a87eaa
|
2019-07-15 17:58:12
|
sh0rez
|
feat(logcli): output modes (#731)
| false
|
diff --git a/cmd/logcli/client.go b/cmd/logcli/client.go
index 14b53a7527741..ed4bd3d9c83ab 100644
--- a/cmd/logcli/client.go
+++ b/cmd/logcli/client.go
@@ -55,7 +55,9 @@ func listLabelValues(name string) (*logproto.LabelResponse, error) {
func doRequest(path string, out interface{}) error {
url := *addr + path
- log.Print(url)
+ if !*quiet {
+ log.Print(url)
+ }
req, err := http.NewRequest("GET", url, nil)
if err != nil {
@@ -121,7 +123,9 @@ func wsConnect(path string) (*websocket.Conn, error) {
} else if strings.HasPrefix(url, "http") {
url = strings.Replace(url, "http", "ws", 1)
}
- log.Println(url)
+ if !*quiet {
+ log.Println(url)
+ }
h := http.Header{"Authorization": {"Basic " + base64.StdEncoding.EncodeToString([]byte(*username+":"+*password))}}
diff --git a/cmd/logcli/main.go b/cmd/logcli/main.go
index 0022caa2499fe..52ce05e6af42c 100644
--- a/cmd/logcli/main.go
+++ b/cmd/logcli/main.go
@@ -8,7 +8,9 @@ import (
)
var (
- app = kingpin.New("logcli", "A command-line for loki.")
+ app = kingpin.New("logcli", "A command-line for loki.")
+ quiet = app.Flag("quiet", "suppress everything but log lines").Default("false").Short('q').Bool()
+ outputMode = app.Flag("output", "specify output mode [default, raw, jsonl]").Default("default").Short('o').Enum("default", "raw", "jsonl")
addr = app.Flag("addr", "Server address.").Default("https://logs-us-west1.grafana.net").Envar("GRAFANA_ADDR").String()
username = app.Flag("username", "Username for HTTP basic auth.").Default("").Envar("GRAFANA_USERNAME").String()
diff --git a/cmd/logcli/output.go b/cmd/logcli/output.go
new file mode 100644
index 0000000000000..d822775058f21
--- /dev/null
+++ b/cmd/logcli/output.go
@@ -0,0 +1,73 @@
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+ "log"
+ "strings"
+ "time"
+
+ "github.com/fatih/color"
+ "github.com/prometheus/prometheus/pkg/labels"
+)
+
+// Outputs maps each output mode name to its implementation
+var Outputs = map[string]LogOutput{
+ "default": &DefaultOutput{},
+ "jsonl": &JSONLOutput{},
+ "raw": &RawOutput{},
+}
+
+// LogOutput is the interface any output mode must implement
+type LogOutput interface {
+ Print(ts time.Time, lbls *labels.Labels, line string)
+}
+
+// DefaultOutput provides logs and metadata in human readable format
+type DefaultOutput struct {
+ MaxLabelsLen int
+ CommonLabels labels.Labels
+}
+
+// Print a log entry in a human readable format
+func (f DefaultOutput) Print(ts time.Time, lbls *labels.Labels, line string) {
+ ls := subtract(*lbls, f.CommonLabels)
+ if len(*ignoreLabelsKey) > 0 {
+ ls = ls.MatchLabels(false, *ignoreLabelsKey...)
+ }
+
+ labels := ""
+ if !*noLabels {
+ labels = padLabel(ls, f.MaxLabelsLen)
+ }
+ fmt.Println(
+ color.BlueString(ts.Format(time.RFC3339)),
+ color.RedString(labels),
+ strings.TrimSpace(line),
+ )
+}
+
+// JSONLOutput prints logs and metadata as JSON Lines, suitable for scripts
+type JSONLOutput struct{}
+
+// Print a log entry as a JSON line
+func (f JSONLOutput) Print(ts time.Time, lbls *labels.Labels, line string) {
+ entry := map[string]interface{}{
+ "timestamp": ts,
+ "labels": lbls,
+ "line": line,
+ }
+ out, err := json.Marshal(entry)
+ if err != nil {
+ log.Fatalf("error marshalling entry: %s", err)
+ }
+ fmt.Println(string(out))
+}
+
+// RawOutput prints logs in their original form, without any metadata
+type RawOutput struct{}
+
+// Print a log entry as is
+func (f RawOutput) Print(ts time.Time, lbls *labels.Labels, line string) {
+ fmt.Println(line)
+}
diff --git a/cmd/logcli/query.go b/cmd/logcli/query.go
index da1f57de626d4..177ef53b0de70 100644
--- a/cmd/logcli/query.go
+++ b/cmd/logcli/query.go
@@ -56,11 +56,11 @@ func doQuery() {
common = common.MatchLabels(false, *showLabelsKey...)
}
- if len(common) > 0 {
+ if len(common) > 0 && !*quiet {
log.Println("Common labels:", color.RedString(common.String()))
}
- if len(*ignoreLabelsKey) > 0 {
+ if len(*ignoreLabelsKey) > 0 && !*quiet {
log.Println("Ignoring labels key:", color.RedString(strings.Join(*ignoreLabelsKey, ",")))
}
@@ -79,19 +79,14 @@ func doQuery() {
i = iter.NewQueryResponseIterator(resp, d)
+ Outputs["default"] = DefaultOutput{
+ MaxLabelsLen: maxLabelsLen,
+ CommonLabels: common,
+ }
+
for i.Next() {
ls := labelsCache(i.Labels())
- ls = subtract(ls, common)
- if len(*ignoreLabelsKey) > 0 {
- ls = ls.MatchLabels(false, *ignoreLabelsKey...)
- }
-
- labels := ""
- if !*noLabels {
- labels = padLabel(ls, maxLabelsLen)
- }
-
- printLogEntry(i.Entry().Timestamp, labels, i.Entry().Line)
+ Outputs[*outputMode].Print(i.Entry().Timestamp, &ls, i.Entry().Line)
}
if err := i.Error(); err != nil {
diff --git a/cmd/logcli/tail.go b/cmd/logcli/tail.go
index 6349e3b29788f..94ee75b3432ec 100644
--- a/cmd/logcli/tail.go
+++ b/cmd/logcli/tail.go
@@ -55,9 +55,12 @@ func tailQuery() {
labels = stream.Labels
}
}
+
for _, entry := range stream.Entries {
- printLogEntry(entry.Timestamp, labels, entry.Line)
+ lbls := mustParseLabels(labels)
+ Outputs[*outputMode].Print(entry.Timestamp, &lbls, entry.Line)
}
+
}
if len(tailReponse.DroppedEntries) != 0 {
log.Println("Server dropped following entries due to slow client")
diff --git a/cmd/logcli/utils.go b/cmd/logcli/utils.go
index 807c3a6184873..6054f86ff6e70 100644
--- a/cmd/logcli/utils.go
+++ b/cmd/logcli/utils.go
@@ -1,27 +1,15 @@
package main
import (
- "fmt"
"log"
"sort"
"strings"
- "time"
- "github.com/fatih/color"
"github.com/grafana/loki/pkg/logproto"
"github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/promql"
)
-// print a log entry
-func printLogEntry(ts time.Time, lbls string, line string) {
- fmt.Println(
- color.BlueString(ts.Format(time.RFC3339)),
- color.RedString(lbls),
- strings.TrimSpace(line),
- )
-}
-
// add some padding after labels
func padLabel(ls labels.Labels, maxLabelsLen int) string {
labels := ls.String()
@@ -52,7 +40,7 @@ func parseLabels(resp *logproto.QueryResponse) (map[string]labels.Labels, []labe
return cache, lss
}
-// return common labels between given lavels set
+// commonLabels returns the common labels between the given label sets
func commonLabels(lss []labels.Labels) labels.Labels {
if len(lss) == 0 {
return nil
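
The `LogOutput` interface plus the `Outputs` map form a simple strategy-pattern dispatch keyed by the `--output` flag: call sites print through the selected implementation without any switch statements. A minimal sketch of the same shape, using stand-in types that are assumptions rather than logcli's actual API:

```go
package main

import (
	"fmt"
	"time"
)

// printer mirrors the LogOutput idea: one interface, many output modes.
type printer interface {
	Print(ts time.Time, line string)
}

// rawPrinter prints only the log line, like logcli's raw mode.
type rawPrinter struct{}

func (rawPrinter) Print(_ time.Time, line string) { fmt.Println(line) }

// timestampedPrinter prefixes each line with an RFC3339 timestamp.
type timestampedPrinter struct{}

func (timestampedPrinter) Print(ts time.Time, line string) {
	fmt.Println(ts.Format(time.RFC3339), line)
}

func main() {
	// Registering modes in a map lets a single flag value select the
	// implementation; adding a mode means adding one entry, not editing
	// every call site.
	modes := map[string]printer{
		"raw":     rawPrinter{},
		"default": timestampedPrinter{},
	}
	modes["default"].Print(time.Now(), "hello from logcli")
}
```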
|
feat
|
output modes (#731)
|
4443ba44534ccc3d2c5f0f1a6e47c3f2b5e7b0b1
|
2025-01-25 00:10:48
|
renovate[bot]
|
fix(deps): update module cloud.google.com/go/bigtable to v1.35.0 (main) (#15951)
| false
|
diff --git a/go.mod b/go.mod
index 196f56d8288a2..c6983f8ba6148 100644
--- a/go.mod
+++ b/go.mod
@@ -5,7 +5,7 @@ go 1.23.0
toolchain go1.23.1
require (
- cloud.google.com/go/bigtable v1.34.0
+ cloud.google.com/go/bigtable v1.35.0
cloud.google.com/go/pubsub v1.45.3
cloud.google.com/go/storage v1.50.0
dario.cat/mergo v1.0.1
@@ -203,10 +203,10 @@ require (
)
require (
- cloud.google.com/go v0.117.0 // indirect
+ cloud.google.com/go v0.118.0 // indirect
cloud.google.com/go/compute/metadata v0.6.0 // indirect
- cloud.google.com/go/iam v1.2.2 // indirect
- cloud.google.com/go/longrunning v0.6.2 // indirect
+ cloud.google.com/go/iam v1.3.1 // indirect
+ cloud.google.com/go/longrunning v0.6.4 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.16.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.7.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.10.0 // indirect
@@ -371,8 +371,8 @@ require (
golang.org/x/mod v0.22.0 // indirect
golang.org/x/term v0.28.0 // indirect
golang.org/x/tools v0.28.0 // indirect
- google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 // indirect
- google.golang.org/genproto/googleapis/api v0.0.0-20250102185135-69823020774d // indirect
+ google.golang.org/genproto v0.0.0-20241216192217-9240e9c98484 // indirect
+ google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f
gopkg.in/fsnotify/fsnotify.v1 v1.4.7 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
diff --git a/go.sum b/go.sum
index d52d76c74db90..bb6ba21689599 100644
--- a/go.sum
+++ b/go.sum
@@ -15,8 +15,8 @@ cloud.google.com/go v0.56.0/go.mod h1:jr7tqZxxKOVYizybht9+26Z/gUq7tiRzu+ACVAMbKV
cloud.google.com/go v0.57.0/go.mod h1:oXiQ6Rzq3RAkkY7N6t3TcE6jE+CIBBbA36lwQ1JyzZs=
cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOYc=
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
-cloud.google.com/go v0.117.0 h1:Z5TNFfQxj7WG2FgOGX1ekC5RiXrYgms6QscOm32M/4s=
-cloud.google.com/go v0.117.0/go.mod h1:ZbwhVTb1DBGt2Iwb3tNO6SEK4q+cplHZmLWH+DelYYc=
+cloud.google.com/go v0.118.0 h1:tvZe1mgqRxpiVa3XlIGMiPcEUbP1gNXELgD4y/IXmeQ=
+cloud.google.com/go v0.118.0/go.mod h1:zIt2pkedt/mo+DQjcT4/L3NDxzHPR29j5HcclNH+9PM=
cloud.google.com/go/auth v0.14.0 h1:A5C4dKV/Spdvxcl0ggWwWEzzP7AZMJSEIgrkngwhGYM=
cloud.google.com/go/auth v0.14.0/go.mod h1:CYsoRL1PdiDuqeQpZE0bP2pnPrGqFcOkI0nldEQis+A=
cloud.google.com/go/auth/oauth2adapt v0.2.7 h1:/Lc7xODdqcEw8IrZ9SvwnlLX6j9FHQM74z6cBk9Rw6M=
@@ -27,20 +27,20 @@ cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvf
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
-cloud.google.com/go/bigtable v1.34.0 h1:eIgi3QLcN4aq8p6n9U/zPgmHeBP34sm9FiKq4ik/ZoY=
-cloud.google.com/go/bigtable v1.34.0/go.mod h1:p94uLf6cy6D73POkudMagaFF3x9c7ktZjRnOUVGjZAw=
+cloud.google.com/go/bigtable v1.35.0 h1:UEacPwaejN2mNbz67i1Iy3G812rxtgcs6ePj1TAg7dw=
+cloud.google.com/go/bigtable v1.35.0/go.mod h1:EabtwwmTcOJFXp+oMZAT/jZkyDIjNwrv53TrS4DGrrM=
cloud.google.com/go/compute/metadata v0.6.0 h1:A6hENjEsCDtC1k8byVsgwvVcioamEHvZ4j01OwKxG9I=
cloud.google.com/go/compute/metadata v0.6.0/go.mod h1:FjyFAW1MW0C203CEOMDTu3Dk1FlqW3Rga40jzHL4hfg=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
-cloud.google.com/go/iam v1.2.2 h1:ozUSofHUGf/F4tCNy/mu9tHLTaxZFLOUiKzjcgWHGIA=
-cloud.google.com/go/iam v1.2.2/go.mod h1:0Ys8ccaZHdI1dEUilwzqng/6ps2YB6vRsjIe00/+6JY=
-cloud.google.com/go/kms v1.20.1 h1:og29Wv59uf2FVaZlesaiDAqHFzHaoUyHI3HYp9VUHVg=
-cloud.google.com/go/kms v1.20.1/go.mod h1:LywpNiVCvzYNJWS9JUcGJSVTNSwPwi0vBAotzDqn2nc=
+cloud.google.com/go/iam v1.3.1 h1:KFf8SaT71yYq+sQtRISn90Gyhyf4X8RGgeAVC8XGf3E=
+cloud.google.com/go/iam v1.3.1/go.mod h1:3wMtuyT4NcbnYNPLMBzYRFiEfjKfJlLVLrisE7bwm34=
+cloud.google.com/go/kms v1.20.2 h1:NGTHOxAyhDVUGVU5KngeyGScrg2D39X76Aphe6NC7S0=
+cloud.google.com/go/kms v1.20.2/go.mod h1:LywpNiVCvzYNJWS9JUcGJSVTNSwPwi0vBAotzDqn2nc=
cloud.google.com/go/logging v1.12.0 h1:ex1igYcGFd4S/RZWOCU51StlIEuey5bjqwH9ZYjHibk=
cloud.google.com/go/logging v1.12.0/go.mod h1:wwYBt5HlYP1InnrtYI0wtwttpVU1rifnMT7RejksUAM=
-cloud.google.com/go/longrunning v0.6.2 h1:xjDfh1pQcWPEvnfjZmwjKQEcHnpz6lHjfy7Fo0MK+hc=
-cloud.google.com/go/longrunning v0.6.2/go.mod h1:k/vIs83RN4bE3YCswdXC5PFfWVILjm3hpEUlSko4PiI=
+cloud.google.com/go/longrunning v0.6.4 h1:3tyw9rO3E2XVXzSApn1gyEEnH2K9SynNQjMlBi3uHLg=
+cloud.google.com/go/longrunning v0.6.4/go.mod h1:ttZpLCe6e7EXvn9OxpBRx7kZEB0efv8yBO6YnVMfhJs=
cloud.google.com/go/monitoring v1.22.1 h1:KQbnAC4IAH+5x3iWuPZT5iN9VXqKMzzOgqcYB6fqPDE=
cloud.google.com/go/monitoring v1.22.1/go.mod h1:AuZZXAoN0WWWfsSvET1Cpc4/1D8LXq8KRDU87fMS6XY=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
@@ -1611,10 +1611,10 @@ google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200825200019-8632dd797987/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
-google.golang.org/genproto v0.0.0-20241118233622-e639e219e697 h1:ToEetK57OidYuqD4Q5w+vfEnPvPpuTwedCNVohYJfNk=
-google.golang.org/genproto v0.0.0-20241118233622-e639e219e697/go.mod h1:JJrvXBWRZaFMxBufik1a4RpFw4HhgVtBBWQeQgUj2cc=
-google.golang.org/genproto/googleapis/api v0.0.0-20250102185135-69823020774d h1:H8tOf8XM88HvKqLTxe755haY6r1fqqzLbEnfrmLXlSA=
-google.golang.org/genproto/googleapis/api v0.0.0-20250102185135-69823020774d/go.mod h1:2v7Z7gP2ZUOGsaFyxATQSRoBnKygqVq2Cwnvom7QiqY=
+google.golang.org/genproto v0.0.0-20241216192217-9240e9c98484 h1:a/U5otbGrI6mYIO598WriFB1172i6Ktr6FGcatZD3Yw=
+google.golang.org/genproto v0.0.0-20241216192217-9240e9c98484/go.mod h1:Gmd/M/W9fEyf6VSu/mWLnl+9Be51B9CLdxdsKokYq7Y=
+google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f h1:gap6+3Gk41EItBuyi4XX/bp4oqJ3UwuIMl25yGinuAA=
+google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:Ic02D47M+zbarjYYUlK57y316f2MoN0gjAwI3f2S95o=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f h1:OxYkA3wjPsZyBylwymxSHa7ViiW1Sml4ToBrncvFehI=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250115164207-1a7da9e5054f/go.mod h1:+2Yz8+CLJbIfL9z73EW45avw8Lmge3xVElCP9zEKi50=
google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
diff --git a/vendor/cloud.google.com/go/.release-please-manifest-individual.json b/vendor/cloud.google.com/go/.release-please-manifest-individual.json
index 0c4dc6cb5d918..f7c0ab189278d 100644
--- a/vendor/cloud.google.com/go/.release-please-manifest-individual.json
+++ b/vendor/cloud.google.com/go/.release-please-manifest-individual.json
@@ -6,11 +6,11 @@
"datastore": "1.20.0",
"errorreporting": "0.3.2",
"firestore": "1.17.0",
- "logging": "1.12.0",
+ "logging": "1.13.0",
"profiler": "0.4.2",
"pubsub": "1.45.3",
"pubsublite": "1.8.2",
"spanner": "1.73.0",
- "storage": "1.48.0",
- "vertexai": "0.13.2"
+ "storage": "1.49.0",
+ "vertexai": "0.13.3"
}
diff --git a/vendor/cloud.google.com/go/.release-please-manifest-submodules.json b/vendor/cloud.google.com/go/.release-please-manifest-submodules.json
index b70366d86907e..0c8cc178b7419 100644
--- a/vendor/cloud.google.com/go/.release-please-manifest-submodules.json
+++ b/vendor/cloud.google.com/go/.release-please-manifest-submodules.json
@@ -39,10 +39,10 @@
"compute/metadata": "0.6.0",
"confidentialcomputing": "1.8.0",
"config": "1.2.0",
- "contactcenterinsights": "1.16.0",
+ "contactcenterinsights": "1.17.0",
"container": "1.42.0",
"containeranalysis": "0.13.2",
- "datacatalog": "1.24.0",
+ "datacatalog": "1.24.1",
"dataflow": "0.10.2",
"dataform": "0.10.2",
"datafusion": "1.8.2",
@@ -53,7 +53,7 @@
"datastream": "1.12.0",
"deploy": "1.26.0",
"developerconnect": "0.3.0",
- "dialogflow": "1.63.0",
+ "dialogflow": "1.64.0",
"discoveryengine": "1.16.0",
"dlp": "1.20.0",
"documentai": "1.35.0",
@@ -75,7 +75,7 @@
"identitytoolkit": "0.2.2",
"ids": "1.5.2",
"iot": "1.8.2",
- "kms": "1.20.2",
+ "kms": "1.20.3",
"language": "1.14.2",
"lifesciences": "0.10.2",
"longrunning": "0.6.3",
@@ -84,6 +84,7 @@
"maps": "1.17.0",
"mediatranslation": "0.9.2",
"memcache": "1.11.2",
+ "memorystore": "0.1.0",
"metastore": "1.14.2",
"migrationcenter": "1.1.2",
"monitoring": "1.22.0",
@@ -127,10 +128,10 @@
"servicemanagement": "1.10.2",
"serviceusage": "1.9.2",
"shell": "1.8.2",
- "shopping": "0.13.0",
+ "shopping": "0.14.0",
"speech": "1.25.2",
"storageinsights": "1.1.2",
- "storagetransfer": "1.11.2",
+ "storagetransfer": "1.12.0",
"streetview": "0.2.2",
"support": "1.1.2",
"talent": "1.7.2",
diff --git a/vendor/cloud.google.com/go/.release-please-manifest.json b/vendor/cloud.google.com/go/.release-please-manifest.json
index 3200815ccadaf..87c6277740c6f 100644
--- a/vendor/cloud.google.com/go/.release-please-manifest.json
+++ b/vendor/cloud.google.com/go/.release-please-manifest.json
@@ -1,3 +1,3 @@
{
- ".": "0.117.0"
+ ".": "0.118.0"
}
diff --git a/vendor/cloud.google.com/go/CHANGES.md b/vendor/cloud.google.com/go/CHANGES.md
index c8d214a8e30b1..74c920023e3bd 100644
--- a/vendor/cloud.google.com/go/CHANGES.md
+++ b/vendor/cloud.google.com/go/CHANGES.md
@@ -1,5 +1,12 @@
# Changes
+## [0.118.0](https://github.com/googleapis/google-cloud-go/compare/v0.117.0...v0.118.0) (2025-01-02)
+
+
+### Features
+
+* **civil:** Add AddMonths, AddYears and Weekday methods to Date ([#11340](https://github.com/googleapis/google-cloud-go/issues/11340)) ([d45f1a0](https://github.com/googleapis/google-cloud-go/commit/d45f1a01ebff868418aa14fe762ef7d1334f797d))
+
## [0.117.0](https://github.com/googleapis/google-cloud-go/compare/v0.116.0...v0.117.0) (2024-12-16)
diff --git a/vendor/cloud.google.com/go/bigtable/CHANGES.md b/vendor/cloud.google.com/go/bigtable/CHANGES.md
index f66ddcd063614..3d7b9735fc87e 100644
--- a/vendor/cloud.google.com/go/bigtable/CHANGES.md
+++ b/vendor/cloud.google.com/go/bigtable/CHANGES.md
@@ -1,5 +1,21 @@
# Changes
+## [1.35.0](https://github.com/googleapis/google-cloud-go/compare/bigtable/v1.34.0...bigtable/v1.35.0) (2025-01-22)
+
+
+### Features
+
+* **bigtable:** Hot backups ([#11215](https://github.com/googleapis/google-cloud-go/issues/11215)) ([238ac1c](https://github.com/googleapis/google-cloud-go/commit/238ac1c37978b7ccdd72af453416308c511dd493))
+
+
+### Bug Fixes
+
+* **bigtable:** Allow nil condition in conditional mutation ([#11457](https://github.com/googleapis/google-cloud-go/issues/11457)) ([d83bc05](https://github.com/googleapis/google-cloud-go/commit/d83bc05219223027cfaa3fba127c2f03eb554c53))
+* **bigtable:** Do not retry conditional mutate ([#11437](https://github.com/googleapis/google-cloud-go/issues/11437)) ([ce8c9b1](https://github.com/googleapis/google-cloud-go/commit/ce8c9b1e5523646175b9650265928386143259fd))
+* **bigtable:** Mutate groups even after first error ([#11434](https://github.com/googleapis/google-cloud-go/issues/11434)) ([6ffe32b](https://github.com/googleapis/google-cloud-go/commit/6ffe32b76e7228d99e12eeba60a5e719f2d3e5e3))
+* **bigtable:** Retry correct mutations ([#11388](https://github.com/googleapis/google-cloud-go/issues/11388)) ([ca2c4e3](https://github.com/googleapis/google-cloud-go/commit/ca2c4e334f07e7f8f0e276db922122d47262dabf))
+* **bigtable:** Track number of readrows to set rowsLimit in subsequent requests ([#10213](https://github.com/googleapis/google-cloud-go/issues/10213)) ([abb615e](https://github.com/googleapis/google-cloud-go/commit/abb615e240e612540b24b03d95835058045275fc))
+
## [1.34.0](https://github.com/googleapis/google-cloud-go/compare/bigtable/v1.33.0...bigtable/v1.34.0) (2025-01-02)
diff --git a/vendor/cloud.google.com/go/bigtable/admin.go b/vendor/cloud.google.com/go/bigtable/admin.go
index 69f57f0efa7b8..a9737946eb1db 100644
--- a/vendor/cloud.google.com/go/bigtable/admin.go
+++ b/vendor/cloud.google.com/go/bigtable/admin.go
@@ -47,6 +47,8 @@ import (
const adminAddr = "bigtableadmin.googleapis.com:443"
const mtlsAdminAddr = "bigtableadmin.mtls.googleapis.com:443"
+var errExpiryMissing = errors.New("WithExpiry is a required option")
+
// ErrPartiallyUnavailable is returned when some locations (clusters) are
// unavailable. Both partial results (retrieved from available locations)
// and the error are returned when this exception occurred.
@@ -2150,13 +2152,68 @@ func (ac *AdminClient) RestoreTableFrom(ctx context.Context, sourceInstance, tab
return longrunning.InternalNewOperation(ac.lroClient, op).Wait(ctx, &resp)
}
+type backupOptions struct {
+ backupType *BackupType
+ hotToStandardTime *time.Time
+ expireTime *time.Time
+}
+
+// BackupOption can be used to specify parameters for backup operations.
+type BackupOption func(*backupOptions)
+
+// WithHotToStandardBackup option can be used to create a backup with
+// type [BackupTypeHot] and specify the time at which the hot backup will be
+// converted to a standard backup. Once the 'hotToStandardTime' has passed,
+// Cloud Bigtable will convert the hot backup to a standard backup.
+// This value must be greater than the backup creation time by at least 24 hours.
+func WithHotToStandardBackup(hotToStandardTime time.Time) BackupOption {
+ return func(bo *backupOptions) {
+ btHot := BackupTypeHot
+ bo.backupType = &btHot
+ bo.hotToStandardTime = &hotToStandardTime
+ }
+}
+
+// WithExpiry option can be used to create a backup
+// that expires after time 'expireTime'.
+// Once the 'expireTime' has passed, Cloud Bigtable will delete the backup.
+func WithExpiry(expireTime time.Time) BackupOption {
+ return func(bo *backupOptions) {
+ bo.expireTime = &expireTime
+ }
+}
+
+// WithHotBackup option can be used to create a backup
+// with type [BackupTypeHot].
+func WithHotBackup() BackupOption {
+ return func(bo *backupOptions) {
+ btHot := BackupTypeHot
+ bo.backupType = &btHot
+ }
+}
+
// CreateBackup creates a new backup in the specified cluster from the
// specified source table with the user-provided expire time.
func (ac *AdminClient) CreateBackup(ctx context.Context, table, cluster, backup string, expireTime time.Time) error {
+ return ac.CreateBackupWithOptions(ctx, table, cluster, backup, WithExpiry(expireTime))
+}
+
+// CreateBackupWithOptions is similar to CreateBackup but lets the user specify additional options.
+func (ac *AdminClient) CreateBackupWithOptions(ctx context.Context, table, cluster, backup string, opts ...BackupOption) error {
ctx = mergeOutgoingMetadata(ctx, ac.md)
prefix := ac.instancePrefix()
- parsedExpireTime := timestamppb.New(expireTime)
+ o := backupOptions{}
+ for _, opt := range opts {
+ if opt != nil {
+ opt(&o)
+ }
+ }
+
+ if o.expireTime == nil {
+ return errExpiryMissing
+ }
+ parsedExpireTime := timestamppb.New(*o.expireTime)
req := &btapb.CreateBackupRequest{
Parent: prefix + "/clusters/" + cluster,
@@ -2167,6 +2224,12 @@ func (ac *AdminClient) CreateBackup(ctx context.Context, table, cluster, backup
},
}
+ if o.backupType != nil {
+ req.Backup.BackupType = btapb.Backup_BackupType(*o.backupType)
+ }
+ if o.hotToStandardTime != nil {
+ req.Backup.HotToStandardTime = timestamppb.New(*o.hotToStandardTime)
+ }
op, err := ac.tClient.CreateBackup(ctx, req)
if err != nil {
return err
@@ -2263,17 +2326,29 @@ func newBackupInfo(backup *btapb.Backup) (*BackupInfo, error) {
return nil, fmt.Errorf("invalid expireTime: %v", err)
}
expireTime := backup.GetExpireTime().AsTime()
+
+ var htsTimePtr *time.Time
+ if backup.GetHotToStandardTime() != nil {
+ if err := backup.GetHotToStandardTime().CheckValid(); err != nil {
+ return nil, fmt.Errorf("invalid HotToStandardTime: %v", err)
+ }
+ htsTime := backup.GetHotToStandardTime().AsTime()
+ htsTimePtr = &htsTime
+ }
+
encryptionInfo := newEncryptionInfo(backup.EncryptionInfo)
bi := BackupInfo{
- Name: name,
- SourceTable: tableID,
- SourceBackup: backup.SourceBackup,
- SizeBytes: backup.SizeBytes,
- StartTime: startTime,
- EndTime: endTime,
- ExpireTime: expireTime,
- State: backup.State.String(),
- EncryptionInfo: encryptionInfo,
+ Name: name,
+ SourceTable: tableID,
+ SourceBackup: backup.SourceBackup,
+ SizeBytes: backup.SizeBytes,
+ StartTime: startTime,
+ EndTime: endTime,
+ ExpireTime: expireTime,
+ State: backup.State.String(),
+ EncryptionInfo: encryptionInfo,
+ BackupType: BackupType(backup.GetBackupType()),
+ HotToStandardTime: htsTimePtr,
}
return &bi, nil
@@ -2303,6 +2378,25 @@ func (it *BackupIterator) Next() (*BackupInfo, error) {
return item, nil
}
+// BackupType denotes the type of the backup.
+type BackupType int32
+
+const (
+ // BackupTypeUnspecified denotes that backup type has not been specified.
+ BackupTypeUnspecified BackupType = 0
+
+ // BackupTypeStandard is the default type for Cloud Bigtable managed backups. Supported for
+ // backups created in both HDD and SSD instances. Requires optimization when
+ // restored to a table in an SSD instance.
+ BackupTypeStandard BackupType = 1
+
+ // BackupTypeHot is a backup type with faster restore to SSD performance. Only supported for
+ // backups created in SSD instances. A new SSD table restored from a hot
+ // backup reaches production performance more quickly than a standard
+ // backup.
+ BackupTypeHot BackupType = 2
+)
+
// BackupInfo contains backup metadata. This struct is read-only.
type BackupInfo struct {
Name string
@@ -2314,6 +2408,15 @@ type BackupInfo struct {
ExpireTime time.Time
State string
EncryptionInfo *EncryptionInfo
+ BackupType BackupType
+
+ // The time at which the hot backup will be converted to a standard backup.
+ // Once the `hot_to_standard_time` has passed, Cloud Bigtable will convert the
+ // hot backup to a standard backup. This value must be greater than the backup
+ // creation time by at least 24 hours.
+ //
+ // This field only applies for hot backups.
+ HotToStandardTime *time.Time
}
// BackupInfo gets backup metadata.
@@ -2371,6 +2474,38 @@ func (ac *AdminClient) UpdateBackup(ctx context.Context, cluster, backup string,
return err
}
+// UpdateBackupHotToStandardTime updates the HotToStandardTime of a hot backup.
+func (ac *AdminClient) UpdateBackupHotToStandardTime(ctx context.Context, cluster, backup string, hotToStandardTime time.Time) error {
+ return ac.updateBackupHotToStandardTime(ctx, cluster, backup, &hotToStandardTime)
+}
+
+// UpdateBackupRemoveHotToStandardTime removes the HotToStandardTime of a hot backup.
+func (ac *AdminClient) UpdateBackupRemoveHotToStandardTime(ctx context.Context, cluster, backup string) error {
+ return ac.updateBackupHotToStandardTime(ctx, cluster, backup, nil)
+}
+
+func (ac *AdminClient) updateBackupHotToStandardTime(ctx context.Context, cluster, backup string, hotToStandardTime *time.Time) error {
+ ctx = mergeOutgoingMetadata(ctx, ac.md)
+ backupPath := ac.backupPath(cluster, ac.instance, backup)
+
+ updateMask := &field_mask.FieldMask{}
+ updateMask.Paths = append(updateMask.Paths, "hot_to_standard_time")
+
+ req := &btapb.UpdateBackupRequest{
+ Backup: &btapb.Backup{
+ Name: backupPath,
+ },
+ UpdateMask: updateMask,
+ }
+
+ if hotToStandardTime != nil {
+ req.Backup.HotToStandardTime = timestamppb.New(*hotToStandardTime)
+ }
+
+ _, err := ac.tClient.UpdateBackup(ctx, req)
+ return err
+}
+
// AuthorizedViewConf contains information about an authorized view.
type AuthorizedViewConf struct {
TableID string
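
`CreateBackupWithOptions` follows the functional-options pattern: each `BackupOption` mutates a private options struct, and required options (here `WithExpiry`) are validated after all options have run. A minimal sketch of that shape under simplified, assumed names (not the bigtable package's exported API):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// backupOpts collects the effects of all applied options.
type backupOpts struct {
	expireTime *time.Time
	hot        bool
}

// backupOpt is the functional-option type: a closure over the options struct.
type backupOpt func(*backupOpts)

func withExpiry(t time.Time) backupOpt {
	return func(o *backupOpts) { o.expireTime = &t }
}

func withHot() backupOpt {
	return func(o *backupOpts) { o.hot = true }
}

func createBackup(name string, opts ...backupOpt) error {
	var o backupOpts
	for _, opt := range opts {
		if opt != nil { // nil options are skipped, mirroring the vendored code
			opt(&o)
		}
	}
	if o.expireTime == nil {
		// required options are enforced at call time rather than compile time
		return errors.New("withExpiry is a required option")
	}
	fmt.Printf("backup %q: hot=%v, expires %s\n",
		name, o.hot, o.expireTime.Format(time.RFC3339))
	return nil
}

func main() {
	err := createBackup("nightly", withExpiry(time.Now().Add(48*time.Hour)), withHot())
	if err != nil {
		fmt.Println("error:", err)
	}
}
```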
diff --git a/vendor/cloud.google.com/go/bigtable/bigtable.go b/vendor/cloud.google.com/go/bigtable/bigtable.go
index 0748672a597e5..89685d6c43a4a 100644
--- a/vendor/cloud.google.com/go/bigtable/bigtable.go
+++ b/vendor/cloud.google.com/go/bigtable/bigtable.go
@@ -53,6 +53,8 @@ const prodAddr = "bigtable.googleapis.com:443"
const mtlsProdAddr = "bigtable.mtls.googleapis.com:443"
const featureFlagsHeaderKey = "bigtable-features"
+var errNegativeRowLimit = errors.New("bigtable: row limit cannot be negative")
+
// Client is a client for reading and writing data to tables in an instance.
//
// A Client is safe to use concurrently, except for its Close method.
@@ -391,7 +393,25 @@ func (t *Table) ReadRows(ctx context.Context, arg RowSet, f func(Row) bool, opts
func (t *Table) readRows(ctx context.Context, arg RowSet, f func(Row) bool, mt *builtinMetricsTracer, opts ...ReadOption) (err error) {
var prevRowKey string
attrMap := make(map[string]interface{})
+
+ numRowsRead := int64(0)
+ rowLimitSet := false
+ initialRowLimit := int64(0)
+ for _, opt := range opts {
+ if l, ok := opt.(limitRows); ok {
+ rowLimitSet = true
+ initialRowLimit = l.limit
+ }
+ }
+ if initialRowLimit < 0 {
+ return errNegativeRowLimit
+ }
+
err = gaxInvokeWithRecorder(ctx, mt, "ReadRows", func(ctx context.Context, headerMD, trailerMD *metadata.MD, _ gax.CallSettings) error {
+ if rowLimitSet && numRowsRead >= intialRowLimit {
+ return nil
+ }
+
req := &btpb.ReadRowsRequest{
AppProfileId: t.c.appProfile,
}
@@ -410,7 +430,7 @@ func (t *Table) readRows(ctx context.Context, arg RowSet, f func(Row) bool, mt *
}
req.Rows = arg.proto()
}
- settings := makeReadSettings(req)
+ settings := makeReadSettings(req, numRowsRead)
for _, opt := range opts {
opt.set(&settings)
}
@@ -473,7 +493,9 @@ func (t *Table) readRows(ctx context.Context, arg RowSet, f func(Row) bool, mt *
continue
}
prevRowKey = row.Key()
- if !f(row) {
+ continueReading := f(row)
+ numRowsRead++
+ if !continueReading {
// Cancel and drain stream.
cancel()
for {
@@ -939,14 +961,16 @@ type FullReadStatsFunc func(*FullReadStats)
type readSettings struct {
req *btpb.ReadRowsRequest
fullReadStatsFunc FullReadStatsFunc
+ numRowsRead int64
}
-func makeReadSettings(req *btpb.ReadRowsRequest) readSettings {
- return readSettings{req, nil}
+func makeReadSettings(req *btpb.ReadRowsRequest, numRowsRead int64) readSettings {
+ return readSettings{req, nil, numRowsRead}
}
// A ReadOption is an optional argument to ReadRows.
type ReadOption interface {
+ // set modifies the request stored in the settings
set(settings *readSettings)
}
@@ -965,7 +989,11 @@ func LimitRows(limit int64) ReadOption { return limitRows{limit} }
type limitRows struct{ limit int64 }
-func (lr limitRows) set(settings *readSettings) { settings.req.RowsLimit = lr.limit }
+func (lr limitRows) set(settings *readSettings) {
+ // Since 'numRowsRead' out of 'limit' requested rows have already been read,
+ // the subsequent requests should fetch only the remaining rows.
+ settings.req.RowsLimit = lr.limit - settings.numRowsRead
+}
// WithFullReadStats returns a ReadOption that will request FullReadStats
// and invoke the given callback on the resulting FullReadStats.
@@ -1013,7 +1041,8 @@ func mutationsAreRetryable(muts []*btpb.Mutation) bool {
return true
}
-const maxMutations = 100000
+// Overridden in tests
+var maxMutations = 100000
// Apply mutates a row atomically. A mutation must contain at least one
// operation and at most 100000 operations.
@@ -1038,7 +1067,7 @@ func (t *Table) apply(ctx context.Context, mt *builtinMetricsTracer, row string,
}
var callOptions []gax.CallOption
- if m.cond == nil {
+ if !m.isConditional {
req := &btpb.MutateRowRequest{
AppProfileId: t.c.appProfile,
RowKey: []byte(row),
@@ -1065,9 +1094,11 @@ func (t *Table) apply(ctx context.Context, mt *builtinMetricsTracer, row string,
}
req := &btpb.CheckAndMutateRowRequest{
- AppProfileId: t.c.appProfile,
- RowKey: []byte(row),
- PredicateFilter: m.cond.proto(),
+ AppProfileId: t.c.appProfile,
+ RowKey: []byte(row),
+ }
+ if m.cond != nil {
+ req.PredicateFilter = m.cond.proto()
}
if t.authorizedView == "" {
req.TableName = t.c.fullTableName(t.table)
@@ -1086,15 +1117,12 @@ func (t *Table) apply(ctx context.Context, mt *builtinMetricsTracer, row string,
}
req.FalseMutations = m.mfalse.ops
}
- if mutationsAreRetryable(req.TrueMutations) && mutationsAreRetryable(req.FalseMutations) {
- callOptions = retryOptions
- }
var cmRes *btpb.CheckAndMutateRowResponse
err = gaxInvokeWithRecorder(ctx, mt, "CheckAndMutateRow", func(ctx context.Context, headerMD, trailerMD *metadata.MD, _ gax.CallSettings) error {
var err error
cmRes, err = t.c.client.CheckAndMutateRow(ctx, req, grpc.Header(headerMD), grpc.Trailer(trailerMD))
return err
- }, callOptions...)
+ })
if err == nil {
after(cmRes)
}
@@ -1122,10 +1150,10 @@ func GetCondMutationResult(matched *bool) ApplyOption {
// Mutation represents a set of changes for a single row of a table.
type Mutation struct {
- ops []*btpb.Mutation
-
+ ops []*btpb.Mutation
+ cond Filter
// for conditional mutations
- cond Filter
+ isConditional bool
mtrue, mfalse *Mutation
}
@@ -1143,7 +1171,7 @@ func NewMutation() *Mutation {
// The application of a ReadModifyWrite is atomic; concurrent ReadModifyWrites will
// be executed serially by the server.
func NewCondMutation(cond Filter, mtrue, mfalse *Mutation) *Mutation {
- return &Mutation{cond: cond, mtrue: mtrue, mfalse: mfalse}
+ return &Mutation{cond: cond, mtrue: mtrue, mfalse: mfalse, isConditional: true}
}
// Set sets a value in a specified column, with the given timestamp.
@@ -1227,9 +1255,14 @@ func (m *Mutation) mergeToCell(family, column string, ts Timestamp, value *btpb.
type entryErr struct {
Entry *btpb.MutateRowsRequest_Entry
Err error
+
+ // TopLevelErr is the error received either from
+ // 1. client.MutateRows
+ // 2. stream.Recv
+ TopLevelErr error
}
-// ApplyBulk applies multiple Mutations, up to a maximum of 100,000.
+// ApplyBulk applies multiple Mutations.
// Each mutation is individually applied atomically,
// but the set of mutations may be applied in any order.
//
@@ -1251,23 +1284,37 @@ func (t *Table) ApplyBulk(ctx context.Context, rowKeys []string, muts []*Mutatio
origEntries := make([]*entryErr, len(rowKeys))
for i, key := range rowKeys {
mut := muts[i]
- if mut.cond != nil {
+ if mut.isConditional {
return nil, errors.New("conditional mutations cannot be applied in bulk")
}
origEntries[i] = &entryErr{Entry: &btpb.MutateRowsRequest_Entry{RowKey: []byte(key), Mutations: mut.ops}}
}
- for _, group := range groupEntries(origEntries, maxMutations) {
+ var firstGroupErr error
+ numFailed := 0
+ groups := groupEntries(origEntries, maxMutations)
+ for _, group := range groups {
err := t.applyGroup(ctx, group, opts...)
if err != nil {
- return nil, err
+ if firstGroupErr == nil {
+ firstGroupErr = err
+ }
+ numFailed++
}
}
+ if numFailed == len(groups) {
+ return nil, firstGroupErr
+ }
+
// All the errors are accumulated into an array and returned, interspersed with nils for successful
// entries. The absence of any errors means we should return nil.
var foundErr bool
for _, entry := range origEntries {
+ if entry.Err == nil && entry.TopLevelErr != nil {
+ // Populate per mutation error if top level error is not nil
+ entry.Err = entry.TopLevelErr
+ }
if entry.Err != nil {
foundErr = true
}
@@ -1292,6 +1339,7 @@ func (t *Table) applyGroup(ctx context.Context, group []*entryErr, opts ...Apply
// We want to retry the entire request with the current group
return err
}
+ // Get the entries that need to be retried
group = t.getApplyBulkRetries(group)
if len(group) > 0 && len(idempotentRetryCodes) > 0 {
// We have at least one mutation that needs to be retried.
@@ -1327,6 +1375,11 @@ func (t *Table) doApplyBulk(ctx context.Context, entryErrs []*entryErr, headerMD
}
}
+ var topLevelErr error
+ defer func() {
+ populateTopLevelError(entryErrs, topLevelErr)
+ }()
+
entries := make([]*btpb.MutateRowsRequest_Entry, len(entryErrs))
for i, entryErr := range entryErrs {
entries[i] = entryErr.Entry
@@ -1343,6 +1396,7 @@ func (t *Table) doApplyBulk(ctx context.Context, entryErrs []*entryErr, headerMD
stream, err := t.c.client.MutateRows(ctx, req)
if err != nil {
+ _, topLevelErr = convertToGrpcStatusErr(err)
return err
}
@@ -1357,15 +1411,16 @@ func (t *Table) doApplyBulk(ctx context.Context, entryErrs []*entryErr, headerMD
}
if err != nil {
*trailerMD = stream.Trailer()
+ _, topLevelErr = convertToGrpcStatusErr(err)
return err
}
- for i, entry := range res.Entries {
+ for _, entry := range res.Entries {
s := entry.Status
if s.Code == int32(codes.OK) {
- entryErrs[i].Err = nil
+ entryErrs[entry.Index].Err = nil
} else {
- entryErrs[i].Err = status.Errorf(codes.Code(s.Code), s.Message)
+ entryErrs[entry.Index].Err = status.Errorf(codes.Code(s.Code), s.Message)
}
}
after(res)
@@ -1373,6 +1428,12 @@ func (t *Table) doApplyBulk(ctx context.Context, entryErrs []*entryErr, headerMD
return nil
}
+func populateTopLevelError(entries []*entryErr, topLevelErr error) {
+ for _, entry := range entries {
+ entry.TopLevelErr = topLevelErr
+ }
+}
+
// groupEntries groups entries into groups of a specified size without breaking up
// individual entries.
func groupEntries(entries []*entryErr, maxSize int) [][]*entryErr {
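
The `readRows` change above tracks `numRowsRead` so that each retry requests only the remaining rows (`limit - numRowsRead`) rather than the full original limit. A toy model of that bookkeeping, where `fetchPage` and its constants are assumptions for illustration only:

```go
package main

import "fmt"

// fetchPage stands in for one ReadRows attempt that ends early (for example,
// after a transient stream error); it delivers at most 3 rows per attempt.
func fetchPage(limit int64) int64 {
	if limit > 3 {
		return 3
	}
	return limit
}

func main() {
	const initialLimit int64 = 7
	var numRowsRead int64
	for attempt := 1; numRowsRead < initialLimit; attempt++ {
		// The retry asks only for what is still owed, mirroring limitRows.set.
		remaining := initialLimit - numRowsRead
		got := fetchPage(remaining)
		numRowsRead += got
		fmt.Printf("attempt %d asked for %d rows, got %d (total %d)\n",
			attempt, remaining, got, numRowsRead)
	}
}
```

Without the subtraction, every retry would re-request the full limit and the caller could receive more rows than it asked for.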
diff --git a/vendor/cloud.google.com/go/bigtable/conformance_test.sh b/vendor/cloud.google.com/go/bigtable/conformance_test.sh
index d380758315fc5..77c037741b41b 100644
--- a/vendor/cloud.google.com/go/bigtable/conformance_test.sh
+++ b/vendor/cloud.google.com/go/bigtable/conformance_test.sh
@@ -53,7 +53,8 @@ cd $conformanceTestsHome
# Tests in https://github.com/googleapis/cloud-bigtable-clients-test/tree/main/tests can only be run on go1.22.7
go install golang.org/dl/go1.22.7@latest
go1.22.7 download
-go1.22.7 test -v -proxy_addr=:$testProxyPort | tee -a $sponge_log
+# known_failures.txt contains tests for unimplemented features
+eval "go1.22.7 test -v -proxy_addr=:$testProxyPort -skip `tr -d '\n' < $testProxyHome/known_failures.txt` | tee -a $sponge_log"
RETURN_CODE=$?
echo "exiting with ${RETURN_CODE}"
diff --git a/vendor/cloud.google.com/go/bigtable/internal/version.go b/vendor/cloud.google.com/go/bigtable/internal/version.go
index a018e6275b148..b846f144d2533 100644
--- a/vendor/cloud.google.com/go/bigtable/internal/version.go
+++ b/vendor/cloud.google.com/go/bigtable/internal/version.go
@@ -15,4 +15,4 @@
package internal
// Version is the current tagged release of the library.
-const Version = "1.34.0"
+const Version = "1.35.0"
diff --git a/vendor/cloud.google.com/go/debug.md b/vendor/cloud.google.com/go/debug.md
index 7979d14af3f44..052962e3433ce 100644
--- a/vendor/cloud.google.com/go/debug.md
+++ b/vendor/cloud.google.com/go/debug.md
@@ -24,6 +24,17 @@ impact and are therefore not recommended for sustained production use. Use these
tips locally or in production for a *limited time* to help get a better
understanding of what is going on.
+### Request/Response Logging
+
+To enable logging for all outgoing requests from the Go Client Libraries, set
+the environment variable `GOOGLE_SDK_GO_LOGGING_LEVEL` to `debug`. Currently all
+logging is at the debug level, but this is likely to change in the future.
+
+*Caution*: Debug level logging should only be used in a limited manner. Debug
+level logs contain sensitive information, including headers, request/response
+payloads, and authentication tokens. Additionally, enabling logging at this
+level will have a minor performance impact.
+
### HTTP based clients
All of our auto-generated clients have a constructor to create a client that
@@ -40,74 +51,6 @@ GODEBUG=http2debug=1. To read more about this feature please see the godoc for
*WARNING*: Enabling this debug variable will log headers and payloads which may
contain private information.
-#### Add in your own logging with an HTTP middleware
-
-You may want to add in your own logging around HTTP requests. One way to do this
-is to register a custom HTTP client with a logging transport built in. Here is
-an example of how you would do this with the storage client.
-
-*WARNING*: Adding this middleware will log headers and payloads which may
-contain private information.
-
-```go
-package main
-
-import (
- "context"
- "fmt"
- "log"
- "net/http"
- "net/http/httputil"
-
- "cloud.google.com/go/storage"
- "google.golang.org/api/iterator"
- "google.golang.org/api/option"
- htransport "google.golang.org/api/transport/http"
-)
-
-type loggingRoundTripper struct {
- rt http.RoundTripper
-}
-
-func (d loggingRoundTripper) RoundTrip(r *http.Request) (*http.Response, error) {
- // Will create a dump of the request and body.
- dump, err := httputil.DumpRequest(r, true)
- if err != nil {
- log.Println("error dumping request")
- }
- log.Printf("%s", dump)
- return d.rt.RoundTrip(r)
-}
-
-func main() {
- ctx := context.Background()
-
- // Create a transport with authentication built-in detected with
- // [ADC](https://google.aip.dev/auth/4110). Note you will have to pass any
- // required scoped for the client you are using.
- trans, err := htransport.NewTransport(ctx,
- http.DefaultTransport,
- option.WithScopes(storage.ScopeFullControl),
- )
- if err != nil {
- log.Fatal(err)
- }
-
- // Embed customized transport into an HTTP client.
- hc := &http.Client{
- Transport: loggingRoundTripper{rt: trans},
- }
-
- // Supply custom HTTP client for use by the library.
- client, err := storage.NewClient(ctx, option.WithHTTPClient(hc))
- if err != nil {
- log.Fatal(err)
- }
- defer client.Close()
- // Use the client
-}
-```
-
### gRPC based clients
#### Try setting grpc-go's debug variables
@@ -117,66 +60,6 @@ Try setting the following environment variables for grpc-go:
good for diagnosing connection level failures. For more information please see
[grpc-go's debug documentation](https://pkg.go.dev/google.golang.org/grpc/examples/features/debugging#section-readme).
-#### Add in your own logging with a gRPC interceptors
-
-You may want to add in your own logging around gRPC requests. One way to do this
-is to register a custom interceptor that adds logging. Here is
-an example of how you would do this with the secretmanager client. Note this
-example registers a UnaryClientInterceptor but you may want/need to register
-a StreamClientInterceptor instead-of/as-well depending on what kinds of
-RPCs you are calling.
-
-*WARNING*: Adding this interceptor will log metadata and payloads which may
-contain private information.
-
-```go
-package main
-
-import (
- "context"
- "log"
-
- secretmanager "cloud.google.com/go/secretmanager/apiv1"
- "google.golang.org/api/option"
- "google.golang.org/grpc"
- "google.golang.org/grpc/metadata"
- "google.golang.org/protobuf/encoding/protojson"
- "google.golang.org/protobuf/reflect/protoreflect"
-)
-
-func loggingUnaryInterceptor() grpc.UnaryClientInterceptor {
- return func(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
- err := invoker(ctx, method, req, reply, cc, opts...)
- log.Printf("Invoked method: %v", method)
- md, ok := metadata.FromOutgoingContext(ctx)
- if ok {
- log.Println("Metadata:")
- for k, v := range md {
- log.Printf("Key: %v, Value: %v", k, v)
- }
- }
- reqb, merr := protojson.Marshal(req.(protoreflect.ProtoMessage))
- if merr == nil {
- log.Printf("Request: %s", reqb)
- }
- return err
- }
-}
-
-func main() {
- ctx := context.Background()
- // Supply custom gRPC interceptor for use by the client.
- client, err := secretmanager.NewClient(ctx,
- option.WithGRPCDialOption(grpc.WithUnaryInterceptor(loggingUnaryInterceptor())),
- )
- if err != nil {
- log.Fatal(err)
- }
- defer client.Close()
- // Use the client
-}
-```
-
## Telemetry
**Warning: The OpenCensus project is obsolete and was archived on July 31st,
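
The debug.md change replaces the hand-rolled middleware examples with the `GOOGLE_SDK_GO_LOGGING_LEVEL` environment variable. A minimal usage sketch, assuming the variable is read when clients are constructed (whether a given client version honors it is an assumption here):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Set the variable before constructing any Google Cloud Go client so
	// request/response debug logging is enabled for that process.
	if err := os.Setenv("GOOGLE_SDK_GO_LOGGING_LEVEL", "debug"); err != nil {
		fmt.Fprintln(os.Stderr, "setenv failed:", err)
		os.Exit(1)
	}
	fmt.Println("GOOGLE_SDK_GO_LOGGING_LEVEL =", os.Getenv("GOOGLE_SDK_GO_LOGGING_LEVEL"))
	// ...construct clients here; note debug logs may include headers,
	// payloads, and auth tokens, so use this only temporarily.
}
```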
diff --git a/vendor/cloud.google.com/go/iam/CHANGES.md b/vendor/cloud.google.com/go/iam/CHANGES.md
index 1b2dc2ca5f97e..6bfd910506ed2 100644
--- a/vendor/cloud.google.com/go/iam/CHANGES.md
+++ b/vendor/cloud.google.com/go/iam/CHANGES.md
@@ -1,6 +1,20 @@
# Changes
+## [1.3.1](https://github.com/googleapis/google-cloud-go/compare/iam/v1.3.0...iam/v1.3.1) (2025-01-02)
+
+
+### Bug Fixes
+
+* **iam:** Update golang.org/x/net to v0.33.0 ([e9b0b69](https://github.com/googleapis/google-cloud-go/commit/e9b0b69644ea5b276cacff0a707e8a5e87efafc9))
+
+## [1.3.0](https://github.com/googleapis/google-cloud-go/compare/iam/v1.2.2...iam/v1.3.0) (2024-12-04)
+
+
+### Features
+
+* **iam:** Add ResourcePolicyMember to google/iam/v1 ([8dedb87](https://github.com/googleapis/google-cloud-go/commit/8dedb878c070cc1e92d62bb9b32358425e3ceffb))
+
## [1.2.2](https://github.com/googleapis/google-cloud-go/compare/iam/v1.2.1...iam/v1.2.2) (2024-10-23)
diff --git a/vendor/cloud.google.com/go/iam/apiv1/iampb/iam_policy.pb.go b/vendor/cloud.google.com/go/iam/apiv1/iampb/iam_policy.pb.go
index 56de55be8424f..f975d76191bc8 100644
--- a/vendor/cloud.google.com/go/iam/apiv1/iampb/iam_policy.pb.go
+++ b/vendor/cloud.google.com/go/iam/apiv1/iampb/iam_policy.pb.go
@@ -14,7 +14,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
-// protoc-gen-go v1.34.2
+// protoc-gen-go v1.35.2
// protoc v4.25.3
// source: google/iam/v1/iam_policy.proto
@@ -65,11 +65,9 @@ type SetIamPolicyRequest struct {
func (x *SetIamPolicyRequest) Reset() {
*x = SetIamPolicyRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_iam_policy_proto_msgTypes[0]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_iam_policy_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *SetIamPolicyRequest) String() string {
@@ -80,7 +78,7 @@ func (*SetIamPolicyRequest) ProtoMessage() {}
func (x *SetIamPolicyRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_iam_policy_proto_msgTypes[0]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -132,11 +130,9 @@ type GetIamPolicyRequest struct {
func (x *GetIamPolicyRequest) Reset() {
*x = GetIamPolicyRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_iam_policy_proto_msgTypes[1]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_iam_policy_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *GetIamPolicyRequest) String() string {
@@ -147,7 +143,7 @@ func (*GetIamPolicyRequest) ProtoMessage() {}
func (x *GetIamPolicyRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_iam_policy_proto_msgTypes[1]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -194,11 +190,9 @@ type TestIamPermissionsRequest struct {
func (x *TestIamPermissionsRequest) Reset() {
*x = TestIamPermissionsRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_iam_policy_proto_msgTypes[2]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_iam_policy_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *TestIamPermissionsRequest) String() string {
@@ -209,7 +203,7 @@ func (*TestIamPermissionsRequest) ProtoMessage() {}
func (x *TestIamPermissionsRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_iam_policy_proto_msgTypes[2]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -251,11 +245,9 @@ type TestIamPermissionsResponse struct {
func (x *TestIamPermissionsResponse) Reset() {
*x = TestIamPermissionsResponse{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_iam_policy_proto_msgTypes[3]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_iam_policy_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *TestIamPermissionsResponse) String() string {
@@ -266,7 +258,7 @@ func (*TestIamPermissionsResponse) ProtoMessage() {}
func (x *TestIamPermissionsResponse) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_iam_policy_proto_msgTypes[3]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -420,56 +412,6 @@ func file_google_iam_v1_iam_policy_proto_init() {
}
file_google_iam_v1_options_proto_init()
file_google_iam_v1_policy_proto_init()
- if !protoimpl.UnsafeEnabled {
- file_google_iam_v1_iam_policy_proto_msgTypes[0].Exporter = func(v any, i int) any {
- switch v := v.(*SetIamPolicyRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_iam_policy_proto_msgTypes[1].Exporter = func(v any, i int) any {
- switch v := v.(*GetIamPolicyRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_iam_policy_proto_msgTypes[2].Exporter = func(v any, i int) any {
- switch v := v.(*TestIamPermissionsRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_iam_policy_proto_msgTypes[3].Exporter = func(v any, i int) any {
- switch v := v.(*TestIamPermissionsResponse); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- }
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
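
The regeneration with protoc-gen-go v1.35.2 drops every `protoimpl.UnsafeEnabled` branch: `Reset` now stores message info unconditionally and the reflection exporter closures are gone. None of this changes how callers use the generated types. A minimal round-trip sketch (resource name is a placeholder) showing that the `Reset`/`ProtoReflect` plumbing stays an internal detail:

```go
package example

import (
	"fmt"

	"cloud.google.com/go/iam/apiv1/iampb"
	"google.golang.org/protobuf/proto"
)

// roundTrip marshals and unmarshals a SetIamPolicyRequest; the regenerated
// message-state code above is exercised implicitly and transparently.
func roundTrip() error {
	req := &iampb.SetIamPolicyRequest{
		Resource: "projects/my-project/secrets/my-secret", // hypothetical resource
		Policy: &iampb.Policy{
			Bindings: []*iampb.Binding{{
				Role:    "roles/viewer",
				Members: []string{"user:alice@example.com"},
			}},
		},
	}
	b, err := proto.Marshal(req)
	if err != nil {
		return err
	}
	var out iampb.SetIamPolicyRequest
	if err := proto.Unmarshal(b, &out); err != nil {
		return err
	}
	fmt.Println(out.GetPolicy().GetBindings()[0].GetRole())
	return nil
}
```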
diff --git a/vendor/cloud.google.com/go/iam/apiv1/iampb/options.pb.go b/vendor/cloud.google.com/go/iam/apiv1/iampb/options.pb.go
index f1c1c084e34d2..0c82db752bd3c 100644
--- a/vendor/cloud.google.com/go/iam/apiv1/iampb/options.pb.go
+++ b/vendor/cloud.google.com/go/iam/apiv1/iampb/options.pb.go
@@ -14,7 +14,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
-// protoc-gen-go v1.34.2
+// protoc-gen-go v1.35.2
// protoc v4.25.3
// source: google/iam/v1/options.proto
@@ -64,11 +64,9 @@ type GetPolicyOptions struct {
func (x *GetPolicyOptions) Reset() {
*x = GetPolicyOptions{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_options_proto_msgTypes[0]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_options_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *GetPolicyOptions) String() string {
@@ -79,7 +77,7 @@ func (*GetPolicyOptions) ProtoMessage() {}
func (x *GetPolicyOptions) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_options_proto_msgTypes[0]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -152,20 +150,6 @@ func file_google_iam_v1_options_proto_init() {
if File_google_iam_v1_options_proto != nil {
return
}
- if !protoimpl.UnsafeEnabled {
- file_google_iam_v1_options_proto_msgTypes[0].Exporter = func(v any, i int) any {
- switch v := v.(*GetPolicyOptions); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- }
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
diff --git a/vendor/cloud.google.com/go/iam/apiv1/iampb/policy.pb.go b/vendor/cloud.google.com/go/iam/apiv1/iampb/policy.pb.go
index 4dda5d6d056bf..a2e42f878699a 100644
--- a/vendor/cloud.google.com/go/iam/apiv1/iampb/policy.pb.go
+++ b/vendor/cloud.google.com/go/iam/apiv1/iampb/policy.pb.go
@@ -14,7 +14,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
-// protoc-gen-go v1.34.2
+// protoc-gen-go v1.35.2
// protoc v4.25.3
// source: google/iam/v1/policy.proto
@@ -337,11 +337,9 @@ type Policy struct {
func (x *Policy) Reset() {
*x = Policy{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[0]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *Policy) String() string {
@@ -352,7 +350,7 @@ func (*Policy) ProtoMessage() {}
func (x *Policy) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[0]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -462,11 +460,9 @@ type Binding struct {
func (x *Binding) Reset() {
*x = Binding{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[1]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *Binding) String() string {
@@ -477,7 +473,7 @@ func (*Binding) ProtoMessage() {}
func (x *Binding) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[1]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -579,11 +575,9 @@ type AuditConfig struct {
func (x *AuditConfig) Reset() {
*x = AuditConfig{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[2]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *AuditConfig) String() string {
@@ -594,7 +588,7 @@ func (*AuditConfig) ProtoMessage() {}
func (x *AuditConfig) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[2]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -658,11 +652,9 @@ type AuditLogConfig struct {
func (x *AuditLogConfig) Reset() {
*x = AuditLogConfig{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[3]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *AuditLogConfig) String() string {
@@ -673,7 +665,7 @@ func (*AuditLogConfig) ProtoMessage() {}
func (x *AuditLogConfig) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[3]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -716,11 +708,9 @@ type PolicyDelta struct {
func (x *PolicyDelta) Reset() {
*x = PolicyDelta{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[4]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[4]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *PolicyDelta) String() string {
@@ -731,7 +721,7 @@ func (*PolicyDelta) ProtoMessage() {}
func (x *PolicyDelta) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[4]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -784,11 +774,9 @@ type BindingDelta struct {
func (x *BindingDelta) Reset() {
*x = BindingDelta{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[5]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[5]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *BindingDelta) String() string {
@@ -799,7 +787,7 @@ func (*BindingDelta) ProtoMessage() {}
func (x *BindingDelta) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[5]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -869,11 +857,9 @@ type AuditConfigDelta struct {
func (x *AuditConfigDelta) Reset() {
*x = AuditConfigDelta{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_iam_v1_policy_proto_msgTypes[6]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_iam_v1_policy_proto_msgTypes[6]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *AuditConfigDelta) String() string {
@@ -884,7 +870,7 @@ func (*AuditConfigDelta) ProtoMessage() {}
func (x *AuditConfigDelta) ProtoReflect() protoreflect.Message {
mi := &file_google_iam_v1_policy_proto_msgTypes[6]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -1072,92 +1058,6 @@ func file_google_iam_v1_policy_proto_init() {
if File_google_iam_v1_policy_proto != nil {
return
}
- if !protoimpl.UnsafeEnabled {
- file_google_iam_v1_policy_proto_msgTypes[0].Exporter = func(v any, i int) any {
- switch v := v.(*Policy); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_policy_proto_msgTypes[1].Exporter = func(v any, i int) any {
- switch v := v.(*Binding); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_policy_proto_msgTypes[2].Exporter = func(v any, i int) any {
- switch v := v.(*AuditConfig); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_policy_proto_msgTypes[3].Exporter = func(v any, i int) any {
- switch v := v.(*AuditLogConfig); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_policy_proto_msgTypes[4].Exporter = func(v any, i int) any {
- switch v := v.(*PolicyDelta); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_policy_proto_msgTypes[5].Exporter = func(v any, i int) any {
- switch v := v.(*BindingDelta); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_iam_v1_policy_proto_msgTypes[6].Exporter = func(v any, i int) any {
- switch v := v.(*AuditConfigDelta); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- }
type x struct{}
out := protoimpl.TypeBuilder{
File: protoimpl.DescBuilder{
diff --git a/vendor/cloud.google.com/go/iam/apiv1/iampb/resource_policy_member.pb.go b/vendor/cloud.google.com/go/iam/apiv1/iampb/resource_policy_member.pb.go
new file mode 100644
index 0000000000000..361d79752ad98
--- /dev/null
+++ b/vendor/cloud.google.com/go/iam/apiv1/iampb/resource_policy_member.pb.go
@@ -0,0 +1,185 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Code generated by protoc-gen-go. DO NOT EDIT.
+// versions:
+// protoc-gen-go v1.35.2
+// protoc v4.25.3
+// source: google/iam/v1/resource_policy_member.proto
+
+package iampb
+
+import (
+ reflect "reflect"
+ sync "sync"
+
+ _ "google.golang.org/genproto/googleapis/api/annotations"
+ protoreflect "google.golang.org/protobuf/reflect/protoreflect"
+ protoimpl "google.golang.org/protobuf/runtime/protoimpl"
+)
+
+const (
+ // Verify that this generated code is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(20 - protoimpl.MinVersion)
+ // Verify that runtime/protoimpl is sufficiently up-to-date.
+ _ = protoimpl.EnforceVersion(protoimpl.MaxVersion - 20)
+)
+
+// Output-only policy member strings of a Google Cloud resource's built-in
+// identity.
+type ResourcePolicyMember struct {
+ state protoimpl.MessageState
+ sizeCache protoimpl.SizeCache
+ unknownFields protoimpl.UnknownFields
+
+ // IAM policy binding member referring to a Google Cloud resource by
+ // user-assigned name (https://google.aip.dev/122). If a resource is deleted
+ // and recreated with the same name, the binding will be applicable to the new
+ // resource.
+ //
+ // Example:
+ // `principal://parametermanager.googleapis.com/projects/12345/name/locations/us-central1-a/parameters/my-parameter`
+ IamPolicyNamePrincipal string `protobuf:"bytes,1,opt,name=iam_policy_name_principal,json=iamPolicyNamePrincipal,proto3" json:"iam_policy_name_principal,omitempty"`
+ // IAM policy binding member referring to a Google Cloud resource by
+ // system-assigned unique identifier (https://google.aip.dev/148#uid). If a
+ // resource is deleted and recreated with the same name, the binding will not
+ // be applicable to the new resource
+	// be applicable to the new resource.
+ // Example:
+ // `principal://parametermanager.googleapis.com/projects/12345/uid/locations/us-central1-a/parameters/a918fed5`
+ IamPolicyUidPrincipal string `protobuf:"bytes,2,opt,name=iam_policy_uid_principal,json=iamPolicyUidPrincipal,proto3" json:"iam_policy_uid_principal,omitempty"`
+}
+
+func (x *ResourcePolicyMember) Reset() {
+ *x = ResourcePolicyMember{}
+ mi := &file_google_iam_v1_resource_policy_member_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
+}
+
+func (x *ResourcePolicyMember) String() string {
+ return protoimpl.X.MessageStringOf(x)
+}
+
+func (*ResourcePolicyMember) ProtoMessage() {}
+
+func (x *ResourcePolicyMember) ProtoReflect() protoreflect.Message {
+ mi := &file_google_iam_v1_resource_policy_member_proto_msgTypes[0]
+ if x != nil {
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ if ms.LoadMessageInfo() == nil {
+ ms.StoreMessageInfo(mi)
+ }
+ return ms
+ }
+ return mi.MessageOf(x)
+}
+
+// Deprecated: Use ResourcePolicyMember.ProtoReflect.Descriptor instead.
+func (*ResourcePolicyMember) Descriptor() ([]byte, []int) {
+ return file_google_iam_v1_resource_policy_member_proto_rawDescGZIP(), []int{0}
+}
+
+func (x *ResourcePolicyMember) GetIamPolicyNamePrincipal() string {
+ if x != nil {
+ return x.IamPolicyNamePrincipal
+ }
+ return ""
+}
+
+func (x *ResourcePolicyMember) GetIamPolicyUidPrincipal() string {
+ if x != nil {
+ return x.IamPolicyUidPrincipal
+ }
+ return ""
+}
+
+var File_google_iam_v1_resource_policy_member_proto protoreflect.FileDescriptor
+
+var file_google_iam_v1_resource_policy_member_proto_rawDesc = []byte{
+ 0x0a, 0x2a, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x69, 0x61, 0x6d, 0x2f, 0x76, 0x31, 0x2f,
+ 0x72, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x5f, 0x70, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x5f,
+ 0x6d, 0x65, 0x6d, 0x62, 0x65, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x12, 0x0d, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x69, 0x61, 0x6d, 0x2e, 0x76, 0x31, 0x1a, 0x1f, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2f, 0x61, 0x70, 0x69, 0x2f, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x5f, 0x62, 0x65,
+ 0x68, 0x61, 0x76, 0x69, 0x6f, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x22, 0x94, 0x01, 0x0a,
+ 0x14, 0x52, 0x65, 0x73, 0x6f, 0x75, 0x72, 0x63, 0x65, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x4d,
+ 0x65, 0x6d, 0x62, 0x65, 0x72, 0x12, 0x3e, 0x0a, 0x19, 0x69, 0x61, 0x6d, 0x5f, 0x70, 0x6f, 0x6c,
+ 0x69, 0x63, 0x79, 0x5f, 0x6e, 0x61, 0x6d, 0x65, 0x5f, 0x70, 0x72, 0x69, 0x6e, 0x63, 0x69, 0x70,
+ 0x61, 0x6c, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x42, 0x03, 0xe0, 0x41, 0x03, 0x52, 0x16, 0x69,
+ 0x61, 0x6d, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x4e, 0x61, 0x6d, 0x65, 0x50, 0x72, 0x69, 0x6e,
+ 0x63, 0x69, 0x70, 0x61, 0x6c, 0x12, 0x3c, 0x0a, 0x18, 0x69, 0x61, 0x6d, 0x5f, 0x70, 0x6f, 0x6c,
+ 0x69, 0x63, 0x79, 0x5f, 0x75, 0x69, 0x64, 0x5f, 0x70, 0x72, 0x69, 0x6e, 0x63, 0x69, 0x70, 0x61,
+ 0x6c, 0x18, 0x02, 0x20, 0x01, 0x28, 0x09, 0x42, 0x03, 0xe0, 0x41, 0x03, 0x52, 0x15, 0x69, 0x61,
+ 0x6d, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x55, 0x69, 0x64, 0x50, 0x72, 0x69, 0x6e, 0x63, 0x69,
+ 0x70, 0x61, 0x6c, 0x42, 0x87, 0x01, 0x0a, 0x11, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67,
+ 0x6c, 0x65, 0x2e, 0x69, 0x61, 0x6d, 0x2e, 0x76, 0x31, 0x42, 0x19, 0x52, 0x65, 0x73, 0x6f, 0x75,
+ 0x72, 0x63, 0x65, 0x50, 0x6f, 0x6c, 0x69, 0x63, 0x79, 0x4d, 0x65, 0x6d, 0x62, 0x65, 0x72, 0x50,
+ 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01, 0x5a, 0x29, 0x63, 0x6c, 0x6f, 0x75, 0x64, 0x2e, 0x67, 0x6f,
+ 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x63, 0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x2f, 0x69, 0x61, 0x6d, 0x2f,
+ 0x61, 0x70, 0x69, 0x76, 0x31, 0x2f, 0x69, 0x61, 0x6d, 0x70, 0x62, 0x3b, 0x69, 0x61, 0x6d, 0x70,
+ 0x62, 0xaa, 0x02, 0x13, 0x47, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x43, 0x6c, 0x6f, 0x75, 0x64,
+ 0x2e, 0x49, 0x61, 0x6d, 0x2e, 0x56, 0x31, 0xca, 0x02, 0x13, 0x47, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
+ 0x5c, 0x43, 0x6c, 0x6f, 0x75, 0x64, 0x5c, 0x49, 0x61, 0x6d, 0x5c, 0x56, 0x31, 0x62, 0x06, 0x70,
+ 0x72, 0x6f, 0x74, 0x6f, 0x33,
+}
+
+var (
+ file_google_iam_v1_resource_policy_member_proto_rawDescOnce sync.Once
+ file_google_iam_v1_resource_policy_member_proto_rawDescData = file_google_iam_v1_resource_policy_member_proto_rawDesc
+)
+
+func file_google_iam_v1_resource_policy_member_proto_rawDescGZIP() []byte {
+ file_google_iam_v1_resource_policy_member_proto_rawDescOnce.Do(func() {
+ file_google_iam_v1_resource_policy_member_proto_rawDescData = protoimpl.X.CompressGZIP(file_google_iam_v1_resource_policy_member_proto_rawDescData)
+ })
+ return file_google_iam_v1_resource_policy_member_proto_rawDescData
+}
+
+var file_google_iam_v1_resource_policy_member_proto_msgTypes = make([]protoimpl.MessageInfo, 1)
+var file_google_iam_v1_resource_policy_member_proto_goTypes = []any{
+ (*ResourcePolicyMember)(nil), // 0: google.iam.v1.ResourcePolicyMember
+}
+var file_google_iam_v1_resource_policy_member_proto_depIdxs = []int32{
+ 0, // [0:0] is the sub-list for method output_type
+ 0, // [0:0] is the sub-list for method input_type
+ 0, // [0:0] is the sub-list for extension type_name
+ 0, // [0:0] is the sub-list for extension extendee
+ 0, // [0:0] is the sub-list for field type_name
+}
+
+func init() { file_google_iam_v1_resource_policy_member_proto_init() }
+func file_google_iam_v1_resource_policy_member_proto_init() {
+ if File_google_iam_v1_resource_policy_member_proto != nil {
+ return
+ }
+ type x struct{}
+ out := protoimpl.TypeBuilder{
+ File: protoimpl.DescBuilder{
+ GoPackagePath: reflect.TypeOf(x{}).PkgPath(),
+ RawDescriptor: file_google_iam_v1_resource_policy_member_proto_rawDesc,
+ NumEnums: 0,
+ NumMessages: 1,
+ NumExtensions: 0,
+ NumServices: 0,
+ },
+ GoTypes: file_google_iam_v1_resource_policy_member_proto_goTypes,
+ DependencyIndexes: file_google_iam_v1_resource_policy_member_proto_depIdxs,
+ MessageInfos: file_google_iam_v1_resource_policy_member_proto_msgTypes,
+ }.Build()
+ File_google_iam_v1_resource_policy_member_proto = out.File
+ file_google_iam_v1_resource_policy_member_proto_rawDesc = nil
+ file_google_iam_v1_resource_policy_member_proto_goTypes = nil
+ file_google_iam_v1_resource_policy_member_proto_depIdxs = nil
+}
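
The new `ResourcePolicyMember` message carries two output-only principal strings, one name-based and one uid-based. Reading it is plain getter access; a short sketch (the value would come from a service response, not be constructed by hand):

```go
package example

import (
	"fmt"

	"cloud.google.com/go/iam/apiv1/iampb"
)

// printPrincipals shows the two output-only principal forms described in the
// generated comments: stable-by-name versus stable-by-uid bindings.
func printPrincipals(m *iampb.ResourcePolicyMember) {
	fmt.Println("by name:", m.GetIamPolicyNamePrincipal())
	fmt.Println("by uid: ", m.GetIamPolicyUidPrincipal())
}
```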
diff --git a/vendor/cloud.google.com/go/internal/.repo-metadata-full.json b/vendor/cloud.google.com/go/internal/.repo-metadata-full.json
index cea277a4d8459..0c23edb915fa5 100644
--- a/vendor/cloud.google.com/go/internal/.repo-metadata-full.json
+++ b/vendor/cloud.google.com/go/internal/.repo-metadata-full.json
@@ -1709,6 +1709,26 @@
"release_level": "preview",
"library_type": "GAPIC_AUTO"
},
+ "cloud.google.com/go/memorystore/apiv1": {
+ "api_shortname": "memorystore",
+ "distribution_name": "cloud.google.com/go/memorystore/apiv1",
+ "description": "Memorystore API",
+ "language": "go",
+ "client_library_type": "generated",
+ "client_documentation": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/memorystore/latest/apiv1",
+ "release_level": "preview",
+ "library_type": "GAPIC_AUTO"
+ },
+ "cloud.google.com/go/memorystore/apiv1beta": {
+ "api_shortname": "memorystore",
+ "distribution_name": "cloud.google.com/go/memorystore/apiv1beta",
+ "description": "Memorystore API",
+ "language": "go",
+ "client_library_type": "generated",
+ "client_documentation": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/memorystore/latest/apiv1beta",
+ "release_level": "preview",
+ "library_type": "GAPIC_AUTO"
+ },
"cloud.google.com/go/metastore/apiv1": {
"api_shortname": "metastore",
"distribution_name": "cloud.google.com/go/metastore/apiv1",
@@ -1756,7 +1776,7 @@
"language": "go",
"client_library_type": "generated",
"client_documentation": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/monitoring/latest/apiv3/v2",
- "release_level": "stable",
+ "release_level": "preview",
"library_type": "GAPIC_AUTO"
},
"cloud.google.com/go/monitoring/dashboard/apiv1": {
@@ -2609,6 +2629,16 @@
"release_level": "preview",
"library_type": "GAPIC_AUTO"
},
+ "cloud.google.com/go/shopping/merchant/reviews/apiv1beta": {
+ "api_shortname": "merchantapi",
+ "distribution_name": "cloud.google.com/go/shopping/merchant/reviews/apiv1beta",
+ "description": "Merchant API",
+ "language": "go",
+ "client_library_type": "generated",
+ "client_documentation": "https://cloud.google.com/go/docs/reference/cloud.google.com/go/shopping/latest/merchant/reviews/apiv1beta",
+ "release_level": "preview",
+ "library_type": "GAPIC_AUTO"
+ },
"cloud.google.com/go/spanner": {
"api_shortname": "spanner",
"distribution_name": "cloud.google.com/go/spanner",
diff --git a/vendor/cloud.google.com/go/longrunning/CHANGES.md b/vendor/cloud.google.com/go/longrunning/CHANGES.md
index 0d665dde2e232..875fb4b399dca 100644
--- a/vendor/cloud.google.com/go/longrunning/CHANGES.md
+++ b/vendor/cloud.google.com/go/longrunning/CHANGES.md
@@ -1,5 +1,20 @@
# Changes
+## [0.6.4](https://github.com/googleapis/google-cloud-go/compare/longrunning/v0.6.3...longrunning/v0.6.4) (2025-01-02)
+
+
+### Bug Fixes
+
+* **longrunning:** Update golang.org/x/net to v0.33.0 ([e9b0b69](https://github.com/googleapis/google-cloud-go/commit/e9b0b69644ea5b276cacff0a707e8a5e87efafc9))
+
+## [0.6.3](https://github.com/googleapis/google-cloud-go/compare/longrunning/v0.6.2...longrunning/v0.6.3) (2024-11-19)
+
+
+### Documentation
+
+* **longrunning:** Clarity and typo fixes for documentation ([c1e936d](https://github.com/googleapis/google-cloud-go/commit/c1e936df6527933f5e7c31be0f95aa46ff2c0e61))
+* **longrunning:** Fix example rpc naming ([c1e936d](https://github.com/googleapis/google-cloud-go/commit/c1e936df6527933f5e7c31be0f95aa46ff2c0e61))
+
## [0.6.2](https://github.com/googleapis/google-cloud-go/compare/longrunning/v0.6.1...longrunning/v0.6.2) (2024-10-23)
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/auxiliary.go b/vendor/cloud.google.com/go/longrunning/autogen/auxiliary.go
index a42e61e99c382..f3d679ccfd7af 100644
--- a/vendor/cloud.google.com/go/longrunning/autogen/auxiliary.go
+++ b/vendor/cloud.google.com/go/longrunning/autogen/auxiliary.go
@@ -41,7 +41,7 @@ type OperationIterator struct {
InternalFetch func(pageSize int, pageToken string) (results []*longrunningpb.Operation, nextPageToken string, err error)
}
-// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
+// PageInfo supports pagination. See the [google.golang.org/api/iterator] package for details.
func (it *OperationIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
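
Only the doc link changed here; the paging contract of `OperationIterator` is unchanged. A minimal consumption sketch (client construction elided, request name is a placeholder):

```go
package example

import (
	"context"
	"fmt"

	longrunning "cloud.google.com/go/longrunning/autogen"
	longrunningpb "cloud.google.com/go/longrunning/autogen/longrunningpb"
	"google.golang.org/api/iterator"
)

// listAll drains an OperationIterator, stopping at iterator.Done as the
// google.golang.org/api/iterator package prescribes.
func listAll(ctx context.Context, c *longrunning.OperationsClient) error {
	it := c.ListOperations(ctx, &longrunningpb.ListOperationsRequest{Name: "operations"})
	for {
		op, err := it.Next()
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Println(op.GetName(), op.GetDone())
	}
}
```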
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/doc.go b/vendor/cloud.google.com/go/longrunning/autogen/doc.go
index 7976ed73455f8..7f5c0bef3c357 100644
--- a/vendor/cloud.google.com/go/longrunning/autogen/doc.go
+++ b/vendor/cloud.google.com/go/longrunning/autogen/doc.go
@@ -33,6 +33,7 @@
//
// To get started with this package, create a client.
//
+// // go get cloud.google.com/go/longrunning/autogen@latest
// ctx := context.Background()
// // This snippet has been automatically generated and should be regarded as a code template only.
// // It will require modifications to work:
@@ -51,19 +52,7 @@
//
// # Using the Client
//
-// The following is an example of making an API call with the newly created client.
-//
-// ctx := context.Background()
-// // This snippet has been automatically generated and should be regarded as a code template only.
-// // It will require modifications to work:
-// // - It may require correct/in-range values for request initialization.
-// // - It may require specifying regional endpoints when creating the service client as shown in:
-// // https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
-// c, err := longrunning.NewOperationsClient(ctx)
-// if err != nil {
-// // TODO: Handle error.
-// }
-// defer c.Close()
+// The following is an example of making an API call with the client created above.
//
// req := &longrunningpb.CancelOperationRequest{
// // TODO: Fill request struct fields.
@@ -88,30 +77,3 @@
// [Debugging Client Libraries]: https://pkg.go.dev/cloud.google.com/go#hdr-Debugging
// [Inspecting errors]: https://pkg.go.dev/cloud.google.com/go#hdr-Inspecting_errors
package longrunning // import "cloud.google.com/go/longrunning/autogen"
-
-import (
- "context"
-
- "google.golang.org/api/option"
-)
-
-// For more information on implementing a client constructor hook, see
-// https://github.com/googleapis/google-cloud-go/wiki/Customizing-constructors.
-type clientHookParams struct{}
-type clientHook func(context.Context, clientHookParams) ([]option.ClientOption, error)
-
-var versionClient string
-
-func getVersionClient() string {
- if versionClient == "" {
- return "UNKNOWN"
- }
- return versionClient
-}
-
-// DefaultAuthScopes reports the default set of authentication scopes to use with this package.
-func DefaultAuthScopes() []string {
- return []string{
- "",
- }
-}
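
The hook plumbing removed from doc.go moves into the new helpers.go below; the usage flow in the doc comment itself is unchanged. Combining its two snippets into one runnable sketch (credentials and a real operation name are still required at runtime):

```go
package main

import (
	"context"
	"log"

	longrunning "cloud.google.com/go/longrunning/autogen"
	longrunningpb "cloud.google.com/go/longrunning/autogen/longrunningpb"
)

func main() {
	ctx := context.Background()
	c, err := longrunning.NewOperationsClient(ctx)
	if err != nil {
		log.Fatal(err) // TODO: handle error appropriately for your app
	}
	defer c.Close()

	// Cancel a long-running operation by resource name (placeholder value).
	req := &longrunningpb.CancelOperationRequest{Name: "operations/some-op"}
	if err := c.CancelOperation(ctx, req); err != nil {
		log.Fatal(err)
	}
}
```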
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/helpers.go b/vendor/cloud.google.com/go/longrunning/autogen/helpers.go
new file mode 100644
index 0000000000000..d14fac0d3f816
--- /dev/null
+++ b/vendor/cloud.google.com/go/longrunning/autogen/helpers.go
@@ -0,0 +1,99 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Code generated by protoc-gen-go_gapic. DO NOT EDIT.
+
+package longrunning
+
+import (
+ "context"
+ "io"
+ "log/slog"
+ "net/http"
+
+ "github.com/googleapis/gax-go/v2/internallog"
+ "github.com/googleapis/gax-go/v2/internallog/grpclog"
+ "google.golang.org/api/googleapi"
+ "google.golang.org/api/option"
+ "google.golang.org/grpc"
+ "google.golang.org/protobuf/proto"
+)
+
+const serviceName = "longrunning.googleapis.com"
+
+// For more information on implementing a client constructor hook, see
+// https://github.com/googleapis/google-cloud-go/wiki/Customizing-constructors.
+type clientHookParams struct{}
+type clientHook func(context.Context, clientHookParams) ([]option.ClientOption, error)
+
+var versionClient string
+
+func getVersionClient() string {
+ if versionClient == "" {
+ return "UNKNOWN"
+ }
+ return versionClient
+}
+
+// DefaultAuthScopes reports the default set of authentication scopes to use with this package.
+func DefaultAuthScopes() []string {
+ return []string{}
+}
+
+func executeHTTPRequestWithResponse(ctx context.Context, client *http.Client, req *http.Request, logger *slog.Logger, body []byte, rpc string) ([]byte, *http.Response, error) {
+ logger.DebugContext(ctx, "api request", "serviceName", serviceName, "rpcName", rpc, "request", internallog.HTTPRequest(req, body))
+ resp, err := client.Do(req)
+ if err != nil {
+ return nil, nil, err
+ }
+ defer resp.Body.Close()
+ buf, err := io.ReadAll(resp.Body)
+ if err != nil {
+ return nil, nil, err
+ }
+ logger.DebugContext(ctx, "api response", "serviceName", serviceName, "rpcName", rpc, "response", internallog.HTTPResponse(resp, buf))
+ if err = googleapi.CheckResponse(resp); err != nil {
+ return nil, nil, err
+ }
+ return buf, resp, nil
+}
+
+func executeHTTPRequest(ctx context.Context, client *http.Client, req *http.Request, logger *slog.Logger, body []byte, rpc string) ([]byte, error) {
+ buf, _, err := executeHTTPRequestWithResponse(ctx, client, req, logger, body, rpc)
+ return buf, err
+}
+
+func executeStreamingHTTPRequest(ctx context.Context, client *http.Client, req *http.Request, logger *slog.Logger, body []byte, rpc string) (*http.Response, error) {
+ logger.DebugContext(ctx, "api request", "serviceName", serviceName, "rpcName", rpc, "request", internallog.HTTPRequest(req, body))
+ resp, err := client.Do(req)
+ if err != nil {
+ return nil, err
+ }
+ logger.DebugContext(ctx, "api response", "serviceName", serviceName, "rpcName", rpc, "response", internallog.HTTPResponse(resp, nil))
+ if err = googleapi.CheckResponse(resp); err != nil {
+ return nil, err
+ }
+ return resp, nil
+}
+
+func executeRPC[I proto.Message, O proto.Message](ctx context.Context, fn func(context.Context, I, ...grpc.CallOption) (O, error), req I, opts []grpc.CallOption, logger *slog.Logger, rpc string) (O, error) {
+ var zero O
+ logger.DebugContext(ctx, "api request", "serviceName", serviceName, "rpcName", rpc, "request", grpclog.ProtoMessageRequest(ctx, req))
+ resp, err := fn(ctx, req, opts...)
+ if err != nil {
+ return zero, err
+ }
+ logger.DebugContext(ctx, "api response", "serviceName", serviceName, "rpcName", rpc, "response", grpclog.ProtoMessageResponse(resp))
+ return resp, err
+}
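
The new generic `executeRPC` helper wraps any unary stub call with slog request/response records, inferring the request and response types from the function value it is handed. A stripped-down, self-contained analog of the pattern (the vendored helper additionally threads `grpc.CallOption`s and a `*slog.Logger`):

```go
package example

import (
	"context"
	"fmt"
)

// callAndLog mirrors the shape of executeRPC: a generic wrapper that logs
// around any unary call of the form fn(ctx, req) (resp, error). Type
// parameters I and O are inferred from fn, so one helper serves every RPC.
func callAndLog[I, O any](ctx context.Context, fn func(context.Context, I) (O, error), req I, rpc string) (O, error) {
	fmt.Println("api request:", rpc)
	resp, err := fn(ctx, req)
	fmt.Println("api response:", rpc, "err:", err)
	return resp, err
}
```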
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/longrunningpb/operations.pb.go b/vendor/cloud.google.com/go/longrunning/autogen/longrunningpb/operations.pb.go
index 0a4d66c6373a4..7f779c74d0b52 100644
--- a/vendor/cloud.google.com/go/longrunning/autogen/longrunningpb/operations.pb.go
+++ b/vendor/cloud.google.com/go/longrunning/autogen/longrunningpb/operations.pb.go
@@ -14,7 +14,7 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// versions:
-// protoc-gen-go v1.34.2
+// protoc-gen-go v1.35.2
// protoc v4.25.3
// source: google/longrunning/operations.proto
@@ -67,7 +67,8 @@ type Operation struct {
Done bool `protobuf:"varint,3,opt,name=done,proto3" json:"done,omitempty"`
// The operation result, which can be either an `error` or a valid `response`.
// If `done` == `false`, neither `error` nor `response` is set.
- // If `done` == `true`, exactly one of `error` or `response` is set.
+	// If `done` == `true`, exactly one of `error` or `response` can be set.
+ // Some services might not provide the result.
//
// Types that are assignable to Result:
//
@@ -78,11 +79,9 @@ type Operation struct {
func (x *Operation) Reset() {
*x = Operation{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[0]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[0]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *Operation) String() string {
@@ -93,7 +92,7 @@ func (*Operation) ProtoMessage() {}
func (x *Operation) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[0]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -160,7 +159,7 @@ type Operation_Error struct {
}
type Operation_Response struct {
- // The normal response of the operation in case of success. If the original
+ // The normal, successful response of the operation. If the original
// method returns no data on success, such as `Delete`, the response is
// `google.protobuf.Empty`. If the original method is standard
// `Get`/`Create`/`Update`, the response should be the resource. For other
@@ -175,7 +174,8 @@ func (*Operation_Error) isOperation_Result() {}
func (*Operation_Response) isOperation_Result() {}
-// The request message for [Operations.GetOperation][google.longrunning.Operations.GetOperation].
+// The request message for
+// [Operations.GetOperation][google.longrunning.Operations.GetOperation].
type GetOperationRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -187,11 +187,9 @@ type GetOperationRequest struct {
func (x *GetOperationRequest) Reset() {
*x = GetOperationRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[1]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[1]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *GetOperationRequest) String() string {
@@ -202,7 +200,7 @@ func (*GetOperationRequest) ProtoMessage() {}
func (x *GetOperationRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[1]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -224,7 +222,8 @@ func (x *GetOperationRequest) GetName() string {
return ""
}
-// The request message for [Operations.ListOperations][google.longrunning.Operations.ListOperations].
+// The request message for
+// [Operations.ListOperations][google.longrunning.Operations.ListOperations].
type ListOperationsRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -242,11 +241,9 @@ type ListOperationsRequest struct {
func (x *ListOperationsRequest) Reset() {
*x = ListOperationsRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[2]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[2]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *ListOperationsRequest) String() string {
@@ -257,7 +254,7 @@ func (*ListOperationsRequest) ProtoMessage() {}
func (x *ListOperationsRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[2]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -300,7 +297,8 @@ func (x *ListOperationsRequest) GetPageToken() string {
return ""
}
-// The response message for [Operations.ListOperations][google.longrunning.Operations.ListOperations].
+// The response message for
+// [Operations.ListOperations][google.longrunning.Operations.ListOperations].
type ListOperationsResponse struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -314,11 +312,9 @@ type ListOperationsResponse struct {
func (x *ListOperationsResponse) Reset() {
*x = ListOperationsResponse{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[3]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[3]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *ListOperationsResponse) String() string {
@@ -329,7 +325,7 @@ func (*ListOperationsResponse) ProtoMessage() {}
func (x *ListOperationsResponse) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[3]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -358,7 +354,8 @@ func (x *ListOperationsResponse) GetNextPageToken() string {
return ""
}
-// The request message for [Operations.CancelOperation][google.longrunning.Operations.CancelOperation].
+// The request message for
+// [Operations.CancelOperation][google.longrunning.Operations.CancelOperation].
type CancelOperationRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -370,11 +367,9 @@ type CancelOperationRequest struct {
func (x *CancelOperationRequest) Reset() {
*x = CancelOperationRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[4]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[4]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *CancelOperationRequest) String() string {
@@ -385,7 +380,7 @@ func (*CancelOperationRequest) ProtoMessage() {}
func (x *CancelOperationRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[4]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -407,7 +402,8 @@ func (x *CancelOperationRequest) GetName() string {
return ""
}
-// The request message for [Operations.DeleteOperation][google.longrunning.Operations.DeleteOperation].
+// The request message for
+// [Operations.DeleteOperation][google.longrunning.Operations.DeleteOperation].
type DeleteOperationRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -419,11 +415,9 @@ type DeleteOperationRequest struct {
func (x *DeleteOperationRequest) Reset() {
*x = DeleteOperationRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[5]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[5]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *DeleteOperationRequest) String() string {
@@ -434,7 +428,7 @@ func (*DeleteOperationRequest) ProtoMessage() {}
func (x *DeleteOperationRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[5]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -456,7 +450,8 @@ func (x *DeleteOperationRequest) GetName() string {
return ""
}
-// The request message for [Operations.WaitOperation][google.longrunning.Operations.WaitOperation].
+// The request message for
+// [Operations.WaitOperation][google.longrunning.Operations.WaitOperation].
type WaitOperationRequest struct {
state protoimpl.MessageState
sizeCache protoimpl.SizeCache
@@ -472,11 +467,9 @@ type WaitOperationRequest struct {
func (x *WaitOperationRequest) Reset() {
*x = WaitOperationRequest{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[6]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[6]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *WaitOperationRequest) String() string {
@@ -487,7 +480,7 @@ func (*WaitOperationRequest) ProtoMessage() {}
func (x *WaitOperationRequest) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[6]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -520,11 +513,10 @@ func (x *WaitOperationRequest) GetTimeout() *durationpb.Duration {
//
// Example:
//
-// rpc LongRunningRecognize(LongRunningRecognizeRequest)
-// returns (google.longrunning.Operation) {
+// rpc Export(ExportRequest) returns (google.longrunning.Operation) {
// option (google.longrunning.operation_info) = {
-// response_type: "LongRunningRecognizeResponse"
-// metadata_type: "LongRunningRecognizeMetadata"
+// response_type: "ExportResponse"
+// metadata_type: "ExportMetadata"
// };
// }
type OperationInfo struct {
@@ -553,11 +545,9 @@ type OperationInfo struct {
func (x *OperationInfo) Reset() {
*x = OperationInfo{}
- if protoimpl.UnsafeEnabled {
- mi := &file_google_longrunning_operations_proto_msgTypes[7]
- ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
- ms.StoreMessageInfo(mi)
- }
+ mi := &file_google_longrunning_operations_proto_msgTypes[7]
+ ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
+ ms.StoreMessageInfo(mi)
}
func (x *OperationInfo) String() string {
@@ -568,7 +558,7 @@ func (*OperationInfo) ProtoMessage() {}
func (x *OperationInfo) ProtoReflect() protoreflect.Message {
mi := &file_google_longrunning_operations_proto_msgTypes[7]
- if protoimpl.UnsafeEnabled && x != nil {
+ if x != nil {
ms := protoimpl.X.MessageStateOf(protoimpl.Pointer(x))
if ms.LoadMessageInfo() == nil {
ms.StoreMessageInfo(mi)
@@ -632,14 +622,14 @@ var file_google_longrunning_operations_proto_rawDesc = []byte{
0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x17, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f,
0x61, 0x70, 0x69, 0x2f, 0x63, 0x6c, 0x69, 0x65, 0x6e, 0x74, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f,
0x1a, 0x19, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75,
- 0x66, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67, 0x6f, 0x6f,
- 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x75, 0x72,
- 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1b, 0x67, 0x6f, 0x6f,
- 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65, 0x6d, 0x70,
- 0x74, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x17, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65,
- 0x2f, 0x72, 0x70, 0x63, 0x2f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x2e, 0x70, 0x72, 0x6f, 0x74,
- 0x6f, 0x1a, 0x20, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62,
- 0x75, 0x66, 0x2f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x2e, 0x70, 0x72,
+ 0x66, 0x2f, 0x61, 0x6e, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x20, 0x67, 0x6f, 0x6f,
+ 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x65, 0x73,
+ 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1e, 0x67,
+ 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64,
+ 0x75, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x1b, 0x67,
+ 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x65,
+ 0x6d, 0x70, 0x74, 0x79, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x1a, 0x17, 0x67, 0x6f, 0x6f, 0x67,
+ 0x6c, 0x65, 0x2f, 0x72, 0x70, 0x63, 0x2f, 0x73, 0x74, 0x61, 0x74, 0x75, 0x73, 0x2e, 0x70, 0x72,
0x6f, 0x74, 0x6f, 0x22, 0xcf, 0x01, 0x0a, 0x09, 0x4f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f,
0x6e, 0x12, 0x12, 0x0a, 0x04, 0x6e, 0x61, 0x6d, 0x65, 0x18, 0x01, 0x20, 0x01, 0x28, 0x09, 0x52,
0x04, 0x6e, 0x61, 0x6d, 0x65, 0x12, 0x30, 0x0a, 0x08, 0x6d, 0x65, 0x74, 0x61, 0x64, 0x61, 0x74,
@@ -739,17 +729,18 @@ var file_google_longrunning_operations_proto_rawDesc = []byte{
0x32, 0x21, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x6c, 0x6f, 0x6e, 0x67, 0x72, 0x75,
0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x2e, 0x4f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49,
0x6e, 0x66, 0x6f, 0x52, 0x0d, 0x6f, 0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x49, 0x6e,
- 0x66, 0x6f, 0x42, 0x9d, 0x01, 0x0a, 0x16, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
+ 0x66, 0x6f, 0x42, 0xa5, 0x01, 0x0a, 0x16, 0x63, 0x6f, 0x6d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c,
0x65, 0x2e, 0x6c, 0x6f, 0x6e, 0x67, 0x72, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x42, 0x0f, 0x4f,
0x70, 0x65, 0x72, 0x61, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x50, 0x72, 0x6f, 0x74, 0x6f, 0x50, 0x01,
0x5a, 0x43, 0x63, 0x6c, 0x6f, 0x75, 0x64, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x63,
0x6f, 0x6d, 0x2f, 0x67, 0x6f, 0x2f, 0x6c, 0x6f, 0x6e, 0x67, 0x72, 0x75, 0x6e, 0x6e, 0x69, 0x6e,
0x67, 0x2f, 0x61, 0x75, 0x74, 0x6f, 0x67, 0x65, 0x6e, 0x2f, 0x6c, 0x6f, 0x6e, 0x67, 0x72, 0x75,
0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x70, 0x62, 0x3b, 0x6c, 0x6f, 0x6e, 0x67, 0x72, 0x75, 0x6e, 0x6e,
- 0x69, 0x6e, 0x67, 0x70, 0x62, 0xf8, 0x01, 0x01, 0xaa, 0x02, 0x12, 0x47, 0x6f, 0x6f, 0x67, 0x6c,
- 0x65, 0x2e, 0x4c, 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0xca, 0x02, 0x12,
- 0x47, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x5c, 0x4c, 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e, 0x69,
- 0x6e, 0x67, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x33,
+ 0x69, 0x6e, 0x67, 0x70, 0x62, 0xf8, 0x01, 0x01, 0xa2, 0x02, 0x05, 0x47, 0x4c, 0x52, 0x55, 0x4e,
+ 0xaa, 0x02, 0x12, 0x47, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x4c, 0x6f, 0x6e, 0x67, 0x52, 0x75,
+ 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0xca, 0x02, 0x12, 0x47, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x5c, 0x4c,
+ 0x6f, 0x6e, 0x67, 0x52, 0x75, 0x6e, 0x6e, 0x69, 0x6e, 0x67, 0x62, 0x06, 0x70, 0x72, 0x6f, 0x74,
+ 0x6f, 0x33,
}
var (
@@ -810,104 +801,6 @@ func file_google_longrunning_operations_proto_init() {
if File_google_longrunning_operations_proto != nil {
return
}
- if !protoimpl.UnsafeEnabled {
- file_google_longrunning_operations_proto_msgTypes[0].Exporter = func(v any, i int) any {
- switch v := v.(*Operation); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[1].Exporter = func(v any, i int) any {
- switch v := v.(*GetOperationRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[2].Exporter = func(v any, i int) any {
- switch v := v.(*ListOperationsRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[3].Exporter = func(v any, i int) any {
- switch v := v.(*ListOperationsResponse); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[4].Exporter = func(v any, i int) any {
- switch v := v.(*CancelOperationRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[5].Exporter = func(v any, i int) any {
- switch v := v.(*DeleteOperationRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[6].Exporter = func(v any, i int) any {
- switch v := v.(*WaitOperationRequest); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- file_google_longrunning_operations_proto_msgTypes[7].Exporter = func(v any, i int) any {
- switch v := v.(*OperationInfo); i {
- case 0:
- return &v.state
- case 1:
- return &v.sizeCache
- case 2:
- return &v.unknownFields
- default:
- return nil
- }
- }
- }
file_google_longrunning_operations_proto_msgTypes[0].OneofWrappers = []any{
(*Operation_Error)(nil),
(*Operation_Response)(nil),
@@ -947,14 +840,6 @@ const _ = grpc.SupportPackageIsVersion6
type OperationsClient interface {
// Lists operations that match the specified filter in the request. If the
// server doesn't support this method, it returns `UNIMPLEMENTED`.
- //
- // NOTE: the `name` binding allows API services to override the binding
- // to use different resource name schemes, such as `users/*/operations`. To
- // override the binding, API services can add a binding such as
- // `"/v1/{name=users/*}/operations"` to their service configuration.
- // For backwards compatibility, the default name includes the operations
- // collection id, however overriding users must ensure the name binding
- // is the parent resource, without the operations collection id.
ListOperations(ctx context.Context, in *ListOperationsRequest, opts ...grpc.CallOption) (*ListOperationsResponse, error)
// Gets the latest state of a long-running operation. Clients can use this
// method to poll the operation result at intervals as recommended by the API
@@ -973,8 +858,9 @@ type OperationsClient interface {
// other methods to check whether the cancellation succeeded or whether the
// operation completed despite cancellation. On successful cancellation,
// the operation is not deleted; instead, it becomes an operation with
- // an [Operation.error][google.longrunning.Operation.error] value with a [google.rpc.Status.code][google.rpc.Status.code] of 1,
- // corresponding to `Code.CANCELLED`.
+ // an [Operation.error][google.longrunning.Operation.error] value with a
+ // [google.rpc.Status.code][google.rpc.Status.code] of `1`, corresponding to
+ // `Code.CANCELLED`.
CancelOperation(ctx context.Context, in *CancelOperationRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
// Waits until the specified long-running operation is done or reaches at most
// a specified timeout, returning the latest state. If the operation is
@@ -1045,14 +931,6 @@ func (c *operationsClient) WaitOperation(ctx context.Context, in *WaitOperationR
type OperationsServer interface {
// Lists operations that match the specified filter in the request. If the
// server doesn't support this method, it returns `UNIMPLEMENTED`.
- //
- // NOTE: the `name` binding allows API services to override the binding
- // to use different resource name schemes, such as `users/*/operations`. To
- // override the binding, API services can add a binding such as
- // `"/v1/{name=users/*}/operations"` to their service configuration.
- // For backwards compatibility, the default name includes the operations
- // collection id, however overriding users must ensure the name binding
- // is the parent resource, without the operations collection id.
ListOperations(context.Context, *ListOperationsRequest) (*ListOperationsResponse, error)
// Gets the latest state of a long-running operation. Clients can use this
// method to poll the operation result at intervals as recommended by the API
@@ -1071,8 +949,9 @@ type OperationsServer interface {
// other methods to check whether the cancellation succeeded or whether the
// operation completed despite cancellation. On successful cancellation,
// the operation is not deleted; instead, it becomes an operation with
- // an [Operation.error][google.longrunning.Operation.error] value with a [google.rpc.Status.code][google.rpc.Status.code] of 1,
- // corresponding to `Code.CANCELLED`.
+ // an [Operation.error][google.longrunning.Operation.error] value with a
+ // [google.rpc.Status.code][google.rpc.Status.code] of `1`, corresponding to
+ // `Code.CANCELLED`.
CancelOperation(context.Context, *CancelOperationRequest) (*emptypb.Empty, error)
// Waits until the specified long-running operation is done or reaches at most
// a specified timeout, returning the latest state. If the operation is
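
The comment edits in this file sharpen the `Operation` result contract: when `done` is true, at most one of `error` or `response` is populated, and some services may set neither. A minimal consumer sketch honoring that contract:

```go
package example

import (
	"fmt"

	longrunningpb "cloud.google.com/go/longrunning/autogen/longrunningpb"
)

// inspect reports the terminal state of a polled Operation, covering all
// three outcomes the updated comments allow for.
func inspect(op *longrunningpb.Operation) {
	if !op.GetDone() {
		fmt.Println("still running:", op.GetName())
		return
	}
	switch {
	case op.GetError() != nil:
		fmt.Println("failed:", op.GetError().GetCode(), op.GetError().GetMessage())
	case op.GetResponse() != nil:
		fmt.Println("succeeded; response type:", op.GetResponse().GetTypeUrl())
	default:
		fmt.Println("done, but the service provided no result")
	}
}
```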
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go b/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go
index 3be65a155e67e..a0a229cf0e9aa 100644
--- a/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go
+++ b/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go
@@ -20,7 +20,7 @@ import (
"bytes"
"context"
"fmt"
- "io"
+ "log/slog"
"math"
"net/http"
"net/url"
@@ -28,7 +28,6 @@ import (
longrunningpb "cloud.google.com/go/longrunning/autogen/longrunningpb"
gax "github.com/googleapis/gax-go/v2"
- "google.golang.org/api/googleapi"
"google.golang.org/api/iterator"
"google.golang.org/api/option"
"google.golang.org/api/option/internaloption"
@@ -188,12 +187,12 @@ type internalOperationsClient interface {
// Manages long-running operations with an API service.
//
// When an API method normally takes a long time to complete, it can be designed
-// to return Operation to the client, and the client can use this
-// interface to receive the real response asynchronously by polling the
-// operation resource, or pass the operation resource to another API (such as
-// Google Cloud Pub/Sub API) to receive the response. Any API service that
-// returns long-running operations should implement the Operations interface
-// so developers can have a consistent client experience.
+// to return Operation to the client, and the
+// client can use this interface to receive the real response asynchronously by
+// polling the operation resource, or pass the operation resource to another API
+// (such as Pub/Sub API) to receive the response. Any API service that returns
+// long-running operations should implement the Operations interface so
+// developers can have a consistent client experience.
type OperationsClient struct {
// The internal transport-dependent client.
internalClient internalOperationsClient
@@ -227,14 +226,6 @@ func (c *OperationsClient) Connection() *grpc.ClientConn {
// ListOperations lists operations that match the specified filter in the request. If the
// server doesn’t support this method, it returns UNIMPLEMENTED.
-//
-// NOTE: the name binding allows API services to override the binding
-// to use different resource name schemes, such as users/*/operations. To
-// override the binding, API services can add a binding such as
-// "/v1/{name=users/*}/operations" to their service configuration.
-// For backwards compatibility, the default name includes the operations
-// collection id, however overriding users must ensure the name binding
-// is the parent resource, without the operations collection id.
func (c *OperationsClient) ListOperations(ctx context.Context, req *longrunningpb.ListOperationsRequest, opts ...gax.CallOption) *OperationIterator {
return c.internalClient.ListOperations(ctx, req, opts...)
}
@@ -262,8 +253,9 @@ func (c *OperationsClient) DeleteOperation(ctx context.Context, req *longrunning
// other methods to check whether the cancellation succeeded or whether the
// operation completed despite cancellation. On successful cancellation,
// the operation is not deleted; instead, it becomes an operation with
-// an Operation.error value with a google.rpc.Status.code of 1,
-// corresponding to Code.CANCELLED.
+// an Operation.error value with a
+// google.rpc.Status.code of 1, corresponding to
+// Code.CANCELLED.
func (c *OperationsClient) CancelOperation(ctx context.Context, req *longrunningpb.CancelOperationRequest, opts ...gax.CallOption) error {
return c.internalClient.CancelOperation(ctx, req, opts...)
}
@@ -296,6 +288,8 @@ type operationsGRPCClient struct {
// The x-goog-* metadata to be sent with each request.
xGoogHeaders []string
+
+ logger *slog.Logger
}
// NewOperationsClient creates a new operations client based on gRPC.
@@ -304,12 +298,12 @@ type operationsGRPCClient struct {
// Manages long-running operations with an API service.
//
// When an API method normally takes long time to complete, it can be designed
-// to return Operation to the client, and the client can use this
-// interface to receive the real response asynchronously by polling the
-// operation resource, or pass the operation resource to another API (such as
-// Google Cloud Pub/Sub API) to receive the response. Any API service that
-// returns long-running operations should implement the Operations interface
-// so developers can have a consistent client experience.
+// to return Operation to the client, and the
+// client can use this interface to receive the real response asynchronously by
+// polling the operation resource, or pass the operation resource to another API
+// (such as Pub/Sub API) to receive the response. Any API service that returns
+// long-running operations should implement the Operations interface so
+// developers can have a consistent client experience.
func NewOperationsClient(ctx context.Context, opts ...option.ClientOption) (*OperationsClient, error) {
clientOpts := defaultOperationsGRPCClientOptions()
if newOperationsClientHook != nil {
@@ -330,6 +324,7 @@ func NewOperationsClient(ctx context.Context, opts ...option.ClientOption) (*Ope
connPool: connPool,
operationsClient: longrunningpb.NewOperationsClient(connPool),
CallOptions: &client.CallOptions,
+ logger: internaloption.GetLogger(opts),
}
c.setGoogleClientInfo()
@@ -376,6 +371,8 @@ type operationsRESTClient struct {
// Points back to the CallOptions field of the containing OperationsClient
CallOptions **OperationsCallOptions
+
+ logger *slog.Logger
}
// NewOperationsRESTClient creates a new operations rest client.
@@ -383,12 +380,12 @@ type operationsRESTClient struct {
// Manages long-running operations with an API service.
//
// When an API method normally takes long time to complete, it can be designed
-// to return Operation to the client, and the client can use this
-// interface to receive the real response asynchronously by polling the
-// operation resource, or pass the operation resource to another API (such as
-// Google Cloud Pub/Sub API) to receive the response. Any API service that
-// returns long-running operations should implement the Operations interface
-// so developers can have a consistent client experience.
+// to return Operation to the client, and the
+// client can use this interface to receive the real response asynchronously by
+// polling the operation resource, or pass the operation resource to another API
+// (such as Pub/Sub API) to receive the response. Any API service that returns
+// long-running operations should implement the Operations interface so
+// developers can have a consistent client experience.
func NewOperationsRESTClient(ctx context.Context, opts ...option.ClientOption) (*OperationsClient, error) {
clientOpts := append(defaultOperationsRESTClientOptions(), opts...)
httpClient, endpoint, err := httptransport.NewClient(ctx, clientOpts...)
@@ -401,6 +398,7 @@ func NewOperationsRESTClient(ctx context.Context, opts ...option.ClientOption) (
endpoint: endpoint,
httpClient: httpClient,
CallOptions: &callOpts,
+ logger: internaloption.GetLogger(opts),
}
c.setGoogleClientInfo()
@@ -464,7 +462,7 @@ func (c *operationsGRPCClient) ListOperations(ctx context.Context, req *longrunn
}
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
- resp, err = c.operationsClient.ListOperations(ctx, req, settings.GRPC...)
+ resp, err = executeRPC(ctx, c.operationsClient.ListOperations, req, settings.GRPC, c.logger, "ListOperations")
return err
}, opts...)
if err != nil {
@@ -499,7 +497,7 @@ func (c *operationsGRPCClient) GetOperation(ctx context.Context, req *longrunnin
var resp *longrunningpb.Operation
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
- resp, err = c.operationsClient.GetOperation(ctx, req, settings.GRPC...)
+ resp, err = executeRPC(ctx, c.operationsClient.GetOperation, req, settings.GRPC, c.logger, "GetOperation")
return err
}, opts...)
if err != nil {
@@ -516,7 +514,7 @@ func (c *operationsGRPCClient) DeleteOperation(ctx context.Context, req *longrun
opts = append((*c.CallOptions).DeleteOperation[0:len((*c.CallOptions).DeleteOperation):len((*c.CallOptions).DeleteOperation)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
- _, err = c.operationsClient.DeleteOperation(ctx, req, settings.GRPC...)
+ _, err = executeRPC(ctx, c.operationsClient.DeleteOperation, req, settings.GRPC, c.logger, "DeleteOperation")
return err
}, opts...)
return err
@@ -530,7 +528,7 @@ func (c *operationsGRPCClient) CancelOperation(ctx context.Context, req *longrun
opts = append((*c.CallOptions).CancelOperation[0:len((*c.CallOptions).CancelOperation):len((*c.CallOptions).CancelOperation)], opts...)
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
- _, err = c.operationsClient.CancelOperation(ctx, req, settings.GRPC...)
+ _, err = executeRPC(ctx, c.operationsClient.CancelOperation, req, settings.GRPC, c.logger, "CancelOperation")
return err
}, opts...)
return err
@@ -542,7 +540,7 @@ func (c *operationsGRPCClient) WaitOperation(ctx context.Context, req *longrunni
var resp *longrunningpb.Operation
err := gax.Invoke(ctx, func(ctx context.Context, settings gax.CallSettings) error {
var err error
- resp, err = c.operationsClient.WaitOperation(ctx, req, settings.GRPC...)
+ resp, err = executeRPC(ctx, c.operationsClient.WaitOperation, req, settings.GRPC, c.logger, "WaitOperation")
return err
}, opts...)
if err != nil {
@@ -553,14 +551,6 @@ func (c *operationsGRPCClient) WaitOperation(ctx context.Context, req *longrunni
// ListOperations lists operations that match the specified filter in the request. If the
// server doesn’t support this method, it returns UNIMPLEMENTED.
-//
-// NOTE: the name binding allows API services to override the binding
-// to use different resource name schemes, such as users/*/operations. To
-// override the binding, API services can add a binding such as
-// "/v1/{name=users/*}/operations" to their service configuration.
-// For backwards compatibility, the default name includes the operations
-// collection id, however overriding users must ensure the name binding
-// is the parent resource, without the operations collection id.
func (c *operationsRESTClient) ListOperations(ctx context.Context, req *longrunningpb.ListOperationsRequest, opts ...gax.CallOption) *OperationIterator {
it := &OperationIterator{}
req = proto.Clone(req).(*longrunningpb.ListOperationsRequest)
@@ -607,21 +597,10 @@ func (c *operationsRESTClient) ListOperations(ctx context.Context, req *longrunn
}
httpReq.Header = headers
- httpRsp, err := c.httpClient.Do(httpReq)
- if err != nil {
- return err
- }
- defer httpRsp.Body.Close()
-
- if err = googleapi.CheckResponse(httpRsp); err != nil {
- return err
- }
-
- buf, err := io.ReadAll(httpRsp.Body)
+ buf, err := executeHTTPRequest(ctx, c.httpClient, httpReq, c.logger, nil, "ListOperations")
if err != nil {
return err
}
-
if err := unm.Unmarshal(buf, resp); err != nil {
return err
}
@@ -681,17 +660,7 @@ func (c *operationsRESTClient) GetOperation(ctx context.Context, req *longrunnin
httpReq = httpReq.WithContext(ctx)
httpReq.Header = headers
- httpRsp, err := c.httpClient.Do(httpReq)
- if err != nil {
- return err
- }
- defer httpRsp.Body.Close()
-
- if err = googleapi.CheckResponse(httpRsp); err != nil {
- return err
- }
-
- buf, err := io.ReadAll(httpRsp.Body)
+ buf, err := executeHTTPRequest(ctx, c.httpClient, httpReq, c.logger, nil, "GetOperation")
if err != nil {
return err
}
@@ -736,15 +705,8 @@ func (c *operationsRESTClient) DeleteOperation(ctx context.Context, req *longrun
httpReq = httpReq.WithContext(ctx)
httpReq.Header = headers
- httpRsp, err := c.httpClient.Do(httpReq)
- if err != nil {
- return err
- }
- defer httpRsp.Body.Close()
-
- // Returns nil if there is no error, otherwise wraps
- // the response code and body into a non-nil error
- return googleapi.CheckResponse(httpRsp)
+ _, err = executeHTTPRequest(ctx, c.httpClient, httpReq, c.logger, nil, "DeleteOperation")
+ return err
}, opts...)
}
@@ -756,8 +718,9 @@ func (c *operationsRESTClient) DeleteOperation(ctx context.Context, req *longrun
// other methods to check whether the cancellation succeeded or whether the
// operation completed despite cancellation. On successful cancellation,
// the operation is not deleted; instead, it becomes an operation with
-// an Operation.error value with a google.rpc.Status.code of 1,
-// corresponding to Code.CANCELLED.
+// an Operation.error value with a
+// google.rpc.Status.code of 1, corresponding to
+// Code.CANCELLED.
func (c *operationsRESTClient) CancelOperation(ctx context.Context, req *longrunningpb.CancelOperationRequest, opts ...gax.CallOption) error {
m := protojson.MarshalOptions{AllowPartial: true, UseEnumNumbers: true}
jsonReq, err := m.Marshal(req)
@@ -788,15 +751,8 @@ func (c *operationsRESTClient) CancelOperation(ctx context.Context, req *longrun
httpReq = httpReq.WithContext(ctx)
httpReq.Header = headers
- httpRsp, err := c.httpClient.Do(httpReq)
- if err != nil {
- return err
- }
- defer httpRsp.Body.Close()
-
- // Returns nil if there is no error, otherwise wraps
- // the response code and body into a non-nil error
- return googleapi.CheckResponse(httpRsp)
+ _, err = executeHTTPRequest(ctx, c.httpClient, httpReq, c.logger, jsonReq, "CancelOperation")
+ return err
}, opts...)
}
@@ -847,17 +803,7 @@ func (c *operationsRESTClient) WaitOperation(ctx context.Context, req *longrunni
httpReq = httpReq.WithContext(ctx)
httpReq.Header = headers
- httpRsp, err := c.httpClient.Do(httpReq)
- if err != nil {
- return err
- }
- defer httpRsp.Body.Close()
-
- if err = googleapi.CheckResponse(httpRsp); err != nil {
- return err
- }
-
- buf, err := io.ReadAll(httpRsp.Body)
+ buf, err := executeHTTPRequest(ctx, c.httpClient, httpReq, c.logger, nil, "WaitOperation")
if err != nil {
return err
}
diff --git a/vendor/cloud.google.com/go/release-please-config-yoshi-submodules.json b/vendor/cloud.google.com/go/release-please-config-yoshi-submodules.json
index 3d73202baf892..f2029f249bb50 100644
--- a/vendor/cloud.google.com/go/release-please-config-yoshi-submodules.json
+++ b/vendor/cloud.google.com/go/release-please-config-yoshi-submodules.json
@@ -252,6 +252,9 @@
"memcache": {
"component": "memcache"
},
+ "memorystore": {
+ "component": "memorystore"
+ },
"metastore": {
"component": "metastore"
},
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 81e2943465455..5e30ec37a3aba 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1,7 +1,7 @@
# cel.dev/expr v0.19.1
## explicit; go 1.21.1
cel.dev/expr
-# cloud.google.com/go v0.117.0
+# cloud.google.com/go v0.118.0
## explicit; go 1.21
cloud.google.com/go
cloud.google.com/go/internal
@@ -30,8 +30,8 @@ cloud.google.com/go/auth/internal/transport/cert
# cloud.google.com/go/auth/oauth2adapt v0.2.7
## explicit; go 1.22
cloud.google.com/go/auth/oauth2adapt
-# cloud.google.com/go/bigtable v1.34.0
-## explicit; go 1.21
+# cloud.google.com/go/bigtable v1.35.0
+## explicit; go 1.22.7
cloud.google.com/go/bigtable
cloud.google.com/go/bigtable/admin/apiv2/adminpb
cloud.google.com/go/bigtable/apiv2/bigtablepb
@@ -41,11 +41,11 @@ cloud.google.com/go/bigtable/internal/option
# cloud.google.com/go/compute/metadata v0.6.0
## explicit; go 1.21
cloud.google.com/go/compute/metadata
-# cloud.google.com/go/iam v1.2.2
+# cloud.google.com/go/iam v1.3.1
## explicit; go 1.21
cloud.google.com/go/iam
cloud.google.com/go/iam/apiv1/iampb
-# cloud.google.com/go/longrunning v0.6.2
+# cloud.google.com/go/longrunning v0.6.4
## explicit; go 1.21
cloud.google.com/go/longrunning
cloud.google.com/go/longrunning/autogen
@@ -2028,14 +2028,14 @@ google.golang.org/api/support/bundler
google.golang.org/api/transport
google.golang.org/api/transport/grpc
google.golang.org/api/transport/http
-# google.golang.org/genproto v0.0.0-20241118233622-e639e219e697
+# google.golang.org/genproto v0.0.0-20241216192217-9240e9c98484
## explicit; go 1.21
google.golang.org/genproto/googleapis/type/calendarperiod
google.golang.org/genproto/googleapis/type/date
google.golang.org/genproto/googleapis/type/expr
google.golang.org/genproto/googleapis/type/timeofday
google.golang.org/genproto/protobuf/api
-# google.golang.org/genproto/googleapis/api v0.0.0-20250102185135-69823020774d
+# google.golang.org/genproto/googleapis/api v0.0.0-20250115164207-1a7da9e5054f
## explicit; go 1.22
google.golang.org/genproto/googleapis/api
google.golang.org/genproto/googleapis/api/annotations
|
fix
|
update module cloud.google.com/go/bigtable to v1.35.0 (main) (#15951)
|
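The vendor diff above is a mechanical refactor: every per-call sequence of `httpClient.Do`, `googleapi.CheckResponse`, and `io.ReadAll` collapses into `executeRPC`/`executeHTTPRequest` helpers that carry a `log/slog` logger plus the RPC name. A minimal sketch of the gRPC-side helper, with a signature inferred from the call sites (`executeRPC(ctx, c.operationsClient.GetOperation, req, settings.GRPC, c.logger, "GetOperation")`); the real helper is internal to the generated package, so treat the names here as assumptions:

```go
package main

import (
	"context"
	"log/slog"
	"os"

	"google.golang.org/grpc"
)

// executeRPC funnels a unary gRPC method through one place that logs the
// RPC name and outcome. The generic parameters stand in for the
// request/response pair of whichever generated method is passed in.
func executeRPC[Req, Resp any](
	ctx context.Context,
	fn func(context.Context, Req, ...grpc.CallOption) (Resp, error),
	req Req,
	opts []grpc.CallOption,
	logger *slog.Logger,
	rpc string,
) (Resp, error) {
	logger.DebugContext(ctx, "rpc start", "rpc", rpc)
	resp, err := fn(ctx, req, opts...)
	if err != nil {
		logger.ErrorContext(ctx, "rpc failed", "rpc", rpc, "err", err)
	}
	return resp, err
}

func main() {
	logger := slog.New(slog.NewTextHandler(os.Stderr, nil))
	// A fake unary call standing in for a generated client method.
	echo := func(_ context.Context, req string, _ ...grpc.CallOption) (string, error) {
		return "echo:" + req, nil
	}
	resp, _ := executeRPC(context.Background(), echo, "ping", nil, logger, "Echo")
	logger.Info("done", "resp", resp)
}
```

On the REST side the same helper shape additionally folds in the status check and body read that each method previously repeated by hand.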
76d35cc97f0cca4cd0375f5003a4028eec9cf654
|
2024-11-05 19:19:33
|
Salva Corts
|
fix(blooms): Skip multi-tenant TSDBs during bloom planning (#14770)
| false
|
diff --git a/pkg/bloombuild/planner/planner.go b/pkg/bloombuild/planner/planner.go
index 33f0bf64c833b..7c13dedb50452 100644
--- a/pkg/bloombuild/planner/planner.go
+++ b/pkg/bloombuild/planner/planner.go
@@ -405,7 +405,6 @@ func (p *Planner) computeTasks(
// Resolve TSDBs
tsdbs, err := p.tsdbStore.ResolveTSDBs(ctx, table, tenant)
if err != nil {
- level.Error(logger).Log("msg", "failed to resolve tsdbs", "err", err)
return nil, nil, fmt.Errorf("failed to resolve tsdbs: %w", err)
}
@@ -664,9 +663,14 @@ func (p *Planner) loadTenantTables(
}
for tenants.Next() && tenants.Err() == nil && ctx.Err() == nil {
- p.metrics.tenantsDiscovered.Inc()
tenant := tenants.At()
+ if tenant == "" {
+			// Tables that have not been fully compacted yet will have multi-tenant TSDBs for which the tenant is "";
+			// in this case we just skip the tenant.
+ continue
+ }
+ p.metrics.tenantsDiscovered.Inc()
if !p.limits.BloomCreationEnabled(tenant) {
level.Debug(p.logger).Log("msg", "bloom creation disabled for tenant", "tenant", tenant)
continue
|
fix
|
Skip multi-tenant TSDBs during bloom planning (#14770)
|
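The fix above hinges on an invariant worth spelling out: tables that have not been fully compacted carry multi-tenant TSDBs whose tenant ID resolves to the empty string, and `tenantsDiscovered.Inc()` now moves below the skip so those phantom tenants are never counted. A toy sketch of the guard, with plain slices and an int standing in for the planner's iterator and Prometheus counter:

```go
package main

import "fmt"

// planTenants mirrors the fixed loop: skip the empty tenant produced by a
// multi-tenant TSDB before counting, so the discovery metric stays honest.
func planTenants(tenants []string) (discovered int, planned []string) {
	for _, tenant := range tenants {
		if tenant == "" {
			// Not-yet-compacted table: multi-tenant TSDB, no real tenant.
			continue
		}
		discovered++
		planned = append(planned, tenant)
	}
	return discovered, planned
}

func main() {
	n, ts := planTenants([]string{"tenant-a", "", "tenant-b"})
	fmt.Println(n, ts) // 2 [tenant-a tenant-b]
}
```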
e455a11b49ddcf1b2610a96ebfcd6b54dcc4f75f
|
2023-07-30 13:19:19
|
Eng Zer Jun
|
ruler: compare header with `strings.EqualFold` (#10103)
| false
|
diff --git a/pkg/ruler/registry.go b/pkg/ruler/registry.go
index 8c6a77a3be80b..f7d5982d93e4d 100644
--- a/pkg/ruler/registry.go
+++ b/pkg/ruler/registry.go
@@ -202,7 +202,7 @@ func (r *walRegistry) getTenantConfig(tenant string) (instance.Config, error) {
// ensure that no variation of the X-Scope-OrgId header can be added, which might trick authentication
for k := range clt.Headers {
- if strings.ToLower(user.OrgIDHeaderName) == strings.ToLower(strings.TrimSpace(k)) {
+ if strings.EqualFold(user.OrgIDHeaderName, strings.TrimSpace(k)) {
delete(clt.Headers, k)
}
}
|
ruler
|
compare header with `strings.EqualFold` (#10103)
|
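The one-line change above is a standard Go idiom: `strings.EqualFold(a, b)` compares with Unicode case folding and short-circuits on the first mismatch, while `strings.ToLower(a) == strings.ToLower(b)` first allocates lowered copies of both strings. A self-contained sketch of the header scrub, with an illustrative header map (`X-Scope-OrgID` is the header the real code guards):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	const orgIDHeader = "X-Scope-OrgID"
	headers := map[string]string{
		" x-scope-orgid ": "sneaky-tenant",
		"Content-Type":    "application/json",
	}

	// strings.EqualFold folds case rune by rune and stops at the first
	// mismatch; no lowered copies of either string are allocated.
	for k := range headers {
		if strings.EqualFold(orgIDHeader, strings.TrimSpace(k)) {
			delete(headers, k) // deleting during range is safe in Go
		}
	}
	fmt.Println(headers) // map[Content-Type:application/json]
}
```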
50d8d9cddff28b69f8723bc063764a96ddc4f897
|
2025-01-07 20:26:24
|
chlorine
|
docs: remove duplicated title (#15530)
| false
|
diff --git a/docs/sources/get-started/quick-start.md b/docs/sources/get-started/quick-start.md
index 2bf3af714da45..013887acfa491 100644
--- a/docs/sources/get-started/quick-start.md
+++ b/docs/sources/get-started/quick-start.md
@@ -349,7 +349,6 @@ You have completed the Loki Quickstart demo. So where to go next?
Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/get-started/quick-start/).
{{< /docs/ignore >}}
-## Complete metrics, logs, traces, and profiling example
If you would like to run a demonstration environment that includes Mimir, Loki, Tempo, and Grafana, you can use [Introduction to Metrics, Logs, Traces, and Profiling in Grafana](https://github.com/grafana/intro-to-mlt).
It's a self-contained environment for learning about Mimir, Loki, Tempo, and Grafana.
|
docs
|
remove duplicated title (#15530)
|
b4e6c599368278b8da20852a9acc4722f70af603
|
2022-06-20 15:42:17
|
Michel Hollands
|
compactor: add per tenant compaction delete enabled flag (#6410)
| false
|
diff --git a/CHANGELOG.md b/CHANGELOG.md
index eabc5fdfe5326..1f6f2b799cf9d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+* [6410](https://github.com/grafana/loki/pull/6410) **MichelHollands**: Add support for per tenant delete API access enabling.
* [6372](https://github.com/grafana/loki/pull/6372) **splitice**: Add support for numbers in JSON fields.
* [6105](https://github.com/grafana/loki/pull/6105) **rutgerke** Export metrics for the Promtail journal target.
* [6099](https://github.com/grafana/loki/pull/6099) **cstyan**: Drop lines with malformed JSON in Promtail JSON pipeline stage.
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index d187eb2096a05..e8ee5cc747c9a 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -2372,6 +2372,10 @@ The `limits_config` block configures global and per-tenant limits in Loki.
# This also determines how cache keys are chosen when result caching is enabled
# CLI flag: -querier.split-queries-by-interval
[split_queries_by_interval: <duration> | default = 30m]
+
+# When true, access to the deletion API is enabled.
+# CLI flag: -compactor.allow_deletes
+[allow_deletes: <boolean> | default = false]
```
## sigv4_config
diff --git a/docs/sources/operations/storage/logs-deletion.md b/docs/sources/operations/storage/logs-deletion.md
index 5c3bb31dfc8f4..acc75a390d500 100644
--- a/docs/sources/operations/storage/logs-deletion.md
+++ b/docs/sources/operations/storage/logs-deletion.md
@@ -24,9 +24,10 @@ With `whole-stream-deletion`, all the log entries matching the query given in th
With `filter-only`, log lines matching the query in the delete request are filtered out when querying Loki. They are not removed from the on-disk chunks.
With `filter-and-delete`, log lines matching the query in the delete request are filtered out when querying Loki, and they are also removed from the on-disk chunks.
-
A delete request may be canceled within a configurable cancellation period. Set the `delete_request_cancel_period` in the Compactor's YAML configuration or on the command line when invoking Loki. Its default value is 24h.
+Access to the deletion API can be enabled per tenant via the `allow_deletes` setting.
+
## Compactor endpoints
The Compactor exposes endpoints to allow for the deletion of log entries from specified streams.
diff --git a/integration/loki_micro_services_delete_test.go b/integration/loki_micro_services_delete_test.go
index dbd1b97c8dc1f..1949d617ba4e1 100644
--- a/integration/loki_micro_services_delete_test.go
+++ b/integration/loki_micro_services_delete_test.go
@@ -26,6 +26,7 @@ func TestMicroServicesDeleteRequest(t *testing.T) {
"-boltdb.shipper.compactor.deletion-mode=filter-and-delete",
// By default a minute is added to the delete request start time. This compensates for that.
"-boltdb.shipper.compactor.delete-request-cancel-period=-60s",
+ "-compactor.allow-deletes=true",
)
tIndexGateway = clu.AddComponent(
"index-gateway",
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 25b2bf8474294..d39e4ef957870 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -873,10 +873,10 @@ func (t *Loki) initCompactor() (services.Service, error) {
if t.Cfg.CompactorConfig.RetentionEnabled {
switch t.compactor.DeleteMode() {
case deletion.WholeStreamDeletion, deletion.FilterOnly, deletion.FilterAndDelete:
- t.Server.HTTP.Path("/loki/api/v1/delete").Methods("PUT", "POST").Handler(t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.compactor.DeleteRequestsHandler.AddDeleteRequestHandler)))
- t.Server.HTTP.Path("/loki/api/v1/delete").Methods("GET").Handler(t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.compactor.DeleteRequestsHandler.GetAllDeleteRequestsHandler)))
- t.Server.HTTP.Path("/loki/api/v1/delete").Methods("DELETE").Handler(t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.compactor.DeleteRequestsHandler.CancelDeleteRequestHandler)))
- t.Server.HTTP.Path("/loki/api/v1/cache/generation_numbers").Methods("GET").Handler(t.HTTPAuthMiddleware.Wrap(http.HandlerFunc(t.compactor.DeleteRequestsHandler.GetCacheGenerationNumberHandler)))
+ t.Server.HTTP.Path("/loki/api/v1/delete").Methods("PUT", "POST").Handler(t.HTTPAuthMiddleware.Wrap(t.compactor.DeleteRequestsHandler.AddDeleteRequestHandler()))
+ t.Server.HTTP.Path("/loki/api/v1/delete").Methods("GET").Handler(t.HTTPAuthMiddleware.Wrap(t.compactor.DeleteRequestsHandler.GetAllDeleteRequestsHandler()))
+ t.Server.HTTP.Path("/loki/api/v1/delete").Methods("DELETE").Handler(t.HTTPAuthMiddleware.Wrap(t.compactor.DeleteRequestsHandler.CancelDeleteRequestHandler()))
+ t.Server.HTTP.Path("/loki/api/v1/cache/generation_numbers").Methods("GET").Handler(t.HTTPAuthMiddleware.Wrap(t.compactor.DeleteRequestsHandler.GetCacheGenerationNumberHandler()))
default:
break
}
diff --git a/pkg/storage/stores/shipper/compactor/compactor.go b/pkg/storage/stores/shipper/compactor/compactor.go
index 0c6673c52c056..aa55ba11d7e67 100644
--- a/pkg/storage/stores/shipper/compactor/compactor.go
+++ b/pkg/storage/stores/shipper/compactor/compactor.go
@@ -265,6 +265,7 @@ func (c *Compactor) initDeletes(r prometheus.Registerer, limits retention.Limits
c.DeleteRequestsHandler = deletion.NewDeleteRequestHandler(
c.deleteRequestsStore,
c.cfg.DeleteRequestCancelPeriod,
+ limits,
r,
)
diff --git a/pkg/storage/stores/shipper/compactor/deletion/request_handler.go b/pkg/storage/stores/shipper/compactor/deletion/request_handler.go
index 4566a7c0f4575..acead9175ba14 100644
--- a/pkg/storage/stores/shipper/compactor/deletion/request_handler.go
+++ b/pkg/storage/stores/shipper/compactor/deletion/request_handler.go
@@ -12,22 +12,27 @@ import (
"github.com/grafana/dskit/tenant"
+ "github.com/grafana/loki/pkg/storage/stores/shipper/compactor/retention"
"github.com/grafana/loki/pkg/util"
util_log "github.com/grafana/loki/pkg/util/log"
)
+const deletionNotAvailableMsg = "deletion is not available for this tenant"
+
// DeleteRequestHandler provides handlers for delete requests
type DeleteRequestHandler struct {
deleteRequestsStore DeleteRequestsStore
metrics *deleteRequestHandlerMetrics
+ limits retention.Limits
deleteRequestCancelPeriod time.Duration
}
// NewDeleteRequestHandler creates a DeleteRequestHandler
-func NewDeleteRequestHandler(deleteStore DeleteRequestsStore, deleteRequestCancelPeriod time.Duration, registerer prometheus.Registerer) *DeleteRequestHandler {
+func NewDeleteRequestHandler(deleteStore DeleteRequestsStore, deleteRequestCancelPeriod time.Duration, limits retention.Limits, registerer prometheus.Registerer) *DeleteRequestHandler {
deleteMgr := DeleteRequestHandler{
deleteRequestsStore: deleteStore,
deleteRequestCancelPeriod: deleteRequestCancelPeriod,
+ limits: limits,
metrics: newDeleteRequestHandlerMetrics(registerer),
}
@@ -35,7 +40,12 @@ func NewDeleteRequestHandler(deleteStore DeleteRequestsStore, deleteRequestCance
}
// AddDeleteRequestHandler handles addition of a new delete request
-func (dm *DeleteRequestHandler) AddDeleteRequestHandler(w http.ResponseWriter, r *http.Request) {
+func (dm *DeleteRequestHandler) AddDeleteRequestHandler() http.Handler {
+ return dm.deletionMiddleware(http.HandlerFunc(dm.addDeleteRequestHandler))
+}
+
+// AddDeleteRequestHandler handles addition of a new delete request
+func (dm *DeleteRequestHandler) addDeleteRequestHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
userID, err := tenant.TenantID(ctx)
if err != nil {
@@ -98,7 +108,12 @@ func (dm *DeleteRequestHandler) AddDeleteRequestHandler(w http.ResponseWriter, r
}
// GetAllDeleteRequestsHandler handles get all delete requests
-func (dm *DeleteRequestHandler) GetAllDeleteRequestsHandler(w http.ResponseWriter, r *http.Request) {
+func (dm *DeleteRequestHandler) GetAllDeleteRequestsHandler() http.Handler {
+ return dm.deletionMiddleware(http.HandlerFunc(dm.getAllDeleteRequestsHandler))
+}
+
+// GetAllDeleteRequestsHandler handles get all delete requests
+func (dm *DeleteRequestHandler) getAllDeleteRequestsHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
userID, err := tenant.TenantID(ctx)
if err != nil {
@@ -120,7 +135,12 @@ func (dm *DeleteRequestHandler) GetAllDeleteRequestsHandler(w http.ResponseWrite
}
// CancelDeleteRequestHandler handles delete request cancellation
-func (dm *DeleteRequestHandler) CancelDeleteRequestHandler(w http.ResponseWriter, r *http.Request) {
+func (dm *DeleteRequestHandler) CancelDeleteRequestHandler() http.Handler {
+ return dm.deletionMiddleware(http.HandlerFunc(dm.cancelDeleteRequestHandler))
+}
+
+// CancelDeleteRequestHandler handles delete request cancellation
+func (dm *DeleteRequestHandler) cancelDeleteRequestHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
userID, err := tenant.TenantID(ctx)
if err != nil {
@@ -163,7 +183,12 @@ func (dm *DeleteRequestHandler) CancelDeleteRequestHandler(w http.ResponseWriter
}
// GetCacheGenerationNumberHandler handles requests for a user's cache generation number
-func (dm *DeleteRequestHandler) GetCacheGenerationNumberHandler(w http.ResponseWriter, r *http.Request) {
+func (dm *DeleteRequestHandler) GetCacheGenerationNumberHandler() http.Handler {
+ return dm.deletionMiddleware(http.HandlerFunc(dm.getCacheGenerationNumberHandler))
+}
+
+// GetCacheGenerationNumberHandler handles requests for a user's cache generation number
+func (dm *DeleteRequestHandler) getCacheGenerationNumberHandler(w http.ResponseWriter, r *http.Request) {
ctx := r.Context()
userID, err := tenant.TenantID(ctx)
if err != nil {
@@ -183,3 +208,30 @@ func (dm *DeleteRequestHandler) GetCacheGenerationNumberHandler(w http.ResponseW
http.Error(w, fmt.Sprintf("Error marshalling response: %v", err), http.StatusInternalServerError)
}
}
+
+func (dm *DeleteRequestHandler) deletionMiddleware(next http.Handler) http.Handler {
+ return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+ ctx := r.Context()
+ userID, err := tenant.TenantID(ctx)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
+ }
+
+ allLimits := dm.limits.AllByUserID()
+ userLimits, ok := allLimits[userID]
+ if ok {
+ if !userLimits.CompactorDeletionEnabled {
+ http.Error(w, deletionNotAvailableMsg, http.StatusForbidden)
+ return
+ }
+ } else {
+ if !dm.limits.DefaultLimits().CompactorDeletionEnabled {
+ http.Error(w, deletionNotAvailableMsg, http.StatusForbidden)
+ return
+ }
+ }
+
+ next.ServeHTTP(w, r)
+ })
+}
diff --git a/pkg/storage/stores/shipper/compactor/deletion/request_handler_test.go b/pkg/storage/stores/shipper/compactor/deletion/request_handler_test.go
new file mode 100644
index 0000000000000..aaecb44428735
--- /dev/null
+++ b/pkg/storage/stores/shipper/compactor/deletion/request_handler_test.go
@@ -0,0 +1,133 @@
+package deletion
+
+import (
+ "net/http"
+ "net/http/httptest"
+ "path/filepath"
+ "testing"
+ "time"
+
+ "github.com/prometheus/common/model"
+ "github.com/stretchr/testify/require"
+ "github.com/weaveworks/common/user"
+
+ "github.com/grafana/loki/pkg/storage/chunk/client/local"
+ "github.com/grafana/loki/pkg/storage/stores/shipper/storage"
+ "github.com/grafana/loki/pkg/validation"
+)
+
+type retentionLimit struct {
+ compactorDeletionEnabled bool
+ retentionPeriod time.Duration
+ streamRetention []validation.StreamRetention
+}
+
+func (r retentionLimit) convertToValidationLimit() *validation.Limits {
+ return &validation.Limits{
+ CompactorDeletionEnabled: r.compactorDeletionEnabled,
+ RetentionPeriod: model.Duration(r.retentionPeriod),
+ StreamRetention: r.streamRetention,
+ }
+}
+
+type fakeLimits struct {
+ defaultLimit retentionLimit
+ perTenant map[string]retentionLimit
+}
+
+func (f fakeLimits) RetentionPeriod(userID string) time.Duration {
+ return f.perTenant[userID].retentionPeriod
+}
+
+func (f fakeLimits) StreamRetention(userID string) []validation.StreamRetention {
+ return f.perTenant[userID].streamRetention
+}
+
+func (f fakeLimits) CompactorDeletionEnabled(userID string) bool {
+ return f.perTenant[userID].compactorDeletionEnabled
+}
+
+func (f fakeLimits) DefaultLimits() *validation.Limits {
+ return f.defaultLimit.convertToValidationLimit()
+}
+
+func (f fakeLimits) AllByUserID() map[string]*validation.Limits {
+ res := make(map[string]*validation.Limits)
+ for userID, ret := range f.perTenant {
+ res[userID] = ret.convertToValidationLimit()
+ }
+ return res
+}
+
+func TestDeleteRequestHandlerDeletionMiddleware(t *testing.T) {
+ // build the store
+ tempDir := t.TempDir()
+
+ workingDir := filepath.Join(tempDir, "working-dir")
+ objectStorePath := filepath.Join(tempDir, "object-store")
+
+ objectClient, err := local.NewFSObjectClient(local.FSConfig{
+ Directory: objectStorePath,
+ })
+ require.NoError(t, err)
+ testDeleteRequestsStore, err := NewDeleteStore(workingDir, storage.NewIndexStorageClient(objectClient, ""))
+ require.NoError(t, err)
+
+ // limits
+ fl := &fakeLimits{
+ perTenant: map[string]retentionLimit{
+ "1": {compactorDeletionEnabled: true},
+ "2": {compactorDeletionEnabled: false},
+ },
+ }
+
+ // Setup handler
+ drh := NewDeleteRequestHandler(testDeleteRequestsStore, 10*time.Second, fl, nil)
+ middle := drh.deletionMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {}))
+
+ // User that has deletion enabled
+ req := httptest.NewRequest(http.MethodGet, "http://www.your-domain.com", nil)
+ req = req.WithContext(user.InjectOrgID(req.Context(), "1"))
+
+ res := httptest.NewRecorder()
+ middle.ServeHTTP(res, req)
+
+ require.Equal(t, http.StatusOK, res.Result().StatusCode)
+
+ // User that does not have deletion enabled
+ req = httptest.NewRequest(http.MethodGet, "http://www.your-domain.com", nil)
+ req = req.WithContext(user.InjectOrgID(req.Context(), "2"))
+
+ res = httptest.NewRecorder()
+ middle.ServeHTTP(res, req)
+
+ require.Equal(t, http.StatusForbidden, res.Result().StatusCode)
+
+ // User without override, this should use the default value which is false
+ req = httptest.NewRequest(http.MethodGet, "http://www.your-domain.com", nil)
+ req = req.WithContext(user.InjectOrgID(req.Context(), "3"))
+
+ res = httptest.NewRecorder()
+ middle.ServeHTTP(res, req)
+
+ require.Equal(t, http.StatusForbidden, res.Result().StatusCode)
+
+ // User without override, after the default value is set to true
+ fl.defaultLimit.compactorDeletionEnabled = true
+
+ req = httptest.NewRequest(http.MethodGet, "http://www.your-domain.com", nil)
+ req = req.WithContext(user.InjectOrgID(req.Context(), "3"))
+
+ res = httptest.NewRecorder()
+ middle.ServeHTTP(res, req)
+
+ require.Equal(t, http.StatusOK, res.Result().StatusCode)
+
+ // User header is not given
+ req = httptest.NewRequest(http.MethodGet, "http://www.your-domain.com", nil)
+
+ res = httptest.NewRecorder()
+ middle.ServeHTTP(res, req)
+
+ require.Equal(t, http.StatusBadRequest, res.Result().StatusCode)
+}
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index b32a8c8fa07a1..fb81552127b9f 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -111,6 +111,8 @@ type Limits struct {
RulerRemoteWriteQueueRetryOnRateLimit bool `yaml:"ruler_remote_write_queue_retry_on_ratelimit" json:"ruler_remote_write_queue_retry_on_ratelimit"`
RulerRemoteWriteSigV4Config *sigv4.SigV4Config `yaml:"ruler_remote_write_sigv4_config" json:"ruler_remote_write_sigv4_config"`
+ CompactorDeletionEnabled bool `yaml:"allow_deletes" json:"allow_deletes"`
+
// Global and per tenant retention
RetentionPeriod model.Duration `yaml:"retention_period" json:"retention_period"`
StreamRetention []StreamRetention `yaml:"retention_stream,omitempty" json:"retention_stream,omitempty"`
@@ -193,6 +195,8 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
_ = l.QuerySplitDuration.Set("30m")
f.Var(&l.QuerySplitDuration, "querier.split-queries-by-interval", "Split queries by an interval and execute in parallel, 0 disables it. This also determines how cache keys are chosen when result caching is enabled")
+
+ f.BoolVar(&l.CompactorDeletionEnabled, "compactor.allow-deletes", false, "Enable access to the deletion API.")
}
// UnmarshalYAML implements the yaml.Unmarshaler interface.
@@ -532,6 +536,10 @@ func (o *Overrides) UnorderedWrites(userID string) bool {
return o.getOverridesForUser(userID).UnorderedWrites
}
+func (o *Overrides) CompactorDeletionEnabled(userID string) bool {
+ return o.getOverridesForUser(userID).CompactorDeletionEnabled
+}
+
func (o *Overrides) DefaultLimits() *Limits {
return o.defaultLimits
}
|
compactor
|
add per tenant compaction delete enabled flag (#6410)
|
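Distilled, the commit above gates every deletion endpoint behind a single middleware that consults a per-tenant boolean limit and falls back to the default when the tenant has no override. A minimal sketch of that gate, assuming simplified stand-ins for Loki's limits and tenant plumbing (the real code resolves the tenant with dskit's `tenant.TenantID` and reads `validation.Limits`):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Limits is a stand-in for the retention.Limits lookup used by the real
// middleware; the field and method names here are illustrative.
type Limits struct {
	defaultEnabled bool
	perTenant      map[string]bool
}

func (l Limits) deletionEnabled(tenant string) bool {
	if v, ok := l.perTenant[tenant]; ok {
		return v
	}
	return l.defaultEnabled
}

// requireDeletion mirrors the gate the diff adds: 400 without an org ID,
// 403 unless the tenant's allow_deletes limit is set.
func requireDeletion(l Limits, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		tenant := r.Header.Get("X-Scope-OrgID")
		if tenant == "" {
			http.Error(w, "no org id", http.StatusBadRequest)
			return
		}
		if !l.deletionEnabled(tenant) {
			http.Error(w, "deletion is not available for this tenant", http.StatusForbidden)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	limits := Limits{perTenant: map[string]bool{"1": true}}
	h := requireDeletion(limits, http.HandlerFunc(func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprintln(w, "ok")
	}))
	req := httptest.NewRequest(http.MethodGet, "/loki/api/v1/delete", nil)
	req.Header.Set("X-Scope-OrgID", "2")
	rec := httptest.NewRecorder()
	h.ServeHTTP(rec, req)
	fmt.Println(rec.Code) // 403
}
```

Wrapping the check as middleware keeps the four handlers untouched and makes the 403-versus-400 distinction (deletion disabled versus missing org ID) testable in isolation, as the commit's new test does.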
46b552def21421fc847fb3f95cd909aef51b286e
|
2022-04-08 21:32:27
|
Bryan Boreham
|
makefile: run lint and tests with default garbage-collection (#5841)
| false
|
diff --git a/Makefile b/Makefile
index f4b338e4ec6a6..1aa0ec26c0cec 100644
--- a/Makefile
+++ b/Makefile
@@ -252,7 +252,7 @@ publish: packages
# To run this efficiently on your workstation, run this from the root dir:
# docker run --rm --tty -i -v $(pwd)/.cache:/go/cache -v $(pwd)/.pkg:/go/pkg -v $(pwd):/src/loki grafana/loki-build-image:0.17.0 lint
lint:
- GO111MODULE=on GOGC=10 golangci-lint run -v
+ GO111MODULE=on golangci-lint run -v
faillint -paths "sync/atomic=go.uber.org/atomic" ./...
########
@@ -260,7 +260,7 @@ lint:
########
test: all
- GOGC=10 $(GOTEST) -covermode=atomic -coverprofile=coverage.txt -p=4 ./...
+ $(GOTEST) -covermode=atomic -coverprofile=coverage.txt -p=4 ./...
#########
# Clean #
|
makefile
|
run lint and tests with default garbage-collection (#5841)
|
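For context on the knob being removed: `GOGC=10` triggers a collection once the heap grows 10% past the live set, so lint and tests ran with minimal memory headroom at the cost of heavy GC CPU; the Go default of 100 waits for 2x growth. The same setting is reachable at runtime, as this small illustrative sketch (not part of the change) shows:

```go
package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// debug.SetGCPercent is the runtime equivalent of the GOGC env var and
	// returns the previous value, so the ambient setting can be inspected.
	prev := debug.SetGCPercent(10) // what the Makefile used to force
	fmt.Println("previous GOGC:", prev)
	debug.SetGCPercent(prev) // restore the ambient behaviour
}
```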
4117b6ca981b2852a15b102be2394bffce37b3e2
|
2024-06-20 15:35:47
|
Christian Haudum
|
perf: Re-introduce fixed size memory pool for bloom querier (#13172)
| false
|
diff --git a/pkg/bloombuild/builder/builder.go b/pkg/bloombuild/builder/builder.go
index cbbd737a83190..52ef9e023f4f4 100644
--- a/pkg/bloombuild/builder/builder.go
+++ b/pkg/bloombuild/builder/builder.go
@@ -38,7 +38,7 @@ type Builder struct {
logger log.Logger
tsdbStore common.TSDBStore
- bloomStore bloomshipper.Store
+ bloomStore bloomshipper.StoreBase
chunkLoader ChunkLoader
client protos.PlannerForBuilderClient
@@ -51,7 +51,7 @@ func New(
storeCfg storage.Config,
storageMetrics storage.ClientMetrics,
fetcherProvider stores.ChunkFetcherProvider,
- bloomStore bloomshipper.Store,
+ bloomStore bloomshipper.StoreBase,
logger log.Logger,
r prometheus.Registerer,
) (*Builder, error) {
diff --git a/pkg/bloombuild/builder/spec_test.go b/pkg/bloombuild/builder/spec_test.go
index e6b47b1442a6e..77bb76f7ecafa 100644
--- a/pkg/bloombuild/builder/spec_test.go
+++ b/pkg/bloombuild/builder/spec_test.go
@@ -13,6 +13,7 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
@@ -74,7 +75,7 @@ func dummyBloomGen(t *testing.T, opts v1.BlockOptions, store v1.Iterator[*v1.Ser
for i, b := range blocks {
bqs = append(bqs, &bloomshipper.CloseableBlockQuerier{
BlockRef: refs[i],
- BlockQuerier: v1.NewBlockQuerier(b, false, v1.DefaultMaxPageSize),
+ BlockQuerier: v1.NewBlockQuerier(b, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize),
})
}
@@ -152,7 +153,7 @@ func TestSimpleBloomGenerator(t *testing.T) {
expectedRefs := v1.PointerSlice(data)
outputRefs := make([]*v1.SeriesWithBlooms, 0, len(data))
for _, block := range outputBlocks {
- bq := v1.NewBlockQuerier(block, false, v1.DefaultMaxPageSize).Iter()
+ bq := v1.NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize).Iter()
for bq.Next() {
outputRefs = append(outputRefs, bq.At())
}
diff --git a/pkg/bloombuild/planner/planner.go b/pkg/bloombuild/planner/planner.go
index 8234dde9c54a0..dd44c545ff36a 100644
--- a/pkg/bloombuild/planner/planner.go
+++ b/pkg/bloombuild/planner/planner.go
@@ -40,7 +40,7 @@ type Planner struct {
schemaCfg config.SchemaConfig
tsdbStore common.TSDBStore
- bloomStore bloomshipper.Store
+ bloomStore bloomshipper.StoreBase
tasksQueue *queue.RequestQueue
activeUsers *util.ActiveUsersCleanupService
@@ -57,7 +57,7 @@ func New(
schemaCfg config.SchemaConfig,
storeCfg storage.Config,
storageMetrics storage.ClientMetrics,
- bloomStore bloomshipper.Store,
+ bloomStore bloomshipper.StoreBase,
logger log.Logger,
r prometheus.Registerer,
) (*Planner, error) {
diff --git a/pkg/bloombuild/planner/planner_test.go b/pkg/bloombuild/planner/planner_test.go
index 64c6ef086dac8..433c978fa0f24 100644
--- a/pkg/bloombuild/planner/planner_test.go
+++ b/pkg/bloombuild/planner/planner_test.go
@@ -29,6 +29,7 @@ import (
bloomshipperconfig "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper/config"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/tsdb"
"github.com/grafana/loki/v3/pkg/storage/types"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
var testDay = parseDayTime("2023-09-01")
@@ -411,7 +412,7 @@ func createPlanner(
reg := prometheus.NewPedanticRegistry()
metasCache := cache.NewNoopCache()
blocksCache := bloomshipper.NewFsBlocksCache(storageCfg.BloomShipperConfig.BlocksCache, reg, logger)
- bloomStore, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageCfg, storage.ClientMetrics{}, metasCache, blocksCache, reg, logger)
+ bloomStore, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageCfg, storage.ClientMetrics{}, metasCache, blocksCache, &mempool.SimpleHeapAllocator{}, reg, logger)
require.NoError(t, err)
planner, err := New(cfg, limits, schemaCfg, storageCfg, storage.ClientMetrics{}, bloomStore, logger, reg)
diff --git a/pkg/bloomcompactor/bloomcompactor.go b/pkg/bloomcompactor/bloomcompactor.go
index acfb5ba01f355..8eed0823314a7 100644
--- a/pkg/bloomcompactor/bloomcompactor.go
+++ b/pkg/bloomcompactor/bloomcompactor.go
@@ -53,7 +53,7 @@ type Compactor struct {
retentionManager *RetentionManager
// temporary workaround until bloomStore has implemented read/write shipper interface
- bloomStore bloomshipper.Store
+ bloomStore bloomshipper.StoreBase
sharding util_ring.TenantSharding
@@ -69,7 +69,7 @@ func New(
ring ring.ReadRing,
ringLifeCycler *ring.BasicLifecycler,
limits Limits,
- store bloomshipper.StoreWithMetrics,
+ store bloomshipper.Store,
logger log.Logger,
r prometheus.Registerer,
) (*Compactor, error) {
diff --git a/pkg/bloomcompactor/controller.go b/pkg/bloomcompactor/controller.go
index 277d040d688b9..3929f2da3f805 100644
--- a/pkg/bloomcompactor/controller.go
+++ b/pkg/bloomcompactor/controller.go
@@ -22,7 +22,7 @@ import (
type SimpleBloomController struct {
tsdbStore TSDBStore
- bloomStore bloomshipper.Store
+ bloomStore bloomshipper.StoreBase
chunkLoader ChunkLoader
metrics *Metrics
limits Limits
@@ -32,7 +32,7 @@ type SimpleBloomController struct {
func NewSimpleBloomController(
tsdbStore TSDBStore,
- blockStore bloomshipper.Store,
+ blockStore bloomshipper.StoreBase,
chunkLoader ChunkLoader,
limits Limits,
metrics *Metrics,
diff --git a/pkg/bloomcompactor/retention.go b/pkg/bloomcompactor/retention.go
index 7dd30dece9e8a..caaf80ffb9c3f 100644
--- a/pkg/bloomcompactor/retention.go
+++ b/pkg/bloomcompactor/retention.go
@@ -95,7 +95,7 @@ type RetentionLimits interface {
type RetentionManager struct {
cfg RetentionConfig
limits RetentionLimits
- bloomStore bloomshipper.Store
+ bloomStore bloomshipper.StoreBase
sharding retentionSharding
metrics *Metrics
logger log.Logger
@@ -108,7 +108,7 @@ type RetentionManager struct {
func NewRetentionManager(
cfg RetentionConfig,
limits RetentionLimits,
- bloomStore bloomshipper.Store,
+ bloomStore bloomshipper.StoreBase,
sharding retentionSharding,
metrics *Metrics,
logger log.Logger,
diff --git a/pkg/bloomcompactor/retention_test.go b/pkg/bloomcompactor/retention_test.go
index b8e855b0d4e90..e610ab5b02e02 100644
--- a/pkg/bloomcompactor/retention_test.go
+++ b/pkg/bloomcompactor/retention_test.go
@@ -24,6 +24,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper/config"
"github.com/grafana/loki/v3/pkg/storage/types"
util_log "github.com/grafana/loki/v3/pkg/util/log"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
lokiring "github.com/grafana/loki/v3/pkg/util/ring"
"github.com/grafana/loki/v3/pkg/validation"
)
@@ -822,7 +823,7 @@ func NewMockBloomStoreWithWorkDir(t *testing.T, workDir string) (*bloomshipper.B
metasCache := cache.NewMockCache()
blocksCache := bloomshipper.NewFsBlocksCache(storageConfig.BloomShipperConfig.BlocksCache, prometheus.NewPedanticRegistry(), logger)
- store, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageConfig, metrics, metasCache, blocksCache, reg, logger)
+ store, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageConfig, metrics, metasCache, blocksCache, &mempool.SimpleHeapAllocator{}, reg, logger)
if err == nil {
t.Cleanup(store.Stop)
}
diff --git a/pkg/bloomcompactor/spec_test.go b/pkg/bloomcompactor/spec_test.go
index f887d32053226..e08cafb68cab4 100644
--- a/pkg/bloomcompactor/spec_test.go
+++ b/pkg/bloomcompactor/spec_test.go
@@ -13,6 +13,7 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
func blocksFromSchema(t *testing.T, n int, options v1.BlockOptions) (res []*v1.Block, data []v1.SeriesWithBlooms, refs []bloomshipper.BlockRef) {
@@ -74,7 +75,7 @@ func dummyBloomGen(t *testing.T, opts v1.BlockOptions, store v1.Iterator[*v1.Ser
for i, b := range blocks {
bqs = append(bqs, &bloomshipper.CloseableBlockQuerier{
BlockRef: refs[i],
- BlockQuerier: v1.NewBlockQuerier(b, false, v1.DefaultMaxPageSize),
+ BlockQuerier: v1.NewBlockQuerier(b, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize),
})
}
@@ -152,7 +153,7 @@ func TestSimpleBloomGenerator(t *testing.T) {
expectedRefs := v1.PointerSlice(data)
outputRefs := make([]*v1.SeriesWithBlooms, 0, len(data))
for _, block := range outputBlocks {
- bq := v1.NewBlockQuerier(block, false, v1.DefaultMaxPageSize).Iter()
+ bq := v1.NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize).Iter()
for bq.Next() {
outputRefs = append(outputRefs, bq.At())
}
diff --git a/pkg/bloomgateway/bloomgateway.go b/pkg/bloomgateway/bloomgateway.go
index 5747f6e7993e8..603d41c2c4371 100644
--- a/pkg/bloomgateway/bloomgateway.go
+++ b/pkg/bloomgateway/bloomgateway.go
@@ -50,7 +50,7 @@ type Gateway struct {
queue *queue.RequestQueue
activeUsers *util.ActiveUsersCleanupService
- bloomStore bloomshipper.StoreWithMetrics
+ bloomStore bloomshipper.Store
pendingTasks *atomic.Int64
@@ -72,7 +72,7 @@ func (l *fixedQueueLimits) MaxConsumers(_ string, _ int) int {
}
// New returns a new instance of the Bloom Gateway.
-func New(cfg Config, store bloomshipper.StoreWithMetrics, logger log.Logger, reg prometheus.Registerer) (*Gateway, error) {
+func New(cfg Config, store bloomshipper.Store, logger log.Logger, reg prometheus.Registerer) (*Gateway, error) {
utillog.WarnExperimentalUse("Bloom Gateway", logger)
g := &Gateway{
cfg: cfg,
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index 9250ec91ff868..67bb59e460ad9 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -27,6 +27,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
bloomshipperconfig "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper/config"
"github.com/grafana/loki/v3/pkg/storage/types"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
"github.com/grafana/loki/v3/pkg/validation"
)
@@ -92,7 +93,7 @@ func setupBloomStore(t *testing.T) *bloomshipper.BloomStore {
reg := prometheus.NewRegistry()
blocksCache := bloomshipper.NewFsBlocksCache(storageCfg.BloomShipperConfig.BlocksCache, nil, logger)
- store, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageCfg, cm, nil, blocksCache, reg, logger)
+ store, err := bloomshipper.NewBloomStore(schemaCfg.Configs, storageCfg, cm, nil, blocksCache, &mempool.SimpleHeapAllocator{}, reg, logger)
require.NoError(t, err)
t.Cleanup(store.Stop)
diff --git a/pkg/bloomgateway/processor.go b/pkg/bloomgateway/processor.go
index 6973ad1f565b7..e95953c94bf42 100644
--- a/pkg/bloomgateway/processor.go
+++ b/pkg/bloomgateway/processor.go
@@ -88,7 +88,7 @@ func (p *processor) processTasks(ctx context.Context, tenant string, day config.
// after iteration for performance (alloc reduction).
// This is safe to do here because we do not capture
// the underlying bloom []byte outside of iteration
- bloomshipper.WithPool(true),
+ bloomshipper.WithPool(p.store.Allocator()),
)
duration = time.Since(startBlocks)
level.Debug(p.logger).Log("msg", "fetched blocks", "count", len(refs), "duration", duration, "err", err)
diff --git a/pkg/bloomgateway/processor_test.go b/pkg/bloomgateway/processor_test.go
index 9d2d6c6d0642b..0a2fd804ead78 100644
--- a/pkg/bloomgateway/processor_test.go
+++ b/pkg/bloomgateway/processor_test.go
@@ -20,16 +20,21 @@ import (
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
"github.com/grafana/loki/v3/pkg/util/constants"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
-var _ bloomshipper.Store = &dummyStore{}
+var _ bloomshipper.StoreBase = &dummyStore{}
// refs and blocks must be in 1-1 correspondence.
func newMockBloomStore(refs []bloomshipper.BlockRef, blocks []*v1.Block, metas []bloomshipper.Meta) *dummyStore {
+ allocator := mempool.New("bloompages", mempool.Buckets{
+ {Size: 32, Capacity: 512 << 10},
+ }, nil)
return &dummyStore{
- refs: refs,
- blocks: blocks,
- metas: metas,
+ refs: refs,
+ blocks: blocks,
+ metas: metas,
+ allocator: allocator,
}
}
@@ -38,6 +43,8 @@ type dummyStore struct {
refs []bloomshipper.BlockRef
blocks []*v1.Block
+ allocator mempool.Allocator
+
// mock how long it takes to serve block queriers
delay time.Duration
// mock response error when serving block queriers in ForEach
@@ -76,6 +83,10 @@ func (s *dummyStore) Client(_ model.Time) (bloomshipper.Client, error) {
return nil, nil
}
+func (s *dummyStore) Allocator() mempool.Allocator {
+ return s.allocator
+}
+
func (s *dummyStore) Stop() {
}
@@ -92,7 +103,7 @@ func (s *dummyStore) FetchBlocks(_ context.Context, refs []bloomshipper.BlockRef
if ref.Bounds.Equal(s.refs[i].Bounds) {
blockCopy := *block
bq := &bloomshipper.CloseableBlockQuerier{
- BlockQuerier: v1.NewBlockQuerier(&blockCopy, false, v1.DefaultMaxPageSize),
+ BlockQuerier: v1.NewBlockQuerier(&blockCopy, s.Allocator(), v1.DefaultMaxPageSize),
BlockRef: s.refs[i],
}
result = append(result, bq)
diff --git a/pkg/bloomgateway/resolver.go b/pkg/bloomgateway/resolver.go
index 62ec5836cc136..0f6fe27626958 100644
--- a/pkg/bloomgateway/resolver.go
+++ b/pkg/bloomgateway/resolver.go
@@ -24,7 +24,7 @@ type blockWithSeries struct {
}
type defaultBlockResolver struct {
- store bloomshipper.Store
+ store bloomshipper.StoreBase
logger log.Logger
}
@@ -123,7 +123,7 @@ func unassignedSeries(mapped []blockWithSeries, series []*logproto.GroupedChunkR
return skipped
}
-func NewBlockResolver(store bloomshipper.Store, logger log.Logger) BlockResolver {
+func NewBlockResolver(store bloomshipper.StoreBase, logger log.Logger) BlockResolver {
return &defaultBlockResolver{
store: store,
logger: logger,
diff --git a/pkg/bloomgateway/util_test.go b/pkg/bloomgateway/util_test.go
index ed47d46456d91..f6ae68cf2aa2b 100644
--- a/pkg/bloomgateway/util_test.go
+++ b/pkg/bloomgateway/util_test.go
@@ -432,6 +432,7 @@ func createBlocks(t *testing.T, tenant string, n int, from, through model.Time,
// t.Log(i, j, string(keys[i][j]))
// }
// }
+
blocks = append(blocks, block)
metas = append(metas, meta)
blockRefs = append(blockRefs, blockRef)
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index ce826f0752d4b..68b210de4a772 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -337,7 +337,7 @@ type Loki struct {
querierAPI *querier.QuerierAPI
ingesterQuerier *querier.IngesterQuerier
Store storage.Store
- BloomStore bloomshipper.StoreWithMetrics
+ BloomStore bloomshipper.Store
tableManager *index.TableManager
frontend Frontend
ruler *base_ruler.Ruler
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 39c5df98b84c7..e405f5d762f0b 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -79,6 +79,7 @@ import (
"github.com/grafana/loki/v3/pkg/util/httpreq"
"github.com/grafana/loki/v3/pkg/util/limiter"
util_log "github.com/grafana/loki/v3/pkg/util/log"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
"github.com/grafana/loki/v3/pkg/util/querylimits"
lokiring "github.com/grafana/loki/v3/pkg/util/ring"
serverutil "github.com/grafana/loki/v3/pkg/util/server"
@@ -754,7 +755,24 @@ func (t *Loki) initBloomStore() (services.Service, error) {
level.Warn(logger).Log("msg", "failed to preload blocks cache", "err", err)
}
- t.BloomStore, err = bloomshipper.NewBloomStore(t.Cfg.SchemaConfig.Configs, t.Cfg.StorageConfig, t.ClientMetrics, metasCache, blocksCache, reg, logger)
+ var pageAllocator mempool.Allocator
+
+ // Set global BloomPageAllocator variable
+ switch bsCfg.MemoryManagement.BloomPageAllocationType {
+ case "simple":
+ pageAllocator = &mempool.SimpleHeapAllocator{}
+ case "dynamic":
+ // sync buffer pool for bloom pages
+ // 128KB 256KB 512KB 1MB 2MB 4MB 8MB 16MB 32MB 64MB 128MB
+ pageAllocator = mempool.NewBytePoolAllocator(128<<10, 128<<20, 2)
+ case "fixed":
+ pageAllocator = mempool.New("bloom-page-pool", bsCfg.MemoryManagement.BloomPageMemPoolBuckets, reg)
+ default:
+ // should not happen as the type is validated upfront
+ return nil, fmt.Errorf("failed to create bloom store: invalid allocator type")
+ }
+
+ t.BloomStore, err = bloomshipper.NewBloomStore(t.Cfg.SchemaConfig.Configs, t.Cfg.StorageConfig, t.ClientMetrics, metasCache, blocksCache, pageAllocator, reg, logger)
if err != nil {
return nil, fmt.Errorf("failed to create bloom store: %w", err)
}
diff --git a/pkg/storage/bloom/v1/block.go b/pkg/storage/bloom/v1/block.go
index b0b4e5ad9647a..8aaf21d5e7516 100644
--- a/pkg/storage/bloom/v1/block.go
+++ b/pkg/storage/bloom/v1/block.go
@@ -4,6 +4,8 @@ import (
"fmt"
"github.com/pkg/errors"
+
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
type BlockMetadata struct {
@@ -110,17 +112,18 @@ type BlockQuerier struct {
}
// NewBlockQuerier returns a new BlockQuerier for the given block.
-// WARNING: If noCapture is true, the underlying byte slice of the bloom page
-// will be returned to the pool for efficiency. This can only safely be used
-// when the underlying bloom bytes don't escape the decoder, i.e.
-// when loading blooms for querying (bloom-gw) but not for writing (bloom-compactor).
-// When usePool is true, the bloom MUST NOT be captured by the caller. Rather,
-// it should be discarded before another call to Next().
-func NewBlockQuerier(b *Block, usePool bool, maxPageSize int) *BlockQuerier {
+// WARNING: The Allocator implementation you pass determines whether the
+// underlying byte slice of the bloom page is returned to a pool for reuse.
+// Returning pages to a pool is only safe when the underlying bloom bytes
+// don't escape the decoder, i.e. when loading blooms for querying
+// (bloom-gateway), but not for writing (bloom-compactor). Therefore, when
+// calling NewBlockQuerier on the write path, always pass the
+// SimpleHeapAllocator implementation of the Allocator interface.
+func NewBlockQuerier(b *Block, alloc mempool.Allocator, maxPageSize int) *BlockQuerier {
return &BlockQuerier{
block: b,
LazySeriesIter: NewLazySeriesIter(b),
- blooms: NewLazyBloomIter(b, usePool, maxPageSize),
+ blooms: NewLazyBloomIter(b, alloc, maxPageSize),
}
}
@@ -144,6 +147,10 @@ func (bq *BlockQuerier) Err() error {
return bq.blooms.Err()
}
+func (bq *BlockQuerier) Close() {
+ bq.blooms.Close()
+}
+
type BlockQuerierIter struct {
*BlockQuerier
}
diff --git a/pkg/storage/bloom/v1/bloom.go b/pkg/storage/bloom/v1/bloom.go
index b9f4b0cdc6a9a..dfd8b758c3385 100644
--- a/pkg/storage/bloom/v1/bloom.go
+++ b/pkg/storage/bloom/v1/bloom.go
@@ -10,6 +10,7 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
"github.com/grafana/loki/v3/pkg/storage/bloom/v1/filter"
"github.com/grafana/loki/v3/pkg/util/encoding"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
// NB(chaudum): Some block pages are way bigger than others (400MiB and
@@ -24,7 +25,7 @@ type Bloom struct {
func (b *Bloom) Encode(enc *encoding.Encbuf) error {
// divide by 8 b/c bloom capacity is measured in bits, but we want bytes
- buf := bytes.NewBuffer(BloomPagePool.Get(int(b.Capacity() / 8)))
+ buf := bytes.NewBuffer(make([]byte, 0, int(b.Capacity()/8)))
// TODO(owen-d): have encoder implement writer directly so we don't need
// to indirect via a buffer
@@ -36,7 +37,6 @@ func (b *Bloom) Encode(enc *encoding.Encbuf) error {
data := buf.Bytes()
enc.PutUvarint(len(data)) // length of bloom filter
enc.PutBytes(data)
- BloomPagePool.Put(data[:0]) // release to pool
return nil
}
@@ -64,11 +64,14 @@ func (b *Bloom) Decode(dec *encoding.Decbuf) error {
return nil
}
-func LazyDecodeBloomPage(r io.Reader, pool chunkenc.ReaderPool, page BloomPageHeader) (*BloomPageDecoder, error) {
- data := BloomPagePool.Get(page.Len)[:page.Len]
- defer BloomPagePool.Put(data)
+func LazyDecodeBloomPage(r io.Reader, alloc mempool.Allocator, pool chunkenc.ReaderPool, page BloomPageHeader) (*BloomPageDecoder, error) {
+ data, err := alloc.Get(page.Len)
+ if err != nil {
+ return nil, errors.Wrap(err, "allocating buffer")
+ }
+ defer alloc.Put(data)
- _, err := io.ReadFull(r, data)
+ _, err = io.ReadFull(r, data)
if err != nil {
return nil, errors.Wrap(err, "reading bloom page")
}
@@ -84,7 +87,10 @@ func LazyDecodeBloomPage(r io.Reader, pool chunkenc.ReaderPool, page BloomPageHe
}
defer pool.PutReader(decompressor)
- b := BloomPagePool.Get(page.DecompressedLen)[:page.DecompressedLen]
+ b, err := alloc.Get(page.DecompressedLen)
+ if err != nil {
+ return nil, errors.Wrap(err, "allocating buffer")
+ }
if _, err = io.ReadFull(decompressor, b); err != nil {
return nil, errors.Wrap(err, "decompressing bloom page")
@@ -96,14 +102,18 @@ func LazyDecodeBloomPage(r io.Reader, pool chunkenc.ReaderPool, page BloomPageHe
}
// shortcut to skip allocations when we know the page is not compressed
-func LazyDecodeBloomPageNoCompression(r io.Reader, page BloomPageHeader) (*BloomPageDecoder, error) {
+func LazyDecodeBloomPageNoCompression(r io.Reader, alloc mempool.Allocator, page BloomPageHeader) (*BloomPageDecoder, error) {
// data + checksum
if page.Len != page.DecompressedLen+4 {
return nil, errors.New("the Len and DecompressedLen of the page do not match")
}
- data := BloomPagePool.Get(page.Len)[:page.Len]
- _, err := io.ReadFull(r, data)
+ data, err := alloc.Get(page.Len)
+ if err != nil {
+ return nil, errors.Wrap(err, "allocating buffer")
+ }
+
+ _, err = io.ReadFull(r, data)
if err != nil {
return nil, errors.Wrap(err, "reading bloom page")
}
@@ -158,12 +168,16 @@ type BloomPageDecoder struct {
// This can only safely be used when the underlying bloom
// bytes don't escape the decoder:
// on reads in the bloom-gw but not in the bloom-compactor
-func (d *BloomPageDecoder) Relinquish() {
+func (d *BloomPageDecoder) Relinquish(alloc mempool.Allocator) {
+ if d == nil {
+ return
+ }
+
data := d.data
d.data = nil
if cap(data) > 0 {
- BloomPagePool.Put(data)
+ _ = alloc.Put(data)
}
}
@@ -271,7 +285,7 @@ func (b *BloomBlock) DecodeHeaders(r io.ReadSeeker) (uint32, error) {
// BloomPageDecoder returns a decoder for the given page index.
// It may skip the page if it's too large.
// NB(owen-d): if `skip` is true, err _must_ be nil.
-func (b *BloomBlock) BloomPageDecoder(r io.ReadSeeker, pageIdx int, maxPageSize int, metrics *Metrics) (res *BloomPageDecoder, skip bool, err error) {
+func (b *BloomBlock) BloomPageDecoder(r io.ReadSeeker, alloc mempool.Allocator, pageIdx int, maxPageSize int, metrics *Metrics) (res *BloomPageDecoder, skip bool, err error) {
if pageIdx < 0 || pageIdx >= len(b.pageHeaders) {
metrics.pagesSkipped.WithLabelValues(pageTypeBloom, skipReasonOOB).Inc()
metrics.bytesSkipped.WithLabelValues(pageTypeBloom, skipReasonOOB).Add(float64(b.pageHeaders[pageIdx].DecompressedLen))
@@ -294,9 +308,9 @@ func (b *BloomBlock) BloomPageDecoder(r io.ReadSeeker, pageIdx int, maxPageSize
}
if b.schema.encoding == chunkenc.EncNone {
- res, err = LazyDecodeBloomPageNoCompression(r, page)
+ res, err = LazyDecodeBloomPageNoCompression(r, alloc, page)
} else {
- res, err = LazyDecodeBloomPage(r, b.schema.DecompressorPool(), page)
+ res, err = LazyDecodeBloomPage(r, alloc, b.schema.DecompressorPool(), page)
}
if err != nil {
diff --git a/pkg/storage/bloom/v1/bloom_querier.go b/pkg/storage/bloom/v1/bloom_querier.go
index 8de9a33e713f0..ab30b74f8a9eb 100644
--- a/pkg/storage/bloom/v1/bloom_querier.go
+++ b/pkg/storage/bloom/v1/bloom_querier.go
@@ -1,17 +1,21 @@
package v1
-import "github.com/pkg/errors"
+import (
+ "github.com/pkg/errors"
+
+ "github.com/grafana/loki/v3/pkg/util/mempool"
+)
type BloomQuerier interface {
Seek(BloomOffset) (*Bloom, error)
}
type LazyBloomIter struct {
- usePool bool
-
b *Block
m int // max page size in bytes
+ alloc mempool.Allocator
+
// state
initialized bool
err error
@@ -24,11 +28,11 @@ type LazyBloomIter struct {
// will be returned to the pool for efficiency.
// This can only safely be used when the underlying bloom
// bytes don't escape the decoder.
-func NewLazyBloomIter(b *Block, pool bool, maxSize int) *LazyBloomIter {
+func NewLazyBloomIter(b *Block, alloc mempool.Allocator, maxSize int) *LazyBloomIter {
return &LazyBloomIter{
- usePool: pool,
- b: b,
- m: maxSize,
+ b: b,
+ m: maxSize,
+ alloc: alloc,
}
}
@@ -53,16 +57,14 @@ func (it *LazyBloomIter) LoadOffset(offset BloomOffset) (skip bool) {
// drop the current page if it exists and
// we're using the pool
- if it.curPage != nil && it.usePool {
- it.curPage.Relinquish()
- }
+ it.curPage.Relinquish(it.alloc)
r, err := it.b.reader.Blooms()
if err != nil {
it.err = errors.Wrap(err, "getting blooms reader")
return false
}
- decoder, skip, err := it.b.blooms.BloomPageDecoder(r, offset.Page, it.m, it.b.metrics)
+ decoder, skip, err := it.b.blooms.BloomPageDecoder(r, it.alloc, offset.Page, it.m, it.b.metrics)
if err != nil {
it.err = errors.Wrap(err, "loading bloom page")
return false
@@ -106,6 +108,7 @@ func (it *LazyBloomIter) next() bool {
var skip bool
it.curPage, skip, err = it.b.blooms.BloomPageDecoder(
r,
+ it.alloc,
it.curPageIndex,
it.m,
it.b.metrics,
@@ -130,11 +133,8 @@ func (it *LazyBloomIter) next() bool {
// we've exhausted the current page, progress to next
it.curPageIndex++
- // drop the current page if it exists and
- // we're using the pool
- if it.usePool {
- it.curPage.Relinquish()
- }
+ // drop the current page if it exists
+ it.curPage.Relinquish(it.alloc)
it.curPage = nil
continue
}
@@ -161,3 +161,7 @@ func (it *LazyBloomIter) Err() error {
return nil
}
}
+
+func (it *LazyBloomIter) Close() {
+ it.curPage.Relinquish(it.alloc)
+}
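
For orientation, a minimal lifecycle sketch of the API after this change: a querier built over a pooled allocator must now be closed so its current page buffer is returned to the pool. `block` stands in for any *Block (for example, one built as in the tests below), and the bucket sizing is illustrative:

// assumed imports: v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1",
// "github.com/grafana/loki/v3/pkg/util/mempool"
pool := mempool.New("bloom-pages", []mempool.Bucket{{Size: 8, Capacity: 128 << 10}}, nil)
querier := v1.NewBlockQuerier(block, pool, v1.DefaultMaxPageSize)
defer querier.Close() // relinquishes the currently held page buffer back to the pool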
diff --git a/pkg/storage/bloom/v1/builder_test.go b/pkg/storage/bloom/v1/builder_test.go
index ae1b440af09b8..15f0de0842a93 100644
--- a/pkg/storage/bloom/v1/builder_test.go
+++ b/pkg/storage/bloom/v1/builder_test.go
@@ -12,6 +12,7 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
"github.com/grafana/loki/v3/pkg/storage/bloom/v1/filter"
"github.com/grafana/loki/v3/pkg/util/encoding"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
var blockEncodings = []chunkenc.Encoding{
@@ -121,7 +122,7 @@ func TestBlockBuilder_RoundTrip(t *testing.T) {
}
block := NewBlock(tc.reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize).Iter()
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
err = block.LoadHeaders()
require.Nil(t, err)
@@ -239,7 +240,7 @@ func TestMergeBuilder(t *testing.T) {
itr := NewSliceIter[SeriesWithBlooms](data[min:max])
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
- blocks = append(blocks, NewPeekingIter[*SeriesWithBlooms](NewBlockQuerier(NewBlock(reader, NewMetrics(nil)), false, DefaultMaxPageSize).Iter()))
+ blocks = append(blocks, NewPeekingIter[*SeriesWithBlooms](NewBlockQuerier(NewBlock(reader, NewMetrics(nil)), &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()))
}
// We're not testing the ability to extend a bloom in this test
@@ -280,7 +281,7 @@ func TestMergeBuilder(t *testing.T) {
require.Nil(t, err)
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize)
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize)
EqualIterators[*SeriesWithBlooms](
t,
@@ -372,7 +373,7 @@ func TestMergeBuilderFingerprintCollision(t *testing.T) {
require.Nil(t, err)
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize)
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize)
require.True(t, querier.Next())
require.Equal(t,
@@ -417,7 +418,7 @@ func TestBlockReset(t *testing.T) {
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize)
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize)
rounds := make([][]model.Fingerprint, 2)
@@ -482,7 +483,7 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
_, err = builder.BuildFrom(itr)
require.Nil(t, err)
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize).Iter()
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
// rather than use the block querier directly, collect its data
// so we can use it in a few places later
@@ -552,7 +553,7 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
// ensure the new block contains one copy of all the data
// by comparing it against an iterator over the source data
- mergedBlockQuerier := NewBlockQuerier(NewBlock(reader, NewMetrics(nil)), false, DefaultMaxPageSize)
+ mergedBlockQuerier := NewBlockQuerier(NewBlock(reader, NewMetrics(nil)), &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize)
sourceItr := NewSliceIter[*SeriesWithBlooms](PointerSlice[SeriesWithBlooms](xs))
EqualIterators[*SeriesWithBlooms](
diff --git a/pkg/storage/bloom/v1/fuse_test.go b/pkg/storage/bloom/v1/fuse_test.go
index 7f11eece4c239..7459819658937 100644
--- a/pkg/storage/bloom/v1/fuse_test.go
+++ b/pkg/storage/bloom/v1/fuse_test.go
@@ -14,8 +14,15 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
"github.com/grafana/loki/v3/pkg/storage/bloom/v1/filter"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
+var BloomPagePool = mempool.New("test", []mempool.Bucket{
+ {Size: 16, Capacity: 128 << 10},
+ {Size: 16, Capacity: 256 << 10},
+ {Size: 16, Capacity: 512 << 10},
+}, nil)
+
// TODO(owen-d): this is unhinged from the data it represents. I'm leaving this solely so I don't
// have to refactor tests here in order to fix this elsewhere, but it can/should be fixed --
// the skip & n len are hardcoded based on data that's passed to it elsewhere.
@@ -64,7 +71,7 @@ func TestFusedQuerier(t *testing.T) {
require.NoError(t, err)
require.False(t, itr.Next())
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, true, DefaultMaxPageSize)
+ querier := NewBlockQuerier(block, BloomPagePool, DefaultMaxPageSize)
n := 500 // series per request
nReqs := numSeries / n
@@ -194,7 +201,7 @@ func TestFuseMultiPage(t *testing.T) {
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, true, 100<<20) // 100MB too large to interfere
+ querier := NewBlockQuerier(block, BloomPagePool, 100<<20) // 100MB too large to interfere
keys := [][]byte{
key1, // found in the first bloom
@@ -315,8 +322,7 @@ func TestLazyBloomIter_Seek_ResetError(t *testing.T) {
require.False(t, itr.Next())
block := NewBlock(reader, NewMetrics(nil))
- smallMaxPageSize := 1000 // deliberately trigger page skipping for tests
- querier := NewBlockQuerier(block, true, smallMaxPageSize)
+ querier := NewBlockQuerier(block, BloomPagePool, 1000)
for fp := model.Fingerprint(0); fp < model.Fingerprint(numSeries); fp++ {
err := querier.Seek(fp)
@@ -373,7 +379,7 @@ func setupBlockForBenchmark(b *testing.B) (*BlockQuerier, [][]Request, []chan Ou
_, err = builder.BuildFrom(itr)
require.Nil(b, err)
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, true, DefaultMaxPageSize)
+ querier := NewBlockQuerier(block, BloomPagePool, DefaultMaxPageSize)
numRequestChains := 100
seriesPerRequest := 100
diff --git a/pkg/storage/bloom/v1/index.go b/pkg/storage/bloom/v1/index.go
index b11927ed303b2..8cae1d8a87f1e 100644
--- a/pkg/storage/bloom/v1/index.go
+++ b/pkg/storage/bloom/v1/index.go
@@ -166,7 +166,7 @@ func (b *BlockIndex) NewSeriesPageDecoder(r io.ReadSeeker, header SeriesPageHead
return nil, errors.Wrap(err, "seeking to series page")
}
- data := SeriesPagePool.Get(header.Len)[:header.Len]
+ data, _ := SeriesPagePool.Get(header.Len)
defer SeriesPagePool.Put(data)
_, err = io.ReadFull(r, data)
if err != nil {
diff --git a/pkg/storage/bloom/v1/util.go b/pkg/storage/bloom/v1/util.go
index 85aa7baa7b81b..ae0a70453098d 100644
--- a/pkg/storage/bloom/v1/util.go
+++ b/pkg/storage/bloom/v1/util.go
@@ -8,7 +8,7 @@ import (
"io"
"sync"
- "github.com/prometheus/prometheus/util/pool"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
type Version byte
@@ -44,36 +44,9 @@ var (
// buffer pool for series pages
// 1KB 2KB 4KB 8KB 16KB 32KB 64KB 128KB
- SeriesPagePool = BytePool{
- pool: pool.New(
- 1<<10, 128<<10, 2,
- func(size int) interface{} {
- return make([]byte, size)
- }),
- }
-
- // buffer pool for bloom pages
- // 128KB 256KB 512KB 1MB 2MB 4MB 8MB 16MB 32MB 64MB 128MB
- BloomPagePool = BytePool{
- pool: pool.New(
- 128<<10, 128<<20, 2,
- func(size int) interface{} {
- return make([]byte, size)
- }),
- }
+ SeriesPagePool = mempool.NewBytePoolAllocator(1<<10, 128<<10, 2)
)
-type BytePool struct {
- pool *pool.Pool
-}
-
-func (p *BytePool) Get(size int) []byte {
- return p.pool.Get(size).([]byte)[:0]
-}
-func (p *BytePool) Put(b []byte) {
- p.pool.Put(b)
-}
-
func newCRC32() hash.Hash32 {
return crc32.New(castagnoliTable)
}
diff --git a/pkg/storage/bloom/v1/versioned_builder_test.go b/pkg/storage/bloom/v1/versioned_builder_test.go
index a88ed9396982e..eca86ef7aaa15 100644
--- a/pkg/storage/bloom/v1/versioned_builder_test.go
+++ b/pkg/storage/bloom/v1/versioned_builder_test.go
@@ -8,6 +8,7 @@ import (
"github.com/grafana/loki/v3/pkg/chunkenc"
"github.com/grafana/loki/v3/pkg/util/encoding"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
// smallBlockOpts returns a set of block options that are suitable for testing
@@ -61,7 +62,7 @@ func TestV1RoundTrip(t *testing.T) {
// Ensure Equality
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize).Iter()
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
CompareIterators[SeriesWithLiteralBlooms, *SeriesWithBlooms](
t,
@@ -118,7 +119,7 @@ func TestV2Roundtrip(t *testing.T) {
// Ensure Equality
block := NewBlock(reader, NewMetrics(nil))
- querier := NewBlockQuerier(block, false, DefaultMaxPageSize).Iter()
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
CompareIterators[SeriesWithLiteralBlooms, *SeriesWithBlooms](
t,
diff --git a/pkg/storage/stores/shipper/bloomshipper/cache.go b/pkg/storage/stores/shipper/bloomshipper/cache.go
index 3c324b7b8b0e6..8b7ba7d253a94 100644
--- a/pkg/storage/stores/shipper/bloomshipper/cache.go
+++ b/pkg/storage/stores/shipper/bloomshipper/cache.go
@@ -13,6 +13,7 @@ import (
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/chunk/cache"
"github.com/grafana/loki/v3/pkg/util"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
type CloseableBlockQuerier struct {
@@ -22,6 +23,7 @@ type CloseableBlockQuerier struct {
}
func (c *CloseableBlockQuerier) Close() error {
+ c.BlockQuerier.Close()
if c.close != nil {
return c.close()
}
@@ -157,15 +159,14 @@ func (b *BlockDirectory) resolveSize() error {
// BlockQuerier returns a new block querier from the directory.
// The passed function `close` is called when the returned querier is closed.
-
func (b BlockDirectory) BlockQuerier(
- usePool bool,
+ alloc mempool.Allocator,
close func() error,
maxPageSize int,
metrics *v1.Metrics,
) *CloseableBlockQuerier {
return &CloseableBlockQuerier{
- BlockQuerier: v1.NewBlockQuerier(b.Block(metrics), usePool, maxPageSize),
+ BlockQuerier: v1.NewBlockQuerier(b.Block(metrics), alloc, maxPageSize),
BlockRef: b.BlockRef,
close: close,
}
diff --git a/pkg/storage/stores/shipper/bloomshipper/config/config.go b/pkg/storage/stores/shipper/bloomshipper/config/config.go
index 72d8f8557b095..6de144a3f84bf 100644
--- a/pkg/storage/stores/shipper/bloomshipper/config/config.go
+++ b/pkg/storage/stores/shipper/bloomshipper/config/config.go
@@ -4,11 +4,16 @@ package config
import (
"errors"
"flag"
+ "fmt"
+ "slices"
+ "strings"
"time"
"github.com/grafana/dskit/flagext"
"github.com/grafana/loki/v3/pkg/storage/chunk/cache"
+ lokiflagext "github.com/grafana/loki/v3/pkg/util/flagext"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
type Config struct {
@@ -18,6 +23,7 @@ type Config struct {
BlocksCache BlocksCacheConfig `yaml:"blocks_cache"`
MetasCache cache.Config `yaml:"metas_cache"`
MetasLRUCache cache.EmbeddedCacheConfig `yaml:"metas_lru_cache"`
+ MemoryManagement MemoryManagementConfig `yaml:"memory_management" doc:"hidden"`
// This will always be set to true when flags are registered.
// In tests, where config is created as literal, it can be set manually.
@@ -34,6 +40,7 @@ func (c *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
c.BlocksCache.RegisterFlagsWithPrefixAndDefaults(prefix+"blocks-cache.", "Cache for bloom blocks. ", f, 24*time.Hour)
c.MetasCache.RegisterFlagsWithPrefix(prefix+"metas-cache.", "Cache for bloom metas. ", f)
c.MetasLRUCache.RegisterFlagsWithPrefix(prefix+"metas-lru-cache.", "In-memory LRU cache for bloom metas. ", f)
+ c.MemoryManagement.RegisterFlagsWithPrefix(prefix+"memory-management.", f)
// always cache LIST operations
c.CacheListOps = true
@@ -43,6 +50,9 @@ func (c *Config) Validate() error {
if len(c.WorkingDirectory) == 0 {
return errors.New("at least one working directory must be specified")
}
+ if err := c.MemoryManagement.Validate(); err != nil {
+ return err
+ }
return nil
}
@@ -81,3 +91,60 @@ func (cfg *BlocksCacheConfig) Validate() error {
}
return nil
}
+
+var (
+ // the default that describes a 4GiB memory pool
+ defaultMemPoolBuckets = mempool.Buckets{
+ {Size: 128, Capacity: 64 << 10}, // 8MiB -- for tests
+ {Size: 512, Capacity: 2 << 20}, // 1024MiB
+ {Size: 128, Capacity: 8 << 20}, // 1024MiB
+ {Size: 32, Capacity: 32 << 20}, // 1024MiB
+ {Size: 8, Capacity: 128 << 20}, // 1024MiB
+ }
+ types = supportedAllocationTypes{
+ "simple", "simple heap allocations using Go's make([]byte, n) and no re-cycling of buffers",
+ "dynamic", "a buffer pool with variable sized buckets and best effort re-cycling of buffers using Go's sync.Pool",
+ "fixed", "a fixed size memory pool with configurable slab sizes, see mem-pool-buckets",
+ }
+)
+
+type MemoryManagementConfig struct {
+ BloomPageAllocationType string `yaml:"bloom_page_alloc_type"`
+ BloomPageMemPoolBuckets lokiflagext.CSV[mempool.Bucket] `yaml:"bloom_page_mem_pool_buckets"`
+}
+
+func (cfg *MemoryManagementConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
+ f.StringVar(&cfg.BloomPageAllocationType, prefix+"alloc-type", "dynamic", fmt.Sprintf("One of: %s", strings.Join(types.descriptions(), ", ")))
+
+ _ = cfg.BloomPageMemPoolBuckets.Set(defaultMemPoolBuckets.String())
+	f.Var(&cfg.BloomPageMemPoolBuckets, prefix+"mem-pool-buckets", "Comma-separated list of buckets in the format {count}x{bytes}")
+}
+
+func (cfg *MemoryManagementConfig) Validate() error {
+ if !slices.Contains(types.names(), cfg.BloomPageAllocationType) {
+ msg := fmt.Sprintf("bloom_page_alloc_type must be one of: %s", strings.Join(types.descriptions(), ", "))
+ return errors.New(msg)
+ }
+ if cfg.BloomPageAllocationType == "fixed" && len(cfg.BloomPageMemPoolBuckets) == 0 {
+ return errors.New("fixed memory pool requires at least one bucket")
+ }
+ return nil
+}
+
+type supportedAllocationTypes []string
+
+func (t supportedAllocationTypes) names() []string {
+ names := make([]string, 0, len(t)/2)
+ for i := 0; i < len(t); i += 2 {
+ names = append(names, t[i])
+ }
+ return names
+}
+
+func (t supportedAllocationTypes) descriptions() []string {
+ names := make([]string, 0, len(t)/2)
+ for i := 0; i < len(t); i += 2 {
+ names = append(names, fmt.Sprintf("%s (%s)", t[i], t[i+1]))
+ }
+ return names
+}
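
As a reading aid, a short sketch of the value format the new mem-pool-buckets flag accepts; the same strings land under the bloom_page_mem_pool_buckets YAML key. The {count}x{bytes} format and the types come from this diff, the concrete values are illustrative:

package main

import (
	"fmt"

	"github.com/grafana/loki/v3/pkg/util/flagext"
	"github.com/grafana/loki/v3/pkg/util/mempool"
)

func main() {
	// Same parsing path the -mem-pool-buckets flag uses: {count}x{bytes} items, comma separated.
	var buckets flagext.CSV[mempool.Bucket]
	if err := buckets.Set("512x2MB,128x8MB,32x32MB,8x128MB"); err != nil {
		panic(err)
	}
	fmt.Println(buckets.String()) // 512x2MB,128x8MB,32x32MB,8x128MB
}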
diff --git a/pkg/storage/stores/shipper/bloomshipper/fetcher.go b/pkg/storage/stores/shipper/bloomshipper/fetcher.go
index c2a2939a805b3..42d8d116b64a8 100644
--- a/pkg/storage/stores/shipper/bloomshipper/fetcher.go
+++ b/pkg/storage/stores/shipper/bloomshipper/fetcher.go
@@ -19,6 +19,7 @@ import (
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/chunk/cache"
"github.com/grafana/loki/v3/pkg/util/constants"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
"github.com/grafana/loki/v3/pkg/util/spanlogger"
)
@@ -30,7 +31,7 @@ type options struct {
// return bloom blocks to pool after iteration; default=false
// NB(owen-d): this can only be safely used when blooms are not captured outside
// of iteration or it can introduce use-after-free bugs
- usePool bool
+ usePool mempool.Allocator
}
func (o *options) apply(opts ...FetchOption) {
@@ -53,7 +54,7 @@ func WithFetchAsync(v bool) FetchOption {
}
}
-func WithPool(v bool) FetchOption {
+func WithPool(v mempool.Allocator) FetchOption {
return func(opts *options) {
opts.usePool = v
}
@@ -222,7 +223,7 @@ func (f *Fetcher) writeBackMetas(ctx context.Context, metas []Meta) error {
// FetchBlocks implements fetcher
func (f *Fetcher) FetchBlocks(ctx context.Context, refs []BlockRef, opts ...FetchOption) ([]*CloseableBlockQuerier, error) {
// apply fetch options
- cfg := &options{ignoreNotFound: true, fetchAsync: false, usePool: false}
+ cfg := &options{ignoreNotFound: true, fetchAsync: false, usePool: &mempool.SimpleHeapAllocator{}}
cfg.apply(opts...)
// first, resolve blocks from cache and enqueue missing blocks to download queue
diff --git a/pkg/storage/stores/shipper/bloomshipper/shipper.go b/pkg/storage/stores/shipper/bloomshipper/shipper.go
index edaa15596fff3..8e58e1231d255 100644
--- a/pkg/storage/stores/shipper/bloomshipper/shipper.go
+++ b/pkg/storage/stores/shipper/bloomshipper/shipper.go
@@ -16,10 +16,10 @@ type Interface interface {
}
type Shipper struct {
- store Store
+ store StoreBase
}
-func NewShipper(client Store) *Shipper {
+func NewShipper(client StoreBase) *Shipper {
return &Shipper{store: client}
}
diff --git a/pkg/storage/stores/shipper/bloomshipper/store.go b/pkg/storage/stores/shipper/bloomshipper/store.go
index f2c77d7ac74e0..363fb7806ece3 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store.go
@@ -21,6 +21,7 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk/client/util"
"github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/util/constants"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
"github.com/grafana/loki/v3/pkg/util/spanlogger"
)
@@ -28,7 +29,7 @@ var (
errNoStore = errors.New("no store found for time")
)
-type Store interface {
+type StoreBase interface {
ResolveMetas(ctx context.Context, params MetaSearchParams) ([][]MetaRef, []*Fetcher, error)
FetchMetas(ctx context.Context, params MetaSearchParams) ([]Meta, error)
FetchBlocks(ctx context.Context, refs []BlockRef, opts ...FetchOption) ([]*CloseableBlockQuerier, error)
@@ -41,9 +42,10 @@ type Store interface {
Stop()
}
-type StoreWithMetrics interface {
- Store
+type Store interface {
+ StoreBase
BloomMetrics() *v1.Metrics
+ Allocator() mempool.Allocator
}
type bloomStoreConfig struct {
@@ -53,7 +55,7 @@ type bloomStoreConfig struct {
}
// Compiler check to ensure bloomStoreEntry implements the Store interface
-var _ Store = &bloomStoreEntry{}
+var _ StoreBase = &bloomStoreEntry{}
type bloomStoreEntry struct {
start model.Time
@@ -272,15 +274,15 @@ func (b bloomStoreEntry) Stop() {
}
// Compiler check to ensure BloomStore implements the Store interface
-var _ StoreWithMetrics = &BloomStore{}
+var _ Store = &BloomStore{}
type BloomStore struct {
- stores []*bloomStoreEntry
- storageConfig storage.Config
- metrics *storeMetrics
- bloomMetrics *v1.Metrics
-
+ stores []*bloomStoreEntry
+ storageConfig storage.Config
+ metrics *storeMetrics
+ bloomMetrics *v1.Metrics
logger log.Logger
+ allocator mempool.Allocator
defaultKeyResolver // TODO(owen-d): impl schema aware resolvers
}
@@ -290,6 +292,7 @@ func NewBloomStore(
clientMetrics storage.ClientMetrics,
metasCache cache.Cache,
blocksCache Cache,
+ allocator mempool.Allocator,
reg prometheus.Registerer,
logger log.Logger,
) (*BloomStore, error) {
@@ -297,6 +300,7 @@ func NewBloomStore(
storageConfig: storageConfig,
metrics: newStoreMetrics(reg, constants.Loki, "bloom_store"),
bloomMetrics: v1.NewMetrics(reg),
+ allocator: allocator,
logger: logger,
}
@@ -404,7 +408,7 @@ func (b *BloomStore) TenantFilesForInterval(
) (map[string][]client.StorageObject, error) {
var allTenants map[string][]client.StorageObject
- err := b.forStores(ctx, interval, func(innerCtx context.Context, interval Interval, store Store) error {
+ err := b.forStores(ctx, interval, func(innerCtx context.Context, interval Interval, store StoreBase) error {
tenants, err := store.TenantFilesForInterval(innerCtx, interval, filter)
if err != nil {
return err
@@ -441,12 +445,17 @@ func (b *BloomStore) Client(ts model.Time) (Client, error) {
return nil, errNoStore
}
+// Allocator implements Store.
+func (b *BloomStore) Allocator() mempool.Allocator {
+ return b.allocator
+}
+
// ResolveMetas implements Store.
func (b *BloomStore) ResolveMetas(ctx context.Context, params MetaSearchParams) ([][]MetaRef, []*Fetcher, error) {
refs := make([][]MetaRef, 0, len(b.stores))
fetchers := make([]*Fetcher, 0, len(b.stores))
- err := b.forStores(ctx, params.Interval, func(innerCtx context.Context, interval Interval, store Store) error {
+ err := b.forStores(ctx, params.Interval, func(innerCtx context.Context, interval Interval, store StoreBase) error {
newParams := params
newParams.Interval = interval
metas, fetcher, err := store.ResolveMetas(innerCtx, newParams)
@@ -580,7 +589,7 @@ func (b *BloomStore) storeDo(ts model.Time, f func(s *bloomStoreEntry) error) er
return fmt.Errorf("no store found for timestamp %s", ts.Time())
}
-func (b *BloomStore) forStores(ctx context.Context, interval Interval, f func(innerCtx context.Context, interval Interval, store Store) error) error {
+func (b *BloomStore) forStores(ctx context.Context, interval Interval, f func(innerCtx context.Context, interval Interval, store StoreBase) error) error {
if len(b.stores) == 0 {
return nil
}
diff --git a/pkg/storage/stores/shipper/bloomshipper/store_test.go b/pkg/storage/stores/shipper/bloomshipper/store_test.go
index 093858444891d..15568e8763bd2 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store_test.go
@@ -23,6 +23,7 @@ import (
storageconfig "github.com/grafana/loki/v3/pkg/storage/config"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper/config"
"github.com/grafana/loki/v3/pkg/storage/types"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
func newMockBloomStore(t *testing.T) (*BloomStore, string, error) {
@@ -77,7 +78,7 @@ func newMockBloomStoreWithWorkDir(t *testing.T, workDir, storeDir string) (*Bloo
metasCache := cache.NewMockCache()
blocksCache := NewFsBlocksCache(storageConfig.BloomShipperConfig.BlocksCache, prometheus.NewPedanticRegistry(), logger)
- store, err := NewBloomStore(periodicConfigs, storageConfig, metrics, metasCache, blocksCache, reg, logger)
+ store, err := NewBloomStore(periodicConfigs, storageConfig, metrics, metasCache, blocksCache, &mempool.SimpleHeapAllocator{}, reg, logger)
if err == nil {
t.Cleanup(store.Stop)
}
diff --git a/pkg/util/flagext/csv.go b/pkg/util/flagext/csv.go
new file mode 100644
index 0000000000000..6ed5f9bad11a0
--- /dev/null
+++ b/pkg/util/flagext/csv.go
@@ -0,0 +1,62 @@
+package flagext
+
+import (
+ "strings"
+)
+
+type ListValue interface {
+ String() string
+ Parse(s string) (any, error)
+}
+
+// CSV is a generic list of values that is parsed from a comma-separated string.
+// It implements flag.Value and the yaml Marshaler/Unmarshaler interfaces.
+type CSV[T ListValue] []T
+
+// String implements flag.Value
+func (v CSV[T]) String() string {
+ s := make([]string, 0, len(v))
+ for i := range v {
+ s = append(s, v[i].String())
+ }
+ return strings.Join(s, ",")
+}
+
+// Set implements flag.Value
+func (v *CSV[T]) Set(s string) error {
+ if len(s) == 0 {
+ *v = nil
+ return nil
+ }
+ var zero T
+ values := strings.Split(s, ",")
+ *v = make(CSV[T], 0, len(values))
+ for _, val := range values {
+ el, err := zero.Parse(val)
+ if err != nil {
+ return err
+ }
+ *v = append(*v, el.(T))
+ }
+ return nil
+}
+
+// Get implements flag.Getter
+func (v CSV[T]) Get() []T {
+ return v
+}
+
+// UnmarshalYAML implements yaml.Unmarshaler.
+func (v *CSV[T]) UnmarshalYAML(unmarshal func(interface{}) error) error {
+ var s string
+ if err := unmarshal(&s); err != nil {
+ return err
+ }
+
+ return v.Set(s)
+}
+
+// MarshalYAML implements yaml.Marshaler.
+func (v CSV[T]) MarshalYAML() (interface{}, error) {
+ return v.String(), nil
+}
diff --git a/pkg/util/flagext/csv_test.go b/pkg/util/flagext/csv_test.go
new file mode 100644
index 0000000000000..aca4ea8a77eef
--- /dev/null
+++ b/pkg/util/flagext/csv_test.go
@@ -0,0 +1,79 @@
+package flagext
+
+import (
+ "strconv"
+ "testing"
+
+ "github.com/stretchr/testify/require"
+)
+
+type customType int
+
+// Parse implements ListValue.
+func (l customType) Parse(s string) (any, error) {
+ v, err := strconv.Atoi(s)
+ if err != nil {
+ return customType(0), err
+ }
+ return customType(v), nil
+}
+
+// String implements ListValue.
+func (l customType) String() string {
+ return strconv.Itoa(int(l))
+}
+
+var _ ListValue = customType(0)
+
+func Test_CSV(t *testing.T) {
+ for _, tc := range []struct {
+ in string
+ err bool
+ out []customType
+ }{
+ {
+ in: "",
+ err: false,
+ out: nil,
+ },
+ {
+ in: ",",
+ err: true,
+ out: []customType{},
+ },
+ {
+ in: "1",
+ err: false,
+ out: []customType{1},
+ },
+ {
+ in: "1,2",
+ err: false,
+ out: []customType{1, 2},
+ },
+ {
+ in: "1,",
+ err: true,
+ out: []customType{},
+ },
+ {
+ in: ",1",
+ err: true,
+ out: []customType{},
+ },
+ } {
+ t.Run(tc.in, func(t *testing.T) {
+ var v CSV[customType]
+
+ err := v.Set(tc.in)
+ if tc.err {
+ require.NotNil(t, err)
+ } else {
+ require.Nil(t, err)
+ require.Equal(t, tc.out, v.Get())
+ }
+
+ })
+ }
+
+}
diff --git a/pkg/util/mempool/allocator.go b/pkg/util/mempool/allocator.go
new file mode 100644
index 0000000000000..a27429b80692f
--- /dev/null
+++ b/pkg/util/mempool/allocator.go
@@ -0,0 +1,49 @@
+package mempool
+
+import (
+ "github.com/prometheus/prometheus/util/pool"
+)
+
+// Allocator handles byte slices for bloom queriers.
+// It exists to reduce the cost of allocations and to allow re-use of already allocated memory.
+type Allocator interface {
+ Get(size int) ([]byte, error)
+ Put([]byte) bool
+}
+
+// SimpleHeapAllocator allocates a new byte slice every time and does not re-cycle buffers.
+type SimpleHeapAllocator struct{}
+
+func (a *SimpleHeapAllocator) Get(size int) ([]byte, error) {
+ return make([]byte, size), nil
+}
+
+func (a *SimpleHeapAllocator) Put([]byte) bool {
+ return true
+}
+
+// BytePool uses a sync.Pool to re-cycle already allocated buffers.
+type BytePool struct {
+ pool *pool.Pool
+}
+
+func NewBytePoolAllocator(minSize, maxSize int, factor float64) *BytePool {
+ return &BytePool{
+ pool: pool.New(
+ minSize, maxSize, factor,
+ func(size int) interface{} {
+ return make([]byte, size)
+ }),
+ }
+}
+
+// Get implements Allocator
+func (p *BytePool) Get(size int) ([]byte, error) {
+ return p.pool.Get(size).([]byte)[:size], nil
+}
+
+// Put implements Allocator
+func (p *BytePool) Put(b []byte) bool {
+ p.pool.Put(b)
+ return true
+}
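
A minimal usage sketch of the Allocator contract above; the import path is from this diff, the rest is illustrative:

package main

import (
	"fmt"

	"github.com/grafana/loki/v3/pkg/util/mempool"
)

func main() {
	// Any Allocator can be swapped in here; the heap allocator never errors.
	var alloc mempool.Allocator = &mempool.SimpleHeapAllocator{}
	buf, err := alloc.Get(1 << 10)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(buf)) // 1024
	// Put is a no-op for the heap allocator; pooled implementations re-cycle buf.
	_ = alloc.Put(buf)
}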
diff --git a/pkg/util/mempool/bucket.go b/pkg/util/mempool/bucket.go
new file mode 100644
index 0000000000000..2a56608230d3b
--- /dev/null
+++ b/pkg/util/mempool/bucket.go
@@ -0,0 +1,51 @@
+package mempool
+
+import (
+ "errors"
+ "fmt"
+ "strconv"
+ "strings"
+
+ "github.com/c2h5oh/datasize"
+)
+
+type Bucket struct {
+ Size int // Number of buffers
+ Capacity uint64 // Size of a buffer
+}
+
+func (b Bucket) Parse(s string) (any, error) {
+ parts := strings.Split(s, "x")
+ if len(parts) != 2 {
+ return nil, errors.New("bucket must be in format {count}x{bytes}")
+ }
+
+ size, err := strconv.Atoi(parts[0])
+ if err != nil {
+ return nil, err
+ }
+
+ capacity, err := datasize.ParseString(parts[1])
+ if err != nil {
+		return nil, err
+ }
+
+ return Bucket{
+ Size: size,
+ Capacity: uint64(capacity),
+ }, nil
+}
+
+func (b Bucket) String() string {
+ return fmt.Sprintf("%dx%s", b.Size, datasize.ByteSize(b.Capacity).String())
+}
+
+type Buckets []Bucket
+
+func (b Buckets) String() string {
+ s := make([]string, 0, len(b))
+ for i := range b {
+ s = append(s, b[i].String())
+ }
+ return strings.Join(s, ",")
+}
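
To make the count-versus-bytes semantics concrete, a hedged round-trip sketch of Bucket.Parse and Bucket.String (the input value is illustrative):

package main

import (
	"fmt"

	"github.com/grafana/loki/v3/pkg/util/mempool"
)

func main() {
	var zero mempool.Bucket
	v, err := zero.Parse("8x128MB") // 8 buffers of 128MiB each
	if err != nil {
		panic(err)
	}
	b := v.(mempool.Bucket)
	fmt.Println(b.Size, b.Capacity) // 8 134217728
	fmt.Println(b.String())         // 8x128MB
}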
diff --git a/pkg/util/mempool/metrics.go b/pkg/util/mempool/metrics.go
new file mode 100644
index 0000000000000..f7d5a52eb0d91
--- /dev/null
+++ b/pkg/util/mempool/metrics.go
@@ -0,0 +1,32 @@
+package mempool
+
+import (
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+
+ "github.com/grafana/loki/v3/pkg/util/constants"
+)
+
+type metrics struct {
+ availableBuffersPerSlab *prometheus.CounterVec
+ errorsCounter *prometheus.CounterVec
+}
+
+func newMetrics(r prometheus.Registerer, name string) *metrics {
+ return &metrics{
+ availableBuffersPerSlab: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Subsystem: "mempool",
+ Name: "available_buffers_per_slab",
+			Help:        "The number of available buffers per slab.",
+ ConstLabels: prometheus.Labels{"pool": name},
+ }, []string{"slab"}),
+ errorsCounter: promauto.With(r).NewCounterVec(prometheus.CounterOpts{
+ Namespace: constants.Loki,
+ Subsystem: "mempool",
+ Name: "errors_total",
+			Help:        "The total number of errors returned from the pool.",
+ ConstLabels: prometheus.Labels{"pool": name},
+ }, []string{"slab", "reason"}),
+ }
+}
diff --git a/pkg/util/mempool/pool.go b/pkg/util/mempool/pool.go
new file mode 100644
index 0000000000000..b42d8d9237677
--- /dev/null
+++ b/pkg/util/mempool/pool.go
@@ -0,0 +1,135 @@
+package mempool
+
+import (
+ "errors"
+ "fmt"
+ "sync"
+ "unsafe"
+
+ "github.com/dustin/go-humanize"
+ "github.com/prometheus/client_golang/prometheus"
+)
+
+var (
+ errSlabExhausted = errors.New("slab exhausted")
+
+ reasonSizeExceeded = "size-exceeded"
+ reasonSlabExhausted = "slab-exhausted"
+)
+
+type slab struct {
+ buffer chan unsafe.Pointer
+ size, count int
+ mtx sync.Mutex
+ metrics *metrics
+ name string
+}
+
+func newSlab(bufferSize, bufferCount int, m *metrics) *slab {
+ name := humanize.Bytes(uint64(bufferSize))
+ m.availableBuffersPerSlab.WithLabelValues(name).Add(0) // initialize metric with value 0
+
+ return &slab{
+ size: bufferSize,
+ count: bufferCount,
+ metrics: m,
+ name: name,
+ }
+}
+
+func (s *slab) init() {
+ s.buffer = make(chan unsafe.Pointer, s.count)
+ for i := 0; i < s.count; i++ {
+ buf := make([]byte, 0, s.size)
+ ptr := unsafe.Pointer(unsafe.SliceData(buf))
+ s.buffer <- ptr
+ }
+ s.metrics.availableBuffersPerSlab.WithLabelValues(s.name).Add(float64(s.count))
+}
+
+func (s *slab) get(size int) ([]byte, error) {
+ s.mtx.Lock()
+ if s.buffer == nil {
+ s.init()
+ }
+ defer s.mtx.Unlock()
+
+ // wait for available buffer on channel
+ var buf []byte
+ select {
+ case ptr := <-s.buffer:
+ buf = unsafe.Slice((*byte)(ptr), s.size)
+ default:
+ s.metrics.errorsCounter.WithLabelValues(s.name, reasonSlabExhausted).Inc()
+ return nil, errSlabExhausted
+ }
+
+ // Taken from https://github.com/ortuman/nuke/blob/main/monotonic_arena.go#L37-L48
+ // This piece of code will be translated into a runtime.memclrNoHeapPointers
+ // invocation by the compiler, which is an assembler optimized implementation.
+ // Architecture specific code can be found at src/runtime/memclr_$GOARCH.s
+ // in Go source (since https://codereview.appspot.com/137880043).
+ for i := range buf {
+ buf[i] = 0
+ }
+
+ return buf[:size], nil
+}
+
+func (s *slab) put(buf []byte) {
+ if s.buffer == nil {
+ panic("slab is not initialized")
+ }
+
+ ptr := unsafe.Pointer(unsafe.SliceData(buf))
+ s.buffer <- ptr
+}
+
+// MemPool is an Allocator implementation that uses a fixed size memory pool
+// that is split into multiple slabs of different buffer sizes.
+// Buffers are re-cycled and need to be returned back to the pool, otherwise
+// the pool runs out of available buffers.
+type MemPool struct {
+ slabs []*slab
+ metrics *metrics
+}
+
+func New(name string, buckets []Bucket, r prometheus.Registerer) *MemPool {
+ a := &MemPool{
+ slabs: make([]*slab, 0, len(buckets)),
+ metrics: newMetrics(r, name),
+ }
+ for _, b := range buckets {
+ a.slabs = append(a.slabs, newSlab(int(b.Capacity), b.Size, a.metrics))
+ }
+ return a
+}
+
+// Get satisfies Allocator interface
+// Allocating a buffer from an exhausted pool/slab, or allocating a buffer that
+// exceeds the largest slab size will return an error.
+func (a *MemPool) Get(size int) ([]byte, error) {
+ for i := 0; i < len(a.slabs); i++ {
+ if a.slabs[i].size < size {
+ continue
+ }
+ return a.slabs[i].get(size)
+ }
+ a.metrics.errorsCounter.WithLabelValues("pool", reasonSizeExceeded).Inc()
+ return nil, fmt.Errorf("no slab found for size: %d", size)
+}
+
+// Put satisfies Allocator interface
+// Every buffer allocated with Get(size int) needs to be returned to the pool
+// using Put(buffer []byte) so it can be re-cycled.
+func (a *MemPool) Put(buffer []byte) bool {
+ size := cap(buffer)
+ for i := 0; i < len(a.slabs); i++ {
+ if a.slabs[i].size < size {
+ continue
+ }
+ a.slabs[i].put(buffer)
+ return true
+ }
+ return false
+}
diff --git a/pkg/util/mempool/pool_test.go b/pkg/util/mempool/pool_test.go
new file mode 100644
index 0000000000000..da0fc361dd4a4
--- /dev/null
+++ b/pkg/util/mempool/pool_test.go
@@ -0,0 +1,133 @@
+package mempool
+
+import (
+ "math/rand"
+ "sync"
+ "testing"
+ "time"
+ "unsafe"
+
+ "github.com/stretchr/testify/require"
+)
+
+func TestMemPool(t *testing.T) {
+
+ t.Run("empty pool", func(t *testing.T) {
+ pool := New("test", []Bucket{}, nil)
+ _, err := pool.Get(256)
+ require.Error(t, err)
+ })
+
+ t.Run("requested size too big", func(t *testing.T) {
+ pool := New("test", []Bucket{
+ {Size: 1, Capacity: 128},
+ }, nil)
+ _, err := pool.Get(256)
+ require.Error(t, err)
+ })
+
+ t.Run("requested size within bucket", func(t *testing.T) {
+ pool := New("test", []Bucket{
+ {Size: 1, Capacity: 128},
+ {Size: 1, Capacity: 256},
+ {Size: 1, Capacity: 512},
+ }, nil)
+ res, err := pool.Get(200)
+ require.NoError(t, err)
+ require.Equal(t, 200, len(res))
+ require.Equal(t, 256, cap(res))
+
+ res, err = pool.Get(300)
+ require.NoError(t, err)
+ require.Equal(t, 300, len(res))
+ require.Equal(t, 512, cap(res))
+ })
+
+ t.Run("buffer is cleared when returned", func(t *testing.T) {
+ pool := New("test", []Bucket{
+ {Size: 1, Capacity: 64},
+ }, nil)
+ res, err := pool.Get(8)
+ require.NoError(t, err)
+ require.Equal(t, 8, len(res))
+ source := []byte{0, 1, 2, 3, 4, 5, 6, 7}
+ copy(res, source)
+
+ pool.Put(res)
+
+ res, err = pool.Get(8)
+ require.NoError(t, err)
+ require.Equal(t, 8, len(res))
+ require.Equal(t, make([]byte, 8), res)
+ })
+
+ t.Run("pool returns error when no buffer is available", func(t *testing.T) {
+ pool := New("test", []Bucket{
+ {Size: 1, Capacity: 64},
+ }, nil)
+ buf1, _ := pool.Get(32)
+ require.Equal(t, 32, len(buf1))
+
+ _, err := pool.Get(16)
+ require.ErrorContains(t, err, errSlabExhausted.Error())
+ })
+
+ t.Run("test ring buffer returns same backing array", func(t *testing.T) {
+ pool := New("test", []Bucket{
+ {Size: 2, Capacity: 128},
+ }, nil)
+ res1, _ := pool.Get(32)
+ ptr1 := unsafe.Pointer(unsafe.SliceData(res1))
+
+ res2, _ := pool.Get(64)
+ ptr2 := unsafe.Pointer(unsafe.SliceData(res2))
+
+ pool.Put(res2)
+ pool.Put(res1)
+
+ res3, _ := pool.Get(48)
+ ptr3 := unsafe.Pointer(unsafe.SliceData(res3))
+
+ res4, _ := pool.Get(96)
+ ptr4 := unsafe.Pointer(unsafe.SliceData(res4))
+
+ require.Equal(t, ptr1, ptr4)
+ require.Equal(t, ptr2, ptr3)
+ })
+
+ t.Run("concurrent access", func(t *testing.T) {
+ pool := New("test", []Bucket{
+ {Size: 32, Capacity: 2 << 10},
+ {Size: 16, Capacity: 4 << 10},
+ {Size: 8, Capacity: 8 << 10},
+ {Size: 4, Capacity: 16 << 10},
+ {Size: 2, Capacity: 32 << 10},
+ }, nil)
+
+ var wg sync.WaitGroup
+ numWorkers := 256
+ n := 10
+
+ for i := 0; i < numWorkers; i++ {
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for i := 0; i < n; i++ {
+ s := 2 << rand.Intn(5)
+ buf1, err1 := pool.Get(s)
+ buf2, err2 := pool.Get(s)
+ if err2 == nil {
+ pool.Put(buf2)
+ }
+ time.Sleep(time.Millisecond * time.Duration(rand.Intn(10)))
+ if err1 == nil {
+ pool.Put(buf1)
+ }
+ }
+ }()
+ }
+
+ wg.Wait()
+ t.Log("finished")
+ })
+}
diff --git a/tools/bloom/inspector/main.go b/tools/bloom/inspector/main.go
index dfcc7c79cd86d..8f60422cd6487 100644
--- a/tools/bloom/inspector/main.go
+++ b/tools/bloom/inspector/main.go
@@ -5,6 +5,7 @@ import (
"os"
v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
+ "github.com/grafana/loki/v3/pkg/util/mempool"
)
func main() {
@@ -18,7 +19,7 @@ func main() {
r := v1.NewDirectoryBlockReader(path)
b := v1.NewBlock(r, v1.NewMetrics(nil))
- q := v1.NewBlockQuerier(b, true, v1.DefaultMaxPageSize)
+ q := v1.NewBlockQuerier(b, &mempool.SimpleHeapAllocator{}, v1.DefaultMaxPageSize)
md, err := q.Metadata()
if err != nil {
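
Putting the pieces of this commit together, a sketch of how a caller could map the allocation-type knob onto an allocator. The three type names come from config.go above; the function, pool name, and dynamic-pool bounds (mirroring the removed BloomPagePool) are illustrative:

// assumed imports: "github.com/prometheus/client_golang/prometheus",
// "github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper/config",
// "github.com/grafana/loki/v3/pkg/util/mempool"
func newAllocator(cfg config.MemoryManagementConfig, reg prometheus.Registerer) mempool.Allocator {
	switch cfg.BloomPageAllocationType {
	case "simple":
		return &mempool.SimpleHeapAllocator{}
	case "fixed":
		return mempool.New("bloom-page-pool", cfg.BloomPageMemPoolBuckets.Get(), reg)
	default: // "dynamic"
		// 128KiB..128MiB with growth factor 2, like the old BloomPagePool
		return mempool.NewBytePoolAllocator(128<<10, 128<<20, 2)
	}
}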
|
perf
|
Re-introduce fixed size memory pool for bloom querier (#13172)
|
04ab08dc89c0523a0e7c515b6ef53700eccfcfe7
|
2021-10-06 04:00:40
|
Karen Miller
|
docs: remove empty section "Generic placeholders" (#4417)
| false
|
diff --git a/docs/sources/configuration/_index.md b/docs/sources/configuration/_index.md
index 7f65e9b3cd554..77fb9211120e1 100644
--- a/docs/sources/configuration/_index.md
+++ b/docs/sources/configuration/_index.md
@@ -2168,7 +2168,6 @@ multi_kv_config:
mirror-enabled: false
primary: consul
```
-### Generic placeholders
## Accept out-of-order writes
|
docs
|
remove empty section "Generic placeholders" (#4417)
|
2b3ae48d9be63183907dfd7163af6a980360c853
|
2024-05-03 17:17:15
|
Sven Grossmann
|
feat(detectedFields): add parser to response (#12872)
| false
|
diff --git a/pkg/loghttp/detected.go b/pkg/loghttp/detected.go
index d255bf6124a75..632ac7cd02410 100644
--- a/pkg/loghttp/detected.go
+++ b/pkg/loghttp/detected.go
@@ -11,4 +11,5 @@ type DetectedField struct {
Label string `json:"label,omitempty"`
Type logproto.DetectedFieldType `json:"type,omitempty"`
Cardinality uint64 `json:"cardinality,omitempty"`
+ Parser string `json:"parser,omitempty"`
}
diff --git a/pkg/logproto/logproto.pb.go b/pkg/logproto/logproto.pb.go
index 774af1fb709b1..ac9bd37a06186 100644
--- a/pkg/logproto/logproto.pb.go
+++ b/pkg/logproto/logproto.pb.go
@@ -2818,7 +2818,8 @@ type DetectedField struct {
Label string `protobuf:"bytes,1,opt,name=label,proto3" json:"label,omitempty"`
Type DetectedFieldType `protobuf:"bytes,2,opt,name=type,proto3,casttype=DetectedFieldType" json:"type,omitempty"`
Cardinality uint64 `protobuf:"varint,3,opt,name=cardinality,proto3" json:"cardinality,omitempty"`
- Sketch []byte `protobuf:"bytes,4,opt,name=sketch,proto3" json:"sketch,omitempty"`
+ Parser string `protobuf:"bytes,4,opt,name=parser,proto3" json:"parser,omitempty"`
+ Sketch []byte `protobuf:"bytes,5,opt,name=sketch,proto3" json:"sketch,omitempty"`
}
func (m *DetectedField) Reset() { *m = DetectedField{} }
@@ -2874,6 +2875,13 @@ func (m *DetectedField) GetCardinality() uint64 {
return 0
}
+func (m *DetectedField) GetParser() string {
+ if m != nil {
+ return m.Parser
+ }
+ return ""
+}
+
func (m *DetectedField) GetSketch() []byte {
if m != nil {
return m.Sketch
@@ -3097,174 +3105,174 @@ func init() {
func init() { proto.RegisterFile("pkg/logproto/logproto.proto", fileDescriptor_c28a5f14f1f4c79a) }
var fileDescriptor_c28a5f14f1f4c79a = []byte{
-	// 2658 bytes of a gzipped FileDescriptorProto
-	// [old gzipped descriptor bytes omitted: machine-generated and not human-readable]
+	// 2671 bytes of a gzipped FileDescriptorProto
+	// [regenerated gzipped descriptor bytes omitted: the blob grows by 13 bytes after adding the parser field]
+ 0x74, 0x16, 0x4a, 0x11, 0x17, 0x5d, 0xd4, 0x98, 0xe3, 0xb6, 0xd0, 0xae, 0xef, 0x8f, 0x9d, 0x45,
+ 0x09, 0x39, 0x1b, 0x0c, 0x3c, 0x46, 0x07, 0x43, 0xb6, 0x4b, 0x14, 0x0e, 0x7a, 0x0c, 0xe6, 0xba,
+ 0xb4, 0x4f, 0xb9, 0xc2, 0xcb, 0x42, 0xe1, 0x8b, 0x06, 0x79, 0x31, 0x41, 0x34, 0x02, 0x7a, 0x13,
+ 0xec, 0x61, 0xdf, 0xf5, 0x1b, 0x15, 0xc1, 0xc5, 0x42, 0x82, 0x78, 0xbd, 0xef, 0xfa, 0xed, 0x67,
+ 0x3e, 0x1a, 0x3b, 0x4f, 0xf5, 0x3c, 0xb6, 0x33, 0xba, 0xb5, 0xda, 0x09, 0x06, 0xad, 0x5e, 0xe8,
+ 0x6e, 0xbb, 0xbe, 0xdb, 0xea, 0x07, 0xb7, 0xbd, 0xd6, 0xdb, 0x4f, 0xb6, 0xf8, 0x1d, 0xbc, 0x33,
+ 0xa2, 0xa1, 0x47, 0xc3, 0x16, 0x27, 0xb3, 0x2a, 0x54, 0xc2, 0x97, 0x12, 0x41, 0x16, 0x5d, 0xe5,
+ 0xf6, 0x17, 0x84, 0x74, 0x6d, 0x67, 0xe4, 0xdf, 0x8e, 0x1a, 0x20, 0x76, 0x39, 0x91, 0xec, 0x22,
+ 0xe0, 0x84, 0x6e, 0x6f, 0x84, 0xc1, 0x68, 0xd8, 0x3e, 0xb2, 0x3f, 0x76, 0x4c, 0x7c, 0x62, 0x0e,
+ 0xae, 0xda, 0xe5, 0xd2, 0xe2, 0x1c, 0x7e, 0xaf, 0x00, 0x68, 0xcb, 0x1d, 0x0c, 0xfb, 0x74, 0x26,
+ 0xf5, 0xc7, 0x8a, 0xce, 0x1f, 0x5a, 0xd1, 0x85, 0x59, 0x15, 0x9d, 0x68, 0xcd, 0x9e, 0x4d, 0x6b,
+ 0xc5, 0xcf, 0xab, 0xb5, 0xd2, 0xff, 0xbd, 0xd6, 0x70, 0x03, 0x6c, 0x4e, 0x99, 0x3b, 0xcb, 0xd0,
+ 0xbd, 0x2b, 0x74, 0x53, 0x23, 0xfc, 0x11, 0x6f, 0x42, 0x49, 0xf2, 0x85, 0x96, 0xb3, 0xca, 0x4b,
+ 0xdf, 0xdb, 0x44, 0x71, 0x05, 0xad, 0x92, 0xc5, 0x44, 0x25, 0x05, 0x21, 0x6c, 0xfc, 0x47, 0x0b,
+ 0xe6, 0x95, 0x45, 0x28, 0xdf, 0x77, 0x0b, 0xe6, 0xa4, 0xef, 0xd1, 0x7e, 0xef, 0x44, 0xd6, 0xef,
+ 0x5d, 0xea, 0xba, 0x43, 0x46, 0xc3, 0x76, 0xeb, 0xfd, 0xb1, 0x63, 0x7d, 0x34, 0x76, 0x1e, 0x9d,
+ 0x26, 0x34, 0x1d, 0x6b, 0xb4, 0xbf, 0xd4, 0x84, 0xd1, 0x19, 0x71, 0x3a, 0x16, 0x29, 0xb3, 0x3a,
+ 0xb2, 0x2a, 0x43, 0xd4, 0x15, 0xbf, 0x47, 0x23, 0x4e, 0xd9, 0xe6, 0x16, 0x41, 0x24, 0x0e, 0x67,
+ 0xf3, 0xae, 0x1b, 0xfa, 0x9e, 0xdf, 0x8b, 0x1a, 0x05, 0xe1, 0xd3, 0xe3, 0x31, 0xfe, 0xa9, 0x05,
+ 0x4b, 0x29, 0xb3, 0x56, 0x4c, 0x5c, 0x80, 0x52, 0xc4, 0x35, 0xa5, 0x79, 0x30, 0x8c, 0x62, 0x4b,
+ 0xc0, 0xdb, 0x0b, 0xea, 0xf0, 0x25, 0x39, 0x26, 0x0a, 0xff, 0xfe, 0x1d, 0xed, 0x2f, 0x16, 0xd4,
+ 0x44, 0x60, 0xd2, 0x77, 0x0d, 0x81, 0xed, 0xbb, 0x03, 0xaa, 0x54, 0x25, 0x9e, 0x8d, 0x68, 0xc5,
+ 0xb7, 0x2b, 0xeb, 0x68, 0x35, 0xab, 0x83, 0xb5, 0x0e, 0xed, 0x60, 0xad, 0xe4, 0xde, 0xd5, 0xa1,
+ 0xc8, 0xcd, 0x7b, 0x57, 0x38, 0xd7, 0x0a, 0x91, 0x03, 0xfc, 0x28, 0xcc, 0x2b, 0x2e, 0x94, 0x68,
+ 0xa7, 0x05, 0xd8, 0x01, 0x94, 0xa4, 0x26, 0xd0, 0x57, 0xa0, 0x12, 0x27, 0x26, 0x82, 0xdb, 0x42,
+ 0xbb, 0xb4, 0x3f, 0x76, 0xf2, 0x2c, 0x22, 0xc9, 0x04, 0x72, 0xcc, 0xa0, 0x6f, 0xb5, 0x2b, 0xfb,
+ 0x63, 0x47, 0x02, 0x54, 0x88, 0x47, 0xa7, 0xc0, 0xde, 0xe1, 0x71, 0x93, 0x8b, 0xc0, 0x6e, 0x97,
+ 0xf7, 0xc7, 0x8e, 0x18, 0x13, 0xf1, 0x8b, 0x37, 0xa0, 0xb6, 0x49, 0x7b, 0x6e, 0x67, 0x57, 0x6d,
+ 0x5a, 0xd7, 0xe4, 0xf8, 0x86, 0x96, 0xa6, 0xf1, 0x30, 0xd4, 0xe2, 0x1d, 0xdf, 0x1a, 0x44, 0xea,
+ 0x36, 0x54, 0x63, 0xd8, 0xcb, 0x11, 0xfe, 0x99, 0x05, 0xca, 0x06, 0x10, 0x36, 0xb2, 0x1d, 0xee,
+ 0x0b, 0x61, 0x7f, 0xec, 0x28, 0x88, 0x4e, 0x66, 0xd0, 0xb3, 0x30, 0x17, 0x89, 0x1d, 0x39, 0xb1,
+ 0xac, 0x69, 0x89, 0x89, 0xf6, 0x11, 0x6e, 0x22, 0xfb, 0x63, 0x47, 0x23, 0x12, 0xfd, 0x80, 0x56,
+ 0x53, 0x09, 0x81, 0x64, 0x6c, 0x61, 0x7f, 0xec, 0x18, 0x50, 0x33, 0x41, 0xc0, 0x9f, 0x59, 0x50,
+ 0xbd, 0xe1, 0x7a, 0xb1, 0x09, 0x35, 0xb4, 0x8a, 0x12, 0x5f, 0x2d, 0x01, 0xdc, 0x12, 0xbb, 0xb4,
+ 0xef, 0xee, 0x5e, 0x0e, 0x42, 0x41, 0x77, 0x9e, 0xc4, 0xe3, 0x24, 0x86, 0xdb, 0x13, 0x63, 0x78,
+ 0x71, 0x76, 0xd7, 0xfe, 0xbf, 0x75, 0xa4, 0x57, 0xed, 0x72, 0x7e, 0xb1, 0x80, 0xdf, 0xb3, 0xa0,
+ 0x26, 0x99, 0x57, 0x96, 0xf7, 0x3d, 0x28, 0x49, 0xd9, 0x08, 0xf6, 0xff, 0x8b, 0x63, 0x3a, 0x33,
+ 0x8b, 0x53, 0x52, 0x34, 0xd1, 0xf3, 0xb0, 0xd0, 0x0d, 0x83, 0xe1, 0x90, 0x76, 0xb7, 0x94, 0xfb,
+ 0xcb, 0x67, 0xdd, 0xdf, 0xba, 0x39, 0x4f, 0x32, 0xe8, 0xf8, 0xaf, 0x16, 0xcc, 0x2b, 0x67, 0xa2,
+ 0xd4, 0x15, 0x8b, 0xd8, 0x3a, 0x74, 0xf4, 0xcc, 0xcf, 0x1a, 0x3d, 0x8f, 0x43, 0xa9, 0xc7, 0xe3,
+ 0x8b, 0x76, 0x48, 0x6a, 0x34, 0x5b, 0x54, 0xc5, 0x57, 0x61, 0x41, 0xb3, 0x32, 0xc5, 0xa3, 0x2e,
+ 0x67, 0x3d, 0xea, 0x95, 0x2e, 0xf5, 0x99, 0xb7, 0xed, 0xc5, 0x3e, 0x52, 0xe1, 0xe3, 0x1f, 0x59,
+ 0xb0, 0x98, 0x45, 0x41, 0xeb, 0x99, 0xc2, 0xe2, 0x91, 0xe9, 0xe4, 0xcc, 0x9a, 0x42, 0x93, 0x56,
+ 0x95, 0xc5, 0x53, 0xf7, 0xaa, 0x2c, 0xea, 0xa6, 0x93, 0xa9, 0x28, 0xaf, 0x80, 0x7f, 0x62, 0xc1,
+ 0x7c, 0x4a, 0x97, 0xe8, 0x02, 0xd8, 0xdb, 0x61, 0x30, 0x98, 0x49, 0x51, 0x62, 0x05, 0xfa, 0x3a,
+ 0xe4, 0x59, 0x30, 0x93, 0x9a, 0xf2, 0x2c, 0xe0, 0x5a, 0x52, 0xec, 0x17, 0x64, 0xde, 0x2e, 0x47,
+ 0xf8, 0x29, 0xa8, 0x08, 0x86, 0xae, 0xbb, 0x5e, 0x38, 0x31, 0x60, 0x4c, 0x66, 0xe8, 0x59, 0x38,
+ 0x22, 0x9d, 0xe1, 0xe4, 0xc5, 0xb5, 0x49, 0x8b, 0x6b, 0x7a, 0xf1, 0x49, 0x28, 0x8a, 0xa4, 0x83,
+ 0x2f, 0xe9, 0xba, 0xcc, 0xd5, 0x4b, 0xf8, 0x33, 0x3e, 0x06, 0x4b, 0xfc, 0x0e, 0xd2, 0x30, 0x5a,
+ 0x0b, 0x46, 0x3e, 0xd3, 0x75, 0xd3, 0x59, 0xa8, 0xa7, 0xc1, 0xca, 0x4a, 0xea, 0x50, 0xec, 0x70,
+ 0x80, 0xa0, 0x31, 0x4f, 0xe4, 0x00, 0xff, 0xd2, 0x02, 0xb4, 0x41, 0x99, 0xd8, 0xe5, 0xca, 0x7a,
+ 0x7c, 0x3d, 0x96, 0xa1, 0x3c, 0x70, 0x59, 0x67, 0x87, 0x86, 0x91, 0xce, 0x5f, 0xf4, 0xf8, 0xcb,
+ 0x48, 0x3c, 0xf1, 0x39, 0x58, 0x4a, 0x9d, 0x52, 0xf1, 0xb4, 0x0c, 0xe5, 0x8e, 0x82, 0xa9, 0x90,
+ 0x17, 0x8f, 0xf1, 0xef, 0xf2, 0x50, 0xd6, 0x69, 0x1d, 0x3a, 0x07, 0xd5, 0x6d, 0xcf, 0xef, 0xd1,
+ 0x70, 0x18, 0x7a, 0x4a, 0x04, 0xb6, 0x4c, 0xf3, 0x0c, 0x30, 0x31, 0x07, 0xe8, 0x71, 0x98, 0x1b,
+ 0x45, 0x34, 0x7c, 0xcb, 0x93, 0x37, 0xbd, 0xd2, 0xae, 0xef, 0x8d, 0x9d, 0xd2, 0x6b, 0x11, 0x0d,
+ 0xaf, 0xac, 0xf3, 0xe0, 0x33, 0x12, 0x4f, 0x44, 0xfe, 0x77, 0xd1, 0x4b, 0xca, 0x4c, 0x45, 0x02,
+ 0xd7, 0xfe, 0x06, 0x3f, 0x7e, 0xc6, 0xd5, 0x0d, 0xc3, 0x60, 0x40, 0xd9, 0x0e, 0x1d, 0x45, 0xad,
+ 0x4e, 0x30, 0x18, 0x04, 0x7e, 0x4b, 0x74, 0x02, 0x04, 0xd3, 0x3c, 0x82, 0xf2, 0xe5, 0xca, 0x72,
+ 0x6f, 0xc0, 0x1c, 0xdb, 0x09, 0x83, 0x51, 0x6f, 0x47, 0x04, 0x86, 0x42, 0xfb, 0xe2, 0xec, 0xf4,
+ 0x34, 0x05, 0xa2, 0x1f, 0xd0, 0xc3, 0x5c, 0x5a, 0xb4, 0x73, 0x3b, 0x1a, 0x0d, 0x64, 0xed, 0xd9,
+ 0x2e, 0xee, 0x8f, 0x1d, 0xeb, 0x71, 0x12, 0x83, 0xf1, 0x25, 0x98, 0x4f, 0xa5, 0xc2, 0xe8, 0x09,
+ 0xb0, 0x43, 0xba, 0xad, 0x5d, 0x01, 0x3a, 0x98, 0x31, 0xcb, 0xe8, 0xcf, 0x71, 0x88, 0xf8, 0xc5,
+ 0x3f, 0xcc, 0x83, 0x63, 0x54, 0xfd, 0x97, 0x83, 0xf0, 0x65, 0xca, 0x42, 0xaf, 0x73, 0xcd, 0x1d,
+ 0x50, 0x6d, 0x5e, 0x0e, 0x54, 0x07, 0x02, 0xf8, 0x96, 0x71, 0x8b, 0x60, 0x10, 0xe3, 0xa1, 0x87,
+ 0x00, 0xc4, 0xb5, 0x93, 0xf3, 0xf2, 0x42, 0x55, 0x04, 0x44, 0x4c, 0xaf, 0xa5, 0x84, 0xdd, 0x9a,
+ 0x51, 0x38, 0x4a, 0xc8, 0x57, 0xb2, 0x42, 0x9e, 0x99, 0x4e, 0x2c, 0x59, 0xf3, 0xba, 0x14, 0xd3,
+ 0xd7, 0x05, 0xff, 0xcd, 0x82, 0xe6, 0xa6, 0x3e, 0xf9, 0x21, 0xc5, 0xa1, 0xf9, 0xcd, 0xdf, 0x27,
+ 0x7e, 0x0b, 0x5f, 0x8c, 0x5f, 0xdc, 0x04, 0xd8, 0xf4, 0x7c, 0x7a, 0xd9, 0xeb, 0x33, 0x1a, 0x4e,
+ 0x28, 0x84, 0x7e, 0x5c, 0x48, 0xbc, 0x0a, 0xa1, 0xdb, 0x9a, 0xcf, 0x35, 0xc3, 0x95, 0xdf, 0x0f,
+ 0x36, 0xf2, 0xf7, 0x51, 0x6d, 0x85, 0x8c, 0x97, 0xf3, 0x61, 0x6e, 0x5b, 0xb0, 0x27, 0xa3, 0x72,
+ 0xaa, 0xc7, 0x94, 0xf0, 0xde, 0xfe, 0x96, 0xda, 0xfc, 0xe9, 0x7b, 0x24, 0x55, 0xa2, 0xf3, 0xd7,
+ 0x8a, 0x76, 0x7d, 0xe6, 0xbe, 0x63, 0xac, 0x27, 0x7a, 0x13, 0xe4, 0xaa, 0xbc, 0xad, 0x38, 0x31,
+ 0x6f, 0x7b, 0x4e, 0x6d, 0xf3, 0x45, 0x72, 0x37, 0xfc, 0x5c, 0xe2, 0x44, 0x85, 0x52, 0x94, 0x13,
+ 0x7d, 0xe4, 0x5e, 0x57, 0x5c, 0x5d, 0xec, 0x3f, 0x59, 0xb0, 0xb8, 0x41, 0x59, 0x3a, 0x8f, 0x7a,
+ 0x80, 0x54, 0x8a, 0x5f, 0x84, 0xa3, 0xc6, 0xf9, 0x15, 0xf7, 0x4f, 0x66, 0x92, 0xa7, 0x63, 0x09,
+ 0xff, 0x57, 0xfc, 0x2e, 0x7d, 0x47, 0xd5, 0xa4, 0xe9, 0xbc, 0xe9, 0x3a, 0x54, 0x8d, 0x49, 0x74,
+ 0x29, 0x93, 0x31, 0x2d, 0x65, 0x5a, 0xb1, 0x3c, 0xea, 0xb7, 0xeb, 0x8a, 0x27, 0x59, 0x79, 0xaa,
+ 0x7c, 0x38, 0xce, 0x2e, 0xb6, 0x00, 0x09, 0x75, 0x09, 0xb2, 0x66, 0x7c, 0x13, 0xd0, 0x97, 0xe2,
+ 0xd4, 0x29, 0x1e, 0xa3, 0x87, 0xc1, 0x0e, 0x83, 0xbb, 0x3a, 0x15, 0x9e, 0x4f, 0xb6, 0x24, 0xc1,
+ 0x5d, 0x22, 0xa6, 0xf0, 0xb3, 0x50, 0x20, 0xc1, 0x5d, 0xd4, 0x04, 0x08, 0x5d, 0xbf, 0x47, 0x6f,
+ 0xc6, 0x45, 0x58, 0x8d, 0x18, 0x90, 0x29, 0xb9, 0xc7, 0x1a, 0x1c, 0x35, 0x4f, 0x24, 0xd5, 0xbd,
+ 0x0a, 0x73, 0xaf, 0x8e, 0x4c, 0x71, 0xd5, 0x33, 0xe2, 0x92, 0xb5, 0xbe, 0x46, 0xe2, 0x36, 0x03,
+ 0x09, 0x1c, 0x9d, 0x82, 0x0a, 0x73, 0x6f, 0xf5, 0xe9, 0xb5, 0xc4, 0xcd, 0x25, 0x00, 0x3e, 0xcb,
+ 0xeb, 0xc7, 0x9b, 0x46, 0x12, 0x95, 0x00, 0xd0, 0x63, 0xb0, 0x98, 0x9c, 0xf9, 0x7a, 0x48, 0xb7,
+ 0xbd, 0x77, 0x84, 0x86, 0x6b, 0xe4, 0x00, 0x1c, 0x9d, 0x86, 0x23, 0x09, 0x6c, 0x4b, 0x24, 0x2b,
+ 0xb6, 0x40, 0xcd, 0x82, 0xb9, 0x6c, 0x04, 0xbb, 0x2f, 0xdc, 0x19, 0xb9, 0x7d, 0x71, 0xf9, 0x6a,
+ 0xc4, 0x80, 0xe0, 0x3f, 0x5b, 0x70, 0x54, 0xaa, 0x9a, 0xb9, 0xec, 0x81, 0xb4, 0xfa, 0x5f, 0x59,
+ 0x80, 0x4c, 0x0e, 0x94, 0x69, 0x7d, 0xd5, 0xec, 0x25, 0xf1, 0x6c, 0xa8, 0x2a, 0xca, 0x62, 0x09,
+ 0x4a, 0xda, 0x41, 0x18, 0x4a, 0x1d, 0xd9, 0x33, 0x13, 0xcd, 0x6f, 0x59, 0x77, 0x4b, 0x08, 0x51,
+ 0xff, 0xc8, 0x81, 0xe2, 0xad, 0x5d, 0x46, 0x23, 0x55, 0x35, 0x8b, 0x76, 0x81, 0x00, 0x10, 0xf9,
+ 0xc7, 0xf7, 0xa2, 0x3e, 0x13, 0x56, 0x63, 0x27, 0x7b, 0x29, 0x10, 0xd1, 0x0f, 0xf8, 0xb7, 0x79,
+ 0x98, 0xbf, 0x19, 0xf4, 0x47, 0x49, 0x60, 0x7c, 0x90, 0x02, 0x46, 0xaa, 0x94, 0x2f, 0xea, 0x52,
+ 0x1e, 0x81, 0x1d, 0x31, 0x3a, 0x14, 0x96, 0x55, 0x20, 0xe2, 0x19, 0x61, 0xa8, 0x31, 0x37, 0xec,
+ 0x51, 0x26, 0x0b, 0xa4, 0x46, 0x49, 0x64, 0xae, 0x29, 0x18, 0x5a, 0x81, 0xaa, 0xdb, 0xeb, 0x85,
+ 0xb4, 0xe7, 0x32, 0xda, 0xde, 0x6d, 0xcc, 0x89, 0xcd, 0x4c, 0x10, 0x7e, 0x03, 0x16, 0xb4, 0xb0,
+ 0x94, 0x4a, 0x9f, 0x80, 0xb9, 0xb7, 0x05, 0x64, 0x42, 0x6b, 0x4d, 0xa2, 0x2a, 0x37, 0xa6, 0xd1,
+ 0xd2, 0xaf, 0x10, 0xf4, 0x99, 0xf1, 0x55, 0x28, 0x49, 0x74, 0x74, 0xca, 0x2c, 0x73, 0x64, 0xa6,
+ 0xc7, 0xc7, 0xaa, 0x66, 0xc1, 0x50, 0x92, 0x84, 0x94, 0xe2, 0x85, 0x6d, 0x48, 0x08, 0x51, 0xff,
+ 0xf8, 0x5f, 0x16, 0x1c, 0x5b, 0xa7, 0x8c, 0x76, 0x18, 0xed, 0x5e, 0xf6, 0x68, 0xbf, 0xfb, 0xa5,
+ 0x56, 0xe0, 0x71, 0x1f, 0xad, 0x60, 0xf4, 0xd1, 0xb8, 0xdf, 0xe9, 0x7b, 0x3e, 0xdd, 0x34, 0x1a,
+ 0x31, 0x09, 0x80, 0x7b, 0x88, 0x6d, 0x7e, 0x70, 0x39, 0x2d, 0xdf, 0xd9, 0x18, 0x90, 0x58, 0xc3,
+ 0xa5, 0x44, 0xc3, 0xf8, 0x07, 0x16, 0x1c, 0xcf, 0x72, 0xad, 0x94, 0xd4, 0x82, 0x92, 0x58, 0x3c,
+ 0xa1, 0x85, 0x9b, 0x5a, 0x41, 0x14, 0x1a, 0xba, 0x90, 0xda, 0x5f, 0xbc, 0xeb, 0x69, 0x37, 0xf6,
+ 0xc7, 0x4e, 0x3d, 0x81, 0x1a, 0x5d, 0x02, 0x03, 0x17, 0xff, 0x81, 0xd7, 0xd2, 0x26, 0x4d, 0xa1,
+ 0x6f, 0x6e, 0x5f, 0xca, 0xf7, 0xca, 0x01, 0xfa, 0x1a, 0xd8, 0x6c, 0x77, 0xa8, 0x5c, 0x6e, 0xfb,
+ 0xd8, 0x67, 0x63, 0xe7, 0x68, 0x6a, 0xd9, 0x8d, 0xdd, 0x21, 0x25, 0x02, 0x85, 0x9b, 0x65, 0xc7,
+ 0x0d, 0xbb, 0x9e, 0xef, 0xf6, 0x3d, 0x26, 0xc5, 0x68, 0x13, 0x13, 0x24, 0x5e, 0x6f, 0xb9, 0x61,
+ 0x44, 0x43, 0xfd, 0xda, 0x4b, 0x8e, 0x44, 0x93, 0xe3, 0x36, 0x65, 0x9d, 0x1d, 0xe9, 0x64, 0x55,
+ 0x93, 0x43, 0x40, 0x52, 0x4d, 0x0e, 0x01, 0xc1, 0xbf, 0x30, 0xcc, 0x46, 0xde, 0x88, 0x43, 0x9a,
+ 0x8d, 0x75, 0x68, 0xb3, 0xb1, 0xee, 0x61, 0x36, 0xf8, 0x3b, 0x89, 0x8e, 0xf5, 0x11, 0x95, 0x8e,
+ 0x9f, 0x87, 0x85, 0x6e, 0x6a, 0x66, 0xba, 0xae, 0x65, 0x03, 0x37, 0x83, 0x8e, 0x37, 0x12, 0xc5,
+ 0x09, 0xc8, 0x14, 0xc5, 0x65, 0xb4, 0x91, 0x3f, 0xa0, 0x8d, 0xc7, 0x1e, 0x81, 0x4a, 0xfc, 0xfa,
+ 0x0d, 0x55, 0x61, 0xee, 0xf2, 0x2b, 0xe4, 0xf5, 0x4b, 0x64, 0x7d, 0x31, 0x87, 0x6a, 0x50, 0x6e,
+ 0x5f, 0x5a, 0x7b, 0x49, 0x8c, 0xac, 0xf3, 0xbf, 0x29, 0xe9, 0xc0, 0x1e, 0xa2, 0x6f, 0x42, 0x51,
+ 0x46, 0xeb, 0xe3, 0xc9, 0x71, 0xcd, 0x37, 0x53, 0xcb, 0x27, 0x0e, 0xc0, 0x25, 0xdf, 0x38, 0xf7,
+ 0x84, 0x85, 0xae, 0x41, 0x55, 0x00, 0x55, 0xef, 0xf7, 0x54, 0xb6, 0x05, 0x9b, 0xa2, 0xf4, 0xd0,
+ 0x94, 0x59, 0x83, 0xde, 0x45, 0x28, 0x4a, 0x11, 0x1c, 0xcf, 0x24, 0x55, 0x13, 0x4e, 0x93, 0xea,
+ 0x86, 0xe3, 0x1c, 0x7a, 0x06, 0xec, 0x1b, 0xae, 0xd7, 0x47, 0x46, 0x4e, 0x67, 0xb4, 0x6c, 0x97,
+ 0x8f, 0x67, 0xc1, 0xc6, 0xb6, 0xcf, 0xc5, 0x9d, 0xe7, 0x13, 0xd9, 0xf6, 0x97, 0x5e, 0xde, 0x38,
+ 0x38, 0x11, 0xef, 0xfc, 0x8a, 0xec, 0x8f, 0xea, 0x26, 0x0c, 0x7a, 0x28, 0xbd, 0x55, 0xa6, 0x67,
+ 0xb3, 0xdc, 0x9c, 0x36, 0x1d, 0x13, 0xdc, 0x84, 0xaa, 0xd1, 0x00, 0x31, 0xc5, 0x7a, 0xb0, 0x7b,
+ 0x63, 0x8a, 0x75, 0x42, 0xd7, 0x04, 0xe7, 0xd0, 0x06, 0x94, 0x79, 0x26, 0x2c, 0x5e, 0x94, 0x9c,
+ 0xcc, 0x26, 0xbc, 0x46, 0xa2, 0xb3, 0x7c, 0x6a, 0xf2, 0x64, 0x4c, 0xe8, 0xdb, 0x50, 0xd9, 0xa0,
+ 0x4c, 0x45, 0x8b, 0x13, 0xd9, 0x70, 0x33, 0x41, 0x52, 0xe9, 0x90, 0x85, 0x73, 0xe8, 0x0d, 0x91,
+ 0x94, 0xa7, 0x9d, 0x25, 0x72, 0xa6, 0x38, 0xc5, 0xf8, 0x5c, 0x2b, 0xd3, 0x11, 0x62, 0xca, 0xaf,
+ 0xa7, 0x28, 0xab, 0xb8, 0xea, 0x4c, 0xb9, 0x82, 0x31, 0x65, 0xe7, 0x1e, 0x9f, 0x51, 0xe0, 0xdc,
+ 0xf9, 0x37, 0xf5, 0x97, 0x04, 0xeb, 0x2e, 0x73, 0xd1, 0x2b, 0xb0, 0x20, 0x64, 0x19, 0x7f, 0x6a,
+ 0x90, 0xb2, 0xf9, 0x03, 0xdf, 0x35, 0xa4, 0x6c, 0xfe, 0xe0, 0xf7, 0x0d, 0x38, 0xd7, 0x7e, 0xf3,
+ 0x83, 0x8f, 0x9b, 0xb9, 0x0f, 0x3f, 0x6e, 0xe6, 0x3e, 0xfd, 0xb8, 0x69, 0x7d, 0x7f, 0xaf, 0x69,
+ 0xfd, 0x7a, 0xaf, 0x69, 0xbd, 0xbf, 0xd7, 0xb4, 0x3e, 0xd8, 0x6b, 0x5a, 0xff, 0xd8, 0x6b, 0x5a,
+ 0xff, 0xdc, 0x6b, 0xe6, 0x3e, 0xdd, 0x6b, 0x5a, 0xef, 0x7e, 0xd2, 0xcc, 0x7d, 0xf0, 0x49, 0x33,
+ 0xf7, 0xe1, 0x27, 0xcd, 0xdc, 0x77, 0x1f, 0xbd, 0x77, 0x01, 0x2a, 0x1d, 0x5d, 0x49, 0xfc, 0x3d,
+ 0xf9, 0x9f, 0x00, 0x00, 0x00, 0xff, 0xff, 0x82, 0x5c, 0x85, 0xa1, 0xef, 0x22, 0x00, 0x00,
}
func (x Direction) String() string {
@@ -4956,6 +4964,9 @@ func (this *DetectedField) Equal(that interface{}) bool {
if this.Cardinality != that1.Cardinality {
return false
}
+ if this.Parser != that1.Parser {
+ return false
+ }
if !bytes.Equal(this.Sketch, that1.Sketch) {
return false
}
@@ -5720,11 +5731,12 @@ func (this *DetectedField) GoString() string {
if this == nil {
return "nil"
}
- s := make([]string, 0, 8)
+ s := make([]string, 0, 9)
s = append(s, "&logproto.DetectedField{")
s = append(s, "Label: "+fmt.Sprintf("%#v", this.Label)+",\n")
s = append(s, "Type: "+fmt.Sprintf("%#v", this.Type)+",\n")
s = append(s, "Cardinality: "+fmt.Sprintf("%#v", this.Cardinality)+",\n")
+ s = append(s, "Parser: "+fmt.Sprintf("%#v", this.Parser)+",\n")
s = append(s, "Sketch: "+fmt.Sprintf("%#v", this.Sketch)+",\n")
s = append(s, "}")
return strings.Join(s, "")
@@ -8677,6 +8689,13 @@ func (m *DetectedField) MarshalToSizedBuffer(dAtA []byte) (int, error) {
copy(dAtA[i:], m.Sketch)
i = encodeVarintLogproto(dAtA, i, uint64(len(m.Sketch)))
i--
+ dAtA[i] = 0x2a
+ }
+ if len(m.Parser) > 0 {
+ i -= len(m.Parser)
+ copy(dAtA[i:], m.Parser)
+ i = encodeVarintLogproto(dAtA, i, uint64(len(m.Parser)))
+ i--
dAtA[i] = 0x22
}
if m.Cardinality != 0 {
@@ -9849,6 +9868,10 @@ func (m *DetectedField) Size() (n int) {
if m.Cardinality != 0 {
n += 1 + sovLogproto(uint64(m.Cardinality))
}
+ l = len(m.Parser)
+ if l > 0 {
+ n += 1 + l + sovLogproto(uint64(l))
+ }
l = len(m.Sketch)
if l > 0 {
n += 1 + l + sovLogproto(uint64(l))
@@ -10599,6 +10622,7 @@ func (this *DetectedField) String() string {
`Label:` + fmt.Sprintf("%v", this.Label) + `,`,
`Type:` + fmt.Sprintf("%v", this.Type) + `,`,
`Cardinality:` + fmt.Sprintf("%v", this.Cardinality) + `,`,
+ `Parser:` + fmt.Sprintf("%v", this.Parser) + `,`,
`Sketch:` + fmt.Sprintf("%v", this.Sketch) + `,`,
`}`,
}, "")
@@ -17470,6 +17494,38 @@ func (m *DetectedField) Unmarshal(dAtA []byte) error {
}
}
case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Parser", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowLogproto
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthLogproto
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthLogproto
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Parser = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Sketch", wireType)
}
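A note on the constants in the generated marshal/unmarshal hunks above: a protobuf tag byte packs (field_number << 3) | wire_type, so the new `parser` string at field number 4 carries tag 0x22, while `sketch`, renumbered to field 5, carries 0x2a — wire type 2 (length-delimited) in both cases. A quick standalone check of that arithmetic (a sketch, standard library only):

```go
package main

import "fmt"

// tagByte computes the protobuf tag byte: (field_number << 3) | wire_type.
func tagByte(fieldNumber, wireType int) byte {
	return byte(fieldNumber<<3 | wireType)
}

func main() {
	const lengthDelimited = 2 // wire type for strings and bytes
	fmt.Printf("parser (field 4): 0x%x\n", tagByte(4, lengthDelimited)) // 0x22
	fmt.Printf("sketch (field 5): 0x%x\n", tagByte(5, lengthDelimited)) // 0x2a
}
```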
diff --git a/pkg/logproto/logproto.proto b/pkg/logproto/logproto.proto
index a29e38df01af9..f6f8c12a8fdec 100644
--- a/pkg/logproto/logproto.proto
+++ b/pkg/logproto/logproto.proto
@@ -471,7 +471,8 @@ message DetectedField {
string label = 1;
string type = 2 [(gogoproto.casttype) = "DetectedFieldType"];
uint64 cardinality = 3;
- bytes sketch = 4 [(gogoproto.jsontag) = "sketch,omitempty"];
+ string parser = 4;
+ bytes sketch = 5 [(gogoproto.jsontag) = "sketch,omitempty"];
}
message DetectedLabelsRequest {
diff --git a/pkg/querier/querier.go b/pkg/querier/querier.go
index bee850fd82d69..73ea98d05fef5 100644
--- a/pkg/querier/querier.go
+++ b/pkg/querier/querier.go
@@ -1108,6 +1108,7 @@ func (q *SingleTenantQuerier) DetectedFields(ctx context.Context, req *logproto.
Type: v.fieldType,
Cardinality: v.Estimate(),
Sketch: sketch,
+ Parser: v.parser,
}
fieldCount++
@@ -1124,13 +1125,19 @@ type parsedFields struct {
sketch *hyperloglog.Sketch
isTypeDetected bool
fieldType logproto.DetectedFieldType
+ parser string
}
-func newParsedFields() *parsedFields {
+func newParsedFields(parser *string) *parsedFields {
+ p := ""
+ if parser != nil {
+ p = *parser
+ }
return &parsedFields{
sketch: hyperloglog.New(),
isTypeDetected: false,
fieldType: logproto.DetectedFieldString,
+ parser: p,
}
}
@@ -1185,11 +1192,12 @@ func parseDetectedFields(ctx context.Context, limit uint32, streams logqlmodel.S
"msg", fmt.Sprintf("looking for detected fields in stream %d with %d lines", stream.Hash, len(stream.Entries)))
for _, entry := range stream.Entries {
- detected := parseLine(entry.Line)
+ detected, parser := parseLine(entry.Line)
for k, vals := range detected {
df, ok := detectedFields[k]
if !ok && fieldCount < limit {
- df = newParsedFields()
+
+ df = newParsedFields(parser)
detectedFields[k] = df
fieldCount++
}
@@ -1217,17 +1225,19 @@ func parseDetectedFields(ctx context.Context, limit uint32, streams logqlmodel.S
return detectedFields
}
-func parseLine(line string) map[string][]string {
+func parseLine(line string) (map[string][]string, *string) {
+ parser := "logfmt"
logFmtParser := logql_log.NewLogfmtParser(true, false)
- jsonParser := logql_log.NewJSONParser()
lbls := logql_log.NewBaseLabelsBuilder().ForLabels(labels.EmptyLabels(), 0)
_, logfmtSuccess := logFmtParser.Process(0, []byte(line), lbls)
if !logfmtSuccess || lbls.HasErr() {
+ parser = "json"
+ jsonParser := logql_log.NewJSONParser()
lbls.Reset()
_, jsonSuccess := jsonParser.Process(0, []byte(line), lbls)
if !jsonSuccess || lbls.HasErr() {
- return map[string][]string{}
+ return map[string][]string{}, nil
}
}
@@ -1249,7 +1259,7 @@ func parseLine(line string) map[string][]string {
result[lbl] = vals
}
- return result
+ return result, &parser
}
// streamsForFieldDetection reads the streams from the iterator and returns them sorted.
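The parseLine change above now reports which parser extracted the fields: logfmt is tried first, JSON only on failure, and nil is returned when neither matches. Below is a simplified, dependency-free sketch of that detection order; the naive logfmt check is an illustrative assumption, while the real code uses the logql_log parsers shown in the diff:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// detectParser mimics the ordering in parseLine: try logfmt first,
// fall back to JSON, and report which parser (if any) succeeded.
func detectParser(line string) (map[string]string, string, bool) {
	// Naive logfmt: every whitespace-separated token must be key=value.
	fields := map[string]string{}
	ok := true
	for _, tok := range strings.Fields(line) {
		k, v, found := strings.Cut(tok, "=")
		if !found || k == "" {
			ok = false
			break
		}
		fields[k] = v
	}
	if ok && len(fields) > 0 {
		return fields, "logfmt", true
	}

	// Fallback: a flat JSON object with string values.
	jsonFields := map[string]string{}
	if err := json.Unmarshal([]byte(line), &jsonFields); err == nil {
		return jsonFields, "json", true
	}
	return nil, "", false
}

func main() {
	for _, line := range []string{
		`level=info msg=done duration=21ms`,
		`{"level":"info","msg":"done"}`,
		`plain unstructured text`,
	} {
		fields, parser, ok := detectParser(line)
		fmt.Println(ok, parser, fields)
	}
}
```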
diff --git a/pkg/storage/detected/fields.go b/pkg/storage/detected/fields.go
index 7ced040b37d2d..6310448216653 100644
--- a/pkg/storage/detected/fields.go
+++ b/pkg/storage/detected/fields.go
@@ -9,6 +9,7 @@ import (
type UnmarshaledDetectedField struct {
Label string
Type logproto.DetectedFieldType
+ Parser string
Sketch *hyperloglog.Sketch
}
@@ -22,6 +23,7 @@ func UnmarshalDetectedField(f *logproto.DetectedField) (*UnmarshaledDetectedFiel
return &UnmarshaledDetectedField{
Label: f.Label,
Type: f.Type,
+ Parser: f.Parser,
Sketch: sketch,
}, nil
}
@@ -77,6 +79,7 @@ func MergeFields(
Label: field.Label,
Type: field.Type,
Cardinality: field.Sketch.Estimate(),
+ Parser: field.Parser,
Sketch: nil,
}
result = append(result, detectedField)
diff --git a/pkg/storage/detected/fields_test.go b/pkg/storage/detected/fields_test.go
index 4edd7026baf68..0e6ad800738ad 100644
--- a/pkg/storage/detected/fields_test.go
+++ b/pkg/storage/detected/fields_test.go
@@ -34,6 +34,7 @@ func Test_MergeFields(t *testing.T) {
Type: logproto.DetectedFieldString,
Cardinality: 1,
Sketch: marshalledFooSketch,
+ Parser: "logfmt",
},
{
Label: "bar",
@@ -65,6 +66,7 @@ func Test_MergeFields(t *testing.T) {
assert.Equal(t, logproto.DetectedFieldString, foo.Type)
assert.Equal(t, uint64(3), foo.Cardinality)
+ assert.Equal(t, "logfmt", foo.Parser)
})
t.Run("returns up to limit number of fields", func(t *testing.T) {
type: feat
masked_commit_message: add parser to response (#12872)

hash: 8e6647f92650db41f39d69fba3efc4bd3c9c5a04
date: 2024-11-15 21:11:36
author: renovate[bot]
commit_message: fix(deps): update module cloud.google.com/go/storage to v1.47.0 (#14940)
is_merge: false
git_diff:
diff --git a/go.mod b/go.mod
index f550efaac1f82..3980e4790c122 100644
--- a/go.mod
+++ b/go.mod
@@ -7,7 +7,7 @@ toolchain go1.23.1
require (
cloud.google.com/go/bigtable v1.33.0
cloud.google.com/go/pubsub v1.45.1
- cloud.google.com/go/storage v1.46.0
+ cloud.google.com/go/storage v1.47.0
github.com/Azure/azure-pipeline-go v0.2.3
github.com/Azure/azure-storage-blob-go v0.15.0
github.com/Azure/go-autorest/autorest/adal v0.9.24
@@ -158,7 +158,7 @@ require (
require (
cel.dev/expr v0.16.1 // indirect
- cloud.google.com/go/auth v0.10.0 // indirect
+ cloud.google.com/go/auth v0.10.2 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.5 // indirect
cloud.google.com/go/monitoring v1.21.1 // indirect
dario.cat/mergo v1.0.1 // indirect
diff --git a/go.sum b/go.sum
index 4d7297b503eee..a1c5a845e3bec 100644
--- a/go.sum
+++ b/go.sum
@@ -117,8 +117,8 @@ cloud.google.com/go/assuredworkloads v1.8.0/go.mod h1:AsX2cqyNCOvEQC8RMPnoc0yEar
cloud.google.com/go/assuredworkloads v1.9.0/go.mod h1:kFuI1P78bplYtT77Tb1hi0FMxM0vVpRC7VVoJC3ZoT0=
cloud.google.com/go/assuredworkloads v1.10.0/go.mod h1:kwdUQuXcedVdsIaKgKTp9t0UJkE5+PAVNhdQm4ZVq2E=
cloud.google.com/go/assuredworkloads v1.11.1/go.mod h1:+F04I52Pgn5nmPG36CWFtxmav6+7Q+c5QyJoL18Lry0=
-cloud.google.com/go/auth v0.10.0 h1:tWlkvFAh+wwTOzXIjrwM64karR1iTBZ/GRr0S/DULYo=
-cloud.google.com/go/auth v0.10.0/go.mod h1:xxA5AqpDrvS+Gkmo9RqrGGRh6WSNKKOXhY3zNOr38tI=
+cloud.google.com/go/auth v0.10.2 h1:oKF7rgBfSHdp/kuhXtqU/tNDr0mZqhYbEh+6SiqzkKo=
+cloud.google.com/go/auth v0.10.2/go.mod h1:xxA5AqpDrvS+Gkmo9RqrGGRh6WSNKKOXhY3zNOr38tI=
cloud.google.com/go/auth/oauth2adapt v0.2.5 h1:2p29+dePqsCHPP1bqDJcKj4qxRyYCcbzKpFyKGt3MTk=
cloud.google.com/go/auth/oauth2adapt v0.2.5/go.mod h1:AlmsELtlEBnaNTL7jCj8VQFLy6mbZv0s4Q7NGBeQ5E8=
cloud.google.com/go/automl v1.5.0/go.mod h1:34EjfoFGMZ5sgJ9EoLsRtdPSNZLcfflJR39VbVNS2M0=
@@ -661,8 +661,8 @@ cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi
cloud.google.com/go/storage v1.28.1/go.mod h1:Qnisd4CqDdo6BGs2AD5LLnEsmSQ80wQ5ogcBBKhU86Y=
cloud.google.com/go/storage v1.29.0/go.mod h1:4puEjyTKnku6gfKoTfNOU/W+a9JyuVNxjpS5GBrB8h4=
cloud.google.com/go/storage v1.30.1/go.mod h1:NfxhC0UJE1aXSx7CIIbCf7y9HKT7BiccwkR7+P7gN8E=
-cloud.google.com/go/storage v1.46.0 h1:OTXISBpFd8KaA2ClT3K3oRk8UGOcTHtrZ1bW88xKiic=
-cloud.google.com/go/storage v1.46.0/go.mod h1:lM+gMAW91EfXIeMTBmixRsKL/XCxysytoAgduVikjMk=
+cloud.google.com/go/storage v1.47.0 h1:ajqgt30fnOMmLfWfu1PWcb+V9Dxz6n+9WKjdNg5R4HM=
+cloud.google.com/go/storage v1.47.0/go.mod h1:Ks0vP374w0PW6jOUameJbapbQKXqkjGd/OJRp2fb9IQ=
cloud.google.com/go/storagetransfer v1.5.0/go.mod h1:dxNzUopWy7RQevYFHewchb29POFv3/AaBgnhqzqiK0w=
cloud.google.com/go/storagetransfer v1.6.0/go.mod h1:y77xm4CQV/ZhFZH75PLEXY0ROiS7Gh6pSKrM8dJyg6I=
cloud.google.com/go/storagetransfer v1.7.0/go.mod h1:8Giuj1QNb1kfLAiWM1bN6dHzfdlDAVC9rv9abHot2W4=
@@ -2751,6 +2751,8 @@ go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0 h1:3Q/xZUyC1BBkualc9RO
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.28.0/go.mod h1:s75jGIWA9OfCMzF0xr+ZgfrB5FEbbV7UuYo32ahUiFI=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0 h1:j9+03ymgYhPKmeXGk5Zu+cIZOlVzd9Zv7QIiyItjFBU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.28.0/go.mod h1:Y5+XiUG4Emn1hTfciPzGPJaSI+RpDts6BnCIir0SLqk=
+go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.29.0 h1:WDdP9acbMYjbKIyJUhTvtzj601sVJOqgWdUxSdR/Ysc=
+go.opentelemetry.io/otel/exporters/stdout/stdoutmetric v1.29.0/go.mod h1:BLbf7zbNIONBLPwvFnwNHGj4zge8uTCM/UPIVW1Mq2I=
go.opentelemetry.io/otel/metric v1.16.0/go.mod h1:QE47cpOmkwipPiefDwo2wDzwJrlfxxNYodqc4xnGCo4=
go.opentelemetry.io/otel/metric v1.17.0/go.mod h1:h4skoxdZI17AxwITdmdZjjYJQH5nzijUUjm+wtPph5o=
go.opentelemetry.io/otel/metric v1.29.0 h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=
diff --git a/vendor/cloud.google.com/go/auth/CHANGES.md b/vendor/cloud.google.com/go/auth/CHANGES.md
index ecdc47daba4f9..a754df909f6d3 100644
--- a/vendor/cloud.google.com/go/auth/CHANGES.md
+++ b/vendor/cloud.google.com/go/auth/CHANGES.md
@@ -1,5 +1,20 @@
# Changelog
+## [0.10.2](https://github.com/googleapis/google-cloud-go/compare/auth/v0.10.1...auth/v0.10.2) (2024-11-12)
+
+
+### Bug Fixes
+
+* **auth:** Restore use of grpc.Dial ([#11118](https://github.com/googleapis/google-cloud-go/issues/11118)) ([2456b94](https://github.com/googleapis/google-cloud-go/commit/2456b943b7b8aaabd4d8bfb7572c0f477ae0db45)), refs [#7556](https://github.com/googleapis/google-cloud-go/issues/7556)
+
+## [0.10.1](https://github.com/googleapis/google-cloud-go/compare/auth/v0.10.0...auth/v0.10.1) (2024-11-06)
+
+
+### Bug Fixes
+
+* **auth:** Restore Application Default Credentials support to idtoken ([#11083](https://github.com/googleapis/google-cloud-go/issues/11083)) ([8771f2e](https://github.com/googleapis/google-cloud-go/commit/8771f2ea9807ab822083808e0678392edff3b4f2))
+* **auth:** Skip impersonate universe domain check if empty ([#11086](https://github.com/googleapis/google-cloud-go/issues/11086)) ([87159c1](https://github.com/googleapis/google-cloud-go/commit/87159c1059d4a18d1367ce62746a838a94964ab6))
+
## [0.10.0](https://github.com/googleapis/google-cloud-go/compare/auth/v0.9.9...auth/v0.10.0) (2024-10-30)
diff --git a/vendor/cloud.google.com/go/auth/grpctransport/grpctransport.go b/vendor/cloud.google.com/go/auth/grpctransport/grpctransport.go
index 42d4cbe3062ed..38212ed0f82a0 100644
--- a/vendor/cloud.google.com/go/auth/grpctransport/grpctransport.go
+++ b/vendor/cloud.google.com/go/auth/grpctransport/grpctransport.go
@@ -322,7 +322,7 @@ func dial(ctx context.Context, secure bool, opts *Options) (*grpc.ClientConn, er
grpcOpts = addOpenTelemetryStatsHandler(grpcOpts, opts)
grpcOpts = append(grpcOpts, opts.GRPCDialOpts...)
- return grpc.NewClient(endpoint, grpcOpts...)
+ return grpc.Dial(endpoint, grpcOpts...)
}
// grpcKeyProvider satisfies https://pkg.go.dev/google.golang.org/grpc/credentials#PerRPCCredentials.
diff --git a/vendor/cloud.google.com/go/auth/httptransport/httptransport.go b/vendor/cloud.google.com/go/auth/httptransport/httptransport.go
index 38e8c99399bb3..cbe5a7a40a77a 100644
--- a/vendor/cloud.google.com/go/auth/httptransport/httptransport.go
+++ b/vendor/cloud.google.com/go/auth/httptransport/httptransport.go
@@ -147,8 +147,13 @@ type InternalOptions struct {
// service.
DefaultScopes []string
// SkipValidation bypasses validation on Options. It should only be used
- // internally for clients that needs more control over their transport.
+ // internally for clients that need more control over their transport.
SkipValidation bool
+ // SkipUniverseDomainValidation skips the verification that the universe
+ // domain configured for the client matches the universe domain configured
+ // for the credentials. It should only be used internally for clients that
+ // need more control over their transport. The default is false.
+ SkipUniverseDomainValidation bool
}
// AddAuthorizationMiddleware adds a middleware to the provided client's
diff --git a/vendor/cloud.google.com/go/auth/httptransport/transport.go b/vendor/cloud.google.com/go/auth/httptransport/transport.go
index 63498ee792be9..1d139b9dc49eb 100644
--- a/vendor/cloud.google.com/go/auth/httptransport/transport.go
+++ b/vendor/cloud.google.com/go/auth/httptransport/transport.go
@@ -86,11 +86,16 @@ func newTransport(base http.RoundTripper, opts *Options) (http.RoundTripper, err
headers.Set(quotaProjectHeaderKey, qp)
}
}
+ var skipUD bool
+ if iOpts := opts.InternalOptions; iOpts != nil {
+ skipUD = iOpts.SkipUniverseDomainValidation
+ }
creds.TokenProvider = auth.NewCachedTokenProvider(creds.TokenProvider, nil)
trans = &authTransport{
- base: trans,
- creds: creds,
- clientUniverseDomain: opts.UniverseDomain,
+ base: trans,
+ creds: creds,
+ clientUniverseDomain: opts.UniverseDomain,
+ skipUniverseDomainValidation: skipUD,
}
}
return trans, nil
@@ -185,9 +190,10 @@ func addOCTransport(trans http.RoundTripper, opts *Options) http.RoundTripper {
}
type authTransport struct {
- creds *auth.Credentials
- base http.RoundTripper
- clientUniverseDomain string
+ creds *auth.Credentials
+ base http.RoundTripper
+ clientUniverseDomain string
+ skipUniverseDomainValidation bool
}
// getClientUniverseDomain returns the default service domain for a given Cloud
@@ -226,7 +232,7 @@ func (t *authTransport) RoundTrip(req *http.Request) (*http.Response, error) {
if err != nil {
return nil, err
}
- if token.MetadataString("auth.google.tokenSource") != "compute-metadata" {
+ if !t.skipUniverseDomainValidation && token.MetadataString("auth.google.tokenSource") != "compute-metadata" {
credentialsUniverseDomain, err := t.creds.UniverseDomain(req.Context())
if err != nil {
return nil, err
diff --git a/vendor/cloud.google.com/go/storage/CHANGES.md b/vendor/cloud.google.com/go/storage/CHANGES.md
index 7b1b063367345..b06751bc81790 100644
--- a/vendor/cloud.google.com/go/storage/CHANGES.md
+++ b/vendor/cloud.google.com/go/storage/CHANGES.md
@@ -1,6 +1,18 @@
# Changes
+## [1.47.0](https://github.com/googleapis/google-cloud-go/compare/storage/v1.46.0...storage/v1.47.0) (2024-11-14)
+
+
+### Features
+
+* **storage:** Introduce dp detector based on grpc metrics ([#11100](https://github.com/googleapis/google-cloud-go/issues/11100)) ([60c2323](https://github.com/googleapis/google-cloud-go/commit/60c2323102b623e042fc508e2b1bb830a03f9577))
+
+
+### Bug Fixes
+
+* **storage:** Bump auth dep ([#11135](https://github.com/googleapis/google-cloud-go/issues/11135)) ([9620a51](https://github.com/googleapis/google-cloud-go/commit/9620a51b2c6904d8d93e124494bc297fb98553d2))
+
## [1.46.0](https://github.com/googleapis/google-cloud-go/compare/storage/v1.45.0...storage/v1.46.0) (2024-10-31)
### Features
diff --git a/vendor/cloud.google.com/go/storage/http_client.go b/vendor/cloud.google.com/go/storage/http_client.go
index 6baf905473569..221078f3e2622 100644
--- a/vendor/cloud.google.com/go/storage/http_client.go
+++ b/vendor/cloud.google.com/go/storage/http_client.go
@@ -901,11 +901,12 @@ func (c *httpStorageClient) newRangeReaderXML(ctx context.Context, params *newRa
done <- true
}()
- // Wait until timeout or request is successful.
- timer := time.After(c.dynamicReadReqStallTimeout.getValue(params.bucket))
+ // Wait until stall timeout or request is successful.
+ stallTimeout := c.dynamicReadReqStallTimeout.getValue(params.bucket)
+ timer := time.After(stallTimeout)
select {
case <-timer:
- log.Printf("stalled read-req cancelled after %fs", c.dynamicReadReqStallTimeout.getValue(params.bucket).Seconds())
+ log.Printf("stalled read-req (%p) cancelled after %fs", req, stallTimeout.Seconds())
cancel()
err = context.DeadlineExceeded
if res != nil && res.Body != nil {
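The http_client.go hunk above races a stall timer against the read request, cancelling the request when the timer wins; the fix captures the dynamic timeout once so the logged value matches the timer that actually fired. The timer/select pattern in isolation (a generic sketch, not the library's code):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// doWithStallTimeout runs work and cancels it if it has not finished
// within stallTimeout, mirroring the timer/select race in the diff.
func doWithStallTimeout(stallTimeout time.Duration, work func(ctx context.Context) error) error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	done := make(chan error, 1)
	go func() { done <- work(ctx) }()

	select {
	case <-time.After(stallTimeout):
		cancel()
		return context.DeadlineExceeded
	case err := <-done:
		return err
	}
}

func main() {
	err := doWithStallTimeout(50*time.Millisecond, func(ctx context.Context) error {
		select {
		case <-time.After(200 * time.Millisecond): // simulated slow request
			return nil
		case <-ctx.Done():
			return errors.New("cancelled")
		}
	})
	fmt.Println(err) // context deadline exceeded
}
```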
diff --git a/vendor/cloud.google.com/go/storage/internal/apiv2/auxiliary.go b/vendor/cloud.google.com/go/storage/internal/apiv2/auxiliary.go
index fc3e783bdc49e..f6af3452d3126 100644
--- a/vendor/cloud.google.com/go/storage/internal/apiv2/auxiliary.go
+++ b/vendor/cloud.google.com/go/storage/internal/apiv2/auxiliary.go
@@ -41,7 +41,7 @@ type BucketIterator struct {
InternalFetch func(pageSize int, pageToken string) (results []*storagepb.Bucket, nextPageToken string, err error)
}
-// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
+// PageInfo supports pagination. See the [google.golang.org/api/iterator] package for details.
func (it *BucketIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
@@ -88,7 +88,7 @@ type ObjectIterator struct {
InternalFetch func(pageSize int, pageToken string) (results []*storagepb.Object, nextPageToken string, err error)
}
-// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
+// PageInfo supports pagination. See the [google.golang.org/api/iterator] package for details.
func (it *ObjectIterator) PageInfo() *iterator.PageInfo {
return it.pageInfo
}
diff --git a/vendor/cloud.google.com/go/storage/internal/apiv2/doc.go b/vendor/cloud.google.com/go/storage/internal/apiv2/doc.go
index 869f3b1fbcd73..4640d5bdfd2a1 100644
--- a/vendor/cloud.google.com/go/storage/internal/apiv2/doc.go
+++ b/vendor/cloud.google.com/go/storage/internal/apiv2/doc.go
@@ -43,6 +43,7 @@
//
// To get started with this package, create a client.
//
+// // go get cloud.google.com/go/storage/internal/apiv2@latest
// ctx := context.Background()
// // This snippet has been automatically generated and should be regarded as a code template only.
// // It will require modifications to work:
@@ -61,19 +62,8 @@
//
// # Using the Client
//
-// The following is an example of making an API call with the newly created client.
+// The following is an example of making an API call with the newly created client, mentioned above.
//
-// ctx := context.Background()
-// // This snippet has been automatically generated and should be regarded as a code template only.
-// // It will require modifications to work:
-// // - It may require correct/in-range values for request initialization.
-// // - It may require specifying regional endpoints when creating the service client as shown in:
-// // https://pkg.go.dev/cloud.google.com/go#hdr-Client_Options
-// c, err := storage.NewClient(ctx)
-// if err != nil {
-// // TODO: Handle error.
-// }
-// defer c.Close()
// stream, err := c.BidiWriteObject(ctx)
// if err != nil {
// // TODO: Handle error.
@@ -115,34 +105,3 @@
// [Debugging Client Libraries]: https://pkg.go.dev/cloud.google.com/go#hdr-Debugging
// [Inspecting errors]: https://pkg.go.dev/cloud.google.com/go#hdr-Inspecting_errors
package storage // import "cloud.google.com/go/storage/internal/apiv2"
-
-import (
- "context"
-
- "google.golang.org/api/option"
-)
-
-// For more information on implementing a client constructor hook, see
-// https://github.com/googleapis/google-cloud-go/wiki/Customizing-constructors.
-type clientHookParams struct{}
-type clientHook func(context.Context, clientHookParams) ([]option.ClientOption, error)
-
-var versionClient string
-
-func getVersionClient() string {
- if versionClient == "" {
- return "UNKNOWN"
- }
- return versionClient
-}
-
-// DefaultAuthScopes reports the default set of authentication scopes to use with this package.
-func DefaultAuthScopes() []string {
- return []string{
- "https://www.googleapis.com/auth/cloud-platform",
- "https://www.googleapis.com/auth/cloud-platform.read-only",
- "https://www.googleapis.com/auth/devstorage.full_control",
- "https://www.googleapis.com/auth/devstorage.read_only",
- "https://www.googleapis.com/auth/devstorage.read_write",
- }
-}
diff --git a/vendor/cloud.google.com/go/storage/internal/apiv2/helpers.go b/vendor/cloud.google.com/go/storage/internal/apiv2/helpers.go
new file mode 100644
index 0000000000000..762882d1d9e7a
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/internal/apiv2/helpers.go
@@ -0,0 +1,48 @@
+// Copyright 2024 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// https://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Code generated by protoc-gen-go_gapic. DO NOT EDIT.
+
+package storage
+
+import (
+ "context"
+
+ "google.golang.org/api/option"
+)
+
+// For more information on implementing a client constructor hook, see
+// https://github.com/googleapis/google-cloud-go/wiki/Customizing-constructors.
+type clientHookParams struct{}
+type clientHook func(context.Context, clientHookParams) ([]option.ClientOption, error)
+
+var versionClient string
+
+func getVersionClient() string {
+ if versionClient == "" {
+ return "UNKNOWN"
+ }
+ return versionClient
+}
+
+// DefaultAuthScopes reports the default set of authentication scopes to use with this package.
+func DefaultAuthScopes() []string {
+ return []string{
+ "https://www.googleapis.com/auth/cloud-platform",
+ "https://www.googleapis.com/auth/cloud-platform.read-only",
+ "https://www.googleapis.com/auth/devstorage.full_control",
+ "https://www.googleapis.com/auth/devstorage.read_only",
+ "https://www.googleapis.com/auth/devstorage.read_write",
+ }
+}
diff --git a/vendor/cloud.google.com/go/storage/internal/version.go b/vendor/cloud.google.com/go/storage/internal/version.go
index fc6b11e22267c..f2754293858ec 100644
--- a/vendor/cloud.google.com/go/storage/internal/version.go
+++ b/vendor/cloud.google.com/go/storage/internal/version.go
@@ -15,4 +15,4 @@
package internal
// Version is the current tagged release of the library.
-const Version = "1.46.0"
+const Version = "1.47.0"
diff --git a/vendor/cloud.google.com/go/storage/storage.go b/vendor/cloud.google.com/go/storage/storage.go
index 0c11eb82adc22..cc91f8fe9fa4e 100644
--- a/vendor/cloud.google.com/go/storage/storage.go
+++ b/vendor/cloud.google.com/go/storage/storage.go
@@ -43,6 +43,9 @@ import (
"cloud.google.com/go/storage/internal"
"cloud.google.com/go/storage/internal/apiv2/storagepb"
"github.com/googleapis/gax-go/v2"
+ "go.opentelemetry.io/otel/attribute"
+ "go.opentelemetry.io/otel/sdk/metric"
+ "go.opentelemetry.io/otel/sdk/metric/metricdata"
"golang.org/x/oauth2/google"
"google.golang.org/api/googleapi"
"google.golang.org/api/option"
@@ -50,6 +53,8 @@ import (
raw "google.golang.org/api/storage/v1"
"google.golang.org/api/transport"
htransport "google.golang.org/api/transport/http"
+ "google.golang.org/grpc/experimental/stats"
+ "google.golang.org/grpc/stats/opentelemetry"
"google.golang.org/protobuf/proto"
"google.golang.org/protobuf/reflect/protoreflect"
"google.golang.org/protobuf/types/known/fieldmaskpb"
@@ -233,6 +238,60 @@ func NewGRPCClient(ctx context.Context, opts ...option.ClientOption) (*Client, e
return &Client{tc: tc}, nil
}
+// CheckDirectConnectivitySupported checks if gRPC direct connectivity
+// is available for a specific bucket from the environment where the client
+// is running. A `nil` error represents Direct Connectivity was detected.
+// Direct connectivity is expected to be available when running from inside
+// GCP and connecting to a bucket in the same region.
+//
+// You can pass in [option.ClientOption] you plan on passing to [NewGRPCClient]
+func CheckDirectConnectivitySupported(ctx context.Context, bucket string, opts ...option.ClientOption) error {
+ view := metric.NewView(
+ metric.Instrument{
+ Name: "grpc.client.attempt.duration",
+ Kind: metric.InstrumentKindHistogram,
+ },
+ metric.Stream{AttributeFilter: attribute.NewAllowKeysFilter("grpc.lb.locality")},
+ )
+ mr := metric.NewManualReader()
+ provider := metric.NewMeterProvider(metric.WithReader(mr), metric.WithView(view))
+ // Provider handles shutting down ManualReader
+ defer provider.Shutdown(ctx)
+ mo := opentelemetry.MetricsOptions{
+ MeterProvider: provider,
+ Metrics: stats.NewMetrics("grpc.client.attempt.duration"),
+ OptionalLabels: []string{"grpc.lb.locality"},
+ }
+ combinedOpts := append(opts, WithDisabledClientMetrics(), option.WithGRPCDialOption(opentelemetry.DialOption(opentelemetry.Options{MetricsOptions: mo})))
+ client, err := NewGRPCClient(ctx, combinedOpts...)
+ if err != nil {
+ return fmt.Errorf("storage.NewGRPCClient: %w", err)
+ }
+ defer client.Close()
+ if _, err = client.Bucket(bucket).Attrs(ctx); err != nil {
+ return fmt.Errorf("Bucket.Attrs: %w", err)
+ }
+ // Call manual reader to collect metric
+ rm := metricdata.ResourceMetrics{}
+ if err = mr.Collect(context.Background(), &rm); err != nil {
+ return fmt.Errorf("ManualReader.Collect: %w", err)
+ }
+ for _, sm := range rm.ScopeMetrics {
+ for _, m := range sm.Metrics {
+ if m.Name == "grpc.client.attempt.duration" {
+ hist := m.Data.(metricdata.Histogram[float64])
+ for _, d := range hist.DataPoints {
+ v, present := d.Attributes.Value("grpc.lb.locality")
+ if present && v.AsString() != "" {
+ return nil
+ }
+ }
+ }
+ }
+ }
+ return errors.New("storage: direct connectivity not detected")
+}
+
// Close closes the Client.
//
// Close need not be called at program exit.
diff --git a/vendor/modules.txt b/vendor/modules.txt
index b688db14dac25..216c325c60c67 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -10,7 +10,7 @@ cloud.google.com/go/internal/optional
cloud.google.com/go/internal/pubsub
cloud.google.com/go/internal/trace
cloud.google.com/go/internal/version
-# cloud.google.com/go/auth v0.10.0
+# cloud.google.com/go/auth v0.10.2
## explicit; go 1.21
cloud.google.com/go/auth
cloud.google.com/go/auth/credentials
@@ -63,7 +63,7 @@ cloud.google.com/go/pubsub/apiv1/pubsubpb
cloud.google.com/go/pubsub/internal
cloud.google.com/go/pubsub/internal/distribution
cloud.google.com/go/pubsub/internal/scheduler
-# cloud.google.com/go/storage v1.46.0
+# cloud.google.com/go/storage v1.47.0
## explicit; go 1.21
cloud.google.com/go/storage
cloud.google.com/go/storage/experimental
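Among the v1.47.0 additions vendored above is CheckDirectConnectivitySupported, which probes whether gRPC direct connectivity to a bucket is available by inspecting the grpc.lb.locality label on the grpc.client.attempt.duration metric through an OpenTelemetry manual reader. A minimal caller sketch (the bucket name is a placeholder):

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	// A nil error means direct connectivity was detected; per the doc
	// comment above, this is expected when running inside GCP against
	// a bucket in the same region.
	err := storage.CheckDirectConnectivitySupported(ctx, "my-bucket") // hypothetical bucket name
	if err != nil {
		log.Printf("direct connectivity not available: %v", err)
		return
	}
	log.Println("gRPC direct connectivity detected")
}
```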
type: fix
masked_commit_message: update module cloud.google.com/go/storage to v1.47.0 (#14940)

hash: 7200107ac4dcaf4f8e84e0c906e2f7d423aaa23d
date: 2025-03-01 14:38:11
author: renovate[bot]
commit_message: chore(deps): update dependency @types/node to v22.13.8 (main) (#16519)
is_merge: false
git_diff:
diff --git a/pkg/ui/frontend/package-lock.json b/pkg/ui/frontend/package-lock.json
index 00a3e72275992..8599a1cf70693 100644
--- a/pkg/ui/frontend/package-lock.json
+++ b/pkg/ui/frontend/package-lock.json
@@ -2738,9 +2738,9 @@
"license": "MIT"
},
"node_modules/@types/node": {
- "version": "22.13.7",
- "resolved": "https://registry.npmjs.org/@types/node/-/node-22.13.7.tgz",
- "integrity": "sha512-oU2q+BsQldB9lYxHNp/5aZO+/Bs0Usa74Abo9mAKulz4ahQyXRHK6UVKYIN8KSC8HXwhWSi7b49JnX+txuac0w==",
+ "version": "22.13.8",
+ "resolved": "https://registry.npmjs.org/@types/node/-/node-22.13.8.tgz",
+ "integrity": "sha512-G3EfaZS+iOGYWLLRCEAXdWK9my08oHNZ+FHluRiggIYJPOXzhOiDgpVCUHaUvyIC5/fj7C/p637jdzC666AOKQ==",
"dev": true,
"license": "MIT",
"dependencies": {
type: chore
masked_commit_message: update dependency @types/node to v22.13.8 (main) (#16519)

hash: b65b9bca92226b219ecbc229c8c05a797868225c
date: 2024-09-25 01:53:18
author: Trevor Whitney
commit_message: chore: fix loki local config (#14250)
is_merge: false
git_diff:
diff --git a/cmd/loki/loki-local-config.yaml b/cmd/loki/loki-local-config.yaml
index 38efa3f6bf6e7..c593b14a252c0 100644
--- a/cmd/loki/loki-local-config.yaml
+++ b/cmd/loki/loki-local-config.yaml
@@ -18,9 +18,6 @@ common:
kvstore:
store: inmemory
-ingester_rf1:
- enabled: false
-
query_range:
results_cache:
cache:
type: chore
masked_commit_message: fix loki local config (#14250)

hash: ccf3e4e5ecd5f237d22c82c7e3623aafd216245a
date: 2025-01-23 03:50:06
author: renovate[bot]
commit_message: chore(deps): update grafana/promtail docker tag to v3.3.2 (main) (#15888)
is_merge: false
git_diff:
diff --git a/docs/sources/send-data/promtail/cloud/eks/values.yaml b/docs/sources/send-data/promtail/cloud/eks/values.yaml
index c2f26f4de1ec8..a2c060c4b0171 100644
--- a/docs/sources/send-data/promtail/cloud/eks/values.yaml
+++ b/docs/sources/send-data/promtail/cloud/eks/values.yaml
@@ -17,7 +17,7 @@ initContainer:
image:
repository: grafana/promtail
- tag: 3.3.1
+ tag: 3.3.2
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
type: chore
masked_commit_message: update grafana/promtail docker tag to v3.3.2 (main) (#15888)

hash: 20b9bd2b776d294c3b03e3dc8b4ff1c39f76e881
date: 2025-02-18 11:14:52
author: renovate[bot]
commit_message: fix(deps): update module github.com/fsouza/fake-gcs-server to v1.52.2 (main) (#16334)
is_merge: false
git_diff:
diff --git a/go.mod b/go.mod
index 5699b391dce39..1f63022a6ddda 100644
--- a/go.mod
+++ b/go.mod
@@ -37,7 +37,7 @@ require (
github.com/fatih/color v1.18.0
github.com/felixge/fgprof v0.9.5
github.com/fluent/fluent-bit-go v0.0.0-20230731091245-a7a013e2473c
- github.com/fsouza/fake-gcs-server v1.52.1
+ github.com/fsouza/fake-gcs-server v1.52.2
github.com/go-kit/log v0.2.1
github.com/go-logfmt/logfmt v0.6.0
github.com/gocql/gocql v1.7.0
@@ -69,7 +69,7 @@ require (
github.com/klauspost/pgzip v1.2.6
github.com/leodido/go-syslog/v4 v4.2.0
github.com/mattn/go-ieproxy v0.0.12
- github.com/minio/minio-go/v7 v7.0.85
+ github.com/minio/minio-go/v7 v7.0.86
github.com/mitchellh/go-wordwrap v1.0.1
github.com/mitchellh/mapstructure v1.5.1-0.20220423185008-bf980b35cac4
github.com/modern-go/reflect2 v1.0.2
@@ -173,7 +173,7 @@ require (
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-redsync/redsync/v4 v4.13.0 // indirect
- github.com/goccy/go-json v0.10.4 // indirect
+ github.com/goccy/go-json v0.10.5 // indirect
github.com/gorilla/handlers v1.5.2 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/imdario/mergo v0.3.16 // indirect
@@ -182,6 +182,7 @@ require (
github.com/mattn/go-runewidth v0.0.16 // indirect
github.com/mdlayher/socket v0.5.1 // indirect
github.com/mdlayher/vsock v1.2.1 // indirect
+ github.com/minio/crc64nvme v1.0.0 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/sys/userns v0.1.0 // indirect
github.com/ncw/swift v1.0.53 // indirect
diff --git a/go.sum b/go.sum
index 315e82422d120..957ca089091e0 100644
--- a/go.sum
+++ b/go.sum
@@ -401,8 +401,8 @@ github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMo
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
github.com/fsnotify/fsnotify v1.8.0 h1:dAwr6QBTBZIkG8roQaJjGof0pp0EeF+tNV7YBP3F/8M=
github.com/fsnotify/fsnotify v1.8.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
-github.com/fsouza/fake-gcs-server v1.52.1 h1:Hx3G2ZpyBzHGmW7cHURWWoTm6jM3M5fcWMIAHBYlJyc=
-github.com/fsouza/fake-gcs-server v1.52.1/go.mod h1:Paxf25VmSNMN52L+2/cVulF5fkLUA0YJIYjTGJiwf3c=
+github.com/fsouza/fake-gcs-server v1.52.2 h1:j6ne83nqHrlX5EEor7WWVIKdBsztGtwJ1J2mL+k+iio=
+github.com/fsouza/fake-gcs-server v1.52.2/go.mod h1:47HKyIkz6oLTes1R8vEaHLwXfzYsGfmDUk1ViHHAUsA=
github.com/fullstorydev/emulators/storage v0.0.0-20240401123056-edc69752f474 h1:TufioMBjkJ6/Oqmlye/ReuxHFS35HyLmypj/BNy/8GY=
github.com/fullstorydev/emulators/storage v0.0.0-20240401123056-edc69752f474/go.mod h1:PQwxF4UU8wuL+srGxr3BOhIW5zXqgucwVlO/nPZLsxw=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
@@ -481,8 +481,8 @@ github.com/go-zookeeper/zk v1.0.3/go.mod h1:nOB03cncLtlp4t+UAkGSV+9beXP/akpekBwL
github.com/gobwas/httphead v0.1.0/go.mod h1:O/RXo79gxV8G+RqlR/otEwx4Q36zl9rqC5u12GKvMCM=
github.com/gobwas/pool v0.2.1/go.mod h1:q8bcK0KcYlCgd9e7WYLm9LpyS+YeLd8JVDW6WezmKEw=
github.com/gobwas/ws v1.2.1/go.mod h1:hRKAFb8wOxFROYNsT1bqfWnhX+b5MFeJM9r2ZSwg/KY=
-github.com/goccy/go-json v0.10.4 h1:JSwxQzIqKfmFX1swYPpUThQZp/Ka4wzJdK0LWVytLPM=
-github.com/goccy/go-json v0.10.4/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
+github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
+github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gofrs/flock v0.7.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
@@ -850,10 +850,12 @@ github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKju
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/miekg/dns v1.1.62 h1:cN8OuEF1/x5Rq6Np+h1epln8OiyPWV+lROx9LxcGgIQ=
github.com/miekg/dns v1.1.62/go.mod h1:mvDlcItzm+br7MToIKqkglaGhlFMHJ9DTNNWONWXbNQ=
+github.com/minio/crc64nvme v1.0.0 h1:MeLcBkCTD4pAoU7TciAfwsfxgkhM2u5hCe48hSEVFr0=
+github.com/minio/crc64nvme v1.0.0/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
-github.com/minio/minio-go/v7 v7.0.85 h1:9psTLS/NTvC3MWoyjhjXpwcKoNbkongaCSF3PNpSuXo=
-github.com/minio/minio-go/v7 v7.0.85/go.mod h1:57YXpvc5l3rjPdhqNrDsvVlY0qPI6UTk1bflAe+9doY=
+github.com/minio/minio-go/v7 v7.0.86 h1:DcgQ0AUjLJzRH6y/HrxiZ8CXarA70PAIufXHodP4s+k=
+github.com/minio/minio-go/v7 v7.0.86/go.mod h1:VbfO4hYwUu3Of9WqGLBZ8vl3Hxnxo4ngxK4hzQDf4x4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_tranport.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_transport.go
similarity index 100%
rename from vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_tranport.go
rename to vendor/github.com/fsouza/fake-gcs-server/fakestorage/mux_transport.go
diff --git a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go
index 664751ee44349..951c40456afb2 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/fakestorage/object.go
@@ -852,7 +852,7 @@ func (s *Server) rewriteObject(r *http.Request) jsonResponse {
return jsonResponse{errorMessage: "Invalid metadata", status: http.StatusBadRequest}
}
- // Only supplied metadata overwrites the new object's metdata
+ // Only supplied metadata overwrites the new object's metadata
if len(metadata.Metadata) == 0 {
metadata.Metadata = obj.Metadata
}
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go
index d915ac1b7a5b6..d52ec98bb97b4 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/fs.go
@@ -277,7 +277,7 @@ func (s *storageFS) CreateObject(obj StreamingObject, conditions Conditions) (St
}
// ListObjects lists the objects in a given bucket with a given prefix and
-// delimeter.
+// delimiter.
func (s *storageFS) ListObjects(bucketName string, prefix string, versions bool) ([]ObjectAttrs, error) {
s.mtx.RLock()
defer s.mtx.RUnlock()
@@ -318,7 +318,7 @@ func (s *storageFS) GetObject(bucketName, objectName string) (StreamingObject, e
return s.getObject(bucketName, objectName)
}
-// GetObjectWithGeneration retrieves an specific version of the object. Not
+// GetObjectWithGeneration retrieves a specific version of the object. Not
// implemented for this backend.
func (s *storageFS) GetObjectWithGeneration(bucketName, objectName string, generation int64) (StreamingObject, error) {
obj, err := s.GetObject(bucketName, objectName)
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go
index c428fcc8d27b1..9b953a9bcdf6b 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/backend/storage.go
@@ -2,7 +2,7 @@
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
-// Package backend proides the backends used by fake-gcs-server.
+// Package backend provides the backends used by fake-gcs-server.
package backend
type Conditions interface {
diff --git a/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go b/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go
index f20ac8c87a40a..7943d0aef58ad 100644
--- a/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go
+++ b/vendor/github.com/fsouza/fake-gcs-server/internal/notification/event.go
@@ -67,7 +67,7 @@ type PubsubEventManager struct {
notifyOn EventNotificationOptions
// writer is where logs are written to.
writer io.Writer
- // bucket, if not empty, only objects from this bucker will generate trigger events.
+ // bucket, if not empty, only objects from this bucket will generate trigger events.
bucket string
// objectPrefix, if not empty, only objects having this prefix will generate
// trigger events.
diff --git a/vendor/github.com/goccy/go-json/internal/runtime/type.go b/vendor/github.com/goccy/go-json/internal/runtime/type.go
index 0167cd2c0183d..4b693cb0bbedc 100644
--- a/vendor/github.com/goccy/go-json/internal/runtime/type.go
+++ b/vendor/github.com/goccy/go-json/internal/runtime/type.go
@@ -2,6 +2,7 @@ package runtime
import (
"reflect"
+ "sync"
"unsafe"
)
@@ -23,8 +24,8 @@ type TypeAddr struct {
}
var (
- typeAddr *TypeAddr
- alreadyAnalyzed bool
+ typeAddr *TypeAddr
+ once sync.Once
)
//go:linkname typelinks reflect.typelinks
@@ -34,67 +35,64 @@ func typelinks() ([]unsafe.Pointer, [][]int32)
func rtypeOff(unsafe.Pointer, int32) unsafe.Pointer
func AnalyzeTypeAddr() *TypeAddr {
- defer func() {
- alreadyAnalyzed = true
- }()
- if alreadyAnalyzed {
- return typeAddr
- }
- sections, offsets := typelinks()
- if len(sections) != 1 {
- return nil
- }
- if len(offsets) != 1 {
- return nil
- }
- section := sections[0]
- offset := offsets[0]
- var (
- min uintptr = uintptr(^uint(0))
- max uintptr = 0
- isAligned64 = true
- isAligned32 = true
- )
- for i := 0; i < len(offset); i++ {
- typ := (*Type)(rtypeOff(section, offset[i]))
- addr := uintptr(unsafe.Pointer(typ))
- if min > addr {
- min = addr
+ once.Do(func() {
+ sections, offsets := typelinks()
+ if len(sections) != 1 {
+ return
}
- if max < addr {
- max = addr
+ if len(offsets) != 1 {
+ return
}
- if typ.Kind() == reflect.Ptr {
- addr = uintptr(unsafe.Pointer(typ.Elem()))
+ section := sections[0]
+ offset := offsets[0]
+ var (
+ min uintptr = uintptr(^uint(0))
+ max uintptr = 0
+ isAligned64 = true
+ isAligned32 = true
+ )
+ for i := 0; i < len(offset); i++ {
+ typ := (*Type)(rtypeOff(section, offset[i]))
+ addr := uintptr(unsafe.Pointer(typ))
if min > addr {
min = addr
}
if max < addr {
max = addr
}
+ if typ.Kind() == reflect.Ptr {
+ addr = uintptr(unsafe.Pointer(typ.Elem()))
+ if min > addr {
+ min = addr
+ }
+ if max < addr {
+ max = addr
+ }
+ }
+ isAligned64 = isAligned64 && (addr-min)&63 == 0
+ isAligned32 = isAligned32 && (addr-min)&31 == 0
+ }
+ addrRange := max - min
+ if addrRange == 0 {
+ return
+ }
+ var addrShift uintptr
+ if isAligned64 {
+ addrShift = 6
+ } else if isAligned32 {
+ addrShift = 5
}
- isAligned64 = isAligned64 && (addr-min)&63 == 0
- isAligned32 = isAligned32 && (addr-min)&31 == 0
- }
- addrRange := max - min
- if addrRange == 0 {
- return nil
- }
- var addrShift uintptr
- if isAligned64 {
- addrShift = 6
- } else if isAligned32 {
- addrShift = 5
- }
- cacheSize := addrRange >> addrShift
- if cacheSize > maxAcceptableTypeAddrRange {
- return nil
- }
- typeAddr = &TypeAddr{
- BaseTypeAddr: min,
- MaxTypeAddr: max,
- AddrRange: addrRange,
- AddrShift: addrShift,
- }
+ cacheSize := addrRange >> addrShift
+ if cacheSize > maxAcceptableTypeAddrRange {
+ return
+ }
+ typeAddr = &TypeAddr{
+ BaseTypeAddr: min,
+ MaxTypeAddr: max,
+ AddrRange: addrRange,
+ AddrShift: addrShift,
+ }
+ })
+
return typeAddr
}
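
The go-json change above swaps a racy boolean guard for `sync.Once`, which guarantees the analysis runs at most once even under concurrent callers. A minimal standalone sketch of that lazy-initialization pattern (names here are illustrative, not from go-json):

```go
package main

import (
	"fmt"
	"sync"
)

var (
	initOnce  sync.Once
	lazyValue *int // stays nil if initialization bails out, like typeAddr above
)

// getLazyValue computes the value at most once; concurrent callers block
// until the first Do completes, then all observe the same result.
func getLazyValue() *int {
	initOnce.Do(func() {
		v := 42 // the expensive one-time analysis would go here
		lazyValue = &v
	})
	return lazyValue
}

func main() {
	fmt.Println(*getLazyValue())
}
```

Note how the early `return`s inside the `once.Do` closure replace the old `return nil` paths: the closure still runs exactly once, and the package-level variable simply remains nil.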
diff --git a/vendor/github.com/minio/crc64nvme/LICENSE b/vendor/github.com/minio/crc64nvme/LICENSE
new file mode 100644
index 0000000000000..d645695673349
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/minio/crc64nvme/README.md b/vendor/github.com/minio/crc64nvme/README.md
new file mode 100644
index 0000000000000..977dfcc881881
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/README.md
@@ -0,0 +1,20 @@
+
+## crc64nvme
+
+This Golang package calculates CRC64 checksums using carryless-multiplication accelerated with SIMD instructions for both ARM and x86. It is based on the NVME polynomial as specified in the [NVM Express® NVM Command Set Specification](https://nvmexpress.org/wp-content/uploads/NVM-Express-NVM-Command-Set-Specification-1.0d-2023.12.28-Ratified.pdf).
+
+The code is based on the [crc64fast-nvme](https://github.com/awesomized/crc64fast-nvme.git) package in Rust and is released under the Apache 2.0 license.
+
+For more background on the exact technique used, see this [Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction](https://web.archive.org/web/20131224125630/https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf) paper.
+
+### Performance
+
+To follow.
+
+### Requirements
+
+All Go versions >= 1.22 are supported.
+
+### Contributing
+
+Contributions are welcome; please send PRs for any enhancements.
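
Based on the exported API in `crc64.go` below (`New`, `Checksum`), a typical use of the package would look like this sketch:

```go
package main

import (
	"fmt"

	"github.com/minio/crc64nvme"
)

func main() {
	data := []byte("hello, world")

	// One-shot checksum of a byte slice.
	sum := crc64nvme.Checksum(data)

	// Streaming computation via the hash.Hash64 interface.
	h := crc64nvme.New()
	h.Write(data)

	fmt.Printf("%016x %016x\n", sum, h.Sum64()) // both values agree
}
```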
diff --git a/vendor/github.com/minio/crc64nvme/crc64.go b/vendor/github.com/minio/crc64nvme/crc64.go
new file mode 100644
index 0000000000000..40ac28c7655e3
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/crc64.go
@@ -0,0 +1,180 @@
+// Copyright (c) 2025 Minio Inc. All rights reserved.
+// Use of this source code is governed by a license that can be
+// found in the LICENSE file.
+
+// Package crc64nvme implements the 64-bit cyclic redundancy check with NVME polynomial.
+package crc64nvme
+
+import (
+ "encoding/binary"
+ "errors"
+ "hash"
+ "sync"
+ "unsafe"
+)
+
+const (
+ // The size of a CRC-64 checksum in bytes.
+ Size = 8
+
+	// The NVME polynomial (reversed, as used by Go)
+ NVME = 0x9a6c9329ac4bc9b5
+)
+
+var (
+ // precalculated table.
+ nvmeTable = makeTable(NVME)
+)
+
+// table is a 256-word table representing the polynomial for efficient processing.
+type table [256]uint64
+
+var (
+ slicing8TablesBuildOnce sync.Once
+ slicing8TableNVME *[8]table
+)
+
+func buildSlicing8TablesOnce() {
+ slicing8TablesBuildOnce.Do(buildSlicing8Tables)
+}
+
+func buildSlicing8Tables() {
+ slicing8TableNVME = makeSlicingBy8Table(makeTable(NVME))
+}
+
+func makeTable(poly uint64) *table {
+ t := new(table)
+ for i := 0; i < 256; i++ {
+ crc := uint64(i)
+ for j := 0; j < 8; j++ {
+ if crc&1 == 1 {
+ crc = (crc >> 1) ^ poly
+ } else {
+ crc >>= 1
+ }
+ }
+ t[i] = crc
+ }
+ return t
+}
+
+func makeSlicingBy8Table(t *table) *[8]table {
+ var helperTable [8]table
+ helperTable[0] = *t
+ for i := 0; i < 256; i++ {
+ crc := t[i]
+ for j := 1; j < 8; j++ {
+ crc = t[crc&0xff] ^ (crc >> 8)
+ helperTable[j][i] = crc
+ }
+ }
+ return &helperTable
+}
+
+// digest represents the partial evaluation of a checksum.
+type digest struct {
+ crc uint64
+}
+
+// New creates a new hash.Hash64 computing the CRC-64 checksum using the
+// NVME polynomial. Its Sum method will lay the
+// value out in big-endian byte order. The returned Hash64 also
+// implements [encoding.BinaryMarshaler] and [encoding.BinaryUnmarshaler] to
+// marshal and unmarshal the internal state of the hash.
+func New() hash.Hash64 { return &digest{0} }
+
+func (d *digest) Size() int { return Size }
+
+func (d *digest) BlockSize() int { return 1 }
+
+func (d *digest) Reset() { d.crc = 0 }
+
+const (
+ magic = "crc\x02"
+ marshaledSize = len(magic) + 8 + 8
+)
+
+func (d *digest) MarshalBinary() ([]byte, error) {
+ b := make([]byte, 0, marshaledSize)
+ b = append(b, magic...)
+ b = binary.BigEndian.AppendUint64(b, tableSum)
+ b = binary.BigEndian.AppendUint64(b, d.crc)
+ return b, nil
+}
+
+func (d *digest) UnmarshalBinary(b []byte) error {
+ if len(b) < len(magic) || string(b[:len(magic)]) != magic {
+ return errors.New("hash/crc64: invalid hash state identifier")
+ }
+ if len(b) != marshaledSize {
+ return errors.New("hash/crc64: invalid hash state size")
+ }
+ if tableSum != binary.BigEndian.Uint64(b[4:]) {
+ return errors.New("hash/crc64: tables do not match")
+ }
+ d.crc = binary.BigEndian.Uint64(b[12:])
+ return nil
+}
+
+func update(crc uint64, p []byte) uint64 {
+ if hasAsm && len(p) > 127 {
+ ptr := unsafe.Pointer(&p[0])
+ if align := (uintptr(ptr)+15)&^0xf - uintptr(ptr); align > 0 {
+ // Align to 16-byte boundary.
+ crc = update(crc, p[:align])
+ p = p[align:]
+ }
+ runs := len(p) / 128
+ crc = updateAsm(crc, p[:128*runs])
+ return update(crc, p[128*runs:])
+ }
+
+ buildSlicing8TablesOnce()
+ crc = ^crc
+ // table comparison is somewhat expensive, so avoid it for small sizes
+ for len(p) >= 64 {
+ var helperTable = slicing8TableNVME
+ // Update using slicing-by-8
+ for len(p) > 8 {
+ crc ^= binary.LittleEndian.Uint64(p)
+ crc = helperTable[7][crc&0xff] ^
+ helperTable[6][(crc>>8)&0xff] ^
+ helperTable[5][(crc>>16)&0xff] ^
+ helperTable[4][(crc>>24)&0xff] ^
+ helperTable[3][(crc>>32)&0xff] ^
+ helperTable[2][(crc>>40)&0xff] ^
+ helperTable[1][(crc>>48)&0xff] ^
+ helperTable[0][crc>>56]
+ p = p[8:]
+ }
+ }
+	// For remainders or small sizes
+ for _, v := range p {
+ crc = nvmeTable[byte(crc)^v] ^ (crc >> 8)
+ }
+ return ^crc
+}
+
+// Update returns the result of adding the bytes in p to the crc.
+func Update(crc uint64, p []byte) uint64 {
+ return update(crc, p)
+}
+
+func (d *digest) Write(p []byte) (n int, err error) {
+ d.crc = update(d.crc, p)
+ return len(p), nil
+}
+
+func (d *digest) Sum64() uint64 { return d.crc }
+
+func (d *digest) Sum(in []byte) []byte {
+ s := d.Sum64()
+ return append(in, byte(s>>56), byte(s>>48), byte(s>>40), byte(s>>32), byte(s>>24), byte(s>>16), byte(s>>8), byte(s))
+}
+
+// Checksum returns the CRC-64 checksum of data
+// using the NVME polynomial.
+func Checksum(data []byte) uint64 { return update(0, data) }
+
+// tableSum is the checksum of the NVME table, used to validate marshaled hash state
+const tableSum = 0x8ddd9ee4402c7163
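
Because `update` inverts the running CRC on entry and exit, the exported `Update` is chainable: feeding data in chunks produces the same result as a one-shot `Checksum`. A small sketch demonstrating that property:

```go
package main

import (
	"fmt"

	"github.com/minio/crc64nvme"
)

func main() {
	a, b := []byte("hello, "), []byte("world")

	whole := crc64nvme.Checksum(append(append([]byte{}, a...), b...))

	// Chaining Update carries the CRC state across chunks.
	chunked := crc64nvme.Update(crc64nvme.Update(0, a), b)

	fmt.Println(whole == chunked) // true
}
```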
diff --git a/vendor/github.com/minio/crc64nvme/crc64_amd64.go b/vendor/github.com/minio/crc64nvme/crc64_amd64.go
new file mode 100644
index 0000000000000..fc8538bc3e3b5
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/crc64_amd64.go
@@ -0,0 +1,15 @@
+// Copyright (c) 2025 Minio Inc. All rights reserved.
+// Use of this source code is governed by a license that can be
+// found in the LICENSE file.
+
+//go:build !noasm && !appengine && !gccgo
+
+package crc64nvme
+
+import (
+ "github.com/klauspost/cpuid/v2"
+)
+
+var hasAsm = cpuid.CPU.Supports(cpuid.SSE2, cpuid.CLMUL, cpuid.SSE4)
+
+func updateAsm(crc uint64, p []byte) (checksum uint64)
diff --git a/vendor/github.com/minio/crc64nvme/crc64_amd64.s b/vendor/github.com/minio/crc64nvme/crc64_amd64.s
new file mode 100644
index 0000000000000..04d8cf7869a84
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/crc64_amd64.s
@@ -0,0 +1,155 @@
+// Copyright (c) 2025 Minio Inc. All rights reserved.
+// Use of this source code is governed by a license that can be
+// found in the LICENSE file.
+
+#include "textflag.h"
+
+TEXT ·updateAsm(SB), $0-40
+ MOVQ crc+0(FP), AX // checksum
+ MOVQ p_base+8(FP), SI // start pointer
+ MOVQ p_len+16(FP), CX // length of buffer
+ NOTQ AX
+ SHRQ $7, CX
+ CMPQ CX, $1
+ JLT skip128
+
+ VMOVDQA 0x00(SI), X0
+ VMOVDQA 0x10(SI), X1
+ VMOVDQA 0x20(SI), X2
+ VMOVDQA 0x30(SI), X3
+ VMOVDQA 0x40(SI), X4
+ VMOVDQA 0x50(SI), X5
+ VMOVDQA 0x60(SI), X6
+ VMOVDQA 0x70(SI), X7
+ MOVQ AX, X8
+ PXOR X8, X0
+ CMPQ CX, $1
+ JE tail128
+
+ MOVQ $0xa1ca681e733f9c40, AX
+ MOVQ AX, X8
+ MOVQ $0x5f852fb61e8d92dc, AX
+ PINSRQ $0x1, AX, X9
+
+loop128:
+ ADDQ $128, SI
+ SUBQ $1, CX
+ VMOVDQA X0, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X0
+ PXOR X10, X0
+ PXOR 0(SI), X0
+ VMOVDQA X1, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X1
+ PXOR X10, X1
+ PXOR 0x10(SI), X1
+ VMOVDQA X2, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X2
+ PXOR X10, X2
+ PXOR 0x20(SI), X2
+ VMOVDQA X3, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X3
+ PXOR X10, X3
+ PXOR 0x30(SI), X3
+ VMOVDQA X4, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X4
+ PXOR X10, X4
+ PXOR 0x40(SI), X4
+ VMOVDQA X5, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X5
+ PXOR X10, X5
+ PXOR 0x50(SI), X5
+ VMOVDQA X6, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X6
+ PXOR X10, X6
+ PXOR 0x60(SI), X6
+ VMOVDQA X7, X10
+ PCLMULQDQ $0x00, X8, X10
+ PCLMULQDQ $0x11, X9, X7
+ PXOR X10, X7
+ PXOR 0x70(SI), X7
+ CMPQ CX, $1
+ JGT loop128
+
+tail128:
+ MOVQ $0xd083dd594d96319d, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X0, X11
+ MOVQ $0x946588403d4adcbc, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X0
+ PXOR X11, X7
+ PXOR X0, X7
+ MOVQ $0x3c255f5ebc414423, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X1, X11
+ MOVQ $0x34f5a24e22d66e90, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X1
+ PXOR X11, X1
+ PXOR X7, X1
+ MOVQ $0x7b0ab10dd0f809fe, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X2, X11
+ MOVQ $0x03363823e6e791e5, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X2
+ PXOR X11, X2
+ PXOR X1, X2
+ MOVQ $0x0c32cdb31e18a84a, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X3, X11
+ MOVQ $0x62242240ace5045a, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X3
+ PXOR X11, X3
+ PXOR X2, X3
+ MOVQ $0xbdd7ac0ee1a4a0f0, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X4, X11
+ MOVQ $0xa3ffdc1fe8e82a8b, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X4
+ PXOR X11, X4
+ PXOR X3, X4
+ MOVQ $0xb0bc2e589204f500, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X5, X11
+ MOVQ $0xe1e0bb9d45d7a44c, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X5
+ PXOR X11, X5
+ PXOR X4, X5
+ MOVQ $0xeadc41fd2ba3d420, AX
+ MOVQ AX, X11
+ PCLMULQDQ $0x00, X6, X11
+ MOVQ $0x21e9761e252621ac, AX
+ PINSRQ $0x1, AX, X12
+ PCLMULQDQ $0x11, X12, X6
+ PXOR X11, X6
+ PXOR X5, X6
+ MOVQ AX, X5
+ PCLMULQDQ $0x00, X6, X5
+ PSHUFD $0xee, X6, X6
+ PXOR X5, X6
+ MOVQ $0x27ecfa329aef9f77, AX
+ MOVQ AX, X4
+ PCLMULQDQ $0x00, X4, X6
+ PEXTRQ $0, X6, BX
+ MOVQ $0x34d926535897936b, AX
+ MOVQ AX, X4
+ PCLMULQDQ $0x00, X4, X6
+ PXOR X5, X6
+ PEXTRQ $1, X6, AX
+ XORQ BX, AX
+
+skip128:
+ NOTQ AX
+ MOVQ AX, checksum+32(FP)
+ RET
diff --git a/vendor/github.com/minio/crc64nvme/crc64_arm64.go b/vendor/github.com/minio/crc64nvme/crc64_arm64.go
new file mode 100644
index 0000000000000..c77c819ce0caf
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/crc64_arm64.go
@@ -0,0 +1,15 @@
+// Copyright (c) 2025 Minio Inc. All rights reserved.
+// Use of this source code is governed by a license that can be
+// found in the LICENSE file.
+
+//go:build !noasm && !appengine && !gccgo
+
+package crc64nvme
+
+import (
+ "github.com/klauspost/cpuid/v2"
+)
+
+var hasAsm = cpuid.CPU.Supports(cpuid.ASIMD) && cpuid.CPU.Supports(cpuid.PMULL)
+
+func updateAsm(crc uint64, p []byte) (checksum uint64)
diff --git a/vendor/github.com/minio/crc64nvme/crc64_arm64.s b/vendor/github.com/minio/crc64nvme/crc64_arm64.s
new file mode 100644
index 0000000000000..b61866f6306e2
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/crc64_arm64.s
@@ -0,0 +1,155 @@
+// Copyright (c) 2025 Minio Inc. All rights reserved.
+// Use of this source code is governed by a license that can be
+// found in the LICENSE file.
+
+#include "textflag.h"
+
+TEXT ·updateAsm(SB), $0-40
+ MOVD crc+0(FP), R0 // checksum
+ MOVD p_base+8(FP), R1 // start pointer
+ MOVD p_len+16(FP), R2 // length of buffer
+ MOVD $·const(SB), R3 // constants
+ MVN R0, R0
+ LSR $7, R2, R2
+ CMP $1, R2
+ BLT skip128
+
+ FLDPQ (R1), (F0, F1)
+ FLDPQ 32(R1), (F2, F3)
+ FLDPQ 64(R1), (F4, F5)
+ FLDPQ 96(R1), (F6, F7)
+ FMOVD R0, F8
+ VMOVI $0, V9.B16
+ VMOV V9.D[0], V8.D[1]
+ VEOR V8.B16, V0.B16, V0.B16
+ CMP $1, R2
+ BEQ tail128
+
+ MOVD 112(R3), R4
+ MOVD 120(R3), R5
+ FMOVD R4, F8
+ VDUP R5, V9.D2
+
+loop128:
+ ADD $128, R1, R1
+ SUB $1, R2, R2
+ VPMULL V0.D1, V8.D1, V10.Q1
+ VPMULL2 V0.D2, V9.D2, V0.Q1
+ FLDPQ (R1), (F11, F12)
+ VEOR3 V0.B16, V11.B16, V10.B16, V0.B16
+ VPMULL V1.D1, V8.D1, V10.Q1
+ VPMULL2 V1.D2, V9.D2, V1.Q1
+ VEOR3 V1.B16, V12.B16, V10.B16, V1.B16
+ VPMULL V2.D1, V8.D1, V10.Q1
+ VPMULL2 V2.D2, V9.D2, V2.Q1
+ FLDPQ 32(R1), (F11, F12)
+ VEOR3 V2.B16, V11.B16, V10.B16, V2.B16
+ VPMULL V3.D1, V8.D1, V10.Q1
+ VPMULL2 V3.D2, V9.D2, V3.Q1
+ VEOR3 V3.B16, V12.B16, V10.B16, V3.B16
+ VPMULL V4.D1, V8.D1, V10.Q1
+ VPMULL2 V4.D2, V9.D2, V4.Q1
+ FLDPQ 64(R1), (F11, F12)
+ VEOR3 V4.B16, V11.B16, V10.B16, V4.B16
+ VPMULL V5.D1, V8.D1, V10.Q1
+ VPMULL2 V5.D2, V9.D2, V5.Q1
+ VEOR3 V5.B16, V12.B16, V10.B16, V5.B16
+ VPMULL V6.D1, V8.D1, V10.Q1
+ VPMULL2 V6.D2, V9.D2, V6.Q1
+ FLDPQ 96(R1), (F11, F12)
+ VEOR3 V6.B16, V11.B16, V10.B16, V6.B16
+ VPMULL V7.D1, V8.D1, V10.Q1
+ VPMULL2 V7.D2, V9.D2, V7.Q1
+ VEOR3 V7.B16, V12.B16, V10.B16, V7.B16
+ CMP $1, R2
+ BHI loop128
+
+tail128:
+ MOVD (R3), R4
+ FMOVD R4, F11
+ VPMULL V0.D1, V11.D1, V11.Q1
+ MOVD 8(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V0.D2, V12.D2, V0.Q1
+ VEOR3 V0.B16, V7.B16, V11.B16, V7.B16
+ MOVD 16(R3), R4
+ FMOVD R4, F11
+ VPMULL V1.D1, V11.D1, V11.Q1
+ MOVD 24(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V1.D2, V12.D2, V1.Q1
+ VEOR3 V1.B16, V11.B16, V7.B16, V1.B16
+ MOVD 32(R3), R4
+ FMOVD R4, F11
+ VPMULL V2.D1, V11.D1, V11.Q1
+ MOVD 40(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V2.D2, V12.D2, V2.Q1
+ VEOR3 V2.B16, V11.B16, V1.B16, V2.B16
+ MOVD 48(R3), R4
+ FMOVD R4, F11
+ VPMULL V3.D1, V11.D1, V11.Q1
+ MOVD 56(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V3.D2, V12.D2, V3.Q1
+ VEOR3 V3.B16, V11.B16, V2.B16, V3.B16
+ MOVD 64(R3), R4
+ FMOVD R4, F11
+ VPMULL V4.D1, V11.D1, V11.Q1
+ MOVD 72(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V4.D2, V12.D2, V4.Q1
+ VEOR3 V4.B16, V11.B16, V3.B16, V4.B16
+ MOVD 80(R3), R4
+ FMOVD R4, F11
+ VPMULL V5.D1, V11.D1, V11.Q1
+ MOVD 88(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V5.D2, V12.D2, V5.Q1
+ VEOR3 V5.B16, V11.B16, V4.B16, V5.B16
+ MOVD 96(R3), R4
+ FMOVD R4, F11
+ VPMULL V6.D1, V11.D1, V11.Q1
+ MOVD 104(R3), R4
+ VDUP R4, V12.D2
+ VPMULL2 V6.D2, V12.D2, V6.Q1
+ VEOR3 V6.B16, V11.B16, V5.B16, V6.B16
+ FMOVD R4, F5
+ VPMULL V6.D1, V5.D1, V5.Q1
+ VDUP V6.D[1], V6.D2
+ VEOR V5.B8, V6.B8, V6.B8
+ MOVD 128(R3), R4
+ FMOVD R4, F4
+ VPMULL V4.D1, V6.D1, V6.Q1
+ FMOVD F6, R4
+ MOVD 136(R3), R5
+ FMOVD R5, F4
+ VPMULL V4.D1, V6.D1, V6.Q1
+ VEOR V6.B16, V5.B16, V6.B16
+ VMOV V6.D[1], R5
+ EOR R4, R5, R0
+
+skip128:
+ MVN R0, R0
+ MOVD R0, checksum+32(FP)
+ RET
+
+DATA ·const+0x000(SB)/8, $0xd083dd594d96319d // K_959
+DATA ·const+0x008(SB)/8, $0x946588403d4adcbc // K_895
+DATA ·const+0x010(SB)/8, $0x3c255f5ebc414423 // K_831
+DATA ·const+0x018(SB)/8, $0x34f5a24e22d66e90 // K_767
+DATA ·const+0x020(SB)/8, $0x7b0ab10dd0f809fe // K_703
+DATA ·const+0x028(SB)/8, $0x03363823e6e791e5 // K_639
+DATA ·const+0x030(SB)/8, $0x0c32cdb31e18a84a // K_575
+DATA ·const+0x038(SB)/8, $0x62242240ace5045a // K_511
+DATA ·const+0x040(SB)/8, $0xbdd7ac0ee1a4a0f0 // K_447
+DATA ·const+0x048(SB)/8, $0xa3ffdc1fe8e82a8b // K_383
+DATA ·const+0x050(SB)/8, $0xb0bc2e589204f500 // K_319
+DATA ·const+0x058(SB)/8, $0xe1e0bb9d45d7a44c // K_255
+DATA ·const+0x060(SB)/8, $0xeadc41fd2ba3d420 // K_191
+DATA ·const+0x068(SB)/8, $0x21e9761e252621ac // K_127
+DATA ·const+0x070(SB)/8, $0xa1ca681e733f9c40 // K_1087
+DATA ·const+0x078(SB)/8, $0x5f852fb61e8d92dc // K_1023
+DATA ·const+0x080(SB)/8, $0x27ecfa329aef9f77 // MU
+DATA ·const+0x088(SB)/8, $0x34d926535897936b // POLY
+GLOBL ·const(SB), (NOPTR+RODATA), $144
diff --git a/vendor/github.com/minio/crc64nvme/crc64_other.go b/vendor/github.com/minio/crc64nvme/crc64_other.go
new file mode 100644
index 0000000000000..1d78d74310ede
--- /dev/null
+++ b/vendor/github.com/minio/crc64nvme/crc64_other.go
@@ -0,0 +1,9 @@
+// Copyright (c) 2025 Minio Inc. All rights reserved.
+// Use of this source code is governed by a license that can be
+// found in the LICENSE file.
+
+//go:build (!amd64 || noasm || appengine || gccgo) && (!arm64 || noasm || appengine || gccgo)
+
+package crc64nvme
+
+var hasAsm = false
diff --git a/vendor/github.com/minio/minio-go/v7/api.go b/vendor/github.com/minio/minio-go/v7/api.go
index ff9f69118b99c..1b4842ad18e4e 100644
--- a/vendor/github.com/minio/minio-go/v7/api.go
+++ b/vendor/github.com/minio/minio-go/v7/api.go
@@ -155,7 +155,7 @@ type Options struct {
// Global constants.
const (
libraryName = "minio-go"
- libraryVersion = "v7.0.85"
+ libraryVersion = "v7.0.86"
)
// User Agent should always follow the below style.
diff --git a/vendor/github.com/minio/minio-go/v7/checksum.go b/vendor/github.com/minio/minio-go/v7/checksum.go
index 8e4c27ce42fa5..c7456cda2e6b1 100644
--- a/vendor/github.com/minio/minio-go/v7/checksum.go
+++ b/vendor/github.com/minio/minio-go/v7/checksum.go
@@ -30,6 +30,8 @@ import (
"math/bits"
"net/http"
"sort"
+
+ "github.com/minio/crc64nvme"
)
// ChecksumType contains information about the checksum type.
@@ -152,9 +154,6 @@ func (c ChecksumType) RawByteLen() int {
const crc64NVMEPolynomial = 0xad93d23594c93659
-// crc64 uses reversed polynomials.
-var crc64Table = crc64.MakeTable(bits.Reverse64(crc64NVMEPolynomial))
-
// Hasher returns a hasher corresponding to the checksum type.
// Returns nil if no checksum.
func (c ChecksumType) Hasher() hash.Hash {
@@ -168,7 +167,7 @@ func (c ChecksumType) Hasher() hash.Hash {
case ChecksumSHA256:
return sha256.New()
case ChecksumCRC64NVME:
- return crc64.New(crc64Table)
+ return crc64nvme.New()
}
return nil
}
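
The removed stdlib table path and the new `crc64nvme` hasher compute the same function, since the package's `NVME` constant is exactly the bit-reversed form of `crc64NVMEPolynomial`. A quick sketch checking that equivalence, assuming only the constants visible in this diff:

```go
package main

import (
	"fmt"
	"hash/crc64"
	"math/bits"

	"github.com/minio/crc64nvme"
)

func main() {
	const crc64NVMEPolynomial = 0xad93d23594c93659

	// crc64 uses reversed polynomials, mirroring the code removed above.
	table := crc64.MakeTable(bits.Reverse64(crc64NVMEPolynomial))

	data := []byte("checksum me")
	old := crc64.Checksum(data, table)
	fast := crc64nvme.Checksum(data)

	fmt.Println(old == fast) // true: the SIMD package is a drop-in replacement
}
```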
diff --git a/vendor/modules.txt b/vendor/modules.txt
index a513055e42f06..b0d337ec8961f 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -750,8 +750,8 @@ github.com/fluent/fluent-bit-go/output
## explicit; go 1.17
github.com/fsnotify/fsnotify
github.com/fsnotify/fsnotify/internal
-# github.com/fsouza/fake-gcs-server v1.52.1
-## explicit; go 1.22.9
+# github.com/fsouza/fake-gcs-server v1.52.2
+## explicit; go 1.23
github.com/fsouza/fake-gcs-server/fakestorage
github.com/fsouza/fake-gcs-server/internal/backend
github.com/fsouza/fake-gcs-server/internal/checksum
@@ -842,7 +842,7 @@ github.com/go-redsync/redsync/v4/redis/goredis/v9
# github.com/go-zookeeper/zk v1.0.3
## explicit; go 1.13
github.com/go-zookeeper/zk
-# github.com/goccy/go-json v0.10.4
+# github.com/goccy/go-json v0.10.5
## explicit; go 1.19
github.com/goccy/go-json
github.com/goccy/go-json/internal/decoder
@@ -1284,10 +1284,13 @@ github.com/mdlayher/vsock
# github.com/miekg/dns v1.1.62
## explicit; go 1.19
github.com/miekg/dns
+# github.com/minio/crc64nvme v1.0.0
+## explicit; go 1.22
+github.com/minio/crc64nvme
# github.com/minio/md5-simd v1.1.2
## explicit; go 1.14
github.com/minio/md5-simd
-# github.com/minio/minio-go/v7 v7.0.85
+# github.com/minio/minio-go/v7 v7.0.86
## explicit; go 1.22
github.com/minio/minio-go/v7
github.com/minio/minio-go/v7/pkg/cors
|
fix
|
update module github.com/fsouza/fake-gcs-server to v1.52.2 (main) (#16334)
|
6532b4b263deea4581699a076f244c2c818f5e8c
|
2025-01-17 03:37:10
|
Sean P.
|
fix(helm): Update GEL Helm parameters to work with MinIO (#15807)
| false
|
diff --git a/docs/sources/setup/install/helm/reference.md b/docs/sources/setup/install/helm/reference.md
index f94ca93205918..43669ccd0b96d 100644
--- a/docs/sources/setup/install/helm/reference.md
+++ b/docs/sources/setup/install/helm/reference.md
@@ -6922,8 +6922,15 @@ false
"memory": "128Mi"
}
},
- "rootPassword": "supersecret",
- "rootUser": "enterprise-logs"
+ "rootPassword": "supersecretpassword",
+ "rootUser": "root-user",
+ "users": [
+ {
+ "accessKey": "logs-user",
+ "policy": "readwrite",
+ "secretKey": "supersecretpassword"
+ }
+ ]
}
</pre>
</td>
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index 644a921928aad..f49d7974de6f5 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,9 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries below this line.)
+- [BUGFIX] Removed minio-mc init container from admin-api.
+- [BUGFIX] Fixed admin-api and gateway deployment container args.
+
## 6.24.1
- [ENHANCEMENT] Fix Inconsistency between sidecar.securityContext and loki.containerSecurityContext
diff --git a/production/helm/loki/templates/admin-api/deployment-admin-api.yaml b/production/helm/loki/templates/admin-api/deployment-admin-api.yaml
index c5e97dd004c0a..7146ffb6588e6 100644
--- a/production/helm/loki/templates/admin-api/deployment-admin-api.yaml
+++ b/production/helm/loki/templates/admin-api/deployment-admin-api.yaml
@@ -50,27 +50,6 @@ spec:
{{- end }}
securityContext:
{{- toYaml .Values.adminApi.podSecurityContext | nindent 8 }}
- initContainers:
- # Taken from
- # https://github.com/minio/charts/blob/a5c84bcbad884728bff5c9c23541f936d57a13b3/minio/templates/post-install-create-bucket-job.yaml
- {{- if .Values.minio.enabled }}
- - name: minio-mc
- image: "{{ .Values.minio.mcImage.repository }}:{{ .Values.minio.mcImage.tag }}"
- imagePullPolicy: {{ .Values.minio.mcImage.pullPolicy }}
- command: ["/bin/sh", "/config/initialize"]
- env:
- - name: MINIO_ENDPOINT
- value: {{ .Release.Name }}-minio
- - name: MINIO_PORT
- value: {{ .Values.minio.service.port | quote }}
- volumeMounts:
- - name: minio-configuration
- mountPath: /config
- {{- if .Values.minio.tls.enabled }}
- - name: cert-secret-volume-mc
- mountPath: {{ .Values.minio.configPathmc }}certs
- {{ end }}
- {{- end }}
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
@@ -89,10 +68,10 @@ spec:
{{- if .Values.minio.enabled }}
- -admin.client.backend-type=s3
- -admin.client.s3.endpoint={{ template "loki.minio" . }}
- - -admin.client.s3.bucket-name=enterprise-logs-admin
- - -admin.client.s3.access-key-id={{ .Values.minio.accessKey }}
- - -admin.client.s3.secret-access-key={{ .Values.minio.secretKey }}
- - -admin.client.s3.insecure=true
+ - -admin.client.s3.bucket-name={{ .Values.loki.storage.bucketNames.admin }}
+ - -admin.client.s3.access-key-id={{ (index .Values.minio.users 0).accessKey }}
+ - -admin.client.s3.secret-access-key={{ (index .Values.minio.users 0).secretKey }}
+ - -admin.client.s3.insecure={{ .Values.loki.storage.s3.insecure }}
{{- end }}
{{- range $key, $value := .Values.adminApi.extraArgs }}
- "-{{ $key }}={{ $value }}"
diff --git a/production/helm/loki/templates/gateway/deployment-gateway-enterprise.yaml b/production/helm/loki/templates/gateway/deployment-gateway-enterprise.yaml
index 22bf5c1fd6183..6e9bb15dd09d3 100644
--- a/production/helm/loki/templates/gateway/deployment-gateway-enterprise.yaml
+++ b/production/helm/loki/templates/gateway/deployment-gateway-enterprise.yaml
@@ -70,10 +70,10 @@ spec:
{{- if .Values.minio.enabled }}
- -admin.client.backend-type=s3
- -admin.client.s3.endpoint={{ template "loki.minio" . }}
- - -admin.client.s3.bucket-name=enterprise-logs-admin
- - -admin.client.s3.access-key-id={{ .Values.minio.accessKey }}
- - -admin.client.s3.secret-access-key={{ .Values.minio.secretKey }}
- - -admin.client.s3.insecure=true
+ - -admin.client.s3.bucket-name={{ .Values.loki.storage.bucketNames.admin }}
+ - -admin.client.s3.access-key-id={{ (index .Values.minio.users 0).accessKey }}
+ - -admin.client.s3.secret-access-key={{ (index .Values.minio.users 0).secretKey }}
+ - -admin.client.s3.insecure={{ .Values.loki.storage.s3.insecure }}
{{- end }}
{{- if and $isDistributed .Values.enterpriseGateway.useDefaultProxyURLs }}
- -gateway.proxy.default.url=http://{{ template "loki.fullname" . }}-admin-api.{{ .Release.Namespace }}.svc:3100
diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml
index 58d77427ddcaa..378e8b8c268fd 100644
--- a/production/helm/loki/values.yaml
+++ b/production/helm/loki/values.yaml
@@ -3372,8 +3372,16 @@ minio:
# https://docs.min.io/docs/minio-erasure-code-quickstart-guide
# Since we only have 1 replica, that means 2 drives must be used.
drivesPerNode: 2
- rootUser: enterprise-logs
- rootPassword: supersecret
+ # root user; not used for GEL authentication
+ rootUser: root-user
+ rootPassword: supersecretpassword
+ # The first user in the list below is used for Loki/GEL authentication.
+ # You can add additional users if desired; they will not impact Loki/GEL.
+ # `accessKey` = username, `secretKey` = password
+ users:
+ - accessKey: logs-user
+ secretKey: supersecretpassword
+ policy: readwrite
buckets:
- name: chunks
policy: none
|
fix
|
Update GEL Helm parameters to work with MinIO (#15807)
|
2ab63d2e4780578535ae3fec51d95fb9f5a83361
|
2025-02-26 17:30:27
|
Cyril Tovena
|
fix(promtail): remove flaky TestFileTarget_StopsTailersCleanly (#16473)
| false
|
diff --git a/clients/pkg/promtail/targets/file/filetarget.go b/clients/pkg/promtail/targets/file/filetarget.go
index b7b5f4e7c6861..52aec210d9db0 100644
--- a/clients/pkg/promtail/targets/file/filetarget.go
+++ b/clients/pkg/promtail/targets/file/filetarget.go
@@ -50,7 +50,7 @@ type WatchConfig struct {
MaxPollFrequency time.Duration `mapstructure:"max_poll_frequency" yaml:"max_poll_frequency"`
}
-var DefaultWatchConig = WatchConfig{
+var DefaultWatchConfig = WatchConfig{
MinPollFrequency: 250 * time.Millisecond,
MaxPollFrequency: 250 * time.Millisecond,
}
@@ -58,7 +58,7 @@ var DefaultWatchConig = WatchConfig{
// RegisterFlags with prefix registers flags where every name is prefixed by
// prefix. If prefix is a non-empty string, prefix should end with a period.
func (cfg *WatchConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
- d := DefaultWatchConig
+ d := DefaultWatchConfig
f.DurationVar(&cfg.MinPollFrequency, prefix+"min_poll_frequency", d.MinPollFrequency, "Minimum period to poll for file changes")
f.DurationVar(&cfg.MaxPollFrequency, prefix+"max_poll_frequency", d.MaxPollFrequency, "Maximum period to poll for file changes")
@@ -247,7 +247,6 @@ func (t *FileTarget) sync() error {
} else {
// Gets current list of files to tail.
matches, err = doublestar.FilepathGlob(t.path)
-
if err != nil {
return errors.Wrap(err, "filetarget.sync.filepath.Glob")
}
@@ -257,7 +256,6 @@ func (t *FileTarget) sync() error {
matchesExcluded = []string{t.pathExclude}
} else {
matchesExcluded, err = doublestar.FilepathGlob(t.pathExclude)
-
if err != nil {
return errors.Wrap(err, "filetarget.sync.filepathexclude.Glob")
}
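
For context on the renamed `DefaultWatchConfig`, the `RegisterFlagsWithPrefix` helper earlier in this diff is meant to be wired up roughly like the following self-contained sketch (the struct is a local copy and the prefix value is illustrative):

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

// WatchConfig mirrors the promtail struct above, copied locally for illustration.
type WatchConfig struct {
	MinPollFrequency time.Duration
	MaxPollFrequency time.Duration
}

func (cfg *WatchConfig) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
	f.DurationVar(&cfg.MinPollFrequency, prefix+"min_poll_frequency", 250*time.Millisecond, "Minimum period to poll for file changes")
	f.DurationVar(&cfg.MaxPollFrequency, prefix+"max_poll_frequency", 250*time.Millisecond, "Maximum period to poll for file changes")
}

func main() {
	var cfg WatchConfig
	fs := flag.NewFlagSet("promtail", flag.ExitOnError)
	cfg.RegisterFlagsWithPrefix("file-watch.", fs) // note the trailing period
	_ = fs.Parse([]string{"-file-watch.min_poll_frequency=500ms"})
	fmt.Println(cfg.MinPollFrequency, cfg.MaxPollFrequency)
}
```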
diff --git a/clients/pkg/promtail/targets/file/filetarget_test.go b/clients/pkg/promtail/targets/file/filetarget_test.go
index caf33395ba201..6d991265685e9 100644
--- a/clients/pkg/promtail/targets/file/filetarget_test.go
+++ b/clients/pkg/promtail/targets/file/filetarget_test.go
@@ -8,7 +8,6 @@ import (
"os"
"path/filepath"
"sort"
- "sync"
"testing"
"time"
@@ -72,7 +71,7 @@ func TestFileTargetSync(t *testing.T) {
path := logDir1 + "/*.log"
target, err := NewFileTarget(metrics, logger, client, ps, path, "", nil, nil, &Config{
SyncPeriod: 1 * time.Minute, // assure the sync is not called by the ticker
- }, DefaultWatchConig, nil, fakeHandler, "", nil)
+ }, DefaultWatchConfig, nil, fakeHandler, "", nil)
assert.NoError(t, err)
// Start with nothing watched.
@@ -84,7 +83,7 @@ func TestFileTargetSync(t *testing.T) {
}
// Create the base dir, still nothing watched.
- err = os.MkdirAll(logDir1, 0750)
+ err = os.MkdirAll(logDir1, 0o750)
assert.NoError(t, err)
err = target.sync()
@@ -191,7 +190,7 @@ func TestFileTarget_StopsTailersCleanly(t *testing.T) {
registry := prometheus.NewRegistry()
target, err := NewFileTarget(NewMetrics(registry), logger, client, ps, pathToWatch, "", nil, nil, &Config{
SyncPeriod: 10 * time.Millisecond,
- }, DefaultWatchConig, nil, fakeHandler, "", nil)
+ }, DefaultWatchConfig, nil, fakeHandler, "", nil)
assert.NoError(t, err)
_, err = os.Create(logFile)
@@ -247,95 +246,6 @@ func TestFileTarget_StopsTailersCleanly(t *testing.T) {
`), "promtail_files_active_total"))
}
-func TestFileTarget_StopsTailersCleanly_Parallel(t *testing.T) {
- w := log.NewSyncWriter(os.Stderr)
- logger := log.NewLogfmtLogger(w)
-
- tempDir := t.TempDir()
- positionsFileName := filepath.Join(tempDir, "positions.yml")
-
- ps, err := positions.New(logger, positions.Config{
- SyncPeriod: 10 * time.Millisecond,
- PositionsFile: positionsFileName,
- })
- require.NoError(t, err)
-
- client := fake.New(func() {})
- defer client.Stop()
-
- pathToWatch := filepath.Join(tempDir, "*.log")
- registry := prometheus.NewRegistry()
- metrics := NewMetrics(registry)
-
- // Increase this to several thousand to make the test more likely to fail when debugging a race condition
- iterations := 500
- fakeHandler := make(chan fileTargetEvent, 10*iterations)
- for i := 0; i < iterations; i++ {
- logFile := filepath.Join(tempDir, fmt.Sprintf("test_%d.log", i))
-
- target, err := NewFileTarget(metrics, logger, client, ps, pathToWatch, "", nil, nil, &Config{
- SyncPeriod: 10 * time.Millisecond,
- }, DefaultWatchConig, nil, fakeHandler, "", nil)
- assert.NoError(t, err)
-
- file, err := os.Create(logFile)
- assert.NoError(t, err)
-
- // Write some data to the file
- for j := 0; j < 5; j++ {
- _, _ = file.WriteString(fmt.Sprintf("test %d\n", j))
- }
- require.NoError(t, file.Close())
-
- requireEventually(t, func() bool {
- return testutil.CollectAndCount(registry, "promtail_read_lines_total") == 1
- }, "expected 1 read_lines_total metric")
-
- requireEventually(t, func() bool {
- return testutil.CollectAndCount(registry, "promtail_read_bytes_total") == 1
- }, "expected 1 read_bytes_total metric")
-
- requireEventually(t, func() bool {
- return testutil.ToFloat64(metrics.readLines) == 5
- }, "expected 5 read_lines_total")
-
- requireEventually(t, func() bool {
- return testutil.ToFloat64(metrics.totalBytes) == 35
- }, "expected 35 total_bytes")
-
- requireEventually(t, func() bool {
- return testutil.ToFloat64(metrics.readBytes) == 35
- }, "expected 35 read_bytes")
-
- // Concurrently stop the target and remove the file
- wg := sync.WaitGroup{}
- wg.Add(2)
- go func() {
- sleepRandomDuration(time.Millisecond * 10)
- target.Stop()
- wg.Done()
-
- }()
- go func() {
- sleepRandomDuration(time.Millisecond * 10)
- _ = os.Remove(logFile)
- wg.Done()
- }()
-
- wg.Wait()
-
- requireEventually(t, func() bool {
- return testutil.CollectAndCount(registry, "promtail_read_bytes_total") == 0
- }, "expected read_bytes_total metric to be cleaned up")
-
- requireEventually(t, func() bool {
- return testutil.CollectAndCount(registry, "promtail_file_bytes_total") == 0
- }, "expected file_bytes_total metric to be cleaned up")
- }
-
- ps.Stop()
-}
-
// Make sure that Stop() doesn't hang if FileTarget is waiting on a channel send.
func TestFileTarget_StopAbruptly(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
@@ -367,11 +277,11 @@ func TestFileTarget_StopAbruptly(t *testing.T) {
registry := prometheus.NewRegistry()
target, err := NewFileTarget(NewMetrics(registry), logger, client, ps, pathToWatch, "", nil, nil, &Config{
SyncPeriod: 10 * time.Millisecond,
- }, DefaultWatchConig, nil, fakeHandler, "", nil)
+ }, DefaultWatchConfig, nil, fakeHandler, "", nil)
assert.NoError(t, err)
// Create a directory, still nothing is watched.
- err = os.MkdirAll(logDir1, 0750)
+ err = os.MkdirAll(logDir1, 0o750)
assert.NoError(t, err)
_, err = os.Create(logfile1)
assert.NoError(t, err)
@@ -392,12 +302,12 @@ func TestFileTarget_StopAbruptly(t *testing.T) {
// Create two directories - one more than the buffer of fakeHandler,
// so that the file target hangs until we call Stop().
- err = os.MkdirAll(logDir2, 0750)
+ err = os.MkdirAll(logDir2, 0o750)
assert.NoError(t, err)
_, err = os.Create(logfile2)
assert.NoError(t, err)
- err = os.MkdirAll(logDir3, 0750)
+ err = os.MkdirAll(logDir3, 0o750)
assert.NoError(t, err)
_, err = os.Create(logfile3)
assert.NoError(t, err)
@@ -479,7 +389,7 @@ func TestFileTargetPathExclusion(t *testing.T) {
pathExclude := filepath.Join(dirName, "log3", "*.log")
target, err := NewFileTarget(metrics, logger, client, ps, path, pathExclude, nil, nil, &Config{
SyncPeriod: 1 * time.Minute, // assure the sync is not called by the ticker
- }, DefaultWatchConig, nil, fakeHandler, "", nil)
+ }, DefaultWatchConfig, nil, fakeHandler, "", nil)
assert.NoError(t, err)
// Start with nothing watched.
@@ -491,11 +401,11 @@ func TestFileTargetPathExclusion(t *testing.T) {
}
// Create the base directories, still nothing watched.
- err = os.MkdirAll(logDir1, 0750)
+ err = os.MkdirAll(logDir1, 0o750)
assert.NoError(t, err)
- err = os.MkdirAll(logDir2, 0750)
+ err = os.MkdirAll(logDir2, 0o750)
assert.NoError(t, err)
- err = os.MkdirAll(logDir3, 0750)
+ err = os.MkdirAll(logDir3, 0o750)
assert.NoError(t, err)
err = target.sync()
@@ -571,7 +481,7 @@ func TestHandleFileCreationEvent(t *testing.T) {
logFile := filepath.Join(logDir, "test1.log")
logFileIgnored := filepath.Join(logDir, "test.donot.log")
- if err := os.MkdirAll(logDir, 0750); err != nil {
+ if err := os.MkdirAll(logDir, 0o750); err != nil {
t.Fatal(err)
}
@@ -610,7 +520,7 @@ func TestHandleFileCreationEvent(t *testing.T) {
target, err := NewFileTarget(metrics, logger, client, ps, path, pathExclude, nil, nil, &Config{
// To handle file creation event from channel, set enough long time as sync period
SyncPeriod: 10 * time.Minute,
- }, DefaultWatchConig, fakeFileHandler, fakeTargetHandler, "", nil)
+ }, DefaultWatchConfig, fakeFileHandler, fakeTargetHandler, "", nil)
if err != nil {
t.Fatal(err)
}
@@ -653,7 +563,6 @@ func TestToStopTailing(t *testing.T) {
t.Error("Results mismatch, expected", expected[i], "got", st[i])
}
}
-
}
func BenchmarkToStopTailing(b *testing.B) {
@@ -717,7 +626,6 @@ func TestMissing(t *testing.T) {
if _, ok := c["str3"]; !ok {
t.Error("Expected the set to contain str3 but it did not")
}
-
}
func requireEventually(t *testing.T, f func() bool, msg string) {
diff --git a/clients/pkg/promtail/targets/file/filetargetmanager_test.go b/clients/pkg/promtail/targets/file/filetargetmanager_test.go
index 7b1480f674f6b..cc4cf361a2048 100644
--- a/clients/pkg/promtail/targets/file/filetargetmanager_test.go
+++ b/clients/pkg/promtail/targets/file/filetargetmanager_test.go
@@ -26,7 +26,7 @@ import (
func newTestLogDirectories(t *testing.T) string {
tmpDir := t.TempDir()
logFileDir := filepath.Join(tmpDir, "logs")
- err := os.MkdirAll(logFileDir, 0750)
+ err := os.MkdirAll(logFileDir, 0o750)
assert.NoError(t, err)
return logFileDir
}
@@ -71,7 +71,7 @@ func newTestFileTargetManager(logger log.Logger, client api.EntryHandler, positi
}
metrics := NewMetrics(nil)
- ftm, err := NewFileTargetManager(metrics, logger, positions, client, []scrapeconfig.Config{sc}, tc, DefaultWatchConig)
+ ftm, err := NewFileTargetManager(metrics, logger, positions, client, []scrapeconfig.Config{sc}, tc, DefaultWatchConfig)
if err != nil {
return nil, err
}
@@ -492,7 +492,7 @@ func TestDeadlockStartWatchingDuringSync(t *testing.T) {
go func() {
for i := 0; i < 10; i++ {
dir := filepath.Join(newLogDir, fmt.Sprintf("%d", i))
- err := os.MkdirAll(dir, 0750)
+ err := os.MkdirAll(dir, 0o750)
assert.NoError(t, err)
time.Sleep(1 * time.Millisecond)
for j := 0; j < 100; j++ {
@@ -551,13 +551,13 @@ func TestLabelSetUpdate(t *testing.T) {
},
}
- var target = model.LabelSet{
+ target := model.LabelSet{
hostLabel: "localhost",
pathLabel: "baz",
"job": "foo",
}
- var target2 = model.LabelSet{
+ target2 := model.LabelSet{
hostLabel: "localhost",
pathLabel: "baz",
"job": "foo2",
@@ -593,7 +593,6 @@ func TestLabelSetUpdate(t *testing.T) {
}, targetEventHandler)
require.Equal(t, 0, len(syncer.targets))
require.Equal(t, 0, len(syncer.fileEventWatchers))
-
}
func TestFulfillKubePodSelector(t *testing.T) {
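
The tests that remain lean on the `requireEventually` helper instead of fixed sleeps, which is what makes them robust where the deleted parallel test was flaky. A minimal sketch of that polling pattern (timeouts are illustrative; the real helper wraps `require`):

```go
package main

import (
	"fmt"
	"time"
)

// eventually polls f until it returns true or the deadline passes.
func eventually(f func() bool, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if f() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("condition not met within %s", timeout)
}

func main() {
	start := time.Now()
	err := eventually(func() bool {
		return time.Since(start) > 50*time.Millisecond
	}, time.Second, 10*time.Millisecond)
	fmt.Println(err) // <nil>
}
```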
|
fix
|
remove flaky TestFileTarget_StopsTailersCleanly (#16473)
|
4e4174400fba410b9f32e0e43c1d866d283a9e62
|
2024-09-02 17:16:09
|
Sandeep Sukhani
|
fix: do not retain span logger created with index set initialized at query time (#14027)
| false
|
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go b/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go
index 8eae835b441c3..8edd121071c5e 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/index_set.go
@@ -20,7 +20,6 @@ import (
"github.com/grafana/loki/v3/pkg/storage/chunk/client/util"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/index"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/indexshipper/storage"
- util_log "github.com/grafana/loki/v3/pkg/util/log"
"github.com/grafana/loki/v3/pkg/util/spanlogger"
)
@@ -32,7 +31,7 @@ const (
var errIndexListCacheTooStale = fmt.Errorf("index list cache too stale")
type IndexSet interface {
- Init(forQuerying bool) error
+ Init(forQuerying bool, logger log.Logger) error
Close()
ForEach(ctx context.Context, callback index.ForEachIndexCallback) error
ForEachConcurrent(ctx context.Context, callback index.ForEachIndexCallback) error
@@ -94,14 +93,12 @@ func NewIndexSet(tableName, userID, cacheLocation string, baseIndexSet storage.I
}
// Init downloads all the db files for the table from object storage.
-func (t *indexSet) Init(forQuerying bool) (err error) {
+func (t *indexSet) Init(forQuerying bool, logger log.Logger) (err error) {
// Using background context to avoid cancellation of download when request times out.
// We would anyways need the files for serving next requests.
ctx := context.Background()
ctx, t.cancelFunc = context.WithTimeout(ctx, downloadTimeout)
- logger, ctx := spanlogger.NewWithLogger(ctx, t.logger, "indexSet.Init")
-
defer func() {
if err != nil {
level.Error(logger).Log("msg", "failed to initialize table, cleaning it up", "table", t.tableName, "err", err)
@@ -186,7 +183,7 @@ func (t *indexSet) ForEach(ctx context.Context, callback index.ForEachIndexCallb
}
defer t.indexMtx.rUnlock()
- logger := util_log.WithContext(ctx, t.logger)
+ logger := spanlogger.FromContextWithFallback(ctx, t.logger)
level.Debug(logger).Log("index-files-count", len(t.index))
for _, idx := range t.index {
@@ -205,7 +202,7 @@ func (t *indexSet) ForEachConcurrent(ctx context.Context, callback index.ForEach
}
defer t.indexMtx.rUnlock()
- logger := util_log.WithContext(ctx, t.logger)
+ logger := spanlogger.FromContextWithFallback(ctx, t.logger)
level.Debug(logger).Log("index-files-count", len(t.index))
if len(t.index) == 0 {
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go b/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go
index 5a2f6522de9f2..988c0457fd190 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/index_set_test.go
@@ -26,7 +26,7 @@ func buildTestIndexSet(t *testing.T, userID, path string) (*indexSet, stopFunc)
}, util_log.Logger)
require.NoError(t, err)
- require.NoError(t, idxSet.Init(false))
+ require.NoError(t, idxSet.Init(false, util_log.Logger))
return idxSet.(*indexSet), idxSet.Close
}
diff --git a/pkg/storage/stores/shipper/indexshipper/downloads/table.go b/pkg/storage/stores/shipper/indexshipper/downloads/table.go
index f329c3b41dcd8..1bae83c51e0e9 100644
--- a/pkg/storage/stores/shipper/indexshipper/downloads/table.go
+++ b/pkg/storage/stores/shipper/indexshipper/downloads/table.go
@@ -109,13 +109,14 @@ func LoadTable(name, cacheLocation string, storageClient storage.Client, openInd
}
userID := entry.Name()
+ logger := loggerWithUserID(table.logger, userID)
userIndexSet, err := NewIndexSet(name, userID, filepath.Join(cacheLocation, userID),
- table.baseUserIndexSet, openIndexFileFunc, loggerWithUserID(table.logger, userID))
+ table.baseUserIndexSet, openIndexFileFunc, logger)
if err != nil {
return nil, err
}
- err = userIndexSet.Init(false)
+ err = userIndexSet.Init(false, logger)
if err != nil {
return nil, err
}
@@ -129,7 +130,7 @@ func LoadTable(name, cacheLocation string, storageClient storage.Client, openInd
return nil, err
}
- err = commonIndexSet.Init(false)
+ err = commonIndexSet.Init(false, table.logger)
if err != nil {
return nil, err
}
@@ -287,7 +288,7 @@ func (t *table) Sync(ctx context.Context) error {
// forQuerying must be set to true only when getting the index for querying since
// it captures the amount of time it takes to download the index at query time.
func (t *table) getOrCreateIndexSet(ctx context.Context, id string, forQuerying bool) (IndexSet, error) {
- logger := spanlogger.FromContextWithFallback(ctx, log.With(t.logger, "user", id, "table", t.name))
+ logger := spanlogger.FromContextWithFallback(ctx, loggerWithUserID(t.logger, id))
t.indexSetsMtx.RLock()
indexSet, ok := t.indexSets[id]
@@ -311,7 +312,7 @@ func (t *table) getOrCreateIndexSet(ctx context.Context, id string, forQuerying
}
// instantiate the index set, add it to the map
- indexSet, err = NewIndexSet(t.name, id, filepath.Join(t.cacheLocation, id), baseIndexSet, t.openIndexFileFunc, logger)
+ indexSet, err = NewIndexSet(t.name, id, filepath.Join(t.cacheLocation, id), baseIndexSet, t.openIndexFileFunc, loggerWithUserID(t.logger, id))
if err != nil {
return nil, err
}
@@ -321,7 +322,7 @@ func (t *table) getOrCreateIndexSet(ctx context.Context, id string, forQuerying
// it is up to the caller to wait for its readiness using IndexSet.AwaitReady()
go func() {
start := time.Now()
- err := indexSet.Init(forQuerying)
+ err := indexSet.Init(forQuerying, logger)
duration := time.Since(start)
level.Info(logger).Log("msg", "init index set", "duration", duration, "success", err == nil)
|
fix
|
do not retain span logger created with index set initialized at query time (#14027)
|
7a3338ead82e4c577652ab86e9a55faf200ac05a
|
2024-05-21 15:11:42
|
Jonathan Davies
|
feat: loki/main.go: Log which config file path is used on startup (#12985)
| false
|
diff --git a/cmd/loki/main.go b/cmd/loki/main.go
index d9f4613977872..401085b3aab11 100644
--- a/cmd/loki/main.go
+++ b/cmd/loki/main.go
@@ -118,6 +118,7 @@ func main() {
}
level.Info(util_log.Logger).Log("msg", "Starting Loki", "version", version.Info())
+ level.Info(util_log.Logger).Log("msg", "Loading configuration file", "filename", config.ConfigFile)
err = t.Run(loki.RunOpts{StartTime: startTime})
util_log.CheckFatal("running loki", err, util_log.Logger)
|
feat
|
loki/main.go: Log which config file path is used on startup (#12985)
|
5fe714e4667458a161864031b92f79fdb499a2a2
|
2022-10-17 17:48:13
|
Richard Treu
|
fluentd: Add un-escaping of control characters in JSON (#7396)
| false
|
diff --git a/clients/cmd/fluentd/lib/fluent/plugin/out_loki.rb b/clients/cmd/fluentd/lib/fluent/plugin/out_loki.rb
index c72aa2adc1969..bc3429b574061 100644
--- a/clients/cmd/fluentd/lib/fluent/plugin/out_loki.rb
+++ b/clients/cmd/fluentd/lib/fluent/plugin/out_loki.rb
@@ -278,6 +278,7 @@ def record_to_line(record)
case @line_format
when :json
line = Yajl.dump(record)
+ line = line.gsub("\\n", "\n").gsub("\\r", "\r").gsub("\\s", "\s").gsub("\\t", "\t")
when :key_value
formatted_labels = []
record.each do |k, v|
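
The same post-serialization un-escaping can be expressed in Go for illustration (the Ruby `\s` escape is a literal space):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Turn escaped control sequences back into real characters after
	// JSON serialization, mirroring the gsub chain above.
	unescape := strings.NewReplacer(`\n`, "\n", `\r`, "\r", `\s`, " ", `\t`, "\t")
	line := `{"msg":"first\nsecond"}`
	fmt.Println(unescape.Replace(line))
}
```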
|
fluentd
|
Add un-escaping of control characters in JSON (#7396)
|
eb0397408678edec7def2dffc710b51f8696c7ba
|
2024-03-21 22:00:24
|
Christian Haudum
|
feat(blooms): Make bloom block compression configurable (#12293)
| false
|
diff --git a/docs/sources/configure/_index.md b/docs/sources/configure/_index.md
index c327e919f059f..39089ee4e4d49 100644
--- a/docs/sources/configure/_index.md
+++ b/docs/sources/configure/_index.md
@@ -3202,6 +3202,10 @@ shard_streams:
# CLI flag: -bloom-compactor.false-positive-rate
[bloom_false_positive_rate: <float> | default = 0.01]
+# Compression algorithm for bloom block pages.
+# CLI flag: -bloom-compactor.block-encoding
+[bloom_block_encoding: <string> | default = "none"]
+
# Maximum number of blocks will be downloaded in parallel by the Bloom Gateway.
# CLI flag: -bloom-gateway.blocks-downloading-parallelism
[bloom_gateway_blocks_downloading_parallelism: <int> | default = 50]
diff --git a/pkg/bloomcompactor/bloomcompactor_test.go b/pkg/bloomcompactor/bloomcompactor_test.go
index 70e76d41e9856..db1221fe58d2f 100644
--- a/pkg/bloomcompactor/bloomcompactor_test.go
+++ b/pkg/bloomcompactor/bloomcompactor_test.go
@@ -15,6 +15,7 @@ import (
"github.com/stretchr/testify/require"
"github.com/grafana/loki/pkg/bloomutils"
+ "github.com/grafana/loki/pkg/chunkenc"
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/config"
util_log "github.com/grafana/loki/pkg/util/log"
@@ -180,6 +181,10 @@ func (m mockLimits) BloomFalsePositiveRate(_ string) float64 {
panic("implement me")
}
+func (m mockLimits) BloomBlockEncoding(_ string) string {
+ return chunkenc.EncNone.String()
+}
+
func (m mockLimits) BloomCompactorMaxBlockSize(_ string) int {
panic("implement me")
}
diff --git a/pkg/bloomcompactor/config.go b/pkg/bloomcompactor/config.go
index fee457767647b..a80399503f4e7 100644
--- a/pkg/bloomcompactor/config.go
+++ b/pkg/bloomcompactor/config.go
@@ -3,9 +3,10 @@ package bloomcompactor
import (
"flag"
"fmt"
- "github.com/pkg/errors"
"time"
+ "github.com/pkg/errors"
+
"github.com/grafana/loki/pkg/storage/stores/shipper/indexshipper/downloads"
"github.com/grafana/loki/pkg/util/ring"
)
@@ -83,4 +84,5 @@ type Limits interface {
BloomNGramSkip(tenantID string) int
BloomFalsePositiveRate(tenantID string) float64
BloomCompactorMaxBlockSize(tenantID string) int
+ BloomBlockEncoding(tenantID string) string
}
diff --git a/pkg/bloomcompactor/controller.go b/pkg/bloomcompactor/controller.go
index 2e0a27e68904a..1f8770cc216fb 100644
--- a/pkg/bloomcompactor/controller.go
+++ b/pkg/bloomcompactor/controller.go
@@ -12,6 +12,7 @@ import (
"github.com/pkg/errors"
"github.com/prometheus/common/model"
+ "github.com/grafana/loki/pkg/chunkenc"
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper"
@@ -339,13 +340,18 @@ func (s *SimpleBloomController) buildGaps(
// With these in hand, we can download the old blocks and use them to
// accelerate bloom generation for the new blocks.
+ blockEnc, err := chunkenc.ParseEncoding(s.limits.BloomBlockEncoding(tenant))
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to parse block encoding")
+ }
+
var (
blockCt int
tsdbCt = len(work)
nGramSize = uint64(s.limits.BloomNGramLength(tenant))
nGramSkip = uint64(s.limits.BloomNGramSkip(tenant))
maxBlockSize = uint64(s.limits.BloomCompactorMaxBlockSize(tenant))
- blockOpts = v1.NewBlockOptions(nGramSize, nGramSkip, maxBlockSize)
+ blockOpts = v1.NewBlockOptions(blockEnc, nGramSize, nGramSkip, maxBlockSize)
created []bloomshipper.Meta
totalSeries int
bytesAdded int
diff --git a/pkg/bloomcompactor/spec_test.go b/pkg/bloomcompactor/spec_test.go
index 3df4564909261..29f579a8e777d 100644
--- a/pkg/bloomcompactor/spec_test.go
+++ b/pkg/bloomcompactor/spec_test.go
@@ -3,12 +3,14 @@ package bloomcompactor
import (
"bytes"
"context"
+ "fmt"
"testing"
"github.com/go-kit/log"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
+ "github.com/grafana/loki/pkg/chunkenc"
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper"
)
@@ -111,53 +113,56 @@ func dummyBloomGen(t *testing.T, opts v1.BlockOptions, store v1.Iterator[*v1.Ser
func TestSimpleBloomGenerator(t *testing.T) {
const maxBlockSize = 100 << 20 // 100MB
- for _, tc := range []struct {
- desc string
- fromSchema, toSchema v1.BlockOptions
- overlapping bool
- }{
- {
- desc: "SkipsIncompatibleSchemas",
- fromSchema: v1.NewBlockOptions(3, 0, maxBlockSize),
- toSchema: v1.NewBlockOptions(4, 0, maxBlockSize),
- },
- {
- desc: "CombinesBlocks",
- fromSchema: v1.NewBlockOptions(4, 0, maxBlockSize),
- toSchema: v1.NewBlockOptions(4, 0, maxBlockSize),
- },
- } {
- t.Run(tc.desc, func(t *testing.T) {
- sourceBlocks, data, refs := blocksFromSchemaWithRange(t, 2, tc.fromSchema, 0x00000, 0x6ffff)
- storeItr := v1.NewMapIter[v1.SeriesWithBloom, *v1.Series](
- v1.NewSliceIter[v1.SeriesWithBloom](data),
- func(swb v1.SeriesWithBloom) *v1.Series {
- return swb.Series
- },
- )
-
- gen := dummyBloomGen(t, tc.toSchema, storeItr, sourceBlocks, refs)
- results := gen.Generate(context.Background())
-
- var outputBlocks []*v1.Block
- for results.Next() {
- outputBlocks = append(outputBlocks, results.At())
- }
- // require.Equal(t, tc.outputBlocks, len(outputBlocks))
-
- // Check all the input series are present in the output blocks.
- expectedRefs := v1.PointerSlice(data)
- outputRefs := make([]*v1.SeriesWithBloom, 0, len(data))
- for _, block := range outputBlocks {
- bq := block.Querier()
- for bq.Next() {
- outputRefs = append(outputRefs, bq.At())
+ for _, enc := range []chunkenc.Encoding{chunkenc.EncNone, chunkenc.EncGZIP, chunkenc.EncSnappy} {
+ for _, tc := range []struct {
+ desc string
+ fromSchema, toSchema v1.BlockOptions
+ overlapping bool
+ }{
+ {
+ desc: "SkipsIncompatibleSchemas",
+ fromSchema: v1.NewBlockOptions(enc, 3, 0, maxBlockSize),
+ toSchema: v1.NewBlockOptions(enc, 4, 0, maxBlockSize),
+ },
+ {
+ desc: "CombinesBlocks",
+ fromSchema: v1.NewBlockOptions(enc, 4, 0, maxBlockSize),
+ toSchema: v1.NewBlockOptions(enc, 4, 0, maxBlockSize),
+ },
+ } {
+ t.Run(fmt.Sprintf("%s/%s", tc.desc, enc), func(t *testing.T) {
+ sourceBlocks, data, refs := blocksFromSchemaWithRange(t, 2, tc.fromSchema, 0x00000, 0x6ffff)
+ storeItr := v1.NewMapIter[v1.SeriesWithBloom, *v1.Series](
+ v1.NewSliceIter[v1.SeriesWithBloom](data),
+ func(swb v1.SeriesWithBloom) *v1.Series {
+ return swb.Series
+ },
+ )
+
+ gen := dummyBloomGen(t, tc.toSchema, storeItr, sourceBlocks, refs)
+ results := gen.Generate(context.Background())
+
+ var outputBlocks []*v1.Block
+ for results.Next() {
+ outputBlocks = append(outputBlocks, results.At())
}
- }
- require.Equal(t, len(expectedRefs), len(outputRefs))
- for i := range expectedRefs {
- require.Equal(t, expectedRefs[i].Series, outputRefs[i].Series)
- }
- })
+ // require.Equal(t, tc.outputBlocks, len(outputBlocks))
+
+ // Check all the input series are present in the output blocks.
+ expectedRefs := v1.PointerSlice(data)
+ outputRefs := make([]*v1.SeriesWithBloom, 0, len(data))
+ for _, block := range outputBlocks {
+ bq := block.Querier()
+ for bq.Next() {
+ outputRefs = append(outputRefs, bq.At())
+ }
+ }
+ require.Equal(t, len(expectedRefs), len(outputRefs))
+ for i := range expectedRefs {
+ require.Equal(t, expectedRefs[i].Series, outputRefs[i].Series)
+ }
+ })
+ }
}
+
}
diff --git a/pkg/storage/bloom/v1/builder.go b/pkg/storage/bloom/v1/builder.go
index 5e463734eb181..aa00b58cf6705 100644
--- a/pkg/storage/bloom/v1/builder.go
+++ b/pkg/storage/bloom/v1/builder.go
@@ -15,7 +15,7 @@ import (
)
var (
- DefaultBlockOptions = NewBlockOptions(4, 1, 50<<20) // 50MB
+ DefaultBlockOptions = NewBlockOptions(0, 4, 1, 50<<20) // EncNone, 50MB
)
type BlockOptions struct {
@@ -70,9 +70,10 @@ type BlockBuilder struct {
blooms *BloomBlockBuilder
}
-func NewBlockOptions(NGramLength, NGramSkip, MaxBlockSizeBytes uint64) BlockOptions {
+func NewBlockOptions(enc chunkenc.Encoding, NGramLength, NGramSkip, MaxBlockSizeBytes uint64) BlockOptions {
opts := NewBlockOptionsFromSchema(Schema{
version: byte(1),
+ encoding: enc,
nGramLength: NGramLength,
nGramSkip: NGramSkip,
})
diff --git a/pkg/validation/limits.go b/pkg/validation/limits.go
index 2be257af59cb5..e159fbf018f1b 100644
--- a/pkg/validation/limits.go
+++ b/pkg/validation/limits.go
@@ -19,6 +19,7 @@ import (
"golang.org/x/time/rate"
"gopkg.in/yaml.v2"
+ "github.com/grafana/loki/pkg/chunkenc"
"github.com/grafana/loki/pkg/compactor/deletionmode"
"github.com/grafana/loki/pkg/distributor/shardstreams"
"github.com/grafana/loki/pkg/loghttp/push"
@@ -196,6 +197,7 @@ type Limits struct {
BloomNGramLength int `yaml:"bloom_ngram_length" json:"bloom_ngram_length"`
BloomNGramSkip int `yaml:"bloom_ngram_skip" json:"bloom_ngram_skip"`
BloomFalsePositiveRate float64 `yaml:"bloom_false_positive_rate" json:"bloom_false_positive_rate"`
+ BloomBlockEncoding string `yaml:"bloom_block_encoding" json:"bloom_block_encoding"`
BloomGatewayBlocksDownloadingParallelism int `yaml:"bloom_gateway_blocks_downloading_parallelism" json:"bloom_gateway_blocks_downloading_parallelism"`
BloomGatewayCacheKeyInterval time.Duration `yaml:"bloom_gateway_cache_key_interval" json:"bloom_gateway_cache_key_interval"`
BloomCompactorMaxBlockSize flagext.ByteSize `yaml:"bloom_compactor_max_block_size" json:"bloom_compactor_max_block_size"`
@@ -336,6 +338,7 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&l.BloomNGramLength, "bloom-compactor.ngram-length", 4, "Length of the n-grams created when computing blooms from log lines.")
f.IntVar(&l.BloomNGramSkip, "bloom-compactor.ngram-skip", 1, "Skip factor for the n-grams created when computing blooms from log lines.")
f.Float64Var(&l.BloomFalsePositiveRate, "bloom-compactor.false-positive-rate", 0.01, "Scalable Bloom Filter desired false-positive rate.")
+ f.StringVar(&l.BloomBlockEncoding, "bloom-compactor.block-encoding", "none", "Compression algorithm for bloom block pages.")
f.IntVar(&l.BloomGatewayBlocksDownloadingParallelism, "bloom-gateway.blocks-downloading-parallelism", 50, "Maximum number of blocks will be downloaded in parallel by the Bloom Gateway.")
f.DurationVar(&l.BloomGatewayCacheKeyInterval, "bloom-gateway.cache-key-interval", 15*time.Minute, "Interval for computing the cache key in the Bloom Gateway.")
_ = l.BloomCompactorMaxBlockSize.Set(defaultBloomCompactorMaxBlockSize)
@@ -429,6 +432,10 @@ func (l *Limits) Validate() error {
return err
}
+ if _, err := chunkenc.ParseEncoding(l.BloomBlockEncoding); err != nil {
+ return err
+ }
+
return nil
}
@@ -922,6 +929,10 @@ func (o *Overrides) BloomFalsePositiveRate(userID string) float64 {
return o.getOverridesForUser(userID).BloomFalsePositiveRate
}
+func (o *Overrides) BloomBlockEncoding(userID string) string {
+ return o.getOverridesForUser(userID).BloomBlockEncoding
+}
+
func (o *Overrides) AllowStructuredMetadata(userID string) bool {
return o.getOverridesForUser(userID).AllowStructuredMetadata
}
diff --git a/pkg/validation/limits_test.go b/pkg/validation/limits_test.go
index 0fb6dcbae2ef0..59626aeb8cdbe 100644
--- a/pkg/validation/limits_test.go
+++ b/pkg/validation/limits_test.go
@@ -2,6 +2,7 @@ package validation
import (
"encoding/json"
+ "fmt"
"reflect"
"testing"
"time"
@@ -11,6 +12,7 @@ import (
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
+ "github.com/grafana/loki/pkg/chunkenc"
"github.com/grafana/loki/pkg/compactor/deletionmode"
"github.com/grafana/loki/pkg/loghttp/push"
)
@@ -308,19 +310,39 @@ query_timeout: 5m
}
}
-func TestLimitsValidation_deletionMode(t *testing.T) {
+func TestLimitsValidation(t *testing.T) {
for _, tc := range []struct {
- mode string
+ limits Limits
expected error
}{
- {mode: "disabled", expected: nil},
- {mode: "filter-only", expected: nil},
- {mode: "filter-and-delete", expected: nil},
- {mode: "something-else", expected: deletionmode.ErrUnknownMode},
+ {
+ limits: Limits{DeletionMode: "disabled", BloomBlockEncoding: "none"},
+ expected: nil,
+ },
+ {
+ limits: Limits{DeletionMode: "filter-only", BloomBlockEncoding: "none"},
+ expected: nil,
+ },
+ {
+ limits: Limits{DeletionMode: "filter-and-delete", BloomBlockEncoding: "none"},
+ expected: nil,
+ },
+ {
+ limits: Limits{DeletionMode: "something-else", BloomBlockEncoding: "none"},
+ expected: deletionmode.ErrUnknownMode,
+ },
+ {
+ limits: Limits{DeletionMode: "disabled", BloomBlockEncoding: "unknown"},
+ expected: fmt.Errorf("invalid encoding: unknown, supported: %s", chunkenc.SupportedEncoding()),
+ },
} {
- t.Run(tc.mode, func(t *testing.T) {
- limits := Limits{DeletionMode: tc.mode}
- require.ErrorIs(t, limits.Validate(), tc.expected)
+ desc := fmt.Sprintf("%s/%s", tc.limits.DeletionMode, tc.limits.BloomBlockEncoding)
+ t.Run(desc, func(t *testing.T) {
+ if tc.expected == nil {
+ require.NoError(t, tc.limits.Validate())
+ } else {
+ require.ErrorContains(t, tc.limits.Validate(), tc.expected.Error())
+ }
})
}
}
|
feat
|
Make bloom block compression configurable (#12293)
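
A self-contained sketch of the validate-once pattern the commit applies (`Limits.Validate` calls `chunkenc.ParseEncoding` so a bad per-tenant override fails at config time, not at block-build time). `Encoding` and the codec set here are simplified stand-ins for Loki's `chunkenc` package:

```go
package main

import (
	"fmt"
	"strings"
)

// Encoding is a simplified stand-in for chunkenc.Encoding; the real
// type in Loki supports more codecs.
type Encoding byte

const (
	EncNone Encoding = iota
	EncGZIP
	EncSnappy
)

var supported = map[string]Encoding{"none": EncNone, "gzip": EncGZIP, "snappy": EncSnappy}

// parseEncoding validates a user-supplied encoding name once, at
// configuration time, so an invalid value is rejected up front.
func parseEncoding(s string) (Encoding, error) {
	if e, ok := supported[strings.ToLower(s)]; ok {
		return e, nil
	}
	return 0, fmt.Errorf("invalid encoding: %s", s)
}

func main() {
	if _, err := parseEncoding("unknown"); err != nil {
		fmt.Println(err) // invalid encoding: unknown
	}
}
```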
|
9a99859465f6e4306cbbf14526fdd7a17abd2a4f
|
2025-03-12 02:56:00
|
renovate[bot]
|
fix(deps): update module github.com/aws/aws-sdk-go-v2/service/s3 to v1.78.2 (main) (#16694)
| false
|
diff --git a/tools/lambda-promtail/go.mod b/tools/lambda-promtail/go.mod
index 13d229beecbc2..feffffe594bc6 100644
--- a/tools/lambda-promtail/go.mod
+++ b/tools/lambda-promtail/go.mod
@@ -8,7 +8,7 @@ require (
github.com/aws/aws-lambda-go v1.47.0
github.com/aws/aws-sdk-go-v2 v1.36.3
github.com/aws/aws-sdk-go-v2/config v1.29.9
- github.com/aws/aws-sdk-go-v2/service/s3 v1.78.1
+ github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2
github.com/go-kit/log v0.2.1
github.com/gogo/protobuf v1.3.2
github.com/golang/snappy v1.0.0
@@ -36,7 +36,7 @@ require (
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.3 // indirect
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.3 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.6.2 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15 // indirect
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.25.1 // indirect
diff --git a/tools/lambda-promtail/go.sum b/tools/lambda-promtail/go.sum
index 59997d79572e9..b736b43c63a6f 100644
--- a/tools/lambda-promtail/go.sum
+++ b/tools/lambda-promtail/go.sum
@@ -75,14 +75,14 @@ github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34 h1:ZNTqv4nIdE/DiBfUUfXcLZ/Spcu
github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.34/go.mod h1:zf7Vcd1ViW7cPqYWEHLHJkS50X0JS2IKz9Cgaj6ugrs=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.3 h1:eAh2A4b5IzM/lum78bZ590jy36+d/aFLgKF/4Vd1xPE=
github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.3/go.mod h1:0yKJC/kb8sAnmlYa6Zs3QVYqaC8ug2AbnNChv5Ox3uA=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.6.2 h1:t/gZFyrijKuSU0elA5kRngP/oU3mc0I+Dvp8HwRE4c0=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.6.2/go.mod h1:iu6FSzgt+M2/x3Dk8zhycdIcHjEFb36IS8HVUVFoMg0=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0 h1:lguz0bmOoGzozP9XfRJR1QIayEYo+2vP/No3OfLF0pU=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.7.0/go.mod h1:iu6FSzgt+M2/x3Dk8zhycdIcHjEFb36IS8HVUVFoMg0=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15 h1:dM9/92u2F1JbDaGooxTq18wmmFzbJRfXfVfy96/1CXM=
github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15/go.mod h1:SwFBy2vjtA0vZbjjaFtfN045boopadnoVPhu4Fv66vY=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 h1:moLQUoVq91LiqT1nbvzDukyqAlCv89ZmwaHw/ZFlFZg=
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15/go.mod h1:ZH34PJUc8ApjBIfgQCFvkWcUDBtl/WTD+uiYHjd8igA=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.78.1 h1:1M0gSbyP6q06gl3384wpoKPaH9G16NPqZFieEhLboSU=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.78.1/go.mod h1:4qzsZSzB/KiX2EzDjs9D7A8rI/WGJxZceVJIHqtJjIU=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2 h1:jIiopHEV22b4yQP2q36Y0OmwLbsxNWdWwfZRR5QRRO4=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.78.2/go.mod h1:U5SNqwhXB3Xe6F47kXvWihPl/ilGaEDe8HD/50Z9wxc=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.1 h1:8JdC7Gr9NROg1Rusk25IcZeTO59zLxsKgE0gkh5O6h0=
github.com/aws/aws-sdk-go-v2/service/sso v1.25.1/go.mod h1:qs4a9T5EMLl/Cajiw2TcbNt2UNo/Hqlyp+GiuG4CFDI=
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.29.1 h1:KwuLovgQPcdjNMfFt9OhUd9a2OwcOKhxfvF4glTzLuA=
@@ -208,8 +208,6 @@ github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaS
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
-github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
-github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gomodule/redigo v1.8.9 h1:Sl3u+2BI/kk+VEatbj0scLdrFhjPmbxOc1myhDP41ws=
|
fix
|
update module github.com/aws/aws-sdk-go-v2/service/s3 to v1.78.2 (main) (#16694)
|
863acc015279cdf1e15ba8936bc5c48e03e24278
|
2024-03-22 01:43:38
|
Christian Haudum
|
chore(blooms): Make block query concurrency configurable (#12292)
| false
|
diff --git a/docs/sources/configure/_index.md b/docs/sources/configure/_index.md
index b49aaffb65413..ba96be3b3d604 100644
--- a/docs/sources/configure/_index.md
+++ b/docs/sources/configure/_index.md
@@ -1939,6 +1939,10 @@ client:
# CLI flag: -bloom-gateway.worker-concurrency
[worker_concurrency: <int> | default = 4]
+# Number of blocks processed concurrently on a single worker.
+# CLI flag: -bloom-gateway.block-query-concurrency
+[block_query_concurrency: <int> | default = 4]
+
# Maximum number of outstanding tasks per tenant.
# CLI flag: -bloom-gateway.max-outstanding-per-tenant
[max_outstanding_per_tenant: <int> | default = 1024]
diff --git a/pkg/bloomgateway/bloomgateway.go b/pkg/bloomgateway/bloomgateway.go
index 4138ff4c1beb2..f76d6a55d2a09 100644
--- a/pkg/bloomgateway/bloomgateway.go
+++ b/pkg/bloomgateway/bloomgateway.go
@@ -114,7 +114,8 @@ func New(cfg Config, store bloomshipper.Store, logger log.Logger, reg prometheus
logger: logger,
metrics: newMetrics(reg, constants.Loki, metricsSubsystem),
workerConfig: workerConfig{
- maxItems: cfg.NumMultiplexItems,
+ maxItems: cfg.NumMultiplexItems,
+ queryConcurrency: cfg.BlockQueryConcurrency,
},
pendingTasks: &atomic.Int64{},
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index b96eeeec19c7e..45c9a3926c157 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -138,7 +138,6 @@ func TestBloomGateway_StartStopService(t *testing.T) {
func TestBloomGateway_FilterChunkRefs(t *testing.T) {
tenantID := "test"
- store := setupBloomStore(t)
logger := log.NewNopLogger()
reg := prometheus.NewRegistry()
@@ -156,7 +155,8 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
ReplicationFactor: 1,
NumTokens: 16,
},
- WorkerConcurrency: 4,
+ WorkerConcurrency: 2,
+ BlockQueryConcurrency: 2,
MaxOutstandingPerTenant: 1024,
}
@@ -249,7 +249,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
now := mktime("2023-10-03 10:00")
reg := prometheus.NewRegistry()
- gw, err := New(cfg, store, logger, reg)
+ gw, err := New(cfg, newMockBloomStore(nil, nil), logger, reg)
require.NoError(t, err)
err = services.StartAndAwaitRunning(context.Background(), gw)
@@ -294,7 +294,7 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
now := mktime("2023-10-03 10:00")
reg := prometheus.NewRegistry()
- gw, err := New(cfg, store, logger, reg)
+ gw, err := New(cfg, newMockBloomStore(nil, nil), logger, reg)
require.NoError(t, err)
err = services.StartAndAwaitRunning(context.Background(), gw)
@@ -333,15 +333,13 @@ func TestBloomGateway_FilterChunkRefs(t *testing.T) {
t.Run("use fuse queriers to filter chunks", func(t *testing.T) {
now := mktime("2023-10-03 10:00")
- reg := prometheus.NewRegistry()
- gw, err := New(cfg, store, logger, reg)
- require.NoError(t, err)
-
// replace store implementation and re-initialize workers and sub-services
_, metas, queriers, data := createBlocks(t, tenantID, 10, now.Add(-1*time.Hour), now, 0x0000, 0x0fff)
- gw.bloomStore = newMockBloomStore(queriers, metas)
- err = gw.initServices()
+ reg := prometheus.NewRegistry()
+ store := newMockBloomStore(queriers, metas)
+
+ gw, err := New(cfg, store, logger, reg)
require.NoError(t, err)
err = services.StartAndAwaitRunning(context.Background(), gw)
diff --git a/pkg/bloomgateway/config.go b/pkg/bloomgateway/config.go
index ad5d2928728a6..356bc782fb839 100644
--- a/pkg/bloomgateway/config.go
+++ b/pkg/bloomgateway/config.go
@@ -18,6 +18,7 @@ type Config struct {
Client ClientConfig `yaml:"client,omitempty" doc:""`
WorkerConcurrency int `yaml:"worker_concurrency"`
+ BlockQueryConcurrency int `yaml:"block_query_concurrency"`
MaxOutstandingPerTenant int `yaml:"max_outstanding_per_tenant"`
NumMultiplexItems int `yaml:"num_multiplex_tasks"`
}
@@ -31,6 +32,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.BoolVar(&cfg.Enabled, prefix+"enabled", false, "Flag to enable or disable the bloom gateway component globally.")
f.IntVar(&cfg.WorkerConcurrency, prefix+"worker-concurrency", 4, "Number of workers to use for filtering chunks concurrently.")
+ f.IntVar(&cfg.BlockQueryConcurrency, prefix+"block-query-concurrency", 4, "Number of blocks processed concurrently on a single worker.")
f.IntVar(&cfg.MaxOutstandingPerTenant, prefix+"max-outstanding-per-tenant", 1024, "Maximum number of outstanding tasks per tenant.")
f.IntVar(&cfg.NumMultiplexItems, prefix+"num-multiplex-tasks", 512, "How many tasks are multiplexed at once.")
// TODO(chaudum): Figure out what the better place is for registering flags
diff --git a/pkg/bloomgateway/processor.go b/pkg/bloomgateway/processor.go
index 9a503551d3d23..9fc6aca57dc11 100644
--- a/pkg/bloomgateway/processor.go
+++ b/pkg/bloomgateway/processor.go
@@ -11,25 +11,28 @@ import (
"github.com/pkg/errors"
"github.com/grafana/dskit/concurrency"
+
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/config"
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper"
)
-func newProcessor(id string, store bloomshipper.Store, logger log.Logger, metrics *workerMetrics) *processor {
+func newProcessor(id string, concurrency int, store bloomshipper.Store, logger log.Logger, metrics *workerMetrics) *processor {
return &processor{
- id: id,
- store: store,
- logger: logger,
- metrics: metrics,
+ id: id,
+ concurrency: concurrency,
+ store: store,
+ logger: logger,
+ metrics: metrics,
}
}
type processor struct {
- id string
- store bloomshipper.Store
- logger log.Logger
- metrics *workerMetrics
+ id string
+ concurrency int // concurrency at which bloom blocks are processed
+ store bloomshipper.Store
+ logger log.Logger
+ metrics *workerMetrics
}
func (p *processor) run(ctx context.Context, tasks []Task) error {
@@ -94,8 +97,7 @@ func (p *processor) processBlocks(ctx context.Context, data []blockWithTasks) er
return err
}
- // TODO(chaudum): What's a good cocurrency value?
- return concurrency.ForEachJob(ctx, len(bqs), 10, func(ctx context.Context, i int) error {
+ return concurrency.ForEachJob(ctx, len(bqs), p.concurrency, func(ctx context.Context, i int) error {
bq := bqs[i]
if bq == nil {
// TODO(chaudum): Add metric for skipped blocks
diff --git a/pkg/bloomgateway/processor_test.go b/pkg/bloomgateway/processor_test.go
index 246da373f3574..d9e6a799045e3 100644
--- a/pkg/bloomgateway/processor_test.go
+++ b/pkg/bloomgateway/processor_test.go
@@ -96,7 +96,7 @@ func TestProcessor(t *testing.T) {
_, metas, queriers, data := createBlocks(t, tenant, 10, now.Add(-1*time.Hour), now, 0x0000, 0x0fff)
mockStore := newMockBloomStore(queriers, metas)
- p := newProcessor("worker", mockStore, log.NewNopLogger(), metrics)
+ p := newProcessor("worker", 1, mockStore, log.NewNopLogger(), metrics)
chunkRefs := createQueryInputFromBlockData(t, tenant, data, 10)
swb := seriesWithInterval{
@@ -145,7 +145,7 @@ func TestProcessor(t *testing.T) {
mockStore := newMockBloomStore(queriers, metas)
mockStore.err = errors.New("store failed")
- p := newProcessor("worker", mockStore, log.NewNopLogger(), metrics)
+ p := newProcessor("worker", 1, mockStore, log.NewNopLogger(), metrics)
chunkRefs := createQueryInputFromBlockData(t, tenant, data, 10)
swb := seriesWithInterval{
diff --git a/pkg/bloomgateway/worker.go b/pkg/bloomgateway/worker.go
index 5a37b059e75e9..52de8155d7783 100644
--- a/pkg/bloomgateway/worker.go
+++ b/pkg/bloomgateway/worker.go
@@ -22,7 +22,8 @@ const (
)
type workerConfig struct {
- maxItems int
+ maxItems int
+ queryConcurrency int
}
// worker is a datastructure that consumes tasks from the request queue,
@@ -65,7 +66,7 @@ func (w *worker) starting(_ context.Context) error {
func (w *worker) running(_ context.Context) error {
idx := queue.StartIndexWithLocalQueue
- p := newProcessor(w.id, w.store, w.logger, w.metrics)
+ p := newProcessor(w.id, w.cfg.queryConcurrency, w.store, w.logger, w.metrics)
for st := w.State(); st == services.Running || st == services.Stopping; {
taskCtx := context.Background()
diff --git a/pkg/distributor/http_test.go b/pkg/distributor/http_test.go
index b5b4bebbd58fb..23b2993c5b213 100644
--- a/pkg/distributor/http_test.go
+++ b/pkg/distributor/http_test.go
@@ -2,14 +2,16 @@ package distributor
import (
"context"
- "github.com/grafana/dskit/user"
- "github.com/grafana/loki/pkg/loghttp/push"
- "github.com/grafana/loki/pkg/logproto"
"io"
"net/http"
"net/http/httptest"
"testing"
+ "github.com/grafana/dskit/user"
+
+ "github.com/grafana/loki/pkg/loghttp/push"
+ "github.com/grafana/loki/pkg/logproto"
+
"github.com/grafana/dskit/flagext"
"github.com/stretchr/testify/require"
diff --git a/pkg/logql/log/filter.go b/pkg/logql/log/filter.go
index 7117b77805d61..7b613947c8b8b 100644
--- a/pkg/logql/log/filter.go
+++ b/pkg/logql/log/filter.go
@@ -9,8 +9,9 @@ import (
"github.com/grafana/regexp"
"github.com/grafana/regexp/syntax"
- "github.com/grafana/loki/pkg/util"
"github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/pkg/util"
)
// Checker is an interface that matches against the input line or regexp.
diff --git a/pkg/querier/ingester_querier_test.go b/pkg/querier/ingester_querier_test.go
index 2eee6dc0ae072..d5f4d872c5084 100644
--- a/pkg/querier/ingester_querier_test.go
+++ b/pkg/querier/ingester_querier_test.go
@@ -3,11 +3,12 @@ package querier
import (
"context"
"errors"
- "go.uber.org/atomic"
"sync"
"testing"
"time"
+ "go.uber.org/atomic"
+
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
diff --git a/pkg/storage/chunk/predicate.go b/pkg/storage/chunk/predicate.go
index 391b1e9163235..62a91c7a46437 100644
--- a/pkg/storage/chunk/predicate.go
+++ b/pkg/storage/chunk/predicate.go
@@ -1,8 +1,9 @@
package chunk
import (
- "github.com/grafana/loki/pkg/querier/plan"
"github.com/prometheus/prometheus/model/labels"
+
+ "github.com/grafana/loki/pkg/querier/plan"
)
type Predicate struct {
diff --git a/pkg/storage/stores/shipper/indexshipper/indexgateway/config.go b/pkg/storage/stores/shipper/indexshipper/indexgateway/config.go
index 884d29bf9e37c..eb5c134a5de18 100644
--- a/pkg/storage/stores/shipper/indexshipper/indexgateway/config.go
+++ b/pkg/storage/stores/shipper/indexshipper/indexgateway/config.go
@@ -3,6 +3,7 @@ package indexgateway
import (
"flag"
"fmt"
+
"github.com/pkg/errors"
"github.com/grafana/loki/pkg/util/ring"
|
chore
|
Make block query concurrency configurable (#12292)
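
The worker change replaces a hard-coded fan-out of 10 with the configurable `p.concurrency`, passed to dskit's `concurrency.ForEachJob`. A dependency-free sketch of that bounded fan-out shape (error propagation omitted for brevity):

```go
package main

import (
	"fmt"
	"sync"
)

// forEachJob runs fn for each of n jobs with at most `concurrency`
// goroutines in flight -- the same shape as dskit's
// concurrency.ForEachJob used in the commit.
func forEachJob(n, concurrency int, fn func(i int)) {
	sem := make(chan struct{}, concurrency)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		sem <- struct{}{} // blocks while `concurrency` jobs are running
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }()
			fn(i)
		}(i)
	}
	wg.Wait()
}

func main() {
	forEachJob(10, 4, func(i int) { fmt.Println("processing block", i) })
}
```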
|
f2da621f7255c678774fb0e99c0b7fb68f6a6a69
|
2024-11-08 21:44:57
|
benclive
|
fix: Add flags for path & configure kafka for non-memberlist kv store (#14850)
| false
|
diff --git a/docs/sources/shared/configuration.md b/docs/sources/shared/configuration.md
index a4e9c682b9afe..39cdf3da675a5 100644
--- a/docs/sources/shared/configuration.md
+++ b/docs/sources/shared/configuration.md
@@ -1620,6 +1620,8 @@ The `chunk_store_config` block configures how chunks will be cached and how long
Common configuration to be shared between multiple modules. If a more specific configuration is given in other sections, the related configuration within this section will be ignored.
```yaml
+# prefix for the path
+# CLI flag: -common.path-prefix
[path_prefix: <string> | default = ""]
storage:
diff --git a/pkg/loki/common/common.go b/pkg/loki/common/common.go
index f1a3c58055d0e..cc280e19bd514 100644
--- a/pkg/loki/common/common.go
+++ b/pkg/loki/common/common.go
@@ -66,6 +66,7 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
f.StringVar(&c.CompactorAddress, "common.compactor-address", "", "the http address of the compactor in the form http://host:port")
f.StringVar(&c.CompactorGRPCAddress, "common.compactor-grpc-address", "", "the grpc address of the compactor in the form host:port")
+ f.StringVar(&c.PathPrefix, "common.path-prefix", "", "prefix for the path")
}
type Storage struct {
diff --git a/pkg/loki/config_wrapper.go b/pkg/loki/config_wrapper.go
index a09792cd403ae..4fa7800ef8661 100644
--- a/pkg/loki/config_wrapper.go
+++ b/pkg/loki/config_wrapper.go
@@ -244,6 +244,7 @@ func applyConfigToRings(r, defaults *ConfigWrapper, rc lokiring.RingConfig, merg
r.Ingester.LifecyclerConfig.Zone = rc.InstanceZone
r.Ingester.LifecyclerConfig.ListenPort = rc.ListenPort
r.Ingester.LifecyclerConfig.ObservePeriod = rc.ObservePeriod
+ r.Ingester.KafkaIngestion.PartitionRingConfig.KVStore = rc.KVStore
}
if mergeWithExisting {
|
fix
|
Add flags for path & configure kafka for non-memberlist kv store (#14850)
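
A minimal, runnable sketch of the stdlib flag registration pattern added in `common.go`; `commonConfig` is a stand-in struct, while `-common.path-prefix` is the real flag name the commit introduces:

```go
package main

import (
	"flag"
	"fmt"
)

// commonConfig is a hypothetical stand-in for Loki's common.Config.
type commonConfig struct {
	PathPrefix string
}

func (c *commonConfig) RegisterFlags(f *flag.FlagSet) {
	f.StringVar(&c.PathPrefix, "common.path-prefix", "", "prefix for the path")
}

func main() {
	var c commonConfig
	fs := flag.NewFlagSet("loki", flag.ExitOnError)
	c.RegisterFlags(fs)
	_ = fs.Parse([]string{"-common.path-prefix", "/var/loki"})
	fmt.Println(c.PathPrefix) // /var/loki
}
```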
|
7c7c5e3511cce1579e061a0cbf2400907ba59d33
|
2023-08-23 19:00:52
|
dependabot[bot]
|
build(deps): bump golangci/golangci-lint-action from 3.5.0 to 3.7.0 (#10303)
| false
|
diff --git a/.github/workflows/operator.yaml b/.github/workflows/operator.yaml
index a5c0793d72695..0cb73e51419cc 100644
--- a/.github/workflows/operator.yaml
+++ b/.github/workflows/operator.yaml
@@ -49,7 +49,7 @@ jobs:
id: go
- uses: actions/checkout@v3
- name: Lint
- uses: golangci/[email protected]
+ uses: golangci/[email protected]
with:
version: v1.53.3
args: --timeout=4m
|
build
|
bump golangci/golangci-lint-action from 3.5.0 to 3.7.0 (#10303)
|
6df81db978b0157ab96fa0629a311f919dad1e8a
|
2024-06-04 20:30:39
|
benclive
|
feat: Support negative numbers in LogQL (#13091)
| false
|
diff --git a/pkg/logql/syntax/ast_test.go b/pkg/logql/syntax/ast_test.go
index ec458849331e6..9090fc98b7558 100644
--- a/pkg/logql/syntax/ast_test.go
+++ b/pkg/logql/syntax/ast_test.go
@@ -49,6 +49,7 @@ func Test_logSelectorExpr_String(t *testing.T) {
{`{foo="bar"} |= "baz" |~ "blip" != "flip" !~ "flap" | logfmt | b=ip("127.0.0.1") | level="error" | c=ip("::1")`, true}, // chain inside label filters.
{`{foo="bar"} |= "baz" |~ "blip" != "flip" !~ "flap" | regexp "(?P<foo>foo|bar)"`, true},
{`{foo="bar"} |= "baz" |~ "blip" != "flip" !~ "flap" | regexp "(?P<foo>foo|bar)" | ( ( foo<5.01 , bar>20ms ) or foo="bar" ) | line_format "blip{{.boop}}bap" | label_format foo=bar,bar="blip{{.blop}}"`, true},
+ {`{foo="bar"} | logfmt | counter>-1 | counter>=-1 | counter<-1 | counter<=-1 | counter!=-1 | counter==-1`, true},
}
for _, tt := range tests {
@@ -76,6 +77,7 @@ func Test_logSelectorExpr_String(t *testing.T) {
func Test_SampleExpr_String(t *testing.T) {
t.Parallel()
for _, tc := range []string{
+ `rate( ( {job="mysql"} |="error" !="timeout" ) [10s] )>-1`,
`rate( ( {job="mysql"} |="error" !="timeout" ) [10s] )`,
`absent_over_time( ( {job="mysql"} |="error" !="timeout" ) [10s] )`,
`absent_over_time( ( {job="mysql"} |="error" !="timeout" ) [10s] offset 10d )`,
@@ -556,6 +558,27 @@ func Test_FilterMatcher(t *testing.T) {
},
[]linecheck{{"foo", false}, {"bar", false}, {"none", true}},
},
+ {
+ `{app="foo"} | logfmt | duration > -1s`,
+ []*labels.Matcher{
+ mustNewMatcher(labels.MatchEqual, "app", "foo"),
+ },
+ []linecheck{{"duration=5m", true}, {"duration=1s", true}, {"duration=0s", true}, {"duration=-5m", false}},
+ },
+ {
+ `{app="foo"} | logfmt | count > -1`,
+ []*labels.Matcher{
+ mustNewMatcher(labels.MatchEqual, "app", "foo"),
+ },
+ []linecheck{{"count=5", true}, {"count=1", true}, {"count=0", true}, {"count=-5", false}},
+ },
+ {
+ `{app="foo"} | logfmt | counter <= -1`,
+ []*labels.Matcher{
+ mustNewMatcher(labels.MatchEqual, "app", "foo"),
+ },
+ []linecheck{{"counter=1", false}, {"counter=0", false}, {"counter=-1", true}, {"counter=-2", true}},
+ },
} {
tt := tt
t.Run(tt.q, func(t *testing.T) {
diff --git a/pkg/logql/syntax/expr.y b/pkg/logql/syntax/expr.y
index 7f443831159bb..fb6aa7f344a6b 100644
--- a/pkg/logql/syntax/expr.y
+++ b/pkg/logql/syntax/expr.y
@@ -400,13 +400,13 @@ bytesFilter:
;
numberFilter:
- IDENTIFIER GT NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterGreaterThan, $1, mustNewFloat($3))}
- | IDENTIFIER GTE NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, $1, mustNewFloat($3))}
- | IDENTIFIER LT NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterLesserThan, $1, mustNewFloat($3))}
- | IDENTIFIER LTE NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterLesserThanOrEqual, $1, mustNewFloat($3))}
- | IDENTIFIER NEQ NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterNotEqual, $1, mustNewFloat($3))}
- | IDENTIFIER EQ NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterEqual, $1, mustNewFloat($3))}
- | IDENTIFIER CMP_EQ NUMBER { $$ = log.NewNumericLabelFilter(log.LabelFilterEqual, $1, mustNewFloat($3))}
+ IDENTIFIER GT literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterGreaterThan, $1, $3.Val)}
+ | IDENTIFIER GTE literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, $1,$3.Val)}
+ | IDENTIFIER LT literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterLesserThan, $1, $3.Val)}
+ | IDENTIFIER LTE literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterLesserThanOrEqual, $1, $3.Val)}
+ | IDENTIFIER NEQ literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterNotEqual, $1, $3.Val)}
+ | IDENTIFIER EQ literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterEqual, $1, $3.Val)}
+ | IDENTIFIER CMP_EQ literalExpr { $$ = log.NewNumericLabelFilter(log.LabelFilterEqual, $1, $3.Val)}
;
dropLabel:
diff --git a/pkg/logql/syntax/expr.y.go b/pkg/logql/syntax/expr.y.go
index 2d322514a75fc..4918cf5cdc81a 100644
--- a/pkg/logql/syntax/expr.y.go
+++ b/pkg/logql/syntax/expr.y.go
@@ -1,15 +1,19 @@
-// Code generated by goyacc -p expr -o pkg/logql/syntax/expr.y.go pkg/logql/syntax/expr.y. DO NOT EDIT.
+// Code generated by goyacc -p expr -o expr.y.go expr.y. DO NOT EDIT.
+//line expr.y:2
package syntax
import __yyfmt__ "fmt"
+//line expr.y:2
+
import (
"github.com/grafana/loki/v3/pkg/logql/log"
"github.com/prometheus/prometheus/model/labels"
"time"
)
+//line expr.y:12
type exprSymType struct {
yys int
Expr Expr
@@ -263,13 +267,17 @@ var exprToknames = [...]string{
"MOD",
"POW",
}
+
var exprStatenames = [...]string{}
const exprEofCode = 1
const exprErrCode = 2
const exprInitialStackSize = 16
-var exprExca = [...]int{
+//line expr.y:582
+
+//line yacctab:1
+var exprExca = [...]int8{
-1, 1,
1, -1,
-2, 0,
@@ -277,125 +285,127 @@ var exprExca = [...]int{
const exprPrivate = 57344
-const exprLast = 608
-
-var exprAct = [...]int{
+const exprLast = 639
+var exprAct = [...]int16{
289, 228, 84, 4, 214, 64, 182, 126, 204, 189,
75, 200, 197, 63, 237, 5, 152, 187, 77, 2,
56, 80, 48, 49, 50, 57, 58, 61, 62, 59,
- 60, 51, 52, 53, 54, 55, 56, 49, 50, 57,
- 58, 61, 62, 59, 60, 51, 52, 53, 54, 55,
- 56, 57, 58, 61, 62, 59, 60, 51, 52, 53,
- 54, 55, 56, 51, 52, 53, 54, 55, 56, 109,
- 148, 150, 151, 115, 53, 54, 55, 56, 207, 150,
- 151, 283, 217, 139, 166, 167, 292, 156, 215, 295,
- 216, 72, 74, 161, 72, 74, 164, 165, 154, 69,
- 70, 71, 69, 70, 71, 297, 347, 140, 67, 294,
- 163, 366, 136, 366, 168, 169, 170, 171, 172, 173,
- 174, 175, 176, 177, 178, 179, 180, 181, 184, 230,
- 339, 292, 94, 130, 255, 136, 339, 194, 85, 86,
- 191, 149, 202, 206, 293, 136, 386, 369, 213, 208,
- 211, 212, 209, 210, 142, 219, 130, 363, 306, 141,
- 73, 184, 235, 73, 356, 381, 130, 239, 229, 374,
- 294, 231, 232, 142, 110, 240, 294, 122, 123, 121,
- 227, 131, 133, 297, 294, 72, 74, 185, 183, 316,
- 248, 249, 250, 69, 70, 71, 373, 298, 371, 124,
- 295, 125, 136, 359, 252, 72, 74, 132, 134, 135,
- 293, 349, 239, 69, 70, 71, 72, 74, 184, 224,
- 230, 183, 285, 130, 69, 70, 71, 346, 287, 290,
- 330, 296, 136, 299, 314, 109, 302, 115, 303, 304,
- 230, 291, 154, 288, 331, 300, 306, 72, 74, 243,
- 294, 230, 355, 130, 73, 69, 70, 71, 306, 310,
- 312, 315, 317, 318, 354, 233, 202, 206, 325, 320,
- 324, 292, 340, 83, 73, 85, 86, 185, 183, 136,
- 144, 266, 230, 221, 267, 73, 265, 262, 328, 220,
- 263, 332, 261, 334, 336, 184, 338, 109, 239, 306,
- 130, 337, 348, 333, 239, 353, 109, 224, 227, 350,
- 327, 306, 306, 72, 74, 239, 73, 308, 307, 239,
- 313, 69, 70, 71, 13, 224, 311, 143, 342, 343,
- 344, 379, 301, 155, 360, 361, 384, 241, 326, 109,
- 362, 238, 284, 153, 247, 246, 364, 365, 230, 264,
- 225, 245, 370, 13, 244, 260, 218, 160, 159, 158,
- 16, 90, 155, 89, 72, 74, 376, 82, 377, 378,
- 13, 380, 69, 70, 71, 352, 253, 305, 259, 6,
+ 60, 51, 52, 53, 54, 55, 56, 283, 10, 49,
+ 50, 57, 58, 61, 62, 59, 60, 51, 52, 53,
+ 54, 55, 56, 57, 58, 61, 62, 59, 60, 51,
+ 52, 53, 54, 55, 56, 53, 54, 55, 56, 109,
+ 217, 139, 215, 115, 51, 52, 53, 54, 55, 56,
+ 266, 140, 221, 16, 216, 265, 292, 156, 136, 166,
+ 167, 297, 262, 161, 220, 16, 294, 261, 154, 148,
+ 150, 151, 366, 281, 184, 339, 16, 67, 280, 130,
+ 163, 207, 150, 151, 168, 169, 170, 171, 172, 173,
+ 174, 175, 176, 177, 178, 179, 180, 181, 278, 164,
+ 165, 16, 366, 277, 239, 94, 293, 194, 142, 386,
+ 191, 275, 202, 206, 16, 294, 274, 142, 264, 340,
+ 85, 86, 292, 141, 272, 219, 316, 16, 381, 271,
+ 260, 306, 235, 185, 183, 17, 18, 356, 229, 224,
+ 149, 231, 232, 110, 306, 240, 294, 17, 18, 136,
+ 355, 213, 208, 211, 212, 209, 210, 374, 17, 18,
+ 248, 249, 250, 269, 331, 184, 16, 306, 268, 373,
+ 130, 255, 306, 354, 252, 342, 343, 344, 353, 136,
+ 239, 72, 74, 17, 18, 371, 363, 339, 306, 69,
+ 70, 71, 285, 384, 308, 184, 17, 18, 287, 290,
+ 130, 296, 314, 299, 369, 109, 302, 115, 303, 17,
+ 18, 291, 154, 288, 359, 300, 263, 267, 270, 273,
+ 276, 279, 282, 349, 185, 183, 330, 294, 293, 310,
+ 312, 315, 317, 318, 72, 74, 202, 206, 325, 320,
+ 324, 295, 69, 70, 71, 346, 72, 74, 17, 18,
+ 73, 306, 239, 239, 69, 70, 71, 307, 328, 136,
+ 239, 332, 304, 334, 336, 224, 338, 109, 294, 230,
+ 243, 337, 348, 333, 313, 311, 109, 239, 295, 350,
+ 130, 230, 241, 72, 74, 83, 224, 85, 86, 292,
+ 301, 69, 70, 71, 233, 347, 144, 13, 136, 238,
+ 143, 327, 153, 73, 360, 361, 155, 326, 284, 109,
+ 362, 225, 13, 247, 184, 73, 364, 365, 230, 130,
+ 246, 155, 370, 245, 244, 218, 160, 159, 158, 227,
+ 16, 90, 89, 82, 72, 74, 376, 380, 377, 378,
+ 13, 352, 69, 70, 71, 253, 298, 305, 259, 6,
382, 258, 73, 21, 22, 23, 36, 45, 46, 37,
- 39, 40, 38, 41, 42, 43, 44, 24, 25, 66,
- 256, 242, 234, 226, 257, 254, 368, 26, 27, 28,
- 29, 30, 31, 32, 81, 146, 367, 33, 34, 35,
- 47, 19, 236, 281, 345, 335, 282, 79, 280, 322,
- 323, 145, 13, 73, 147, 278, 162, 88, 279, 87,
- 277, 6, 17, 18, 385, 21, 22, 23, 36, 45,
+ 39, 40, 38, 41, 42, 43, 44, 24, 25, 230,
+ 256, 242, 234, 226, 183, 257, 254, 26, 27, 28,
+ 29, 30, 31, 32, 81, 379, 368, 33, 34, 35,
+ 47, 19, 236, 227, 367, 162, 345, 79, 72, 74,
+ 335, 190, 13, 73, 251, 88, 69, 70, 71, 322,
+ 323, 6, 17, 18, 87, 21, 22, 23, 36, 45,
46, 37, 39, 40, 38, 41, 42, 43, 44, 24,
- 25, 275, 272, 127, 276, 273, 274, 271, 383, 26,
- 27, 28, 29, 30, 31, 32, 372, 358, 357, 33,
- 34, 35, 47, 19, 157, 269, 329, 375, 270, 190,
- 268, 3, 251, 190, 13, 319, 188, 321, 76, 309,
- 198, 128, 286, 6, 17, 18, 136, 21, 22, 23,
+ 25, 190, 375, 230, 188, 385, 383, 372, 358, 26,
+ 27, 28, 29, 30, 31, 32, 146, 357, 3, 33,
+ 34, 35, 47, 19, 157, 76, 329, 319, 309, 286,
+ 72, 74, 145, 321, 13, 147, 198, 73, 69, 70,
+ 71, 223, 222, 6, 17, 18, 136, 21, 22, 23,
36, 45, 46, 37, 39, 40, 38, 41, 42, 43,
- 44, 24, 25, 223, 222, 221, 220, 130, 195, 193,
- 192, 26, 27, 28, 29, 30, 31, 32, 91, 351,
- 205, 33, 34, 35, 47, 19, 201, 190, 122, 123,
- 121, 81, 131, 133, 198, 113, 114, 196, 118, 203,
- 120, 199, 119, 117, 116, 186, 17, 18, 65, 137,
- 124, 129, 125, 138, 111, 112, 93, 92, 132, 134,
- 135, 11, 10, 9, 20, 12, 15, 8, 95, 96,
- 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
- 107, 108, 341, 14, 7, 78, 68, 1,
+ 44, 24, 25, 221, 220, 230, 195, 130, 193, 192,
+ 351, 26, 27, 28, 29, 30, 31, 32, 205, 136,
+ 201, 33, 34, 35, 47, 19, 190, 81, 122, 123,
+ 121, 198, 131, 133, 297, 72, 74, 127, 128, 73,
+ 130, 113, 114, 69, 70, 71, 17, 18, 196, 118,
+ 124, 203, 125, 120, 199, 91, 119, 117, 132, 134,
+ 135, 122, 123, 121, 116, 131, 133, 186, 65, 137,
+ 66, 129, 138, 111, 112, 93, 92, 11, 9, 20,
+ 12, 15, 8, 124, 341, 125, 14, 7, 78, 68,
+ 1, 132, 134, 135, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 73, 95, 96, 97, 98, 99,
+ 100, 101, 102, 103, 104, 105, 106, 107, 108,
}
-var exprPact = [...]int{
- 353, -1000, -58, -1000, -1000, 349, 353, -1000, -1000, -1000,
- -1000, -1000, -1000, 409, 341, 247, -1000, 432, 430, 337,
+var exprPact = [...]int16{
+ 353, -1000, -58, -1000, -1000, 540, 353, -1000, -1000, -1000,
+ -1000, -1000, -1000, 409, 337, 289, -1000, 437, 428, 336,
335, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000,
-1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000,
- -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, 86, 86,
- 86, 86, 86, 86, 86, 86, 86, 86, 86, 86,
- 86, 86, 86, 349, -1000, 76, 501, 3, 101, -1000,
- -1000, -1000, -1000, -1000, -1000, 300, 253, -58, 413, -1000,
- -1000, 57, 336, 477, 333, 332, 331, -1000, -1000, 353,
- 429, 353, 23, 9, -1000, 353, 353, 353, 353, 353,
+ -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, 89, 89,
+ 89, 89, 89, 89, 89, 89, 89, 89, 89, 89,
+ 89, 89, 89, 540, -1000, 196, 534, -9, 75, -1000,
+ -1000, -1000, -1000, -1000, -1000, 303, 299, -58, 474, -1000,
+ -1000, 86, 325, 477, 332, 331, 330, -1000, -1000, 353,
+ 418, 353, 56, 14, -1000, 353, 353, 353, 353, 353,
353, 353, 353, 353, 353, 353, 353, 353, 353, -1000,
- -1000, -1000, -1000, -1000, -1000, 197, -1000, -1000, -1000, -1000,
- -1000, 488, 542, 524, -1000, 523, -1000, -1000, -1000, -1000,
- 227, 522, -1000, 549, 541, 535, 65, -1000, -1000, 82,
- 2, 330, -1000, -1000, -1000, -1000, -1000, 546, 520, 519,
- 518, 517, 323, 382, 298, 307, 238, 381, 415, 314,
- 310, 380, 222, -44, 328, 325, 319, 318, -32, -32,
- -17, -17, -74, -74, -74, -74, -26, -26, -26, -26,
- -26, -26, 197, 227, 227, 227, 484, 355, -1000, -1000,
- 392, 355, -1000, -1000, 107, -1000, 379, -1000, 391, 360,
- -1000, 57, -1000, 357, -1000, 57, -1000, 283, 277, 481,
- 458, 457, 431, 419, -1000, 1, 316, 82, 496, -1000,
- -1000, -1000, -1000, -1000, -1000, 110, 307, 201, 134, 190,
- 130, 170, 305, 110, 353, 212, 356, 291, -1000, -1000,
- 290, -1000, 493, -1000, 299, 293, 207, 162, 274, 197,
- 140, -1000, 355, 542, 489, -1000, 495, 424, 541, 535,
- 312, -1000, -1000, -1000, 284, -1000, -1000, -1000, -1000, -1000,
+ -1000, -1000, -1000, -1000, -1000, 83, -1000, -1000, -1000, -1000,
+ -1000, 456, 541, 523, -1000, 522, -1000, -1000, -1000, -1000,
+ 284, 520, -1000, 546, 535, 533, 98, -1000, -1000, 66,
+ -10, 329, -1000, -1000, -1000, -1000, -1000, 542, 518, 517,
+ 496, 495, 314, 382, 413, 310, 297, 381, 415, 302,
+ 285, 380, 273, -42, 328, 327, 324, 317, -30, -30,
+ -26, -26, -74, -74, -74, -74, -15, -15, -15, -15,
+ -15, -15, 83, 284, 284, 284, 426, 354, -1000, -1000,
+ 393, 354, -1000, -1000, 174, -1000, 379, -1000, 392, 360,
+ -1000, 86, -1000, 357, -1000, 86, -1000, 88, 76, 189,
+ 150, 137, 124, 99, -1000, -43, 312, 66, 483, -1000,
+ -1000, -1000, -1000, -1000, -1000, 122, 310, 249, 126, 261,
+ 501, 349, 293, 122, 353, 265, 356, 260, -1000, -1000,
+ 197, -1000, 482, -1000, 278, 277, 205, 129, 204, 83,
+ 323, -1000, 354, 541, 481, -1000, 491, 434, 535, 533,
+ 311, -1000, -1000, -1000, 305, -1000, -1000, -1000, -1000, -1000,
-1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000, -1000,
- -1000, -1000, -1000, 82, 480, -1000, 203, -1000, 217, 232,
- 59, 232, 416, 16, 227, 16, 126, 267, 414, 200,
- 79, -1000, -1000, 184, -1000, 353, 534, -1000, -1000, 354,
- 278, -1000, 237, -1000, -1000, 225, -1000, 137, -1000, -1000,
- -1000, -1000, -1000, -1000, -1000, -1000, 472, 471, -1000, 176,
- -1000, 110, 59, 232, 59, -1000, -1000, 197, -1000, 16,
- -1000, 131, -1000, -1000, -1000, 61, 406, 396, 120, 110,
- 171, -1000, 470, -1000, -1000, -1000, -1000, 169, 142, -1000,
- -1000, 59, -1000, 482, 63, 59, 52, 16, 16, 321,
- -1000, -1000, 350, -1000, -1000, 138, 59, -1000, -1000, 16,
- 462, -1000, -1000, 315, 438, 119, -1000,
+ -1000, -1000, -1000, 66, 480, -1000, 229, -1000, 167, 475,
+ 46, 475, 421, 16, 284, 16, 95, 144, 416, 248,
+ 298, -1000, -1000, 226, -1000, 353, 525, -1000, -1000, 350,
+ 181, -1000, 176, -1000, -1000, 153, -1000, 140, -1000, -1000,
+ -1000, -1000, -1000, -1000, -1000, -1000, 471, 462, -1000, 217,
+ -1000, 122, 46, 475, 46, -1000, -1000, 83, -1000, 16,
+ -1000, 190, -1000, -1000, -1000, 82, 414, 406, 207, 122,
+ 188, -1000, 461, -1000, -1000, -1000, -1000, 172, 160, -1000,
+ -1000, 46, -1000, 457, 52, 46, 38, 16, 16, 405,
+ -1000, -1000, 346, -1000, -1000, 131, 46, -1000, -1000, 16,
+ 460, -1000, -1000, 202, 459, 112, -1000,
}
-var exprPgo = [...]int{
-
- 0, 607, 18, 606, 2, 14, 491, 3, 16, 7,
- 605, 604, 603, 602, 15, 587, 586, 585, 584, 90,
- 583, 582, 581, 538, 577, 576, 575, 574, 13, 5,
- 573, 571, 569, 6, 568, 108, 4, 565, 564, 563,
- 562, 561, 11, 560, 559, 8, 558, 12, 557, 9,
- 17, 556, 555, 1, 501, 463, 0,
+
+var exprPgo = [...]int16{
+ 0, 610, 18, 609, 2, 14, 478, 3, 16, 7,
+ 608, 607, 606, 604, 15, 602, 601, 600, 599, 84,
+ 598, 38, 597, 575, 596, 595, 594, 593, 13, 5,
+ 592, 591, 589, 6, 588, 107, 4, 587, 584, 577,
+ 576, 574, 11, 573, 571, 8, 569, 12, 568, 9,
+ 17, 562, 561, 1, 558, 557, 0,
}
-var exprR1 = [...]int{
+var exprR1 = [...]int8{
0, 1, 2, 2, 7, 7, 7, 7, 7, 7,
7, 6, 6, 6, 8, 8, 8, 8, 8, 8,
8, 8, 8, 8, 8, 8, 8, 8, 8, 8,
@@ -420,8 +430,8 @@ var exprR1 = [...]int{
12, 12, 12, 12, 12, 12, 12, 12, 12, 12,
12, 12, 56, 5, 5, 4, 4, 4, 4,
}
-var exprR2 = [...]int{
+var exprR2 = [...]int8{
0, 1, 1, 1, 1, 1, 1, 1, 1, 1,
3, 1, 2, 3, 2, 3, 4, 5, 3, 4,
5, 6, 3, 4, 5, 6, 3, 4, 5, 6,
@@ -446,8 +456,8 @@ var exprR2 = [...]int{
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 2, 1, 3, 4, 4, 3, 3,
}
-var exprChk = [...]int{
+var exprChk = [...]int16{
-1000, -1, -2, -6, -7, -14, 26, -11, -15, -20,
-21, -22, -17, 17, -12, -16, 7, 89, 90, 68,
-18, 30, 31, 32, 44, 45, 54, 55, 56, 57,
@@ -474,9 +484,9 @@ var exprChk = [...]int{
50, -14, -8, 27, 21, -7, 7, -5, 27, 5,
-5, 27, 21, 27, 26, 26, 26, 26, -33, -33,
-33, 8, -50, 21, 13, 27, 21, 13, 21, 21,
- 72, 9, 4, 7, 72, 9, 4, 7, 9, 4,
- 7, 9, 4, 7, 9, 4, 7, 9, 4, 7,
- 9, 4, 7, 80, 26, -36, 6, -4, -8, -56,
+ 72, 9, 4, -21, 72, 9, 4, -21, 9, 4,
+ -21, 9, 4, -21, 9, 4, -21, 9, 4, -21,
+ 9, 4, -21, 80, 26, -36, 6, -4, -8, -56,
-53, -28, 70, 10, 50, 10, -53, 53, 27, -53,
-28, 27, -4, -7, 27, 21, 21, 27, 27, 6,
-5, 27, -5, 27, 27, -5, 27, -5, -49, 6,
@@ -488,8 +498,8 @@ var exprChk = [...]int{
-4, 27, 6, 27, 27, 5, -53, -56, -56, 10,
21, 27, -56, 6, 21, 6, 27,
}
-var exprDef = [...]int{
+var exprDef = [...]int16{
0, -2, 1, 2, 3, 11, 0, 4, 5, 6,
7, 8, 9, 0, 0, 0, 191, 0, 0, 0,
0, 207, 208, 209, 210, 211, 212, 213, 214, 215,
@@ -530,12 +540,12 @@ var exprDef = [...]int{
54, 55, 0, 127, 128, 0, 21, 25, 29, 32,
0, 41, 33, 0, 0, 0, 56,
}
-var exprTok1 = [...]int{
+var exprTok1 = [...]int8{
1,
}
-var exprTok2 = [...]int{
+var exprTok2 = [...]int8{
2, 3, 4, 5, 6, 7, 8, 9, 10, 11,
12, 13, 14, 15, 16, 17, 18, 19, 20, 21,
22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
@@ -547,7 +557,8 @@ var exprTok2 = [...]int{
82, 83, 84, 85, 86, 87, 88, 89, 90, 91,
92, 93, 94,
}
-var exprTok3 = [...]int{
+
+var exprTok3 = [...]int8{
0,
}
@@ -557,6 +568,8 @@ var exprErrorMessages = [...]struct {
msg string
}{}
+//line yaccpar:1
+
/* parser for yacc output */
var (
@@ -627,9 +640,9 @@ func exprErrorMessage(state, lookAhead int) string {
expected := make([]int, 0, 4)
// Look for shiftable tokens.
- base := exprPact[state]
+ base := int(exprPact[state])
for tok := TOKSTART; tok-1 < len(exprToknames); tok++ {
- if n := base + tok; n >= 0 && n < exprLast && exprChk[exprAct[n]] == tok {
+ if n := base + tok; n >= 0 && n < exprLast && int(exprChk[int(exprAct[n])]) == tok {
if len(expected) == cap(expected) {
return res
}
@@ -639,13 +652,13 @@ func exprErrorMessage(state, lookAhead int) string {
if exprDef[state] == -2 {
i := 0
- for exprExca[i] != -1 || exprExca[i+1] != state {
+ for exprExca[i] != -1 || int(exprExca[i+1]) != state {
i += 2
}
// Look for tokens that we accept or reduce.
for i += 2; exprExca[i] >= 0; i += 2 {
- tok := exprExca[i]
+ tok := int(exprExca[i])
if tok < TOKSTART || exprExca[i+1] == 0 {
continue
}
@@ -676,30 +689,30 @@ func exprlex1(lex exprLexer, lval *exprSymType) (char, token int) {
token = 0
char = lex.Lex(lval)
if char <= 0 {
- token = exprTok1[0]
+ token = int(exprTok1[0])
goto out
}
if char < len(exprTok1) {
- token = exprTok1[char]
+ token = int(exprTok1[char])
goto out
}
if char >= exprPrivate {
if char < exprPrivate+len(exprTok2) {
- token = exprTok2[char-exprPrivate]
+ token = int(exprTok2[char-exprPrivate])
goto out
}
}
for i := 0; i < len(exprTok3); i += 2 {
- token = exprTok3[i+0]
+ token = int(exprTok3[i+0])
if token == char {
- token = exprTok3[i+1]
+ token = int(exprTok3[i+1])
goto out
}
}
out:
if token == 0 {
- token = exprTok2[1] /* unknown char */
+ token = int(exprTok2[1]) /* unknown char */
}
if exprDebug >= 3 {
__yyfmt__.Printf("lex %s(%d)\n", exprTokname(token), uint(char))
@@ -754,7 +767,7 @@ exprstack:
exprS[exprp].yys = exprstate
exprnewstate:
- exprn = exprPact[exprstate]
+ exprn = int(exprPact[exprstate])
if exprn <= exprFlag {
goto exprdefault /* simple state */
}
@@ -765,8 +778,8 @@ exprnewstate:
if exprn < 0 || exprn >= exprLast {
goto exprdefault
}
- exprn = exprAct[exprn]
- if exprChk[exprn] == exprtoken { /* valid shift */
+ exprn = int(exprAct[exprn])
+ if int(exprChk[exprn]) == exprtoken { /* valid shift */
exprrcvr.char = -1
exprtoken = -1
exprVAL = exprrcvr.lval
@@ -779,7 +792,7 @@ exprnewstate:
exprdefault:
/* default state action */
- exprn = exprDef[exprstate]
+ exprn = int(exprDef[exprstate])
if exprn == -2 {
if exprrcvr.char < 0 {
exprrcvr.char, exprtoken = exprlex1(exprlex, &exprrcvr.lval)
@@ -788,18 +801,18 @@ exprdefault:
/* look through exception table */
xi := 0
for {
- if exprExca[xi+0] == -1 && exprExca[xi+1] == exprstate {
+ if exprExca[xi+0] == -1 && int(exprExca[xi+1]) == exprstate {
break
}
xi += 2
}
for xi += 2; ; xi += 2 {
- exprn = exprExca[xi+0]
+ exprn = int(exprExca[xi+0])
if exprn < 0 || exprn == exprtoken {
break
}
}
- exprn = exprExca[xi+1]
+ exprn = int(exprExca[xi+1])
if exprn < 0 {
goto ret0
}
@@ -821,10 +834,10 @@ exprdefault:
/* find a state where "error" is a legal shift action */
for exprp >= 0 {
- exprn = exprPact[exprS[exprp].yys] + exprErrCode
+ exprn = int(exprPact[exprS[exprp].yys]) + exprErrCode
if exprn >= 0 && exprn < exprLast {
- exprstate = exprAct[exprn] /* simulate a shift of "error" */
- if exprChk[exprstate] == exprErrCode {
+ exprstate = int(exprAct[exprn]) /* simulate a shift of "error" */
+ if int(exprChk[exprstate]) == exprErrCode {
goto exprstack
}
}
@@ -860,7 +873,7 @@ exprdefault:
exprpt := exprp
_ = exprpt // guard against "declared and not used"
- exprp -= exprR2[exprn]
+ exprp -= int(exprR2[exprn])
// exprp is now the index of $0. Perform the default action. Iff the
// reduced production is ε, $1 is possibly out of range.
if exprp+1 >= len(exprS) {
@@ -871,16 +884,16 @@ exprdefault:
exprVAL = exprS[exprp+1]
/* consult goto table to find next state */
- exprn = exprR1[exprn]
- exprg := exprPgo[exprn]
+ exprn = int(exprR1[exprn])
+ exprg := int(exprPgo[exprn])
exprj := exprg + exprS[exprp].yys + 1
if exprj >= exprLast {
- exprstate = exprAct[exprg]
+ exprstate = int(exprAct[exprg])
} else {
- exprstate = exprAct[exprj]
- if exprChk[exprstate] != -exprn {
- exprstate = exprAct[exprg]
+ exprstate = int(exprAct[exprj])
+ if int(exprChk[exprstate]) != -exprn {
+ exprstate = int(exprAct[exprg])
}
}
// dummy call; replaced with literal code
@@ -888,885 +901,1062 @@ exprdefault:
case 1:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:154
{
exprlex.(*parser).expr = exprDollar[1].Expr
}
case 2:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:157
{
exprVAL.Expr = exprDollar[1].LogExpr
}
case 3:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:158
{
exprVAL.Expr = exprDollar[1].MetricExpr
}
case 4:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:162
{
exprVAL.MetricExpr = exprDollar[1].RangeAggregationExpr
}
case 5:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:163
{
exprVAL.MetricExpr = exprDollar[1].VectorAggregationExpr
}
case 6:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:164
{
exprVAL.MetricExpr = exprDollar[1].BinOpExpr
}
case 7:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:165
{
exprVAL.MetricExpr = exprDollar[1].LiteralExpr
}
case 8:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:166
{
exprVAL.MetricExpr = exprDollar[1].LabelReplaceExpr
}
case 9:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:167
{
exprVAL.MetricExpr = exprDollar[1].VectorExpr
}
case 10:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:168
{
exprVAL.MetricExpr = exprDollar[2].MetricExpr
}
case 11:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:172
{
exprVAL.LogExpr = newMatcherExpr(exprDollar[1].Selector)
}
case 12:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:173
{
exprVAL.LogExpr = newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr)
}
case 13:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:174
{
exprVAL.LogExpr = exprDollar[2].LogExpr
}
case 14:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:178
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].duration, nil, nil)
}
case 15:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:179
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].duration, nil, exprDollar[3].OffsetExpr)
}
case 16:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:180
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[4].duration, nil, nil)
}
case 17:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:181
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[4].duration, nil, exprDollar[5].OffsetExpr)
}
case 18:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:182
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].duration, exprDollar[3].UnwrapExpr, nil)
}
case 19:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:183
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].duration, exprDollar[4].UnwrapExpr, exprDollar[3].OffsetExpr)
}
case 20:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:184
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[4].duration, exprDollar[5].UnwrapExpr, nil)
}
case 21:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:185
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[4].duration, exprDollar[6].UnwrapExpr, exprDollar[5].OffsetExpr)
}
case 22:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:186
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].duration, exprDollar[2].UnwrapExpr, nil)
}
case 23:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:187
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].duration, exprDollar[2].UnwrapExpr, exprDollar[4].OffsetExpr)
}
case 24:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:188
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[5].duration, exprDollar[3].UnwrapExpr, nil)
}
case 25:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:189
{
exprVAL.LogRangeExpr = newLogRange(newMatcherExpr(exprDollar[2].Selector), exprDollar[5].duration, exprDollar[3].UnwrapExpr, exprDollar[6].OffsetExpr)
}
case 26:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:190
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr), exprDollar[3].duration, nil, nil)
}
case 27:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:191
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr), exprDollar[3].duration, nil, exprDollar[4].OffsetExpr)
}
case 28:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:192
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[2].Selector), exprDollar[3].PipelineExpr), exprDollar[5].duration, nil, nil)
}
case 29:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:193
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[2].Selector), exprDollar[3].PipelineExpr), exprDollar[5].duration, nil, exprDollar[6].OffsetExpr)
}
case 30:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:194
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr), exprDollar[4].duration, exprDollar[3].UnwrapExpr, nil)
}
case 31:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:195
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[2].PipelineExpr), exprDollar[4].duration, exprDollar[3].UnwrapExpr, exprDollar[5].OffsetExpr)
}
case 32:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:196
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[2].Selector), exprDollar[3].PipelineExpr), exprDollar[6].duration, exprDollar[4].UnwrapExpr, nil)
}
case 33:
exprDollar = exprS[exprpt-7 : exprpt+1]
+//line expr.y:197
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[2].Selector), exprDollar[3].PipelineExpr), exprDollar[6].duration, exprDollar[4].UnwrapExpr, exprDollar[7].OffsetExpr)
}
case 34:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:198
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].PipelineExpr), exprDollar[2].duration, nil, nil)
}
case 35:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:199
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[4].PipelineExpr), exprDollar[2].duration, nil, exprDollar[3].OffsetExpr)
}
case 36:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:200
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[3].PipelineExpr), exprDollar[2].duration, exprDollar[4].UnwrapExpr, nil)
}
case 37:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:201
{
exprVAL.LogRangeExpr = newLogRange(newPipelineExpr(newMatcherExpr(exprDollar[1].Selector), exprDollar[4].PipelineExpr), exprDollar[2].duration, exprDollar[5].UnwrapExpr, exprDollar[3].OffsetExpr)
}
case 38:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:202
{
exprVAL.LogRangeExpr = exprDollar[2].LogRangeExpr
}
case 40:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:207
{
exprVAL.UnwrapExpr = newUnwrapExpr(exprDollar[3].str, "")
}
case 41:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:208
{
exprVAL.UnwrapExpr = newUnwrapExpr(exprDollar[5].str, exprDollar[3].ConvOp)
}
case 42:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:209
{
exprVAL.UnwrapExpr = exprDollar[1].UnwrapExpr.addPostFilter(exprDollar[3].LabelFilter)
}
case 43:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:213
{
exprVAL.ConvOp = OpConvBytes
}
case 44:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:214
{
exprVAL.ConvOp = OpConvDuration
}
case 45:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:215
{
exprVAL.ConvOp = OpConvDurationSeconds
}
case 46:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:219
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[3].LogRangeExpr, exprDollar[1].RangeOp, nil, nil)
}
case 47:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:220
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[5].LogRangeExpr, exprDollar[1].RangeOp, nil, &exprDollar[3].str)
}
case 48:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:221
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[3].LogRangeExpr, exprDollar[1].RangeOp, exprDollar[5].Grouping, nil)
}
case 49:
exprDollar = exprS[exprpt-7 : exprpt+1]
+//line expr.y:222
{
exprVAL.RangeAggregationExpr = newRangeAggregationExpr(exprDollar[5].LogRangeExpr, exprDollar[1].RangeOp, exprDollar[7].Grouping, &exprDollar[3].str)
}
case 50:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:227
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[3].MetricExpr, exprDollar[1].VectorOp, nil, nil)
}
case 51:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:228
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[4].MetricExpr, exprDollar[1].VectorOp, exprDollar[2].Grouping, nil)
}
case 52:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:229
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[3].MetricExpr, exprDollar[1].VectorOp, exprDollar[5].Grouping, nil)
}
case 53:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:231
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[5].MetricExpr, exprDollar[1].VectorOp, nil, &exprDollar[3].str)
}
case 54:
exprDollar = exprS[exprpt-7 : exprpt+1]
+//line expr.y:232
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[5].MetricExpr, exprDollar[1].VectorOp, exprDollar[7].Grouping, &exprDollar[3].str)
}
case 55:
exprDollar = exprS[exprpt-7 : exprpt+1]
+//line expr.y:233
{
exprVAL.VectorAggregationExpr = mustNewVectorAggregationExpr(exprDollar[6].MetricExpr, exprDollar[1].VectorOp, exprDollar[2].Grouping, &exprDollar[4].str)
}
case 56:
exprDollar = exprS[exprpt-12 : exprpt+1]
+//line expr.y:238
{
exprVAL.LabelReplaceExpr = mustNewLabelReplaceExpr(exprDollar[3].MetricExpr, exprDollar[5].str, exprDollar[7].str, exprDollar[9].str, exprDollar[11].str)
}
case 57:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:242
{
exprVAL.Filter = log.LineMatchRegexp
}
case 58:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:243
{
exprVAL.Filter = log.LineMatchEqual
}
case 59:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:244
{
exprVAL.Filter = log.LineMatchPattern
}
case 60:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:245
{
exprVAL.Filter = log.LineMatchNotRegexp
}
case 61:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:246
{
exprVAL.Filter = log.LineMatchNotEqual
}
case 62:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:247
{
exprVAL.Filter = log.LineMatchNotPattern
}
case 63:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:251
{
exprVAL.Selector = exprDollar[2].Matchers
}
case 64:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:252
{
exprVAL.Selector = exprDollar[2].Matchers
}
case 65:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:253
{
}
case 66:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:257
{
exprVAL.Matchers = []*labels.Matcher{exprDollar[1].Matcher}
}
case 67:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:258
{
exprVAL.Matchers = append(exprDollar[1].Matchers, exprDollar[3].Matcher)
}
case 68:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:262
{
exprVAL.Matcher = mustNewMatcher(labels.MatchEqual, exprDollar[1].str, exprDollar[3].str)
}
case 69:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:263
{
exprVAL.Matcher = mustNewMatcher(labels.MatchNotEqual, exprDollar[1].str, exprDollar[3].str)
}
case 70:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:264
{
exprVAL.Matcher = mustNewMatcher(labels.MatchRegexp, exprDollar[1].str, exprDollar[3].str)
}
case 71:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:265
{
exprVAL.Matcher = mustNewMatcher(labels.MatchNotRegexp, exprDollar[1].str, exprDollar[3].str)
}
case 72:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:269
{
exprVAL.PipelineExpr = MultiStageExpr{exprDollar[1].PipelineStage}
}
case 73:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:270
{
exprVAL.PipelineExpr = append(exprDollar[1].PipelineExpr, exprDollar[2].PipelineStage)
}
case 74:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:274
{
exprVAL.PipelineStage = exprDollar[1].LineFilters
}
case 75:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:275
{
exprVAL.PipelineStage = exprDollar[2].LogfmtParser
}
case 76:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:276
{
exprVAL.PipelineStage = exprDollar[2].LabelParser
}
case 77:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:277
{
exprVAL.PipelineStage = exprDollar[2].JSONExpressionParser
}
case 78:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:278
{
exprVAL.PipelineStage = exprDollar[2].LogfmtExpressionParser
}
case 79:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:279
{
exprVAL.PipelineStage = &LabelFilterExpr{LabelFilterer: exprDollar[2].LabelFilter}
}
case 80:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:280
{
exprVAL.PipelineStage = exprDollar[2].LineFormatExpr
}
case 81:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:281
{
exprVAL.PipelineStage = exprDollar[2].DecolorizeExpr
}
case 82:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:282
{
exprVAL.PipelineStage = exprDollar[2].LabelFormatExpr
}
case 83:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:283
{
exprVAL.PipelineStage = exprDollar[2].DropLabelsExpr
}
case 84:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:284
{
exprVAL.PipelineStage = exprDollar[2].KeepLabelsExpr
}
case 85:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:288
{
exprVAL.FilterOp = OpFilterIP
}
case 86:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:292
{
exprVAL.OrFilter = newLineFilterExpr(log.LineMatchEqual, "", exprDollar[1].str)
}
case 87:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:293
{
exprVAL.OrFilter = newLineFilterExpr(log.LineMatchEqual, exprDollar[1].FilterOp, exprDollar[3].str)
}
case 88:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:294
{
exprVAL.OrFilter = newOrLineFilter(newLineFilterExpr(log.LineMatchEqual, "", exprDollar[1].str), exprDollar[3].OrFilter)
}
case 89:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:298
{
exprVAL.LineFilter = newLineFilterExpr(exprDollar[1].Filter, "", exprDollar[2].str)
}
case 90:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:299
{
exprVAL.LineFilter = newLineFilterExpr(exprDollar[1].Filter, exprDollar[2].FilterOp, exprDollar[4].str)
}
case 91:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:300
{
exprVAL.LineFilter = newOrLineFilter(newLineFilterExpr(exprDollar[1].Filter, "", exprDollar[2].str), exprDollar[4].OrFilter)
}
case 92:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:304
{
exprVAL.LineFilters = exprDollar[1].LineFilter
}
case 93:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:305
{
exprVAL.LineFilters = newOrLineFilter(exprDollar[1].LineFilter, exprDollar[3].OrFilter)
}
case 94:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:306
{
exprVAL.LineFilters = newNestedLineFilterExpr(exprDollar[1].LineFilters, exprDollar[2].LineFilter)
}
case 95:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:310
{
exprVAL.ParserFlags = []string{exprDollar[1].str}
}
case 96:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:311
{
exprVAL.ParserFlags = append(exprDollar[1].ParserFlags, exprDollar[2].str)
}
case 97:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:315
{
exprVAL.LogfmtParser = newLogfmtParserExpr(nil)
}
case 98:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:316
{
exprVAL.LogfmtParser = newLogfmtParserExpr(exprDollar[2].ParserFlags)
}
case 99:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:320
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypeJSON, "")
}
case 100:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:321
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypeRegexp, exprDollar[2].str)
}
case 101:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:322
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypeUnpack, "")
}
case 102:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:323
{
exprVAL.LabelParser = newLabelParserExpr(OpParserTypePattern, exprDollar[2].str)
}
case 103:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:327
{
exprVAL.JSONExpressionParser = newJSONExpressionParser(exprDollar[2].LabelExtractionExpressionList)
}
case 104:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:330
{
exprVAL.LogfmtExpressionParser = newLogfmtExpressionParser(exprDollar[3].LabelExtractionExpressionList, exprDollar[2].ParserFlags)
}
case 105:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:331
{
exprVAL.LogfmtExpressionParser = newLogfmtExpressionParser(exprDollar[2].LabelExtractionExpressionList, nil)
}
case 106:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:334
{
exprVAL.LineFormatExpr = newLineFmtExpr(exprDollar[2].str)
}
case 107:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:336
{
exprVAL.DecolorizeExpr = newDecolorizeExpr()
}
case 108:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:339
{
exprVAL.LabelFormat = log.NewRenameLabelFmt(exprDollar[1].str, exprDollar[3].str)
}
case 109:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:340
{
exprVAL.LabelFormat = log.NewTemplateLabelFmt(exprDollar[1].str, exprDollar[3].str)
}
case 110:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:344
{
exprVAL.LabelsFormat = []log.LabelFmt{exprDollar[1].LabelFormat}
}
case 111:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:345
{
exprVAL.LabelsFormat = append(exprDollar[1].LabelsFormat, exprDollar[3].LabelFormat)
}
case 113:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:350
{
exprVAL.LabelFormatExpr = newLabelFmtExpr(exprDollar[2].LabelsFormat)
}
case 114:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:353
{
exprVAL.LabelFilter = log.NewStringLabelFilter(exprDollar[1].Matcher)
}
case 115:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:354
{
exprVAL.LabelFilter = exprDollar[1].IPLabelFilter
}
case 116:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:355
{
exprVAL.LabelFilter = exprDollar[1].UnitFilter
}
case 117:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:356
{
exprVAL.LabelFilter = exprDollar[1].NumberFilter
}
case 118:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:357
{
exprVAL.LabelFilter = exprDollar[2].LabelFilter
}
case 119:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:358
{
exprVAL.LabelFilter = log.NewAndLabelFilter(exprDollar[1].LabelFilter, exprDollar[2].LabelFilter)
}
case 120:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:359
{
exprVAL.LabelFilter = log.NewAndLabelFilter(exprDollar[1].LabelFilter, exprDollar[3].LabelFilter)
}
case 121:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:360
{
exprVAL.LabelFilter = log.NewAndLabelFilter(exprDollar[1].LabelFilter, exprDollar[3].LabelFilter)
}
case 122:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:361
{
exprVAL.LabelFilter = log.NewOrLabelFilter(exprDollar[1].LabelFilter, exprDollar[3].LabelFilter)
}
case 123:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:365
{
exprVAL.LabelExtractionExpression = log.NewLabelExtractionExpr(exprDollar[1].str, exprDollar[3].str)
}
case 124:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:366
{
exprVAL.LabelExtractionExpression = log.NewLabelExtractionExpr(exprDollar[1].str, exprDollar[1].str)
}
case 125:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:369
{
exprVAL.LabelExtractionExpressionList = []log.LabelExtractionExpr{exprDollar[1].LabelExtractionExpression}
}
case 126:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:370
{
exprVAL.LabelExtractionExpressionList = append(exprDollar[1].LabelExtractionExpressionList, exprDollar[3].LabelExtractionExpression)
}
case 127:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:374
{
exprVAL.IPLabelFilter = log.NewIPLabelFilter(exprDollar[5].str, exprDollar[1].str, log.LabelFilterEqual)
}
case 128:
exprDollar = exprS[exprpt-6 : exprpt+1]
+//line expr.y:375
{
exprVAL.IPLabelFilter = log.NewIPLabelFilter(exprDollar[5].str, exprDollar[1].str, log.LabelFilterNotEqual)
}
case 129:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:379
{
exprVAL.UnitFilter = exprDollar[1].DurationFilter
}
case 130:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:380
{
exprVAL.UnitFilter = exprDollar[1].BytesFilter
}
case 131:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:383
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].duration)
}
case 132:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:384
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 133:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:385
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].duration)
}
case 134:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:386
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 135:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:387
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 136:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:388
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 137:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:389
{
exprVAL.DurationFilter = log.NewDurationLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].duration)
}
case 138:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:393
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].bytes)
}
case 139:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:394
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 140:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:395
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].bytes)
}
case 141:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:396
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 142:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:397
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 143:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:398
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 144:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:399
{
exprVAL.BytesFilter = log.NewBytesLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].bytes)
}
case 145:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:403
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThan, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 146:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:404
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterGreaterThanOrEqual, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 147:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:405
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThan, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 148:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:406
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterLesserThanOrEqual, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 149:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:407
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterNotEqual, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 150:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:408
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 151:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:409
{
- exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, mustNewFloat(exprDollar[3].str))
+ exprVAL.NumberFilter = log.NewNumericLabelFilter(log.LabelFilterEqual, exprDollar[1].str, exprDollar[3].LiteralExpr.Val)
}
case 152:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:413
{
exprVAL.DropLabel = log.NewDropLabel(nil, exprDollar[1].str)
}
case 153:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:414
{
exprVAL.DropLabel = log.NewDropLabel(exprDollar[1].Matcher, "")
}
case 154:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:417
{
exprVAL.DropLabels = []log.DropLabel{exprDollar[1].DropLabel}
}
case 155:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:418
{
exprVAL.DropLabels = append(exprDollar[1].DropLabels, exprDollar[3].DropLabel)
}
case 156:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:421
{
exprVAL.DropLabelsExpr = newDropLabelsExpr(exprDollar[2].DropLabels)
}
case 157:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:424
{
exprVAL.KeepLabel = log.NewKeepLabel(nil, exprDollar[1].str)
}
case 158:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:425
{
exprVAL.KeepLabel = log.NewKeepLabel(exprDollar[1].Matcher, "")
}
case 159:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:428
{
exprVAL.KeepLabels = []log.KeepLabel{exprDollar[1].KeepLabel}
}
case 160:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:429
{
exprVAL.KeepLabels = append(exprDollar[1].KeepLabels, exprDollar[3].KeepLabel)
}
case 161:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:432
{
exprVAL.KeepLabelsExpr = newKeepLabelsExpr(exprDollar[2].KeepLabels)
}
case 162:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:436
{
exprVAL.BinOpExpr = mustNewBinOpExpr("or", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 163:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:437
{
exprVAL.BinOpExpr = mustNewBinOpExpr("and", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 164:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:438
{
exprVAL.BinOpExpr = mustNewBinOpExpr("unless", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 165:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:439
{
exprVAL.BinOpExpr = mustNewBinOpExpr("+", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 166:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:440
{
exprVAL.BinOpExpr = mustNewBinOpExpr("-", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 167:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:441
{
exprVAL.BinOpExpr = mustNewBinOpExpr("*", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 168:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:442
{
exprVAL.BinOpExpr = mustNewBinOpExpr("/", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 169:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:443
{
exprVAL.BinOpExpr = mustNewBinOpExpr("%", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 170:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:444
{
exprVAL.BinOpExpr = mustNewBinOpExpr("^", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 171:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:445
{
exprVAL.BinOpExpr = mustNewBinOpExpr("==", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 172:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:446
{
exprVAL.BinOpExpr = mustNewBinOpExpr("!=", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 173:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:447
{
exprVAL.BinOpExpr = mustNewBinOpExpr(">", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 174:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:448
{
exprVAL.BinOpExpr = mustNewBinOpExpr(">=", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 175:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:449
{
exprVAL.BinOpExpr = mustNewBinOpExpr("<", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 176:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:450
{
exprVAL.BinOpExpr = mustNewBinOpExpr("<=", exprDollar[3].BinOpModifier, exprDollar[1].Expr, exprDollar[4].Expr)
}
case 177:
exprDollar = exprS[exprpt-0 : exprpt+1]
+//line expr.y:454
{
exprVAL.BoolModifier = &BinOpOptions{VectorMatching: &VectorMatching{Card: CardOneToOne}}
}
case 178:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:458
{
exprVAL.BoolModifier = &BinOpOptions{VectorMatching: &VectorMatching{Card: CardOneToOne}, ReturnBool: true}
}
case 179:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:465
{
exprVAL.OnOrIgnoringModifier = exprDollar[1].BoolModifier
exprVAL.OnOrIgnoringModifier.VectorMatching.On = true
@@ -1774,45 +1964,53 @@ exprdefault:
}
case 180:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:471
{
exprVAL.OnOrIgnoringModifier = exprDollar[1].BoolModifier
exprVAL.OnOrIgnoringModifier.VectorMatching.On = true
}
case 181:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:476
{
exprVAL.OnOrIgnoringModifier = exprDollar[1].BoolModifier
exprVAL.OnOrIgnoringModifier.VectorMatching.MatchingLabels = exprDollar[4].Labels
}
case 182:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:481
{
exprVAL.OnOrIgnoringModifier = exprDollar[1].BoolModifier
}
case 183:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:487
{
exprVAL.BinOpModifier = exprDollar[1].BoolModifier
}
case 184:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:488
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
}
case 185:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:490
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
exprVAL.BinOpModifier.VectorMatching.Card = CardManyToOne
}
case 186:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:495
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
exprVAL.BinOpModifier.VectorMatching.Card = CardManyToOne
}
case 187:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:500
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
exprVAL.BinOpModifier.VectorMatching.Card = CardManyToOne
@@ -1820,18 +2018,21 @@ exprdefault:
}
case 188:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:506
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
exprVAL.BinOpModifier.VectorMatching.Card = CardOneToMany
}
case 189:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:511
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
exprVAL.BinOpModifier.VectorMatching.Card = CardOneToMany
}
case 190:
exprDollar = exprS[exprpt-5 : exprpt+1]
+//line expr.y:516
{
exprVAL.BinOpModifier = exprDollar[1].OnOrIgnoringModifier
exprVAL.BinOpModifier.VectorMatching.Card = CardOneToMany
@@ -1839,191 +2040,229 @@ exprdefault:
}
case 191:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:524
{
exprVAL.LiteralExpr = mustNewLiteralExpr(exprDollar[1].str, false)
}
case 192:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:525
{
exprVAL.LiteralExpr = mustNewLiteralExpr(exprDollar[2].str, false)
}
case 193:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:526
{
exprVAL.LiteralExpr = mustNewLiteralExpr(exprDollar[2].str, true)
}
case 194:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:530
{
exprVAL.VectorExpr = NewVectorExpr(exprDollar[3].str)
}
case 195:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:533
{
exprVAL.Vector = OpTypeVector
}
case 196:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:537
{
exprVAL.VectorOp = OpTypeSum
}
case 197:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:538
{
exprVAL.VectorOp = OpTypeAvg
}
case 198:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:539
{
exprVAL.VectorOp = OpTypeCount
}
case 199:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:540
{
exprVAL.VectorOp = OpTypeMax
}
case 200:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:541
{
exprVAL.VectorOp = OpTypeMin
}
case 201:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:542
{
exprVAL.VectorOp = OpTypeStddev
}
case 202:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:543
{
exprVAL.VectorOp = OpTypeStdvar
}
case 203:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:544
{
exprVAL.VectorOp = OpTypeBottomK
}
case 204:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:545
{
exprVAL.VectorOp = OpTypeTopK
}
case 205:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:546
{
exprVAL.VectorOp = OpTypeSort
}
case 206:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:547
{
exprVAL.VectorOp = OpTypeSortDesc
}
case 207:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:551
{
exprVAL.RangeOp = OpRangeTypeCount
}
case 208:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:552
{
exprVAL.RangeOp = OpRangeTypeRate
}
case 209:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:553
{
exprVAL.RangeOp = OpRangeTypeRateCounter
}
case 210:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:554
{
exprVAL.RangeOp = OpRangeTypeBytes
}
case 211:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:555
{
exprVAL.RangeOp = OpRangeTypeBytesRate
}
case 212:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:556
{
exprVAL.RangeOp = OpRangeTypeAvg
}
case 213:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:557
{
exprVAL.RangeOp = OpRangeTypeSum
}
case 214:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:558
{
exprVAL.RangeOp = OpRangeTypeMin
}
case 215:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:559
{
exprVAL.RangeOp = OpRangeTypeMax
}
case 216:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:560
{
exprVAL.RangeOp = OpRangeTypeStdvar
}
case 217:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:561
{
exprVAL.RangeOp = OpRangeTypeStddev
}
case 218:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:562
{
exprVAL.RangeOp = OpRangeTypeQuantile
}
case 219:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:563
{
exprVAL.RangeOp = OpRangeTypeFirst
}
case 220:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:564
{
exprVAL.RangeOp = OpRangeTypeLast
}
case 221:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:565
{
exprVAL.RangeOp = OpRangeTypeAbsent
}
case 222:
exprDollar = exprS[exprpt-2 : exprpt+1]
+//line expr.y:569
{
exprVAL.OffsetExpr = newOffsetExpr(exprDollar[2].duration)
}
case 223:
exprDollar = exprS[exprpt-1 : exprpt+1]
+//line expr.y:572
{
exprVAL.Labels = []string{exprDollar[1].str}
}
case 224:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:573
{
exprVAL.Labels = append(exprDollar[1].Labels, exprDollar[3].str)
}
case 225:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:577
{
exprVAL.Grouping = &Grouping{Without: false, Groups: exprDollar[3].Labels}
}
case 226:
exprDollar = exprS[exprpt-4 : exprpt+1]
+//line expr.y:578
{
exprVAL.Grouping = &Grouping{Without: true, Groups: exprDollar[3].Labels}
}
case 227:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:579
{
exprVAL.Grouping = &Grouping{Without: false, Groups: nil}
}
case 228:
exprDollar = exprS[exprpt-3 : exprpt+1]
+//line expr.y:580
{
exprVAL.Grouping = &Grouping{Without: true, Groups: nil}
}
diff --git a/pkg/logql/syntax/lex_test.go b/pkg/logql/syntax/lex_test.go
index 97e6ba22c48ee..08878186bd89b 100644
--- a/pkg/logql/syntax/lex_test.go
+++ b/pkg/logql/syntax/lex_test.go
@@ -95,6 +95,19 @@ func TestLex(t *testing.T) {
{`{foo="bar"} | logfmt --strict code"`, []int{OPEN_BRACE, IDENTIFIER, EQ, STRING, CLOSE_BRACE, PIPE, LOGFMT, PARSER_FLAG, IDENTIFIER}},
{`{foo="bar"} | logfmt --keep-empty --strict code="response.code", IPAddress="host"`, []int{OPEN_BRACE, IDENTIFIER, EQ, STRING, CLOSE_BRACE, PIPE, LOGFMT, PARSER_FLAG, PARSER_FLAG, IDENTIFIER, EQ, STRING, COMMA, IDENTIFIER, EQ, STRING}},
{`decolorize`, []int{DECOLORIZE}},
+ {`123`, []int{NUMBER}},
+ {`-123`, []int{SUB, NUMBER}},
+ {`123.45`, []int{NUMBER}},
+ {`-123.45`, []int{SUB, NUMBER}},
+ {`123KB`, []int{BYTES}},
+ // Skip -123KB: Negative bytes are explicitly not supported in the Lexer.
+ {`123ms`, []int{DURATION}},
+ {`-123ms`, []int{DURATION}},
+ {`34 + - 123`, []int{NUMBER, ADD, SUB, NUMBER}},
+ {`34 + -123`, []int{NUMBER, ADD, SUB, NUMBER}},
+ {`34+-123`, []int{NUMBER, ADD, SUB, NUMBER}},
+ {`34-123`, []int{NUMBER, SUB, NUMBER}},
+ {`sum(rate({foo="bar"}[5m])-1 > 30`, []int{SUM, OPEN_PARENTHESIS, RATE, OPEN_PARENTHESIS, OPEN_BRACE, IDENTIFIER, EQ, STRING, CLOSE_BRACE, RANGE, CLOSE_PARENTHESIS, SUB, NUMBER, GT, NUMBER}},
} {
t.Run(tc.input, func(t *testing.T) {
actual := []int{}
@@ -179,6 +192,7 @@ func Test_parseDuration(t *testing.T) {
{"1w", WEEK},
{"1d", DAY},
{"1h15m30.918273645s", time.Hour + 15*time.Minute + 30*time.Second + 918273645*time.Nanosecond},
+ {"-1s", -1 * time.Second},
} {
actual, err := parseDuration(tc.input)
diff --git a/pkg/logql/syntax/parser_test.go b/pkg/logql/syntax/parser_test.go
index 3851013f4be92..f12309f2b24a5 100644
--- a/pkg/logql/syntax/parser_test.go
+++ b/pkg/logql/syntax/parser_test.go
@@ -499,19 +499,19 @@ var ParseTestCases = []struct {
// label filter for ip-matcher
{
in: `{ foo = "bar" }|logfmt|addr>=ip("1.2.3.4")`,
- err: logqlmodel.NewParseError("syntax error: unexpected ip, expecting BYTES or NUMBER or DURATION", 1, 30),
+ err: logqlmodel.NewParseError("syntax error: unexpected ip", 1, 30),
},
{
in: `{ foo = "bar" }|logfmt|addr>ip("1.2.3.4")`,
- err: logqlmodel.NewParseError("syntax error: unexpected ip, expecting BYTES or NUMBER or DURATION", 1, 29),
+ err: logqlmodel.NewParseError("syntax error: unexpected ip", 1, 29),
},
{
in: `{ foo = "bar" }|logfmt|addr<=ip("1.2.3.4")`,
- err: logqlmodel.NewParseError("syntax error: unexpected ip, expecting BYTES or NUMBER or DURATION", 1, 30),
+ err: logqlmodel.NewParseError("syntax error: unexpected ip", 1, 30),
},
{
in: `{ foo = "bar" }|logfmt|addr<ip("1.2.3.4")`,
- err: logqlmodel.NewParseError("syntax error: unexpected ip, expecting BYTES or NUMBER or DURATION", 1, 29),
+ err: logqlmodel.NewParseError("syntax error: unexpected ip", 1, 29),
},
{
in: `{ foo = "bar" }|logfmt|addr=ip("1.2.3.4")`,
|
feat
|
Support negative numbers in LogQL (#13091)
|
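
A note on the parse-side mechanics in the record above (a minimal sketch with hypothetical names; the real parser is generated by goyacc from expr.y): the lexer keeps emitting SUB followed by NUMBER for input like `-123`, and the grammar decides whether that minus is a unary sign folded into a literal (`mustNewLiteralExpr(exprDollar[2].str, true)`) or a binary subtraction, which is why the number filters can read the signed value straight from `exprDollar[3].LiteralExpr.Val` instead of re-parsing the token text with `mustNewFloat`.

package main

import (
	"fmt"
	"strconv"
)

// literalExpr stands in for the parser's LiteralExpr; only Val is modeled here.
type literalExpr struct{ Val float64 }

// newLiteralExpr parses s and optionally negates it. Folding the sign in at
// parse time, after tokenization, keeps expressions like `34-123` unambiguous:
// the lexer never has to guess whether a minus starts a negative number.
func newLiteralExpr(s string, invert bool) (literalExpr, error) {
	v, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return literalExpr{}, err
	}
	if invert {
		v = -v
	}
	return literalExpr{Val: v}, nil
}

func main() {
	lit, _ := newLiteralExpr("123.45", true)
	fmt.Println(lit.Val) // -123.45
}
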
25c26ec27d09e6d42bee42a02e22bcf9739bad7e
|
2023-07-13 15:31:04
|
Robert Jacob
|
operator: Fix update of labels and annotations of PodTemplates (#9860)
| false
|
diff --git a/operator/internal/manifests/mutate.go b/operator/internal/manifests/mutate.go
index fbf234f94fa40..27421750bf2cc 100644
--- a/operator/internal/manifests/mutate.go
+++ b/operator/internal/manifests/mutate.go
@@ -16,14 +16,24 @@ import (
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)
-// MutateFuncFor returns a mutate function based on the
-// existing resource's concrete type. It supports currently
-// only the following types or else panics:
-// - ConfigMap
-// - Service
-// - Deployment
-// - StatefulSet
-// - ServiceMonitor
+// MutateFuncFor returns a mutate function based on the existing resource's concrete type.
+// It currently supports the following types and will return an error for other types:
+//
+// - ConfigMap
+// - Secret
+// - Service
+// - ServiceAccount
+// - ClusterRole
+// - ClusterRoleBinding
+// - Role
+// - RoleBinding
+// - Deployment
+// - StatefulSet
+// - ServiceMonitor
+// - Ingress
+// - Route
+// - PrometheusRule
+// - PodDisruptionBudget
func MutateFuncFor(existing, desired client.Object, depAnnotations map[string]string) controllerutil.MutateFn {
return func() error {
existingAnnotations := existing.GetAnnotations()
@@ -61,7 +71,7 @@ func MutateFuncFor(existing, desired client.Object, depAnnotations map[string]st
case *corev1.Service:
svc := existing.(*corev1.Service)
wantSvc := desired.(*corev1.Service)
- return mutateService(svc, wantSvc)
+ mutateService(svc, wantSvc)
case *corev1.ServiceAccount:
sa := existing.(*corev1.ServiceAccount)
@@ -91,12 +101,12 @@ func MutateFuncFor(existing, desired client.Object, depAnnotations map[string]st
case *appsv1.Deployment:
dpl := existing.(*appsv1.Deployment)
wantDpl := desired.(*appsv1.Deployment)
- return mutateDeployment(dpl, wantDpl)
+ mutateDeployment(dpl, wantDpl)
case *appsv1.StatefulSet:
sts := existing.(*appsv1.StatefulSet)
wantSts := desired.(*appsv1.StatefulSet)
- return mutateStatefulSet(sts, wantSts)
+ mutateStatefulSet(sts, wantSts)
case *monitoringv1.ServiceMonitor:
svcMonitor := existing.(*monitoringv1.ServiceMonitor)
@@ -138,14 +148,6 @@ func mergeWithOverride(dst, src interface{}) error {
return nil
}
-func mergeWithOverrideEmpty(dst, src interface{}) error {
- err := mergo.Merge(dst, src, mergo.WithOverwriteWithEmptyValue)
- if err != nil {
- return kverrors.Wrap(err, "unable to mergeWithOverrideEmpty", "dst", dst, "src", src)
- }
- return nil
-}
-
func mutateConfigMap(existing, desired *corev1.ConfigMap) {
existing.Annotations = desired.Annotations
existing.Labels = desired.Labels
@@ -217,41 +219,30 @@ func mutatePrometheusRule(existing, desired *monitoringv1.PrometheusRule) {
existing.Spec = desired.Spec
}
-func mutateService(existing, desired *corev1.Service) error {
+func mutateService(existing, desired *corev1.Service) {
existing.Spec.Ports = desired.Spec.Ports
- if err := mergeWithOverride(&existing.Spec.Selector, desired.Spec.Selector); err != nil {
- return err
- }
- return nil
+ existing.Spec.Selector = desired.Spec.Selector
}
-func mutateDeployment(existing, desired *appsv1.Deployment) error {
+func mutateDeployment(existing, desired *appsv1.Deployment) {
// Deployment selector is immutable so we set this value only if
// a new object is going to be created
if existing.CreationTimestamp.IsZero() {
existing.Spec.Selector = desired.Spec.Selector
}
existing.Spec.Replicas = desired.Spec.Replicas
- if err := mergeWithOverrideEmpty(&existing.Spec.Template, desired.Spec.Template); err != nil {
- return err
- }
- if err := mergeWithOverride(&existing.Spec.Strategy, desired.Spec.Strategy); err != nil {
- return err
- }
- return nil
+ existing.Spec.Strategy = desired.Spec.Strategy
+ mutatePodTemplate(&existing.Spec.Template, &desired.Spec.Template)
}
-func mutateStatefulSet(existing, desired *appsv1.StatefulSet) error {
+func mutateStatefulSet(existing, desired *appsv1.StatefulSet) {
// StatefulSet selector is immutable so we set this value only if
// a new object is going to be created
if existing.CreationTimestamp.IsZero() {
existing.Spec.Selector = desired.Spec.Selector
}
existing.Spec.Replicas = desired.Spec.Replicas
- if err := mergeWithOverrideEmpty(&existing.Spec.Template, desired.Spec.Template); err != nil {
- return err
- }
- return nil
+ mutatePodTemplate(&existing.Spec.Template, &desired.Spec.Template)
}
func mutatePodDisruptionBudget(existing, desired *policyv1.PodDisruptionBudget) {
@@ -259,3 +250,19 @@ func mutatePodDisruptionBudget(existing, desired *policyv1.PodDisruptionBudget)
existing.Labels = desired.Labels
existing.Spec = desired.Spec
}
+
+func mutatePodTemplate(existing, desired *corev1.PodTemplateSpec) {
+ existing.Annotations = desired.Annotations
+ existing.Labels = desired.Labels
+ mutatePodSpec(&existing.Spec, &desired.Spec)
+}
+
+func mutatePodSpec(existing *corev1.PodSpec, desired *corev1.PodSpec) {
+ existing.Affinity = desired.Affinity
+ existing.Containers = desired.Containers
+ existing.InitContainers = desired.InitContainers
+ existing.NodeSelector = desired.NodeSelector
+ existing.Tolerations = desired.Tolerations
+ existing.TopologySpreadConstraints = desired.TopologySpreadConstraints
+ existing.Volumes = desired.Volumes
+}
diff --git a/operator/internal/manifests/mutate_test.go b/operator/internal/manifests/mutate_test.go
index c58851bc956f6..4fac03e606569 100644
--- a/operator/internal/manifests/mutate_test.go
+++ b/operator/internal/manifests/mutate_test.go
@@ -488,7 +488,7 @@ func TestGetMutateFunc_MutateRoleBinding(t *testing.T) {
require.Exactly(t, got.Subjects, want.Subjects)
}
-func TestGeMutateFunc_MutateDeploymentSpec(t *testing.T) {
+func TestMutateFuncFor_MutateDeploymentSpec(t *testing.T) {
type test struct {
name string
got *appsv1.Deployment
@@ -591,6 +591,39 @@ func TestGeMutateFunc_MutateDeploymentSpec(t *testing.T) {
},
},
},
+ {
+ name: "remove extra annotations and labels on pod",
+ got: &appsv1.Deployment{
+ Spec: appsv1.DeploymentSpec{
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Annotations: map[string]string{
+ "first-key": "first-value",
+ "second-key": "second-value",
+ },
+ Labels: map[string]string{
+ "first-key": "first-value",
+ "second-key": "second-value",
+ },
+ },
+ },
+ },
+ },
+ want: &appsv1.Deployment{
+ Spec: appsv1.DeploymentSpec{
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Annotations: map[string]string{
+ "first-key": "first-value",
+ },
+ Labels: map[string]string{
+ "first-key": "first-value",
+ },
+ },
+ },
+ },
+ },
+ },
}
for _, tst := range table {
tst := tst
@@ -615,7 +648,7 @@ func TestGeMutateFunc_MutateDeploymentSpec(t *testing.T) {
}
}
-func TestGeMutateFunc_MutateStatefulSetSpec(t *testing.T) {
+func TestMutateFuncFor_MutateStatefulSetSpec(t *testing.T) {
type test struct {
name string
got *appsv1.StatefulSet
@@ -748,6 +781,39 @@ func TestGeMutateFunc_MutateStatefulSetSpec(t *testing.T) {
},
},
},
+ {
+ name: "remove extra annotations and labels on pod",
+ got: &appsv1.StatefulSet{
+ Spec: appsv1.StatefulSetSpec{
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Annotations: map[string]string{
+ "first-key": "first-value",
+ "second-key": "second-value",
+ },
+ Labels: map[string]string{
+ "first-key": "first-value",
+ "second-key": "second-value",
+ },
+ },
+ },
+ },
+ },
+ want: &appsv1.StatefulSet{
+ Spec: appsv1.StatefulSetSpec{
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Annotations: map[string]string{
+ "first-key": "first-value",
+ },
+ Labels: map[string]string{
+ "first-key": "first-value",
+ },
+ },
+ },
+ },
+ },
+ },
}
for _, tst := range table {
tst := tst
|
operator
|
Fix update of labels and annotations of PodTemplates (#9860)
|
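
The behavioural point of the operator change above is that plain field assignment, unlike the removed mergo-based mergeWithOverrideEmpty, drops keys that exist on the cluster object but not in the desired spec. A small self-contained sketch of the difference, reusing the keys from the new test cases (illustrative only, not operator code):

package main

import "fmt"

func main() {
	existing := map[string]string{"first-key": "first-value", "second-key": "second-value"}
	desired := map[string]string{"first-key": "first-value"}

	// Merge semantics: desired keys overwrite, but stale keys survive.
	merged := map[string]string{}
	for k, v := range existing {
		merged[k] = v
	}
	for k, v := range desired {
		merged[k] = v
	}
	fmt.Println(merged) // second-key is still present

	// Assignment semantics, as mutatePodTemplate now uses: stale keys are gone.
	existing = desired
	fmt.Println(existing) // only first-key remains
}
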
37875968c76c31623bce51bf88e48494ac88ac5d
|
2019-11-11 21:56:32
|
Sándor Guba
|
fluentd: Suppress unread configuration warning (#1242)
| false
|
diff --git a/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec b/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec
index 8bd4b77c77ff0..85a6d845764f7 100644
--- a/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec
+++ b/fluentd/fluent-plugin-grafana-loki/fluent-plugin-grafana-loki.gemspec
@@ -4,7 +4,7 @@ $LOAD_PATH.push File.expand_path('lib', __dir__)
Gem::Specification.new do |spec|
spec.name = 'fluent-plugin-grafana-loki'
- spec.version = '1.2.1'
+ spec.version = '1.2.2'
spec.authors = %w[woodsaj briangann]
spec.email = ['[email protected]', '[email protected]']
diff --git a/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb b/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb
index f8d5d0164a726..6a3a6484e5e24 100644
--- a/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb
+++ b/fluentd/fluent-plugin-grafana-loki/lib/fluent/plugin/out_loki.rb
@@ -76,7 +76,7 @@ def configure(conf)
@record_accessors = {}
conf.elements.select { |element| element.name == 'label' }.each do |element|
element.each_pair do |k, v|
- element.key?(k) # to suppress unread configuration warning
+ element.has_key?(k) # rubocop:disable Style/PreferredHashMethods #to suppress unread configuration warning
v = k if v.empty?
@record_accessors[k] = record_accessor_create(v)
end
@@ -86,11 +86,13 @@ def configure(conf)
@remove_keys_accessors.push(record_accessor_create(key))
end
- @cert = OpenSSL::X509::Certificate.new(File.read(@cert)) if @cert
- @key = OpenSSL::PKey.read(File.read(key)) if @key
+ if [email protected]? && [email protected]?
+ @cert = OpenSSL::X509::Certificate.new(File.read(@cert)) if @cert
+ @key = OpenSSL::PKey.read(File.read(@key)) if @key
- if [email protected]_a?(OpenSSL::PKey::RSA) && [email protected]_a?(OpenSSL::PKey::DSA)
- raise "Unsupported private key type #{key.class}"
+ if [email protected]_a?(OpenSSL::PKey::RSA) && [email protected]_a?(OpenSSL::PKey::DSA)
+ raise "Unsupported private key type #{key.class}"
+ end
end
if !@ca_cert.nil? && !File.exist?(@ca_cert)
|
fluentd
|
Suppress unread configuration warning (#1242)
|
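
The Ruby fix above makes certificate loading conditional on both halves being configured and corrects File.read(key) to File.read(@key). For comparison, a hedged Go sketch of the same guard pattern using the standard library (not Loki code; crypto/tls validates the key pair for us):

package main

import (
	"crypto/tls"
	"log"
)

// loadClientCert returns nil when TLS client auth is not configured,
// instead of failing while trying to read a missing file.
func loadClientCert(certFile, keyFile string) (*tls.Certificate, error) {
	if certFile == "" || keyFile == "" {
		return nil, nil
	}
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &cert, nil
}

func main() {
	cert, err := loadClientCert("", "")
	if err != nil {
		log.Fatal(err)
	}
	log.Println("client cert configured:", cert != nil)
}
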
cdb8569164e8d999634d2438ff813893ff4f713c
|
2025-02-03 19:53:26
|
Paul Rogers
|
chore(deps): Disable tailwind updates (#16060)
| false
|
diff --git a/.github/renovate.json5 b/.github/renovate.json5
index 494ced9f54329..d29c5cc13ca7d 100644
--- a/.github/renovate.json5
+++ b/.github/renovate.json5
@@ -36,6 +36,12 @@
"matchManagers": ["kustomize"],
"enabled": false
},
+ {
+ // Disable certain npm updates for compatibility reasons
+ "matchManagers": ["npm"],
+ "matchPackageNames": ["tailwindcss"],
+ "enabled": false
+ },
{
// Don't automatically merge GitHub Actions updates
"matchManagers": ["github-actions"],
|
chore
|
Disable tailwind updates (#16060)
|
793a689d1ffd78333fe651d81addfaa180ae8886
|
2023-03-24 02:20:59
|
Bryan Boreham
|
iterators: re-implement mergeEntryIterator using loser.Tree for performance (#8637)
| false
|
diff --git a/pkg/iter/entry_iterator.go b/pkg/iter/entry_iterator.go
index 5b57539f232bb..8863debbf72a5 100644
--- a/pkg/iter/entry_iterator.go
+++ b/pkg/iter/entry_iterator.go
@@ -1,7 +1,6 @@
package iter
import (
- "container/heap"
"context"
"io"
"math"
@@ -57,60 +56,19 @@ func (i *streamIterator) Close() error {
return nil
}
-type iteratorHeap []EntryIterator
-
-func (h iteratorHeap) Len() int { return len(h) }
-func (h iteratorHeap) Swap(i, j int) { h[i], h[j] = h[j], h[i] }
-func (h iteratorHeap) Peek() EntryIterator { return h[0] }
-func (h *iteratorHeap) Push(x interface{}) {
- *h = append(*h, x.(EntryIterator))
-}
-
-func (h *iteratorHeap) Pop() interface{} {
- old := *h
- n := len(old)
- x := old[n-1]
- *h = old[0 : n-1]
- return x
-}
-
-type iteratorSortHeap struct {
- iteratorHeap
- byAscendingTime bool
-}
-
-func (h iteratorSortHeap) Less(i, j int) bool {
- t1, t2 := h.iteratorHeap[i].Entry().Timestamp.UnixNano(), h.iteratorHeap[j].Entry().Timestamp.UnixNano()
- if t1 == t2 {
- return h.iteratorHeap[i].StreamHash() < h.iteratorHeap[j].StreamHash()
- }
- if h.byAscendingTime {
- return t1 < t2
- }
- return t1 > t2
-}
-
// HeapIterator iterates over a heap of iterators with ability to push new iterators and get some properties like time of entry at peek and len
// Not safe for concurrent use
type HeapIterator interface {
EntryIterator
Peek() time.Time
- Len() int
+ IsEmpty() bool
Push(EntryIterator)
}
// mergeEntryIterator iterates over a heap of iterators and merge duplicate entries.
type mergeEntryIterator struct {
- heap interface {
- heap.Interface
- Peek() EntryIterator
- }
- is []EntryIterator
- // pushBuffer contains the list of iterators that needs to be pushed to the heap
- // This is to avoid allocations.
- pushBuffer []EntryIterator
- prefetched bool
- stats *stats.Context
+ tree *loser.Tree[sortFields, EntryIterator]
+ stats *stats.Context
// buffer of entries to be returned by Next()
// We buffer entries with the same timestamp to correctly dedupe them.
@@ -124,104 +82,64 @@ type mergeEntryIterator struct {
// This means using this iterator with a single iterator will result in the same result as the input iterator.
// If you don't need to deduplicate entries, use `NewSortEntryIterator` instead.
func NewMergeEntryIterator(ctx context.Context, is []EntryIterator, direction logproto.Direction) HeapIterator {
- result := &mergeEntryIterator{is: is, stats: stats.FromContext(ctx)}
- switch direction {
- case logproto.BACKWARD:
- result.heap = &iteratorSortHeap{iteratorHeap: make([]EntryIterator, 0, len(is)), byAscendingTime: false}
- case logproto.FORWARD:
- result.heap = &iteratorSortHeap{iteratorHeap: make([]EntryIterator, 0, len(is)), byAscendingTime: true}
- default:
- panic("bad direction")
- }
-
+ maxVal, less := treeLess(direction)
+ result := &mergeEntryIterator{stats: stats.FromContext(ctx)}
+ result.tree = loser.New(is, maxVal, sortFieldsAt, less, result.closeEntry)
result.buffer = make([]entryWithLabels, 0, len(is))
- result.pushBuffer = make([]EntryIterator, 0, len(is))
return result
}
-// prefetch iterates over all inner iterators to merge together, calls Next() on
-// each of them to prefetch the first entry and pushes of them - who are not
-// empty - to the heap
-func (i *mergeEntryIterator) prefetch() {
- if i.prefetched {
- return
- }
-
- i.prefetched = true
- for _, it := range i.is {
- i.requeue(it, false)
- }
-
- // We can now clear the list of input iterators to merge, given they have all
- // been processed and the non empty ones have been pushed to the heap
- i.is = nil
-}
-
-// requeue pushes the input ei EntryIterator to the heap, advancing it via an ei.Next()
-// call unless the advanced input parameter is true. In this latter case it expects that
-// the iterator has already been advanced before calling requeue().
-//
-// If the iterator has no more entries or an error occur while advancing it, the iterator
-// is not pushed to the heap and any possible error captured, so that can be get via Error().
-func (i *mergeEntryIterator) requeue(ei EntryIterator, advanced bool) {
- if advanced || ei.Next() {
- heap.Push(i.heap, ei)
- return
- }
-
- if err := ei.Error(); err != nil {
+func (i *mergeEntryIterator) closeEntry(e EntryIterator) {
+ if err := e.Error(); err != nil {
i.errs = append(i.errs, err)
}
- util.LogError("closing iterator", ei.Close)
+ util.LogError("closing iterator", e.Close)
+
}
func (i *mergeEntryIterator) Push(ei EntryIterator) {
- i.requeue(ei, false)
+ i.tree.Push(ei)
}
-// Next pop iterators from the heap until it finds an entry with a different timestamp or stream hash.
-// For each iterators we also buffer entries with the current timestamp and stream hash deduping as we loop.
-// If the iterator is not fully exhausted, it is pushed back to the heap.
+// Next fetches entries from the tree until it finds an entry with a different timestamp or stream hash.
+// Generally i.buffer has one or more entries with the same timestamp and stream hash,
+// followed by one more item where the timestamp or stream hash was different.
func (i *mergeEntryIterator) Next() bool {
- i.prefetch()
-
- if len(i.buffer) != 0 {
- i.nextFromBuffer()
- return true
+ if len(i.buffer) < 2 {
+ i.fillBuffer()
}
-
- if i.heap.Len() == 0 {
+ if len(i.buffer) == 0 {
return false
}
+ i.nextFromBuffer()
- // shortcut for the last iterator.
- if i.heap.Len() == 1 {
- i.currEntry.Entry = i.heap.Peek().Entry()
- i.currEntry.labels = i.heap.Peek().Labels()
- i.currEntry.streamHash = i.heap.Peek().StreamHash()
+ return true
+}
- if !i.heap.Peek().Next() {
- i.heap.Pop()
- }
- return true
+func (i *mergeEntryIterator) fillBuffer() {
+ if !i.tree.Next() {
+ return
}
+ // At this point we have zero or one items in i.buffer, and the next item is available from i.tree.
// We support multiple entries with the same timestamp, and we want to
- // preserve their original order. We look at all the top entries in the
- // heap with the same timestamp, and pop the ones whose common value
- // occurs most often.
-Outer:
- for i.heap.Len() > 0 {
- next := i.heap.Peek()
+ // preserve their original order.
+ // Entries with identical timestamp and line are removed as duplicates.
+ for {
+ next := i.tree.Winner()
entry := next.Entry()
- if len(i.buffer) > 0 &&
+ previous := i.buffer
+ i.buffer = append(i.buffer, entryWithLabels{
+ Entry: entry,
+ labels: next.Labels(),
+ streamHash: next.StreamHash(),
+ })
+ if len(i.buffer) > 1 &&
(i.buffer[0].streamHash != next.StreamHash() ||
!i.buffer[0].Entry.Timestamp.Equal(entry.Timestamp)) {
break
}
- heap.Pop(i.heap)
- previous := i.buffer
var dupe bool
for _, t := range previous {
if t.Entry.Line == entry.Line {
@@ -230,52 +148,24 @@ Outer:
break
}
}
- if !dupe {
- i.buffer = append(i.buffer, entryWithLabels{
- Entry: entry,
- labels: next.Labels(),
- streamHash: next.StreamHash(),
- })
+ if dupe {
+ i.buffer = previous
}
- inner:
- for {
- if !next.Next() {
- continue Outer
- }
- entry := next.Entry()
- if next.StreamHash() != i.buffer[0].streamHash ||
- !entry.Timestamp.Equal(i.buffer[0].Entry.Timestamp) {
- break
- }
- for _, t := range previous {
- if t.Entry.Line == entry.Line {
- i.stats.AddDuplicates(1)
- continue inner
- }
- }
- i.buffer = append(i.buffer, entryWithLabels{
- Entry: entry,
- labels: next.Labels(),
- streamHash: next.StreamHash(),
- })
+ if !i.tree.Next() {
+ break
}
- i.pushBuffer = append(i.pushBuffer, next)
- }
-
- for _, ei := range i.pushBuffer {
- heap.Push(i.heap, ei)
}
- i.pushBuffer = i.pushBuffer[:0]
-
- i.nextFromBuffer()
-
- return true
}
func (i *mergeEntryIterator) nextFromBuffer() {
i.currEntry.Entry = i.buffer[0].Entry
i.currEntry.labels = i.buffer[0].labels
i.currEntry.streamHash = i.buffer[0].streamHash
+ if len(i.buffer) == 2 {
+ i.buffer[0] = i.buffer[1]
+ i.buffer = i.buffer[:1]
+ return
+ }
if len(i.buffer) == 1 {
i.buffer = i.buffer[:0]
return
@@ -305,26 +195,27 @@ func (i *mergeEntryIterator) Error() error {
}
func (i *mergeEntryIterator) Close() error {
- for i.heap.Len() > 0 {
- if err := i.heap.Pop().(EntryIterator).Close(); err != nil {
- return err
- }
- }
+ i.tree.Close()
i.buffer = nil
- return nil
+ return i.Error()
}
func (i *mergeEntryIterator) Peek() time.Time {
- i.prefetch()
-
- return i.heap.Peek().Entry().Timestamp
+ if len(i.buffer) == 0 {
+ i.fillBuffer()
+ }
+ if len(i.buffer) == 0 {
+ return time.Time{}
+ }
+ return i.buffer[0].Timestamp
}
-// Len returns the number of inner iterators on the heap, still having entries
-func (i *mergeEntryIterator) Len() int {
- i.prefetch()
-
- return i.heap.Len()
+// IsEmpty returns true if there are no more entries to pull.
+func (i *mergeEntryIterator) IsEmpty() bool {
+ if len(i.buffer) == 0 {
+ i.fillBuffer()
+ }
+ return len(i.buffer) == 0
}
type entrySortIterator struct {
diff --git a/pkg/iter/entry_iterator_test.go b/pkg/iter/entry_iterator_test.go
index 16d7ad9a69fd5..8c05900c7b69b 100644
--- a/pkg/iter/entry_iterator_test.go
+++ b/pkg/iter/entry_iterator_test.go
@@ -165,8 +165,8 @@ func TestMergeIteratorPrefetch(t *testing.T) {
type tester func(t *testing.T, i HeapIterator)
tests := map[string]tester{
- "prefetch on Len() when called as first method": func(t *testing.T, i HeapIterator) {
- assert.Equal(t, 2, i.Len())
+ "prefetch on IsEmpty() when called as first method": func(t *testing.T, i HeapIterator) {
+ assert.Equal(t, false, i.IsEmpty())
},
"prefetch on Peek() when called as first method": func(t *testing.T, i HeapIterator) {
assert.Equal(t, time.Unix(0, 0), i.Peek())
@@ -520,8 +520,8 @@ func Test_DuplicateCount(t *testing.T) {
{
"replication 2 b",
[]EntryIterator{
- NewStreamIterator(stream),
- NewStreamIterator(stream),
+ mustReverseStreamIterator(NewStreamIterator(stream)),
+ mustReverseStreamIterator(NewStreamIterator(stream)),
},
logproto.BACKWARD,
3,
@@ -556,9 +556,9 @@ func Test_DuplicateCount(t *testing.T) {
{
"replication 3 b",
[]EntryIterator{
- NewStreamIterator(stream),
- NewStreamIterator(stream),
- NewStreamIterator(stream),
+ mustReverseStreamIterator(NewStreamIterator(stream)),
+ mustReverseStreamIterator(NewStreamIterator(stream)),
+ mustReverseStreamIterator(NewStreamIterator(stream)),
NewStreamIterator(logproto.Stream{
Entries: []logproto.Entry{
{
@@ -886,15 +886,21 @@ func TestDedupeMergeEntryIterator(t *testing.T) {
}),
}, logproto.FORWARD)
require.True(t, it.Next())
- require.Equal(t, "0", it.Entry().Line)
+ lines := []string{it.Entry().Line}
require.Equal(t, time.Unix(1, 0), it.Entry().Timestamp)
require.True(t, it.Next())
- require.Equal(t, "2", it.Entry().Line)
+ lines = append(lines, it.Entry().Line)
require.Equal(t, time.Unix(1, 0), it.Entry().Timestamp)
require.True(t, it.Next())
- require.Equal(t, "1", it.Entry().Line)
+ lines = append(lines, it.Entry().Line)
require.Equal(t, time.Unix(1, 0), it.Entry().Timestamp)
require.True(t, it.Next())
- require.Equal(t, "3", it.Entry().Line)
+ lines = append(lines, it.Entry().Line)
require.Equal(t, time.Unix(2, 0), it.Entry().Timestamp)
+ // Two orderings are consistent with the inputs.
+ if lines[0] == "1" {
+ require.Equal(t, []string{"1", "0", "2", "3"}, lines)
+ } else {
+ require.Equal(t, []string{"0", "2", "1", "3"}, lines)
+ }
}
diff --git a/pkg/querier/tail.go b/pkg/querier/tail.go
index e46732fb803d5..09e785a13b3e6 100644
--- a/pkg/querier/tail.go
+++ b/pkg/querier/tail.go
@@ -243,7 +243,7 @@ func (t *Tailer) next() bool {
t.streamMtx.Lock()
defer t.streamMtx.Unlock()
- if t.openStreamIterator.Len() == 0 || !time.Now().After(t.openStreamIterator.Peek().Add(t.delayFor)) || !t.openStreamIterator.Next() {
+ if t.openStreamIterator.IsEmpty() || !time.Now().After(t.openStreamIterator.Peek().Add(t.delayFor)) || !t.openStreamIterator.Next() {
return false
}
diff --git a/pkg/querier/tail_test.go b/pkg/querier/tail_test.go
index dbd9b9b1c9811..a2186cd614f73 100644
--- a/pkg/querier/tail_test.go
+++ b/pkg/querier/tail_test.go
@@ -218,7 +218,7 @@ func isTailerOpenStreamsConsumed(tailer *Tailer) bool {
tailer.streamMtx.Lock()
defer tailer.streamMtx.Unlock()
- return tailer.openStreamIterator.Len() == 0 || tailer.openStreamIterator.Peek() == time.Unix(0, 0)
+ return tailer.openStreamIterator.IsEmpty() || tailer.openStreamIterator.Peek() == time.Unix(0, 0)
}
func countEntriesInStreams(streams []logproto.Stream) int {
diff --git a/pkg/util/loser/tree.go b/pkg/util/loser/tree.go
index 64aa840245406..07e7e71ffe62a 100644
--- a/pkg/util/loser/tree.go
+++ b/pkg/util/loser/tree.go
@@ -17,6 +17,7 @@ func New[E any, S Sequence](sequences []S, maxVal E, at func(S) E, less func(E,
}
for i, s := range sequences {
t.nodes[i+nSequences].items = s
+ t.moveNext(i + nSequences) // Must call Next on each item so that At() has a value.
}
if nSequences > 0 {
t.nodes[0].index = -1 // flag to be initialized on first call to Next().
@@ -75,6 +76,9 @@ func (t *Tree[E, S]) Next() bool {
t.initialize()
return t.nodes[t.nodes[0].index].index != -1
}
+ if t.nodes[t.nodes[0].index].index == -1 { // already exhausted
+ return false
+ }
t.moveNext(t.nodes[0].index)
t.replayGames(t.nodes[0].index)
return t.nodes[t.nodes[0].index].index != -1
@@ -85,7 +89,6 @@ func (t *Tree[E, S]) initialize() {
// Initialize leaf nodes as winners to start.
for i := len(t.nodes) / 2; i < len(t.nodes); i++ {
winners[i] = i
- t.moveNext(i) // Must call Next on each item so that At() has a value.
}
for i := len(t.nodes) - 2; i > 0; i -= 2 {
// At each stage the winners play each other, and we record the loser in the node.
@@ -126,3 +129,38 @@ func (t *Tree[E, S]) playGame(a, b int) (loser, winner int) {
}
func parent(i int) int { return i / 2 }
+
+// Add a new sequence to the merge set
+func (t *Tree[E, S]) Push(sequence S) {
+ // First, see if we can replace one that was previously finished.
+ for newPos := len(t.nodes) / 2; newPos < len(t.nodes); newPos++ {
+ if t.nodes[newPos].index == -1 {
+ t.nodes[newPos].index = newPos
+ t.nodes[newPos].items = sequence
+ t.moveNext(newPos)
+ t.nodes[0].index = -1 // flag for re-initialize on next call to Next()
+ return
+ }
+ }
+ // We need to expand the tree. Pick the next biggest power of 2 to amortise resizing cost.
+ size := 1
+ for ; size <= len(t.nodes)/2; size *= 2 {
+ }
+ newPos := size + len(t.nodes)/2
+ newNodes := make([]node[E, S], size*2)
+ // Copy data over and fix up the indexes.
+ for i, n := range t.nodes[len(t.nodes)/2:] {
+ newNodes[i+size] = n
+ newNodes[i+size].index = i + size
+ }
+ t.nodes = newNodes
+ t.nodes[newPos].index = newPos
+ t.nodes[newPos].items = sequence
+ // Mark all the empty nodes we have added as finished.
+ for i := newPos + 1; i < len(t.nodes); i++ {
+ t.nodes[i].index = -1
+ t.nodes[i].value = t.maxVal
+ }
+ t.moveNext(newPos)
+ t.nodes[0].index = -1 // flag for re-initialize on next call to Next()
+}
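Usage of the new Push method, sketched from the constructor signature and the Winner/Next calls visible in tree_test.go below. The toy sequence type is mine, and it assumes the Sequence constraint only requires Next() bool, as the test's List type suggests; the import path assumes the in-repo layout:

package main

import (
	"fmt"
	"math"

	"github.com/grafana/loki/v3/pkg/util/loser"
)

// ints is a toy sequence for the sketch: Next advances, At reads the current value.
type ints struct {
	vals []uint64
	cur  uint64
}

func (s *ints) Next() bool {
	if len(s.vals) == 0 {
		return false
	}
	s.cur, s.vals = s.vals[0], s.vals[1:]
	return true
}

func (s *ints) At() uint64 { return s.cur }

func main() {
	at := func(s *ints) uint64 { return s.At() }
	less := func(a, b uint64) bool { return a < b }
	closeFn := func(*ints) {}

	// Start with no sequences, then add them dynamically via Push.
	lt := loser.New(nil, math.MaxUint64, at, less, closeFn)
	lt.Push(&ints{vals: []uint64{1, 4}})
	lt.Push(&ints{vals: []uint64{2, 3, 5}})

	for lt.Next() {
		fmt.Println(at(lt.Winner())) // prints 1 2 3 4 5, merged in order
	}
}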
diff --git a/pkg/util/loser/tree_test.go b/pkg/util/loser/tree_test.go
index 7d06b07b481d3..10203a5e7fd40 100644
--- a/pkg/util/loser/tree_test.go
+++ b/pkg/util/loser/tree_test.go
@@ -55,62 +55,85 @@ func checkIterablesEqual[E any, S1 loser.Sequence, S2 loser.Sequence](t *testing
}
}
+var testCases = []struct {
+ name string
+ args []*List
+ want *List
+}{
+ {
+ name: "empty input",
+ want: NewList(),
+ },
+ {
+ name: "one list",
+ args: []*List{NewList(1, 2, 3, 4)},
+ want: NewList(1, 2, 3, 4),
+ },
+ {
+ name: "two lists",
+ args: []*List{NewList(3, 4, 5), NewList(1, 2)},
+ want: NewList(1, 2, 3, 4, 5),
+ },
+ {
+ name: "two lists, first empty",
+ args: []*List{NewList(), NewList(1, 2)},
+ want: NewList(1, 2),
+ },
+ {
+ name: "two lists, second empty",
+ args: []*List{NewList(1, 2), NewList()},
+ want: NewList(1, 2),
+ },
+ {
+ name: "two lists b",
+ args: []*List{NewList(1, 2), NewList(3, 4, 5)},
+ want: NewList(1, 2, 3, 4, 5),
+ },
+ {
+ name: "two lists c",
+ args: []*List{NewList(1, 3), NewList(2, 4, 5)},
+ want: NewList(1, 2, 3, 4, 5),
+ },
+ {
+ name: "three lists",
+ args: []*List{NewList(1, 3), NewList(2, 4), NewList(5)},
+ want: NewList(1, 2, 3, 4, 5),
+ },
+}
+
func TestMerge(t *testing.T) {
- tests := []struct {
- name string
- args []*List
- want *List
- }{
- {
- name: "empty input",
- want: NewList(),
- },
- {
- name: "one list",
- args: []*List{NewList(1, 2, 3, 4)},
- want: NewList(1, 2, 3, 4),
- },
- {
- name: "two lists",
- args: []*List{NewList(3, 4, 5), NewList(1, 2)},
- want: NewList(1, 2, 3, 4, 5),
- },
- {
- name: "two lists, first empty",
- args: []*List{NewList(), NewList(1, 2)},
- want: NewList(1, 2),
- },
- {
- name: "two lists, second empty",
- args: []*List{NewList(1, 2), NewList()},
- want: NewList(1, 2),
- },
- {
- name: "two lists b",
- args: []*List{NewList(1, 2), NewList(3, 4, 5)},
- want: NewList(1, 2, 3, 4, 5),
- },
- {
- name: "two lists c",
- args: []*List{NewList(1, 3), NewList(2, 4, 5)},
- want: NewList(1, 2, 3, 4, 5),
- },
- {
- name: "three lists",
- args: []*List{NewList(1, 3), NewList(2, 4), NewList(5)},
- want: NewList(1, 2, 3, 4, 5),
- },
- }
- for _, tt := range tests {
+ at := func(s *List) uint64 { return s.At() }
+ less := func(a, b uint64) bool { return a < b }
+ at2 := func(s *loser.Tree[uint64, *List]) uint64 { return s.Winner().At() }
+ for _, tt := range testCases {
t.Run(tt.name, func(t *testing.T) {
- at := func(s *List) uint64 { return s.At() }
- less := func(a, b uint64) bool { return a < b }
numCloses := 0
close := func(s *List) {
numCloses++
}
lt := loser.New(tt.args, math.MaxUint64, at, less, close)
- at2 := func(s *loser.Tree[uint64, *List]) uint64 { return s.Winner().At() }
+ checkIterablesEqual(t, tt.want, lt, at, at2, less)
+ if numCloses != len(tt.args) {
+ t.Errorf("Expected %d closes, got %d", len(tt.args), numCloses)
+ }
+ })
+ }
+}
+
+func TestPush(t *testing.T) {
+ at := func(s *List) uint64 { return s.At() }
+ less := func(a, b uint64) bool { return a < b }
+ at2 := func(s *loser.Tree[uint64, *List]) uint64 { return s.Winner().At() }
+ for _, tt := range testCases {
+ t.Run(tt.name, func(t *testing.T) {
+ numCloses := 0
+ close := func(s *List) {
+ numCloses++
+ }
+ lt := loser.New(nil, math.MaxUint64, at, less, close)
+ for _, s := range tt.args {
+ lt.Push(s)
+ }
checkIterablesEqual(t, tt.want, lt, at, at2, less)
if numCloses != len(tt.args) {
t.Errorf("Expected %d closes, got %d", len(tt.args), numCloses)
type: iterators | masked_commit_message: re-implement mergeEntryIterator using loser.Tree for performance (#8637)

hash: c263a681f8e19417ea3056a3e2cae7d3015d081a | date: 2024-07-12 13:41:14 | author: Salva Corts | commit_message: fix: add logging to empty bloom (#13502) | is_merge: false
diff --git a/pkg/bloomgateway/processor.go b/pkg/bloomgateway/processor.go
index 4dc02fef435f7..9f0b2b8b8151d 100644
--- a/pkg/bloomgateway/processor.go
+++ b/pkg/bloomgateway/processor.go
@@ -167,7 +167,8 @@ func (p *processor) processBlock(_ context.Context, bq *bloomshipper.CloseableBl
iters = append(iters, it)
}
- fq := blockQuerier.Fuse(iters, p.logger)
+ logger := log.With(p.logger, "block", bq.BlockRef.String())
+ fq := blockQuerier.Fuse(iters, logger)
start := time.Now()
err = fq.Run()
diff --git a/pkg/storage/bloom/v1/fuse.go b/pkg/storage/bloom/v1/fuse.go
index 264e456161a05..2a6f74d590b85 100644
--- a/pkg/storage/bloom/v1/fuse.go
+++ b/pkg/storage/bloom/v1/fuse.go
@@ -299,15 +299,24 @@ func (fq *FusedQuerier) runSeries(schema Schema, series *SeriesWithOffsets, reqs
// Test each bloom individually
bloom := fq.bq.blooms.At()
- for j, req := range reqs {
- // TODO(owen-d): this is a stopgap to avoid filtering broken blooms until we find their cause.
- // In the case we don't have any data in the bloom, don't filter any chunks.
- if bloom.ScalableBloomFilter.Count() == 0 {
+
+ // TODO(owen-d): this is a stopgap to avoid filtering broken blooms until we find their cause.
+ // In the case we don't have any data in the bloom, don't filter any chunks.
+ if bloom.ScalableBloomFilter.Count() == 0 {
+ level.Warn(fq.logger).Log(
+ "msg", "Found bloom with no data",
+ "offset_page", offset.Page,
+ "offset_bytes", offset.ByteOffset,
+ )
+
+ for j := range reqs {
for k := range inputs[j].InBlooms {
inputs[j].found[k] = true
}
}
+ }
+ for j, req := range reqs {
// shortcut: series level removal
// we can skip testing chunk keys individually if the bloom doesn't match
// the query.
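The restructured guard "fails open": a bloom that contains no data cannot safely exclude anything, so every chunk is marked found before the per-request tests run. A generic sketch of that fail-open pattern, with hypothetical names:

package main

import "fmt"

// filterChunks marks which chunks may match. A filter carrying no data
// (count == 0) cannot safely exclude anything, so it fails open and keeps
// every chunk.
func filterChunks(filterCount uint, chunks []string, matches func(string) bool) []bool {
	found := make([]bool, len(chunks))
	if filterCount == 0 {
		for i := range found {
			found[i] = true
		}
		return found
	}
	for i, c := range chunks {
		found[i] = matches(c)
	}
	return found
}

func main() {
	chunks := []string{"c1", "c2"}
	never := func(string) bool { return false }
	fmt.Println(filterChunks(0, chunks, never)) // [true true]: empty filter keeps all
	fmt.Println(filterChunks(3, chunks, never)) // [false false]: populated filter applies
}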
type: fix | masked_commit_message: add logging to empty bloom (#13502)

hash: 600b6cf45932527d70e984704870f84f06ddce9e | date: 2019-08-13 03:15:02 | author: Robert Fratto | commit_message: makefile: disable building promtail with systemd support on non-amd64 platforms (#888) | is_merge: false
diff --git a/Makefile b/Makefile
index 4f5fb5325c588..1b5c5bbf82cae 100644
--- a/Makefile
+++ b/Makefile
@@ -161,11 +161,13 @@ PROMTAIL_DEBUG_GO_FLAGS := $(DEBUG_GO_FLAGS)
# Validate GOHOSTOS=linux && GOOS=linux to use CGO.
ifeq ($(shell go env GOHOSTOS),linux)
ifeq ($(shell go env GOOS),linux)
+ifeq ($(shell go env GOARCH),amd64)
PROMTAIL_CGO = 1
PROMTAIL_GO_FLAGS = $(DYN_GO_FLAGS)
PROMTAIL_DEBUG_GO_FLAGS = $(DYN_DEBUG_GO_FLAGS)
endif
endif
+endif
promtail: yacc cmd/promtail/promtail
promtail-debug: yacc cmd/promtail/promtail-debug
@@ -190,7 +192,7 @@ cmd/promtail/promtail-debug: $(APP_GO_FILES) pkg/promtail/server/ui/assets_vfsda
# Releasing #
#############
# concurrency is limited to 4 to prevent CircleCI from OOMing. Sorry
-GOX = gox $(GO_FLAGS) -parallel=4 -output="dist/{{.Dir}}_{{.OS}}_{{.Arch}}" -arch="amd64 arm64 arm" -os="linux"
+GOX = gox $(GO_FLAGS) -parallel=4 -output="dist/{{.Dir}}_{{.OS}}_{{.Arch}}" -arch="amd64 arm64 arm" -os="linux"
dist: clean
CGO_ENABLED=0 $(GOX) ./cmd/loki
CGO_ENABLED=0 $(GOX) -osarch="darwin/amd64 windows/amd64 freebsd/amd64" ./cmd/promtail ./cmd/logcli
type: makefile | masked_commit_message: disable building promtail with systemd support on non-amd64 platforms (#888)

hash: d42476aa58fca07b17ee39d388639807624f884a | date: 2024-07-26 18:57:25 | author: Cyril Tovena | commit_message: fix: Handle block offset exceeding chunk length in memchunk.go (#13661) | is_merge: false
diff --git a/pkg/chunkenc/memchunk.go b/pkg/chunkenc/memchunk.go
index f6197888a854f..3d87aec705dc5 100644
--- a/pkg/chunkenc/memchunk.go
+++ b/pkg/chunkenc/memchunk.go
@@ -476,6 +476,9 @@ func newByteChunk(b []byte, blockSize, targetSize int, fromCheckpoint bool) (*Me
blk.uncompressedSize = db.uvarint()
}
l := db.uvarint()
+ if blk.offset+l > len(b) {
+ return nil, fmt.Errorf("block %d offset %d + length %d exceeds chunk length %d", i, blk.offset, l, len(b))
+ }
blk.b = b[blk.offset : blk.offset+l]
// Verify checksums.
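The added length check turns a slice-bounds panic on corrupt chunk headers into a normal error. A minimal sketch of the same validation, independent of the Loki types:

package main

import "fmt"

// sliceBlock returns b[offset:offset+length], validating bounds first so a
// corrupt header yields an error instead of a runtime panic.
func sliceBlock(b []byte, offset, length int) ([]byte, error) {
	if offset < 0 || length < 0 || offset+length > len(b) {
		return nil, fmt.Errorf("block offset %d + length %d exceeds chunk length %d", offset, length, len(b))
	}
	return b[offset : offset+length], nil
}

func main() {
	buf := make([]byte, 8)
	if _, err := sliceBlock(buf, 6, 4); err != nil {
		fmt.Println(err) // block offset 6 + length 4 exceeds chunk length 8
	}
}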
diff --git a/pkg/storage/chunk/client/object_client.go b/pkg/storage/chunk/client/object_client.go
index 2f5c3e263e853..5dfb1945a5f3b 100644
--- a/pkg/storage/chunk/client/object_client.go
+++ b/pkg/storage/chunk/client/object_client.go
@@ -4,6 +4,7 @@ import (
"bytes"
"context"
"encoding/base64"
+ "fmt"
"io"
"strings"
"time"
@@ -186,7 +187,7 @@ func (o *client) getChunk(ctx context.Context, decodeContext *chunk.DecodeContex
}
if err := c.Decode(decodeContext, buf.Bytes()); err != nil {
- return chunk.Chunk{}, errors.WithStack(err)
+ return chunk.Chunk{}, errors.WithStack(fmt.Errorf("failed to decode chunk '%s': %w", key, err))
}
return c, nil
}
diff --git a/pkg/storage/chunk/fetcher/fetcher.go b/pkg/storage/chunk/fetcher/fetcher.go
index cf763b9cbedc9..45b6970045a91 100644
--- a/pkg/storage/chunk/fetcher/fetcher.go
+++ b/pkg/storage/chunk/fetcher/fetcher.go
@@ -9,7 +9,6 @@ import (
"github.com/opentracing/opentracing-go"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promauto"
- "github.com/prometheus/prometheus/promql"
"github.com/grafana/loki/v3/pkg/logqlmodel/stats"
"github.com/grafana/loki/v3/pkg/storage/chunk"
@@ -218,8 +217,7 @@ func (c *Fetcher) FetchChunks(ctx context.Context, chunks []chunk.Chunk) ([]chun
}
if err != nil {
- // Don't rely on Cortex error translation here.
- return nil, promql.ErrStorage{Err: err}
+ level.Error(log).Log("msg", "failed downloading chunks", "err", err)
}
allChunks := append(fromCache, fromStorage...)
type: fix | masked_commit_message: Handle block offset exceeding chunk length in memchunk.go (#13661)

hash: d8eb9dd9be7e66029aa71085e3c5efc4c3ed4657 | date: 2024-07-17 18:19:31 | author: George Robinson | commit_message: chore: Remove redundant comments (#13553) | is_merge: false
diff --git a/pkg/storage/wal/manager.go b/pkg/storage/wal/manager.go
index d5d8fff893f03..57ac8b3207fa9 100644
--- a/pkg/storage/wal/manager.go
+++ b/pkg/storage/wal/manager.go
@@ -102,21 +102,14 @@ type Manager struct {
cfg Config
metrics *ManagerMetrics
available *list.List
-
- // pending is a list of segments that are waiting to be flushed. Once
- // flushed, the segment is reset and moved to the back of the available
- // list to accept writes again.
- pending *list.List
-
+ pending *list.List
// firstAppend is the time of the first append to the segment at the
// front of the available list. It is used to know when the segment has
// exceeded the maximum age and should be moved to the pending list.
// It is reset each time this happens.
firstAppend time.Time
-
- closed bool
- mu sync.Mutex
-
+ closed bool
+ mu sync.Mutex
// Used in tests.
clock quartz.Clock
}
type: chore | masked_commit_message: Remove redundant comments (#13553)

hash: 8f69f0dbd4fa845facbffc0b718c894c8cc6719d | date: 2025-02-14 04:31:42 | author: Cyril Tovena | commit_message: feat(dataobj): Implement SelectLogs using a topk min-heap (#16261) | is_merge: false
diff --git a/pkg/dataobj/querier/iter.go b/pkg/dataobj/querier/iter.go
index cf370edbd9dc6..4d6f84866935b 100644
--- a/pkg/dataobj/querier/iter.go
+++ b/pkg/dataobj/querier/iter.go
@@ -1,13 +1,16 @@
package querier
import (
+ "container/heap"
"context"
"io"
+ "sort"
"sync"
"github.com/grafana/loki/v3/pkg/dataobj"
"github.com/grafana/loki/v3/pkg/iter"
"github.com/grafana/loki/v3/pkg/logproto"
+ "github.com/grafana/loki/v3/pkg/logql"
"github.com/grafana/loki/v3/pkg/logql/log"
"github.com/grafana/loki/v3/pkg/logql/syntax"
)
@@ -25,8 +28,253 @@ var (
return &samples
},
}
+ entryWithLabelsPool = sync.Pool{
+ New: func() interface{} {
+ entries := make([]entryWithLabels, 0, 1024)
+ return &entries
+ },
+ }
)
+type entryWithLabels struct {
+ Labels string
+ StreamHash uint64
+ Entry logproto.Entry
+}
+
+// newEntryIterator creates a new EntryIterator for the given context, streams, and reader.
+// It reads records from the reader and adds them to a topk heap, which
+// maintains the top k entries for the requested direction.
+// The final result is returned as an iterator over the collected entries.
+func newEntryIterator(ctx context.Context,
+ streams map[int64]dataobj.Stream,
+ reader *dataobj.LogsReader,
+ req logql.SelectLogParams,
+) (iter.EntryIterator, error) {
+ bufPtr := recordsPool.Get().(*[]dataobj.Record)
+ defer recordsPool.Put(bufPtr)
+ buf := *bufPtr
+
+ selector, err := req.LogSelector()
+ if err != nil {
+ return nil, err
+ }
+ pipeline, err := selector.Pipeline()
+ if err != nil {
+ return nil, err
+ }
+
+ var (
+ prevStreamID int64 = -1
+ streamExtractor log.StreamPipeline
+ streamHash uint64
+ top = newTopK(int(req.Limit), req.Direction)
+ )
+
+ for {
+ n, err := reader.Read(ctx, buf)
+ if err != nil && err != io.EOF {
+ return nil, err
+ }
+
+ if n == 0 {
+ break
+ }
+
+ for _, record := range buf[:n] {
+ stream, ok := streams[record.StreamID]
+ if !ok {
+ continue
+ }
+ if prevStreamID != record.StreamID {
+ streamExtractor = pipeline.ForStream(stream.Labels)
+ streamHash = streamExtractor.BaseLabels().Hash()
+ prevStreamID = record.StreamID
+ }
+
+ timestamp := record.Timestamp.UnixNano()
+ line, parsedLabels, ok := streamExtractor.ProcessString(timestamp, record.Line, record.Metadata...)
+ if !ok {
+ continue
+ }
+ var metadata []logproto.LabelAdapter
+ if len(record.Metadata) > 0 {
+ metadata = logproto.FromLabelsToLabelAdapters(record.Metadata)
+ }
+ top.Add(entryWithLabels{
+ Labels: parsedLabels.String(),
+ StreamHash: streamHash,
+ Entry: logproto.Entry{
+ Timestamp: record.Timestamp,
+ Line: line,
+ StructuredMetadata: metadata,
+ },
+ })
+ }
+ }
+ return top.Iterator(), nil
+}
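Note how the loop above rebuilds the stream pipeline only when the stream ID changes: records arrive grouped by stream, so caching the extractor avoids repeated pipeline construction in the hot path. A reduced sketch of that caching, with stand-in types:

package main

import "fmt"

// record is a stand-in for the real dataobj.Record in this sketch.
type record struct {
	streamID int64
	line     string
}

// process rebuilds the per-stream extractor only on stream-ID changes.
func process(records []record, build func(int64) func(string) string) {
	var (
		prevStreamID int64 = -1
		extract      func(string) string
	)
	for _, r := range records {
		if r.streamID != prevStreamID {
			extract = build(r.streamID) // cached across consecutive records
			prevStreamID = r.streamID
		}
		fmt.Println(extract(r.line))
	}
}

func main() {
	build := func(id int64) func(string) string {
		return func(line string) string { return fmt.Sprintf("stream %d: %s", id, line) }
	}
	process([]record{{1, "a"}, {1, "b"}, {2, "c"}}, build)
}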
+
+// entryHeap implements a min-heap of entries based on a custom less function.
+// The less function determines the ordering based on the direction (FORWARD/BACKWARD).
+// For FORWARD direction:
+// - When comparing timestamps: entry.Timestamp.After(b) means 'a' is "less" than 'b'
+// - Example: [t3, t2, t1] where t3 is most recent, t3 will be at the root (index 0)
+//
+// For BACKWARD direction:
+// - When comparing timestamps: entry.Timestamp.Before(b) means 'a' is "less" than 'b'
+// - Example: [t1, t2, t3] where t1 is oldest, t1 will be at the root (index 0)
+//
+// In both cases:
+// - When timestamps are equal, we use labels as a tiebreaker
+// - The root of the heap (index 0) contains the entry we want to evict first
+type entryHeap struct {
+ less func(a, b entryWithLabels) bool
+ entries []entryWithLabels
+}
+
+func (h *entryHeap) Push(x any) {
+ h.entries = append(h.entries, x.(entryWithLabels))
+}
+
+func (h *entryHeap) Pop() any {
+ old := h.entries
+ n := len(old)
+ x := old[n-1]
+ h.entries = old[:n-1]
+ return x
+}
+
+func (h *entryHeap) Len() int {
+ return len(h.entries)
+}
+
+func (h *entryHeap) Less(i, j int) bool {
+ return h.less(h.entries[i], h.entries[j])
+}
+
+func (h *entryHeap) Swap(i, j int) {
+ h.entries[i], h.entries[j] = h.entries[j], h.entries[i]
+}
+
+func lessFn(direction logproto.Direction) func(a, b entryWithLabels) bool {
+ switch direction {
+ case logproto.FORWARD:
+ return func(a, b entryWithLabels) bool {
+ if a.Entry.Timestamp.Equal(b.Entry.Timestamp) {
+ return a.Labels < b.Labels
+ }
+ return a.Entry.Timestamp.After(b.Entry.Timestamp)
+ }
+ case logproto.BACKWARD:
+ return func(a, b entryWithLabels) bool {
+ if a.Entry.Timestamp.Equal(b.Entry.Timestamp) {
+ return a.Labels < b.Labels
+ }
+ return a.Entry.Timestamp.Before(b.Entry.Timestamp)
+ }
+ default:
+ panic("invalid direction")
+ }
+}
+
+// topk maintains a min-heap of the k most relevant entries.
+// The heap is ordered by timestamp (and labels as tiebreaker) based on the direction:
+// - For FORWARD: keeps k oldest entries by evicting newest entries first
+// Example with k=3: If entries arrive as [t1,t2,t3,t4,t5], heap will contain [t1,t2,t3]
+// - For BACKWARD: keeps k newest entries by evicting oldest entries first
+// Example with k=3: If entries arrive as [t1,t2,t3,t4,t5], heap will contain [t3,t4,t5]
+type topk struct {
+ k int
+ minHeap entryHeap
+}
+
+func newTopK(k int, direction logproto.Direction) *topk {
+ if k <= 0 {
+ panic("k must be greater than 0")
+ }
+ entries := entryWithLabelsPool.Get().(*[]entryWithLabels)
+ return &topk{
+ k: k,
+ minHeap: entryHeap{
+ less: lessFn(direction),
+ entries: *entries,
+ },
+ }
+}
+
+// Add adds a new entry to the topk heap.
+// If the heap has less than k entries, the entry is added directly.
+// Otherwise, if the new entry should be evicted before the root (index 0),
+// it is discarded. If not, the root is popped (discarded) and the new entry is pushed.
+//
+// For FORWARD direction:
+// - Root contains newest entry (to be evicted first)
+// - New entries that are newer than root are discarded
+// Example: With k=3 and heap=[t1,t2,t3], a new entry t4 is discarded
+//
+// For BACKWARD direction:
+// - Root contains oldest entry (to be evicted first)
+// - New entries that are older than root are discarded
+// Example: With k=3 and heap=[t3,t4,t5], a new entry t2 is discarded
+func (t *topk) Add(r entryWithLabels) {
+ if t.minHeap.Len() < t.k {
+ heap.Push(&t.minHeap, r)
+ return
+ }
+ if t.minHeap.less(t.minHeap.entries[0], r) {
+ _ = heap.Pop(&t.minHeap)
+ heap.Push(&t.minHeap, r)
+ }
+}
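Add keeps the heap at a fixed size k: once full, each candidate is compared against the root (the entry slated for eviction) and either discarded or swapped in, giving O(n log k) for the whole scan. A self-contained sketch of the same bounded top-k idea over plain ints, keeping the k smallest (the analogue of FORWARD):

package main

import (
	"container/heap"
	"fmt"
)

// maxHeap keeps its largest value at the root so that value is evicted
// first, leaving the k smallest values behind.
type maxHeap []int

func (h maxHeap) Len() int           { return len(h) }
func (h maxHeap) Less(i, j int) bool { return h[i] > h[j] }
func (h maxHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *maxHeap) Push(x any)        { *h = append(*h, x.(int)) }
func (h *maxHeap) Pop() any {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// add mirrors topk.Add: fill to k, then replace the root only when the
// candidate beats the current eviction candidate.
func add(h *maxHeap, k, v int) {
	if h.Len() < k {
		heap.Push(h, v)
		return
	}
	if v < (*h)[0] {
		heap.Pop(h)
		heap.Push(h, v)
	}
}

func main() {
	h := &maxHeap{}
	for _, v := range []int{5, 1, 4, 2, 3} {
		add(h, 3, v)
	}
	fmt.Println(*h) // the 3 smallest values (1, 2, 3) in heap order
}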
+
+type sliceIterator struct {
+ entries []entryWithLabels
+ curr entryWithLabels
+}
+
+func (t *topk) Iterator() iter.EntryIterator {
+ // We swap i and j in the less comparison to reverse the ordering from the minHeap.
+ // The minHeap is ordered such that the entry to evict is at index 0.
+ // For FORWARD: newest entries are evicted first, so we want oldest entries first in the final slice
+ // For BACKWARD: oldest entries are evicted first, so we want newest entries first in the final slice
+ // By swapping i and j, we effectively reverse the minHeap ordering to get the desired final ordering
+ sort.Slice(t.minHeap.entries, func(i, j int) bool {
+ return t.minHeap.less(t.minHeap.entries[j], t.minHeap.entries[i])
+ })
+ return &sliceIterator{entries: t.minHeap.entries}
+}
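Swapping the comparator's arguments inside sort.Slice is what flips the heap's evict-first ordering into the final output ordering. A stand-alone illustration of the trick:

package main

import (
	"fmt"
	"sort"
)

func main() {
	less := func(a, b int) bool { return a < b }
	xs := []int{3, 1, 2}
	// Comparing xs[j] against xs[i] (arguments swapped) sorts into the
	// reverse of the order that less defines.
	sort.Slice(xs, func(i, j int) bool { return less(xs[j], xs[i]) })
	fmt.Println(xs) // [3 2 1]
}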
+
+func (s *sliceIterator) Next() bool {
+ if len(s.entries) == 0 {
+ return false
+ }
+ s.curr = s.entries[0]
+ s.entries = s.entries[1:]
+ return true
+}
+
+func (s *sliceIterator) At() logproto.Entry {
+ return s.curr.Entry
+}
+
+func (s *sliceIterator) Err() error {
+ return nil
+}
+
+func (s *sliceIterator) Labels() string {
+ return s.curr.Labels
+}
+
+func (s *sliceIterator) StreamHash() uint64 {
+ return s.curr.StreamHash
+}
+
+func (s *sliceIterator) Close() error {
+ entryWithLabelsPool.Put(&s.entries)
+ return nil
+}
+
func newSampleIterator(ctx context.Context,
streams map[int64]dataobj.Stream,
extractor syntax.SampleExtractor,
@@ -91,7 +339,7 @@ func newSampleIterator(ctx context.Context,
s.Samples = append(s.Samples, logproto.Sample{
Timestamp: timestamp,
Value: value,
- Hash: 0, // todo write a test to verify that we should not try to dedupe when we don't have a hash
+ Hash: 0,
})
}
}
diff --git a/pkg/dataobj/querier/iter_test.go b/pkg/dataobj/querier/iter_test.go
new file mode 100644
index 0000000000000..cff55b727b797
--- /dev/null
+++ b/pkg/dataobj/querier/iter_test.go
@@ -0,0 +1,122 @@
+package querier
+
+import (
+ "testing"
+ "time"
+
+ "github.com/stretchr/testify/require"
+
+ "github.com/grafana/loki/v3/pkg/logproto"
+)
+
+// makeEntry is a helper function to create a log entry with given timestamp and line
+func makeEntry(ts time.Time, line string) logproto.Entry {
+ return logproto.Entry{
+ Timestamp: ts,
+ Line: line,
+ }
+}
+
+func TestTopKIterator(t *testing.T) {
+ tests := []struct {
+ name string
+ k int
+ direction logproto.Direction
+ input []entryWithLabels
+ want []entryWithLabels
+ }{
+ {
+ name: "forward direction with k=2",
+ k: 2,
+ direction: logproto.FORWARD,
+ input: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(3, 0), "line3"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(4, 0), "line4"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ },
+ want: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ },
+ },
+ {
+ name: "backward direction with k=3",
+ k: 3,
+ direction: logproto.BACKWARD,
+ input: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(4, 0), "line4"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(3, 0), "line3"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(5, 0), "line5"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ },
+ want: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(5, 0), "line5"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(4, 0), "line4"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(3, 0), "line3"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ },
+ },
+ {
+ name: "k larger than available entries",
+ k: 10,
+ direction: logproto.FORWARD,
+ input: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ },
+ want: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ },
+ },
+ {
+ name: "mixed timestamps with k=4",
+ k: 4,
+ direction: logproto.FORWARD,
+ input: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(4, 0), "line4"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(5, 0), "line5"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(3, 0), "line3"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(6, 0), "line6"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ },
+ want: []entryWithLabels{
+ {Entry: makeEntry(time.Unix(1, 0), "line1"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ {Entry: makeEntry(time.Unix(2, 0), "line2"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(3, 0), "line3"), Labels: "{app=\"app2\"}", StreamHash: 2},
+ {Entry: makeEntry(time.Unix(4, 0), "line4"), Labels: "{app=\"app1\"}", StreamHash: 1},
+ },
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ // Create topk iterator
+ top := &topk{
+ k: tt.k,
+ minHeap: entryHeap{less: lessFn(tt.direction)},
+ }
+
+ // Add entries
+ for _, e := range tt.input {
+ top.Add(e)
+ }
+
+ // Collect results
+ var got []entryWithLabels
+ iter := top.Iterator()
+ for iter.Next() {
+ got = append(got, entryWithLabels{
+ Entry: iter.At(),
+ Labels: iter.Labels(),
+ StreamHash: iter.StreamHash(),
+ })
+ }
+
+ require.Equal(t, tt.want, got)
+ require.NoError(t, iter.Err())
+ })
+ }
+}
diff --git a/pkg/dataobj/querier/store.go b/pkg/dataobj/querier/store.go
index 9d5a43c2cff47..abd07f2a833e3 100644
--- a/pkg/dataobj/querier/store.go
+++ b/pkg/dataobj/querier/store.go
@@ -89,9 +89,17 @@ func NewStore(bucket objstore.Bucket) *Store {
}
// SelectLogs implements querier.Store
-func (s *Store) SelectLogs(_ context.Context, _ logql.SelectLogParams) (iter.EntryIterator, error) {
- // TODO: Implement
- return iter.NoopEntryIterator, nil
+func (s *Store) SelectLogs(ctx context.Context, req logql.SelectLogParams) (iter.EntryIterator, error) {
+ objects, err := s.objectsForTimeRange(ctx, req.Start, req.End)
+ if err != nil {
+ return nil, err
+ }
+ shard, err := parseShards(req.Shards)
+ if err != nil {
+ return nil, err
+ }
+
+ return selectLogs(ctx, objects, shard, req)
}
// SelectSamples implements querier.Store
@@ -149,6 +157,49 @@ func (s *Store) objectsForTimeRange(ctx context.Context, from, through time.Time
return objects, nil
}
+func selectLogs(ctx context.Context, objects []*dataobj.Object, shard logql.Shard, req logql.SelectLogParams) (iter.EntryIterator, error) {
+ selector, err := req.LogSelector()
+ if err != nil {
+ return nil, err
+ }
+ shardedObjects, err := shardObjects(ctx, objects, shard)
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ for _, obj := range shardedObjects {
+ obj.reset()
+ shardedObjectsPool.Put(obj)
+ }
+ }()
+ streamsPredicate := streamPredicate(selector.Matchers(), req.Start, req.End)
+ // TODO: support more predicates and combine with log.Pipeline.
+ logsPredicate := dataobj.TimeRangePredicate[dataobj.LogsPredicate]{
+ StartTime: req.Start,
+ EndTime: req.End,
+ IncludeStart: true,
+ IncludeEnd: false,
+ }
+ g, ctx := errgroup.WithContext(ctx)
+ iterators := make([]iter.EntryIterator, len(shardedObjects))
+
+ for i, obj := range shardedObjects {
+ g.Go(func() error {
+ iterator, err := obj.selectLogs(ctx, streamsPredicate, logsPredicate, req)
+ if err != nil {
+ return err
+ }
+ iterators[i] = iterator
+ return nil
+ })
+ }
+ if err := g.Wait(); err != nil {
+ return nil, err
+ }
+
+ return iter.NewSortEntryIterator(iterators, req.Direction), nil
+}
+
func selectSamples(ctx context.Context, objects []*dataobj.Object, shard logql.Shard, expr syntax.SampleExpr, start, end time.Time) (iter.SampleIterator, error) {
selector, err := expr.Selector()
if err != nil {
@@ -258,6 +309,33 @@ func (s *shardedObject) reset() {
clear(s.streams)
}
+func (s *shardedObject) selectLogs(ctx context.Context, streamsPredicate dataobj.StreamsPredicate, logsPredicate dataobj.LogsPredicate, req logql.SelectLogParams) (iter.EntryIterator, error) {
+ if err := s.setPredicate(streamsPredicate, logsPredicate); err != nil {
+ return nil, err
+ }
+
+ if err := s.matchStreams(ctx); err != nil {
+ return nil, err
+ }
+ iterators := make([]iter.EntryIterator, len(s.logReaders))
+ g, ctx := errgroup.WithContext(ctx)
+
+ for i, reader := range s.logReaders {
+ g.Go(func() error {
+ iter, err := newEntryIterator(ctx, s.streams, reader, req)
+ if err != nil {
+ return err
+ }
+ iterators[i] = iter
+ return nil
+ })
+ }
+ if err := g.Wait(); err != nil {
+ return nil, err
+ }
+ return iter.NewSortEntryIterator(iterators, req.Direction), nil
+}
+
func (s *shardedObject) selectSamples(ctx context.Context, streamsPredicate dataobj.StreamsPredicate, logsPredicate dataobj.LogsPredicate, expr syntax.SampleExpr) (iter.SampleIterator, error) {
if err := s.setPredicate(streamsPredicate, logsPredicate); err != nil {
return nil, err
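Both selectLogs and shardedObject.selectLogs use the same fan-out shape: an errgroup runs one goroutine per input and each writes into its own pre-sized slice slot, so no mutex is needed before the final merge. A condensed sketch of that shape (it relies on Go 1.22+ per-iteration loop variables, which the diff above also assumes):

package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

// fanOut runs work concurrently, one goroutine per input, collecting results
// into index-addressed slots; the first error cancels the group's context.
func fanOut(ctx context.Context, inputs []int, work func(context.Context, int) (int, error)) ([]int, error) {
	g, ctx := errgroup.WithContext(ctx)
	results := make([]int, len(inputs)) // one slot per goroutine: no locking
	for i, in := range inputs {
		g.Go(func() error {
			r, err := work(ctx, in)
			if err != nil {
				return err
			}
			results[i] = r
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return results, nil
}

func main() {
	out, err := fanOut(context.Background(), []int{1, 2, 3},
		func(_ context.Context, n int) (int, error) { return n * n, nil })
	fmt.Println(out, err) // [1 4 9] <nil>
}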
diff --git a/pkg/dataobj/querier/store_test.go b/pkg/dataobj/querier/store_test.go
index 07e63103a3975..dae3f2c2a6871 100644
--- a/pkg/dataobj/querier/store_test.go
+++ b/pkg/dataobj/querier/store_test.go
@@ -26,6 +26,11 @@ import (
"github.com/grafana/loki/v3/pkg/querier/plan"
)
+type sampleWithLabels struct {
+ Labels string
+ Samples logproto.Sample
+}
+
func TestStore_SelectSamples(t *testing.T) {
const testTenant = "test-tenant"
builder := newTestDataBuilder(t, testTenant)
@@ -179,6 +184,159 @@ func TestStore_SelectSamples(t *testing.T) {
}
}
+func TestStore_SelectLogs(t *testing.T) {
+ const testTenant = "test-tenant"
+ builder := newTestDataBuilder(t, testTenant)
+ defer builder.close()
+
+ // Setup test data
+ now := setupTestData(t, builder)
+ store := NewStore(builder.bucket)
+ ctx := user.InjectOrgID(context.Background(), testTenant)
+
+ tests := []struct {
+ name string
+ selector string
+ start time.Time
+ end time.Time
+ shards []string
+ limit uint32
+ direction logproto.Direction
+ want []entryWithLabels
+ }{
+ {
+ name: "select all logs in range",
+ selector: `{app=~".+"}`,
+ start: now,
+ end: now.Add(time.Hour),
+ limit: 100,
+ direction: logproto.FORWARD,
+ want: []entryWithLabels{
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now, Line: "foo1"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(5 * time.Second), Line: "bar1"}},
+ {Labels: `{app="bar", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(8 * time.Second), Line: "bar5"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(10 * time.Second), Line: "foo5"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(12 * time.Second), Line: "baz1"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(15 * time.Second), Line: "bar2"}},
+ {Labels: `{app="bar", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(18 * time.Second), Line: "bar6"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(20 * time.Second), Line: "foo6"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(22 * time.Second), Line: "baz2"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(25 * time.Second), Line: "bar3"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(30 * time.Second), Line: "foo2"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(32 * time.Second), Line: "baz3"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(35 * time.Second), Line: "foo7"}},
+ {Labels: `{app="bar", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(38 * time.Second), Line: "bar7"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(40 * time.Second), Line: "bar4"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(42 * time.Second), Line: "baz4"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(45 * time.Second), Line: "foo3"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(50 * time.Second), Line: "foo4"}},
+ },
+ },
+ {
+ name: "select with time range filter",
+ selector: `{app="baz", env="prod", team="a"}`,
+ start: now.Add(20 * time.Second),
+ end: now.Add(40 * time.Second),
+ limit: 100,
+ direction: logproto.FORWARD,
+ want: []entryWithLabels{
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(22 * time.Second), Line: "baz2"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(32 * time.Second), Line: "baz3"}},
+ },
+ },
+ {
+ name: "select with label matcher",
+ selector: `{app="foo"}`,
+ start: now,
+ end: now.Add(time.Hour),
+ limit: 100,
+ direction: logproto.FORWARD,
+ want: []entryWithLabels{
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now, Line: "foo1"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(10 * time.Second), Line: "foo5"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(20 * time.Second), Line: "foo6"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(30 * time.Second), Line: "foo2"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(35 * time.Second), Line: "foo7"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(45 * time.Second), Line: "foo3"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(50 * time.Second), Line: "foo4"}},
+ },
+ },
+ {
+ name: "select first shard",
+ selector: `{app=~".+"}`,
+ start: now,
+ end: now.Add(time.Hour),
+ shards: []string{"0_of_2"},
+ limit: 100,
+ direction: logproto.FORWARD,
+ want: []entryWithLabels{
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(5 * time.Second), Line: "bar1"}},
+ {Labels: `{app="bar", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(8 * time.Second), Line: "bar5"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(12 * time.Second), Line: "baz1"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(15 * time.Second), Line: "bar2"}},
+ {Labels: `{app="bar", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(18 * time.Second), Line: "bar6"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(22 * time.Second), Line: "baz2"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(25 * time.Second), Line: "bar3"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(32 * time.Second), Line: "baz3"}},
+ {Labels: `{app="bar", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(38 * time.Second), Line: "bar7"}},
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(40 * time.Second), Line: "bar4"}},
+ {Labels: `{app="baz", env="prod", team="a"}`, Entry: logproto.Entry{Timestamp: now.Add(42 * time.Second), Line: "baz4"}},
+ },
+ },
+ {
+ name: "select second shard",
+ selector: `{app=~".+"}`,
+ start: now,
+ end: now.Add(time.Hour),
+ shards: []string{"1_of_2"},
+ limit: 100,
+ direction: logproto.FORWARD,
+ want: []entryWithLabels{
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now, Line: "foo1"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(10 * time.Second), Line: "foo5"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(20 * time.Second), Line: "foo6"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(30 * time.Second), Line: "foo2"}},
+ {Labels: `{app="foo", env="dev"}`, Entry: logproto.Entry{Timestamp: now.Add(35 * time.Second), Line: "foo7"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(45 * time.Second), Line: "foo3"}},
+ {Labels: `{app="foo", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(50 * time.Second), Line: "foo4"}},
+ },
+ },
+ {
+ name: "select with line filter",
+ selector: `{app=~".+"} |= "bar2"`,
+ start: now,
+ end: now.Add(time.Hour),
+ limit: 100,
+ direction: logproto.FORWARD,
+ want: []entryWithLabels{
+ {Labels: `{app="bar", env="prod"}`, Entry: logproto.Entry{Timestamp: now.Add(15 * time.Second), Line: "bar2"}},
+ },
+ },
+ }
+
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ it, err := store.SelectLogs(ctx, logql.SelectLogParams{
+ QueryRequest: &logproto.QueryRequest{
+ Start: tt.start,
+ End: tt.end,
+ Plan: planFromString(tt.selector),
+ Selector: tt.selector,
+ Shards: tt.shards,
+ Limit: tt.limit,
+ Direction: tt.direction,
+ },
+ })
+ require.NoError(t, err)
+ entries, err := readAllEntries(it)
+ require.NoError(t, err)
+ if diff := cmp.Diff(tt.want, entries); diff != "" {
+ t.Errorf("entries mismatch (-want +got):\n%s", diff)
+ }
+ })
+ }
+}
+
func setupTestData(t *testing.T, builder *testDataBuilder) time.Time {
t.Helper()
now := time.Unix(0, int64(time.Hour))
@@ -332,11 +490,6 @@ func (b *testDataBuilder) close() {
os.RemoveAll(b.dir)
}
-type sampleWithLabels struct {
- Labels string
- Samples logproto.Sample
-}
-
// Helper function to read all samples from an iterator
func readAllSamples(it iter.SampleIterator) ([]sampleWithLabels, error) {
var result []sampleWithLabels
@@ -350,3 +503,16 @@ func readAllSamples(it iter.SampleIterator) ([]sampleWithLabels, error) {
}
return result, it.Err()
}
+
+// Helper function to read all entries from an iterator
+func readAllEntries(it iter.EntryIterator) ([]entryWithLabels, error) {
+ var result []entryWithLabels
+ defer it.Close()
+ for it.Next() {
+ result = append(result, entryWithLabels{
+ Labels: it.Labels(),
+ Entry: it.At(),
+ })
+ }
+ return result, it.Err()
+}
type: feat | masked_commit_message: Implement SelectLogs using a topk min-heap (#16261)

hash: 16e87a6ff2179940f17a6bf545a3e39c76dbbdc8 | date: 2025-03-08 02:52:59 | author: renovate[bot] | commit_message: fix(deps): update module github.com/ibm/go-sdk-core/v5 to v5.19.0 (main) (#16647) | is_merge: false
diff --git a/go.mod b/go.mod
index b4aeeccaa1f0d..c9181576ddfcf 100644
--- a/go.mod
+++ b/go.mod
@@ -115,7 +115,7 @@ require (
github.com/Azure/go-autorest/autorest v0.11.30
github.com/DataDog/sketches-go v1.4.7
github.com/DmitriyVTitov/size v1.5.0
- github.com/IBM/go-sdk-core/v5 v5.18.5
+ github.com/IBM/go-sdk-core/v5 v5.19.0
github.com/IBM/ibm-cos-sdk-go v1.12.1
github.com/axiomhq/hyperloglog v0.2.5
github.com/buger/jsonparser v1.1.1
@@ -173,7 +173,7 @@ require (
github.com/ebitengine/purego v0.8.2 // indirect
github.com/envoyproxy/go-control-plane/envoy v1.32.4 // indirect
github.com/fxamacker/cbor/v2 v2.7.0 // indirect
- github.com/gabriel-vasile/mimetype v1.4.4 // indirect
+ github.com/gabriel-vasile/mimetype v1.4.8 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/go-ole/go-ole v1.3.0 // indirect
github.com/go-redsync/redsync/v4 v4.13.0 // indirect
@@ -299,7 +299,7 @@ require (
github.com/go-openapi/validate v0.24.0 // indirect
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
- github.com/go-playground/validator/v10 v10.19.0 // indirect
+ github.com/go-playground/validator/v10 v10.24.0 // indirect
github.com/go-zookeeper/zk v1.0.4 // indirect
github.com/gofrs/flock v0.8.1 // indirect
github.com/golang-jwt/jwt/v4 v4.5.1 // indirect
@@ -377,7 +377,7 @@ require (
go.etcd.io/etcd/api/v3 v3.5.4 // indirect
go.etcd.io/etcd/client/pkg/v3 v3.5.4 // indirect
go.etcd.io/etcd/client/v3 v3.5.4 // indirect
- go.mongodb.org/mongo-driver v1.17.0 // indirect
+ go.mongodb.org/mongo-driver v1.17.2 // indirect
go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/collector/semconv v0.118.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.59.0 // indirect
diff --git a/go.sum b/go.sum
index 21501b85037a7..ad1ffca462920 100644
--- a/go.sum
+++ b/go.sum
@@ -134,8 +134,8 @@ github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapp
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.49.0/go.mod h1:wRbFgBQUVm1YXrvWKofAEmq9HNJTDphbAaJSSX01KUI=
github.com/HdrHistogram/hdrhistogram-go v1.1.2 h1:5IcZpTvzydCQeHzK4Ef/D5rrSqwxob0t8PQPMybUNFM=
github.com/HdrHistogram/hdrhistogram-go v1.1.2/go.mod h1:yDgFjdqOqDEKOvasDdhWNXYg9BVp4O+o5f6V/ehm6Oo=
-github.com/IBM/go-sdk-core/v5 v5.18.5 h1:g0JRl3sYXJczB/yuDlrN6x22LJ6jIxhp0Sa4ARNW60c=
-github.com/IBM/go-sdk-core/v5 v5.18.5/go.mod h1:KonTFRR+8ZSgw5cxBSYo6E4WZoY1+7n1kfHM82VcjFU=
+github.com/IBM/go-sdk-core/v5 v5.19.0 h1:YN2S5JUvq/EwYulmcNFwgyYBxZhVWl9nkY22H7Hpghw=
+github.com/IBM/go-sdk-core/v5 v5.19.0/go.mod h1:deZO1J5TSlU69bCnl/YV7nPxFZA2UEaup7cq/7ZTOgw=
github.com/IBM/ibm-cos-sdk-go v1.12.1 h1:pWs5c5/j9PNJE1lIQhYtzpdCxu2fpvCq9PHs6/nDjyI=
github.com/IBM/ibm-cos-sdk-go v1.12.1/go.mod h1:7vmUThyAq4+AD1eEyGZi90ir06Z9YhsEzLBsdGPfcqo=
github.com/IBM/sarama v1.45.1 h1:nY30XqYpqyXOXSNoe2XCgjj9jklGM1Ye94ierUb1jQ0=
@@ -413,8 +413,8 @@ github.com/fullstorydev/emulators/storage v0.0.0-20240401123056-edc69752f474 h1:
github.com/fullstorydev/emulators/storage v0.0.0-20240401123056-edc69752f474/go.mod h1:PQwxF4UU8wuL+srGxr3BOhIW5zXqgucwVlO/nPZLsxw=
github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E=
github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ=
-github.com/gabriel-vasile/mimetype v1.4.4 h1:QjV6pZ7/XZ7ryI2KuyeEDE8wnh7fHP9YnQy+R0LnH8I=
-github.com/gabriel-vasile/mimetype v1.4.4/go.mod h1:JwLei5XPtWdGiMFB5Pjle1oEeoSeEuJfJE+TtfvdB/s=
+github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
+github.com/gabriel-vasile/mimetype v1.4.8/go.mod h1:ByKUIKGjh1ODkGM1asKUbQZOLGrPjydw3hYPU2YU9t8=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
@@ -466,8 +466,8 @@ github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/o
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
-github.com/go-playground/validator/v10 v10.19.0 h1:ol+5Fu+cSq9JD7SoSqe04GMI92cbn0+wvQ3bZ8b/AU4=
-github.com/go-playground/validator/v10 v10.19.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
+github.com/go-playground/validator/v10 v10.24.0 h1:KHQckvo8G6hlWnrPX4NJJ+aBfWNAE/HH+qdL2cBpCmg=
+github.com/go-playground/validator/v10 v10.24.0/go.mod h1:GGzBIJMuE98Ic/kJsBXbz1x/7cByt++cQ+YOuDM5wus=
github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
github.com/go-redis/redis/v7 v7.4.1 h1:PASvf36gyUpr2zdOUS/9Zqc80GbM+9BDyiJSJDDOrTI=
@@ -1230,8 +1230,8 @@ go.etcd.io/etcd/client/pkg/v3 v3.5.4 h1:lrneYvz923dvC14R54XcA7FXoZ3mlGZAgmwhfm7H
go.etcd.io/etcd/client/pkg/v3 v3.5.4/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v3 v3.5.4 h1:p83BUL3tAYS0OT/r0qglgc3M1JjhM0diV8DSWAhVXv4=
go.etcd.io/etcd/client/v3 v3.5.4/go.mod h1:ZaRkVgBZC+L+dLCjTcF1hRXpgZXQPOvnA/Ak/gq3kiY=
-go.mongodb.org/mongo-driver v1.17.0 h1:Hp4q2MCjvY19ViwimTs00wHi7G4yzxh4/2+nTx8r40k=
-go.mongodb.org/mongo-driver v1.17.0/go.mod h1:wwWm/+BuOddhcq3n68LKRmgk2wXzmF6s0SFOa0GINL4=
+go.mongodb.org/mongo-driver v1.17.2 h1:gvZyk8352qSfzyZ2UMWcpDpMSGEr1eqE4T793SqyhzM=
+go.mongodb.org/mongo-driver v1.17.2/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go b/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go
index ebb6e4edcc9fa..651ae07811b1a 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/container_authenticator.go
@@ -1,6 +1,6 @@
package core
-// (C) Copyright IBM Corp. 2021, 2024.
+// (C) Copyright IBM Corp. 2021, 2025.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -29,7 +29,7 @@ import (
)
// ContainerAuthenticator implements an IAM-based authentication schema whereby it
-// retrieves a "compute resource token" from the local compute resource (VM)
+// retrieves a "compute resource token" from the local compute resource (IKS pod, or Code Engine application, function, or job)
// and uses that to obtain an IAM access token by invoking the IAM "get token" operation with grant-type=cr-token.
// The resulting IAM access token is then added to outbound requests in an Authorization header
// of the form:
@@ -37,8 +37,8 @@ import (
// Authorization: Bearer <access-token>
type ContainerAuthenticator struct {
// [optional] The name of the file containing the injected CR token value (applies to
- // IKS-managed compute resources).
- // Default value: (1) "/var/run/secrets/tokens/vault-token" or (2) "/var/run/secrets/tokens/sa-token",
+ // IKS-managed compute resources, a Code Engine compute resource always uses the third default from below).
+ // Default value: (1) "/var/run/secrets/tokens/vault-token" or (2) "/var/run/secrets/tokens/sa-token" or (3) "/var/run/secrets/codeengine.cloud.ibm.com/compute-resource-token/token",
// whichever is found first.
CRTokenFilename string
@@ -98,9 +98,10 @@ type ContainerAuthenticator struct {
}
const (
- defaultCRTokenFilename1 = "/var/run/secrets/tokens/vault-token" // #nosec G101
- defaultCRTokenFilename2 = "/var/run/secrets/tokens/sa-token" // #nosec G101
- iamGrantTypeCRToken = "urn:ibm:params:oauth:grant-type:cr-token" // #nosec G101
+ defaultCRTokenFilename1 = "/var/run/secrets/tokens/vault-token" // #nosec G101
+ defaultCRTokenFilename2 = "/var/run/secrets/tokens/sa-token" // #nosec G101
+ defaultCRTokenFilename3 = "/var/run/secrets/codeengine.cloud.ibm.com/compute-resource-token/token" // #nosec G101
+ iamGrantTypeCRToken = "urn:ibm:params:oauth:grant-type:cr-token" // #nosec G101
)
var craRequestTokenMutex sync.Mutex
@@ -504,6 +505,9 @@ func (authenticator *ContainerAuthenticator) retrieveCRToken() (crToken string,
crToken, err = authenticator.readFile(defaultCRTokenFilename1)
if err != nil {
crToken, err = authenticator.readFile(defaultCRTokenFilename2)
+ if err != nil {
+ crToken, err = authenticator.readFile(defaultCRTokenFilename3)
+ }
}
}
diff --git a/vendor/github.com/IBM/go-sdk-core/v5/core/version.go b/vendor/github.com/IBM/go-sdk-core/v5/core/version.go
index 37bb0090c6d95..a0e94d3d2645e 100644
--- a/vendor/github.com/IBM/go-sdk-core/v5/core/version.go
+++ b/vendor/github.com/IBM/go-sdk-core/v5/core/version.go
@@ -15,4 +15,4 @@ package core
// limitations under the License.
// Version of the SDK
-const __VERSION__ = "5.18.5"
+const __VERSION__ = "5.19.0"
diff --git a/vendor/github.com/gabriel-vasile/mimetype/README.md b/vendor/github.com/gabriel-vasile/mimetype/README.md
index fd6c533e4aca2..aa88b4bda6ca4 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/README.md
+++ b/vendor/github.com/gabriel-vasile/mimetype/README.md
@@ -81,7 +81,7 @@ To prevent loading entire files into memory, when detecting from a
or from a [file](https://pkg.go.dev/github.com/gabriel-vasile/mimetype#DetectFile)
**mimetype** limits itself to reading only the header of the input.
<div align="center">
- <img alt="structure" src="https://github.com/gabriel-vasile/mimetype/blob/420a05228c6a6efbb6e6f080168a25663414ff36/mimetype.gif?raw=true" width="88%">
+ <img alt="how project is structured" src="https://raw.githubusercontent.com/gabriel-vasile/mimetype/master/testdata/gif.gif" width="88%">
</div>
## Performance
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/json/json.go b/vendor/github.com/gabriel-vasile/mimetype/internal/json/json.go
index ee39349aefe33..5b2ecee443e37 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/json/json.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/json/json.go
@@ -34,6 +34,7 @@ package json
import (
"fmt"
+ "sync"
)
type (
@@ -73,10 +74,31 @@ type (
}
)
+var scannerPool = sync.Pool{
+ New: func() any {
+ return &scanner{}
+ },
+}
+
+func newScanner() *scanner {
+ s := scannerPool.Get().(*scanner)
+ s.reset()
+ return s
+}
+
+func freeScanner(s *scanner) {
+ // Avoid hanging on to too much memory in extreme cases.
+ if len(s.parseState) > 1024 {
+ s.parseState = nil
+ }
+ scannerPool.Put(s)
+}
+
// Scan returns the number of bytes scanned and if there was any error
// in trying to reach the end of data.
func Scan(data []byte) (int, error) {
- s := &scanner{}
+ s := newScanner()
+ defer freeScanner(s)
_ = checkValid(data, s)
return s.index, s.err
}
@@ -84,7 +106,6 @@ func Scan(data []byte) (int, error) {
// checkValid verifies that data is valid JSON-encoded data.
// scan is passed in for use by checkValid to avoid an allocation.
func checkValid(data []byte, scan *scanner) error {
- scan.reset()
for _, c := range data {
scan.index++
if scan.step(scan, c) == scanError {
@@ -105,6 +126,8 @@ func (s *scanner) reset() {
s.step = stateBeginValue
s.parseState = s.parseState[0:0]
s.err = nil
+ s.endTop = false
+ s.index = 0
}
// eof tells the scanner that the end of input has been reached.
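The vendored scanner change is the standard sync.Pool recipe: reset state on the way out of the pool and drop oversized items on the way in, so one pathological input cannot pin memory forever. A generic sketch of the recipe:

package main

import (
	"fmt"
	"sync"
)

type scanner struct{ parseState []int }

var pool = sync.Pool{New: func() any { return &scanner{} }}

func newScanner() *scanner {
	s := pool.Get().(*scanner)
	s.parseState = s.parseState[:0] // reset before reuse
	return s
}

func freeScanner(s *scanner) {
	if cap(s.parseState) > 1024 { // avoid hanging on to too much memory
		s.parseState = nil
	}
	pool.Put(s)
}

func main() {
	s := newScanner()
	s.parseState = append(s.parseState, 1, 2, 3)
	freeScanner(s)
	fmt.Println(len(newScanner().parseState)) // 0: state is reset on reuse
}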
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/archive.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/archive.go
index 554ac4d4a61b4..068d00f79aeb4 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/archive.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/archive.go
@@ -3,7 +3,6 @@ package magic
import (
"bytes"
"encoding/binary"
- "strconv"
)
var (
@@ -53,10 +52,15 @@ func InstallShieldCab(raw []byte, _ uint32) bool {
}
// Zstd matches a Zstandard archive file.
+// https://github.com/facebook/zstd/blob/dev/doc/zstd_compression_format.md
func Zstd(raw []byte, limit uint32) bool {
- return len(raw) >= 4 &&
- (0x22 <= raw[0] && raw[0] <= 0x28 || raw[0] == 0x1E) && // Different Zstandard versions.
- bytes.HasPrefix(raw[1:], []byte{0xB5, 0x2F, 0xFD})
+ if len(raw) < 4 {
+ return false
+ }
+ sig := binary.LittleEndian.Uint32(raw)
+ // Check for Zstandard frames and skippable frames.
+ return (sig >= 0xFD2FB522 && sig <= 0xFD2FB528) ||
+ (sig >= 0x184D2A50 && sig <= 0x184D2A5F)
}
// CRX matches a Chrome extension file: a zip archive prepended by a package header.
@@ -110,8 +114,8 @@ func Tar(raw []byte, _ uint32) bool {
}
// Get the checksum recorded into the file.
- recsum, err := tarParseOctal(raw[148:156])
- if err != nil {
+ recsum := tarParseOctal(raw[148:156])
+ if recsum == -1 {
return false
}
sum1, sum2 := tarChksum(raw)
@@ -119,28 +123,26 @@ func Tar(raw []byte, _ uint32) bool {
}
// tarParseOctal converts octal string to decimal int.
-func tarParseOctal(b []byte) (int64, error) {
+func tarParseOctal(b []byte) int64 {
// Because unused fields are filled with NULs, we need to skip leading NULs.
// Fields may also be padded with spaces or NULs.
// So we remove leading and trailing NULs and spaces to be sure.
b = bytes.Trim(b, " \x00")
if len(b) == 0 {
- return 0, nil
+ return -1
}
- x, err := strconv.ParseUint(tarParseString(b), 8, 64)
- if err != nil {
- return 0, err
- }
- return int64(x), nil
-}
-
-// tarParseString converts a NUL ended bytes slice to a string.
-func tarParseString(b []byte) string {
- if i := bytes.IndexByte(b, 0); i >= 0 {
- return string(b[:i])
+ ret := int64(0)
+ for _, b := range b {
+ if b == 0 {
+ break
+ }
+ if !(b >= '0' && b <= '7') {
+ return -1
+ }
+ ret = (ret << 3) | int64(b-'0')
}
- return string(b)
+ return ret
}
// tarChksum computes the checksum for the header block b.
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/binary.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/binary.go
index f1e944987c371..76973201806b1 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/binary.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/binary.go
@@ -21,6 +21,10 @@ var (
SWF = prefix([]byte("CWS"), []byte("FWS"), []byte("ZWS"))
// Torrent has bencoded text in the beginning.
Torrent = prefix([]byte("d8:announce"))
+ // PAR1 matches a parquet file.
+ Par1 = prefix([]byte{0x50, 0x41, 0x52, 0x31})
+ // CBOR matches a Concise Binary Object Representation https://cbor.io/
+ CBOR = prefix([]byte{0xD9, 0xD9, 0xF7})
)
// Java bytecode and Mach-O binaries share the same magic number.
@@ -32,7 +36,7 @@ func classOrMachOFat(in []byte) bool {
return false
}
- return bytes.HasPrefix(in, []byte{0xCA, 0xFE, 0xBA, 0xBE})
+ return binary.BigEndian.Uint32(in) == macho.MagicFat
}
// Class matches a java class file.
@@ -42,7 +46,7 @@ func Class(raw []byte, limit uint32) bool {
// MachO matches Mach-O binaries format.
func MachO(raw []byte, limit uint32) bool {
- if classOrMachOFat(raw) && raw[7] < 20 {
+ if classOrMachOFat(raw) && raw[7] < 0x14 {
return true
}
@@ -154,11 +158,11 @@ func Marc(raw []byte, limit uint32) bool {
// the GL transmission Format (glTF).
// GLB uses little endian and its header structure is as follows:
//
-// <-- 12-byte header -->
-// | magic | version | length |
-// | (uint32) | (uint32) | (uint32) |
-// | \x67\x6C\x54\x46 | \x01\x00\x00\x00 | ... |
-// | g l T F | 1 | ... |
+// <-- 12-byte header -->
+// | magic | version | length |
+// | (uint32) | (uint32) | (uint32) |
+// | \x67\x6C\x54\x46 | \x01\x00\x00\x00 | ... |
+// | g l T F | 1 | ... |
//
// Visit [glTF specification] and [IANA glTF entry] for more details.
//
@@ -170,14 +174,15 @@ var Glb = prefix([]byte("\x67\x6C\x54\x46\x02\x00\x00\x00"),
// TzIf matches a Time Zone Information Format (TZif) file.
// See more: https://tools.ietf.org/id/draft-murchison-tzdist-tzif-00.html#rfc.section.3
// Its header structure is shown below:
-// +---------------+---+
-// | magic (4) | <-+-- version (1)
-// +---------------+---+---------------------------------------+
-// | [unused - reserved for future use] (15) |
-// +---------------+---------------+---------------+-----------+
-// | isutccnt (4) | isstdcnt (4) | leapcnt (4) |
-// +---------------+---------------+---------------+
-// | timecnt (4) | typecnt (4) | charcnt (4) |
+//
+// +---------------+---+
+// | magic (4) | <-+-- version (1)
+// +---------------+---+---------------------------------------+
+// | [unused - reserved for future use] (15) |
+// +---------------+---------------+---------------+-----------+
+// | isutccnt (4) | isstdcnt (4) | leapcnt (4) |
+// +---------------+---------------+---------------+
+// | timecnt (4) | typecnt (4) | charcnt (4) |
func TzIf(raw []byte, limit uint32) bool {
// File is at least 44 bytes (header size).
if len(raw) < 44 {
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ftyp.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ftyp.go
index 6575b4aecd64b..ac727139ef209 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ftyp.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ftyp.go
@@ -1,22 +1,14 @@
package magic
-import "bytes"
+import (
+ "bytes"
+)
var (
// AVIF matches an AV1 Image File Format still or animated.
// Wikipedia page seems outdated listing image/avif-sequence for animations.
// https://github.com/AOMediaCodec/av1-avif/issues/59
AVIF = ftyp([]byte("avif"), []byte("avis"))
- // Mp4 matches an MP4 file.
- Mp4 = ftyp(
- []byte("avc1"), []byte("dash"), []byte("iso2"), []byte("iso3"),
- []byte("iso4"), []byte("iso5"), []byte("iso6"), []byte("isom"),
- []byte("mmp4"), []byte("mp41"), []byte("mp42"), []byte("mp4v"),
- []byte("mp71"), []byte("MSNV"), []byte("NDAS"), []byte("NDSC"),
- []byte("NSDC"), []byte("NSDH"), []byte("NDSM"), []byte("NDSP"),
- []byte("NDSS"), []byte("NDXC"), []byte("NDXH"), []byte("NDXM"),
- []byte("NDXP"), []byte("NDXS"), []byte("F4V "), []byte("F4P "),
- )
// ThreeGP matches a 3GPP file.
ThreeGP = ftyp(
[]byte("3gp1"), []byte("3gp2"), []byte("3gp3"), []byte("3gp4"),
@@ -53,6 +45,17 @@ var (
Heif = ftyp([]byte("mif1"), []byte("heim"), []byte("heis"), []byte("avic"))
// HeifSequence matches a High Efficiency Image File Format (HEIF) file sequence.
HeifSequence = ftyp([]byte("msf1"), []byte("hevm"), []byte("hevs"), []byte("avcs"))
+ // Mj2 matches a Motion JPEG 2000 file: https://en.wikipedia.org/wiki/Motion_JPEG_2000.
+ Mj2 = ftyp([]byte("mj2s"), []byte("mjp2"), []byte("MFSM"), []byte("MGSV"))
+ // Dvb matches a Digital Video Broadcasting file: https://dvb.org.
+ // https://cconcolato.github.io/mp4ra/filetype.html
+ // https://github.com/file/file/blob/512840337ead1076519332d24fefcaa8fac36e06/magic/Magdir/animation#L135-L154
+ Dvb = ftyp(
+ []byte("dby1"), []byte("dsms"), []byte("dts1"), []byte("dts2"),
+ []byte("dts3"), []byte("dxo "), []byte("dmb1"), []byte("dmpf"),
+ []byte("drc1"), []byte("dv1a"), []byte("dv1b"), []byte("dv2a"),
+ []byte("dv2b"), []byte("dv3a"), []byte("dv3b"), []byte("dvr1"),
+ []byte("dvt1"), []byte("emsg"))
// TODO: add support for remaining video formats at ftyps.com.
)
@@ -86,3 +89,21 @@ func QuickTime(raw []byte, _ uint32) bool {
}
return bytes.Equal(raw[:8], []byte("\x00\x00\x00\x08wide"))
}
+
+// Mp4 detects an .mp4 file. Mp4 detection only does a basic ftyp check.
+// Mp4 has many registered and unregistered code points, so it's hard to keep
+// track of them all. Detection defaults to video/mp4 for all ftyp files.
+// ISO_IEC_14496-12 is the specification for the iso container.
+func Mp4(raw []byte, _ uint32) bool {
+ if len(raw) < 12 {
+ return false
+ }
+ // ftyps are made out of boxes. The first 4 bytes of the box represent
+ // its size in big-endian uint32. First box is the ftyp box and it is small
+ // in size. Check most significant byte is 0 to filter out false positive
+ // text files that happen to contain the string "ftyp" at index 4.
+ if raw[0] != 0 {
+ return false
+ }
+ return bytes.Equal(raw[4:8], []byte("ftyp"))
+}
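
Assuming this version of the public API, the refactored Mp4 check can be exercised end to end. The input is a hand-built minimal ftyp box, so the exact box sizes are illustrative rather than spec-exact:

```go
package main

import (
	"fmt"

	"github.com/gabriel-vasile/mimetype"
)

func main() {
	// A hand-built minimal ftyp box: 4-byte big-endian box size, "ftyp",
	// then the "isom" brand. The leading zero byte is exactly what the
	// new false-positive guard in Mp4 checks before looking for "ftyp".
	data := []byte{
		0x00, 0x00, 0x00, 0x14, 'f', 't', 'y', 'p',
		'i', 's', 'o', 'm', 0x00, 0x00, 0x00, 0x00,
		'i', 's', 'o', 'm',
	}

	m := mimetype.Detect(data)
	fmt.Println(m.String(), m.Extension()) // expected: video/mp4 .mp4
}
```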
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/magic.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/magic.go
index 3ce1de113ba3f..a34c609842123 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/magic.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/magic.go
@@ -153,9 +153,6 @@ func ftyp(sigs ...[]byte) Detector {
if len(raw) < 12 {
return false
}
- if !bytes.Equal(raw[4:8], []byte("ftyp")) {
- return false
- }
for _, s := range sigs {
if bytes.Equal(raw[8:12], s) {
return true
@@ -242,3 +239,13 @@ func min(a, b int) int {
}
return b
}
+
+type readBuf []byte
+
+func (b *readBuf) advance(n int) bool {
+ if n < 0 || len(*b) < n {
+ return false
+ }
+ *b = (*b)[n:]
+ return true
+}
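
`readBuf.advance` is the bounds-checked cursor that the rewritten zip scanning (below) depends on. A self-contained sketch of its semantics, duplicating the two definitions from this hunk so it runs on its own:

```go
package main

import "fmt"

// readBuf mirrors the helper above: a byte-slice cursor whose advance
// only moves forward when the requested count stays in bounds.
type readBuf []byte

func (b *readBuf) advance(n int) bool {
	if n < 0 || len(*b) < n {
		return false
	}
	*b = (*b)[n:]
	return true
}

func main() {
	b := readBuf("PK\x03\x04name.txt")
	fmt.Println(b.advance(4), string(b[:4])) // true name
	fmt.Println(b.advance(1 << 20))          // false: cursor left untouched
}
```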
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ms_office.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ms_office.go
index 5964ce596c167..7d60e22e26e94 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ms_office.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/ms_office.go
@@ -5,58 +5,19 @@ import (
"encoding/binary"
)
-var (
- xlsxSigFiles = []string{
- "xl/worksheets/",
- "xl/drawings/",
- "xl/theme/",
- "xl/_rels/",
- "xl/styles.xml",
- "xl/workbook.xml",
- "xl/sharedStrings.xml",
- }
- docxSigFiles = []string{
- "word/media/",
- "word/_rels/document.xml.rels",
- "word/document.xml",
- "word/styles.xml",
- "word/fontTable.xml",
- "word/settings.xml",
- "word/numbering.xml",
- "word/header",
- "word/footer",
- }
- pptxSigFiles = []string{
- "ppt/slides/",
- "ppt/media/",
- "ppt/slideLayouts/",
- "ppt/theme/",
- "ppt/slideMasters/",
- "ppt/tags/",
- "ppt/notesMasters/",
- "ppt/_rels/",
- "ppt/handoutMasters/",
- "ppt/notesSlides/",
- "ppt/presentation.xml",
- "ppt/tableStyles.xml",
- "ppt/presProps.xml",
- "ppt/viewProps.xml",
- }
-)
-
// Xlsx matches a Microsoft Excel 2007 file.
func Xlsx(raw []byte, limit uint32) bool {
- return zipContains(raw, xlsxSigFiles...)
+ return zipContains(raw, []byte("xl/"), true)
}
// Docx matches a Microsoft Word 2007 file.
func Docx(raw []byte, limit uint32) bool {
- return zipContains(raw, docxSigFiles...)
+ return zipContains(raw, []byte("word/"), true)
}
// Pptx matches a Microsoft PowerPoint 2007 file.
func Pptx(raw []byte, limit uint32) bool {
- return zipContains(raw, pptxSigFiles...)
+ return zipContains(raw, []byte("ppt/"), true)
}
// Ole matches an Open Linking and Embedding file.
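
The per-format file lists are replaced by a single directory-prefix check against the zip listing. Its effect is easiest to see through the public API; a sketch assuming `DetectFile` is available and using a placeholder path:

```go
package main

import (
	"fmt"
	"log"

	"github.com/gabriel-vasile/mimetype"
)

func main() {
	// "report.docx" is a placeholder path; any OOXML document works.
	// With the simplified check, a zip whose early entries live under
	// word/ is reported as docx without consulting a long file list.
	m, err := mimetype.DetectFile("report.docx")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(m.Is("application/vnd.openxmlformats-officedocument.wordprocessingml.document"))
}
```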
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text.go
index 9f1a637ba1c07..cf6446397fefc 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text.go
@@ -120,7 +120,7 @@ var (
[]byte("/usr/bin/env wish"),
)
// Rtf matches a Rich Text Format file.
- Rtf = prefix([]byte("{\\rtf1"))
+ Rtf = prefix([]byte("{\\rtf"))
)
// Text matches a plain text file.
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text_csv.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text_csv.go
index af2564381b50e..6083ba8c00cd1 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text_csv.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/text_csv.go
@@ -1,12 +1,28 @@
package magic
import (
+ "bufio"
"bytes"
"encoding/csv"
"errors"
"io"
+ "sync"
)
+// A bufio.Reader pool to alleviate problems with memory allocations.
+var readerPool = sync.Pool{
+ New: func() any {
+ // Initiate with empty source reader.
+ return bufio.NewReader(nil)
+ },
+}
+
+func newReader(r io.Reader) *bufio.Reader {
+ br := readerPool.Get().(*bufio.Reader)
+ br.Reset(r)
+ return br
+}
+
// Csv matches a comma-separated values file.
func Csv(raw []byte, limit uint32) bool {
return sv(raw, ',', limit)
@@ -18,7 +34,11 @@ func Tsv(raw []byte, limit uint32) bool {
}
func sv(in []byte, comma rune, limit uint32) bool {
- r := csv.NewReader(bytes.NewReader(dropLastLine(in, limit)))
+ in = dropLastLine(in, limit)
+
+ br := newReader(bytes.NewReader(in))
+ defer readerPool.Put(br)
+ r := csv.NewReader(br)
r.Comma = comma
r.ReuseRecord = true
r.LazyQuotes = true
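
The pooled `bufio.Reader` introduced here is a general pattern, not CSV-specific. A minimal sketch of the same idea, where `firstLine` is a made-up stand-in for the actual detection work:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
	"sync"
)

// Same pooling pattern as the patch: reuse bufio.Readers across calls so a
// hot sniffing path does not allocate a fresh buffer on every invocation.
var readerPool = sync.Pool{
	New: func() any { return bufio.NewReader(nil) },
}

// firstLine is a made-up stand-in for the CSV-detection work.
func firstLine(s string) string {
	br := readerPool.Get().(*bufio.Reader)
	br.Reset(strings.NewReader(s))
	defer readerPool.Put(br)
	line, _ := br.ReadString('\n')
	return strings.TrimRight(line, "\n")
}

func main() {
	fmt.Println(firstLine("a,b,c\n1,2,3\n")) // a,b,c
}
```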
diff --git a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/zip.go b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/zip.go
index dabee947b9dab..f6c64829d924f 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/internal/magic/zip.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/internal/magic/zip.go
@@ -3,7 +3,6 @@ package magic
import (
"bytes"
"encoding/binary"
- "strings"
)
var (
@@ -43,48 +42,88 @@ func Zip(raw []byte, limit uint32) bool {
// Jar matches a Java archive file.
func Jar(raw []byte, limit uint32) bool {
- return zipContains(raw, "META-INF/MANIFEST.MF")
+ return zipContains(raw, []byte("META-INF/MANIFEST.MF"), false)
}
-// zipTokenizer holds the source zip file and scanned index.
-type zipTokenizer struct {
- in []byte
- i int // current index
-}
+func zipContains(raw, sig []byte, msoCheck bool) bool {
+ b := readBuf(raw)
+ pk := []byte("PK\003\004")
+ if len(b) < 0x1E {
+ return false
+ }
-// next returns the next file name from the zip headers.
-// https://web.archive.org/web/20191129114319/https://users.cs.jmu.edu/buchhofp/forensics/formats/pkzip.html
-func (t *zipTokenizer) next() (fileName string) {
- if t.i > len(t.in) {
- return
+ if !b.advance(0x1E) {
+ return false
+ }
+ if bytes.HasPrefix(b, sig) {
+ return true
}
- in := t.in[t.i:]
- // pkSig is the signature of the zip local file header.
- pkSig := []byte("PK\003\004")
- pkIndex := bytes.Index(in, pkSig)
- // 30 is the offset of the file name in the header.
- fNameOffset := pkIndex + 30
- // end if signature not found or file name offset outside of file.
- if pkIndex == -1 || fNameOffset > len(in) {
- return
+
+ if msoCheck {
+ skipFiles := [][]byte{
+ []byte("[Content_Types].xml"),
+ []byte("_rels/.rels"),
+ []byte("docProps"),
+ []byte("customXml"),
+ []byte("[trash]"),
+ }
+
+ hasSkipFile := false
+ for _, sf := range skipFiles {
+ if bytes.HasPrefix(b, sf) {
+ hasSkipFile = true
+ break
+ }
+ }
+ if !hasSkipFile {
+ return false
+ }
+ }
+
+ searchOffset := binary.LittleEndian.Uint32(raw[18:]) + 49
+ if !b.advance(int(searchOffset)) {
+ return false
}
- fNameLen := int(binary.LittleEndian.Uint16(in[pkIndex+26 : pkIndex+28]))
- if fNameLen <= 0 || fNameOffset+fNameLen > len(in) {
- return
+ nextHeader := bytes.Index(raw[searchOffset:], pk)
+ if !b.advance(nextHeader) {
+ return false
}
- t.i += fNameOffset + fNameLen
- return string(in[fNameOffset : fNameOffset+fNameLen])
+ if bytes.HasPrefix(b, sig) {
+ return true
+ }
+
+ for i := 0; i < 4; i++ {
+ if !b.advance(0x1A) {
+ return false
+ }
+ nextHeader = bytes.Index(b, pk)
+ if nextHeader == -1 {
+ return false
+ }
+ if !b.advance(nextHeader + 0x1E) {
+ return false
+ }
+ if bytes.HasPrefix(b, sig) {
+ return true
+ }
+ }
+ return false
}
-// zipContains returns true if the zip file headers from in contain any of the paths.
-func zipContains(in []byte, paths ...string) bool {
- t := zipTokenizer{in: in}
- for i, tok := 0, t.next(); tok != ""; i, tok = i+1, t.next() {
- for p := range paths {
- if strings.HasPrefix(tok, paths[p]) {
- return true
- }
+// APK matches an Android Package Archive.
+// The source of signatures is https://github.com/file/file/blob/1778642b8ba3d947a779a36fcd81f8e807220a19/magic/Magdir/archive#L1820-L1887
+func APK(raw []byte, _ uint32) bool {
+ apkSignatures := [][]byte{
+ []byte("AndroidManifest.xml"),
+ []byte("META-INF/com/android/build/gradle/app-metadata.properties"),
+ []byte("classes.dex"),
+ []byte("resources.arsc"),
+ []byte("res/drawable"),
+ }
+ for _, sig := range apkSignatures {
+ if zipContains(raw, sig, false) {
+ return true
}
}
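
With APK wired into the zip branch ahead of JAR (see the tree.go change below), the detection hierarchy is inspectable at runtime. A sketch assuming the package-level `Lookup` and the `MIME.Parent` accessor exist in this version:

```go
package main

import (
	"fmt"

	"github.com/gabriel-vasile/mimetype"
)

func main() {
	// Lookup walks the detection tree by MIME string; the zip branch now
	// lists apk ahead of jar, per the ordering comment in tree.go.
	apk := mimetype.Lookup("application/vnd.android.package-archive")
	if apk != nil {
		fmt.Println(apk.Extension(), apk.Parent().String()) // .apk application/zip
	}
}
```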
diff --git a/vendor/github.com/gabriel-vasile/mimetype/mimetype.gif b/vendor/github.com/gabriel-vasile/mimetype/mimetype.gif
deleted file mode 100644
index c3e80876738f1..0000000000000
Binary files a/vendor/github.com/gabriel-vasile/mimetype/mimetype.gif and /dev/null differ
diff --git a/vendor/github.com/gabriel-vasile/mimetype/supported_mimes.md b/vendor/github.com/gabriel-vasile/mimetype/supported_mimes.md
index 5ec6f6b6502e5..f9bf03cba6d69 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/supported_mimes.md
+++ b/vendor/github.com/gabriel-vasile/mimetype/supported_mimes.md
@@ -1,4 +1,4 @@
-## 173 Supported MIME types
+## 178 Supported MIME types
This file is automatically generated when running tests. Do not edit manually.
Extension | MIME type | Aliases
@@ -11,6 +11,7 @@ Extension | MIME type | Aliases
**.docx** | application/vnd.openxmlformats-officedocument.wordprocessingml.document | -
**.pptx** | application/vnd.openxmlformats-officedocument.presentationml.presentation | -
**.epub** | application/epub+zip | -
+**.apk** | application/vnd.android.package-archive | -
**.jar** | application/jar | -
**.odt** | application/vnd.oasis.opendocument.text | application/x-vnd.oasis.opendocument.text
**.ott** | application/vnd.oasis.opendocument.text-template | application/x-vnd.oasis.opendocument.text-template
@@ -75,21 +76,28 @@ Extension | MIME type | Aliases
**.au** | audio/basic | -
**.mpeg** | video/mpeg | -
**.mov** | video/quicktime | -
-**.mqv** | video/quicktime | -
**.mp4** | video/mp4 | -
-**.webm** | video/webm | audio/webm
+**.avif** | image/avif | -
**.3gp** | video/3gpp | video/3gp, audio/3gpp
**.3g2** | video/3gpp2 | video/3g2, audio/3gpp2
+**.mp4** | audio/mp4 | audio/x-mp4a
+**.mqv** | video/quicktime | -
+**.m4a** | audio/x-m4a | -
+**.m4v** | video/x-m4v | -
+**.heic** | image/heic | -
+**.heic** | image/heic-sequence | -
+**.heif** | image/heif | -
+**.heif** | image/heif-sequence | -
+**.mj2** | video/mj2 | -
+**.dvb** | video/vnd.dvb.file | -
+**.webm** | video/webm | audio/webm
**.avi** | video/x-msvideo | video/avi, video/msvideo
**.flv** | video/x-flv | -
**.mkv** | video/x-matroska | -
**.asf** | video/x-ms-asf | video/asf, video/x-ms-wmv
**.aac** | audio/aac | -
**.voc** | audio/x-unknown | -
-**.mp4** | audio/mp4 | audio/x-m4a, audio/x-mp4a
-**.m4a** | audio/x-m4a | -
**.m3u** | application/vnd.apple.mpegurl | audio/mpegurl
-**.m4v** | video/x-m4v | -
**.rmvb** | application/vnd.rn-realmedia-vbr | -
**.gz** | application/gzip | application/x-gzip, application/x-gunzip, application/gzipped, application/gzip-compressed, application/x-gzip-compressed, gzip/document
**.class** | application/x-java-applet | -
@@ -111,6 +119,7 @@ Extension | MIME type | Aliases
**.mobi** | application/x-mobipocket-ebook | -
**.lit** | application/x-ms-reader | -
**.bpg** | image/bpg | -
+**.cbor** | application/cbor | -
**.sqlite** | application/vnd.sqlite3 | application/x-sqlite3
**.dwg** | image/vnd.dwg | image/x-dwg, application/acad, application/x-acad, application/autocad_dwg, application/dwg, application/x-dwg, application/x-autocad, drawing/dwg
**.nes** | application/vnd.nintendo.snes.rom | -
@@ -118,10 +127,6 @@ Extension | MIME type | Aliases
**.macho** | application/x-mach-binary | -
**.qcp** | audio/qcelp | -
**.icns** | image/x-icns | -
-**.heic** | image/heic | -
-**.heic** | image/heic-sequence | -
-**.heif** | image/heif | -
-**.heif** | image/heif-sequence | -
**.hdr** | image/vnd.radiance | -
**.mrc** | application/marc | -
**.mdb** | application/x-msaccess | -
@@ -138,13 +143,13 @@ Extension | MIME type | Aliases
**.pat** | image/x-gimp-pat | -
**.gbr** | image/x-gimp-gbr | -
**.glb** | model/gltf-binary | -
-**.avif** | image/avif | -
**.cab** | application/x-installshield | -
**.jxr** | image/jxr | image/vnd.ms-photo
+**.parquet** | application/vnd.apache.parquet | application/x-parquet
**.txt** | text/plain | -
**.html** | text/html | -
**.svg** | image/svg+xml | -
-**.xml** | text/xml | -
+**.xml** | text/xml | application/xml
**.rss** | application/rss+xml | text/rss
**.atom** | application/atom+xml | -
**.x3d** | model/x3d+xml | -
@@ -159,7 +164,7 @@ Extension | MIME type | Aliases
**.xfdf** | application/vnd.adobe.xfdf | -
**.owl** | application/owl+xml | -
**.php** | text/x-php | -
-**.js** | application/javascript | application/x-javascript, text/javascript
+**.js** | text/javascript | application/x-javascript, application/javascript
**.lua** | text/x-lua | -
**.pl** | text/x-perl | -
**.py** | text/x-python | text/x-script.python, application/x-python
@@ -167,7 +172,7 @@ Extension | MIME type | Aliases
**.geojson** | application/geo+json | -
**.har** | application/json | -
**.ndjson** | application/x-ndjson | -
-**.rtf** | text/rtf | -
+**.rtf** | text/rtf | application/rtf
**.srt** | application/x-subrip | application/x-srt, text/x-srt
**.tcl** | text/x-tcl | application/x-tcl
**.csv** | text/csv | -
diff --git a/vendor/github.com/gabriel-vasile/mimetype/tree.go b/vendor/github.com/gabriel-vasile/mimetype/tree.go
index 253bd00649915..b5f5662277abb 100644
--- a/vendor/github.com/gabriel-vasile/mimetype/tree.go
+++ b/vendor/github.com/gabriel-vasile/mimetype/tree.go
@@ -18,14 +18,13 @@ import (
var root = newMIME("application/octet-stream", "",
func([]byte, uint32) bool { return true },
xpm, sevenZ, zip, pdf, fdf, ole, ps, psd, p7s, ogg, png, jpg, jxl, jp2, jpx,
- jpm, jxs, gif, webp, exe, elf, ar, tar, xar, bz2, fits, tiff, bmp, ico, mp3, flac,
- midi, ape, musePack, amr, wav, aiff, au, mpeg, quickTime, mqv, mp4, webM,
- threeGP, threeG2, avi, flv, mkv, asf, aac, voc, aMp4, m4a, m3u, m4v, rmvb,
- gzip, class, swf, crx, ttf, woff, woff2, otf, ttc, eot, wasm, shx, dbf, dcm, rar,
- djvu, mobi, lit, bpg, sqlite3, dwg, nes, lnk, macho, qcp, icns, heic,
- heicSeq, heif, heifSeq, hdr, mrc, mdb, accdb, zstd, cab, rpm, xz, lzip,
- torrent, cpio, tzif, xcf, pat, gbr, glb, avif, cabIS, jxr,
- // Keep text last because it is the slowest check
+ jpm, jxs, gif, webp, exe, elf, ar, tar, xar, bz2, fits, tiff, bmp, ico, mp3,
+ flac, midi, ape, musePack, amr, wav, aiff, au, mpeg, quickTime, mp4, webM,
+ avi, flv, mkv, asf, aac, voc, m3u, rmvb, gzip, class, swf, crx, ttf, woff,
+ woff2, otf, ttc, eot, wasm, shx, dbf, dcm, rar, djvu, mobi, lit, bpg, cbor,
+ sqlite3, dwg, nes, lnk, macho, qcp, icns, hdr, mrc, mdb, accdb, zstd, cab,
+ rpm, xz, lzip, torrent, cpio, tzif, xcf, pat, gbr, glb, cabIS, jxr, parquet,
+ // Keep text last because it is the slowest check.
text,
)
@@ -45,7 +44,11 @@ var (
"application/gzip-compressed", "application/x-gzip-compressed",
"gzip/document")
sevenZ = newMIME("application/x-7z-compressed", ".7z", magic.SevenZ)
- zip = newMIME("application/zip", ".zip", magic.Zip, xlsx, docx, pptx, epub, jar, odt, ods, odp, odg, odf, odc, sxc).
+	// APK must be checked before JAR because APK is a subset of JAR.
+	// This means APK should be a child of the JAR detector but, in practice,
+	// the decisive signature for JAR might be located at the end of the file
+	// and not be reachable because of the library's readLimit.
+ zip = newMIME("application/zip", ".zip", magic.Zip, xlsx, docx, pptx, epub, apk, jar, odt, ods, odp, odg, odf, odc, sxc).
alias("application/x-zip", "application/x-zip-compressed")
tar = newMIME("application/x-tar", ".tar", magic.Tar)
xar = newMIME("application/x-xar", ".xar", magic.Xar)
@@ -58,6 +61,7 @@ var (
pptx = newMIME("application/vnd.openxmlformats-officedocument.presentationml.presentation", ".pptx", magic.Pptx)
epub = newMIME("application/epub+zip", ".epub", magic.Epub)
jar = newMIME("application/jar", ".jar", magic.Jar)
+ apk = newMIME("application/vnd.android.package-archive", ".apk", magic.APK)
ole = newMIME("application/x-ole-storage", "", magic.Ole, msi, aaf, msg, xls, pub, ppt, doc)
msi = newMIME("application/x-ms-installer", ".msi", magic.Msi).
alias("application/x-windows-installer", "application/x-msi")
@@ -77,18 +81,19 @@ var (
oggAudio = newMIME("audio/ogg", ".oga", magic.OggAudio)
oggVideo = newMIME("video/ogg", ".ogv", magic.OggVideo)
text = newMIME("text/plain", ".txt", magic.Text, html, svg, xml, php, js, lua, perl, python, json, ndJSON, rtf, srt, tcl, csv, tsv, vCard, iCalendar, warc, vtt)
- xml = newMIME("text/xml", ".xml", magic.XML, rss, atom, x3d, kml, xliff, collada, gml, gpx, tcx, amf, threemf, xfdf, owl2)
- json = newMIME("application/json", ".json", magic.JSON, geoJSON, har)
- har = newMIME("application/json", ".har", magic.HAR)
- csv = newMIME("text/csv", ".csv", magic.Csv)
- tsv = newMIME("text/tab-separated-values", ".tsv", magic.Tsv)
- geoJSON = newMIME("application/geo+json", ".geojson", magic.GeoJSON)
- ndJSON = newMIME("application/x-ndjson", ".ndjson", magic.NdJSON)
- html = newMIME("text/html", ".html", magic.HTML)
- php = newMIME("text/x-php", ".php", magic.Php)
- rtf = newMIME("text/rtf", ".rtf", magic.Rtf)
- js = newMIME("application/javascript", ".js", magic.Js).
- alias("application/x-javascript", "text/javascript")
+ xml = newMIME("text/xml", ".xml", magic.XML, rss, atom, x3d, kml, xliff, collada, gml, gpx, tcx, amf, threemf, xfdf, owl2).
+ alias("application/xml")
+ json = newMIME("application/json", ".json", magic.JSON, geoJSON, har)
+ har = newMIME("application/json", ".har", magic.HAR)
+ csv = newMIME("text/csv", ".csv", magic.Csv)
+ tsv = newMIME("text/tab-separated-values", ".tsv", magic.Tsv)
+ geoJSON = newMIME("application/geo+json", ".geojson", magic.GeoJSON)
+ ndJSON = newMIME("application/x-ndjson", ".ndjson", magic.NdJSON)
+ html = newMIME("text/html", ".html", magic.HTML)
+ php = newMIME("text/x-php", ".php", magic.Php)
+ rtf = newMIME("text/rtf", ".rtf", magic.Rtf).alias("application/rtf")
+ js = newMIME("text/javascript", ".js", magic.Js).
+ alias("application/x-javascript", "application/javascript")
srt = newMIME("application/x-subrip", ".srt", magic.Srt).
alias("application/x-srt", "text/x-srt")
vtt = newMIME("text/vtt", ".vtt", magic.Vtt)
@@ -156,12 +161,14 @@ var (
aac = newMIME("audio/aac", ".aac", magic.AAC)
voc = newMIME("audio/x-unknown", ".voc", magic.Voc)
aMp4 = newMIME("audio/mp4", ".mp4", magic.AMp4).
- alias("audio/x-m4a", "audio/x-mp4a")
+ alias("audio/x-mp4a")
m4a = newMIME("audio/x-m4a", ".m4a", magic.M4a)
m3u = newMIME("application/vnd.apple.mpegurl", ".m3u", magic.M3u).
alias("audio/mpegurl")
m4v = newMIME("video/x-m4v", ".m4v", magic.M4v)
- mp4 = newMIME("video/mp4", ".mp4", magic.Mp4)
+ mj2 = newMIME("video/mj2", ".mj2", magic.Mj2)
+ dvb = newMIME("video/vnd.dvb.file", ".dvb", magic.Dvb)
+ mp4 = newMIME("video/mp4", ".mp4", magic.Mp4, avif, threeGP, threeG2, aMp4, mqv, m4a, m4v, heic, heicSeq, heif, heifSeq, mj2, dvb)
webM = newMIME("video/webm", ".webm", magic.WebM).
alias("audio/webm")
mpeg = newMIME("video/mpeg", ".mpeg", magic.Mpeg)
@@ -257,4 +264,7 @@ var (
xfdf = newMIME("application/vnd.adobe.xfdf", ".xfdf", magic.Xfdf)
glb = newMIME("model/gltf-binary", ".glb", magic.Glb)
jxr = newMIME("image/jxr", ".jxr", magic.Jxr).alias("image/vnd.ms-photo")
+ parquet = newMIME("application/vnd.apache.parquet", ".parquet", magic.Par1).
+ alias("application/x-parquet")
+ cbor = newMIME("application/cbor", ".cbor", magic.CBOR)
)
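
Leaf types such as parquet and cbor above are just detector functions hung off the root, and consumer code can register its own the same way. A sketch using the package-level `Extend`; the `application/x-demo` type and `DEMO` magic are invented for illustration:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/gabriel-vasile/mimetype"
)

func main() {
	// Register a custom leaf off the root detector, the same mechanism the
	// parquet and cbor entries use internally.
	mimetype.Extend(func(raw []byte, limit uint32) bool {
		return bytes.HasPrefix(raw, []byte("DEMO"))
	}, "application/x-demo", ".demo")

	fmt.Println(mimetype.Detect([]byte("DEMO payload")).String()) // application/x-demo
}
```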
diff --git a/vendor/github.com/go-playground/validator/v10/README.md b/vendor/github.com/go-playground/validator/v10/README.md
index a6e1d0b51d100..25eadf026ceaa 100644
--- a/vendor/github.com/go-playground/validator/v10/README.md
+++ b/vendor/github.com/go-playground/validator/v10/README.md
@@ -1,7 +1,7 @@
Package validator
=================
<img align="right" src="logo.png">[](https://gitter.im/go-playground/validator?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-
+
[](https://travis-ci.org/go-playground/validator)
[](https://coveralls.io/github/go-playground/validator?branch=master)
[](https://goreportcard.com/report/github.com/go-playground/validator)
@@ -22,6 +22,11 @@ It has the following **unique** features:
- Customizable i18n aware error messages.
- Default validator for the [gin](https://github.com/gin-gonic/gin) web framework; upgrading from v8 to v9 in gin see [here](https://github.com/go-playground/validator/tree/master/_examples/gin-upgrading-overriding)
+A Call for Maintainers
+----------------------
+
+Please read the discussion started [here](https://github.com/go-playground/validator/discussions/1330) if you are interested in contributing/helping maintain this package.
+
Installation
------------
@@ -163,6 +168,7 @@ validate := validator.New(validator.WithRequiredStructEnabled())
| btc_addr_bech32 | Bitcoin Bech32 Address (segwit) |
| credit_card | Credit Card Number |
| mongodb | MongoDB ObjectID |
+| mongodb_connection_string | MongoDB Connection String |
| cron | Cron |
| spicedb | SpiceDb ObjectID/Permission/Type |
| datetime | Datetime |
@@ -265,74 +271,75 @@ validate := validator.New(validator.WithRequiredStructEnabled())
Benchmarks
------
-###### Run on MacBook Pro (15-inch, 2017) go version go1.10.2 darwin/amd64
+###### Run on MacBook Pro (M3 Max)
```go
-go version go1.21.0 darwin/arm64
+go version go1.23.3 darwin/arm64
goos: darwin
goarch: arm64
+cpu: Apple M3 Max
pkg: github.com/go-playground/validator/v10
-BenchmarkFieldSuccess-8 33142266 35.94 ns/op 0 B/op 0 allocs/op
-BenchmarkFieldSuccessParallel-8 200816191 6.568 ns/op 0 B/op 0 allocs/op
-BenchmarkFieldFailure-8 6779707 175.1 ns/op 200 B/op 4 allocs/op
-BenchmarkFieldFailureParallel-8 11044147 108.4 ns/op 200 B/op 4 allocs/op
-BenchmarkFieldArrayDiveSuccess-8 6054232 194.4 ns/op 97 B/op 5 allocs/op
-BenchmarkFieldArrayDiveSuccessParallel-8 12523388 94.07 ns/op 97 B/op 5 allocs/op
-BenchmarkFieldArrayDiveFailure-8 3587043 334.3 ns/op 300 B/op 10 allocs/op
-BenchmarkFieldArrayDiveFailureParallel-8 5816665 200.8 ns/op 300 B/op 10 allocs/op
-BenchmarkFieldMapDiveSuccess-8 2217910 540.1 ns/op 288 B/op 14 allocs/op
-BenchmarkFieldMapDiveSuccessParallel-8 4446698 258.7 ns/op 288 B/op 14 allocs/op
-BenchmarkFieldMapDiveFailure-8 2392759 504.6 ns/op 376 B/op 13 allocs/op
-BenchmarkFieldMapDiveFailureParallel-8 4244199 286.9 ns/op 376 B/op 13 allocs/op
-BenchmarkFieldMapDiveWithKeysSuccess-8 2005857 592.1 ns/op 288 B/op 14 allocs/op
-BenchmarkFieldMapDiveWithKeysSuccessParallel-8 4400850 296.9 ns/op 288 B/op 14 allocs/op
-BenchmarkFieldMapDiveWithKeysFailure-8 1850227 643.8 ns/op 553 B/op 16 allocs/op
-BenchmarkFieldMapDiveWithKeysFailureParallel-8 3293233 375.1 ns/op 553 B/op 16 allocs/op
-BenchmarkFieldCustomTypeSuccess-8 12174412 98.25 ns/op 32 B/op 2 allocs/op
-BenchmarkFieldCustomTypeSuccessParallel-8 34389907 35.49 ns/op 32 B/op 2 allocs/op
-BenchmarkFieldCustomTypeFailure-8 7582524 156.6 ns/op 184 B/op 3 allocs/op
-BenchmarkFieldCustomTypeFailureParallel-8 13019902 92.79 ns/op 184 B/op 3 allocs/op
-BenchmarkFieldOrTagSuccess-8 3427260 349.4 ns/op 16 B/op 1 allocs/op
-BenchmarkFieldOrTagSuccessParallel-8 15144128 81.25 ns/op 16 B/op 1 allocs/op
-BenchmarkFieldOrTagFailure-8 5913546 201.9 ns/op 216 B/op 5 allocs/op
-BenchmarkFieldOrTagFailureParallel-8 9810212 113.7 ns/op 216 B/op 5 allocs/op
-BenchmarkStructLevelValidationSuccess-8 13456327 87.66 ns/op 16 B/op 1 allocs/op
-BenchmarkStructLevelValidationSuccessParallel-8 41818888 27.77 ns/op 16 B/op 1 allocs/op
-BenchmarkStructLevelValidationFailure-8 4166284 272.6 ns/op 264 B/op 7 allocs/op
-BenchmarkStructLevelValidationFailureParallel-8 7594581 152.1 ns/op 264 B/op 7 allocs/op
-BenchmarkStructSimpleCustomTypeSuccess-8 6508082 182.6 ns/op 32 B/op 2 allocs/op
-BenchmarkStructSimpleCustomTypeSuccessParallel-8 23078605 54.78 ns/op 32 B/op 2 allocs/op
-BenchmarkStructSimpleCustomTypeFailure-8 3118352 381.0 ns/op 416 B/op 9 allocs/op
-BenchmarkStructSimpleCustomTypeFailureParallel-8 5300738 224.1 ns/op 432 B/op 10 allocs/op
-BenchmarkStructFilteredSuccess-8 4761807 251.1 ns/op 216 B/op 5 allocs/op
-BenchmarkStructFilteredSuccessParallel-8 8792598 128.6 ns/op 216 B/op 5 allocs/op
-BenchmarkStructFilteredFailure-8 5202573 232.1 ns/op 216 B/op 5 allocs/op
-BenchmarkStructFilteredFailureParallel-8 9591267 121.4 ns/op 216 B/op 5 allocs/op
-BenchmarkStructPartialSuccess-8 5188512 231.6 ns/op 224 B/op 4 allocs/op
-BenchmarkStructPartialSuccessParallel-8 9179776 123.1 ns/op 224 B/op 4 allocs/op
-BenchmarkStructPartialFailure-8 3071212 392.5 ns/op 440 B/op 9 allocs/op
-BenchmarkStructPartialFailureParallel-8 5344261 223.7 ns/op 440 B/op 9 allocs/op
-BenchmarkStructExceptSuccess-8 3184230 375.0 ns/op 424 B/op 8 allocs/op
-BenchmarkStructExceptSuccessParallel-8 10090130 108.9 ns/op 208 B/op 3 allocs/op
-BenchmarkStructExceptFailure-8 3347226 357.7 ns/op 424 B/op 8 allocs/op
-BenchmarkStructExceptFailureParallel-8 5654923 209.5 ns/op 424 B/op 8 allocs/op
-BenchmarkStructSimpleCrossFieldSuccess-8 5232265 229.1 ns/op 56 B/op 3 allocs/op
-BenchmarkStructSimpleCrossFieldSuccessParallel-8 17436674 64.75 ns/op 56 B/op 3 allocs/op
-BenchmarkStructSimpleCrossFieldFailure-8 3128613 383.6 ns/op 272 B/op 8 allocs/op
-BenchmarkStructSimpleCrossFieldFailureParallel-8 6994113 168.8 ns/op 272 B/op 8 allocs/op
-BenchmarkStructSimpleCrossStructCrossFieldSuccess-8 3506487 340.9 ns/op 64 B/op 4 allocs/op
-BenchmarkStructSimpleCrossStructCrossFieldSuccessParallel-8 13431300 91.77 ns/op 64 B/op 4 allocs/op
-BenchmarkStructSimpleCrossStructCrossFieldFailure-8 2410566 500.9 ns/op 288 B/op 9 allocs/op
-BenchmarkStructSimpleCrossStructCrossFieldFailureParallel-8 6344510 188.2 ns/op 288 B/op 9 allocs/op
-BenchmarkStructSimpleSuccess-8 8922726 133.8 ns/op 0 B/op 0 allocs/op
-BenchmarkStructSimpleSuccessParallel-8 55291153 23.63 ns/op 0 B/op 0 allocs/op
-BenchmarkStructSimpleFailure-8 3171553 378.4 ns/op 416 B/op 9 allocs/op
-BenchmarkStructSimpleFailureParallel-8 5571692 212.0 ns/op 416 B/op 9 allocs/op
-BenchmarkStructComplexSuccess-8 1683750 714.5 ns/op 224 B/op 5 allocs/op
-BenchmarkStructComplexSuccessParallel-8 4578046 257.0 ns/op 224 B/op 5 allocs/op
-BenchmarkStructComplexFailure-8 481585 2547 ns/op 3041 B/op 48 allocs/op
-BenchmarkStructComplexFailureParallel-8 965764 1577 ns/op 3040 B/op 48 allocs/op
-BenchmarkOneof-8 17380881 68.50 ns/op 0 B/op 0 allocs/op
-BenchmarkOneofParallel-8 8084733 153.5 ns/op 0 B/op 0 allocs/op
+BenchmarkFieldSuccess-16 42461943 27.88 ns/op 0 B/op 0 allocs/op
+BenchmarkFieldSuccessParallel-16 486632887 2.289 ns/op 0 B/op 0 allocs/op
+BenchmarkFieldFailure-16 9566167 121.3 ns/op 200 B/op 4 allocs/op
+BenchmarkFieldFailureParallel-16 17551471 83.68 ns/op 200 B/op 4 allocs/op
+BenchmarkFieldArrayDiveSuccess-16 7602306 155.6 ns/op 97 B/op 5 allocs/op
+BenchmarkFieldArrayDiveSuccessParallel-16 20664610 59.80 ns/op 97 B/op 5 allocs/op
+BenchmarkFieldArrayDiveFailure-16 4659756 252.9 ns/op 301 B/op 10 allocs/op
+BenchmarkFieldArrayDiveFailureParallel-16 8010116 152.9 ns/op 301 B/op 10 allocs/op
+BenchmarkFieldMapDiveSuccess-16 2834575 421.2 ns/op 288 B/op 14 allocs/op
+BenchmarkFieldMapDiveSuccessParallel-16 7179700 171.8 ns/op 288 B/op 14 allocs/op
+BenchmarkFieldMapDiveFailure-16 3081728 384.4 ns/op 376 B/op 13 allocs/op
+BenchmarkFieldMapDiveFailureParallel-16 6058137 204.0 ns/op 377 B/op 13 allocs/op
+BenchmarkFieldMapDiveWithKeysSuccess-16 2544975 464.8 ns/op 288 B/op 14 allocs/op
+BenchmarkFieldMapDiveWithKeysSuccessParallel-16 6661954 181.4 ns/op 288 B/op 14 allocs/op
+BenchmarkFieldMapDiveWithKeysFailure-16 2435484 490.7 ns/op 553 B/op 16 allocs/op
+BenchmarkFieldMapDiveWithKeysFailureParallel-16 4249617 282.0 ns/op 554 B/op 16 allocs/op
+BenchmarkFieldCustomTypeSuccess-16 14943525 77.35 ns/op 32 B/op 2 allocs/op
+BenchmarkFieldCustomTypeSuccessParallel-16 64051954 20.61 ns/op 32 B/op 2 allocs/op
+BenchmarkFieldCustomTypeFailure-16 10721384 107.1 ns/op 184 B/op 3 allocs/op
+BenchmarkFieldCustomTypeFailureParallel-16 18714495 69.77 ns/op 184 B/op 3 allocs/op
+BenchmarkFieldOrTagSuccess-16 4063124 294.3 ns/op 16 B/op 1 allocs/op
+BenchmarkFieldOrTagSuccessParallel-16 31903756 41.22 ns/op 18 B/op 1 allocs/op
+BenchmarkFieldOrTagFailure-16 7748558 146.8 ns/op 216 B/op 5 allocs/op
+BenchmarkFieldOrTagFailureParallel-16 13139854 92.05 ns/op 216 B/op 5 allocs/op
+BenchmarkStructLevelValidationSuccess-16 16808389 70.25 ns/op 16 B/op 1 allocs/op
+BenchmarkStructLevelValidationSuccessParallel-16 90686955 14.47 ns/op 16 B/op 1 allocs/op
+BenchmarkStructLevelValidationFailure-16 5818791 200.2 ns/op 264 B/op 7 allocs/op
+BenchmarkStructLevelValidationFailureParallel-16 11115874 107.5 ns/op 264 B/op 7 allocs/op
+BenchmarkStructSimpleCustomTypeSuccess-16 7764956 151.9 ns/op 32 B/op 2 allocs/op
+BenchmarkStructSimpleCustomTypeSuccessParallel-16 52316265 30.37 ns/op 32 B/op 2 allocs/op
+BenchmarkStructSimpleCustomTypeFailure-16 4195429 277.2 ns/op 416 B/op 9 allocs/op
+BenchmarkStructSimpleCustomTypeFailureParallel-16 7305661 164.6 ns/op 432 B/op 10 allocs/op
+BenchmarkStructFilteredSuccess-16 6312625 186.1 ns/op 216 B/op 5 allocs/op
+BenchmarkStructFilteredSuccessParallel-16 13684459 93.42 ns/op 216 B/op 5 allocs/op
+BenchmarkStructFilteredFailure-16 6751482 171.2 ns/op 216 B/op 5 allocs/op
+BenchmarkStructFilteredFailureParallel-16 14146070 86.93 ns/op 216 B/op 5 allocs/op
+BenchmarkStructPartialSuccess-16 6544448 177.3 ns/op 224 B/op 4 allocs/op
+BenchmarkStructPartialSuccessParallel-16 13951946 88.73 ns/op 224 B/op 4 allocs/op
+BenchmarkStructPartialFailure-16 4075833 287.5 ns/op 440 B/op 9 allocs/op
+BenchmarkStructPartialFailureParallel-16 7490805 161.3 ns/op 440 B/op 9 allocs/op
+BenchmarkStructExceptSuccess-16 4107187 281.4 ns/op 424 B/op 8 allocs/op
+BenchmarkStructExceptSuccessParallel-16 15979173 80.86 ns/op 208 B/op 3 allocs/op
+BenchmarkStructExceptFailure-16 4434372 264.3 ns/op 424 B/op 8 allocs/op
+BenchmarkStructExceptFailureParallel-16 8081367 154.1 ns/op 424 B/op 8 allocs/op
+BenchmarkStructSimpleCrossFieldSuccess-16 6459542 183.4 ns/op 56 B/op 3 allocs/op
+BenchmarkStructSimpleCrossFieldSuccessParallel-16 41013781 37.95 ns/op 56 B/op 3 allocs/op
+BenchmarkStructSimpleCrossFieldFailure-16 4034998 292.1 ns/op 272 B/op 8 allocs/op
+BenchmarkStructSimpleCrossFieldFailureParallel-16 11348446 115.3 ns/op 272 B/op 8 allocs/op
+BenchmarkStructSimpleCrossStructCrossFieldSuccess-16 4448528 267.7 ns/op 64 B/op 4 allocs/op
+BenchmarkStructSimpleCrossStructCrossFieldSuccessParallel-16 26813619 48.33 ns/op 64 B/op 4 allocs/op
+BenchmarkStructSimpleCrossStructCrossFieldFailure-16 3090646 384.5 ns/op 288 B/op 9 allocs/op
+BenchmarkStructSimpleCrossStructCrossFieldFailureParallel-16 9870906 129.5 ns/op 288 B/op 9 allocs/op
+BenchmarkStructSimpleSuccess-16 10675562 109.5 ns/op 0 B/op 0 allocs/op
+BenchmarkStructSimpleSuccessParallel-16 131159784 8.932 ns/op 0 B/op 0 allocs/op
+BenchmarkStructSimpleFailure-16 4094979 286.6 ns/op 416 B/op 9 allocs/op
+BenchmarkStructSimpleFailureParallel-16 7606663 157.9 ns/op 416 B/op 9 allocs/op
+BenchmarkStructComplexSuccess-16 2073470 576.0 ns/op 224 B/op 5 allocs/op
+BenchmarkStructComplexSuccessParallel-16 7821831 161.3 ns/op 224 B/op 5 allocs/op
+BenchmarkStructComplexFailure-16 576358 2001 ns/op 3042 B/op 48 allocs/op
+BenchmarkStructComplexFailureParallel-16 1000000 1171 ns/op 3041 B/op 48 allocs/op
+BenchmarkOneof-16 22503973 52.82 ns/op 0 B/op 0 allocs/op
+BenchmarkOneofParallel-16 8538474 140.4 ns/op 0 B/op 0 allocs/op
```
Complementary Software
@@ -348,6 +355,20 @@ How to Contribute
Make a pull request...
+Maintenance and support for SDK major versions
+----------------------------------------------
+
+See prior discussion [here](https://github.com/go-playground/validator/discussions/1342) for more details.
+
+This package is aligned with the [Go release policy](https://go.dev/doc/devel/release) in that support is guaranteed for
+the two most recent major versions.
+
+This does not mean the package will not work with older versions of Go, only that we reserve the right to increase the
+MSGV (Minimum Supported Go Version) when the need arises to address security issues/patches, OS issues and support, or newly
+introduced functionality that would greatly benefit the maintenance and/or usage of this package.
+
+If and when the MSGV is increased it will be done so in a minimum of a `Minor` release bump.
+
License
-------
Distributed under MIT License, please see license file within the code for more details.
diff --git a/vendor/github.com/go-playground/validator/v10/baked_in.go b/vendor/github.com/go-playground/validator/v10/baked_in.go
index 95f56e0080391..2f66c18368984 100644
--- a/vendor/github.com/go-playground/validator/v10/baked_in.go
+++ b/vendor/github.com/go-playground/validator/v10/baked_in.go
@@ -64,8 +64,9 @@ var (
// defines a common or complex set of validation(s) to simplify
// adding validation to structs.
bakedInAliases = map[string]string{
- "iscolor": "hexcolor|rgb|rgba|hsl|hsla",
- "country_code": "iso3166_1_alpha2|iso3166_1_alpha3|iso3166_1_alpha_numeric",
+ "iscolor": "hexcolor|rgb|rgba|hsl|hsla",
+ "country_code": "iso3166_1_alpha2|iso3166_1_alpha3|iso3166_1_alpha_numeric",
+ "eu_country_code": "iso3166_1_alpha2_eu|iso3166_1_alpha3_eu|iso3166_1_alpha_numeric_eu",
}
// bakedInValidators is the default map of ValidationFunc
@@ -133,6 +134,7 @@ var (
"urn_rfc2141": isUrnRFC2141, // RFC 2141
"file": isFile,
"filepath": isFilePath,
+ "base32": isBase32,
"base64": isBase64,
"base64url": isBase64URL,
"base64rawurl": isBase64RawURL,
@@ -203,6 +205,7 @@ var (
"fqdn": isFQDN,
"unique": isUnique,
"oneof": isOneOf,
+ "oneofci": isOneOfCI,
"html": isHTML,
"html_encoded": isHTMLEncoded,
"url_encoded": isURLEncoded,
@@ -211,13 +214,17 @@ var (
"json": isJSON,
"jwt": isJWT,
"hostname_port": isHostnamePort,
+ "port": isPort,
"lowercase": isLowercase,
"uppercase": isUppercase,
"datetime": isDatetime,
"timezone": isTimeZone,
"iso3166_1_alpha2": isIso3166Alpha2,
+ "iso3166_1_alpha2_eu": isIso3166Alpha2EU,
"iso3166_1_alpha3": isIso3166Alpha3,
+ "iso3166_1_alpha3_eu": isIso3166Alpha3EU,
"iso3166_1_alpha_numeric": isIso3166AlphaNumeric,
+ "iso3166_1_alpha_numeric_eu": isIso3166AlphaNumericEU,
"iso3166_2": isIso31662,
"iso4217": isIso4217,
"iso4217_numeric": isIso4217Numeric,
@@ -230,7 +237,8 @@ var (
"credit_card": isCreditCard,
"cve": isCveFormat,
"luhn_checksum": hasLuhnChecksum,
- "mongodb": isMongoDB,
+ "mongodb": isMongoDBObjectId,
+ "mongodb_connection_string": isMongoDBConnectionString,
"cron": isCron,
"spicedb": isSpiceDB,
}
@@ -247,7 +255,7 @@ func parseOneOfParam2(s string) []string {
oneofValsCacheRWLock.RUnlock()
if !ok {
oneofValsCacheRWLock.Lock()
- vals = splitParamsRegex.FindAllString(s, -1)
+ vals = splitParamsRegex().FindAllString(s, -1)
for i := 0; i < len(vals); i++ {
vals[i] = strings.Replace(vals[i], "'", "", -1)
}
@@ -258,15 +266,15 @@ func parseOneOfParam2(s string) []string {
}
func isURLEncoded(fl FieldLevel) bool {
- return uRLEncodedRegex.MatchString(fl.Field().String())
+ return uRLEncodedRegex().MatchString(fl.Field().String())
}
func isHTMLEncoded(fl FieldLevel) bool {
- return hTMLEncodedRegex.MatchString(fl.Field().String())
+ return hTMLEncodedRegex().MatchString(fl.Field().String())
}
func isHTML(fl FieldLevel) bool {
- return hTMLRegex.MatchString(fl.Field().String())
+ return hTMLRegex().MatchString(fl.Field().String())
}
func isOneOf(fl FieldLevel) bool {
@@ -293,6 +301,23 @@ func isOneOf(fl FieldLevel) bool {
return false
}
+// isOneOfCI is the validation function for validating if the current field's value is one of the provided string values (case insensitive).
+func isOneOfCI(fl FieldLevel) bool {
+ vals := parseOneOfParam2(fl.Param())
+ field := fl.Field()
+
+ if field.Kind() != reflect.String {
+ panic(fmt.Sprintf("Bad field type %T", field.Interface()))
+ }
+ v := field.String()
+ for _, val := range vals {
+ if strings.EqualFold(val, v) {
+ return true
+ }
+ }
+ return false
+}
+
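
A quick usage sketch for the new tag, assuming the usual `Var` API; the color values are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

func main() {
	validate := validator.New()

	// oneofci matches case-insensitively, so "RED" passes where the
	// plain oneof tag would reject it.
	fmt.Println(validate.Var("RED", "oneofci=red green blue") == nil) // true
	fmt.Println(validate.Var("RED", "oneof=red green blue") == nil)   // false
}
```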
// isUnique is the validation function for validating if each array|slice|map value is unique
func isUnique(fl FieldLevel) bool {
field := fl.Field()
@@ -423,7 +448,7 @@ func isSSN(fl FieldLevel) bool {
return false
}
- return sSNRegex.MatchString(field.String())
+ return sSNRegex().MatchString(field.String())
}
// isLongitude is the validation function for validating if the field's value is a valid longitude coordinate.
@@ -446,7 +471,7 @@ func isLongitude(fl FieldLevel) bool {
panic(fmt.Sprintf("Bad field type %T", field.Interface()))
}
- return longitudeRegex.MatchString(v)
+ return longitudeRegex().MatchString(v)
}
// isLatitude is the validation function for validating if the field's value is a valid latitude coordinate.
@@ -469,7 +494,7 @@ func isLatitude(fl FieldLevel) bool {
panic(fmt.Sprintf("Bad field type %T", field.Interface()))
}
- return latitudeRegex.MatchString(v)
+ return latitudeRegex().MatchString(v)
}
// isDataURI is the validation function for validating if the field's value is a valid data URI.
@@ -480,11 +505,11 @@ func isDataURI(fl FieldLevel) bool {
return false
}
- if !dataURIRegex.MatchString(uri[0]) {
+ if !dataURIRegex().MatchString(uri[0]) {
return false
}
- return base64Regex.MatchString(uri[1])
+ return base64Regex().MatchString(uri[1])
}
// hasMultiByteCharacter is the validation function for validating if the field's value has a multi byte character.
@@ -495,17 +520,17 @@ func hasMultiByteCharacter(fl FieldLevel) bool {
return true
}
- return multibyteRegex.MatchString(field.String())
+ return multibyteRegex().MatchString(field.String())
}
// isPrintableASCII is the validation function for validating if the field's value is a valid printable ASCII character.
func isPrintableASCII(fl FieldLevel) bool {
- return printableASCIIRegex.MatchString(fl.Field().String())
+ return printableASCIIRegex().MatchString(fl.Field().String())
}
// isASCII is the validation function for validating if the field's value is a valid ASCII character.
func isASCII(fl FieldLevel) bool {
- return aSCIIRegex.MatchString(fl.Field().String())
+ return aSCIIRegex().MatchString(fl.Field().String())
}
// isUUID5 is the validation function for validating if the field's value is a valid v5 UUID.
@@ -555,52 +580,52 @@ func isULID(fl FieldLevel) bool {
// isMD4 is the validation function for validating if the field's value is a valid MD4.
func isMD4(fl FieldLevel) bool {
- return md4Regex.MatchString(fl.Field().String())
+ return md4Regex().MatchString(fl.Field().String())
}
// isMD5 is the validation function for validating if the field's value is a valid MD5.
func isMD5(fl FieldLevel) bool {
- return md5Regex.MatchString(fl.Field().String())
+ return md5Regex().MatchString(fl.Field().String())
}
// isSHA256 is the validation function for validating if the field's value is a valid SHA256.
func isSHA256(fl FieldLevel) bool {
- return sha256Regex.MatchString(fl.Field().String())
+ return sha256Regex().MatchString(fl.Field().String())
}
// isSHA384 is the validation function for validating if the field's value is a valid SHA384.
func isSHA384(fl FieldLevel) bool {
- return sha384Regex.MatchString(fl.Field().String())
+ return sha384Regex().MatchString(fl.Field().String())
}
// isSHA512 is the validation function for validating if the field's value is a valid SHA512.
func isSHA512(fl FieldLevel) bool {
- return sha512Regex.MatchString(fl.Field().String())
+ return sha512Regex().MatchString(fl.Field().String())
}
// isRIPEMD128 is the validation function for validating if the field's value is a valid PIPEMD128.
func isRIPEMD128(fl FieldLevel) bool {
- return ripemd128Regex.MatchString(fl.Field().String())
+ return ripemd128Regex().MatchString(fl.Field().String())
}
// isRIPEMD160 is the validation function for validating if the field's value is a valid PIPEMD160.
func isRIPEMD160(fl FieldLevel) bool {
- return ripemd160Regex.MatchString(fl.Field().String())
+ return ripemd160Regex().MatchString(fl.Field().String())
}
// isTIGER128 is the validation function for validating if the field's value is a valid TIGER128.
func isTIGER128(fl FieldLevel) bool {
- return tiger128Regex.MatchString(fl.Field().String())
+ return tiger128Regex().MatchString(fl.Field().String())
}
// isTIGER160 is the validation function for validating if the field's value is a valid TIGER160.
func isTIGER160(fl FieldLevel) bool {
- return tiger160Regex.MatchString(fl.Field().String())
+ return tiger160Regex().MatchString(fl.Field().String())
}
// isTIGER192 is the validation function for validating if the field's value is a valid isTIGER192.
func isTIGER192(fl FieldLevel) bool {
- return tiger192Regex.MatchString(fl.Field().String())
+ return tiger192Regex().MatchString(fl.Field().String())
}
// isISBN is the validation function for validating if the field's value is a valid v10 or v13 ISBN.
@@ -612,7 +637,7 @@ func isISBN(fl FieldLevel) bool {
func isISBN13(fl FieldLevel) bool {
s := strings.Replace(strings.Replace(fl.Field().String(), "-", "", 4), " ", "", 4)
- if !iSBN13Regex.MatchString(s) {
+ if !iSBN13Regex().MatchString(s) {
return false
}
@@ -632,7 +657,7 @@ func isISBN13(fl FieldLevel) bool {
func isISBN10(fl FieldLevel) bool {
s := strings.Replace(strings.Replace(fl.Field().String(), "-", "", 3), " ", "", 3)
- if !iSBN10Regex.MatchString(s) {
+ if !iSBN10Regex().MatchString(s) {
return false
}
@@ -656,7 +681,7 @@ func isISBN10(fl FieldLevel) bool {
func isISSN(fl FieldLevel) bool {
s := fl.Field().String()
- if !iSSNRegex.MatchString(s) {
+ if !iSSNRegex().MatchString(s) {
return false
}
s = strings.ReplaceAll(s, "-", "")
@@ -682,14 +707,14 @@ func isISSN(fl FieldLevel) bool {
func isEthereumAddress(fl FieldLevel) bool {
address := fl.Field().String()
- return ethAddressRegex.MatchString(address)
+ return ethAddressRegex().MatchString(address)
}
-// isEthereumAddressChecksum is the validation function for validating if the field's value is a valid checksumed Ethereum address.
+// isEthereumAddressChecksum is the validation function for validating if the field's value is a valid checksummed Ethereum address.
func isEthereumAddressChecksum(fl FieldLevel) bool {
address := fl.Field().String()
- if !ethAddressRegex.MatchString(address) {
+ if !ethAddressRegex().MatchString(address) {
return false
}
// Checksum validation. Reference: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-55.md
@@ -715,7 +740,7 @@ func isEthereumAddressChecksum(fl FieldLevel) bool {
func isBitcoinAddress(fl FieldLevel) bool {
address := fl.Field().String()
- if !btcAddressRegex.MatchString(address) {
+ if !btcAddressRegex().MatchString(address) {
return false
}
@@ -752,7 +777,7 @@ func isBitcoinAddress(fl FieldLevel) bool {
func isBitcoinBech32Address(fl FieldLevel) bool {
address := fl.Field().String()
- if !btcLowerAddressRegexBech32.MatchString(address) && !btcUpperAddressRegexBech32.MatchString(address) {
+ if !btcLowerAddressRegexBech32().MatchString(address) && !btcUpperAddressRegexBech32().MatchString(address) {
return false
}
@@ -1364,6 +1389,7 @@ func isPostcodeByIso3166Alpha2(fl FieldLevel) bool {
field := fl.Field()
param := fl.Param()
+ postcodeRegexInit.Do(initPostcodes)
reg, found := postCodeRegexDict[param]
if !found {
return false
@@ -1399,19 +1425,24 @@ func isPostcodeByIso3166Alpha2Field(fl FieldLevel) bool {
return reg.MatchString(field.String())
}
+// isBase32 is the validation function for validating if the current field's value is a valid base 32.
+func isBase32(fl FieldLevel) bool {
+ return base32Regex().MatchString(fl.Field().String())
+}
+
// isBase64 is the validation function for validating if the current field's value is a valid base 64.
func isBase64(fl FieldLevel) bool {
- return base64Regex.MatchString(fl.Field().String())
+ return base64Regex().MatchString(fl.Field().String())
}
// isBase64URL is the validation function for validating if the current field's value is a valid base64 URL safe string.
func isBase64URL(fl FieldLevel) bool {
- return base64URLRegex.MatchString(fl.Field().String())
+ return base64URLRegex().MatchString(fl.Field().String())
}
// isBase64RawURL is the validation function for validating if the current field's value is a valid base64 URL safe string without '=' padding.
func isBase64RawURL(fl FieldLevel) bool {
- return base64RawURLRegex.MatchString(fl.Field().String())
+ return base64RawURLRegex().MatchString(fl.Field().String())
}
// isURI is the validation function for validating if the current field's value is a valid URI.
@@ -1657,42 +1688,42 @@ func isFilePath(fl FieldLevel) bool {
// isE164 is the validation function for validating if the current field's value is a valid e.164 formatted phone number.
func isE164(fl FieldLevel) bool {
- return e164Regex.MatchString(fl.Field().String())
+ return e164Regex().MatchString(fl.Field().String())
}
// isEmail is the validation function for validating if the current field's value is a valid email address.
func isEmail(fl FieldLevel) bool {
- return emailRegex.MatchString(fl.Field().String())
+ return emailRegex().MatchString(fl.Field().String())
}
// isHSLA is the validation function for validating if the current field's value is a valid HSLA color.
func isHSLA(fl FieldLevel) bool {
- return hslaRegex.MatchString(fl.Field().String())
+ return hslaRegex().MatchString(fl.Field().String())
}
// isHSL is the validation function for validating if the current field's value is a valid HSL color.
func isHSL(fl FieldLevel) bool {
- return hslRegex.MatchString(fl.Field().String())
+ return hslRegex().MatchString(fl.Field().String())
}
// isRGBA is the validation function for validating if the current field's value is a valid RGBA color.
func isRGBA(fl FieldLevel) bool {
- return rgbaRegex.MatchString(fl.Field().String())
+ return rgbaRegex().MatchString(fl.Field().String())
}
// isRGB is the validation function for validating if the current field's value is a valid RGB color.
func isRGB(fl FieldLevel) bool {
- return rgbRegex.MatchString(fl.Field().String())
+ return rgbRegex().MatchString(fl.Field().String())
}
// isHEXColor is the validation function for validating if the current field's value is a valid HEX color.
func isHEXColor(fl FieldLevel) bool {
- return hexColorRegex.MatchString(fl.Field().String())
+ return hexColorRegex().MatchString(fl.Field().String())
}
// isHexadecimal is the validation function for validating if the current field's value is a valid hexadecimal.
func isHexadecimal(fl FieldLevel) bool {
- return hexadecimalRegex.MatchString(fl.Field().String())
+ return hexadecimalRegex().MatchString(fl.Field().String())
}
// isNumber is the validation function for validating if the current field's value is a valid number.
@@ -1701,7 +1732,7 @@ func isNumber(fl FieldLevel) bool {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, reflect.Float32, reflect.Float64:
return true
default:
- return numberRegex.MatchString(fl.Field().String())
+ return numberRegex().MatchString(fl.Field().String())
}
}
@@ -1711,28 +1742,28 @@ func isNumeric(fl FieldLevel) bool {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64, reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr, reflect.Float32, reflect.Float64:
return true
default:
- return numericRegex.MatchString(fl.Field().String())
+ return numericRegex().MatchString(fl.Field().String())
}
}
// isAlphanum is the validation function for validating if the current field's value is a valid alphanumeric value.
func isAlphanum(fl FieldLevel) bool {
- return alphaNumericRegex.MatchString(fl.Field().String())
+ return alphaNumericRegex().MatchString(fl.Field().String())
}
// isAlpha is the validation function for validating if the current field's value is a valid alpha value.
func isAlpha(fl FieldLevel) bool {
- return alphaRegex.MatchString(fl.Field().String())
+ return alphaRegex().MatchString(fl.Field().String())
}
// isAlphanumUnicode is the validation function for validating if the current field's value is a valid alphanumeric unicode value.
func isAlphanumUnicode(fl FieldLevel) bool {
- return alphaUnicodeNumericRegex.MatchString(fl.Field().String())
+ return alphaUnicodeNumericRegex().MatchString(fl.Field().String())
}
// isAlphaUnicode is the validation function for validating if the current field's value is a valid alpha unicode value.
func isAlphaUnicode(fl FieldLevel) bool {
- return alphaUnicodeRegex.MatchString(fl.Field().String())
+ return alphaUnicodeRegex().MatchString(fl.Field().String())
}
// isBoolean is the validation function for validating if the current field's value is a valid boolean value or can be safely converted to a boolean value.
@@ -1816,7 +1847,14 @@ func requireCheckFieldValue(
return int64(field.Len()) == asInt(value)
case reflect.Bool:
- return field.Bool() == asBool(value)
+ return field.Bool() == (value == "true")
+
+ case reflect.Ptr:
+ if field.IsNil() {
+ return value == "nil"
+ }
+ // Handle non-nil pointers
+ return requireCheckFieldValue(fl, param, value, defaultNotFoundValue)
}
// default reflect.String:
@@ -2555,11 +2593,11 @@ func isIP6Addr(fl FieldLevel) bool {
}
func isHostnameRFC952(fl FieldLevel) bool {
- return hostnameRegexRFC952.MatchString(fl.Field().String())
+ return hostnameRegexRFC952().MatchString(fl.Field().String())
}
func isHostnameRFC1123(fl FieldLevel) bool {
- return hostnameRegexRFC1123.MatchString(fl.Field().String())
+ return hostnameRegexRFC1123().MatchString(fl.Field().String())
}
func isFQDN(fl FieldLevel) bool {
@@ -2569,7 +2607,7 @@ func isFQDN(fl FieldLevel) bool {
return false
}
- return fqdnRegexRFC1123.MatchString(val)
+ return fqdnRegexRFC1123().MatchString(val)
}
// isDir is the validation function for validating if the current field's value is a valid existing directory.
@@ -2668,7 +2706,7 @@ func isJSON(fl FieldLevel) bool {
// isJWT is the validation function for validating if the current field's value is a valid JWT string.
func isJWT(fl FieldLevel) bool {
- return jWTRegex.MatchString(fl.Field().String())
+ return jWTRegex().MatchString(fl.Field().String())
}
// isHostnamePort validates a <dns>:<port> combination for fields typically used for socket address.
@@ -2687,11 +2725,18 @@ func isHostnamePort(fl FieldLevel) bool {
// If host is specified, it should match a DNS name
if host != "" {
- return hostnameRegexRFC1123.MatchString(host)
+ return hostnameRegexRFC1123().MatchString(host)
}
return true
}
+// isPort validates if the current field's value represents a valid port.
+func isPort(fl FieldLevel) bool {
+ val := fl.Field().Uint()
+
+ return val >= 1 && val <= 65535
+}
+
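
A sketch of the new tag on a struct field. Since the check reads the field with `Uint()`, the field should be an unsigned integer type:

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

type Listener struct {
	// port checks the 1..65535 range; a uint16 field fits it exactly.
	Port uint16 `validate:"port"`
}

func main() {
	validate := validator.New()
	fmt.Println(validate.Struct(Listener{Port: 8080}) == nil) // true
	fmt.Println(validate.Struct(Listener{Port: 0}) == nil)    // false: below 1
}
```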
// isLowercase is the validation function for validating if the current field's value is a lowercase string.
func isLowercase(fl FieldLevel) bool {
field := fl.Field()
@@ -2758,14 +2803,26 @@ func isTimeZone(fl FieldLevel) bool {
// isIso3166Alpha2 is the validation function for validating if the current field's value is a valid iso3166-1 alpha-2 country code.
func isIso3166Alpha2(fl FieldLevel) bool {
- val := fl.Field().String()
- return iso3166_1_alpha2[val]
+ _, ok := iso3166_1_alpha2[fl.Field().String()]
+ return ok
+}
+
+// isIso3166Alpha2EU is the validation function for validating if the current field's value is a valid iso3166-1 alpha-2 European Union country code.
+func isIso3166Alpha2EU(fl FieldLevel) bool {
+ _, ok := iso3166_1_alpha2_eu[fl.Field().String()]
+ return ok
}
// isIso3166Alpha3 is the validation function for validating if the current field's value is a valid iso3166-1 alpha-3 country code.
func isIso3166Alpha3(fl FieldLevel) bool {
- val := fl.Field().String()
- return iso3166_1_alpha3[val]
+ _, ok := iso3166_1_alpha3[fl.Field().String()]
+ return ok
+}
+
+// isIso3166Alpha3EU is the validation function for validating if the current field's value is a valid iso3166-1 alpha-3 European Union country code.
+func isIso3166Alpha3EU(fl FieldLevel) bool {
+ _, ok := iso3166_1_alpha3_eu[fl.Field().String()]
+ return ok
}
// isIso3166AlphaNumeric is the validation function for validating if the current field's value is a valid iso3166-1 alpha-numeric country code.
@@ -2787,19 +2844,45 @@ func isIso3166AlphaNumeric(fl FieldLevel) bool {
default:
panic(fmt.Sprintf("Bad field type %T", field.Interface()))
}
- return iso3166_1_alpha_numeric[code]
+
+ _, ok := iso3166_1_alpha_numeric[code]
+ return ok
+}
+
+// isIso3166AlphaNumericEU is the validation function for validating if the current field's value is a valid iso3166-1 alpha-numeric European Union country code.
+func isIso3166AlphaNumericEU(fl FieldLevel) bool {
+ field := fl.Field()
+
+ var code int
+ switch field.Kind() {
+ case reflect.String:
+ i, err := strconv.Atoi(field.String())
+ if err != nil {
+ return false
+ }
+ code = i % 1000
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ code = int(field.Int() % 1000)
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+ code = int(field.Uint() % 1000)
+ default:
+ panic(fmt.Sprintf("Bad field type %T", field.Interface()))
+ }
+
+ _, ok := iso3166_1_alpha_numeric_eu[code]
+ return ok
}
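
A sketch exercising the `eu_country_code` alias added to `bakedInAliases` earlier; the results assume Germany is in the EU tables and the United States is not:

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

func main() {
	validate := validator.New()

	// eu_country_code ORs the three EU variants, so alpha-2, alpha-3,
	// or numeric codes all work.
	fmt.Println(validate.Var("DE", "eu_country_code") == nil)  // true: Germany
	fmt.Println(validate.Var("USA", "eu_country_code") == nil) // false: not an EU member
}
```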
// isIso31662 is the validation function for validating if the current field's value is a valid iso3166-2 code.
func isIso31662(fl FieldLevel) bool {
- val := fl.Field().String()
- return iso3166_2[val]
+ _, ok := iso3166_2[fl.Field().String()]
+ return ok
}
// isIso4217 is the validation function for validating if the current field's value is a valid iso4217 currency code.
func isIso4217(fl FieldLevel) bool {
- val := fl.Field().String()
- return iso4217[val]
+ _, ok := iso4217[fl.Field().String()]
+ return ok
}
// isIso4217Numeric is the validation function for validating if the current field's value is a valid iso4217 numeric currency code.
@@ -2815,7 +2898,9 @@ func isIso4217Numeric(fl FieldLevel) bool {
default:
panic(fmt.Sprintf("Bad field type %T", field.Interface()))
}
- return iso4217_numeric[code]
+
+ _, ok := iso4217_numeric[code]
+ return ok
}
// isBCP47LanguageTag is the validation function for validating if the current field's value is a valid BCP 47 language tag, as parsed by language.Parse
@@ -2834,21 +2919,21 @@ func isBCP47LanguageTag(fl FieldLevel) bool {
func isIsoBicFormat(fl FieldLevel) bool {
bicString := fl.Field().String()
- return bicRegex.MatchString(bicString)
+ return bicRegex().MatchString(bicString)
}
// isSemverFormat is the validation function for validating if the current field's value is a valid semver version, defined in Semantic Versioning 2.0.0
func isSemverFormat(fl FieldLevel) bool {
semverString := fl.Field().String()
- return semverRegex.MatchString(semverString)
+ return semverRegex().MatchString(semverString)
}
// isCveFormat is the validation function for validating if the current field's value is a valid cve id, defined in CVE mitre org
func isCveFormat(fl FieldLevel) bool {
cveString := fl.Field().String()
- return cveRegex.MatchString(cveString)
+ return cveRegex().MatchString(cveString)
}
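
Throughout these hunks the package-level regex variables (`bicRegex`, `semverRegex`, `cveRegex`, and the rest) become function calls: upstream moved from eagerly compiled `*regexp.Regexp` values to lazily compiled ones, so importing the package no longer pays the compile cost of every pattern. A sketch of that pattern using `sync.OnceValue` (Go 1.21+) — the helper name and pattern are illustrative, not the library's exact source:

```go
package main

import (
	"fmt"
	"regexp"
	"sync"
)

// lazyRegexCompile defers regexp.MustCompile until the first call,
// then caches the compiled pattern for all subsequent calls.
func lazyRegexCompile(pattern string) func() *regexp.Regexp {
	return sync.OnceValue(func() *regexp.Regexp {
		return regexp.MustCompile(pattern)
	})
}

// Simplified stand-in; the library's real patterns live in regexes.go.
var semverRegex = lazyRegexCompile(`^v?\d+\.\d+\.\d+$`)

func main() {
	// First call compiles; later calls reuse the cached *regexp.Regexp.
	fmt.Println(semverRegex().MatchString("1.2.3")) // true
}
```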
// isDnsRFC1035LabelFormat is the validation function
@@ -2856,7 +2941,7 @@ func isCveFormat(fl FieldLevel) bool {
// a valid dns RFC 1035 label, defined in RFC 1035.
func isDnsRFC1035LabelFormat(fl FieldLevel) bool {
val := fl.Field().String()
- return dnsRegexRFC1035Label.MatchString(val)
+ return dnsRegexRFC1035Label().MatchString(val)
}
// digitsHaveLuhnChecksum returns true if and only if the last element of the given digits slice is the Luhn checksum of the previous elements
@@ -2882,10 +2967,16 @@ func digitsHaveLuhnChecksum(digits []string) bool {
return (sum % 10) == 0
}
-// isMongoDB is the validation function for validating if the current field's value is valid mongoDB objectID
-func isMongoDB(fl FieldLevel) bool {
+// isMongoDBObjectId is the validation function for validating if the current field's value is valid MongoDB ObjectID
+func isMongoDBObjectId(fl FieldLevel) bool {
+ val := fl.Field().String()
+ return mongodbIdRegex().MatchString(val)
+}
+
+// isMongoDBConnectionString is the validation function for validating if the current field's value is valid MongoDB Connection String
+func isMongoDBConnectionString(fl FieldLevel) bool {
val := fl.Field().String()
- return mongodbRegex.MatchString(val)
+ return mongodbConnectionRegex().MatchString(val)
}
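
The single `mongodb` check splits here into ObjectID and connection-string validators. A MongoDB ObjectID is a 12-byte value rendered as 24 hexadecimal characters; a sketch of the ObjectID side, with the pattern being my assumption of what `mongodbIdRegex` matches:

```go
package main

import (
	"fmt"
	"regexp"
)

// Assumed ObjectID shape: exactly 24 hex characters (12 bytes).
var objectIDPattern = regexp.MustCompile(`^[a-fA-F0-9]{24}$`)

func main() {
	fmt.Println(objectIDPattern.MatchString("507f1f77bcf86cd799439011")) // true
	fmt.Println(objectIDPattern.MatchString("not-an-object-id"))         // false
}
```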
// isSpiceDB is the validation function for validating if the current field's value is valid for use with Authzed SpiceDB in the indicated way
@@ -2895,11 +2986,11 @@ func isSpiceDB(fl FieldLevel) bool {
switch param {
case "permission":
- return spicedbPermissionRegex.MatchString(val)
+ return spicedbPermissionRegex().MatchString(val)
case "type":
- return spicedbTypeRegex.MatchString(val)
+ return spicedbTypeRegex().MatchString(val)
case "id", "":
- return spicedbIDRegex.MatchString(val)
+ return spicedbIDRegex().MatchString(val)
}
panic("Unrecognized parameter: " + param)
@@ -2951,5 +3042,5 @@ func hasLuhnChecksum(fl FieldLevel) bool {
// isCron is the validation function for validating if the current field's value is a valid cron expression
func isCron(fl FieldLevel) bool {
cronString := fl.Field().String()
- return cronRegex.MatchString(cronString)
+ return cronRegex().MatchString(cronString)
}
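
Finally, the `cron` tag now also compiles its pattern lazily. A quick check of a standard five-field expression — the tag name is taken from the function's doc comment; the exact grammar the regex accepts is otherwise assumed:

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

func main() {
	validate := validator.New()

	fmt.Println(validate.Var("*/5 * * * *", "cron")) // <nil>: every five minutes
	fmt.Println(validate.Var("not a cron", "cron"))  // validation error
}
```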
diff --git a/vendor/github.com/go-playground/validator/v10/cache.go b/vendor/github.com/go-playground/validator/v10/cache.go
index b6bdd11a1fb87..2063e1b795d13 100644
--- a/vendor/github.com/go-playground/validator/v10/cache.go
+++ b/vendor/github.com/go-playground/validator/v10/cache.go
@@ -294,7 +294,7 @@ func (v *Validate) parseFieldTagsRecursive(tag string, fieldName string, alias s
if wrapper, ok := v.validations[current.tag]; ok {
current.fn = wrapper.fn
- current.runValidationWhenNil = wrapper.runValidatinOnNil
+ current.runValidationWhenNil = wrapper.runValidationOnNil
} else {
panic(strings.TrimSpace(fmt.Sprintf(undefinedValidation, current.tag, fieldName)))
}
diff --git a/vendor/github.com/go-playground/validator/v10/country_codes.go b/vendor/github.com/go-playground/validator/v10/country_codes.go
index 0119f0574d965..b5f10d3c11365 100644
--- a/vendor/github.com/go-playground/validator/v10/country_codes.go
+++ b/vendor/github.com/go-playground/validator/v10/country_codes.go
@@ -1,1150 +1,1177 @@
package validator
-var iso3166_1_alpha2 = map[string]bool{
+var iso3166_1_alpha2 = map[string]struct{}{
// see: https://www.iso.org/iso-3166-country-codes.html
- "AF": true, "AX": true, "AL": true, "DZ": true, "AS": true,
- "AD": true, "AO": true, "AI": true, "AQ": true, "AG": true,
- "AR": true, "AM": true, "AW": true, "AU": true, "AT": true,
- "AZ": true, "BS": true, "BH": true, "BD": true, "BB": true,
- "BY": true, "BE": true, "BZ": true, "BJ": true, "BM": true,
- "BT": true, "BO": true, "BQ": true, "BA": true, "BW": true,
- "BV": true, "BR": true, "IO": true, "BN": true, "BG": true,
- "BF": true, "BI": true, "KH": true, "CM": true, "CA": true,
- "CV": true, "KY": true, "CF": true, "TD": true, "CL": true,
- "CN": true, "CX": true, "CC": true, "CO": true, "KM": true,
- "CG": true, "CD": true, "CK": true, "CR": true, "CI": true,
- "HR": true, "CU": true, "CW": true, "CY": true, "CZ": true,
- "DK": true, "DJ": true, "DM": true, "DO": true, "EC": true,
- "EG": true, "SV": true, "GQ": true, "ER": true, "EE": true,
- "ET": true, "FK": true, "FO": true, "FJ": true, "FI": true,
- "FR": true, "GF": true, "PF": true, "TF": true, "GA": true,
- "GM": true, "GE": true, "DE": true, "GH": true, "GI": true,
- "GR": true, "GL": true, "GD": true, "GP": true, "GU": true,
- "GT": true, "GG": true, "GN": true, "GW": true, "GY": true,
- "HT": true, "HM": true, "VA": true, "HN": true, "HK": true,
- "HU": true, "IS": true, "IN": true, "ID": true, "IR": true,
- "IQ": true, "IE": true, "IM": true, "IL": true, "IT": true,
- "JM": true, "JP": true, "JE": true, "JO": true, "KZ": true,
- "KE": true, "KI": true, "KP": true, "KR": true, "KW": true,
- "KG": true, "LA": true, "LV": true, "LB": true, "LS": true,
- "LR": true, "LY": true, "LI": true, "LT": true, "LU": true,
- "MO": true, "MK": true, "MG": true, "MW": true, "MY": true,
- "MV": true, "ML": true, "MT": true, "MH": true, "MQ": true,
- "MR": true, "MU": true, "YT": true, "MX": true, "FM": true,
- "MD": true, "MC": true, "MN": true, "ME": true, "MS": true,
- "MA": true, "MZ": true, "MM": true, "NA": true, "NR": true,
- "NP": true, "NL": true, "NC": true, "NZ": true, "NI": true,
- "NE": true, "NG": true, "NU": true, "NF": true, "MP": true,
- "NO": true, "OM": true, "PK": true, "PW": true, "PS": true,
- "PA": true, "PG": true, "PY": true, "PE": true, "PH": true,
- "PN": true, "PL": true, "PT": true, "PR": true, "QA": true,
- "RE": true, "RO": true, "RU": true, "RW": true, "BL": true,
- "SH": true, "KN": true, "LC": true, "MF": true, "PM": true,
- "VC": true, "WS": true, "SM": true, "ST": true, "SA": true,
- "SN": true, "RS": true, "SC": true, "SL": true, "SG": true,
- "SX": true, "SK": true, "SI": true, "SB": true, "SO": true,
- "ZA": true, "GS": true, "SS": true, "ES": true, "LK": true,
- "SD": true, "SR": true, "SJ": true, "SZ": true, "SE": true,
- "CH": true, "SY": true, "TW": true, "TJ": true, "TZ": true,
- "TH": true, "TL": true, "TG": true, "TK": true, "TO": true,
- "TT": true, "TN": true, "TR": true, "TM": true, "TC": true,
- "TV": true, "UG": true, "UA": true, "AE": true, "GB": true,
- "US": true, "UM": true, "UY": true, "UZ": true, "VU": true,
- "VE": true, "VN": true, "VG": true, "VI": true, "WF": true,
- "EH": true, "YE": true, "ZM": true, "ZW": true, "XK": true,
+ "AF": {}, "AX": {}, "AL": {}, "DZ": {}, "AS": {},
+ "AD": {}, "AO": {}, "AI": {}, "AQ": {}, "AG": {},
+ "AR": {}, "AM": {}, "AW": {}, "AU": {}, "AT": {},
+ "AZ": {}, "BS": {}, "BH": {}, "BD": {}, "BB": {},
+ "BY": {}, "BE": {}, "BZ": {}, "BJ": {}, "BM": {},
+ "BT": {}, "BO": {}, "BQ": {}, "BA": {}, "BW": {},
+ "BV": {}, "BR": {}, "IO": {}, "BN": {}, "BG": {},
+ "BF": {}, "BI": {}, "KH": {}, "CM": {}, "CA": {},
+ "CV": {}, "KY": {}, "CF": {}, "TD": {}, "CL": {},
+ "CN": {}, "CX": {}, "CC": {}, "CO": {}, "KM": {},
+ "CG": {}, "CD": {}, "CK": {}, "CR": {}, "CI": {},
+ "HR": {}, "CU": {}, "CW": {}, "CY": {}, "CZ": {},
+ "DK": {}, "DJ": {}, "DM": {}, "DO": {}, "EC": {},
+ "EG": {}, "SV": {}, "GQ": {}, "ER": {}, "EE": {},
+ "ET": {}, "FK": {}, "FO": {}, "FJ": {}, "FI": {},
+ "FR": {}, "GF": {}, "PF": {}, "TF": {}, "GA": {},
+ "GM": {}, "GE": {}, "DE": {}, "GH": {}, "GI": {},
+ "GR": {}, "GL": {}, "GD": {}, "GP": {}, "GU": {},
+ "GT": {}, "GG": {}, "GN": {}, "GW": {}, "GY": {},
+ "HT": {}, "HM": {}, "VA": {}, "HN": {}, "HK": {},
+ "HU": {}, "IS": {}, "IN": {}, "ID": {}, "IR": {},
+ "IQ": {}, "IE": {}, "IM": {}, "IL": {}, "IT": {},
+ "JM": {}, "JP": {}, "JE": {}, "JO": {}, "KZ": {},
+ "KE": {}, "KI": {}, "KP": {}, "KR": {}, "KW": {},
+ "KG": {}, "LA": {}, "LV": {}, "LB": {}, "LS": {},
+ "LR": {}, "LY": {}, "LI": {}, "LT": {}, "LU": {},
+ "MO": {}, "MK": {}, "MG": {}, "MW": {}, "MY": {},
+ "MV": {}, "ML": {}, "MT": {}, "MH": {}, "MQ": {},
+ "MR": {}, "MU": {}, "YT": {}, "MX": {}, "FM": {},
+ "MD": {}, "MC": {}, "MN": {}, "ME": {}, "MS": {},
+ "MA": {}, "MZ": {}, "MM": {}, "NA": {}, "NR": {},
+ "NP": {}, "NL": {}, "NC": {}, "NZ": {}, "NI": {},
+ "NE": {}, "NG": {}, "NU": {}, "NF": {}, "MP": {},
+ "NO": {}, "OM": {}, "PK": {}, "PW": {}, "PS": {},
+ "PA": {}, "PG": {}, "PY": {}, "PE": {}, "PH": {},
+ "PN": {}, "PL": {}, "PT": {}, "PR": {}, "QA": {},
+ "RE": {}, "RO": {}, "RU": {}, "RW": {}, "BL": {},
+ "SH": {}, "KN": {}, "LC": {}, "MF": {}, "PM": {},
+ "VC": {}, "WS": {}, "SM": {}, "ST": {}, "SA": {},
+ "SN": {}, "RS": {}, "SC": {}, "SL": {}, "SG": {},
+ "SX": {}, "SK": {}, "SI": {}, "SB": {}, "SO": {},
+ "ZA": {}, "GS": {}, "SS": {}, "ES": {}, "LK": {},
+ "SD": {}, "SR": {}, "SJ": {}, "SZ": {}, "SE": {},
+ "CH": {}, "SY": {}, "TW": {}, "TJ": {}, "TZ": {},
+ "TH": {}, "TL": {}, "TG": {}, "TK": {}, "TO": {},
+ "TT": {}, "TN": {}, "TR": {}, "TM": {}, "TC": {},
+ "TV": {}, "UG": {}, "UA": {}, "AE": {}, "GB": {},
+ "US": {}, "UM": {}, "UY": {}, "UZ": {}, "VU": {},
+ "VE": {}, "VN": {}, "VG": {}, "VI": {}, "WF": {},
+ "EH": {}, "YE": {}, "ZM": {}, "ZW": {}, "XK": {},
}
-var iso3166_1_alpha3 = map[string]bool{
+var iso3166_1_alpha2_eu = map[string]struct{}{
+ "AT": {}, "BE": {}, "BG": {}, "HR": {}, "CY": {},
+ "CZ": {}, "DK": {}, "EE": {}, "FI": {}, "FR": {},
+ "DE": {}, "GR": {}, "HU": {}, "IE": {}, "IT": {},
+ "LV": {}, "LT": {}, "LU": {}, "MT": {}, "NL": {},
+ "PL": {}, "PT": {}, "RO": {}, "SK": {}, "SI": {},
+ "ES": {}, "SE": {},
+}
+
+var iso3166_1_alpha3 = map[string]struct{}{
// see: https://www.iso.org/iso-3166-country-codes.html
- "AFG": true, "ALB": true, "DZA": true, "ASM": true, "AND": true,
- "AGO": true, "AIA": true, "ATA": true, "ATG": true, "ARG": true,
- "ARM": true, "ABW": true, "AUS": true, "AUT": true, "AZE": true,
- "BHS": true, "BHR": true, "BGD": true, "BRB": true, "BLR": true,
- "BEL": true, "BLZ": true, "BEN": true, "BMU": true, "BTN": true,
- "BOL": true, "BES": true, "BIH": true, "BWA": true, "BVT": true,
- "BRA": true, "IOT": true, "BRN": true, "BGR": true, "BFA": true,
- "BDI": true, "CPV": true, "KHM": true, "CMR": true, "CAN": true,
- "CYM": true, "CAF": true, "TCD": true, "CHL": true, "CHN": true,
- "CXR": true, "CCK": true, "COL": true, "COM": true, "COD": true,
- "COG": true, "COK": true, "CRI": true, "HRV": true, "CUB": true,
- "CUW": true, "CYP": true, "CZE": true, "CIV": true, "DNK": true,
- "DJI": true, "DMA": true, "DOM": true, "ECU": true, "EGY": true,
- "SLV": true, "GNQ": true, "ERI": true, "EST": true, "SWZ": true,
- "ETH": true, "FLK": true, "FRO": true, "FJI": true, "FIN": true,
- "FRA": true, "GUF": true, "PYF": true, "ATF": true, "GAB": true,
- "GMB": true, "GEO": true, "DEU": true, "GHA": true, "GIB": true,
- "GRC": true, "GRL": true, "GRD": true, "GLP": true, "GUM": true,
- "GTM": true, "GGY": true, "GIN": true, "GNB": true, "GUY": true,
- "HTI": true, "HMD": true, "VAT": true, "HND": true, "HKG": true,
- "HUN": true, "ISL": true, "IND": true, "IDN": true, "IRN": true,
- "IRQ": true, "IRL": true, "IMN": true, "ISR": true, "ITA": true,
- "JAM": true, "JPN": true, "JEY": true, "JOR": true, "KAZ": true,
- "KEN": true, "KIR": true, "PRK": true, "KOR": true, "KWT": true,
- "KGZ": true, "LAO": true, "LVA": true, "LBN": true, "LSO": true,
- "LBR": true, "LBY": true, "LIE": true, "LTU": true, "LUX": true,
- "MAC": true, "MDG": true, "MWI": true, "MYS": true, "MDV": true,
- "MLI": true, "MLT": true, "MHL": true, "MTQ": true, "MRT": true,
- "MUS": true, "MYT": true, "MEX": true, "FSM": true, "MDA": true,
- "MCO": true, "MNG": true, "MNE": true, "MSR": true, "MAR": true,
- "MOZ": true, "MMR": true, "NAM": true, "NRU": true, "NPL": true,
- "NLD": true, "NCL": true, "NZL": true, "NIC": true, "NER": true,
- "NGA": true, "NIU": true, "NFK": true, "MKD": true, "MNP": true,
- "NOR": true, "OMN": true, "PAK": true, "PLW": true, "PSE": true,
- "PAN": true, "PNG": true, "PRY": true, "PER": true, "PHL": true,
- "PCN": true, "POL": true, "PRT": true, "PRI": true, "QAT": true,
- "ROU": true, "RUS": true, "RWA": true, "REU": true, "BLM": true,
- "SHN": true, "KNA": true, "LCA": true, "MAF": true, "SPM": true,
- "VCT": true, "WSM": true, "SMR": true, "STP": true, "SAU": true,
- "SEN": true, "SRB": true, "SYC": true, "SLE": true, "SGP": true,
- "SXM": true, "SVK": true, "SVN": true, "SLB": true, "SOM": true,
- "ZAF": true, "SGS": true, "SSD": true, "ESP": true, "LKA": true,
- "SDN": true, "SUR": true, "SJM": true, "SWE": true, "CHE": true,
- "SYR": true, "TWN": true, "TJK": true, "TZA": true, "THA": true,
- "TLS": true, "TGO": true, "TKL": true, "TON": true, "TTO": true,
- "TUN": true, "TUR": true, "TKM": true, "TCA": true, "TUV": true,
- "UGA": true, "UKR": true, "ARE": true, "GBR": true, "UMI": true,
- "USA": true, "URY": true, "UZB": true, "VUT": true, "VEN": true,
- "VNM": true, "VGB": true, "VIR": true, "WLF": true, "ESH": true,
- "YEM": true, "ZMB": true, "ZWE": true, "ALA": true, "UNK": true,
+ "AFG": {}, "ALB": {}, "DZA": {}, "ASM": {}, "AND": {},
+ "AGO": {}, "AIA": {}, "ATA": {}, "ATG": {}, "ARG": {},
+ "ARM": {}, "ABW": {}, "AUS": {}, "AUT": {}, "AZE": {},
+ "BHS": {}, "BHR": {}, "BGD": {}, "BRB": {}, "BLR": {},
+ "BEL": {}, "BLZ": {}, "BEN": {}, "BMU": {}, "BTN": {},
+ "BOL": {}, "BES": {}, "BIH": {}, "BWA": {}, "BVT": {},
+ "BRA": {}, "IOT": {}, "BRN": {}, "BGR": {}, "BFA": {},
+ "BDI": {}, "CPV": {}, "KHM": {}, "CMR": {}, "CAN": {},
+ "CYM": {}, "CAF": {}, "TCD": {}, "CHL": {}, "CHN": {},
+ "CXR": {}, "CCK": {}, "COL": {}, "COM": {}, "COD": {},
+ "COG": {}, "COK": {}, "CRI": {}, "HRV": {}, "CUB": {},
+ "CUW": {}, "CYP": {}, "CZE": {}, "CIV": {}, "DNK": {},
+ "DJI": {}, "DMA": {}, "DOM": {}, "ECU": {}, "EGY": {},
+ "SLV": {}, "GNQ": {}, "ERI": {}, "EST": {}, "SWZ": {},
+ "ETH": {}, "FLK": {}, "FRO": {}, "FJI": {}, "FIN": {},
+ "FRA": {}, "GUF": {}, "PYF": {}, "ATF": {}, "GAB": {},
+ "GMB": {}, "GEO": {}, "DEU": {}, "GHA": {}, "GIB": {},
+ "GRC": {}, "GRL": {}, "GRD": {}, "GLP": {}, "GUM": {},
+ "GTM": {}, "GGY": {}, "GIN": {}, "GNB": {}, "GUY": {},
+ "HTI": {}, "HMD": {}, "VAT": {}, "HND": {}, "HKG": {},
+ "HUN": {}, "ISL": {}, "IND": {}, "IDN": {}, "IRN": {},
+ "IRQ": {}, "IRL": {}, "IMN": {}, "ISR": {}, "ITA": {},
+ "JAM": {}, "JPN": {}, "JEY": {}, "JOR": {}, "KAZ": {},
+ "KEN": {}, "KIR": {}, "PRK": {}, "KOR": {}, "KWT": {},
+ "KGZ": {}, "LAO": {}, "LVA": {}, "LBN": {}, "LSO": {},
+ "LBR": {}, "LBY": {}, "LIE": {}, "LTU": {}, "LUX": {},
+ "MAC": {}, "MDG": {}, "MWI": {}, "MYS": {}, "MDV": {},
+ "MLI": {}, "MLT": {}, "MHL": {}, "MTQ": {}, "MRT": {},
+ "MUS": {}, "MYT": {}, "MEX": {}, "FSM": {}, "MDA": {},
+ "MCO": {}, "MNG": {}, "MNE": {}, "MSR": {}, "MAR": {},
+ "MOZ": {}, "MMR": {}, "NAM": {}, "NRU": {}, "NPL": {},
+ "NLD": {}, "NCL": {}, "NZL": {}, "NIC": {}, "NER": {},
+ "NGA": {}, "NIU": {}, "NFK": {}, "MKD": {}, "MNP": {},
+ "NOR": {}, "OMN": {}, "PAK": {}, "PLW": {}, "PSE": {},
+ "PAN": {}, "PNG": {}, "PRY": {}, "PER": {}, "PHL": {},
+ "PCN": {}, "POL": {}, "PRT": {}, "PRI": {}, "QAT": {},
+ "ROU": {}, "RUS": {}, "RWA": {}, "REU": {}, "BLM": {},
+ "SHN": {}, "KNA": {}, "LCA": {}, "MAF": {}, "SPM": {},
+ "VCT": {}, "WSM": {}, "SMR": {}, "STP": {}, "SAU": {},
+ "SEN": {}, "SRB": {}, "SYC": {}, "SLE": {}, "SGP": {},
+ "SXM": {}, "SVK": {}, "SVN": {}, "SLB": {}, "SOM": {},
+ "ZAF": {}, "SGS": {}, "SSD": {}, "ESP": {}, "LKA": {},
+ "SDN": {}, "SUR": {}, "SJM": {}, "SWE": {}, "CHE": {},
+ "SYR": {}, "TWN": {}, "TJK": {}, "TZA": {}, "THA": {},
+ "TLS": {}, "TGO": {}, "TKL": {}, "TON": {}, "TTO": {},
+ "TUN": {}, "TUR": {}, "TKM": {}, "TCA": {}, "TUV": {},
+ "UGA": {}, "UKR": {}, "ARE": {}, "GBR": {}, "UMI": {},
+ "USA": {}, "URY": {}, "UZB": {}, "VUT": {}, "VEN": {},
+ "VNM": {}, "VGB": {}, "VIR": {}, "WLF": {}, "ESH": {},
+ "YEM": {}, "ZMB": {}, "ZWE": {}, "ALA": {}, "UNK": {},
+}
+
+var iso3166_1_alpha3_eu = map[string]struct{}{
+ "AUT": {}, "BEL": {}, "BGR": {}, "HRV": {}, "CYP": {},
+ "CZE": {}, "DNK": {}, "EST": {}, "FIN": {}, "FRA": {},
+ "DEU": {}, "GRC": {}, "HUN": {}, "IRL": {}, "ITA": {},
+ "LVA": {}, "LTU": {}, "LUX": {}, "MLT": {}, "NLD": {},
+ "POL": {}, "PRT": {}, "ROU": {}, "SVK": {}, "SVN": {},
+ "ESP": {}, "SWE": {},
}
-var iso3166_1_alpha_numeric = map[int]bool{
+var iso3166_1_alpha_numeric = map[int]struct{}{
// see: https://www.iso.org/iso-3166-country-codes.html
- 4: true, 8: true, 12: true, 16: true, 20: true,
- 24: true, 660: true, 10: true, 28: true, 32: true,
- 51: true, 533: true, 36: true, 40: true, 31: true,
- 44: true, 48: true, 50: true, 52: true, 112: true,
- 56: true, 84: true, 204: true, 60: true, 64: true,
- 68: true, 535: true, 70: true, 72: true, 74: true,
- 76: true, 86: true, 96: true, 100: true, 854: true,
- 108: true, 132: true, 116: true, 120: true, 124: true,
- 136: true, 140: true, 148: true, 152: true, 156: true,
- 162: true, 166: true, 170: true, 174: true, 180: true,
- 178: true, 184: true, 188: true, 191: true, 192: true,
- 531: true, 196: true, 203: true, 384: true, 208: true,
- 262: true, 212: true, 214: true, 218: true, 818: true,
- 222: true, 226: true, 232: true, 233: true, 748: true,
- 231: true, 238: true, 234: true, 242: true, 246: true,
- 250: true, 254: true, 258: true, 260: true, 266: true,
- 270: true, 268: true, 276: true, 288: true, 292: true,
- 300: true, 304: true, 308: true, 312: true, 316: true,
- 320: true, 831: true, 324: true, 624: true, 328: true,
- 332: true, 334: true, 336: true, 340: true, 344: true,
- 348: true, 352: true, 356: true, 360: true, 364: true,
- 368: true, 372: true, 833: true, 376: true, 380: true,
- 388: true, 392: true, 832: true, 400: true, 398: true,
- 404: true, 296: true, 408: true, 410: true, 414: true,
- 417: true, 418: true, 428: true, 422: true, 426: true,
- 430: true, 434: true, 438: true, 440: true, 442: true,
- 446: true, 450: true, 454: true, 458: true, 462: true,
- 466: true, 470: true, 584: true, 474: true, 478: true,
- 480: true, 175: true, 484: true, 583: true, 498: true,
- 492: true, 496: true, 499: true, 500: true, 504: true,
- 508: true, 104: true, 516: true, 520: true, 524: true,
- 528: true, 540: true, 554: true, 558: true, 562: true,
- 566: true, 570: true, 574: true, 807: true, 580: true,
- 578: true, 512: true, 586: true, 585: true, 275: true,
- 591: true, 598: true, 600: true, 604: true, 608: true,
- 612: true, 616: true, 620: true, 630: true, 634: true,
- 642: true, 643: true, 646: true, 638: true, 652: true,
- 654: true, 659: true, 662: true, 663: true, 666: true,
- 670: true, 882: true, 674: true, 678: true, 682: true,
- 686: true, 688: true, 690: true, 694: true, 702: true,
- 534: true, 703: true, 705: true, 90: true, 706: true,
- 710: true, 239: true, 728: true, 724: true, 144: true,
- 729: true, 740: true, 744: true, 752: true, 756: true,
- 760: true, 158: true, 762: true, 834: true, 764: true,
- 626: true, 768: true, 772: true, 776: true, 780: true,
- 788: true, 792: true, 795: true, 796: true, 798: true,
- 800: true, 804: true, 784: true, 826: true, 581: true,
- 840: true, 858: true, 860: true, 548: true, 862: true,
- 704: true, 92: true, 850: true, 876: true, 732: true,
- 887: true, 894: true, 716: true, 248: true, 153: true,
+ 4: {}, 8: {}, 12: {}, 16: {}, 20: {},
+ 24: {}, 660: {}, 10: {}, 28: {}, 32: {},
+ 51: {}, 533: {}, 36: {}, 40: {}, 31: {},
+ 44: {}, 48: {}, 50: {}, 52: {}, 112: {},
+ 56: {}, 84: {}, 204: {}, 60: {}, 64: {},
+ 68: {}, 535: {}, 70: {}, 72: {}, 74: {},
+ 76: {}, 86: {}, 96: {}, 100: {}, 854: {},
+ 108: {}, 132: {}, 116: {}, 120: {}, 124: {},
+ 136: {}, 140: {}, 148: {}, 152: {}, 156: {},
+ 162: {}, 166: {}, 170: {}, 174: {}, 180: {},
+ 178: {}, 184: {}, 188: {}, 191: {}, 192: {},
+ 531: {}, 196: {}, 203: {}, 384: {}, 208: {},
+ 262: {}, 212: {}, 214: {}, 218: {}, 818: {},
+ 222: {}, 226: {}, 232: {}, 233: {}, 748: {},
+ 231: {}, 238: {}, 234: {}, 242: {}, 246: {},
+ 250: {}, 254: {}, 258: {}, 260: {}, 266: {},
+ 270: {}, 268: {}, 276: {}, 288: {}, 292: {},
+ 300: {}, 304: {}, 308: {}, 312: {}, 316: {},
+ 320: {}, 831: {}, 324: {}, 624: {}, 328: {},
+ 332: {}, 334: {}, 336: {}, 340: {}, 344: {},
+ 348: {}, 352: {}, 356: {}, 360: {}, 364: {},
+ 368: {}, 372: {}, 833: {}, 376: {}, 380: {},
+ 388: {}, 392: {}, 832: {}, 400: {}, 398: {},
+ 404: {}, 296: {}, 408: {}, 410: {}, 414: {},
+ 417: {}, 418: {}, 428: {}, 422: {}, 426: {},
+ 430: {}, 434: {}, 438: {}, 440: {}, 442: {},
+ 446: {}, 450: {}, 454: {}, 458: {}, 462: {},
+ 466: {}, 470: {}, 584: {}, 474: {}, 478: {},
+ 480: {}, 175: {}, 484: {}, 583: {}, 498: {},
+ 492: {}, 496: {}, 499: {}, 500: {}, 504: {},
+ 508: {}, 104: {}, 516: {}, 520: {}, 524: {},
+ 528: {}, 540: {}, 554: {}, 558: {}, 562: {},
+ 566: {}, 570: {}, 574: {}, 807: {}, 580: {},
+ 578: {}, 512: {}, 586: {}, 585: {}, 275: {},
+ 591: {}, 598: {}, 600: {}, 604: {}, 608: {},
+ 612: {}, 616: {}, 620: {}, 630: {}, 634: {},
+ 642: {}, 643: {}, 646: {}, 638: {}, 652: {},
+ 654: {}, 659: {}, 662: {}, 663: {}, 666: {},
+ 670: {}, 882: {}, 674: {}, 678: {}, 682: {},
+ 686: {}, 688: {}, 690: {}, 694: {}, 702: {},
+ 534: {}, 703: {}, 705: {}, 90: {}, 706: {},
+ 710: {}, 239: {}, 728: {}, 724: {}, 144: {},
+ 729: {}, 740: {}, 744: {}, 752: {}, 756: {},
+ 760: {}, 158: {}, 762: {}, 834: {}, 764: {},
+ 626: {}, 768: {}, 772: {}, 776: {}, 780: {},
+ 788: {}, 792: {}, 795: {}, 796: {}, 798: {},
+ 800: {}, 804: {}, 784: {}, 826: {}, 581: {},
+ 840: {}, 858: {}, 860: {}, 548: {}, 862: {},
+ 704: {}, 92: {}, 850: {}, 876: {}, 732: {},
+ 887: {}, 894: {}, 716: {}, 248: {}, 153: {},
+}
+
+var iso3166_1_alpha_numeric_eu = map[int]struct{}{
+ 40: {}, 56: {}, 100: {}, 191: {}, 196: {},
+ 200: {}, 208: {}, 233: {}, 246: {}, 250: {},
+ 276: {}, 300: {}, 348: {}, 372: {}, 380: {},
+ 428: {}, 440: {}, 442: {}, 470: {}, 528: {},
+ 616: {}, 620: {}, 642: {}, 703: {}, 705: {},
+ 724: {}, 752: {},
}
-var iso3166_2 = map[string]bool{
- "AD-02": true, "AD-03": true, "AD-04": true, "AD-05": true, "AD-06": true,
- "AD-07": true, "AD-08": true, "AE-AJ": true, "AE-AZ": true, "AE-DU": true,
- "AE-FU": true, "AE-RK": true, "AE-SH": true, "AE-UQ": true, "AF-BAL": true,
- "AF-BAM": true, "AF-BDG": true, "AF-BDS": true, "AF-BGL": true, "AF-DAY": true,
- "AF-FRA": true, "AF-FYB": true, "AF-GHA": true, "AF-GHO": true, "AF-HEL": true,
- "AF-HER": true, "AF-JOW": true, "AF-KAB": true, "AF-KAN": true, "AF-KAP": true,
- "AF-KDZ": true, "AF-KHO": true, "AF-KNR": true, "AF-LAG": true, "AF-LOG": true,
- "AF-NAN": true, "AF-NIM": true, "AF-NUR": true, "AF-PAN": true, "AF-PAR": true,
- "AF-PIA": true, "AF-PKA": true, "AF-SAM": true, "AF-SAR": true, "AF-TAK": true,
- "AF-URU": true, "AF-WAR": true, "AF-ZAB": true, "AG-03": true, "AG-04": true,
- "AG-05": true, "AG-06": true, "AG-07": true, "AG-08": true, "AG-10": true,
- "AG-11": true, "AL-01": true, "AL-02": true, "AL-03": true, "AL-04": true,
- "AL-05": true, "AL-06": true, "AL-07": true, "AL-08": true, "AL-09": true,
- "AL-10": true, "AL-11": true, "AL-12": true, "AL-BR": true, "AL-BU": true,
- "AL-DI": true, "AL-DL": true, "AL-DR": true, "AL-DV": true, "AL-EL": true,
- "AL-ER": true, "AL-FR": true, "AL-GJ": true, "AL-GR": true, "AL-HA": true,
- "AL-KA": true, "AL-KB": true, "AL-KC": true, "AL-KO": true, "AL-KR": true,
- "AL-KU": true, "AL-LB": true, "AL-LE": true, "AL-LU": true, "AL-MK": true,
- "AL-MM": true, "AL-MR": true, "AL-MT": true, "AL-PG": true, "AL-PQ": true,
- "AL-PR": true, "AL-PU": true, "AL-SH": true, "AL-SK": true, "AL-SR": true,
- "AL-TE": true, "AL-TP": true, "AL-TR": true, "AL-VL": true, "AM-AG": true,
- "AM-AR": true, "AM-AV": true, "AM-ER": true, "AM-GR": true, "AM-KT": true,
- "AM-LO": true, "AM-SH": true, "AM-SU": true, "AM-TV": true, "AM-VD": true,
- "AO-BGO": true, "AO-BGU": true, "AO-BIE": true, "AO-CAB": true, "AO-CCU": true,
- "AO-CNN": true, "AO-CNO": true, "AO-CUS": true, "AO-HUA": true, "AO-HUI": true,
- "AO-LNO": true, "AO-LSU": true, "AO-LUA": true, "AO-MAL": true, "AO-MOX": true,
- "AO-NAM": true, "AO-UIG": true, "AO-ZAI": true, "AR-A": true, "AR-B": true,
- "AR-C": true, "AR-D": true, "AR-E": true, "AR-F": true, "AR-G": true, "AR-H": true,
- "AR-J": true, "AR-K": true, "AR-L": true, "AR-M": true, "AR-N": true,
- "AR-P": true, "AR-Q": true, "AR-R": true, "AR-S": true, "AR-T": true,
- "AR-U": true, "AR-V": true, "AR-W": true, "AR-X": true, "AR-Y": true,
- "AR-Z": true, "AT-1": true, "AT-2": true, "AT-3": true, "AT-4": true,
- "AT-5": true, "AT-6": true, "AT-7": true, "AT-8": true, "AT-9": true,
- "AU-ACT": true, "AU-NSW": true, "AU-NT": true, "AU-QLD": true, "AU-SA": true,
- "AU-TAS": true, "AU-VIC": true, "AU-WA": true, "AZ-ABS": true, "AZ-AGA": true,
- "AZ-AGC": true, "AZ-AGM": true, "AZ-AGS": true, "AZ-AGU": true, "AZ-AST": true,
- "AZ-BA": true, "AZ-BAB": true, "AZ-BAL": true, "AZ-BAR": true, "AZ-BEY": true,
- "AZ-BIL": true, "AZ-CAB": true, "AZ-CAL": true, "AZ-CUL": true, "AZ-DAS": true,
- "AZ-FUZ": true, "AZ-GA": true, "AZ-GAD": true, "AZ-GOR": true, "AZ-GOY": true,
- "AZ-GYG": true, "AZ-HAC": true, "AZ-IMI": true, "AZ-ISM": true, "AZ-KAL": true,
- "AZ-KAN": true, "AZ-KUR": true, "AZ-LA": true, "AZ-LAC": true, "AZ-LAN": true,
- "AZ-LER": true, "AZ-MAS": true, "AZ-MI": true, "AZ-NA": true, "AZ-NEF": true,
- "AZ-NV": true, "AZ-NX": true, "AZ-OGU": true, "AZ-ORD": true, "AZ-QAB": true,
- "AZ-QAX": true, "AZ-QAZ": true, "AZ-QBA": true, "AZ-QBI": true, "AZ-QOB": true,
- "AZ-QUS": true, "AZ-SA": true, "AZ-SAB": true, "AZ-SAD": true, "AZ-SAH": true,
- "AZ-SAK": true, "AZ-SAL": true, "AZ-SAR": true, "AZ-SAT": true, "AZ-SBN": true,
- "AZ-SIY": true, "AZ-SKR": true, "AZ-SM": true, "AZ-SMI": true, "AZ-SMX": true,
- "AZ-SR": true, "AZ-SUS": true, "AZ-TAR": true, "AZ-TOV": true, "AZ-UCA": true,
- "AZ-XA": true, "AZ-XAC": true, "AZ-XCI": true, "AZ-XIZ": true, "AZ-XVD": true,
- "AZ-YAR": true, "AZ-YE": true, "AZ-YEV": true, "AZ-ZAN": true, "AZ-ZAQ": true,
- "AZ-ZAR": true, "BA-01": true, "BA-02": true, "BA-03": true, "BA-04": true,
- "BA-05": true, "BA-06": true, "BA-07": true, "BA-08": true, "BA-09": true,
- "BA-10": true, "BA-BIH": true, "BA-BRC": true, "BA-SRP": true, "BB-01": true,
- "BB-02": true, "BB-03": true, "BB-04": true, "BB-05": true, "BB-06": true,
- "BB-07": true, "BB-08": true, "BB-09": true, "BB-10": true, "BB-11": true,
- "BD-01": true, "BD-02": true, "BD-03": true, "BD-04": true, "BD-05": true,
- "BD-06": true, "BD-07": true, "BD-08": true, "BD-09": true, "BD-10": true,
- "BD-11": true, "BD-12": true, "BD-13": true, "BD-14": true, "BD-15": true,
- "BD-16": true, "BD-17": true, "BD-18": true, "BD-19": true, "BD-20": true,
- "BD-21": true, "BD-22": true, "BD-23": true, "BD-24": true, "BD-25": true,
- "BD-26": true, "BD-27": true, "BD-28": true, "BD-29": true, "BD-30": true,
- "BD-31": true, "BD-32": true, "BD-33": true, "BD-34": true, "BD-35": true,
- "BD-36": true, "BD-37": true, "BD-38": true, "BD-39": true, "BD-40": true,
- "BD-41": true, "BD-42": true, "BD-43": true, "BD-44": true, "BD-45": true,
- "BD-46": true, "BD-47": true, "BD-48": true, "BD-49": true, "BD-50": true,
- "BD-51": true, "BD-52": true, "BD-53": true, "BD-54": true, "BD-55": true,
- "BD-56": true, "BD-57": true, "BD-58": true, "BD-59": true, "BD-60": true,
- "BD-61": true, "BD-62": true, "BD-63": true, "BD-64": true, "BD-A": true,
- "BD-B": true, "BD-C": true, "BD-D": true, "BD-E": true, "BD-F": true,
- "BD-G": true, "BE-BRU": true, "BE-VAN": true, "BE-VBR": true, "BE-VLG": true,
- "BE-VLI": true, "BE-VOV": true, "BE-VWV": true, "BE-WAL": true, "BE-WBR": true,
- "BE-WHT": true, "BE-WLG": true, "BE-WLX": true, "BE-WNA": true, "BF-01": true,
- "BF-02": true, "BF-03": true, "BF-04": true, "BF-05": true, "BF-06": true,
- "BF-07": true, "BF-08": true, "BF-09": true, "BF-10": true, "BF-11": true,
- "BF-12": true, "BF-13": true, "BF-BAL": true, "BF-BAM": true, "BF-BAN": true,
- "BF-BAZ": true, "BF-BGR": true, "BF-BLG": true, "BF-BLK": true, "BF-COM": true,
- "BF-GAN": true, "BF-GNA": true, "BF-GOU": true, "BF-HOU": true, "BF-IOB": true,
- "BF-KAD": true, "BF-KEN": true, "BF-KMD": true, "BF-KMP": true, "BF-KOP": true,
- "BF-KOS": true, "BF-KOT": true, "BF-KOW": true, "BF-LER": true, "BF-LOR": true,
- "BF-MOU": true, "BF-NAM": true, "BF-NAO": true, "BF-NAY": true, "BF-NOU": true,
- "BF-OUB": true, "BF-OUD": true, "BF-PAS": true, "BF-PON": true, "BF-SEN": true,
- "BF-SIS": true, "BF-SMT": true, "BF-SNG": true, "BF-SOM": true, "BF-SOR": true,
- "BF-TAP": true, "BF-TUI": true, "BF-YAG": true, "BF-YAT": true, "BF-ZIR": true,
- "BF-ZON": true, "BF-ZOU": true, "BG-01": true, "BG-02": true, "BG-03": true,
- "BG-04": true, "BG-05": true, "BG-06": true, "BG-07": true, "BG-08": true,
- "BG-09": true, "BG-10": true, "BG-11": true, "BG-12": true, "BG-13": true,
- "BG-14": true, "BG-15": true, "BG-16": true, "BG-17": true, "BG-18": true,
- "BG-19": true, "BG-20": true, "BG-21": true, "BG-22": true, "BG-23": true,
- "BG-24": true, "BG-25": true, "BG-26": true, "BG-27": true, "BG-28": true,
- "BH-13": true, "BH-14": true, "BH-15": true, "BH-16": true, "BH-17": true,
- "BI-BB": true, "BI-BL": true, "BI-BM": true, "BI-BR": true, "BI-CA": true,
- "BI-CI": true, "BI-GI": true, "BI-KI": true, "BI-KR": true, "BI-KY": true,
- "BI-MA": true, "BI-MU": true, "BI-MW": true, "BI-NG": true, "BI-RM": true, "BI-RT": true,
- "BI-RY": true, "BJ-AK": true, "BJ-AL": true, "BJ-AQ": true, "BJ-BO": true,
- "BJ-CO": true, "BJ-DO": true, "BJ-KO": true, "BJ-LI": true, "BJ-MO": true,
- "BJ-OU": true, "BJ-PL": true, "BJ-ZO": true, "BN-BE": true, "BN-BM": true,
- "BN-TE": true, "BN-TU": true, "BO-B": true, "BO-C": true, "BO-H": true,
- "BO-L": true, "BO-N": true, "BO-O": true, "BO-P": true, "BO-S": true,
- "BO-T": true, "BQ-BO": true, "BQ-SA": true, "BQ-SE": true, "BR-AC": true,
- "BR-AL": true, "BR-AM": true, "BR-AP": true, "BR-BA": true, "BR-CE": true,
- "BR-DF": true, "BR-ES": true, "BR-FN": true, "BR-GO": true, "BR-MA": true,
- "BR-MG": true, "BR-MS": true, "BR-MT": true, "BR-PA": true, "BR-PB": true,
- "BR-PE": true, "BR-PI": true, "BR-PR": true, "BR-RJ": true, "BR-RN": true,
- "BR-RO": true, "BR-RR": true, "BR-RS": true, "BR-SC": true, "BR-SE": true,
- "BR-SP": true, "BR-TO": true, "BS-AK": true, "BS-BI": true, "BS-BP": true,
- "BS-BY": true, "BS-CE": true, "BS-CI": true, "BS-CK": true, "BS-CO": true,
- "BS-CS": true, "BS-EG": true, "BS-EX": true, "BS-FP": true, "BS-GC": true,
- "BS-HI": true, "BS-HT": true, "BS-IN": true, "BS-LI": true, "BS-MC": true,
- "BS-MG": true, "BS-MI": true, "BS-NE": true, "BS-NO": true, "BS-NP": true, "BS-NS": true,
- "BS-RC": true, "BS-RI": true, "BS-SA": true, "BS-SE": true, "BS-SO": true,
- "BS-SS": true, "BS-SW": true, "BS-WG": true, "BT-11": true, "BT-12": true,
- "BT-13": true, "BT-14": true, "BT-15": true, "BT-21": true, "BT-22": true,
- "BT-23": true, "BT-24": true, "BT-31": true, "BT-32": true, "BT-33": true,
- "BT-34": true, "BT-41": true, "BT-42": true, "BT-43": true, "BT-44": true,
- "BT-45": true, "BT-GA": true, "BT-TY": true, "BW-CE": true, "BW-CH": true, "BW-GH": true,
- "BW-KG": true, "BW-KL": true, "BW-KW": true, "BW-NE": true, "BW-NW": true,
- "BW-SE": true, "BW-SO": true, "BY-BR": true, "BY-HM": true, "BY-HO": true,
- "BY-HR": true, "BY-MA": true, "BY-MI": true, "BY-VI": true, "BZ-BZ": true,
- "BZ-CY": true, "BZ-CZL": true, "BZ-OW": true, "BZ-SC": true, "BZ-TOL": true,
- "CA-AB": true, "CA-BC": true, "CA-MB": true, "CA-NB": true, "CA-NL": true,
- "CA-NS": true, "CA-NT": true, "CA-NU": true, "CA-ON": true, "CA-PE": true,
- "CA-QC": true, "CA-SK": true, "CA-YT": true, "CD-BC": true, "CD-BN": true,
- "CD-EQ": true, "CD-HK": true, "CD-IT": true, "CD-KA": true, "CD-KC": true, "CD-KE": true, "CD-KG": true, "CD-KN": true,
- "CD-KW": true, "CD-KS": true, "CD-LU": true, "CD-MA": true, "CD-NK": true, "CD-OR": true, "CD-SA": true, "CD-SK": true,
- "CD-TA": true, "CD-TO": true, "CF-AC": true, "CF-BB": true, "CF-BGF": true, "CF-BK": true, "CF-HK": true, "CF-HM": true,
- "CF-HS": true, "CF-KB": true, "CF-KG": true, "CF-LB": true, "CF-MB": true,
- "CF-MP": true, "CF-NM": true, "CF-OP": true, "CF-SE": true, "CF-UK": true,
- "CF-VK": true, "CG-11": true, "CG-12": true, "CG-13": true, "CG-14": true,
- "CG-15": true, "CG-16": true, "CG-2": true, "CG-5": true, "CG-7": true, "CG-8": true,
- "CG-9": true, "CG-BZV": true, "CH-AG": true, "CH-AI": true, "CH-AR": true,
- "CH-BE": true, "CH-BL": true, "CH-BS": true, "CH-FR": true, "CH-GE": true,
- "CH-GL": true, "CH-GR": true, "CH-JU": true, "CH-LU": true, "CH-NE": true,
- "CH-NW": true, "CH-OW": true, "CH-SG": true, "CH-SH": true, "CH-SO": true,
- "CH-SZ": true, "CH-TG": true, "CH-TI": true, "CH-UR": true, "CH-VD": true,
- "CH-VS": true, "CH-ZG": true, "CH-ZH": true, "CI-AB": true, "CI-BS": true,
- "CI-CM": true, "CI-DN": true, "CI-GD": true, "CI-LC": true, "CI-LG": true,
- "CI-MG": true, "CI-SM": true, "CI-SV": true, "CI-VB": true, "CI-WR": true,
- "CI-YM": true, "CI-ZZ": true, "CL-AI": true, "CL-AN": true, "CL-AP": true,
- "CL-AR": true, "CL-AT": true, "CL-BI": true, "CL-CO": true, "CL-LI": true,
- "CL-LL": true, "CL-LR": true, "CL-MA": true, "CL-ML": true, "CL-NB": true, "CL-RM": true,
- "CL-TA": true, "CL-VS": true, "CM-AD": true, "CM-CE": true, "CM-EN": true,
- "CM-ES": true, "CM-LT": true, "CM-NO": true, "CM-NW": true, "CM-OU": true,
- "CM-SU": true, "CM-SW": true, "CN-AH": true, "CN-BJ": true, "CN-CQ": true,
- "CN-FJ": true, "CN-GS": true, "CN-GD": true, "CN-GX": true, "CN-GZ": true,
- "CN-HI": true, "CN-HE": true, "CN-HL": true, "CN-HA": true, "CN-HB": true,
- "CN-HN": true, "CN-JS": true, "CN-JX": true, "CN-JL": true, "CN-LN": true,
- "CN-NM": true, "CN-NX": true, "CN-QH": true, "CN-SN": true, "CN-SD": true, "CN-SH": true,
- "CN-SX": true, "CN-SC": true, "CN-TJ": true, "CN-XJ": true, "CN-XZ": true, "CN-YN": true,
- "CN-ZJ": true, "CO-AMA": true, "CO-ANT": true, "CO-ARA": true, "CO-ATL": true,
- "CO-BOL": true, "CO-BOY": true, "CO-CAL": true, "CO-CAQ": true, "CO-CAS": true,
- "CO-CAU": true, "CO-CES": true, "CO-CHO": true, "CO-COR": true, "CO-CUN": true,
- "CO-DC": true, "CO-GUA": true, "CO-GUV": true, "CO-HUI": true, "CO-LAG": true,
- "CO-MAG": true, "CO-MET": true, "CO-NAR": true, "CO-NSA": true, "CO-PUT": true,
- "CO-QUI": true, "CO-RIS": true, "CO-SAN": true, "CO-SAP": true, "CO-SUC": true,
- "CO-TOL": true, "CO-VAC": true, "CO-VAU": true, "CO-VID": true, "CR-A": true,
- "CR-C": true, "CR-G": true, "CR-H": true, "CR-L": true, "CR-P": true,
- "CR-SJ": true, "CU-01": true, "CU-02": true, "CU-03": true, "CU-04": true,
- "CU-05": true, "CU-06": true, "CU-07": true, "CU-08": true, "CU-09": true,
- "CU-10": true, "CU-11": true, "CU-12": true, "CU-13": true, "CU-14": true, "CU-15": true,
- "CU-16": true, "CU-99": true, "CV-B": true, "CV-BR": true, "CV-BV": true, "CV-CA": true,
- "CV-CF": true, "CV-CR": true, "CV-MA": true, "CV-MO": true, "CV-PA": true,
- "CV-PN": true, "CV-PR": true, "CV-RB": true, "CV-RG": true, "CV-RS": true,
- "CV-S": true, "CV-SD": true, "CV-SF": true, "CV-SL": true, "CV-SM": true,
- "CV-SO": true, "CV-SS": true, "CV-SV": true, "CV-TA": true, "CV-TS": true,
- "CY-01": true, "CY-02": true, "CY-03": true, "CY-04": true, "CY-05": true,
- "CY-06": true, "CZ-10": true, "CZ-101": true, "CZ-102": true, "CZ-103": true,
- "CZ-104": true, "CZ-105": true, "CZ-106": true, "CZ-107": true, "CZ-108": true,
- "CZ-109": true, "CZ-110": true, "CZ-111": true, "CZ-112": true, "CZ-113": true,
- "CZ-114": true, "CZ-115": true, "CZ-116": true, "CZ-117": true, "CZ-118": true,
- "CZ-119": true, "CZ-120": true, "CZ-121": true, "CZ-122": true, "CZ-20": true,
- "CZ-201": true, "CZ-202": true, "CZ-203": true, "CZ-204": true, "CZ-205": true,
- "CZ-206": true, "CZ-207": true, "CZ-208": true, "CZ-209": true, "CZ-20A": true,
- "CZ-20B": true, "CZ-20C": true, "CZ-31": true, "CZ-311": true, "CZ-312": true,
- "CZ-313": true, "CZ-314": true, "CZ-315": true, "CZ-316": true, "CZ-317": true,
- "CZ-32": true, "CZ-321": true, "CZ-322": true, "CZ-323": true, "CZ-324": true,
- "CZ-325": true, "CZ-326": true, "CZ-327": true, "CZ-41": true, "CZ-411": true,
- "CZ-412": true, "CZ-413": true, "CZ-42": true, "CZ-421": true, "CZ-422": true,
- "CZ-423": true, "CZ-424": true, "CZ-425": true, "CZ-426": true, "CZ-427": true,
- "CZ-51": true, "CZ-511": true, "CZ-512": true, "CZ-513": true, "CZ-514": true,
- "CZ-52": true, "CZ-521": true, "CZ-522": true, "CZ-523": true, "CZ-524": true,
- "CZ-525": true, "CZ-53": true, "CZ-531": true, "CZ-532": true, "CZ-533": true,
- "CZ-534": true, "CZ-63": true, "CZ-631": true, "CZ-632": true, "CZ-633": true,
- "CZ-634": true, "CZ-635": true, "CZ-64": true, "CZ-641": true, "CZ-642": true,
- "CZ-643": true, "CZ-644": true, "CZ-645": true, "CZ-646": true, "CZ-647": true,
- "CZ-71": true, "CZ-711": true, "CZ-712": true, "CZ-713": true, "CZ-714": true,
- "CZ-715": true, "CZ-72": true, "CZ-721": true, "CZ-722": true, "CZ-723": true,
- "CZ-724": true, "CZ-80": true, "CZ-801": true, "CZ-802": true, "CZ-803": true,
- "CZ-804": true, "CZ-805": true, "CZ-806": true, "DE-BB": true, "DE-BE": true,
- "DE-BW": true, "DE-BY": true, "DE-HB": true, "DE-HE": true, "DE-HH": true,
- "DE-MV": true, "DE-NI": true, "DE-NW": true, "DE-RP": true, "DE-SH": true,
- "DE-SL": true, "DE-SN": true, "DE-ST": true, "DE-TH": true, "DJ-AR": true,
- "DJ-AS": true, "DJ-DI": true, "DJ-DJ": true, "DJ-OB": true, "DJ-TA": true,
- "DK-81": true, "DK-82": true, "DK-83": true, "DK-84": true, "DK-85": true,
- "DM-01": true, "DM-02": true, "DM-03": true, "DM-04": true, "DM-05": true,
- "DM-06": true, "DM-07": true, "DM-08": true, "DM-09": true, "DM-10": true,
- "DO-01": true, "DO-02": true, "DO-03": true, "DO-04": true, "DO-05": true,
- "DO-06": true, "DO-07": true, "DO-08": true, "DO-09": true, "DO-10": true,
- "DO-11": true, "DO-12": true, "DO-13": true, "DO-14": true, "DO-15": true,
- "DO-16": true, "DO-17": true, "DO-18": true, "DO-19": true, "DO-20": true,
- "DO-21": true, "DO-22": true, "DO-23": true, "DO-24": true, "DO-25": true,
- "DO-26": true, "DO-27": true, "DO-28": true, "DO-29": true, "DO-30": true, "DO-31": true,
- "DZ-01": true, "DZ-02": true, "DZ-03": true, "DZ-04": true, "DZ-05": true,
- "DZ-06": true, "DZ-07": true, "DZ-08": true, "DZ-09": true, "DZ-10": true,
- "DZ-11": true, "DZ-12": true, "DZ-13": true, "DZ-14": true, "DZ-15": true,
- "DZ-16": true, "DZ-17": true, "DZ-18": true, "DZ-19": true, "DZ-20": true,
- "DZ-21": true, "DZ-22": true, "DZ-23": true, "DZ-24": true, "DZ-25": true,
- "DZ-26": true, "DZ-27": true, "DZ-28": true, "DZ-29": true, "DZ-30": true,
- "DZ-31": true, "DZ-32": true, "DZ-33": true, "DZ-34": true, "DZ-35": true,
- "DZ-36": true, "DZ-37": true, "DZ-38": true, "DZ-39": true, "DZ-40": true,
- "DZ-41": true, "DZ-42": true, "DZ-43": true, "DZ-44": true, "DZ-45": true,
- "DZ-46": true, "DZ-47": true, "DZ-48": true, "DZ-49": true, "DZ-51": true,
- "DZ-53": true, "DZ-55": true, "DZ-56": true, "DZ-57": true, "EC-A": true, "EC-B": true,
- "EC-C": true, "EC-D": true, "EC-E": true, "EC-F": true, "EC-G": true,
- "EC-H": true, "EC-I": true, "EC-L": true, "EC-M": true, "EC-N": true,
- "EC-O": true, "EC-P": true, "EC-R": true, "EC-S": true, "EC-SD": true,
- "EC-SE": true, "EC-T": true, "EC-U": true, "EC-W": true, "EC-X": true,
- "EC-Y": true, "EC-Z": true, "EE-37": true, "EE-39": true, "EE-44": true, "EE-45": true,
- "EE-49": true, "EE-50": true, "EE-51": true, "EE-52": true, "EE-56": true, "EE-57": true,
- "EE-59": true, "EE-60": true, "EE-64": true, "EE-65": true, "EE-67": true, "EE-68": true,
- "EE-70": true, "EE-71": true, "EE-74": true, "EE-78": true, "EE-79": true, "EE-81": true, "EE-82": true,
- "EE-84": true, "EE-86": true, "EE-87": true, "EG-ALX": true, "EG-ASN": true, "EG-AST": true,
- "EG-BA": true, "EG-BH": true, "EG-BNS": true, "EG-C": true, "EG-DK": true,
- "EG-DT": true, "EG-FYM": true, "EG-GH": true, "EG-GZ": true, "EG-HU": true,
- "EG-IS": true, "EG-JS": true, "EG-KB": true, "EG-KFS": true, "EG-KN": true,
- "EG-LX": true, "EG-MN": true, "EG-MNF": true, "EG-MT": true, "EG-PTS": true, "EG-SHG": true,
- "EG-SHR": true, "EG-SIN": true, "EG-SU": true, "EG-SUZ": true, "EG-WAD": true,
- "ER-AN": true, "ER-DK": true, "ER-DU": true, "ER-GB": true, "ER-MA": true,
- "ER-SK": true, "ES-A": true, "ES-AB": true, "ES-AL": true, "ES-AN": true,
- "ES-AR": true, "ES-AS": true, "ES-AV": true, "ES-B": true, "ES-BA": true,
- "ES-BI": true, "ES-BU": true, "ES-C": true, "ES-CA": true, "ES-CB": true,
- "ES-CC": true, "ES-CE": true, "ES-CL": true, "ES-CM": true, "ES-CN": true,
- "ES-CO": true, "ES-CR": true, "ES-CS": true, "ES-CT": true, "ES-CU": true,
- "ES-EX": true, "ES-GA": true, "ES-GC": true, "ES-GI": true, "ES-GR": true,
- "ES-GU": true, "ES-H": true, "ES-HU": true, "ES-IB": true, "ES-J": true,
- "ES-L": true, "ES-LE": true, "ES-LO": true, "ES-LU": true, "ES-M": true,
- "ES-MA": true, "ES-MC": true, "ES-MD": true, "ES-ML": true, "ES-MU": true,
- "ES-NA": true, "ES-NC": true, "ES-O": true, "ES-OR": true, "ES-P": true,
- "ES-PM": true, "ES-PO": true, "ES-PV": true, "ES-RI": true, "ES-S": true,
- "ES-SA": true, "ES-SE": true, "ES-SG": true, "ES-SO": true, "ES-SS": true,
- "ES-T": true, "ES-TE": true, "ES-TF": true, "ES-TO": true, "ES-V": true,
- "ES-VA": true, "ES-VC": true, "ES-VI": true, "ES-Z": true, "ES-ZA": true,
- "ET-AA": true, "ET-AF": true, "ET-AM": true, "ET-BE": true, "ET-DD": true,
- "ET-GA": true, "ET-HA": true, "ET-OR": true, "ET-SN": true, "ET-SO": true,
- "ET-TI": true, "FI-01": true, "FI-02": true, "FI-03": true, "FI-04": true,
- "FI-05": true, "FI-06": true, "FI-07": true, "FI-08": true, "FI-09": true,
- "FI-10": true, "FI-11": true, "FI-12": true, "FI-13": true, "FI-14": true,
- "FI-15": true, "FI-16": true, "FI-17": true, "FI-18": true, "FI-19": true,
- "FJ-C": true, "FJ-E": true, "FJ-N": true, "FJ-R": true, "FJ-W": true,
- "FM-KSA": true, "FM-PNI": true, "FM-TRK": true, "FM-YAP": true, "FR-01": true,
- "FR-02": true, "FR-03": true, "FR-04": true, "FR-05": true, "FR-06": true,
- "FR-07": true, "FR-08": true, "FR-09": true, "FR-10": true, "FR-11": true,
- "FR-12": true, "FR-13": true, "FR-14": true, "FR-15": true, "FR-16": true,
- "FR-17": true, "FR-18": true, "FR-19": true, "FR-20R": true, "FR-21": true, "FR-22": true,
- "FR-23": true, "FR-24": true, "FR-25": true, "FR-26": true, "FR-27": true,
- "FR-28": true, "FR-29": true, "FR-2A": true, "FR-2B": true, "FR-30": true,
- "FR-31": true, "FR-32": true, "FR-33": true, "FR-34": true, "FR-35": true,
- "FR-36": true, "FR-37": true, "FR-38": true, "FR-39": true, "FR-40": true,
- "FR-41": true, "FR-42": true, "FR-43": true, "FR-44": true, "FR-45": true,
- "FR-46": true, "FR-47": true, "FR-48": true, "FR-49": true, "FR-50": true,
- "FR-51": true, "FR-52": true, "FR-53": true, "FR-54": true, "FR-55": true,
- "FR-56": true, "FR-57": true, "FR-58": true, "FR-59": true, "FR-60": true,
- "FR-61": true, "FR-62": true, "FR-63": true, "FR-64": true, "FR-65": true,
- "FR-66": true, "FR-67": true, "FR-68": true, "FR-69": true, "FR-70": true,
- "FR-71": true, "FR-72": true, "FR-73": true, "FR-74": true, "FR-75": true,
- "FR-76": true, "FR-77": true, "FR-78": true, "FR-79": true, "FR-80": true,
- "FR-81": true, "FR-82": true, "FR-83": true, "FR-84": true, "FR-85": true,
- "FR-86": true, "FR-87": true, "FR-88": true, "FR-89": true, "FR-90": true,
- "FR-91": true, "FR-92": true, "FR-93": true, "FR-94": true, "FR-95": true,
- "FR-ARA": true, "FR-BFC": true, "FR-BL": true, "FR-BRE": true, "FR-COR": true,
- "FR-CP": true, "FR-CVL": true, "FR-GES": true, "FR-GF": true, "FR-GP": true,
- "FR-GUA": true, "FR-HDF": true, "FR-IDF": true, "FR-LRE": true, "FR-MAY": true,
- "FR-MF": true, "FR-MQ": true, "FR-NAQ": true, "FR-NC": true, "FR-NOR": true,
- "FR-OCC": true, "FR-PAC": true, "FR-PDL": true, "FR-PF": true, "FR-PM": true,
- "FR-RE": true, "FR-TF": true, "FR-WF": true, "FR-YT": true, "GA-1": true,
- "GA-2": true, "GA-3": true, "GA-4": true, "GA-5": true, "GA-6": true,
- "GA-7": true, "GA-8": true, "GA-9": true, "GB-ABC": true, "GB-ABD": true,
- "GB-ABE": true, "GB-AGB": true, "GB-AGY": true, "GB-AND": true, "GB-ANN": true,
- "GB-ANS": true, "GB-BAS": true, "GB-BBD": true, "GB-BDF": true, "GB-BDG": true,
- "GB-BEN": true, "GB-BEX": true, "GB-BFS": true, "GB-BGE": true, "GB-BGW": true,
- "GB-BIR": true, "GB-BKM": true, "GB-BMH": true, "GB-BNE": true, "GB-BNH": true,
- "GB-BNS": true, "GB-BOL": true, "GB-BPL": true, "GB-BRC": true, "GB-BRD": true,
- "GB-BRY": true, "GB-BST": true, "GB-BUR": true, "GB-CAM": true, "GB-CAY": true,
- "GB-CBF": true, "GB-CCG": true, "GB-CGN": true, "GB-CHE": true, "GB-CHW": true,
- "GB-CLD": true, "GB-CLK": true, "GB-CMA": true, "GB-CMD": true, "GB-CMN": true,
- "GB-CON": true, "GB-COV": true, "GB-CRF": true, "GB-CRY": true, "GB-CWY": true,
- "GB-DAL": true, "GB-DBY": true, "GB-DEN": true, "GB-DER": true, "GB-DEV": true,
- "GB-DGY": true, "GB-DNC": true, "GB-DND": true, "GB-DOR": true, "GB-DRS": true,
- "GB-DUD": true, "GB-DUR": true, "GB-EAL": true, "GB-EAW": true, "GB-EAY": true,
- "GB-EDH": true, "GB-EDU": true, "GB-ELN": true, "GB-ELS": true, "GB-ENF": true,
- "GB-ENG": true, "GB-ERW": true, "GB-ERY": true, "GB-ESS": true, "GB-ESX": true,
- "GB-FAL": true, "GB-FIF": true, "GB-FLN": true, "GB-FMO": true, "GB-GAT": true,
- "GB-GBN": true, "GB-GLG": true, "GB-GLS": true, "GB-GRE": true, "GB-GWN": true,
- "GB-HAL": true, "GB-HAM": true, "GB-HAV": true, "GB-HCK": true, "GB-HEF": true,
- "GB-HIL": true, "GB-HLD": true, "GB-HMF": true, "GB-HNS": true, "GB-HPL": true,
- "GB-HRT": true, "GB-HRW": true, "GB-HRY": true, "GB-IOS": true, "GB-IOW": true,
- "GB-ISL": true, "GB-IVC": true, "GB-KEC": true, "GB-KEN": true, "GB-KHL": true,
- "GB-KIR": true, "GB-KTT": true, "GB-KWL": true, "GB-LAN": true, "GB-LBC": true,
- "GB-LBH": true, "GB-LCE": true, "GB-LDS": true, "GB-LEC": true, "GB-LEW": true,
- "GB-LIN": true, "GB-LIV": true, "GB-LND": true, "GB-LUT": true, "GB-MAN": true,
- "GB-MDB": true, "GB-MDW": true, "GB-MEA": true, "GB-MIK": true, "GD-01": true,
- "GB-MLN": true, "GB-MON": true, "GB-MRT": true, "GB-MRY": true, "GB-MTY": true,
- "GB-MUL": true, "GB-NAY": true, "GB-NBL": true, "GB-NEL": true, "GB-NET": true,
- "GB-NFK": true, "GB-NGM": true, "GB-NIR": true, "GB-NLK": true, "GB-NLN": true,
- "GB-NMD": true, "GB-NSM": true, "GB-NTH": true, "GB-NTL": true, "GB-NTT": true,
- "GB-NTY": true, "GB-NWM": true, "GB-NWP": true, "GB-NYK": true, "GB-OLD": true,
- "GB-ORK": true, "GB-OXF": true, "GB-PEM": true, "GB-PKN": true, "GB-PLY": true,
- "GB-POL": true, "GB-POR": true, "GB-POW": true, "GB-PTE": true, "GB-RCC": true,
- "GB-RCH": true, "GB-RCT": true, "GB-RDB": true, "GB-RDG": true, "GB-RFW": true,
- "GB-RIC": true, "GB-ROT": true, "GB-RUT": true, "GB-SAW": true, "GB-SAY": true,
- "GB-SCB": true, "GB-SCT": true, "GB-SFK": true, "GB-SFT": true, "GB-SGC": true,
- "GB-SHF": true, "GB-SHN": true, "GB-SHR": true, "GB-SKP": true, "GB-SLF": true,
- "GB-SLG": true, "GB-SLK": true, "GB-SND": true, "GB-SOL": true, "GB-SOM": true,
- "GB-SOS": true, "GB-SRY": true, "GB-STE": true, "GB-STG": true, "GB-STH": true,
- "GB-STN": true, "GB-STS": true, "GB-STT": true, "GB-STY": true, "GB-SWA": true,
- "GB-SWD": true, "GB-SWK": true, "GB-TAM": true, "GB-TFW": true, "GB-THR": true,
- "GB-TOB": true, "GB-TOF": true, "GB-TRF": true, "GB-TWH": true, "GB-UKM": true,
- "GB-VGL": true, "GB-WAR": true, "GB-WBK": true, "GB-WDU": true, "GB-WFT": true,
- "GB-WGN": true, "GB-WIL": true, "GB-WKF": true, "GB-WLL": true, "GB-WLN": true,
- "GB-WLS": true, "GB-WLV": true, "GB-WND": true, "GB-WNM": true, "GB-WOK": true,
- "GB-WOR": true, "GB-WRL": true, "GB-WRT": true, "GB-WRX": true, "GB-WSM": true,
- "GB-WSX": true, "GB-YOR": true, "GB-ZET": true, "GD-02": true, "GD-03": true,
- "GD-04": true, "GD-05": true, "GD-06": true, "GD-10": true, "GE-AB": true,
- "GE-AJ": true, "GE-GU": true, "GE-IM": true, "GE-KA": true, "GE-KK": true,
- "GE-MM": true, "GE-RL": true, "GE-SJ": true, "GE-SK": true, "GE-SZ": true,
- "GE-TB": true, "GH-AA": true, "GH-AH": true, "GH-AF": true, "GH-BA": true, "GH-BO": true, "GH-BE": true, "GH-CP": true,
- "GH-EP": true, "GH-NP": true, "GH-TV": true, "GH-UE": true, "GH-UW": true,
- "GH-WP": true, "GL-AV": true, "GL-KU": true, "GL-QA": true, "GL-QT": true, "GL-QE": true, "GL-SM": true,
- "GM-B": true, "GM-L": true, "GM-M": true, "GM-N": true, "GM-U": true,
- "GM-W": true, "GN-B": true, "GN-BE": true, "GN-BF": true, "GN-BK": true,
- "GN-C": true, "GN-CO": true, "GN-D": true, "GN-DB": true, "GN-DI": true,
- "GN-DL": true, "GN-DU": true, "GN-F": true, "GN-FA": true, "GN-FO": true,
- "GN-FR": true, "GN-GA": true, "GN-GU": true, "GN-K": true, "GN-KA": true,
- "GN-KB": true, "GN-KD": true, "GN-KE": true, "GN-KN": true, "GN-KO": true,
- "GN-KS": true, "GN-L": true, "GN-LA": true, "GN-LE": true, "GN-LO": true,
- "GN-M": true, "GN-MC": true, "GN-MD": true, "GN-ML": true, "GN-MM": true,
- "GN-N": true, "GN-NZ": true, "GN-PI": true, "GN-SI": true, "GN-TE": true,
- "GN-TO": true, "GN-YO": true, "GQ-AN": true, "GQ-BN": true, "GQ-BS": true,
- "GQ-C": true, "GQ-CS": true, "GQ-I": true, "GQ-KN": true, "GQ-LI": true,
- "GQ-WN": true, "GR-01": true, "GR-03": true, "GR-04": true, "GR-05": true,
- "GR-06": true, "GR-07": true, "GR-11": true, "GR-12": true, "GR-13": true,
- "GR-14": true, "GR-15": true, "GR-16": true, "GR-17": true, "GR-21": true,
- "GR-22": true, "GR-23": true, "GR-24": true, "GR-31": true, "GR-32": true,
- "GR-33": true, "GR-34": true, "GR-41": true, "GR-42": true, "GR-43": true,
- "GR-44": true, "GR-51": true, "GR-52": true, "GR-53": true, "GR-54": true,
- "GR-55": true, "GR-56": true, "GR-57": true, "GR-58": true, "GR-59": true,
- "GR-61": true, "GR-62": true, "GR-63": true, "GR-64": true, "GR-69": true,
- "GR-71": true, "GR-72": true, "GR-73": true, "GR-81": true, "GR-82": true,
- "GR-83": true, "GR-84": true, "GR-85": true, "GR-91": true, "GR-92": true,
- "GR-93": true, "GR-94": true, "GR-A": true, "GR-A1": true, "GR-B": true,
- "GR-C": true, "GR-D": true, "GR-E": true, "GR-F": true, "GR-G": true,
- "GR-H": true, "GR-I": true, "GR-J": true, "GR-K": true, "GR-L": true,
- "GR-M": true, "GT-01": true, "GT-02": true, "GT-03": true, "GT-04": true,
- "GT-05": true, "GT-06": true, "GT-07": true, "GT-08": true, "GT-09": true,
- "GT-10": true, "GT-11": true, "GT-12": true, "GT-13": true, "GT-14": true,
- "GT-15": true, "GT-16": true, "GT-17": true, "GT-18": true, "GT-19": true,
- "GT-20": true, "GT-21": true, "GT-22": true, "GW-BA": true, "GW-BL": true,
- "GW-BM": true, "GW-BS": true, "GW-CA": true, "GW-GA": true, "GW-L": true,
- "GW-N": true, "GW-OI": true, "GW-QU": true, "GW-S": true, "GW-TO": true,
- "GY-BA": true, "GY-CU": true, "GY-DE": true, "GY-EB": true, "GY-ES": true,
- "GY-MA": true, "GY-PM": true, "GY-PT": true, "GY-UD": true, "GY-UT": true,
- "HN-AT": true, "HN-CH": true, "HN-CL": true, "HN-CM": true, "HN-CP": true,
- "HN-CR": true, "HN-EP": true, "HN-FM": true, "HN-GD": true, "HN-IB": true,
- "HN-IN": true, "HN-LE": true, "HN-LP": true, "HN-OC": true, "HN-OL": true,
- "HN-SB": true, "HN-VA": true, "HN-YO": true, "HR-01": true, "HR-02": true,
- "HR-03": true, "HR-04": true, "HR-05": true, "HR-06": true, "HR-07": true,
- "HR-08": true, "HR-09": true, "HR-10": true, "HR-11": true, "HR-12": true,
- "HR-13": true, "HR-14": true, "HR-15": true, "HR-16": true, "HR-17": true,
- "HR-18": true, "HR-19": true, "HR-20": true, "HR-21": true, "HT-AR": true,
- "HT-CE": true, "HT-GA": true, "HT-ND": true, "HT-NE": true, "HT-NO": true, "HT-NI": true,
- "HT-OU": true, "HT-SD": true, "HT-SE": true, "HU-BA": true, "HU-BC": true,
- "HU-BE": true, "HU-BK": true, "HU-BU": true, "HU-BZ": true, "HU-CS": true,
- "HU-DE": true, "HU-DU": true, "HU-EG": true, "HU-ER": true, "HU-FE": true,
- "HU-GS": true, "HU-GY": true, "HU-HB": true, "HU-HE": true, "HU-HV": true,
- "HU-JN": true, "HU-KE": true, "HU-KM": true, "HU-KV": true, "HU-MI": true,
- "HU-NK": true, "HU-NO": true, "HU-NY": true, "HU-PE": true, "HU-PS": true,
- "HU-SD": true, "HU-SF": true, "HU-SH": true, "HU-SK": true, "HU-SN": true,
- "HU-SO": true, "HU-SS": true, "HU-ST": true, "HU-SZ": true, "HU-TB": true,
- "HU-TO": true, "HU-VA": true, "HU-VE": true, "HU-VM": true, "HU-ZA": true,
- "HU-ZE": true, "ID-AC": true, "ID-BA": true, "ID-BB": true, "ID-BE": true,
- "ID-BT": true, "ID-GO": true, "ID-IJ": true, "ID-JA": true, "ID-JB": true,
- "ID-JI": true, "ID-JK": true, "ID-JT": true, "ID-JW": true, "ID-KA": true,
- "ID-KB": true, "ID-KI": true, "ID-KU": true, "ID-KR": true, "ID-KS": true,
- "ID-KT": true, "ID-LA": true, "ID-MA": true, "ID-ML": true, "ID-MU": true,
- "ID-NB": true, "ID-NT": true, "ID-NU": true, "ID-PA": true, "ID-PB": true,
- "ID-PE": true, "ID-PP": true, "ID-PS": true, "ID-PT": true, "ID-RI": true,
- "ID-SA": true, "ID-SB": true, "ID-SG": true, "ID-SL": true, "ID-SM": true,
- "ID-SN": true, "ID-SR": true, "ID-SS": true, "ID-ST": true, "ID-SU": true,
- "ID-YO": true, "IE-C": true, "IE-CE": true, "IE-CN": true, "IE-CO": true,
- "IE-CW": true, "IE-D": true, "IE-DL": true, "IE-G": true, "IE-KE": true,
- "IE-KK": true, "IE-KY": true, "IE-L": true, "IE-LD": true, "IE-LH": true,
- "IE-LK": true, "IE-LM": true, "IE-LS": true, "IE-M": true, "IE-MH": true,
- "IE-MN": true, "IE-MO": true, "IE-OY": true, "IE-RN": true, "IE-SO": true,
- "IE-TA": true, "IE-U": true, "IE-WD": true, "IE-WH": true, "IE-WW": true,
- "IE-WX": true, "IL-D": true, "IL-HA": true, "IL-JM": true, "IL-M": true,
- "IL-TA": true, "IL-Z": true, "IN-AN": true, "IN-AP": true, "IN-AR": true,
- "IN-AS": true, "IN-BR": true, "IN-CH": true, "IN-CT": true, "IN-DH": true,
- "IN-DL": true, "IN-DN": true, "IN-GA": true, "IN-GJ": true, "IN-HP": true,
- "IN-HR": true, "IN-JH": true, "IN-JK": true, "IN-KA": true, "IN-KL": true,
- "IN-LD": true, "IN-MH": true, "IN-ML": true, "IN-MN": true, "IN-MP": true,
- "IN-MZ": true, "IN-NL": true, "IN-TG": true, "IN-OR": true, "IN-PB": true, "IN-PY": true,
- "IN-RJ": true, "IN-SK": true, "IN-TN": true, "IN-TR": true, "IN-UP": true,
- "IN-UT": true, "IN-WB": true, "IQ-AN": true, "IQ-AR": true, "IQ-BA": true,
- "IQ-BB": true, "IQ-BG": true, "IQ-DA": true, "IQ-DI": true, "IQ-DQ": true,
- "IQ-KA": true, "IQ-KI": true, "IQ-MA": true, "IQ-MU": true, "IQ-NA": true, "IQ-NI": true,
- "IQ-QA": true, "IQ-SD": true, "IQ-SW": true, "IQ-SU": true, "IQ-TS": true, "IQ-WA": true,
- "IR-00": true, "IR-01": true, "IR-02": true, "IR-03": true, "IR-04": true, "IR-05": true,
- "IR-06": true, "IR-07": true, "IR-08": true, "IR-09": true, "IR-10": true, "IR-11": true,
- "IR-12": true, "IR-13": true, "IR-14": true, "IR-15": true, "IR-16": true,
- "IR-17": true, "IR-18": true, "IR-19": true, "IR-20": true, "IR-21": true,
- "IR-22": true, "IR-23": true, "IR-24": true, "IR-25": true, "IR-26": true,
- "IR-27": true, "IR-28": true, "IR-29": true, "IR-30": true, "IR-31": true,
- "IS-0": true, "IS-1": true, "IS-2": true, "IS-3": true, "IS-4": true,
- "IS-5": true, "IS-6": true, "IS-7": true, "IS-8": true, "IT-21": true,
- "IT-23": true, "IT-25": true, "IT-32": true, "IT-34": true, "IT-36": true,
- "IT-42": true, "IT-45": true, "IT-52": true, "IT-55": true, "IT-57": true,
- "IT-62": true, "IT-65": true, "IT-67": true, "IT-72": true, "IT-75": true,
- "IT-77": true, "IT-78": true, "IT-82": true, "IT-88": true, "IT-AG": true,
- "IT-AL": true, "IT-AN": true, "IT-AO": true, "IT-AP": true, "IT-AQ": true,
- "IT-AR": true, "IT-AT": true, "IT-AV": true, "IT-BA": true, "IT-BG": true,
- "IT-BI": true, "IT-BL": true, "IT-BN": true, "IT-BO": true, "IT-BR": true,
- "IT-BS": true, "IT-BT": true, "IT-BZ": true, "IT-CA": true, "IT-CB": true,
- "IT-CE": true, "IT-CH": true, "IT-CI": true, "IT-CL": true, "IT-CN": true,
- "IT-CO": true, "IT-CR": true, "IT-CS": true, "IT-CT": true, "IT-CZ": true,
- "IT-EN": true, "IT-FC": true, "IT-FE": true, "IT-FG": true, "IT-FI": true,
- "IT-FM": true, "IT-FR": true, "IT-GE": true, "IT-GO": true, "IT-GR": true,
- "IT-IM": true, "IT-IS": true, "IT-KR": true, "IT-LC": true, "IT-LE": true,
- "IT-LI": true, "IT-LO": true, "IT-LT": true, "IT-LU": true, "IT-MB": true,
- "IT-MC": true, "IT-ME": true, "IT-MI": true, "IT-MN": true, "IT-MO": true,
- "IT-MS": true, "IT-MT": true, "IT-NA": true, "IT-NO": true, "IT-NU": true,
- "IT-OG": true, "IT-OR": true, "IT-OT": true, "IT-PA": true, "IT-PC": true,
- "IT-PD": true, "IT-PE": true, "IT-PG": true, "IT-PI": true, "IT-PN": true,
- "IT-PO": true, "IT-PR": true, "IT-PT": true, "IT-PU": true, "IT-PV": true,
- "IT-PZ": true, "IT-RA": true, "IT-RC": true, "IT-RE": true, "IT-RG": true,
- "IT-RI": true, "IT-RM": true, "IT-RN": true, "IT-RO": true, "IT-SA": true,
- "IT-SI": true, "IT-SO": true, "IT-SP": true, "IT-SR": true, "IT-SS": true,
- "IT-SV": true, "IT-TA": true, "IT-TE": true, "IT-TN": true, "IT-TO": true,
- "IT-TP": true, "IT-TR": true, "IT-TS": true, "IT-TV": true, "IT-UD": true,
- "IT-VA": true, "IT-VB": true, "IT-VC": true, "IT-VE": true, "IT-VI": true,
- "IT-VR": true, "IT-VS": true, "IT-VT": true, "IT-VV": true, "JM-01": true,
- "JM-02": true, "JM-03": true, "JM-04": true, "JM-05": true, "JM-06": true,
- "JM-07": true, "JM-08": true, "JM-09": true, "JM-10": true, "JM-11": true,
- "JM-12": true, "JM-13": true, "JM-14": true, "JO-AJ": true, "JO-AM": true,
- "JO-AQ": true, "JO-AT": true, "JO-AZ": true, "JO-BA": true, "JO-IR": true,
- "JO-JA": true, "JO-KA": true, "JO-MA": true, "JO-MD": true, "JO-MN": true,
- "JP-01": true, "JP-02": true, "JP-03": true, "JP-04": true, "JP-05": true,
- "JP-06": true, "JP-07": true, "JP-08": true, "JP-09": true, "JP-10": true,
- "JP-11": true, "JP-12": true, "JP-13": true, "JP-14": true, "JP-15": true,
- "JP-16": true, "JP-17": true, "JP-18": true, "JP-19": true, "JP-20": true,
- "JP-21": true, "JP-22": true, "JP-23": true, "JP-24": true, "JP-25": true,
- "JP-26": true, "JP-27": true, "JP-28": true, "JP-29": true, "JP-30": true,
- "JP-31": true, "JP-32": true, "JP-33": true, "JP-34": true, "JP-35": true,
- "JP-36": true, "JP-37": true, "JP-38": true, "JP-39": true, "JP-40": true,
- "JP-41": true, "JP-42": true, "JP-43": true, "JP-44": true, "JP-45": true,
- "JP-46": true, "JP-47": true, "KE-01": true, "KE-02": true, "KE-03": true,
- "KE-04": true, "KE-05": true, "KE-06": true, "KE-07": true, "KE-08": true,
- "KE-09": true, "KE-10": true, "KE-11": true, "KE-12": true, "KE-13": true,
- "KE-14": true, "KE-15": true, "KE-16": true, "KE-17": true, "KE-18": true,
- "KE-19": true, "KE-20": true, "KE-21": true, "KE-22": true, "KE-23": true,
- "KE-24": true, "KE-25": true, "KE-26": true, "KE-27": true, "KE-28": true,
- "KE-29": true, "KE-30": true, "KE-31": true, "KE-32": true, "KE-33": true,
- "KE-34": true, "KE-35": true, "KE-36": true, "KE-37": true, "KE-38": true,
- "KE-39": true, "KE-40": true, "KE-41": true, "KE-42": true, "KE-43": true,
- "KE-44": true, "KE-45": true, "KE-46": true, "KE-47": true, "KG-B": true,
- "KG-C": true, "KG-GB": true, "KG-GO": true, "KG-J": true, "KG-N": true, "KG-O": true,
- "KG-T": true, "KG-Y": true, "KH-1": true, "KH-10": true, "KH-11": true,
- "KH-12": true, "KH-13": true, "KH-14": true, "KH-15": true, "KH-16": true,
- "KH-17": true, "KH-18": true, "KH-19": true, "KH-2": true, "KH-20": true,
- "KH-21": true, "KH-22": true, "KH-23": true, "KH-24": true, "KH-3": true,
- "KH-4": true, "KH-5": true, "KH-6": true, "KH-7": true, "KH-8": true,
- "KH-9": true, "KI-G": true, "KI-L": true, "KI-P": true, "KM-A": true,
- "KM-G": true, "KM-M": true, "KN-01": true, "KN-02": true, "KN-03": true,
- "KN-04": true, "KN-05": true, "KN-06": true, "KN-07": true, "KN-08": true,
- "KN-09": true, "KN-10": true, "KN-11": true, "KN-12": true, "KN-13": true,
- "KN-15": true, "KN-K": true, "KN-N": true, "KP-01": true, "KP-02": true,
- "KP-03": true, "KP-04": true, "KP-05": true, "KP-06": true, "KP-07": true,
- "KP-08": true, "KP-09": true, "KP-10": true, "KP-13": true, "KR-11": true,
- "KR-26": true, "KR-27": true, "KR-28": true, "KR-29": true, "KR-30": true,
- "KR-31": true, "KR-41": true, "KR-42": true, "KR-43": true, "KR-44": true,
- "KR-45": true, "KR-46": true, "KR-47": true, "KR-48": true, "KR-49": true,
- "KW-AH": true, "KW-FA": true, "KW-HA": true, "KW-JA": true, "KW-KU": true,
- "KW-MU": true, "KZ-10": true, "KZ-75": true, "KZ-19": true, "KZ-11": true,
- "KZ-15": true, "KZ-71": true, "KZ-23": true, "KZ-27": true, "KZ-47": true,
- "KZ-55": true, "KZ-35": true, "KZ-39": true, "KZ-43": true, "KZ-63": true,
- "KZ-79": true, "KZ-59": true, "KZ-61": true, "KZ-62": true, "KZ-31": true,
- "KZ-33": true, "LA-AT": true, "LA-BK": true, "LA-BL": true,
- "LA-CH": true, "LA-HO": true, "LA-KH": true, "LA-LM": true, "LA-LP": true,
- "LA-OU": true, "LA-PH": true, "LA-SL": true, "LA-SV": true, "LA-VI": true,
- "LA-VT": true, "LA-XA": true, "LA-XE": true, "LA-XI": true, "LA-XS": true,
- "LB-AK": true, "LB-AS": true, "LB-BA": true, "LB-BH": true, "LB-BI": true,
- "LB-JA": true, "LB-JL": true, "LB-NA": true, "LC-01": true, "LC-02": true,
- "LC-03": true, "LC-05": true, "LC-06": true, "LC-07": true, "LC-08": true,
- "LC-10": true, "LC-11": true, "LI-01": true, "LI-02": true,
- "LI-03": true, "LI-04": true, "LI-05": true, "LI-06": true, "LI-07": true,
- "LI-08": true, "LI-09": true, "LI-10": true, "LI-11": true, "LK-1": true,
- "LK-11": true, "LK-12": true, "LK-13": true, "LK-2": true, "LK-21": true,
- "LK-22": true, "LK-23": true, "LK-3": true, "LK-31": true, "LK-32": true,
- "LK-33": true, "LK-4": true, "LK-41": true, "LK-42": true, "LK-43": true,
- "LK-44": true, "LK-45": true, "LK-5": true, "LK-51": true, "LK-52": true,
- "LK-53": true, "LK-6": true, "LK-61": true, "LK-62": true, "LK-7": true,
- "LK-71": true, "LK-72": true, "LK-8": true, "LK-81": true, "LK-82": true,
- "LK-9": true, "LK-91": true, "LK-92": true, "LR-BG": true, "LR-BM": true,
- "LR-CM": true, "LR-GB": true, "LR-GG": true, "LR-GK": true, "LR-LO": true,
- "LR-MG": true, "LR-MO": true, "LR-MY": true, "LR-NI": true, "LR-RI": true,
- "LR-SI": true, "LS-A": true, "LS-B": true, "LS-C": true, "LS-D": true,
- "LS-E": true, "LS-F": true, "LS-G": true, "LS-H": true, "LS-J": true,
- "LS-K": true, "LT-AL": true, "LT-KL": true, "LT-KU": true, "LT-MR": true,
- "LT-PN": true, "LT-SA": true, "LT-TA": true, "LT-TE": true, "LT-UT": true,
- "LT-VL": true, "LU-CA": true, "LU-CL": true, "LU-DI": true, "LU-EC": true,
- "LU-ES": true, "LU-GR": true, "LU-LU": true, "LU-ME": true, "LU-RD": true,
- "LU-RM": true, "LU-VD": true, "LU-WI": true, "LU-D": true, "LU-G": true, "LU-L": true,
- "LV-001": true, "LV-111": true, "LV-112": true, "LV-113": true,
- "LV-002": true, "LV-003": true, "LV-004": true, "LV-005": true, "LV-006": true,
- "LV-007": true, "LV-008": true, "LV-009": true, "LV-010": true, "LV-011": true,
- "LV-012": true, "LV-013": true, "LV-014": true, "LV-015": true, "LV-016": true,
- "LV-017": true, "LV-018": true, "LV-019": true, "LV-020": true, "LV-021": true,
- "LV-022": true, "LV-023": true, "LV-024": true, "LV-025": true, "LV-026": true,
- "LV-027": true, "LV-028": true, "LV-029": true, "LV-030": true, "LV-031": true,
- "LV-032": true, "LV-033": true, "LV-034": true, "LV-035": true, "LV-036": true,
- "LV-037": true, "LV-038": true, "LV-039": true, "LV-040": true, "LV-041": true,
- "LV-042": true, "LV-043": true, "LV-044": true, "LV-045": true, "LV-046": true,
- "LV-047": true, "LV-048": true, "LV-049": true, "LV-050": true, "LV-051": true,
- "LV-052": true, "LV-053": true, "LV-054": true, "LV-055": true, "LV-056": true,
- "LV-057": true, "LV-058": true, "LV-059": true, "LV-060": true, "LV-061": true,
- "LV-062": true, "LV-063": true, "LV-064": true, "LV-065": true, "LV-066": true,
- "LV-067": true, "LV-068": true, "LV-069": true, "LV-070": true, "LV-071": true,
- "LV-072": true, "LV-073": true, "LV-074": true, "LV-075": true, "LV-076": true,
- "LV-077": true, "LV-078": true, "LV-079": true, "LV-080": true, "LV-081": true,
- "LV-082": true, "LV-083": true, "LV-084": true, "LV-085": true, "LV-086": true,
- "LV-087": true, "LV-088": true, "LV-089": true, "LV-090": true, "LV-091": true,
- "LV-092": true, "LV-093": true, "LV-094": true, "LV-095": true, "LV-096": true,
- "LV-097": true, "LV-098": true, "LV-099": true, "LV-100": true, "LV-101": true,
- "LV-102": true, "LV-103": true, "LV-104": true, "LV-105": true, "LV-106": true,
- "LV-107": true, "LV-108": true, "LV-109": true, "LV-110": true, "LV-DGV": true,
- "LV-JEL": true, "LV-JKB": true, "LV-JUR": true, "LV-LPX": true, "LV-REZ": true,
- "LV-RIX": true, "LV-VEN": true, "LV-VMR": true, "LY-BA": true, "LY-BU": true,
- "LY-DR": true, "LY-GT": true, "LY-JA": true, "LY-JB": true, "LY-JG": true,
- "LY-JI": true, "LY-JU": true, "LY-KF": true, "LY-MB": true, "LY-MI": true,
- "LY-MJ": true, "LY-MQ": true, "LY-NL": true, "LY-NQ": true, "LY-SB": true,
- "LY-SR": true, "LY-TB": true, "LY-WA": true, "LY-WD": true, "LY-WS": true,
- "LY-ZA": true, "MA-01": true, "MA-02": true, "MA-03": true, "MA-04": true,
- "MA-05": true, "MA-06": true, "MA-07": true, "MA-08": true, "MA-09": true,
- "MA-10": true, "MA-11": true, "MA-12": true, "MA-13": true, "MA-14": true,
- "MA-15": true, "MA-16": true, "MA-AGD": true, "MA-AOU": true, "MA-ASZ": true,
- "MA-AZI": true, "MA-BEM": true, "MA-BER": true, "MA-BES": true, "MA-BOD": true,
- "MA-BOM": true, "MA-CAS": true, "MA-CHE": true, "MA-CHI": true, "MA-CHT": true,
- "MA-ERR": true, "MA-ESI": true, "MA-ESM": true, "MA-FAH": true, "MA-FES": true,
- "MA-FIG": true, "MA-GUE": true, "MA-HAJ": true, "MA-HAO": true, "MA-HOC": true,
- "MA-IFR": true, "MA-INE": true, "MA-JDI": true, "MA-JRA": true, "MA-KEN": true,
- "MA-KES": true, "MA-KHE": true, "MA-KHN": true, "MA-KHO": true, "MA-LAA": true,
- "MA-LAR": true, "MA-MED": true, "MA-MEK": true, "MA-MMD": true, "MA-MMN": true,
- "MA-MOH": true, "MA-MOU": true, "MA-NAD": true, "MA-NOU": true, "MA-OUA": true,
- "MA-OUD": true, "MA-OUJ": true, "MA-RAB": true, "MA-SAF": true, "MA-SAL": true,
- "MA-SEF": true, "MA-SET": true, "MA-SIK": true, "MA-SKH": true, "MA-SYB": true,
- "MA-TAI": true, "MA-TAO": true, "MA-TAR": true, "MA-TAT": true, "MA-TAZ": true,
- "MA-TET": true, "MA-TIZ": true, "MA-TNG": true, "MA-TNT": true, "MA-ZAG": true,
- "MC-CL": true, "MC-CO": true, "MC-FO": true, "MC-GA": true, "MC-JE": true,
- "MC-LA": true, "MC-MA": true, "MC-MC": true, "MC-MG": true, "MC-MO": true,
- "MC-MU": true, "MC-PH": true, "MC-SD": true, "MC-SO": true, "MC-SP": true,
- "MC-SR": true, "MC-VR": true, "MD-AN": true, "MD-BA": true, "MD-BD": true,
- "MD-BR": true, "MD-BS": true, "MD-CA": true, "MD-CL": true, "MD-CM": true,
- "MD-CR": true, "MD-CS": true, "MD-CT": true, "MD-CU": true, "MD-DO": true,
- "MD-DR": true, "MD-DU": true, "MD-ED": true, "MD-FA": true, "MD-FL": true,
- "MD-GA": true, "MD-GL": true, "MD-HI": true, "MD-IA": true, "MD-LE": true,
- "MD-NI": true, "MD-OC": true, "MD-OR": true, "MD-RE": true, "MD-RI": true,
- "MD-SD": true, "MD-SI": true, "MD-SN": true, "MD-SO": true, "MD-ST": true,
- "MD-SV": true, "MD-TA": true, "MD-TE": true, "MD-UN": true, "ME-01": true,
- "ME-02": true, "ME-03": true, "ME-04": true, "ME-05": true, "ME-06": true,
- "ME-07": true, "ME-08": true, "ME-09": true, "ME-10": true, "ME-11": true,
- "ME-12": true, "ME-13": true, "ME-14": true, "ME-15": true, "ME-16": true,
- "ME-17": true, "ME-18": true, "ME-19": true, "ME-20": true, "ME-21": true, "ME-24": true,
- "MG-A": true, "MG-D": true, "MG-F": true, "MG-M": true, "MG-T": true,
- "MG-U": true, "MH-ALK": true, "MH-ALL": true, "MH-ARN": true, "MH-AUR": true,
- "MH-EBO": true, "MH-ENI": true, "MH-JAB": true, "MH-JAL": true, "MH-KIL": true,
- "MH-KWA": true, "MH-L": true, "MH-LAE": true, "MH-LIB": true, "MH-LIK": true,
- "MH-MAJ": true, "MH-MAL": true, "MH-MEJ": true, "MH-MIL": true, "MH-NMK": true,
- "MH-NMU": true, "MH-RON": true, "MH-T": true, "MH-UJA": true, "MH-UTI": true,
- "MH-WTJ": true, "MH-WTN": true, "MK-101": true, "MK-102": true, "MK-103": true,
- "MK-104": true, "MK-105": true,
- "MK-106": true, "MK-107": true, "MK-108": true, "MK-109": true, "MK-201": true,
- "MK-202": true, "MK-205": true, "MK-206": true, "MK-207": true, "MK-208": true,
- "MK-209": true, "MK-210": true, "MK-211": true, "MK-301": true, "MK-303": true,
- "MK-307": true, "MK-308": true, "MK-310": true, "MK-311": true, "MK-312": true,
- "MK-401": true, "MK-402": true, "MK-403": true, "MK-404": true, "MK-405": true,
- "MK-406": true, "MK-408": true, "MK-409": true, "MK-410": true, "MK-501": true,
- "MK-502": true, "MK-503": true, "MK-505": true, "MK-506": true, "MK-507": true,
- "MK-508": true, "MK-509": true, "MK-601": true, "MK-602": true, "MK-604": true,
- "MK-605": true, "MK-606": true, "MK-607": true, "MK-608": true, "MK-609": true,
- "MK-701": true, "MK-702": true, "MK-703": true, "MK-704": true, "MK-705": true,
- "MK-803": true, "MK-804": true, "MK-806": true, "MK-807": true, "MK-809": true,
- "MK-810": true, "MK-811": true, "MK-812": true, "MK-813": true, "MK-814": true,
- "MK-816": true, "ML-1": true, "ML-2": true, "ML-3": true, "ML-4": true,
- "ML-5": true, "ML-6": true, "ML-7": true, "ML-8": true, "ML-BKO": true,
- "MM-01": true, "MM-02": true, "MM-03": true, "MM-04": true, "MM-05": true,
- "MM-06": true, "MM-07": true, "MM-11": true, "MM-12": true, "MM-13": true,
- "MM-14": true, "MM-15": true, "MM-16": true, "MM-17": true, "MM-18": true, "MN-035": true,
- "MN-037": true, "MN-039": true, "MN-041": true, "MN-043": true, "MN-046": true,
- "MN-047": true, "MN-049": true, "MN-051": true, "MN-053": true, "MN-055": true,
- "MN-057": true, "MN-059": true, "MN-061": true, "MN-063": true, "MN-064": true,
- "MN-065": true, "MN-067": true, "MN-069": true, "MN-071": true, "MN-073": true,
- "MN-1": true, "MR-01": true, "MR-02": true, "MR-03": true, "MR-04": true,
- "MR-05": true, "MR-06": true, "MR-07": true, "MR-08": true, "MR-09": true,
- "MR-10": true, "MR-11": true, "MR-12": true, "MR-13": true, "MR-NKC": true, "MT-01": true,
- "MT-02": true, "MT-03": true, "MT-04": true, "MT-05": true, "MT-06": true,
- "MT-07": true, "MT-08": true, "MT-09": true, "MT-10": true, "MT-11": true,
- "MT-12": true, "MT-13": true, "MT-14": true, "MT-15": true, "MT-16": true,
- "MT-17": true, "MT-18": true, "MT-19": true, "MT-20": true, "MT-21": true,
- "MT-22": true, "MT-23": true, "MT-24": true, "MT-25": true, "MT-26": true,
- "MT-27": true, "MT-28": true, "MT-29": true, "MT-30": true, "MT-31": true,
- "MT-32": true, "MT-33": true, "MT-34": true, "MT-35": true, "MT-36": true,
- "MT-37": true, "MT-38": true, "MT-39": true, "MT-40": true, "MT-41": true,
- "MT-42": true, "MT-43": true, "MT-44": true, "MT-45": true, "MT-46": true,
- "MT-47": true, "MT-48": true, "MT-49": true, "MT-50": true, "MT-51": true,
- "MT-52": true, "MT-53": true, "MT-54": true, "MT-55": true, "MT-56": true,
- "MT-57": true, "MT-58": true, "MT-59": true, "MT-60": true, "MT-61": true,
- "MT-62": true, "MT-63": true, "MT-64": true, "MT-65": true, "MT-66": true,
- "MT-67": true, "MT-68": true, "MU-AG": true, "MU-BL": true, "MU-BR": true,
- "MU-CC": true, "MU-CU": true, "MU-FL": true, "MU-GP": true, "MU-MO": true,
- "MU-PA": true, "MU-PL": true, "MU-PU": true, "MU-PW": true, "MU-QB": true,
- "MU-RO": true, "MU-RP": true, "MU-RR": true, "MU-SA": true, "MU-VP": true, "MV-00": true,
- "MV-01": true, "MV-02": true, "MV-03": true, "MV-04": true, "MV-05": true,
- "MV-07": true, "MV-08": true, "MV-12": true, "MV-13": true, "MV-14": true,
- "MV-17": true, "MV-20": true, "MV-23": true, "MV-24": true, "MV-25": true,
- "MV-26": true, "MV-27": true, "MV-28": true, "MV-29": true, "MV-CE": true,
- "MV-MLE": true, "MV-NC": true, "MV-NO": true, "MV-SC": true, "MV-SU": true,
- "MV-UN": true, "MV-US": true, "MW-BA": true, "MW-BL": true, "MW-C": true,
- "MW-CK": true, "MW-CR": true, "MW-CT": true, "MW-DE": true, "MW-DO": true,
- "MW-KR": true, "MW-KS": true, "MW-LI": true, "MW-LK": true, "MW-MC": true,
- "MW-MG": true, "MW-MH": true, "MW-MU": true, "MW-MW": true, "MW-MZ": true,
- "MW-N": true, "MW-NB": true, "MW-NE": true, "MW-NI": true, "MW-NK": true,
- "MW-NS": true, "MW-NU": true, "MW-PH": true, "MW-RU": true, "MW-S": true,
- "MW-SA": true, "MW-TH": true, "MW-ZO": true, "MX-AGU": true, "MX-BCN": true,
- "MX-BCS": true, "MX-CAM": true, "MX-CHH": true, "MX-CHP": true, "MX-COA": true,
- "MX-COL": true, "MX-CMX": true, "MX-DIF": true, "MX-DUR": true, "MX-GRO": true, "MX-GUA": true,
- "MX-HID": true, "MX-JAL": true, "MX-MEX": true, "MX-MIC": true, "MX-MOR": true,
- "MX-NAY": true, "MX-NLE": true, "MX-OAX": true, "MX-PUE": true, "MX-QUE": true,
- "MX-ROO": true, "MX-SIN": true, "MX-SLP": true, "MX-SON": true, "MX-TAB": true,
- "MX-TAM": true, "MX-TLA": true, "MX-VER": true, "MX-YUC": true, "MX-ZAC": true,
- "MY-01": true, "MY-02": true, "MY-03": true, "MY-04": true, "MY-05": true,
- "MY-06": true, "MY-07": true, "MY-08": true, "MY-09": true, "MY-10": true,
- "MY-11": true, "MY-12": true, "MY-13": true, "MY-14": true, "MY-15": true,
- "MY-16": true, "MZ-A": true, "MZ-B": true, "MZ-G": true, "MZ-I": true,
- "MZ-L": true, "MZ-MPM": true, "MZ-N": true, "MZ-P": true, "MZ-Q": true,
- "MZ-S": true, "MZ-T": true, "NA-CA": true, "NA-ER": true, "NA-HA": true,
- "NA-KA": true, "NA-KE": true, "NA-KH": true, "NA-KU": true, "NA-KW": true, "NA-OD": true, "NA-OH": true,
- "NA-OK": true, "NA-ON": true, "NA-OS": true, "NA-OT": true, "NA-OW": true,
- "NE-1": true, "NE-2": true, "NE-3": true, "NE-4": true, "NE-5": true,
- "NE-6": true, "NE-7": true, "NE-8": true, "NG-AB": true, "NG-AD": true,
- "NG-AK": true, "NG-AN": true, "NG-BA": true, "NG-BE": true, "NG-BO": true,
- "NG-BY": true, "NG-CR": true, "NG-DE": true, "NG-EB": true, "NG-ED": true,
- "NG-EK": true, "NG-EN": true, "NG-FC": true, "NG-GO": true, "NG-IM": true,
- "NG-JI": true, "NG-KD": true, "NG-KE": true, "NG-KN": true, "NG-KO": true,
- "NG-KT": true, "NG-KW": true, "NG-LA": true, "NG-NA": true, "NG-NI": true,
- "NG-OG": true, "NG-ON": true, "NG-OS": true, "NG-OY": true, "NG-PL": true,
- "NG-RI": true, "NG-SO": true, "NG-TA": true, "NG-YO": true, "NG-ZA": true,
- "NI-AN": true, "NI-AS": true, "NI-BO": true, "NI-CA": true, "NI-CI": true,
- "NI-CO": true, "NI-ES": true, "NI-GR": true, "NI-JI": true, "NI-LE": true,
- "NI-MD": true, "NI-MN": true, "NI-MS": true, "NI-MT": true, "NI-NS": true,
- "NI-RI": true, "NI-SJ": true, "NL-AW": true, "NL-BQ1": true, "NL-BQ2": true,
- "NL-BQ3": true, "NL-CW": true, "NL-DR": true, "NL-FL": true, "NL-FR": true,
- "NL-GE": true, "NL-GR": true, "NL-LI": true, "NL-NB": true, "NL-NH": true,
- "NL-OV": true, "NL-SX": true, "NL-UT": true, "NL-ZE": true, "NL-ZH": true,
- "NO-03": true, "NO-11": true, "NO-15": true, "NO-16": true, "NO-17": true,
- "NO-18": true, "NO-21": true, "NO-30": true, "NO-34": true, "NO-38": true,
- "NO-42": true, "NO-46": true, "NO-50": true, "NO-54": true,
- "NO-22": true, "NP-1": true, "NP-2": true, "NP-3": true, "NP-4": true,
- "NP-5": true, "NP-BA": true, "NP-BH": true, "NP-DH": true, "NP-GA": true,
- "NP-JA": true, "NP-KA": true, "NP-KO": true, "NP-LU": true, "NP-MA": true,
- "NP-ME": true, "NP-NA": true, "NP-RA": true, "NP-SA": true, "NP-SE": true,
- "NR-01": true, "NR-02": true, "NR-03": true, "NR-04": true, "NR-05": true,
- "NR-06": true, "NR-07": true, "NR-08": true, "NR-09": true, "NR-10": true,
- "NR-11": true, "NR-12": true, "NR-13": true, "NR-14": true, "NZ-AUK": true,
- "NZ-BOP": true, "NZ-CAN": true, "NZ-CIT": true, "NZ-GIS": true, "NZ-HKB": true,
- "NZ-MBH": true, "NZ-MWT": true, "NZ-N": true, "NZ-NSN": true, "NZ-NTL": true,
- "NZ-OTA": true, "NZ-S": true, "NZ-STL": true, "NZ-TAS": true, "NZ-TKI": true,
- "NZ-WGN": true, "NZ-WKO": true, "NZ-WTC": true, "OM-BA": true, "OM-BS": true, "OM-BU": true, "OM-BJ": true,
- "OM-DA": true, "OM-MA": true, "OM-MU": true, "OM-SH": true, "OM-SJ": true, "OM-SS": true, "OM-WU": true,
- "OM-ZA": true, "OM-ZU": true, "PA-1": true, "PA-2": true, "PA-3": true,
- "PA-4": true, "PA-5": true, "PA-6": true, "PA-7": true, "PA-8": true,
- "PA-9": true, "PA-EM": true, "PA-KY": true, "PA-NB": true, "PE-AMA": true,
- "PE-ANC": true, "PE-APU": true, "PE-ARE": true, "PE-AYA": true, "PE-CAJ": true,
- "PE-CAL": true, "PE-CUS": true, "PE-HUC": true, "PE-HUV": true, "PE-ICA": true,
- "PE-JUN": true, "PE-LAL": true, "PE-LAM": true, "PE-LIM": true, "PE-LMA": true,
- "PE-LOR": true, "PE-MDD": true, "PE-MOQ": true, "PE-PAS": true, "PE-PIU": true,
- "PE-PUN": true, "PE-SAM": true, "PE-TAC": true, "PE-TUM": true, "PE-UCA": true,
- "PG-CPK": true, "PG-CPM": true, "PG-EBR": true, "PG-EHG": true, "PG-EPW": true,
- "PG-ESW": true, "PG-GPK": true, "PG-MBA": true, "PG-MPL": true, "PG-MPM": true,
- "PG-MRL": true, "PG-NCD": true, "PG-NIK": true, "PG-NPP": true, "PG-NSB": true,
- "PG-SAN": true, "PG-SHM": true, "PG-WBK": true, "PG-WHM": true, "PG-WPD": true,
- "PH-00": true, "PH-01": true, "PH-02": true, "PH-03": true, "PH-05": true,
- "PH-06": true, "PH-07": true, "PH-08": true, "PH-09": true, "PH-10": true,
- "PH-11": true, "PH-12": true, "PH-13": true, "PH-14": true, "PH-15": true,
- "PH-40": true, "PH-41": true, "PH-ABR": true, "PH-AGN": true, "PH-AGS": true,
- "PH-AKL": true, "PH-ALB": true, "PH-ANT": true, "PH-APA": true, "PH-AUR": true,
- "PH-BAN": true, "PH-BAS": true, "PH-BEN": true, "PH-BIL": true, "PH-BOH": true,
- "PH-BTG": true, "PH-BTN": true, "PH-BUK": true, "PH-BUL": true, "PH-CAG": true,
- "PH-CAM": true, "PH-CAN": true, "PH-CAP": true, "PH-CAS": true, "PH-CAT": true,
- "PH-CAV": true, "PH-CEB": true, "PH-COM": true, "PH-DAO": true, "PH-DAS": true,
- "PH-DAV": true, "PH-DIN": true, "PH-EAS": true, "PH-GUI": true, "PH-IFU": true,
- "PH-ILI": true, "PH-ILN": true, "PH-ILS": true, "PH-ISA": true, "PH-KAL": true,
- "PH-LAG": true, "PH-LAN": true, "PH-LAS": true, "PH-LEY": true, "PH-LUN": true,
- "PH-MAD": true, "PH-MAG": true, "PH-MAS": true, "PH-MDC": true, "PH-MDR": true,
- "PH-MOU": true, "PH-MSC": true, "PH-MSR": true, "PH-NCO": true, "PH-NEC": true,
- "PH-NER": true, "PH-NSA": true, "PH-NUE": true, "PH-NUV": true, "PH-PAM": true,
- "PH-PAN": true, "PH-PLW": true, "PH-QUE": true, "PH-QUI": true, "PH-RIZ": true,
- "PH-ROM": true, "PH-SAR": true, "PH-SCO": true, "PH-SIG": true, "PH-SLE": true,
- "PH-SLU": true, "PH-SOR": true, "PH-SUK": true, "PH-SUN": true, "PH-SUR": true,
- "PH-TAR": true, "PH-TAW": true, "PH-WSA": true, "PH-ZAN": true, "PH-ZAS": true,
- "PH-ZMB": true, "PH-ZSI": true, "PK-BA": true, "PK-GB": true, "PK-IS": true,
- "PK-JK": true, "PK-KP": true, "PK-PB": true, "PK-SD": true, "PK-TA": true,
- "PL-02": true, "PL-04": true, "PL-06": true, "PL-08": true, "PL-10": true,
- "PL-12": true, "PL-14": true, "PL-16": true, "PL-18": true, "PL-20": true,
- "PL-22": true, "PL-24": true, "PL-26": true, "PL-28": true, "PL-30": true, "PL-32": true,
- "PS-BTH": true, "PS-DEB": true, "PS-GZA": true, "PS-HBN": true,
- "PS-JEM": true, "PS-JEN": true, "PS-JRH": true, "PS-KYS": true, "PS-NBS": true,
- "PS-NGZ": true, "PS-QQA": true, "PS-RBH": true, "PS-RFH": true, "PS-SLT": true,
- "PS-TBS": true, "PS-TKM": true, "PT-01": true, "PT-02": true, "PT-03": true,
- "PT-04": true, "PT-05": true, "PT-06": true, "PT-07": true, "PT-08": true,
- "PT-09": true, "PT-10": true, "PT-11": true, "PT-12": true, "PT-13": true,
- "PT-14": true, "PT-15": true, "PT-16": true, "PT-17": true, "PT-18": true,
- "PT-20": true, "PT-30": true, "PW-002": true, "PW-004": true, "PW-010": true,
- "PW-050": true, "PW-100": true, "PW-150": true, "PW-212": true, "PW-214": true,
- "PW-218": true, "PW-222": true, "PW-224": true, "PW-226": true, "PW-227": true,
- "PW-228": true, "PW-350": true, "PW-370": true, "PY-1": true, "PY-10": true,
- "PY-11": true, "PY-12": true, "PY-13": true, "PY-14": true, "PY-15": true,
- "PY-16": true, "PY-19": true, "PY-2": true, "PY-3": true, "PY-4": true,
- "PY-5": true, "PY-6": true, "PY-7": true, "PY-8": true, "PY-9": true,
- "PY-ASU": true, "QA-DA": true, "QA-KH": true, "QA-MS": true, "QA-RA": true,
- "QA-US": true, "QA-WA": true, "QA-ZA": true, "RO-AB": true, "RO-AG": true,
- "RO-AR": true, "RO-B": true, "RO-BC": true, "RO-BH": true, "RO-BN": true,
- "RO-BR": true, "RO-BT": true, "RO-BV": true, "RO-BZ": true, "RO-CJ": true,
- "RO-CL": true, "RO-CS": true, "RO-CT": true, "RO-CV": true, "RO-DB": true,
- "RO-DJ": true, "RO-GJ": true, "RO-GL": true, "RO-GR": true, "RO-HD": true,
- "RO-HR": true, "RO-IF": true, "RO-IL": true, "RO-IS": true, "RO-MH": true,
- "RO-MM": true, "RO-MS": true, "RO-NT": true, "RO-OT": true, "RO-PH": true,
- "RO-SB": true, "RO-SJ": true, "RO-SM": true, "RO-SV": true, "RO-TL": true,
- "RO-TM": true, "RO-TR": true, "RO-VL": true, "RO-VN": true, "RO-VS": true,
- "RS-00": true, "RS-01": true, "RS-02": true, "RS-03": true, "RS-04": true,
- "RS-05": true, "RS-06": true, "RS-07": true, "RS-08": true, "RS-09": true,
- "RS-10": true, "RS-11": true, "RS-12": true, "RS-13": true, "RS-14": true,
- "RS-15": true, "RS-16": true, "RS-17": true, "RS-18": true, "RS-19": true,
- "RS-20": true, "RS-21": true, "RS-22": true, "RS-23": true, "RS-24": true,
- "RS-25": true, "RS-26": true, "RS-27": true, "RS-28": true, "RS-29": true,
- "RS-KM": true, "RS-VO": true, "RU-AD": true, "RU-AL": true, "RU-ALT": true,
- "RU-AMU": true, "RU-ARK": true, "RU-AST": true, "RU-BA": true, "RU-BEL": true,
- "RU-BRY": true, "RU-BU": true, "RU-CE": true, "RU-CHE": true, "RU-CHU": true,
- "RU-CU": true, "RU-DA": true, "RU-IN": true, "RU-IRK": true, "RU-IVA": true,
- "RU-KAM": true, "RU-KB": true, "RU-KC": true, "RU-KDA": true, "RU-KEM": true,
- "RU-KGD": true, "RU-KGN": true, "RU-KHA": true, "RU-KHM": true, "RU-KIR": true,
- "RU-KK": true, "RU-KL": true, "RU-KLU": true, "RU-KO": true, "RU-KOS": true,
- "RU-KR": true, "RU-KRS": true, "RU-KYA": true, "RU-LEN": true, "RU-LIP": true,
- "RU-MAG": true, "RU-ME": true, "RU-MO": true, "RU-MOS": true, "RU-MOW": true,
- "RU-MUR": true, "RU-NEN": true, "RU-NGR": true, "RU-NIZ": true, "RU-NVS": true,
- "RU-OMS": true, "RU-ORE": true, "RU-ORL": true, "RU-PER": true, "RU-PNZ": true,
- "RU-PRI": true, "RU-PSK": true, "RU-ROS": true, "RU-RYA": true, "RU-SA": true,
- "RU-SAK": true, "RU-SAM": true, "RU-SAR": true, "RU-SE": true, "RU-SMO": true,
- "RU-SPE": true, "RU-STA": true, "RU-SVE": true, "RU-TA": true, "RU-TAM": true,
- "RU-TOM": true, "RU-TUL": true, "RU-TVE": true, "RU-TY": true, "RU-TYU": true,
- "RU-UD": true, "RU-ULY": true, "RU-VGG": true, "RU-VLA": true, "RU-VLG": true,
- "RU-VOR": true, "RU-YAN": true, "RU-YAR": true, "RU-YEV": true, "RU-ZAB": true,
- "RW-01": true, "RW-02": true, "RW-03": true, "RW-04": true, "RW-05": true,
- "SA-01": true, "SA-02": true, "SA-03": true, "SA-04": true, "SA-05": true,
- "SA-06": true, "SA-07": true, "SA-08": true, "SA-09": true, "SA-10": true,
- "SA-11": true, "SA-12": true, "SA-14": true, "SB-CE": true, "SB-CH": true,
- "SB-CT": true, "SB-GU": true, "SB-IS": true, "SB-MK": true, "SB-ML": true,
- "SB-RB": true, "SB-TE": true, "SB-WE": true, "SC-01": true, "SC-02": true,
- "SC-03": true, "SC-04": true, "SC-05": true, "SC-06": true, "SC-07": true,
- "SC-08": true, "SC-09": true, "SC-10": true, "SC-11": true, "SC-12": true,
- "SC-13": true, "SC-14": true, "SC-15": true, "SC-16": true, "SC-17": true,
- "SC-18": true, "SC-19": true, "SC-20": true, "SC-21": true, "SC-22": true,
- "SC-23": true, "SC-24": true, "SC-25": true, "SD-DC": true, "SD-DE": true,
- "SD-DN": true, "SD-DS": true, "SD-DW": true, "SD-GD": true, "SD-GK": true, "SD-GZ": true,
- "SD-KA": true, "SD-KH": true, "SD-KN": true, "SD-KS": true, "SD-NB": true,
- "SD-NO": true, "SD-NR": true, "SD-NW": true, "SD-RS": true, "SD-SI": true,
- "SE-AB": true, "SE-AC": true, "SE-BD": true, "SE-C": true, "SE-D": true,
- "SE-E": true, "SE-F": true, "SE-G": true, "SE-H": true, "SE-I": true,
- "SE-K": true, "SE-M": true, "SE-N": true, "SE-O": true, "SE-S": true,
- "SE-T": true, "SE-U": true, "SE-W": true, "SE-X": true, "SE-Y": true,
- "SE-Z": true, "SG-01": true, "SG-02": true, "SG-03": true, "SG-04": true,
- "SG-05": true, "SH-AC": true, "SH-HL": true, "SH-TA": true, "SI-001": true,
- "SI-002": true, "SI-003": true, "SI-004": true, "SI-005": true, "SI-006": true,
- "SI-007": true, "SI-008": true, "SI-009": true, "SI-010": true, "SI-011": true,
- "SI-012": true, "SI-013": true, "SI-014": true, "SI-015": true, "SI-016": true,
- "SI-017": true, "SI-018": true, "SI-019": true, "SI-020": true, "SI-021": true,
- "SI-022": true, "SI-023": true, "SI-024": true, "SI-025": true, "SI-026": true,
- "SI-027": true, "SI-028": true, "SI-029": true, "SI-030": true, "SI-031": true,
- "SI-032": true, "SI-033": true, "SI-034": true, "SI-035": true, "SI-036": true,
- "SI-037": true, "SI-038": true, "SI-039": true, "SI-040": true, "SI-041": true,
- "SI-042": true, "SI-043": true, "SI-044": true, "SI-045": true, "SI-046": true,
- "SI-047": true, "SI-048": true, "SI-049": true, "SI-050": true, "SI-051": true,
- "SI-052": true, "SI-053": true, "SI-054": true, "SI-055": true, "SI-056": true,
- "SI-057": true, "SI-058": true, "SI-059": true, "SI-060": true, "SI-061": true,
- "SI-062": true, "SI-063": true, "SI-064": true, "SI-065": true, "SI-066": true,
- "SI-067": true, "SI-068": true, "SI-069": true, "SI-070": true, "SI-071": true,
- "SI-072": true, "SI-073": true, "SI-074": true, "SI-075": true, "SI-076": true,
- "SI-077": true, "SI-078": true, "SI-079": true, "SI-080": true, "SI-081": true,
- "SI-082": true, "SI-083": true, "SI-084": true, "SI-085": true, "SI-086": true,
- "SI-087": true, "SI-088": true, "SI-089": true, "SI-090": true, "SI-091": true,
- "SI-092": true, "SI-093": true, "SI-094": true, "SI-095": true, "SI-096": true,
- "SI-097": true, "SI-098": true, "SI-099": true, "SI-100": true, "SI-101": true,
- "SI-102": true, "SI-103": true, "SI-104": true, "SI-105": true, "SI-106": true,
- "SI-107": true, "SI-108": true, "SI-109": true, "SI-110": true, "SI-111": true,
- "SI-112": true, "SI-113": true, "SI-114": true, "SI-115": true, "SI-116": true,
- "SI-117": true, "SI-118": true, "SI-119": true, "SI-120": true, "SI-121": true,
- "SI-122": true, "SI-123": true, "SI-124": true, "SI-125": true, "SI-126": true,
- "SI-127": true, "SI-128": true, "SI-129": true, "SI-130": true, "SI-131": true,
- "SI-132": true, "SI-133": true, "SI-134": true, "SI-135": true, "SI-136": true,
- "SI-137": true, "SI-138": true, "SI-139": true, "SI-140": true, "SI-141": true,
- "SI-142": true, "SI-143": true, "SI-144": true, "SI-146": true, "SI-147": true,
- "SI-148": true, "SI-149": true, "SI-150": true, "SI-151": true, "SI-152": true,
- "SI-153": true, "SI-154": true, "SI-155": true, "SI-156": true, "SI-157": true,
- "SI-158": true, "SI-159": true, "SI-160": true, "SI-161": true, "SI-162": true,
- "SI-163": true, "SI-164": true, "SI-165": true, "SI-166": true, "SI-167": true,
- "SI-168": true, "SI-169": true, "SI-170": true, "SI-171": true, "SI-172": true,
- "SI-173": true, "SI-174": true, "SI-175": true, "SI-176": true, "SI-177": true,
- "SI-178": true, "SI-179": true, "SI-180": true, "SI-181": true, "SI-182": true,
- "SI-183": true, "SI-184": true, "SI-185": true, "SI-186": true, "SI-187": true,
- "SI-188": true, "SI-189": true, "SI-190": true, "SI-191": true, "SI-192": true,
- "SI-193": true, "SI-194": true, "SI-195": true, "SI-196": true, "SI-197": true,
- "SI-198": true, "SI-199": true, "SI-200": true, "SI-201": true, "SI-202": true,
- "SI-203": true, "SI-204": true, "SI-205": true, "SI-206": true, "SI-207": true,
- "SI-208": true, "SI-209": true, "SI-210": true, "SI-211": true, "SI-212": true, "SI-213": true, "SK-BC": true,
- "SK-BL": true, "SK-KI": true, "SK-NI": true, "SK-PV": true, "SK-TA": true,
- "SK-TC": true, "SK-ZI": true, "SL-E": true, "SL-N": true, "SL-S": true,
- "SL-W": true, "SM-01": true, "SM-02": true, "SM-03": true, "SM-04": true,
- "SM-05": true, "SM-06": true, "SM-07": true, "SM-08": true, "SM-09": true,
- "SN-DB": true, "SN-DK": true, "SN-FK": true, "SN-KA": true, "SN-KD": true,
- "SN-KE": true, "SN-KL": true, "SN-LG": true, "SN-MT": true, "SN-SE": true,
- "SN-SL": true, "SN-TC": true, "SN-TH": true, "SN-ZG": true, "SO-AW": true,
- "SO-BK": true, "SO-BN": true, "SO-BR": true, "SO-BY": true, "SO-GA": true,
- "SO-GE": true, "SO-HI": true, "SO-JD": true, "SO-JH": true, "SO-MU": true,
- "SO-NU": true, "SO-SA": true, "SO-SD": true, "SO-SH": true, "SO-SO": true,
- "SO-TO": true, "SO-WO": true, "SR-BR": true, "SR-CM": true, "SR-CR": true,
- "SR-MA": true, "SR-NI": true, "SR-PM": true, "SR-PR": true, "SR-SA": true,
- "SR-SI": true, "SR-WA": true, "SS-BN": true, "SS-BW": true, "SS-EC": true,
- "SS-EE8": true, "SS-EE": true, "SS-EW": true, "SS-JG": true, "SS-LK": true, "SS-NU": true,
- "SS-UY": true, "SS-WR": true, "ST-01": true, "ST-P": true, "ST-S": true, "SV-AH": true,
- "SV-CA": true, "SV-CH": true, "SV-CU": true, "SV-LI": true, "SV-MO": true,
- "SV-PA": true, "SV-SA": true, "SV-SM": true, "SV-SO": true, "SV-SS": true,
- "SV-SV": true, "SV-UN": true, "SV-US": true, "SY-DI": true, "SY-DR": true,
- "SY-DY": true, "SY-HA": true, "SY-HI": true, "SY-HL": true, "SY-HM": true,
- "SY-ID": true, "SY-LA": true, "SY-QU": true, "SY-RA": true, "SY-RD": true,
- "SY-SU": true, "SY-TA": true, "SZ-HH": true, "SZ-LU": true, "SZ-MA": true,
- "SZ-SH": true, "TD-BA": true, "TD-BG": true, "TD-BO": true, "TD-CB": true,
- "TD-EN": true, "TD-GR": true, "TD-HL": true, "TD-KA": true, "TD-LC": true,
- "TD-LO": true, "TD-LR": true, "TD-MA": true, "TD-MC": true, "TD-ME": true,
- "TD-MO": true, "TD-ND": true, "TD-OD": true, "TD-SA": true, "TD-SI": true,
- "TD-TA": true, "TD-TI": true, "TD-WF": true, "TG-C": true, "TG-K": true,
- "TG-M": true, "TG-P": true, "TG-S": true, "TH-10": true, "TH-11": true,
- "TH-12": true, "TH-13": true, "TH-14": true, "TH-15": true, "TH-16": true,
- "TH-17": true, "TH-18": true, "TH-19": true, "TH-20": true, "TH-21": true,
- "TH-22": true, "TH-23": true, "TH-24": true, "TH-25": true, "TH-26": true,
- "TH-27": true, "TH-30": true, "TH-31": true, "TH-32": true, "TH-33": true,
- "TH-34": true, "TH-35": true, "TH-36": true, "TH-37": true, "TH-38": true, "TH-39": true,
- "TH-40": true, "TH-41": true, "TH-42": true, "TH-43": true, "TH-44": true,
- "TH-45": true, "TH-46": true, "TH-47": true, "TH-48": true, "TH-49": true,
- "TH-50": true, "TH-51": true, "TH-52": true, "TH-53": true, "TH-54": true,
- "TH-55": true, "TH-56": true, "TH-57": true, "TH-58": true, "TH-60": true,
- "TH-61": true, "TH-62": true, "TH-63": true, "TH-64": true, "TH-65": true,
- "TH-66": true, "TH-67": true, "TH-70": true, "TH-71": true, "TH-72": true,
- "TH-73": true, "TH-74": true, "TH-75": true, "TH-76": true, "TH-77": true,
- "TH-80": true, "TH-81": true, "TH-82": true, "TH-83": true, "TH-84": true,
- "TH-85": true, "TH-86": true, "TH-90": true, "TH-91": true, "TH-92": true,
- "TH-93": true, "TH-94": true, "TH-95": true, "TH-96": true, "TH-S": true,
- "TJ-GB": true, "TJ-KT": true, "TJ-SU": true, "TJ-DU": true, "TJ-RA": true, "TL-AL": true, "TL-AN": true,
- "TL-BA": true, "TL-BO": true, "TL-CO": true, "TL-DI": true, "TL-ER": true,
- "TL-LA": true, "TL-LI": true, "TL-MF": true, "TL-MT": true, "TL-OE": true,
- "TL-VI": true, "TM-A": true, "TM-B": true, "TM-D": true, "TM-L": true,
- "TM-M": true, "TM-S": true, "TN-11": true, "TN-12": true, "TN-13": true,
- "TN-14": true, "TN-21": true, "TN-22": true, "TN-23": true, "TN-31": true,
- "TN-32": true, "TN-33": true, "TN-34": true, "TN-41": true, "TN-42": true,
- "TN-43": true, "TN-51": true, "TN-52": true, "TN-53": true, "TN-61": true,
- "TN-71": true, "TN-72": true, "TN-73": true, "TN-81": true, "TN-82": true,
- "TN-83": true, "TO-01": true, "TO-02": true, "TO-03": true, "TO-04": true,
- "TO-05": true, "TR-01": true, "TR-02": true, "TR-03": true, "TR-04": true,
- "TR-05": true, "TR-06": true, "TR-07": true, "TR-08": true, "TR-09": true,
- "TR-10": true, "TR-11": true, "TR-12": true, "TR-13": true, "TR-14": true,
- "TR-15": true, "TR-16": true, "TR-17": true, "TR-18": true, "TR-19": true,
- "TR-20": true, "TR-21": true, "TR-22": true, "TR-23": true, "TR-24": true,
- "TR-25": true, "TR-26": true, "TR-27": true, "TR-28": true, "TR-29": true,
- "TR-30": true, "TR-31": true, "TR-32": true, "TR-33": true, "TR-34": true,
- "TR-35": true, "TR-36": true, "TR-37": true, "TR-38": true, "TR-39": true,
- "TR-40": true, "TR-41": true, "TR-42": true, "TR-43": true, "TR-44": true,
- "TR-45": true, "TR-46": true, "TR-47": true, "TR-48": true, "TR-49": true,
- "TR-50": true, "TR-51": true, "TR-52": true, "TR-53": true, "TR-54": true,
- "TR-55": true, "TR-56": true, "TR-57": true, "TR-58": true, "TR-59": true,
- "TR-60": true, "TR-61": true, "TR-62": true, "TR-63": true, "TR-64": true,
- "TR-65": true, "TR-66": true, "TR-67": true, "TR-68": true, "TR-69": true,
- "TR-70": true, "TR-71": true, "TR-72": true, "TR-73": true, "TR-74": true,
- "TR-75": true, "TR-76": true, "TR-77": true, "TR-78": true, "TR-79": true,
- "TR-80": true, "TR-81": true, "TT-ARI": true, "TT-CHA": true, "TT-CTT": true,
- "TT-DMN": true, "TT-ETO": true, "TT-MRC": true, "TT-TOB": true, "TT-PED": true, "TT-POS": true, "TT-PRT": true,
- "TT-PTF": true, "TT-RCM": true, "TT-SFO": true, "TT-SGE": true, "TT-SIP": true,
- "TT-SJL": true, "TT-TUP": true, "TT-WTO": true, "TV-FUN": true, "TV-NIT": true,
- "TV-NKF": true, "TV-NKL": true, "TV-NMA": true, "TV-NMG": true, "TV-NUI": true,
- "TV-VAI": true, "TW-CHA": true, "TW-CYI": true, "TW-CYQ": true, "TW-KIN": true, "TW-HSQ": true,
- "TW-HSZ": true, "TW-HUA": true, "TW-LIE": true, "TW-ILA": true, "TW-KEE": true, "TW-KHH": true,
- "TW-KHQ": true, "TW-MIA": true, "TW-NAN": true, "TW-NWT": true, "TW-PEN": true, "TW-PIF": true,
- "TW-TAO": true, "TW-TNN": true, "TW-TNQ": true, "TW-TPE": true, "TW-TPQ": true,
- "TW-TTT": true, "TW-TXG": true, "TW-TXQ": true, "TW-YUN": true, "TZ-01": true,
- "TZ-02": true, "TZ-03": true, "TZ-04": true, "TZ-05": true, "TZ-06": true,
- "TZ-07": true, "TZ-08": true, "TZ-09": true, "TZ-10": true, "TZ-11": true,
- "TZ-12": true, "TZ-13": true, "TZ-14": true, "TZ-15": true, "TZ-16": true,
- "TZ-17": true, "TZ-18": true, "TZ-19": true, "TZ-20": true, "TZ-21": true,
- "TZ-22": true, "TZ-23": true, "TZ-24": true, "TZ-25": true, "TZ-26": true, "TZ-27": true, "TZ-28": true, "TZ-29": true, "TZ-30": true, "TZ-31": true,
- "UA-05": true, "UA-07": true, "UA-09": true, "UA-12": true, "UA-14": true,
- "UA-18": true, "UA-21": true, "UA-23": true, "UA-26": true, "UA-30": true,
- "UA-32": true, "UA-35": true, "UA-40": true, "UA-43": true, "UA-46": true,
- "UA-48": true, "UA-51": true, "UA-53": true, "UA-56": true, "UA-59": true,
- "UA-61": true, "UA-63": true, "UA-65": true, "UA-68": true, "UA-71": true,
- "UA-74": true, "UA-77": true, "UG-101": true, "UG-102": true, "UG-103": true,
- "UG-104": true, "UG-105": true, "UG-106": true, "UG-107": true, "UG-108": true,
- "UG-109": true, "UG-110": true, "UG-111": true, "UG-112": true, "UG-113": true,
- "UG-114": true, "UG-115": true, "UG-116": true, "UG-201": true, "UG-202": true,
- "UG-203": true, "UG-204": true, "UG-205": true, "UG-206": true, "UG-207": true,
- "UG-208": true, "UG-209": true, "UG-210": true, "UG-211": true, "UG-212": true,
- "UG-213": true, "UG-214": true, "UG-215": true, "UG-216": true, "UG-217": true,
- "UG-218": true, "UG-219": true, "UG-220": true, "UG-221": true, "UG-222": true,
- "UG-223": true, "UG-224": true, "UG-301": true, "UG-302": true, "UG-303": true,
- "UG-304": true, "UG-305": true, "UG-306": true, "UG-307": true, "UG-308": true,
- "UG-309": true, "UG-310": true, "UG-311": true, "UG-312": true, "UG-313": true,
- "UG-314": true, "UG-315": true, "UG-316": true, "UG-317": true, "UG-318": true,
- "UG-319": true, "UG-320": true, "UG-321": true, "UG-401": true, "UG-402": true,
- "UG-403": true, "UG-404": true, "UG-405": true, "UG-406": true, "UG-407": true,
- "UG-408": true, "UG-409": true, "UG-410": true, "UG-411": true, "UG-412": true,
- "UG-413": true, "UG-414": true, "UG-415": true, "UG-416": true, "UG-417": true,
- "UG-418": true, "UG-419": true, "UG-C": true, "UG-E": true, "UG-N": true,
- "UG-W": true, "UG-322": true, "UG-323": true, "UG-420": true, "UG-117": true,
- "UG-118": true, "UG-225": true, "UG-120": true, "UG-226": true,
- "UG-121": true, "UG-122": true, "UG-227": true, "UG-421": true,
- "UG-325": true, "UG-228": true, "UG-123": true, "UG-422": true,
- "UG-326": true, "UG-229": true, "UG-124": true, "UG-423": true,
- "UG-230": true, "UG-327": true, "UG-424": true, "UG-328": true,
- "UG-425": true, "UG-426": true, "UG-330": true,
- "UM-67": true, "UM-71": true, "UM-76": true, "UM-79": true,
- "UM-81": true, "UM-84": true, "UM-86": true, "UM-89": true, "UM-95": true,
- "US-AK": true, "US-AL": true, "US-AR": true, "US-AS": true, "US-AZ": true,
- "US-CA": true, "US-CO": true, "US-CT": true, "US-DC": true, "US-DE": true,
- "US-FL": true, "US-GA": true, "US-GU": true, "US-HI": true, "US-IA": true,
- "US-ID": true, "US-IL": true, "US-IN": true, "US-KS": true, "US-KY": true,
- "US-LA": true, "US-MA": true, "US-MD": true, "US-ME": true, "US-MI": true,
- "US-MN": true, "US-MO": true, "US-MP": true, "US-MS": true, "US-MT": true,
- "US-NC": true, "US-ND": true, "US-NE": true, "US-NH": true, "US-NJ": true,
- "US-NM": true, "US-NV": true, "US-NY": true, "US-OH": true, "US-OK": true,
- "US-OR": true, "US-PA": true, "US-PR": true, "US-RI": true, "US-SC": true,
- "US-SD": true, "US-TN": true, "US-TX": true, "US-UM": true, "US-UT": true,
- "US-VA": true, "US-VI": true, "US-VT": true, "US-WA": true, "US-WI": true,
- "US-WV": true, "US-WY": true, "UY-AR": true, "UY-CA": true, "UY-CL": true,
- "UY-CO": true, "UY-DU": true, "UY-FD": true, "UY-FS": true, "UY-LA": true,
- "UY-MA": true, "UY-MO": true, "UY-PA": true, "UY-RN": true, "UY-RO": true,
- "UY-RV": true, "UY-SA": true, "UY-SJ": true, "UY-SO": true, "UY-TA": true,
- "UY-TT": true, "UZ-AN": true, "UZ-BU": true, "UZ-FA": true, "UZ-JI": true,
- "UZ-NG": true, "UZ-NW": true, "UZ-QA": true, "UZ-QR": true, "UZ-SA": true,
- "UZ-SI": true, "UZ-SU": true, "UZ-TK": true, "UZ-TO": true, "UZ-XO": true,
- "VC-01": true, "VC-02": true, "VC-03": true, "VC-04": true, "VC-05": true,
- "VC-06": true, "VE-A": true, "VE-B": true, "VE-C": true, "VE-D": true,
- "VE-E": true, "VE-F": true, "VE-G": true, "VE-H": true, "VE-I": true,
- "VE-J": true, "VE-K": true, "VE-L": true, "VE-M": true, "VE-N": true,
- "VE-O": true, "VE-P": true, "VE-R": true, "VE-S": true, "VE-T": true,
- "VE-U": true, "VE-V": true, "VE-W": true, "VE-X": true, "VE-Y": true,
- "VE-Z": true, "VN-01": true, "VN-02": true, "VN-03": true, "VN-04": true,
- "VN-05": true, "VN-06": true, "VN-07": true, "VN-09": true, "VN-13": true,
- "VN-14": true, "VN-15": true, "VN-18": true, "VN-20": true, "VN-21": true,
- "VN-22": true, "VN-23": true, "VN-24": true, "VN-25": true, "VN-26": true,
- "VN-27": true, "VN-28": true, "VN-29": true, "VN-30": true, "VN-31": true,
- "VN-32": true, "VN-33": true, "VN-34": true, "VN-35": true, "VN-36": true,
- "VN-37": true, "VN-39": true, "VN-40": true, "VN-41": true, "VN-43": true,
- "VN-44": true, "VN-45": true, "VN-46": true, "VN-47": true, "VN-49": true,
- "VN-50": true, "VN-51": true, "VN-52": true, "VN-53": true, "VN-54": true,
- "VN-55": true, "VN-56": true, "VN-57": true, "VN-58": true, "VN-59": true,
- "VN-61": true, "VN-63": true, "VN-66": true, "VN-67": true, "VN-68": true,
- "VN-69": true, "VN-70": true, "VN-71": true, "VN-72": true, "VN-73": true,
- "VN-CT": true, "VN-DN": true, "VN-HN": true, "VN-HP": true, "VN-SG": true,
- "VU-MAP": true, "VU-PAM": true, "VU-SAM": true, "VU-SEE": true, "VU-TAE": true,
- "VU-TOB": true, "WF-SG": true, "WF-UV": true, "WS-AA": true, "WS-AL": true, "WS-AT": true, "WS-FA": true,
- "WS-GE": true, "WS-GI": true, "WS-PA": true, "WS-SA": true, "WS-TU": true,
- "WS-VF": true, "WS-VS": true, "YE-AB": true, "YE-AD": true, "YE-AM": true,
- "YE-BA": true, "YE-DA": true, "YE-DH": true, "YE-HD": true, "YE-HJ": true, "YE-HU": true,
- "YE-IB": true, "YE-JA": true, "YE-LA": true, "YE-MA": true, "YE-MR": true,
- "YE-MU": true, "YE-MW": true, "YE-RA": true, "YE-SA": true, "YE-SD": true, "YE-SH": true,
- "YE-SN": true, "YE-TA": true, "ZA-EC": true, "ZA-FS": true, "ZA-GP": true,
- "ZA-LP": true, "ZA-MP": true, "ZA-NC": true, "ZA-NW": true, "ZA-WC": true,
- "ZA-ZN": true, "ZA-KZN": true, "ZM-01": true, "ZM-02": true, "ZM-03": true, "ZM-04": true,
- "ZM-05": true, "ZM-06": true, "ZM-07": true, "ZM-08": true, "ZM-09": true, "ZM-10": true,
- "ZW-BU": true, "ZW-HA": true, "ZW-MA": true, "ZW-MC": true, "ZW-ME": true,
- "ZW-MI": true, "ZW-MN": true, "ZW-MS": true, "ZW-MV": true, "ZW-MW": true,
+var iso3166_2 = map[string]struct{}{
+ "AD-02": {}, "AD-03": {}, "AD-04": {}, "AD-05": {}, "AD-06": {},
+ "AD-07": {}, "AD-08": {}, "AE-AJ": {}, "AE-AZ": {}, "AE-DU": {},
+ "AE-FU": {}, "AE-RK": {}, "AE-SH": {}, "AE-UQ": {}, "AF-BAL": {},
+ "AF-BAM": {}, "AF-BDG": {}, "AF-BDS": {}, "AF-BGL": {}, "AF-DAY": {},
+ "AF-FRA": {}, "AF-FYB": {}, "AF-GHA": {}, "AF-GHO": {}, "AF-HEL": {},
+ "AF-HER": {}, "AF-JOW": {}, "AF-KAB": {}, "AF-KAN": {}, "AF-KAP": {},
+ "AF-KDZ": {}, "AF-KHO": {}, "AF-KNR": {}, "AF-LAG": {}, "AF-LOG": {},
+ "AF-NAN": {}, "AF-NIM": {}, "AF-NUR": {}, "AF-PAN": {}, "AF-PAR": {},
+ "AF-PIA": {}, "AF-PKA": {}, "AF-SAM": {}, "AF-SAR": {}, "AF-TAK": {},
+ "AF-URU": {}, "AF-WAR": {}, "AF-ZAB": {}, "AG-03": {}, "AG-04": {},
+ "AG-05": {}, "AG-06": {}, "AG-07": {}, "AG-08": {}, "AG-10": {},
+ "AG-11": {}, "AL-01": {}, "AL-02": {}, "AL-03": {}, "AL-04": {},
+ "AL-05": {}, "AL-06": {}, "AL-07": {}, "AL-08": {}, "AL-09": {},
+ "AL-10": {}, "AL-11": {}, "AL-12": {}, "AL-BR": {}, "AL-BU": {},
+ "AL-DI": {}, "AL-DL": {}, "AL-DR": {}, "AL-DV": {}, "AL-EL": {},
+ "AL-ER": {}, "AL-FR": {}, "AL-GJ": {}, "AL-GR": {}, "AL-HA": {},
+ "AL-KA": {}, "AL-KB": {}, "AL-KC": {}, "AL-KO": {}, "AL-KR": {},
+ "AL-KU": {}, "AL-LB": {}, "AL-LE": {}, "AL-LU": {}, "AL-MK": {},
+ "AL-MM": {}, "AL-MR": {}, "AL-MT": {}, "AL-PG": {}, "AL-PQ": {},
+ "AL-PR": {}, "AL-PU": {}, "AL-SH": {}, "AL-SK": {}, "AL-SR": {},
+ "AL-TE": {}, "AL-TP": {}, "AL-TR": {}, "AL-VL": {}, "AM-AG": {},
+ "AM-AR": {}, "AM-AV": {}, "AM-ER": {}, "AM-GR": {}, "AM-KT": {},
+ "AM-LO": {}, "AM-SH": {}, "AM-SU": {}, "AM-TV": {}, "AM-VD": {},
+ "AO-BGO": {}, "AO-BGU": {}, "AO-BIE": {}, "AO-CAB": {}, "AO-CCU": {},
+ "AO-CNN": {}, "AO-CNO": {}, "AO-CUS": {}, "AO-HUA": {}, "AO-HUI": {},
+ "AO-LNO": {}, "AO-LSU": {}, "AO-LUA": {}, "AO-MAL": {}, "AO-MOX": {},
+ "AO-NAM": {}, "AO-UIG": {}, "AO-ZAI": {}, "AR-A": {}, "AR-B": {},
+ "AR-C": {}, "AR-D": {}, "AR-E": {}, "AR-F": {}, "AR-G": {}, "AR-H": {},
+ "AR-J": {}, "AR-K": {}, "AR-L": {}, "AR-M": {}, "AR-N": {},
+ "AR-P": {}, "AR-Q": {}, "AR-R": {}, "AR-S": {}, "AR-T": {},
+ "AR-U": {}, "AR-V": {}, "AR-W": {}, "AR-X": {}, "AR-Y": {},
+ "AR-Z": {}, "AT-1": {}, "AT-2": {}, "AT-3": {}, "AT-4": {},
+ "AT-5": {}, "AT-6": {}, "AT-7": {}, "AT-8": {}, "AT-9": {},
+ "AU-ACT": {}, "AU-NSW": {}, "AU-NT": {}, "AU-QLD": {}, "AU-SA": {},
+ "AU-TAS": {}, "AU-VIC": {}, "AU-WA": {}, "AZ-ABS": {}, "AZ-AGA": {},
+ "AZ-AGC": {}, "AZ-AGM": {}, "AZ-AGS": {}, "AZ-AGU": {}, "AZ-AST": {},
+ "AZ-BA": {}, "AZ-BAB": {}, "AZ-BAL": {}, "AZ-BAR": {}, "AZ-BEY": {},
+ "AZ-BIL": {}, "AZ-CAB": {}, "AZ-CAL": {}, "AZ-CUL": {}, "AZ-DAS": {},
+ "AZ-FUZ": {}, "AZ-GA": {}, "AZ-GAD": {}, "AZ-GOR": {}, "AZ-GOY": {},
+ "AZ-GYG": {}, "AZ-HAC": {}, "AZ-IMI": {}, "AZ-ISM": {}, "AZ-KAL": {},
+ "AZ-KAN": {}, "AZ-KUR": {}, "AZ-LA": {}, "AZ-LAC": {}, "AZ-LAN": {},
+ "AZ-LER": {}, "AZ-MAS": {}, "AZ-MI": {}, "AZ-NA": {}, "AZ-NEF": {},
+ "AZ-NV": {}, "AZ-NX": {}, "AZ-OGU": {}, "AZ-ORD": {}, "AZ-QAB": {},
+ "AZ-QAX": {}, "AZ-QAZ": {}, "AZ-QBA": {}, "AZ-QBI": {}, "AZ-QOB": {},
+ "AZ-QUS": {}, "AZ-SA": {}, "AZ-SAB": {}, "AZ-SAD": {}, "AZ-SAH": {},
+ "AZ-SAK": {}, "AZ-SAL": {}, "AZ-SAR": {}, "AZ-SAT": {}, "AZ-SBN": {},
+ "AZ-SIY": {}, "AZ-SKR": {}, "AZ-SM": {}, "AZ-SMI": {}, "AZ-SMX": {},
+ "AZ-SR": {}, "AZ-SUS": {}, "AZ-TAR": {}, "AZ-TOV": {}, "AZ-UCA": {},
+ "AZ-XA": {}, "AZ-XAC": {}, "AZ-XCI": {}, "AZ-XIZ": {}, "AZ-XVD": {},
+ "AZ-YAR": {}, "AZ-YE": {}, "AZ-YEV": {}, "AZ-ZAN": {}, "AZ-ZAQ": {},
+ "AZ-ZAR": {}, "BA-01": {}, "BA-02": {}, "BA-03": {}, "BA-04": {},
+ "BA-05": {}, "BA-06": {}, "BA-07": {}, "BA-08": {}, "BA-09": {},
+ "BA-10": {}, "BA-BIH": {}, "BA-BRC": {}, "BA-SRP": {}, "BB-01": {},
+ "BB-02": {}, "BB-03": {}, "BB-04": {}, "BB-05": {}, "BB-06": {},
+ "BB-07": {}, "BB-08": {}, "BB-09": {}, "BB-10": {}, "BB-11": {},
+ "BD-01": {}, "BD-02": {}, "BD-03": {}, "BD-04": {}, "BD-05": {},
+ "BD-06": {}, "BD-07": {}, "BD-08": {}, "BD-09": {}, "BD-10": {},
+ "BD-11": {}, "BD-12": {}, "BD-13": {}, "BD-14": {}, "BD-15": {},
+ "BD-16": {}, "BD-17": {}, "BD-18": {}, "BD-19": {}, "BD-20": {},
+ "BD-21": {}, "BD-22": {}, "BD-23": {}, "BD-24": {}, "BD-25": {},
+ "BD-26": {}, "BD-27": {}, "BD-28": {}, "BD-29": {}, "BD-30": {},
+ "BD-31": {}, "BD-32": {}, "BD-33": {}, "BD-34": {}, "BD-35": {},
+ "BD-36": {}, "BD-37": {}, "BD-38": {}, "BD-39": {}, "BD-40": {},
+ "BD-41": {}, "BD-42": {}, "BD-43": {}, "BD-44": {}, "BD-45": {},
+ "BD-46": {}, "BD-47": {}, "BD-48": {}, "BD-49": {}, "BD-50": {},
+ "BD-51": {}, "BD-52": {}, "BD-53": {}, "BD-54": {}, "BD-55": {},
+ "BD-56": {}, "BD-57": {}, "BD-58": {}, "BD-59": {}, "BD-60": {},
+ "BD-61": {}, "BD-62": {}, "BD-63": {}, "BD-64": {}, "BD-A": {},
+ "BD-B": {}, "BD-C": {}, "BD-D": {}, "BD-E": {}, "BD-F": {},
+ "BD-G": {}, "BE-BRU": {}, "BE-VAN": {}, "BE-VBR": {}, "BE-VLG": {},
+ "BE-VLI": {}, "BE-VOV": {}, "BE-VWV": {}, "BE-WAL": {}, "BE-WBR": {},
+ "BE-WHT": {}, "BE-WLG": {}, "BE-WLX": {}, "BE-WNA": {}, "BF-01": {},
+ "BF-02": {}, "BF-03": {}, "BF-04": {}, "BF-05": {}, "BF-06": {},
+ "BF-07": {}, "BF-08": {}, "BF-09": {}, "BF-10": {}, "BF-11": {},
+ "BF-12": {}, "BF-13": {}, "BF-BAL": {}, "BF-BAM": {}, "BF-BAN": {},
+ "BF-BAZ": {}, "BF-BGR": {}, "BF-BLG": {}, "BF-BLK": {}, "BF-COM": {},
+ "BF-GAN": {}, "BF-GNA": {}, "BF-GOU": {}, "BF-HOU": {}, "BF-IOB": {},
+ "BF-KAD": {}, "BF-KEN": {}, "BF-KMD": {}, "BF-KMP": {}, "BF-KOP": {},
+ "BF-KOS": {}, "BF-KOT": {}, "BF-KOW": {}, "BF-LER": {}, "BF-LOR": {},
+ "BF-MOU": {}, "BF-NAM": {}, "BF-NAO": {}, "BF-NAY": {}, "BF-NOU": {},
+ "BF-OUB": {}, "BF-OUD": {}, "BF-PAS": {}, "BF-PON": {}, "BF-SEN": {},
+ "BF-SIS": {}, "BF-SMT": {}, "BF-SNG": {}, "BF-SOM": {}, "BF-SOR": {},
+ "BF-TAP": {}, "BF-TUI": {}, "BF-YAG": {}, "BF-YAT": {}, "BF-ZIR": {},
+ "BF-ZON": {}, "BF-ZOU": {}, "BG-01": {}, "BG-02": {}, "BG-03": {},
+ "BG-04": {}, "BG-05": {}, "BG-06": {}, "BG-07": {}, "BG-08": {},
+ "BG-09": {}, "BG-10": {}, "BG-11": {}, "BG-12": {}, "BG-13": {},
+ "BG-14": {}, "BG-15": {}, "BG-16": {}, "BG-17": {}, "BG-18": {},
+ "BG-19": {}, "BG-20": {}, "BG-21": {}, "BG-22": {}, "BG-23": {},
+ "BG-24": {}, "BG-25": {}, "BG-26": {}, "BG-27": {}, "BG-28": {},
+ "BH-13": {}, "BH-14": {}, "BH-15": {}, "BH-16": {}, "BH-17": {},
+ "BI-BB": {}, "BI-BL": {}, "BI-BM": {}, "BI-BR": {}, "BI-CA": {},
+ "BI-CI": {}, "BI-GI": {}, "BI-KI": {}, "BI-KR": {}, "BI-KY": {},
+ "BI-MA": {}, "BI-MU": {}, "BI-MW": {}, "BI-NG": {}, "BI-RM": {}, "BI-RT": {},
+ "BI-RY": {}, "BJ-AK": {}, "BJ-AL": {}, "BJ-AQ": {}, "BJ-BO": {},
+ "BJ-CO": {}, "BJ-DO": {}, "BJ-KO": {}, "BJ-LI": {}, "BJ-MO": {},
+ "BJ-OU": {}, "BJ-PL": {}, "BJ-ZO": {}, "BN-BE": {}, "BN-BM": {},
+ "BN-TE": {}, "BN-TU": {}, "BO-B": {}, "BO-C": {}, "BO-H": {},
+ "BO-L": {}, "BO-N": {}, "BO-O": {}, "BO-P": {}, "BO-S": {},
+ "BO-T": {}, "BQ-BO": {}, "BQ-SA": {}, "BQ-SE": {}, "BR-AC": {},
+ "BR-AL": {}, "BR-AM": {}, "BR-AP": {}, "BR-BA": {}, "BR-CE": {},
+ "BR-DF": {}, "BR-ES": {}, "BR-FN": {}, "BR-GO": {}, "BR-MA": {},
+ "BR-MG": {}, "BR-MS": {}, "BR-MT": {}, "BR-PA": {}, "BR-PB": {},
+ "BR-PE": {}, "BR-PI": {}, "BR-PR": {}, "BR-RJ": {}, "BR-RN": {},
+ "BR-RO": {}, "BR-RR": {}, "BR-RS": {}, "BR-SC": {}, "BR-SE": {},
+ "BR-SP": {}, "BR-TO": {}, "BS-AK": {}, "BS-BI": {}, "BS-BP": {},
+ "BS-BY": {}, "BS-CE": {}, "BS-CI": {}, "BS-CK": {}, "BS-CO": {},
+ "BS-CS": {}, "BS-EG": {}, "BS-EX": {}, "BS-FP": {}, "BS-GC": {},
+ "BS-HI": {}, "BS-HT": {}, "BS-IN": {}, "BS-LI": {}, "BS-MC": {},
+ "BS-MG": {}, "BS-MI": {}, "BS-NE": {}, "BS-NO": {}, "BS-NP": {}, "BS-NS": {},
+ "BS-RC": {}, "BS-RI": {}, "BS-SA": {}, "BS-SE": {}, "BS-SO": {},
+ "BS-SS": {}, "BS-SW": {}, "BS-WG": {}, "BT-11": {}, "BT-12": {},
+ "BT-13": {}, "BT-14": {}, "BT-15": {}, "BT-21": {}, "BT-22": {},
+ "BT-23": {}, "BT-24": {}, "BT-31": {}, "BT-32": {}, "BT-33": {},
+ "BT-34": {}, "BT-41": {}, "BT-42": {}, "BT-43": {}, "BT-44": {},
+ "BT-45": {}, "BT-GA": {}, "BT-TY": {}, "BW-CE": {}, "BW-CH": {}, "BW-GH": {},
+ "BW-KG": {}, "BW-KL": {}, "BW-KW": {}, "BW-NE": {}, "BW-NW": {},
+ "BW-SE": {}, "BW-SO": {}, "BY-BR": {}, "BY-HM": {}, "BY-HO": {},
+ "BY-HR": {}, "BY-MA": {}, "BY-MI": {}, "BY-VI": {}, "BZ-BZ": {},
+ "BZ-CY": {}, "BZ-CZL": {}, "BZ-OW": {}, "BZ-SC": {}, "BZ-TOL": {},
+ "CA-AB": {}, "CA-BC": {}, "CA-MB": {}, "CA-NB": {}, "CA-NL": {},
+ "CA-NS": {}, "CA-NT": {}, "CA-NU": {}, "CA-ON": {}, "CA-PE": {},
+ "CA-QC": {}, "CA-SK": {}, "CA-YT": {}, "CD-BC": {}, "CD-BN": {},
+ "CD-EQ": {}, "CD-HK": {}, "CD-IT": {}, "CD-KA": {}, "CD-KC": {}, "CD-KE": {}, "CD-KG": {}, "CD-KN": {},
+ "CD-KW": {}, "CD-KS": {}, "CD-LU": {}, "CD-MA": {}, "CD-NK": {}, "CD-OR": {}, "CD-SA": {}, "CD-SK": {},
+ "CD-TA": {}, "CD-TO": {}, "CF-AC": {}, "CF-BB": {}, "CF-BGF": {}, "CF-BK": {}, "CF-HK": {}, "CF-HM": {},
+ "CF-HS": {}, "CF-KB": {}, "CF-KG": {}, "CF-LB": {}, "CF-MB": {},
+ "CF-MP": {}, "CF-NM": {}, "CF-OP": {}, "CF-SE": {}, "CF-UK": {},
+ "CF-VK": {}, "CG-11": {}, "CG-12": {}, "CG-13": {}, "CG-14": {},
+ "CG-15": {}, "CG-16": {}, "CG-2": {}, "CG-5": {}, "CG-7": {}, "CG-8": {},
+ "CG-9": {}, "CG-BZV": {}, "CH-AG": {}, "CH-AI": {}, "CH-AR": {},
+ "CH-BE": {}, "CH-BL": {}, "CH-BS": {}, "CH-FR": {}, "CH-GE": {},
+ "CH-GL": {}, "CH-GR": {}, "CH-JU": {}, "CH-LU": {}, "CH-NE": {},
+ "CH-NW": {}, "CH-OW": {}, "CH-SG": {}, "CH-SH": {}, "CH-SO": {},
+ "CH-SZ": {}, "CH-TG": {}, "CH-TI": {}, "CH-UR": {}, "CH-VD": {},
+ "CH-VS": {}, "CH-ZG": {}, "CH-ZH": {}, "CI-AB": {}, "CI-BS": {},
+ "CI-CM": {}, "CI-DN": {}, "CI-GD": {}, "CI-LC": {}, "CI-LG": {},
+ "CI-MG": {}, "CI-SM": {}, "CI-SV": {}, "CI-VB": {}, "CI-WR": {},
+ "CI-YM": {}, "CI-ZZ": {}, "CL-AI": {}, "CL-AN": {}, "CL-AP": {},
+ "CL-AR": {}, "CL-AT": {}, "CL-BI": {}, "CL-CO": {}, "CL-LI": {},
+ "CL-LL": {}, "CL-LR": {}, "CL-MA": {}, "CL-ML": {}, "CL-NB": {}, "CL-RM": {},
+ "CL-TA": {}, "CL-VS": {}, "CM-AD": {}, "CM-CE": {}, "CM-EN": {},
+ "CM-ES": {}, "CM-LT": {}, "CM-NO": {}, "CM-NW": {}, "CM-OU": {},
+ "CM-SU": {}, "CM-SW": {}, "CN-AH": {}, "CN-BJ": {}, "CN-CQ": {},
+ "CN-FJ": {}, "CN-GS": {}, "CN-GD": {}, "CN-GX": {}, "CN-GZ": {},
+ "CN-HI": {}, "CN-HE": {}, "CN-HL": {}, "CN-HA": {}, "CN-HB": {},
+ "CN-HN": {}, "CN-JS": {}, "CN-JX": {}, "CN-JL": {}, "CN-LN": {},
+ "CN-NM": {}, "CN-NX": {}, "CN-QH": {}, "CN-SN": {}, "CN-SD": {}, "CN-SH": {},
+ "CN-SX": {}, "CN-SC": {}, "CN-TJ": {}, "CN-XJ": {}, "CN-XZ": {}, "CN-YN": {},
+ "CN-ZJ": {}, "CO-AMA": {}, "CO-ANT": {}, "CO-ARA": {}, "CO-ATL": {},
+ "CO-BOL": {}, "CO-BOY": {}, "CO-CAL": {}, "CO-CAQ": {}, "CO-CAS": {},
+ "CO-CAU": {}, "CO-CES": {}, "CO-CHO": {}, "CO-COR": {}, "CO-CUN": {},
+ "CO-DC": {}, "CO-GUA": {}, "CO-GUV": {}, "CO-HUI": {}, "CO-LAG": {},
+ "CO-MAG": {}, "CO-MET": {}, "CO-NAR": {}, "CO-NSA": {}, "CO-PUT": {},
+ "CO-QUI": {}, "CO-RIS": {}, "CO-SAN": {}, "CO-SAP": {}, "CO-SUC": {},
+ "CO-TOL": {}, "CO-VAC": {}, "CO-VAU": {}, "CO-VID": {}, "CR-A": {},
+ "CR-C": {}, "CR-G": {}, "CR-H": {}, "CR-L": {}, "CR-P": {},
+ "CR-SJ": {}, "CU-01": {}, "CU-02": {}, "CU-03": {}, "CU-04": {},
+ "CU-05": {}, "CU-06": {}, "CU-07": {}, "CU-08": {}, "CU-09": {},
+ "CU-10": {}, "CU-11": {}, "CU-12": {}, "CU-13": {}, "CU-14": {}, "CU-15": {},
+ "CU-16": {}, "CU-99": {}, "CV-B": {}, "CV-BR": {}, "CV-BV": {}, "CV-CA": {},
+ "CV-CF": {}, "CV-CR": {}, "CV-MA": {}, "CV-MO": {}, "CV-PA": {},
+ "CV-PN": {}, "CV-PR": {}, "CV-RB": {}, "CV-RG": {}, "CV-RS": {},
+ "CV-S": {}, "CV-SD": {}, "CV-SF": {}, "CV-SL": {}, "CV-SM": {},
+ "CV-SO": {}, "CV-SS": {}, "CV-SV": {}, "CV-TA": {}, "CV-TS": {},
+ "CY-01": {}, "CY-02": {}, "CY-03": {}, "CY-04": {}, "CY-05": {},
+ "CY-06": {}, "CZ-10": {}, "CZ-101": {}, "CZ-102": {}, "CZ-103": {},
+ "CZ-104": {}, "CZ-105": {}, "CZ-106": {}, "CZ-107": {}, "CZ-108": {},
+ "CZ-109": {}, "CZ-110": {}, "CZ-111": {}, "CZ-112": {}, "CZ-113": {},
+ "CZ-114": {}, "CZ-115": {}, "CZ-116": {}, "CZ-117": {}, "CZ-118": {},
+ "CZ-119": {}, "CZ-120": {}, "CZ-121": {}, "CZ-122": {}, "CZ-20": {},
+ "CZ-201": {}, "CZ-202": {}, "CZ-203": {}, "CZ-204": {}, "CZ-205": {},
+ "CZ-206": {}, "CZ-207": {}, "CZ-208": {}, "CZ-209": {}, "CZ-20A": {},
+ "CZ-20B": {}, "CZ-20C": {}, "CZ-31": {}, "CZ-311": {}, "CZ-312": {},
+ "CZ-313": {}, "CZ-314": {}, "CZ-315": {}, "CZ-316": {}, "CZ-317": {},
+ "CZ-32": {}, "CZ-321": {}, "CZ-322": {}, "CZ-323": {}, "CZ-324": {},
+ "CZ-325": {}, "CZ-326": {}, "CZ-327": {}, "CZ-41": {}, "CZ-411": {},
+ "CZ-412": {}, "CZ-413": {}, "CZ-42": {}, "CZ-421": {}, "CZ-422": {},
+ "CZ-423": {}, "CZ-424": {}, "CZ-425": {}, "CZ-426": {}, "CZ-427": {},
+ "CZ-51": {}, "CZ-511": {}, "CZ-512": {}, "CZ-513": {}, "CZ-514": {},
+ "CZ-52": {}, "CZ-521": {}, "CZ-522": {}, "CZ-523": {}, "CZ-524": {},
+ "CZ-525": {}, "CZ-53": {}, "CZ-531": {}, "CZ-532": {}, "CZ-533": {},
+ "CZ-534": {}, "CZ-63": {}, "CZ-631": {}, "CZ-632": {}, "CZ-633": {},
+ "CZ-634": {}, "CZ-635": {}, "CZ-64": {}, "CZ-641": {}, "CZ-642": {},
+ "CZ-643": {}, "CZ-644": {}, "CZ-645": {}, "CZ-646": {}, "CZ-647": {},
+ "CZ-71": {}, "CZ-711": {}, "CZ-712": {}, "CZ-713": {}, "CZ-714": {},
+ "CZ-715": {}, "CZ-72": {}, "CZ-721": {}, "CZ-722": {}, "CZ-723": {},
+ "CZ-724": {}, "CZ-80": {}, "CZ-801": {}, "CZ-802": {}, "CZ-803": {},
+ "CZ-804": {}, "CZ-805": {}, "CZ-806": {}, "DE-BB": {}, "DE-BE": {},
+ "DE-BW": {}, "DE-BY": {}, "DE-HB": {}, "DE-HE": {}, "DE-HH": {},
+ "DE-MV": {}, "DE-NI": {}, "DE-NW": {}, "DE-RP": {}, "DE-SH": {},
+ "DE-SL": {}, "DE-SN": {}, "DE-ST": {}, "DE-TH": {}, "DJ-AR": {},
+ "DJ-AS": {}, "DJ-DI": {}, "DJ-DJ": {}, "DJ-OB": {}, "DJ-TA": {},
+ "DK-81": {}, "DK-82": {}, "DK-83": {}, "DK-84": {}, "DK-85": {},
+ "DM-01": {}, "DM-02": {}, "DM-03": {}, "DM-04": {}, "DM-05": {},
+ "DM-06": {}, "DM-07": {}, "DM-08": {}, "DM-09": {}, "DM-10": {},
+ "DO-01": {}, "DO-02": {}, "DO-03": {}, "DO-04": {}, "DO-05": {},
+ "DO-06": {}, "DO-07": {}, "DO-08": {}, "DO-09": {}, "DO-10": {},
+ "DO-11": {}, "DO-12": {}, "DO-13": {}, "DO-14": {}, "DO-15": {},
+ "DO-16": {}, "DO-17": {}, "DO-18": {}, "DO-19": {}, "DO-20": {},
+ "DO-21": {}, "DO-22": {}, "DO-23": {}, "DO-24": {}, "DO-25": {},
+ "DO-26": {}, "DO-27": {}, "DO-28": {}, "DO-29": {}, "DO-30": {}, "DO-31": {},
+ "DZ-01": {}, "DZ-02": {}, "DZ-03": {}, "DZ-04": {}, "DZ-05": {},
+ "DZ-06": {}, "DZ-07": {}, "DZ-08": {}, "DZ-09": {}, "DZ-10": {},
+ "DZ-11": {}, "DZ-12": {}, "DZ-13": {}, "DZ-14": {}, "DZ-15": {},
+ "DZ-16": {}, "DZ-17": {}, "DZ-18": {}, "DZ-19": {}, "DZ-20": {},
+ "DZ-21": {}, "DZ-22": {}, "DZ-23": {}, "DZ-24": {}, "DZ-25": {},
+ "DZ-26": {}, "DZ-27": {}, "DZ-28": {}, "DZ-29": {}, "DZ-30": {},
+ "DZ-31": {}, "DZ-32": {}, "DZ-33": {}, "DZ-34": {}, "DZ-35": {},
+ "DZ-36": {}, "DZ-37": {}, "DZ-38": {}, "DZ-39": {}, "DZ-40": {},
+ "DZ-41": {}, "DZ-42": {}, "DZ-43": {}, "DZ-44": {}, "DZ-45": {},
+ "DZ-46": {}, "DZ-47": {}, "DZ-48": {}, "DZ-49": {}, "DZ-51": {},
+ "DZ-53": {}, "DZ-55": {}, "DZ-56": {}, "DZ-57": {}, "EC-A": {}, "EC-B": {},
+ "EC-C": {}, "EC-D": {}, "EC-E": {}, "EC-F": {}, "EC-G": {},
+ "EC-H": {}, "EC-I": {}, "EC-L": {}, "EC-M": {}, "EC-N": {},
+ "EC-O": {}, "EC-P": {}, "EC-R": {}, "EC-S": {}, "EC-SD": {},
+ "EC-SE": {}, "EC-T": {}, "EC-U": {}, "EC-W": {}, "EC-X": {},
+ "EC-Y": {}, "EC-Z": {}, "EE-37": {}, "EE-39": {}, "EE-44": {}, "EE-45": {},
+ "EE-49": {}, "EE-50": {}, "EE-51": {}, "EE-52": {}, "EE-56": {}, "EE-57": {},
+ "EE-59": {}, "EE-60": {}, "EE-64": {}, "EE-65": {}, "EE-67": {}, "EE-68": {},
+ "EE-70": {}, "EE-71": {}, "EE-74": {}, "EE-78": {}, "EE-79": {}, "EE-81": {}, "EE-82": {},
+ "EE-84": {}, "EE-86": {}, "EE-87": {}, "EG-ALX": {}, "EG-ASN": {}, "EG-AST": {},
+ "EG-BA": {}, "EG-BH": {}, "EG-BNS": {}, "EG-C": {}, "EG-DK": {},
+ "EG-DT": {}, "EG-FYM": {}, "EG-GH": {}, "EG-GZ": {}, "EG-HU": {},
+ "EG-IS": {}, "EG-JS": {}, "EG-KB": {}, "EG-KFS": {}, "EG-KN": {},
+ "EG-LX": {}, "EG-MN": {}, "EG-MNF": {}, "EG-MT": {}, "EG-PTS": {}, "EG-SHG": {},
+ "EG-SHR": {}, "EG-SIN": {}, "EG-SU": {}, "EG-SUZ": {}, "EG-WAD": {},
+ "ER-AN": {}, "ER-DK": {}, "ER-DU": {}, "ER-GB": {}, "ER-MA": {},
+ "ER-SK": {}, "ES-A": {}, "ES-AB": {}, "ES-AL": {}, "ES-AN": {},
+ "ES-AR": {}, "ES-AS": {}, "ES-AV": {}, "ES-B": {}, "ES-BA": {},
+ "ES-BI": {}, "ES-BU": {}, "ES-C": {}, "ES-CA": {}, "ES-CB": {},
+ "ES-CC": {}, "ES-CE": {}, "ES-CL": {}, "ES-CM": {}, "ES-CN": {},
+ "ES-CO": {}, "ES-CR": {}, "ES-CS": {}, "ES-CT": {}, "ES-CU": {},
+ "ES-EX": {}, "ES-GA": {}, "ES-GC": {}, "ES-GI": {}, "ES-GR": {},
+ "ES-GU": {}, "ES-H": {}, "ES-HU": {}, "ES-IB": {}, "ES-J": {},
+ "ES-L": {}, "ES-LE": {}, "ES-LO": {}, "ES-LU": {}, "ES-M": {},
+ "ES-MA": {}, "ES-MC": {}, "ES-MD": {}, "ES-ML": {}, "ES-MU": {},
+ "ES-NA": {}, "ES-NC": {}, "ES-O": {}, "ES-OR": {}, "ES-P": {},
+ "ES-PM": {}, "ES-PO": {}, "ES-PV": {}, "ES-RI": {}, "ES-S": {},
+ "ES-SA": {}, "ES-SE": {}, "ES-SG": {}, "ES-SO": {}, "ES-SS": {},
+ "ES-T": {}, "ES-TE": {}, "ES-TF": {}, "ES-TO": {}, "ES-V": {},
+ "ES-VA": {}, "ES-VC": {}, "ES-VI": {}, "ES-Z": {}, "ES-ZA": {},
+ "ET-AA": {}, "ET-AF": {}, "ET-AM": {}, "ET-BE": {}, "ET-DD": {},
+ "ET-GA": {}, "ET-HA": {}, "ET-OR": {}, "ET-SN": {}, "ET-SO": {},
+ "ET-TI": {}, "FI-01": {}, "FI-02": {}, "FI-03": {}, "FI-04": {},
+ "FI-05": {}, "FI-06": {}, "FI-07": {}, "FI-08": {}, "FI-09": {},
+ "FI-10": {}, "FI-11": {}, "FI-12": {}, "FI-13": {}, "FI-14": {},
+ "FI-15": {}, "FI-16": {}, "FI-17": {}, "FI-18": {}, "FI-19": {},
+ "FJ-C": {}, "FJ-E": {}, "FJ-N": {}, "FJ-R": {}, "FJ-W": {},
+ "FM-KSA": {}, "FM-PNI": {}, "FM-TRK": {}, "FM-YAP": {}, "FR-01": {},
+ "FR-02": {}, "FR-03": {}, "FR-04": {}, "FR-05": {}, "FR-06": {},
+ "FR-07": {}, "FR-08": {}, "FR-09": {}, "FR-10": {}, "FR-11": {},
+ "FR-12": {}, "FR-13": {}, "FR-14": {}, "FR-15": {}, "FR-16": {},
+ "FR-17": {}, "FR-18": {}, "FR-19": {}, "FR-20R": {}, "FR-21": {}, "FR-22": {},
+ "FR-23": {}, "FR-24": {}, "FR-25": {}, "FR-26": {}, "FR-27": {},
+ "FR-28": {}, "FR-29": {}, "FR-2A": {}, "FR-2B": {}, "FR-30": {},
+ "FR-31": {}, "FR-32": {}, "FR-33": {}, "FR-34": {}, "FR-35": {},
+ "FR-36": {}, "FR-37": {}, "FR-38": {}, "FR-39": {}, "FR-40": {},
+ "FR-41": {}, "FR-42": {}, "FR-43": {}, "FR-44": {}, "FR-45": {},
+ "FR-46": {}, "FR-47": {}, "FR-48": {}, "FR-49": {}, "FR-50": {},
+ "FR-51": {}, "FR-52": {}, "FR-53": {}, "FR-54": {}, "FR-55": {},
+ "FR-56": {}, "FR-57": {}, "FR-58": {}, "FR-59": {}, "FR-60": {},
+ "FR-61": {}, "FR-62": {}, "FR-63": {}, "FR-64": {}, "FR-65": {},
+ "FR-66": {}, "FR-67": {}, "FR-68": {}, "FR-69": {}, "FR-70": {},
+ "FR-71": {}, "FR-72": {}, "FR-73": {}, "FR-74": {}, "FR-75": {},
+ "FR-76": {}, "FR-77": {}, "FR-78": {}, "FR-79": {}, "FR-80": {},
+ "FR-81": {}, "FR-82": {}, "FR-83": {}, "FR-84": {}, "FR-85": {},
+ "FR-86": {}, "FR-87": {}, "FR-88": {}, "FR-89": {}, "FR-90": {},
+ "FR-91": {}, "FR-92": {}, "FR-93": {}, "FR-94": {}, "FR-95": {},
+ "FR-ARA": {}, "FR-BFC": {}, "FR-BL": {}, "FR-BRE": {}, "FR-COR": {},
+ "FR-CP": {}, "FR-CVL": {}, "FR-GES": {}, "FR-GF": {}, "FR-GP": {},
+ "FR-GUA": {}, "FR-HDF": {}, "FR-IDF": {}, "FR-LRE": {}, "FR-MAY": {},
+ "FR-MF": {}, "FR-MQ": {}, "FR-NAQ": {}, "FR-NC": {}, "FR-NOR": {},
+ "FR-OCC": {}, "FR-PAC": {}, "FR-PDL": {}, "FR-PF": {}, "FR-PM": {},
+ "FR-RE": {}, "FR-TF": {}, "FR-WF": {}, "FR-YT": {}, "GA-1": {},
+ "GA-2": {}, "GA-3": {}, "GA-4": {}, "GA-5": {}, "GA-6": {},
+ "GA-7": {}, "GA-8": {}, "GA-9": {}, "GB-ABC": {}, "GB-ABD": {},
+ "GB-ABE": {}, "GB-AGB": {}, "GB-AGY": {}, "GB-AND": {}, "GB-ANN": {},
+ "GB-ANS": {}, "GB-BAS": {}, "GB-BBD": {}, "GB-BDF": {}, "GB-BDG": {},
+ "GB-BEN": {}, "GB-BEX": {}, "GB-BFS": {}, "GB-BGE": {}, "GB-BGW": {},
+ "GB-BIR": {}, "GB-BKM": {}, "GB-BMH": {}, "GB-BNE": {}, "GB-BNH": {},
+ "GB-BNS": {}, "GB-BOL": {}, "GB-BPL": {}, "GB-BRC": {}, "GB-BRD": {},
+ "GB-BRY": {}, "GB-BST": {}, "GB-BUR": {}, "GB-CAM": {}, "GB-CAY": {},
+ "GB-CBF": {}, "GB-CCG": {}, "GB-CGN": {}, "GB-CHE": {}, "GB-CHW": {},
+ "GB-CLD": {}, "GB-CLK": {}, "GB-CMA": {}, "GB-CMD": {}, "GB-CMN": {},
+ "GB-CON": {}, "GB-COV": {}, "GB-CRF": {}, "GB-CRY": {}, "GB-CWY": {},
+ "GB-DAL": {}, "GB-DBY": {}, "GB-DEN": {}, "GB-DER": {}, "GB-DEV": {},
+ "GB-DGY": {}, "GB-DNC": {}, "GB-DND": {}, "GB-DOR": {}, "GB-DRS": {},
+ "GB-DUD": {}, "GB-DUR": {}, "GB-EAL": {}, "GB-EAW": {}, "GB-EAY": {},
+ "GB-EDH": {}, "GB-EDU": {}, "GB-ELN": {}, "GB-ELS": {}, "GB-ENF": {},
+ "GB-ENG": {}, "GB-ERW": {}, "GB-ERY": {}, "GB-ESS": {}, "GB-ESX": {},
+ "GB-FAL": {}, "GB-FIF": {}, "GB-FLN": {}, "GB-FMO": {}, "GB-GAT": {},
+ "GB-GBN": {}, "GB-GLG": {}, "GB-GLS": {}, "GB-GRE": {}, "GB-GWN": {},
+ "GB-HAL": {}, "GB-HAM": {}, "GB-HAV": {}, "GB-HCK": {}, "GB-HEF": {},
+ "GB-HIL": {}, "GB-HLD": {}, "GB-HMF": {}, "GB-HNS": {}, "GB-HPL": {},
+ "GB-HRT": {}, "GB-HRW": {}, "GB-HRY": {}, "GB-IOS": {}, "GB-IOW": {},
+ "GB-ISL": {}, "GB-IVC": {}, "GB-KEC": {}, "GB-KEN": {}, "GB-KHL": {},
+ "GB-KIR": {}, "GB-KTT": {}, "GB-KWL": {}, "GB-LAN": {}, "GB-LBC": {},
+ "GB-LBH": {}, "GB-LCE": {}, "GB-LDS": {}, "GB-LEC": {}, "GB-LEW": {},
+ "GB-LIN": {}, "GB-LIV": {}, "GB-LND": {}, "GB-LUT": {}, "GB-MAN": {},
+ "GB-MDB": {}, "GB-MDW": {}, "GB-MEA": {}, "GB-MIK": {}, "GD-01": {},
+ "GB-MLN": {}, "GB-MON": {}, "GB-MRT": {}, "GB-MRY": {}, "GB-MTY": {},
+ "GB-MUL": {}, "GB-NAY": {}, "GB-NBL": {}, "GB-NEL": {}, "GB-NET": {},
+ "GB-NFK": {}, "GB-NGM": {}, "GB-NIR": {}, "GB-NLK": {}, "GB-NLN": {},
+ "GB-NMD": {}, "GB-NSM": {}, "GB-NTH": {}, "GB-NTL": {}, "GB-NTT": {},
+ "GB-NTY": {}, "GB-NWM": {}, "GB-NWP": {}, "GB-NYK": {}, "GB-OLD": {},
+ "GB-ORK": {}, "GB-OXF": {}, "GB-PEM": {}, "GB-PKN": {}, "GB-PLY": {},
+ "GB-POL": {}, "GB-POR": {}, "GB-POW": {}, "GB-PTE": {}, "GB-RCC": {},
+ "GB-RCH": {}, "GB-RCT": {}, "GB-RDB": {}, "GB-RDG": {}, "GB-RFW": {},
+ "GB-RIC": {}, "GB-ROT": {}, "GB-RUT": {}, "GB-SAW": {}, "GB-SAY": {},
+ "GB-SCB": {}, "GB-SCT": {}, "GB-SFK": {}, "GB-SFT": {}, "GB-SGC": {},
+ "GB-SHF": {}, "GB-SHN": {}, "GB-SHR": {}, "GB-SKP": {}, "GB-SLF": {},
+ "GB-SLG": {}, "GB-SLK": {}, "GB-SND": {}, "GB-SOL": {}, "GB-SOM": {},
+ "GB-SOS": {}, "GB-SRY": {}, "GB-STE": {}, "GB-STG": {}, "GB-STH": {},
+ "GB-STN": {}, "GB-STS": {}, "GB-STT": {}, "GB-STY": {}, "GB-SWA": {},
+ "GB-SWD": {}, "GB-SWK": {}, "GB-TAM": {}, "GB-TFW": {}, "GB-THR": {},
+ "GB-TOB": {}, "GB-TOF": {}, "GB-TRF": {}, "GB-TWH": {}, "GB-UKM": {},
+ "GB-VGL": {}, "GB-WAR": {}, "GB-WBK": {}, "GB-WDU": {}, "GB-WFT": {},
+ "GB-WGN": {}, "GB-WIL": {}, "GB-WKF": {}, "GB-WLL": {}, "GB-WLN": {},
+ "GB-WLS": {}, "GB-WLV": {}, "GB-WND": {}, "GB-WNM": {}, "GB-WOK": {},
+ "GB-WOR": {}, "GB-WRL": {}, "GB-WRT": {}, "GB-WRX": {}, "GB-WSM": {},
+ "GB-WSX": {}, "GB-YOR": {}, "GB-ZET": {}, "GD-02": {}, "GD-03": {},
+ "GD-04": {}, "GD-05": {}, "GD-06": {}, "GD-10": {}, "GE-AB": {},
+ "GE-AJ": {}, "GE-GU": {}, "GE-IM": {}, "GE-KA": {}, "GE-KK": {},
+ "GE-MM": {}, "GE-RL": {}, "GE-SJ": {}, "GE-SK": {}, "GE-SZ": {},
+ "GE-TB": {}, "GH-AA": {}, "GH-AH": {}, "GH-AF": {}, "GH-BA": {}, "GH-BO": {}, "GH-BE": {}, "GH-CP": {},
+ "GH-EP": {}, "GH-NP": {}, "GH-TV": {}, "GH-UE": {}, "GH-UW": {},
+ "GH-WP": {}, "GL-AV": {}, "GL-KU": {}, "GL-QA": {}, "GL-QT": {}, "GL-QE": {}, "GL-SM": {},
+ "GM-B": {}, "GM-L": {}, "GM-M": {}, "GM-N": {}, "GM-U": {},
+ "GM-W": {}, "GN-B": {}, "GN-BE": {}, "GN-BF": {}, "GN-BK": {},
+ "GN-C": {}, "GN-CO": {}, "GN-D": {}, "GN-DB": {}, "GN-DI": {},
+ "GN-DL": {}, "GN-DU": {}, "GN-F": {}, "GN-FA": {}, "GN-FO": {},
+ "GN-FR": {}, "GN-GA": {}, "GN-GU": {}, "GN-K": {}, "GN-KA": {},
+ "GN-KB": {}, "GN-KD": {}, "GN-KE": {}, "GN-KN": {}, "GN-KO": {},
+ "GN-KS": {}, "GN-L": {}, "GN-LA": {}, "GN-LE": {}, "GN-LO": {},
+ "GN-M": {}, "GN-MC": {}, "GN-MD": {}, "GN-ML": {}, "GN-MM": {},
+ "GN-N": {}, "GN-NZ": {}, "GN-PI": {}, "GN-SI": {}, "GN-TE": {},
+ "GN-TO": {}, "GN-YO": {}, "GQ-AN": {}, "GQ-BN": {}, "GQ-BS": {},
+ "GQ-C": {}, "GQ-CS": {}, "GQ-I": {}, "GQ-KN": {}, "GQ-LI": {},
+ "GQ-WN": {}, "GR-01": {}, "GR-03": {}, "GR-04": {}, "GR-05": {},
+ "GR-06": {}, "GR-07": {}, "GR-11": {}, "GR-12": {}, "GR-13": {},
+ "GR-14": {}, "GR-15": {}, "GR-16": {}, "GR-17": {}, "GR-21": {},
+ "GR-22": {}, "GR-23": {}, "GR-24": {}, "GR-31": {}, "GR-32": {},
+ "GR-33": {}, "GR-34": {}, "GR-41": {}, "GR-42": {}, "GR-43": {},
+ "GR-44": {}, "GR-51": {}, "GR-52": {}, "GR-53": {}, "GR-54": {},
+ "GR-55": {}, "GR-56": {}, "GR-57": {}, "GR-58": {}, "GR-59": {},
+ "GR-61": {}, "GR-62": {}, "GR-63": {}, "GR-64": {}, "GR-69": {},
+ "GR-71": {}, "GR-72": {}, "GR-73": {}, "GR-81": {}, "GR-82": {},
+ "GR-83": {}, "GR-84": {}, "GR-85": {}, "GR-91": {}, "GR-92": {},
+ "GR-93": {}, "GR-94": {}, "GR-A": {}, "GR-A1": {}, "GR-B": {},
+ "GR-C": {}, "GR-D": {}, "GR-E": {}, "GR-F": {}, "GR-G": {},
+ "GR-H": {}, "GR-I": {}, "GR-J": {}, "GR-K": {}, "GR-L": {},
+ "GR-M": {}, "GT-01": {}, "GT-02": {}, "GT-03": {}, "GT-04": {},
+ "GT-05": {}, "GT-06": {}, "GT-07": {}, "GT-08": {}, "GT-09": {},
+ "GT-10": {}, "GT-11": {}, "GT-12": {}, "GT-13": {}, "GT-14": {},
+ "GT-15": {}, "GT-16": {}, "GT-17": {}, "GT-18": {}, "GT-19": {},
+ "GT-20": {}, "GT-21": {}, "GT-22": {}, "GW-BA": {}, "GW-BL": {},
+ "GW-BM": {}, "GW-BS": {}, "GW-CA": {}, "GW-GA": {}, "GW-L": {},
+ "GW-N": {}, "GW-OI": {}, "GW-QU": {}, "GW-S": {}, "GW-TO": {},
+ "GY-BA": {}, "GY-CU": {}, "GY-DE": {}, "GY-EB": {}, "GY-ES": {},
+ "GY-MA": {}, "GY-PM": {}, "GY-PT": {}, "GY-UD": {}, "GY-UT": {},
+ "HN-AT": {}, "HN-CH": {}, "HN-CL": {}, "HN-CM": {}, "HN-CP": {},
+ "HN-CR": {}, "HN-EP": {}, "HN-FM": {}, "HN-GD": {}, "HN-IB": {},
+ "HN-IN": {}, "HN-LE": {}, "HN-LP": {}, "HN-OC": {}, "HN-OL": {},
+ "HN-SB": {}, "HN-VA": {}, "HN-YO": {}, "HR-01": {}, "HR-02": {},
+ "HR-03": {}, "HR-04": {}, "HR-05": {}, "HR-06": {}, "HR-07": {},
+ "HR-08": {}, "HR-09": {}, "HR-10": {}, "HR-11": {}, "HR-12": {},
+ "HR-13": {}, "HR-14": {}, "HR-15": {}, "HR-16": {}, "HR-17": {},
+ "HR-18": {}, "HR-19": {}, "HR-20": {}, "HR-21": {}, "HT-AR": {},
+ "HT-CE": {}, "HT-GA": {}, "HT-ND": {}, "HT-NE": {}, "HT-NO": {}, "HT-NI": {},
+ "HT-OU": {}, "HT-SD": {}, "HT-SE": {}, "HU-BA": {}, "HU-BC": {},
+ "HU-BE": {}, "HU-BK": {}, "HU-BU": {}, "HU-BZ": {}, "HU-CS": {},
+ "HU-DE": {}, "HU-DU": {}, "HU-EG": {}, "HU-ER": {}, "HU-FE": {},
+ "HU-GS": {}, "HU-GY": {}, "HU-HB": {}, "HU-HE": {}, "HU-HV": {},
+ "HU-JN": {}, "HU-KE": {}, "HU-KM": {}, "HU-KV": {}, "HU-MI": {},
+ "HU-NK": {}, "HU-NO": {}, "HU-NY": {}, "HU-PE": {}, "HU-PS": {},
+ "HU-SD": {}, "HU-SF": {}, "HU-SH": {}, "HU-SK": {}, "HU-SN": {},
+ "HU-SO": {}, "HU-SS": {}, "HU-ST": {}, "HU-SZ": {}, "HU-TB": {},
+ "HU-TO": {}, "HU-VA": {}, "HU-VE": {}, "HU-VM": {}, "HU-ZA": {},
+ "HU-ZE": {}, "ID-AC": {}, "ID-BA": {}, "ID-BB": {}, "ID-BE": {},
+ "ID-BT": {}, "ID-GO": {}, "ID-IJ": {}, "ID-JA": {}, "ID-JB": {},
+ "ID-JI": {}, "ID-JK": {}, "ID-JT": {}, "ID-JW": {}, "ID-KA": {},
+ "ID-KB": {}, "ID-KI": {}, "ID-KU": {}, "ID-KR": {}, "ID-KS": {},
+ "ID-KT": {}, "ID-LA": {}, "ID-MA": {}, "ID-ML": {}, "ID-MU": {},
+ "ID-NB": {}, "ID-NT": {}, "ID-NU": {}, "ID-PA": {}, "ID-PB": {},
+ "ID-PE": {}, "ID-PP": {}, "ID-PS": {}, "ID-PT": {}, "ID-RI": {},
+ "ID-SA": {}, "ID-SB": {}, "ID-SG": {}, "ID-SL": {}, "ID-SM": {},
+ "ID-SN": {}, "ID-SR": {}, "ID-SS": {}, "ID-ST": {}, "ID-SU": {},
+ "ID-YO": {}, "IE-C": {}, "IE-CE": {}, "IE-CN": {}, "IE-CO": {},
+ "IE-CW": {}, "IE-D": {}, "IE-DL": {}, "IE-G": {}, "IE-KE": {},
+ "IE-KK": {}, "IE-KY": {}, "IE-L": {}, "IE-LD": {}, "IE-LH": {},
+ "IE-LK": {}, "IE-LM": {}, "IE-LS": {}, "IE-M": {}, "IE-MH": {},
+ "IE-MN": {}, "IE-MO": {}, "IE-OY": {}, "IE-RN": {}, "IE-SO": {},
+ "IE-TA": {}, "IE-U": {}, "IE-WD": {}, "IE-WH": {}, "IE-WW": {},
+ "IE-WX": {}, "IL-D": {}, "IL-HA": {}, "IL-JM": {}, "IL-M": {},
+ "IL-TA": {}, "IL-Z": {}, "IN-AN": {}, "IN-AP": {}, "IN-AR": {},
+ "IN-AS": {}, "IN-BR": {}, "IN-CH": {}, "IN-CT": {}, "IN-DH": {},
+ "IN-DL": {}, "IN-DN": {}, "IN-GA": {}, "IN-GJ": {}, "IN-HP": {},
+ "IN-HR": {}, "IN-JH": {}, "IN-JK": {}, "IN-KA": {}, "IN-KL": {},
+ "IN-LD": {}, "IN-MH": {}, "IN-ML": {}, "IN-MN": {}, "IN-MP": {},
+ "IN-MZ": {}, "IN-NL": {}, "IN-TG": {}, "IN-OR": {}, "IN-PB": {}, "IN-PY": {},
+ "IN-RJ": {}, "IN-SK": {}, "IN-TN": {}, "IN-TR": {}, "IN-UP": {},
+ "IN-UT": {}, "IN-WB": {}, "IQ-AN": {}, "IQ-AR": {}, "IQ-BA": {},
+ "IQ-BB": {}, "IQ-BG": {}, "IQ-DA": {}, "IQ-DI": {}, "IQ-DQ": {},
+ "IQ-KA": {}, "IQ-KI": {}, "IQ-MA": {}, "IQ-MU": {}, "IQ-NA": {}, "IQ-NI": {},
+ "IQ-QA": {}, "IQ-SD": {}, "IQ-SW": {}, "IQ-SU": {}, "IQ-TS": {}, "IQ-WA": {},
+ "IR-00": {}, "IR-01": {}, "IR-02": {}, "IR-03": {}, "IR-04": {}, "IR-05": {},
+ "IR-06": {}, "IR-07": {}, "IR-08": {}, "IR-09": {}, "IR-10": {}, "IR-11": {},
+ "IR-12": {}, "IR-13": {}, "IR-14": {}, "IR-15": {}, "IR-16": {},
+ "IR-17": {}, "IR-18": {}, "IR-19": {}, "IR-20": {}, "IR-21": {},
+ "IR-22": {}, "IR-23": {}, "IR-24": {}, "IR-25": {}, "IR-26": {},
+ "IR-27": {}, "IR-28": {}, "IR-29": {}, "IR-30": {}, "IR-31": {},
+ "IS-0": {}, "IS-1": {}, "IS-2": {}, "IS-3": {}, "IS-4": {},
+ "IS-5": {}, "IS-6": {}, "IS-7": {}, "IS-8": {}, "IT-21": {},
+ "IT-23": {}, "IT-25": {}, "IT-32": {}, "IT-34": {}, "IT-36": {},
+ "IT-42": {}, "IT-45": {}, "IT-52": {}, "IT-55": {}, "IT-57": {},
+ "IT-62": {}, "IT-65": {}, "IT-67": {}, "IT-72": {}, "IT-75": {},
+ "IT-77": {}, "IT-78": {}, "IT-82": {}, "IT-88": {}, "IT-AG": {},
+ "IT-AL": {}, "IT-AN": {}, "IT-AO": {}, "IT-AP": {}, "IT-AQ": {},
+ "IT-AR": {}, "IT-AT": {}, "IT-AV": {}, "IT-BA": {}, "IT-BG": {},
+ "IT-BI": {}, "IT-BL": {}, "IT-BN": {}, "IT-BO": {}, "IT-BR": {},
+ "IT-BS": {}, "IT-BT": {}, "IT-BZ": {}, "IT-CA": {}, "IT-CB": {},
+ "IT-CE": {}, "IT-CH": {}, "IT-CI": {}, "IT-CL": {}, "IT-CN": {},
+ "IT-CO": {}, "IT-CR": {}, "IT-CS": {}, "IT-CT": {}, "IT-CZ": {},
+ "IT-EN": {}, "IT-FC": {}, "IT-FE": {}, "IT-FG": {}, "IT-FI": {},
+ "IT-FM": {}, "IT-FR": {}, "IT-GE": {}, "IT-GO": {}, "IT-GR": {},
+ "IT-IM": {}, "IT-IS": {}, "IT-KR": {}, "IT-LC": {}, "IT-LE": {},
+ "IT-LI": {}, "IT-LO": {}, "IT-LT": {}, "IT-LU": {}, "IT-MB": {},
+ "IT-MC": {}, "IT-ME": {}, "IT-MI": {}, "IT-MN": {}, "IT-MO": {},
+ "IT-MS": {}, "IT-MT": {}, "IT-NA": {}, "IT-NO": {}, "IT-NU": {},
+ "IT-OG": {}, "IT-OR": {}, "IT-OT": {}, "IT-PA": {}, "IT-PC": {},
+ "IT-PD": {}, "IT-PE": {}, "IT-PG": {}, "IT-PI": {}, "IT-PN": {},
+ "IT-PO": {}, "IT-PR": {}, "IT-PT": {}, "IT-PU": {}, "IT-PV": {},
+ "IT-PZ": {}, "IT-RA": {}, "IT-RC": {}, "IT-RE": {}, "IT-RG": {},
+ "IT-RI": {}, "IT-RM": {}, "IT-RN": {}, "IT-RO": {}, "IT-SA": {},
+ "IT-SI": {}, "IT-SO": {}, "IT-SP": {}, "IT-SR": {}, "IT-SS": {},
+ "IT-SV": {}, "IT-TA": {}, "IT-TE": {}, "IT-TN": {}, "IT-TO": {},
+ "IT-TP": {}, "IT-TR": {}, "IT-TS": {}, "IT-TV": {}, "IT-UD": {},
+ "IT-VA": {}, "IT-VB": {}, "IT-VC": {}, "IT-VE": {}, "IT-VI": {},
+ "IT-VR": {}, "IT-VS": {}, "IT-VT": {}, "IT-VV": {}, "JM-01": {},
+ "JM-02": {}, "JM-03": {}, "JM-04": {}, "JM-05": {}, "JM-06": {},
+ "JM-07": {}, "JM-08": {}, "JM-09": {}, "JM-10": {}, "JM-11": {},
+ "JM-12": {}, "JM-13": {}, "JM-14": {}, "JO-AJ": {}, "JO-AM": {},
+ "JO-AQ": {}, "JO-AT": {}, "JO-AZ": {}, "JO-BA": {}, "JO-IR": {},
+ "JO-JA": {}, "JO-KA": {}, "JO-MA": {}, "JO-MD": {}, "JO-MN": {},
+ "JP-01": {}, "JP-02": {}, "JP-03": {}, "JP-04": {}, "JP-05": {},
+ "JP-06": {}, "JP-07": {}, "JP-08": {}, "JP-09": {}, "JP-10": {},
+ "JP-11": {}, "JP-12": {}, "JP-13": {}, "JP-14": {}, "JP-15": {},
+ "JP-16": {}, "JP-17": {}, "JP-18": {}, "JP-19": {}, "JP-20": {},
+ "JP-21": {}, "JP-22": {}, "JP-23": {}, "JP-24": {}, "JP-25": {},
+ "JP-26": {}, "JP-27": {}, "JP-28": {}, "JP-29": {}, "JP-30": {},
+ "JP-31": {}, "JP-32": {}, "JP-33": {}, "JP-34": {}, "JP-35": {},
+ "JP-36": {}, "JP-37": {}, "JP-38": {}, "JP-39": {}, "JP-40": {},
+ "JP-41": {}, "JP-42": {}, "JP-43": {}, "JP-44": {}, "JP-45": {},
+ "JP-46": {}, "JP-47": {}, "KE-01": {}, "KE-02": {}, "KE-03": {},
+ "KE-04": {}, "KE-05": {}, "KE-06": {}, "KE-07": {}, "KE-08": {},
+ "KE-09": {}, "KE-10": {}, "KE-11": {}, "KE-12": {}, "KE-13": {},
+ "KE-14": {}, "KE-15": {}, "KE-16": {}, "KE-17": {}, "KE-18": {},
+ "KE-19": {}, "KE-20": {}, "KE-21": {}, "KE-22": {}, "KE-23": {},
+ "KE-24": {}, "KE-25": {}, "KE-26": {}, "KE-27": {}, "KE-28": {},
+ "KE-29": {}, "KE-30": {}, "KE-31": {}, "KE-32": {}, "KE-33": {},
+ "KE-34": {}, "KE-35": {}, "KE-36": {}, "KE-37": {}, "KE-38": {},
+ "KE-39": {}, "KE-40": {}, "KE-41": {}, "KE-42": {}, "KE-43": {},
+ "KE-44": {}, "KE-45": {}, "KE-46": {}, "KE-47": {}, "KG-B": {},
+ "KG-C": {}, "KG-GB": {}, "KG-GO": {}, "KG-J": {}, "KG-N": {}, "KG-O": {},
+ "KG-T": {}, "KG-Y": {}, "KH-1": {}, "KH-10": {}, "KH-11": {},
+ "KH-12": {}, "KH-13": {}, "KH-14": {}, "KH-15": {}, "KH-16": {},
+ "KH-17": {}, "KH-18": {}, "KH-19": {}, "KH-2": {}, "KH-20": {},
+ "KH-21": {}, "KH-22": {}, "KH-23": {}, "KH-24": {}, "KH-3": {},
+ "KH-4": {}, "KH-5": {}, "KH-6": {}, "KH-7": {}, "KH-8": {},
+ "KH-9": {}, "KI-G": {}, "KI-L": {}, "KI-P": {}, "KM-A": {},
+ "KM-G": {}, "KM-M": {}, "KN-01": {}, "KN-02": {}, "KN-03": {},
+ "KN-04": {}, "KN-05": {}, "KN-06": {}, "KN-07": {}, "KN-08": {},
+ "KN-09": {}, "KN-10": {}, "KN-11": {}, "KN-12": {}, "KN-13": {},
+ "KN-15": {}, "KN-K": {}, "KN-N": {}, "KP-01": {}, "KP-02": {},
+ "KP-03": {}, "KP-04": {}, "KP-05": {}, "KP-06": {}, "KP-07": {},
+ "KP-08": {}, "KP-09": {}, "KP-10": {}, "KP-13": {}, "KR-11": {},
+ "KR-26": {}, "KR-27": {}, "KR-28": {}, "KR-29": {}, "KR-30": {},
+ "KR-31": {}, "KR-41": {}, "KR-42": {}, "KR-43": {}, "KR-44": {},
+ "KR-45": {}, "KR-46": {}, "KR-47": {}, "KR-48": {}, "KR-49": {},
+ "KW-AH": {}, "KW-FA": {}, "KW-HA": {}, "KW-JA": {}, "KW-KU": {},
+ "KW-MU": {}, "KZ-10": {}, "KZ-75": {}, "KZ-19": {}, "KZ-11": {},
+ "KZ-15": {}, "KZ-71": {}, "KZ-23": {}, "KZ-27": {}, "KZ-47": {},
+ "KZ-55": {}, "KZ-35": {}, "KZ-39": {}, "KZ-43": {}, "KZ-63": {},
+ "KZ-79": {}, "KZ-59": {}, "KZ-61": {}, "KZ-62": {}, "KZ-31": {},
+ "KZ-33": {}, "LA-AT": {}, "LA-BK": {}, "LA-BL": {},
+ "LA-CH": {}, "LA-HO": {}, "LA-KH": {}, "LA-LM": {}, "LA-LP": {},
+ "LA-OU": {}, "LA-PH": {}, "LA-SL": {}, "LA-SV": {}, "LA-VI": {},
+ "LA-VT": {}, "LA-XA": {}, "LA-XE": {}, "LA-XI": {}, "LA-XS": {},
+ "LB-AK": {}, "LB-AS": {}, "LB-BA": {}, "LB-BH": {}, "LB-BI": {},
+ "LB-JA": {}, "LB-JL": {}, "LB-NA": {}, "LC-01": {}, "LC-02": {},
+ "LC-03": {}, "LC-05": {}, "LC-06": {}, "LC-07": {}, "LC-08": {},
+ "LC-10": {}, "LC-11": {}, "LI-01": {}, "LI-02": {},
+ "LI-03": {}, "LI-04": {}, "LI-05": {}, "LI-06": {}, "LI-07": {},
+ "LI-08": {}, "LI-09": {}, "LI-10": {}, "LI-11": {}, "LK-1": {},
+ "LK-11": {}, "LK-12": {}, "LK-13": {}, "LK-2": {}, "LK-21": {},
+ "LK-22": {}, "LK-23": {}, "LK-3": {}, "LK-31": {}, "LK-32": {},
+ "LK-33": {}, "LK-4": {}, "LK-41": {}, "LK-42": {}, "LK-43": {},
+ "LK-44": {}, "LK-45": {}, "LK-5": {}, "LK-51": {}, "LK-52": {},
+ "LK-53": {}, "LK-6": {}, "LK-61": {}, "LK-62": {}, "LK-7": {},
+ "LK-71": {}, "LK-72": {}, "LK-8": {}, "LK-81": {}, "LK-82": {},
+ "LK-9": {}, "LK-91": {}, "LK-92": {}, "LR-BG": {}, "LR-BM": {},
+ "LR-CM": {}, "LR-GB": {}, "LR-GG": {}, "LR-GK": {}, "LR-LO": {},
+ "LR-MG": {}, "LR-MO": {}, "LR-MY": {}, "LR-NI": {}, "LR-RI": {},
+ "LR-SI": {}, "LS-A": {}, "LS-B": {}, "LS-C": {}, "LS-D": {},
+ "LS-E": {}, "LS-F": {}, "LS-G": {}, "LS-H": {}, "LS-J": {},
+ "LS-K": {}, "LT-AL": {}, "LT-KL": {}, "LT-KU": {}, "LT-MR": {},
+ "LT-PN": {}, "LT-SA": {}, "LT-TA": {}, "LT-TE": {}, "LT-UT": {},
+ "LT-VL": {}, "LU-CA": {}, "LU-CL": {}, "LU-DI": {}, "LU-EC": {},
+ "LU-ES": {}, "LU-GR": {}, "LU-LU": {}, "LU-ME": {}, "LU-RD": {},
+ "LU-RM": {}, "LU-VD": {}, "LU-WI": {}, "LU-D": {}, "LU-G": {}, "LU-L": {},
+ "LV-001": {}, "LV-111": {}, "LV-112": {}, "LV-113": {},
+ "LV-002": {}, "LV-003": {}, "LV-004": {}, "LV-005": {}, "LV-006": {},
+ "LV-007": {}, "LV-008": {}, "LV-009": {}, "LV-010": {}, "LV-011": {},
+ "LV-012": {}, "LV-013": {}, "LV-014": {}, "LV-015": {}, "LV-016": {},
+ "LV-017": {}, "LV-018": {}, "LV-019": {}, "LV-020": {}, "LV-021": {},
+ "LV-022": {}, "LV-023": {}, "LV-024": {}, "LV-025": {}, "LV-026": {},
+ "LV-027": {}, "LV-028": {}, "LV-029": {}, "LV-030": {}, "LV-031": {},
+ "LV-032": {}, "LV-033": {}, "LV-034": {}, "LV-035": {}, "LV-036": {},
+ "LV-037": {}, "LV-038": {}, "LV-039": {}, "LV-040": {}, "LV-041": {},
+ "LV-042": {}, "LV-043": {}, "LV-044": {}, "LV-045": {}, "LV-046": {},
+ "LV-047": {}, "LV-048": {}, "LV-049": {}, "LV-050": {}, "LV-051": {},
+ "LV-052": {}, "LV-053": {}, "LV-054": {}, "LV-055": {}, "LV-056": {},
+ "LV-057": {}, "LV-058": {}, "LV-059": {}, "LV-060": {}, "LV-061": {},
+ "LV-062": {}, "LV-063": {}, "LV-064": {}, "LV-065": {}, "LV-066": {},
+ "LV-067": {}, "LV-068": {}, "LV-069": {}, "LV-070": {}, "LV-071": {},
+ "LV-072": {}, "LV-073": {}, "LV-074": {}, "LV-075": {}, "LV-076": {},
+ "LV-077": {}, "LV-078": {}, "LV-079": {}, "LV-080": {}, "LV-081": {},
+ "LV-082": {}, "LV-083": {}, "LV-084": {}, "LV-085": {}, "LV-086": {},
+ "LV-087": {}, "LV-088": {}, "LV-089": {}, "LV-090": {}, "LV-091": {},
+ "LV-092": {}, "LV-093": {}, "LV-094": {}, "LV-095": {}, "LV-096": {},
+ "LV-097": {}, "LV-098": {}, "LV-099": {}, "LV-100": {}, "LV-101": {},
+ "LV-102": {}, "LV-103": {}, "LV-104": {}, "LV-105": {}, "LV-106": {},
+ "LV-107": {}, "LV-108": {}, "LV-109": {}, "LV-110": {}, "LV-DGV": {},
+ "LV-JEL": {}, "LV-JKB": {}, "LV-JUR": {}, "LV-LPX": {}, "LV-REZ": {},
+ "LV-RIX": {}, "LV-VEN": {}, "LV-VMR": {}, "LY-BA": {}, "LY-BU": {},
+ "LY-DR": {}, "LY-GT": {}, "LY-JA": {}, "LY-JB": {}, "LY-JG": {},
+ "LY-JI": {}, "LY-JU": {}, "LY-KF": {}, "LY-MB": {}, "LY-MI": {},
+ "LY-MJ": {}, "LY-MQ": {}, "LY-NL": {}, "LY-NQ": {}, "LY-SB": {},
+ "LY-SR": {}, "LY-TB": {}, "LY-WA": {}, "LY-WD": {}, "LY-WS": {},
+ "LY-ZA": {}, "MA-01": {}, "MA-02": {}, "MA-03": {}, "MA-04": {},
+ "MA-05": {}, "MA-06": {}, "MA-07": {}, "MA-08": {}, "MA-09": {},
+ "MA-10": {}, "MA-11": {}, "MA-12": {}, "MA-13": {}, "MA-14": {},
+ "MA-15": {}, "MA-16": {}, "MA-AGD": {}, "MA-AOU": {}, "MA-ASZ": {},
+ "MA-AZI": {}, "MA-BEM": {}, "MA-BER": {}, "MA-BES": {}, "MA-BOD": {},
+ "MA-BOM": {}, "MA-CAS": {}, "MA-CHE": {}, "MA-CHI": {}, "MA-CHT": {},
+ "MA-ERR": {}, "MA-ESI": {}, "MA-ESM": {}, "MA-FAH": {}, "MA-FES": {},
+ "MA-FIG": {}, "MA-GUE": {}, "MA-HAJ": {}, "MA-HAO": {}, "MA-HOC": {},
+ "MA-IFR": {}, "MA-INE": {}, "MA-JDI": {}, "MA-JRA": {}, "MA-KEN": {},
+ "MA-KES": {}, "MA-KHE": {}, "MA-KHN": {}, "MA-KHO": {}, "MA-LAA": {},
+ "MA-LAR": {}, "MA-MED": {}, "MA-MEK": {}, "MA-MMD": {}, "MA-MMN": {},
+ "MA-MOH": {}, "MA-MOU": {}, "MA-NAD": {}, "MA-NOU": {}, "MA-OUA": {},
+ "MA-OUD": {}, "MA-OUJ": {}, "MA-RAB": {}, "MA-SAF": {}, "MA-SAL": {},
+ "MA-SEF": {}, "MA-SET": {}, "MA-SIK": {}, "MA-SKH": {}, "MA-SYB": {},
+ "MA-TAI": {}, "MA-TAO": {}, "MA-TAR": {}, "MA-TAT": {}, "MA-TAZ": {},
+ "MA-TET": {}, "MA-TIZ": {}, "MA-TNG": {}, "MA-TNT": {}, "MA-ZAG": {},
+ "MC-CL": {}, "MC-CO": {}, "MC-FO": {}, "MC-GA": {}, "MC-JE": {},
+ "MC-LA": {}, "MC-MA": {}, "MC-MC": {}, "MC-MG": {}, "MC-MO": {},
+ "MC-MU": {}, "MC-PH": {}, "MC-SD": {}, "MC-SO": {}, "MC-SP": {},
+ "MC-SR": {}, "MC-VR": {}, "MD-AN": {}, "MD-BA": {}, "MD-BD": {},
+ "MD-BR": {}, "MD-BS": {}, "MD-CA": {}, "MD-CL": {}, "MD-CM": {},
+ "MD-CR": {}, "MD-CS": {}, "MD-CT": {}, "MD-CU": {}, "MD-DO": {},
+ "MD-DR": {}, "MD-DU": {}, "MD-ED": {}, "MD-FA": {}, "MD-FL": {},
+ "MD-GA": {}, "MD-GL": {}, "MD-HI": {}, "MD-IA": {}, "MD-LE": {},
+ "MD-NI": {}, "MD-OC": {}, "MD-OR": {}, "MD-RE": {}, "MD-RI": {},
+ "MD-SD": {}, "MD-SI": {}, "MD-SN": {}, "MD-SO": {}, "MD-ST": {},
+ "MD-SV": {}, "MD-TA": {}, "MD-TE": {}, "MD-UN": {}, "ME-01": {},
+ "ME-02": {}, "ME-03": {}, "ME-04": {}, "ME-05": {}, "ME-06": {},
+ "ME-07": {}, "ME-08": {}, "ME-09": {}, "ME-10": {}, "ME-11": {},
+ "ME-12": {}, "ME-13": {}, "ME-14": {}, "ME-15": {}, "ME-16": {},
+ "ME-17": {}, "ME-18": {}, "ME-19": {}, "ME-20": {}, "ME-21": {}, "ME-24": {},
+ "MG-A": {}, "MG-D": {}, "MG-F": {}, "MG-M": {}, "MG-T": {},
+ "MG-U": {}, "MH-ALK": {}, "MH-ALL": {}, "MH-ARN": {}, "MH-AUR": {},
+ "MH-EBO": {}, "MH-ENI": {}, "MH-JAB": {}, "MH-JAL": {}, "MH-KIL": {},
+ "MH-KWA": {}, "MH-L": {}, "MH-LAE": {}, "MH-LIB": {}, "MH-LIK": {},
+ "MH-MAJ": {}, "MH-MAL": {}, "MH-MEJ": {}, "MH-MIL": {}, "MH-NMK": {},
+ "MH-NMU": {}, "MH-RON": {}, "MH-T": {}, "MH-UJA": {}, "MH-UTI": {},
+ "MH-WTJ": {}, "MH-WTN": {}, "MK-101": {}, "MK-102": {}, "MK-103": {},
+ "MK-104": {}, "MK-105": {},
+ "MK-106": {}, "MK-107": {}, "MK-108": {}, "MK-109": {}, "MK-201": {},
+ "MK-202": {}, "MK-205": {}, "MK-206": {}, "MK-207": {}, "MK-208": {},
+ "MK-209": {}, "MK-210": {}, "MK-211": {}, "MK-301": {}, "MK-303": {},
+ "MK-307": {}, "MK-308": {}, "MK-310": {}, "MK-311": {}, "MK-312": {},
+ "MK-401": {}, "MK-402": {}, "MK-403": {}, "MK-404": {}, "MK-405": {},
+ "MK-406": {}, "MK-408": {}, "MK-409": {}, "MK-410": {}, "MK-501": {},
+ "MK-502": {}, "MK-503": {}, "MK-505": {}, "MK-506": {}, "MK-507": {},
+ "MK-508": {}, "MK-509": {}, "MK-601": {}, "MK-602": {}, "MK-604": {},
+ "MK-605": {}, "MK-606": {}, "MK-607": {}, "MK-608": {}, "MK-609": {},
+ "MK-701": {}, "MK-702": {}, "MK-703": {}, "MK-704": {}, "MK-705": {},
+ "MK-803": {}, "MK-804": {}, "MK-806": {}, "MK-807": {}, "MK-809": {},
+ "MK-810": {}, "MK-811": {}, "MK-812": {}, "MK-813": {}, "MK-814": {},
+ "MK-816": {}, "ML-1": {}, "ML-2": {}, "ML-3": {}, "ML-4": {},
+ "ML-5": {}, "ML-6": {}, "ML-7": {}, "ML-8": {}, "ML-BKO": {},
+ "MM-01": {}, "MM-02": {}, "MM-03": {}, "MM-04": {}, "MM-05": {},
+ "MM-06": {}, "MM-07": {}, "MM-11": {}, "MM-12": {}, "MM-13": {},
+ "MM-14": {}, "MM-15": {}, "MM-16": {}, "MM-17": {}, "MM-18": {}, "MN-035": {},
+ "MN-037": {}, "MN-039": {}, "MN-041": {}, "MN-043": {}, "MN-046": {},
+ "MN-047": {}, "MN-049": {}, "MN-051": {}, "MN-053": {}, "MN-055": {},
+ "MN-057": {}, "MN-059": {}, "MN-061": {}, "MN-063": {}, "MN-064": {},
+ "MN-065": {}, "MN-067": {}, "MN-069": {}, "MN-071": {}, "MN-073": {},
+ "MN-1": {}, "MR-01": {}, "MR-02": {}, "MR-03": {}, "MR-04": {},
+ "MR-05": {}, "MR-06": {}, "MR-07": {}, "MR-08": {}, "MR-09": {},
+ "MR-10": {}, "MR-11": {}, "MR-12": {}, "MR-13": {}, "MR-NKC": {}, "MT-01": {},
+ "MT-02": {}, "MT-03": {}, "MT-04": {}, "MT-05": {}, "MT-06": {},
+ "MT-07": {}, "MT-08": {}, "MT-09": {}, "MT-10": {}, "MT-11": {},
+ "MT-12": {}, "MT-13": {}, "MT-14": {}, "MT-15": {}, "MT-16": {},
+ "MT-17": {}, "MT-18": {}, "MT-19": {}, "MT-20": {}, "MT-21": {},
+ "MT-22": {}, "MT-23": {}, "MT-24": {}, "MT-25": {}, "MT-26": {},
+ "MT-27": {}, "MT-28": {}, "MT-29": {}, "MT-30": {}, "MT-31": {},
+ "MT-32": {}, "MT-33": {}, "MT-34": {}, "MT-35": {}, "MT-36": {},
+ "MT-37": {}, "MT-38": {}, "MT-39": {}, "MT-40": {}, "MT-41": {},
+ "MT-42": {}, "MT-43": {}, "MT-44": {}, "MT-45": {}, "MT-46": {},
+ "MT-47": {}, "MT-48": {}, "MT-49": {}, "MT-50": {}, "MT-51": {},
+ "MT-52": {}, "MT-53": {}, "MT-54": {}, "MT-55": {}, "MT-56": {},
+ "MT-57": {}, "MT-58": {}, "MT-59": {}, "MT-60": {}, "MT-61": {},
+ "MT-62": {}, "MT-63": {}, "MT-64": {}, "MT-65": {}, "MT-66": {},
+ "MT-67": {}, "MT-68": {}, "MU-AG": {}, "MU-BL": {}, "MU-BR": {},
+ "MU-CC": {}, "MU-CU": {}, "MU-FL": {}, "MU-GP": {}, "MU-MO": {},
+ "MU-PA": {}, "MU-PL": {}, "MU-PU": {}, "MU-PW": {}, "MU-QB": {},
+ "MU-RO": {}, "MU-RP": {}, "MU-RR": {}, "MU-SA": {}, "MU-VP": {}, "MV-00": {},
+ "MV-01": {}, "MV-02": {}, "MV-03": {}, "MV-04": {}, "MV-05": {},
+ "MV-07": {}, "MV-08": {}, "MV-12": {}, "MV-13": {}, "MV-14": {},
+ "MV-17": {}, "MV-20": {}, "MV-23": {}, "MV-24": {}, "MV-25": {},
+ "MV-26": {}, "MV-27": {}, "MV-28": {}, "MV-29": {}, "MV-CE": {},
+ "MV-MLE": {}, "MV-NC": {}, "MV-NO": {}, "MV-SC": {}, "MV-SU": {},
+ "MV-UN": {}, "MV-US": {}, "MW-BA": {}, "MW-BL": {}, "MW-C": {},
+ "MW-CK": {}, "MW-CR": {}, "MW-CT": {}, "MW-DE": {}, "MW-DO": {},
+ "MW-KR": {}, "MW-KS": {}, "MW-LI": {}, "MW-LK": {}, "MW-MC": {},
+ "MW-MG": {}, "MW-MH": {}, "MW-MU": {}, "MW-MW": {}, "MW-MZ": {},
+ "MW-N": {}, "MW-NB": {}, "MW-NE": {}, "MW-NI": {}, "MW-NK": {},
+ "MW-NS": {}, "MW-NU": {}, "MW-PH": {}, "MW-RU": {}, "MW-S": {},
+ "MW-SA": {}, "MW-TH": {}, "MW-ZO": {}, "MX-AGU": {}, "MX-BCN": {},
+ "MX-BCS": {}, "MX-CAM": {}, "MX-CHH": {}, "MX-CHP": {}, "MX-COA": {},
+ "MX-COL": {}, "MX-CMX": {}, "MX-DIF": {}, "MX-DUR": {}, "MX-GRO": {}, "MX-GUA": {},
+ "MX-HID": {}, "MX-JAL": {}, "MX-MEX": {}, "MX-MIC": {}, "MX-MOR": {},
+ "MX-NAY": {}, "MX-NLE": {}, "MX-OAX": {}, "MX-PUE": {}, "MX-QUE": {},
+ "MX-ROO": {}, "MX-SIN": {}, "MX-SLP": {}, "MX-SON": {}, "MX-TAB": {},
+ "MX-TAM": {}, "MX-TLA": {}, "MX-VER": {}, "MX-YUC": {}, "MX-ZAC": {},
+ "MY-01": {}, "MY-02": {}, "MY-03": {}, "MY-04": {}, "MY-05": {},
+ "MY-06": {}, "MY-07": {}, "MY-08": {}, "MY-09": {}, "MY-10": {},
+ "MY-11": {}, "MY-12": {}, "MY-13": {}, "MY-14": {}, "MY-15": {},
+ "MY-16": {}, "MZ-A": {}, "MZ-B": {}, "MZ-G": {}, "MZ-I": {},
+ "MZ-L": {}, "MZ-MPM": {}, "MZ-N": {}, "MZ-P": {}, "MZ-Q": {},
+ "MZ-S": {}, "MZ-T": {}, "NA-CA": {}, "NA-ER": {}, "NA-HA": {},
+ "NA-KA": {}, "NA-KE": {}, "NA-KH": {}, "NA-KU": {}, "NA-KW": {}, "NA-OD": {}, "NA-OH": {},
+ "NA-OK": {}, "NA-ON": {}, "NA-OS": {}, "NA-OT": {}, "NA-OW": {},
+ "NE-1": {}, "NE-2": {}, "NE-3": {}, "NE-4": {}, "NE-5": {},
+ "NE-6": {}, "NE-7": {}, "NE-8": {}, "NG-AB": {}, "NG-AD": {},
+ "NG-AK": {}, "NG-AN": {}, "NG-BA": {}, "NG-BE": {}, "NG-BO": {},
+ "NG-BY": {}, "NG-CR": {}, "NG-DE": {}, "NG-EB": {}, "NG-ED": {},
+ "NG-EK": {}, "NG-EN": {}, "NG-FC": {}, "NG-GO": {}, "NG-IM": {},
+ "NG-JI": {}, "NG-KD": {}, "NG-KE": {}, "NG-KN": {}, "NG-KO": {},
+ "NG-KT": {}, "NG-KW": {}, "NG-LA": {}, "NG-NA": {}, "NG-NI": {},
+ "NG-OG": {}, "NG-ON": {}, "NG-OS": {}, "NG-OY": {}, "NG-PL": {},
+ "NG-RI": {}, "NG-SO": {}, "NG-TA": {}, "NG-YO": {}, "NG-ZA": {},
+ "NI-AN": {}, "NI-AS": {}, "NI-BO": {}, "NI-CA": {}, "NI-CI": {},
+ "NI-CO": {}, "NI-ES": {}, "NI-GR": {}, "NI-JI": {}, "NI-LE": {},
+ "NI-MD": {}, "NI-MN": {}, "NI-MS": {}, "NI-MT": {}, "NI-NS": {},
+ "NI-RI": {}, "NI-SJ": {}, "NL-AW": {}, "NL-BQ1": {}, "NL-BQ2": {},
+ "NL-BQ3": {}, "NL-CW": {}, "NL-DR": {}, "NL-FL": {}, "NL-FR": {},
+ "NL-GE": {}, "NL-GR": {}, "NL-LI": {}, "NL-NB": {}, "NL-NH": {},
+ "NL-OV": {}, "NL-SX": {}, "NL-UT": {}, "NL-ZE": {}, "NL-ZH": {},
+ "NO-03": {}, "NO-11": {}, "NO-15": {}, "NO-16": {}, "NO-17": {},
+ "NO-18": {}, "NO-21": {}, "NO-30": {}, "NO-34": {}, "NO-38": {},
+ "NO-42": {}, "NO-46": {}, "NO-50": {}, "NO-54": {},
+ "NO-22": {}, "NP-1": {}, "NP-2": {}, "NP-3": {}, "NP-4": {},
+ "NP-5": {}, "NP-BA": {}, "NP-BH": {}, "NP-DH": {}, "NP-GA": {},
+ "NP-JA": {}, "NP-KA": {}, "NP-KO": {}, "NP-LU": {}, "NP-MA": {},
+ "NP-ME": {}, "NP-NA": {}, "NP-RA": {}, "NP-SA": {}, "NP-SE": {},
+ "NR-01": {}, "NR-02": {}, "NR-03": {}, "NR-04": {}, "NR-05": {},
+ "NR-06": {}, "NR-07": {}, "NR-08": {}, "NR-09": {}, "NR-10": {},
+ "NR-11": {}, "NR-12": {}, "NR-13": {}, "NR-14": {}, "NZ-AUK": {},
+ "NZ-BOP": {}, "NZ-CAN": {}, "NZ-CIT": {}, "NZ-GIS": {}, "NZ-HKB": {},
+ "NZ-MBH": {}, "NZ-MWT": {}, "NZ-N": {}, "NZ-NSN": {}, "NZ-NTL": {},
+ "NZ-OTA": {}, "NZ-S": {}, "NZ-STL": {}, "NZ-TAS": {}, "NZ-TKI": {},
+ "NZ-WGN": {}, "NZ-WKO": {}, "NZ-WTC": {}, "OM-BA": {}, "OM-BS": {}, "OM-BU": {}, "OM-BJ": {},
+ "OM-DA": {}, "OM-MA": {}, "OM-MU": {}, "OM-SH": {}, "OM-SJ": {}, "OM-SS": {}, "OM-WU": {},
+ "OM-ZA": {}, "OM-ZU": {}, "PA-1": {}, "PA-2": {}, "PA-3": {},
+ "PA-4": {}, "PA-5": {}, "PA-6": {}, "PA-7": {}, "PA-8": {},
+ "PA-9": {}, "PA-EM": {}, "PA-KY": {}, "PA-NB": {}, "PE-AMA": {},
+ "PE-ANC": {}, "PE-APU": {}, "PE-ARE": {}, "PE-AYA": {}, "PE-CAJ": {},
+ "PE-CAL": {}, "PE-CUS": {}, "PE-HUC": {}, "PE-HUV": {}, "PE-ICA": {},
+ "PE-JUN": {}, "PE-LAL": {}, "PE-LAM": {}, "PE-LIM": {}, "PE-LMA": {},
+ "PE-LOR": {}, "PE-MDD": {}, "PE-MOQ": {}, "PE-PAS": {}, "PE-PIU": {},
+ "PE-PUN": {}, "PE-SAM": {}, "PE-TAC": {}, "PE-TUM": {}, "PE-UCA": {},
+ "PG-CPK": {}, "PG-CPM": {}, "PG-EBR": {}, "PG-EHG": {}, "PG-EPW": {},
+ "PG-ESW": {}, "PG-GPK": {}, "PG-MBA": {}, "PG-MPL": {}, "PG-MPM": {},
+ "PG-MRL": {}, "PG-NCD": {}, "PG-NIK": {}, "PG-NPP": {}, "PG-NSB": {},
+ "PG-SAN": {}, "PG-SHM": {}, "PG-WBK": {}, "PG-WHM": {}, "PG-WPD": {},
+ "PH-00": {}, "PH-01": {}, "PH-02": {}, "PH-03": {}, "PH-05": {},
+ "PH-06": {}, "PH-07": {}, "PH-08": {}, "PH-09": {}, "PH-10": {},
+ "PH-11": {}, "PH-12": {}, "PH-13": {}, "PH-14": {}, "PH-15": {},
+ "PH-40": {}, "PH-41": {}, "PH-ABR": {}, "PH-AGN": {}, "PH-AGS": {},
+ "PH-AKL": {}, "PH-ALB": {}, "PH-ANT": {}, "PH-APA": {}, "PH-AUR": {},
+ "PH-BAN": {}, "PH-BAS": {}, "PH-BEN": {}, "PH-BIL": {}, "PH-BOH": {},
+ "PH-BTG": {}, "PH-BTN": {}, "PH-BUK": {}, "PH-BUL": {}, "PH-CAG": {},
+ "PH-CAM": {}, "PH-CAN": {}, "PH-CAP": {}, "PH-CAS": {}, "PH-CAT": {},
+ "PH-CAV": {}, "PH-CEB": {}, "PH-COM": {}, "PH-DAO": {}, "PH-DAS": {},
+ "PH-DAV": {}, "PH-DIN": {}, "PH-EAS": {}, "PH-GUI": {}, "PH-IFU": {},
+ "PH-ILI": {}, "PH-ILN": {}, "PH-ILS": {}, "PH-ISA": {}, "PH-KAL": {},
+ "PH-LAG": {}, "PH-LAN": {}, "PH-LAS": {}, "PH-LEY": {}, "PH-LUN": {},
+ "PH-MAD": {}, "PH-MAG": {}, "PH-MAS": {}, "PH-MDC": {}, "PH-MDR": {},
+ "PH-MOU": {}, "PH-MSC": {}, "PH-MSR": {}, "PH-NCO": {}, "PH-NEC": {},
+ "PH-NER": {}, "PH-NSA": {}, "PH-NUE": {}, "PH-NUV": {}, "PH-PAM": {},
+ "PH-PAN": {}, "PH-PLW": {}, "PH-QUE": {}, "PH-QUI": {}, "PH-RIZ": {},
+ "PH-ROM": {}, "PH-SAR": {}, "PH-SCO": {}, "PH-SIG": {}, "PH-SLE": {},
+ "PH-SLU": {}, "PH-SOR": {}, "PH-SUK": {}, "PH-SUN": {}, "PH-SUR": {},
+ "PH-TAR": {}, "PH-TAW": {}, "PH-WSA": {}, "PH-ZAN": {}, "PH-ZAS": {},
+ "PH-ZMB": {}, "PH-ZSI": {}, "PK-BA": {}, "PK-GB": {}, "PK-IS": {},
+ "PK-JK": {}, "PK-KP": {}, "PK-PB": {}, "PK-SD": {}, "PK-TA": {},
+ "PL-02": {}, "PL-04": {}, "PL-06": {}, "PL-08": {}, "PL-10": {},
+ "PL-12": {}, "PL-14": {}, "PL-16": {}, "PL-18": {}, "PL-20": {},
+ "PL-22": {}, "PL-24": {}, "PL-26": {}, "PL-28": {}, "PL-30": {}, "PL-32": {},
+ "PS-BTH": {}, "PS-DEB": {}, "PS-GZA": {}, "PS-HBN": {},
+ "PS-JEM": {}, "PS-JEN": {}, "PS-JRH": {}, "PS-KYS": {}, "PS-NBS": {},
+ "PS-NGZ": {}, "PS-QQA": {}, "PS-RBH": {}, "PS-RFH": {}, "PS-SLT": {},
+ "PS-TBS": {}, "PS-TKM": {}, "PT-01": {}, "PT-02": {}, "PT-03": {},
+ "PT-04": {}, "PT-05": {}, "PT-06": {}, "PT-07": {}, "PT-08": {},
+ "PT-09": {}, "PT-10": {}, "PT-11": {}, "PT-12": {}, "PT-13": {},
+ "PT-14": {}, "PT-15": {}, "PT-16": {}, "PT-17": {}, "PT-18": {},
+ "PT-20": {}, "PT-30": {}, "PW-002": {}, "PW-004": {}, "PW-010": {},
+ "PW-050": {}, "PW-100": {}, "PW-150": {}, "PW-212": {}, "PW-214": {},
+ "PW-218": {}, "PW-222": {}, "PW-224": {}, "PW-226": {}, "PW-227": {},
+ "PW-228": {}, "PW-350": {}, "PW-370": {}, "PY-1": {}, "PY-10": {},
+ "PY-11": {}, "PY-12": {}, "PY-13": {}, "PY-14": {}, "PY-15": {},
+ "PY-16": {}, "PY-19": {}, "PY-2": {}, "PY-3": {}, "PY-4": {},
+ "PY-5": {}, "PY-6": {}, "PY-7": {}, "PY-8": {}, "PY-9": {},
+ "PY-ASU": {}, "QA-DA": {}, "QA-KH": {}, "QA-MS": {}, "QA-RA": {},
+ "QA-US": {}, "QA-WA": {}, "QA-ZA": {}, "RO-AB": {}, "RO-AG": {},
+ "RO-AR": {}, "RO-B": {}, "RO-BC": {}, "RO-BH": {}, "RO-BN": {},
+ "RO-BR": {}, "RO-BT": {}, "RO-BV": {}, "RO-BZ": {}, "RO-CJ": {},
+ "RO-CL": {}, "RO-CS": {}, "RO-CT": {}, "RO-CV": {}, "RO-DB": {},
+ "RO-DJ": {}, "RO-GJ": {}, "RO-GL": {}, "RO-GR": {}, "RO-HD": {},
+ "RO-HR": {}, "RO-IF": {}, "RO-IL": {}, "RO-IS": {}, "RO-MH": {},
+ "RO-MM": {}, "RO-MS": {}, "RO-NT": {}, "RO-OT": {}, "RO-PH": {},
+ "RO-SB": {}, "RO-SJ": {}, "RO-SM": {}, "RO-SV": {}, "RO-TL": {},
+ "RO-TM": {}, "RO-TR": {}, "RO-VL": {}, "RO-VN": {}, "RO-VS": {},
+ "RS-00": {}, "RS-01": {}, "RS-02": {}, "RS-03": {}, "RS-04": {},
+ "RS-05": {}, "RS-06": {}, "RS-07": {}, "RS-08": {}, "RS-09": {},
+ "RS-10": {}, "RS-11": {}, "RS-12": {}, "RS-13": {}, "RS-14": {},
+ "RS-15": {}, "RS-16": {}, "RS-17": {}, "RS-18": {}, "RS-19": {},
+ "RS-20": {}, "RS-21": {}, "RS-22": {}, "RS-23": {}, "RS-24": {},
+ "RS-25": {}, "RS-26": {}, "RS-27": {}, "RS-28": {}, "RS-29": {},
+ "RS-KM": {}, "RS-VO": {}, "RU-AD": {}, "RU-AL": {}, "RU-ALT": {},
+ "RU-AMU": {}, "RU-ARK": {}, "RU-AST": {}, "RU-BA": {}, "RU-BEL": {},
+ "RU-BRY": {}, "RU-BU": {}, "RU-CE": {}, "RU-CHE": {}, "RU-CHU": {},
+ "RU-CU": {}, "RU-DA": {}, "RU-IN": {}, "RU-IRK": {}, "RU-IVA": {},
+ "RU-KAM": {}, "RU-KB": {}, "RU-KC": {}, "RU-KDA": {}, "RU-KEM": {},
+ "RU-KGD": {}, "RU-KGN": {}, "RU-KHA": {}, "RU-KHM": {}, "RU-KIR": {},
+ "RU-KK": {}, "RU-KL": {}, "RU-KLU": {}, "RU-KO": {}, "RU-KOS": {},
+ "RU-KR": {}, "RU-KRS": {}, "RU-KYA": {}, "RU-LEN": {}, "RU-LIP": {},
+ "RU-MAG": {}, "RU-ME": {}, "RU-MO": {}, "RU-MOS": {}, "RU-MOW": {},
+ "RU-MUR": {}, "RU-NEN": {}, "RU-NGR": {}, "RU-NIZ": {}, "RU-NVS": {},
+ "RU-OMS": {}, "RU-ORE": {}, "RU-ORL": {}, "RU-PER": {}, "RU-PNZ": {},
+ "RU-PRI": {}, "RU-PSK": {}, "RU-ROS": {}, "RU-RYA": {}, "RU-SA": {},
+ "RU-SAK": {}, "RU-SAM": {}, "RU-SAR": {}, "RU-SE": {}, "RU-SMO": {},
+ "RU-SPE": {}, "RU-STA": {}, "RU-SVE": {}, "RU-TA": {}, "RU-TAM": {},
+ "RU-TOM": {}, "RU-TUL": {}, "RU-TVE": {}, "RU-TY": {}, "RU-TYU": {},
+ "RU-UD": {}, "RU-ULY": {}, "RU-VGG": {}, "RU-VLA": {}, "RU-VLG": {},
+ "RU-VOR": {}, "RU-YAN": {}, "RU-YAR": {}, "RU-YEV": {}, "RU-ZAB": {},
+ "RW-01": {}, "RW-02": {}, "RW-03": {}, "RW-04": {}, "RW-05": {},
+ "SA-01": {}, "SA-02": {}, "SA-03": {}, "SA-04": {}, "SA-05": {},
+ "SA-06": {}, "SA-07": {}, "SA-08": {}, "SA-09": {}, "SA-10": {},
+ "SA-11": {}, "SA-12": {}, "SA-14": {}, "SB-CE": {}, "SB-CH": {},
+ "SB-CT": {}, "SB-GU": {}, "SB-IS": {}, "SB-MK": {}, "SB-ML": {},
+ "SB-RB": {}, "SB-TE": {}, "SB-WE": {}, "SC-01": {}, "SC-02": {},
+ "SC-03": {}, "SC-04": {}, "SC-05": {}, "SC-06": {}, "SC-07": {},
+ "SC-08": {}, "SC-09": {}, "SC-10": {}, "SC-11": {}, "SC-12": {},
+ "SC-13": {}, "SC-14": {}, "SC-15": {}, "SC-16": {}, "SC-17": {},
+ "SC-18": {}, "SC-19": {}, "SC-20": {}, "SC-21": {}, "SC-22": {},
+ "SC-23": {}, "SC-24": {}, "SC-25": {}, "SD-DC": {}, "SD-DE": {},
+ "SD-DN": {}, "SD-DS": {}, "SD-DW": {}, "SD-GD": {}, "SD-GK": {}, "SD-GZ": {},
+ "SD-KA": {}, "SD-KH": {}, "SD-KN": {}, "SD-KS": {}, "SD-NB": {},
+ "SD-NO": {}, "SD-NR": {}, "SD-NW": {}, "SD-RS": {}, "SD-SI": {},
+ "SE-AB": {}, "SE-AC": {}, "SE-BD": {}, "SE-C": {}, "SE-D": {},
+ "SE-E": {}, "SE-F": {}, "SE-G": {}, "SE-H": {}, "SE-I": {},
+ "SE-K": {}, "SE-M": {}, "SE-N": {}, "SE-O": {}, "SE-S": {},
+ "SE-T": {}, "SE-U": {}, "SE-W": {}, "SE-X": {}, "SE-Y": {},
+ "SE-Z": {}, "SG-01": {}, "SG-02": {}, "SG-03": {}, "SG-04": {},
+ "SG-05": {}, "SH-AC": {}, "SH-HL": {}, "SH-TA": {}, "SI-001": {},
+ "SI-002": {}, "SI-003": {}, "SI-004": {}, "SI-005": {}, "SI-006": {},
+ "SI-007": {}, "SI-008": {}, "SI-009": {}, "SI-010": {}, "SI-011": {},
+ "SI-012": {}, "SI-013": {}, "SI-014": {}, "SI-015": {}, "SI-016": {},
+ "SI-017": {}, "SI-018": {}, "SI-019": {}, "SI-020": {}, "SI-021": {},
+ "SI-022": {}, "SI-023": {}, "SI-024": {}, "SI-025": {}, "SI-026": {},
+ "SI-027": {}, "SI-028": {}, "SI-029": {}, "SI-030": {}, "SI-031": {},
+ "SI-032": {}, "SI-033": {}, "SI-034": {}, "SI-035": {}, "SI-036": {},
+ "SI-037": {}, "SI-038": {}, "SI-039": {}, "SI-040": {}, "SI-041": {},
+ "SI-042": {}, "SI-043": {}, "SI-044": {}, "SI-045": {}, "SI-046": {},
+ "SI-047": {}, "SI-048": {}, "SI-049": {}, "SI-050": {}, "SI-051": {},
+ "SI-052": {}, "SI-053": {}, "SI-054": {}, "SI-055": {}, "SI-056": {},
+ "SI-057": {}, "SI-058": {}, "SI-059": {}, "SI-060": {}, "SI-061": {},
+ "SI-062": {}, "SI-063": {}, "SI-064": {}, "SI-065": {}, "SI-066": {},
+ "SI-067": {}, "SI-068": {}, "SI-069": {}, "SI-070": {}, "SI-071": {},
+ "SI-072": {}, "SI-073": {}, "SI-074": {}, "SI-075": {}, "SI-076": {},
+ "SI-077": {}, "SI-078": {}, "SI-079": {}, "SI-080": {}, "SI-081": {},
+ "SI-082": {}, "SI-083": {}, "SI-084": {}, "SI-085": {}, "SI-086": {},
+ "SI-087": {}, "SI-088": {}, "SI-089": {}, "SI-090": {}, "SI-091": {},
+ "SI-092": {}, "SI-093": {}, "SI-094": {}, "SI-095": {}, "SI-096": {},
+ "SI-097": {}, "SI-098": {}, "SI-099": {}, "SI-100": {}, "SI-101": {},
+ "SI-102": {}, "SI-103": {}, "SI-104": {}, "SI-105": {}, "SI-106": {},
+ "SI-107": {}, "SI-108": {}, "SI-109": {}, "SI-110": {}, "SI-111": {},
+ "SI-112": {}, "SI-113": {}, "SI-114": {}, "SI-115": {}, "SI-116": {},
+ "SI-117": {}, "SI-118": {}, "SI-119": {}, "SI-120": {}, "SI-121": {},
+ "SI-122": {}, "SI-123": {}, "SI-124": {}, "SI-125": {}, "SI-126": {},
+ "SI-127": {}, "SI-128": {}, "SI-129": {}, "SI-130": {}, "SI-131": {},
+ "SI-132": {}, "SI-133": {}, "SI-134": {}, "SI-135": {}, "SI-136": {},
+ "SI-137": {}, "SI-138": {}, "SI-139": {}, "SI-140": {}, "SI-141": {},
+ "SI-142": {}, "SI-143": {}, "SI-144": {}, "SI-146": {}, "SI-147": {},
+ "SI-148": {}, "SI-149": {}, "SI-150": {}, "SI-151": {}, "SI-152": {},
+ "SI-153": {}, "SI-154": {}, "SI-155": {}, "SI-156": {}, "SI-157": {},
+ "SI-158": {}, "SI-159": {}, "SI-160": {}, "SI-161": {}, "SI-162": {},
+ "SI-163": {}, "SI-164": {}, "SI-165": {}, "SI-166": {}, "SI-167": {},
+ "SI-168": {}, "SI-169": {}, "SI-170": {}, "SI-171": {}, "SI-172": {},
+ "SI-173": {}, "SI-174": {}, "SI-175": {}, "SI-176": {}, "SI-177": {},
+ "SI-178": {}, "SI-179": {}, "SI-180": {}, "SI-181": {}, "SI-182": {},
+ "SI-183": {}, "SI-184": {}, "SI-185": {}, "SI-186": {}, "SI-187": {},
+ "SI-188": {}, "SI-189": {}, "SI-190": {}, "SI-191": {}, "SI-192": {},
+ "SI-193": {}, "SI-194": {}, "SI-195": {}, "SI-196": {}, "SI-197": {},
+ "SI-198": {}, "SI-199": {}, "SI-200": {}, "SI-201": {}, "SI-202": {},
+ "SI-203": {}, "SI-204": {}, "SI-205": {}, "SI-206": {}, "SI-207": {},
+ "SI-208": {}, "SI-209": {}, "SI-210": {}, "SI-211": {}, "SI-212": {}, "SI-213": {}, "SK-BC": {},
+ "SK-BL": {}, "SK-KI": {}, "SK-NI": {}, "SK-PV": {}, "SK-TA": {},
+ "SK-TC": {}, "SK-ZI": {}, "SL-E": {}, "SL-N": {}, "SL-S": {},
+ "SL-W": {}, "SM-01": {}, "SM-02": {}, "SM-03": {}, "SM-04": {},
+ "SM-05": {}, "SM-06": {}, "SM-07": {}, "SM-08": {}, "SM-09": {},
+ "SN-DB": {}, "SN-DK": {}, "SN-FK": {}, "SN-KA": {}, "SN-KD": {},
+ "SN-KE": {}, "SN-KL": {}, "SN-LG": {}, "SN-MT": {}, "SN-SE": {},
+ "SN-SL": {}, "SN-TC": {}, "SN-TH": {}, "SN-ZG": {}, "SO-AW": {},
+ "SO-BK": {}, "SO-BN": {}, "SO-BR": {}, "SO-BY": {}, "SO-GA": {},
+ "SO-GE": {}, "SO-HI": {}, "SO-JD": {}, "SO-JH": {}, "SO-MU": {},
+ "SO-NU": {}, "SO-SA": {}, "SO-SD": {}, "SO-SH": {}, "SO-SO": {},
+ "SO-TO": {}, "SO-WO": {}, "SR-BR": {}, "SR-CM": {}, "SR-CR": {},
+ "SR-MA": {}, "SR-NI": {}, "SR-PM": {}, "SR-PR": {}, "SR-SA": {},
+ "SR-SI": {}, "SR-WA": {}, "SS-BN": {}, "SS-BW": {}, "SS-EC": {},
+ "SS-EE8": {}, "SS-EE": {}, "SS-EW": {}, "SS-JG": {}, "SS-LK": {}, "SS-NU": {},
+ "SS-UY": {}, "SS-WR": {}, "ST-01": {}, "ST-P": {}, "ST-S": {}, "SV-AH": {},
+ "SV-CA": {}, "SV-CH": {}, "SV-CU": {}, "SV-LI": {}, "SV-MO": {},
+ "SV-PA": {}, "SV-SA": {}, "SV-SM": {}, "SV-SO": {}, "SV-SS": {},
+ "SV-SV": {}, "SV-UN": {}, "SV-US": {}, "SY-DI": {}, "SY-DR": {},
+ "SY-DY": {}, "SY-HA": {}, "SY-HI": {}, "SY-HL": {}, "SY-HM": {},
+ "SY-ID": {}, "SY-LA": {}, "SY-QU": {}, "SY-RA": {}, "SY-RD": {},
+ "SY-SU": {}, "SY-TA": {}, "SZ-HH": {}, "SZ-LU": {}, "SZ-MA": {},
+ "SZ-SH": {}, "TD-BA": {}, "TD-BG": {}, "TD-BO": {}, "TD-CB": {},
+ "TD-EN": {}, "TD-GR": {}, "TD-HL": {}, "TD-KA": {}, "TD-LC": {},
+ "TD-LO": {}, "TD-LR": {}, "TD-MA": {}, "TD-MC": {}, "TD-ME": {},
+ "TD-MO": {}, "TD-ND": {}, "TD-OD": {}, "TD-SA": {}, "TD-SI": {},
+ "TD-TA": {}, "TD-TI": {}, "TD-WF": {}, "TG-C": {}, "TG-K": {},
+ "TG-M": {}, "TG-P": {}, "TG-S": {}, "TH-10": {}, "TH-11": {},
+ "TH-12": {}, "TH-13": {}, "TH-14": {}, "TH-15": {}, "TH-16": {},
+ "TH-17": {}, "TH-18": {}, "TH-19": {}, "TH-20": {}, "TH-21": {},
+ "TH-22": {}, "TH-23": {}, "TH-24": {}, "TH-25": {}, "TH-26": {},
+ "TH-27": {}, "TH-30": {}, "TH-31": {}, "TH-32": {}, "TH-33": {},
+ "TH-34": {}, "TH-35": {}, "TH-36": {}, "TH-37": {}, "TH-38": {}, "TH-39": {},
+ "TH-40": {}, "TH-41": {}, "TH-42": {}, "TH-43": {}, "TH-44": {},
+ "TH-45": {}, "TH-46": {}, "TH-47": {}, "TH-48": {}, "TH-49": {},
+ "TH-50": {}, "TH-51": {}, "TH-52": {}, "TH-53": {}, "TH-54": {},
+ "TH-55": {}, "TH-56": {}, "TH-57": {}, "TH-58": {}, "TH-60": {},
+ "TH-61": {}, "TH-62": {}, "TH-63": {}, "TH-64": {}, "TH-65": {},
+ "TH-66": {}, "TH-67": {}, "TH-70": {}, "TH-71": {}, "TH-72": {},
+ "TH-73": {}, "TH-74": {}, "TH-75": {}, "TH-76": {}, "TH-77": {},
+ "TH-80": {}, "TH-81": {}, "TH-82": {}, "TH-83": {}, "TH-84": {},
+ "TH-85": {}, "TH-86": {}, "TH-90": {}, "TH-91": {}, "TH-92": {},
+ "TH-93": {}, "TH-94": {}, "TH-95": {}, "TH-96": {}, "TH-S": {},
+ "TJ-GB": {}, "TJ-KT": {}, "TJ-SU": {}, "TJ-DU": {}, "TJ-RA": {}, "TL-AL": {}, "TL-AN": {},
+ "TL-BA": {}, "TL-BO": {}, "TL-CO": {}, "TL-DI": {}, "TL-ER": {},
+ "TL-LA": {}, "TL-LI": {}, "TL-MF": {}, "TL-MT": {}, "TL-OE": {},
+ "TL-VI": {}, "TM-A": {}, "TM-B": {}, "TM-D": {}, "TM-L": {},
+ "TM-M": {}, "TM-S": {}, "TN-11": {}, "TN-12": {}, "TN-13": {},
+ "TN-14": {}, "TN-21": {}, "TN-22": {}, "TN-23": {}, "TN-31": {},
+ "TN-32": {}, "TN-33": {}, "TN-34": {}, "TN-41": {}, "TN-42": {},
+ "TN-43": {}, "TN-51": {}, "TN-52": {}, "TN-53": {}, "TN-61": {},
+ "TN-71": {}, "TN-72": {}, "TN-73": {}, "TN-81": {}, "TN-82": {},
+ "TN-83": {}, "TO-01": {}, "TO-02": {}, "TO-03": {}, "TO-04": {},
+ "TO-05": {}, "TR-01": {}, "TR-02": {}, "TR-03": {}, "TR-04": {},
+ "TR-05": {}, "TR-06": {}, "TR-07": {}, "TR-08": {}, "TR-09": {},
+ "TR-10": {}, "TR-11": {}, "TR-12": {}, "TR-13": {}, "TR-14": {},
+ "TR-15": {}, "TR-16": {}, "TR-17": {}, "TR-18": {}, "TR-19": {},
+ "TR-20": {}, "TR-21": {}, "TR-22": {}, "TR-23": {}, "TR-24": {},
+ "TR-25": {}, "TR-26": {}, "TR-27": {}, "TR-28": {}, "TR-29": {},
+ "TR-30": {}, "TR-31": {}, "TR-32": {}, "TR-33": {}, "TR-34": {},
+ "TR-35": {}, "TR-36": {}, "TR-37": {}, "TR-38": {}, "TR-39": {},
+ "TR-40": {}, "TR-41": {}, "TR-42": {}, "TR-43": {}, "TR-44": {},
+ "TR-45": {}, "TR-46": {}, "TR-47": {}, "TR-48": {}, "TR-49": {},
+ "TR-50": {}, "TR-51": {}, "TR-52": {}, "TR-53": {}, "TR-54": {},
+ "TR-55": {}, "TR-56": {}, "TR-57": {}, "TR-58": {}, "TR-59": {},
+ "TR-60": {}, "TR-61": {}, "TR-62": {}, "TR-63": {}, "TR-64": {},
+ "TR-65": {}, "TR-66": {}, "TR-67": {}, "TR-68": {}, "TR-69": {},
+ "TR-70": {}, "TR-71": {}, "TR-72": {}, "TR-73": {}, "TR-74": {},
+ "TR-75": {}, "TR-76": {}, "TR-77": {}, "TR-78": {}, "TR-79": {},
+ "TR-80": {}, "TR-81": {}, "TT-ARI": {}, "TT-CHA": {}, "TT-CTT": {},
+ "TT-DMN": {}, "TT-ETO": {}, "TT-MRC": {}, "TT-TOB": {}, "TT-PED": {}, "TT-POS": {}, "TT-PRT": {},
+ "TT-PTF": {}, "TT-RCM": {}, "TT-SFO": {}, "TT-SGE": {}, "TT-SIP": {},
+ "TT-SJL": {}, "TT-TUP": {}, "TT-WTO": {}, "TV-FUN": {}, "TV-NIT": {},
+ "TV-NKF": {}, "TV-NKL": {}, "TV-NMA": {}, "TV-NMG": {}, "TV-NUI": {},
+ "TV-VAI": {}, "TW-CHA": {}, "TW-CYI": {}, "TW-CYQ": {}, "TW-KIN": {}, "TW-HSQ": {},
+ "TW-HSZ": {}, "TW-HUA": {}, "TW-LIE": {}, "TW-ILA": {}, "TW-KEE": {}, "TW-KHH": {},
+ "TW-KHQ": {}, "TW-MIA": {}, "TW-NAN": {}, "TW-NWT": {}, "TW-PEN": {}, "TW-PIF": {},
+ "TW-TAO": {}, "TW-TNN": {}, "TW-TNQ": {}, "TW-TPE": {}, "TW-TPQ": {},
+ "TW-TTT": {}, "TW-TXG": {}, "TW-TXQ": {}, "TW-YUN": {}, "TZ-01": {},
+ "TZ-02": {}, "TZ-03": {}, "TZ-04": {}, "TZ-05": {}, "TZ-06": {},
+ "TZ-07": {}, "TZ-08": {}, "TZ-09": {}, "TZ-10": {}, "TZ-11": {},
+ "TZ-12": {}, "TZ-13": {}, "TZ-14": {}, "TZ-15": {}, "TZ-16": {},
+ "TZ-17": {}, "TZ-18": {}, "TZ-19": {}, "TZ-20": {}, "TZ-21": {},
+ "TZ-22": {}, "TZ-23": {}, "TZ-24": {}, "TZ-25": {}, "TZ-26": {}, "TZ-27": {}, "TZ-28": {}, "TZ-29": {}, "TZ-30": {}, "TZ-31": {},
+ "UA-05": {}, "UA-07": {}, "UA-09": {}, "UA-12": {}, "UA-14": {},
+ "UA-18": {}, "UA-21": {}, "UA-23": {}, "UA-26": {}, "UA-30": {},
+ "UA-32": {}, "UA-35": {}, "UA-40": {}, "UA-43": {}, "UA-46": {},
+ "UA-48": {}, "UA-51": {}, "UA-53": {}, "UA-56": {}, "UA-59": {},
+ "UA-61": {}, "UA-63": {}, "UA-65": {}, "UA-68": {}, "UA-71": {},
+ "UA-74": {}, "UA-77": {}, "UG-101": {}, "UG-102": {}, "UG-103": {},
+ "UG-104": {}, "UG-105": {}, "UG-106": {}, "UG-107": {}, "UG-108": {},
+ "UG-109": {}, "UG-110": {}, "UG-111": {}, "UG-112": {}, "UG-113": {},
+ "UG-114": {}, "UG-115": {}, "UG-116": {}, "UG-201": {}, "UG-202": {},
+ "UG-203": {}, "UG-204": {}, "UG-205": {}, "UG-206": {}, "UG-207": {},
+ "UG-208": {}, "UG-209": {}, "UG-210": {}, "UG-211": {}, "UG-212": {},
+ "UG-213": {}, "UG-214": {}, "UG-215": {}, "UG-216": {}, "UG-217": {},
+ "UG-218": {}, "UG-219": {}, "UG-220": {}, "UG-221": {}, "UG-222": {},
+ "UG-223": {}, "UG-224": {}, "UG-301": {}, "UG-302": {}, "UG-303": {},
+ "UG-304": {}, "UG-305": {}, "UG-306": {}, "UG-307": {}, "UG-308": {},
+ "UG-309": {}, "UG-310": {}, "UG-311": {}, "UG-312": {}, "UG-313": {},
+ "UG-314": {}, "UG-315": {}, "UG-316": {}, "UG-317": {}, "UG-318": {},
+ "UG-319": {}, "UG-320": {}, "UG-321": {}, "UG-401": {}, "UG-402": {},
+ "UG-403": {}, "UG-404": {}, "UG-405": {}, "UG-406": {}, "UG-407": {},
+ "UG-408": {}, "UG-409": {}, "UG-410": {}, "UG-411": {}, "UG-412": {},
+ "UG-413": {}, "UG-414": {}, "UG-415": {}, "UG-416": {}, "UG-417": {},
+ "UG-418": {}, "UG-419": {}, "UG-C": {}, "UG-E": {}, "UG-N": {},
+ "UG-W": {}, "UG-322": {}, "UG-323": {}, "UG-420": {}, "UG-117": {},
+ "UG-118": {}, "UG-225": {}, "UG-120": {}, "UG-226": {},
+ "UG-121": {}, "UG-122": {}, "UG-227": {}, "UG-421": {},
+ "UG-325": {}, "UG-228": {}, "UG-123": {}, "UG-422": {},
+ "UG-326": {}, "UG-229": {}, "UG-124": {}, "UG-423": {},
+ "UG-230": {}, "UG-327": {}, "UG-424": {}, "UG-328": {},
+ "UG-425": {}, "UG-426": {}, "UG-330": {},
+ "UM-67": {}, "UM-71": {}, "UM-76": {}, "UM-79": {},
+ "UM-81": {}, "UM-84": {}, "UM-86": {}, "UM-89": {}, "UM-95": {},
+ "US-AK": {}, "US-AL": {}, "US-AR": {}, "US-AS": {}, "US-AZ": {},
+ "US-CA": {}, "US-CO": {}, "US-CT": {}, "US-DC": {}, "US-DE": {},
+ "US-FL": {}, "US-GA": {}, "US-GU": {}, "US-HI": {}, "US-IA": {},
+ "US-ID": {}, "US-IL": {}, "US-IN": {}, "US-KS": {}, "US-KY": {},
+ "US-LA": {}, "US-MA": {}, "US-MD": {}, "US-ME": {}, "US-MI": {},
+ "US-MN": {}, "US-MO": {}, "US-MP": {}, "US-MS": {}, "US-MT": {},
+ "US-NC": {}, "US-ND": {}, "US-NE": {}, "US-NH": {}, "US-NJ": {},
+ "US-NM": {}, "US-NV": {}, "US-NY": {}, "US-OH": {}, "US-OK": {},
+ "US-OR": {}, "US-PA": {}, "US-PR": {}, "US-RI": {}, "US-SC": {},
+ "US-SD": {}, "US-TN": {}, "US-TX": {}, "US-UM": {}, "US-UT": {},
+ "US-VA": {}, "US-VI": {}, "US-VT": {}, "US-WA": {}, "US-WI": {},
+ "US-WV": {}, "US-WY": {}, "UY-AR": {}, "UY-CA": {}, "UY-CL": {},
+ "UY-CO": {}, "UY-DU": {}, "UY-FD": {}, "UY-FS": {}, "UY-LA": {},
+ "UY-MA": {}, "UY-MO": {}, "UY-PA": {}, "UY-RN": {}, "UY-RO": {},
+ "UY-RV": {}, "UY-SA": {}, "UY-SJ": {}, "UY-SO": {}, "UY-TA": {},
+ "UY-TT": {}, "UZ-AN": {}, "UZ-BU": {}, "UZ-FA": {}, "UZ-JI": {},
+ "UZ-NG": {}, "UZ-NW": {}, "UZ-QA": {}, "UZ-QR": {}, "UZ-SA": {},
+ "UZ-SI": {}, "UZ-SU": {}, "UZ-TK": {}, "UZ-TO": {}, "UZ-XO": {},
+ "VC-01": {}, "VC-02": {}, "VC-03": {}, "VC-04": {}, "VC-05": {},
+ "VC-06": {}, "VE-A": {}, "VE-B": {}, "VE-C": {}, "VE-D": {},
+ "VE-E": {}, "VE-F": {}, "VE-G": {}, "VE-H": {}, "VE-I": {},
+ "VE-J": {}, "VE-K": {}, "VE-L": {}, "VE-M": {}, "VE-N": {},
+ "VE-O": {}, "VE-P": {}, "VE-R": {}, "VE-S": {}, "VE-T": {},
+ "VE-U": {}, "VE-V": {}, "VE-W": {}, "VE-X": {}, "VE-Y": {},
+ "VE-Z": {}, "VN-01": {}, "VN-02": {}, "VN-03": {}, "VN-04": {},
+ "VN-05": {}, "VN-06": {}, "VN-07": {}, "VN-09": {}, "VN-13": {},
+ "VN-14": {}, "VN-15": {}, "VN-18": {}, "VN-20": {}, "VN-21": {},
+ "VN-22": {}, "VN-23": {}, "VN-24": {}, "VN-25": {}, "VN-26": {},
+ "VN-27": {}, "VN-28": {}, "VN-29": {}, "VN-30": {}, "VN-31": {},
+ "VN-32": {}, "VN-33": {}, "VN-34": {}, "VN-35": {}, "VN-36": {},
+ "VN-37": {}, "VN-39": {}, "VN-40": {}, "VN-41": {}, "VN-43": {},
+ "VN-44": {}, "VN-45": {}, "VN-46": {}, "VN-47": {}, "VN-49": {},
+ "VN-50": {}, "VN-51": {}, "VN-52": {}, "VN-53": {}, "VN-54": {},
+ "VN-55": {}, "VN-56": {}, "VN-57": {}, "VN-58": {}, "VN-59": {},
+ "VN-61": {}, "VN-63": {}, "VN-66": {}, "VN-67": {}, "VN-68": {},
+ "VN-69": {}, "VN-70": {}, "VN-71": {}, "VN-72": {}, "VN-73": {},
+ "VN-CT": {}, "VN-DN": {}, "VN-HN": {}, "VN-HP": {}, "VN-SG": {},
+ "VU-MAP": {}, "VU-PAM": {}, "VU-SAM": {}, "VU-SEE": {}, "VU-TAE": {},
+ "VU-TOB": {}, "WF-SG": {}, "WF-UV": {}, "WS-AA": {}, "WS-AL": {}, "WS-AT": {}, "WS-FA": {},
+ "WS-GE": {}, "WS-GI": {}, "WS-PA": {}, "WS-SA": {}, "WS-TU": {},
+ "WS-VF": {}, "WS-VS": {}, "YE-AB": {}, "YE-AD": {}, "YE-AM": {},
+ "YE-BA": {}, "YE-DA": {}, "YE-DH": {}, "YE-HD": {}, "YE-HJ": {}, "YE-HU": {},
+ "YE-IB": {}, "YE-JA": {}, "YE-LA": {}, "YE-MA": {}, "YE-MR": {},
+ "YE-MU": {}, "YE-MW": {}, "YE-RA": {}, "YE-SA": {}, "YE-SD": {}, "YE-SH": {},
+ "YE-SN": {}, "YE-TA": {}, "ZA-EC": {}, "ZA-FS": {}, "ZA-GP": {},
+ "ZA-LP": {}, "ZA-MP": {}, "ZA-NC": {}, "ZA-NW": {}, "ZA-WC": {},
+ "ZA-ZN": {}, "ZA-KZN": {}, "ZM-01": {}, "ZM-02": {}, "ZM-03": {}, "ZM-04": {},
+ "ZM-05": {}, "ZM-06": {}, "ZM-07": {}, "ZM-08": {}, "ZM-09": {}, "ZM-10": {},
+ "ZW-BU": {}, "ZW-HA": {}, "ZW-MA": {}, "ZW-MC": {}, "ZW-ME": {},
+ "ZW-MI": {}, "ZW-MN": {}, "ZW-MS": {}, "ZW-MV": {}, "ZW-MW": {},
}
diff --git a/vendor/github.com/go-playground/validator/v10/currency_codes.go b/vendor/github.com/go-playground/validator/v10/currency_codes.go
index a5cd9b18a0abc..d0317f89ccb11 100644
--- a/vendor/github.com/go-playground/validator/v10/currency_codes.go
+++ b/vendor/github.com/go-playground/validator/v10/currency_codes.go
@@ -1,79 +1,79 @@
package validator
-var iso4217 = map[string]bool{
- "AFN": true, "EUR": true, "ALL": true, "DZD": true, "USD": true,
- "AOA": true, "XCD": true, "ARS": true, "AMD": true, "AWG": true,
- "AUD": true, "AZN": true, "BSD": true, "BHD": true, "BDT": true,
- "BBD": true, "BYN": true, "BZD": true, "XOF": true, "BMD": true,
- "INR": true, "BTN": true, "BOB": true, "BOV": true, "BAM": true,
- "BWP": true, "NOK": true, "BRL": true, "BND": true, "BGN": true,
- "BIF": true, "CVE": true, "KHR": true, "XAF": true, "CAD": true,
- "KYD": true, "CLP": true, "CLF": true, "CNY": true, "COP": true,
- "COU": true, "KMF": true, "CDF": true, "NZD": true, "CRC": true,
- "HRK": true, "CUP": true, "CUC": true, "ANG": true, "CZK": true,
- "DKK": true, "DJF": true, "DOP": true, "EGP": true, "SVC": true,
- "ERN": true, "SZL": true, "ETB": true, "FKP": true, "FJD": true,
- "XPF": true, "GMD": true, "GEL": true, "GHS": true, "GIP": true,
- "GTQ": true, "GBP": true, "GNF": true, "GYD": true, "HTG": true,
- "HNL": true, "HKD": true, "HUF": true, "ISK": true, "IDR": true,
- "XDR": true, "IRR": true, "IQD": true, "ILS": true, "JMD": true,
- "JPY": true, "JOD": true, "KZT": true, "KES": true, "KPW": true,
- "KRW": true, "KWD": true, "KGS": true, "LAK": true, "LBP": true,
- "LSL": true, "ZAR": true, "LRD": true, "LYD": true, "CHF": true,
- "MOP": true, "MKD": true, "MGA": true, "MWK": true, "MYR": true,
- "MVR": true, "MRU": true, "MUR": true, "XUA": true, "MXN": true,
- "MXV": true, "MDL": true, "MNT": true, "MAD": true, "MZN": true,
- "MMK": true, "NAD": true, "NPR": true, "NIO": true, "NGN": true,
- "OMR": true, "PKR": true, "PAB": true, "PGK": true, "PYG": true,
- "PEN": true, "PHP": true, "PLN": true, "QAR": true, "RON": true,
- "RUB": true, "RWF": true, "SHP": true, "WST": true, "STN": true,
- "SAR": true, "RSD": true, "SCR": true, "SLL": true, "SGD": true,
- "XSU": true, "SBD": true, "SOS": true, "SSP": true, "LKR": true,
- "SDG": true, "SRD": true, "SEK": true, "CHE": true, "CHW": true,
- "SYP": true, "TWD": true, "TJS": true, "TZS": true, "THB": true,
- "TOP": true, "TTD": true, "TND": true, "TRY": true, "TMT": true,
- "UGX": true, "UAH": true, "AED": true, "USN": true, "UYU": true,
- "UYI": true, "UYW": true, "UZS": true, "VUV": true, "VES": true,
- "VND": true, "YER": true, "ZMW": true, "ZWL": true, "XBA": true,
- "XBB": true, "XBC": true, "XBD": true, "XTS": true, "XXX": true,
- "XAU": true, "XPD": true, "XPT": true, "XAG": true,
+var iso4217 = map[string]struct{}{
+ "AFN": {}, "EUR": {}, "ALL": {}, "DZD": {}, "USD": {},
+ "AOA": {}, "XCD": {}, "ARS": {}, "AMD": {}, "AWG": {},
+ "AUD": {}, "AZN": {}, "BSD": {}, "BHD": {}, "BDT": {},
+ "BBD": {}, "BYN": {}, "BZD": {}, "XOF": {}, "BMD": {},
+ "INR": {}, "BTN": {}, "BOB": {}, "BOV": {}, "BAM": {},
+ "BWP": {}, "NOK": {}, "BRL": {}, "BND": {}, "BGN": {},
+ "BIF": {}, "CVE": {}, "KHR": {}, "XAF": {}, "CAD": {},
+ "KYD": {}, "CLP": {}, "CLF": {}, "CNY": {}, "COP": {},
+ "COU": {}, "KMF": {}, "CDF": {}, "NZD": {}, "CRC": {},
+ "HRK": {}, "CUP": {}, "CUC": {}, "ANG": {}, "CZK": {},
+ "DKK": {}, "DJF": {}, "DOP": {}, "EGP": {}, "SVC": {},
+ "ERN": {}, "SZL": {}, "ETB": {}, "FKP": {}, "FJD": {},
+ "XPF": {}, "GMD": {}, "GEL": {}, "GHS": {}, "GIP": {},
+ "GTQ": {}, "GBP": {}, "GNF": {}, "GYD": {}, "HTG": {},
+ "HNL": {}, "HKD": {}, "HUF": {}, "ISK": {}, "IDR": {},
+ "XDR": {}, "IRR": {}, "IQD": {}, "ILS": {}, "JMD": {},
+ "JPY": {}, "JOD": {}, "KZT": {}, "KES": {}, "KPW": {},
+ "KRW": {}, "KWD": {}, "KGS": {}, "LAK": {}, "LBP": {},
+ "LSL": {}, "ZAR": {}, "LRD": {}, "LYD": {}, "CHF": {},
+ "MOP": {}, "MKD": {}, "MGA": {}, "MWK": {}, "MYR": {},
+ "MVR": {}, "MRU": {}, "MUR": {}, "XUA": {}, "MXN": {},
+ "MXV": {}, "MDL": {}, "MNT": {}, "MAD": {}, "MZN": {},
+ "MMK": {}, "NAD": {}, "NPR": {}, "NIO": {}, "NGN": {},
+ "OMR": {}, "PKR": {}, "PAB": {}, "PGK": {}, "PYG": {},
+ "PEN": {}, "PHP": {}, "PLN": {}, "QAR": {}, "RON": {},
+ "RUB": {}, "RWF": {}, "SHP": {}, "WST": {}, "STN": {},
+ "SAR": {}, "RSD": {}, "SCR": {}, "SLL": {}, "SGD": {},
+ "XSU": {}, "SBD": {}, "SOS": {}, "SSP": {}, "LKR": {},
+ "SDG": {}, "SRD": {}, "SEK": {}, "CHE": {}, "CHW": {},
+ "SYP": {}, "TWD": {}, "TJS": {}, "TZS": {}, "THB": {},
+ "TOP": {}, "TTD": {}, "TND": {}, "TRY": {}, "TMT": {},
+ "UGX": {}, "UAH": {}, "AED": {}, "USN": {}, "UYU": {},
+ "UYI": {}, "UYW": {}, "UZS": {}, "VUV": {}, "VES": {},
+ "VND": {}, "YER": {}, "ZMW": {}, "ZWL": {}, "XBA": {},
+ "XBB": {}, "XBC": {}, "XBD": {}, "XTS": {}, "XXX": {},
+ "XAU": {}, "XPD": {}, "XPT": {}, "XAG": {},
}
-var iso4217_numeric = map[int]bool{
- 8: true, 12: true, 32: true, 36: true, 44: true,
- 48: true, 50: true, 51: true, 52: true, 60: true,
- 64: true, 68: true, 72: true, 84: true, 90: true,
- 96: true, 104: true, 108: true, 116: true, 124: true,
- 132: true, 136: true, 144: true, 152: true, 156: true,
- 170: true, 174: true, 188: true, 191: true, 192: true,
- 203: true, 208: true, 214: true, 222: true, 230: true,
- 232: true, 238: true, 242: true, 262: true, 270: true,
- 292: true, 320: true, 324: true, 328: true, 332: true,
- 340: true, 344: true, 348: true, 352: true, 356: true,
- 360: true, 364: true, 368: true, 376: true, 388: true,
- 392: true, 398: true, 400: true, 404: true, 408: true,
- 410: true, 414: true, 417: true, 418: true, 422: true,
- 426: true, 430: true, 434: true, 446: true, 454: true,
- 458: true, 462: true, 480: true, 484: true, 496: true,
- 498: true, 504: true, 512: true, 516: true, 524: true,
- 532: true, 533: true, 548: true, 554: true, 558: true,
- 566: true, 578: true, 586: true, 590: true, 598: true,
- 600: true, 604: true, 608: true, 634: true, 643: true,
- 646: true, 654: true, 682: true, 690: true, 694: true,
- 702: true, 704: true, 706: true, 710: true, 728: true,
- 748: true, 752: true, 756: true, 760: true, 764: true,
- 776: true, 780: true, 784: true, 788: true, 800: true,
- 807: true, 818: true, 826: true, 834: true, 840: true,
- 858: true, 860: true, 882: true, 886: true, 901: true,
- 927: true, 928: true, 929: true, 930: true, 931: true,
- 932: true, 933: true, 934: true, 936: true, 938: true,
- 940: true, 941: true, 943: true, 944: true, 946: true,
- 947: true, 948: true, 949: true, 950: true, 951: true,
- 952: true, 953: true, 955: true, 956: true, 957: true,
- 958: true, 959: true, 960: true, 961: true, 962: true,
- 963: true, 964: true, 965: true, 967: true, 968: true,
- 969: true, 970: true, 971: true, 972: true, 973: true,
- 975: true, 976: true, 977: true, 978: true, 979: true,
- 980: true, 981: true, 984: true, 985: true, 986: true,
- 990: true, 994: true, 997: true, 999: true,
+var iso4217_numeric = map[int]struct{}{
+ 8: {}, 12: {}, 32: {}, 36: {}, 44: {},
+ 48: {}, 50: {}, 51: {}, 52: {}, 60: {},
+ 64: {}, 68: {}, 72: {}, 84: {}, 90: {},
+ 96: {}, 104: {}, 108: {}, 116: {}, 124: {},
+ 132: {}, 136: {}, 144: {}, 152: {}, 156: {},
+ 170: {}, 174: {}, 188: {}, 191: {}, 192: {},
+ 203: {}, 208: {}, 214: {}, 222: {}, 230: {},
+ 232: {}, 238: {}, 242: {}, 262: {}, 270: {},
+ 292: {}, 320: {}, 324: {}, 328: {}, 332: {},
+ 340: {}, 344: {}, 348: {}, 352: {}, 356: {},
+ 360: {}, 364: {}, 368: {}, 376: {}, 388: {},
+ 392: {}, 398: {}, 400: {}, 404: {}, 408: {},
+ 410: {}, 414: {}, 417: {}, 418: {}, 422: {},
+ 426: {}, 430: {}, 434: {}, 446: {}, 454: {},
+ 458: {}, 462: {}, 480: {}, 484: {}, 496: {},
+ 498: {}, 504: {}, 512: {}, 516: {}, 524: {},
+ 532: {}, 533: {}, 548: {}, 554: {}, 558: {},
+ 566: {}, 578: {}, 586: {}, 590: {}, 598: {},
+ 600: {}, 604: {}, 608: {}, 634: {}, 643: {},
+ 646: {}, 654: {}, 682: {}, 690: {}, 694: {},
+ 702: {}, 704: {}, 706: {}, 710: {}, 728: {},
+ 748: {}, 752: {}, 756: {}, 760: {}, 764: {},
+ 776: {}, 780: {}, 784: {}, 788: {}, 800: {},
+ 807: {}, 818: {}, 826: {}, 834: {}, 840: {},
+ 858: {}, 860: {}, 882: {}, 886: {}, 901: {},
+ 927: {}, 928: {}, 929: {}, 930: {}, 931: {},
+ 932: {}, 933: {}, 934: {}, 936: {}, 938: {},
+ 940: {}, 941: {}, 943: {}, 944: {}, 946: {},
+ 947: {}, 948: {}, 949: {}, 950: {}, 951: {},
+ 952: {}, 953: {}, 955: {}, 956: {}, 957: {},
+ 958: {}, 959: {}, 960: {}, 961: {}, 962: {},
+ 963: {}, 964: {}, 965: {}, 967: {}, 968: {},
+ 969: {}, 970: {}, 971: {}, 972: {}, 973: {},
+ 975: {}, 976: {}, 977: {}, 978: {}, 979: {},
+ 980: {}, 981: {}, 984: {}, 985: {}, 986: {},
+ 990: {}, 994: {}, 997: {}, 999: {},
}
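
Editor's note: the switch from `map[string]bool` to `map[string]struct{}` is the idiomatic Go "set": `struct{}` occupies zero bytes, so the map stores only keys, and membership is read with the comma-ok form instead of the stored value. A minimal standalone sketch of the lookup pattern these tables now rely on (the `isISO4217` helper is hypothetical, for illustration only):

```go
package main

import "fmt"

// Abbreviated stand-in for the full currency table above.
var iso4217 = map[string]struct{}{"EUR": {}, "USD": {}}

func isISO4217(code string) bool {
	_, ok := iso4217[code] // presence check; the zero-size value carries no data
	return ok
}

func main() {
	fmt.Println(isISO4217("EUR"), isISO4217("ZZZ")) // true false
}
```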
diff --git a/vendor/github.com/go-playground/validator/v10/doc.go b/vendor/github.com/go-playground/validator/v10/doc.go
index b47409188131c..c9b1616ee5a6b 100644
--- a/vendor/github.com/go-playground/validator/v10/doc.go
+++ b/vendor/github.com/go-playground/validator/v10/doc.go
@@ -253,7 +253,7 @@ Example #2
This validates that the value is not the data types default zero value.
For numbers ensures value is not zero. For strings ensures value is
-not "". For slices, maps, pointers, interfaces, channels and functions
+not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions
ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled.
Usage: required
@@ -489,12 +489,19 @@ For strings, ints, and uints, oneof will ensure that the value
is one of the values in the parameter. The parameter should be
a list of values separated by whitespace. Values may be
strings or numbers. To match strings with spaces in them, include
-the target string between single quotes.
+the target string between single quotes. Kind of like an 'enum'.
Usage: oneof=red green
oneof='red green' 'blue yellow'
oneof=5 7 9
+# One Of Case Insensitive
+
+Works the same as oneof but is case insensitive and therefore only accepts strings.
+
+ Usage: oneofci=red green
+ oneofci='red green' 'blue yellow'
+
# Greater Than
For numbers, this will ensure that the value is greater than the
@@ -911,11 +918,20 @@ This will accept any uri the golang request uri accepts
# Urn RFC 2141 String
-This validataes that a string value contains a valid URN
+This validates that a string value contains a valid URN
according to the RFC 2141 spec.
Usage: urn_rfc2141
+# Base32 String
+
+This validates that a string value contains a valid base32 value.
+Although an empty string is valid base32 this will report an empty string
+as an error, if you wish to accept an empty string as valid you can use
+this with the omitempty tag.
+
+ Usage: base32
+
# Base64 String
This validates that a string value contains a valid base64 value.
@@ -957,7 +973,7 @@ Bitcoin Bech32 Address (segwit)
This validates that a string value contains a valid bitcoin Bech32 address as defined
by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki)
-Special thanks to Pieter Wuille for providng reference implementations.
+Special thanks to Pieter Wuille for providing reference implementations.
Usage: btc_addr_bech32
@@ -1290,7 +1306,7 @@ may not exist at the time of validation.
# HostPort
This validates that a string value contains a valid DNS hostname and port that
-can be used to valiate fields typically passed to sockets and connections.
+can be used to validate fields typically passed to sockets and connections.
Usage: hostname_port
@@ -1377,11 +1393,19 @@ This validates that a string value contains a valid credit card number using Luh
This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm.
-# MongoDb ObjectID
+# MongoDB
-This validates that a string is a valid 24 character hexadecimal string.
+This validates that a string is a valid 24 character hexadecimal string or valid connection string.
Usage: mongodb
+ mongodb_connection_string
+
+Example:
+
+ type Test struct {
+ ObjectIdField string `validate:"mongodb"`
+ ConnectionStringField string `validate:"mongodb_connection_string"`
+ }
# Cron
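
Editor's note: a hedged sketch exercising the tags documented above (`oneofci`, `mongodb`), assuming the standard v10 entry points `validator.New` and `(*Validate).Struct`; the `payment` struct and field values are illustrative, not from the source:

```go
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

type payment struct {
	Color string `validate:"oneofci=red green blue"` // case-insensitive enum
	OID   string `validate:"mongodb"`                // 24-char hex ObjectID
}

func main() {
	v := validator.New()
	// "RED" passes oneofci even though the parameter list is lowercase.
	err := v.Struct(payment{Color: "RED", OID: "5eb63bbbe01eeed093cb22bb"})
	fmt.Println(err) // <nil>
}
```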
diff --git a/vendor/github.com/go-playground/validator/v10/postcode_regexes.go b/vendor/github.com/go-playground/validator/v10/postcode_regexes.go
index e7e7b687f4bdd..326b8f7538f44 100644
--- a/vendor/github.com/go-playground/validator/v10/postcode_regexes.go
+++ b/vendor/github.com/go-playground/validator/v10/postcode_regexes.go
@@ -1,6 +1,9 @@
package validator
-import "regexp"
+import (
+ "regexp"
+ "sync"
+)
var postCodePatternDict = map[string]string{
"GB": `^GIR[ ]?0AA|((AB|AL|B|BA|BB|BD|BH|BL|BN|BR|BS|BT|CA|CB|CF|CH|CM|CO|CR|CT|CV|CW|DA|DD|DE|DG|DH|DL|DN|DT|DY|E|EC|EH|EN|EX|FK|FY|G|GL|GY|GU|HA|HD|HG|HP|HR|HS|HU|HX|IG|IM|IP|IV|JE|KA|KT|KW|KY|L|LA|LD|LE|LL|LN|LS|LU|M|ME|MK|ML|N|NE|NG|NN|NP|NR|NW|OL|OX|PA|PE|PH|PL|PO|PR|RG|RH|RM|S|SA|SE|SG|SK|SL|SM|SN|SO|SP|SR|SS|ST|SW|SY|TA|TD|TF|TN|TQ|TR|TS|TW|UB|W|WA|WC|WD|WF|WN|WR|WS|WV|YO|ZE)(\d[\dA-Z]?[ ]?\d[ABD-HJLN-UW-Z]{2}))|BFPO[ ]?\d{1,4}$`,
@@ -164,9 +167,12 @@ var postCodePatternDict = map[string]string{
"YT": `^976\d{2}$`,
}
-var postCodeRegexDict = map[string]*regexp.Regexp{}
+var (
+ postcodeRegexInit sync.Once
+ postCodeRegexDict = map[string]*regexp.Regexp{}
+)
-func init() {
+func initPostcodes() {
for countryCode, pattern := range postCodePatternDict {
postCodeRegexDict[countryCode] = regexp.MustCompile(pattern)
}
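
Editor's note: replacing `init()` with a `sync.Once`-guarded `initPostcodes` defers the ~170 `regexp.MustCompile` calls until a postcode validation actually runs, rather than paying the cost at import time. The shape of the pattern, reduced to a standalone sketch (names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
	"sync"
)

var (
	patterns = map[string]string{ // abbreviated from the full table
		"JP": `^\d{3}-\d{4}$`,
		"YT": `^976\d{2}$`,
	}
	compileOnce sync.Once
	compiled    = map[string]*regexp.Regexp{}
)

func postcodeRegex(country string) *regexp.Regexp {
	compileOnce.Do(func() { // at most one goroutine compiles; concurrent callers wait
		for cc, p := range patterns {
			compiled[cc] = regexp.MustCompile(p)
		}
	})
	return compiled[country]
}

func main() {
	fmt.Println(postcodeRegex("YT").MatchString("97600")) // true
}
```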
diff --git a/vendor/github.com/go-playground/validator/v10/regexes.go b/vendor/github.com/go-playground/validator/v10/regexes.go
index af98d8daa62e5..871cf7df7d0fa 100644
--- a/vendor/github.com/go-playground/validator/v10/regexes.go
+++ b/vendor/github.com/go-playground/validator/v10/regexes.go
@@ -1,6 +1,9 @@
package validator
-import "regexp"
+import (
+ "regexp"
+ "sync"
+)
const (
alphaRegexString = "^[a-zA-Z]+$"
@@ -17,6 +20,7 @@ const (
hslaRegexString = "^hsla\\(\\s*(?:0|[1-9]\\d?|[12]\\d\\d|3[0-5]\\d|360)\\s*,\\s*(?:(?:0|[1-9]\\d?|100)%)\\s*,\\s*(?:(?:0|[1-9]\\d?|100)%)\\s*,\\s*(?:(?:0.[1-9]*)|[01])\\s*\\)$"
emailRegexString = "^(?:(?:(?:(?:[a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+(?:\\.([a-zA-Z]|\\d|[!#\\$%&'\\*\\+\\-\\/=\\?\\^_`{\\|}~]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])+)*)|(?:(?:\\x22)(?:(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(?:\\x20|\\x09)+)?(?:(?:[\\x01-\\x08\\x0b\\x0c\\x0e-\\x1f\\x7f]|\\x21|[\\x23-\\x5b]|[\\x5d-\\x7e]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[\\x01-\\x09\\x0b\\x0c\\x0d-\\x7f]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}]))))*(?:(?:(?:\\x20|\\x09)*(?:\\x0d\\x0a))?(\\x20|\\x09)+)?(?:\\x22))))@(?:(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|\\d|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.)+(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])|(?:(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])(?:[a-zA-Z]|\\d|-|\\.|~|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])*(?:[a-zA-Z]|[\\x{00A0}-\\x{D7FF}\\x{F900}-\\x{FDCF}\\x{FDF0}-\\x{FFEF}])))\\.?$"
e164RegexString = "^\\+[1-9]?[0-9]{7,14}$"
+ base32RegexString = "^(?:[A-Z2-7]{8})*(?:[A-Z2-7]{2}={6}|[A-Z2-7]{4}={4}|[A-Z2-7]{5}={3}|[A-Z2-7]{7}=|[A-Z2-7]{8})$"
base64RegexString = "^(?:[A-Za-z0-9+\\/]{4})*(?:[A-Za-z0-9+\\/]{2}==|[A-Za-z0-9+\\/]{3}=|[A-Za-z0-9+\\/]{4})$"
base64URLRegexString = "^(?:[A-Za-z0-9-_]{4})*(?:[A-Za-z0-9-_]{2}==|[A-Za-z0-9-_]{3}=|[A-Za-z0-9-_]{4})$"
base64RawURLRegexString = "^(?:[A-Za-z0-9-_]{4})*(?:[A-Za-z0-9-_]{2,4})$"
@@ -31,7 +35,7 @@ const (
uUID4RFC4122RegexString = "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-4[0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}$"
uUID5RFC4122RegexString = "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-5[0-9a-fA-F]{3}-[89abAB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}$"
uUIDRFC4122RegexString = "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
- uLIDRegexString = "^[A-HJKMNP-TV-Z0-9]{26}$"
+ uLIDRegexString = "^(?i)[A-HJKMNP-TV-Z0-9]{26}$"
md4RegexString = "^[0-9a-f]{32}$"
md5RegexString = "^[0-9a-f]{32}$"
sha256RegexString = "^[0-9a-f]{64}$"
@@ -67,79 +71,93 @@ const (
semverRegexString = `^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$` // numbered capture groups https://semver.org/
dnsRegexStringRFC1035Label = "^[a-z]([-a-z0-9]*[a-z0-9]){0,62}$"
cveRegexString = `^CVE-(1999|2\d{3})-(0[^0]\d{2}|0\d[^0]\d{1}|0\d{2}[^0]|[1-9]{1}\d{3,})$` // CVE Format Id https://cve.mitre.org/cve/identifiers/syntaxchange.html
- mongodbRegexString = "^[a-f\\d]{24}$"
- cronRegexString = `(@(annually|yearly|monthly|weekly|daily|hourly|reboot))|(@every (\d+(ns|us|µs|ms|s|m|h))+)|((((\d+,)+\d+|(\d+(\/|-)\d+)|\d+|\*) ?){5,7})`
+ mongodbIdRegexString = "^[a-f\\d]{24}$"
+ mongodbConnStringRegexString = "^mongodb(\\+srv)?:\\/\\/(([a-zA-Z\\d]+):([a-zA-Z\\d$:\\/?#\\[\\]@]+)@)?(([a-z\\d.-]+)(:[\\d]+)?)((,(([a-z\\d.-]+)(:(\\d+))?))*)?(\\/[a-zA-Z-_]{1,64})?(\\?(([a-zA-Z]+)=([a-zA-Z\\d]+))(&(([a-zA-Z\\d]+)=([a-zA-Z\\d]+))?)*)?$"
+ cronRegexString = `(@(annually|yearly|monthly|weekly|daily|hourly|reboot))|(@every (\d+(ns|us|µs|ms|s|m|h))+)|((((\d+,)+\d+|((\*|\d+)(\/|-)\d+)|\d+|\*) ?){5,7})`
spicedbIDRegexString = `^(([a-zA-Z0-9/_|\-=+]{1,})|\*)$`
spicedbPermissionRegexString = "^([a-z][a-z0-9_]{1,62}[a-z0-9])?$"
spicedbTypeRegexString = "^([a-z][a-z0-9_]{1,61}[a-z0-9]/)?[a-z][a-z0-9_]{1,62}[a-z0-9]$"
)
+func lazyRegexCompile(str string) func() *regexp.Regexp {
+ var regex *regexp.Regexp
+ var once sync.Once
+ return func() *regexp.Regexp {
+ once.Do(func() {
+ regex = regexp.MustCompile(str)
+ })
+ return regex
+ }
+}
+
var (
- alphaRegex = regexp.MustCompile(alphaRegexString)
- alphaNumericRegex = regexp.MustCompile(alphaNumericRegexString)
- alphaUnicodeRegex = regexp.MustCompile(alphaUnicodeRegexString)
- alphaUnicodeNumericRegex = regexp.MustCompile(alphaUnicodeNumericRegexString)
- numericRegex = regexp.MustCompile(numericRegexString)
- numberRegex = regexp.MustCompile(numberRegexString)
- hexadecimalRegex = regexp.MustCompile(hexadecimalRegexString)
- hexColorRegex = regexp.MustCompile(hexColorRegexString)
- rgbRegex = regexp.MustCompile(rgbRegexString)
- rgbaRegex = regexp.MustCompile(rgbaRegexString)
- hslRegex = regexp.MustCompile(hslRegexString)
- hslaRegex = regexp.MustCompile(hslaRegexString)
- e164Regex = regexp.MustCompile(e164RegexString)
- emailRegex = regexp.MustCompile(emailRegexString)
- base64Regex = regexp.MustCompile(base64RegexString)
- base64URLRegex = regexp.MustCompile(base64URLRegexString)
- base64RawURLRegex = regexp.MustCompile(base64RawURLRegexString)
- iSBN10Regex = regexp.MustCompile(iSBN10RegexString)
- iSBN13Regex = regexp.MustCompile(iSBN13RegexString)
- iSSNRegex = regexp.MustCompile(iSSNRegexString)
- uUID3Regex = regexp.MustCompile(uUID3RegexString)
- uUID4Regex = regexp.MustCompile(uUID4RegexString)
- uUID5Regex = regexp.MustCompile(uUID5RegexString)
- uUIDRegex = regexp.MustCompile(uUIDRegexString)
- uUID3RFC4122Regex = regexp.MustCompile(uUID3RFC4122RegexString)
- uUID4RFC4122Regex = regexp.MustCompile(uUID4RFC4122RegexString)
- uUID5RFC4122Regex = regexp.MustCompile(uUID5RFC4122RegexString)
- uUIDRFC4122Regex = regexp.MustCompile(uUIDRFC4122RegexString)
- uLIDRegex = regexp.MustCompile(uLIDRegexString)
- md4Regex = regexp.MustCompile(md4RegexString)
- md5Regex = regexp.MustCompile(md5RegexString)
- sha256Regex = regexp.MustCompile(sha256RegexString)
- sha384Regex = regexp.MustCompile(sha384RegexString)
- sha512Regex = regexp.MustCompile(sha512RegexString)
- ripemd128Regex = regexp.MustCompile(ripemd128RegexString)
- ripemd160Regex = regexp.MustCompile(ripemd160RegexString)
- tiger128Regex = regexp.MustCompile(tiger128RegexString)
- tiger160Regex = regexp.MustCompile(tiger160RegexString)
- tiger192Regex = regexp.MustCompile(tiger192RegexString)
- aSCIIRegex = regexp.MustCompile(aSCIIRegexString)
- printableASCIIRegex = regexp.MustCompile(printableASCIIRegexString)
- multibyteRegex = regexp.MustCompile(multibyteRegexString)
- dataURIRegex = regexp.MustCompile(dataURIRegexString)
- latitudeRegex = regexp.MustCompile(latitudeRegexString)
- longitudeRegex = regexp.MustCompile(longitudeRegexString)
- sSNRegex = regexp.MustCompile(sSNRegexString)
- hostnameRegexRFC952 = regexp.MustCompile(hostnameRegexStringRFC952)
- hostnameRegexRFC1123 = regexp.MustCompile(hostnameRegexStringRFC1123)
- fqdnRegexRFC1123 = regexp.MustCompile(fqdnRegexStringRFC1123)
- btcAddressRegex = regexp.MustCompile(btcAddressRegexString)
- btcUpperAddressRegexBech32 = regexp.MustCompile(btcAddressUpperRegexStringBech32)
- btcLowerAddressRegexBech32 = regexp.MustCompile(btcAddressLowerRegexStringBech32)
- ethAddressRegex = regexp.MustCompile(ethAddressRegexString)
- uRLEncodedRegex = regexp.MustCompile(uRLEncodedRegexString)
- hTMLEncodedRegex = regexp.MustCompile(hTMLEncodedRegexString)
- hTMLRegex = regexp.MustCompile(hTMLRegexString)
- jWTRegex = regexp.MustCompile(jWTRegexString)
- splitParamsRegex = regexp.MustCompile(splitParamsRegexString)
- bicRegex = regexp.MustCompile(bicRegexString)
- semverRegex = regexp.MustCompile(semverRegexString)
- dnsRegexRFC1035Label = regexp.MustCompile(dnsRegexStringRFC1035Label)
- cveRegex = regexp.MustCompile(cveRegexString)
- mongodbRegex = regexp.MustCompile(mongodbRegexString)
- cronRegex = regexp.MustCompile(cronRegexString)
- spicedbIDRegex = regexp.MustCompile(spicedbIDRegexString)
- spicedbPermissionRegex = regexp.MustCompile(spicedbPermissionRegexString)
- spicedbTypeRegex = regexp.MustCompile(spicedbTypeRegexString)
+ alphaRegex = lazyRegexCompile(alphaRegexString)
+ alphaNumericRegex = lazyRegexCompile(alphaNumericRegexString)
+ alphaUnicodeRegex = lazyRegexCompile(alphaUnicodeRegexString)
+ alphaUnicodeNumericRegex = lazyRegexCompile(alphaUnicodeNumericRegexString)
+ numericRegex = lazyRegexCompile(numericRegexString)
+ numberRegex = lazyRegexCompile(numberRegexString)
+ hexadecimalRegex = lazyRegexCompile(hexadecimalRegexString)
+ hexColorRegex = lazyRegexCompile(hexColorRegexString)
+ rgbRegex = lazyRegexCompile(rgbRegexString)
+ rgbaRegex = lazyRegexCompile(rgbaRegexString)
+ hslRegex = lazyRegexCompile(hslRegexString)
+ hslaRegex = lazyRegexCompile(hslaRegexString)
+ e164Regex = lazyRegexCompile(e164RegexString)
+ emailRegex = lazyRegexCompile(emailRegexString)
+ base32Regex = lazyRegexCompile(base32RegexString)
+ base64Regex = lazyRegexCompile(base64RegexString)
+ base64URLRegex = lazyRegexCompile(base64URLRegexString)
+ base64RawURLRegex = lazyRegexCompile(base64RawURLRegexString)
+ iSBN10Regex = lazyRegexCompile(iSBN10RegexString)
+ iSBN13Regex = lazyRegexCompile(iSBN13RegexString)
+ iSSNRegex = lazyRegexCompile(iSSNRegexString)
+ uUID3Regex = lazyRegexCompile(uUID3RegexString)
+ uUID4Regex = lazyRegexCompile(uUID4RegexString)
+ uUID5Regex = lazyRegexCompile(uUID5RegexString)
+ uUIDRegex = lazyRegexCompile(uUIDRegexString)
+ uUID3RFC4122Regex = lazyRegexCompile(uUID3RFC4122RegexString)
+ uUID4RFC4122Regex = lazyRegexCompile(uUID4RFC4122RegexString)
+ uUID5RFC4122Regex = lazyRegexCompile(uUID5RFC4122RegexString)
+ uUIDRFC4122Regex = lazyRegexCompile(uUIDRFC4122RegexString)
+ uLIDRegex = lazyRegexCompile(uLIDRegexString)
+ md4Regex = lazyRegexCompile(md4RegexString)
+ md5Regex = lazyRegexCompile(md5RegexString)
+ sha256Regex = lazyRegexCompile(sha256RegexString)
+ sha384Regex = lazyRegexCompile(sha384RegexString)
+ sha512Regex = lazyRegexCompile(sha512RegexString)
+ ripemd128Regex = lazyRegexCompile(ripemd128RegexString)
+ ripemd160Regex = lazyRegexCompile(ripemd160RegexString)
+ tiger128Regex = lazyRegexCompile(tiger128RegexString)
+ tiger160Regex = lazyRegexCompile(tiger160RegexString)
+ tiger192Regex = lazyRegexCompile(tiger192RegexString)
+ aSCIIRegex = lazyRegexCompile(aSCIIRegexString)
+ printableASCIIRegex = lazyRegexCompile(printableASCIIRegexString)
+ multibyteRegex = lazyRegexCompile(multibyteRegexString)
+ dataURIRegex = lazyRegexCompile(dataURIRegexString)
+ latitudeRegex = lazyRegexCompile(latitudeRegexString)
+ longitudeRegex = lazyRegexCompile(longitudeRegexString)
+ sSNRegex = lazyRegexCompile(sSNRegexString)
+ hostnameRegexRFC952 = lazyRegexCompile(hostnameRegexStringRFC952)
+ hostnameRegexRFC1123 = lazyRegexCompile(hostnameRegexStringRFC1123)
+ fqdnRegexRFC1123 = lazyRegexCompile(fqdnRegexStringRFC1123)
+ btcAddressRegex = lazyRegexCompile(btcAddressRegexString)
+ btcUpperAddressRegexBech32 = lazyRegexCompile(btcAddressUpperRegexStringBech32)
+ btcLowerAddressRegexBech32 = lazyRegexCompile(btcAddressLowerRegexStringBech32)
+ ethAddressRegex = lazyRegexCompile(ethAddressRegexString)
+ uRLEncodedRegex = lazyRegexCompile(uRLEncodedRegexString)
+ hTMLEncodedRegex = lazyRegexCompile(hTMLEncodedRegexString)
+ hTMLRegex = lazyRegexCompile(hTMLRegexString)
+ jWTRegex = lazyRegexCompile(jWTRegexString)
+ splitParamsRegex = lazyRegexCompile(splitParamsRegexString)
+ bicRegex = lazyRegexCompile(bicRegexString)
+ semverRegex = lazyRegexCompile(semverRegexString)
+ dnsRegexRFC1035Label = lazyRegexCompile(dnsRegexStringRFC1035Label)
+ cveRegex = lazyRegexCompile(cveRegexString)
+ mongodbIdRegex = lazyRegexCompile(mongodbIdRegexString)
+ mongodbConnectionRegex = lazyRegexCompile(mongodbConnStringRegexString)
+ cronRegex = lazyRegexCompile(cronRegexString)
+ spicedbIDRegex = lazyRegexCompile(spicedbIDRegexString)
+ spicedbPermissionRegex = lazyRegexCompile(spicedbPermissionRegexString)
+ spicedbTypeRegex = lazyRegexCompile(spicedbTypeRegexString)
)
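
Editor's note: each `fooRegex` variable is now a function returned by `lazyRegexCompile`, so the pattern compiles on first use and later calls pay only a `sync.Once` check; call sites invoke the variable, e.g. `emailRegex().MatchString(s)`. A minimal self-contained sketch of the closure (the `hexRegex` example is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
	"sync"
)

func lazyRegexCompile(str string) func() *regexp.Regexp {
	var re *regexp.Regexp
	var once sync.Once
	return func() *regexp.Regexp {
		once.Do(func() { re = regexp.MustCompile(str) }) // compiled on first call only
		return re
	}
}

var hexRegex = lazyRegexCompile("^[0-9a-f]+$")

func main() {
	fmt.Println(hexRegex().MatchString("deadbeef")) // true; compilation happens here
	fmt.Println(hexRegex().MatchString("xyz"))      // false; reuses the compiled regex
}
```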
diff --git a/vendor/github.com/go-playground/validator/v10/util.go b/vendor/github.com/go-playground/validator/v10/util.go
index 16851593da59b..9285223a2fede 100644
--- a/vendor/github.com/go-playground/validator/v10/util.go
+++ b/vendor/github.com/go-playground/validator/v10/util.go
@@ -271,7 +271,7 @@ func asFloat64(param string) float64 {
return i
}
-// asFloat64 returns the parameter as a float64
+// asFloat32 returns the parameter as a float32
// or panics if it can't convert
func asFloat32(param string) float64 {
i, err := strconv.ParseFloat(param, 32)
@@ -297,7 +297,8 @@ func panicIf(err error) {
// Checks if field value matches regex. If fl.Field can be cast to Stringer, it uses the Stringer interfaces
// String() return value. Otherwise, it uses fl.Field's String() value.
-func fieldMatchesRegexByStringerValOrString(regex *regexp.Regexp, fl FieldLevel) bool {
+func fieldMatchesRegexByStringerValOrString(regexFn func() *regexp.Regexp, fl FieldLevel) bool {
+ regex := regexFn()
switch fl.Field().Kind() {
case reflect.String:
return regex.MatchString(fl.Field().String())
diff --git a/vendor/github.com/go-playground/validator/v10/validator_instance.go b/vendor/github.com/go-playground/validator/v10/validator_instance.go
index 1a345138ed642..d9f148dbaec41 100644
--- a/vendor/github.com/go-playground/validator/v10/validator_instance.go
+++ b/vendor/github.com/go-playground/validator/v10/validator_instance.go
@@ -74,8 +74,8 @@ type CustomTypeFunc func(field reflect.Value) interface{}
type TagNameFunc func(field reflect.StructField) string
type internalValidationFuncWrapper struct {
- fn FuncCtx
- runValidatinOnNil bool
+ fn FuncCtx
+ runValidationOnNil bool
}
// Validate contains the validator settings and cache
@@ -245,7 +245,7 @@ func (v *Validate) registerValidation(tag string, fn FuncCtx, bakedIn bool, nilC
if !bakedIn && (ok || strings.ContainsAny(tag, restrictedTagChars)) {
panic(fmt.Sprintf(restrictedTagErr, tag))
}
- v.validations[tag] = internalValidationFuncWrapper{fn: fn, runValidatinOnNil: nilCheckable}
+ v.validations[tag] = internalValidationFuncWrapper{fn: fn, runValidationOnNil: nilCheckable}
return nil
}
@@ -676,7 +676,7 @@ func (v *Validate) VarWithValue(field interface{}, other interface{}, tag string
}
// VarWithValueCtx validates a single variable, against another variable/field's value using tag style validation and
-// allows passing of contextual validation validation information via context.Context.
+// allows passing of contextual validation information via context.Context.
// eg.
// s1 := "abcd"
// s2 := "abcd"
diff --git a/vendor/go.mongodb.org/mongo-driver/bson/bsoncodec/default_value_decoders.go b/vendor/go.mongodb.org/mongo-driver/bson/bsoncodec/default_value_decoders.go
index fc4a7b1dbf587..159297ef0a497 100644
--- a/vendor/go.mongodb.org/mongo-driver/bson/bsoncodec/default_value_decoders.go
+++ b/vendor/go.mongodb.org/mongo-driver/bson/bsoncodec/default_value_decoders.go
@@ -1521,6 +1521,12 @@ func (dvd DefaultValueDecoders) ValueUnmarshalerDecodeValue(_ DecodeContext, vr
return ValueDecoderError{Name: "ValueUnmarshalerDecodeValue", Types: []reflect.Type{tValueUnmarshaler}, Received: val}
}
+ if vr.Type() == bsontype.Null {
+ val.Set(reflect.Zero(val.Type()))
+
+ return vr.ReadNull()
+ }
+
if val.Kind() == reflect.Ptr && val.IsNil() {
if !val.CanSet() {
return ValueDecoderError{Name: "ValueUnmarshalerDecodeValue", Types: []reflect.Type{tValueUnmarshaler}, Received: val}
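
Editor's note: with this change, a BSON `null` zeroes the destination and consumes the null before the custom unmarshaler runs, so `ValueUnmarshaler` implementations no longer see null payloads. A hedged sketch of a type this affects, assuming the mongo-driver v1 `bson.Marshal`/`bson.Unmarshal` helpers and `bsontype` package (the `Flag` type is hypothetical):

```go
package main

import (
	"fmt"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/bson/bsontype"
)

// Flag implements bson.ValueUnmarshaler.
type Flag struct{ Set bool }

func (f *Flag) UnmarshalBSONValue(t bsontype.Type, data []byte) error {
	f.Set = true // before the fix, this could run even for BSON null
	return nil
}

func main() {
	raw, _ := bson.Marshal(bson.M{"flag": nil})
	var out struct {
		Flag Flag `bson:"flag"`
	}
	_ = bson.Unmarshal(raw, &out)
	// With the fix, null short-circuits: out.Flag stays the zero value.
	fmt.Println(out.Flag.Set) // false
}
```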
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 894d908846033..ecf01621a3dba 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -218,8 +218,8 @@ github.com/GoogleCloudPlatform/opentelemetry-operations-go/exporter/metric
# github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping v0.49.0
## explicit; go 1.22
github.com/GoogleCloudPlatform/opentelemetry-operations-go/internal/resourcemapping
-# github.com/IBM/go-sdk-core/v5 v5.18.5
-## explicit; go 1.21
+# github.com/IBM/go-sdk-core/v5 v5.19.0
+## explicit; go 1.23.0
github.com/IBM/go-sdk-core/v5/core
# github.com/IBM/ibm-cos-sdk-go v1.12.1
## explicit; go 1.21
@@ -760,7 +760,7 @@ github.com/fsouza/fake-gcs-server/internal/notification
# github.com/fxamacker/cbor/v2 v2.7.0
## explicit; go 1.17
github.com/fxamacker/cbor/v2
-# github.com/gabriel-vasile/mimetype v1.4.4
+# github.com/gabriel-vasile/mimetype v1.4.8
## explicit; go 1.20
github.com/gabriel-vasile/mimetype
github.com/gabriel-vasile/mimetype/internal/charset
@@ -832,8 +832,8 @@ github.com/go-playground/locales/currency
# github.com/go-playground/universal-translator v0.18.1
## explicit; go 1.18
github.com/go-playground/universal-translator
-# github.com/go-playground/validator/v10 v10.19.0
-## explicit; go 1.18
+# github.com/go-playground/validator/v10 v10.24.0
+## explicit; go 1.20
github.com/go-playground/validator/v10
# github.com/go-redsync/redsync/v4 v4.13.0
## explicit; go 1.22
@@ -1814,7 +1814,7 @@ go.etcd.io/etcd/client/v3
go.etcd.io/etcd/client/v3/credentials
go.etcd.io/etcd/client/v3/internal/endpoint
go.etcd.io/etcd/client/v3/internal/resolver
-# go.mongodb.org/mongo-driver v1.17.0
+# go.mongodb.org/mongo-driver v1.17.2
## explicit; go 1.18
go.mongodb.org/mongo-driver/bson
go.mongodb.org/mongo-driver/bson/bsoncodec
|
fix
|
update module github.com/ibm/go-sdk-core/v5 to v5.19.0 (main) (#16647)
|
b2b6d76ebd4ee069bb0654e8c6fdb14fe104de10
|
2024-11-05 16:50:58
|
Joao Marcal
|
chore(operator): fix operator-publish-operator-hub.yml workflow (#14760)
| false
|
diff --git a/.github/workflows/operator-publish-operator-hub.yml b/.github/workflows/operator-publish-operator-hub.yml
index dd4d4c199af3e..6ba25fecda068 100644
--- a/.github/workflows/operator-publish-operator-hub.yml
+++ b/.github/workflows/operator-publish-operator-hub.yml
@@ -10,6 +10,9 @@ jobs:
with:
org: redhat-openshift-ecosystem
repo: community-operators-prod
+ secrets:
+ APP_ID: ${{ secrets.APP_ID }}
+ APP_PRIVATE_KEY: ${{ secrets.APP_PRIVATE_KEY }}
operator-hub-community-release:
if: startsWith(github.event.release.tag_name, 'operator/')
@@ -17,3 +20,6 @@ jobs:
with:
org: k8s-operatorhub
repo: community-operators
+ secrets:
+ APP_ID: ${{ secrets.APP_ID }}
+ APP_PRIVATE_KEY: ${{ secrets.APP_PRIVATE_KEY }}
|
chore
|
fix operator-publish-operator-hub.yml workflow (#14760)
|
53a1ab76257d900b80334d68439d7ff4bfcfd39b
|
2024-11-02 20:14:06
|
renovate[bot]
|
fix(deps): update aws-sdk-go-v2 monorepo (#14742)
| false
|
diff --git a/tools/lambda-promtail/go.mod b/tools/lambda-promtail/go.mod
index 961038477cdfb..efbddc2890619 100644
--- a/tools/lambda-promtail/go.mod
+++ b/tools/lambda-promtail/go.mod
@@ -4,9 +4,9 @@ go 1.22
require (
github.com/aws/aws-lambda-go v1.47.0
- github.com/aws/aws-sdk-go-v2 v1.30.4
- github.com/aws/aws-sdk-go-v2/config v1.27.31
- github.com/aws/aws-sdk-go-v2/service/s3 v1.61.0
+ github.com/aws/aws-sdk-go-v2 v1.32.3
+ github.com/aws/aws-sdk-go-v2/config v1.28.1
+ github.com/aws/aws-sdk-go-v2/service/s3 v1.66.2
github.com/go-kit/log v0.2.1
github.com/gogo/protobuf v1.3.2
github.com/golang/snappy v0.0.4
@@ -23,21 +23,21 @@ require (
github.com/Masterminds/sprig/v3 v3.2.3 // indirect
github.com/alecthomas/units v0.0.0-20240626203959-61d1e3462e30 // indirect
github.com/armon/go-metrics v0.4.1 // indirect
- github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4 // indirect
- github.com/aws/aws-sdk-go-v2/credentials v1.17.30 // indirect
- github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12 // indirect
- github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16 // indirect
- github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16 // indirect
+ github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 // indirect
+ github.com/aws/aws-sdk-go-v2/credentials v1.17.42 // indirect
+ github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.18 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.22 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.22 // indirect
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 // indirect
- github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.16 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.4 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.18 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18 // indirect
- github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16 // indirect
- github.com/aws/aws-sdk-go-v2/service/sso v1.22.5 // indirect
- github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5 // indirect
- github.com/aws/aws-sdk-go-v2/service/sts v1.30.5 // indirect
- github.com/aws/smithy-go v1.20.4 // indirect
+ github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.22 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3 // indirect
+ github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sso v1.24.3 // indirect
+ github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.3 // indirect
+ github.com/aws/aws-sdk-go-v2/service/sts v1.32.3 // indirect
+ github.com/aws/smithy-go v1.22.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/buger/jsonparser v1.1.1 // indirect
github.com/c2h5oh/datasize v0.0.0-20231215233829-aa82cc1e6500 // indirect
diff --git a/tools/lambda-promtail/go.sum b/tools/lambda-promtail/go.sum
index f7251a0d33709..6446f0ed7e39b 100644
--- a/tools/lambda-promtail/go.sum
+++ b/tools/lambda-promtail/go.sum
@@ -48,42 +48,42 @@ github.com/aws/aws-lambda-go v1.47.0 h1:0H8s0vumYx/YKs4sE7YM0ktwL2eWse+kfopsRI1s
github.com/aws/aws-lambda-go v1.47.0/go.mod h1:dpMpZgvWx5vuQJfBt0zqBha60q7Dd7RfgJv23DymV8A=
github.com/aws/aws-sdk-go v1.54.19 h1:tyWV+07jagrNiCcGRzRhdtVjQs7Vy41NwsuOcl0IbVI=
github.com/aws/aws-sdk-go v1.54.19/go.mod h1:eRwEWoyTWFMVYVQzKMNHWP5/RV4xIUGMQfXQHfHkpNU=
-github.com/aws/aws-sdk-go-v2 v1.30.4 h1:frhcagrVNrzmT95RJImMHgabt99vkXGslubDaDagTk8=
-github.com/aws/aws-sdk-go-v2 v1.30.4/go.mod h1:CT+ZPWXbYrci8chcARI3OmI/qgd+f6WtuLOoaIA8PR0=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4 h1:70PVAiL15/aBMh5LThwgXdSQorVr91L127ttckI9QQU=
-github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.4/go.mod h1:/MQxMqci8tlqDH+pjmoLu1i0tbWCUP1hhyMRuFxpQCw=
-github.com/aws/aws-sdk-go-v2/config v1.27.31 h1:kxBoRsjhT3pq0cKthgj6RU6bXTm/2SgdoUMyrVw0rAI=
-github.com/aws/aws-sdk-go-v2/config v1.27.31/go.mod h1:z04nZdSWFPaDwK3DdJOG2r+scLQzMYuJeW0CujEm9FM=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.30 h1:aau/oYFtibVovr2rDt8FHlU17BTicFEMAi29V1U+L5Q=
-github.com/aws/aws-sdk-go-v2/credentials v1.17.30/go.mod h1:BPJ/yXV92ZVq6G8uYvbU0gSl8q94UB63nMT5ctNO38g=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12 h1:yjwoSyDZF8Jth+mUk5lSPJCkMC0lMy6FaCD51jm6ayE=
-github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.12/go.mod h1:fuR57fAgMk7ot3WcNQfb6rSEn+SUffl7ri+aa8uKysI=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16 h1:TNyt/+X43KJ9IJJMjKfa3bNTiZbUP7DeCxfbTROESwY=
-github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.16/go.mod h1:2DwJF39FlNAUiX5pAc0UNeiz16lK2t7IaFcm0LFHEgc=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16 h1:jYfy8UPmd+6kJW5YhY0L1/KftReOGxI/4NtVSTh9O/I=
-github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.16/go.mod h1:7ZfEPZxkW42Afq4uQB8H2E2e6ebh6mXTueEpYzjCzcs=
+github.com/aws/aws-sdk-go-v2 v1.32.3 h1:T0dRlFBKcdaUPGNtkBSwHZxrtis8CQU17UpNBZYd0wk=
+github.com/aws/aws-sdk-go-v2 v1.32.3/go.mod h1:2SK5n0a2karNTv5tbP1SjsX0uhttou00v/HpXKM1ZUo=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6 h1:pT3hpW0cOHRJx8Y0DfJUEQuqPild8jRGmSFmBgvydr0=
+github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.6.6/go.mod h1:j/I2++U0xX+cr44QjHay4Cvxj6FUbnxrgmqN3H1jTZA=
+github.com/aws/aws-sdk-go-v2/config v1.28.1 h1:oxIvOUXy8x0U3fR//0eq+RdCKimWI900+SV+10xsCBw=
+github.com/aws/aws-sdk-go-v2/config v1.28.1/go.mod h1:bRQcttQJiARbd5JZxw6wG0yIK3eLeSCPdg6uqmmlIiI=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.42 h1:sBP0RPjBU4neGpIYyx8mkU2QqLPl5u9cmdTWVzIpHkM=
+github.com/aws/aws-sdk-go-v2/credentials v1.17.42/go.mod h1:FwZBfU530dJ26rv9saAbxa9Ej3eF/AK0OAY86k13n4M=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.18 h1:68jFVtt3NulEzojFesM/WVarlFpCaXLKaBxDpzkQ9OQ=
+github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.18/go.mod h1:Fjnn5jQVIo6VyedMc0/EhPpfNlPl7dHV916O6B+49aE=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.22 h1:Jw50LwEkVjuVzE1NzkhNKkBf9cRN7MtE1F/b2cOKTUM=
+github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.22/go.mod h1:Y/SmAyPcOTmpeVaWSzSKiILfXTVJwrGmYZhcRbhWuEY=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.22 h1:981MHwBaRZM7+9QSR6XamDzF/o7ouUGxFzr+nVSIhrs=
+github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.22/go.mod h1:1RA1+aBEfn+CAB/Mh0MB6LsdCYCnjZm7tKXtnk499ZQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1 h1:VaRN3TlFdd6KxX1x3ILT5ynH6HvKgqdiXoTxAF4HQcQ=
github.com/aws/aws-sdk-go-v2/internal/ini v1.8.1/go.mod h1:FbtygfRFze9usAadmnGJNc8KsP346kEe+y2/oyhGAGc=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.16 h1:mimdLQkIX1zr8GIPY1ZtALdBQGxcASiBd2MOp8m/dMc=
-github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.16/go.mod h1:YHk6owoSwrIsok+cAH9PENCOGoH5PU2EllX4vLtSrsY=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.4 h1:KypMCbLPPHEmf9DgMGw51jMj77VfGPAN2Kv4cfhlfgI=
-github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.11.4/go.mod h1:Vz1JQXliGcQktFTN/LN6uGppAIRoLBR2bMvIMP0gOjc=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.18 h1:GckUnpm4EJOAio1c8o25a+b3lVfwVzC9gnSBqiiNmZM=
-github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.3.18/go.mod h1:Br6+bxfG33Dk3ynmkhsW2Z/t9D4+lRqdLDNCKi85w0U=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18 h1:tJ5RnkHCiSH0jyd6gROjlJtNwov0eGYNz8s8nFcR0jQ=
-github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.11.18/go.mod h1:++NHzT+nAF7ZPrHPsA+ENvsXkOO8wEu+C6RXltAG4/c=
-github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16 h1:jg16PhLPUiHIj8zYIW6bqzeQSuHVEiWnGA0Brz5Xv2I=
-github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.17.16/go.mod h1:Uyk1zE1VVdsHSU7096h/rwnXDzOzYQVl+FNPhPw7ShY=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.61.0 h1:Wb544Wh+xfSXqJ/j3R4aX9wrKUoZsJNmilBYZb3mKQ4=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.61.0/go.mod h1:BSPI0EfnYUuNHPS0uqIo5VrRwzie+Fp+YhQOUs16sKI=
-github.com/aws/aws-sdk-go-v2/service/sso v1.22.5 h1:zCsFCKvbj25i7p1u94imVoO447I/sFv8qq+lGJhRN0c=
-github.com/aws/aws-sdk-go-v2/service/sso v1.22.5/go.mod h1:ZeDX1SnKsVlejeuz41GiajjZpRSWR7/42q/EyA/QEiM=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5 h1:SKvPgvdvmiTWoi0GAJ7AsJfOz3ngVkD/ERbs5pUnHNI=
-github.com/aws/aws-sdk-go-v2/service/ssooidc v1.26.5/go.mod h1:20sz31hv/WsPa3HhU3hfrIet2kxM4Pe0r20eBZ20Tac=
-github.com/aws/aws-sdk-go-v2/service/sts v1.30.5 h1:OMsEmCyz2i89XwRwPouAJvhj81wINh+4UK+k/0Yo/q8=
-github.com/aws/aws-sdk-go-v2/service/sts v1.30.5/go.mod h1:vmSqFK+BVIwVpDAGZB3CoCXHzurt4qBE8lf+I/kRTh0=
-github.com/aws/smithy-go v1.20.4 h1:2HK1zBdPgRbjFOHlfeQZfpC4r72MOb9bZkiFwggKO+4=
-github.com/aws/smithy-go v1.20.4/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.22 h1:yV+hCAHZZYJQcwAaszoBNwLbPItHvApxT0kVIw6jRgs=
+github.com/aws/aws-sdk-go-v2/internal/v4a v1.3.22/go.mod h1:kbR1TL8llqB1eGnVbybcA4/wgScxdylOdyAd51yxPdw=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0 h1:TToQNkvGguu209puTojY/ozlqy2d/SFNcoLIqTFi42g=
+github.com/aws/aws-sdk-go-v2/service/internal/accept-encoding v1.12.0/go.mod h1:0jp+ltwkf+SwG2fm/PKo8t4y8pJSgOCO4D8Lz3k0aHQ=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3 h1:kT6BcZsmMtNkP/iYMcRG+mIEA/IbeiUimXtGmqF39y0=
+github.com/aws/aws-sdk-go-v2/service/internal/checksum v1.4.3/go.mod h1:Z8uGua2k4PPaGOYn66pK02rhMrot3Xk3tpBuUFPomZU=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3 h1:qcxX0JYlgWH3hpPUnd6U0ikcl6LLA9sLkXE2w1fpMvY=
+github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.3/go.mod h1:cLSNEmI45soc+Ef8K/L+8sEA3A3pYFEYf5B5UI+6bH4=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3 h1:ZC7Y/XgKUxwqcdhO5LE8P6oGP1eh6xlQReWNKfhvJno=
+github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.3/go.mod h1:WqfO7M9l9yUAw0HcHaikwRd/H6gzYdz7vjejCA5e2oY=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.66.2 h1:p9TNFL8bFUMd+38YIpTAXpoxyz0MxC7FlbFEH4P4E1U=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.66.2/go.mod h1:fNjyo0Coen9QTwQLWeV6WO2Nytwiu+cCcWaTdKCAqqE=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.3 h1:UTpsIf0loCIWEbrqdLb+0RxnTXfWh2vhw4nQmFi4nPc=
+github.com/aws/aws-sdk-go-v2/service/sso v1.24.3/go.mod h1:FZ9j3PFHHAR+w0BSEjK955w5YD2UwB/l/H0yAK3MJvI=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.3 h1:2YCmIXv3tmiItw0LlYf6v7gEHebLY45kBEnPezbUKyU=
+github.com/aws/aws-sdk-go-v2/service/ssooidc v1.28.3/go.mod h1:u19stRyNPxGhj6dRm+Cdgu6N75qnbW7+QN0q0dsAk58=
+github.com/aws/aws-sdk-go-v2/service/sts v1.32.3 h1:wVnQ6tigGsRqSWDEEyH6lSAJ9OyFUsSnbaUWChuSGzs=
+github.com/aws/aws-sdk-go-v2/service/sts v1.32.3/go.mod h1:VZa9yTFyj4o10YGsmDO4gbQJUvvhY72fhumT8W4LqsE=
+github.com/aws/smithy-go v1.22.0 h1:uunKnWlcoL3zO7q+gG2Pk53joueEOsnNB28QdMsmiMM=
+github.com/aws/smithy-go v1.22.0/go.mod h1:irrKGvNn1InZwb2d7fkIRNucdfwR8R+Ts3wxYa/cJHg=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3 h1:6df1vn4bBlDDo4tARvBm7l6KA9iVMnE3NWizDeWSrps=
github.com/bboreham/go-loser v0.0.0-20230920113527-fcc2c21820a3/go.mod h1:CIWtjkly68+yqLPbvwwR/fjNJA/idrtULjZWh2v1ys0=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
|
fix
|
update aws-sdk-go-v2 monorepo (#14742)
|
c39786bd516c1839c4e698c7bed29573eeedca2b
|
2024-02-28 22:19:30
|
J Stickler
|
docs: update storage topic to include azure (#12063)
| false
|
diff --git a/docs/sources/setup/install/helm/configure-storage/_index.md b/docs/sources/setup/install/helm/configure-storage/_index.md
index 6a28387932d0b..e51e8503fee01 100644
--- a/docs/sources/setup/install/helm/configure-storage/_index.md
+++ b/docs/sources/setup/install/helm/configure-storage/_index.md
@@ -20,9 +20,9 @@ This guide assumes Loki will be installed in one of the modes above and that a `
**To use a managed object store:**
-1. Set the `type` of `storage` in `values.yaml` to `gcs` or `s3`.
+1. In the `values.yaml` file, set the value for `storage.type` to `azure`, `gcs`, or `s3`.
-2. Configure the storage client under `loki.storage.gcs` or `loki.storage.s3`.
+1. Configure the storage client under `loki.storage.azure`, `loki.storage.gcs`, or `loki.storage.s3`.
**To install Minio alongside Loki:**
@@ -41,7 +41,7 @@ This guide assumes Loki will be installed in one of the modes above and that a `
1. Provision an IAM role, policy and S3 bucket as described in [Storage]({{< relref "../../../../storage#aws-deployment-s3-single-store" >}}).
- If the Terraform module was used note the annotation emitted by `terraform output -raw annotation`.
-2. Add the IAM role annotation to the service account in `values.yaml`:
+1. Add the IAM role annotation to the service account in `values.yaml`:
```
serviceAccount:
@@ -49,7 +49,7 @@ This guide assumes Loki will be installed in one of the modes above and that a `
"eks.amazonaws.com/role-arn": "arn:aws:iam::<account id>:role/<role name>"
```
-3. Configure the storage:
+1. Configure the storage:
```
loki:
|
docs
|
update storage topic to include azure (#12063)
|
1b595259e26edd8828eeb1a682d33c4ddc417694
|
2024-11-14 18:45:38
|
renovate[bot]
|
fix(deps): update module github.com/twmb/franz-go/pkg/kadm to v1.14.0 (#14911)
| false
|
diff --git a/go.mod b/go.mod
index 40580662f5599..a6003e38a7c55 100644
--- a/go.mod
+++ b/go.mod
@@ -138,10 +138,10 @@ require (
github.com/schollz/progressbar/v3 v3.17.1
github.com/shirou/gopsutil/v4 v4.24.10
github.com/thanos-io/objstore v0.0.0-20241111205755-d1dd89d41f97
- github.com/twmb/franz-go v1.17.1
- github.com/twmb/franz-go/pkg/kadm v1.13.0
+ github.com/twmb/franz-go v1.18.0
+ github.com/twmb/franz-go/pkg/kadm v1.14.0
github.com/twmb/franz-go/pkg/kfake v0.0.0-20241015013301-cea7aa5d8037
- github.com/twmb/franz-go/pkg/kmsg v1.8.0
+ github.com/twmb/franz-go/pkg/kmsg v1.9.0
github.com/twmb/franz-go/plugin/kotel v1.5.0
github.com/twmb/franz-go/plugin/kprom v1.1.0
github.com/willf/bloom v2.0.3+incompatible
diff --git a/go.sum b/go.sum
index 1be9586c67979..1d7cdb6a0599a 100644
--- a/go.sum
+++ b/go.sum
@@ -2608,14 +2608,14 @@ github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ttacon/chalk v0.0.0-20160626202418-22c06c80ed31/go.mod h1:onvgF043R+lC5RZ8IT9rBXDaEDnpnw/Cl+HFiw+v/7Q=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
-github.com/twmb/franz-go v1.17.1 h1:0LwPsbbJeJ9R91DPUHSEd4su82WJWcTY1Zzbgbg4CeQ=
-github.com/twmb/franz-go v1.17.1/go.mod h1:NreRdJ2F7dziDY/m6VyspWd6sNxHKXdMZI42UfQ3GXM=
-github.com/twmb/franz-go/pkg/kadm v1.13.0 h1:bJq4C2ZikUE2jh/wl9MtMTQ/kpmnBgVFh8XMQBEC+60=
-github.com/twmb/franz-go/pkg/kadm v1.13.0/go.mod h1:VMvpfjz/szpH9WB+vGM+rteTzVv0djyHFimci9qm2C0=
+github.com/twmb/franz-go v1.18.0 h1:25FjMZfdozBywVX+5xrWC2W+W76i0xykKjTdEeD2ejw=
+github.com/twmb/franz-go v1.18.0/go.mod h1:zXCGy74M0p5FbXsLeASdyvfLFsBvTubVqctIaa5wQ+I=
+github.com/twmb/franz-go/pkg/kadm v1.14.0 h1:nAn1co1lXzJQocpzyIyOFOjUBf4WHWs5/fTprXy2IZs=
+github.com/twmb/franz-go/pkg/kadm v1.14.0/go.mod h1:XjOPz6ZaXXjrW2jVCfLuucP8H1w2TvD6y3PT2M+aAM4=
github.com/twmb/franz-go/pkg/kfake v0.0.0-20241015013301-cea7aa5d8037 h1:M4Zj79q1OdZusy/Q8TOTttvx/oHkDVY7sc0xDyRnwWs=
github.com/twmb/franz-go/pkg/kfake v0.0.0-20241015013301-cea7aa5d8037/go.mod h1:nkBI/wGFp7t1NJnnCeJdS4sX5atPAqwCPpDXKuI7SC8=
-github.com/twmb/franz-go/pkg/kmsg v1.8.0 h1:lAQB9Z3aMrIP9qF9288XcFf/ccaSxEitNA1CDTEIeTA=
-github.com/twmb/franz-go/pkg/kmsg v1.8.0/go.mod h1:HzYEb8G3uu5XevZbtU0dVbkphaKTHk0X68N5ka4q6mU=
+github.com/twmb/franz-go/pkg/kmsg v1.9.0 h1:JojYUph2TKAau6SBtErXpXGC7E3gg4vGZMv9xFU/B6M=
+github.com/twmb/franz-go/pkg/kmsg v1.9.0/go.mod h1:CMbfazviCyY6HM0SXuG5t9vOwYDHRCSrJJyBAe5paqg=
github.com/twmb/franz-go/plugin/kotel v1.5.0 h1:TiPfGUbQK384OO7ZYGdo7JuPCbJn+/8njQ/D9Je9CDE=
github.com/twmb/franz-go/plugin/kotel v1.5.0/go.mod h1:wRXzRo76x1myOUMaVHAyraXoGBdEcvlLChGTVv5+DWU=
github.com/twmb/franz-go/plugin/kprom v1.1.0 h1:grGeIJbm4llUBF8jkDjTb/b8rKllWSXjMwIqeCCcNYQ=
diff --git a/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go b/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go
index 432fc4c76a3c1..93df0868d1c10 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kadm/kadm.go
@@ -191,6 +191,16 @@ type Offset struct {
Metadata string // Metadata, if non-empty, is used for offset commits.
}
+// NewOffsetFromRecord is a helper to create an Offset for a given Record
+func NewOffsetFromRecord(record *kgo.Record) Offset {
+ return Offset{
+ Topic: record.Topic,
+ Partition: record.Partition,
+ At: record.Offset + 1,
+ LeaderEpoch: record.LeaderEpoch,
+ }
+}
+
// Partitions wraps many partitions.
type Partitions []Partition
@@ -372,7 +382,7 @@ func OffsetsFromFetches(fs kgo.Fetches) Offsets {
return
}
r := p.Records[len(p.Records)-1]
- os.AddOffset(r.Topic, r.Partition, r.Offset+1, r.LeaderEpoch)
+ os.Add(NewOffsetFromRecord(r))
})
return os
}
@@ -383,7 +393,7 @@ func OffsetsFromFetches(fs kgo.Fetches) Offsets {
func OffsetsFromRecords(rs ...kgo.Record) Offsets {
os := make(Offsets)
for _, r := range rs {
- os.AddOffset(r.Topic, r.Partition, r.Offset+1, r.LeaderEpoch)
+ os.Add(NewOffsetFromRecord(&r))
}
return os
}
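
Editor's note: `NewOffsetFromRecord` centralizes the "last record's offset + 1, carrying the leader epoch" conversion that `OffsetsFromFetches` and `OffsetsFromRecords` previously open-coded. A hedged sketch of building a commit set from a consumed record, using only the types and methods visible in this diff:

```go
package main

import (
	"fmt"

	"github.com/twmb/franz-go/pkg/kadm"
	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	rec := &kgo.Record{Topic: "events", Partition: 3, Offset: 41, LeaderEpoch: 7}

	os := make(kadm.Offsets)
	os.Add(kadm.NewOffsetFromRecord(rec)) // records offset 42 with epoch 7 for commit

	fmt.Println(os["events"][3].At) // 42
}
```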
diff --git a/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go b/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go
index 731a23a1975ac..1f408783142ea 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kerr/kerr.go
@@ -190,6 +190,21 @@ var (
MismatchedEndpointType = &Error{"MISMATCHED_ENDPOINT_TYPE", 114, false, "The request was sent to an endpoint of the wrong type."}
UnsupportedEndpointType = &Error{"UNSUPPORTED_ENDPOINT_TYPE", 115, false, "This endpoint type is not supported yet."}
UnknownControllerID = &Error{"UNKNOWN_CONTROLLER_ID", 116, false, "This controller ID is not known"}
+
+ // UnknownSubscriptionID = &Error{"UNKNOWN_SUBSCRIPTION_ID", 117, false, "Client sent a push telemetry request with an invalid or outdated subscription ID."}
+ // TelemetryTooLarge = &Error{"TELEMETRY_TOO_LARGE", 118, false, "Client sent a push telemetry request larger than the maximum size the broker will accept."}
+ // InvalidRegistration = &Error{"INVALID_REGISTRATION", 119, false, "The controller has considered the broker registration to be invalid."}
+
+ TransactionAbortable = &Error{"TRANSACTION_ABORTABLE", 120, false, "The server encountered an error with the transaction. The client can abort the transaction to continue using this transactional ID."}
+
+ // InvalidRecordState = &Error{"INVALID_RECORD_STATE", 121, false, "The record state is invalid. The acknowledgement of delivery could not be completed."}
+	// ShareSessionNotFound           = &Error{"SHARE_SESSION_NOT_FOUND", 122, false, "The share session was not found."}
+ // InvalidShareSessionEpoch = &Error{"INVALID_SHARE_SESSION_EPOCH", 123, false, "The share session epoch is invalid."}
+ // FencedStateEpoch = &Error{"FENCED_STATE_EPOCH", 124, false, "The share coordinator rejected the request because the share-group state epoch did not match."}
+ // InvalidVoterKey = &Error{"INVALID_VOTER_KEY", 125, false, "The voter key doesn't match the receiving replica's key."}
+ // DuplicateVoter = &Error{"DUPLICATE_VOTER", 126, false, "The voter is already part of the set of voters."}
+ // VoterNotFound = &Error{"VOTER_NOT_FOUND", 127, false, "The voter is not part of the set of voters."}
+ // InvalidRegularExpression = &Error{"INVALID_REGULAR_EXPRESSION", 128, false, "The regular expression is not valid."}
)
var code2err = map[int16]error{
@@ -312,4 +327,9 @@ var code2err = map[int16]error{
115: UnsupportedEndpointType, // ""
116: UnknownControllerID, // ""
+ // 117: UnknownSubscriptionID, // KIP-714 f1819f448 KAFKA-15778 & KAFKA-15779
+ // 118: TelemetryTooLarge, // ""
+ // 119: InvalidRegistration, // KIP-858 f467f6bb4 KAFKA-15361
+
+ 120: TransactionAbortable, // KIP-890 2e8d69b78 KAFKA-16314
}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/config.go b/vendor/github.com/twmb/franz-go/pkg/kgo/config.go
index 92ebeaa39055c..c7bf6963ef2a1 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/config.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/config.go
@@ -1113,8 +1113,8 @@ func RecordDeliveryTimeout(timeout time.Duration) ProducerOpt {
// For Kafka-to-Kafka transactions, the transactional ID is only one half of
// the equation. You must also assign a group to consume from.
//
-// To produce transactionally, you first BeginTransaction, then produce records
-// consumed from a group, then you EndTransaction. All records produced outside
+// To produce transactionally, you first [BeginTransaction], then produce records
+// consumed from a group, then you [EndTransaction]. All records produced outside
// of a transaction will fail immediately with an error.
//
// After producing a batch, you must commit what you consumed. Auto committing
@@ -1449,7 +1449,7 @@ func Balancers(balancers ...GroupBalancer) GroupOpt {
// in this timeout, the broker will remove the member from the group and
// initiate a rebalance.
//
-// If you are using a GroupTransactSession for EOS, wish to lower this, and are
+// If you are using a [GroupTransactSession] for EOS, wish to lower this, and are
// talking to a Kafka cluster pre 2.5, consider lowering the
// TransactionTimeout. If you do not, you risk a transaction finishing after a
// group has rebalanced, which could lead to duplicate processing. If you are
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go
index 01f98da486a65..eb15ec00f5bb3 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer.go
@@ -620,7 +620,7 @@ func (cl *Client) PauseFetchPartitions(topicPartitions map[string][]int32) map[s
// ResumeFetchTopics resumes fetching the input topics if they were previously
// paused. Resuming topics that are not currently paused is a per-topic no-op.
-// See the documentation on PauseTfetchTopics for more details.
+// See the documentation on PauseFetchTopics for more details.
func (cl *Client) ResumeFetchTopics(topics ...string) {
defer cl.allSinksAndSources(func(sns sinkAndSource) {
sns.source.maybeConsume()
@@ -1186,6 +1186,44 @@ func (c *consumer) assignPartitions(assignments map[string]map[int32]Offset, how
}
}
+// filterMetadataAllTopics, called BEFORE doOnMetadataUpdate, evaluates
+// all topics received against the user provided regex.
+func (c *consumer) filterMetadataAllTopics(topics []string) []string {
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ var rns reNews
+ defer rns.log(&c.cl.cfg)
+
+ var reSeen map[string]bool
+ if c.d != nil {
+ reSeen = c.d.reSeen
+ } else {
+ reSeen = c.g.reSeen
+ }
+
+ keep := topics[:0]
+ for _, topic := range topics {
+ want, seen := reSeen[topic]
+ if !seen {
+ for rawRe, re := range c.cl.cfg.topics {
+ if want = re.MatchString(topic); want {
+ rns.add(rawRe, topic)
+ break
+ }
+ }
+ if !want {
+ rns.skip(topic)
+ }
+ reSeen[topic] = want
+ }
+ if want {
+ keep = append(keep, topic)
+ }
+ }
+ return keep
+}
+
func (c *consumer) doOnMetadataUpdate() {
if !c.consuming() {
return
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go
index bf42dbcae4e8c..0dcbf9897d8a2 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_direct.go
@@ -65,29 +65,11 @@ func (*directConsumer) getSetAssigns(setOffsets map[string]map[int32]EpochOffset
func (d *directConsumer) findNewAssignments() map[string]map[int32]Offset {
topics := d.tps.load()
- var rns reNews
- if d.cfg.regex {
- defer rns.log(d.cfg)
- }
-
toUse := make(map[string]map[int32]Offset, 10)
for topic, topicPartitions := range topics {
var useTopic bool
if d.cfg.regex {
- want, seen := d.reSeen[topic]
- if !seen {
- for rawRe, re := range d.cfg.topics {
- if want = re.MatchString(topic); want {
- rns.add(rawRe, topic)
- break
- }
- }
- if !want {
- rns.skip(topic)
- }
- d.reSeen[topic] = want
- }
- useTopic = want
+ useTopic = d.reSeen[topic]
} else {
useTopic = d.m.onlyt(topic)
}
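
Editor's note: the per-topic regex evaluation (with the `reSeen` memo) moves out of `findNewAssignments` and into `filterMetadataAllTopics`, so each topic is tested against the patterns once per client rather than on every assignment pass. The memoization shape, reduced to a standalone sketch (names are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	res    = []*regexp.Regexp{regexp.MustCompile(`^logs\..*`)}
	reSeen = map[string]bool{} // topic -> whether any pattern matched
)

func wantTopic(topic string) bool {
	want, seen := reSeen[topic]
	if !seen {
		for _, re := range res {
			if want = re.MatchString(topic); want {
				break
			}
		}
		reSeen[topic] = want // cache the verdict either way
	}
	return want
}

func main() {
	fmt.Println(wantTopic("logs.app"), wantTopic("metrics.app")) // true false
	fmt.Println(wantTopic("logs.app"))                           // cached: true
}
```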
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go
index c1946eb40e72f..dee54bd743aca 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/consumer_group.go
@@ -27,8 +27,7 @@ type groupConsumer struct {
cooperative atomicBool // true if the group balancer chosen during Join is cooperative
// The data for topics that the user assigned. Metadata updates the
- // atomic.Value in each pointer atomically. If we are consuming via
- // regex, metadata grabs the lock to add new topics.
+ // atomic.Value in each pointer atomically.
tps *topicsPartitions
reSeen map[string]bool // topics we evaluated against regex, and whether we want them or not
@@ -1714,11 +1713,6 @@ func (g *groupConsumer) findNewAssignments() {
delta int
}
- var rns reNews
- if g.cfg.regex {
- defer rns.log(&g.cl.cfg)
- }
-
var numNewTopics int
toChange := make(map[string]change, len(topics))
for topic, topicPartitions := range topics {
@@ -1741,20 +1735,7 @@ func (g *groupConsumer) findNewAssignments() {
// support adding new regex).
useTopic := true
if g.cfg.regex {
- want, seen := g.reSeen[topic]
- if !seen {
- for rawRe, re := range g.cfg.topics {
- if want = re.MatchString(topic); want {
- rns.add(rawRe, topic)
- break
- }
- }
- if !want {
- rns.skip(topic)
- }
- g.reSeen[topic] = want
- }
- useTopic = want
+ useTopic = g.reSeen[topic]
}
// We only track using the topic if there are partitions for
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go b/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go
index 3ff1dbfebe81d..9af0c81c7df70 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/errors.go
@@ -311,11 +311,11 @@ func (e *errUnknownCoordinator) Error() string {
// consumer group member was kicked from the group or was never able to join
// the group.
type ErrGroupSession struct {
- err error
+ Err error
}
func (e *ErrGroupSession) Error() string {
- return fmt.Sprintf("unable to join group session: %v", e.err)
+ return fmt.Sprintf("unable to join group session: %v", e.Err)
}
-func (e *ErrGroupSession) Unwrap() error { return e.err }
+func (e *ErrGroupSession) Unwrap() error { return e.Err }
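
Editor's note: exporting the wrapped field (`err` → `Err`) lets callers reach the cause directly, and `Unwrap` already makes the type cooperate with `errors.Is`/`errors.As`. A sketch of inspecting it, with the type mirrored locally so the example stays self-contained:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrGroupSession mirrors the shape in this diff.
type ErrGroupSession struct{ Err error }

func (e *ErrGroupSession) Error() string {
	return fmt.Sprintf("unable to join group session: %v", e.Err)
}
func (e *ErrGroupSession) Unwrap() error { return e.Err }

func main() {
	err := fmt.Errorf("poll failed: %w", &ErrGroupSession{Err: errors.New("rebalance timeout")})

	var gse *ErrGroupSession
	if errors.As(err, &gse) {
		fmt.Println("session error cause:", gse.Err) // direct field access, new in this change
	}
}
```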
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go b/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go
index 33cac6414f949..cbe4b5adc5ac0 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/metadata.go
@@ -350,6 +350,14 @@ func (cl *Client) updateMetadata() (retryWhy multiUpdateWhy, err error) {
for topic := range latest {
allTopics = append(allTopics, topic)
}
+
+	// We filter out topics that will not match any of our regexes.
+ // This ensures that the `tps` field does not contain topics
+ // we will never use (the client works with misc. topics in
+ // there, but it's better to avoid it -- and allows us to use
+ // `tps` in GetConsumeTopics).
+ allTopics = c.filterMetadataAllTopics(allTopics)
+
tpsConsumerLoad = tpsConsumer.ensureTopics(allTopics)
defer tpsConsumer.storeData(tpsConsumerLoad)
@@ -504,6 +512,7 @@ func (mp metadataPartition) newPartition(cl *Client, isProduce bool) *topicParti
failing: mp.loadErr != 0,
sink: mp.sns.sink,
topicPartitionData: td,
+ lastAckedOffset: -1,
}
} else {
p.cursor = &cursor{
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go b/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go
index d9cca9920aace..1ea2a29f051d3 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/producer.go
@@ -480,12 +480,24 @@ func (cl *Client) produce(
}()
drainBuffered := func(err error) {
- p.mu.Lock()
- quit = true
+ // The expected case here is that a context was
+		// canceled while we were waiting for space, so we are
+ // exiting and need to kill the goro above.
+ //
+ // However, it is possible that the goro above has
+ // already exited AND the context was canceled, and
+ // `select` chose the context-canceled case.
+ //
+ // So, to avoid a deadlock, we need to wakeup the
+ // goro above in another goroutine.
+ go func() {
+ p.mu.Lock()
+ quit = true
+ p.mu.Unlock()
+ p.c.Broadcast()
+ }()
+ <-wait // we wait for the goroutine to exit, then unlock again (since the goroutine leaves the mutex locked)
p.mu.Unlock()
- p.c.Broadcast() // wake the goroutine above
- <-wait
- p.mu.Unlock() // we wait for the goroutine to exit, then unlock again (since the goroutine leaves the mutex locked)
p.promiseRecordBeforeBuf(promisedRec{ctx, promise, r}, err)
}
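
The pattern in this hunk is general: a canceler must never wake a `sync.Cond` waiter while the two could be lock-ordered against each other, so the wakeup moves to a fresh goroutine and the caller waits for the waiter to exit. A standard-library-only sketch of the same shape (all names illustrative, not the client's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu   sync.Mutex
		cond = sync.NewCond(&mu)
		quit bool
		wait = make(chan struct{})
	)

	go func() { // waiter: stands in for the buffering goroutine
		mu.Lock()
		for !quit {
			cond.Wait()
		}
		close(wait) // exits with the mutex intentionally left locked
	}()

	go func() { // canceler: wakes the waiter from its own goroutine,
		mu.Lock() // so it can never deadlock against it
		quit = true
		mu.Unlock()
		cond.Broadcast()
	}()

	<-wait      // wait for the waiter to exit, as drainBuffered does,
	mu.Unlock() // then release the lock the waiter left held
	fmt.Println("drained")
}
```
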
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go b/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go
index 4f1ebe6f524b0..648db4ecbc537 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/record_and_fetch.go
@@ -120,7 +120,7 @@ type Record struct {
// before the record is unbuffered.
ProducerEpoch int16
- // ProducerEpoch is the producer ID of this message if it was produced
+ // ProducerID is the producer ID of this message if it was produced
// with a producer ID. An epoch and ID of 0 means it was not.
//
// For producing, this is left unset. This will be set by the client
@@ -274,6 +274,9 @@ func (p *FetchPartition) EachRecord(fn func(*Record)) {
type FetchTopic struct {
// Topic is the topic this is for.
Topic string
+ // TopicID is the ID of the topic, if your cluster supports returning
+ // topic IDs in fetch responses (Kafka 3.1+).
+ TopicID [16]byte
// Partitions contains individual partitions in the topic that were
// fetched.
Partitions []FetchPartition
@@ -345,8 +348,8 @@ type FetchError struct {
// client for.
//
// 3. an untyped batch parse failure; these are usually unrecoverable by
-// restarts, and it may be best to just let the client continue. However,
-// restarting is an option, but you may need to manually repair your
+// restarts, and it may be best to just let the client continue.
+// Restarting is an option, but you may need to manually repair your
// partition.
//
// 4. an injected ErrClientClosed; this is a fatal informational error that
@@ -560,6 +563,7 @@ func (fs Fetches) EachTopic(fn func(FetchTopic)) {
for topic, partitions := range topics {
fn(FetchTopic{
topic,
+ [16]byte{},
partitions,
})
}
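
Note that `EachTopic` above fills a zero `TopicID`, since it regroups partitions across fetches; the per-fetch `Topics` slices carry the real IDs on Kafka 3.1+ brokers. A minimal sketch of reading them, assuming an already-configured consumer client:

```go
package main

import (
	"context"
	"fmt"

	"github.com/twmb/franz-go/pkg/kgo"
)

// printTopicIDs polls once and prints each fetched topic's 16-byte ID;
// the ID is all zeroes on brokers older than Kafka 3.1.
func printTopicIDs(ctx context.Context, cl *kgo.Client) {
	fetches := cl.PollFetches(ctx)
	for _, f := range fetches {
		for _, t := range f.Topics {
			fmt.Printf("topic=%s id=%x\n", t.Topic, t.TopicID)
		}
	}
}

func main() { _ = printTopicIDs } // wiring a live client is out of scope here
```
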
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go b/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go
index 6d0f3dfe008dc..2b96d9169874e 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/sink.go
@@ -406,6 +406,7 @@ func (s *sink) produce(sem <-chan struct{}) bool {
if txnReq != nil {
// txnReq can fail from:
+ // - TransactionAbortable
// - retry failure
// - auth failure
// - producer id mapping / epoch errors
@@ -417,6 +418,10 @@ func (s *sink) produce(sem <-chan struct{}) bool {
batchesStripped, err := s.doTxnReq(req, txnReq)
if err != nil {
switch {
+ case errors.Is(err, kerr.TransactionAbortable):
+ // If we get TransactionAbortable, we continue into producing.
+ // The produce will fail with the same error, and this is the
+ // only way to notify the user to abort the txn.
case isRetryableBrokerErr(err) || isDialNonTimeoutErr(err):
s.cl.bumpRepeatedLoadErr(err)
s.cl.cfg.logger.Log(LogLevelWarn, "unable to AddPartitionsToTxn due to retryable broker err, bumping client's buffered record load errors by 1 and retrying", "err", err)
@@ -431,8 +436,8 @@ func (s *sink) produce(sem <-chan struct{}) bool {
// with produce request vs. end txn (KAFKA-12671)
s.cl.failProducerID(id, epoch, err)
s.cl.cfg.logger.Log(LogLevelError, "fatal AddPartitionsToTxn error, failing all buffered records (it is possible the client can recover after EndTransaction)", "broker", logID(s.nodeID), "err", err)
+ return false
}
- return false
}
// If we stripped everything, ensure we backoff to force a
@@ -563,7 +568,7 @@ func (s *sink) issueTxnReq(
continue
}
for _, partition := range topic.Partitions {
- if err := kerr.ErrorForCode(partition.ErrorCode); err != nil {
+ if err := kerr.ErrorForCode(partition.ErrorCode); err != nil && err != kerr.TransactionAbortable { // see below for txn abortable
// OperationNotAttempted is set for all partitions that are authorized
// if any partition is unauthorized _or_ does not exist. We simply remove
// unattempted partitions and treat them as retryable.
@@ -854,6 +859,20 @@ func (s *sink) handleReqRespBatch(
// handling, but KIP-360 demonstrated that resetting sequence
// numbers is fundamentally unsafe, so we treat it like OOOSN.
//
+ // KAFKA-5793 specifically mentions for OOOSN "when you get it,
+ // it should always mean data loss". Sometime after KIP-360,
+ // Kafka changed the client to remove all places
+ // UnknownProducerID was returned, and then started referring
+ // to OOOSN as retryable. KIP-890 definitively says OOOSN is
+ // retryable. However, the Kafka source as of 24-10-10 still
+ // only retries OOOSN for batches that are NOT the expected
+ // next batch (i.e., it's next + 1, for when there are multiple
+ // in flight). With KIP-890, we still just disregard whatever
+ // supposedly non-retryable / actually-is-retryable error is
+ // returned if the LogStartOffset is _after_ what we previously
+	// produced. Specifically, this is step (4) in the wiki link
+ // within KAFKA-5793.
+ //
// InvalidMapping is similar to UnknownProducerID, but occurs
// when the txnal coordinator timed out our transaction.
//
@@ -881,6 +900,22 @@ func (s *sink) handleReqRespBatch(
// txn coordinator requests, which have PRODUCER_FENCED vs
// TRANSACTION_TIMED_OUT.
+ if batch.owner.lastAckedOffset >= 0 && rp.LogStartOffset > batch.owner.lastAckedOffset {
+ s.cl.cfg.logger.Log(LogLevelInfo, "partition prefix truncation to after our last produce caused the broker to forget us; no loss occurred, bumping producer epoch and resetting sequence numbers",
+ "broker", logID(s.nodeID),
+ "topic", topic,
+ "partition", rp.Partition,
+ "producer_id", producerID,
+ "producer_epoch", producerEpoch,
+ "err", err,
+ )
+ s.cl.failProducerID(producerID, producerEpoch, errReloadProducerID)
+ if debug {
+ fmt.Fprintf(b, "resetting@%d,%d(%s)}, ", rp.BaseOffset, nrec, err)
+ }
+ return true, false
+ }
+
if s.cl.cfg.txnID != nil || s.cl.cfg.stopOnDataLoss {
s.cl.cfg.logger.Log(LogLevelInfo, "batch errored, failing the producer ID",
"broker", logID(s.nodeID),
@@ -951,6 +986,7 @@ func (s *sink) handleReqRespBatch(
)
} else {
batch.owner.okOnSink = true
+ batch.owner.lastAckedOffset = rp.BaseOffset + int64(len(batch.records))
}
s.cl.finishBatch(batch.recBatch, producerID, producerEpoch, rp.Partition, rp.BaseOffset, err)
didProduce = err == nil
@@ -1222,6 +1258,8 @@ type recBuf struct {
// to drain.
inflight uint8
+ lastAckedOffset int64 // last ProduceResponse's BaseOffset + how many records we produced
+
topicPartitionData // updated in metadata migrateProductionTo (same spot sink is updated)
// seq is used for the seq in each record batch. It is incremented when
@@ -2057,7 +2095,7 @@ func (b *recBatch) tryBuffer(pr promisedRec, produceVersion, maxBatchBytes int32
//////////////
func (*produceRequest) Key() int16 { return 0 }
-func (*produceRequest) MaxVersion() int16 { return 10 }
+func (*produceRequest) MaxVersion() int16 { return 11 }
func (p *produceRequest) SetVersion(v int16) { p.version = v }
func (p *produceRequest) GetVersion() int16 { return p.version }
func (p *produceRequest) IsFlexible() bool { return p.version >= 9 }
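
The long comment and new branch above reduce to one predicate: if the partition's log start offset has advanced past everything this producer ever had acked, the broker lost our state to retention, not to data loss, and the client may safely re-init its producer ID. A distilled sketch of just that check (not the client's actual code path):

```go
package main

import "fmt"

// brokerForgotUs reports whether a "fatal" produce error can instead be
// treated as recoverable prefix truncation, per the hunk above.
// lastAcked is -1 until the first successful produce.
func brokerForgotUs(lastAcked, logStartOffset int64) bool {
	return lastAcked >= 0 && logStartOffset > lastAcked
}

func main() {
	fmt.Println(brokerForgotUs(100, 150)) // true: retention passed our last ack
	fmt.Println(brokerForgotUs(-1, 150))  // false: we never produced successfully
}
```
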
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/source.go b/vendor/github.com/twmb/franz-go/pkg/kgo/source.go
index 0c475d14a9419..12732e90f3e87 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/source.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/source.go
@@ -1041,6 +1041,7 @@ func (s *source) handleReqResp(br *broker, req *fetchRequest, resp *kmsg.FetchRe
fetchTopic := FetchTopic{
Topic: topic,
+ TopicID: rt.TopicID,
Partitions: make([]FetchPartition, 0, len(rt.Partitions)),
}
diff --git a/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go b/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go
index 25cfd44356f62..304f28b40aaf9 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kgo/txn.go
@@ -159,6 +159,18 @@ func (s *GroupTransactSession) Close() {
s.cl.Close()
}
+// AllowRebalance is a wrapper around Client.AllowRebalance, with the exact
+// same semantics. Refer to that function's documentation.
+func (s *GroupTransactSession) AllowRebalance() {
+ s.cl.AllowRebalance()
+}
+
+// CloseAllowingRebalance is a wrapper around Client.CloseAllowingRebalance,
+// with the exact same semantics. Refer to that function's documentation.
+func (s *GroupTransactSession) CloseAllowingRebalance() {
+ s.cl.CloseAllowingRebalance()
+}
+
// PollFetches is a wrapper around Client.PollFetches, with the exact same
// semantics. Refer to that function's documentation.
//
@@ -281,7 +293,8 @@ func (s *GroupTransactSession) End(ctx context.Context, commit TransactionEndTry
errors.Is(err, kerr.CoordinatorLoadInProgress),
errors.Is(err, kerr.NotCoordinator),
errors.Is(err, kerr.ConcurrentTransactions),
- errors.Is(err, kerr.UnknownServerError):
+ errors.Is(err, kerr.UnknownServerError),
+ errors.Is(err, kerr.TransactionAbortable):
return true
}
return false
@@ -408,6 +421,11 @@ retry:
willTryCommit = false
goto retry
+ case errors.Is(endTxnErr, kerr.TransactionAbortable):
+ s.cl.cfg.logger.Log(LogLevelInfo, "end transaction returned TransactionAbortable; retrying as abort")
+ willTryCommit = false
+ goto retry
+
case errors.Is(endTxnErr, kerr.UnknownServerError):
s.cl.cfg.logger.Log(LogLevelInfo, "end transaction with commit unknown server error; retrying")
after := time.NewTimer(s.cl.cfg.retryBackoff(tries))
@@ -517,7 +535,7 @@ const (
// Deprecated: Kafka 3.6 removed support for the hacky behavior that
// this option was abusing. Thus, as of Kafka 3.6, this option does not
// work against Kafka. This option also has never worked for Redpanda
- // becuse Redpanda always strictly validated that partitions were a
+ // because Redpanda always strictly validated that partitions were a
// part of a transaction. Later versions of Kafka and Redpanda will
// remove the need for AddPartitionsToTxn at all and thus this option
// ultimately will be unnecessary anyway.
@@ -820,8 +838,9 @@ func (cl *Client) UnsafeAbortBufferedRecords() {
//
// If the producer ID has an error and you are trying to commit, this will
// return with kerr.OperationNotAttempted. If this happened, retry
-// EndTransaction with TryAbort. Not other error is retryable, and you should
-// not retry with TryAbort.
+// EndTransaction with TryAbort. If this returns kerr.TransactionAbortable, you
+// can retry with TryAbort. No other error is retryable, and you should not
+// retry with TryAbort.
//
// If records failed with UnknownProducerID and your Kafka version is at least
// 2.5, then aborting here will potentially allow the client to recover for
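
A sketch of the commit path implied by the updated doc comment, assuming `cl` is a transactional client; only `OperationNotAttempted` and the new `TransactionAbortable` are retried as aborts:

```go
package main

import (
	"context"
	"errors"

	"github.com/twmb/franz-go/pkg/kerr"
	"github.com/twmb/franz-go/pkg/kgo"
)

// endTxn is illustrative: try to commit, and fall back to abort only
// for the errors the docs above say may be retried as an abort.
func endTxn(ctx context.Context, cl *kgo.Client) error {
	err := cl.EndTransaction(ctx, kgo.TryCommit)
	if errors.Is(err, kerr.OperationNotAttempted) ||
		errors.Is(err, kerr.TransactionAbortable) {
		return cl.EndTransaction(ctx, kgo.TryAbort)
	}
	return err
}

func main() { _ = endTxn }
```
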
diff --git a/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go b/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go
index 75bff9958e78e..584f6f37c12df 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kmsg/generated.go
@@ -13,7 +13,31 @@ import (
// MaxKey is the maximum key used for any messages in this package.
// Note that this value will change as Kafka adds more messages.
-const MaxKey = 68
+const MaxKey = 69
+
+type AssignmentTopicPartition struct {
+ TopicID [16]byte
+
+ Topic string
+
+ Partitions []int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to AssignmentTopicPartition.
+func (v *AssignmentTopicPartition) Default() {
+}
+
+// NewAssignmentTopicPartition returns a default AssignmentTopicPartition
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAssignmentTopicPartition() AssignmentTopicPartition {
+ var v AssignmentTopicPartition
+ v.Default()
+ return v
+}
// MessageV0 is the message format Kafka used prior to 0.10.
//
@@ -898,6 +922,8 @@ type GroupMetadataValueMember struct {
ClientHost string
// RebalanceTimeoutMillis is the rebalance timeout of this group member.
+ //
+ // This field has a default of -1.
RebalanceTimeoutMillis int32 // v1+
// SessionTimeoutMillis is the session timeout of this group member.
@@ -908,11 +934,15 @@ type GroupMetadataValueMember struct {
// Assignment is what the leader assigned this group member.
Assignment []byte
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
}
// Default sets any default fields. Calling this allows for future compatibility
// if new fields are added to GroupMetadataValueMember.
func (v *GroupMetadataValueMember) Default() {
+ v.RebalanceTimeoutMillis = -1
}
// NewGroupMetadataValueMember returns a default GroupMetadataValueMember
@@ -956,22 +986,33 @@ type GroupMetadataValue struct {
// CurrentStateTimestamp is the timestamp for this state of the group
// (stable, etc.).
+ //
+ // This field has a default of -1.
CurrentStateTimestamp int64 // v2+
// Members are the group members.
Members []GroupMetadataValueMember
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags // v4+
}
func (v *GroupMetadataValue) AppendTo(dst []byte) []byte {
version := v.Version
_ = version
+ isFlexible := version >= 4
+ _ = isFlexible
{
v := v.Version
dst = kbin.AppendInt16(dst, v)
}
{
v := v.ProtocolType
- dst = kbin.AppendString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
}
{
v := v.Generation
@@ -979,11 +1020,19 @@ func (v *GroupMetadataValue) AppendTo(dst []byte) []byte {
}
{
v := v.Protocol
- dst = kbin.AppendNullableString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
}
{
v := v.Leader
- dst = kbin.AppendNullableString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
}
if version >= 2 {
v := v.CurrentStateTimestamp
@@ -991,24 +1040,44 @@ func (v *GroupMetadataValue) AppendTo(dst []byte) []byte {
}
{
v := v.Members
- dst = kbin.AppendArrayLen(dst, len(v))
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
for i := range v {
v := &v[i]
{
v := v.MemberID
- dst = kbin.AppendString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
}
if version >= 3 {
v := v.InstanceID
- dst = kbin.AppendNullableString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
}
{
v := v.ClientID
- dst = kbin.AppendString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
}
{
v := v.ClientHost
- dst = kbin.AppendString(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
}
if version >= 1 {
v := v.RebalanceTimeoutMillis
@@ -1020,14 +1089,30 @@ func (v *GroupMetadataValue) AppendTo(dst []byte) []byte {
}
{
v := v.Subscription
- dst = kbin.AppendBytes(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
}
{
v := v.Assignment
- dst = kbin.AppendBytes(dst, v)
+ if isFlexible {
+ dst = kbin.AppendCompactBytes(dst, v)
+ } else {
+ dst = kbin.AppendBytes(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
}
}
}
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
return dst
}
@@ -1045,13 +1130,23 @@ func (v *GroupMetadataValue) readFrom(src []byte, unsafe bool) error {
v.Version = b.Int16()
version := v.Version
_ = version
+ isFlexible := version >= 4
+ _ = isFlexible
s := v
{
var v string
if unsafe {
- v = b.UnsafeString()
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
} else {
- v = b.String()
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
}
s.ProtocolType = v
}
@@ -1061,19 +1156,35 @@ func (v *GroupMetadataValue) readFrom(src []byte, unsafe bool) error {
}
{
var v *string
- if unsafe {
- v = b.UnsafeNullableString()
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
} else {
- v = b.NullableString()
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
}
s.Protocol = v
}
{
var v *string
- if unsafe {
- v = b.UnsafeNullableString()
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
} else {
- v = b.NullableString()
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
}
s.Leader = v
}
@@ -1085,7 +1196,11 @@ func (v *GroupMetadataValue) readFrom(src []byte, unsafe bool) error {
v := s.Members
a := v
var l int32
- l = b.ArrayLen()
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
if !b.Ok() {
return b.Complete()
}
@@ -1100,36 +1215,68 @@ func (v *GroupMetadataValue) readFrom(src []byte, unsafe bool) error {
{
var v string
if unsafe {
- v = b.UnsafeString()
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
} else {
- v = b.String()
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
}
s.MemberID = v
}
if version >= 3 {
var v *string
- if unsafe {
- v = b.UnsafeNullableString()
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
} else {
- v = b.NullableString()
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
}
s.InstanceID = v
}
{
var v string
if unsafe {
- v = b.UnsafeString()
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
} else {
- v = b.String()
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
}
s.ClientID = v
}
{
var v string
if unsafe {
- v = b.UnsafeString()
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
} else {
- v = b.String()
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
}
s.ClientHost = v
}
@@ -1142,23 +1289,41 @@ func (v *GroupMetadataValue) readFrom(src []byte, unsafe bool) error {
s.SessionTimeoutMillis = v
}
{
- v := b.Bytes()
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
s.Subscription = v
}
{
- v := b.Bytes()
+ var v []byte
+ if isFlexible {
+ v = b.CompactBytes()
+ } else {
+ v = b.Bytes()
+ }
s.Assignment = v
}
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
}
v = a
s.Members = v
}
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
return b.Complete()
}
+func (v *GroupMetadataValue) IsFlexible() bool { return v.Version >= 4 }
// Default sets any default fields. Calling this allows for future compatibility
// if new fields are added to GroupMetadataValue.
func (v *GroupMetadataValue) Default() {
+ v.CurrentStateTimestamp = -1
}
// NewGroupMetadataValue returns a default GroupMetadataValue
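
The v4 flexible change above swaps every string/bytes field to its compact encoding and appends tagged fields; the leading version prefix keeps round-trips self-describing. A sketch, assuming the usual generated `ReadFrom` wrapper exists for this type as it does for other kmsg values:

```go
package main

import (
	"fmt"

	"github.com/twmb/franz-go/pkg/kmsg"
)

func main() {
	v := kmsg.NewGroupMetadataValue()
	v.Version = 4 // v4+: compact strings/bytes plus tagged fields
	v.ProtocolType = "consumer"
	buf := v.AppendTo(nil)

	var rt kmsg.GroupMetadataValue
	if err := rt.ReadFrom(buf); err != nil { // version is read back from buf
		panic(err)
	}
	fmt.Println(rt.ProtocolType) // "consumer"
}
```
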
@@ -2783,7 +2948,7 @@ type ProduceRequest struct {
}
func (*ProduceRequest) Key() int16 { return 0 }
-func (*ProduceRequest) MaxVersion() int16 { return 10 }
+func (*ProduceRequest) MaxVersion() int16 { return 11 }
func (v *ProduceRequest) SetVersion(version int16) { v.Version = version }
func (v *ProduceRequest) GetVersion() int16 { return v.Version }
func (v *ProduceRequest) IsFlexible() bool { return v.Version >= 9 }
@@ -3306,7 +3471,7 @@ type ProduceResponse struct {
}
func (*ProduceResponse) Key() int16 { return 0 }
-func (*ProduceResponse) MaxVersion() int16 { return 10 }
+func (*ProduceResponse) MaxVersion() int16 { return 11 }
func (v *ProduceResponse) SetVersion(version int16) { v.Version = version }
func (v *ProduceResponse) GetVersion() int16 { return v.Version }
func (v *ProduceResponse) IsFlexible() bool { return v.Version >= 9 }
@@ -12493,7 +12658,7 @@ type FindCoordinatorRequest struct {
}
func (*FindCoordinatorRequest) Key() int16 { return 10 }
-func (*FindCoordinatorRequest) MaxVersion() int16 { return 4 }
+func (*FindCoordinatorRequest) MaxVersion() int16 { return 5 }
func (v *FindCoordinatorRequest) SetVersion(version int16) { v.Version = version }
func (v *FindCoordinatorRequest) GetVersion() int16 { return v.Version }
func (v *FindCoordinatorRequest) IsFlexible() bool { return v.Version >= 3 }
@@ -12733,7 +12898,7 @@ type FindCoordinatorResponse struct {
}
func (*FindCoordinatorResponse) Key() int16 { return 10 }
-func (*FindCoordinatorResponse) MaxVersion() int16 { return 4 }
+func (*FindCoordinatorResponse) MaxVersion() int16 { return 5 }
func (v *FindCoordinatorResponse) SetVersion(version int16) { v.Version = version }
func (v *FindCoordinatorResponse) GetVersion() int16 { return v.Version }
func (v *FindCoordinatorResponse) IsFlexible() bool { return v.Version >= 3 }
@@ -15960,12 +16125,16 @@ type ListGroupsRequest struct {
// "Dead", or "Empty". If empty, all groups are returned.
StatesFilter []string // v4+
+ // TypesFilter, part of KIP-848, filters the types of groups we want
+ // to list. If empty, all groups are returned.
+ TypesFilter []string // v5+
+
// UnknownTags are tags Kafka sent that we do not know the purpose of.
UnknownTags Tags // v3+
}
func (*ListGroupsRequest) Key() int16 { return 16 }
-func (*ListGroupsRequest) MaxVersion() int16 { return 4 }
+func (*ListGroupsRequest) MaxVersion() int16 { return 5 }
func (v *ListGroupsRequest) SetVersion(version int16) { v.Version = version }
func (v *ListGroupsRequest) GetVersion() int16 { return v.Version }
func (v *ListGroupsRequest) IsFlexible() bool { return v.Version >= 3 }
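
A sketch of filtering by the new v5 field; the "consumer" type name is an assumption from KIP-848, not taken from this diff:

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
	"github.com/twmb/franz-go/pkg/kmsg"
)

// listConsumerGroups filters by the new v5 group-type field.
func listConsumerGroups(ctx context.Context, cl *kgo.Client) (*kmsg.ListGroupsResponse, error) {
	req := kmsg.NewPtrListGroupsRequest()
	req.StatesFilter = []string{"Stable"}  // v4+
	req.TypesFilter = []string{"consumer"} // v5+, assumed KIP-848 type name
	return req.RequestWith(ctx, cl)
}

func main() { _ = listConsumerGroups }
```
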
@@ -16005,6 +16174,22 @@ func (v *ListGroupsRequest) AppendTo(dst []byte) []byte {
}
}
}
+ if version >= 5 {
+ v := v.TypesFilter
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
if isFlexible {
dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
dst = v.UnknownTags.AppendEach(dst)
@@ -16064,6 +16249,42 @@ func (v *ListGroupsRequest) readFrom(src []byte, unsafe bool) error {
v = a
s.StatesFilter = v
}
+ if version >= 5 {
+ v := s.TypesFilter
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.TypesFilter = v
+ }
if isFlexible {
s.UnknownTags = internalReadTags(&b)
}
@@ -16101,6 +16322,9 @@ type ListGroupsResponseGroup struct {
// The group state.
GroupState string // v4+
+ // The group type.
+ GroupType string // v5+
+
// UnknownTags are tags Kafka sent that we do not know the purpose of.
UnknownTags Tags // v3+
}
@@ -16146,7 +16370,7 @@ type ListGroupsResponse struct {
}
func (*ListGroupsResponse) Key() int16 { return 16 }
-func (*ListGroupsResponse) MaxVersion() int16 { return 4 }
+func (*ListGroupsResponse) MaxVersion() int16 { return 5 }
func (v *ListGroupsResponse) SetVersion(version int16) { v.Version = version }
func (v *ListGroupsResponse) GetVersion() int16 { return v.Version }
func (v *ListGroupsResponse) IsFlexible() bool { return v.Version >= 3 }
@@ -16200,6 +16424,14 @@ func (v *ListGroupsResponse) AppendTo(dst []byte) []byte {
dst = kbin.AppendString(dst, v)
}
}
+ if version >= 5 {
+ v := v.GroupType
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
if isFlexible {
dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
dst = v.UnknownTags.AppendEach(dst)
@@ -16308,6 +16540,23 @@ func (v *ListGroupsResponse) readFrom(src []byte, unsafe bool) error {
}
s.GroupState = v
}
+ if version >= 5 {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.GroupType = v
+ }
if isFlexible {
s.UnknownTags = internalReadTags(&b)
}
@@ -19348,7 +19597,7 @@ type InitProducerIDRequest struct {
}
func (*InitProducerIDRequest) Key() int16 { return 22 }
-func (*InitProducerIDRequest) MaxVersion() int16 { return 4 }
+func (*InitProducerIDRequest) MaxVersion() int16 { return 5 }
func (v *InitProducerIDRequest) SetVersion(version int16) { v.Version = version }
func (v *InitProducerIDRequest) GetVersion() int16 { return v.Version }
func (v *InitProducerIDRequest) IsFlexible() bool { return v.Version >= 2 }
@@ -19523,7 +19772,7 @@ type InitProducerIDResponse struct {
}
func (*InitProducerIDResponse) Key() int16 { return 22 }
-func (*InitProducerIDResponse) MaxVersion() int16 { return 4 }
+func (*InitProducerIDResponse) MaxVersion() int16 { return 5 }
func (v *InitProducerIDResponse) SetVersion(version int16) { v.Version = version }
func (v *InitProducerIDResponse) GetVersion() int16 { return v.Version }
func (v *InitProducerIDResponse) IsFlexible() bool { return v.Version >= 2 }
@@ -20367,7 +20616,7 @@ type AddPartitionsToTxnRequest struct {
}
func (*AddPartitionsToTxnRequest) Key() int16 { return 24 }
-func (*AddPartitionsToTxnRequest) MaxVersion() int16 { return 4 }
+func (*AddPartitionsToTxnRequest) MaxVersion() int16 { return 5 }
func (v *AddPartitionsToTxnRequest) SetVersion(version int16) { v.Version = version }
func (v *AddPartitionsToTxnRequest) GetVersion() int16 { return v.Version }
func (v *AddPartitionsToTxnRequest) IsFlexible() bool { return v.Version >= 3 }
@@ -20953,7 +21202,7 @@ type AddPartitionsToTxnResponse struct {
}
func (*AddPartitionsToTxnResponse) Key() int16 { return 24 }
-func (*AddPartitionsToTxnResponse) MaxVersion() int16 { return 4 }
+func (*AddPartitionsToTxnResponse) MaxVersion() int16 { return 5 }
func (v *AddPartitionsToTxnResponse) SetVersion(version int16) { v.Version = version }
func (v *AddPartitionsToTxnResponse) GetVersion() int16 { return v.Version }
func (v *AddPartitionsToTxnResponse) IsFlexible() bool { return v.Version >= 3 }
@@ -21388,7 +21637,7 @@ type AddOffsetsToTxnRequest struct {
}
func (*AddOffsetsToTxnRequest) Key() int16 { return 25 }
-func (*AddOffsetsToTxnRequest) MaxVersion() int16 { return 3 }
+func (*AddOffsetsToTxnRequest) MaxVersion() int16 { return 4 }
func (v *AddOffsetsToTxnRequest) SetVersion(version int16) { v.Version = version }
func (v *AddOffsetsToTxnRequest) GetVersion() int16 { return v.Version }
func (v *AddOffsetsToTxnRequest) IsFlexible() bool { return v.Version >= 3 }
@@ -21559,7 +21808,7 @@ type AddOffsetsToTxnResponse struct {
}
func (*AddOffsetsToTxnResponse) Key() int16 { return 25 }
-func (*AddOffsetsToTxnResponse) MaxVersion() int16 { return 3 }
+func (*AddOffsetsToTxnResponse) MaxVersion() int16 { return 4 }
func (v *AddOffsetsToTxnResponse) SetVersion(version int16) { v.Version = version }
func (v *AddOffsetsToTxnResponse) GetVersion() int16 { return v.Version }
func (v *AddOffsetsToTxnResponse) IsFlexible() bool { return v.Version >= 3 }
@@ -21668,7 +21917,7 @@ type EndTxnRequest struct {
}
func (*EndTxnRequest) Key() int16 { return 26 }
-func (*EndTxnRequest) MaxVersion() int16 { return 3 }
+func (*EndTxnRequest) MaxVersion() int16 { return 4 }
func (v *EndTxnRequest) SetVersion(version int16) { v.Version = version }
func (v *EndTxnRequest) GetVersion() int16 { return v.Version }
func (v *EndTxnRequest) IsFlexible() bool { return v.Version >= 3 }
@@ -21832,7 +22081,7 @@ type EndTxnResponse struct {
}
func (*EndTxnResponse) Key() int16 { return 26 }
-func (*EndTxnResponse) MaxVersion() int16 { return 3 }
+func (*EndTxnResponse) MaxVersion() int16 { return 4 }
func (v *EndTxnResponse) SetVersion(version int16) { v.Version = version }
func (v *EndTxnResponse) GetVersion() int16 { return v.Version }
func (v *EndTxnResponse) IsFlexible() bool { return v.Version >= 3 }
@@ -22678,7 +22927,7 @@ type TxnOffsetCommitRequest struct {
}
func (*TxnOffsetCommitRequest) Key() int16 { return 28 }
-func (*TxnOffsetCommitRequest) MaxVersion() int16 { return 3 }
+func (*TxnOffsetCommitRequest) MaxVersion() int16 { return 4 }
func (v *TxnOffsetCommitRequest) SetVersion(version int16) { v.Version = version }
func (v *TxnOffsetCommitRequest) GetVersion() int16 { return v.Version }
func (v *TxnOffsetCommitRequest) IsFlexible() bool { return v.Version >= 3 }
@@ -23143,7 +23392,7 @@ type TxnOffsetCommitResponse struct {
}
func (*TxnOffsetCommitResponse) Key() int16 { return 28 }
-func (*TxnOffsetCommitResponse) MaxVersion() int16 { return 3 }
+func (*TxnOffsetCommitResponse) MaxVersion() int16 { return 4 }
func (v *TxnOffsetCommitResponse) SetVersion(version int16) { v.Version = version }
func (v *TxnOffsetCommitResponse) GetVersion() int16 { return v.Version }
func (v *TxnOffsetCommitResponse) IsFlexible() bool { return v.Version >= 3 }
@@ -43847,9 +44096,16 @@ type ListTransactionsRequest struct {
// The producer IDs to filter by: if empty, all transactions will be
// returned; if non-empty, only transactions which match one of the filtered
- // producer IDs will be returned
+ // producer IDs will be returned.
ProducerIDFilters []int64
+ // Duration (in millis) to filter by: if < 0, all transactions will be
+ // returned; otherwise, only transactions running longer than this duration
+ // will be returned.
+ //
+ // This field has a default of -1.
+ DurationFilterMillis int64 // v1+
+
// UnknownTags are tags Kafka sent that we do not know the purpose of.
UnknownTags Tags
}
@@ -43907,6 +44163,10 @@ func (v *ListTransactionsRequest) AppendTo(dst []byte) []byte {
dst = kbin.AppendInt64(dst, v)
}
}
+ if version >= 1 {
+ v := v.DurationFilterMillis
+ dst = kbin.AppendInt64(dst, v)
+ }
if isFlexible {
dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
dst = v.UnknownTags.AppendEach(dst)
@@ -43989,6 +44249,10 @@ func (v *ListTransactionsRequest) readFrom(src []byte, unsafe bool) error {
v = a
s.ProducerIDFilters = v
}
+ if version >= 1 {
+ v := b.Int64()
+ s.DurationFilterMillis = v
+ }
if isFlexible {
s.UnknownTags = internalReadTags(&b)
}
@@ -44006,6 +44270,7 @@ func NewPtrListTransactionsRequest() *ListTransactionsRequest {
// Default sets any default fields. Calling this allows for future compatibility
// if new fields are added to ListTransactionsRequest.
func (v *ListTransactionsRequest) Default() {
+ v.DurationFilterMillis = -1
}
// NewListTransactionsRequest returns a default ListTransactionsRequest
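
A sketch of the new v1 duration filter; the one-minute threshold is illustrative:

```go
package main

import (
	"context"

	"github.com/twmb/franz-go/pkg/kgo"
	"github.com/twmb/franz-go/pkg/kmsg"
)

// listLongTxns returns only transactions that have been running longer
// than the given duration filter (v1+; -1, the default, disables it).
func listLongTxns(ctx context.Context, cl *kgo.Client) (*kmsg.ListTransactionsResponse, error) {
	req := kmsg.NewPtrListTransactionsRequest()
	req.DurationFilterMillis = 60_000 // illustrative: longer than one minute
	return req.RequestWith(ctx, cl)
}

func main() { _ = listLongTxns }
```
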
@@ -45327,6 +45592,1063 @@ func NewConsumerGroupHeartbeatResponse() ConsumerGroupHeartbeatResponse {
return v
}
+// Assignment contains consumer group assignments.
+type Assignment struct {
+ // The topics & partitions assigned to the member.
+ TopicPartitions []AssignmentTopicPartition
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to Assignment.
+func (v *Assignment) Default() {
+}
+
+// NewAssignment returns a default Assignment
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewAssignment() Assignment {
+ var v Assignment
+ v.Default()
+ return v
+}
+
+// ConsumerGroupDescribe is a part of KIP-848; this is the
+// "next generation" equivalent of DescribeGroups.
+type ConsumerGroupDescribeRequest struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // The IDs of the groups to describe.
+ Groups []string
+
+ // Whether to include authorized operations.
+ IncludeAuthorizedOperations bool
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ConsumerGroupDescribeRequest) Key() int16 { return 69 }
+func (*ConsumerGroupDescribeRequest) MaxVersion() int16 { return 0 }
+func (v *ConsumerGroupDescribeRequest) SetVersion(version int16) { v.Version = version }
+func (v *ConsumerGroupDescribeRequest) GetVersion() int16 { return v.Version }
+func (v *ConsumerGroupDescribeRequest) IsFlexible() bool { return v.Version >= 0 }
+func (v *ConsumerGroupDescribeRequest) ResponseKind() Response {
+ r := &ConsumerGroupDescribeResponse{Version: v.Version}
+ r.Default()
+ return r
+}
+
+// RequestWith is requests v on r and returns the response or an error.
+// For sharded requests, the response may be merged and still return an error.
+// It is better to rely on client.RequestSharded than to rely on proper merging behavior.
+func (v *ConsumerGroupDescribeRequest) RequestWith(ctx context.Context, r Requestor) (*ConsumerGroupDescribeResponse, error) {
+ kresp, err := r.Request(ctx, v)
+ resp, _ := kresp.(*ConsumerGroupDescribeResponse)
+ return resp, err
+}
+
+func (v *ConsumerGroupDescribeRequest) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ {
+ v := v.IncludeAuthorizedOperations
+ dst = kbin.AppendBool(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ConsumerGroupDescribeRequest) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConsumerGroupDescribeRequest) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConsumerGroupDescribeRequest) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.Groups = v
+ }
+ {
+ v := b.Bool()
+ s.IncludeAuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrConsumerGroupDescribeRequest returns a pointer to a default ConsumerGroupDescribeRequest
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrConsumerGroupDescribeRequest() *ConsumerGroupDescribeRequest {
+ var v ConsumerGroupDescribeRequest
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupDescribeRequest.
+func (v *ConsumerGroupDescribeRequest) Default() {
+}
+
+// NewConsumerGroupDescribeRequest returns a default ConsumerGroupDescribeRequest
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupDescribeRequest() ConsumerGroupDescribeRequest {
+ var v ConsumerGroupDescribeRequest
+ v.Default()
+ return v
+}
+
+type ConsumerGroupDescribeResponseGroupMember struct {
+ // The member ID.
+ MemberID string
+
+ // The member instance ID, if any.
+ InstanceID *string
+
+ // The member rack ID, if any.
+ RackID *string
+
+ // The current member epoch.
+ MemberEpoch int32
+
+ // The client ID.
+ ClientID string
+
+ // The client host.
+ ClientHost string
+
+ // The subscribed topic names.
+ SubscribedTopics []string
+
+ // The subscribed topic regex, if any.
+ SubscribedTopicRegex *string
+
+ // The current assignment.
+ Assignment Assignment
+
+ // The target assignment.
+ TargetAssignment Assignment
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupDescribeResponseGroupMember.
+func (v *ConsumerGroupDescribeResponseGroupMember) Default() {
+ {
+ v := &v.Assignment
+ _ = v
+ }
+ {
+ v := &v.TargetAssignment
+ _ = v
+ }
+}
+
+// NewConsumerGroupDescribeResponseGroupMember returns a default ConsumerGroupDescribeResponseGroupMember
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupDescribeResponseGroupMember() ConsumerGroupDescribeResponseGroupMember {
+ var v ConsumerGroupDescribeResponseGroupMember
+ v.Default()
+ return v
+}
+
+type ConsumerGroupDescribeResponseGroup struct {
+ // ErrorCode is the error for this response.
+ //
+ // Supported errors:
+ // - GROUP_AUTHORIZATION_FAILED (version 0+)
+ // - NOT_COORDINATOR (version 0+)
+ // - COORDINATOR_NOT_AVAILABLE (version 0+)
+ // - COORDINATOR_LOAD_IN_PROGRESS (version 0+)
+ // - INVALID_REQUEST (version 0+)
+ // - INVALID_GROUP_ID (version 0+)
+ // - GROUP_ID_NOT_FOUND (version 0+)
+ ErrorCode int16
+
+ // A supplementary message if this errored.
+ ErrorMessage *string
+
+ // The group ID.
+ Group string
+
+ // The group state.
+ State string
+
+ // The group epoch.
+ Epoch int32
+
+ // The assignment epoch.
+ AssignmentEpoch int32
+
+ // The selected assignor.
+ AssignorName string
+
+ // Members of the group.
+ Members []ConsumerGroupDescribeResponseGroupMember
+
+ // 32 bit bitfield representing authorized operations for the group.
+ //
+ // This field has a default of -2147483648.
+ AuthorizedOperations int32
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupDescribeResponseGroup.
+func (v *ConsumerGroupDescribeResponseGroup) Default() {
+ v.AuthorizedOperations = -2147483648
+}
+
+// NewConsumerGroupDescribeResponseGroup returns a default ConsumerGroupDescribeResponseGroup
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupDescribeResponseGroup() ConsumerGroupDescribeResponseGroup {
+ var v ConsumerGroupDescribeResponseGroup
+ v.Default()
+ return v
+}
+
+// ConsumerGroupDescribeResponse is returned from a ConsumerGroupDescribeRequest.
+type ConsumerGroupDescribeResponse struct {
+ // Version is the version of this message used with a Kafka broker.
+ Version int16
+
+ // ThrottleMillis is how long of a throttle Kafka will apply to the client
+ // after responding to this request.
+ ThrottleMillis int32
+
+ Groups []ConsumerGroupDescribeResponseGroup
+
+ // UnknownTags are tags Kafka sent that we do not know the purpose of.
+ UnknownTags Tags
+}
+
+func (*ConsumerGroupDescribeResponse) Key() int16 { return 69 }
+func (*ConsumerGroupDescribeResponse) MaxVersion() int16 { return 0 }
+func (v *ConsumerGroupDescribeResponse) SetVersion(version int16) { v.Version = version }
+func (v *ConsumerGroupDescribeResponse) GetVersion() int16 { return v.Version }
+func (v *ConsumerGroupDescribeResponse) IsFlexible() bool { return v.Version >= 0 }
+func (v *ConsumerGroupDescribeResponse) Throttle() (int32, bool) {
+ return v.ThrottleMillis, v.Version >= 0
+}
+
+func (v *ConsumerGroupDescribeResponse) SetThrottle(throttleMillis int32) {
+ v.ThrottleMillis = throttleMillis
+}
+
+func (v *ConsumerGroupDescribeResponse) RequestKind() Request {
+ return &ConsumerGroupDescribeRequest{Version: v.Version}
+}
+
+func (v *ConsumerGroupDescribeResponse) AppendTo(dst []byte) []byte {
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ {
+ v := v.ThrottleMillis
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.Groups
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.ErrorCode
+ dst = kbin.AppendInt16(dst, v)
+ }
+ {
+ v := v.ErrorMessage
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.Group
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.State
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Epoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.AssignmentEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.AssignorName
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Members
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.MemberID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.InstanceID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.RackID
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := v.MemberEpoch
+ dst = kbin.AppendInt32(dst, v)
+ }
+ {
+ v := v.ClientID
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.ClientHost
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.SubscribedTopics
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ }
+ {
+ v := v.SubscribedTopicRegex
+ if isFlexible {
+ dst = kbin.AppendCompactNullableString(dst, v)
+ } else {
+ dst = kbin.AppendNullableString(dst, v)
+ }
+ }
+ {
+ v := &v.Assignment
+ {
+ v := v.TopicPartitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ {
+ v := &v.TargetAssignment
+ {
+ v := v.TopicPartitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := &v[i]
+ {
+ v := v.TopicID
+ dst = kbin.AppendUuid(dst, v)
+ }
+ {
+ v := v.Topic
+ if isFlexible {
+ dst = kbin.AppendCompactString(dst, v)
+ } else {
+ dst = kbin.AppendString(dst, v)
+ }
+ }
+ {
+ v := v.Partitions
+ if isFlexible {
+ dst = kbin.AppendCompactArrayLen(dst, len(v))
+ } else {
+ dst = kbin.AppendArrayLen(dst, len(v))
+ }
+ for i := range v {
+ v := v[i]
+ dst = kbin.AppendInt32(dst, v)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ {
+ v := v.AuthorizedOperations
+ dst = kbin.AppendInt32(dst, v)
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ }
+ }
+ if isFlexible {
+ dst = kbin.AppendUvarint(dst, 0+uint32(v.UnknownTags.Len()))
+ dst = v.UnknownTags.AppendEach(dst)
+ }
+ return dst
+}
+
+func (v *ConsumerGroupDescribeResponse) ReadFrom(src []byte) error {
+ return v.readFrom(src, false)
+}
+
+func (v *ConsumerGroupDescribeResponse) UnsafeReadFrom(src []byte) error {
+ return v.readFrom(src, true)
+}
+
+func (v *ConsumerGroupDescribeResponse) readFrom(src []byte, unsafe bool) error {
+ v.Default()
+ b := kbin.Reader{Src: src}
+ version := v.Version
+ _ = version
+ isFlexible := version >= 0
+ _ = isFlexible
+ s := v
+ {
+ v := b.Int32()
+ s.ThrottleMillis = v
+ }
+ {
+ v := s.Groups
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConsumerGroupDescribeResponseGroup, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Int16()
+ s.ErrorCode = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.ErrorMessage = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Group = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.State = v
+ }
+ {
+ v := b.Int32()
+ s.Epoch = v
+ }
+ {
+ v := b.Int32()
+ s.AssignmentEpoch = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.AssignorName = v
+ }
+ {
+ v := s.Members
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]ConsumerGroupDescribeResponseGroupMember, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.MemberID = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.InstanceID = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.RackID = v
+ }
+ {
+ v := b.Int32()
+ s.MemberEpoch = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClientID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.ClientHost = v
+ }
+ {
+ v := s.SubscribedTopics
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]string, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ a[i] = v
+ }
+ v = a
+ s.SubscribedTopics = v
+ }
+ {
+ var v *string
+ if isFlexible {
+ if unsafe {
+ v = b.UnsafeCompactNullableString()
+ } else {
+ v = b.CompactNullableString()
+ }
+ } else {
+ if unsafe {
+ v = b.UnsafeNullableString()
+ } else {
+ v = b.NullableString()
+ }
+ }
+ s.SubscribedTopicRegex = v
+ }
+ {
+ v := &s.Assignment
+ v.Default()
+ s := v
+ {
+ v := s.TopicPartitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AssignmentTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TopicPartitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ {
+ v := &s.TargetAssignment
+ v.Default()
+ s := v
+ {
+ v := s.TopicPartitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]AssignmentTopicPartition, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := &a[i]
+ v.Default()
+ s := v
+ {
+ v := b.Uuid()
+ s.TopicID = v
+ }
+ {
+ var v string
+ if unsafe {
+ if isFlexible {
+ v = b.UnsafeCompactString()
+ } else {
+ v = b.UnsafeString()
+ }
+ } else {
+ if isFlexible {
+ v = b.CompactString()
+ } else {
+ v = b.String()
+ }
+ }
+ s.Topic = v
+ }
+ {
+ v := s.Partitions
+ a := v
+ var l int32
+ if isFlexible {
+ l = b.CompactArrayLen()
+ } else {
+ l = b.ArrayLen()
+ }
+ if !b.Ok() {
+ return b.Complete()
+ }
+ a = a[:0]
+ if l > 0 {
+ a = append(a, make([]int32, l)...)
+ }
+ for i := int32(0); i < l; i++ {
+ v := b.Int32()
+ a[i] = v
+ }
+ v = a
+ s.Partitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.TopicPartitions = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Members = v
+ }
+ {
+ v := b.Int32()
+ s.AuthorizedOperations = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ }
+ v = a
+ s.Groups = v
+ }
+ if isFlexible {
+ s.UnknownTags = internalReadTags(&b)
+ }
+ return b.Complete()
+}
+
+// NewPtrConsumerGroupDescribeResponse returns a pointer to a default ConsumerGroupDescribeResponse
+// This is a shortcut for creating a new(struct) and calling Default yourself.
+func NewPtrConsumerGroupDescribeResponse() *ConsumerGroupDescribeResponse {
+ var v ConsumerGroupDescribeResponse
+ v.Default()
+ return &v
+}
+
+// Default sets any default fields. Calling this allows for future compatibility
+// if new fields are added to ConsumerGroupDescribeResponse.
+func (v *ConsumerGroupDescribeResponse) Default() {
+}
+
+// NewConsumerGroupDescribeResponse returns a default ConsumerGroupDescribeResponse
+// This is a shortcut for creating a struct and calling Default yourself.
+func NewConsumerGroupDescribeResponse() ConsumerGroupDescribeResponse {
+ var v ConsumerGroupDescribeResponse
+ v.Default()
+ return v
+}
+
// RequestForKey returns the request corresponding to the given request key
// or nil if the key is unknown.
func RequestForKey(key int16) Request {
@@ -45471,6 +46793,8 @@ func RequestForKey(key int16) Request {
return NewPtrAllocateProducerIDsRequest()
case 68:
return NewPtrConsumerGroupHeartbeatRequest()
+ case 69:
+ return NewPtrConsumerGroupDescribeRequest()
}
}
@@ -45618,6 +46942,8 @@ func ResponseForKey(key int16) Response {
return NewPtrAllocateProducerIDsResponse()
case 68:
return NewPtrConsumerGroupHeartbeatResponse()
+ case 69:
+ return NewPtrConsumerGroupDescribeResponse()
}
}
@@ -45765,6 +47091,8 @@ func NameForKey(key int16) string {
return "AllocateProducerIDs"
case 68:
return "ConsumerGroupHeartbeat"
+ case 69:
+ return "ConsumerGroupDescribe"
}
}
@@ -45841,6 +47169,7 @@ const (
ListTransactions Key = 66
AllocateProducerIDs Key = 67
ConsumerGroupHeartbeat Key = 68
+ ConsumerGroupDescribe Key = 69
)
// Name returns the name for this key.
diff --git a/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go b/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go
index 3081c346f5aae..2267b91929826 100644
--- a/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go
+++ b/vendor/github.com/twmb/franz-go/pkg/kversion/kversion.go
@@ -67,6 +67,7 @@ var versions = []struct {
{"v3.5", V3_5_0()},
{"v3.6", V3_6_0()},
{"v3.7", V3_7_0()},
+ {"v3.8", V3_8_0()},
}
// VersionStrings returns all recognized versions, minus any patch, that can be
@@ -333,7 +334,7 @@ func (vs *Versions) versionGuess(opts ...VersionGuessOpt) guess {
//
// TODO: add introduced-version to differentiate some specific
// keys.
- skipKeys: []int16{4, 5, 6, 7, 27, 52, 53, 54, 55, 56, 57, 58, 59, 62, 63, 64, 67},
+ skipKeys: []int16{4, 5, 6, 7, 27, 52, 53, 54, 55, 56, 57, 58, 59, 62, 63, 64, 67, 74, 75},
}
for _, opt := range opts {
opt.apply(&cfg)
@@ -378,6 +379,7 @@ func (vs *Versions) versionGuess(opts ...VersionGuessOpt) guess {
{max350, "v3.5"},
{max360, "v3.6"},
{max370, "v3.7"},
+ {max380, "v3.8"},
} {
for k, v := range comparison.cmp.filter(cfg.listener) {
if v == -1 {
@@ -520,6 +522,7 @@ func V3_4_0() *Versions { return zkBrokerOf(max340) }
func V3_5_0() *Versions { return zkBrokerOf(max350) }
func V3_6_0() *Versions { return zkBrokerOf(max360) }
func V3_7_0() *Versions { return zkBrokerOf(max370) }
+func V3_8_0() *Versions { return zkBrokerOf(max380) }
func zkBrokerOf(lks listenerKeys) *Versions {
return &Versions{lks.filter(zkBroker)}
@@ -1158,8 +1161,37 @@ var max370 = nextMax(max360, func(v listenerKeys) listenerKeys {
return v
})
+var max380 = nextMax(max370, func(v listenerKeys) listenerKeys {
+ // KAFKA-16314 2e8d69b78ca52196decd851c8520798aa856c073 KIP-890
+ // Then error rename in cf1ba099c0723f9cf65dda4cd334d36b7ede6327
+ v[0].inc() // 11 produce
+ v[10].inc() // 5 find coordinator
+ v[22].inc() // 5 init producer id
+ v[24].inc() // 5 add partitions to txn
+ v[25].inc() // 4 add offsets to txn
+ v[26].inc() // 4 end txn
+ v[28].inc() // 4 txn offset commit
+
+ // KAFKA-15460 68745ef21a9d8fe0f37a8c5fbc7761a598718d46 KIP-848
+ v[16].inc() // 5 list groups
+
+ // KAFKA-14509 90e646052a17e3f6ec1a013d76c1e6af2fbb756e KIP-848 added
+ // 7b0352f1bd9b923b79e60b18b40f570d4bfafcc0
+ // b7c99e22a77392d6053fe231209e1de32b50a98b
+ // 68389c244e720566aaa8443cd3fc0b9d2ec4bb7a
+ // 5f410ceb04878ca44d2d007655155b5303a47907 stabilized
+ v = append(v,
+ k(zkBroker, rBroker), // 69 consumer group describe
+ )
+
+ // KAFKA-16265 b4e96913cc6c827968e47a31261e0bd8fdf677b5 KIP-994 (part 1)
+ v[66].inc()
+
+ return v
+})
+
var (
- maxStable = max370
+ maxStable = max380
maxTip = nextMax(maxStable, func(v listenerKeys) listenerKeys {
return v
})
diff --git a/vendor/modules.txt b/vendor/modules.txt
index fdd9d67229219..afc150678af4f 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1601,7 +1601,7 @@ github.com/tklauser/go-sysconf
# github.com/tklauser/numcpus v0.6.1
## explicit; go 1.13
github.com/tklauser/numcpus
-# github.com/twmb/franz-go v1.17.1
+# github.com/twmb/franz-go v1.18.0
## explicit; go 1.21
github.com/twmb/franz-go/pkg/kbin
github.com/twmb/franz-go/pkg/kerr
@@ -1610,14 +1610,14 @@ github.com/twmb/franz-go/pkg/kgo/internal/sticky
github.com/twmb/franz-go/pkg/kversion
github.com/twmb/franz-go/pkg/sasl
github.com/twmb/franz-go/pkg/sasl/plain
-# github.com/twmb/franz-go/pkg/kadm v1.13.0
+# github.com/twmb/franz-go/pkg/kadm v1.14.0
## explicit; go 1.21
github.com/twmb/franz-go/pkg/kadm
# github.com/twmb/franz-go/pkg/kfake v0.0.0-20241015013301-cea7aa5d8037
## explicit; go 1.21
github.com/twmb/franz-go/pkg/kfake
-# github.com/twmb/franz-go/pkg/kmsg v1.8.0
-## explicit; go 1.19
+# github.com/twmb/franz-go/pkg/kmsg v1.9.0
+## explicit; go 1.21
github.com/twmb/franz-go/pkg/kmsg
github.com/twmb/franz-go/pkg/kmsg/internal/kbin
# github.com/twmb/franz-go/plugin/kotel v1.5.0
|
fix
|
update module github.com/twmb/franz-go/pkg/kadm to v1.14.0 (#14911)
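This bump teaches franz-go's kversion tables about Kafka v3.8 and adds the ConsumerGroupDescribe request (key 69). A minimal sketch of pinning a client to that API surface, assuming a placeholder local broker:

```go
package main

import (
	"fmt"
	"log"

	"github.com/twmb/franz-go/pkg/kgo"
	"github.com/twmb/franz-go/pkg/kversion"
)

func main() {
	client, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker address
		// Cap request-version negotiation at the Kafka v3.8 surface added
		// in this bump, which includes ConsumerGroupDescribe (key 69).
		kgo.MaxVersions(kversion.V3_8_0()),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	fmt.Println("client will not negotiate request versions beyond v3.8")
}
```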
|
1bb535a730856bc992493c94991765ada682e4bd
|
2022-08-30 13:44:03
|
Gerard Vanloo
|
operator: Fixing logcli pod image value for operator addons (#6997)
| false
|
diff --git a/operator/hack/addons_dev.yaml b/operator/hack/addons_dev.yaml
index 3897e5a24e2b0..11f92bf7d6f66 100644
--- a/operator/hack/addons_dev.yaml
+++ b/operator/hack/addons_dev.yaml
@@ -29,7 +29,7 @@ spec:
spec:
containers:
- name: logcli
- image: docker.io/grafana/logcli:2.6.1
+ image: docker.io/grafana/logcli:2.6.1-amd64
imagePullPolicy: IfNotPresent
command:
- /bin/sh
diff --git a/operator/hack/addons_ocp.yaml b/operator/hack/addons_ocp.yaml
index c13eadfb57e75..8f000d3c19ab1 100644
--- a/operator/hack/addons_ocp.yaml
+++ b/operator/hack/addons_ocp.yaml
@@ -29,7 +29,7 @@ spec:
spec:
containers:
- name: logcli
- image: docker.io/grafana/logcli:2.6.1
+ image: docker.io/grafana/logcli:2.6.1-amd64
imagePullPolicy: IfNotPresent
command:
- /bin/sh
|
operator
|
Fixing logcli pod image value for operator addons (#6997)
|
fd0389f153ca560d2e6653542e91dec4c369f622
|
2025-03-19 23:14:58
|
renovate[bot]
|
fix(deps): update module github.com/docker/docker to v28.0.2+incompatible (main) (#16837)
| false
|
diff --git a/go.mod b/go.mod
index d3515905d957f..93fe0c409176b 100644
--- a/go.mod
+++ b/go.mod
@@ -29,7 +29,7 @@ require (
github.com/coreos/go-systemd v0.0.0-20191104093116-d3cd4ed1dbcf
github.com/cristalhq/hedgedhttp v0.9.1
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc
- github.com/docker/docker v28.0.1+incompatible
+ github.com/docker/docker v28.0.2+incompatible
github.com/docker/go-plugins-helpers v0.0.0-20240701071450-45e2431495c8
github.com/drone/envsubst v1.0.3
github.com/dustin/go-humanize v1.0.1
diff --git a/go.sum b/go.sum
index d65db9860d3bd..5e351e32c0515 100644
--- a/go.sum
+++ b/go.sum
@@ -350,8 +350,8 @@ github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5Qvfr
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/dlclark/regexp2 v1.11.4 h1:rPYF9/LECdNymJufQKmri9gV604RvvABwgOA8un7yAo=
github.com/dlclark/regexp2 v1.11.4/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
-github.com/docker/docker v28.0.1+incompatible h1:FCHjSRdXhNRFjlHMTv4jUNlIBbTeRjrWfeFuJp7jpo0=
-github.com/docker/docker v28.0.1+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v28.0.2+incompatible h1:9BILleFwug5FSSqWBgVevgL3ewDJfWWWyZVqlDMttE8=
+github.com/docker/docker v28.0.2+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
github.com/docker/go-metrics v0.0.1 h1:AgB/0SvBxihN0X8OR4SjsblXkbMvalQ8cjmtKQ2rQV8=
diff --git a/vendor/github.com/docker/docker/api/swagger.yaml b/vendor/github.com/docker/docker/api/swagger.yaml
index a4881f951e3ab..646032d6e0efc 100644
--- a/vendor/github.com/docker/docker/api/swagger.yaml
+++ b/vendor/github.com/docker/docker/api/swagger.yaml
@@ -5508,8 +5508,11 @@ definitions:
com.example.some-other-label: "some-other-value"
Data:
description: |
- Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5))
- data to store as secret.
+ Data is the data to store as a secret, formatted as a Base64-url-safe-encoded
+ ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) string.
+ It must be empty if the Driver field is set, in which case the data is
+ loaded from an external secret store. The maximum allowed size is 500KB,
+ as defined in [MaxSecretSize](https://pkg.go.dev/github.com/moby/swarmkit/[email protected]/api/validation#MaxSecretSize).
This field is only used to _create_ a secret, and is not returned by
other endpoints.
@@ -5560,8 +5563,9 @@ definitions:
type: "string"
Data:
description: |
- Base64-url-safe-encoded ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5))
- config data.
+ Data is the data to store as a config, formatted as a Base64-url-safe-encoded
+ ([RFC 4648](https://tools.ietf.org/html/rfc4648#section-5)) string.
+ The maximum allowed size is 1000KB, as defined in [MaxConfigSize](https://pkg.go.dev/github.com/moby/swarmkit/[email protected]/manager/controlapi#MaxConfigSize).
type: "string"
Templating:
description: |
diff --git a/vendor/github.com/docker/docker/api/types/registry/registry.go b/vendor/github.com/docker/docker/api/types/registry/registry.go
index b0a4d604f5f8b..8117cb09e7eee 100644
--- a/vendor/github.com/docker/docker/api/types/registry/registry.go
+++ b/vendor/github.com/docker/docker/api/types/registry/registry.go
@@ -49,15 +49,17 @@ func (ipnet *NetIPNet) MarshalJSON() ([]byte, error) {
}
// UnmarshalJSON sets the IPNet from a byte array of JSON
-func (ipnet *NetIPNet) UnmarshalJSON(b []byte) (err error) {
+func (ipnet *NetIPNet) UnmarshalJSON(b []byte) error {
var ipnetStr string
- if err = json.Unmarshal(b, &ipnetStr); err == nil {
- var cidr *net.IPNet
- if _, cidr, err = net.ParseCIDR(ipnetStr); err == nil {
- *ipnet = NetIPNet(*cidr)
- }
+ if err := json.Unmarshal(b, &ipnetStr); err != nil {
+ return err
}
- return
+ _, cidr, err := net.ParseCIDR(ipnetStr)
+ if err != nil {
+ return err
+ }
+ *ipnet = NetIPNet(*cidr)
+ return nil
}
// IndexInfo contains information about a registry
diff --git a/vendor/github.com/docker/docker/api/types/swarm/config.go b/vendor/github.com/docker/docker/api/types/swarm/config.go
index 16202ccce6151..f9a65187ffaf1 100644
--- a/vendor/github.com/docker/docker/api/types/swarm/config.go
+++ b/vendor/github.com/docker/docker/api/types/swarm/config.go
@@ -12,6 +12,12 @@ type Config struct {
// ConfigSpec represents a config specification from a config in swarm
type ConfigSpec struct {
Annotations
+
+ // Data is the data to store as a config.
+ //
+ // The maximum allowed size is 1000KB, as defined in [MaxConfigSize].
+ //
+ // [MaxConfigSize]: https://pkg.go.dev/github.com/moby/swarmkit/[email protected]/manager/controlapi#MaxConfigSize
Data []byte `json:",omitempty"`
// Templating controls whether and how to evaluate the config payload as
diff --git a/vendor/github.com/docker/docker/api/types/swarm/secret.go b/vendor/github.com/docker/docker/api/types/swarm/secret.go
index d5213ec981c3d..aeb5bb54ad1a4 100644
--- a/vendor/github.com/docker/docker/api/types/swarm/secret.go
+++ b/vendor/github.com/docker/docker/api/types/swarm/secret.go
@@ -12,8 +12,22 @@ type Secret struct {
// SecretSpec represents a secret specification from a secret in swarm
type SecretSpec struct {
Annotations
- Data []byte `json:",omitempty"`
- Driver *Driver `json:",omitempty"` // name of the secrets driver used to fetch the secret's value from an external secret store
+
+ // Data is the data to store as a secret. It must be empty if a
+ // [Driver] is used, in which case the data is loaded from an external
+ // secret store. The maximum allowed size is 500KB, as defined in
+ // [MaxSecretSize].
+ //
+ // This field is only used to create the secret, and is not returned
+ // by other endpoints.
+ //
+ // [MaxSecretSize]: https://pkg.go.dev/github.com/moby/swarmkit/[email protected]/api/validation#MaxSecretSize
+ Data []byte `json:",omitempty"`
+
+ // Driver is the name of the secrets driver used to fetch the secret's
+ // value from an external secret store. If not set, the default built-in
+ // store is used.
+ Driver *Driver `json:",omitempty"`
// Templating controls whether and how to evaluate the secret payload as
// a template. If it is not set, no templating is used.
diff --git a/vendor/github.com/docker/docker/client/container_create.go b/vendor/github.com/docker/docker/client/container_create.go
index 9b8616f7614e7..9bb106f776377 100644
--- a/vendor/github.com/docker/docker/client/container_create.go
+++ b/vendor/github.com/docker/docker/client/container_create.go
@@ -3,6 +3,7 @@ package client // import "github.com/docker/docker/client"
import (
"context"
"encoding/json"
+ "errors"
"net/url"
"path"
"sort"
@@ -54,6 +55,19 @@ func (cli *Client) ContainerCreate(ctx context.Context, config *container.Config
// When using API under 1.42, the Linux daemon doesn't respect the ConsoleSize
hostConfig.ConsoleSize = [2]uint{0, 0}
}
+ if versions.LessThan(cli.ClientVersion(), "1.44") {
+ for _, m := range hostConfig.Mounts {
+ if m.BindOptions != nil {
+ // ReadOnlyNonRecursive can be safely ignored when API < 1.44
+ if m.BindOptions.ReadOnlyForceRecursive {
+ return response, errors.New("bind-recursive=readonly requires API v1.44 or later")
+ }
+ if m.BindOptions.NonRecursive && versions.LessThan(cli.ClientVersion(), "1.40") {
+ return response, errors.New("bind-recursive=disabled requires API v1.40 or later")
+ }
+ }
+ }
+ }
hostConfig.CapAdd = normalizeCapabilities(hostConfig.CapAdd)
hostConfig.CapDrop = normalizeCapabilities(hostConfig.CapDrop)
diff --git a/vendor/github.com/docker/docker/client/service_create.go b/vendor/github.com/docker/docker/client/service_create.go
index fb12dd5b59599..54c03b138948d 100644
--- a/vendor/github.com/docker/docker/client/service_create.go
+++ b/vendor/github.com/docker/docker/client/service_create.go
@@ -37,6 +37,11 @@ func (cli *Client) ServiceCreate(ctx context.Context, service swarm.ServiceSpec,
if err := validateServiceSpec(service); err != nil {
return response, err
}
+ if versions.LessThan(cli.version, "1.30") {
+ if err := validateAPIVersion(service, cli.version); err != nil {
+ return response, err
+ }
+ }
// ensure that the image is tagged
var resolveWarning string
@@ -191,3 +196,18 @@ func validateServiceSpec(s swarm.ServiceSpec) error {
}
return nil
}
+
+func validateAPIVersion(c swarm.ServiceSpec, apiVersion string) error {
+ for _, m := range c.TaskTemplate.ContainerSpec.Mounts {
+ if m.BindOptions != nil {
+ if m.BindOptions.NonRecursive && versions.LessThan(apiVersion, "1.40") {
+ return errors.Errorf("bind-recursive=disabled requires API v1.40 or later")
+ }
+ // ReadOnlyNonRecursive can be safely ignored when API < 1.44
+ if m.BindOptions.ReadOnlyForceRecursive && versions.LessThan(apiVersion, "1.44") {
+ return errors.Errorf("bind-recursive=readonly requires API v1.44 or later")
+ }
+ }
+ }
+ return nil
+}
diff --git a/vendor/github.com/docker/docker/daemon/logger/proxy.go b/vendor/github.com/docker/docker/daemon/logger/proxy.go
index 5cb03f95ee044..502271b240034 100644
--- a/vendor/github.com/docker/docker/daemon/logger/proxy.go
+++ b/vendor/github.com/docker/docker/daemon/logger/proxy.go
@@ -32,14 +32,14 @@ func (pp *logPluginProxy) StartLogging(file string, info Info) (err error) {
req.File = file
req.Info = info
if err = pp.Call("LogDriver.StartLogging", req, &ret); err != nil {
- return
+ return err
}
if ret.Err != "" {
- err = errors.New(ret.Err)
+ return errors.New(ret.Err)
}
- return
+ return nil
}
type logPluginProxyStopLoggingRequest struct {
@@ -58,14 +58,14 @@ func (pp *logPluginProxy) StopLogging(file string) (err error) {
req.File = file
if err = pp.Call("LogDriver.StopLogging", req, &ret); err != nil {
- return
+ return err
}
if ret.Err != "" {
- err = errors.New(ret.Err)
+ return errors.New(ret.Err)
}
- return
+ return nil
}
type logPluginProxyCapabilitiesResponse struct {
@@ -77,16 +77,14 @@ func (pp *logPluginProxy) Capabilities() (cap Capability, err error) {
var ret logPluginProxyCapabilitiesResponse
if err = pp.Call("LogDriver.Capabilities", nil, &ret); err != nil {
- return
+ return Capability{}, err
}
- cap = ret.Cap
-
if ret.Err != "" {
- err = errors.New(ret.Err)
+ return Capability{}, errors.New(ret.Err)
}
- return
+ return ret.Cap, nil
}
type logPluginProxyReadLogsRequest struct {
@@ -95,9 +93,8 @@ type logPluginProxyReadLogsRequest struct {
}
func (pp *logPluginProxy) ReadLogs(info Info, config ReadConfig) (stream io.ReadCloser, err error) {
- var req logPluginProxyReadLogsRequest
-
- req.Info = info
- req.Config = config
- return pp.Stream("LogDriver.ReadLogs", req)
+ return pp.Stream("LogDriver.ReadLogs", logPluginProxyReadLogsRequest{
+ Info: info,
+ Config: config,
+ })
}
diff --git a/vendor/github.com/docker/docker/pkg/atomicwriter/atomicwriter.go b/vendor/github.com/docker/docker/pkg/atomicwriter/atomicwriter.go
index cbbe835bb128d..abf46275318c0 100644
--- a/vendor/github.com/docker/docker/pkg/atomicwriter/atomicwriter.go
+++ b/vendor/github.com/docker/docker/pkg/atomicwriter/atomicwriter.go
@@ -11,12 +11,12 @@ import (
// destination path. Writing and closing concurrently is not allowed.
// NOTE: umask is not considered for the file's permissions.
func New(filename string, perm os.FileMode) (io.WriteCloser, error) {
- f, err := os.CreateTemp(filepath.Dir(filename), ".tmp-"+filepath.Base(filename))
+ abspath, err := filepath.Abs(filename)
if err != nil {
return nil, err
}
- abspath, err := filepath.Abs(filename)
+ f, err := os.CreateTemp(filepath.Dir(abspath), ".tmp-"+filepath.Base(filename))
if err != nil {
return nil, err
}
diff --git a/vendor/github.com/docker/docker/pkg/pools/pools.go b/vendor/github.com/docker/docker/pkg/pools/pools.go
index 3ea3012b188ba..73117058830f6 100644
--- a/vendor/github.com/docker/docker/pkg/pools/pools.go
+++ b/vendor/github.com/docker/docker/pkg/pools/pools.go
@@ -76,11 +76,11 @@ func (bp *bufferPool) Put(b *[]byte) {
}
// Copy is a convenience wrapper which uses a buffer to avoid allocation in io.Copy.
-func Copy(dst io.Writer, src io.Reader) (written int64, err error) {
+func Copy(dst io.Writer, src io.Reader) (written int64, _ error) {
buf := buffer32KPool.Get()
- written, err = io.CopyBuffer(dst, src, *buf)
+ written, err := io.CopyBuffer(dst, src, *buf)
buffer32KPool.Put(buf)
- return
+ return written, err
}
// NewReadCloserWrapper returns a wrapper which puts the bufio.Reader back
@@ -88,7 +88,7 @@ func Copy(dst io.Writer, src io.Reader) (written int64, err error) {
func (bufPool *BufioReaderPool) NewReadCloserWrapper(buf *bufio.Reader, r io.Reader) io.ReadCloser {
return ioutils.NewReadCloserWrapper(r, func() error {
if readCloser, ok := r.(io.ReadCloser); ok {
- readCloser.Close()
+ _ = readCloser.Close()
}
bufPool.Put(buf)
return nil
@@ -127,9 +127,9 @@ func (bufPool *BufioWriterPool) Put(b *bufio.Writer) {
// into the pool and closes the writer if it's an io.WriteCloser.
func (bufPool *BufioWriterPool) NewWriteCloserWrapper(buf *bufio.Writer, w io.Writer) io.WriteCloser {
return ioutils.NewWriteCloserWrapper(w, func() error {
- buf.Flush()
+ _ = buf.Flush()
if writeCloser, ok := w.(io.WriteCloser); ok {
- writeCloser.Close()
+ _ = writeCloser.Close()
}
bufPool.Put(buf)
return nil
diff --git a/vendor/github.com/docker/docker/pkg/stdcopy/stdcopy.go b/vendor/github.com/docker/docker/pkg/stdcopy/stdcopy.go
index 8f6e0a737aa60..854e4c371814e 100644
--- a/vendor/github.com/docker/docker/pkg/stdcopy/stdcopy.go
+++ b/vendor/github.com/docker/docker/pkg/stdcopy/stdcopy.go
@@ -43,9 +43,9 @@ type stdWriter struct {
// It inserts the prefix header before the buffer,
// so stdcopy.StdCopy knows where to multiplex the output.
// It makes stdWriter to implement io.Writer.
-func (w *stdWriter) Write(p []byte) (n int, err error) {
+func (w *stdWriter) Write(p []byte) (int, error) {
if w == nil || w.Writer == nil {
- return 0, errors.New("Writer not instantiated")
+ return 0, errors.New("writer not instantiated")
}
if p == nil {
return 0, nil
@@ -57,7 +57,7 @@ func (w *stdWriter) Write(p []byte) (n int, err error) {
buf.Write(header[:])
buf.Write(p)
- n, err = w.Writer.Write(buf.Bytes())
+ n, err := w.Writer.Write(buf.Bytes())
n -= stdWriterPrefixLen
if n < 0 {
n = 0
@@ -65,7 +65,7 @@ func (w *stdWriter) Write(p []byte) (n int, err error) {
buf.Reset()
bufPool.Put(buf)
- return
+ return n, err
}
// NewStdWriter instantiates a new Writer.
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 00b5f702678ea..ced0ca13dec5f 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -623,7 +623,7 @@ github.com/distribution/reference
## explicit; go 1.13
github.com/dlclark/regexp2
github.com/dlclark/regexp2/syntax
-# github.com/docker/docker v28.0.1+incompatible
+# github.com/docker/docker v28.0.2+incompatible
## explicit
github.com/docker/docker/api
github.com/docker/docker/api/types
|
fix
|
update module github.com/docker/docker to v28.0.2+incompatible (main) (#16837)
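The pkg/atomicwriter hunk above reorders `filepath.Abs` ahead of `os.CreateTemp` so that, for a relative filename, the temp file is created next to the resolved destination rather than relative to the current working directory. A minimal sketch of the corrected ordering; `newAtomicTemp` is an illustrative helper, not the vendored API:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// newAtomicTemp resolves the destination to an absolute path first, then
// creates the temp file in that same directory, so the final os.Rename is
// atomic and unaffected by later working-directory changes.
func newAtomicTemp(filename string) (*os.File, string, error) {
	abspath, err := filepath.Abs(filename)
	if err != nil {
		return nil, "", err
	}
	f, err := os.CreateTemp(filepath.Dir(abspath), ".tmp-"+filepath.Base(filename))
	if err != nil {
		return nil, "", err
	}
	return f, abspath, nil
}

func main() {
	f, dest, err := newAtomicTemp("config.json")
	if err != nil {
		log.Fatal(err)
	}
	f.WriteString("hello\n")
	f.Close()
	if err := os.Rename(f.Name(), dest); err != nil {
		log.Fatal(err)
	}
	fmt.Println("atomically wrote", dest)
}
```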
|
41c19ec747f9b069baa6934bc6b414c5e2a11172
|
2023-05-26 20:22:24
|
risson
|
helm: support of annotations to deployments and statefulsets (#9509)
| false
|
diff --git a/docs/sources/installation/helm/reference.md b/docs/sources/installation/helm/reference.md
index 06965e470aae0..2a180dc0b54ec 100644
--- a/docs/sources/installation/helm/reference.md
+++ b/docs/sources/installation/helm/reference.md
@@ -38,6 +38,15 @@ This is the generated reference for the Loki Helm Chart values.
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>backend.annotations</td>
+ <td>object</td>
+ <td>Annotations for backend StatefulSet</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
@@ -827,6 +836,15 @@ null
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>gateway.annotations</td>
+ <td>object</td>
+ <td>Annotations for gateway deployment</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
@@ -1693,6 +1711,15 @@ null
<td><pre lang="json">
{}
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>loki.annotations</td>
+ <td>object</td>
+ <td>Common annotations for all deployments/StatefulSets</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
@@ -2850,6 +2877,15 @@ false
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>read.annotations</td>
+ <td>object</td>
+ <td>Annotations for read deployment</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
@@ -3201,6 +3237,15 @@ null
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>singleBinary.annotations</td>
+ <td>object</td>
+ <td>Annotations for single binary StatefulSet</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
@@ -3480,6 +3525,15 @@ null
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>tableManager.annotations</td>
+ <td>object</td>
+ <td>Annotations for table-manager deployment</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
@@ -3787,6 +3841,15 @@ null
<td><pre lang="">
Hard node and soft zone anti-affinity
</pre>
+</td>
+ </tr>
+ <tr>
+ <td>write.annotations</td>
+ <td>object</td>
+ <td>Annotations for write StatefulSet</td>
+ <td><pre lang="json">
+{}
+</pre>
</td>
</tr>
<tr>
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index b49b83eedfb73..b81dd4bd3cc69 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,10 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+## 5.5.8
+
+- [CHANGE] Add support for annotations on all Deployments and StatefulSets
+
## 5.5.7
- [BUGFIX] Fix breaking helm upgrade by changing sts podManagementPolicy from Parallel to OrderedReady which fails since that field cannot be modified on sts.
diff --git a/production/helm/loki/Chart.yaml b/production/helm/loki/Chart.yaml
index 5fb1ebcca6f2d..2e9a2391fdd9c 100644
--- a/production/helm/loki/Chart.yaml
+++ b/production/helm/loki/Chart.yaml
@@ -3,7 +3,7 @@ name: loki
description: Helm chart for Grafana Loki in simple, scalable mode
type: application
appVersion: 2.8.2
-version: 5.5.7
+version: 5.5.8
home: https://grafana.github.io/helm-charts
sources:
- https://github.com/grafana/loki
diff --git a/production/helm/loki/README.md b/production/helm/loki/README.md
index b53703ade267c..d683bd04094ea 100644
--- a/production/helm/loki/README.md
+++ b/production/helm/loki/README.md
@@ -1,6 +1,6 @@
# loki
-  
+  
Helm chart for Grafana Loki in simple, scalable mode
diff --git a/production/helm/loki/templates/backend/statefulset-backend.yaml b/production/helm/loki/templates/backend/statefulset-backend.yaml
index 4124f7d229984..653ae32acb50b 100644
--- a/production/helm/loki/templates/backend/statefulset-backend.yaml
+++ b/production/helm/loki/templates/backend/statefulset-backend.yaml
@@ -9,6 +9,13 @@ metadata:
labels:
{{- include "loki.backendLabels" . | nindent 4 }}
app.kubernetes.io/part-of: memberlist
+ annotations:
+ {{- with .Values.loki.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.backend.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
spec:
{{- if not .Values.backend.autoscaling.enabled }}
replicas: {{ .Values.backend.replicas }}
diff --git a/production/helm/loki/templates/gateway/deployment-gateway.yaml b/production/helm/loki/templates/gateway/deployment-gateway.yaml
index eaee82269e7e8..440478c7bf973 100644
--- a/production/helm/loki/templates/gateway/deployment-gateway.yaml
+++ b/production/helm/loki/templates/gateway/deployment-gateway.yaml
@@ -6,6 +6,13 @@ metadata:
namespace: {{ $.Release.Namespace }}
labels:
{{- include "loki.gatewayLabels" . | nindent 4 }}
+ annotations:
+ {{- with .Values.loki.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.gateway.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
spec:
{{- if not .Values.gateway.autoscaling.enabled }}
replicas: {{ .Values.gateway.replicas }}
diff --git a/production/helm/loki/templates/read/deployment-read.yaml b/production/helm/loki/templates/read/deployment-read.yaml
index 8938e816fbaed..3712584e31acf 100644
--- a/production/helm/loki/templates/read/deployment-read.yaml
+++ b/production/helm/loki/templates/read/deployment-read.yaml
@@ -9,6 +9,13 @@ metadata:
labels:
app.kubernetes.io/part-of: memberlist
{{- include "loki.readLabels" . | nindent 4 }}
+ annotations:
+ {{- with .Values.loki.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.read.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
spec:
{{- if not .Values.read.autoscaling.enabled }}
replicas: {{ .Values.read.replicas }}
diff --git a/production/helm/loki/templates/single-binary/statefulset.yaml b/production/helm/loki/templates/single-binary/statefulset.yaml
index 801ce539e2566..f32e24fb52e3b 100644
--- a/production/helm/loki/templates/single-binary/statefulset.yaml
+++ b/production/helm/loki/templates/single-binary/statefulset.yaml
@@ -9,6 +9,13 @@ metadata:
labels:
{{- include "loki.singleBinaryLabels" . | nindent 4 }}
app.kubernetes.io/part-of: memberlist
+ annotations:
+ {{- with .Values.loki.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.singleBinary.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
spec:
replicas: {{ include "loki.singleBinaryReplicas" . }}
podManagementPolicy: Parallel
diff --git a/production/helm/loki/templates/table-manager/deployment-table-manager.yaml b/production/helm/loki/templates/table-manager/deployment-table-manager.yaml
index f5529ebec631e..8ee50c16b4f47 100644
--- a/production/helm/loki/templates/table-manager/deployment-table-manager.yaml
+++ b/production/helm/loki/templates/table-manager/deployment-table-manager.yaml
@@ -5,10 +5,13 @@ metadata:
name: {{ include "loki.tableManagerFullname" . }}
labels:
{{- include "loki.tableManagerLabels" . | nindent 4 }}
- {{- with .Values.loki.annotations }}
annotations:
+ {{- with .Values.loki.annotations }}
{{- toYaml . | nindent 4 }}
- {{- end }}
+ {{- end }}
+ {{- with .Values.tableManager.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
spec:
replicas: 1
revisionHistoryLimit: {{ .Values.loki.revisionHistoryLimit }}
diff --git a/production/helm/loki/templates/write/statefulset-write.yaml b/production/helm/loki/templates/write/statefulset-write.yaml
index c73e5e4f348b5..df2768addc0c6 100644
--- a/production/helm/loki/templates/write/statefulset-write.yaml
+++ b/production/helm/loki/templates/write/statefulset-write.yaml
@@ -8,6 +8,13 @@ metadata:
labels:
{{- include "loki.writeLabels" . | nindent 4 }}
app.kubernetes.io/part-of: memberlist
+ annotations:
+ {{- with .Values.loki.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
+ {{- with .Values.write.annotations }}
+ {{- toYaml . | nindent 4 }}
+ {{- end }}
spec:
{{- if not .Values.write.autoscaling.enabled }}
replicas: {{ .Values.write.replicas }}
diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml
index 32d249b2b4499..0520e50315f18 100644
--- a/production/helm/loki/values.yaml
+++ b/production/helm/loki/values.yaml
@@ -50,6 +50,8 @@ loki:
digest: null
# -- Docker image pull policy
pullPolicy: IfNotPresent
+ # -- Common annotations for all deployments/StatefulSets
+ annotations: {}
# -- Common annotations for all pods
podAnnotations: {}
# -- Common labels for all pods
@@ -679,6 +681,8 @@ write:
tag: null
# -- The name of the PriorityClass for write pods
priorityClassName: null
+ # -- Annotations for write StatefulSet
+ annotations: {}
# -- Annotations for write pods
podAnnotations: {}
# -- Additional labels for each `write` pod
@@ -761,6 +765,8 @@ tableManager:
priorityClassName: null
# -- Labels for table-manager pods
podLabels: {}
+ # -- Annotations for table-manager deployment
+ annotations: {}
# -- Annotations for table-manager pods
podAnnotations: {}
# -- Labels for table-manager service
@@ -839,6 +845,8 @@ read:
tag: null
# -- The name of the PriorityClass for read pods
priorityClassName: null
+ # -- Annotations for read deployment
+ annotations: {}
# -- Annotations for read pods
podAnnotations: {}
# -- Additional labels for each `read` pod
@@ -935,6 +943,8 @@ backend:
tag: null
# -- The name of the PriorityClass for backend pods
priorityClassName: null
+ # -- Annotations for backend StatefulSet
+ annotations: {}
# -- Annotations for backend pods
podAnnotations: {}
# -- Additional labels for each `backend` pod
@@ -1015,6 +1025,8 @@ singleBinary:
tag: null
# -- The name of the PriorityClass for single binary pods
priorityClassName: null
+ # -- Annotations for single binary StatefulSet
+ annotations: {}
# -- Annotations for single binary pods
podAnnotations: {}
# -- Additional labels for each `single binary` pod
@@ -1164,6 +1176,8 @@ gateway:
pullPolicy: IfNotPresent
# -- The name of the PriorityClass for gateway pods
priorityClassName: null
+ # -- Annotations for gateway deployment
+ annotations: {}
# -- Annotations for gateway pods
podAnnotations: {}
# -- Additional labels for gateway pods
|
helm
|
support of annotations to deployments and statefulsets (#9509)
|
62a2217d83d3b5dd9c899875987ae9441f4b26fd
|
2023-12-08 17:11:02
|
Sandeep Sukhani
|
compactor: fix metric name for a compactor (#11412)
| false
|
diff --git a/pkg/compactor/metrics.go b/pkg/compactor/metrics.go
index 28b205789693c..96fc9b16541e1 100644
--- a/pkg/compactor/metrics.go
+++ b/pkg/compactor/metrics.go
@@ -49,7 +49,7 @@ func newMetrics(r prometheus.Registerer) *metrics {
Help: "Time (in seconds) spent in applying retention",
}),
applyRetentionLastSuccess: promauto.With(r).NewGauge(prometheus.GaugeOpts{
- Namespace: "loki_boltdb_shipper",
+ Namespace: "loki_compactor",
Name: "apply_retention_last_successful_run_timestamp_seconds",
Help: "Unix timestamp of the last successful retention run",
}),
|
compactor
|
fix metric name for a compactor (#11412)
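The namespace change renames the exported series from `loki_boltdb_shipper_apply_retention_last_successful_run_timestamp_seconds` to `loki_compactor_apply_retention_last_successful_run_timestamp_seconds`, aligning it with the compactor's other metrics. A small sketch of how Prometheus composes the full name from the opts fields:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// BuildFQName joins the non-empty Namespace, Subsystem, and Name parts
	// with "_", which is also how GaugeOpts derives the series name.
	before := prometheus.BuildFQName("loki_boltdb_shipper", "", "apply_retention_last_successful_run_timestamp_seconds")
	after := prometheus.BuildFQName("loki_compactor", "", "apply_retention_last_successful_run_timestamp_seconds")
	fmt.Println(before)
	fmt.Println(after)
}
```

Any dashboards or alerts keyed on the old series name would need to follow the rename.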
|
646c75409e057cf48553bf413b5fd834c22097a2
|
2024-07-30 07:48:26
|
Ed Welch
|
chore: log stats around chunks being flushed (#13699)
| false
|
diff --git a/pkg/ingester/flush.go b/pkg/ingester/flush.go
index d851d9e4addec..592ec0690b6b3 100644
--- a/pkg/ingester/flush.go
+++ b/pkg/ingester/flush.go
@@ -8,6 +8,7 @@ import (
"sync"
"time"
+ "github.com/dustin/go-humanize"
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/grafana/dskit/backoff"
@@ -45,6 +46,60 @@ const (
flushReasonSynced = "synced"
)
+// flushReasonCounter tracks how many chunks were flushed for each reason; it is unexported because it is only needed within this package.
+type flushReasonCounter struct {
+ flushReasonIdle int
+ flushReasonMaxAge int
+ flushReasonForced int
+ flushReasonNotOwned int
+ flushReasonFull int
+ flushReasonSynced int
+}
+
+func (f *flushReasonCounter) Log() []interface{} {
+ // return counters only if they are non zero
+ var log []interface{}
+ if f.flushReasonIdle > 0 {
+ log = append(log, "idle", f.flushReasonIdle)
+ }
+ if f.flushReasonMaxAge > 0 {
+ log = append(log, "max_age", f.flushReasonMaxAge)
+ }
+ if f.flushReasonForced > 0 {
+ log = append(log, "forced", f.flushReasonForced)
+ }
+ if f.flushReasonNotOwned > 0 {
+ log = append(log, "not_owned", f.flushReasonNotOwned)
+ }
+ if f.flushReasonFull > 0 {
+ log = append(log, "full", f.flushReasonFull)
+ }
+ if f.flushReasonSynced > 0 {
+ log = append(log, "synced", f.flushReasonSynced)
+ }
+ return log
+}
+
+func (f *flushReasonCounter) IncrementForReason(reason string) error {
+ switch reason {
+ case flushReasonIdle:
+ f.flushReasonIdle++
+ case flushReasonMaxAge:
+ f.flushReasonMaxAge++
+ case flushReasonForced:
+ f.flushReasonForced++
+ case flushReasonNotOwned:
+ f.flushReasonNotOwned++
+ case flushReasonFull:
+ f.flushReasonFull++
+ case flushReasonSynced:
+ f.flushReasonSynced++
+ default:
+ return fmt.Errorf("unknown reason: %s", reason)
+ }
+ return nil
+}
+
// Note: this is called both during the WAL replay (zero or more times)
// and then after replay as well.
func (i *Ingester) InitFlushQueues() {
@@ -220,8 +275,33 @@ func (i *Ingester) flushUserSeries(ctx context.Context, userID string, fp model.
return nil
}
+ totalCompressedSize := 0
+ totalUncompressedSize := 0
+ frc := flushReasonCounter{}
+ for _, c := range chunks {
+ totalCompressedSize += c.chunk.CompressedSize()
+ totalUncompressedSize += c.chunk.UncompressedSize()
+ err := frc.IncrementForReason(c.reason)
+ if err != nil {
+ level.Error(i.logger).Log("msg", "error incrementing flush reason", "err", err)
+ }
+ }
+
lbs := labels.String()
- level.Info(i.logger).Log("msg", "flushing stream", "user", userID, "fp", fp, "immediate", immediate, "num_chunks", len(chunks), "labels", lbs)
+ logValues := make([]interface{}, 0, 35)
+ logValues = append(logValues,
+ "msg", "flushing stream",
+ "user", userID,
+ "fp", fp,
+ "immediate", immediate,
+ "num_chunks", len(chunks),
+ "total_comp", humanize.Bytes(uint64(totalCompressedSize)),
+ "avg_comp", humanize.Bytes(uint64(totalCompressedSize/len(chunks))),
+ "total_uncomp", humanize.Bytes(uint64(totalUncompressedSize)),
+ "avg_uncomp", humanize.Bytes(uint64(totalUncompressedSize/len(chunks))))
+ logValues = append(logValues, frc.Log()...)
+ logValues = append(logValues, "labels", lbs)
+ level.Info(i.logger).Log(logValues...)
ctx = user.InjectOrgID(ctx, userID)
ctx, cancelFunc := context.WithTimeout(ctx, i.cfg.FlushOpTimeout)
|
chore
|
log stats around chunks being flushed (#13699)
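The flush log line above is assembled as a flat slice of alternating keys and values so zero-valued reason counters can be omitted. A self-contained sketch of the same pattern, with made-up sizes and counts:

```go
package main

import (
	"os"

	"github.com/dustin/go-humanize"
	"github.com/go-kit/log"
	"github.com/go-kit/log/level"
)

func main() {
	logger := log.NewLogfmtLogger(os.Stdout)

	totalComp, totalUncomp, numChunks := 1_500_000, 12_000_000, 4 // made-up stats
	logValues := []interface{}{
		"msg", "flushing stream",
		"num_chunks", numChunks,
		"total_comp", humanize.Bytes(uint64(totalComp)),
		"avg_uncomp", humanize.Bytes(uint64(totalUncomp / numChunks)),
	}
	// Mirror flushReasonCounter.Log: append a reason only when its count
	// is non-zero, keeping the emitted line short.
	if full := 3; full > 0 {
		logValues = append(logValues, "full", full)
	}
	level.Info(logger).Log(logValues...)
}
```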
|
504779b89bc30e58275986edccea0316ee5567ef
|
2025-01-07 22:22:27
|
renovate[bot]
|
chore(deps): update terraform google to v6.15.0 (#15616)
| false
|
diff --git a/tools/gcplog/main.tf b/tools/gcplog/main.tf
index d66b03373edb2..c2ed6945d366d 100644
--- a/tools/gcplog/main.tf
+++ b/tools/gcplog/main.tf
@@ -2,7 +2,7 @@ terraform {
required_providers {
google = {
source = "hashicorp/google"
- version = "6.14.1"
+ version = "6.15.0"
}
}
}
|
chore
|
update terraform google to v6.15.0 (#15616)
|
52cb0af3aaf68b696ccb12b73bfc995b6d8bc4a6
|
2025-02-07 22:35:01
|
benclive
|
chore(dataobj): Use kgo Balancer for dataobj-consumer (#16146)
| false
|
diff --git a/pkg/dataobj/consumer/partition_processor.go b/pkg/dataobj/consumer/partition_processor.go
index 37e63a87fd7db..cf8345364b7ba 100644
--- a/pkg/dataobj/consumer/partition_processor.go
+++ b/pkg/dataobj/consumer/partition_processor.go
@@ -37,7 +37,7 @@ type partitionProcessor struct {
builderOnce sync.Once
builderCfg dataobj.BuilderConfig
bucket objstore.Bucket
- flushBuffer *bytes.Buffer
+ bufPool *sync.Pool
// Metrics
metrics *partitionOffsetMetrics
@@ -50,7 +50,7 @@ type partitionProcessor struct {
logger log.Logger
}
-func newPartitionProcessor(ctx context.Context, client *kgo.Client, builderCfg dataobj.BuilderConfig, uploaderCfg uploader.Config, bucket objstore.Bucket, tenantID string, virtualShard int32, topic string, partition int32, logger log.Logger, reg prometheus.Registerer) *partitionProcessor {
+func newPartitionProcessor(ctx context.Context, client *kgo.Client, builderCfg dataobj.BuilderConfig, uploaderCfg uploader.Config, bucket objstore.Bucket, tenantID string, virtualShard int32, topic string, partition int32, logger log.Logger, reg prometheus.Registerer, bufPool *sync.Pool) *partitionProcessor {
ctx, cancel := context.WithCancel(ctx)
decoder, err := kafka.NewDecoder()
if err != nil {
@@ -94,6 +94,7 @@ func newPartitionProcessor(ctx context.Context, client *kgo.Client, builderCfg d
metrics: metrics,
uploader: uploader,
metastoreManager: metastoreManager,
+ bufPool: bufPool,
}
}
@@ -158,7 +159,6 @@ func (p *partitionProcessor) initBuilder() error {
return
}
p.builder = builder
- p.flushBuffer = bytes.NewBuffer(make([]byte, 0, p.builderCfg.TargetObjectSize))
})
return initErr
}
@@ -194,22 +194,29 @@ func (p *partitionProcessor) processRecord(record *kgo.Record) {
return
}
- flushedDataobjStats, err := p.builder.Flush(p.flushBuffer)
- if err != nil {
- level.Error(p.logger).Log("msg", "failed to flush builder", "err", err)
- return
- }
+ func() {
+ flushBuffer := p.bufPool.Get().(*bytes.Buffer)
+ defer p.bufPool.Put(flushBuffer)
- objectPath, err := p.uploader.Upload(p.ctx, p.flushBuffer)
- if err != nil {
- level.Error(p.logger).Log("msg", "failed to upload object", "err", err)
- return
- }
+ flushBuffer.Reset()
- if err := p.metastoreManager.UpdateMetastore(p.ctx, objectPath, flushedDataobjStats); err != nil {
- level.Error(p.logger).Log("msg", "failed to update metastore", "err", err)
- return
- }
+ flushedDataobjStats, err := p.builder.Flush(flushBuffer)
+ if err != nil {
+ level.Error(p.logger).Log("msg", "failed to flush builder", "err", err)
+ return
+ }
+
+ objectPath, err := p.uploader.Upload(p.ctx, flushBuffer)
+ if err != nil {
+ level.Error(p.logger).Log("msg", "failed to upload object", "err", err)
+ return
+ }
+
+ if err := p.metastoreManager.UpdateMetastore(p.ctx, objectPath, flushedDataobjStats); err != nil {
+ level.Error(p.logger).Log("msg", "failed to update metastore", "err", err)
+ return
+ }
+ }()
if err := p.commitRecords(record); err != nil {
level.Error(p.logger).Log("msg", "failed to commit records", "err", err)
diff --git a/pkg/dataobj/consumer/service.go b/pkg/dataobj/consumer/service.go
index df7c570fe8eff..3f522e38a7c6c 100644
--- a/pkg/dataobj/consumer/service.go
+++ b/pkg/dataobj/consumer/service.go
@@ -1,6 +1,7 @@
package consumer
import (
+ "bytes"
"context"
"errors"
"strconv"
@@ -39,6 +40,8 @@ type Service struct {
// Partition management
partitionMtx sync.RWMutex
partitionHandlers map[string]map[int32]*partitionProcessor
+
+ bufPool *sync.Pool
}
func New(kafkaCfg kafka.Config, cfg Config, topicPrefix string, bucket objstore.Bucket, instanceID string, partitionRing ring.PartitionRingReader, reg prometheus.Registerer, logger log.Logger) *Service {
@@ -49,6 +52,11 @@ func New(kafkaCfg kafka.Config, cfg Config, topicPrefix string, bucket objstore.
codec: distributor.TenantPrefixCodec(topicPrefix),
partitionHandlers: make(map[string]map[int32]*partitionProcessor),
reg: reg,
+ bufPool: &sync.Pool{
+ New: func() interface{} {
+ return bytes.NewBuffer(make([]byte, 0, cfg.BuilderConfig.TargetObjectSize))
+ },
+ },
}
client, err := consumer.NewGroupClient(
@@ -92,7 +100,7 @@ func (s *Service) handlePartitionsAssigned(ctx context.Context, client *kgo.Clie
}
for _, partition := range parts {
- processor := newPartitionProcessor(ctx, client, s.cfg.BuilderConfig, s.cfg.UploaderConfig, s.bucket, tenant, virtualShard, topic, partition, s.logger, s.reg)
+ processor := newPartitionProcessor(ctx, client, s.cfg.BuilderConfig, s.cfg.UploaderConfig, s.bucket, tenant, virtualShard, topic, partition, s.logger, s.reg, s.bufPool)
s.partitionHandlers[topic][partition] = processor
processor.start()
}
diff --git a/pkg/kafka/partitionring/consumer/client.go b/pkg/kafka/partitionring/consumer/client.go
index 8790f12441260..22b8339e5513b 100644
--- a/pkg/kafka/partitionring/consumer/client.go
+++ b/pkg/kafka/partitionring/consumer/client.go
@@ -39,7 +39,7 @@ func NewGroupClient(kafkaCfg kafka.Config, partitionRing ring.PartitionRingReade
kgo.ConsumerGroup(groupName),
kgo.ConsumeRegex(),
kgo.ConsumeTopics(kafkaCfg.Topic),
- kgo.Balancers(NewCooperativeActiveStickyBalancer(partitionRing)),
+ kgo.Balancers(kgo.CooperativeStickyBalancer()),
kgo.ConsumeResetOffset(kgo.NewOffset().AtStart()),
kgo.DisableAutoCommit(),
kgo.RebalanceTimeout(5 * time.Minute),
|
chore
|
Use kgo Balancer for dataobj-consumer (#16146)
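Replacing the per-processor `flushBuffer` with a shared `*sync.Pool` means partition processors borrow one target-sized buffer per flush instead of each holding `TargetObjectSize` bytes for their whole lifetime. A minimal sketch of the Get/Reset/defer-Put discipline used above; `targetObjectSize` stands in for the builder config value:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

const targetObjectSize = 1 << 20 // stand-in for cfg.BuilderConfig.TargetObjectSize

func main() {
	bufPool := &sync.Pool{
		New: func() interface{} {
			return bytes.NewBuffer(make([]byte, 0, targetObjectSize))
		},
	}

	flush := func(payload []byte) {
		buf := bufPool.Get().(*bytes.Buffer)
		defer bufPool.Put(buf)
		buf.Reset() // a pooled buffer may still hold bytes from a prior flush
		buf.Write(payload)
		fmt.Println("flushed", buf.Len(), "bytes")
	}

	flush([]byte("first object"))
	flush([]byte("second object"))
}
```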
|
57e91cdc4a8d5f812cd7b6b2c1023d8f16d7e320
|
2024-04-05 22:10:22
|
J Stickler
|
docs: document pattern match filter (#12455)
| false
|
diff --git a/docs/sources/query/_index.md b/docs/sources/query/_index.md
index cf2aeb3f8acbc..0746d199b7b06 100644
--- a/docs/sources/query/_index.md
+++ b/docs/sources/query/_index.md
@@ -137,6 +137,38 @@ Same as above, but vectors have their values set to `1` if they pass the compari
sum without(app) (count_over_time({app="foo"}[1m])) > bool sum without(app) (count_over_time({app="bar"}[1m]))
```
+### Pattern match filter operators
+
+- `|>` (line match pattern)
+- `!>` (line match not pattern)
+
+The pattern match filter not only improves query efficiency but also simplifies writing LogQL queries: instead of composing a complex regular expression, you describe the line with an intuitive pattern syntax, reducing cognitive load and the potential for errors.
+
+Within the pattern syntax, `<_>` serves as a wildcard representing arbitrary text. This lets a query match log lines where the specified pattern occurs, such as lines with static content interleaved with variable content.
+
+Line match pattern example:
+
+```logql
+{service_name=`distributor`} |> `<_> caller=http.go:194 level=debug <_> msg="POST /push.v1.PusherService/Push <_>`
+```
+
+Line match not pattern example:
+
+```logql
+{service_name=`distributor`} !> `<_> caller=http.go:194 level=debug <_> msg="POST /push.v1.PusherService/Push <_>`
+```
+
+For example, the queries above will respectively match and not match each of the following log lines from the `distributor` service:
+
+```log
+ts=2024-04-05T08:40:13.585911094Z caller=http.go:194 level=debug traceID=23e54a271db607cc orgID=3648 msg="POST /push.v1.PusherService/Push (200) 12.684035ms"
+ts=2024-04-05T08:41:06.551403339Z caller=http.go:194 level=debug traceID=54325a1a15b42e2d orgID=1218 msg="POST /push.v1.PusherService/Push (200) 1.664285ms"
+ts=2024-04-05T08:41:06.506524777Z caller=http.go:194 level=debug traceID=69d4271da1595bcb orgID=1218 msg="POST /push.v1.PusherService/Push (200) 1.783818ms"
+ts=2024-04-05T08:41:06.473740396Z caller=http.go:194 level=debug traceID=3b8ec973e6397814 orgID=3648 msg="POST /push.v1.PusherService/Push (200) 1.893987ms"
+ts=2024-04-05T08:41:05.88999067Z caller=http.go:194 level=debug traceID=6892d7ef67b4d65c orgID=3648 msg="POST /push.v1.PusherService/Push (200) 2.314337ms"
+ts=2024-04-05T08:41:05.826266414Z caller=http.go:194 level=debug traceID=0bb76e910cfd008d orgID=3648 msg="POST /push.v1.PusherService/Push (200) 3.625744ms"
+```
+
### Order of operations
When chaining or combining operators, you have to consider operator precedence:
|
docs
|
document pattern match filter (#12455)
|
9ddc756c1d18fff4c9f91b560a688e15292f9be4
|
2025-02-04 20:28:26
|
renovate[bot]
|
fix(deps): update module golang.org/x/oauth2 to v0.26.0 (main) (#16085)
| false
|
diff --git a/go.mod b/go.mod
index f6b7dedfb55b7..b99581f1ed29d 100644
--- a/go.mod
+++ b/go.mod
@@ -145,7 +145,7 @@ require (
github.com/willf/bloom v2.0.3+incompatible
go.opentelemetry.io/collector/pdata v1.25.0
go4.org/netipx v0.0.0-20230125063823-8449b0a6169f
- golang.org/x/oauth2 v0.25.0
+ golang.org/x/oauth2 v0.26.0
golang.org/x/text v0.21.0
google.golang.org/protobuf v1.36.4
gotest.tools v2.2.0+incompatible
diff --git a/go.sum b/go.sum
index ce878f9db4506..d2cb7fc558638 100644
--- a/go.sum
+++ b/go.sum
@@ -1366,8 +1366,8 @@ golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4Iltr
golang.org/x/oauth2 v0.0.0-20191202225959-858c2ad4c8b6/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
-golang.org/x/oauth2 v0.25.0 h1:CY4y7XT9v0cRI9oupztF8AgiIu99L/ksR/Xp/6jrZ70=
-golang.org/x/oauth2 v0.25.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
+golang.org/x/oauth2 v0.26.0 h1:afQXWNNaeC4nvZ0Ed9XvCCzXM6UHJG7iCg0W4fPqSBE=
+golang.org/x/oauth2 v0.26.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
diff --git a/vendor/golang.org/x/oauth2/google/default.go b/vendor/golang.org/x/oauth2/google/default.go
index df958359a8706..0260935bab745 100644
--- a/vendor/golang.org/x/oauth2/google/default.go
+++ b/vendor/golang.org/x/oauth2/google/default.go
@@ -251,6 +251,12 @@ func FindDefaultCredentials(ctx context.Context, scopes ...string) (*Credentials
// a Google Developers service account key file, a gcloud user credentials file (a.k.a. refresh
// token JSON), or the JSON configuration file for workload identity federation in non-Google cloud
// platforms (see https://cloud.google.com/iam/docs/how-to#using-workload-identity-federation).
+//
+// Important: If you accept a credential configuration (credential JSON/File/Stream) from an
+// external source for authentication to Google Cloud Platform, you must validate it before
+// providing it to any Google API or library. Providing an unvalidated credential configuration to
+// Google APIs can compromise the security of your systems and data. For more information, refer to
+// [Validate credential configurations from external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
func CredentialsFromJSONWithParams(ctx context.Context, jsonData []byte, params CredentialsParams) (*Credentials, error) {
// Make defensive copy of the slices in params.
params = params.deepCopy()
@@ -294,6 +300,12 @@ func CredentialsFromJSONWithParams(ctx context.Context, jsonData []byte, params
}
// CredentialsFromJSON invokes CredentialsFromJSONWithParams with the specified scopes.
+//
+// Important: If you accept a credential configuration (credential JSON/File/Stream) from an
+// external source for authentication to Google Cloud Platform, you must validate it before
+// providing it to any Google API or library. Providing an unvalidated credential configuration to
+// Google APIs can compromise the security of your systems and data. For more information, refer to
+// [Validate credential configurations from external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
func CredentialsFromJSON(ctx context.Context, jsonData []byte, scopes ...string) (*Credentials, error) {
var params CredentialsParams
params.Scopes = scopes
diff --git a/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go b/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go
index ee34924e301b1..fc106347d85c5 100644
--- a/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go
+++ b/vendor/golang.org/x/oauth2/google/externalaccount/basecredentials.go
@@ -278,20 +278,52 @@ type Format struct {
type CredentialSource struct {
// File is the location for file sourced credentials.
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
File string `json:"file"`
// Url is the URL to call for URL sourced credentials.
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
URL string `json:"url"`
// Headers are the headers to attach to the request for URL sourced credentials.
Headers map[string]string `json:"headers"`
// Executable is the configuration object for executable sourced credentials.
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
Executable *ExecutableConfig `json:"executable"`
// EnvironmentID is the EnvironmentID used for AWS sourced credentials. This should start with "AWS".
// One field amongst File, URL, Executable, or EnvironmentID should be provided, depending on the kind of credential in question.
+ //
+ // Important: If you accept a credential configuration (credential
+ // JSON/File/Stream) from an external source for authentication to Google
+ // Cloud Platform, you must validate it before providing it to any Google
+ // API or library. Providing an unvalidated credential configuration to
+ // Google APIs can compromise the security of your systems and data. For
+ // more information, refer to [Validate credential configurations from
+ // external sources](https://cloud.google.com/docs/authentication/external/externally-sourced-credentials).
EnvironmentID string `json:"environment_id"`
// RegionURL is the metadata URL to retrieve the region from for EC2 AWS credentials.
RegionURL string `json:"region_url"`
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 4f92f6e5f9ae5..cabbd0a33109d 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1933,7 +1933,7 @@ golang.org/x/net/netutil
golang.org/x/net/proxy
golang.org/x/net/publicsuffix
golang.org/x/net/trace
-# golang.org/x/oauth2 v0.25.0
+# golang.org/x/oauth2 v0.26.0
## explicit; go 1.18
golang.org/x/oauth2
golang.org/x/oauth2/authhandler
|
fix
|
update module golang.org/x/oauth2 to v0.26.0 (main) (#16085)
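The doc comments added throughout this vendor bump warn that externally supplied credential JSON must be validated before it reaches the library, because file-, URL-, and executable-sourced configurations can point at attacker-controlled locations. A hedged sketch of one such pre-check; the type allow-list is an illustrative policy, not part of the oauth2 API:

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"

	"golang.org/x/oauth2/google"
)

func credentialsFromExternalJSON(ctx context.Context, data []byte) (*google.Credentials, error) {
	var probe struct {
		Type string `json:"type"`
	}
	if err := json.Unmarshal(data, &probe); err != nil {
		return nil, err
	}
	// Accept only credential types this application expects; in particular,
	// reject external_account configs that could name arbitrary files,
	// URLs, or executables.
	switch probe.Type {
	case "service_account", "authorized_user":
	default:
		return nil, fmt.Errorf("refusing unexpected credential type %q", probe.Type)
	}
	return google.CredentialsFromJSON(ctx, data, "https://www.googleapis.com/auth/cloud-platform")
}

func main() {
	_, err := credentialsFromExternalJSON(context.Background(), []byte(`{"type":"external_account"}`))
	log.Println(err)
}
```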
|
7dee08ae725d6923753b144b789db34495156cd6
|
2024-11-13 16:16:53
|
Poyzan
|
chore: add flag to disable pod constraints to querier configs (#14875)
| false
|
diff --git a/production/ksonnet/loki/config.libsonnet b/production/ksonnet/loki/config.libsonnet
index bd7a52e53d8ea..b4e7f1d2abe57 100644
--- a/production/ksonnet/loki/config.libsonnet
+++ b/production/ksonnet/loki/config.libsonnet
@@ -52,6 +52,10 @@
// cores and will result in scheduling delays.
concurrency: 4,
+    // no_schedule_constraints is false by default, allowing either TopologySpreadConstraints or pod antiAffinity to be configured.
+ // If no_schedule_constraints is set to true, neither of the pod constraints will be applied.
+ no_schedule_constraints: false,
+
// If use_topology_spread is true, queriers can run on nodes already running queriers but will be
// spread through the available nodes using a TopologySpreadConstraints with a max skew
// of topology_spread_max_skew.
diff --git a/production/ksonnet/loki/querier.libsonnet b/production/ksonnet/loki/querier.libsonnet
index 8cc965d777d6a..c6afbe3e85b4c 100644
--- a/production/ksonnet/loki/querier.libsonnet
+++ b/production/ksonnet/loki/querier.libsonnet
@@ -29,6 +29,7 @@ local k = import 'ksonnet-util/kausal.libsonnet';
local topologySpreadConstraints = k.core.v1.topologySpreadConstraint,
querier_deployment: if !$._config.stateful_queriers then
+ assert !($._config.querier.no_schedule_constraints && $._config.querier.use_topology_spread) : 'Must configure either no_schedule_constraints or TopologySpreadConstraints, but not both';
deployment.new('querier', 3, [$.querier_container]) +
$.config_hash_mixin +
k.util.configVolumeMount('loki', '/etc/loki/config') +
@@ -38,7 +39,8 @@ local k = import 'ksonnet-util/kausal.libsonnet';
) +
deployment.mixin.spec.strategy.rollingUpdate.withMaxSurge('15%') +
deployment.mixin.spec.strategy.rollingUpdate.withMaxUnavailable('15%') +
- if $._config.querier.use_topology_spread then
+ if $._config.querier.no_schedule_constraints then {}
+ else if $._config.querier.use_topology_spread then
deployment.spec.template.spec.withTopologySpreadConstraints(
// Evenly spread queriers among available nodes.
topologySpreadConstraints.labelSelector.withMatchLabels({ name: 'querier' }) +
@@ -50,6 +52,7 @@ local k = import 'ksonnet-util/kausal.libsonnet';
k.util.antiAffinity
else {},
+
// PVC for queriers when running as statefulsets
querier_data_pvc:: if $._config.stateful_queriers then
pvc.new('querier-data') +
|
chore
|
add flag to disable pod constraints to querier configs (#14875)
|
fc68205cdc0b891b64d3393d60b281cfb3a202b5
|
2025-03-13 22:10:04
|
George Robinson
|
chore: add tests for checking limits in distributor (#16741)
| false
|
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index a670e6355f8b7..f44de21a64e9f 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -696,7 +696,7 @@ func (d *Distributor) PushWithResolver(ctx context.Context, req *logproto.PushRe
}
if d.cfg.IngestLimitsEnabled {
- exceedsLimits, _, err := d.exceedsLimits(ctx, tenantID, streams)
+ exceedsLimits, _, err := d.exceedsLimits(ctx, tenantID, streams, d.doExceedsLimitsRPC)
if err != nil {
level.Error(d.logger).Log("msg", "failed to check if request exceeds limits, request has been accepted", "err", err)
}
@@ -1159,11 +1159,12 @@ func (d *Distributor) exceedsLimits(
ctx context.Context,
tenantID string,
streams []KeyedStream,
+ doExceedsLimitsFn doExceedsLimitsFunc,
) (bool, []string, error) {
if !d.cfg.IngestLimitsEnabled {
return false, nil, nil
}
- resp, err := d.doExceedsLimitsRPC(ctx, tenantID, streams)
+ resp, err := doExceedsLimitsFn(ctx, tenantID, streams)
if err != nil {
return false, nil, err
}
@@ -1187,6 +1188,13 @@ func (d *Distributor) exceedsLimits(
return true, reasons, nil
}
+// doExceedsLimitsFunc enables stubbing out doExceedsLimitsRPC for tests.
+type doExceedsLimitsFunc func(
+ ctx context.Context,
+ tenantID string,
+ streams []KeyedStream,
+) (*logproto.ExceedsLimitsResponse, error)
+
// doExceedsLimitsRPC executes an RPC to the limits-frontend service to check
// if per-tenant limits have been exceeded. If an RPC call returns an error,
// it failsover to the next limits-frontend service. The failover is repeated
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index 545e7813288b6..31fecccad98ee 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -2,6 +2,7 @@ package distributor
import (
"context"
+ "errors"
"fmt"
"math"
"math/rand"
@@ -2387,3 +2388,67 @@ func TestRequestScopedStreamResolver(t *testing.T) {
policy = newResolver.PolicyFor(labels.FromStrings("env", "dev"))
require.Equal(t, "policy1", policy)
}
+
+func TestExceedsLimits(t *testing.T) {
+ limits := &validation.Limits{}
+ flagext.DefaultValues(limits)
+ distributors, _ := prepare(t, 1, 0, limits, nil)
+ d := distributors[0]
+
+ ctx := context.Background()
+ streams := []KeyedStream{{
+ HashKeyNoShard: 1,
+ Stream: logproto.Stream{
+ Labels: "{foo=\"bar\"}",
+ },
+ }}
+
+ t.Run("no limits should be checked when disabled", func(t *testing.T) {
+ d.cfg.IngestLimitsEnabled = false
+ doExceedsLimitsFn := func(_ context.Context, _ string, _ []KeyedStream) (*logproto.ExceedsLimitsResponse, error) {
+ t.Fail() // Should not be called.
+ return nil, nil
+ }
+ exceedsLimits, reasons, err := d.exceedsLimits(ctx, "test", streams, doExceedsLimitsFn)
+ require.Nil(t, err)
+ require.False(t, exceedsLimits)
+ require.Nil(t, reasons)
+ })
+
+ t.Run("error should be returned if limits cannot be checked", func(t *testing.T) {
+ d.cfg.IngestLimitsEnabled = true
+ doExceedsLimitsFn := func(_ context.Context, _ string, _ []KeyedStream) (*logproto.ExceedsLimitsResponse, error) {
+ return nil, errors.New("failed to check limits")
+ }
+ exceedsLimits, reasons, err := d.exceedsLimits(ctx, "test", streams, doExceedsLimitsFn)
+ require.EqualError(t, err, "failed to check limits")
+ require.False(t, exceedsLimits)
+ require.Nil(t, reasons)
+ })
+
+ t.Run("stream exceeds limits", func(t *testing.T) {
+ doExceedsLimitsFn := func(_ context.Context, _ string, _ []KeyedStream) (*logproto.ExceedsLimitsResponse, error) {
+ return &logproto.ExceedsLimitsResponse{
+ Tenant: "test",
+ RejectedStreams: []*logproto.RejectedStream{{
+ StreamHash: 1,
+ Reason: "test",
+ }},
+ }, nil
+ }
+ exceedsLimits, reasons, err := d.exceedsLimits(ctx, "test", streams, doExceedsLimitsFn)
+ require.Nil(t, err)
+ require.True(t, exceedsLimits)
+ require.Equal(t, []string{"stream {foo=\"bar\"} was rejected because \"test\""}, reasons)
+ })
+
+ t.Run("stream does not exceed limits", func(t *testing.T) {
+ doExceedsLimitsFn := func(_ context.Context, _ string, _ []KeyedStream) (*logproto.ExceedsLimitsResponse, error) {
+ return &logproto.ExceedsLimitsResponse{}, nil
+ }
+ exceedsLimits, reasons, err := d.exceedsLimits(ctx, "test", streams, doExceedsLimitsFn)
+ require.Nil(t, err)
+ require.False(t, exceedsLimits)
+ require.Nil(t, reasons)
+ })
+}
|
chore
|
add tests for checking limits in distributor (#16741)
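The refactor threads the limits check through a `doExceedsLimitsFunc` parameter purely so the tests can substitute it. A generic sketch of that seam, using illustrative names rather than the Loki types:

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// checkLimitsFunc is the injected seam: production code passes the real RPC,
// tests pass a stub.
type checkLimitsFunc func(ctx context.Context, tenant string) (rejected []string, err error)

func exceedsLimits(ctx context.Context, tenant string, check checkLimitsFunc) (bool, []string, error) {
	rejected, err := check(ctx, tenant)
	if err != nil {
		return false, nil, err
	}
	return len(rejected) > 0, rejected, nil
}

func main() {
	rejecting := func(context.Context, string) ([]string, error) {
		return []string{`stream {foo="bar"} was rejected`}, nil
	}
	ok, reasons, _ := exceedsLimits(context.Background(), "test", rejecting)
	fmt.Println(ok, reasons)

	failing := func(context.Context, string) ([]string, error) {
		return nil, errors.New("failed to check limits")
	}
	_, _, err := exceedsLimits(context.Background(), "test", failing)
	fmt.Println(err)
}
```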
|
651d410eeae833977c316a6775d3aa0486cc653e
|
2025-02-21 11:49:42
|
Ashwanth
|
chore(dataobj): add "uncompressed size" column to streams section (#16322)
| false
|
diff --git a/pkg/dataobj/builder.go b/pkg/dataobj/builder.go
index 245488bc1874c..9486b26dcc227 100644
--- a/pkg/dataobj/builder.go
+++ b/pkg/dataobj/builder.go
@@ -185,7 +185,12 @@ func (b *Builder) Append(stream logproto.Stream) error {
defer timer.ObserveDuration()
for _, entry := range stream.Entries {
- streamID := b.streams.Record(ls, entry.Timestamp)
+ sz := int64(len(entry.Line))
+ for _, md := range entry.StructuredMetadata {
+ sz += int64(len(md.Value))
+ }
+
+ streamID := b.streams.Record(ls, entry.Timestamp, sz)
b.logs.Append(logs.Record{
StreamID: streamID,
diff --git a/pkg/dataobj/internal/metadata/streamsmd/streamsmd.pb.go b/pkg/dataobj/internal/metadata/streamsmd/streamsmd.pb.go
index 0415d2e93ca0e..182404feb2327 100644
--- a/pkg/dataobj/internal/metadata/streamsmd/streamsmd.pb.go
+++ b/pkg/dataobj/internal/metadata/streamsmd/streamsmd.pb.go
@@ -45,6 +45,10 @@ const (
COLUMN_TYPE_LABEL ColumnType = 4
// COLUMN_TYPE_ROWS is a column indicating the number of rows for a stream.
COLUMN_TYPE_ROWS ColumnType = 5
+ // COLUMN_TYPE_UNCOMPRESSED_SIZE is a column indicating the uncompressed size
+	// COLUMN_TYPE_UNCOMPRESSED_SIZE is a column indicating the uncompressed size
+	// of a stream. The size of a stream is the sum of the lengths of all log
+	// lines and all structured metadata values.
+ COLUMN_TYPE_UNCOMPRESSED_SIZE ColumnType = 6
)
var ColumnType_name = map[int32]string{
@@ -54,15 +58,17 @@ var ColumnType_name = map[int32]string{
3: "COLUMN_TYPE_MAX_TIMESTAMP",
4: "COLUMN_TYPE_LABEL",
5: "COLUMN_TYPE_ROWS",
+ 6: "COLUMN_TYPE_UNCOMPRESSED_SIZE",
}
var ColumnType_value = map[string]int32{
- "COLUMN_TYPE_UNSPECIFIED": 0,
- "COLUMN_TYPE_STREAM_ID": 1,
- "COLUMN_TYPE_MIN_TIMESTAMP": 2,
- "COLUMN_TYPE_MAX_TIMESTAMP": 3,
- "COLUMN_TYPE_LABEL": 4,
- "COLUMN_TYPE_ROWS": 5,
+ "COLUMN_TYPE_UNSPECIFIED": 0,
+ "COLUMN_TYPE_STREAM_ID": 1,
+ "COLUMN_TYPE_MIN_TIMESTAMP": 2,
+ "COLUMN_TYPE_MAX_TIMESTAMP": 3,
+ "COLUMN_TYPE_LABEL": 4,
+ "COLUMN_TYPE_ROWS": 5,
+ "COLUMN_TYPE_UNCOMPRESSED_SIZE": 6,
}
func (ColumnType) EnumDescriptor() ([]byte, []int) {
@@ -271,35 +277,36 @@ func init() {
}
var fileDescriptor_7b94842ca2f0bf8d = []byte{
- // 439 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x52, 0x3d, 0x6f, 0xd3, 0x40,
- 0x18, 0xf6, 0xf5, 0x03, 0xaa, 0x43, 0xaa, 0xcc, 0x89, 0x8a, 0x94, 0x8a, 0x53, 0x14, 0xa9, 0xa2,
- 0x62, 0xf0, 0x09, 0x3a, 0x20, 0xd4, 0xc9, 0x49, 0x8c, 0x64, 0x29, 0x97, 0x46, 0xb6, 0x2b, 0x3e,
- 0x16, 0xeb, 0x92, 0x5c, 0x8c, 0x69, 0xec, 0xb3, 0xec, 0x6b, 0x45, 0x37, 0x26, 0x66, 0x7e, 0x06,
- 0x1b, 0x7f, 0x83, 0x31, 0x63, 0x47, 0xe2, 0x2c, 0x8c, 0xfd, 0x09, 0xc8, 0x8e, 0x5d, 0x1b, 0x81,
- 0xd2, 0x2c, 0xd6, 0xab, 0xe7, 0xcb, 0xef, 0x73, 0x7a, 0xe1, 0xab, 0xe8, 0xdc, 0x23, 0x63, 0x26,
- 0x99, 0x18, 0x7e, 0x22, 0x7e, 0x28, 0x79, 0x1c, 0xb2, 0x29, 0x09, 0xb8, 0x64, 0x19, 0x48, 0x12,
- 0x19, 0x73, 0x16, 0x24, 0xc1, 0xb8, 0x9a, 0xb4, 0x28, 0x16, 0x52, 0xa0, 0x83, 0xc2, 0xa4, 0x95,
- 0x5a, 0xad, 0x50, 0x68, 0x97, 0x2f, 0x9e, 0xdc, 0x91, 0x9a, 0x7d, 0x12, 0x2e, 0x83, 0x71, 0x35,
- 0x2d, 0x53, 0x5b, 0x14, 0xee, 0xd0, 0x42, 0x85, 0x74, 0x78, 0x7f, 0x24, 0xa6, 0x17, 0x41, 0x98,
- 0x34, 0x40, 0x73, 0xf3, 0xe8, 0xc1, 0xcb, 0x67, 0xda, 0x8a, 0x7f, 0x6a, 0x9d, 0x5c, 0xdb, 0xe5,
- 0xc9, 0xc8, 0x2a, 0x7d, 0xad, 0xaf, 0x00, 0xc2, 0x0a, 0x47, 0x27, 0x70, 0xcb, 0x0f, 0x27, 0xa2,
- 0x01, 0x9a, 0xe0, 0xff, 0x71, 0xc5, 0x3a, 0x55, 0x9c, 0x19, 0x4e, 0x84, 0x95, 0x9b, 0x32, 0xb3,
- 0xbc, 0x8a, 0x78, 0x63, 0xa3, 0x09, 0x8e, 0x76, 0xd7, 0xda, 0xc5, 0xb9, 0x8a, 0xb8, 0x95, 0x9b,
- 0x5a, 0x14, 0xee, 0x2e, 0xb1, 0xdb, 0x76, 0x27, 0x70, 0x3b, 0x62, 0x1e, 0x2f, 0xbb, 0x1d, 0xae,
- 0xcc, 0x1b, 0x30, 0x8f, 0xe7, 0xcd, 0x96, 0x9e, 0x96, 0x01, 0x77, 0x4a, 0x08, 0xbd, 0xfe, 0xab,
- 0xd4, 0xe1, 0xca, 0x52, 0x99, 0xa9, 0xaa, 0xf4, 0xfc, 0xc7, 0xed, 0xf3, 0x64, 0xab, 0xa2, 0x03,
- 0xf8, 0xb8, 0x73, 0xda, 0x3b, 0xa3, 0x7d, 0xd7, 0x79, 0x3f, 0x30, 0xdc, 0xb3, 0xbe, 0x3d, 0x30,
- 0x3a, 0xe6, 0x1b, 0xd3, 0xe8, 0xaa, 0x0a, 0xda, 0x87, 0x7b, 0x75, 0xd2, 0x76, 0x2c, 0x43, 0xa7,
- 0xae, 0xd9, 0x55, 0x01, 0x7a, 0x0a, 0xf7, 0xeb, 0x14, 0x35, 0xfb, 0xae, 0x63, 0x52, 0xc3, 0x76,
- 0x74, 0x3a, 0x50, 0x37, 0xfe, 0xa1, 0xf5, 0x77, 0x35, 0x7a, 0x13, 0xed, 0xc1, 0x87, 0x75, 0xba,
- 0xa7, 0xb7, 0x8d, 0x9e, 0xba, 0x85, 0x1e, 0x41, 0xb5, 0x0e, 0x5b, 0xa7, 0x6f, 0x6d, 0x75, 0xbb,
- 0xfd, 0x79, 0x36, 0xc7, 0xca, 0xf5, 0x1c, 0x2b, 0x37, 0x73, 0x0c, 0xbe, 0xa4, 0x18, 0x7c, 0x4f,
- 0x31, 0xf8, 0x99, 0x62, 0x30, 0x4b, 0x31, 0xf8, 0x95, 0x62, 0xf0, 0x3b, 0xc5, 0xca, 0x4d, 0x8a,
- 0xc1, 0xb7, 0x05, 0x56, 0x66, 0x0b, 0xac, 0x5c, 0x2f, 0xb0, 0xf2, 0xa1, 0xed, 0xf9, 0xf2, 0xe3,
- 0xc5, 0x50, 0x1b, 0x89, 0x80, 0x78, 0x31, 0x9b, 0xb0, 0x90, 0x91, 0xa9, 0x38, 0xf7, 0xc9, 0xe5,
- 0x31, 0x59, 0xf3, 0xfe, 0x87, 0xf7, 0xf2, 0x03, 0x3d, 0xfe, 0x13, 0x00, 0x00, 0xff, 0xff, 0x66,
- 0x7a, 0xe2, 0x3c, 0x31, 0x03, 0x00, 0x00,
+ // 457 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x53, 0xbd, 0x6e, 0xd3, 0x40,
+ 0x00, 0xf6, 0xf5, 0x8f, 0xea, 0x90, 0x2a, 0x73, 0xa2, 0x22, 0xa5, 0xea, 0x29, 0x44, 0xaa, 0xa8,
+ 0x18, 0x7c, 0x82, 0x0e, 0x08, 0x75, 0x72, 0x92, 0x43, 0xb2, 0x94, 0x4b, 0x2c, 0xdb, 0x15, 0xd0,
+ 0xc5, 0xba, 0x24, 0x17, 0x13, 0x1a, 0xfb, 0xac, 0xf8, 0x5a, 0xd1, 0x8d, 0x89, 0x99, 0xc7, 0xe0,
+ 0x51, 0x18, 0x23, 0xa6, 0x8e, 0xc4, 0x59, 0x18, 0xfb, 0x08, 0xc8, 0x4e, 0x52, 0xbb, 0x02, 0x85,
+ 0x2e, 0xd6, 0xe9, 0xfb, 0xf3, 0xf7, 0x59, 0x3e, 0xf8, 0x3a, 0x3e, 0x0f, 0x48, 0x9f, 0x2b, 0x2e,
+ 0xbb, 0x9f, 0xc8, 0x30, 0x52, 0x62, 0x1c, 0xf1, 0x11, 0x09, 0x85, 0xe2, 0x19, 0x48, 0x12, 0x35,
+ 0x16, 0x3c, 0x4c, 0xc2, 0x7e, 0x71, 0x32, 0xe2, 0xb1, 0x54, 0x12, 0xed, 0x2f, 0x4c, 0xc6, 0x52,
+ 0x6b, 0x2c, 0x14, 0xc6, 0xe5, 0xcb, 0xa7, 0xff, 0x49, 0xcd, 0x1e, 0x89, 0x50, 0x61, 0xbf, 0x38,
+ 0xcd, 0x53, 0x6b, 0x0c, 0x6e, 0xb3, 0x85, 0x0a, 0x99, 0xf0, 0x41, 0x4f, 0x8e, 0x2e, 0xc2, 0x28,
+ 0xa9, 0x80, 0xea, 0xfa, 0xd1, 0xc3, 0x57, 0xcf, 0x8d, 0x15, 0xef, 0x34, 0x1a, 0xb9, 0xb6, 0x29,
+ 0x92, 0x9e, 0xb3, 0xf4, 0xd5, 0xbe, 0x02, 0x08, 0x0b, 0x1c, 0x9d, 0xc0, 0x8d, 0x61, 0x34, 0x90,
+ 0x15, 0x50, 0x05, 0xff, 0x8e, 0x5b, 0xd4, 0x29, 0xe2, 0xac, 0x68, 0x20, 0x9d, 0xdc, 0x94, 0x99,
+ 0xd5, 0x55, 0x2c, 0x2a, 0x6b, 0x55, 0x70, 0xb4, 0x73, 0xaf, 0x2e, 0xde, 0x55, 0x2c, 0x9c, 0xdc,
+ 0x54, 0x63, 0x70, 0x67, 0x8e, 0xdd, 0xae, 0x3b, 0x81, 0x9b, 0x31, 0x0f, 0xc4, 0x72, 0xdb, 0xe1,
+ 0xca, 0x3c, 0x9b, 0x07, 0x22, 0x5f, 0x36, 0xf7, 0xd4, 0x28, 0xdc, 0x5e, 0x42, 0xe8, 0xcd, 0x9d,
+ 0x51, 0x87, 0x2b, 0x47, 0x65, 0xa6, 0x62, 0xd2, 0x8b, 0x9f, 0xb7, 0x9f, 0x27, 0xab, 0x8a, 0xf6,
+ 0xe1, 0x93, 0x46, 0xa7, 0x75, 0xca, 0xda, 0xbe, 0xf7, 0xc1, 0xa6, 0xfe, 0x69, 0xdb, 0xb5, 0x69,
+ 0xc3, 0x7a, 0x6b, 0xd1, 0xa6, 0xae, 0xa1, 0x3d, 0xb8, 0x5b, 0x26, 0x5d, 0xcf, 0xa1, 0x26, 0xf3,
+ 0xad, 0xa6, 0x0e, 0xd0, 0x01, 0xdc, 0x2b, 0x53, 0xcc, 0x6a, 0xfb, 0x9e, 0xc5, 0xa8, 0xeb, 0x99,
+ 0xcc, 0xd6, 0xd7, 0xfe, 0xa2, 0xcd, 0xf7, 0x25, 0x7a, 0x1d, 0xed, 0xc2, 0x47, 0x65, 0xba, 0x65,
+ 0xd6, 0x69, 0x4b, 0xdf, 0x40, 0x8f, 0xa1, 0x5e, 0x86, 0x9d, 0xce, 0x3b, 0x57, 0xdf, 0x44, 0xcf,
+ 0xe0, 0xc1, 0xdd, 0x8a, 0x8d, 0x0e, 0xb3, 0x1d, 0xea, 0xba, 0xb4, 0xe9, 0xbb, 0xd6, 0x19, 0xd5,
+ 0xb7, 0xea, 0x9f, 0x27, 0x53, 0xac, 0x5d, 0x4f, 0xb1, 0x76, 0x33, 0xc5, 0xe0, 0x4b, 0x8a, 0xc1,
+ 0xf7, 0x14, 0x83, 0x1f, 0x29, 0x06, 0x93, 0x14, 0x83, 0x5f, 0x29, 0x06, 0xbf, 0x53, 0xac, 0xdd,
+ 0xa4, 0x18, 0x7c, 0x9b, 0x61, 0x6d, 0x32, 0xc3, 0xda, 0xf5, 0x0c, 0x6b, 0x67, 0xf5, 0x60, 0xa8,
+ 0x3e, 0x5e, 0x74, 0x8d, 0x9e, 0x0c, 0x49, 0x30, 0xe6, 0x03, 0x1e, 0x71, 0x32, 0x92, 0xe7, 0x43,
+ 0x72, 0x79, 0x4c, 0xee, 0x79, 0x45, 0xba, 0x5b, 0xf9, 0x3f, 0x7c, 0xfc, 0x27, 0x00, 0x00, 0xff,
+ 0xff, 0x87, 0x13, 0x9c, 0x9b, 0x54, 0x03, 0x00, 0x00,
}
func (x ColumnType) String() string {
diff --git a/pkg/dataobj/internal/metadata/streamsmd/streamsmd.proto b/pkg/dataobj/internal/metadata/streamsmd/streamsmd.proto
index 5f58faa91aa33..188e541c2ebf6 100644
--- a/pkg/dataobj/internal/metadata/streamsmd/streamsmd.proto
+++ b/pkg/dataobj/internal/metadata/streamsmd/streamsmd.proto
@@ -45,6 +45,11 @@ enum ColumnType {
// COLUMN_TYPE_ROWS is a column indicating the number of rows for a stream.
COLUMN_TYPE_ROWS = 5;
+
+ // COLUMN_TYPE_UNCOMPRESSED_SIZE is a column indicating the uncompressed size
+  // of a stream. The size of a stream is the sum of the lengths of all log
+  // lines and all structured metadata values.
+ COLUMN_TYPE_UNCOMPRESSED_SIZE = 6;
}
// ColumnMetadata describes the metadata for a column.
diff --git a/pkg/dataobj/internal/sections/streams/iter.go b/pkg/dataobj/internal/sections/streams/iter.go
index b75da46262c80..91dbdc48f5bd2 100644
--- a/pkg/dataobj/internal/sections/streams/iter.go
+++ b/pkg/dataobj/internal/sections/streams/iter.go
@@ -113,6 +113,12 @@ func decodeRow(columns []*streamsmd.ColumnDesc, row dataset.Row) (Stream, error)
}
stream.Rows = int(columnValue.Int64())
+ case streamsmd.COLUMN_TYPE_UNCOMPRESSED_SIZE:
+ if ty := columnValue.Type(); ty != datasetmd.VALUE_TYPE_INT64 {
+ return stream, fmt.Errorf("invalid type %s for %s", ty, column.Type)
+ }
+ stream.UncompressedSize = columnValue.Int64()
+
case streamsmd.COLUMN_TYPE_LABEL:
if ty := columnValue.Type(); ty != datasetmd.VALUE_TYPE_STRING {
return stream, fmt.Errorf("invalid type %s for %s", ty, column.Type)
diff --git a/pkg/dataobj/internal/sections/streams/streams.go b/pkg/dataobj/internal/sections/streams/streams.go
index 05002fb77e345..160e36983f93d 100644
--- a/pkg/dataobj/internal/sections/streams/streams.go
+++ b/pkg/dataobj/internal/sections/streams/streams.go
@@ -28,10 +28,11 @@ type Stream struct {
// object.
ID int64
- Labels labels.Labels // Stream labels.
- MinTimestamp time.Time // Minimum timestamp in the stream.
- MaxTimestamp time.Time // Maximum timestamp in the stream.
- Rows int // Number of rows in the stream.
+ Labels labels.Labels // Stream labels.
+ MinTimestamp time.Time // Minimum timestamp in the stream.
+ MaxTimestamp time.Time // Maximum timestamp in the stream.
+	UncompressedSize int64         // Uncompressed size of the log lines and structured metadata values in the stream.
+ Rows int // Number of rows in the stream.
}
// Reset zeroes all values in the stream struct so it can be reused.
@@ -40,6 +41,7 @@ func (s *Stream) Reset() {
s.Labels = nil
s.MinTimestamp = time.Time{}
s.MaxTimestamp = time.Time{}
+ s.UncompressedSize = 0
s.Rows = 0
}
@@ -90,9 +92,10 @@ func (s *Streams) TimeRange() (time.Time, time.Time) {
// Record a stream record within the Streams section. The provided timestamp is
// used to track the minimum and maximum timestamp of a stream. The number of
// calls to Record is used to track the number of rows for a stream.
+// The recordSize is used to track the uncompressed size of the stream.
//
// The stream ID of the recorded stream is returned.
-func (s *Streams) Record(streamLabels labels.Labels, ts time.Time) int64 {
+func (s *Streams) Record(streamLabels labels.Labels, ts time.Time, recordSize int64) int64 {
ts = ts.UTC()
s.observeRecord(ts)
@@ -104,6 +107,8 @@ func (s *Streams) Record(streamLabels labels.Labels, ts time.Time) int64 {
stream.MaxTimestamp = ts
}
stream.Rows++
+ stream.UncompressedSize += recordSize
+
return stream.ID
}
@@ -238,6 +243,10 @@ func (s *Streams) EncodeTo(enc *encoding.Encoder) error {
if err != nil {
return fmt.Errorf("creating rows column: %w", err)
}
+ uncompressedSizeBuilder, err := numberColumnBuilder(s.pageSize)
+ if err != nil {
+ return fmt.Errorf("creating uncompressed size column: %w", err)
+ }
var (
labelBuilders []*dataset.ColumnBuilder
@@ -275,6 +284,7 @@ func (s *Streams) EncodeTo(enc *encoding.Encoder) error {
_ = minTimestampBuilder.Append(i, dataset.Int64Value(stream.MinTimestamp.UnixNano()))
_ = maxTimestampBuilder.Append(i, dataset.Int64Value(stream.MaxTimestamp.UnixNano()))
_ = rowsCountBuilder.Append(i, dataset.Int64Value(int64(stream.Rows)))
+ _ = uncompressedSizeBuilder.Append(i, dataset.Int64Value(stream.UncompressedSize))
for _, label := range stream.Labels {
builder, err := getLabelColumn(label.Name)
@@ -304,6 +314,7 @@ func (s *Streams) EncodeTo(enc *encoding.Encoder) error {
errs = append(errs, encodeColumn(streamsEnc, streamsmd.COLUMN_TYPE_MIN_TIMESTAMP, minTimestampBuilder))
errs = append(errs, encodeColumn(streamsEnc, streamsmd.COLUMN_TYPE_MAX_TIMESTAMP, maxTimestampBuilder))
errs = append(errs, encodeColumn(streamsEnc, streamsmd.COLUMN_TYPE_ROWS, rowsCountBuilder))
+ errs = append(errs, encodeColumn(streamsEnc, streamsmd.COLUMN_TYPE_UNCOMPRESSED_SIZE, uncompressedSizeBuilder))
if err := errors.Join(errs...); err != nil {
return fmt.Errorf("encoding columns: %w", err)
}
diff --git a/pkg/dataobj/internal/sections/streams/streams_test.go b/pkg/dataobj/internal/sections/streams/streams_test.go
index 87f3894d6c8a0..83a6e41502c3e 100644
--- a/pkg/dataobj/internal/sections/streams/streams_test.go
+++ b/pkg/dataobj/internal/sections/streams/streams_test.go
@@ -17,18 +17,19 @@ func Test(t *testing.T) {
type ent struct {
Labels labels.Labels
Time time.Time
+ Size int64
}
tt := []ent{
- {labels.FromStrings("cluster", "test", "app", "foo"), time.Unix(10, 0).UTC()},
- {labels.FromStrings("cluster", "test", "app", "bar", "special", "yes"), time.Unix(100, 0).UTC()},
- {labels.FromStrings("cluster", "test", "app", "foo"), time.Unix(15, 0).UTC()},
- {labels.FromStrings("cluster", "test", "app", "foo"), time.Unix(9, 0).UTC()},
+ {labels.FromStrings("cluster", "test", "app", "foo"), time.Unix(10, 0).UTC(), 10},
+ {labels.FromStrings("cluster", "test", "app", "bar", "special", "yes"), time.Unix(100, 0).UTC(), 20},
+ {labels.FromStrings("cluster", "test", "app", "foo"), time.Unix(15, 0).UTC(), 15},
+ {labels.FromStrings("cluster", "test", "app", "foo"), time.Unix(9, 0).UTC(), 5},
}
tracker := streams.New(nil, 1024)
for _, tc := range tt {
- tracker.Record(tc.Labels, tc.Time)
+ tracker.Record(tc.Labels, tc.Time, tc.Size)
}
buf, err := buildObject(tracker)
@@ -36,18 +37,20 @@ func Test(t *testing.T) {
expect := []streams.Stream{
{
- ID: 1,
- Labels: labels.FromStrings("cluster", "test", "app", "foo"),
- MinTimestamp: time.Unix(9, 0).UTC(),
- MaxTimestamp: time.Unix(15, 0).UTC(),
- Rows: 3,
+ ID: 1,
+ Labels: labels.FromStrings("cluster", "test", "app", "foo"),
+ MinTimestamp: time.Unix(9, 0).UTC(),
+ MaxTimestamp: time.Unix(15, 0).UTC(),
+ Rows: 3,
+ UncompressedSize: 30,
},
{
- ID: 2,
- Labels: labels.FromStrings("cluster", "test", "app", "bar", "special", "yes"),
- MinTimestamp: time.Unix(100, 0).UTC(),
- MaxTimestamp: time.Unix(100, 0).UTC(),
- Rows: 1,
+ ID: 2,
+ Labels: labels.FromStrings("cluster", "test", "app", "bar", "special", "yes"),
+ MinTimestamp: time.Unix(100, 0).UTC(),
+ MaxTimestamp: time.Unix(100, 0).UTC(),
+ Rows: 1,
+ UncompressedSize: 20,
},
}
diff --git a/pkg/dataobj/streams_reader.go b/pkg/dataobj/streams_reader.go
index d71e7bb7d90d2..f12e0e640bde2 100644
--- a/pkg/dataobj/streams_reader.go
+++ b/pkg/dataobj/streams_reader.go
@@ -23,6 +23,9 @@ type Stream struct {
// the stream.
MinTime, MaxTime time.Time
+	// UncompressedSize is the total size of all the log lines and structured metadata values in the stream.
+ UncompressedSize int64
+
// Labels of the stream.
Labels labels.Labels
}
@@ -111,10 +114,11 @@ func (r *StreamsReader) Read(ctx context.Context, s []Stream) (int, error) {
}
s[i] = Stream{
- ID: stream.ID,
- MinTime: stream.MinTimestamp,
- MaxTime: stream.MaxTimestamp,
- Labels: stream.Labels,
+ ID: stream.ID,
+ MinTime: stream.MinTimestamp,
+ MaxTime: stream.MaxTimestamp,
+ UncompressedSize: stream.UncompressedSize,
+ Labels: stream.Labels,
}
}
diff --git a/pkg/dataobj/streams_reader_test.go b/pkg/dataobj/streams_reader_test.go
index e45b21d353c6c..ff419994b736a 100644
--- a/pkg/dataobj/streams_reader_test.go
+++ b/pkg/dataobj/streams_reader_test.go
@@ -18,22 +18,23 @@ import (
)
var streamsTestdata = []struct {
- Labels labels.Labels
- Timestamp time.Time
+ Labels labels.Labels
+ Timestamp time.Time
+ UncompressedSize int64
}{
- {labels.FromStrings("cluster", "test", "app", "foo"), unixTime(10)},
- {labels.FromStrings("cluster", "test", "app", "foo"), unixTime(15)},
- {labels.FromStrings("cluster", "test", "app", "bar"), unixTime(5)},
- {labels.FromStrings("cluster", "test", "app", "bar"), unixTime(20)},
- {labels.FromStrings("cluster", "test", "app", "baz"), unixTime(25)},
- {labels.FromStrings("cluster", "test", "app", "baz"), unixTime(30)},
+ {labels.FromStrings("cluster", "test", "app", "foo"), unixTime(10), 15},
+ {labels.FromStrings("cluster", "test", "app", "foo"), unixTime(15), 10},
+ {labels.FromStrings("cluster", "test", "app", "bar"), unixTime(5), 20},
+ {labels.FromStrings("cluster", "test", "app", "bar"), unixTime(20), 25},
+ {labels.FromStrings("cluster", "test", "app", "baz"), unixTime(25), 30},
+ {labels.FromStrings("cluster", "test", "app", "baz"), unixTime(30), 5},
}
func TestStreamsReader(t *testing.T) {
expect := []dataobj.Stream{
- {1, unixTime(10), unixTime(15), labels.FromStrings("cluster", "test", "app", "foo")},
- {2, unixTime(5), unixTime(20), labels.FromStrings("cluster", "test", "app", "bar")},
- {3, unixTime(25), unixTime(30), labels.FromStrings("cluster", "test", "app", "baz")},
+ {1, unixTime(10), unixTime(15), 25, labels.FromStrings("cluster", "test", "app", "foo")},
+ {2, unixTime(5), unixTime(20), 45, labels.FromStrings("cluster", "test", "app", "bar")},
+ {3, unixTime(25), unixTime(30), 35, labels.FromStrings("cluster", "test", "app", "baz")},
}
obj := buildStreamsObject(t, 1) // Many pages
@@ -49,7 +50,7 @@ func TestStreamsReader(t *testing.T) {
func TestStreamsReader_AddLabelMatcher(t *testing.T) {
expect := []dataobj.Stream{
- {2, unixTime(5), unixTime(20), labels.FromStrings("cluster", "test", "app", "bar")},
+ {2, unixTime(5), unixTime(20), 45, labels.FromStrings("cluster", "test", "app", "bar")},
}
obj := buildStreamsObject(t, 1) // Many pages
@@ -67,8 +68,8 @@ func TestStreamsReader_AddLabelMatcher(t *testing.T) {
func TestStreamsReader_AddLabelFilter(t *testing.T) {
expect := []dataobj.Stream{
- {2, unixTime(5), unixTime(20), labels.FromStrings("cluster", "test", "app", "bar")},
- {3, unixTime(25), unixTime(30), labels.FromStrings("cluster", "test", "app", "baz")},
+ {2, unixTime(5), unixTime(20), 45, labels.FromStrings("cluster", "test", "app", "bar")},
+ {3, unixTime(25), unixTime(30), 35, labels.FromStrings("cluster", "test", "app", "baz")},
}
obj := buildStreamsObject(t, 1) // Many pages
@@ -100,7 +101,7 @@ func buildStreamsObject(t *testing.T, pageSize int) *dataobj.Object {
s := streams.New(nil, pageSize)
for _, d := range streamsTestdata {
- s.Record(d.Labels, d.Timestamp)
+ s.Record(d.Labels, d.Timestamp, d.UncompressedSize)
}
var buf bytes.Buffer
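
End to end, the new column means a reader can recover per-stream uncompressed sizes without decompressing the logs themselves. A minimal sketch of summing it over a section (the import path and the `io.EOF` semantics of `Read` are assumptions based on the diff above):

```go
package main

import (
	"context"
	"io"

	"github.com/grafana/loki/pkg/dataobj" // path may differ (e.g. a /v3/ module)
)

// totalUncompressedSize sums the UncompressedSize column over every stream
// returned by the reader. Assumes Read follows io.Reader-style semantics,
// i.e. it may return n > 0 together with io.EOF.
func totalUncompressedSize(ctx context.Context, r *dataobj.StreamsReader) (int64, error) {
	var total int64
	buf := make([]dataobj.Stream, 64)
	for {
		n, err := r.Read(ctx, buf)
		for _, s := range buf[:n] {
			total += s.UncompressedSize // log-line bytes + structured metadata value bytes
		}
		if err == io.EOF {
			return total, nil
		}
		if err != nil {
			return total, err
		}
	}
}
```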
|
chore
|
add "uncompressed size" column to streams section (#16322)
|
b8a2e0cdcad7ff7706e6f3df2808c21fc2e9c723
|
2024-04-05 23:43:11
|
Callum Styan
|
ci: add github action to ensure drone is not broken (#12480)
| false
|
diff --git a/.github/workflows/verify-drone.yml b/.github/workflows/verify-drone.yml
new file mode 100644
index 0000000000000..9847ddac29147
--- /dev/null
+++ b/.github/workflows/verify-drone.yml
@@ -0,0 +1,52 @@
+name: Verify drone updates
+on: [pull_request]
+jobs:
+ check-drone-changes:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v3
+ - name: Get changed files
+        # we need continue-on-error because the git diff | grep pipe can return a non-zero exit code if no result is found
+ continue-on-error: true
+ id: changed-files
+ run: |
+ echo "changed_files=$(git diff --name-only .drone/ | xargs)" >> $GITHUB_OUTPUT
+ git diff | grep +hmac
+ echo "sha_updated=$?" >> $GITHUB_OUTPUT
+ - name: Check that drone was updated properly
+ if: always()
+ run: |
+ jsonnetChanged=false
+ yamlChanged=false
+
+ # check whether the drone jsonnet and yaml files were updated
+ for file in ${{ steps.changed-files.outputs.changed_files }}; do
+ echo "$file was changed"
+ if [ "$file" == ".drone/drone.jsonnet" ]; then
+ echo "$file was changed"
+ jsonnetChanged=true
+ fi
+ if [ "$file" == ".drone/drone.yml" ]; then
+ echo "$file was changed"
+ yamlChanged=true
+ fi
+ done
+
+          # if neither file was changed we're okay
+ if { [ "$yamlChanged" = false ] && [ "$jsonnetChanged" = false ]; } then
+ echo "neither file was changed"
+ exit 0
+ fi
+ # if both files were changed then we should ensure that the sha in the yaml was also updated
+ if { [ "$yamlChanged" = true ] && [ "$jsonnetChanged" = true ]; } then
+ # annoyingly, the return value is a string
+ if [ "${{ steps.changed-files.outputs.sha_updated }}" = "0" ]; then
+ echo "both files were changed and sha was updated"
+ exit 0
+ fi
+ echo "both drone yaml and jsonnet were updated but the sha in the yaml file was not updated"
+ exit 1
+ fi
+ # only one of the two files was updated
+          echo "if one of the drone files (yaml or jsonnet) was changed then both files must be updated"
+ exit 1
\ No newline at end of file
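
When this check fails, the remedy is to regenerate the YAML from the jsonnet source and re-sign it so the `hmac` line matches. Roughly (the drone CLI flags and repo slug here are assumptions; the repository's Makefile is the authoritative recipe):

```bash
# Regenerate .drone/drone.yml from its jsonnet source...
drone jsonnet --stream --format \
  --source .drone/drone.jsonnet --target .drone/drone.yml

# ...then re-sign the YAML so its embedded hmac matches the new content.
drone sign --save grafana/loki .drone/drone.yml
```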
|
ci
|
add github action to ensure drone is not broken (#12480)
|
1d20fdbf57afd433dcfbafcb7c10c6eedae21b81
|
2023-06-29 02:12:55
|
Dylan Guedes
|
logpushfailures: Log the tenant id (#9819)
| false
|
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 6b1d63254b927..7c76112938e8d 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -323,6 +323,7 @@ func (d *Distributor) Push(ctx context.Context, req *logproto.PushRequest) (*log
stream.Labels, stream.Hash, err = d.parseStreamLabels(validationContext, stream.Labels, &stream)
if err != nil {
+ d.writeFailuresManager.Log(tenantID, err)
validationErrors.Add(err)
validation.DiscardedSamples.WithLabelValues(validation.InvalidLabels, tenantID).Add(float64(len(stream.Entries)))
bytes := 0
diff --git a/pkg/distributor/writefailures/manager.go b/pkg/distributor/writefailures/manager.go
index 2e4226fe9d687..cbf96f4c791a3 100644
--- a/pkg/distributor/writefailures/manager.go
+++ b/pkg/distributor/writefailures/manager.go
@@ -42,6 +42,6 @@ func (m *Manager) Log(tenantID string, err error) {
errMsg := err.Error()
if m.limiter.AllowN(time.Now(), tenantID, len(errMsg)) {
- level.Error(m.logger).Log("msg", "write operation failed", "details", errMsg)
+ level.Error(m.logger).Log("msg", "write operation failed", "details", errMsg, "tenant", tenantID)
}
}
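
`Log` only emits when the per-tenant limiter admits a token count equal to the error string's length, so a noisy tenant cannot flood the logs. A standalone sketch of that pattern using `golang.org/x/time/rate` (the actual `limiter` type in Loki may differ; the rates below are illustrative):

```go
package main

import (
	"sync"
	"time"

	"golang.org/x/time/rate"
)

// tenantLimiter rate-limits by bytes per tenant, mirroring the
// AllowN(now, tenant, len(msg)) call shape used by Manager.Log.
type tenantLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
}

func newTenantLimiter() *tenantLimiter {
	return &tenantLimiter{limiters: make(map[string]*rate.Limiter)}
}

func (t *tenantLimiter) AllowN(now time.Time, tenant string, n int) bool {
	t.mu.Lock()
	l, ok := t.limiters[tenant]
	if !ok {
		l = rate.NewLimiter(rate.Limit(1024), 4096) // 1 KiB/s with a 4 KiB burst (assumed)
		t.limiters[tenant] = l
	}
	t.mu.Unlock()
	return l.AllowN(now, n)
}
```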
|
logpushfailures
|
Log the tenant id (#9819)
|
ed84b238ad7a18c769ade8753c824d416a4cca74
|
2024-05-02 14:07:47
|
Pangidoan Butar
|
docs: Update template_functions.md (#12841)
| false
|
diff --git a/docs/sources/query/template_functions.md b/docs/sources/query/template_functions.md
index 0effe4c01ac75..9ba67620a488e 100644
--- a/docs/sources/query/template_functions.md
+++ b/docs/sources/query/template_functions.md
@@ -303,12 +303,12 @@ Example:
Use this function to test to see if one string is contained inside of another.
-Signature: `contains(s string, src string) bool`
+Signature: `contains(src string, s string) bool`
Examples:
```template
-{{ if contains .err "ErrTimeout" }} timeout {{end}}
+{{ if contains "ErrTimeout" .err }} timeout {{end}}
{{ if contains "he" "hello" }} yes {{end}}
```
@@ -316,13 +316,13 @@ Examples:
Use this function to test to see if one string has exact matching inside of another.
-Signature: `eq(s string, src string) bool`
+Signature: `eq(src string, s string) bool`
Examples:
```template
-{{ if eq .err "ErrTimeout" }} timeout {{end}}
-{{ if eq "he" "hello" }} yes {{end}}
+{{ if eq "ErrTimeout" .err }} timeout {{end}}
+{{ if eq "hello" "hello" }} yes {{end}}
```
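
A short usage sketch with the corrected argument order (substring first, full string second), in the same style as the examples above:

```template
{{ if contains "ErrTimeout" .err }}request timed out: {{ .err }}{{end}}
{{ if eq "ErrTimeout" .err }}exact timeout error{{end}}
```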
## hasPrefix and hasSuffix
|
docs
|
Update template_functions.md (#12841)
|
b24a1430434770a5c4eddc19bfe8e1eddf498a15
|
2020-07-09 00:02:45
|
Ed Welch
|
promtail: Loki Push API (#2296)
| false
|
diff --git a/cmd/docker-driver/config.go b/cmd/docker-driver/config.go
index 488eec3d88a32..dc539ee5af87f 100644
--- a/cmd/docker-driver/config.go
+++ b/cmd/docker-driver/config.go
@@ -21,7 +21,7 @@ import (
"github.com/grafana/loki/pkg/helpers"
"github.com/grafana/loki/pkg/logentry/stages"
"github.com/grafana/loki/pkg/promtail/client"
- "github.com/grafana/loki/pkg/promtail/targets"
+ "github.com/grafana/loki/pkg/promtail/targets/file"
"github.com/grafana/loki/pkg/util"
)
@@ -283,7 +283,7 @@ func parseConfig(logCtx logger.Info) (*config, error) {
if err == nil {
labels[defaultHostLabelName] = model.LabelValue(host)
}
- labels[targets.FilenameLabel] = model.LabelValue(logCtx.LogPath)
+ labels[file.FilenameLabel] = model.LabelValue(logCtx.LogPath)
// Process relabel configs.
if relabelString, ok := logCtx.Config[cfgRelabelKey]; ok && relabelString != "" {
diff --git a/docs/clients/promtail/README.md b/docs/clients/promtail/README.md
index f503928d41f98..389484fad5e19 100644
--- a/docs/clients/promtail/README.md
+++ b/docs/clients/promtail/README.md
@@ -32,9 +32,19 @@ Just like Prometheus, `promtail` is configured using a `scrape_configs` stanza.
drop, and the final metadata to attach to the log line. Refer to the docs for
[configuring Promtail](configuration.md) for more details.
+## Loki Push API
+
+Promtail can also be configured to receive logs from another Promtail or any Loki client by exposing the [Loki Push API](../../api.md#post-lokiapiv1push) with the [loki_push_api](./configuration.md#loki_push_api_config) scrape config.
+
+There are a few instances where this might be helpful:
+
+* complex network infrastructures where it is not desirable for many machines to have egress.
+* using the Docker Logging Driver and wanting to provide a complex pipeline or to extract metrics from logs.
+* serverless setups where many ephemeral log sources want to send to Loki; sending to a Promtail instance with `use_incoming_timestamp` == false can avoid out-of-order errors and the need for high-cardinality labels.
+
## Receiving logs From Syslog
-When the [Syslog Target](./scraping.md#syslog-target) is being used, logs
+When the [Syslog Target](./configuration.md#syslog_config) is being used, logs
can be written with the syslog protocol to the configured port.
## Labeling and parsing
diff --git a/docs/clients/promtail/configuration.md b/docs/clients/promtail/configuration.md
index c9da1cbece2ee..a4a92291d1e13 100644
--- a/docs/clients/promtail/configuration.md
+++ b/docs/clients/promtail/configuration.md
@@ -26,6 +26,7 @@ and how to scrape logs from files.
* [tenant](#tenant)
* [journal_config](#journal_config)
* [syslog_config](#syslog_config)
+ * [loki_push_api_config](#loki_push_api_config)
* [relabel_config](#relabel_config)
* [static_config](#static_config)
* [file_sd_config](#file_sd_config)
@@ -270,6 +271,9 @@ job_name: <string>
# Describes how to receive logs from syslog.
[syslog: <syslog_config>]
+# Describes how to receive logs via the Loki push API (e.g. from other Promtails or the Docker Logging Driver)
+[loki_push_api: <loki_push_api_config>]
+
# Describes how to relabel targets to determine if they should
# be processed.
relabel_configs:
@@ -716,6 +720,31 @@ labels:
* `__syslog_message_msg_id`: The [msgid field](https://tools.ietf.org/html/rfc5424#section-6.2.7) parsed from the message.
* `__syslog_message_sd_<sd_id>[_<iana_enterprise_id>]_<sd_name>`: The [structured-data field](https://tools.ietf.org/html/rfc5424#section-6.3) parsed from the message. The data field `[custom@99770 example="1"]` becomes `__syslog_message_sd_custom_99770_example`.
+### loki_push_api_config
+
+The `loki_push_api_config` block configures Promtail to expose a [Loki push API](../../api.md#post-lokiapiv1push) server.
+
+Each job configured with a `loki_push_api_config` will expose this API and will require a separate port.
+
+Note that the `server` configuration is the same as [server_config](#server_config).
+
+
+
+```yaml
+# The push server configuration options
+[server: <server_config>]
+
+# Label map to add to every log line sent to the push API
+labels:
+ [ <labelname>: <labelvalue> ... ]
+
+# If promtail should pass on the timestamp from the incoming log or not.
+# When false, promtail will assign the current timestamp to the log when it is processed
+[use_incoming_timestamp: <bool> | default = false]
+```
+
+See [Example Push Config](#example-push-config)
+
### relabel_config
Relabeling is a powerful tool to dynamically rewrite the label set of a target
@@ -1159,3 +1188,34 @@ scrape_configs:
- source_labels: ['__syslog_message_hostname']
target_label: 'host'
```
+
+## Example Push Config
+
+The example starts Promtail as a push receiver and will accept logs from other Promtail instances or the Docker Logging Driver:
+
+```yaml
+server:
+ http_listen_port: 9080
+ grpc_listen_port: 0
+
+positions:
+ filename: /tmp/positions.yaml
+
+clients:
+ - url: http://ip_or_hostname_where_Loki_run:3100/loki/api/v1/push
+
+scrape_configs:
+- job_name: push1
+ loki_push_api:
+ server:
+ http_listen_port: 3500
+ grpc_listen_port: 3600
+ labels:
+ pushserver: push1
+```
+
+Please note that the `job_name` must be provided and must be unique between multiple `loki_push_api` scrape_configs; it will be used to register metrics.
+
+A new server instance is created, so the `http_listen_port` and `grpc_listen_port` must be different from those in the promtail `server` config section (unless it's disabled).
+
+You can set `grpc_listen_port` to `0` to have a random port assigned if not using httpgrpc.
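
A second Promtail (or any other Loki client) then forwards into this receiver by pointing its `clients` URL at the push port from the example above; the host name below is illustrative:

```yaml
clients:
  - url: http://push-receiver-hostname:3500/loki/api/v1/push
```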
diff --git a/go.mod b/go.mod
index 80618ff3d3e00..43261515d3d6c 100644
--- a/go.mod
+++ b/go.mod
@@ -29,6 +29,7 @@ require (
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645
github.com/hashicorp/golang-lru v0.5.4
github.com/hpcloud/tail v1.0.0
+ github.com/imdario/mergo v0.3.9
github.com/influxdata/go-syslog/v3 v3.0.1-0.20200510134747-836dce2cf6da
github.com/jmespath/go-jmespath v0.3.0
github.com/joncrlsn/dque v2.2.1-0.20200515025108-956d14155fa2+incompatible
diff --git a/go.sum b/go.sum
index 394a7a832760d..184117eab275b 100644
--- a/go.sum
+++ b/go.sum
@@ -693,6 +693,8 @@ github.com/hodgesds/perf-utils v0.0.8/go.mod h1:F6TfvsbtrF88i++hou29dTXlI2sfsJv+
github.com/hudl/fargo v1.3.0/go.mod h1:y3CKSmjA+wD2gak7sUSXTAoopbhU08POFhmITJgmKTg=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
+github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/flux v0.65.0/go.mod h1:BwN2XG2lMszOoquQaFdPET8FRQfrXiZsWmcMO9rkaVY=
github.com/influxdata/go-syslog/v3 v3.0.1-0.20200510134747-836dce2cf6da h1:yEuttQd/3jcdlUYYyDPub5y/JVCYR0UPuxH4xclZi/c=
diff --git a/pkg/distributor/http.go b/pkg/distributor/http.go
index 35b0ecf8fc8d1..1952da19e1a15 100644
--- a/pkg/distributor/http.go
+++ b/pkg/distributor/http.go
@@ -20,6 +20,28 @@ const applicationJSON = "application/json"
// PushHandler reads a snappy-compressed proto from the HTTP body.
func (d *Distributor) PushHandler(w http.ResponseWriter, r *http.Request) {
+
+ req, err := ParseRequest(r)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
+ }
+
+ _, err = d.Push(r.Context(), req)
+ if err == nil {
+ w.WriteHeader(http.StatusNoContent)
+ return
+ }
+
+ resp, ok := httpgrpc.HTTPResponseFromError(err)
+ if ok {
+ http.Error(w, string(resp.Body), int(resp.Code))
+ } else {
+ http.Error(w, err.Error(), http.StatusInternalServerError)
+ }
+}
+
+func ParseRequest(r *http.Request) (*logproto.PushRequest, error) {
var req logproto.PushRequest
switch r.Header.Get(contentType) {
@@ -33,27 +55,13 @@ func (d *Distributor) PushHandler(w http.ResponseWriter, r *http.Request) {
}
if err != nil {
- http.Error(w, err.Error(), http.StatusBadRequest)
- return
+ return nil, err
}
default:
if _, err := util.ParseProtoReader(r.Context(), r.Body, int(r.ContentLength), math.MaxInt32, &req, util.RawSnappy); err != nil {
- http.Error(w, err.Error(), http.StatusBadRequest)
- return
+ return nil, err
}
}
-
- _, err := d.Push(r.Context(), &req)
- if err == nil {
- w.WriteHeader(http.StatusNoContent)
- return
- }
-
- resp, ok := httpgrpc.HTTPResponseFromError(err)
- if ok {
- http.Error(w, string(resp.Body), int(resp.Code))
- } else {
- http.Error(w, err.Error(), http.StatusInternalServerError)
- }
+ return &req, nil
}
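
Extracting `ParseRequest` is what lets the new Promtail push target decode payloads exactly as the distributor does. A hedged sketch of a handler built on it (entry forwarding and error mapping are simplified here; this is not the real push target wiring):

```go
// Sketch: decode a Loki push payload with distributor.ParseRequest, then hand
// the streams off. The real push target forwards entries through its entry
// handler and applies the configured labels and timestamp policy.
func sketchPushHandler(w http.ResponseWriter, r *http.Request) {
	req, err := distributor.ParseRequest(r)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	for _, stream := range req.Streams {
		_ = stream.Labels  // parsed label string for this stream
		_ = stream.Entries // timestamped log lines to forward
	}
	w.WriteHeader(http.StatusNoContent)
}
```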
diff --git a/pkg/promtail/config/config.go b/pkg/promtail/config/config.go
index 43cff9fe7ebfe..47ebb4094a971 100644
--- a/pkg/promtail/config/config.go
+++ b/pkg/promtail/config/config.go
@@ -5,20 +5,20 @@ import (
"github.com/grafana/loki/pkg/promtail/client"
"github.com/grafana/loki/pkg/promtail/positions"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
"github.com/grafana/loki/pkg/promtail/server"
- "github.com/grafana/loki/pkg/promtail/targets"
+ "github.com/grafana/loki/pkg/promtail/targets/file"
)
// Config for promtail, describing what files to watch.
type Config struct {
ServerConfig server.Config `yaml:"server,omitempty"`
// deprecated use ClientConfigs instead
- ClientConfig client.Config `yaml:"client,omitempty"`
- ClientConfigs []client.Config `yaml:"clients,omitempty"`
- PositionsConfig positions.Config `yaml:"positions,omitempty"`
- ScrapeConfig []scrape.Config `yaml:"scrape_configs,omitempty"`
- TargetConfig targets.Config `yaml:"target_config,omitempty"`
+ ClientConfig client.Config `yaml:"client,omitempty"`
+ ClientConfigs []client.Config `yaml:"clients,omitempty"`
+ PositionsConfig positions.Config `yaml:"positions,omitempty"`
+ ScrapeConfig []scrapeconfig.Config `yaml:"scrape_configs,omitempty"`
+ TargetConfig file.Config `yaml:"target_config,omitempty"`
}
// RegisterFlags registers flags.
diff --git a/pkg/promtail/promtail_test.go b/pkg/promtail/promtail_test.go
index d0b4a76b19d2a..901a48739eece 100644
--- a/pkg/promtail/promtail_test.go
+++ b/pkg/promtail/promtail_test.go
@@ -35,8 +35,8 @@ import (
"github.com/grafana/loki/pkg/promtail/client"
"github.com/grafana/loki/pkg/promtail/config"
"github.com/grafana/loki/pkg/promtail/positions"
- "github.com/grafana/loki/pkg/promtail/scrape"
- "github.com/grafana/loki/pkg/promtail/targets"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ file2 "github.com/grafana/loki/pkg/promtail/targets/file"
)
const httpTestPort = 9080
@@ -460,7 +460,7 @@ func (h *testServerHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
file := ""
for _, label := range parsedLabels {
- if label.Name == targets.FilenameLabel {
+ if label.Name == file2.FilenameLabel {
file = label.Value
continue
}
@@ -601,7 +601,7 @@ func buildTestConfig(t *testing.T, positionsFileName string, logDirName string)
},
}
- scrapeConfig := scrape.Config{
+ scrapeConfig := scrapeconfig.Config{
JobName: "",
EntryParser: api.Raw,
PipelineStages: pipeline,
diff --git a/pkg/promtail/scrape/scrape.go b/pkg/promtail/scrapeconfig/scrapeconfig.go
similarity index 83%
rename from pkg/promtail/scrape/scrape.go
rename to pkg/promtail/scrapeconfig/scrapeconfig.go
index 226ca2cc97aba..113cc30545375 100644
--- a/pkg/promtail/scrape/scrape.go
+++ b/pkg/promtail/scrapeconfig/scrapeconfig.go
@@ -1,4 +1,4 @@
-package scrape
+package scrapeconfig
import (
"fmt"
@@ -6,6 +6,7 @@ import (
"time"
"github.com/prometheus/common/model"
+ "github.com/weaveworks/common/server"
sd_config "github.com/prometheus/prometheus/discovery/config"
"github.com/prometheus/prometheus/pkg/relabel"
@@ -21,6 +22,7 @@ type Config struct {
PipelineStages stages.PipelineStages `yaml:"pipeline_stages,omitempty"`
JournalConfig *JournalTargetConfig `yaml:"journal,omitempty"`
SyslogConfig *SyslogTargetConfig `yaml:"syslog,omitempty"`
+ PushConfig *PushTargetConfig `yaml:"loki_push_api,omitempty"`
RelabelConfigs []*relabel.Config `yaml:"relabel_configs,omitempty"`
ServiceDiscoveryConfig sd_config.ServiceDiscoveryConfig `yaml:",inline"`
}
@@ -66,6 +68,18 @@ type SyslogTargetConfig struct {
Labels model.LabelSet `yaml:"labels"`
}
+// PushTargetConfig describes a scrape config that listens for Loki push messages.
+type PushTargetConfig struct {
+ // Server is the weaveworks server config for listening connections
+ Server server.Config `yaml:"server"`
+
+ // Labels optionally holds labels to associate with each record received on the push api.
+ Labels model.LabelSet `yaml:"labels"`
+
+ // If promtail should maintain the incoming log timestamp or replace it with the current time.
+ KeepTimestamp bool `yaml:"use_incoming_timestamp"`
+}
+
// DefaultScrapeConfig is the default Config.
var DefaultScrapeConfig = Config{
EntryParser: api.Docker,
diff --git a/pkg/promtail/scrape/scrape_test.go b/pkg/promtail/scrapeconfig/scrapeconfig_test.go
similarity index 98%
rename from pkg/promtail/scrape/scrape_test.go
rename to pkg/promtail/scrapeconfig/scrapeconfig_test.go
index 48ccba99d4406..0f83f865a50c1 100644
--- a/pkg/promtail/scrape/scrape_test.go
+++ b/pkg/promtail/scrapeconfig/scrapeconfig_test.go
@@ -1,4 +1,4 @@
-package scrape
+package scrapeconfig
import (
"testing"
diff --git a/pkg/promtail/server/server.go b/pkg/promtail/server/server.go
index e69d060c42e52..21fa07a2c4423 100644
--- a/pkg/promtail/server/server.go
+++ b/pkg/promtail/server/server.go
@@ -21,6 +21,7 @@ import (
"github.com/grafana/loki/pkg/promtail/server/ui"
"github.com/grafana/loki/pkg/promtail/targets"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
var (
@@ -102,24 +103,24 @@ func (s *server) serviceDiscovery(rw http.ResponseWriter, req *http.Request) {
sort.Strings(index)
scrapeConfigData := struct {
Index []string
- Targets map[string][]targets.Target
+ Targets map[string][]target.Target
Active []int
Dropped []int
Total []int
}{
Index: index,
- Targets: make(map[string][]targets.Target),
+ Targets: make(map[string][]target.Target),
Active: make([]int, len(index)),
Dropped: make([]int, len(index)),
Total: make([]int, len(index)),
}
for i, job := range scrapeConfigData.Index {
- scrapeConfigData.Targets[job] = make([]targets.Target, 0, len(allTarget[job]))
+ scrapeConfigData.Targets[job] = make([]target.Target, 0, len(allTarget[job]))
scrapeConfigData.Total[i] = len(allTarget[job])
- for _, target := range allTarget[job] {
+ for _, t := range allTarget[job] {
// Do not display more than 100 dropped targets per job to avoid
// returning too much data to the clients.
- if targets.IsDropped(target) {
+ if target.IsDropped(t) {
scrapeConfigData.Dropped[i]++
if scrapeConfigData.Dropped[i] > 100 {
continue
@@ -127,7 +128,7 @@ func (s *server) serviceDiscovery(rw http.ResponseWriter, req *http.Request) {
} else {
scrapeConfigData.Active[i]++
}
- scrapeConfigData.Targets[job] = append(scrapeConfigData.Targets[job], target)
+ scrapeConfigData.Targets[job] = append(scrapeConfigData.Targets[job], t)
}
}
@@ -148,7 +149,7 @@ func (s *server) serviceDiscovery(rw http.ResponseWriter, req *http.Request) {
}
return ""
},
- "numReady": func(ts []targets.Target) (readies int) {
+ "numReady": func(ts []target.Target) (readies int) {
for _, t := range ts {
if t.Ready() {
readies++
@@ -164,7 +165,7 @@ func (s *server) serviceDiscovery(rw http.ResponseWriter, req *http.Request) {
func (s *server) targets(rw http.ResponseWriter, req *http.Request) {
executeTemplate(req.Context(), rw, templateOptions{
Data: struct {
- TargetPools map[string][]targets.Target
+ TargetPools map[string][]target.Target
}{
TargetPools: s.tms.ActiveTargets(),
},
@@ -181,7 +182,7 @@ func (s *server) targets(rw http.ResponseWriter, req *http.Request) {
// you can't cast with a text template in go so this is a helper
return details.(map[string]string)
},
- "numReady": func(ts []targets.Target) (readies int) {
+ "numReady": func(ts []target.Target) (readies int) {
for _, t := range ts {
if t.Ready() {
readies++
diff --git a/pkg/promtail/targets/filetarget.go b/pkg/promtail/targets/file/filetarget.go
similarity index 98%
rename from pkg/promtail/targets/filetarget.go
rename to pkg/promtail/targets/file/filetarget.go
index 0d7f3c03d4471..efd1c8ad87972 100644
--- a/pkg/promtail/targets/filetarget.go
+++ b/pkg/promtail/targets/file/filetarget.go
@@ -1,4 +1,4 @@
-package targets
+package file
import (
"flag"
@@ -18,6 +18,7 @@ import (
"github.com/grafana/loki/pkg/helpers"
"github.com/grafana/loki/pkg/promtail/api"
"github.com/grafana/loki/pkg/promtail/positions"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
var (
@@ -128,8 +129,8 @@ func (t *FileTarget) Stop() {
}
// Type implements a Target
-func (t *FileTarget) Type() TargetType {
- return FileTargetType
+func (t *FileTarget) Type() target.TargetType {
+ return target.FileTargetType
}
// DiscoveredLabels implements a Target
diff --git a/pkg/promtail/targets/filetarget_test.go b/pkg/promtail/targets/file/filetarget_test.go
similarity index 86%
rename from pkg/promtail/targets/filetarget_test.go
rename to pkg/promtail/targets/file/filetarget_test.go
index a05c3a58aec0a..9dd29af6e3067 100644
--- a/pkg/promtail/targets/filetarget_test.go
+++ b/pkg/promtail/targets/file/filetarget_test.go
@@ -1,30 +1,27 @@
-package targets
+package file
import (
"fmt"
"io/ioutil"
- "math/rand"
"os"
"path/filepath"
"sort"
- "sync"
"testing"
"time"
"github.com/go-kit/kit/log"
- "github.com/go-kit/kit/log/level"
- "github.com/prometheus/common/model"
"gopkg.in/yaml.v2"
"github.com/grafana/loki/pkg/promtail/positions"
+ "github.com/grafana/loki/pkg/promtail/targets/testutils"
)
func TestLongPositionsSyncDelayStillSavesCorrectPosition(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
logFile := dirName + "/test.log"
@@ -44,9 +41,9 @@ func TestLongPositionsSyncDelayStillSavesCorrectPosition(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
f, err := os.Create(logFile)
@@ -70,7 +67,7 @@ func TestLongPositionsSyncDelayStillSavesCorrectPosition(t *testing.T) {
}
countdown := 10000
- for len(client.messages) != 10 && countdown > 0 {
+ for len(client.Messages) != 10 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -97,13 +94,13 @@ func TestLongPositionsSyncDelayStillSavesCorrectPosition(t *testing.T) {
}
// Assert the number of messages the handler received is correct.
- if len(client.messages) != 10 {
- t.Error("Handler did not receive the correct number of messages, expected 10 received", len(client.messages))
+ if len(client.Messages) != 10 {
+ t.Error("Handler did not receive the correct number of messages, expected 10 received", len(client.Messages))
}
// Spot check one of the messages.
- if client.messages[0] != "test" {
- t.Error("Expected first log message to be 'test' but was", client.messages[0])
+ if client.Messages[0].Log != "test" {
+ t.Error("Expected first log message to be 'test' but was", client.Messages[0])
}
}
@@ -112,8 +109,8 @@ func TestWatchEntireDirectory(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
logFileDir := dirName + "/logdir/"
@@ -137,9 +134,9 @@ func TestWatchEntireDirectory(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
f, err := os.Create(logFileDir + "test.log")
@@ -163,7 +160,7 @@ func TestWatchEntireDirectory(t *testing.T) {
}
countdown := 10000
- for len(client.messages) != 10 && countdown > 0 {
+ for len(client.Messages) != 10 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -190,13 +187,13 @@ func TestWatchEntireDirectory(t *testing.T) {
}
// Assert the number of messages the handler received is correct.
- if len(client.messages) != 10 {
- t.Error("Handler did not receive the correct number of messages, expected 10 received", len(client.messages))
+ if len(client.Messages) != 10 {
+ t.Error("Handler did not receive the correct number of messages, expected 10 received", len(client.Messages))
}
// Spot check one of the messages.
- if client.messages[0] != "test" {
- t.Error("Expected first log message to be 'test' but was", client.messages[0])
+ if client.Messages[0].Log != "test" {
+ t.Error("Expected first log message to be 'test' but was", client.Messages[0])
}
}
@@ -205,8 +202,8 @@ func TestFileRolls(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFile := dirName + "/positions.yml"
logFile := dirName + "/test.log"
@@ -226,9 +223,9 @@ func TestFileRolls(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
f, err := os.Create(logFile)
@@ -252,7 +249,7 @@ func TestFileRolls(t *testing.T) {
}
countdown := 10000
- for len(client.messages) != 10 && countdown > 0 {
+ for len(client.Messages) != 10 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -276,7 +273,7 @@ func TestFileRolls(t *testing.T) {
}
countdown = 10000
- for len(client.messages) != 20 && countdown > 0 {
+ for len(client.Messages) != 20 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -284,18 +281,18 @@ func TestFileRolls(t *testing.T) {
target.Stop()
positions.Stop()
- if len(client.messages) != 20 {
- t.Error("Handler did not receive the correct number of messages, expected 20 received", len(client.messages))
+ if len(client.Messages) != 20 {
+ t.Error("Handler did not receive the correct number of messages, expected 20 received", len(client.Messages))
}
// Spot check one of the messages.
- if client.messages[0] != "test1" {
- t.Error("Expected first log message to be 'test1' but was", client.messages[0])
+ if client.Messages[0].Log != "test1" {
+ t.Error("Expected first log message to be 'test1' but was", client.Messages[0])
}
// Spot check the first message from the second file.
- if client.messages[10] != "test2" {
- t.Error("Expected first log message to be 'test2' but was", client.messages[10])
+ if client.Messages[10].Log != "test2" {
+ t.Error("Expected first log message to be 'test2' but was", client.Messages[10])
}
}
@@ -303,8 +300,8 @@ func TestResumesWhereLeftOff(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
logFile := dirName + "/test.log"
@@ -324,9 +321,9 @@ func TestResumesWhereLeftOff(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
f, err := os.Create(logFile)
@@ -350,7 +347,7 @@ func TestResumesWhereLeftOff(t *testing.T) {
}
countdown := 10000
- for len(client.messages) != 10 && countdown > 0 {
+ for len(client.Messages) != 10 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -384,7 +381,7 @@ func TestResumesWhereLeftOff(t *testing.T) {
}
countdown = 10000
- for len(client.messages) != 20 && countdown > 0 {
+ for len(client.Messages) != 20 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -392,18 +389,18 @@ func TestResumesWhereLeftOff(t *testing.T) {
target2.Stop()
ps2.Stop()
- if len(client.messages) != 20 {
- t.Error("Handler did not receive the correct number of messages, expected 20 received", len(client.messages))
+ if len(client.Messages) != 20 {
+ t.Error("Handler did not receive the correct number of messages, expected 20 received", len(client.Messages))
}
// Spot check one of the messages.
- if client.messages[0] != "test1" {
- t.Error("Expected first log message to be 'test1' but was", client.messages[0])
+ if client.Messages[0].Log != "test1" {
+ t.Error("Expected first log message to be 'test1' but was", client.Messages[0])
}
// Spot check the first message from the second file.
- if client.messages[10] != "test2" {
- t.Error("Expected first log message to be 'test2' but was", client.messages[10])
+ if client.Messages[10].Log != "test2" {
+ t.Error("Expected first log message to be 'test2' but was", client.Messages[10])
}
}
@@ -411,8 +408,8 @@ func TestGlobWithMultipleFiles(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
logFile1 := dirName + "/test.log"
logFile2 := dirName + "/dirt.log"
@@ -433,9 +430,9 @@ func TestGlobWithMultipleFiles(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
f1, err := os.Create(logFile1)
@@ -469,7 +466,7 @@ func TestGlobWithMultipleFiles(t *testing.T) {
}
countdown := 10000
- for len(client.messages) != 20 && countdown > 0 {
+ for len(client.Messages) != 20 && countdown > 0 {
time.Sleep(1 * time.Millisecond)
countdown--
}
@@ -503,8 +500,8 @@ func TestGlobWithMultipleFiles(t *testing.T) {
}
// Assert the number of messages the handler received is correct.
- if len(client.messages) != 20 {
- t.Error("Handler did not receive the correct number of messages, expected 20 received", len(client.messages))
+ if len(client.Messages) != 20 {
+ t.Error("Handler did not receive the correct number of messages, expected 20 received", len(client.Messages))
}
}
@@ -513,8 +510,8 @@ func TestFileTargetSync(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
logDir1 := dirName + "/log1"
logDir1File1 := logDir1 + "/test1.log"
@@ -536,9 +533,9 @@ func TestFileTargetSync(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
target, err := NewFileTarget(logger, client, ps, logDir1+"/*.log", nil, nil, &Config{
@@ -721,32 +718,3 @@ func TestMissing(t *testing.T) {
}
}
-
-type TestClient struct {
- log log.Logger
- messages []string
- sync.Mutex
-}
-
-func (c *TestClient) Handle(ls model.LabelSet, t time.Time, s string) error {
- level.Debug(c.log).Log("msg", "received log", "log", s)
-
- c.Lock()
- defer c.Unlock()
- c.messages = append(c.messages, s)
- return nil
-}
-
-func initRandom() {
- rand.Seed(time.Now().UnixNano())
-}
-
-var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
-
-func randName() string {
- b := make([]rune, 10)
- for i := range b {
- b[i] = letters[rand.Intn(len(letters))]
- }
- return string(b)
-}
diff --git a/pkg/promtail/targets/filetargetmanager.go b/pkg/promtail/targets/file/filetargetmanager.go
similarity index 87%
rename from pkg/promtail/targets/filetargetmanager.go
rename to pkg/promtail/targets/file/filetargetmanager.go
index 47573241b5b9e..e5ce2c66cf474 100644
--- a/pkg/promtail/targets/filetargetmanager.go
+++ b/pkg/promtail/targets/file/filetargetmanager.go
@@ -1,4 +1,4 @@
-package targets
+package file
import (
"context"
@@ -23,7 +23,8 @@ import (
"github.com/grafana/loki/pkg/logentry/stages"
"github.com/grafana/loki/pkg/promtail/api"
"github.com/grafana/loki/pkg/promtail/positions"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
const (
@@ -58,7 +59,7 @@ func NewFileTargetManager(
logger log.Logger,
positions positions.Positions,
client api.EntryHandler,
- scrapeConfigs []scrape.Config,
+ scrapeConfigs []scrapeconfig.Config,
targetConfig *Config,
) (*FileTargetManager, error) {
ctx, quit := context.WithCancel(context.Background())
@@ -140,7 +141,7 @@ func NewFileTargetManager(
positions: positions,
relabelConfig: cfg.RelabelConfigs,
targets: map[string]*FileTarget{},
- droppedTargets: []Target{},
+ droppedTargets: []target.Target{},
hostname: hostname,
entryHandler: pipeline.Wrap(client),
targetConfig: targetConfig,
@@ -184,8 +185,8 @@ func (tm *FileTargetManager) Stop() {
}
// ActiveTargets returns the active targets currently being scraped.
-func (tm *FileTargetManager) ActiveTargets() map[string][]Target {
- result := map[string][]Target{}
+func (tm *FileTargetManager) ActiveTargets() map[string][]target.Target {
+ result := map[string][]target.Target{}
for jobName, syncer := range tm.syncers {
result[jobName] = append(result[jobName], syncer.ActiveTargets()...)
}
@@ -193,8 +194,8 @@ func (tm *FileTargetManager) ActiveTargets() map[string][]Target {
}
// AllTargets returns all targets, active and dropped.
-func (tm *FileTargetManager) AllTargets() map[string][]Target {
- result := map[string][]Target{}
+func (tm *FileTargetManager) AllTargets() map[string][]target.Target {
+ result := map[string][]target.Target{}
for jobName, syncer := range tm.syncers {
result[jobName] = append(result[jobName], syncer.ActiveTargets()...)
result[jobName] = append(result[jobName], syncer.DroppedTargets()...)
@@ -209,7 +210,7 @@ type targetSyncer struct {
entryHandler api.EntryHandler
hostname string
- droppedTargets []Target
+ droppedTargets []target.Target
targets map[string]*FileTarget
mtx sync.Mutex
@@ -223,7 +224,7 @@ func (s *targetSyncer) sync(groups []*targetgroup.Group) {
defer s.mtx.Unlock()
targets := map[string]struct{}{}
- dropped := []Target{}
+ dropped := []target.Target{}
for _, group := range groups {
for _, t := range group.Targets {
@@ -244,7 +245,7 @@ func (s *targetSyncer) sync(groups []*targetgroup.Group) {
// Drop empty targets (drop in relabeling).
if processedLabels == nil {
- dropped = append(dropped, newDroppedTarget("dropping target, no labels", discoveredLabels))
+ dropped = append(dropped, target.NewDroppedTarget("dropping target, no labels", discoveredLabels))
level.Debug(s.log).Log("msg", "dropping target, no labels")
failedTargets.WithLabelValues("empty_labels").Inc()
continue
@@ -252,7 +253,7 @@ func (s *targetSyncer) sync(groups []*targetgroup.Group) {
host, ok := labels[hostLabel]
if ok && string(host) != s.hostname {
- dropped = append(dropped, newDroppedTarget(fmt.Sprintf("ignoring target, wrong host (labels:%s hostname:%s)", labels.String(), s.hostname), discoveredLabels))
+ dropped = append(dropped, target.NewDroppedTarget(fmt.Sprintf("ignoring target, wrong host (labels:%s hostname:%s)", labels.String(), s.hostname), discoveredLabels))
level.Debug(s.log).Log("msg", "ignoring target, wrong host", "labels", labels.String(), "hostname", s.hostname)
failedTargets.WithLabelValues("wrong_host").Inc()
continue
@@ -260,7 +261,7 @@ func (s *targetSyncer) sync(groups []*targetgroup.Group) {
path, ok := labels[pathLabel]
if !ok {
- dropped = append(dropped, newDroppedTarget("no path for target", discoveredLabels))
+ dropped = append(dropped, target.NewDroppedTarget("no path for target", discoveredLabels))
level.Info(s.log).Log("msg", "no path for target", "labels", labels.String())
failedTargets.WithLabelValues("no_path").Inc()
continue
@@ -275,7 +276,7 @@ func (s *targetSyncer) sync(groups []*targetgroup.Group) {
key := labels.String()
targets[key] = struct{}{}
if _, ok := s.targets[key]; ok {
- dropped = append(dropped, newDroppedTarget("ignoring target, already exists", discoveredLabels))
+ dropped = append(dropped, target.NewDroppedTarget("ignoring target, already exists", discoveredLabels))
level.Debug(s.log).Log("msg", "ignoring target, already exists", "labels", labels.String())
failedTargets.WithLabelValues("exists").Inc()
continue
@@ -284,7 +285,7 @@ func (s *targetSyncer) sync(groups []*targetgroup.Group) {
level.Info(s.log).Log("msg", "Adding target", "key", key)
t, err := s.newTarget(string(path), labels, discoveredLabels)
if err != nil {
- dropped = append(dropped, newDroppedTarget(fmt.Sprintf("Failed to create target: %s", err.Error()), discoveredLabels))
+ dropped = append(dropped, target.NewDroppedTarget(fmt.Sprintf("Failed to create target: %s", err.Error()), discoveredLabels))
level.Error(s.log).Log("msg", "Failed to create target", "key", key, "error", err)
failedTargets.WithLabelValues("error").Inc()
continue
@@ -310,16 +311,16 @@ func (s *targetSyncer) newTarget(path string, labels model.LabelSet, discoveredL
return NewFileTarget(s.log, s.entryHandler, s.positions, path, labels, discoveredLabels, s.targetConfig)
}
-func (s *targetSyncer) DroppedTargets() []Target {
+func (s *targetSyncer) DroppedTargets() []target.Target {
s.mtx.Lock()
defer s.mtx.Unlock()
- return append([]Target(nil), s.droppedTargets...)
+ return append([]target.Target(nil), s.droppedTargets...)
}
-func (s *targetSyncer) ActiveTargets() []Target {
+func (s *targetSyncer) ActiveTargets() []target.Target {
s.mtx.Lock()
defer s.mtx.Unlock()
- actives := []Target{}
+ actives := []target.Target{}
for _, t := range s.targets {
actives = append(actives, t)
}
diff --git a/pkg/promtail/targets/tailer.go b/pkg/promtail/targets/file/tailer.go
similarity index 99%
rename from pkg/promtail/targets/tailer.go
rename to pkg/promtail/targets/file/tailer.go
index 1c1c037a40cd2..dc1fc9755fa38 100644
--- a/pkg/promtail/targets/tailer.go
+++ b/pkg/promtail/targets/file/tailer.go
@@ -1,4 +1,4 @@
-package targets
+package file
import (
"os"
diff --git a/pkg/promtail/targets/journaltarget.go b/pkg/promtail/targets/journal/journaltarget.go
similarity index 96%
rename from pkg/promtail/targets/journaltarget.go
rename to pkg/promtail/targets/journal/journaltarget.go
index b3f420bab1a3d..320e98ec6f00d 100644
--- a/pkg/promtail/targets/journaltarget.go
+++ b/pkg/promtail/targets/journal/journaltarget.go
@@ -1,6 +1,6 @@
// +build linux,cgo
-package targets
+package journal
import (
"fmt"
@@ -16,10 +16,11 @@ import (
"github.com/go-kit/kit/log/level"
"github.com/grafana/loki/pkg/promtail/positions"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
"github.com/go-kit/kit/log"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
"github.com/coreos/go-systemd/sdjournal"
"github.com/pkg/errors"
@@ -92,7 +93,7 @@ type JournalTarget struct {
positions positions.Positions
positionPath string
relabelConfig []*relabel.Config
- config *scrape.JournalTargetConfig
+ config *scrapeconfig.JournalTargetConfig
labels model.LabelSet
r journalReader
@@ -106,7 +107,7 @@ func NewJournalTarget(
positions positions.Positions,
jobName string,
relabelConfig []*relabel.Config,
- targetConfig *scrape.JournalTargetConfig,
+ targetConfig *scrapeconfig.JournalTargetConfig,
) (*JournalTarget, error) {
return journalTargetWithReader(
@@ -127,7 +128,7 @@ func journalTargetWithReader(
positions positions.Positions,
jobName string,
relabelConfig []*relabel.Config,
- targetConfig *scrape.JournalTargetConfig,
+ targetConfig *scrapeconfig.JournalTargetConfig,
readerFunc journalReaderFunc,
entryFunc journalEntryFunc,
) (*JournalTarget, error) {
@@ -292,8 +293,8 @@ func (t *JournalTarget) formatter(entry *sdjournal.JournalEntry) (string, error)
}
// Type returns JournalTargetType.
-func (t *JournalTarget) Type() TargetType {
- return JournalTargetType
+func (t *JournalTarget) Type() target.TargetType {
+ return target.JournalTargetType
}
// Ready indicates whether or not the journal is ready to be
diff --git a/pkg/promtail/targets/journaltarget_test.go b/pkg/promtail/targets/journal/journaltarget_test.go
similarity index 83%
rename from pkg/promtail/targets/journaltarget_test.go
rename to pkg/promtail/targets/journal/journaltarget_test.go
index 9253f7a26fe37..6aed7075438ed 100644
--- a/pkg/promtail/targets/journaltarget_test.go
+++ b/pkg/promtail/targets/journal/journaltarget_test.go
@@ -1,6 +1,6 @@
// +build linux,cgo
-package targets
+package journal
import (
"fmt"
@@ -17,7 +17,8 @@ import (
"github.com/stretchr/testify/assert"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/testutils"
"github.com/go-kit/kit/log"
"github.com/stretchr/testify/require"
@@ -70,8 +71,8 @@ func TestJournalTarget(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
// Set the sync period to a really long value, to guarantee the sync timer
@@ -85,9 +86,9 @@ func TestJournalTarget(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
relabelCfg := `
@@ -102,7 +103,7 @@ func TestJournalTarget(t *testing.T) {
require.NoError(t, err)
jt, err := journalTargetWithReader(logger, client, ps, "test", relabels,
- &scrape.JournalTargetConfig{}, newMockJournalReader, newMockJournalEntry(nil))
+ &scrapeconfig.JournalTargetConfig{}, newMockJournalReader, newMockJournalEntry(nil))
require.NoError(t, err)
r := jt.r.(*mockJournalReader)
@@ -114,8 +115,8 @@ func TestJournalTarget(t *testing.T) {
})
assert.NoError(t, err)
}
- fmt.Println(client.messages)
- assert.Len(t, client.messages, 10)
+ fmt.Println(client.Messages)
+ assert.Len(t, client.Messages, 10)
require.NoError(t, jt.Stop())
}
@@ -123,8 +124,8 @@ func TestJournalTarget_JSON(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
// Set the sync period to a really long value, to guarantee the sync timer
@@ -138,9 +139,9 @@ func TestJournalTarget_JSON(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
relabelCfg := `
@@ -154,7 +155,7 @@ func TestJournalTarget_JSON(t *testing.T) {
err = yaml.Unmarshal([]byte(relabelCfg), &relabels)
require.NoError(t, err)
- cfg := &scrape.JournalTargetConfig{JSON: true}
+ cfg := &scrapeconfig.JournalTargetConfig{JSON: true}
jt, err := journalTargetWithReader(logger, client, ps, "test", relabels,
cfg, newMockJournalReader, newMockJournalEntry(nil))
@@ -172,11 +173,11 @@ func TestJournalTarget_JSON(t *testing.T) {
expectMsg := `{"CODE_FILE":"journaltarget_test.go","MESSAGE":"ping","OTHER_FIELD":"foobar"}`
- require.Greater(t, len(client.messages), 0)
- require.Equal(t, expectMsg, client.messages[len(client.messages)-1])
+ require.Greater(t, len(client.Messages), 0)
+ require.Equal(t, expectMsg, client.Messages[len(client.Messages)-1].Log)
}
- assert.Len(t, client.messages, 10)
+ assert.Len(t, client.Messages, 10)
require.NoError(t, jt.Stop())
}
@@ -184,8 +185,8 @@ func TestJournalTarget_Since(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
// Set the sync period to a really long value, to guarantee the sync timer
@@ -199,12 +200,12 @@ func TestJournalTarget_Since(t *testing.T) {
t.Fatal(err)
}
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
- cfg := scrape.JournalTargetConfig{
+ cfg := scrapeconfig.JournalTargetConfig{
MaxAge: "4h",
}
@@ -220,8 +221,8 @@ func TestJournalTarget_Cursor_TooOld(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
// Set the sync period to a really long value, to guarantee the sync timer
@@ -236,12 +237,12 @@ func TestJournalTarget_Cursor_TooOld(t *testing.T) {
}
ps.PutString("journal-test", "foobar")
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
- cfg := scrape.JournalTargetConfig{}
+ cfg := scrapeconfig.JournalTargetConfig{}
entryTs := time.Date(1980, time.July, 3, 12, 0, 0, 0, time.UTC)
journalEntry := newMockJournalEntry(&sdjournal.JournalEntry{
@@ -262,8 +263,8 @@ func TestJournalTarget_Cursor_NotTooOld(t *testing.T) {
w := log.NewSyncWriter(os.Stderr)
logger := log.NewLogfmtLogger(w)
- initRandom()
- dirName := "/tmp/" + randName()
+ testutils.InitRandom()
+ dirName := "/tmp/" + testutils.RandName()
positionsFileName := dirName + "/positions.yml"
// Set the sync period to a really long value, to guarantee the sync timer
@@ -278,12 +279,12 @@ func TestJournalTarget_Cursor_NotTooOld(t *testing.T) {
}
ps.PutString("journal-test", "foobar")
- client := &TestClient{
- log: logger,
- messages: make([]string, 0),
+ client := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
}
- cfg := scrape.JournalTargetConfig{}
+ cfg := scrapeconfig.JournalTargetConfig{}
entryTs := time.Now().Add(-time.Hour)
journalEntry := newMockJournalEntry(&sdjournal.JournalEntry{
diff --git a/pkg/promtail/targets/journaltargetmanager.go b/pkg/promtail/targets/journal/journaltargetmanager.go
similarity index 78%
rename from pkg/promtail/targets/journaltargetmanager.go
rename to pkg/promtail/targets/journal/journaltargetmanager.go
index 08b689eab9158..24a00b589e6af 100644
--- a/pkg/promtail/targets/journaltargetmanager.go
+++ b/pkg/promtail/targets/journal/journaltargetmanager.go
@@ -1,6 +1,6 @@
// +build !linux !cgo
-package targets
+package journal
import (
"github.com/go-kit/kit/log"
@@ -8,7 +8,8 @@ import (
"github.com/grafana/loki/pkg/promtail/api"
"github.com/grafana/loki/pkg/promtail/positions"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
// JournalTargetManager manages a series of JournalTargets.
@@ -20,7 +21,7 @@ func NewJournalTargetManager(
logger log.Logger,
positions positions.Positions,
client api.EntryHandler,
- scrapeConfigs []scrape.Config,
+ scrapeConfigs []scrapeconfig.Config,
) (*JournalTargetManager, error) {
level.Warn(logger).Log("msg", "WARNING!!! Journal target was configured but support for reading the systemd journal is not compiled into this build of promtail!")
return &JournalTargetManager{}, nil
@@ -36,11 +37,11 @@ func (tm *JournalTargetManager) Ready() bool {
func (tm *JournalTargetManager) Stop() {}
// ActiveTargets always returns nil on non-Linux platforms.
-func (tm *JournalTargetManager) ActiveTargets() map[string][]Target {
+func (tm *JournalTargetManager) ActiveTargets() map[string][]target.Target {
return nil
}
// AllTargets always returns nil on non-Linux platforms.
-func (tm *JournalTargetManager) AllTargets() map[string][]Target {
+func (tm *JournalTargetManager) AllTargets() map[string][]target.Target {
return nil
}
diff --git a/pkg/promtail/targets/journaltargetmanager_linux.go b/pkg/promtail/targets/journal/journaltargetmanager_linux.go
similarity index 83%
rename from pkg/promtail/targets/journaltargetmanager_linux.go
rename to pkg/promtail/targets/journal/journaltargetmanager_linux.go
index b6293dfaa3a79..bcd15d442e1b0 100644
--- a/pkg/promtail/targets/journaltargetmanager_linux.go
+++ b/pkg/promtail/targets/journal/journaltargetmanager_linux.go
@@ -1,6 +1,6 @@
// +build cgo
-package targets
+package journal
import (
"github.com/go-kit/kit/log"
@@ -10,7 +10,8 @@ import (
"github.com/grafana/loki/pkg/logentry/stages"
"github.com/grafana/loki/pkg/promtail/api"
"github.com/grafana/loki/pkg/promtail/positions"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
// JournalTargetManager manages a series of JournalTargets.
@@ -24,7 +25,7 @@ func NewJournalTargetManager(
logger log.Logger,
positions positions.Positions,
client api.EntryHandler,
- scrapeConfigs []scrape.Config,
+ scrapeConfigs []scrapeconfig.Config,
) (*JournalTargetManager, error) {
tm := &JournalTargetManager{
logger: logger,
@@ -82,16 +83,16 @@ func (tm *JournalTargetManager) Stop() {
// ActiveTargets returns the list of JournalTargets where journal data
// is being read. ActiveTargets is an alias to AllTargets as
// JournalTargets cannot be deactivated, only stopped.
-func (tm *JournalTargetManager) ActiveTargets() map[string][]Target {
+func (tm *JournalTargetManager) ActiveTargets() map[string][]target.Target {
return tm.AllTargets()
}
// AllTargets returns the list of all targets where journal data
// is currently being read.
-func (tm *JournalTargetManager) AllTargets() map[string][]Target {
- result := make(map[string][]Target, len(tm.targets))
+func (tm *JournalTargetManager) AllTargets() map[string][]target.Target {
+ result := make(map[string][]target.Target, len(tm.targets))
for k, v := range tm.targets {
- result[k] = []Target{v}
+ result[k] = []target.Target{v}
}
return result
}
diff --git a/pkg/promtail/targets/lokipush/pushtarget.go b/pkg/promtail/targets/lokipush/pushtarget.go
new file mode 100644
index 0000000000000..3dec7a8b20256
--- /dev/null
+++ b/pkg/promtail/targets/lokipush/pushtarget.go
@@ -0,0 +1,203 @@
+package lokipush
+
+import (
+ "flag"
+ "net/http"
+ "strings"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/util"
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ "github.com/imdario/mergo"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/pkg/labels"
+ "github.com/prometheus/prometheus/pkg/relabel"
+ "github.com/weaveworks/common/server"
+
+ "github.com/grafana/loki/pkg/distributor"
+ "github.com/grafana/loki/pkg/logql"
+ "github.com/grafana/loki/pkg/promtail/api"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
+)
+
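+// PushTarget exposes a Loki push API endpoint over HTTP and forwards every
+// received log entry to the configured EntryHandler.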
+type PushTarget struct {
+ logger log.Logger
+ handler api.EntryHandler
+ config *scrapeconfig.PushTargetConfig
+ relabelConfig []*relabel.Config
+ jobName string
+ server *server.Server
+}
+
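+// NewPushTarget creates a PushTarget, merging the provided server config
+// over registered defaults, and starts its HTTP server.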
+func NewPushTarget(logger log.Logger,
+ handler api.EntryHandler,
+ relabel []*relabel.Config,
+ jobName string,
+ config *scrapeconfig.PushTargetConfig) (*PushTarget, error) {
+
+ pt := &PushTarget{
+ logger: logger,
+ handler: handler,
+ relabelConfig: relabel,
+ jobName: jobName,
+ config: config,
+ }
+
+ // Bit of a chicken and egg problem trying to register the defaults and apply overrides from the loaded config.
+ // First create an empty config and set defaults.
+ defaults := server.Config{}
+ defaults.RegisterFlags(flag.NewFlagSet("empty", flag.ContinueOnError))
+ // Then apply any config values loaded as overrides to the defaults.
+ if err := mergo.Merge(&defaults, config.Server, mergo.WithOverride); err != nil {
+ level.Error(logger).Log("msg", "failed to parse configs and override defaults when configuring push server", "err", err)
+ }
+ // The merge won't overwrite with a zero value, but for ports a 0 value
+ // indicates the desire for a random port, so reset them to zero if the incoming config value is 0.
+ if config.Server.HTTPListenPort == 0 {
+ defaults.HTTPListenPort = 0
+ }
+ if config.Server.GRPCListenPort == 0 {
+ defaults.GRPCListenPort = 0
+ }
+ // Set the config to the new combined config.
+ config.Server = defaults
+
+ err := pt.run()
+ if err != nil {
+ return nil, err
+ }
+
+ return pt, nil
+}
+
+func (t *PushTarget) run() error {
+ level.Info(t.logger).Log("msg", "starting push server", "job", t.jobName)
+ // Prefix the metrics namespace with the job name to prevent collisions, because all metrics are registered in the global Prometheus registry.
+ t.config.Server.MetricsNamespace = "promtail_" + t.jobName
+
+ // We don't want the /debug and /metrics endpoints running
+ t.config.Server.RegisterInstrumentation = false
+
+ util.InitLogger(&t.config.Server)
+
+ srv, err := server.New(t.config.Server)
+ if err != nil {
+ return err
+ }
+
+ t.server = srv
+ t.server.HTTP.Handle("/loki/api/v1/push", http.HandlerFunc(t.handle))
+
+ go func() {
+ err := srv.Run()
+ if err != nil {
+ level.Error(t.logger).Log("msg", "Loki push server shutdown with error", "err", err)
+ }
+ }()
+
+ return nil
+}
+
+func (t *PushTarget) handle(w http.ResponseWriter, r *http.Request) {
+ req, err := distributor.ParseRequest(r)
+ if err != nil {
+ level.Warn(t.logger).Log("msg", "failed to parse incoming push request", "err", err.Error())
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return
+ }
+ var lastErr error
+ for _, stream := range req.Streams {
+ matchers, err := logql.ParseMatchers(stream.Labels)
+ if err != nil {
+ lastErr = err
+ continue
+ }
+
+ lb := labels.NewBuilder(make(labels.Labels, 0, len(matchers)+len(t.config.Labels)))
+
+ // Add stream labels
+ for i := range matchers {
+ lb.Set(matchers[i].Name, matchers[i].Value)
+ }
+
+ // Add configured labels
+ for k, v := range t.config.Labels {
+ lb.Set(string(k), string(v))
+ }
+
+ // Apply relabeling
+ processed := relabel.Process(lb.Labels(), t.relabelConfig...)
+ if len(processed) == 0 {
+ w.WriteHeader(http.StatusNoContent)
+ return
+ }
+
+ // Convert to model.LabelSet
+ filtered := model.LabelSet{}
+ for i := range processed {
+ if strings.HasPrefix(processed[i].Name, "__") {
+ continue
+ }
+ filtered[model.LabelName(processed[i].Name)] = model.LabelValue(processed[i].Value)
+ }
+
+ for _, entry := range stream.Entries {
+ var err error
+ if t.config.KeepTimestamp {
+ err = t.handler.Handle(filtered, entry.Timestamp, entry.Line)
+ } else {
+ err = t.handler.Handle(filtered, time.Now(), entry.Line)
+ }
+
+ if err != nil {
+ lastErr = err
+ continue
+ }
+ }
+ }
+
+ if lastErr != nil {
+ level.Warn(t.logger).Log("msg", "at least one entry in the push request failed to process", "err", lastErr.Error())
+ http.Error(w, lastErr.Error(), http.StatusBadRequest)
+ return
+ }
+
+ w.WriteHeader(http.StatusNoContent)
+}
+
+// Type returns PushTargetType.
+func (t *PushTarget) Type() target.TargetType {
+ return target.PushTargetType
+}
+
+// Ready indicates whether or not the PushTarget is ready to be read from.
+func (t *PushTarget) Ready() bool {
+ return true
+}
+
+// DiscoveredLabels returns the set of labels discovered by the PushTarget, which
+// is always nil. Implements Target.
+func (t *PushTarget) DiscoveredLabels() model.LabelSet {
+ return nil
+}
+
+// Labels returns the set of labels that statically apply to all log entries
+// produced by the PushTarget.
+func (t *PushTarget) Labels() model.LabelSet {
+ return t.config.Labels
+}
+
+// Details returns target-specific details.
+func (t *PushTarget) Details() interface{} {
+ return map[string]string{}
+}
+
+// Stop shuts down the PushTarget.
+func (t *PushTarget) Stop() error {
+ level.Info(t.logger).Log("msg", "stopping push server", "job", t.jobName)
+ t.server.Shutdown()
+ return nil
+}
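
As a quick illustration of the new target from a sender's point of view, the sketch below pushes log entries at a running PushTarget using promtail's own client, mirroring the test that follows. The listen address and port are assumptions for illustration; the `/loki/api/v1/push` path is the one registered in `run()` above.

```go
package main

import (
	"time"

	"github.com/cortexproject/cortex/pkg/util/flagext"
	"github.com/go-kit/kit/log"
	"github.com/prometheus/common/model"

	"github.com/grafana/loki/pkg/promtail/client"
)

func main() {
	// Point the client at the push target's listen address (assumed here).
	serverURL := flagext.URLValue{}
	if err := serverURL.Set("http://127.0.0.1:3500/loki/api/v1/push"); err != nil {
		panic(err)
	}

	c, err := client.New(client.Config{
		URL:       serverURL,
		Timeout:   time.Second,
		BatchWait: time.Second,
		BatchSize: 100 * 1024,
	}, log.NewNopLogger())
	if err != nil {
		panic(err)
	}

	// Each entry carries a label set, a timestamp, and a line; with
	// KeepTimestamp set on the target, the timestamp is preserved as sent.
	_ = c.Handle(model.LabelSet{"stream": "stream1"}, time.Now(), "hello from another promtail")
}
```
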
diff --git a/pkg/promtail/targets/lokipush/pushtarget_test.go b/pkg/promtail/targets/lokipush/pushtarget_test.go
new file mode 100644
index 0000000000000..c464ab9ab29bd
--- /dev/null
+++ b/pkg/promtail/targets/lokipush/pushtarget_test.go
@@ -0,0 +1,116 @@
+package lokipush
+
+import (
+ "flag"
+ "net"
+ "os"
+ "strconv"
+ "testing"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/util/flagext"
+ "github.com/go-kit/kit/log"
+ "github.com/prometheus/common/model"
+ "github.com/prometheus/prometheus/pkg/relabel"
+ "github.com/stretchr/testify/require"
+ "github.com/weaveworks/common/server"
+
+ "github.com/grafana/loki/pkg/promtail/client"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/testutils"
+)
+
+func TestPushTarget(t *testing.T) {
+ w := log.NewSyncWriter(os.Stderr)
+ logger := log.NewLogfmtLogger(w)
+
+ // Create the PushTarget
+ eh := &testutils.TestClient{
+ Log: logger,
+ Messages: make([]*testutils.Entry, 0),
+ }
+
+ // Get a randomly available port by opening and closing a TCP socket
+ addr, err := net.ResolveTCPAddr("tcp", "127.0.0.1:0")
+ require.NoError(t, err)
+ l, err := net.ListenTCP("tcp", addr)
+ require.NoError(t, err)
+ port := l.Addr().(*net.TCPAddr).Port
+ err = l.Close()
+ require.NoError(t, err)
+
+ // Adjust some of the defaults
+ defaults := server.Config{}
+ defaults.RegisterFlags(flag.NewFlagSet("empty", flag.ContinueOnError))
+ defaults.HTTPListenAddress = "127.0.0.1"
+ defaults.HTTPListenPort = port
+ defaults.GRPCListenAddress = "127.0.0.1"
+ defaults.GRPCListenPort = 0 // Not testing GRPC; a random port will be assigned
+
+ config := &scrapeconfig.PushTargetConfig{
+ Server: defaults,
+ Labels: model.LabelSet{
+ "pushserver": "pushserver1",
+ "dropme": "label",
+ },
+ KeepTimestamp: true,
+ }
+
+ rlbl := []*relabel.Config{
+ {
+ Action: relabel.LabelDrop,
+ Regex: relabel.MustNewRegexp("dropme"),
+ },
+ }
+
+ pt, err := NewPushTarget(logger, eh, rlbl, "job1", config)
+ require.NoError(t, err)
+
+ // Build a client to send logs
+ serverURL := flagext.URLValue{}
+ err = serverURL.Set("http://127.0.0.1:" + strconv.Itoa(port) + "/loki/api/v1/push")
+ require.NoError(t, err)
+
+ ccfg := client.Config{
+ URL: serverURL,
+ Timeout: 1 * time.Second,
+ BatchWait: 1 * time.Second,
+ BatchSize: 100 * 1024,
+ }
+ pc, err := client.New(ccfg, logger)
+ require.NoError(t, err)
+
+ // Send some logs
+ labels := model.LabelSet{
+ "stream": "stream1",
+ "__anotherdroplabel": "dropme",
+ }
+ for i := 0; i < 100; i++ {
+ err := pc.Handle(labels, time.Unix(int64(i), 0), "line"+strconv.Itoa(i))
+ require.NoError(t, err)
+ }
+
+ // Wait for them to appear in the test handler
+ countdown := 10000
+ for len(eh.Messages) != 100 && countdown > 0 {
+ time.Sleep(1 * time.Millisecond)
+ countdown--
+ }
+
+ // Make sure we didn't time out
+ require.Equal(t, 100, len(eh.Messages))
+
+ // Verify labels
+ expectedLabels := model.LabelSet{
+ "pushserver": "pushserver1",
+ "stream": "stream1",
+ }
+ // Spot check the first value in the result to make sure relabel rules were applied properly
+ require.Equal(t, expectedLabels, eh.Messages[0].Labels)
+
+ // With keep timestamp enabled, verify timestamp
+ require.Equal(t, time.Unix(99, 0).Unix(), eh.Messages[99].Time.Unix())
+
+ _ = pt.Stop()
+
+}
diff --git a/pkg/promtail/targets/lokipush/pushtargetmanager.go b/pkg/promtail/targets/lokipush/pushtargetmanager.go
new file mode 100644
index 0000000000000..d0106941ca66f
--- /dev/null
+++ b/pkg/promtail/targets/lokipush/pushtargetmanager.go
@@ -0,0 +1,112 @@
+package lokipush
+
+import (
+ "errors"
+ "fmt"
+ "strings"
+
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ "github.com/prometheus/client_golang/prometheus"
+
+ "github.com/grafana/loki/pkg/logentry/stages"
+ "github.com/grafana/loki/pkg/promtail/api"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
+)
+
+// PushTargetManager manages a series of PushTargets.
+type PushTargetManager struct {
+ logger log.Logger
+ targets map[string]*PushTarget
+}
+
+// NewPushTargetManager creates a new PushTargetManager.
+func NewPushTargetManager(
+ logger log.Logger,
+ client api.EntryHandler,
+ scrapeConfigs []scrapeconfig.Config,
+) (*PushTargetManager, error) {
+
+ tm := &PushTargetManager{
+ logger: logger,
+ targets: make(map[string]*PushTarget),
+ }
+
+ if err := validateJobName(scrapeConfigs); err != nil {
+ return nil, err
+ }
+
+ for _, cfg := range scrapeConfigs {
+ registerer := prometheus.DefaultRegisterer
+ pipeline, err := stages.NewPipeline(log.With(logger, "component", "push_pipeline_"+cfg.JobName), cfg.PipelineStages, &cfg.JobName, registerer)
+ if err != nil {
+ return nil, err
+ }
+
+ t, err := NewPushTarget(logger, pipeline.Wrap(client), cfg.RelabelConfigs, cfg.JobName, cfg.PushConfig)
+ if err != nil {
+ return nil, err
+ }
+
+ tm.targets[cfg.JobName] = t
+ }
+
+ return tm, nil
+}
+
+func validateJobName(scrapeConfigs []scrapeconfig.Config) error {
+ jobNames := map[string]struct{}{}
+ for i, cfg := range scrapeConfigs {
+ if cfg.JobName == "" {
+ return errors.New("`job_name` must be defined for the `push` scrape_config with a " +
+ "unique name to properly register metrics; " +
+ "at least one `push` scrape_config has no `job_name` defined")
+ }
+ if _, ok := jobNames[cfg.JobName]; ok {
+ return fmt.Errorf("`job_name` must be unique for each `push` scrape_config, "+
+ "a duplicate `job_name` of %s was found", cfg.JobName)
+ }
+ jobNames[cfg.JobName] = struct{}{}
+
+ scrapeConfigs[i].JobName = strings.Replace(cfg.JobName, " ", "_", -1)
+ }
+ return nil
+}
+
+// Ready returns true if at least one PushTarget is also ready.
+func (tm *PushTargetManager) Ready() bool {
+ for _, t := range tm.targets {
+ if t.Ready() {
+ return true
+ }
+ }
+ return false
+}
+
+// Stop stops the PushTargetManager and all of its PushTargets.
+func (tm *PushTargetManager) Stop() {
+ for _, t := range tm.targets {
+ if err := t.Stop(); err != nil {
+ level.Error(t.logger).Log("msg", "error stopping PushTarget", "err", err.Error())
+ }
+ }
+}
+
+// ActiveTargets returns the list of PushTargets where Push data
+// is being read. ActiveTargets is an alias to AllTargets as
+// PushTargets cannot be deactivated, only stopped.
+func (tm *PushTargetManager) ActiveTargets() map[string][]target.Target {
+ return tm.AllTargets()
+}
+
+// AllTargets returns the list of all targets where Push data
+// is currently being read.
+func (tm *PushTargetManager) AllTargets() map[string][]target.Target {
+ result := make(map[string][]target.Target, len(tm.targets))
+ for k, v := range tm.targets {
+ result[k] = []target.Target{v}
+ }
+ return result
+}
diff --git a/pkg/promtail/targets/lokipush/pushtargetmanager_test.go b/pkg/promtail/targets/lokipush/pushtargetmanager_test.go
new file mode 100644
index 0000000000000..1e6e5dd54524c
--- /dev/null
+++ b/pkg/promtail/targets/lokipush/pushtargetmanager_test.go
@@ -0,0 +1,77 @@
+package lokipush
+
+import (
+ "testing"
+
+ "github.com/weaveworks/common/server"
+
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+)
+
+func Test_validateJobName(t *testing.T) {
+ tests := []struct {
+ name string
+ configs []scrapeconfig.Config
+ // Only validated against the first job in the provided scrape configs
+ expectedJob string
+ wantErr bool
+ }{
+ {
+ name: "valid with spaces removed",
+ configs: []scrapeconfig.Config{
+ {
+ JobName: "jobby job job",
+ PushConfig: &scrapeconfig.PushTargetConfig{
+ Server: server.Config{},
+ },
+ },
+ },
+ wantErr: false,
+ expectedJob: "jobby_job_job",
+ },
+ {
+ name: "missing job",
+ configs: []scrapeconfig.Config{
+ {
+ PushConfig: &scrapeconfig.PushTargetConfig{
+ Server: server.Config{},
+ },
+ },
+ },
+ wantErr: true,
+ },
+ {
+ name: "duplicate job",
+ configs: []scrapeconfig.Config{
+ {
+ JobName: "job1",
+ PushConfig: &scrapeconfig.PushTargetConfig{
+ Server: server.Config{},
+ },
+ },
+ {
+ JobName: "job1",
+ PushConfig: &scrapeconfig.PushTargetConfig{
+ Server: server.Config{},
+ },
+ },
+ },
+ wantErr: true,
+ },
+ }
+ for _, tt := range tests {
+ t.Run(tt.name, func(t *testing.T) {
+ err := validateJobName(tt.configs)
+ if (err != nil) != tt.wantErr {
+ t.Errorf("validateJobName() error = %v, wantErr %v", err, tt.wantErr)
+ return
+ }
+ if !tt.wantErr {
+ if tt.configs[0].JobName != tt.expectedJob {
+ t.Errorf("Expected to find a job with name %v but did not find it", tt.expectedJob)
+ return
+ }
+ }
+ })
+ }
+}
diff --git a/pkg/promtail/targets/manager.go b/pkg/promtail/targets/manager.go
index b312e68c172de..1c0eb2f16124c 100644
--- a/pkg/promtail/targets/manager.go
+++ b/pkg/promtail/targets/manager.go
@@ -8,14 +8,20 @@ import (
"github.com/grafana/loki/pkg/promtail/api"
"github.com/grafana/loki/pkg/promtail/positions"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/file"
+ "github.com/grafana/loki/pkg/promtail/targets/journal"
+ "github.com/grafana/loki/pkg/promtail/targets/lokipush"
+ "github.com/grafana/loki/pkg/promtail/targets/stdin"
+ "github.com/grafana/loki/pkg/promtail/targets/syslog"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
type targetManager interface {
Ready() bool
Stop()
- ActiveTargets() map[string][]Target
- AllTargets() map[string][]Target
+ ActiveTargets() map[string][]target.Target
+ AllTargets() map[string][]target.Target
}
// TargetManagers manages a list of target managers.
@@ -26,21 +32,22 @@ type TargetManagers struct {
// NewTargetManagers makes a new TargetManagers
func NewTargetManagers(
- app Shutdownable,
+ app stdin.Shutdownable,
logger log.Logger,
positionsConfig positions.Config,
client api.EntryHandler,
- scrapeConfigs []scrape.Config,
- targetConfig *Config,
+ scrapeConfigs []scrapeconfig.Config,
+ targetConfig *file.Config,
) (*TargetManagers, error) {
var targetManagers []targetManager
- var fileScrapeConfigs []scrape.Config
- var journalScrapeConfigs []scrape.Config
- var syslogScrapeConfigs []scrape.Config
+ var fileScrapeConfigs []scrapeconfig.Config
+ var journalScrapeConfigs []scrapeconfig.Config
+ var syslogScrapeConfigs []scrapeconfig.Config
+ var pushScrapeConfigs []scrapeconfig.Config
if targetConfig.Stdin {
level.Debug(util.Logger).Log("msg", "configured to read from stdin")
- stdin, err := newStdinTargetManager(app, client, scrapeConfigs)
+ stdin, err := stdin.NewStdinTargetManager(app, client, scrapeConfigs)
if err != nil {
return nil, err
}
@@ -59,7 +66,7 @@ func NewTargetManagers(
}
}
if len(fileScrapeConfigs) > 0 {
- fileTargetManager, err := NewFileTargetManager(
+ fileTargetManager, err := file.NewFileTargetManager(
logger,
positions,
client,
@@ -78,7 +85,7 @@ func NewTargetManagers(
}
}
if len(journalScrapeConfigs) > 0 {
- journalTargetManager, err := NewJournalTargetManager(
+ journalTargetManager, err := journal.NewJournalTargetManager(
logger,
positions,
client,
@@ -96,13 +103,26 @@ func NewTargetManagers(
}
}
if len(syslogScrapeConfigs) > 0 {
- syslogTargetManager, err := NewSyslogTargetManager(logger, client, syslogScrapeConfigs)
+ syslogTargetManager, err := syslog.NewSyslogTargetManager(logger, client, syslogScrapeConfigs)
if err != nil {
return nil, errors.Wrap(err, "failed to make syslog target manager")
}
targetManagers = append(targetManagers, syslogTargetManager)
}
+ for _, cfg := range scrapeConfigs {
+ if cfg.PushConfig != nil {
+ pushScrapeConfigs = append(pushScrapeConfigs, cfg)
+ }
+ }
+ if len(pushScrapeConfigs) > 0 {
+ pushTargetManager, err := lokipush.NewPushTargetManager(logger, client, pushScrapeConfigs)
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to make Loki Push API target manager")
+ }
+ targetManagers = append(targetManagers, pushTargetManager)
+ }
+
return &TargetManagers{
targetManagers: targetManagers,
positions: positions,
@@ -111,8 +131,8 @@ func NewTargetManagers(
}
// ActiveTargets returns active targets per jobs
-func (tm *TargetManagers) ActiveTargets() map[string][]Target {
- result := map[string][]Target{}
+func (tm *TargetManagers) ActiveTargets() map[string][]target.Target {
+ result := map[string][]target.Target{}
for _, t := range tm.targetManagers {
for job, targets := range t.ActiveTargets() {
result[job] = append(result[job], targets...)
@@ -122,8 +142,8 @@ func (tm *TargetManagers) ActiveTargets() map[string][]Target {
}
// AllTargets returns all targets per jobs
-func (tm *TargetManagers) AllTargets() map[string][]Target {
- result := map[string][]Target{}
+func (tm *TargetManagers) AllTargets() map[string][]target.Target {
+ result := map[string][]target.Target{}
for _, t := range tm.targetManagers {
for job, targets := range t.AllTargets() {
result[job] = append(result[job], targets...)
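
Routing to the new manager hinges entirely on a non-nil `PushConfig` field, as the loop above shows. Below is a minimal sketch of a scrape config that would be picked up this way; the field values are illustrative assumptions, shaped after the tests in this change.

```go
package main

import (
	"fmt"

	"github.com/prometheus/common/model"
	"github.com/weaveworks/common/server"

	"github.com/grafana/loki/pkg/promtail/scrapeconfig"
)

func main() {
	cfg := scrapeconfig.Config{
		JobName: "push1", // must be non-empty and unique; see validateJobName
		PushConfig: &scrapeconfig.PushTargetConfig{
			Server:        server.Config{HTTPListenPort: 3500}, // port is an assumption
			Labels:        model.LabelSet{"pushserver": "pushserver1"},
			KeepTimestamp: true,
		},
	}

	// NewTargetManagers appends this config to pushScrapeConfigs because
	// PushConfig is non-nil, and builds a PushTargetManager from it.
	fmt.Println(cfg.PushConfig != nil)
}
```
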
diff --git a/pkg/promtail/targets/stdin_target_manager.go b/pkg/promtail/targets/stdin/stdin_target_manager.go
similarity index 84%
rename from pkg/promtail/targets/stdin_target_manager.go
rename to pkg/promtail/targets/stdin/stdin_target_manager.go
index e67d2037230b9..09737157f8ab9 100644
--- a/pkg/promtail/targets/stdin_target_manager.go
+++ b/pkg/promtail/targets/stdin/stdin_target_manager.go
@@ -1,4 +1,4 @@
-package targets
+package stdin
import (
"bufio"
@@ -19,7 +19,8 @@ import (
"github.com/grafana/loki/pkg/logentry/stages"
"github.com/grafana/loki/pkg/promtail/api"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
// bufferSize is the size of the buffered reader
@@ -36,7 +37,7 @@ var (
stdIn file = os.Stdin
hostName, _ = os.Hostname()
// defaultStdInCfg is the default config for stdin target if none provided.
- defaultStdInCfg = scrape.Config{
+ defaultStdInCfg = scrapeconfig.Config{
JobName: "stdin",
ServiceDiscoveryConfig: config.ServiceDiscoveryConfig{
StaticConfigs: []*targetgroup.Group{
@@ -56,7 +57,7 @@ type stdinTargetManager struct {
app Shutdownable
}
-func newStdinTargetManager(app Shutdownable, client api.EntryHandler, configs []scrape.Config) (*stdinTargetManager, error) {
+func NewStdinTargetManager(app Shutdownable, client api.EntryHandler, configs []scrapeconfig.Config) (*stdinTargetManager, error) {
reader, err := newReaderTarget(stdIn, client, getStdinConfig(configs))
if err != nil {
return nil, err
@@ -73,7 +74,7 @@ func newStdinTargetManager(app Shutdownable, client api.EntryHandler, configs []
return stdinManager, nil
}
-func getStdinConfig(configs []scrape.Config) scrape.Config {
+func getStdinConfig(configs []scrapeconfig.Config) scrapeconfig.Config {
cfg := defaultStdInCfg
// if we receive configs we use the first one.
if len(configs) > 0 {
@@ -88,9 +89,9 @@ func getStdinConfig(configs []scrape.Config) scrape.Config {
func (t *stdinTargetManager) Ready() bool {
return t.ctx.Err() == nil
}
-func (t *stdinTargetManager) Stop() { t.cancel() }
-func (t *stdinTargetManager) ActiveTargets() map[string][]Target { return nil }
-func (t *stdinTargetManager) AllTargets() map[string][]Target { return nil }
+func (t *stdinTargetManager) Stop() { t.cancel() }
+func (t *stdinTargetManager) ActiveTargets() map[string][]target.Target { return nil }
+func (t *stdinTargetManager) AllTargets() map[string][]target.Target { return nil }
type readerTarget struct {
in *bufio.Reader
@@ -102,7 +103,7 @@ type readerTarget struct {
ctx context.Context
}
-func newReaderTarget(in io.Reader, client api.EntryHandler, cfg scrape.Config) (*readerTarget, error) {
+func newReaderTarget(in io.Reader, client api.EntryHandler, cfg scrapeconfig.Config) (*readerTarget, error) {
pipeline, err := stages.NewPipeline(log.With(util.Logger, "component", "pipeline"), cfg.PipelineStages, &cfg.JobName, prometheus.DefaultRegisterer)
if err != nil {
return nil, err
diff --git a/pkg/promtail/targets/stdin_target_manager_test.go b/pkg/promtail/targets/stdin/stdin_target_manager_test.go
similarity index 86%
rename from pkg/promtail/targets/stdin_target_manager_test.go
rename to pkg/promtail/targets/stdin/stdin_target_manager_test.go
index cfba68367d07a..b282780172f46 100644
--- a/pkg/promtail/targets/stdin_target_manager_test.go
+++ b/pkg/promtail/targets/stdin/stdin_target_manager_test.go
@@ -1,4 +1,4 @@
-package targets
+package stdin
import (
"bytes"
@@ -13,7 +13,7 @@ import (
"gopkg.in/yaml.v2"
"github.com/grafana/loki/pkg/logentry/stages"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
)
type line struct {
@@ -34,14 +34,14 @@ func Test_newReaderTarget(t *testing.T) {
tests := []struct {
name string
in io.Reader
- cfg scrape.Config
+ cfg scrapeconfig.Config
want []line
wantErr bool
}{
{
"no newlines",
bytes.NewReader([]byte("bar")),
- scrape.Config{},
+ scrapeconfig.Config{},
[]line{
{model.LabelSet{}, "bar"},
},
@@ -50,14 +50,14 @@ func Test_newReaderTarget(t *testing.T) {
{
"empty",
bytes.NewReader([]byte("")),
- scrape.Config{},
+ scrapeconfig.Config{},
nil,
false,
},
{
"newlines",
bytes.NewReader([]byte("\nfoo\r\nbar")),
- scrape.Config{},
+ scrapeconfig.Config{},
[]line{
{model.LabelSet{}, "foo"},
{model.LabelSet{}, "bar"},
@@ -67,7 +67,7 @@ func Test_newReaderTarget(t *testing.T) {
{
"pipeline",
bytes.NewReader([]byte("\nfoo\r\nbar")),
- scrape.Config{
+ scrapeconfig.Config{
PipelineStages: loadConfig(stagesConfig),
},
[]line{
@@ -129,7 +129,7 @@ func Test_Shutdown(t *testing.T) {
stdIn = newFakeStin("line")
appMock := &mockShutdownable{called: make(chan bool, 1)}
recorder := &clientRecorder{}
- manager, err := newStdinTargetManager(appMock, recorder, []scrape.Config{{}})
+ manager, err := NewStdinTargetManager(appMock, recorder, []scrapeconfig.Config{{}})
require.NoError(t, err)
require.NotNil(t, manager)
called := <-appMock.called
@@ -140,12 +140,12 @@ func Test_Shutdown(t *testing.T) {
func Test_StdinConfigs(t *testing.T) {
// should take the first config
- require.Equal(t, scrape.DefaultScrapeConfig, getStdinConfig([]scrape.Config{
- scrape.DefaultScrapeConfig,
+ require.Equal(t, scrapeconfig.DefaultScrapeConfig, getStdinConfig([]scrapeconfig.Config{
+ scrapeconfig.DefaultScrapeConfig,
{},
}))
// or use the default if none is provided
- require.Equal(t, defaultStdInCfg, getStdinConfig([]scrape.Config{}))
+ require.Equal(t, defaultStdInCfg, getStdinConfig([]scrapeconfig.Config{}))
}
var stagesConfig = `
diff --git a/pkg/promtail/targets/syslogparser/syslogparser.go b/pkg/promtail/targets/syslog/syslogparser/syslogparser.go
similarity index 100%
rename from pkg/promtail/targets/syslogparser/syslogparser.go
rename to pkg/promtail/targets/syslog/syslogparser/syslogparser.go
diff --git a/pkg/promtail/targets/syslogparser/syslogparser_test.go b/pkg/promtail/targets/syslog/syslogparser/syslogparser_test.go
similarity index 96%
rename from pkg/promtail/targets/syslogparser/syslogparser_test.go
rename to pkg/promtail/targets/syslog/syslogparser/syslogparser_test.go
index bb6e011c800d4..5f616ec293a7d 100644
--- a/pkg/promtail/targets/syslogparser/syslogparser_test.go
+++ b/pkg/promtail/targets/syslog/syslogparser/syslogparser_test.go
@@ -9,7 +9,7 @@ import (
"github.com/influxdata/go-syslog/v3/rfc5424"
"github.com/stretchr/testify/require"
- "github.com/grafana/loki/pkg/promtail/targets/syslogparser"
+ "github.com/grafana/loki/pkg/promtail/targets/syslog/syslogparser"
)
func TestParseStream_OctetCounting(t *testing.T) {
diff --git a/pkg/promtail/targets/syslogtarget.go b/pkg/promtail/targets/syslog/syslogtarget.go
similarity index 95%
rename from pkg/promtail/targets/syslogtarget.go
rename to pkg/promtail/targets/syslog/syslogtarget.go
index c1d139eec7cbc..4e5be435a8ba9 100644
--- a/pkg/promtail/targets/syslogtarget.go
+++ b/pkg/promtail/targets/syslog/syslogtarget.go
@@ -1,4 +1,4 @@
-package targets
+package syslog
import (
"context"
@@ -22,8 +22,9 @@ import (
"github.com/prometheus/prometheus/pkg/relabel"
"github.com/grafana/loki/pkg/promtail/api"
- "github.com/grafana/loki/pkg/promtail/scrape"
- "github.com/grafana/loki/pkg/promtail/targets/syslogparser"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/syslog/syslogparser"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
var (
@@ -45,7 +46,7 @@ var (
type SyslogTarget struct {
logger log.Logger
handler api.EntryHandler
- config *scrape.SyslogTargetConfig
+ config *scrapeconfig.SyslogTargetConfig
relabelConfig []*relabel.Config
listener net.Listener
@@ -66,7 +67,7 @@ func NewSyslogTarget(
logger log.Logger,
handler api.EntryHandler,
relabel []*relabel.Config,
- config *scrape.SyslogTargetConfig,
+ config *scrapeconfig.SyslogTargetConfig,
) (*SyslogTarget, error) {
ctx, cancel := context.WithCancel(context.Background())
@@ -263,8 +264,8 @@ func lookupAddr(addr string) string {
}
// Type returns SyslogTargetType.
-func (t *SyslogTarget) Type() TargetType {
- return SyslogTargetType
+func (t *SyslogTarget) Type() target.TargetType {
+ return target.SyslogTargetType
}
// Ready indicates whether or not the syslog target is ready to be read from.
diff --git a/pkg/promtail/targets/syslogtarget_test.go b/pkg/promtail/targets/syslog/syslogtarget_test.go
similarity index 97%
rename from pkg/promtail/targets/syslogtarget_test.go
rename to pkg/promtail/targets/syslog/syslogtarget_test.go
index 20be64dd4e0c7..cb1af821b60b4 100644
--- a/pkg/promtail/targets/syslogtarget_test.go
+++ b/pkg/promtail/targets/syslog/syslogtarget_test.go
@@ -1,4 +1,4 @@
-package targets
+package syslog
import (
"fmt"
@@ -17,7 +17,7 @@ import (
"github.com/stretchr/testify/require"
"gopkg.in/yaml.v2"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
)
type ClientMessage struct {
@@ -62,7 +62,7 @@ func testSyslogTarget(t *testing.T, octetCounting bool) {
logger := log.NewLogfmtLogger(w)
client := &TestLabeledClient{log: logger}
- tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrape.SyslogTargetConfig{
+ tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrapeconfig.SyslogTargetConfig{
ListenAddress: "127.0.0.1:0",
LabelStructuredData: true,
Labels: model.LabelSet{
@@ -161,7 +161,7 @@ func TestSyslogTarget_InvalidData(t *testing.T) {
logger := log.NewLogfmtLogger(w)
client := &TestLabeledClient{log: logger}
- tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrape.SyslogTargetConfig{
+ tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrapeconfig.SyslogTargetConfig{
ListenAddress: "127.0.0.1:0",
})
require.NoError(t, err)
@@ -191,7 +191,7 @@ func TestSyslogTarget_NonUTF8Message(t *testing.T) {
logger := log.NewLogfmtLogger(w)
client := &TestLabeledClient{log: logger}
- tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrape.SyslogTargetConfig{
+ tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrapeconfig.SyslogTargetConfig{
ListenAddress: "127.0.0.1:0",
})
require.NoError(t, err)
@@ -228,7 +228,7 @@ func TestSyslogTarget_IdleTimeout(t *testing.T) {
logger := log.NewLogfmtLogger(w)
client := &TestLabeledClient{log: logger}
- tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrape.SyslogTargetConfig{
+ tgt, err := NewSyslogTarget(logger, client, relabelConfig(t), &scrapeconfig.SyslogTargetConfig{
ListenAddress: "127.0.0.1:0",
IdleTimeout: time.Millisecond,
})
diff --git a/pkg/promtail/targets/syslogtargetmanager.go b/pkg/promtail/targets/syslog/syslogtargetmanager.go
similarity index 82%
rename from pkg/promtail/targets/syslogtargetmanager.go
rename to pkg/promtail/targets/syslog/syslogtargetmanager.go
index 00cb1227ba7c2..7bd48ef2dbdd6 100644
--- a/pkg/promtail/targets/syslogtargetmanager.go
+++ b/pkg/promtail/targets/syslog/syslogtargetmanager.go
@@ -1,4 +1,4 @@
-package targets
+package syslog
import (
"github.com/go-kit/kit/log"
@@ -7,7 +7,8 @@ import (
"github.com/grafana/loki/pkg/logentry/stages"
"github.com/grafana/loki/pkg/promtail/api"
- "github.com/grafana/loki/pkg/promtail/scrape"
+ "github.com/grafana/loki/pkg/promtail/scrapeconfig"
+ "github.com/grafana/loki/pkg/promtail/targets/target"
)
// SyslogTargetManager manages a series of SyslogTargets.
@@ -20,7 +21,7 @@ type SyslogTargetManager struct {
func NewSyslogTargetManager(
logger log.Logger,
client api.EntryHandler,
- scrapeConfigs []scrape.Config,
+ scrapeConfigs []scrapeconfig.Config,
) (*SyslogTargetManager, error) {
tm := &SyslogTargetManager{
@@ -68,16 +69,16 @@ func (tm *SyslogTargetManager) Stop() {
// ActiveTargets returns the list of SyslogTargets where syslog data
// is being read. ActiveTargets is an alias to AllTargets as
// SyslogTargets cannot be deactivated, only stopped.
-func (tm *SyslogTargetManager) ActiveTargets() map[string][]Target {
+func (tm *SyslogTargetManager) ActiveTargets() map[string][]target.Target {
return tm.AllTargets()
}
// AllTargets returns the list of all targets where syslog data
// is currently being read.
-func (tm *SyslogTargetManager) AllTargets() map[string][]Target {
- result := make(map[string][]Target, len(tm.targets))
+func (tm *SyslogTargetManager) AllTargets() map[string][]target.Target {
+ result := make(map[string][]target.Target, len(tm.targets))
for k, v := range tm.targets {
- result[k] = []Target{v}
+ result[k] = []target.Target{v}
}
return result
}
diff --git a/pkg/promtail/targets/target.go b/pkg/promtail/targets/target/target.go
similarity index 92%
rename from pkg/promtail/targets/target.go
rename to pkg/promtail/targets/target/target.go
index 15c4f51fb1e00..03cdb693256d0 100644
--- a/pkg/promtail/targets/target.go
+++ b/pkg/promtail/targets/target/target.go
@@ -1,4 +1,4 @@
-package targets
+package target
import (
"github.com/prometheus/common/model"
@@ -19,6 +19,9 @@ const (
// DroppedTargetType is a target that's been dropped.
DroppedTargetType = TargetType("dropped")
+
+ // PushTargetType is a Loki push target.
+ PushTargetType = TargetType("Push")
)
// Target is a promtail scrape target
@@ -46,7 +49,7 @@ type droppedTarget struct {
reason string
}
-func newDroppedTarget(reason string, discoveredLabels model.LabelSet) Target {
+func NewDroppedTarget(reason string, discoveredLabels model.LabelSet) Target {
return &droppedTarget{
discoveredLabels: discoveredLabels,
reason: reason,
diff --git a/pkg/promtail/targets/testutils/testutils.go b/pkg/promtail/targets/testutils/testutils.go
new file mode 100644
index 0000000000000..092c3b1e089fa
--- /dev/null
+++ b/pkg/promtail/targets/testutils/testutils.go
@@ -0,0 +1,46 @@
+package testutils
+
+import (
+ "math/rand"
+ "sync"
+ "time"
+
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ "github.com/prometheus/common/model"
+)
+
+type Entry struct {
+ Labels model.LabelSet
+ Time time.Time
+ Log string
+}
+
+type TestClient struct {
+ Log log.Logger
+ Messages []*Entry
+ sync.Mutex
+}
+
+func (c *TestClient) Handle(ls model.LabelSet, t time.Time, s string) error {
+ level.Debug(c.Log).Log("msg", "received log", "log", s)
+
+ c.Lock()
+ defer c.Unlock()
+ c.Messages = append(c.Messages, &Entry{ls, t, s})
+ return nil
+}
+
+func InitRandom() {
+ rand.Seed(time.Now().UnixNano())
+}
+
+var letters = []rune("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
+
+func RandName() string {
+ b := make([]rune, 10)
+ for i := range b {
+ b[i] = letters[rand.Intn(len(letters))]
+ }
+ return string(b)
+}
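
Since several test files in this change now share the helper above, here is a brief sketch of the intended usage; the package and test names are hypothetical.

```go
package sometarget_test

import (
	"testing"
	"time"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/common/model"

	"github.com/grafana/loki/pkg/promtail/targets/testutils"
)

func TestCapturesEntries(t *testing.T) {
	client := &testutils.TestClient{
		Log:      log.NewNopLogger(),
		Messages: make([]*testutils.Entry, 0),
	}

	// TestClient satisfies the EntryHandler contract, so it can stand in
	// for a real client inside a target; here Handle is called directly.
	if err := client.Handle(model.LabelSet{"job": "test"}, time.Now(), "a log line"); err != nil {
		t.Fatal(err)
	}

	if len(client.Messages) != 1 {
		t.Fatalf("expected 1 captured entry, got %d", len(client.Messages))
	}
}
```
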
diff --git a/vendor/github.com/imdario/mergo/.deepsource.toml b/vendor/github.com/imdario/mergo/.deepsource.toml
new file mode 100644
index 0000000000000..8a0681af85590
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/.deepsource.toml
@@ -0,0 +1,12 @@
+version = 1
+
+test_patterns = [
+ "*_test.go"
+]
+
+[[analyzers]]
+name = "go"
+enabled = true
+
+ [analyzers.meta]
+ import_path = "github.com/imdario/mergo"
\ No newline at end of file
diff --git a/vendor/github.com/imdario/mergo/.gitignore b/vendor/github.com/imdario/mergo/.gitignore
new file mode 100644
index 0000000000000..529c3412ba95d
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/.gitignore
@@ -0,0 +1,33 @@
+#### joe made this: http://goel.io/joe
+
+#### go ####
+# Binaries for programs and plugins
+*.exe
+*.dll
+*.so
+*.dylib
+
+# Test binary, build with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
+# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
+.glide/
+
+#### vim ####
+# Swap
+[._]*.s[a-v][a-z]
+[._]*.sw[a-p]
+[._]s[a-v][a-z]
+[._]sw[a-p]
+
+# Session
+Session.vim
+
+# Temporary
+.netrwhist
+*~
+# Auto-generated tag files
+tags
diff --git a/vendor/github.com/imdario/mergo/.travis.yml b/vendor/github.com/imdario/mergo/.travis.yml
new file mode 100644
index 0000000000000..dad29725f867a
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/.travis.yml
@@ -0,0 +1,9 @@
+language: go
+install:
+ - go get -t
+ - go get golang.org/x/tools/cmd/cover
+ - go get github.com/mattn/goveralls
+script:
+ - go test -race -v ./...
+after_script:
+ - $HOME/gopath/bin/goveralls -service=travis-ci -repotoken $COVERALLS_TOKEN
diff --git a/vendor/github.com/imdario/mergo/CODE_OF_CONDUCT.md b/vendor/github.com/imdario/mergo/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000000000..469b44907a09a
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/CODE_OF_CONDUCT.md
@@ -0,0 +1,46 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [email protected]. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/4/
diff --git a/vendor/github.com/imdario/mergo/LICENSE b/vendor/github.com/imdario/mergo/LICENSE
new file mode 100644
index 0000000000000..686680298da2f
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/LICENSE
@@ -0,0 +1,28 @@
+Copyright (c) 2013 Dario Castañé. All rights reserved.
+Copyright (c) 2012 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/github.com/imdario/mergo/README.md b/vendor/github.com/imdario/mergo/README.md
new file mode 100644
index 0000000000000..02fc81e0626e3
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/README.md
@@ -0,0 +1,238 @@
+# Mergo
+
+A helper to merge structs and maps in Golang. Useful for configuration default values, avoiding messy if-statements.
+
+Also a lovely [comune](http://en.wikipedia.org/wiki/Mergo) (municipality) in the Province of Ancona in the Italian region of Marche.
+
+## Status
+
+It is ready for production use. [It is used in several projects by Docker, Google, The Linux Foundation, VMWare, Shopify, etc](https://github.com/imdario/mergo#mergo-in-the-wild).
+
+[![GoDoc][3]][4]
+[![GoCard][5]][6]
+[![Build Status][1]][2]
+[![Coverage Status][7]][8]
+[![Sourcegraph][9]][10]
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Fimdario%2Fmergo?ref=badge_shield)
+
+[1]: https://travis-ci.org/imdario/mergo.png
+[2]: https://travis-ci.org/imdario/mergo
+[3]: https://godoc.org/github.com/imdario/mergo?status.svg
+[4]: https://godoc.org/github.com/imdario/mergo
+[5]: https://goreportcard.com/badge/imdario/mergo
+[6]: https://goreportcard.com/report/github.com/imdario/mergo
+[7]: https://coveralls.io/repos/github/imdario/mergo/badge.svg?branch=master
+[8]: https://coveralls.io/github/imdario/mergo?branch=master
+[9]: https://sourcegraph.com/github.com/imdario/mergo/-/badge.svg
+[10]: https://sourcegraph.com/github.com/imdario/mergo?badge
+
+### Latest release
+
+[Release v0.3.7](https://github.com/imdario/mergo/releases/tag/v0.3.7).
+
+### Important note
+
+Please keep in mind that in [0.3.2](//github.com/imdario/mergo/releases/tag/0.3.2) Mergo changed `Merge()`and `Map()` signatures to support [transformers](#transformers). An optional/variadic argument has been added, so it won't break existing code.
+
+If you were using Mergo **before** April 6th 2015, please check your project works as intended after updating your local copy with ```go get -u github.com/imdario/mergo```. I apologize for any issue caused by its previous behavior and any future bug that Mergo could cause (I hope it won't!) in existing projects after the change (release 0.2.0).
+
+### Donations
+
+If Mergo is useful to you, consider buying me a coffee, a beer or making a monthly donation so I can keep building great free software. :heart_eyes:
+
+<a href='https://ko-fi.com/B0B58839' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://az743702.vo.msecnd.net/cdn/kofi1.png?v=0' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
+[](https://beerpay.io/imdario/mergo)
+[](https://beerpay.io/imdario/mergo)
+<a href="https://liberapay.com/dario/donate"><img alt="Donate using Liberapay" src="https://liberapay.com/assets/widgets/donate.svg"></a>
+
+### Mergo in the wild
+
+- [moby/moby](https://github.com/moby/moby)
+- [kubernetes/kubernetes](https://github.com/kubernetes/kubernetes)
+- [vmware/dispatch](https://github.com/vmware/dispatch)
+- [Shopify/themekit](https://github.com/Shopify/themekit)
+- [imdario/zas](https://github.com/imdario/zas)
+- [matcornic/hermes](https://github.com/matcornic/hermes)
+- [OpenBazaar/openbazaar-go](https://github.com/OpenBazaar/openbazaar-go)
+- [kataras/iris](https://github.com/kataras/iris)
+- [michaelsauter/crane](https://github.com/michaelsauter/crane)
+- [go-task/task](https://github.com/go-task/task)
+- [sensu/uchiwa](https://github.com/sensu/uchiwa)
+- [ory/hydra](https://github.com/ory/hydra)
+- [sisatech/vcli](https://github.com/sisatech/vcli)
+- [dairycart/dairycart](https://github.com/dairycart/dairycart)
+- [projectcalico/felix](https://github.com/projectcalico/felix)
+- [resin-os/balena](https://github.com/resin-os/balena)
+- [go-kivik/kivik](https://github.com/go-kivik/kivik)
+- [Telefonica/govice](https://github.com/Telefonica/govice)
+- [supergiant/supergiant](https://github.com/supergiant/supergiant)
+- [SergeyTsalkov/brooce](https://github.com/SergeyTsalkov/brooce)
+- [soniah/dnsmadeeasy](https://github.com/soniah/dnsmadeeasy)
+- [ohsu-comp-bio/funnel](https://github.com/ohsu-comp-bio/funnel)
+- [EagerIO/Stout](https://github.com/EagerIO/Stout)
+- [lynndylanhurley/defsynth-api](https://github.com/lynndylanhurley/defsynth-api)
+- [russross/canvasassignments](https://github.com/russross/canvasassignments)
+- [rdegges/cryptly-api](https://github.com/rdegges/cryptly-api)
+- [casualjim/exeggutor](https://github.com/casualjim/exeggutor)
+- [divshot/gitling](https://github.com/divshot/gitling)
+- [RWJMurphy/gorl](https://github.com/RWJMurphy/gorl)
+- [andrerocker/deploy42](https://github.com/andrerocker/deploy42)
+- [elwinar/rambler](https://github.com/elwinar/rambler)
+- [tmaiaroto/gopartman](https://github.com/tmaiaroto/gopartman)
+- [jfbus/impressionist](https://github.com/jfbus/impressionist)
+- [Jmeyering/zealot](https://github.com/Jmeyering/zealot)
+- [godep-migrator/rigger-host](https://github.com/godep-migrator/rigger-host)
+- [Dronevery/MultiwaySwitch-Go](https://github.com/Dronevery/MultiwaySwitch-Go)
+- [thoas/picfit](https://github.com/thoas/picfit)
+- [mantasmatelis/whooplist-server](https://github.com/mantasmatelis/whooplist-server)
+- [jnuthong/item_search](https://github.com/jnuthong/item_search)
+- [bukalapak/snowboard](https://github.com/bukalapak/snowboard)
+
+## Installation
+
+ go get github.com/imdario/mergo
+
+ // use in your .go code
+ import (
+ "github.com/imdario/mergo"
+ )
+
+## Usage
+
+You can only merge structs of the same type with exported fields initialized as the zero value of their type, and maps of the same type. Mergo won't merge unexported (private) fields but will recursively merge any exported ones. It won't merge empty struct values either, as [they are not considered zero values](https://golang.org/ref/spec#The_zero_value). Maps are merged recursively, except for structs inside maps (because they are not addressable using Go reflection).
+
+```go
+if err := mergo.Merge(&dst, src); err != nil {
+ // ...
+}
+```
+
+Also, you can merge overwriting values using the option `WithOverride`.
+
+```go
+if err := mergo.Merge(&dst, src, mergo.WithOverride); err != nil {
+ // ...
+}
+```
+
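+Options can also be combined. A minimal hedged sketch (assuming `WithAppendSlice` as provided by this version of Mergo):
+
+```go
+// Overwrite non-empty dst fields, but append slices instead of replacing them.
+if err := mergo.Merge(&dst, src, mergo.WithOverride, mergo.WithAppendSlice); err != nil {
+	// ...
+}
+```
+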
+Additionally, you can map a `map[string]interface{}` to a struct (and vice versa, from struct to map), following the same restrictions as in `Merge()`. Keys are capitalized to find each corresponding exported field.
+
+```go
+if err := mergo.Map(&dst, srcMap); err != nil {
+ // ...
+}
+```
+
+Warning: mapping a struct to a map is not recursive. Don't expect Mergo to map struct members of your struct as `map[string]interface{}`; they will simply be assigned as values.
+
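+A minimal hedged sketch of this behavior (the type names here are illustrative, not part of Mergo's API):
+
+```go
+package main
+
+import (
+	"fmt"
+
+	"github.com/imdario/mergo"
+)
+
+type Inner struct {
+	Value int
+}
+
+type Outer struct {
+	Inner Inner
+}
+
+func main() {
+	dst := map[string]interface{}{}
+	if err := mergo.Map(&dst, Outer{Inner: Inner{Value: 1}}); err != nil {
+		fmt.Println(err)
+		return
+	}
+	// dst["inner"] holds an Inner struct value, not a nested map.
+	fmt.Printf("%#v\n", dst["inner"])
+}
+```
+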
+More information and examples in [godoc documentation](http://godoc.org/github.com/imdario/mergo).
+
+### Nice example
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/imdario/mergo"
+)
+
+type Foo struct {
+ A string
+ B int64
+}
+
+func main() {
+ src := Foo{
+ A: "one",
+ B: 2,
+ }
+ dest := Foo{
+ A: "two",
+ }
+ mergo.Merge(&dest, src)
+ fmt.Println(dest)
+ // Will print
+ // {two 2}
+}
+```
+
+Note: if tests are failing due to a missing package, please execute:
+
+ go get gopkg.in/yaml.v2
+
+### Transformers
+
+Transformers allow merging specific types differently from the default behavior; in other words, you can customize how certain types are merged. For example, `time.Time` is a struct; it has no single zero value, but `IsZero` can return true because all of its fields hold zero values. How can we merge a non-zero `time.Time`?
+
+```go
+package main
+
+import (
+ "fmt"
+ "github.com/imdario/mergo"
+ "reflect"
+ "time"
+)
+
+type timeTransformer struct {
+}
+
+func (t timeTransformer) Transformer(typ reflect.Type) func(dst, src reflect.Value) error {
+ if typ == reflect.TypeOf(time.Time{}) {
+ return func(dst, src reflect.Value) error {
+ if dst.CanSet() {
+ isZero := dst.MethodByName("IsZero")
+ result := isZero.Call([]reflect.Value{})
+ if result[0].Bool() {
+ dst.Set(src)
+ }
+ }
+ return nil
+ }
+ }
+ return nil
+}
+
+type Snapshot struct {
+ Time time.Time
+ // ...
+}
+
+func main() {
+ src := Snapshot{time.Now()}
+ dest := Snapshot{}
+	mergo.Merge(&dest, src, mergo.WithTransformers(timeTransformer{}))
+ fmt.Println(dest)
+ // Will print
+ // { 2018-01-12 01:15:00 +0000 UTC m=+0.000000001 }
+}
+```
+
+
+## Contact me
+
+If I can help you, you have an idea or you are using Mergo in your projects, don't hesitate to drop me a line (or a pull request): [@im_dario](https://twitter.com/im_dario)
+
+## About
+
+Written by [Dario Castañé](http://dario.im).
+
+## Top Contributors
+
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/0)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/1)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/2)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/3)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/4)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/5)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/6)
+[](https://sourcerer.io/fame/imdario/imdario/mergo/links/7)
+
+
+## License
+
+[BSD 3-Clause](http://opensource.org/licenses/BSD-3-Clause) license, as [Go language](http://golang.org/LICENSE).
+
+
+[](https://app.fossa.io/projects/git%2Bgithub.com%2Fimdario%2Fmergo?ref=badge_large)
diff --git a/vendor/github.com/imdario/mergo/doc.go b/vendor/github.com/imdario/mergo/doc.go
new file mode 100644
index 0000000000000..6e9aa7baf3540
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/doc.go
@@ -0,0 +1,44 @@
+// Copyright 2013 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+/*
+Package mergo merges same-type structs and maps by setting default values in zero-value fields.
+
+Mergo won't merge unexported (private) fields but will recursively merge any exported ones. It also won't merge structs inside maps (because they are not addressable using Go reflection).
+
+Usage
+
+From my own work-in-progress project:
+
+ type networkConfig struct {
+ Protocol string
+ Address string
+		ServerType string `json:"server_type"`
+ Port uint16
+ }
+
+ type FssnConfig struct {
+ Network networkConfig
+ }
+
+ var fssnDefault = FssnConfig {
+ networkConfig {
+ "tcp",
+ "127.0.0.1",
+ "http",
+ 31560,
+ },
+ }
+
+ // Inside a function [...]
+
+ if err := mergo.Merge(&config, fssnDefault); err != nil {
+ log.Fatal(err)
+ }
+
+ // More code [...]
+
+*/
+package mergo
diff --git a/vendor/github.com/imdario/mergo/map.go b/vendor/github.com/imdario/mergo/map.go
new file mode 100644
index 0000000000000..d83258b4dda22
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/map.go
@@ -0,0 +1,176 @@
+// Copyright 2014 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Based on src/pkg/reflect/deepequal.go from official
+// golang's stdlib.
+
+package mergo
+
+import (
+ "fmt"
+ "reflect"
+ "unicode"
+ "unicode/utf8"
+)
+
+func changeInitialCase(s string, mapper func(rune) rune) string {
+ if s == "" {
+ return s
+ }
+ r, n := utf8.DecodeRuneInString(s)
+ return string(mapper(r)) + s[n:]
+}
+
+func isExported(field reflect.StructField) bool {
+ r, _ := utf8.DecodeRuneInString(field.Name)
+ return r >= 'A' && r <= 'Z'
+}
+
+// Traverses recursively both values, assigning src's fields values to dst.
+// The map argument tracks comparisons that have already been seen, which allows
+// short circuiting on recursive types.
+func deepMap(dst, src reflect.Value, visited map[uintptr]*visit, depth int, config *Config) (err error) {
+ overwrite := config.Overwrite
+ if dst.CanAddr() {
+ addr := dst.UnsafeAddr()
+ h := 17 * addr
+ seen := visited[h]
+ typ := dst.Type()
+ for p := seen; p != nil; p = p.next {
+ if p.ptr == addr && p.typ == typ {
+ return nil
+ }
+ }
+ // Remember, remember...
+ visited[h] = &visit{addr, typ, seen}
+ }
+ zeroValue := reflect.Value{}
+ switch dst.Kind() {
+ case reflect.Map:
+ dstMap := dst.Interface().(map[string]interface{})
+ for i, n := 0, src.NumField(); i < n; i++ {
+ srcType := src.Type()
+ field := srcType.Field(i)
+ if !isExported(field) {
+ continue
+ }
+ fieldName := field.Name
+ fieldName = changeInitialCase(fieldName, unicode.ToLower)
+ if v, ok := dstMap[fieldName]; !ok || (isEmptyValue(reflect.ValueOf(v)) || overwrite) {
+ dstMap[fieldName] = src.Field(i).Interface()
+ }
+ }
+ case reflect.Ptr:
+ if dst.IsNil() {
+ v := reflect.New(dst.Type().Elem())
+ dst.Set(v)
+ }
+ dst = dst.Elem()
+ fallthrough
+ case reflect.Struct:
+ srcMap := src.Interface().(map[string]interface{})
+ for key := range srcMap {
+ config.overwriteWithEmptyValue = true
+ srcValue := srcMap[key]
+ fieldName := changeInitialCase(key, unicode.ToUpper)
+ dstElement := dst.FieldByName(fieldName)
+ if dstElement == zeroValue {
+ // We discard it because the field doesn't exist.
+ continue
+ }
+ srcElement := reflect.ValueOf(srcValue)
+ dstKind := dstElement.Kind()
+ srcKind := srcElement.Kind()
+ if srcKind == reflect.Ptr && dstKind != reflect.Ptr {
+ srcElement = srcElement.Elem()
+ srcKind = reflect.TypeOf(srcElement.Interface()).Kind()
+ } else if dstKind == reflect.Ptr {
+ // Can this work? I guess it can't.
+ if srcKind != reflect.Ptr && srcElement.CanAddr() {
+ srcPtr := srcElement.Addr()
+ srcElement = reflect.ValueOf(srcPtr)
+ srcKind = reflect.Ptr
+ }
+ }
+
+ if !srcElement.IsValid() {
+ continue
+ }
+ if srcKind == dstKind {
+ if _, err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ } else if dstKind == reflect.Interface && dstElement.Kind() == reflect.Interface {
+ if _, err = deepMerge(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ } else if srcKind == reflect.Map {
+ if err = deepMap(dstElement, srcElement, visited, depth+1, config); err != nil {
+ return
+ }
+ } else {
+ return fmt.Errorf("type mismatch on %s field: found %v, expected %v", fieldName, srcKind, dstKind)
+ }
+ }
+ }
+ return
+}
+
+// Map sets fields' values in dst from src.
+// src can be a map with string keys or a struct. dst must be the opposite:
+// if src is a map, dst must be a valid pointer to struct. If src is a struct,
+// dst must be map[string]interface{}.
+// It won't merge unexported (private) fields and will recursively merge
+// any exported field.
+// If dst is a map, keys will be src fields' names in lower camel case.
+// Keys in src that don't match a field in dst will be skipped. This
+// doesn't apply if dst is a map.
+// This is a separate method from Merge because it is cleaner and it keeps sane
+// semantics: merging equal types, mapping different (restricted) types.
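+//
+// A minimal usage sketch (illustrative only; MyStruct is a hypothetical type
+// with an exported Field string):
+//
+//	dst := map[string]interface{}{}
+//	err := Map(&dst, MyStruct{Field: "x"}) // dst["field"] == "x"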
+func Map(dst, src interface{}, opts ...func(*Config)) error {
+ return _map(dst, src, opts...)
+}
+
+// MapWithOverwrite will do the same as Map except that non-empty dst attributes will be overridden by
+// non-empty src attribute values.
+// Deprecated: Use Map(…) with WithOverride
+func MapWithOverwrite(dst, src interface{}, opts ...func(*Config)) error {
+ return _map(dst, src, append(opts, WithOverride)...)
+}
+
+func _map(dst, src interface{}, opts ...func(*Config)) error {
+ var (
+ vDst, vSrc reflect.Value
+ err error
+ )
+ config := &Config{}
+
+ for _, opt := range opts {
+ opt(config)
+ }
+
+ if vDst, vSrc, err = resolveValues(dst, src); err != nil {
+ return err
+ }
+ // To be friction-less, we redirect equal-type arguments
+ // to deepMerge. Only because arguments can be anything.
+ if vSrc.Kind() == vDst.Kind() {
+ _, err := deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0, config)
+ return err
+ }
+ switch vSrc.Kind() {
+ case reflect.Struct:
+ if vDst.Kind() != reflect.Map {
+ return ErrExpectedMapAsDestination
+ }
+ case reflect.Map:
+ if vDst.Kind() != reflect.Struct {
+ return ErrExpectedStructAsDestination
+ }
+ default:
+ return ErrNotSupported
+ }
+ return deepMap(vDst, vSrc, make(map[uintptr]*visit), 0, config)
+}
diff --git a/vendor/github.com/imdario/mergo/merge.go b/vendor/github.com/imdario/mergo/merge.go
new file mode 100644
index 0000000000000..3332c9c2a7ac3
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/merge.go
@@ -0,0 +1,338 @@
+// Copyright 2013 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Based on src/pkg/reflect/deepequal.go from official
+// golang's stdlib.
+
+package mergo
+
+import (
+ "fmt"
+ "reflect"
+ "unsafe"
+)
+
+func hasExportedField(dst reflect.Value) (exported bool) {
+ for i, n := 0, dst.NumField(); i < n; i++ {
+ field := dst.Type().Field(i)
+ if isExportedComponent(&field) {
+ return true
+ }
+ }
+ return
+}
+
+func isExportedComponent(field *reflect.StructField) bool {
+ name := field.Name
+ pkgPath := field.PkgPath
+ if len(pkgPath) > 0 {
+ return false
+ }
+ c := name[0]
+ if 'a' <= c && c <= 'z' || c == '_' {
+ return false
+ }
+ return true
+}
+
+type Config struct {
+ Overwrite bool
+ AppendSlice bool
+ TypeCheck bool
+ Transformers Transformers
+ overwriteWithEmptyValue bool
+ overwriteSliceWithEmptyValue bool
+}
+
+type Transformers interface {
+ Transformer(reflect.Type) func(dst, src reflect.Value) error
+}
+
+// Traverses recursively both values, assigning src's fields values to dst.
+// The map argument tracks comparisons that have already been seen, which allows
+// short circuiting on recursive types.
+func deepMerge(dstIn, src reflect.Value, visited map[uintptr]*visit, depth int, config *Config) (dst reflect.Value, err error) {
+ dst = dstIn
+ overwrite := config.Overwrite
+ typeCheck := config.TypeCheck
+ overwriteWithEmptySrc := config.overwriteWithEmptyValue
+ overwriteSliceWithEmptySrc := config.overwriteSliceWithEmptyValue
+
+ if !src.IsValid() {
+ return
+ }
+
+ if dst.CanAddr() {
+ addr := dst.UnsafeAddr()
+ h := 17 * addr
+ seen := visited[h]
+ typ := dst.Type()
+ for p := seen; p != nil; p = p.next {
+ if p.ptr == addr && p.typ == typ {
+ return dst, nil
+ }
+ }
+ // Remember, remember...
+ visited[h] = &visit{addr, typ, seen}
+ }
+
+ if config.Transformers != nil && !isEmptyValue(dst) {
+ if fn := config.Transformers.Transformer(dst.Type()); fn != nil {
+ err = fn(dst, src)
+ return
+ }
+ }
+
+ if dst.IsValid() && src.IsValid() && src.Type() != dst.Type() {
+ err = fmt.Errorf("cannot append two different types (%s, %s)", src.Kind(), dst.Kind())
+ return
+ }
+
+ switch dst.Kind() {
+ case reflect.Struct:
+ if hasExportedField(dst) {
+ dstCp := reflect.New(dst.Type()).Elem()
+ for i, n := 0, dst.NumField(); i < n; i++ {
+ dstField := dst.Field(i)
+ structField := dst.Type().Field(i)
+ // copy un-exported struct fields
+ if !isExportedComponent(&structField) {
+ rf := dstCp.Field(i)
+ rf = reflect.NewAt(rf.Type(), unsafe.Pointer(rf.UnsafeAddr())).Elem() //nolint:gosec
+ dstRF := dst.Field(i)
+ if !dst.Field(i).CanAddr() {
+ continue
+ }
+
+ dstRF = reflect.NewAt(dstRF.Type(), unsafe.Pointer(dstRF.UnsafeAddr())).Elem() //nolint:gosec
+ rf.Set(dstRF)
+ continue
+ }
+ dstField, err = deepMerge(dstField, src.Field(i), visited, depth+1, config)
+ if err != nil {
+ return
+ }
+ dstCp.Field(i).Set(dstField)
+ }
+
+ if dst.CanSet() {
+ dst.Set(dstCp)
+ } else {
+ dst = dstCp
+ }
+ return
+ } else {
+ if (isReflectNil(dst) || overwrite) && (!isEmptyValue(src) || overwriteWithEmptySrc) {
+ dst = src
+ }
+ }
+
+ case reflect.Map:
+ if dst.IsNil() && !src.IsNil() {
+ if dst.CanSet() {
+ dst.Set(reflect.MakeMap(dst.Type()))
+ } else {
+ dst = src
+ return
+ }
+ }
+ for _, key := range src.MapKeys() {
+ srcElement := src.MapIndex(key)
+ dstElement := dst.MapIndex(key)
+ if !srcElement.IsValid() {
+ continue
+ }
+ if dst.MapIndex(key).IsValid() {
+ k := dstElement.Interface()
+ dstElement = reflect.ValueOf(k)
+ }
+ if isReflectNil(srcElement) {
+ if overwrite || isReflectNil(dstElement) {
+ dst.SetMapIndex(key, srcElement)
+ }
+ continue
+ }
+ if !srcElement.CanInterface() {
+ continue
+ }
+
+ if srcElement.CanInterface() {
+ srcElement = reflect.ValueOf(srcElement.Interface())
+ if dstElement.IsValid() {
+ dstElement = reflect.ValueOf(dstElement.Interface())
+ }
+ }
+ dstElement, err = deepMerge(dstElement, srcElement, visited, depth+1, config)
+ if err != nil {
+ return
+ }
+ dst.SetMapIndex(key, dstElement)
+
+ }
+ case reflect.Slice:
+ newSlice := dst
+ if (!isEmptyValue(src) || overwriteWithEmptySrc || overwriteSliceWithEmptySrc) && (overwrite || isEmptyValue(dst)) && !config.AppendSlice {
+ if typeCheck && src.Type() != dst.Type() {
+ return dst, fmt.Errorf("cannot override two slices with different type (%s, %s)", src.Type(), dst.Type())
+ }
+ newSlice = src
+ } else if config.AppendSlice {
+ if typeCheck && src.Type() != dst.Type() {
+ err = fmt.Errorf("cannot append two slice with different type (%s, %s)", src.Type(), dst.Type())
+ return
+ }
+ newSlice = reflect.AppendSlice(dst, src)
+ }
+ if dst.CanSet() {
+ dst.Set(newSlice)
+ } else {
+ dst = newSlice
+ }
+ case reflect.Ptr, reflect.Interface:
+ if isReflectNil(src) {
+ break
+ }
+
+ if dst.Kind() != reflect.Ptr && src.Type().AssignableTo(dst.Type()) {
+ if dst.IsNil() || overwrite {
+ if overwrite || isEmptyValue(dst) {
+ if dst.CanSet() {
+ dst.Set(src)
+ } else {
+ dst = src
+ }
+ }
+ }
+ break
+ }
+
+ if src.Kind() != reflect.Interface {
+ if dst.IsNil() || (src.Kind() != reflect.Ptr && overwrite) {
+ if dst.CanSet() && (overwrite || isEmptyValue(dst)) {
+ dst.Set(src)
+ }
+ } else if src.Kind() == reflect.Ptr {
+ if dst, err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1, config); err != nil {
+ return
+ }
+ dst = dst.Addr()
+ } else if dst.Elem().Type() == src.Type() {
+ if dst, err = deepMerge(dst.Elem(), src, visited, depth+1, config); err != nil {
+ return
+ }
+ } else {
+ return dst, ErrDifferentArgumentsTypes
+ }
+ break
+ }
+ if dst.IsNil() || overwrite {
+ if (overwrite || isEmptyValue(dst)) && (overwriteWithEmptySrc || !isEmptyValue(src)) {
+ if dst.CanSet() {
+ dst.Set(src)
+ } else {
+ dst = src
+ }
+ }
+ } else if _, err = deepMerge(dst.Elem(), src.Elem(), visited, depth+1, config); err != nil {
+ return
+ }
+ default:
+ overwriteFull := (!isEmptyValue(src) || overwriteWithEmptySrc) && (overwrite || isEmptyValue(dst))
+ if overwriteFull {
+ if dst.CanSet() {
+ dst.Set(src)
+ } else {
+ dst = src
+ }
+ }
+ }
+
+ return
+}
+
+// Merge will fill any empty value-type attributes on the dst struct using corresponding
+// src attributes if they themselves are not empty. dst and src must be valid same-type structs
+// and dst must be a pointer to struct.
+// It won't merge unexported (private) fields and will recursively merge any exported field.
+func Merge(dst, src interface{}, opts ...func(*Config)) error {
+ return merge(dst, src, opts...)
+}
+
+// MergeWithOverwrite will do the same as Merge except that non-empty dst attributes will be overridden by
+// non-empty src attribute values.
+// Deprecated: use Merge(…) with WithOverride
+func MergeWithOverwrite(dst, src interface{}, opts ...func(*Config)) error {
+ return merge(dst, src, append(opts, WithOverride)...)
+}
+
+// WithTransformers adds transformers to merge, allowing to customize the merging of some types.
+func WithTransformers(transformers Transformers) func(*Config) {
+ return func(config *Config) {
+ config.Transformers = transformers
+ }
+}
+
+// WithOverride will make merge override non-empty dst attributes with non-empty src attribute values.
+func WithOverride(config *Config) {
+ config.Overwrite = true
+}
+
+// WithOverwriteWithEmptyValue will make merge override non-empty dst attributes with empty src attribute values.
+func WithOverwriteWithEmptyValue(config *Config) {
+ config.overwriteWithEmptyValue = true
+}
+
+// WithOverrideEmptySlice will make merge override empty dst slice with empty src slice.
+func WithOverrideEmptySlice(config *Config) {
+ config.overwriteSliceWithEmptyValue = true
+}
+
+// WithAppendSlice will make merge append slices instead of overwriting them.
+func WithAppendSlice(config *Config) {
+ config.AppendSlice = true
+}
+
+// WithTypeCheck will make merge check types while overwriting (it must be used with WithOverride).
+func WithTypeCheck(config *Config) {
+ config.TypeCheck = true
+}
+
+func merge(dst, src interface{}, opts ...func(*Config)) error {
+ var (
+ vDst, vSrc reflect.Value
+ err error
+ )
+
+ config := &Config{}
+
+ for _, opt := range opts {
+ opt(config)
+ }
+
+ if vDst, vSrc, err = resolveValues(dst, src); err != nil {
+ return err
+ }
+ if !vDst.CanSet() {
+ return fmt.Errorf("cannot set dst, needs reference")
+ }
+ if vDst.Type() != vSrc.Type() {
+ return ErrDifferentArgumentsTypes
+ }
+ _, err = deepMerge(vDst, vSrc, make(map[uintptr]*visit), 0, config)
+ return err
+}
+
+// isReflectNil reports whether the provided reflect value is nil.
+func isReflectNil(v reflect.Value) bool {
+ k := v.Kind()
+ switch k {
+ case reflect.Interface, reflect.Slice, reflect.Chan, reflect.Func, reflect.Map, reflect.Ptr:
+ // Both interface and slice are nil if first word is 0.
+ // Both are always bigger than a word; assume flagIndir.
+ return v.IsNil()
+ default:
+ return false
+ }
+}
diff --git a/vendor/github.com/imdario/mergo/mergo.go b/vendor/github.com/imdario/mergo/mergo.go
new file mode 100644
index 0000000000000..a82fea2fdccc3
--- /dev/null
+++ b/vendor/github.com/imdario/mergo/mergo.go
@@ -0,0 +1,97 @@
+// Copyright 2013 Dario Castañé. All rights reserved.
+// Copyright 2009 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Based on src/pkg/reflect/deepequal.go from official
+// golang's stdlib.
+
+package mergo
+
+import (
+ "errors"
+ "reflect"
+)
+
+// Errors reported by Mergo when it finds invalid arguments.
+var (
+ ErrNilArguments = errors.New("src and dst must not be nil")
+ ErrDifferentArgumentsTypes = errors.New("src and dst must be of same type")
+ ErrNotSupported = errors.New("only structs and maps are supported")
+ ErrExpectedMapAsDestination = errors.New("dst was expected to be a map")
+ ErrExpectedStructAsDestination = errors.New("dst was expected to be a struct")
+)
+
+// During deepMerge, must keep track of checks that are
+// in progress. The comparison algorithm assumes that all
+// checks in progress are true when it reencounters them.
+// Visited nodes are stored in a map indexed by 17 * addr.
+type visit struct {
+ ptr uintptr
+ typ reflect.Type
+ next *visit
+}
+
+// From src/pkg/encoding/json/encode.go.
+func isEmptyValue(v reflect.Value) bool {
+ switch v.Kind() {
+ case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
+ return v.Len() == 0
+ case reflect.Bool:
+ return !v.Bool()
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return v.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
+ return v.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return v.Float() == 0
+ case reflect.Interface, reflect.Ptr:
+ if v.IsNil() {
+ return true
+ }
+ return isEmptyValue(v.Elem())
+ case reflect.Func:
+ return v.IsNil()
+ case reflect.Invalid:
+ return true
+ }
+ return false
+}
+
+func resolveValues(dst, src interface{}) (vDst, vSrc reflect.Value, err error) {
+ if dst == nil || src == nil {
+ err = ErrNilArguments
+ return
+ }
+ vDst = reflect.ValueOf(dst).Elem()
+ if vDst.Kind() != reflect.Struct && vDst.Kind() != reflect.Map {
+ err = ErrNotSupported
+ return
+ }
+ vSrc = reflect.ValueOf(src)
+ // We check if vSrc is a pointer to dereference it.
+ if vSrc.Kind() == reflect.Ptr {
+ vSrc = vSrc.Elem()
+ }
+ return
+}
+
+// Traverses recursively both values, assigning src's fields values to dst.
+// The map argument tracks comparisons that have already been seen, which allows
+// short circuiting on recursive types.
+func deeper(dst, src reflect.Value, visited map[uintptr]*visit, depth int) (err error) {
+ if dst.CanAddr() {
+ addr := dst.UnsafeAddr()
+ h := 17 * addr
+ seen := visited[h]
+ typ := dst.Type()
+ for p := seen; p != nil; p = p.next {
+ if p.ptr == addr && p.typ == typ {
+ return nil
+ }
+ }
+ // Remember, remember...
+ visited[h] = &visit{addr, typ, seen}
+ }
+ return // TODO refactor
+}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 29d5e63dd2440..742f5ee0b1d51 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -500,6 +500,9 @@ github.com/hpcloud/tail/ratelimiter
github.com/hpcloud/tail/util
github.com/hpcloud/tail/watch
github.com/hpcloud/tail/winfile
+# github.com/imdario/mergo v0.3.9
+## explicit
+github.com/imdario/mergo
# github.com/influxdata/go-syslog/v3 v3.0.1-0.20200510134747-836dce2cf6da
## explicit
github.com/influxdata/go-syslog/v3
|
promtail
|
Loki Push API (#2296)
|
51c42e864563f2fa9ffc160cb13f6d6126ea5c6d
|
2024-10-25 20:27:30
|
Salva Corts
|
feat: Do not add empty blooms to offsets (#14577)
| false
|
diff --git a/integration/bloom_building_test.go b/integration/bloom_building_test.go
index b5186b3b23ff0..9e7727b674f1e 100644
--- a/integration/bloom_building_test.go
+++ b/integration/bloom_building_test.go
@@ -61,15 +61,7 @@ func TestBloomBuilding(t *testing.T) {
cliIngester.Now = now
// We now ingest some logs across many series.
- series := make([]labels.Labels, 0, nSeries)
- for i := 0; i < nSeries; i++ {
- lbs := labels.FromStrings("job", fmt.Sprintf("job-%d", i))
- series = append(series, lbs)
-
- for j := 0; j < nLogsPerSeries; j++ {
- require.NoError(t, cliDistributor.PushLogLine(fmt.Sprintf("log line %d", j), now, nil, lbs.Map()))
- }
- }
+ series := writeSeries(t, nSeries, nLogsPerSeries, cliDistributor, now, "job")
// restart ingester which should flush the chunks and index
require.NoError(t, tIngester.Restart())
@@ -124,14 +116,8 @@ func TestBloomBuilding(t *testing.T) {
checkSeriesInBlooms(t, now, tenantID, bloomStore, series)
// Push some more logs so TSDBs need to be updated.
- for i := 0; i < nSeries; i++ {
- lbs := labels.FromStrings("job", fmt.Sprintf("job-new-%d", i))
- series = append(series, lbs)
-
- for j := 0; j < nLogsPerSeries; j++ {
- require.NoError(t, cliDistributor.PushLogLine(fmt.Sprintf("log line %d", j), now, nil, lbs.Map()))
- }
- }
+ newSeries := writeSeries(t, nSeries, nLogsPerSeries, cliDistributor, now, "job-new")
+ series = append(series, newSeries...)
// restart ingester which should flush the chunks and index
require.NoError(t, tIngester.Restart())
@@ -147,6 +133,33 @@ func TestBloomBuilding(t *testing.T) {
checkSeriesInBlooms(t, now, tenantID, bloomStore, series)
}
+func writeSeries(t *testing.T, nSeries int, nLogsPerSeries int, cliDistributor *client.Client, now time.Time, seriesPrefix string) []labels.Labels {
+ series := make([]labels.Labels, 0, nSeries)
+ for i := 0; i < nSeries; i++ {
+ lbs := labels.FromStrings("job", fmt.Sprintf("%s-%d", seriesPrefix, i))
+ series = append(series, lbs)
+
+ for j := 0; j < nLogsPerSeries; j++ {
+			// Only write structured metadata for half of the series
+ var metadata map[string]string
+ if i%2 == 0 {
+ metadata = map[string]string{
+ "traceID": fmt.Sprintf("%d%d", i, j),
+ "user": fmt.Sprintf("%d%d", i, j%10),
+ }
+ }
+
+ require.NoError(t, cliDistributor.PushLogLine(
+ fmt.Sprintf("log line %d", j),
+ now,
+ metadata,
+ lbs.Map(),
+ ))
+ }
+ }
+ return series
+}
+
func checkCompactionFinished(t *testing.T, cliCompactor *client.Client) {
checkForTimestampMetric(t, cliCompactor, "loki_boltdb_shipper_compact_tables_operation_last_successful_run_timestamp_seconds")
}
diff --git a/pkg/bloombuild/builder/spec.go b/pkg/bloombuild/builder/spec.go
index 781a0ca04872e..f7c147fb0a2f8 100644
--- a/pkg/bloombuild/builder/spec.go
+++ b/pkg/bloombuild/builder/spec.go
@@ -137,7 +137,7 @@ func (s *SimpleBloomGenerator) Generate(ctx context.Context) *LazyBlockBuilderIt
)
}
- return NewLazyBlockBuilderIterator(ctx, s.opts, s.metrics, s.populator(ctx), s.writerReaderFunc, series, s.blocksIter)
+ return NewLazyBlockBuilderIterator(ctx, s.opts, s.metrics, s.logger, s.populator(ctx), s.writerReaderFunc, series, s.blocksIter)
}
// LazyBlockBuilderIterator is a lazy iterator over blocks that builds
@@ -146,6 +146,7 @@ type LazyBlockBuilderIterator struct {
ctx context.Context
opts v1.BlockOptions
metrics *v1.Metrics
+ logger log.Logger
populate v1.BloomPopulatorFunc
writerReaderFunc func() (v1.BlockWriter, v1.BlockReader)
series iter.PeekIterator[*v1.Series]
@@ -160,6 +161,7 @@ func NewLazyBlockBuilderIterator(
ctx context.Context,
opts v1.BlockOptions,
metrics *v1.Metrics,
+ logger log.Logger,
populate v1.BloomPopulatorFunc,
writerReaderFunc func() (v1.BlockWriter, v1.BlockReader),
series iter.PeekIterator[*v1.Series],
@@ -169,6 +171,7 @@ func NewLazyBlockBuilderIterator(
ctx: ctx,
opts: opts,
metrics: metrics,
+ logger: logger,
populate: populate,
writerReaderFunc: writerReaderFunc,
series: series,
@@ -196,7 +199,7 @@ func (b *LazyBlockBuilderIterator) Next() bool {
return false
}
- mergeBuilder := v1.NewMergeBuilder(b.blocks, b.series, b.populate, b.metrics)
+ mergeBuilder := v1.NewMergeBuilder(b.blocks, b.series, b.populate, b.metrics, b.logger)
writer, reader := b.writerReaderFunc()
blockBuilder, err := v1.NewBlockBuilder(b.opts, writer)
if err != nil {
diff --git a/pkg/storage/bloom/v1/bloom_builder.go b/pkg/storage/bloom/v1/bloom_builder.go
index 9829d9ffc380a..c327f5d6bfd95 100644
--- a/pkg/storage/bloom/v1/bloom_builder.go
+++ b/pkg/storage/bloom/v1/bloom_builder.go
@@ -28,6 +28,10 @@ func NewBloomBlockBuilder(opts BlockOptions, writer io.WriteCloser) *BloomBlockB
}
}
+func (b *BloomBlockBuilder) UnflushedSize() int {
+ return b.scratch.Len() + b.page.UnflushedSize()
+}
+
func (b *BloomBlockBuilder) Append(bloom *Bloom) (BloomOffset, error) {
if !b.writtenSchema {
if err := b.writeSchema(); err != nil {
@@ -68,6 +72,14 @@ func (b *BloomBlockBuilder) writeSchema() error {
}
func (b *BloomBlockBuilder) Close() (uint32, error) {
+ if !b.writtenSchema {
+ // We will get here only if we haven't appended any bloom filters to the block
+ // This would happen only if all series yielded empty blooms
+ if err := b.writeSchema(); err != nil {
+ return 0, errors.Wrap(err, "writing schema")
+ }
+ }
+
if b.page.Count() > 0 {
if err := b.flushPage(); err != nil {
return 0, errors.Wrap(err, "flushing final bloom page")
diff --git a/pkg/storage/bloom/v1/builder.go b/pkg/storage/bloom/v1/builder.go
index fa5f5aa047a7b..466687aa44b9a 100644
--- a/pkg/storage/bloom/v1/builder.go
+++ b/pkg/storage/bloom/v1/builder.go
@@ -5,6 +5,8 @@ import (
"hash"
"io"
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
"github.com/pkg/errors"
"github.com/grafana/loki/v3/pkg/compression"
@@ -112,6 +114,10 @@ func (w *PageWriter) Reset() {
w.n = 0
}
+func (w *PageWriter) UnflushedSize() int {
+ return w.enc.Len()
+}
+
func (w *PageWriter) SpaceFor(numBytes int) bool {
// if a single bloom exceeds the target size, still accept it
// otherwise only accept it if adding it would not exceed the target size
@@ -189,6 +195,7 @@ type MergeBuilder struct {
// Add chunks of a single series to a bloom
populate BloomPopulatorFunc
metrics *Metrics
+ logger log.Logger
}
type BloomPopulatorFunc func(series *Series, preExistingBlooms iter.SizedIterator[*Bloom], chunksToAdd ChunkRefs, ch chan *BloomCreation)
@@ -202,6 +209,7 @@ func NewMergeBuilder(
store iter.Iterator[*Series],
populate BloomPopulatorFunc,
metrics *Metrics,
+ logger log.Logger,
) *MergeBuilder {
// combinedSeriesIter handles series with fingerprint collisions:
// because blooms dont contain the label-set (only the fingerprint),
@@ -229,6 +237,7 @@ func NewMergeBuilder(
store: combinedSeriesIter,
populate: populate,
metrics: metrics,
+ logger: logger,
}
}
@@ -306,6 +315,12 @@ func (mb *MergeBuilder) processNextSeries(
if creation.Err != nil {
return nil, info.sourceBytes, 0, false, false, errors.Wrap(creation.Err, "populating bloom")
}
+
+ if creation.Bloom.IsEmpty() {
+ level.Debug(mb.logger).Log("msg", "received empty bloom. Adding to index but skipping offsets", "fingerprint", nextInStore.Fingerprint)
+ continue
+ }
+
offset, err := builder.AddBloom(creation.Bloom)
if err != nil {
return nil, info.sourceBytes, 0, false, false, errors.Wrapf(
diff --git a/pkg/storage/bloom/v1/builder_test.go b/pkg/storage/bloom/v1/builder_test.go
index a1917bc096a7c..fa8ccbc87a3d9 100644
--- a/pkg/storage/bloom/v1/builder_test.go
+++ b/pkg/storage/bloom/v1/builder_test.go
@@ -6,6 +6,7 @@ import (
"sort"
"testing"
+ "github.com/go-kit/log"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
@@ -263,7 +264,7 @@ func TestMergeBuilder(t *testing.T) {
)
// Ensure that the merge builder combines all the blocks correctly
- mergeBuilder := NewMergeBuilder(dedupedBlocks(blocks), storeItr, populate, NewMetrics(nil))
+ mergeBuilder := NewMergeBuilder(dedupedBlocks(blocks), storeItr, populate, NewMetrics(nil), log.NewNopLogger())
indexBuf := bytes.NewBuffer(nil)
bloomsBuf := bytes.NewBuffer(nil)
writer := NewMemoryBlockWriter(indexBuf, bloomsBuf)
@@ -350,6 +351,8 @@ func TestMergeBuilderFingerprintCollision(t *testing.T) {
// We're not testing the ability to extend a bloom in this test
pop := func(_ *Series, _ iter.SizedIterator[*Bloom], _ ChunkRefs, ch chan *BloomCreation) {
bloom := NewBloom()
+ // Add something to the bloom so it's not empty
+ bloom.Add([]byte("hello"))
stats := indexingInfo{
sourceBytes: int(bloom.Capacity()) / 8,
indexedFields: NewSetFromLiteral[Field]("__all__"),
@@ -367,6 +370,7 @@ func TestMergeBuilderFingerprintCollision(t *testing.T) {
iter.NewSliceIter(data),
pop,
NewMetrics(nil),
+ log.NewNopLogger(),
)
_, _, err = mergeBuilder.Build(builder)
@@ -539,6 +543,7 @@ func TestMergeBuilder_Roundtrip(t *testing.T) {
dedupedStore,
pop,
NewMetrics(nil),
+ log.NewNopLogger(),
)
builder, err := NewBlockBuilder(blockOpts, writer)
require.Nil(t, err)
diff --git a/pkg/storage/bloom/v1/fuse.go b/pkg/storage/bloom/v1/fuse.go
index 147a81502c336..b25743ea4c541 100644
--- a/pkg/storage/bloom/v1/fuse.go
+++ b/pkg/storage/bloom/v1/fuse.go
@@ -32,6 +32,8 @@ func NewBloomRecorder(ctx context.Context, id string) *BloomRecorder {
chunksSkipped: atomic.NewInt64(0),
seriesMissed: atomic.NewInt64(0),
chunksMissed: atomic.NewInt64(0),
+ seriesEmpty: atomic.NewInt64(0),
+ chunksEmpty: atomic.NewInt64(0),
chunksFiltered: atomic.NewInt64(0),
}
}
@@ -45,6 +47,8 @@ type BloomRecorder struct {
seriesSkipped, chunksSkipped *atomic.Int64
// not found in bloom
seriesMissed, chunksMissed *atomic.Int64
+ // exists in block index but empty offsets
+ seriesEmpty, chunksEmpty *atomic.Int64
// filtered out
chunksFiltered *atomic.Int64
}
@@ -56,6 +60,8 @@ func (r *BloomRecorder) Merge(other *BloomRecorder) {
r.chunksSkipped.Add(other.chunksSkipped.Load())
r.seriesMissed.Add(other.seriesMissed.Load())
r.chunksMissed.Add(other.chunksMissed.Load())
+ r.seriesEmpty.Add(other.seriesEmpty.Load())
+ r.chunksEmpty.Add(other.chunksEmpty.Load())
r.chunksFiltered.Add(other.chunksFiltered.Load())
}
@@ -66,13 +72,15 @@ func (r *BloomRecorder) Report(logger log.Logger, metrics *Metrics) {
seriesFound = r.seriesFound.Load()
seriesSkipped = r.seriesSkipped.Load()
seriesMissed = r.seriesMissed.Load()
- seriesRequested = seriesFound + seriesSkipped + seriesMissed
+ seriesEmpty = r.seriesEmpty.Load()
+ seriesRequested = seriesFound + seriesSkipped + seriesMissed + seriesEmpty
chunksFound = r.chunksFound.Load()
chunksSkipped = r.chunksSkipped.Load()
chunksMissed = r.chunksMissed.Load()
chunksFiltered = r.chunksFiltered.Load()
- chunksRequested = chunksFound + chunksSkipped + chunksMissed
+ chunksEmpty = r.chunksEmpty.Load()
+ chunksRequested = chunksFound + chunksSkipped + chunksMissed + chunksEmpty
)
level.Debug(logger).Log(
"recorder_msg", "bloom search results",
@@ -82,11 +90,13 @@ func (r *BloomRecorder) Report(logger log.Logger, metrics *Metrics) {
"recorder_series_found", seriesFound,
"recorder_series_skipped", seriesSkipped,
"recorder_series_missed", seriesMissed,
+ "recorder_series_empty", seriesEmpty,
"recorder_chunks_requested", chunksRequested,
"recorder_chunks_found", chunksFound,
"recorder_chunks_skipped", chunksSkipped,
"recorder_chunks_missed", chunksMissed,
+ "recorder_chunks_empty", chunksEmpty,
"recorder_chunks_filtered", chunksFiltered,
)
@@ -94,25 +104,27 @@ func (r *BloomRecorder) Report(logger log.Logger, metrics *Metrics) {
metrics.recorderSeries.WithLabelValues(recorderRequested).Add(float64(seriesRequested))
metrics.recorderSeries.WithLabelValues(recorderFound).Add(float64(seriesFound))
metrics.recorderSeries.WithLabelValues(recorderSkipped).Add(float64(seriesSkipped))
+ metrics.recorderSeries.WithLabelValues(recorderEmpty).Add(float64(seriesEmpty))
metrics.recorderSeries.WithLabelValues(recorderMissed).Add(float64(seriesMissed))
metrics.recorderChunks.WithLabelValues(recorderRequested).Add(float64(chunksRequested))
metrics.recorderChunks.WithLabelValues(recorderFound).Add(float64(chunksFound))
metrics.recorderChunks.WithLabelValues(recorderSkipped).Add(float64(chunksSkipped))
metrics.recorderChunks.WithLabelValues(recorderMissed).Add(float64(chunksMissed))
+ metrics.recorderChunks.WithLabelValues(recorderEmpty).Add(float64(chunksEmpty))
metrics.recorderChunks.WithLabelValues(recorderFiltered).Add(float64(chunksFiltered))
}
}
-func (r *BloomRecorder) record(
- seriesFound, chunksFound, seriesSkipped, chunksSkipped, seriesMissed, chunksMissed, chunksFiltered int,
-) {
+func (r *BloomRecorder) record(seriesFound, chunksFound, seriesSkipped, chunksSkipped, seriesMissed, chunksMissed, seriesEmpty, chunksEmpty, chunksFiltered int) {
r.seriesFound.Add(int64(seriesFound))
r.chunksFound.Add(int64(chunksFound))
r.seriesSkipped.Add(int64(seriesSkipped))
r.chunksSkipped.Add(int64(chunksSkipped))
r.seriesMissed.Add(int64(seriesMissed))
r.chunksMissed.Add(int64(chunksMissed))
+ r.seriesEmpty.Add(int64(seriesEmpty))
+ r.chunksEmpty.Add(int64(chunksEmpty))
r.chunksFiltered.Add(int64(chunksFiltered))
}
@@ -170,6 +182,7 @@ func (fq *FusedQuerier) recordMissingFp(
0, 0, // found
0, 0, // skipped
1, len(input.Chks), // missed
+ 0, 0, // empty
0, // chunks filtered
)
})
@@ -184,6 +197,22 @@ func (fq *FusedQuerier) recordSkippedFp(
0, 0, // found
1, len(input.Chks), // skipped
0, 0, // missed
+ 0, 0, // empty
+ 0, // chunks filtered
+ )
+ })
+}
+
+func (fq *FusedQuerier) recordEmptyFp(
+ batch []Request,
+ fp model.Fingerprint,
+) {
+ fq.noRemovals(batch, fp, func(input Request) {
+ input.Recorder.record(
+ 0, 0, // found
+ 0, 0, // skipped
+ 0, 0, // missed
+ 1, len(input.Chks), // empty
0, // chunks filtered
)
})
@@ -280,6 +309,19 @@ func (fq *FusedQuerier) runSeries(_ Schema, series *SeriesWithMeta, reqs []Reque
})
}
+ if len(series.Offsets) == 0 {
+ // We end up here for series with no structured metadata fields.
+ // While building blooms, these series would yield empty blooms.
+ // We add these series to the index of the block so we don't report them as missing,
+ // but we don't filter any chunks for them.
+ level.Debug(fq.logger).Log(
+ "msg", "series with empty offsets",
+ "fp", series.Fingerprint,
+ )
+ fq.recordEmptyFp(reqs, series.Fingerprint)
+ return
+ }
+
for i, offset := range series.Offsets {
skip := fq.bq.blooms.LoadOffset(offset)
if skip {
@@ -361,6 +403,7 @@ func (fq *FusedQuerier) runSeries(_ Schema, series *SeriesWithMeta, reqs []Reque
1, len(inputs[i].InBlooms), // found
0, 0, // skipped
0, len(inputs[i].Missing), // missed
+ 0, 0, // empty
len(removals), // filtered
)
req.Response <- Output{
diff --git a/pkg/storage/bloom/v1/index.go b/pkg/storage/bloom/v1/index.go
index a9e03efc41af9..8be1a45d35c21 100644
--- a/pkg/storage/bloom/v1/index.go
+++ b/pkg/storage/bloom/v1/index.go
@@ -153,7 +153,8 @@ func aggregateHeaders(xs []SeriesHeader) SeriesHeader {
fromFp, _ := xs[0].Bounds.Bounds()
_, throughFP := xs[len(xs)-1].Bounds.Bounds()
res := SeriesHeader{
- Bounds: NewBounds(fromFp, throughFP),
+ NumSeries: len(xs),
+ Bounds: NewBounds(fromFp, throughFP),
}
for i, x := range xs {
diff --git a/pkg/storage/bloom/v1/index_builder.go b/pkg/storage/bloom/v1/index_builder.go
index 067a79ad03f4e..9703177f1200b 100644
--- a/pkg/storage/bloom/v1/index_builder.go
+++ b/pkg/storage/bloom/v1/index_builder.go
@@ -35,6 +35,10 @@ func NewIndexBuilder(opts BlockOptions, writer io.WriteCloser) *IndexBuilder {
}
}
+func (b *IndexBuilder) UnflushedSize() int {
+ return b.scratch.Len() + b.page.UnflushedSize()
+}
+
func (b *IndexBuilder) WriteOpts() error {
b.scratch.Reset()
b.opts.Encode(b.scratch)
diff --git a/pkg/storage/bloom/v1/metrics.go b/pkg/storage/bloom/v1/metrics.go
index e2ce99a4702d1..0ad86848df1b1 100644
--- a/pkg/storage/bloom/v1/metrics.go
+++ b/pkg/storage/bloom/v1/metrics.go
@@ -56,6 +56,7 @@ const (
recorderFound = "found"
recorderSkipped = "skipped"
recorderMissed = "missed"
+ recorderEmpty = "empty"
recorderFiltered = "filtered"
)
diff --git a/pkg/storage/bloom/v1/test_util.go b/pkg/storage/bloom/v1/test_util.go
index c2f1f3e8f8d30..f040ab4297282 100644
--- a/pkg/storage/bloom/v1/test_util.go
+++ b/pkg/storage/bloom/v1/test_util.go
@@ -132,9 +132,11 @@ func CompareIterators[A, B any](
a iter.Iterator[A],
b iter.Iterator[B],
) {
+ var i int
for a.Next() {
- require.True(t, b.Next())
+		require.Truef(t, b.Next(), "'a' has %dth element but 'b' does not", i)
f(t, a.At(), b.At())
+ i++
}
require.False(t, b.Next())
require.NoError(t, a.Err())
diff --git a/pkg/storage/bloom/v1/versioned_builder.go b/pkg/storage/bloom/v1/versioned_builder.go
index 8844ddf43eb11..960c6bcdde928 100644
--- a/pkg/storage/bloom/v1/versioned_builder.go
+++ b/pkg/storage/bloom/v1/versioned_builder.go
@@ -125,10 +125,35 @@ func (b *V3Builder) AddSeries(series Series, offsets []BloomOffset, fields Set[F
return false, errors.Wrapf(err, "writing index for series %v", series.Fingerprint)
}
- full, _, err := b.writer.Full(b.opts.BlockSize)
+ full, err := b.full()
if err != nil {
return false, errors.Wrap(err, "checking if block is full")
}
return full, nil
}
+
+func (b *V3Builder) full() (bool, error) {
+ if b.opts.BlockSize == 0 {
+ // Unlimited block size
+ return false, nil
+ }
+
+ full, writtenSize, err := b.writer.Full(b.opts.BlockSize)
+ if err != nil {
+ return false, errors.Wrap(err, "checking if block writer is full")
+ }
+ if full {
+ return true, nil
+ }
+
+ // Even if the block writer is not full, we may have unflushed data in the bloom builders.
+ // Check if by flushing these, we would exceed the block size.
+ unflushedIndexSize := b.index.UnflushedSize()
+ unflushedBloomSize := b.blooms.UnflushedSize()
+ if uint64(writtenSize+unflushedIndexSize+unflushedBloomSize) > b.opts.BlockSize {
+ return true, nil
+ }
+
+ return false, nil
+}
diff --git a/pkg/storage/bloom/v1/versioned_builder_test.go b/pkg/storage/bloom/v1/versioned_builder_test.go
index 1e4a0f5a93b26..6d2cc621be459 100644
--- a/pkg/storage/bloom/v1/versioned_builder_test.go
+++ b/pkg/storage/bloom/v1/versioned_builder_test.go
@@ -4,6 +4,7 @@ import (
"bytes"
"testing"
+ "github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
"github.com/grafana/loki/v3/pkg/compression"
@@ -17,7 +18,7 @@ import (
func smallBlockOpts(v Version, enc compression.Codec) BlockOptions {
return BlockOptions{
Schema: NewSchema(v, enc),
- SeriesPageSize: 100,
+ SeriesPageSize: 4 << 10,
BloomPageSize: 2 << 10,
BlockSize: 0, // unlimited
}
@@ -78,3 +79,103 @@ func TestV3Roundtrip(t *testing.T) {
querier,
)
}
+
+func seriesWithBlooms(nSeries int, fromFp, throughFp model.Fingerprint) []SeriesWithBlooms {
+ series, _ := MkBasicSeriesWithBlooms(nSeries, fromFp, throughFp, 0, 10000)
+ return series
+}
+
+func seriesWithoutBlooms(nSeries int, fromFp, throughFp model.Fingerprint) []SeriesWithBlooms {
+ series := seriesWithBlooms(nSeries, fromFp, throughFp)
+
+ // remove blooms from series
+ for i := range series {
+ series[i].Blooms = v2.NewEmptyIter[*Bloom]()
+ }
+
+ return series
+}
+
+func TestFullBlock(t *testing.T) {
+ opts := smallBlockOpts(V3, compression.None)
+ minBlockSize := opts.SeriesPageSize // 1 index page, 4KB
+ const maxEmptySeriesPerBlock = 47
+ for _, tc := range []struct {
+ name string
+ maxBlockSize uint64
+ series []SeriesWithBlooms
+ expected []SeriesWithBlooms
+ }{
+ {
+ name: "only series without blooms",
+ maxBlockSize: minBlockSize,
+ // +1 so we test adding the last series that fills the block
+ series: seriesWithoutBlooms(maxEmptySeriesPerBlock+1, 0, 0xffff),
+ expected: seriesWithoutBlooms(maxEmptySeriesPerBlock+1, 0, 0xffff),
+ },
+ {
+ name: "series without blooms and one with blooms",
+ maxBlockSize: minBlockSize,
+ series: append(
+ seriesWithoutBlooms(maxEmptySeriesPerBlock, 0, 0x7fff),
+ seriesWithBlooms(50, 0x8000, 0xffff)...,
+ ),
+ expected: append(
+ seriesWithoutBlooms(maxEmptySeriesPerBlock, 0, 0x7fff),
+ seriesWithBlooms(1, 0x8000, 0x8001)...,
+ ),
+ },
+ {
+ name: "only one series with bloom",
+ maxBlockSize: minBlockSize,
+ series: seriesWithBlooms(10, 0, 0xffff),
+ expected: seriesWithBlooms(1, 0, 1),
+ },
+ {
+ name: "one huge series with bloom and then series without",
+ maxBlockSize: minBlockSize,
+ series: append(
+ seriesWithBlooms(1, 0, 1),
+ seriesWithoutBlooms(100, 1, 0xffff)...,
+ ),
+ expected: seriesWithBlooms(1, 0, 1),
+ },
+ {
+ name: "big block",
+ maxBlockSize: 1 << 20, // 1MB
+ series: seriesWithBlooms(100, 0, 0xffff),
+ expected: seriesWithBlooms(100, 0, 0xffff),
+ },
+ } {
+ t.Run(tc.name, func(t *testing.T) {
+ indexBuf := bytes.NewBuffer(nil)
+ bloomsBuf := bytes.NewBuffer(nil)
+ writer := NewMemoryBlockWriter(indexBuf, bloomsBuf)
+ reader := NewByteReader(indexBuf, bloomsBuf)
+ opts.BlockSize = tc.maxBlockSize
+
+ b, err := NewBlockBuilderV3(opts, writer)
+ require.NoError(t, err)
+
+ _, err = b.BuildFrom(v2.NewSliceIter(tc.series))
+ require.NoError(t, err)
+
+ block := NewBlock(reader, NewMetrics(nil))
+ querier := NewBlockQuerier(block, &mempool.SimpleHeapAllocator{}, DefaultMaxPageSize).Iter()
+
+ CompareIterators(
+ t,
+ func(t *testing.T, a SeriesWithBlooms, b *SeriesWithBlooms) {
+ require.Equal(t, a.Series.Fingerprint, b.Series.Fingerprint)
+ require.ElementsMatch(t, a.Series.Chunks, b.Series.Chunks)
+ bloomsA, err := v2.Collect(a.Blooms)
+ require.NoError(t, err)
+ bloomsB, err := v2.Collect(b.Blooms)
+ require.NoError(t, err)
+ require.Equal(t, len(bloomsB), len(bloomsA))
+ },
+ v2.NewSliceIter(tc.expected),
+ querier,
+ )
+ })
+ }
+}
diff --git a/pkg/storage/stores/shipper/bloomshipper/client.go b/pkg/storage/stores/shipper/bloomshipper/client.go
index 1c66e500a6b9c..627016c63c025 100644
--- a/pkg/storage/stores/shipper/bloomshipper/client.go
+++ b/pkg/storage/stores/shipper/bloomshipper/client.go
@@ -225,13 +225,16 @@ func newBlockRefWithEncoding(ref Ref, enc compression.Codec) BlockRef {
}
func BlockFrom(enc compression.Codec, tenant, table string, blk *v1.Block) (Block, error) {
- md, _ := blk.Metadata()
+ md, err := blk.Metadata()
+ if err != nil {
+ return Block{}, errors.Wrap(err, "decoding index")
+ }
+
ref := newBlockRefWithEncoding(newRefFrom(tenant, table, md), enc)
// TODO(owen-d): pool
buf := bytes.NewBuffer(nil)
- err := v1.TarCompress(ref.Codec, buf, blk.Reader())
-
+ err = v1.TarCompress(ref.Codec, buf, blk.Reader())
if err != nil {
return Block{}, err
}
|
feat
|
Do not add empty blooms to offsets (#14577)
|
1a6751fc17bb10ba0018057f56f5cc4395c547a3
|
2024-12-05 23:22:08
|
Krzysztof Łuczak
|
fix(helm): add default wal dir to ruler config (#14920)
| false
|
diff --git a/docs/sources/setup/install/helm/reference.md b/docs/sources/setup/install/helm/reference.md
index 5dfd74ea279d2..4975387e96dc8 100644
--- a/docs/sources/setup/install/helm/reference.md
+++ b/docs/sources/setup/install/helm/reference.md
@@ -6247,7 +6247,11 @@ null
<td>object</td>
<td>Check https://grafana.com/docs/loki/latest/configuration/#ruler for more info on configuring ruler</td>
<td><pre lang="json">
-{}
+{
+ "wal": {
+ "dir": "/var/loki/ruler-wal"
+ }
+}
</pre>
</td>
</tr>
diff --git a/production/helm/loki/CHANGELOG.md b/production/helm/loki/CHANGELOG.md
index eac1c61615101..d93e82d275032 100644
--- a/production/helm/loki/CHANGELOG.md
+++ b/production/helm/loki/CHANGELOG.md
@@ -13,6 +13,7 @@ Entries should include a reference to the pull request that introduced the chang
[//]: # (<AUTOMATED_UPDATES_LOCATOR> : do not remove this line. This locator is used by the CI pipeline to automatically create a changelog entry for each new Loki release. Add other chart versions and respective changelog entries bellow this line.)
+- [BUGFIX] Add default wal dir to ruler config ([#14920](https://github.com/grafana/loki/pull/14920))
## 6.22.0
## 6.20.0
diff --git a/production/helm/loki/values.yaml b/production/helm/loki/values.yaml
index 78961a7243009..94afe8f5a829e 100644
--- a/production/helm/loki/values.yaml
+++ b/production/helm/loki/values.yaml
@@ -412,7 +412,9 @@ loki:
prefix: index_
period: 24h
# -- Check https://grafana.com/docs/loki/latest/configuration/#ruler for more info on configuring ruler
- rulerConfig: {}
+ rulerConfig:
+ wal:
+ dir: /var/loki/ruler-wal
# -- Structured loki configuration, takes precedence over `loki.config`, `loki.schemaConfig`, `loki.storageConfig`
structuredConfig: {}
# -- Additional query scheduler config
|
fix
|
add default wal dir to ruler config (#14920)
|
10fef0c728627d9a0e38b5feaf2fd50bbcd9f9c2
|
2025-01-25 03:34:19
|
Roked
|
docs: Update _index.md: add information about non-blocking mode to the documentation (#15910)
| false
|
diff --git a/docs/sources/send-data/docker-driver/_index.md b/docs/sources/send-data/docker-driver/_index.md
index fac6b0d3e34b6..f351b6cf27750 100644
--- a/docs/sources/send-data/docker-driver/_index.md
+++ b/docs/sources/send-data/docker-driver/_index.md
@@ -81,4 +81,6 @@ The driver keeps all logs in memory and will drop log entries if Loki is not rea
The wait time can be lowered by setting `loki-retries=2`, `loki-max-backoff=800ms`, `loki-timeout=1s` and `keep-file=true`. This way the daemon will be locked only for a short time and the logs will be persisted locally when the Loki client is unable to re-connect.
+You can also use non-blocking mode by setting `services.logger.logging.options.mode=non-blocking` in your `docker-compose` file. In non-blocking mode, writing logs to Loki does not block the main flow of an application or service when Loki is temporarily unavailable or unable to process log messages: messages are buffered and sent to Loki asynchronously, so the main thread continues without delay, and buffered messages are sent once Loki becomes available again. This prevents logging issues from blocking the application or service, but it can also lead to loss of log messages if the buffer overflows or if Loki is unavailable for a long time.
+
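+For example, a hedged `docker-compose` sketch (the image and Loki URL are illustrative; `mode` and `max-buffer-size` are standard Docker logging options):
+
+```yaml
+services:
+  logger:
+    image: my-app:latest
+    logging:
+      driver: loki
+      options:
+        loki-url: "http://loki:3100/loki/api/v1/push"
+        mode: non-blocking
+        max-buffer-size: 4m
+```
+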
To avoid this issue, use the Promtail [Docker target](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#docker) or [Docker service discovery](https://grafana.com/docs/loki/<LOKI_VERSION>/send-data/promtail/configuration/#docker_sd_configs).
|
docs
|
Update _index.md: add information about non-blocking mode to the documentation (#15910)
|
a9daddeeb9b7cd25431d56847874885aa0927c7d
|
2022-09-29 23:05:43
|
Jeff Cantrill
|
operator: Enable subscriptions on OpenShift 4.12 (#7296)
| false
|
diff --git a/operator/bundle/metadata/properties.yaml b/operator/bundle/metadata/properties.yaml
index 0cf01e92ae4b4..8085abb30e494 100644
--- a/operator/bundle/metadata/properties.yaml
+++ b/operator/bundle/metadata/properties.yaml
@@ -1,3 +1,3 @@
properties:
- type: olm.maxOpenShiftVersion
- value: 4.11
+ value: 4.12
|
operator
|
Enable subscriptions on OpenShift 4.12 (#7296)
|
f6cfff5356ab24f89cc263c8a8e27dd21204855a
|
2024-03-09 01:50:58
|
Christian Haudum
|
refactor(bloomstore): Introduce fetch option for blocks fetcher (#12160)
| false
|
diff --git a/docs/sources/configure/_index.md b/docs/sources/configure/_index.md
index de9b2ae9444b3..06f9a042091a4 100644
--- a/docs/sources/configure/_index.md
+++ b/docs/sources/configure/_index.md
@@ -2349,11 +2349,6 @@ bloom_shipper:
# CLI flag: -bloom.shipper.working-directory
[working_directory: <string> | default = "bloom-shipper"]
- # In an eventually consistent system like the bloom components, we usually
- # want to ignore blocks that are missing in storage.
- # CLI flag: -bloom.shipper.ignore-missing-blocks
- [ignore_missing_blocks: <boolean> | default = true]
-
blocks_downloading_queue:
# The count of parallel workers that download Bloom Blocks.
# CLI flag: -bloom.shipper.blocks-downloading-queue.workers-count
diff --git a/pkg/bloomcompactor/controller.go b/pkg/bloomcompactor/controller.go
index b876c92f38ee6..55fd7548881bf 100644
--- a/pkg/bloomcompactor/controller.go
+++ b/pkg/bloomcompactor/controller.go
@@ -300,8 +300,9 @@ func (s *SimpleBloomController) loadWorkForGap(
	// NB(owen-d): we filter out nil blocks here to avoid panics in the bloom generator since the fetcher
	// preserves input->output length and indexing in its contract
+ // NB(chaudum): Do we want to fetch in strict mode and fail instead?
f := FetchFunc[bloomshipper.BlockRef, *bloomshipper.CloseableBlockQuerier](func(ctx context.Context, refs []bloomshipper.BlockRef) ([]*bloomshipper.CloseableBlockQuerier, error) {
- blks, err := fetcher.FetchBlocks(ctx, refs)
+ blks, err := fetcher.FetchBlocks(ctx, refs, bloomshipper.WithFetchAsync(false), bloomshipper.WithIgnoreNotFound(true))
if err != nil {
return nil, err
}
diff --git a/pkg/bloomcompactor/tracker.go b/pkg/bloomcompactor/tracker.go
index 7c78e0706dffd..34f726f322a09 100644
--- a/pkg/bloomcompactor/tracker.go
+++ b/pkg/bloomcompactor/tracker.go
@@ -5,10 +5,11 @@ import (
"math"
"sync"
- v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
- "github.com/grafana/loki/pkg/storage/config"
"github.com/pkg/errors"
"github.com/prometheus/common/model"
+
+ v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
+ "github.com/grafana/loki/pkg/storage/config"
)
type tableRangeProgress struct {
diff --git a/pkg/bloomcompactor/tracker_test.go b/pkg/bloomcompactor/tracker_test.go
index 87f10bf12b126..494073e7cc520 100644
--- a/pkg/bloomcompactor/tracker_test.go
+++ b/pkg/bloomcompactor/tracker_test.go
@@ -3,10 +3,11 @@ package bloomcompactor
import (
"testing"
- v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
- "github.com/grafana/loki/pkg/storage/config"
"github.com/prometheus/common/model"
"github.com/stretchr/testify/require"
+
+ v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
+ "github.com/grafana/loki/pkg/storage/config"
)
func mkTblRange(tenant string, tbl config.DayTime, from, through model.Fingerprint) *tenantTableRange {
diff --git a/pkg/bloomcompactor/versioned_range.go b/pkg/bloomcompactor/versioned_range.go
index 0b69a6d5ccbaf..0c399025f610f 100644
--- a/pkg/bloomcompactor/versioned_range.go
+++ b/pkg/bloomcompactor/versioned_range.go
@@ -3,9 +3,10 @@ package bloomcompactor
import (
"sort"
+ "github.com/prometheus/common/model"
+
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper"
- "github.com/prometheus/common/model"
)
type tsdbToken struct {
@@ -94,7 +95,7 @@ func (t tsdbTokenRange) Add(version int, bounds v1.FingerprintBounds) (res tsdbT
}
// If we need to update the range, there are 5 cases:
- // 1. `equal`: the incoming range equals an exising range ()
+ // 1. `equal`: the incoming range equals an existing range ()
// ------ # addition
// ------ # src
// 2. `subset`: the incoming range is a subset of an existing range
diff --git a/pkg/bloomcompactor/versioned_range_test.go b/pkg/bloomcompactor/versioned_range_test.go
index 91a5a94fb2265..6c4329a0dba99 100644
--- a/pkg/bloomcompactor/versioned_range_test.go
+++ b/pkg/bloomcompactor/versioned_range_test.go
@@ -3,11 +3,12 @@ package bloomcompactor
import (
"testing"
+ "github.com/prometheus/common/model"
+ "github.com/stretchr/testify/require"
+
v1 "github.com/grafana/loki/pkg/storage/bloom/v1"
"github.com/grafana/loki/pkg/storage/stores/shipper/bloomshipper"
"github.com/grafana/loki/pkg/storage/stores/shipper/indexshipper/tsdb"
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/require"
)
func Test_TsdbTokenRange(t *testing.T) {
diff --git a/pkg/bloomgateway/bloomgateway_test.go b/pkg/bloomgateway/bloomgateway_test.go
index 59a3c7a840bd5..54651596f9b22 100644
--- a/pkg/bloomgateway/bloomgateway_test.go
+++ b/pkg/bloomgateway/bloomgateway_test.go
@@ -73,6 +73,9 @@ func setupBloomStore(t *testing.T) *bloomshipper.BloomStore {
storageCfg := storage.Config{
BloomShipperConfig: bloomshipperconfig.Config{
WorkingDirectory: t.TempDir(),
+ BlocksDownloadingQueue: bloomshipperconfig.DownloadingQueueConfig{
+ WorkersCount: 1,
+ },
},
FSConfig: local.FSConfig{
Directory: t.TempDir(),
diff --git a/pkg/bloomgateway/processor.go b/pkg/bloomgateway/processor.go
index 58cc73b743b26..1790f4ba87beb 100644
--- a/pkg/bloomgateway/processor.go
+++ b/pkg/bloomgateway/processor.go
@@ -79,15 +79,19 @@ func (p *processor) processBlocks(ctx context.Context, data []blockWithTasks) er
refs = append(refs, block.ref)
}
- bqs, err := p.store.FetchBlocks(ctx, refs)
+ start := time.Now()
+ bqs, err := p.store.FetchBlocks(ctx, refs, bloomshipper.WithFetchAsync(true), bloomshipper.WithIgnoreNotFound(true))
+ level.Debug(p.logger).Log("msg", "fetch blocks", "count", len(bqs), "duration", time.Since(start), "err", err)
+
if err != nil {
return err
}
+ // TODO(chaudum): use `concurrency` lib with bounded parallelism
for i, bq := range bqs {
block := data[i]
if bq == nil {
- level.Warn(p.logger).Log("msg", "skipping not found block", "block", block.ref)
+ // TODO(chaudum): Add metric for skipped blocks
continue
}
level.Debug(p.logger).Log(
diff --git a/pkg/bloomgateway/processor_test.go b/pkg/bloomgateway/processor_test.go
index d910d66a65d9e..246da373f3574 100644
--- a/pkg/bloomgateway/processor_test.go
+++ b/pkg/bloomgateway/processor_test.go
@@ -65,7 +65,7 @@ func (s *dummyStore) Client(_ model.Time) (bloomshipper.Client, error) {
func (s *dummyStore) Stop() {
}
-func (s *dummyStore) FetchBlocks(_ context.Context, refs []bloomshipper.BlockRef) ([]*bloomshipper.CloseableBlockQuerier, error) {
+func (s *dummyStore) FetchBlocks(_ context.Context, refs []bloomshipper.BlockRef, _ ...bloomshipper.FetchOption) ([]*bloomshipper.CloseableBlockQuerier, error) {
result := make([]*bloomshipper.CloseableBlockQuerier, 0, len(s.querieres))
if s.err != nil {
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 51018ccb960e3..a3d9937734c83 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -672,13 +672,19 @@ func (t *Loki) initBloomStore() (services.Service, error) {
if err != nil {
return nil, fmt.Errorf("failed to create metas cache: %w", err)
}
+ } else {
+ level.Info(logger).Log("msg", "no metas cache configured")
}
var blocksCache cache.TypedCache[string, bloomshipper.BlockDirectory]
if bsCfg.BlocksCache.IsEnabled() {
blocksCache = bloomshipper.NewBlocksCache(bsCfg.BlocksCache, reg, logger)
err = bloomshipper.LoadBlocksDirIntoCache(t.Cfg.StorageConfig.BloomShipperConfig.WorkingDirectory, blocksCache, logger)
- level.Warn(logger).Log("msg", "failed to preload blocks cache", "err", err)
+ if err != nil {
+ level.Warn(logger).Log("msg", "failed to preload blocks cache", "err", err)
+ }
+ } else {
+ level.Info(logger).Log("msg", "no blocks cache configured")
}
t.BloomStore, err = bloomshipper.NewBloomStore(t.Cfg.SchemaConfig.Configs, t.Cfg.StorageConfig, t.ClientMetrics, metasCache, blocksCache, reg, logger)
diff --git a/pkg/loki/modules_test.go b/pkg/loki/modules_test.go
index 047ba5f838a52..61cc9198bbf28 100644
--- a/pkg/loki/modules_test.go
+++ b/pkg/loki/modules_test.go
@@ -368,6 +368,9 @@ func minimalWorkingConfig(t *testing.T, dir, target string, cfgTransformers ...f
FSConfig: local.FSConfig{Directory: dir},
BloomShipperConfig: bloomshipperconfig.Config{
WorkingDirectory: filepath.Join(dir, "blooms"),
+ BlocksDownloadingQueue: bloomshipperconfig.DownloadingQueueConfig{
+ WorkersCount: 1,
+ },
},
BoltDBShipperConfig: boltdb.IndexCfg{
Config: indexshipper.Config{
diff --git a/pkg/storage/stores/shipper/bloomshipper/config/config.go b/pkg/storage/stores/shipper/bloomshipper/config/config.go
index be17997ad2458..8b9eb7d9c706d 100644
--- a/pkg/storage/stores/shipper/bloomshipper/config/config.go
+++ b/pkg/storage/stores/shipper/bloomshipper/config/config.go
@@ -12,7 +12,6 @@ import (
type Config struct {
WorkingDirectory string `yaml:"working_directory"`
- IgnoreMissingBlocks bool `yaml:"ignore_missing_blocks"`
BlocksDownloadingQueue DownloadingQueueConfig `yaml:"blocks_downloading_queue"`
BlocksCache cache.EmbeddedCacheConfig `yaml:"blocks_cache"`
MetasCache cache.Config `yaml:"metas_cache"`
@@ -30,7 +29,6 @@ func (cfg *DownloadingQueueConfig) RegisterFlagsWithPrefix(prefix string, f *fla
func (c *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
f.StringVar(&c.WorkingDirectory, prefix+"shipper.working-directory", "bloom-shipper", "Working directory to store downloaded Bloom Blocks.")
- f.BoolVar(&c.IgnoreMissingBlocks, prefix+"shipper.ignore-missing-blocks", true, "In an eventually consistent system like the bloom components, we usually want to ignore blocks that are missing in storage.")
c.BlocksDownloadingQueue.RegisterFlagsWithPrefix(prefix+"shipper.blocks-downloading-queue.", f)
c.BlocksCache.RegisterFlagsWithPrefixAndDefaults(prefix+"blocks-cache.", "Cache for bloom blocks. ", f, 24*time.Hour)
c.MetasCache.RegisterFlagsWithPrefix(prefix+"metas-cache.", "Cache for bloom metas. ", f)
diff --git a/pkg/storage/stores/shipper/bloomshipper/fetcher.go b/pkg/storage/stores/shipper/bloomshipper/fetcher.go
index db2f8320aaa1c..bffb571b38ed6 100644
--- a/pkg/storage/stores/shipper/bloomshipper/fetcher.go
+++ b/pkg/storage/stores/shipper/bloomshipper/fetcher.go
@@ -19,9 +19,34 @@ import (
"github.com/grafana/loki/pkg/util/constants"
)
+type options struct {
+ ignoreNotFound bool // ignore 404s from object storage; default=true
+ fetchAsync bool // dispatch downloading of block and return immediately; default=false
+}
+
+func (o *options) apply(opts ...FetchOption) {
+ for _, opt := range opts {
+ opt(o)
+ }
+}
+
+type FetchOption func(opts *options)
+
+func WithIgnoreNotFound(v bool) FetchOption {
+ return func(opts *options) {
+ opts.ignoreNotFound = v
+ }
+}
+
+func WithFetchAsync(v bool) FetchOption {
+ return func(opts *options) {
+ opts.fetchAsync = v
+ }
+}
+
type fetcher interface {
FetchMetas(ctx context.Context, refs []MetaRef) ([]Meta, error)
- FetchBlocks(ctx context.Context, refs []BlockRef) ([]*CloseableBlockQuerier, error)
+ FetchBlocks(ctx context.Context, refs []BlockRef, opts ...FetchOption) ([]*CloseableBlockQuerier, error)
Close()
}
@@ -52,7 +77,11 @@ func NewFetcher(cfg bloomStoreConfig, client Client, metasCache cache.Cache, blo
metrics: newFetcherMetrics(reg, constants.Loki, "bloom_store"),
logger: logger,
}
- fetcher.q = newDownloadQueue[BlockRef, BlockDirectory](1000, cfg.numWorkers, fetcher.processTask, logger)
+ q, err := newDownloadQueue[BlockRef, BlockDirectory](1000, cfg.numWorkers, fetcher.processTask, logger)
+ if err != nil {
+ return nil, errors.Wrap(err, "creating download queue for fetcher")
+ }
+ fetcher.q = q
return fetcher, nil
}
@@ -60,6 +89,7 @@ func (f *Fetcher) Close() {
f.q.close()
}
+// FetchMetas implements fetcher
func (f *Fetcher) FetchMetas(ctx context.Context, refs []MetaRef) ([]Meta, error) {
if ctx.Err() != nil {
return nil, errors.Wrap(ctx.Err(), "fetch Metas")
@@ -139,40 +169,76 @@ func (f *Fetcher) writeBackMetas(ctx context.Context, metas []Meta) error {
return f.metasCache.Store(ctx, keys, data)
}
-func (f *Fetcher) FetchBlocks(ctx context.Context, refs []BlockRef) ([]*CloseableBlockQuerier, error) {
- n := len(refs)
+// FetchBlocks implements fetcher
+func (f *Fetcher) FetchBlocks(ctx context.Context, refs []BlockRef, opts ...FetchOption) ([]*CloseableBlockQuerier, error) {
+ // apply fetch options
+ cfg := &options{ignoreNotFound: true, fetchAsync: false}
+ cfg.apply(opts...)
+ // first, resolve blocks from cache and enqueue missing blocks to download queue
+ n := len(refs)
+ results := make([]*CloseableBlockQuerier, n)
responses := make(chan downloadResponse[BlockDirectory], n)
errors := make(chan error, n)
+
+ found, missing := 0, 0
+
+ // TODO(chaudum): Can fetching items from cache be made more efficient
+ // by fetching all keys at once?
+ // The problem is keeping the order of the responses.
+
for i := 0; i < n; i++ {
- f.q.enqueue(downloadRequest[BlockRef, BlockDirectory]{
- ctx: ctx,
- item: refs[i],
- key: f.client.Block(refs[i]).Addr(),
- idx: i,
- results: responses,
- errors: errors,
- })
+ key := f.client.Block(refs[i]).Addr()
+ dir, isFound, err := f.getBlockDir(ctx, key)
+ if err != nil {
+ return results, err
+ }
+ if !isFound {
+ f.q.enqueue(downloadRequest[BlockRef, BlockDirectory]{
+ ctx: ctx,
+ item: refs[i],
+ key: key,
+ idx: i,
+ results: responses,
+ errors: errors,
+ })
+ missing++
+ continue
+ }
+ found++
+ results[i] = dir.BlockQuerier()
}
- count := 0
- results := make([]*CloseableBlockQuerier, n)
- for i := 0; i < n; i++ {
+ // fetchAsync defines whether the function may return early or whether it
+ // should wait for responses from the download queue
+ if cfg.fetchAsync {
+ f.metrics.blocksFetched.Observe(float64(found))
+ level.Debug(f.logger).Log("msg", "request unavailable blocks in the background", "missing", missing, "found", found)
+ return results, nil
+ }
+
+ level.Debug(f.logger).Log("msg", "wait for unavailable blocks", "missing", missing, "found", found)
+ // second, wait for missing blocks to be fetched and append them to the
+ // results
+ for i := 0; i < missing; i++ {
select {
case err := <-errors:
// TODO(owen-d): add metrics for missing blocks
- if !f.cfg.ignoreMissingBlocks && !f.client.IsObjectNotFoundErr(err) {
- f.metrics.blocksFetched.Observe(float64(count))
- return results, err
+ if f.client.IsObjectNotFoundErr(err) && cfg.ignoreNotFound {
+ level.Warn(f.logger).Log("msg", "ignore not found block", "err", err)
+ continue
}
- level.Warn(f.logger).Log("msg", "ignore missing block", "err", err)
+
+ f.metrics.blocksFetched.Observe(float64(found))
+ return results, err
case res := <-responses:
- count++
+ found++
results[res.idx] = res.item.BlockQuerier()
}
}
- f.metrics.blocksFetched.Observe(float64(count))
+ level.Debug(f.logger).Log("msg", "return found blocks", "found", found)
+ f.metrics.blocksFetched.Observe(float64(found))
return results, nil
}
@@ -182,12 +248,33 @@ func (f *Fetcher) processTask(ctx context.Context, task downloadRequest[BlockRef
return
}
- result, err := f.fetchBlock(ctx, task.item)
+ // check if block was fetched while task was waiting in queue
+ result, exists, err := f.getBlockDir(ctx, task.key)
if err != nil {
task.errors <- err
return
}
+ // return item from cache
+ if exists {
+ level.Debug(f.logger).Log("msg", "send download response", "reason", "cache")
+ task.results <- downloadResponse[BlockDirectory]{
+ item: result,
+ key: task.key,
+ idx: task.idx,
+ }
+ return
+ }
+
+ // fetch from storage
+ result, err = f.fetchBlock(ctx, task.item)
+ if err != nil {
+ task.errors <- err
+ return
+ }
+
+ // return item from storage
+ level.Debug(f.logger).Log("msg", "send download response", "reason", "storage")
task.results <- downloadResponse[BlockDirectory]{
item: result,
key: task.key,
@@ -195,40 +282,36 @@ func (f *Fetcher) processTask(ctx context.Context, task downloadRequest[BlockRef
}
}
-// fetchBlock resolves a block from three locations:
-// 1. from cache
-// 2. from file system
-// 3. from remote storage
-func (f *Fetcher) fetchBlock(ctx context.Context, ref BlockRef) (BlockDirectory, error) {
+// getBlockDir resolves a block directory without fetching the block from
+// remote object storage. It returns three values: the block directory, a
+// boolean indicating whether the block was found in cache, and optionally an error.
+func (f *Fetcher) getBlockDir(ctx context.Context, key string) (BlockDirectory, bool, error) {
var zero BlockDirectory
if ctx.Err() != nil {
- return zero, errors.Wrap(ctx.Err(), "fetch block")
+ return zero, false, errors.Wrap(ctx.Err(), "get block")
}
- keys := []string{f.client.Block(ref).Addr()}
-
- _, fromCache, _, err := f.blocksCache.Fetch(ctx, keys)
+ _, fromCache, _, err := f.blocksCache.Fetch(ctx, []string{key})
if err != nil {
- return zero, err
+ return zero, false, err
}
// item found in cache
if len(fromCache) == 1 {
f.metrics.blocksFetchedSize.WithLabelValues(sourceCache).Observe(float64(fromCache[0].Size()))
- return fromCache[0], nil
+ return fromCache[0], true, nil
}
- fromLocalFS, _, err := f.loadBlocksFromFS(ctx, []BlockRef{ref})
- if err != nil {
- return zero, err
- }
+ // item wasn't found
+ return zero, false, nil
+}
- // item found on local file system
- if len(fromLocalFS) == 1 {
- err = f.writeBackBlocks(ctx, fromLocalFS)
- f.metrics.blocksFetchedSize.WithLabelValues(sourceFilesystem).Observe(float64(fromLocalFS[0].Size()))
- return fromLocalFS[0], err
+func (f *Fetcher) fetchBlock(ctx context.Context, ref BlockRef) (BlockDirectory, error) {
+ var zero BlockDirectory
+
+ if ctx.Err() != nil {
+ return zero, errors.Wrap(ctx.Err(), "fetch block")
}
fromStorage, err := f.client.GetBlock(ctx, ref)
@@ -323,7 +406,13 @@ type downloadQueue[T any, R any] struct {
logger log.Logger
}
-func newDownloadQueue[T any, R any](size, workers int, process processFunc[T, R], logger log.Logger) *downloadQueue[T, R] {
+func newDownloadQueue[T any, R any](size, workers int, process processFunc[T, R], logger log.Logger) (*downloadQueue[T, R], error) {
+ if size < 1 {
+ return nil, errors.New("queue size needs to be greater than 0")
+ }
+ if workers < 1 {
+ return nil, errors.New("queue requires at least 1 worker")
+ }
q := &downloadQueue[T, R]{
queue: make(chan downloadRequest[T, R], size),
mu: keymutex.NewHashed(workers),
@@ -335,7 +424,7 @@ func newDownloadQueue[T any, R any](size, workers int, process processFunc[T, R]
q.wg.Add(1)
go q.runWorker()
}
- return q
+ return q, nil
}
func (q *downloadQueue[T, R]) enqueue(t downloadRequest[T, R]) {
@@ -355,6 +444,10 @@ func (q *downloadQueue[T, R]) runWorker() {
}
func (q *downloadQueue[T, R]) do(ctx context.Context, task downloadRequest[T, R]) {
+ if ctx.Err() != nil {
+ task.errors <- ctx.Err()
+ return
+ }
q.mu.LockKey(task.key)
defer func() {
err := q.mu.UnlockKey(task.key)
diff --git a/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go b/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go
index 420807f4875e6..78a681dac5014 100644
--- a/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go
+++ b/pkg/storage/stores/shipper/bloomshipper/fetcher_test.go
@@ -139,6 +139,124 @@ func TestMetasFetcher(t *testing.T) {
}
}
+func TestFetchOptions(t *testing.T) {
+ options := &options{
+ ignoreNotFound: false,
+ fetchAsync: false,
+ }
+
+ options.apply(WithFetchAsync(true), WithIgnoreNotFound(true))
+
+ require.True(t, options.fetchAsync)
+ require.True(t, options.ignoreNotFound)
+}
+
+func TestFetcher_DownloadQueue(t *testing.T) {
+ t.Run("invalid arguments", func(t *testing.T) {
+ for _, tc := range []struct {
+ size, workers int
+ err string
+ }{
+ {
+ size: 0, workers: 1, err: "queue size needs to be greater than 0",
+ },
+ {
+ size: 1, workers: 0, err: "queue requires at least 1 worker",
+ },
+ } {
+ tc := tc
+ t.Run(tc.err, func(t *testing.T) {
+ _, err := newDownloadQueue[bool, bool](
+ tc.size,
+ tc.workers,
+ func(ctx context.Context, r downloadRequest[bool, bool]) {},
+ log.NewNopLogger(),
+ )
+ require.ErrorContains(t, err, tc.err)
+ })
+ }
+ })
+
+ t.Run("cancelled context", func(t *testing.T) {
+ ctx := context.Background()
+
+ q, err := newDownloadQueue[bool, bool](
+ 100,
+ 1,
+ func(_ context.Context, _ downloadRequest[bool, bool]) {},
+ log.NewNopLogger(),
+ )
+ require.NoError(t, err)
+ t.Cleanup(q.close)
+
+ ctx, cancel := context.WithCancel(ctx)
+ cancel()
+
+ resultsCh := make(chan downloadResponse[bool], 1)
+ errorsCh := make(chan error, 1)
+
+ r := downloadRequest[bool, bool]{
+ ctx: ctx,
+ item: false,
+ key: "test",
+ idx: 0,
+ results: resultsCh,
+ errors: errorsCh,
+ }
+ q.enqueue(r)
+
+ select {
+ case err := <-errorsCh:
+ require.Error(t, err)
+ case res := <-resultsCh:
+ require.False(t, true, "got %+v should have received an error instead", res)
+ }
+
+ })
+
+ t.Run("process function is called with context and request as arguments", func(t *testing.T) {
+ ctx := context.Background()
+
+ q, err := newDownloadQueue[bool, bool](
+ 100,
+ 1,
+ func(_ context.Context, r downloadRequest[bool, bool]) {
+ r.results <- downloadResponse[bool]{
+ key: r.key,
+ idx: r.idx,
+ item: true,
+ }
+ },
+ log.NewNopLogger(),
+ )
+ require.NoError(t, err)
+ t.Cleanup(q.close)
+
+ resultsCh := make(chan downloadResponse[bool], 1)
+ errorsCh := make(chan error, 1)
+
+ r := downloadRequest[bool, bool]{
+ ctx: ctx,
+ item: false,
+ key: "test",
+ idx: 0,
+ results: resultsCh,
+ errors: errorsCh,
+ }
+ q.enqueue(r)
+
+ select {
+ case err := <-errorsCh:
+ require.False(t, true, "got %+v should have received a response instead", err)
+ case res := <-resultsCh:
+ require.True(t, res.item)
+ require.Equal(t, r.key, res.key)
+ require.Equal(t, r.idx, res.idx)
+ }
+
+ })
+}
+
func TestFetcher_LoadBlocksFromFS(t *testing.T) {
base := t.TempDir()
cfg := bloomStoreConfig{workingDir: base, numWorkers: 1}
@@ -194,7 +312,9 @@ func createBlockDir(t *testing.T, path string) {
}
func TestFetcher_IsBlockDir(t *testing.T) {
- fetcher, _ := NewFetcher(bloomStoreConfig{}, nil, nil, nil, nil, log.NewNopLogger())
+ cfg := bloomStoreConfig{numWorkers: 1}
+
+ fetcher, _ := NewFetcher(cfg, nil, nil, nil, nil, log.NewNopLogger())
t.Run("path does not exist", func(t *testing.T) {
base := t.TempDir()
diff --git a/pkg/storage/stores/shipper/bloomshipper/shipper.go b/pkg/storage/stores/shipper/bloomshipper/shipper.go
index 3267886ac063e..66982bc065f87 100644
--- a/pkg/storage/stores/shipper/bloomshipper/shipper.go
+++ b/pkg/storage/stores/shipper/bloomshipper/shipper.go
@@ -30,7 +30,7 @@ func NewShipper(client Store) *Shipper {
// ForEach is a convenience function that wraps the store's FetchBlocks function
// and automatically closes the block querier once the callback was run.
func (s *Shipper) ForEach(ctx context.Context, refs []BlockRef, callback ForEachBlockCallback) error {
- bqs, err := s.store.FetchBlocks(ctx, refs)
+ bqs, err := s.store.FetchBlocks(ctx, refs, WithFetchAsync(false))
if err != nil {
return err
}
diff --git a/pkg/storage/stores/shipper/bloomshipper/store.go b/pkg/storage/stores/shipper/bloomshipper/store.go
index 02fe42e189c0d..887bbdb1b8f99 100644
--- a/pkg/storage/stores/shipper/bloomshipper/store.go
+++ b/pkg/storage/stores/shipper/bloomshipper/store.go
@@ -29,16 +29,15 @@ var (
type Store interface {
ResolveMetas(ctx context.Context, params MetaSearchParams) ([][]MetaRef, []*Fetcher, error)
FetchMetas(ctx context.Context, params MetaSearchParams) ([]Meta, error)
- FetchBlocks(ctx context.Context, refs []BlockRef) ([]*CloseableBlockQuerier, error)
+ FetchBlocks(ctx context.Context, refs []BlockRef, opts ...FetchOption) ([]*CloseableBlockQuerier, error)
Fetcher(ts model.Time) (*Fetcher, error)
Client(ts model.Time) (Client, error)
Stop()
}
type bloomStoreConfig struct {
- workingDir string
- numWorkers int
- ignoreMissingBlocks bool
+ workingDir string
+ numWorkers int
}
// Compiler check to ensure bloomStoreEntry implements the Store interface
@@ -124,7 +123,7 @@ func (b *bloomStoreEntry) FetchMetas(ctx context.Context, params MetaSearchParam
}
// FetchBlocks implements Store.
-func (b *bloomStoreEntry) FetchBlocks(ctx context.Context, refs []BlockRef) ([]*CloseableBlockQuerier, error) {
+func (b *bloomStoreEntry) FetchBlocks(ctx context.Context, refs []BlockRef, _ ...FetchOption) ([]*CloseableBlockQuerier, error) {
return b.fetcher.FetchBlocks(ctx, refs)
}
@@ -185,9 +184,8 @@ func NewBloomStore(
// TODO(chaudum): Remove wrapper
cfg := bloomStoreConfig{
- workingDir: storageConfig.BloomShipperConfig.WorkingDirectory,
- numWorkers: storageConfig.BloomShipperConfig.BlocksDownloadingQueue.WorkersCount,
- ignoreMissingBlocks: storageConfig.BloomShipperConfig.IgnoreMissingBlocks,
+ workingDir: storageConfig.BloomShipperConfig.WorkingDirectory,
+ numWorkers: storageConfig.BloomShipperConfig.BlocksDownloadingQueue.WorkersCount,
}
if err := util.EnsureDirectory(cfg.workingDir); err != nil {
@@ -317,12 +315,8 @@ func (b *BloomStore) FetchMetas(ctx context.Context, params MetaSearchParams) ([
return metas, nil
}
-// FetchBlocks implements Store.
-func (b *BloomStore) FetchBlocks(ctx context.Context, blocks []BlockRef) ([]*CloseableBlockQuerier, error) {
-
- var refs [][]BlockRef
- var fetchers []*Fetcher
-
+// partitionBlocksByFetcher returns a slice of BlockRefs for each fetcher
+func (b *BloomStore) partitionBlocksByFetcher(blocks []BlockRef) (refs [][]BlockRef, fetchers []*Fetcher) {
for i := len(b.stores) - 1; i >= 0; i-- {
s := b.stores[i]
from, through := s.start, model.Latest
@@ -343,17 +337,22 @@ func (b *BloomStore) FetchBlocks(ctx context.Context, blocks []BlockRef) ([]*Clo
}
}
+ return refs, fetchers
+}
+
+// FetchBlocks implements Store.
+func (b *BloomStore) FetchBlocks(ctx context.Context, blocks []BlockRef, opts ...FetchOption) ([]*CloseableBlockQuerier, error) {
+ refs, fetchers := b.partitionBlocksByFetcher(blocks)
+
results := make([]*CloseableBlockQuerier, 0, len(blocks))
for i := range fetchers {
- res, err := fetchers[i].FetchBlocks(ctx, refs[i])
+ res, err := fetchers[i].FetchBlocks(ctx, refs[i], opts...)
if err != nil {
return results, err
}
results = append(results, res...)
}
- level.Debug(b.logger).Log("msg", "fetch blocks", "num_req", len(blocks), "num_resp", len(results))
-
// sort responses (results []*CloseableBlockQuerier) based on requests (blocks []BlockRef)
sortBlocks(results, blocks)
|
refactor
|
Introduce fetch option for blocks fetcher (#12160)
|
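The diff above swaps the global `ignore_missing_blocks` YAML flag for per-call functional options, so the bloom gateway can fetch asynchronously (`WithFetchAsync(true)`) while the compactor and shipper paths stay synchronous. A minimal, self-contained sketch of that pattern, mirroring the `FetchOption` API from the diff (the standalone `apply` helper here is a stand-in for the method on the unexported `options` struct):

```go
package main

import "fmt"

// options mirrors the diff's unexported struct: callers never build it
// directly, they pass FetchOption functions that mutate the defaults.
type options struct {
	ignoreNotFound bool // fetcher default: true
	fetchAsync     bool // fetcher default: false
}

// FetchOption tweaks one field of options.
type FetchOption func(*options)

func WithIgnoreNotFound(v bool) FetchOption {
	return func(o *options) { o.ignoreNotFound = v }
}

func WithFetchAsync(v bool) FetchOption {
	return func(o *options) { o.fetchAsync = v }
}

// apply sets the defaults first, then applies each option in order,
// so later options override earlier ones and the defaults.
func apply(opts ...FetchOption) options {
	cfg := options{ignoreNotFound: true, fetchAsync: false}
	for _, opt := range opts {
		opt(&cfg)
	}
	return cfg
}

func main() {
	cfg := apply(WithFetchAsync(true))
	fmt.Printf("%+v\n", cfg) // {ignoreNotFound:true fetchAsync:true}
}
```

The payoff over the removed config flag is visible in the callers: `processor.go` passes `WithFetchAsync(true)` and returns early with whatever is cached, while `Shipper.ForEach` passes `WithFetchAsync(false)` and blocks until all queued downloads finish.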
d7ff42664681794b9ef5026ac3758cdd9569ac1a
|
2024-10-02 20:47:38
|
Trevor Whitney
|
feat: ability to log stream selectors before service name detection (#14154)
| false
|
diff --git a/clients/pkg/promtail/targets/lokipush/pushtarget.go b/clients/pkg/promtail/targets/lokipush/pushtarget.go
index 1ec021c0b28a8..f6e33eb8f72d9 100644
--- a/clients/pkg/promtail/targets/lokipush/pushtarget.go
+++ b/clients/pkg/promtail/targets/lokipush/pushtarget.go
@@ -111,7 +111,7 @@ func (t *PushTarget) run() error {
func (t *PushTarget) handleLoki(w http.ResponseWriter, r *http.Request) {
logger := util_log.WithContext(r.Context(), util_log.Logger)
userID, _ := tenant.TenantID(r.Context())
- req, err := push.ParseRequest(logger, userID, r, nil, push.EmptyLimits{}, push.ParseLokiRequest, nil)
+ req, err := push.ParseRequest(logger, userID, r, nil, push.EmptyLimits{}, push.ParseLokiRequest, nil, false)
if err != nil {
level.Warn(t.logger).Log("msg", "failed to parse incoming push request", "err", err.Error())
http.Error(w, err.Error(), http.StatusBadRequest)
diff --git a/pkg/distributor/http.go b/pkg/distributor/http.go
index ec0660b91bc01..636a16bb507b1 100644
--- a/pkg/distributor/http.go
+++ b/pkg/distributor/http.go
@@ -58,7 +58,8 @@ func (d *Distributor) pushHandler(w http.ResponseWriter, r *http.Request, pushRe
pushRequestParser = d.RequestParserWrapper(pushRequestParser)
}
- req, err := push.ParseRequest(logger, tenantID, r, d.tenantsRetention, d.validator.Limits, pushRequestParser, d.usageTracker)
+ logPushRequestStreams := d.tenantConfigs.LogPushRequestStreams(tenantID)
+ req, err := push.ParseRequest(logger, tenantID, r, d.tenantsRetention, d.validator.Limits, pushRequestParser, d.usageTracker, logPushRequestStreams)
if err != nil {
if d.tenantConfigs.LogPushRequest(tenantID) {
level.Debug(logger).Log(
@@ -73,7 +74,7 @@ func (d *Distributor) pushHandler(w http.ResponseWriter, r *http.Request, pushRe
return
}
- if d.tenantConfigs.LogPushRequestStreams(tenantID) {
+ if logPushRequestStreams {
var sb strings.Builder
for _, s := range req.Streams {
sb.WriteString(s.Labels)
diff --git a/pkg/distributor/http_test.go b/pkg/distributor/http_test.go
index b6281b81bf3d7..c6b8f3d017af6 100644
--- a/pkg/distributor/http_test.go
+++ b/pkg/distributor/http_test.go
@@ -7,6 +7,7 @@ import (
"net/http/httptest"
"testing"
+ "github.com/go-kit/log"
"github.com/grafana/dskit/user"
"github.com/grafana/loki/v3/pkg/loghttp/push"
@@ -114,6 +115,14 @@ func Test_OtelErrorHeaderInterceptor(t *testing.T) {
}
}
-func stubParser(_ string, _ *http.Request, _ push.TenantsRetention, _ push.Limits, _ push.UsageTracker) (*logproto.PushRequest, *push.Stats, error) {
+func stubParser(
+ _ string,
+ _ *http.Request,
+ _ push.TenantsRetention,
+ _ push.Limits,
+ _ push.UsageTracker,
+ _ bool,
+ _ log.Logger,
+) (*logproto.PushRequest, *push.Stats, error) {
return &logproto.PushRequest{}, &push.Stats{}, nil
}
diff --git a/pkg/loghttp/push/otlp.go b/pkg/loghttp/push/otlp.go
index 13aea9ee59caa..3e654b9c21ef2 100644
--- a/pkg/loghttp/push/otlp.go
+++ b/pkg/loghttp/push/otlp.go
@@ -8,8 +8,11 @@ import (
"io"
"net/http"
"sort"
+ "strings"
"time"
+ "github.com/go-kit/log"
+ "github.com/go-kit/log/level"
"github.com/pkg/errors"
"github.com/prometheus/common/model"
"github.com/prometheus/prometheus/model/labels"
@@ -40,14 +43,14 @@ func newPushStats() *Stats {
}
}
-func ParseOTLPRequest(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker) (*logproto.PushRequest, *Stats, error) {
+func ParseOTLPRequest(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error) {
stats := newPushStats()
otlpLogs, err := extractLogs(r, stats)
if err != nil {
return nil, nil, err
}
- req := otlpToLokiPushRequest(r.Context(), otlpLogs, userID, tenantsRetention, limits.OTLPConfig(userID), limits.DiscoverServiceName(userID), tracker, stats)
+ req := otlpToLokiPushRequest(r.Context(), otlpLogs, userID, tenantsRetention, limits.OTLPConfig(userID), limits.DiscoverServiceName(userID), tracker, stats, logPushRequestStreams, logger)
return req, stats, nil
}
@@ -98,7 +101,7 @@ func extractLogs(r *http.Request, pushStats *Stats) (plog.Logs, error) {
return req.Logs(), nil
}
-func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, tenantsRetention TenantsRetention, otlpConfig OTLPConfig, discoverServiceName []string, tracker UsageTracker, stats *Stats) *logproto.PushRequest {
+func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, tenantsRetention TenantsRetention, otlpConfig OTLPConfig, discoverServiceName []string, tracker UsageTracker, stats *Stats, logPushRequestStreams bool, logger log.Logger) *logproto.PushRequest {
if ld.LogRecordCount() == 0 {
return &logproto.PushRequest{}
}
@@ -113,6 +116,10 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
resourceAttributesAsStructuredMetadata := make(push.LabelsAdapter, 0, resAttrs.Len())
streamLabels := make(model.LabelSet, 30) // we have a default labels limit of 30 so just initialize the map of same size
+ var pushedLabels model.LabelSet
+ if logPushRequestStreams {
+ pushedLabels = make(model.LabelSet, 30)
+ }
shouldDiscoverServiceName := len(discoverServiceName) > 0 && !stats.IsAggregatedMetric
hasServiceName := false
@@ -129,6 +136,9 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
if action == IndexLabel {
for _, lbl := range attributeAsLabels {
streamLabels[model.LabelName(lbl.Name)] = model.LabelValue(lbl.Value)
+ if logPushRequestStreams && pushedLabels != nil {
+ pushedLabels[model.LabelName(lbl.Name)] = model.LabelValue(lbl.Value)
+ }
if !hasServiceName && shouldDiscoverServiceName {
for _, labelName := range discoverServiceName {
@@ -151,6 +161,23 @@ func otlpToLokiPushRequest(ctx context.Context, ld plog.Logs, userID string, ten
streamLabels[model.LabelName(LabelServiceName)] = model.LabelValue(ServiceUnknown)
}
+ if logPushRequestStreams {
+ var sb strings.Builder
+ sb.WriteString("{")
+ labels := make([]string, 0, len(pushedLabels))
+ for name, value := range pushedLabels {
+ labels = append(labels, fmt.Sprintf(`%s="%s"`, name, value))
+ }
+ sb.WriteString(strings.Join(labels, ", "))
+ sb.WriteString("}")
+
+ level.Debug(logger).Log(
+ "msg", "OTLP push request stream before service name discovery",
+ "stream", sb.String(),
+ "service_name", streamLabels[model.LabelName(LabelServiceName)],
+ )
+ }
+
if err := streamLabels.Validate(); err != nil {
stats.Errs = append(stats.Errs, fmt.Errorf("invalid labels: %w", err))
continue
diff --git a/pkg/loghttp/push/otlp_test.go b/pkg/loghttp/push/otlp_test.go
index e2ca137f274c0..5e5632eec0082 100644
--- a/pkg/loghttp/push/otlp_test.go
+++ b/pkg/loghttp/push/otlp_test.go
@@ -7,6 +7,7 @@ import (
"testing"
"time"
+ "github.com/go-kit/log"
"github.com/prometheus/prometheus/model/labels"
"github.com/prometheus/prometheus/model/relabel"
"github.com/stretchr/testify/require"
@@ -508,7 +509,18 @@ func TestOTLPToLokiPushRequest(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
stats := newPushStats()
tracker := NewMockTracker()
- pushReq := otlpToLokiPushRequest(context.Background(), tc.generateLogs(), "foo", fakeRetention{}, tc.otlpConfig, defaultServiceDetection, tracker, stats)
+ pushReq := otlpToLokiPushRequest(
+ context.Background(),
+ tc.generateLogs(),
+ "foo",
+ fakeRetention{},
+ tc.otlpConfig,
+ defaultServiceDetection,
+ tracker,
+ stats,
+ false,
+ log.NewNopLogger(),
+ )
require.Equal(t, tc.expectedPushRequest, *pushReq)
require.Equal(t, tc.expectedStats, *stats)
diff --git a/pkg/loghttp/push/push.go b/pkg/loghttp/push/push.go
index e048546fb4083..be1d8b34b9f31 100644
--- a/pkg/loghttp/push/push.go
+++ b/pkg/loghttp/push/push.go
@@ -84,7 +84,7 @@ func (EmptyLimits) DiscoverServiceName(string) []string {
}
type (
- RequestParser func(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker) (*logproto.PushRequest, *Stats, error)
+ RequestParser func(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error)
RequestParserWrapper func(inner RequestParser) RequestParser
)
@@ -106,8 +106,8 @@ type Stats struct {
IsAggregatedMetric bool
}
-func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, pushRequestParser RequestParser, tracker UsageTracker) (*logproto.PushRequest, error) {
- req, pushStats, err := pushRequestParser(userID, r, tenantsRetention, limits, tracker)
+func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, pushRequestParser RequestParser, tracker UsageTracker, logPushRequestStreams bool) (*logproto.PushRequest, error) {
+ req, pushStats, err := pushRequestParser(userID, r, tenantsRetention, limits, tracker, logPushRequestStreams, logger)
if err != nil {
return nil, err
}
@@ -164,7 +164,7 @@ func ParseRequest(logger log.Logger, userID string, r *http.Request, tenantsRete
return req, nil
}
-func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker) (*logproto.PushRequest, *Stats, error) {
+func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRetention, limits Limits, tracker UsageTracker, logPushRequestStreams bool, logger log.Logger) (*logproto.PushRequest, *Stats, error) {
// Body
var body io.Reader
// bodySize should always reflect the compressed size of the request body
@@ -247,8 +247,13 @@ func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRe
pushStats.IsAggregatedMetric = true
}
+ var beforeServiceName string
+ if logPushRequestStreams {
+ beforeServiceName = lbs.String()
+ }
+
+ serviceName := ServiceUnknown
if !lbs.Has(LabelServiceName) && len(discoverServiceName) > 0 && !pushStats.IsAggregatedMetric {
- serviceName := ServiceUnknown
for _, labelName := range discoverServiceName {
if labelVal := lbs.Get(labelName); labelVal != "" {
serviceName = labelVal
@@ -264,6 +269,14 @@ func ParseLokiRequest(userID string, r *http.Request, tenantsRetention TenantsRe
lbs = lb.Del(LabelServiceName).Labels()
}
+ if logPushRequestStreams {
+ level.Debug(logger).Log(
+ "msg", "push request stream before service name discovery",
+ "labels", beforeServiceName,
+ "service_name", serviceName,
+ )
+ }
+
var retentionPeriod time.Duration
if tenantsRetention != nil {
retentionPeriod = tenantsRetention.RetentionPeriodFor(userID, lbs)
diff --git a/pkg/loghttp/push/push_test.go b/pkg/loghttp/push/push_test.go
index e63b2c873c8de..b5609dec57a69 100644
--- a/pkg/loghttp/push/push_test.go
+++ b/pkg/loghttp/push/push_test.go
@@ -262,7 +262,16 @@ func TestParseRequest(t *testing.T) {
}
tracker := NewMockTracker()
- data, err := ParseRequest(util_log.Logger, "fake", request, nil, &fakeLimits{enabled: test.enableServiceDiscovery}, ParseLokiRequest, tracker)
+ data, err := ParseRequest(
+ util_log.Logger,
+ "fake",
+ request,
+ nil,
+ &fakeLimits{enabled: test.enableServiceDiscovery},
+ ParseLokiRequest,
+ tracker,
+ false,
+ )
structuredMetadataBytesReceived := int(structuredMetadataBytesReceivedStats.Value()["total"].(int64)) - previousStructuredMetadataBytesReceived
previousStructuredMetadataBytesReceived += structuredMetadataBytesReceived
@@ -355,7 +364,7 @@ func Test_ServiceDetection(t *testing.T) {
request := createRequest("/loki/api/v1/push", strings.NewReader(body))
limits := &fakeLimits{enabled: true, labels: []string{"foo"}}
- data, err := ParseRequest(util_log.Logger, "fake", request, nil, limits, ParseLokiRequest, tracker)
+ data, err := ParseRequest(util_log.Logger, "fake", request, nil, limits, ParseLokiRequest, tracker, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings("foo", "bar", LabelServiceName, "bar").String(), data.Streams[0].Labels)
@@ -366,7 +375,7 @@ func Test_ServiceDetection(t *testing.T) {
request := createRequest("/otlp/v1/push", bytes.NewReader(body))
limits := &fakeLimits{enabled: true}
- data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings("k8s_job_name", "bar", LabelServiceName, "bar").String(), data.Streams[0].Labels)
})
@@ -380,7 +389,7 @@ func Test_ServiceDetection(t *testing.T) {
labels: []string{"special"},
indexAttributes: []string{"special"},
}
- data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings("special", "sauce", LabelServiceName, "sauce").String(), data.Streams[0].Labels)
})
@@ -394,7 +403,7 @@ func Test_ServiceDetection(t *testing.T) {
labels: []string{"special"},
indexAttributes: []string{},
}
- data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker)
+ data, err := ParseRequest(util_log.Logger, "fake", request, limits, limits, ParseOTLPRequest, tracker, false)
require.NoError(t, err)
require.Equal(t, labels.FromStrings(LabelServiceName, ServiceUnknown).String(), data.Streams[0].Labels)
})
|
feat
|
ability to log stream selectors before service name detection (#14154)
|
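Both parse paths in this commit log the stream labels before service-name discovery runs; the OTLP path assembles the `{name="value", ...}` selector string by hand with a `strings.Builder`. A small runnable sketch of that formatting step (the `formatStream` name and the sort are additions here to make the example deterministic; the commit iterates the label map directly):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// formatStream renders a label map the way the OTLP code above does:
// {name="value", name="value", ...}.
func formatStream(labels map[string]string) string {
	pairs := make([]string, 0, len(labels))
	for name, value := range labels {
		pairs = append(pairs, fmt.Sprintf(`%s=%q`, name, value))
	}
	sort.Strings(pairs) // map iteration order is random; sort for stable output
	return "{" + strings.Join(pairs, ", ") + "}"
}

func main() {
	fmt.Println(formatStream(map[string]string{"k8s_job_name": "bar", "env": "prod"}))
	// Output: {env="prod", k8s_job_name="bar"}
}
```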
3f5f0c38d0f5a610d235964ce3849d06c8f9ee1f
|
2023-01-13 04:36:03
|
Alexandru Gologan
|
helm: fix ruler config storage regression (#8071)
| false
|
diff --git a/production/helm/loki/templates/_helpers.tpl b/production/helm/loki/templates/_helpers.tpl
index 37566db42077b..1c84850f0db9b 100644
--- a/production/helm/loki/templates/_helpers.tpl
+++ b/production/helm/loki/templates/_helpers.tpl
@@ -273,6 +273,7 @@ gcs:
{{- end -}}
{{- else if eq .Values.loki.storage.type "azure" -}}
{{- with .Values.loki.storage.azure }}
+type: "azure"
azure:
account_name: {{ .accountName }}
{{- with .accountKey }}
@@ -287,24 +288,18 @@ azure:
request_timeout: {{ . }}
{{- end }}
{{- end -}}
+{{- else }}
+type: "local"
{{- end -}}
{{- end -}}
-{{/* Predicate function to determin if custom ruler config should be included */}}
-{{- define "loki.shouldIncludeRulerConfig" }}
-{{- or (not (empty .Values.loki.rulerConfig)) (.Values.minio.enabled) (eq .Values.loki.storage.type "s3") (eq .Values.loki.storage.type "gcs") (eq .Values.loki.storage.type "azure") }}
-{{- end }}
-
{{/* Loki ruler config */}}
{{- define "loki.rulerConfig" }}
-{{- if eq (include "loki.shouldIncludeRulerConfig" .) "true" }}
ruler:
+ storage:
+ {{- include "loki.rulerStorageConfig" . | nindent 4}}
{{- if (not (empty .Values.loki.rulerConfig)) }}
{{- toYaml .Values.loki.rulerConfig | nindent 2}}
-{{- else }}
- storage:
- {{- include "loki.rulerStorageConfig" . | nindent 4}}
-{{- end }}
{{- end }}
{{- end }}
|
helm
|
fix ruler config storage regression (#8071)
|
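The regression fix is easiest to see in rendered output rather than in the template itself. A hypothetical rendering of the patched `loki.rulerConfig` helper for a chart with no object storage configured (values illustrative, not taken from the chart's defaults):

```yaml
# Illustrative only: the new `{{- else }} type: "local"` branch means
# ruler.storage is always emitted, and a user-supplied loki.rulerConfig
# is now merged in after it instead of replacing it.
ruler:
  storage:
    type: "local"
```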
794d7da845b09c6eeacb7589e6d1d04c8ea21105
|
2019-09-21 01:45:44
|
sh0rez
|
feat: -version flag (#1036)
| false
|
diff --git a/Makefile b/Makefile
index d20a244929c45..6e3d2b32ebc3f 100644
--- a/Makefile
+++ b/Makefile
@@ -40,7 +40,7 @@ APP_GO_FILES := $(shell find . $(DONT_FIND) -name .y.go -prune -o -name .pb.go -
# Build flags
VPREFIX := github.com/grafana/loki/vendor/github.com/prometheus/common/version
-GO_LDFLAGS := -s -w -X $(VPREFIX).Branch=$(GIT_BRANCH) -X $(VPREFIX).Version=$(IMAGE_TAG) -X $(VPREFIX).Revision=$(GIT_REVISION)
+GO_LDFLAGS := -s -w -X $(VPREFIX).Branch=$(GIT_BRANCH) -X $(VPREFIX).Version=$(IMAGE_TAG) -X $(VPREFIX).Revision=$(GIT_REVISION) -X $(VPREFIX).BuildUser=$(shell whoami)@$(shell hostname) -X $(VPREFIX).BuildDate=$(shell date -u +"%Y-%m-%dT%H:%M:%SZ")
GO_FLAGS := -ldflags "-extldflags \"-static\" $(GO_LDFLAGS)" -tags netgo
DYN_GO_FLAGS := -ldflags "$(GO_LDFLAGS)" -tags netgo
# Some websites suggest adding `-gcflags "all=-N -l"`; the gcflags seem poorly documented, if at all
diff --git a/cmd/logcli/main.go b/cmd/logcli/main.go
index 23ba98edea282..a5332ceb8c150 100644
--- a/cmd/logcli/main.go
+++ b/cmd/logcli/main.go
@@ -11,12 +11,13 @@ import (
"github.com/grafana/loki/pkg/logcli/output"
"github.com/grafana/loki/pkg/logcli/query"
"github.com/prometheus/common/config"
+ "github.com/prometheus/common/version"
"gopkg.in/alecthomas/kingpin.v2"
)
var (
- app = kingpin.New("logcli", "A command-line for loki.")
+ app = kingpin.New("logcli", "A command-line for loki.").Version(version.Print("logcli"))
quiet = app.Flag("quiet", "suppress everything but log lines").Default("false").Short('q').Bool()
outputMode = app.Flag("output", "specify output mode [default, raw, jsonl]").Default("default").Short('o').Enum("default", "raw", "jsonl")
timezone = app.Flag("timezone", "Specify the timezone to use when formatting output timestamps [Local, UTC]").Default("Local").Short('z').Enum("Local", "UTC")
diff --git a/cmd/loki-canary/main.go b/cmd/loki-canary/main.go
index d32889f07b26b..eded56a2635bc 100644
--- a/cmd/loki-canary/main.go
+++ b/cmd/loki-canary/main.go
@@ -11,6 +11,7 @@ import (
"time"
"github.com/prometheus/client_golang/prometheus/promhttp"
+ "github.com/prometheus/common/version"
"github.com/grafana/loki/pkg/canary/comparator"
"github.com/grafana/loki/pkg/canary/reader"
@@ -32,8 +33,16 @@ func main() {
wait := flag.Duration("wait", 60*time.Second, "Duration to wait for log entries before reporting them lost")
pruneInterval := flag.Duration("pruneinterval", 60*time.Second, "Frequency to check sent vs received logs, also the frequency at which queries for missing logs will be dispatched to loki")
buckets := flag.Int("buckets", 10, "Number of buckets in the response_latency histogram")
+
+ printVersion := flag.Bool("version", false, "Print this build's version information")
+
flag.Parse()
+ if *printVersion {
+ fmt.Print(version.Print("loki-canary"))
+ os.Exit(0)
+ }
+
if *addr == "" {
_, _ = fmt.Fprintf(os.Stderr, "Must specify a Loki address with -addr\n")
os.Exit(1)
diff --git a/cmd/loki/main.go b/cmd/loki/main.go
index 60676cf3603ac..9924ce4df6e97 100644
--- a/cmd/loki/main.go
+++ b/cmd/loki/main.go
@@ -30,8 +30,15 @@ func main() {
)
flag.StringVar(&configFile, "config.file", "", "Configuration file to load.")
flagext.RegisterFlags(&cfg)
+
+ printVersion := flag.Bool("version", false, "Print this build's version information")
flag.Parse()
+ if *printVersion {
+ fmt.Print(version.Print("loki"))
+ os.Exit(0)
+ }
+
// LimitsConfig has a custom UnmarshalYAML that will set the defaults to a global.
// This global is set to the config passed into the last call to `NewOverrides`. If we don't
// call it at least once, the defaults are set to an empty struct.
diff --git a/cmd/promtail/main.go b/cmd/promtail/main.go
index d37278c24fb9c..4933ebc8244b3 100644
--- a/cmd/promtail/main.go
+++ b/cmd/promtail/main.go
@@ -2,6 +2,7 @@ package main
import (
"flag"
+ "fmt"
"os"
"reflect"
@@ -29,8 +30,15 @@ func main() {
)
flag.StringVar(&configFile, "config.file", "promtail.yml", "The config file.")
flagext.RegisterFlags(&config)
+
+ printVersion := flag.Bool("version", false, "Print this build's version information")
flag.Parse()
+ if *printVersion {
+ fmt.Print(version.Print("promtail"))
+ os.Exit(0)
+ }
+
util.InitLogger(&config.ServerConfig.Config)
if configFile != "" {
|
feat
|
-version flag (#1036)
|
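The `version.Print(...)` calls in this commit only show useful output because the Makefile injects the values at link time via `-X` flags. A minimal standalone sketch of the same mechanism, using a local `Version` variable instead of the real `github.com/prometheus/common/version` package:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

// Version is meant to be overwritten at build time, e.g.
//
//	go build -ldflags "-X main.Version=v1.2.3" .
//
// which is the same -X mechanism the Makefile change above uses for the
// prometheus/common/version package variables.
var Version = "dev"

func main() {
	printVersion := flag.Bool("version", false, "Print this build's version information")
	flag.Parse()

	if *printVersion {
		fmt.Printf("example, version %s\n", Version)
		os.Exit(0)
	}
}
```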
9b62fe7d93f1cee1a876f9ca0392f44f54110ee2
|
2023-07-20 18:23:56
|
Mohamed-Amine Bouqsimi
|
operator: Custom configuration for LokiStack admin groups (#9931)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index dac857044afb4..dad61cc3289bb 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [9931](https://github.com/grafana/loki/pull/9931) **aminesnow**: Custom configuration for LokiStack admin groups
- [9971](https://github.com/grafana/loki/pull/9971) **aminesnow**: Add namespace and tenantId labels to RecordingRules
- [9906](https://github.com/grafana/loki/pull/9906) **JoaoBraveCoding**: Add mTLS authentication to tenants
- [9963](https://github.com/grafana/loki/pull/9963) **xperimental**: Fix application tenant alertmanager configuration
diff --git a/operator/apis/loki/v1/lokistack_types.go b/operator/apis/loki/v1/lokistack_types.go
index 298e668996d63..db4d561594c54 100644
--- a/operator/apis/loki/v1/lokistack_types.go
+++ b/operator/apis/loki/v1/lokistack_types.go
@@ -259,6 +259,29 @@ type TenantsSpec struct {
// +kubebuilder:validation:Optional
// +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Authorization"
Authorization *AuthorizationSpec `json:"authorization,omitempty"`
+
+ // Openshift defines the configuration specific to Openshift modes.
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Openshift"
+ Openshift *OpenshiftTenantSpec `json:"openshift,omitempty"`
+}
+
+// OpenshiftTenantSpec defines the configuration specific to Openshift modes.
+type OpenshiftTenantSpec struct {
+ // AdminGroups defines a list of groups, whose members are considered to have admin-privileges by the Loki Operator.
+ // Setting this to an empty array disables admin groups.
+ //
+ // By default the following groups are considered admin-groups:
+ // - system:cluster-admins
+ // - cluster-admin
+ // - dedicated-admin
+ //
+ // +optional
+ // +kubebuilder:validation:Optional
+ // +operator-sdk:csv:customresourcedefinitions:type=spec,displayName="Admin Groups"
+ AdminGroups []string `json:"adminGroups"`
}
// LokiComponentSpec defines the requirements to configure scheduling
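As a concrete illustration of the new API surface, a hypothetical LokiStack manifest exercising the field might look like the following (only the `tenants.openshift.adminGroups` path and the tenancy mode come from this commit; names are invented):

```yaml
# Hypothetical manifest; an empty adminGroups list disables admin groups
# entirely, per the doc comment above.
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: lokistack-sample
spec:
  tenants:
    mode: openshift-logging
    openshift:
      adminGroups:
        - cluster-admin
        - my-admins   # illustrative custom group
```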
diff --git a/operator/apis/loki/v1/zz_generated.deepcopy.go b/operator/apis/loki/v1/zz_generated.deepcopy.go
index 1445f4c0f6cb9..57e8d3e75cd3b 100644
--- a/operator/apis/loki/v1/zz_generated.deepcopy.go
+++ b/operator/apis/loki/v1/zz_generated.deepcopy.go
@@ -1073,6 +1073,26 @@ func (in *ObjectStorageTLSSpec) DeepCopy() *ObjectStorageTLSSpec {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *OpenshiftTenantSpec) DeepCopyInto(out *OpenshiftTenantSpec) {
+ *out = *in
+ if in.AdminGroups != nil {
+ in, out := &in.AdminGroups, &out.AdminGroups
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new OpenshiftTenantSpec.
+func (in *OpenshiftTenantSpec) DeepCopy() *OpenshiftTenantSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(OpenshiftTenantSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in PodStatusMap) DeepCopyInto(out *PodStatusMap) {
{
@@ -1680,6 +1700,11 @@ func (in *TenantsSpec) DeepCopyInto(out *TenantsSpec) {
*out = new(AuthorizationSpec)
(*in).DeepCopyInto(*out)
}
+ if in.Openshift != nil {
+ in, out := &in.Openshift, &out.Openshift
+ *out = new(OpenshiftTenantSpec)
+ (*in).DeepCopyInto(*out)
+ }
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TenantsSpec.
diff --git a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
index d91aee8bf9e39..f7accdeb1ec04 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: docker.io/grafana/loki-operator:main-ac1c1fd
- createdAt: "2023-07-17T16:04:46Z"
+ createdAt: "2023-07-19T14:44:08Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
@@ -766,6 +766,15 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:select:dynamic
- urn:alm:descriptor:com.tectonic.ui:select:openshift-logging
- urn:alm:descriptor:com.tectonic.ui:select:openshift-network
+ - description: Openshift defines the configuration specific to Openshift modes.
+ displayName: Openshift
+ path: tenants.openshift
+ - description: "AdminGroups defines a list of groups, whose members are considered
+ to have admin-privileges by the Loki Operator. Setting this to an empty
+ array disables admin groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin - dedicated-admin"
+ displayName: Admin Groups
+ path: tenants.openshift.adminGroups
statusDescriptors:
- description: Distributor is a map to the per pod status of the distributor
deployment
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
index c194829794ab7..326184c0fcf2d 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -3823,6 +3823,21 @@ spec:
- openshift-logging
- openshift-network
type: string
+ openshift:
+ description: Openshift defines the configuration specific to Openshift
+ modes.
+ properties:
+ adminGroups:
+ description: "AdminGroups defines a list of groups, whose
+ members are considered to have admin-privileges by the Loki
+ Operator. Setting this to an empty array disables admin
+ groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin -
+ dedicated-admin"
+ items:
+ type: string
+ type: array
+ type: object
required:
- mode
type: object
diff --git a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
index 92d0c0fa961f7..cf2b142a25878 100644
--- a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: docker.io/grafana/loki-operator:main-ac1c1fd
- createdAt: "2023-07-17T16:04:44Z"
+ createdAt: "2023-07-19T14:44:06Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
@@ -766,6 +766,15 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:select:dynamic
- urn:alm:descriptor:com.tectonic.ui:select:openshift-logging
- urn:alm:descriptor:com.tectonic.ui:select:openshift-network
+ - description: Openshift defines the configuration specific to Openshift modes.
+ displayName: Openshift
+ path: tenants.openshift
+ - description: "AdminGroups defines a list of groups, whose members are considered
+ to have admin-privileges by the Loki Operator. Setting this to an empty
+ array disables admin groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin - dedicated-admin"
+ displayName: Admin Groups
+ path: tenants.openshift.adminGroups
statusDescriptors:
- description: Distributor is a map to the per pod status of the distributor
deployment
diff --git a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
index c0c4097d53063..49c82691db287 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
@@ -3823,6 +3823,21 @@ spec:
- openshift-logging
- openshift-network
type: string
+ openshift:
+ description: Openshift defines the configuration specific to Openshift
+ modes.
+ properties:
+ adminGroups:
+ description: "AdminGroups defines a list of groups, whose
+ members are considered to have admin-privileges by the Loki
+ Operator. Setting this to an empty array disables admin
+ groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin -
+ dedicated-admin"
+ items:
+ type: string
+ type: array
+ type: object
required:
- mode
type: object
diff --git a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
index cf32b8595f2ce..ff762a79ae93b 100644
--- a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: quay.io/openshift-logging/loki-operator:v0.1.0
- createdAt: "2023-07-17T16:04:47Z"
+ createdAt: "2023-07-19T14:44:10Z"
description: |
The Loki Operator for OCP provides a means for configuring and managing a Loki stack for cluster logging.
## Prerequisites and Requirements
@@ -779,6 +779,15 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:select:dynamic
- urn:alm:descriptor:com.tectonic.ui:select:openshift-logging
- urn:alm:descriptor:com.tectonic.ui:select:openshift-network
+ - description: Openshift defines the configuration specific to Openshift modes.
+ displayName: Openshift
+ path: tenants.openshift
+ - description: "AdminGroups defines a list of groups, whose members are considered
+ to have admin-privileges by the Loki Operator. Setting this to an empty
+ array disables admin groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin - dedicated-admin"
+ displayName: Admin Groups
+ path: tenants.openshift.adminGroups
statusDescriptors:
- description: Distributor is a map to the per pod status of the distributor
deployment
diff --git a/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
index 54a2d5527e2cb..84a0094aa14ff 100644
--- a/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -3823,6 +3823,21 @@ spec:
- openshift-logging
- openshift-network
type: string
+ openshift:
+ description: Openshift defines the configuration specific to Openshift
+ modes.
+ properties:
+ adminGroups:
+ description: "AdminGroups defines a list of groups, whose
+ members are considered to have admin-privileges by the Loki
+ Operator. Setting this to an empty array disables admin
+ groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin -
+ dedicated-admin"
+ items:
+ type: string
+ type: array
+ type: object
required:
- mode
type: object
diff --git a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
index bb419cee885e3..03450ee2d13ab 100644
--- a/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
+++ b/operator/config/crd/bases/loki.grafana.com_lokistacks.yaml
@@ -3806,6 +3806,21 @@ spec:
- openshift-logging
- openshift-network
type: string
+ openshift:
+ description: Openshift defines the configuration specific to Openshift
+ modes.
+ properties:
+ adminGroups:
+ description: "AdminGroups defines a list of groups, whose
+ members are considered to have admin-privileges by the Loki
+ Operator. Setting this to an empty array disables admin
+ groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin -
+ dedicated-admin"
+ items:
+ type: string
+ type: array
+ type: object
required:
- mode
type: object
diff --git a/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
index a5a2d2f10a523..dd107b35f5904 100644
--- a/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
@@ -679,6 +679,15 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:select:dynamic
- urn:alm:descriptor:com.tectonic.ui:select:openshift-logging
- urn:alm:descriptor:com.tectonic.ui:select:openshift-network
+ - description: Openshift defines the configuration specific to Openshift modes.
+ displayName: Openshift
+ path: tenants.openshift
+ - description: "AdminGroups defines a list of groups, whose members are considered
+ to have admin-privileges by the Loki Operator. Setting this to an empty
+ array disables admin groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin - dedicated-admin"
+ displayName: Admin Groups
+ path: tenants.openshift.adminGroups
statusDescriptors:
- description: Distributor is a map to the per pod status of the distributor
deployment
diff --git a/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
index cd6cf3cb516ad..adb076daf8253 100644
--- a/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
@@ -679,6 +679,15 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:select:dynamic
- urn:alm:descriptor:com.tectonic.ui:select:openshift-logging
- urn:alm:descriptor:com.tectonic.ui:select:openshift-network
+ - description: Openshift defines the configuration specific to Openshift modes.
+ displayName: Openshift
+ path: tenants.openshift
+ - description: "AdminGroups defines a list of groups, whose members are considered
+ to have admin-privileges by the Loki Operator. Setting this to an empty
+ array disables admin groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin - dedicated-admin"
+ displayName: Admin Groups
+ path: tenants.openshift.adminGroups
statusDescriptors:
- description: Distributor is a map to the per pod status of the distributor
deployment
diff --git a/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml
index 27110f1b42e91..5b08f7e45f0ef 100644
--- a/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/openshift/bases/loki-operator.clusterserviceversion.yaml
@@ -691,6 +691,15 @@ spec:
- urn:alm:descriptor:com.tectonic.ui:select:dynamic
- urn:alm:descriptor:com.tectonic.ui:select:openshift-logging
- urn:alm:descriptor:com.tectonic.ui:select:openshift-network
+ - description: Openshift defines the configuration specific to Openshift modes.
+ displayName: Openshift
+ path: tenants.openshift
+ - description: "AdminGroups defines a list of groups, whose members are considered
+ to have admin-privileges by the Loki Operator. Setting this to an empty
+ array disables admin groups. \n By default the following groups are considered
+ admin-groups: - system:cluster-admins - cluster-admin - dedicated-admin"
+ displayName: Admin Groups
+ path: tenants.openshift.adminGroups
statusDescriptors:
- description: Distributor is a map to the per pod status of the distributor
deployment
diff --git a/operator/internal/manifests/gateway.go b/operator/internal/manifests/gateway.go
index 16942d30ee853..9184a469ddf8b 100644
--- a/operator/internal/manifests/gateway.go
+++ b/operator/internal/manifests/gateway.go
@@ -26,7 +26,14 @@ const (
tlsSecretVolume = "tls-secret"
)
-var logsEndpointRe = regexp.MustCompile(`^--logs\.(?:read|tail|write|rules)\.endpoint=http://.+`)
+var (
+ logsEndpointRe = regexp.MustCompile(`^--logs\.(?:read|tail|write|rules)\.endpoint=http://.+`)
+ defaultAdminGroups = []string{
+ "system:cluster-admins",
+ "cluster-admin",
+ "dedicated-admin",
+ }
+)
// BuildGateway returns a list of k8s objects for Loki Stack Gateway
func BuildGateway(opts Options) ([]client.Object, error) {
@@ -76,7 +83,12 @@ func BuildGateway(opts Options) ([]client.Object, error) {
}
if opts.Stack.Tenants != nil {
- if err := configureGatewayDeploymentForMode(dpl, opts.Stack.Tenants, opts.Gates, minTLSVersion, ciphers); err != nil {
+ adminGroups := defaultAdminGroups
+ if opts.Stack.Tenants.Openshift != nil && opts.Stack.Tenants.Openshift.AdminGroups != nil {
+ adminGroups = opts.Stack.Tenants.Openshift.AdminGroups
+ }
+
+ if err := configureGatewayDeploymentForMode(dpl, opts.Stack.Tenants, opts.Gates, minTLSVersion, ciphers, adminGroups); err != nil {
return nil, err
}
diff --git a/operator/internal/manifests/gateway_tenants.go b/operator/internal/manifests/gateway_tenants.go
index 83e9e7d367995..b137ee483d20e 100644
--- a/operator/internal/manifests/gateway_tenants.go
+++ b/operator/internal/manifests/gateway_tenants.go
@@ -65,7 +65,7 @@ func ApplyGatewayDefaultOptions(opts *Options) error {
return nil
}
-func configureGatewayDeploymentForMode(d *appsv1.Deployment, tenants *lokiv1.TenantsSpec, fg configv1.FeatureGates, minTLSVersion string, ciphers string) error {
+func configureGatewayDeploymentForMode(d *appsv1.Deployment, tenants *lokiv1.TenantsSpec, fg configv1.FeatureGates, minTLSVersion string, ciphers string, adminGroups []string) error {
switch tenants.Mode {
case lokiv1.Static, lokiv1.Dynamic:
if tenants != nil {
@@ -74,7 +74,7 @@ func configureGatewayDeploymentForMode(d *appsv1.Deployment, tenants *lokiv1.Ten
return nil
case lokiv1.OpenshiftLogging, lokiv1.OpenshiftNetwork:
tlsDir := gatewayServerHTTPTLSDir()
- return openshift.ConfigureGatewayDeployment(d, tenants.Mode, tlsSecretVolume, tlsDir, minTLSVersion, ciphers, fg.HTTPEncryption)
+ return openshift.ConfigureGatewayDeployment(d, tenants.Mode, tlsSecretVolume, tlsDir, minTLSVersion, ciphers, fg.HTTPEncryption, adminGroups)
}
return nil
diff --git a/operator/internal/manifests/gateway_tenants_test.go b/operator/internal/manifests/gateway_tenants_test.go
index e26beb2cd16c4..68e639d9575ae 100644
--- a/operator/internal/manifests/gateway_tenants_test.go
+++ b/operator/internal/manifests/gateway_tenants_test.go
@@ -419,6 +419,7 @@ func TestConfigureDeploymentForMode(t *testing.T) {
stackName string
stackNs string
featureGates configv1.FeatureGates
+ adminGroups []string
tenants *lokiv1.TenantsSpec
dpl *appsv1.Deployment
want *appsv1.Deployment
@@ -442,9 +443,10 @@ func TestConfigureDeploymentForMode(t *testing.T) {
want: defaultGatewayDeployment(),
},
{
- desc: "static mode with mTLS tenant configured",
- stackName: "test",
- stackNs: "test-ns",
+ desc: "static mode with mTLS tenant configured",
+ stackName: "test",
+ stackNs: "test-ns",
+ adminGroups: defaultAdminGroups,
tenants: &lokiv1.TenantsSpec{
Mode: lokiv1.Static,
Authentication: []lokiv1.AuthenticationSpec{
@@ -574,9 +576,10 @@ func TestConfigureDeploymentForMode(t *testing.T) {
tenants: &lokiv1.TenantsSpec{
Mode: lokiv1.OpenshiftLogging,
},
- stackName: "test",
- stackNs: "test-ns",
- dpl: defaultGatewayDeployment(),
+ stackName: "test",
+ stackNs: "test-ns",
+ dpl: defaultGatewayDeployment(),
+ adminGroups: defaultAdminGroups,
want: &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-ns",
@@ -593,12 +596,12 @@ func TestConfigureDeploymentForMode(t *testing.T) {
Image: "quay.io/observatorium/opa-openshift:latest",
Args: []string{
"--log.level=warn",
- "--opa.skip-tenants=audit,infrastructure",
- "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--web.listen=:8082",
"--web.internal.listen=:8083",
"--web.healthchecks.url=http://localhost:8082",
+ "--opa.skip-tenants=audit,infrastructure",
"--opa.package=lokistack",
+ "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--opa.matcher=kubernetes_namespace_name",
`--openshift.mappings=application=loki.grafana.com`,
`--openshift.mappings=infrastructure=loki.grafana.com`,
@@ -658,6 +661,7 @@ func TestConfigureDeploymentForMode(t *testing.T) {
HTTPEncryption: true,
ServiceMonitorTLSEndpoints: true,
},
+ adminGroups: defaultAdminGroups,
dpl: &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Name: "test-gateway",
@@ -698,12 +702,12 @@ func TestConfigureDeploymentForMode(t *testing.T) {
Image: "quay.io/observatorium/opa-openshift:latest",
Args: []string{
"--log.level=warn",
- "--opa.skip-tenants=audit,infrastructure",
- "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--web.listen=:8082",
"--web.internal.listen=:8083",
"--web.healthchecks.url=http://localhost:8082",
+ "--opa.skip-tenants=audit,infrastructure",
"--opa.package=lokistack",
+ "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--opa.matcher=kubernetes_namespace_name",
"--tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt",
"--tls.internal.server.key-file=/var/run/tls/http/server/tls.key",
@@ -773,8 +777,9 @@ func TestConfigureDeploymentForMode(t *testing.T) {
tenants: &lokiv1.TenantsSpec{
Mode: lokiv1.OpenshiftNetwork,
},
- stackName: "test",
- stackNs: "test-ns",
+ stackName: "test",
+ stackNs: "test-ns",
+ adminGroups: defaultAdminGroups,
dpl: &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-ns",
@@ -812,12 +817,12 @@ func TestConfigureDeploymentForMode(t *testing.T) {
Image: "quay.io/observatorium/opa-openshift:latest",
Args: []string{
"--log.level=warn",
- "--opa.skip-tenants=audit,infrastructure",
- "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--web.listen=:8082",
"--web.internal.listen=:8083",
"--web.healthchecks.url=http://localhost:8082",
+ "--opa.skip-tenants=audit,infrastructure",
"--opa.package=lokistack",
+ "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--opa.matcher=SrcK8S_Namespace,DstK8S_Namespace",
"--opa.matcher-op=or",
`--openshift.mappings=network=loki.grafana.com`,
@@ -881,6 +886,7 @@ func TestConfigureDeploymentForMode(t *testing.T) {
HTTPEncryption: true,
ServiceMonitorTLSEndpoints: true,
},
+ adminGroups: defaultAdminGroups,
dpl: &appsv1.Deployment{
ObjectMeta: metav1.ObjectMeta{
Namespace: "test-ns",
@@ -918,12 +924,12 @@ func TestConfigureDeploymentForMode(t *testing.T) {
Image: "quay.io/observatorium/opa-openshift:latest",
Args: []string{
"--log.level=warn",
- "--opa.skip-tenants=audit,infrastructure",
- "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--web.listen=:8082",
"--web.internal.listen=:8083",
"--web.healthchecks.url=http://localhost:8082",
+ "--opa.skip-tenants=audit,infrastructure",
"--opa.package=lokistack",
+ "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
"--opa.matcher=SrcK8S_Namespace,DstK8S_Namespace",
"--opa.matcher-op=or",
"--tls.internal.server.cert-file=/var/run/tls/http/server/tls.crt",
@@ -987,13 +993,203 @@ func TestConfigureDeploymentForMode(t *testing.T) {
},
},
},
+ {
+ desc: "openshift-logging mode with custom admin group list",
+ tenants: &lokiv1.TenantsSpec{
+ Mode: lokiv1.OpenshiftLogging,
+ },
+ stackName: "test",
+ stackNs: "test-ns",
+ adminGroups: []string{
+ "custom-admins",
+ "other-admins",
+ },
+ dpl: &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: "test-ns",
+ },
+ Spec: appsv1.DeploymentSpec{
+ Template: corev1.PodTemplateSpec{
+ Spec: corev1.PodSpec{
+ Containers: []corev1.Container{
+ {
+ Name: gatewayContainerName,
+ },
+ },
+ },
+ },
+ },
+ },
+ want: &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: "test-ns",
+ },
+ Spec: appsv1.DeploymentSpec{
+ Template: corev1.PodTemplateSpec{
+ Spec: corev1.PodSpec{
+ Containers: []corev1.Container{
+ {
+ Name: gatewayContainerName,
+ },
+ {
+ Name: "opa",
+ Image: "quay.io/observatorium/opa-openshift:latest",
+ Args: []string{
+ "--log.level=warn",
+ "--web.listen=:8082",
+ "--web.internal.listen=:8083",
+ "--web.healthchecks.url=http://localhost:8082",
+ "--opa.skip-tenants=audit,infrastructure",
+ "--opa.package=lokistack",
+ "--opa.admin-groups=custom-admins,other-admins",
+ "--opa.matcher=kubernetes_namespace_name",
+ `--openshift.mappings=application=loki.grafana.com`,
+ `--openshift.mappings=infrastructure=loki.grafana.com`,
+ `--openshift.mappings=audit=loki.grafana.com`,
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: openshift.GatewayOPAHTTPPortName,
+ ContainerPort: openshift.GatewayOPAHTTPPort,
+ Protocol: corev1.ProtocolTCP,
+ },
+ {
+ Name: openshift.GatewayOPAInternalPortName,
+ ContainerPort: openshift.GatewayOPAInternalPort,
+ Protocol: corev1.ProtocolTCP,
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/live",
+ Port: intstr.FromInt(int(openshift.GatewayOPAInternalPort)),
+ Scheme: corev1.URISchemeHTTP,
+ },
+ },
+ TimeoutSeconds: 2,
+ PeriodSeconds: 30,
+ FailureThreshold: 10,
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/ready",
+ Port: intstr.FromInt(int(openshift.GatewayOPAInternalPort)),
+ Scheme: corev1.URISchemeHTTP,
+ },
+ },
+ TimeoutSeconds: 1,
+ PeriodSeconds: 5,
+ FailureThreshold: 12,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ {
+ desc: "openshift-logging mode with empty admin group list",
+ tenants: &lokiv1.TenantsSpec{
+ Mode: lokiv1.OpenshiftLogging,
+ },
+ stackName: "test",
+ stackNs: "test-ns",
+ adminGroups: []string{},
+ dpl: &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: "test-ns",
+ },
+ Spec: appsv1.DeploymentSpec{
+ Template: corev1.PodTemplateSpec{
+ Spec: corev1.PodSpec{
+ Containers: []corev1.Container{
+ {
+ Name: gatewayContainerName,
+ },
+ },
+ },
+ },
+ },
+ },
+ want: &appsv1.Deployment{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: "test-ns",
+ },
+ Spec: appsv1.DeploymentSpec{
+ Template: corev1.PodTemplateSpec{
+ Spec: corev1.PodSpec{
+ Containers: []corev1.Container{
+ {
+ Name: gatewayContainerName,
+ },
+ {
+ Name: "opa",
+ Image: "quay.io/observatorium/opa-openshift:latest",
+ Args: []string{
+ "--log.level=warn",
+ "--web.listen=:8082",
+ "--web.internal.listen=:8083",
+ "--web.healthchecks.url=http://localhost:8082",
+ "--opa.skip-tenants=audit,infrastructure",
+ "--opa.package=lokistack",
+ "--opa.matcher=kubernetes_namespace_name",
+ `--openshift.mappings=application=loki.grafana.com`,
+ `--openshift.mappings=infrastructure=loki.grafana.com`,
+ `--openshift.mappings=audit=loki.grafana.com`,
+ },
+ Ports: []corev1.ContainerPort{
+ {
+ Name: openshift.GatewayOPAHTTPPortName,
+ ContainerPort: openshift.GatewayOPAHTTPPort,
+ Protocol: corev1.ProtocolTCP,
+ },
+ {
+ Name: openshift.GatewayOPAInternalPortName,
+ ContainerPort: openshift.GatewayOPAInternalPort,
+ Protocol: corev1.ProtocolTCP,
+ },
+ },
+ LivenessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/live",
+ Port: intstr.FromInt(int(openshift.GatewayOPAInternalPort)),
+ Scheme: corev1.URISchemeHTTP,
+ },
+ },
+ TimeoutSeconds: 2,
+ PeriodSeconds: 30,
+ FailureThreshold: 10,
+ },
+ ReadinessProbe: &corev1.Probe{
+ ProbeHandler: corev1.ProbeHandler{
+ HTTPGet: &corev1.HTTPGetAction{
+ Path: "/ready",
+ Port: intstr.FromInt(int(openshift.GatewayOPAInternalPort)),
+ Scheme: corev1.URISchemeHTTP,
+ },
+ },
+ TimeoutSeconds: 1,
+ PeriodSeconds: 5,
+ FailureThreshold: 12,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
}
for _, tc := range tc {
tc := tc
t.Run(tc.desc, func(t *testing.T) {
t.Parallel()
- err := configureGatewayDeploymentForMode(tc.dpl, tc.tenants, tc.featureGates, "min-version", "cipher1,cipher2")
+ err := configureGatewayDeploymentForMode(tc.dpl, tc.tenants, tc.featureGates, "min-version", "cipher1,cipher2", tc.adminGroups)
require.NoError(t, err)
require.Equal(t, tc.want, tc.dpl)
})
diff --git a/operator/internal/manifests/openshift/configure.go b/operator/internal/manifests/openshift/configure.go
index 27a5c9d441065..4b8a3f4625b33 100644
--- a/operator/internal/manifests/openshift/configure.go
+++ b/operator/internal/manifests/openshift/configure.go
@@ -60,12 +60,12 @@ func ConfigureGatewayDeployment(
mode lokiv1.ModeType,
secretVolumeName, tlsDir string,
minTLSVersion, ciphers string,
- withTLS bool,
+ withTLS bool, adminGroups []string,
) error {
p := corev1.PodSpec{
ServiceAccountName: d.GetName(),
Containers: []corev1.Container{
- newOPAOpenShiftContainer(mode, secretVolumeName, tlsDir, minTLSVersion, ciphers, withTLS),
+ newOPAOpenShiftContainer(mode, secretVolumeName, tlsDir, minTLSVersion, ciphers, withTLS, adminGroups),
},
}
diff --git a/operator/internal/manifests/openshift/opa_openshift.go b/operator/internal/manifests/openshift/opa_openshift.go
index 43489c50efbbe..c86d429e51e0b 100644
--- a/operator/internal/manifests/openshift/opa_openshift.go
+++ b/operator/internal/manifests/openshift/opa_openshift.go
@@ -4,6 +4,7 @@ import (
"fmt"
"os"
"path"
+ "strings"
lokiv1 "github.com/grafana/loki/operator/apis/loki/v1"
corev1 "k8s.io/api/core/v1"
@@ -22,7 +23,7 @@ const (
opaNetworkLabelMatchers = "SrcK8S_Namespace,DstK8S_Namespace"
)
-func newOPAOpenShiftContainer(mode lokiv1.ModeType, secretVolumeName, tlsDir, minTLSVersion, ciphers string, withTLS bool) corev1.Container {
+func newOPAOpenShiftContainer(mode lokiv1.ModeType, secretVolumeName, tlsDir, minTLSVersion, ciphers string, withTLS bool, adminGroups []string) corev1.Container {
var (
image string
args []string
@@ -38,14 +39,17 @@ func newOPAOpenShiftContainer(mode lokiv1.ModeType, secretVolumeName, tlsDir, mi
uriScheme = corev1.URISchemeHTTP
args = []string{
"--log.level=warn",
- "--opa.skip-tenants=audit,infrastructure",
- "--opa.admin-groups=system:cluster-admins,cluster-admin,dedicated-admin",
fmt.Sprintf("--web.listen=:%d", GatewayOPAHTTPPort),
fmt.Sprintf("--web.internal.listen=:%d", GatewayOPAInternalPort),
fmt.Sprintf("--web.healthchecks.url=http://localhost:%d", GatewayOPAHTTPPort),
+ "--opa.skip-tenants=audit,infrastructure",
fmt.Sprintf("--opa.package=%s", opaDefaultPackage),
}
+ if len(adminGroups) > 0 {
+ args = append(args, fmt.Sprintf("--opa.admin-groups=%s", strings.Join(adminGroups, ",")))
+ }
+
if mode != lokiv1.OpenshiftNetwork {
args = append(args, []string{
fmt.Sprintf("--opa.matcher=%s", opaDefaultLabelMatcher),
|
operator
|
Custom configuration for LokiStack admin groups (#9931)
|
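A note on the change above: it hinges on distinguishing an unset `AdminGroups` field from an explicitly empty list. A nil field keeps the built-in defaults, while an empty slice drops the `--opa.admin-groups` flag altogether, as the new "empty admin group list" test shows. A minimal runnable sketch of that selection logic (the helper name `adminGroupArgs` is illustrative, not part of the operator code):

```go
package main

import (
	"fmt"
	"strings"
)

// Default groups mirrored from the diff above.
var defaultAdminGroups = []string{
	"system:cluster-admins",
	"cluster-admin",
	"dedicated-admin",
}

// adminGroupArgs distinguishes the three cases the commit covers: a nil
// AdminGroups field keeps the defaults, an explicitly empty slice disables
// the flag entirely, and anything else is passed through verbatim.
func adminGroupArgs(configured []string) []string {
	groups := defaultAdminGroups
	if configured != nil { // nil means "unset", [] means "disable"
		groups = configured
	}
	if len(groups) == 0 {
		return nil
	}
	return []string{fmt.Sprintf("--opa.admin-groups=%s", strings.Join(groups, ","))}
}

func main() {
	fmt.Println(adminGroupArgs(nil))                   // default groups
	fmt.Println(adminGroupArgs([]string{}))            // flag omitted
	fmt.Println(adminGroupArgs([]string{"my-admins"})) // custom group
}
```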
690a387cce93bbd5d6e243210c431d2129c4f26f
|
2020-10-14 16:20:59
|
Kaviraj
|
fix: Remove deprecated `entry_parser` from scrapeconfig (#2752)
| false
|
diff --git a/docs/sources/clients/promtail/configuration.md b/docs/sources/clients/promtail/configuration.md
index 016d9690e304d..566bc36b9cdd5 100644
--- a/docs/sources/clients/promtail/configuration.md
+++ b/docs/sources/clients/promtail/configuration.md
@@ -259,12 +259,12 @@ backoff_config:
# Use map like {"foo": "bar"} to add a label foo with
# value bar.
# These can also be specified from command line:
-# -client.external-labels=k1=v1,k2=v2
+# -client.external-labels=k1=v1,k2=v2
# (or --client.external-labels depending on your OS)
-# labels supplied by the command line are applied
+# labels supplied by the command line are applied
# to all clients configured in the `clients` section.
# NOTE: values defined in the config file will replace values
-# defined on the command line for a given client if the
+# defined on the command line for a given client if the
# label keys are the same.
external_labels:
[ <labelname>: <labelvalue> ... ]
@@ -299,10 +299,6 @@ of targets using a specified discovery method:
# Name to identify this scrape config in the Promtail UI.
job_name: <string>
-# Describes how to parse log lines. Supported values [cri docker raw]
-# Deprecated in favor of pipeline_stages using the cri or docker stages.
-[entry_parser: <string> | default = "docker"]
-
# Describes how to transform logs from targets.
[pipeline_stages: <pipeline_stages>]
diff --git a/docs/sources/design-documents/labels.md b/docs/sources/design-documents/labels.md
index f1854d5a4ccc1..052af82fa00e8 100644
--- a/docs/sources/design-documents/labels.md
+++ b/docs/sources/design-documents/labels.md
@@ -92,7 +92,7 @@ Our pipelined config might look like this:
```yaml
scrape_configs:
- job_name: system
- entry_parsers:
+ pipeline_stages:
- json:
timestamp:
source: time
@@ -153,7 +153,7 @@ There is an alternative configuration that could be used here to accomplish the
```yaml
scrape_configs:
- job_name: system
- entry_parsers:
+ pipeline_stages:
- json:
timestamp:
source: time
@@ -195,7 +195,7 @@ For example, the config above might be simplified to:
```yaml
scrape_configs:
- job_name: system
- entry_parsers:
+ pipeline_stages:
- docker:
```
@@ -204,7 +204,7 @@ or
```yaml
scrape_configs:
- job_name: system
- entry_parsers:
+ pipeline_stages:
- cri:
```
@@ -213,7 +213,7 @@ Which could still easily be extended to extract additional labels:
```yaml
scrape_configs:
- job_name: system
- entry_parsers:
+ pipeline_stages:
- docker:
- regex:
expr: '.*level=(?P<level>[a-zA-Z]+).*'
@@ -227,7 +227,7 @@ An even further simplification would be to attempt to autodetect the log format,
```yaml
scrape_configs:
- job_name: system
- entry_parsers:
+ pipeline_stages:
- auto:
```
diff --git a/pkg/promtail/api/entry_parser.go b/pkg/promtail/api/entry_parser.go
deleted file mode 100644
index 58f33675da996..0000000000000
--- a/pkg/promtail/api/entry_parser.go
+++ /dev/null
@@ -1,104 +0,0 @@
-package api
-
-import (
- "fmt"
- "regexp"
- "strings"
- "time"
-
- json "github.com/json-iterator/go"
- "github.com/prometheus/common/model"
-)
-
-// EntryParser describes how to parse log lines.
-type EntryParser int
-
-// Different supported EntryParsers.
-const (
- Docker EntryParser = iota
- Raw
- CRI
-)
-
-var (
- criPattern = regexp.MustCompile(`^(?s)(?P<time>\S+?) (?P<stream>stdout|stderr) (?P<flags>\S+?) (?P<content>.*)$`)
-)
-
-// String returns a string representation of the EntryParser.
-func (e EntryParser) String() string {
- switch e {
- case CRI:
- return "cri"
- case Docker:
- return "docker"
- case Raw:
- return "raw"
- default:
- panic(e)
- }
-}
-
-// Set implements flag.Value.
-func (e *EntryParser) Set(s string) error {
- switch strings.ToLower(s) {
- case "cri":
- *e = CRI
- return nil
- case "docker":
- *e = Docker
- return nil
- case "raw":
- *e = Raw
- return nil
- default:
- return fmt.Errorf("unrecognised EntryParser: %v", s)
- }
-}
-
-// UnmarshalYAML implements yaml.Unmarshaler.
-func (e *EntryParser) UnmarshalYAML(unmarshal func(interface{}) error) error {
- var s string
- if err := unmarshal(&s); err != nil {
- return err
- }
- return e.Set(s)
-}
-
-// Wrap implements EntryMiddleware.
-func (e EntryParser) Wrap(next EntryHandler) EntryHandler {
- switch e {
- case CRI:
- return EntryHandlerFunc(func(labels model.LabelSet, _ time.Time, line string) error {
- parts := criPattern.FindStringSubmatch(line)
- if parts == nil || len(parts) < 5 {
- return fmt.Errorf("Line did not match the CRI log format: '%s'", line)
- }
-
- timestamp, err := time.Parse(time.RFC3339Nano, parts[1])
- if err != nil {
- return fmt.Errorf("CRI timestamp '%s' does not match RFC3339Nano", parts[1])
- }
-
- labels = labels.Merge(model.LabelSet{"stream": model.LabelValue(parts[2])})
- return next.Handle(labels, timestamp, parts[4])
- })
- case Docker:
- return EntryHandlerFunc(func(labels model.LabelSet, _ time.Time, line string) error {
- // Docker-style json object per line.
- var entry struct {
- Log string
- Stream string
- Time time.Time
- }
- if err := json.Unmarshal([]byte(line), &entry); err != nil {
- return err
- }
- labels = labels.Merge(model.LabelSet{"stream": model.LabelValue(entry.Stream)})
- return next.Handle(labels, entry.Time, entry.Log)
- })
- case Raw:
- return next
- default:
- panic(fmt.Sprintf("unrecognised EntryParser: %s", e))
- }
-}
diff --git a/pkg/promtail/api/entry_parser_test.go b/pkg/promtail/api/entry_parser_test.go
deleted file mode 100644
index d88d41a17712d..0000000000000
--- a/pkg/promtail/api/entry_parser_test.go
+++ /dev/null
@@ -1,106 +0,0 @@
-package api
-
-import (
- "testing"
- "time"
-
- "github.com/prometheus/common/model"
- "github.com/stretchr/testify/assert"
- "github.com/stretchr/testify/require"
-)
-
-var (
- TestTimeStr = "2019-01-01T01:00:00.000000001Z"
- TestTime, _ = time.Parse(time.RFC3339Nano, TestTimeStr)
-)
-
-type Entry struct {
- Time time.Time
- Log string
- Labels model.LabelSet
-}
-
-func NewEntry(time time.Time, message string, stream string) Entry {
- return Entry{time, message, model.LabelSet{"stream": model.LabelValue(stream)}}
-}
-
-type TestCase struct {
- Line string // input
- ExpectedError bool
- Expected Entry
-}
-
-var criTestCases = []TestCase{
- {"", true, Entry{}},
- {TestTimeStr, true, Entry{}},
- {TestTimeStr + " stdout", true, Entry{}},
- {TestTimeStr + " invalid F message", true, Entry{}},
- {"2019-01-01 01:00:00.000000001 stdout F message", true, Entry{}},
- {" " + TestTimeStr + " stdout F message", true, Entry{}},
- {TestTimeStr + " stdout F ", false, NewEntry(TestTime, "", "stdout")},
- {TestTimeStr + " stdout F message", false, NewEntry(TestTime, "message", "stdout")},
- {TestTimeStr + " stderr P message", false, NewEntry(TestTime, "message", "stderr")},
- {TestTimeStr + " stderr P message1\nmessage2", false, NewEntry(TestTime, "message1\nmessage2", "stderr")},
-}
-
-func TestCRI(t *testing.T) {
- runTestCases(CRI, criTestCases, t)
-}
-
-var dockerTestCases = []TestCase{
- {
- Line: "{{\"log\":\"bad json, should fail to parse\\n\",\"stream\":\"stderr\",\"time\":\"2019-03-04T21:37:44.789508817Z\"}",
- ExpectedError: true,
- Expected: Entry{},
- },
- {
- Line: "{\"log\":\"some silly log message\\n\",\"stream\":\"stderr\",\"time\":\"2019-03-04T21:37:44.789508817Z\"}",
- ExpectedError: false,
- Expected: NewEntry(time.Date(2019, 03, 04, 21, 37, 44, 789508817, time.UTC),
- "some silly log message\n",
- "stderr"),
- },
- {
- Line: "{\"log\":\"10.15.0.5 - - [04/Mar/2019:21:37:44 +0000] \\\"POST /api/prom/push HTTP/1.1\\\" 200 0 \\\"\\\" \\\"Go-http-client/1.1\\\"\\n\",\"stream\":\"stdout\",\"time\":\"2019-03-04T21:37:44.790195228Z\"}",
- ExpectedError: false,
- Expected: NewEntry(time.Date(2019, 03, 04, 21, 37, 44, 790195228, time.UTC),
- "10.15.0.5 - - [04/Mar/2019:21:37:44 +0000] \"POST /api/prom/push HTTP/1.1\" 200 0 \"\" \"Go-http-client/1.1\"\n",
- "stdout"),
- },
-}
-
-func TestDocker(t *testing.T) {
- runTestCases(Docker, dockerTestCases, t)
-}
-
-func runTestCases(parser EntryParser, testCases []TestCase, t *testing.T) {
- for i, tc := range testCases {
- client := &TestClient{
- Entries: make([]Entry, 0),
- }
-
- handler := parser.Wrap(client)
- err := handler.Handle(model.LabelSet{}, time.Now(), tc.Line)
-
- if err != nil && tc.ExpectedError {
- continue
- } else if err != nil {
- t.Fatal("Unexpected error for test case", i, "with entry", tc.Line, "\nerror:", err)
- }
-
- require.Equal(t, 1, len(client.Entries), "Handler did not receive the correct number of Entries for test case %d", i)
- entry := client.Entries[0]
- assert.Equal(t, tc.Expected.Time, entry.Time, "Time error for test case %d, with entry %s", i, tc.Line)
- assert.Equal(t, tc.Expected.Log, entry.Log, "Log entry error for test case %d, with entry %s", i, tc.Line)
- assert.True(t, tc.Expected.Labels.Equal(entry.Labels), "Label error for test case %d, labels did not match; expected: %s, found %s", i, tc.Expected.Labels, entry.Labels)
- }
-}
-
-type TestClient struct {
- Entries []Entry
-}
-
-func (c *TestClient) Handle(ls model.LabelSet, t time.Time, s string) error {
- c.Entries = append(c.Entries, Entry{t, s, ls})
- return nil
-}
diff --git a/pkg/promtail/promtail_test.go b/pkg/promtail/promtail_test.go
index 901a48739eece..7b8d007156080 100644
--- a/pkg/promtail/promtail_test.go
+++ b/pkg/promtail/promtail_test.go
@@ -31,7 +31,6 @@ import (
"github.com/grafana/loki/pkg/logentry/stages"
"github.com/grafana/loki/pkg/logproto"
- "github.com/grafana/loki/pkg/promtail/api"
"github.com/grafana/loki/pkg/promtail/client"
"github.com/grafana/loki/pkg/promtail/config"
"github.com/grafana/loki/pkg/promtail/positions"
@@ -603,7 +602,6 @@ func buildTestConfig(t *testing.T, positionsFileName string, logDirName string)
scrapeConfig := scrapeconfig.Config{
JobName: "",
- EntryParser: api.Raw,
PipelineStages: pipeline,
RelabelConfigs: nil,
ServiceDiscoveryConfig: serviceConfig,
diff --git a/pkg/promtail/scrapeconfig/scrapeconfig.go b/pkg/promtail/scrapeconfig/scrapeconfig.go
index 113cc30545375..6063af79be65a 100644
--- a/pkg/promtail/scrapeconfig/scrapeconfig.go
+++ b/pkg/promtail/scrapeconfig/scrapeconfig.go
@@ -12,13 +12,11 @@ import (
"github.com/prometheus/prometheus/pkg/relabel"
"github.com/grafana/loki/pkg/logentry/stages"
- "github.com/grafana/loki/pkg/promtail/api"
)
// Config describes a job to scrape.
type Config struct {
JobName string `yaml:"job_name,omitempty"`
- EntryParser api.EntryParser `yaml:"entry_parser"`
PipelineStages stages.PipelineStages `yaml:"pipeline_stages,omitempty"`
JournalConfig *JournalTargetConfig `yaml:"journal,omitempty"`
SyslogConfig *SyslogTargetConfig `yaml:"syslog,omitempty"`
@@ -82,7 +80,11 @@ type PushTargetConfig struct {
// DefaultScrapeConfig is the default Config.
var DefaultScrapeConfig = Config{
- EntryParser: api.Docker,
+ PipelineStages: []interface{}{
+ map[interface{}]interface{}{
+ stages.StageTypeDocker: nil,
+ },
+ },
}
// HasServiceDiscoveryConfig checks to see if the service discovery used for
diff --git a/pkg/promtail/scrapeconfig/scrapeconfig_test.go b/pkg/promtail/scrapeconfig/scrapeconfig_test.go
index 0f83f865a50c1..bd557c534e365 100644
--- a/pkg/promtail/scrapeconfig/scrapeconfig_test.go
+++ b/pkg/promtail/scrapeconfig/scrapeconfig_test.go
@@ -11,7 +11,7 @@ var testYaml = `
pipeline_stages:
- regex:
expr: "./*"
- - json:
+ - json:
timestamp:
source: time
format: RFC3339
@@ -19,7 +19,7 @@ pipeline_stages:
stream:
source: json_key_name.json_sub_key_name
output:
- source: log
+ source: log
job_name: kubernetes-pods-name
kubernetes_sd_configs:
- role: pod
diff --git a/pkg/promtail/targets/file/filetargetmanager.go b/pkg/promtail/targets/file/filetargetmanager.go
index e5ce2c66cf474..615e0f161bf53 100644
--- a/pkg/promtail/targets/file/filetargetmanager.go
+++ b/pkg/promtail/targets/file/filetargetmanager.go
@@ -87,30 +87,6 @@ func NewFileTargetManager(
return nil, err
}
- // Backwards compatibility with old EntryParser config
- if pipeline.Size() == 0 {
- switch cfg.EntryParser {
- case api.CRI:
- level.Warn(logger).Log("msg", "WARNING!!! entry_parser config is deprecated, please change to pipeline_stages")
- cri, err := stages.NewCRI(logger, registerer)
- if err != nil {
- return nil, err
- }
- pipeline.AddStage(cri)
- case api.Docker:
- level.Warn(logger).Log("msg", "WARNING!!! entry_parser config is deprecated, please change to pipeline_stages")
- docker, err := stages.NewDocker(logger, registerer)
- if err != nil {
- return nil, err
- }
- pipeline.AddStage(docker)
- case api.Raw:
- level.Warn(logger).Log("msg", "WARNING!!! entry_parser config is deprecated, please change to pipeline_stages")
- default:
-
- }
- }
-
// Add Source value to the static config target groups for unique identification
// within scrape pool. Also, default target label to localhost if target is not
// defined in promtail config.
|
fix
|
Remove deprecated `entry_parser` from scrapeconfig (#2752)
|
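For readers migrating off the removed setting: in config terms, `entry_parser: docker` becomes a `pipeline_stages` list containing a single `docker` stage, which is exactly what the new `DefaultScrapeConfig` in the diff injects. A minimal sketch of what the deleted Docker `EntryParser` did (and what the `docker` stage still does), namely unwrapping the per-line JSON object written by Docker's json-file log driver:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// dockerLine matches the per-line JSON object written by Docker's
// json-file log driver, as parsed by the removed EntryParser above.
type dockerLine struct {
	Log    string    `json:"log"`
	Stream string    `json:"stream"`
	Time   time.Time `json:"time"`
}

func main() {
	raw := `{"log":"some silly log message\n","stream":"stderr","time":"2019-03-04T21:37:44.789508817Z"}`
	var entry dockerLine
	if err := json.Unmarshal([]byte(raw), &entry); err != nil {
		panic(err)
	}
	// The stream becomes a label; time and log become timestamp and line.
	fmt.Printf("ts=%s stream=%s line=%q\n",
		entry.Time.Format(time.RFC3339Nano), entry.Stream, entry.Log)
}
```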
45bdebf8b22d52e73248ae63a07360cc7b8ac7dc
|
2019-08-16 00:57:00
|
sh0rez
|
chore(ci/cd): build containers using drone.io (#891)
| false
|
diff --git a/.circleci/config.yml b/.circleci/config.yml
index e0882e5ca3ddd..bd623920c2bda 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -112,7 +112,7 @@ workflows:
- run:
name: container
command: |
- make $APP-image
+ make $APP-image-cross
# builds and pushes a container
.publish: &publish
diff --git a/.drone/docker-manifest.tmpl b/.drone/docker-manifest.tmpl
new file mode 100644
index 0000000000000..7ebb8c942407a
--- /dev/null
+++ b/.drone/docker-manifest.tmpl
@@ -0,0 +1,22 @@
+image: grafanasaur/{{config.target}}:{{#if build.tag}}{{build.tag}}{{else}}{{build.branch}}-{{substr 0 7 build.commit}}{{/if}}
+{{#if build.tags}}
+tags:
+{{#each build.tags}}
+ - {{this}}
+{{/each}}
+{{/if}}
+manifests:
+ - image: grafanasaur/{{config.target}}:{{#if build.tag}}{{build.tag}}{{else}}{{build.branch}}-{{substr 0 7 build.commit}}{{/if}}-amd64
+ platform:
+ architecture: amd64
+ os: linux
+ - image: grafanasaur/{{config.target}}:{{#if build.tag}}{{build.tag}}{{else}}{{build.branch}}-{{substr 0 7 build.commit}}{{/if}}-arm64
+ platform:
+ architecture: arm64
+ os: linux
+ variant: v8
+ - image: grafanasaur/{{config.target}}:{{#if build.tag}}{{build.tag}}{{else}}{{build.branch}}-{{substr 0 7 build.commit}}{{/if}}-arm
+ platform:
+ architecture: arm
+ os: linux
+ variant: v7
diff --git a/.drone/drone.jsonnet b/.drone/drone.jsonnet
new file mode 100644
index 0000000000000..366d1f6c47069
--- /dev/null
+++ b/.drone/drone.jsonnet
@@ -0,0 +1,101 @@
+local apps = ['loki', 'loki-canary', 'promtail'];
+local archs = ['amd64', 'arm64', 'arm'];
+
+local condition(verb) = {
+ tagMaster: {
+ ref: {
+ [verb]:
+ [
+ 'refs/heads/master',
+ 'refs/tags/v*',
+ ],
+ },
+ },
+};
+
+local pipeline(name) = {
+ kind: 'pipeline',
+ name: name,
+ steps: [],
+};
+
+local docker(arch, app) = {
+ name: '%s-image' % if $.settings.dry_run then 'build-' + app else 'publish-' + app,
+ image: 'plugins/docker',
+ settings: {
+ repo: 'grafanasaur/%s' % app,
+ dockerfile: 'cmd/%s/Dockerfile' % app,
+ username: { from_secret: 'saur_username' },
+ password: { from_secret: 'saur_password' },
+ dry_run: false,
+ },
+};
+
+local multiarch_image(arch) = pipeline('docker-' + arch) {
+ platform: {
+ os: 'linux',
+ arch: arch,
+ },
+ steps: [{
+ name: 'image-tag',
+ image: 'alpine',
+ commands: [
+ 'apk add --no-cache bash git',
+ 'git fetch origin --tags',
+ 'echo $(./tools/image-tag)-%s > .tags' % arch,
+ ],
+ }] + [
+ // dry run for everything that is not tag or master
+ docker(arch, app) {
+ depends_on: ['image-tag'],
+ when: condition('exclude').tagMaster,
+ settings+: { dry_run: true },
+ }
+ for app in apps
+ ] + [
+ // publish for tag or master
+ docker(arch, app) {
+ depends_on: ['image-tag'],
+ when: condition('include').tagMaster,
+ }
+ for app in apps
+ ],
+};
+
+local manifest(apps) = pipeline('manifest') {
+ steps: [
+ {
+ name: 'manifest-' + app,
+ image: 'plugins/manifest',
+ settings: {
+ // the target parameter is abused for the app's name,
+ // as it is unused in spec mode. See docker-manifest.tmpl
+ target: app,
+ spec: '.drone/docker-manifest.tmpl',
+ ignore_missing: true,
+ username: { from_secret: 'docker_username' },
+ password: { from_secret: 'docker_password' },
+ },
+ depends_on: ['clone'],
+ }
+ for app in apps
+ ],
+} + {
+ depends_on: [
+ 'docker-%s' % arch
+ for arch in archs
+ ],
+};
+
+local drone = [
+ multiarch_image(arch)
+ for arch in archs
+] + [
+ manifest(['promtail', 'loki', 'loki-canary']) {
+ trigger: condition('include').tagMaster,
+ },
+];
+
+{
+ drone: std.manifestYamlStream(drone),
+}
diff --git a/.drone/drone.yml b/.drone/drone.yml
new file mode 100644
index 0000000000000..10c77f0230661
--- /dev/null
+++ b/.drone/drone.yml
@@ -0,0 +1,393 @@
+kind: pipeline
+name: docker-amd64
+platform:
+ arch: amd64
+ os: linux
+steps:
+- commands:
+ - apk add --no-cache bash git
+ - git fetch origin --tags
+ - echo $(./tools/image-tag)-amd64 > .tags
+ image: alpine
+ name: image-tag
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-loki-image
+ settings:
+ dockerfile: cmd/loki/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-loki-canary-image
+ settings:
+ dockerfile: cmd/loki-canary/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki-canary
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-promtail-image
+ settings:
+ dockerfile: cmd/promtail/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/promtail
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-loki-image
+ settings:
+ dockerfile: cmd/loki/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-loki-canary-image
+ settings:
+ dockerfile: cmd/loki-canary/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki-canary
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-promtail-image
+ settings:
+ dockerfile: cmd/promtail/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/promtail
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+---
+kind: pipeline
+name: docker-arm64
+platform:
+ arch: arm64
+ os: linux
+steps:
+- commands:
+ - apk add --no-cache bash git
+ - git fetch origin --tags
+ - echo $(./tools/image-tag)-arm64 > .tags
+ image: alpine
+ name: image-tag
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-loki-image
+ settings:
+ dockerfile: cmd/loki/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-loki-canary-image
+ settings:
+ dockerfile: cmd/loki-canary/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki-canary
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-promtail-image
+ settings:
+ dockerfile: cmd/promtail/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/promtail
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-loki-image
+ settings:
+ dockerfile: cmd/loki/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-loki-canary-image
+ settings:
+ dockerfile: cmd/loki-canary/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki-canary
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-promtail-image
+ settings:
+ dockerfile: cmd/promtail/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/promtail
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+---
+kind: pipeline
+name: docker-arm
+platform:
+ arch: arm
+ os: linux
+steps:
+- commands:
+ - apk add --no-cache bash git
+ - git fetch origin --tags
+ - echo $(./tools/image-tag)-arm > .tags
+ image: alpine
+ name: image-tag
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-loki-image
+ settings:
+ dockerfile: cmd/loki/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-loki-canary-image
+ settings:
+ dockerfile: cmd/loki-canary/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki-canary
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: build-promtail-image
+ settings:
+ dockerfile: cmd/promtail/Dockerfile
+ dry_run: true
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/promtail
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ exclude:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-loki-image
+ settings:
+ dockerfile: cmd/loki/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-loki-canary-image
+ settings:
+ dockerfile: cmd/loki-canary/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/loki-canary
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+- depends_on:
+ - image-tag
+ image: plugins/docker
+ name: publish-promtail-image
+ settings:
+ dockerfile: cmd/promtail/Dockerfile
+ dry_run: false
+ password:
+ from_secret: saur_password
+ repo: grafanasaur/promtail
+ username:
+ from_secret: saur_username
+ when:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
+---
+depends_on:
+- docker-amd64
+- docker-arm64
+- docker-arm
+kind: pipeline
+name: manifest
+steps:
+- depends_on:
+ - clone
+ image: plugins/manifest
+ name: manifest-promtail
+ settings:
+ ignore_missing: true
+ password:
+ from_secret: docker_password
+ spec: .drone/docker-manifest.tmpl
+ target: promtail
+ username:
+ from_secret: docker_username
+- depends_on:
+ - clone
+ image: plugins/manifest
+ name: manifest-loki
+ settings:
+ ignore_missing: true
+ password:
+ from_secret: docker_password
+ spec: .drone/docker-manifest.tmpl
+ target: loki
+ username:
+ from_secret: docker_username
+- depends_on:
+ - clone
+ image: plugins/manifest
+ name: manifest-loki-canary
+ settings:
+ ignore_missing: true
+ password:
+ from_secret: docker_password
+ spec: .drone/docker-manifest.tmpl
+ target: loki-canary
+ username:
+ from_secret: docker_username
+trigger:
+ ref:
+ include:
+ - refs/heads/master
+ - refs/tags/v*
diff --git a/Makefile b/Makefile
index 1b5c5bbf82cae..37c2855dd579b 100644
--- a/Makefile
+++ b/Makefile
@@ -394,7 +394,9 @@ endef
# promtail
promtail-image:
- $(SUDO) $(BUILD_OCI) -t $(IMAGE_PREFIX)/promtail:$(IMAGE_TAG) -f cmd/promtail/Dockerfile .
+ $(SUDO) docker build -t $(IMAGE_PREFIX)/promtail:$(IMAGE_TAG) -f cmd/promtail/Dockerfile .
+promtail-image-cross:
+ $(SUDO) $(BUILD_OCI) -t $(IMAGE_PREFIX)/promtail:$(IMAGE_TAG) -f cmd/promtail/Dockerfile.cross .
promtail-debug-image: OCI_PLATFORMS=
promtail-debug-image:
@@ -405,7 +407,9 @@ promtail-push: promtail-image
# loki
loki-image:
- $(SUDO) $(BUILD_OCI) -t $(IMAGE_PREFIX)/loki:$(IMAGE_TAG) -f cmd/loki/Dockerfile .
+ $(SUDO) docker build -t $(IMAGE_PREFIX)/loki:$(IMAGE_TAG) -f cmd/loki/Dockerfile .
+loki-image-cross:
+ $(SUDO) $(BUILD_OCI) -t $(IMAGE_PREFIX)/loki:$(IMAGE_TAG) -f cmd/loki/Dockerfile.cross .
loki-debug-image: OCI_PLATFORMS=
loki-debug-image:
@@ -416,7 +420,9 @@ loki-push: loki-image
# loki-canary
loki-canary-image:
- $(SUDO) $(BUILD_OCI) -t $(IMAGE_PREFIX)/loki-canary:$(IMAGE_TAG) -f cmd/loki-canary/Dockerfile .
+ $(SUDO) docker build -t $(IMAGE_PREFIX)/loki-canary:$(IMAGE_TAG) -f cmd/loki-canary/Dockerfile .
+loki-canary-image-cross:
+ $(SUDO) $(BUILD_OCI) -t $(IMAGE_PREFIX)/loki-canary:$(IMAGE_TAG) -f cmd/loki-canary/Dockerfile.cross .
loki-canary-push: loki-canary-image
$(SUDO) $(PUSH_OCI) $(IMAGE_PREFIX)/loki-canary:$(IMAGE_TAG)
@@ -432,3 +438,7 @@ build-image:
benchmark-store:
go run ./pkg/storage/hack/main.go
go test ./pkg/storage/ -bench=. -benchmem -memprofile memprofile.out -cpuprofile cpuprofile.out
+
+# regenerate drone yaml
+drone:
+ jsonnet .drone/drone.jsonnet | jq .drone -r | yq -y . > .drone/drone.yml
diff --git a/cmd/loki-canary/Dockerfile b/cmd/loki-canary/Dockerfile
index f769def8895ea..37ec2ac7dace9 100644
--- a/cmd/loki-canary/Dockerfile
+++ b/cmd/loki-canary/Dockerfile
@@ -1,16 +1,7 @@
-ARG BUILD_IMAGE=grafana/loki-build-image:latest
-# Directories in this file are referenced from the root of the project not this folder
-# This file is intended to be called from the root like so:
-# docker build -t grafana/promtail -f cmd/promtail/Dockerfile .
-FROM golang:1.11.4-alpine as goenv
-RUN go env GOARCH > /goarch && \
- go env GOARM > /goarm
-
-FROM --platform=linux/amd64 $BUILD_IMAGE as build
-COPY --from=goenv /goarch /goarm /
+FROM golang:1.12 as build
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false loki-canary
+RUN make clean && make BUILD_IN_CONTAINER=false loki-canary
FROM alpine:3.9
RUN apk add --update --no-cache ca-certificates
diff --git a/cmd/loki-canary/Dockerfile.cross b/cmd/loki-canary/Dockerfile.cross
new file mode 100644
index 0000000000000..f769def8895ea
--- /dev/null
+++ b/cmd/loki-canary/Dockerfile.cross
@@ -0,0 +1,18 @@
+ARG BUILD_IMAGE=grafana/loki-build-image:latest
+# Directories in this file are referenced from the root of the project not this folder
+# This file is intended to be called from the root like so:
+# docker build -t grafana/promtail -f cmd/promtail/Dockerfile .
+FROM golang:1.11.4-alpine as goenv
+RUN go env GOARCH > /goarch && \
+ go env GOARM > /goarm
+
+FROM --platform=linux/amd64 $BUILD_IMAGE as build
+COPY --from=goenv /goarch /goarm /
+COPY . /go/src/github.com/grafana/loki
+WORKDIR /go/src/github.com/grafana/loki
+RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false loki-canary
+
+FROM alpine:3.9
+RUN apk add --update --no-cache ca-certificates
+COPY --from=build /go/src/github.com/grafana/loki/cmd/loki-canary/loki-canary /usr/bin/loki-canary
+ENTRYPOINT [ "/usr/bin/loki-canary" ]
diff --git a/cmd/loki/Dockerfile b/cmd/loki/Dockerfile
index 213275263cb51..7f556e3f70f66 100644
--- a/cmd/loki/Dockerfile
+++ b/cmd/loki/Dockerfile
@@ -1,16 +1,7 @@
-ARG BUILD_IMAGE=grafana/loki-build-image:latest
-# Directories in this file are referenced from the root of the project not this folder
-# This file is intended to be called from the root like so:
-# docker build -t grafana/loki -f cmd/loki/Dockerfile .
-FROM golang:1.11.4-alpine as goenv
-RUN go env GOARCH > /goarch && \
- go env GOARM > /goarm
-
-FROM --platform=linux/amd64 $BUILD_IMAGE as build
-COPY --from=goenv /goarch /goarm /
+FROM golang:1.12 as build
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false loki
+RUN make clean && make BUILD_IN_CONTAINER=false loki
FROM alpine:3.9
RUN apk add --update --no-cache ca-certificates
diff --git a/cmd/loki/Dockerfile.cross b/cmd/loki/Dockerfile.cross
new file mode 100644
index 0000000000000..213275263cb51
--- /dev/null
+++ b/cmd/loki/Dockerfile.cross
@@ -0,0 +1,21 @@
+ARG BUILD_IMAGE=grafana/loki-build-image:latest
+# Directories in this file are referenced from the root of the project not this folder
+# This file is intended to be called from the root like so:
+# docker build -t grafana/loki -f cmd/loki/Dockerfile .
+FROM golang:1.11.4-alpine as goenv
+RUN go env GOARCH > /goarch && \
+ go env GOARM > /goarm
+
+FROM --platform=linux/amd64 $BUILD_IMAGE as build
+COPY --from=goenv /goarch /goarm /
+COPY . /go/src/github.com/grafana/loki
+WORKDIR /go/src/github.com/grafana/loki
+RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false loki
+
+FROM alpine:3.9
+RUN apk add --update --no-cache ca-certificates
+COPY --from=build /go/src/github.com/grafana/loki/cmd/loki/loki /usr/bin/loki
+COPY cmd/loki/loki-local-config.yaml /etc/loki/local-config.yaml
+EXPOSE 80
+ENTRYPOINT [ "/usr/bin/loki" ]
+CMD ["-config.file=/etc/loki/local-config.yaml"]
diff --git a/cmd/promtail/Dockerfile b/cmd/promtail/Dockerfile
index 606343265558c..c939a3221d9d8 100644
--- a/cmd/promtail/Dockerfile
+++ b/cmd/promtail/Dockerfile
@@ -1,26 +1,17 @@
-ARG BUILD_IMAGE=grafana/loki-build-image:latest
-# Directories in this file are referenced from the root of the project not this folder
-# This file is intended to be called from the root like so:
-# docker build -t grafana/promtail -f cmd/promtail/Dockerfile .
-FROM golang:1.11.4-alpine as goenv
-RUN go env GOARCH > /goarch && \
- go env GOARM > /goarm
-
-FROM --platform=linux/amd64 $BUILD_IMAGE as build
-COPY --from=goenv /goarch /goarm /
+FROM golang:1.12 as build
COPY . /go/src/github.com/grafana/loki
WORKDIR /go/src/github.com/grafana/loki
-RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false promtail
+RUN apt-get update && apt-get install -qy libsystemd-dev
+RUN make clean && make BUILD_IN_CONTAINER=false promtail
# Promtail requires debian as the base image to support systemd journal reading
FROM debian:stretch-slim
# tzdata required for the timestamp stage to work
RUN apt-get update && \
- apt-get install -qy \
- tzdata ca-certificates libsystemd-dev && \
- rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+ apt-get install -qy \
+ tzdata ca-certificates libsystemd-dev && \
+ rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
COPY --from=build /go/src/github.com/grafana/loki/cmd/promtail/promtail /usr/bin/promtail
COPY cmd/promtail/promtail-local-config.yaml /etc/promtail/local-config.yaml
COPY cmd/promtail/promtail-docker-config.yaml /etc/promtail/docker-config.yaml
ENTRYPOINT ["/usr/bin/promtail"]
-
diff --git a/cmd/promtail/Dockerfile.cross b/cmd/promtail/Dockerfile.cross
new file mode 100644
index 0000000000000..606343265558c
--- /dev/null
+++ b/cmd/promtail/Dockerfile.cross
@@ -0,0 +1,26 @@
+ARG BUILD_IMAGE=grafana/loki-build-image:latest
+# Directories in this file are referenced from the root of the project not this folder
+# This file is intended to be called from the root like so:
+# docker build -t grafana/promtail -f cmd/promtail/Dockerfile .
+FROM golang:1.11.4-alpine as goenv
+RUN go env GOARCH > /goarch && \
+ go env GOARM > /goarm
+
+FROM --platform=linux/amd64 $BUILD_IMAGE as build
+COPY --from=goenv /goarch /goarm /
+COPY . /go/src/github.com/grafana/loki
+WORKDIR /go/src/github.com/grafana/loki
+RUN make clean && GOARCH=$(cat /goarch) GOARM=$(cat /goarm) make BUILD_IN_CONTAINER=false promtail
+
+# Promtail requires debian as the base image to support systemd journal reading
+FROM debian:stretch-slim
+# tzdata required for the timestamp stage to work
+RUN apt-get update && \
+ apt-get install -qy \
+ tzdata ca-certificates libsystemd-dev && \
+ rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
+COPY --from=build /go/src/github.com/grafana/loki/cmd/promtail/promtail /usr/bin/promtail
+COPY cmd/promtail/promtail-local-config.yaml /etc/promtail/local-config.yaml
+COPY cmd/promtail/promtail-docker-config.yaml /etc/promtail/docker-config.yaml
+ENTRYPOINT ["/usr/bin/promtail"]
+
diff --git a/tools/image-tag b/tools/image-tag
index c8db5f11d5531..40a63b94a2927 100755
--- a/tools/image-tag
+++ b/tools/image-tag
@@ -8,6 +8,6 @@ WIP=$(git diff --quiet || echo '-WIP')
BRANCH=$(git rev-parse --abbrev-ref HEAD | sed s#/#-#g)
SHA=$(git rev-parse --short HEAD)
-# If this is a tag, use it, otherwise branch-hash
+# If this is a tag, use it, otherwise branch-hash
TAG=$(git describe --exact-match 2> /dev/null || echo "${BRANCH}-${SHA}")
echo ${TAG}${WIP}
|
chore
|
build containers using drone.io (#891)
|
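The generated pipelines above fan out one Docker build per architecture, then the `manifest` pipeline stitches the `-amd64`, `-arm64`, and `-arm` images under a single tag; builds on refs other than `refs/heads/master` and `refs/tags/v*` run with `dry_run: true` and only those two ref patterns publish. Each arch pipeline first writes an image tag via `tools/image-tag`, suffixed with the arch. A rough sketch of that tag logic, based on the script shown in the diff (the real script also appends a `-WIP` suffix for a dirty tree, omitted here):

```go
package main

import "fmt"

// imageTag mirrors tools/image-tag as shown in the diff: use an exact git
// tag when one points at HEAD, otherwise fall back to <branch>-<short sha>,
// then append the architecture suffix the drone "image-tag" step adds.
func imageTag(gitTag, branch, shortSHA, arch string) string {
	base := gitTag
	if base == "" {
		base = fmt.Sprintf("%s-%s", branch, shortSHA)
	}
	return fmt.Sprintf("%s-%s", base, arch)
}

func main() {
	fmt.Println(imageTag("", "master", "45bdebf", "arm64"))       // master-45bdebf-arm64
	fmt.Println(imageTag("v0.3.0", "master", "45bdebf", "amd64")) // v0.3.0-amd64
}
```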
8322e518e68de286b2bc58cf15ea9fe947eeec86
|
2024-09-23 19:01:40
|
Paul Rogers
|
fix: Add additional validation for timeout while retrieving headers (#14217)
| false
|
diff --git a/pkg/storage/chunk/client/gcp/gcs_object_client.go b/pkg/storage/chunk/client/gcp/gcs_object_client.go
index d1289a61e7715..69be4e2acf372 100644
--- a/pkg/storage/chunk/client/gcp/gcs_object_client.go
+++ b/pkg/storage/chunk/client/gcp/gcs_object_client.go
@@ -7,6 +7,7 @@ import (
"io"
"net"
"net/http"
+ "strings"
"time"
"cloud.google.com/go/storage"
@@ -269,7 +270,9 @@ func (s *GCSObjectClient) IsStorageTimeoutErr(err error) bool {
// TODO(dannyk): move these out to be generic
// context errors are all client-side
if isContextErr(err) {
- return false
+ // Go 1.23 changed the type of the error returned by the http client when a timeout occurs
+ // while waiting for headers. This is a server side timeout.
+ return strings.Contains(err.Error(), "Client.Timeout exceeded while awaiting header")
}
// connection misconfiguration, or writing on a closed connection
|
fix
|
Add additional validation for timeout while retrieving headers (#14217)
|
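Per the code comment in the change above, before Go 1.23 a timeout while awaiting response headers did not surface as a context error, so treating every context error as client-side was safe; afterwards it does, and the fix string-matches the net/http timeout marker to reclassify that case as a server-side storage timeout. A condensed sketch of the adjusted branch, assuming `isContextErr` amounts to an `errors.Is` test against `context.Canceled`/`context.DeadlineExceeded`:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"strings"
)

// isStorageTimeout condenses the fixed branch: context errors are normally
// client-side, unless the message carries net/http's "Client.Timeout
// exceeded while awaiting headers" marker, meaning the server never
// returned response headers in time.
func isStorageTimeout(err error) bool {
	if errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) {
		return strings.Contains(err.Error(), "Client.Timeout exceeded while awaiting header")
	}
	return false // other branches of the real check are elided in this sketch
}

func main() {
	timeout := fmt.Errorf("Get %q: %w (Client.Timeout exceeded while awaiting headers)",
		"https://storage.googleapis.com/b/o", context.DeadlineExceeded)
	fmt.Println(isStorageTimeout(timeout))          // true
	fmt.Println(isStorageTimeout(context.Canceled)) // false
}
```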
5c502e497fcf1e53030a885046098ee9981bf07b
|
2024-01-03 15:29:38
|
Christian Haudum
|
docs: Fix link to `ecs-task.json` file in ECS instructions (#11568)
| false
|
diff --git a/docs/sources/send-data/promtail/cloud/ecs/_index.md b/docs/sources/send-data/promtail/cloud/ecs/_index.md
index 90682f265ded5..e9eaca48f52ac 100644
--- a/docs/sources/send-data/promtail/cloud/ecs/_index.md
+++ b/docs/sources/send-data/promtail/cloud/ecs/_index.md
@@ -88,13 +88,13 @@ Our [task definition][task] will be made of two containers, the [Firelens][Firel
Let's download the task definition, we'll go through the most important parts.
```bash
-curl https://raw.githubusercontent.com/grafana/loki/main/docs/sources/clients/aws/ecs/ecs-task.json > ecs-task.json
+curl https://raw.githubusercontent.com/grafana/loki/main/docs/sources/send-data/promtail/cloud/ecs/ecs-task.json > ecs-task.json
```
```json
{
"essential": true,
- "image": "grafana/fluent-bit-plugin-loki:2.0.0-amd64",
+ "image": "grafana/fluent-bit-plugin-loki:2.9.3-amd64",
"name": "log_router",
"firelensConfiguration": {
"type": "fluentbit",
diff --git a/docs/sources/send-data/promtail/cloud/ecs/ecs-task.json b/docs/sources/send-data/promtail/cloud/ecs/ecs-task.json
index d9f7b9b8d6708..dd2db2251825d 100644
--- a/docs/sources/send-data/promtail/cloud/ecs/ecs-task.json
+++ b/docs/sources/send-data/promtail/cloud/ecs/ecs-task.json
@@ -2,7 +2,7 @@
"containerDefinitions": [
{
"essential": true,
- "image": "grafana/fluent-bit-plugin-loki:1.6.0-amd64",
+ "image": "grafana/fluent-bit-plugin-loki:2.9.3-amd64",
"name": "log_router",
"firelensConfiguration": {
"type": "fluentbit",
|
docs
|
Fix link to `ecs-task.json` file in ECS instructions (#11568)
|
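Besides correcting the raw.githubusercontent.com path, the change bumps the Firelens sidecar image to 2.9.3. For orientation, an illustrative Go rendering of the `log_router` container fragment the guide downloads, limited to the fields visible in the diff (the real ecs-task.json carries more of the ECS task schema):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// containerDef covers only the fields shown in the diff above.
type containerDef struct {
	Essential             bool   `json:"essential"`
	Image                 string `json:"image"`
	Name                  string `json:"name"`
	FirelensConfiguration struct {
		Type string `json:"type"`
	} `json:"firelensConfiguration"`
}

func main() {
	def := containerDef{
		Essential: true,
		Image:     "grafana/fluent-bit-plugin-loki:2.9.3-amd64",
		Name:      "log_router",
	}
	def.FirelensConfiguration.Type = "fluentbit"
	out, _ := json.MarshalIndent(def, "", "  ")
	fmt.Println(string(out))
}
```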
3b0502ddd314d885269ef3da24751ca957038998
|
2024-06-27 00:00:16
|
Christian Haudum
|
chore(blooms): Various minor code cleanups (#13332)
| false
|
diff --git a/pkg/bloomgateway/client.go b/pkg/bloomgateway/client.go
index 8a01514cdf2f1..1eaa1bb328834 100644
--- a/pkg/bloomgateway/client.go
+++ b/pkg/bloomgateway/client.go
@@ -4,7 +4,6 @@ import (
"context"
"flag"
"io"
- "math"
"sort"
"github.com/go-kit/log"
@@ -208,7 +207,6 @@ func (c *GatewayClient) FilterChunks(ctx context.Context, _ string, interval blo
return nil, nil
}
- firstFp, lastFp := uint64(math.MaxUint64), uint64(0)
pos := make(map[string]int)
servers := make([]addrWithGroups, 0, len(blocks))
for _, blockWithSeries := range blocks {
@@ -217,15 +215,6 @@ func (c *GatewayClient) FilterChunks(ctx context.Context, _ string, interval blo
return nil, errors.Wrapf(err, "server address for block: %s", blockWithSeries.block)
}
- // min/max fingerprint needed for the cache locality score
- first, last := getFirstLast(blockWithSeries.series)
- if first.Fingerprint < firstFp {
- firstFp = first.Fingerprint
- }
- if last.Fingerprint > lastFp {
- lastFp = last.Fingerprint
- }
-
if idx, found := pos[addr]; found {
servers[idx].groups = append(servers[idx].groups, blockWithSeries.series...)
servers[idx].blocks = append(servers[idx].blocks, blockWithSeries.block.String())
diff --git a/pkg/bloomgateway/client_test.go b/pkg/bloomgateway/client_test.go
index d46d881078dae..8a22edfcb9789 100644
--- a/pkg/bloomgateway/client_test.go
+++ b/pkg/bloomgateway/client_test.go
@@ -46,17 +46,17 @@ func shortRef(f, t model.Time, c uint32) *logproto.ShortRef {
func TestGatewayClient_MergeSeries(t *testing.T) {
inputs := [][]*logproto.GroupedChunkRefs{
- // response 1
+ // response 1 -- sorted
{
{Fingerprint: 0x00, Refs: []*logproto.ShortRef{shortRef(0, 1, 1), shortRef(1, 2, 2)}}, // not overlapping
{Fingerprint: 0x01, Refs: []*logproto.ShortRef{shortRef(0, 1, 3), shortRef(1, 2, 4)}}, // fully overlapping chunks
{Fingerprint: 0x02, Refs: []*logproto.ShortRef{shortRef(0, 1, 5), shortRef(1, 2, 6)}}, // partially overlapping chunks
},
- // response 2
+ // response 2 -- not sorted
{
+ {Fingerprint: 0x03, Refs: []*logproto.ShortRef{shortRef(0, 1, 8), shortRef(1, 2, 9)}}, // not overlapping
{Fingerprint: 0x01, Refs: []*logproto.ShortRef{shortRef(0, 1, 3), shortRef(1, 2, 4)}}, // fully overlapping chunks
{Fingerprint: 0x02, Refs: []*logproto.ShortRef{shortRef(1, 2, 6), shortRef(2, 3, 7)}}, // partially overlapping chunks
- {Fingerprint: 0x03, Refs: []*logproto.ShortRef{shortRef(0, 1, 8), shortRef(1, 2, 9)}}, // not overlapping
},
}
diff --git a/pkg/bloomgateway/multiplexing.go b/pkg/bloomgateway/multiplexing.go
index 08afeffcbf70c..3520d7b18057e 100644
--- a/pkg/bloomgateway/multiplexing.go
+++ b/pkg/bloomgateway/multiplexing.go
@@ -127,7 +127,7 @@ func (t Task) Copy(series []*logproto.GroupedChunkRefs) Task {
interval: t.interval,
table: t.table,
ctx: t.ctx,
- done: make(chan struct{}),
+ done: t.done,
}
}
diff --git a/pkg/bloomgateway/processor.go b/pkg/bloomgateway/processor.go
index 7b1285891a537..b0d4f57ca5c15 100644
--- a/pkg/bloomgateway/processor.go
+++ b/pkg/bloomgateway/processor.go
@@ -2,7 +2,6 @@ package bloomgateway
import (
"context"
- "math"
"time"
"github.com/go-kit/log"
@@ -35,21 +34,12 @@ type processor struct {
metrics *workerMetrics
}
-func (p *processor) run(ctx context.Context, tasks []Task) error {
- return p.runWithBounds(ctx, tasks, v1.MultiFingerprintBounds{{Min: 0, Max: math.MaxUint64}})
-}
-
-func (p *processor) runWithBounds(ctx context.Context, tasks []Task, bounds v1.MultiFingerprintBounds) error {
+func (p *processor) processTasks(ctx context.Context, tasks []Task) error {
tenant := tasks[0].tenant
- level.Info(p.logger).Log(
- "msg", "process tasks with bounds",
- "tenant", tenant,
- "tasks", len(tasks),
- "bounds", len(bounds),
- )
+ level.Info(p.logger).Log("msg", "process tasks", "tenant", tenant, "tasks", len(tasks))
for ts, tasks := range group(tasks, func(t Task) config.DayTime { return t.table }) {
- err := p.processTasks(ctx, tenant, ts, bounds, tasks)
+ err := p.processTasksForDay(ctx, tenant, ts, tasks)
if err != nil {
for _, task := range tasks {
task.CloseWithError(err)
@@ -63,7 +53,7 @@ func (p *processor) runWithBounds(ctx context.Context, tasks []Task, bounds v1.M
return nil
}
-func (p *processor) processTasks(ctx context.Context, tenant string, day config.DayTime, _ v1.MultiFingerprintBounds, tasks []Task) error {
+func (p *processor) processTasksForDay(ctx context.Context, tenant string, day config.DayTime, tasks []Task) error {
level.Info(p.logger).Log("msg", "process tasks for day", "tenant", tenant, "tasks", len(tasks), "day", day.String())
var duration time.Duration
@@ -72,10 +62,10 @@ func (p *processor) processTasks(ctx context.Context, tenant string, day config.
blocksRefs = append(blocksRefs, task.blocks...)
}
- data := partitionTasks(tasks, blocksRefs)
+ tasksByBlock := partitionTasksByBlock(tasks, blocksRefs)
- refs := make([]bloomshipper.BlockRef, 0, len(data))
- for _, block := range data {
+ refs := make([]bloomshipper.BlockRef, 0, len(tasksByBlock))
+ for _, block := range tasksByBlock {
refs = append(refs, block.ref)
}
@@ -103,7 +93,7 @@ func (p *processor) processTasks(ctx context.Context, tenant string, day config.
}
startProcess := time.Now()
- res := p.processBlocks(ctx, bqs, data)
+ res := p.processBlocks(ctx, bqs, tasksByBlock)
duration = time.Since(startProcess)
for _, t := range tasks {
diff --git a/pkg/bloomgateway/processor_test.go b/pkg/bloomgateway/processor_test.go
index 0a2fd804ead78..8ce78e7bdb76c 100644
--- a/pkg/bloomgateway/processor_test.go
+++ b/pkg/bloomgateway/processor_test.go
@@ -166,7 +166,7 @@ func TestProcessor(t *testing.T) {
}(tasks[i])
}
- err := p.run(ctx, tasks)
+ err := p.processTasks(ctx, tasks)
wg.Wait()
require.NoError(t, err)
require.Equal(t, int64(0), results.Load())
@@ -218,7 +218,7 @@ func TestProcessor(t *testing.T) {
}(tasks[i])
}
- err := p.run(ctx, tasks)
+ err := p.processTasks(ctx, tasks)
wg.Wait()
require.NoError(t, err)
require.Equal(t, int64(len(swb.series)), results.Load())
@@ -267,7 +267,7 @@ func TestProcessor(t *testing.T) {
}(tasks[i])
}
- err := p.run(ctx, tasks)
+ err := p.processTasks(ctx, tasks)
wg.Wait()
require.Errorf(t, err, "store failed")
require.Equal(t, int64(0), results.Load())
diff --git a/pkg/bloomgateway/util.go b/pkg/bloomgateway/util.go
index 5f115ba75cfc8..f568c13f588d6 100644
--- a/pkg/bloomgateway/util.go
+++ b/pkg/bloomgateway/util.go
@@ -48,7 +48,7 @@ type blockWithTasks struct {
tasks []Task
}
-func partitionTasks(tasks []Task, blocks []bloomshipper.BlockRef) []blockWithTasks {
+func partitionTasksByBlock(tasks []Task, blocks []bloomshipper.BlockRef) []blockWithTasks {
result := make([]blockWithTasks, 0, len(blocks))
for _, block := range blocks {
diff --git a/pkg/bloomgateway/util_test.go b/pkg/bloomgateway/util_test.go
index f6ae68cf2aa2b..d52246c1f296c 100644
--- a/pkg/bloomgateway/util_test.go
+++ b/pkg/bloomgateway/util_test.go
@@ -73,7 +73,7 @@ func mkBlockRef(minFp, maxFp uint64) bloomshipper.BlockRef {
}
}
-func TestPartitionTasks(t *testing.T) {
+func TestPartitionTasksByBlock(t *testing.T) {
t.Run("consecutive block ranges", func(t *testing.T) {
bounds := []bloomshipper.BlockRef{
@@ -93,7 +93,7 @@ func TestPartitionTasks(t *testing.T) {
tasks[i%nTasks].series = append(tasks[i%nTasks].series, &logproto.GroupedChunkRefs{Fingerprint: uint64(i)})
}
- results := partitionTasks(tasks, bounds)
+ results := partitionTasksByBlock(tasks, bounds)
require.Equal(t, 3, len(results)) // ensure we only return bounds in range
actualFingerprints := make([]*logproto.GroupedChunkRefs, 0, nSeries)
@@ -128,7 +128,7 @@ func TestPartitionTasks(t *testing.T) {
task.series = append(task.series, &logproto.GroupedChunkRefs{Fingerprint: uint64(i)})
}
- results := partitionTasks([]Task{task}, bounds)
+ results := partitionTasksByBlock([]Task{task}, bounds)
require.Equal(t, 3, len(results)) // ensure we only return bounds in range
for _, res := range results {
// ensure we have the right number of tasks per bound
@@ -153,9 +153,38 @@ func TestPartitionTasks(t *testing.T) {
},
}
- results := partitionTasks(tasks, bounds)
+ results := partitionTasksByBlock(tasks, bounds)
require.Len(t, results, 0)
})
+
+ t.Run("overlapping and unsorted block ranges", func(t *testing.T) {
+ bounds := []bloomshipper.BlockRef{
+ mkBlockRef(5, 14),
+ mkBlockRef(0, 9),
+ mkBlockRef(10, 19),
+ }
+
+ tasks := []Task{
+ {
+ series: []*logproto.GroupedChunkRefs{
+ {Fingerprint: 6},
+ },
+ },
+ {
+ series: []*logproto.GroupedChunkRefs{
+ {Fingerprint: 12},
+ },
+ },
+ }
+
+ expected := []blockWithTasks{
+ {ref: bounds[0], tasks: tasks}, // both tasks
+ {ref: bounds[1], tasks: tasks[:1]}, // first task
+ {ref: bounds[2], tasks: tasks[1:]}, // second task
+ }
+ results := partitionTasksByBlock(tasks, bounds)
+ require.Equal(t, expected, results)
+ })
}
func TestPartitionRequest(t *testing.T) {
diff --git a/pkg/bloomgateway/worker.go b/pkg/bloomgateway/worker.go
index 6b234db27189c..0fa154c3b413f 100644
--- a/pkg/bloomgateway/worker.go
+++ b/pkg/bloomgateway/worker.go
@@ -8,11 +8,9 @@ import (
"github.com/go-kit/log/level"
"github.com/grafana/dskit/services"
"github.com/pkg/errors"
- "github.com/prometheus/common/model"
"go.uber.org/atomic"
"github.com/grafana/loki/v3/pkg/queue"
- v1 "github.com/grafana/loki/v3/pkg/storage/bloom/v1"
"github.com/grafana/loki/v3/pkg/storage/stores/shipper/bloomshipper"
)
@@ -92,7 +90,6 @@ func (w *worker) running(_ context.Context) error {
w.metrics.tasksDequeued.WithLabelValues(w.id, labelSuccess).Add(float64(len(items)))
tasks := make([]Task, 0, len(items))
- var mb v1.MultiFingerprintBounds
for _, item := range items {
task, ok := item.(Task)
if !ok {
@@ -104,13 +101,10 @@ func (w *worker) running(_ context.Context) error {
w.metrics.queueDuration.WithLabelValues(w.id).Observe(time.Since(task.enqueueTime).Seconds())
FromContext(task.ctx).AddQueueTime(time.Since(task.enqueueTime))
tasks = append(tasks, task)
-
- first, last := getFirstLast(task.series)
- mb = mb.Union(v1.NewBounds(model.Fingerprint(first.Fingerprint), model.Fingerprint(last.Fingerprint)))
}
start = time.Now()
- err = p.runWithBounds(taskCtx, tasks, mb)
+ err = p.processTasks(taskCtx, tasks)
if err != nil {
w.metrics.processDuration.WithLabelValues(w.id, labelFailure).Observe(time.Since(start).Seconds())
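For illustration, the overlap semantics exercised by the new `overlapping and unsorted block ranges` test can be sketched in isolation — the `Task` and `BlockRef` types below are simplified stand-ins, not Loki's actual bloomgateway/bloomshipper types:

```go
package main

import "fmt"

// Simplified stand-ins for the real bloomgateway/bloomshipper types.
type BlockRef struct{ MinFp, MaxFp uint64 }
type Task struct{ Fingerprints []uint64 }

type blockWithTasks struct {
	ref   BlockRef
	tasks []Task
}

// partitionTasksByBlock assigns each task to every block whose fingerprint
// range contains at least one of the task's series. Blocks may overlap and
// need not be sorted, which is exactly what the new test asserts.
func partitionTasksByBlock(tasks []Task, blocks []BlockRef) []blockWithTasks {
	result := make([]blockWithTasks, 0, len(blocks))
	for _, block := range blocks {
		bwt := blockWithTasks{ref: block}
		for _, task := range tasks {
			for _, fp := range task.Fingerprints {
				if fp >= block.MinFp && fp <= block.MaxFp {
					bwt.tasks = append(bwt.tasks, task)
					break
				}
			}
		}
		if len(bwt.tasks) > 0 {
			result = append(result, bwt)
		}
	}
	return result
}

func main() {
	blocks := []BlockRef{{5, 14}, {0, 9}, {10, 19}}
	tasks := []Task{{Fingerprints: []uint64{6}}, {Fingerprints: []uint64{12}}}
	for _, r := range partitionTasksByBlock(tasks, blocks) {
		// block {5 14} -> both tasks; {0 9} -> first; {10 19} -> second
		fmt.Printf("block %v -> %d task(s)\n", r.ref, len(r.tasks))
	}
}
```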
|
chore
|
Various minor code cleanups (#13332)
|
b30c0d31bf253d759834c0e7bd5584aa45e23089
|
2019-09-22 18:05:10
|
sh0rez
|
doc(production): replace ksonnet with Tanka (#1039)
| false
|
diff --git a/production/ksonnet/README.md b/production/ksonnet/README.md
index 66ef0ba09e6ce..be964c71956db 100644
--- a/production/ksonnet/README.md
+++ b/production/ksonnet/README.md
@@ -2,23 +2,21 @@
## Prerequisites
-Make sure you have the ksonnet v0.8.0:
+Make sure you have a recent version of [Tanka](https://github.com/grafana/tanka). Follow their [install instructions](https://tanka.dev/#getting-started) to do so. Make sure to install [jsonnet-bundler](https://github.com/jsonnet-bundler/jsonnet-bundler) as well.
-```
-$ brew install https://raw.githubusercontent.com/ksonnet/homebrew-tap/82ef24cb7b454d1857db40e38671426c18cd8820/ks.rb
-$ brew pin ks
-$ ks version
-ksonnet version: v0.8.0
-jsonnet version: v0.9.5
-client-go version: v1.6.8-beta.0+$Format:%h$
+```bash
+# Verify it works
+$ tk --version
+tk version v0.5.0
```
-In your config repo, if you don't have a ksonnet application, make a new one (will copy credentials from current context):
+In your config repo, if you don't yet have the directory structure of Tanka set up:
-```
-$ ks init <application name>
-$ cd <application name>
-$ ks env add loki --namespace=loki
+```bash
+# create a directory (any name works)
+$ mkdir config && cd config/
+$ tk init
+$ tk env add loki --namespace=loki
```
## Deploying Promtail to your cluster.
@@ -26,13 +24,11 @@ $ ks env add loki --namespace=loki
Grab the promtail module using jb:
```
-$ go get -u github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb
-$ jb init
$ jb install github.com/grafana/loki/production/ksonnet/promtail
```
Replace the contents of `environments/loki/main.jsonnet` with:
-```
+```jsonnet
local promtail = import 'promtail/promtail.libsonnet';
@@ -72,7 +68,7 @@ jb install github.com/grafana/loki/production/ksonnet/loki
Be sure to replace the username, password and the relevant htpasswd contents.
Replace the contents of `environments/loki/main.jsonnet` with:
-```
+```jsonnet
local gateway = import 'loki/gateway.libsonnet';
local loki = import 'loki/loki.libsonnet';
local promtail = import 'promtail/promtail.libsonnet';
@@ -97,5 +93,5 @@ loki + promtail + gateway {
```
Notice that `container_root_path` is your own data root for the Docker daemon; use `docker info | grep "Root Dir"` to get it.
-Do `ks show loki` to see the manifests being deployed to the cluster.
-Finally `ks apply loki` to deploy the server components to your cluster.
+Use `tk show environments/loki` to see the manifests being deployed to the cluster.
+Finally `tk apply environments/loki` will deploy the server components to your cluster.
|
doc
|
replace ksonnet with Tanka (#1039)
|
837cfc010c01ab370bb0562fc1b3e5d6eb52b128
|
2024-01-22 20:27:34
|
Bayan Taani
|
operator: Add a custom metric that collects Lokistacks requiring a schema upgrade (#11513)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 0400d952208dd..2f57c42a78d71 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,6 @@
## Main
+- [11513](https://github.com/grafana/loki/pull/11513) **btaani**: Add a custom metric that collects Lokistacks requiring a schema upgrade
- [11718](https://github.com/grafana/loki/pull/11718) **periklis**: Upgrade k8s.io, sigs.k8s.io and openshift deps
- [11671](https://github.com/grafana/loki/pull/11671) **JoaoBraveCoding**: Update mixins to fix structured metadata dashboards
- [11624](https://github.com/grafana/loki/pull/11624) **xperimental**: React to changes in ConfigMap used for storage CA
diff --git a/operator/docs/lokistack/sop.md b/operator/docs/lokistack/sop.md
index 8c437bd53b678..1ae656e1d5e85 100644
--- a/operator/docs/lokistack/sop.md
+++ b/operator/docs/lokistack/sop.md
@@ -308,3 +308,28 @@ The query queue is currently under high load.
### Steps
- Increase the number of queriers
+
+## Lokistack Storage Schema Warning
+
+### Impact
+
+The LokiStack warns on a newer object storage schema being available for configuration.
+
+### Summary
+
+The schema configuration does not contain the most recent schema version and needs an update.
+
+### Severity
+
+`Warning`
+
+### Access Required
+
+- Console access to the cluster
+- Edit access to the namespace where the LokiStack is deployed:
+ - OpenShift
+ - `openshift-logging` (LokiStack)
+
+### Steps
+
+- Add a new object storage schema V13 with a future EffectiveDate
\ No newline at end of file
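As a rough illustration of that remediation step, a schema list with an appended v13 entry might look like the following — the field names follow the LokiStack `spec.storage.schemas` shape referenced elsewhere in this change, and the dates are placeholders, not recommendations:

```yaml
# Hypothetical LokiStack storage schema list: keep the existing entry and
# append a v13 entry whose effectiveDate lies in the future.
spec:
  storage:
    schemas:
    - version: v12
      effectiveDate: "2022-06-01"
    - version: v13
      effectiveDate: "2024-10-15"   # placeholder future date
```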
diff --git a/operator/internal/handlers/lokistack_create_or_update.go b/operator/internal/handlers/lokistack_create_or_update.go
index b64713f2d0fda..2f78f75d02c5b 100644
--- a/operator/internal/handlers/lokistack_create_or_update.go
+++ b/operator/internal/handlers/lokistack_create_or_update.go
@@ -208,9 +208,9 @@ func CreateOrUpdateLokiStack(
return kverrors.New("failed to configure lokistack resources", "name", req.NamespacedName)
}
- // 1x.extra-small is used only for development, so the metrics will not
+ // 1x.demo is used only for development, so the metrics will not
// be collected.
- if opts.Stack.Size != lokiv1.SizeOneXExtraSmall && opts.Stack.Size != lokiv1.SizeOneXDemo {
+ if opts.Stack.Size != lokiv1.SizeOneXDemo {
metrics.Collect(&opts.Stack, opts.Name)
}
diff --git a/operator/internal/manifests/internal/alerts/prometheus-alerts.yaml b/operator/internal/manifests/internal/alerts/prometheus-alerts.yaml
index f378c49fd78c8..6d2d978843ddc 100644
--- a/operator/internal/manifests/internal/alerts/prometheus-alerts.yaml
+++ b/operator/internal/manifests/internal/alerts/prometheus-alerts.yaml
@@ -175,3 +175,13 @@ groups:
for: 15m
labels:
severity: warning
+ - alert: LokistackSchemaUpgradesRequired
+ annotations:
+ message: |-
+ Object storage schema needs upgrade.
+ summary: "The applied storage schema config is old and should be upgraded."
+ runbook_url: "[[ .RunbookURL ]]#Lokistack-Schema-Upgrades-Required"
+ expr: sum by(stack_id) (lokistack_warnings_count) > 0
+ labels:
+ severity: warning
+ resource: '{{ $labels.stack_id}}'
diff --git a/operator/internal/metrics/metrics.go b/operator/internal/metrics/metrics.go
index 3c994f13c61ef..fc87b76c6fa96 100644
--- a/operator/internal/metrics/metrics.go
+++ b/operator/internal/metrics/metrics.go
@@ -51,6 +51,14 @@ var (
},
[]string{"size", "stack_id"},
)
+
+ lokistackWarningsCount = prometheus.NewGaugeVec(
+ prometheus.GaugeOpts{
+ Name: "lokistack_warnings_count",
+ Help: "Counts the number of warnings set on a LokiStack.",
+ },
+ []string{"reason", "stack_id"},
+ )
)
// RegisterMetricCollectors registers the prometheus collectors with the k8s default metrics
@@ -60,6 +68,7 @@ func RegisterMetricCollectors() {
userDefinedLimitsMetric,
globalStreamLimitMetric,
averageTenantStreamLimitMetric,
+ lokistackWarningsCount,
}
for _, collector := range metricCollectors {
@@ -104,6 +113,17 @@ func Collect(spec *lokiv1.LokiStackSpec, stackName string) {
setGlobalStreamLimitMetric(size, stackName, globalRate)
setAverageTenantStreamLimitMetric(size, stackName, tenantRate)
}
+
+ if len(spec.Storage.Schemas) > 0 && spec.Storage.Schemas[len(spec.Storage.Schemas)-1].Version != lokiv1.ObjectStorageSchemaV13 {
+ setLokistackSchemaUpgradesRequired(stackName, true)
+ }
+}
+
+func setLokistackSchemaUpgradesRequired(identifier string, active bool) {
+ lokistackWarningsCount.With(prometheus.Labels{
+ "reason": string(lokiv1.ReasonStorageNeedsSchemaUpdate),
+ "stack_id": identifier,
+ }).Set(boolValue(active))
}
func setDeploymentMetric(size lokiv1.LokiStackSizeType, identifier string, active bool) {
|
operator
|
Add a custom metric that collects Lokistacks requiring a schema upgrade (#11513)
|
161a192aec9cfd22b307f0190ea12b7684375889
|
2024-11-25 15:48:51
|
George Robinson
|
fix: Use separate variable to track the consume offset (#15095)
| false
|
diff --git a/pkg/kafka/partition/reader_service.go b/pkg/kafka/partition/reader_service.go
index 7e2c3bc3a381c..d9ea75d38f02a 100644
--- a/pkg/kafka/partition/reader_service.go
+++ b/pkg/kafka/partition/reader_service.go
@@ -143,14 +143,16 @@ func (s *ReaderService) starting(ctx context.Context) error {
if lastCommittedOffset == int64(KafkaEndOffset) {
level.Warn(s.logger).Log("msg", fmt.Sprintf("no committed offset found, starting from %d", kafkaStartOffset))
- lastCommittedOffset = int64(KafkaStartOffset)
+ } else {
+ level.Debug(s.logger).Log("msg", "last committed offset", "offset", lastCommittedOffset)
}
+ consumeOffset := int64(kafkaStartOffset)
if lastCommittedOffset >= 0 {
- lastCommittedOffset++ // We want to begin to read from the next offset, but only if we've previously committed an offset.
+ // Read from the next offset.
+ consumeOffset = lastCommittedOffset + 1
}
-
- s.reader.SetOffsetForConsumption(lastCommittedOffset)
+ s.reader.SetOffsetForConsumption(consumeOffset)
if targetLag, maxLag := s.cfg.TargetConsumerLagAtStartup, s.cfg.MaxConsumerLagAtStartup; targetLag > 0 && maxLag > 0 {
consumer, err := s.consumerFactory(s.committer)
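The essence of the fix is deriving the consume position from the committed position instead of mutating it; a minimal sketch, with `kafkaStartOffset` standing in for the reader's sentinel constant:

```go
package main

import "fmt"

// kafkaStartOffset is a stand-in for Loki's sentinel meaning "start of the
// partition" (-2 is "earliest" in the Kafka protocol).
const kafkaStartOffset int64 = -2

// consumeOffsetFor derives where to resume consumption from the last
// committed offset, leaving the committed value itself untouched.
func consumeOffsetFor(lastCommittedOffset int64) int64 {
	consumeOffset := kafkaStartOffset
	if lastCommittedOffset >= 0 {
		// Resume from the record after the last committed one.
		consumeOffset = lastCommittedOffset + 1
	}
	return consumeOffset
}

func main() {
	fmt.Println(consumeOffsetFor(-2)) // no commit yet -> start offset (-2)
	fmt.Println(consumeOffsetFor(41)) // committed 41 -> resume at 42
}
```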
|
fix
|
Use separate variable to track the consume offset (#15095)
|
d4cae667ae84d0528bef68185f40d26141a029c2
|
2020-01-24 18:44:43
|
Peter Štibraný
|
loki: use new runtimeconfig package from Cortex (#1484)
| false
|
diff --git a/cmd/loki/main.go b/cmd/loki/main.go
index aa26dba3c5be2..4107548a9aadb 100644
--- a/cmd/loki/main.go
+++ b/cmd/loki/main.go
@@ -39,10 +39,7 @@ func main() {
// This global is set to the config passed into the last call to `NewOverrides`. If we don't
// call it atleast once, the defaults are set to an empty struct.
// We call it with the flag values so that the config file unmarshalling only overrides the values set in the config.
- if _, err := validation.NewOverrides(config.LimitsConfig); err != nil {
- level.Error(util.Logger).Log("msg", "setting up overrides", "error", err)
- os.Exit(1)
- }
+ validation.SetDefaultLimitsForYAMLUnmarshalling(config.LimitsConfig)
// Init the logger which will honor the log level set in config.Server
if reflect.DeepEqual(&config.Server.LogLevel, &logging.Level{}) {
diff --git a/docs/configuration/README.md b/docs/configuration/README.md
index 7cf7e31bd50b3..3efd126c7bae6 100644
--- a/docs/configuration/README.md
+++ b/docs/configuration/README.md
@@ -24,6 +24,7 @@ Configuration examples can be found in the [Configuration Examples](examples.md)
* [table_manager_config](#table_manager_config)
* [provision_config](#provision_config)
* [auto_scaling_config](#auto_scaling_config)
+* [Runtime Configuration file](#runtime-configuration-file)
## Configuration File Reference
@@ -88,6 +89,9 @@ Supported contents and default values of `loki.yaml`:
# Configures the table manager for retention
[table_manager: <table_manager_config>]
+
+# Configuration for the "runtime config" module, responsible for reloading the runtime configuration file.
+[runtime_config: <runtime_config>]
```
## server_config
@@ -799,10 +803,10 @@ logs in Loki.
# Maximum number of stream matchers per query.
[max_streams_matchers_per_query: <int> | default = 1000]
-# Filename of per-user overrides file
+# Feature renamed to 'runtime configuration', flag deprecated in favor of -runtime-config.file (runtime_config.file in YAML)
[per_tenant_override_config: <string>]
-# Period with which to reload the overrides file if configured.
+# Feature renamed to 'runtime configuration', flag deprecated in favor of -runtime-config.reload-period (runtime_config.period in YAML)
[per_tenant_override_period: <duration> | default = 10s]
```
@@ -906,3 +910,36 @@ The `auto_scaling_config` block configures autoscaling for DynamoDB.
# DynamoDB target ratio of consumed capacity to provisioned capacity.
[target: <float> | default = 80]
```
+
+## Runtime Configuration file
+
+Loki has a concept of a "runtime config" file: a file that is periodically reloaded while Loki is running. It is used by some Loki components to let operators change certain aspects of Loki's configuration without restarting it. The file is specified with the `-runtime-config.file=<filename>` flag, and the reload period (which defaults to 10 seconds) can be changed with the `-runtime-config.reload-period=<duration>` flag. Previously this mechanism was used only by limits overrides, and the flags were called `-limits.per-user-override-config=<filename>` and `-limits.per-user-override-period=10s` respectively. These are still used if `-runtime-config.file=<filename>` is not specified.
+
+At the moment, two components use runtime configuration: limits and the multi KV store.
+
+Options for runtime configuration reload can also be configured via YAML:
+
+```yaml
+# Configuration file to periodically check and reload.
+[file: <string> | default = empty]
+
+# How often to check the file.
+[period: <duration> | default = 10s]
+```
+
+Example runtime configuration file:
+
+```yaml
+overrides:
+ tenant1:
+ ingestion_rate_mb: 10
+ max_streams_per_user: 100000
+ max_chunks_per_query: 100000
+ tenant2:
+ max_streams_per_user: 1000000
+ max_chunks_per_query: 1000000
+
+multi_kv_config:
+ mirror-enabled: false
+ primary: consul
+```
diff --git a/go.mod b/go.mod
index 9d91bd8bc6a98..585d3cc2f247d 100644
--- a/go.mod
+++ b/go.mod
@@ -9,7 +9,7 @@ require (
github.com/containerd/containerd v1.3.2 // indirect
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448 // indirect
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e
- github.com/cortexproject/cortex v0.4.1-0.20191217132644-cd4009e2f8e7
+ github.com/cortexproject/cortex v0.4.1-0.20200122092731-ab3e8360fe30
github.com/davecgh/go-spew v1.1.1
github.com/docker/distribution v2.7.1+incompatible // indirect
github.com/docker/docker v0.7.3-0.20190817195342-4760db040282
@@ -20,11 +20,11 @@ require (
github.com/fluent/fluent-bit-go v0.0.0-20190925192703-ea13c021720c
github.com/frankban/quicktest v1.7.2 // indirect
github.com/go-kit/kit v0.9.0
- github.com/gogo/protobuf v1.3.0 // remember to update loki-build-image/Dockerfile too
+ github.com/gogo/protobuf v1.3.1 // remember to update loki-build-image/Dockerfile too
github.com/golang/snappy v0.0.1
github.com/gorilla/mux v1.7.1
github.com/gorilla/websocket v1.4.0
- github.com/grpc-ecosystem/grpc-gateway v1.9.6 // indirect
+ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.1-0.20191002090509-6af20e3a5340 // indirect
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645
github.com/hashicorp/golang-lru v0.5.3
github.com/hpcloud/tail v1.0.0
@@ -38,10 +38,10 @@ require (
github.com/opentracing/opentracing-go v1.1.0
github.com/pierrec/lz4 v2.3.1-0.20191115212037-9085dacd1e1e+incompatible
github.com/pkg/errors v0.8.1
- github.com/prometheus/client_golang v1.1.0
+ github.com/prometheus/client_golang v1.2.1
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
github.com/prometheus/common v0.7.0
- github.com/prometheus/prometheus v1.8.2-0.20190918104050-8744afdd1ea0
+ github.com/prometheus/prometheus v1.8.2-0.20191126064551-80ba03c67da1
github.com/shurcooL/httpfs v0.0.0-20190707220628-8d4bc4ba7749
github.com/shurcooL/vfsgen v0.0.0-20181202132449-6a9ea43bcacd
github.com/stretchr/testify v1.4.0
@@ -49,18 +49,13 @@ require (
github.com/ugorji/go v1.1.7 // indirect
github.com/weaveworks/common v0.0.0-20191103151037-0e7cefadc44f
go.etcd.io/etcd v0.0.0-20190815204525-8f85f0dc2607 // indirect
- go.opencensus.io v0.22.1 // indirect
- golang.org/x/net v0.0.0-20190923162816-aa69164e4478
- golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e // indirect
+ golang.org/x/net v0.0.0-20191112182307-2180aed22343
golang.org/x/sys v0.0.0-20191218084908-4a24b4065292 // indirect
- golang.org/x/tools v0.0.0-20190925134113-a044388aa56f // indirect
- google.golang.org/appengine v1.6.3 // indirect
- google.golang.org/genproto v0.0.0-20190916214212-f660b8655731 // indirect
google.golang.org/grpc v1.25.1
gopkg.in/alecthomas/kingpin.v2 v2.2.6
gopkg.in/fsnotify.v1 v1.4.7
- gopkg.in/yaml.v2 v2.2.2
- k8s.io/klog v0.4.0
+ gopkg.in/yaml.v2 v2.2.5
+ k8s.io/klog v1.0.0
)
replace github.com/hpcloud/tail => github.com/grafana/tail v0.0.0-20191024143944-0b54ddf21fe7
diff --git a/go.sum b/go.sum
index cd88772276f63..a9520a2ce3f2f 100644
--- a/go.sum
+++ b/go.sum
@@ -4,13 +4,32 @@ cloud.google.com/go v0.37.4/go.mod h1:NHPJ89PdicEuT9hdPXMROBD91xc5uRDxsMtSB16k7h
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1 h1:7gXaI3V/b4DRaK++rTqhRajcT7z8gtP0qKMZTXqlySM=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go v0.49.0 h1:CH+lkubJzcPYB1Ggupcq0+k8Ni2ILdG2lYjDIgavDBQ=
+cloud.google.com/go v0.49.0/go.mod h1:hGvAdzcWNbyuxS3nWhD7H2cIJxjRRTRLQVB0bdputVY=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/bigquery v1.3.0 h1:sAbMqjY1PEQKZBWfbu6Y6bsupJ9c4QdHnzg/VvYTLcE=
+cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
+cloud.google.com/go/bigtable v1.1.0 h1:+IakvK2mFz1FbfA9Ti0JoKRPiJkORngh9xhfMbVkJqw=
+cloud.google.com/go/bigtable v1.1.0/go.mod h1:B6ByKcIdYmhoyDzmOnQxyOhN6r05qnewYIxxG6L0/b4=
+cloud.google.com/go/datastore v1.0.0 h1:Kt+gOPPp2LEPWp8CSfxhsM8ik9CcyE/gYu+0r+RnZvM=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/pubsub v1.0.1 h1:W9tAK3E57P75u0XLLR82LZyw8VpAnhmyTOxW9qzmyj8=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+cloud.google.com/go/storage v1.3.0 h1:2Ze/3nQD5F+HfL0xOPM2EeawDWs+NPRtzgcre+17iZU=
+cloud.google.com/go/storage v1.3.0/go.mod h1:9IAwXhoyBJ7z9LcAwkj0/7NnPzYaPeZxxVp3zm+5IqA=
contrib.go.opencensus.io/exporter/ocagent v0.6.0 h1:Z1n6UAyr0QwM284yUuh5Zd8JlvxUGAhFZcgMJkMPrGM=
contrib.go.opencensus.io/exporter/ocagent v0.6.0/go.mod h1:zmKjrJcdo0aYcVS7bmEeSEBLPA9YJp5bjrofdU3pIXs=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
github.com/Azure/azure-pipeline-go v0.2.1 h1:OLBdZJ3yvOn2MezlWvbrBMTEUQC72zAftRZOMdj5HYo=
github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4=
+github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY=
+github.com/Azure/azure-pipeline-go v0.2.2/go.mod h1:4rQ/NZncSvGqNkkOsNpOU1tgoNuIlp9AfUH5G1tvCHc=
github.com/Azure/azure-sdk-for-go v36.2.0+incompatible h1:09cv2WoH0g6jl6m2iT+R9qcIPZKhXEL0sbmLhxP895s=
github.com/Azure/azure-sdk-for-go v36.2.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
-github.com/Azure/azure-storage-blob-go v0.7.0/go.mod h1:f9YQKtsG1nMisotuTPpO0tjNuEjKRYAcJU8/ydDI++4=
github.com/Azure/azure-storage-blob-go v0.8.0 h1:53qhf0Oxa0nOjgbDeeYPUeyiNmafAFEY95rZLK0Tj6o=
github.com/Azure/azure-storage-blob-go v0.8.0/go.mod h1:lPI3aLPpuLTeUwh1sViKXFxwl2B6teiRqI0deQUvsw0=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78 h1:w+iIsaOQNcT7OZ575w+acHgRric5iCyQh+xv+KJ4HB8=
@@ -18,11 +37,11 @@ github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX
github.com/Azure/go-autorest v13.3.0+incompatible h1:8Ix0VdeOllBx9jEcZ2Wb1uqWUpE1awmJiaHztwaJCPk=
github.com/Azure/go-autorest v13.3.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/autorest v0.9.0/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
-github.com/Azure/go-autorest/autorest v0.9.2 h1:6AWuh3uWrsZJcNoCHrCF/+g4aKPCU39kaMO6/qrnK/4=
-github.com/Azure/go-autorest/autorest v0.9.2/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
+github.com/Azure/go-autorest/autorest v0.9.3-0.20191028180845-3492b2aff503 h1:uUhdsDMg2GbFLF5GfQPtLMWd5vdDZSfqvqQp3waafxQ=
+github.com/Azure/go-autorest/autorest v0.9.3-0.20191028180845-3492b2aff503/go.mod h1:xyHB1BMZT0cuDHU7I0+g046+BFDTQ8rEZB0s4Yfa6bI=
github.com/Azure/go-autorest/autorest/adal v0.5.0/go.mod h1:8Z9fGy2MpX0PvDjB1pEgQTmVqjGhiHBW7RJJEciWzS0=
-github.com/Azure/go-autorest/autorest/adal v0.8.0 h1:CxTzQrySOxDnKpLjFJeZAS5Qrv/qFPkgLjx5bOAi//I=
-github.com/Azure/go-autorest/autorest/adal v0.8.0/go.mod h1:Z6vX6WXXuyieHAXwMj0S6HY6e6wcHn37qQMBQlvY3lc=
+github.com/Azure/go-autorest/autorest/adal v0.8.1-0.20191028180845-3492b2aff503 h1:Hxqlh1uAA8aGpa1dFhDNhll7U/rkWtG8ZItFvRMr7l0=
+github.com/Azure/go-autorest/autorest/adal v0.8.1-0.20191028180845-3492b2aff503/go.mod h1:Z6vX6WXXuyieHAXwMj0S6HY6e6wcHn37qQMBQlvY3lc=
github.com/Azure/go-autorest/autorest/date v0.1.0/go.mod h1:plvfp3oPSKwf2DNjlBjWF/7vwR+cUD/ELuzDCXwHUVA=
github.com/Azure/go-autorest/autorest/date v0.2.0 h1:yW+Zlqf26583pE43KhfnhFcdmSWlm5Ew6bxipnr/tbM=
github.com/Azure/go-autorest/autorest/date v0.2.0/go.mod h1:vcORJHLJEh643/Ioh9+vPmf1Ij9AEBM5FuBIXLmIy0g=
@@ -30,10 +49,10 @@ github.com/Azure/go-autorest/autorest/mocks v0.1.0/go.mod h1:OTyCOPRA2IgIlWxVYxB
github.com/Azure/go-autorest/autorest/mocks v0.2.0/go.mod h1:OTyCOPRA2IgIlWxVYxBee2F5Gr4kF2zd2J5cFRaIDN0=
github.com/Azure/go-autorest/autorest/mocks v0.3.0 h1:qJumjCaCudz+OcqE9/XtEPfvtOjOmKaui4EOpFI6zZc=
github.com/Azure/go-autorest/autorest/mocks v0.3.0/go.mod h1:a8FDP3DYzQ4RYfVAxAN3SVSiiO77gL2j2ronKKP0syM=
-github.com/Azure/go-autorest/autorest/to v0.3.0 h1:zebkZaadz7+wIQYgC7GXaz3Wb28yKYfVkkBKwc38VF8=
-github.com/Azure/go-autorest/autorest/to v0.3.0/go.mod h1:MgwOyqaIuKdG4TL/2ywSsIWKAfJfgHDo8ObuUk3t5sA=
-github.com/Azure/go-autorest/autorest/validation v0.2.0 h1:15vMO4y76dehZSq7pAaOLQxC6dZYsSrj2GQpflyM/L4=
-github.com/Azure/go-autorest/autorest/validation v0.2.0/go.mod h1:3EEqHnBxQGHXRYq3HT1WyXAvT7LLY3tl70hw6tQIbjI=
+github.com/Azure/go-autorest/autorest/to v0.3.1-0.20191028180845-3492b2aff503 h1:2McfZNaDqGPjv2pddK547PENIk4HV+NT7gvqRq4L0us=
+github.com/Azure/go-autorest/autorest/to v0.3.1-0.20191028180845-3492b2aff503/go.mod h1:MgwOyqaIuKdG4TL/2ywSsIWKAfJfgHDo8ObuUk3t5sA=
+github.com/Azure/go-autorest/autorest/validation v0.2.1-0.20191028180845-3492b2aff503 h1:RBrGlrkPWapMcLp1M6ywCqyYKOAT5ERI6lYFvGKOThE=
+github.com/Azure/go-autorest/autorest/validation v0.2.1-0.20191028180845-3492b2aff503/go.mod h1:3EEqHnBxQGHXRYq3HT1WyXAvT7LLY3tl70hw6tQIbjI=
github.com/Azure/go-autorest/logger v0.1.0 h1:ruG4BSDXONFRrZZJ2GUXDiUyVpayPmb1GnWeHDdaNKY=
github.com/Azure/go-autorest/logger v0.1.0/go.mod h1:oExouG+K6PryycPJfVSxi/koC6LSNgds39diKLz7Vrc=
github.com/Azure/go-autorest/tracing v0.5.0 h1:TRn4WjSnkcSy5AEG3pnbtFSwNtwzjr4VYyQflFE619k=
@@ -42,6 +61,7 @@ github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
+github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/Masterminds/squirrel v0.0.0-20161115235646-20f192218cf5/go.mod h1:xnKTFzjGUiZtiOagBsfnvomW+nJg2usB1ZpordQWqNM=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.12 h1:xAfWHN1IrQ0NJ9TBC0KBZoqLjzDTr1ML+4MywiUOryc=
@@ -53,6 +73,8 @@ github.com/Nvveen/Gotty v0.0.0-20120604004816-cd527374f1e5/go.mod h1:lmUJ/7eu/Q8
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/OneOfOne/xxhash v1.2.5 h1:zl/OfRA6nftbBK9qTohYBJ5xvw6C/oNKizR7cZGl3cI=
github.com/OneOfOne/xxhash v1.2.5/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
+github.com/OneOfOne/xxhash v1.2.6 h1:U68crOE3y3MPttCMQGywZOLrTeF5HHJ3/vDBCJn9/bA=
+github.com/OneOfOne/xxhash v1.2.6/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/PuerkitoBio/purell v1.0.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.0/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
@@ -60,19 +82,24 @@ github.com/PuerkitoBio/urlesc v0.0.0-20160726150825-5bd2802263f2/go.mod h1:uGdko
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMxUHB2q5Ap20/P/eIdh4G0pI=
-github.com/a8m/mark v0.1.1-0.20170507133748-44f2db618845/go.mod h1:c8Mh99Cw82nrsAnPgxQSZHkswVOJF7/MqZb1ZdvriLM=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 h1:JYp7IbQjafoB+tBA3gMyHYHrpOtNuDiK/uB5uXxq5wM=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4 h1:Hs82Z41s6SdL1CELW+XaDYmOH4hkBN4/N9og/AsOv7E=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
+github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d h1:UQZhZ2O0vMHr2cI+DC1Mbh0TJxzA3RcLoMsFw+aXw7E=
+github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
+github.com/aliyun/aliyun-oss-go-sdk v2.0.4+incompatible/go.mod h1:T/Aws4fEfogEE9v+HPhhw+CntffsBHJ8nXQCwKr0/g8=
+github.com/antihax/optional v0.0.0-20180407024304-ca021399b1a6/go.mod h1:V8iCPQYkqmusNa815XgQio277wI47sdRh1dUOLdyC6Q=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da h1:8GUt8eRujhVEGZFFEjBj46YV4rDjvGrNxb0KMWYkL2I=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878 h1:EFSB7Zo9Eg91v7MJPVsifUysc/wPdN+NOnVe6bWbdBM=
github.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878/go.mod h1:3AMJUQhVx52RsWOnlkpikZr01T/yAVN2gn0861vByNg=
+github.com/armon/go-metrics v0.3.0 h1:B7AQgHi8QSEi4uHu7Sbsga+IJDU+CENgjxoo81vDUqU=
+github.com/armon/go-metrics v0.3.0/go.mod h1:zXjbSimjXTd7vOpY8B0/2LpvNvDoXBuplAD+gJD3GYs=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/asaskevich/govalidator v0.0.0-20180720115003-f9ffefc3facf/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
@@ -82,8 +109,9 @@ github.com/aws/aws-sdk-go v1.22.4 h1:Mcq67g9mZEBvBuj/x7mF9KCyw5M8/4I/cjQPkdCsq0I
github.com/aws/aws-sdk-go v1.22.4/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.23.12 h1:2UnxgNO6Y5J1OrkXS8XNp0UatDxD1bWHiDT62RDPggI=
github.com/aws/aws-sdk-go v1.23.12/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
-github.com/aws/aws-sdk-go v1.25.22 h1:DXcA0jjMnGt2awoWM2qwCu+ouGDB5FYnGxCVrRweE/0=
-github.com/aws/aws-sdk-go v1.25.22/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
+github.com/aws/aws-sdk-go v1.25.35 h1:fe2tJnqty/v/50pyngKdNk/NP8PFphYDA1Z7N3EiiiE=
+github.com/aws/aws-sdk-go v1.25.35/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
+github.com/baiyubin/aliyun-sts-go-sdk v0.0.0-20180326062324-cfa1a18b161f/go.mod h1:AuiFmCCPBSrqvVMvuqFuk0qogytodnVFVSN5CeJB8Gc=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -107,6 +135,9 @@ github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA
github.com/cespare/xxhash v0.0.0-20181017004759-096ff4a8a059/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash v1.1.0 h1:a6HrQnmkObjyL+Gs60czilIUGqrzKutQD6XZog3p+ko=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
+github.com/cespare/xxhash/v2 v2.1.0/go.mod h1:dgIUBU3pDso/gPgZ1osOZ0iQf77oPR28Tjxl5dIMyVM=
+github.com/cespare/xxhash/v2 v2.1.1 h1:6MnRN8NT7+YBpUIWxHtefFZOKTAPgGjpQSxqLNn0+qY=
+github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
@@ -132,8 +163,8 @@ github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
-github.com/cortexproject/cortex v0.4.1-0.20191217132644-cd4009e2f8e7 h1:njm5BUay8R005/pn7JYtMUWqg0uL3Ab1Lqg46TFLSEM=
-github.com/cortexproject/cortex v0.4.1-0.20191217132644-cd4009e2f8e7/go.mod h1:q8S8k4LL/UO2jcaXNUl3NVsQBPoYM/L3sPqRkLa41oI=
+github.com/cortexproject/cortex v0.4.1-0.20200122092731-ab3e8360fe30 h1:eUzVHTW1754OBI5DuOF9KVg9iEDKnzeUjJasFJLQ3t4=
+github.com/cortexproject/cortex v0.4.1-0.20200122092731-ab3e8360fe30/go.mod h1:GI9wHA+WyoOnzkrOkpaobueBeRERdbistcSRuu4wXwA=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/cznic/b v0.0.0-20180115125044-35e9bbe41f07/go.mod h1:URriBxXwVq5ijiJ12C7iIZqlA69nTlI+LgI6/pwftG8=
github.com/cznic/fileutil v0.0.0-20180108211300-6a051e75936f/go.mod h1:8S58EK26zhXSxzv7NQFpnliaOQsmDUxvoQO3rt154Vg=
@@ -171,6 +202,7 @@ github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDD
github.com/docker/go-units v0.4.0 h1:3uh0PgVws3nIA0Q+MwDC8yjEPf9zjRfZZWXZYDct3Tw=
github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/spdystream v0.0.0-20160310174837-449fdfce4d96/go.mod h1:Qh8CwZgvJUkLughtfhJv5dyTYa91l1fOUCrgjqmcifM=
+github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -181,7 +213,9 @@ github.com/edsrzf/mmap-go v0.0.0-20170320065105-0bce6a688712/go.mod h1:YO35OhQPt
github.com/edsrzf/mmap-go v1.0.0 h1:CEBF7HpRnUCSJgGUb5h1Gm7e3VkmVDrR8lvWVLtrOFw=
github.com/edsrzf/mmap-go v1.0.0/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/elastic/go-sysinfo v1.0.1/go.mod h1:O/D5m1VpYLwGjCYzEt63g3Z1uO3jXfwyzzjiW90t8cY=
+github.com/elastic/go-sysinfo v1.1.1/go.mod h1:i1ZYdU10oLNfRzq4vq62BEwD2fH8KaWh6eh0ikPT9F0=
github.com/elastic/go-windows v1.0.0/go.mod h1:TsU0Nrp7/y3+VwE82FoZF8gC/XFg/Elz6CcloAxnPgU=
+github.com/elastic/go-windows v1.0.1/go.mod h1:FoVvqWSun28vaDQPbj2Elfc0JahhPB7WQEGa3c814Ss=
github.com/elazarl/goproxy v0.0.0-20170405201442-c4fc26588b6e/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
@@ -194,7 +228,7 @@ github.com/facette/natsort v0.0.0-20181210072756-2cd4dd1e2dcb h1:IT4JYU7k4ikYg1S
github.com/facette/natsort v0.0.0-20181210072756-2cd4dd1e2dcb/go.mod h1:bH6Xx7IW64qjjJq8M2u4dxNaBiDfKK+z/3eGDpXEQhc=
github.com/fatih/color v1.7.0 h1:DkWD4oS2D8LGGgTQ6IvwJJXSL5Vp2ffcQg58nFV38Ys=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
-github.com/fatih/structtag v1.0.0/go.mod h1:IKitwq45uXL/yqi5mYghiD3w9H6eTOvI9vnk8tXMphA=
+github.com/fatih/structtag v1.1.0/go.mod h1:mBJUNpUnHmRKrKlQQlmCrh5PuhftFbNv8Ys4/aAZl94=
github.com/fluent/fluent-bit-go v0.0.0-20190925192703-ea13c021720c h1:QwbffUs/+ptC4kTFPEN9Ej2latTq3bZJ5HO/OwPXYMs=
github.com/fluent/fluent-bit-go v0.0.0-20190925192703-ea13c021720c/go.mod h1:WQX+afhrekY9rGK+WT4xvKSlzmia9gDoLYu4GGYGASQ=
github.com/fluent/fluent-logger-golang v1.2.1 h1:CMA+mw2zMiOGEOarZtaqM3GBWT1IVLNncNi0nKELtmU=
@@ -206,11 +240,11 @@ github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsouza/fake-gcs-server v1.7.0 h1:Un0BXUXrRWYSmYyC1Rqm2e2WJfTPyDy/HGMz31emTi8=
github.com/fsouza/fake-gcs-server v1.7.0/go.mod h1:5XIRs4YvwNbNoz+1JF8j6KLAyDh7RHGAyAK3EP2EsNk=
-github.com/gernest/wow v0.1.0/go.mod h1:dEPabJRi5BneI1Nev1VWo0ZlcTWibHWp43qxKms4elY=
github.com/ghodss/yaml v0.0.0-20150909031657-73d445a93680/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0 h1:wDJmvq38kDhkVxi50ni9ykkdUr1PKgqKOoi01fa0Mdk=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
@@ -281,8 +315,8 @@ github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zV
github.com/gogo/protobuf v1.2.2-0.20190723190241-65acae22fc9d/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.2.2-0.20190730201129-28a6bbf47e48 h1:X+zN6RZXsvnrSJaAIQhZezPfAfvsqihKKR8oiLHid34=
github.com/gogo/protobuf v1.2.2-0.20190730201129-28a6bbf47e48/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
-github.com/gogo/protobuf v1.3.0 h1:G8O7TerXerS4F6sx9OV7/nRfJdnXgHZu/S/7F2SN+UE=
-github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
+github.com/gogo/protobuf v1.3.1 h1:DqDEcV5aeaTmdFBePNpYsp3FlcVH/2ISVVM9Qf8PSls=
+github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/status v1.0.3 h1:WkVBY59mw7qUNTr/bLwO7J2vesJ0rQ2C3tMXrTd3w5M=
github.com/gogo/status v1.0.3/go.mod h1:SavQ51ycCLnc7dGyJxp8YAmudx8xqiVrRf+6IXRsugc=
github.com/golang-migrate/migrate/v4 v4.7.0/go.mod h1:Qvut3N4xKWjoH3sokBccML6WyHSnggXm/DvMMnTsQIc=
@@ -291,6 +325,8 @@ github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfU
github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6 h1:ZgQEtGgCBiWRM39fZuwSd1LwSqqSW0hOdXCYYDX0R3I=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
+github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9 h1:uHTyIjqVhYRhLbJ8nIiOJHkEZZ+5YoOsAbD3sk82NiE=
+github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
@@ -325,6 +361,7 @@ github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXi
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190723021845-34ac40c74b70/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
@@ -338,10 +375,15 @@ github.com/googleapis/gnostic v0.0.0-20170426233943-68f4ded48ba9/go.mod h1:sJBsC
github.com/googleapis/gnostic v0.0.0-20170729233727-0c5108395e2d/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/googleapis/gnostic v0.3.0 h1:CcQijm0XKekKjP/YCz28LXVSpgguuB+nCxaSjCe09y0=
github.com/googleapis/gnostic v0.3.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
+github.com/googleapis/gnostic v0.3.1 h1:WeAefnSUHlBb0iJKwxFDZdbfGwkd7xRNuV+IpXMJhYk=
+github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU=
github.com/gophercloud/gophercloud v0.0.0-20190126172459-c818fa66e4c8/go.mod h1:3WdhXV3rUYy9p6AUW8d94kr+HS62Y4VL9mBnFxsD8q4=
github.com/gophercloud/gophercloud v0.3.0 h1:6sjpKIpVwRIIwmcEGp+WwNovNsem+c+2vm6oxshRpL8=
github.com/gophercloud/gophercloud v0.3.0/go.mod h1:vxM41WHh5uqHVBMZHzuwNOHh8XEoIEcSTewFxm1c5g8=
+github.com/gophercloud/gophercloud v0.6.0 h1:Xb2lcqZtml1XjgYZxbeayEemq7ASbeTp09m36gQFpEU=
+github.com/gophercloud/gophercloud v0.6.0/go.mod h1:GICNByuaEBibcjmjvI7QvYJSZEbGkcYwAR7EZK2WMqM=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gopherjs/gopherjs v0.0.0-20191106031601-ce3c9ade29de/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/context v1.1.1 h1:AWwleXJkX/nhcU9bZSnZoi3h/qGYqQAGhq6zZe/aQW8=
github.com/gorilla/context v1.1.1/go.mod h1:kBGZzfjB9CEq2AlWe17Uuf7NDRt0dE0s8S51q0aT7Yg=
github.com/gorilla/mux v1.6.2 h1:Pgr17XVTNXAk3q/r4CpKzC5xBM/qW1uVLV+IhRZpIIk=
@@ -365,16 +407,20 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.1-0.20191002090509-6af20e3a534
github.com/grpc-ecosystem/grpc-gateway v1.4.1/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway v1.9.4/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
-github.com/grpc-ecosystem/grpc-gateway v1.9.6 h1:8p0pcgLlw2iuZVsdHdPaMUXFOA+6gDixcXbHEMzSyW8=
-github.com/grpc-ecosystem/grpc-gateway v1.9.6/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
+github.com/grpc-ecosystem/grpc-gateway v1.12.1 h1:zCy2xE9ablevUOrUZc3Dl72Dt+ya2FNAvC2yLYMHzi4=
+github.com/grpc-ecosystem/grpc-gateway v1.12.1/go.mod h1:8XEsbTttt/W+VvjtQhLACqCisSPWTxCZ7sBRjU6iH9c=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645 h1:MJG/KsmcqMwFAkh8mTnAwhyKoB+sTAnY4CACC110tbU=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4=
github.com/hashicorp/consul/api v1.1.0 h1:BNQPM9ytxj6jbjjdRPioQ94T6YXriSopn0i8COv6SRA=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
+github.com/hashicorp/consul/api v1.3.0 h1:HXNYlRkkM/t+Y/Yhxtwcy02dlYwIaoxzvxPnS+cqy78=
+github.com/hashicorp/consul/api v1.3.0/go.mod h1:MmDNSzIMUjNpY/mQ398R4bk2FnqQLoPndWW5VkKPlCE=
github.com/hashicorp/consul/sdk v0.1.1 h1:LnuDWGNsoajlhGyHJvuWW6FVqRl8JOTPqS6CPTsYjhY=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
+github.com/hashicorp/consul/sdk v0.3.0 h1:UOxjlb4xVNF93jak1mzzoBatyFju9nrkxpVwIp/QqxQ=
+github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
@@ -412,10 +458,14 @@ github.com/hashicorp/memberlist v0.1.3 h1:EmmoJme1matNzb+hMpDuR/0sbJSUisxyqBGG67
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/memberlist v0.1.4 h1:gkyML/r71w3FL8gUi74Vk76avkj/9lYAY9lvg0OcoGs=
github.com/hashicorp/memberlist v0.1.4/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
+github.com/hashicorp/memberlist v0.1.5 h1:AYBsgJOW9gab/toO5tEB8lWetVgDKZycqkebJ8xxpqM=
+github.com/hashicorp/memberlist v0.1.5/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2 h1:YZ7UKsJv+hKjqGVUUbtE3HNj79Eln2oQ75tniF6iPt0=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hashicorp/serf v0.8.3 h1:MWYcmct5EtKz0efYooPcL0yNkem+7kWxqXDi/UIh+8k=
github.com/hashicorp/serf v0.8.3/go.mod h1:UpNcs7fFbpKIyZaUuSW6EPiH+eZC7OuyFD+wc1oal+k=
+github.com/hashicorp/serf v0.8.5 h1:ZynDUIQiA8usmRgPdGPHFdPnb1wgGI9tK3mO9hcAJjc=
+github.com/hashicorp/serf v0.8.5/go.mod h1:UpNcs7fFbpKIyZaUuSW6EPiH+eZC7OuyFD+wc1oal+k=
github.com/imdario/mergo v0.3.5/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/influxdata/go-syslog/v2 v2.0.1 h1:l44S4l4Q8MhGQcoOxJpbo+QQYxJqp0vdgIVHh4+DO0s=
@@ -432,18 +482,24 @@ github.com/jonboulle/clockwork v0.1.0 h1:VKV+ZcuP6l3yW9doeqz6ziZGgcynBVQO+obU0+0
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jpillora/backoff v0.0.0-20180909062703-3050d21c67d7 h1:K//n/AqR5HjG3qxbrBCL4vJPW0MVFSs9CPK1OOJdRME=
github.com/jpillora/backoff v0.0.0-20180909062703-3050d21c67d7/go.mod h1:2iMrUgbbvHEiQClaW2NsSzMyGHqN+rDFqY705q49KG0=
+github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA=
+github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v0.0.0-20180612202835-f2b4162afba3/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v0.0.0-20180701071628-ab8a2e0c74be/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.5/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
+github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.9 h1:9yzud/Ht36ygwatGx56VwCZtlI/2AD15T1X2sjSuGns=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024 h1:rBMNdlhTLzJjJSDIjNEXX1Pz3Hmwmz91v+zycvx9PJc=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/jstemmer/go-junit-report v0.9.1 h1:6QPYqodiu3GuPL+7mfx+NwDdp2eTkp9IfEUpgAwUN0o=
+github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/XSXhF0NWZEnDohbsk=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
+github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kardianos/osext v0.0.0-20190222173326-2bc1f35cddc0/go.mod h1:1NbS8ALrpOvjt0rHPNLyCIeMtbizbir8U//inJ+zuB8=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
@@ -471,6 +527,8 @@ github.com/leanovate/gopter v0.2.4/go.mod h1:gNcbPWNEWRe4lm+bycKqxUYoH5uoVje5SkO
github.com/leodido/ragel-machinery v0.0.0-20181214104525-299bdde78165 h1:bCiVCRCs1Heq84lurVinUPy19keqGEe4jh5vtK37jcg=
github.com/leodido/ragel-machinery v0.0.0-20181214104525-299bdde78165/go.mod h1:WZxr2/6a/Ar9bMDc2rN/LJrE/hF6bXE4LPyDSIxwAfg=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
+github.com/lightstep/lightstep-tracer-common/golang/gogo v0.0.0-20190605223551-bc2310a04743/go.mod h1:qklhhLq1aX+mtWk9cPHPzaBjWImj5ULL6C7HFJtXQMM=
+github.com/lightstep/lightstep-tracer-go v0.18.0/go.mod h1:jlF1pusYV4pidLvZ+XD0UBX0ZE6WURAspgAczcDHrL4=
github.com/lovoo/gcloud-opentracing v0.3.0/go.mod h1:ZFqk2y38kMDDikZPAK7ynTTGuyt17nSPdS3K5e+ZTBY=
github.com/mailru/easyjson v0.0.0-20160728113105-d5b7844b561a/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/mailru/easyjson v0.0.0-20180823135443-60711f1a8329/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -480,13 +538,15 @@ github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN
github.com/mattn/go-colorable v0.0.9 h1:UVL0vNpWh04HeJXV0KLcaT7r06gOH2l4OW6ddYRUIY4=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-ieproxy v0.0.0-20190610004146-91bb50d98149/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
-github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb h1:hXqqXzQtJbENrsb+rsIqkVqcg4FUJL0SQFGw08Dgivw=
-github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
+github.com/mattn/go-ieproxy v0.0.0-20190702010315-6dee0af9227d/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
+github.com/mattn/go-ieproxy v0.0.0-20191113090002-7c0f6868bffe h1:YioO2TiJyAHWHyCRQCP8jk5IzTqmsbGc5qQPIhHo6xs=
+github.com/mattn/go-ieproxy v0.0.0-20191113090002-7c0f6868bffe/go.mod h1:pYabZ6IHcRpFh7vIaLfK7rdcWgFEb3SFJ6/gNWuh88E=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.4 h1:bnP0vzxcAdeI1zdubAl5PjU6zsERjGZb7raWodagDYs=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.4/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
+github.com/mattn/go-runewidth v0.0.6/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
github.com/mattn/go-sqlite3 v1.10.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/matttproud/golang_protobuf_extensions v1.0.0/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
@@ -494,10 +554,10 @@ github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/miekg/dns v1.1.15 h1:CSSIDtllwGLMoA6zjdKnaE6Tx6eVUxQ29LUgGetiDCI=
github.com/miekg/dns v1.1.15/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
-github.com/miekg/dns v1.1.19 h1:0ymbfaLG1/utH2+BydNiF+dx1jSEmdr/nylOtkGHZZg=
-github.com/miekg/dns v1.1.19/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
-github.com/minio/cli v1.20.0/go.mod h1:bYxnK0uS629N3Bq+AOZZ+6lwF77Sodk4+UL9vNuXhOY=
-github.com/minio/minio-go/v6 v6.0.27-0.20190529152532-de69c0e465ed/go.mod h1:vaNT59cWULS37E+E9zkuN/BVnKHyXtVGS+b04Boc66Y=
+github.com/miekg/dns v1.1.22 h1:Jm64b3bO9kP43ddLjL2EY3Io6bmy1qGb9Xxz6TqS6rc=
+github.com/miekg/dns v1.1.22/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
+github.com/minio/minio-go/v6 v6.0.44/go.mod h1:qD0lajrGW49lKZLtXKtCB4X/qkMf0a5tBvN2PaZg7Gg=
+github.com/minio/sha256-simd v0.1.1/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0 h1:vKb8ShqSby24Yrqr/yDYkuFz8d0WUjys40rvnGC8aR0=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -521,7 +581,7 @@ github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3Rllmb
github.com/morikuni/aec v0.0.0-20170113033406-39771216ff4c/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
-github.com/mozillazg/go-cos v0.12.0/go.mod h1:Zp6DvvXn0RUOXGJ2chmWt2bLEqRAnJnS3DnAZsJsoaE=
+github.com/mozillazg/go-cos v0.13.0/go.mod h1:Zp6DvvXn0RUOXGJ2chmWt2bLEqRAnJnS3DnAZsJsoaE=
github.com/mozillazg/go-httpheader v0.2.1/go.mod h1:jJ8xECTlalr6ValeXYdOF8fFUISeBAdw6E61aqQma60=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
@@ -535,17 +595,24 @@ github.com/oklog/ulid v1.3.1 h1:EGfNDEx6MqHz8B3uNV6QAib1UR2Lm97sHi3ocA6ESJ4=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
github.com/olekukonko/tablewriter v0.0.1/go.mod h1:vsDQFd/mU46D+Z4whnwzcISnGGzXWMclvtLoiIKAKIo=
+github.com/olekukonko/tablewriter v0.0.2/go.mod h1:rSAaSIOAGT9odnlyGlUfAJaoc5w2fSBUmeGDbRWPxyQ=
github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.8.0 h1:VkHVNpR4iVnU8XQR6DBm8BqYjN7CRzw+xKUbVVbbW9w=
github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.3 h1:OoxbjfXVZyod1fmWYhI7SEyaD8B00ynP3T+D5GiyHOY=
+github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
github.com/onsi/gomega v0.0.0-20190113212917-5533ce8a0da3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.5.0 h1:izbySO9zDPmjJ8rDjLvkA2zJHIo+HkYXHnf7eN7SSyo=
github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/onsi/gomega v1.7.1 h1:K0jcRCwNQM3vFGh1ppMtDh/+7ApJrjldlX8fA0jDTLQ=
+github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/opencontainers/go-digest v1.0.0-rc1 h1:WzifXhOVOEOuFYOJAW6aQqW0TooG2iki3E3Ii+WN7gQ=
github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
@@ -555,6 +622,7 @@ github.com/opentracing-contrib/go-grpc v0.0.0-20180928155321-4b5a12d3ff02/go.mod
github.com/opentracing-contrib/go-stdlib v0.0.0-20190519235532-cf7a6c988dc9 h1:QsgXACQhd9QJhEmRumbsMQQvBtmdS0mafoVEBplWXEg=
github.com/opentracing-contrib/go-stdlib v0.0.0-20190519235532-cf7a6c988dc9/go.mod h1:PLldrQSroqzH70Xl+1DQcGnefIbqsKR7UDaiux3zV+w=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
+github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
@@ -585,6 +653,9 @@ github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0 h1:BQ53HtBmfOitExawJ6LokA4x8ov/z0SYYb0+HxJfRI8=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
+github.com/prometheus/client_golang v1.2.0/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
+github.com/prometheus/client_golang v1.2.1 h1:JnMpQc6ppsNgw9QPAGF6Dod479itz7lvlsMzzNayLOI=
+github.com/prometheus/client_golang v1.2.1/go.mod h1:XMU6Z2MjaRKVu/dC1qupJI9SiNkDYzz3xecMgSW/F+U=
github.com/prometheus/client_model v0.0.0-20170216185247-6f3806018612/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
@@ -607,21 +678,27 @@ github.com/prometheus/procfs v0.0.0-20190425082905-87a4384529e0/go.mod h1:TjEm7z
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3 h1:CTwfnzjQ+8dS6MhHHu4YswVAD99sL2wjPqP+VkURmKE=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.6 h1:0qbH+Yqu/cj1ViVLvEWCP6qMQ4efWUj6bQqOEA0V0U4=
+github.com/prometheus/procfs v0.0.6/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/prometheus v0.0.0-20180315085919-58e2a31db8de/go.mod h1:oAIUtOny2rjMX0OWN5vPR5/q/twIROJvdqnQKDdil/s=
github.com/prometheus/prometheus v0.0.0-20190818123050-43acd0e2e93f h1:7C9G4yUogM8QP85pmf11vlBPuV6u2mPbqvbjPVKcNis=
github.com/prometheus/prometheus v0.0.0-20190818123050-43acd0e2e93f/go.mod h1:rMTlmxGCvukf2KMu3fClMDKLLoJ5hl61MhcJ7xKakf0=
-github.com/prometheus/prometheus v1.8.2-0.20190913102521-8ab628b35467/go.mod h1:aojjoH+vNHyJUTJoW15HoQWMKXxNhQylU6/G261nqxQ=
-github.com/prometheus/prometheus v1.8.2-0.20190918104050-8744afdd1ea0 h1:W4dTblzSVIBNfDimJhh70OpZQQMwLVpwK50scXdH94w=
-github.com/prometheus/prometheus v1.8.2-0.20190918104050-8744afdd1ea0/go.mod h1:elNqjVbwD3sCZJqKzyN7uEuwGcCpeJvv67D6BrHsDbw=
+github.com/prometheus/prometheus v1.8.2-0.20191126064551-80ba03c67da1 h1:5ee1ewJCJYB7Bp314qaPcRNFaAPsdHN6BFzBC1wMVbQ=
+github.com/prometheus/prometheus v1.8.2-0.20191126064551-80ba03c67da1/go.mod h1:PVTKYlgELGIDbIKIyWRzD4WKjnavPynGOFLSuDpvOwU=
github.com/rafaeljusto/redigomock v0.0.0-20190202135759-257e089e14a1 h1:+kGqA4dNN5hn7WwvKdzHl0rdN5AEkbNZd0VjRltAiZg=
github.com/rafaeljusto/redigomock v0.0.0-20190202135759-257e089e14a1/go.mod h1:JaY6n2sDr+z2WTsXkOmNRUfDy6FN0L6Nk7x06ndm4tY=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a/go.mod h1:bCqnVzQkZxMG4s8nGwiZ5l3QUCyqpo9Y+/ZMZ9VjZe4=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
+github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rs/cors v1.6.0/go.mod h1:gFx+x8UowdsKA9AchylcLynDq+nNFfI8FkUZdN/jGCU=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/ryanuber/columnize v2.1.0+incompatible/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/samuel/go-zookeeper v0.0.0-20190810000440-0ceca61e4d75 h1:cA+Ubq9qEVIQhIWvP2kNuSZ2CmnfBJFSRq+kO1pu2cc=
github.com/samuel/go-zookeeper v0.0.0-20190810000440-0ceca61e4d75/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
+github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da h1:p3Vo3i64TCLY7gIfzeQaUJ+kppEO5WQG3cL8iE8tGHU=
+github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/santhosh-tekuri/jsonschema v1.2.4/go.mod h1:TEAUOeZSmIxTTuHatJzrvARHiuO9LYd+cIxzgEHCQI4=
github.com/satori/go.uuid v0.0.0-20160603004225-b111a074d5ef/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
@@ -643,7 +720,9 @@ github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMB
github.com/sirupsen/logrus v1.4.2 h1:SPIRibHv4MatM3XXNO2BJeFLZwZ2LvZgfQ5+UNI2im4=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/assertions v1.0.1/go.mod h1:kHHU4qYBaI3q23Pp3VPrmWhuIUrLW/7eUrw0BU5VaoM=
github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
+github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4 h1:0HKaf1o97UwFjHH9o5XsHUOF+tqmdA7KEzXLpiyaw0E=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
@@ -655,6 +734,8 @@ github.com/spf13/pflag v0.0.0-20170130214245-9ff6c6923cff/go.mod h1:DYY7MBk1bdzu
github.com/spf13/pflag v1.0.1/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3 h1:zPAT6CGy6wXeQ7NtTnaTerfKOsV6V6F8agHXFiazDkg=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
+github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
+github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
@@ -665,7 +746,7 @@ github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
-github.com/thanos-io/thanos v0.8.1/go.mod h1:qQDi/6tgypn96+VzSumlxfJIgFX2y3ablfhHHLZ05cg=
+github.com/thanos-io/thanos v0.8.1-0.20200102143048-a37ac093a67a/go.mod h1:vFgLQqNITtnP9Qnlc9UpEMi40QdFUIgUV8nsBetzuvE=
github.com/tidwall/pretty v0.0.0-20180105212114-65a9db5fad51/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tinylib/msgp v0.0.0-20161221055906-38a6f61a768d h1:Ninez2SUm08xpmnw7kVxCeOc3DahF6IuMuRMCdM4wTQ=
@@ -676,11 +757,10 @@ github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1
github.com/tonistiigi/fifo v0.0.0-20190226154929-a9fb20d87448 h1:hbyjqt5UnyKeOT3rFVxLxi7iTI6XqR2p4TkwEAQdUiw=
github.com/tonistiigi/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:Q5IRRDY+cjIaiOjTAnXN5LKQV5MPqVx5ofQn85Jy5Yw=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
+github.com/uber-go/atomic v1.4.0 h1:yOuPqEq4ovnhEjpHmfFwsqBXDYbQeT6Nb0bwD6XnD5o=
github.com/uber-go/atomic v1.4.0/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
-github.com/uber/jaeger-client-go v2.16.0+incompatible h1:Q2Pp6v3QYiocMxomCaJuwQGFt7E53bPYqEgug/AoBtY=
-github.com/uber/jaeger-client-go v2.16.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
-github.com/uber/jaeger-client-go v2.20.0+incompatible h1:ttG9wKdl2ikV/BGOtu+eb+VPp+R7jMeuM177Ihs5Fdc=
-github.com/uber/jaeger-client-go v2.20.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
+github.com/uber/jaeger-client-go v2.20.1+incompatible h1:HgqpYBng0n7tLJIlyT4kPCIv5XgCsF+kai1NnnrJzEU=
+github.com/uber/jaeger-client-go v2.20.1+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-lib v2.2.0+incompatible h1:MxZXOiR2JuoANZ3J6DE/U0kSFv/eJ/GfSYVCjK7dyaw=
github.com/uber/jaeger-lib v2.2.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go v1.1.4 h1:j4s+tAvLfL3bZyefP2SEWmhBzmuIlH/eqNuPdFPgngw=
@@ -718,13 +798,16 @@ go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qL
go.mongodb.org/mongo-driver v1.0.4/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.0/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
+go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
-go.opencensus.io v0.22.1 h1:8dP3SGL7MPB94crU3bEPplMPe83FI4EouesJUeFHv50=
-go.opencensus.io v0.22.1/go.mod h1:Ap50jQcDJrx6rB6VgeeFPtuPIf3wMRvRfrfYDO6+BmA=
+go.opencensus.io v0.22.2 h1:75k/FF0Q2YM8QYo07VPddOLBslDt1MZOdEslOHvmzAs=
+go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.uber.org/atomic v1.3.2/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/atomic v1.4.0 h1:cxzIVoETapQEqDhQu3QfnvXAV4AlzcvUCxkVUFw3+EU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
+go.uber.org/atomic v1.5.0 h1:OI5t8sDa1Or+q8AeE+yKeB/SDYioSHAgcVljj9JIETY=
+go.uber.org/atomic v1.5.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/automaxprocs v1.2.0/go.mod h1:YfO3fm683kQpzETxlTGZhGIVmXAhaw3gxeBADbpZtnU=
go.uber.org/multierr v1.1.0 h1:HoEmRHQPVSqub6w2z2d2EOVs2fjyFRGyofhKuyDq0QI=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
@@ -735,12 +818,12 @@ golang.org/x/crypto v0.0.0-20180608092829-8ac0e0d97ce4/go.mod h1:6SG95UA2DQfeDnf
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181025213731-e84da0312774/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
-golang.org/x/crypto v0.0.0-20190103213133-ff983b9c42bc/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190320223903-b7391e95e576/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190325154230-a5d413f7728c/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190513172903-22d7a77e9e5f/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
@@ -748,17 +831,29 @@ golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8U
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4 h1:HuIa8hRrWRSrqYzx1qI49NNxhdi2PrY7gxVSq1JjLDc=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
-golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc h1:c0o/qxkaO2LF5t6fQrT4b5hzyggAkLLlCUjqfRxd8Q4=
-golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708 h1:pXVtWnwHkrWD9ru3sDxY/qFK/bfc0egRovX91EjWjf4=
+golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191029154019-8994fa331a53/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136 h1:A1gGSx58LAGVHUUsOf7IiR0u8Xb6W51gRwfDBhkdcaw=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478 h1:l5EDrHhldLYb3ZRHDUhXF7Om7MvYXnkV9/iQNo1lX6g=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@@ -784,7 +879,6 @@ golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190102155601-82a175fd1598/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190116161447-11f53e031339/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -808,6 +902,10 @@ golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe h1:6fAMxZRR6sl1Uq8U61gxU+kPTs2tR8uOySCbBP7BN/M=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191010194322-b09406accb47/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191025021431-6c3a3bfe00ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191113165036-4c7a9d0fe056/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191218084908-4a24b4065292 h1:Y8q0zsdcgAd+JU8VUA8p8Qv2YhuY9zevDG2ORt5qBUI=
golang.org/x/sys v0.0.0-20191218084908-4a24b4065292/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.0.0-20160726164857-2910a502d2bf/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -842,11 +940,20 @@ golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBn
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190813034749-528a2984e271/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
-golang.org/x/tools v0.0.0-20190925134113-a044388aa56f h1:MekROnXIcau1KD9io9Mt9QFsP9kZU+DeY1OA/HX/200=
-golang.org/x/tools v0.0.0-20190925134113-a044388aa56f/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190918214516-5a1a30219888/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191111182352-50fa39b762bc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191113191852-77e3bb0ad9e7/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2 h1:EtTFh6h4SAKemS+CURDMTDIANuduG5zKEXShyy18bGA=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.3.2/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
@@ -854,15 +961,17 @@ google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEt
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0 h1:VGGbLNyPF7dvYHhcUGYBBGCRDDK0RRJAI6KCvo0CL+E=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
-google.golang.org/api v0.11.0 h1:n/qM3q0/rV2F0pox7o0CvNhlPvZAo7pLbef122cbLJ0=
-google.golang.org/api v0.11.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/api v0.14.0 h1:uMf5uLi4eQMRrMKhCplNik4U4H8Z6C1br3zOtAa/aDE=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
-google.golang.org/appengine v1.6.3 h1:hvZejVcIxAKHR8Pq2gXaDggf6CWT1QEqO+JEBeOKCG8=
-google.golang.org/appengine v1.6.3/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/appengine v1.6.5 h1:tycE03LOZYQNhDpS27tcQdAzLCVMaj7QT2SXxebnpCM=
+google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/genproto v0.0.0-20180608181217-32ee49c4dd80/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
@@ -870,20 +979,27 @@ google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRn
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
-google.golang.org/genproto v0.0.0-20190916214212-f660b8655731 h1:Phvl0+G5t5k/EUFUi0wPdUUeTL2HydMQUXHnunWgSb0=
-google.golang.org/genproto v0.0.0-20190916214212-f660b8655731/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20190927181202-20e1ac93f88c/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9 h1:6XzpBoANz1NqMNfDXzc2QmHmbb1vyMsvRfoP5rM+K1I=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.18.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1 h1:wdKvqQk7IttEw92GoRyKG2IDrUIpgpj6H6m81yfeMW0=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
@@ -894,6 +1010,7 @@ gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/fsnotify/fsnotify.v1 v1.4.7 h1:XNNYLJHt73EyYiCZi6+xjupS9CpvmiDgjPTAjrBlQbo=
@@ -903,14 +1020,18 @@ gopkg.in/inf.v0 v0.9.0/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/ini.v1 v1.42.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
+gopkg.in/ini.v1 v1.51.0/go.mod h1:pNLf8WUiyNEtQjuu5G5vTm06TEv9tsIgeAvK8hOrP4k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
-gopkg.in/urfave/cli.v1 v1.20.0/go.mod h1:vuBzUtMdQeixQj8LVd+/98pzhxNGQoyuPBlsXHOQNO0=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.5 h1:ymVxjfMaHvXD8RqPRmzHHsB3VvucivSkIAvJFDI5O3c=
+gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -918,13 +1039,19 @@ honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
howett.net/plist v0.0.0-20181124034731-591f970eefbb/go.mod h1:vMygbs4qMhSZSc4lCUl2OEE+rDiIIJAIdR4m7MiMcm0=
k8s.io/api v0.0.0-20190620084959-7cf5895f2711/go.mod h1:TBhBqb1AWbBQbW3XRusr7n7E4v2+5ZY8r8sAMnyFC5A=
k8s.io/api v0.0.0-20190813020757-36bff7324fb7 h1:4uJOjRn9kWq4AqJRE8+qzmAy+lJd9rh8TY455dNef4U=
k8s.io/api v0.0.0-20190813020757-36bff7324fb7/go.mod h1:3Iy+myeAORNCLgjd/Xu9ebwN7Vh59Bw0vh9jhoX+V58=
+k8s.io/api v0.0.0-20191115095533-47f6de673b26 h1:6L7CEQVcduEr9eUPN3r3RliLvDrvcaniFOE5B5zRfmc=
+k8s.io/api v0.0.0-20191115095533-47f6de673b26/go.mod h1:iA/8arsvelvo4IDqIhX4IbjTEKBGgvsf2OraTuRtLFU=
k8s.io/apimachinery v0.0.0-20190612205821-1799e75a0719/go.mod h1:I4A+glKBHiTgiEjQiCCQfCAIcIMFGt291SmsvcrFzJA=
k8s.io/apimachinery v0.0.0-20190809020650-423f5d784010 h1:pyoq062NftC1y/OcnbSvgolyZDJ8y4fmUPWMkdA6gfU=
k8s.io/apimachinery v0.0.0-20190809020650-423f5d784010/go.mod h1:Waf/xTS2FGRrgXCkO5FP3XxTOWh0qLf2QhL1qFZZ/R8=
+k8s.io/apimachinery v0.0.0-20191115015347-3c7067801da2 h1:TSH6UZ+y3etc/aDbVqow1NT8o7SJXkxhLKbp3Ywhyvg=
+k8s.io/apimachinery v0.0.0-20191115015347-3c7067801da2/go.mod h1:dXFS2zaQR8fyzuvRdJDHw2Aerij/yVGJSre0bZQSVJA=
k8s.io/client-go v0.0.0-20190620085101-78d2af792bab h1:E8Fecph0qbNsAbijJJQryKu4Oi9QTp5cVpjTE+nqg6g=
k8s.io/client-go v0.0.0-20190620085101-78d2af792bab/go.mod h1:E95RaSlHr79aHaX0aGSwcPNfygDiPKOVXdmivCIZT0k=
k8s.io/gengo v0.0.0-20190128074634-0689ccc1d7d6/go.mod h1:ezvh/TsK7cY6rbqRK0oQQ8IAqLxYwwyPxAX1Pzy0ii0=
@@ -933,13 +1060,19 @@ k8s.io/klog v0.3.0/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.3.1/go.mod h1:Gq+BEi5rUBO/HRz0bTSXDUcqjScdoY3a9IHpCEIOOfk=
k8s.io/klog v0.4.0 h1:lCJCxf/LIowc2IGS9TPjWDyXY4nOmdGdfcwwDQCOURQ=
k8s.io/klog v0.4.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
+k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
+k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/kube-openapi v0.0.0-20190228160746-b3a7cee44a30/go.mod h1:BXM9ceUBTj2QnfH2MK1odQs778ajze1RxcmP6S8RVVc=
k8s.io/kube-openapi v0.0.0-20190709113604-33be087ad058/go.mod h1:nfDlWeOsu3pUf4yWGL+ERqohP4YsZcBJXWMK+gkzOA4=
k8s.io/kube-openapi v0.0.0-20190722073852-5e22f3d471e6 h1:s9IxTKe9GwDH0S/WaX62nFYr0or32DsTWex9AileL7U=
k8s.io/kube-openapi v0.0.0-20190722073852-5e22f3d471e6/go.mod h1:RZvgC8MSN6DjiMV6oIfEE9pDL9CYXokkfaCKZeHm3nc=
+k8s.io/kube-openapi v0.0.0-20191107075043-30be4d16710a h1:UcxjrRMyNx/i/y8G7kPvLyy7rfbeuf1PYyBf973pgyU=
+k8s.io/kube-openapi v0.0.0-20191107075043-30be4d16710a/go.mod h1:1TqjTSzOxsLGIKfj0lK8EeCP7K1iUG65v09OM0/WG5E=
k8s.io/utils v0.0.0-20190221042446-c2654d5206da/go.mod h1:8k8uAuAQ0rXslZKaEWd0c3oVhZz7sSzSiPnVZayjIX0=
k8s.io/utils v0.0.0-20190809000727-6c36bc71fc4a h1:uy5HAgt4Ha5rEMbhZA+aM1j2cq5LmR6LQ71EYC2sVH4=
k8s.io/utils v0.0.0-20190809000727-6c36bc71fc4a/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
+k8s.io/utils v0.0.0-20191114200735-6ca3b61696b6 h1:p0Ai3qVtkbCG/Af26dBmU0E1W58NID3hSSh7cMyylpM=
+k8s.io/utils v0.0.0-20191114200735-6ca3b61696b6/go.mod h1:sZAwmy6armz5eXlNoLmJcl4F1QuKu7sr+mFQ0byX7Ew=
rsc.io/binaryregexp v0.2.0 h1:HfqmD5MEmC0zvwBuF187nq9mdnXjXsSivRiXN7SmRkE=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
sigs.k8s.io/structured-merge-diff v0.0.0-20190525122527-15d366b2352e/go.mod h1:wWxsB5ozmmv/SG7nM11ayaAW51xMvak/t1r0CSlcokI=
diff --git a/pkg/distributor/distributor.go b/pkg/distributor/distributor.go
index 6386cbfd5880f..653daeebb5306 100644
--- a/pkg/distributor/distributor.go
+++ b/pkg/distributor/distributor.go
@@ -102,7 +102,7 @@ func New(cfg Config, clientCfg client.Config, ingestersRing ring.ReadRing, overr
if overrides.IngestionRateStrategy() == validation.GlobalIngestionRateStrategy {
var err error
- distributorsRing, err = ring.NewLifecycler(cfg.DistributorRing.ToLifecyclerConfig(), nil, "distributor", ring.DistributorRingKey)
+ distributorsRing, err = ring.NewLifecycler(cfg.DistributorRing.ToLifecyclerConfig(), nil, "distributor", ring.DistributorRingKey, false)
if err != nil {
return nil, err
}
diff --git a/pkg/distributor/distributor_test.go b/pkg/distributor/distributor_test.go
index c6ff0f8f394a7..f253753b81ef9 100644
--- a/pkg/distributor/distributor_test.go
+++ b/pkg/distributor/distributor_test.go
@@ -176,7 +176,7 @@ func prepare(t *testing.T, limits *validation.Limits, kvStore kv.Client) *Distri
)
flagext.DefaultValues(&distributorConfig, &clientConfig)
- overrides, err := validation.NewOverrides(*limits)
+ overrides, err := validation.NewOverrides(*limits, nil)
require.NoError(t, err)
// Mock the ingesters ring
diff --git a/pkg/distributor/ingestion_rate_strategy_test.go b/pkg/distributor/ingestion_rate_strategy_test.go
index 1be590fd59538..880542f66e6ba 100644
--- a/pkg/distributor/ingestion_rate_strategy_test.go
+++ b/pkg/distributor/ingestion_rate_strategy_test.go
@@ -54,7 +54,7 @@ func TestIngestionRateStrategy(t *testing.T) {
var strategy limiter.RateLimiterStrategy
// Init limits overrides
- overrides, err := validation.NewOverrides(testData.limits)
+ overrides, err := validation.NewOverrides(testData.limits, nil)
require.NoError(t, err)
// Instance the strategy
diff --git a/pkg/ingester/flush_test.go b/pkg/ingester/flush_test.go
index 82ab2a603697d..69a78b4bbb3bb 100644
--- a/pkg/ingester/flush_test.go
+++ b/pkg/ingester/flush_test.go
@@ -166,7 +166,7 @@ func newTestStore(t require.TestingT, cfg Config) (*testStore, *Ingester) {
chunks: map[string][]chunk.Chunk{},
}
- limits, err := validation.NewOverrides(defaultLimitsTestConfig())
+ limits, err := validation.NewOverrides(defaultLimitsTestConfig(), nil)
require.NoError(t, err)
ing, err := New(cfg, client.Config{}, store, limits)
diff --git a/pkg/ingester/ingester.go b/pkg/ingester/ingester.go
index eae0be9d51fb5..30c7574acdfb0 100644
--- a/pkg/ingester/ingester.go
+++ b/pkg/ingester/ingester.go
@@ -142,7 +142,7 @@ func New(cfg Config, clientConfig client.Config, store ChunkStore, limits *valid
go i.flushLoop(j)
}
- i.lifecycler, err = ring.NewLifecycler(cfg.LifecyclerConfig, i, "ingester", ring.IngesterRingKey)
+ i.lifecycler, err = ring.NewLifecycler(cfg.LifecyclerConfig, i, "ingester", ring.IngesterRingKey, true)
if err != nil {
return nil, err
}
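
The `ring.NewLifecycler` calls in this diff gain a trailing boolean: the distributor passes `false`, the ingester passes `true`. The parameter's name is not visible here; a plausible reading (an assumption, not confirmed by this diff) is that it tells the lifecycler whether the component holds in-memory data that must be flushed or transferred when it leaves the ring. A minimal, self-contained Go sketch of that pattern, with hypothetical names:

```go
package main

import "fmt"

// flusher is a stand-in for a ring member (ingester, distributor) that may
// or may not need to flush state on shutdown.
type flusher struct {
	name            string
	flushOnShutdown bool // assumed meaning of the new NewLifecycler boolean
}

func newFlusher(name string, flushOnShutdown bool) *flusher {
	return &flusher{name: name, flushOnShutdown: flushOnShutdown}
}

func (f *flusher) shutdown() {
	if f.flushOnShutdown {
		fmt.Printf("%s: flushing in-memory data before exit\n", f.name)
		return
	}
	fmt.Printf("%s: nothing to flush, exiting\n", f.name)
}

func main() {
	// Mirrors the two call sites in this diff: the distributor holds no
	// per-stream data (false), the ingester does (true).
	newFlusher("distributor", false).shutdown()
	newFlusher("ingester", true).shutdown()
}
```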
diff --git a/pkg/ingester/ingester_test.go b/pkg/ingester/ingester_test.go
index 1894e11032777..1dadd5da63b70 100644
--- a/pkg/ingester/ingester_test.go
+++ b/pkg/ingester/ingester_test.go
@@ -23,7 +23,7 @@ import (
func TestIngester(t *testing.T) {
ingesterConfig := defaultIngesterTestConfig(t)
- limits, err := validation.NewOverrides(defaultLimitsTestConfig())
+ limits, err := validation.NewOverrides(defaultLimitsTestConfig(), nil)
require.NoError(t, err)
store := &mockStore{
@@ -190,7 +190,7 @@ func TestIngesterStreamLimitExceeded(t *testing.T) {
ingesterConfig := defaultIngesterTestConfig(t)
defaultLimits := defaultLimitsTestConfig()
defaultLimits.MaxLocalStreamsPerUser = 1
- overrides, err := validation.NewOverrides(defaultLimits)
+ overrides, err := validation.NewOverrides(defaultLimits, nil)
require.NoError(t, err)
diff --git a/pkg/ingester/instance_test.go b/pkg/ingester/instance_test.go
index 3eb7b01a701fa..086fc7d1c322a 100644
--- a/pkg/ingester/instance_test.go
+++ b/pkg/ingester/instance_test.go
@@ -24,7 +24,7 @@ var defaultFactory = func() chunkenc.Chunk {
}
func TestLabelsCollisions(t *testing.T) {
- limits, err := validation.NewOverrides(validation.Limits{MaxLocalStreamsPerUser: 1000})
+ limits, err := validation.NewOverrides(validation.Limits{MaxLocalStreamsPerUser: 1000}, nil)
require.NoError(t, err)
limiter := NewLimiter(limits, &ringCountMock{count: 1}, 1)
@@ -51,7 +51,7 @@ func TestLabelsCollisions(t *testing.T) {
}
func TestConcurrentPushes(t *testing.T) {
- limits, err := validation.NewOverrides(validation.Limits{MaxLocalStreamsPerUser: 1000})
+ limits, err := validation.NewOverrides(validation.Limits{MaxLocalStreamsPerUser: 1000}, nil)
require.NoError(t, err)
limiter := NewLimiter(limits, &ringCountMock{count: 1}, 1)
@@ -102,7 +102,7 @@ func TestConcurrentPushes(t *testing.T) {
}
func TestSyncPeriod(t *testing.T) {
- limits, err := validation.NewOverrides(validation.Limits{MaxLocalStreamsPerUser: 1000})
+ limits, err := validation.NewOverrides(validation.Limits{MaxLocalStreamsPerUser: 1000}, nil)
require.NoError(t, err)
limiter := NewLimiter(limits, &ringCountMock{count: 1}, 1)
diff --git a/pkg/ingester/limiter_test.go b/pkg/ingester/limiter_test.go
index 3042b1b58f1dc..d03e04388186b 100644
--- a/pkg/ingester/limiter_test.go
+++ b/pkg/ingester/limiter_test.go
@@ -73,7 +73,7 @@ func TestLimiter_maxStreamsPerUser(t *testing.T) {
limits, err := validation.NewOverrides(validation.Limits{
MaxLocalStreamsPerUser: testData.maxLocalStreamsPerUser,
MaxGlobalStreamsPerUser: testData.maxGlobalStreamsPerUser,
- })
+ }, nil)
require.NoError(t, err)
limiter := NewLimiter(limits, ring, testData.ringReplicationFactor)
@@ -130,7 +130,7 @@ func TestLimiter_AssertMaxStreamsPerUser(t *testing.T) {
limits, err := validation.NewOverrides(validation.Limits{
MaxLocalStreamsPerUser: testData.maxLocalStreamsPerUser,
MaxGlobalStreamsPerUser: testData.maxGlobalStreamsPerUser,
- })
+ }, nil)
require.NoError(t, err)
limiter := NewLimiter(limits, ring, testData.ringReplicationFactor)
diff --git a/pkg/loki/loki.go b/pkg/loki/loki.go
index e5a2e87893114..8932877c4d37b 100644
--- a/pkg/loki/loki.go
+++ b/pkg/loki/loki.go
@@ -8,6 +8,7 @@ import (
"github.com/cortexproject/cortex/pkg/querier/frontend"
"github.com/cortexproject/cortex/pkg/ring"
"github.com/cortexproject/cortex/pkg/util"
+ "github.com/cortexproject/cortex/pkg/util/runtimeconfig"
"github.com/go-kit/kit/log/level"
"github.com/pkg/errors"
@@ -30,19 +31,20 @@ type Config struct {
AuthEnabled bool `yaml:"auth_enabled,omitempty"`
HTTPPrefix string `yaml:"http_prefix"`
- Server server.Config `yaml:"server,omitempty"`
- Distributor distributor.Config `yaml:"distributor,omitempty"`
- Querier querier.Config `yaml:"querier,omitempty"`
- IngesterClient client.Config `yaml:"ingester_client,omitempty"`
- Ingester ingester.Config `yaml:"ingester,omitempty"`
- StorageConfig storage.Config `yaml:"storage_config,omitempty"`
- ChunkStoreConfig chunk.StoreConfig `yaml:"chunk_store_config,omitempty"`
- SchemaConfig chunk.SchemaConfig `yaml:"schema_config,omitempty"`
- LimitsConfig validation.Limits `yaml:"limits_config,omitempty"`
- TableManager chunk.TableManagerConfig `yaml:"table_manager,omitempty"`
- Worker frontend.WorkerConfig `yaml:"frontend_worker,omitempty"`
- Frontend frontend.Config `yaml:"frontend,omitempty"`
- QueryRange queryrange.Config `yaml:"query_range,omitempty"`
+ Server server.Config `yaml:"server,omitempty"`
+ Distributor distributor.Config `yaml:"distributor,omitempty"`
+ Querier querier.Config `yaml:"querier,omitempty"`
+ IngesterClient client.Config `yaml:"ingester_client,omitempty"`
+ Ingester ingester.Config `yaml:"ingester,omitempty"`
+ StorageConfig storage.Config `yaml:"storage_config,omitempty"`
+ ChunkStoreConfig chunk.StoreConfig `yaml:"chunk_store_config,omitempty"`
+ SchemaConfig chunk.SchemaConfig `yaml:"schema_config,omitempty"`
+ LimitsConfig validation.Limits `yaml:"limits_config,omitempty"`
+ TableManager chunk.TableManagerConfig `yaml:"table_manager,omitempty"`
+ Worker frontend.WorkerConfig `yaml:"frontend_worker,omitempty"`
+ Frontend frontend.Config `yaml:"frontend,omitempty"`
+ QueryRange queryrange.Config `yaml:"query_range,omitempty"`
+ RuntimeConfig runtimeconfig.ManagerConfig `yaml:"runtime_config,omitempty"`
}
// RegisterFlags registers flag.
@@ -66,23 +68,25 @@ func (c *Config) RegisterFlags(f *flag.FlagSet) {
c.Frontend.RegisterFlags(f)
c.Worker.RegisterFlags(f)
c.QueryRange.RegisterFlags(f)
+ c.RuntimeConfig.RegisterFlags(f)
}
// Loki is the root datastructure for Loki.
type Loki struct {
cfg Config
- server *server.Server
- ring *ring.Ring
- overrides *validation.Overrides
- distributor *distributor.Distributor
- ingester *ingester.Ingester
- querier *querier.Querier
- store storage.Store
- tableManager *chunk.TableManager
- worker frontend.Worker
- frontend *frontend.Frontend
- stopper queryrange.Stopper
+ server *server.Server
+ ring *ring.Ring
+ overrides *validation.Overrides
+ distributor *distributor.Distributor
+ ingester *ingester.Ingester
+ querier *querier.Querier
+ store storage.Store
+ tableManager *chunk.TableManager
+ worker frontend.Worker
+ frontend *frontend.Frontend
+ stopper queryrange.Stopper
+ runtimeConfig *runtimeconfig.Manager
httpAuthMiddleware middleware.Interface
}
diff --git a/pkg/loki/modules.go b/pkg/loki/modules.go
index 19b72c774f6f1..156a1f1dd728d 100644
--- a/pkg/loki/modules.go
+++ b/pkg/loki/modules.go
@@ -12,6 +12,7 @@ import (
"github.com/cortexproject/cortex/pkg/querier/frontend"
"github.com/cortexproject/cortex/pkg/ring"
"github.com/cortexproject/cortex/pkg/util"
+ "github.com/cortexproject/cortex/pkg/util/runtimeconfig"
"github.com/go-kit/kit/log/level"
"github.com/prometheus/client_golang/prometheus"
@@ -36,6 +37,7 @@ type moduleName int
// The various modules that make up Loki.
const (
Ring moduleName = iota
+ RuntimeConfig
Overrides
Server
Distributor
@@ -60,6 +62,8 @@ func (m moduleName) String() string {
switch m {
case Ring:
return "ring"
+ case RuntimeConfig:
+ return "runtime-config"
case Overrides:
return "overrides"
case Server:
@@ -88,6 +92,9 @@ func (m *moduleName) Set(s string) error {
case "ring":
*m = Ring
return nil
+ case "runtime-config":
+ *m = RuntimeConfig
+ return nil
case "overrides":
*m = Overrides
return nil
@@ -126,6 +133,7 @@ func (t *Loki) initServer() (err error) {
}
func (t *Loki) initRing() (err error) {
+ t.cfg.Ingester.LifecyclerConfig.RingConfig.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
t.ring, err = ring.New(t.cfg.Ingester.LifecyclerConfig.RingConfig, "ingester", ring.IngesterRingKey)
if err != nil {
return
@@ -135,8 +143,27 @@ func (t *Loki) initRing() (err error) {
return
}
+func (t *Loki) initRuntimeConfig() (err error) {
+ if t.cfg.RuntimeConfig.LoadPath == "" {
+ t.cfg.RuntimeConfig.LoadPath = t.cfg.LimitsConfig.PerTenantOverrideConfig
+ t.cfg.RuntimeConfig.ReloadPeriod = t.cfg.LimitsConfig.PerTenantOverridePeriod
+ }
+ t.cfg.RuntimeConfig.Loader = loadRuntimeConfig
+
+ // make sure to set default limits before we start loading configuration into memory
+ validation.SetDefaultLimitsForYAMLUnmarshalling(t.cfg.LimitsConfig)
+
+ t.runtimeConfig, err = runtimeconfig.NewRuntimeConfigManager(t.cfg.RuntimeConfig)
+ return err
+}
+
+func (t *Loki) stopRuntimeConfig() error {
+ t.runtimeConfig.Stop()
+ return nil
+}
+
func (t *Loki) initOverrides() (err error) {
- t.overrides, err = validation.NewOverrides(t.cfg.LimitsConfig)
+ t.overrides, err = validation.NewOverrides(t.cfg.LimitsConfig, tenantLimitsFromRuntimeConfig(t.runtimeConfig))
return err
}
@@ -198,6 +225,7 @@ func (t *Loki) initQuerier() (err error) {
}
func (t *Loki) initIngester() (err error) {
+ t.cfg.Ingester.LifecyclerConfig.RingConfig.KVStore.Multi.ConfigProvider = multiClientRuntimeConfigChannel(t.runtimeConfig)
t.cfg.Ingester.LifecyclerConfig.ListenPort = &t.cfg.Server.GRPCListenPort
t.ingester, err = ingester.New(t.cfg.Ingester, t.cfg.IngesterClient, t.store, t.overrides)
if err != nil {
@@ -392,12 +420,18 @@ var modules = map[moduleName]module{
init: (*Loki).initServer,
},
+ RuntimeConfig: {
+ init: (*Loki).initRuntimeConfig,
+ stop: (*Loki).stopRuntimeConfig,
+ },
+
Ring: {
- deps: []moduleName{Server},
+ deps: []moduleName{RuntimeConfig, Server},
init: (*Loki).initRing,
},
Overrides: {
+ deps: []moduleName{RuntimeConfig},
init: (*Loki).initOverrides,
},
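
The module table now registers a `RuntimeConfig` module with both `init` and `stop` hooks, and makes `Ring` and `Overrides` depend on it, so the `runtimeconfig.Manager` is running before either consumer starts. The sketch below is a simplified, hypothetical illustration of this kind of dependency-ordered initialization, not Loki's actual module runner (it omits cycle detection and shutdown unwinding):

```go
package main

import "fmt"

type module struct {
	deps []string
	init func() error
}

// initModule initializes m after all of its dependencies, mirroring (in
// simplified form) how the deps lists in the module table are resolved.
func initModule(name string, mods map[string]module, done map[string]bool) error {
	if done[name] {
		return nil
	}
	for _, d := range mods[name].deps {
		if err := initModule(d, mods, done); err != nil {
			return err
		}
	}
	done[name] = true
	fmt.Println("init:", name)
	return mods[name].init()
}

func main() {
	noop := func() error { return nil }
	mods := map[string]module{
		"server":         {init: noop},
		"runtime-config": {init: noop},
		// As in this diff: ring and overrides now depend on runtime-config.
		"ring":      {deps: []string{"runtime-config", "server"}, init: noop},
		"overrides": {deps: []string{"runtime-config"}, init: noop},
	}
	done := map[string]bool{}
	_ = initModule("ring", mods, done)
	_ = initModule("overrides", mods, done)
}
```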
diff --git a/pkg/loki/runtime_config.go b/pkg/loki/runtime_config.go
new file mode 100644
index 0000000000000..af65d8d30dde3
--- /dev/null
+++ b/pkg/loki/runtime_config.go
@@ -0,0 +1,72 @@
+package loki
+
+import (
+ "os"
+
+ "github.com/cortexproject/cortex/pkg/ring/kv"
+ "github.com/cortexproject/cortex/pkg/util/runtimeconfig"
+ "gopkg.in/yaml.v2"
+
+ "github.com/grafana/loki/pkg/util/validation"
+)
+
+// runtimeConfigValues are values that can be reloaded from the configuration file while Loki is running.
+// Reloading is done by runtimeconfig.Manager, which also keeps the currently loaded config.
+// These values are then pushed to the components that are interested in them.
+type runtimeConfigValues struct {
+ TenantLimits map[string]*validation.Limits `yaml:"overrides"`
+
+ Multi kv.MultiRuntimeConfig `yaml:"multi_kv_config"`
+}
+
+func loadRuntimeConfig(filename string) (interface{}, error) {
+ f, err := os.Open(filename)
+ if err != nil {
+ return nil, err
+	}
+	defer f.Close()
+
+	overrides := &runtimeConfigValues{}
+
+ decoder := yaml.NewDecoder(f)
+ decoder.SetStrict(true)
+ if err := decoder.Decode(&overrides); err != nil {
+ return nil, err
+ }
+
+ return overrides, nil
+}
+
+func tenantLimitsFromRuntimeConfig(c *runtimeconfig.Manager) validation.TenantLimits {
+ return func(userID string) *validation.Limits {
+ cfg, ok := c.GetConfig().(*runtimeConfigValues)
+ if !ok || cfg == nil {
+ return nil
+ }
+
+ return cfg.TenantLimits[userID]
+ }
+}
+
+func multiClientRuntimeConfigChannel(manager *runtimeconfig.Manager) func() <-chan kv.MultiRuntimeConfig {
+	// returns a function that can be used as MultiConfig.ConfigProvider
+ return func() <-chan kv.MultiRuntimeConfig {
+ outCh := make(chan kv.MultiRuntimeConfig, 1)
+
+ // push initial config to the channel
+ val := manager.GetConfig()
+ if cfg, ok := val.(*runtimeConfigValues); ok && cfg != nil {
+ outCh <- cfg.Multi
+ }
+
+ ch := manager.CreateListenerChannel(1)
+ go func() {
+ for val := range ch {
+ if cfg, ok := val.(*runtimeConfigValues); ok && cfg != nil {
+ outCh <- cfg.Multi
+ }
+ }
+ }()
+
+ return outCh
+ }
+}
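
For reference, here is a standalone sketch of the file shape `loadRuntimeConfig` accepts and of the strict decoding it performs. The local `limits` struct mirrors only one field of `validation.Limits` (`ingestion_rate_mb`) for illustration; the real struct carries many more per-tenant settings:

```go
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v2"
)

// limits mirrors a tiny subset of validation.Limits, for illustration only.
type limits struct {
	IngestionRateMB float64 `yaml:"ingestion_rate_mb"`
}

// values mirrors runtimeConfigValues from this diff.
type values struct {
	TenantLimits map[string]*limits `yaml:"overrides"`
}

func main() {
	// A runtime-config document of the shape loadRuntimeConfig expects:
	// per-tenant limits keyed by tenant ID under "overrides".
	doc := `
overrides:
  tenant-a:
    ingestion_rate_mb: 10
`
	var v values
	dec := yaml.NewDecoder(strings.NewReader(doc))
	dec.SetStrict(true) // unknown fields are an error, as in loadRuntimeConfig
	if err := dec.Decode(&v); err != nil {
		panic(err)
	}
	fmt.Printf("tenant-a rate: %v MB/s\n", v.TenantLimits["tenant-a"].IngestionRateMB)
}
```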
diff --git a/pkg/querier/querier_test.go b/pkg/querier/querier_test.go
index df504f198832f..fc67c8d89ec07 100644
--- a/pkg/querier/querier_test.go
+++ b/pkg/querier/querier_test.go
@@ -46,7 +46,7 @@ func TestQuerier_Label_QueryTimeoutConfigFlag(t *testing.T) {
store := newStoreMock()
store.On("LabelValuesForMetricName", mock.Anything, "test", model.TimeFromUnixNano(startTime.UnixNano()), model.TimeFromUnixNano(endTime.UnixNano()), "logs", "test").Return([]string{"foo", "bar"}, nil)
- limits, err := validation.NewOverrides(defaultLimitsTestConfig())
+ limits, err := validation.NewOverrides(defaultLimitsTestConfig(), nil)
require.NoError(t, err)
q, err := newQuerier(
@@ -97,7 +97,7 @@ func TestQuerier_Tail_QueryTimeoutConfigFlag(t *testing.T) {
ingesterClient.On("Query", mock.Anything, mock.Anything, mock.Anything).Return(queryClient, nil)
ingesterClient.On("Tail", mock.Anything, &request, mock.Anything).Return(tailClient, nil)
- limits, err := validation.NewOverrides(defaultLimitsTestConfig())
+ limits, err := validation.NewOverrides(defaultLimitsTestConfig(), nil)
require.NoError(t, err)
q, err := newQuerier(
@@ -198,7 +198,7 @@ func TestQuerier_tailDisconnectedIngesters(t *testing.T) {
ingesterClient := newQuerierClientMock()
ingesterClient.On("Tail", mock.Anything, &req, mock.Anything).Return(newTailClientMock(), nil)
- limits, err := validation.NewOverrides(defaultLimitsTestConfig())
+ limits, err := validation.NewOverrides(defaultLimitsTestConfig(), nil)
require.NoError(t, err)
q, err := newQuerier(
@@ -272,7 +272,7 @@ func TestQuerier_validateQueryRequest(t *testing.T) {
defaultLimits.MaxStreamsMatchersPerQuery = 1
defaultLimits.MaxQueryLength = 2 * time.Minute
- limits, err := validation.NewOverrides(defaultLimits)
+ limits, err := validation.NewOverrides(defaultLimits, nil)
require.NoError(t, err)
q, err := newQuerier(
@@ -429,7 +429,7 @@ func TestQuerier_SeriesAPI(t *testing.T) {
tc.setup(store, queryClient, ingesterClient, defaultLimits, tc.req)
}
- limits, err := validation.NewOverrides(defaultLimits)
+ limits, err := validation.NewOverrides(defaultLimits, nil)
require.NoError(t, err)
q, err := newQuerier(
diff --git a/pkg/storage/store_test.go b/pkg/storage/store_test.go
index ebd2c1fa136c3..bb3ac17bf9a96 100644
--- a/pkg/storage/store_test.go
+++ b/pkg/storage/store_test.go
@@ -141,7 +141,7 @@ func printHeap(b *testing.B, show bool) {
func getLocalStore() Store {
limits, err := validation.NewOverrides(validation.Limits{
MaxQueryLength: 6000 * time.Hour,
- })
+ }, nil)
if err != nil {
panic(err)
}
diff --git a/pkg/util/validation/limits.go b/pkg/util/validation/limits.go
index c80857b1065da..e306b9116634b 100644
--- a/pkg/util/validation/limits.go
+++ b/pkg/util/validation/limits.go
@@ -2,12 +2,7 @@ package validation
import (
"flag"
- "os"
"time"
-
- "github.com/cortexproject/cortex/pkg/util/validation"
-
- "gopkg.in/yaml.v2"
)
const (
@@ -97,159 +92,132 @@ func (l *Limits) UnmarshalYAML(unmarshal func(interface{}) error) error {
// find a nicer way I'm afraid.
var defaultLimits *Limits
+// SetDefaultLimitsForYAMLUnmarshalling sets global default limits, used when loading
+// Limits from YAML files. This is used to ensure per-tenant limits are defaulted to
+// those values.
+func SetDefaultLimitsForYAMLUnmarshalling(defaults Limits) {
+ defaultLimits = &defaults
+}
+
+// TenantLimits is a function that returns limits for a given tenant, or
+// nil if there are no tenant-specific limits.
+type TenantLimits func(userID string) *Limits
+
// Overrides periodically fetch a set of per-user overrides, and provides convenience
// functions for fetching the correct value.
type Overrides struct {
- overridesManager *validation.OverridesManager
+ defaultLimits *Limits
+ tenantLimits TenantLimits
}
// NewOverrides makes a new Overrides.
-// We store the supplied limits in a global variable to ensure per-tenant limits
-// are defaulted to those values. As such, the last call to NewOverrides will
-// become the new global defaults.
-func NewOverrides(defaults Limits) (*Overrides, error) {
- defaultLimits = &defaults
- overridesManagerConfig := validation.OverridesManagerConfig{
- OverridesReloadPeriod: defaults.PerTenantOverridePeriod,
- OverridesLoadPath: defaults.PerTenantOverrideConfig,
- OverridesLoader: loadOverrides,
- Defaults: &defaults,
- }
-
- overridesManager, err := validation.NewOverridesManager(overridesManagerConfig)
- if err != nil {
- return nil, err
- }
-
+func NewOverrides(defaults Limits, tenantLimits TenantLimits) (*Overrides, error) {
return &Overrides{
- overridesManager: overridesManager,
+ tenantLimits: tenantLimits,
+ defaultLimits: &defaults,
}, nil
}
-// Stop background reloading of overrides.
-func (o *Overrides) Stop() {
- o.overridesManager.Stop()
-}
-
// IngestionRateStrategy returns whether the ingestion rate limit should be individually applied
// to each distributor instance (local) or evenly shared across the cluster (global).
func (o *Overrides) IngestionRateStrategy() string {
// The ingestion rate strategy can't be overridden on a per-tenant basis,
// so here we just pick the value for a not-existing user ID (empty string).
- return o.overridesManager.GetLimits("").(*Limits).IngestionRateStrategy
+ return o.getOverridesForUser("").IngestionRateStrategy
}
// IngestionRateBytes returns the limit on ingester rate (MBs per second).
func (o *Overrides) IngestionRateBytes(userID string) float64 {
- return o.overridesManager.GetLimits(userID).(*Limits).IngestionRateMB * bytesInMB
+ return o.getOverridesForUser(userID).IngestionRateMB * bytesInMB
}
// IngestionBurstSizeBytes returns the burst size for ingestion rate.
func (o *Overrides) IngestionBurstSizeBytes(userID string) int {
- return int(o.overridesManager.GetLimits(userID).(*Limits).IngestionBurstSizeMB * bytesInMB)
+ return int(o.getOverridesForUser(userID).IngestionBurstSizeMB * bytesInMB)
}
// MaxLabelNameLength returns maximum length a label name can be.
func (o *Overrides) MaxLabelNameLength(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLabelNameLength
+ return o.getOverridesForUser(userID).MaxLabelNameLength
}
// MaxLabelValueLength returns maximum length a label value can be. This also is
// the maximum length of a metric name.
func (o *Overrides) MaxLabelValueLength(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLabelValueLength
+ return o.getOverridesForUser(userID).MaxLabelValueLength
}
// MaxLabelNamesPerSeries returns maximum number of label/value pairs timeseries.
func (o *Overrides) MaxLabelNamesPerSeries(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLabelNamesPerSeries
+ return o.getOverridesForUser(userID).MaxLabelNamesPerSeries
}
// RejectOldSamples returns true when we should reject samples older than certain
// age.
func (o *Overrides) RejectOldSamples(userID string) bool {
- return o.overridesManager.GetLimits(userID).(*Limits).RejectOldSamples
+ return o.getOverridesForUser(userID).RejectOldSamples
}
// RejectOldSamplesMaxAge returns the age at which samples should be rejected.
func (o *Overrides) RejectOldSamplesMaxAge(userID string) time.Duration {
- return o.overridesManager.GetLimits(userID).(*Limits).RejectOldSamplesMaxAge
+ return o.getOverridesForUser(userID).RejectOldSamplesMaxAge
}
// CreationGracePeriod is misnamed, and actually returns how far into the future
// we should accept samples.
func (o *Overrides) CreationGracePeriod(userID string) time.Duration {
- return o.overridesManager.GetLimits(userID).(*Limits).CreationGracePeriod
+ return o.getOverridesForUser(userID).CreationGracePeriod
}
// MaxLocalStreamsPerUser returns the maximum number of streams a user is allowed to store
// in a single ingester.
func (o *Overrides) MaxLocalStreamsPerUser(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLocalStreamsPerUser
+ return o.getOverridesForUser(userID).MaxLocalStreamsPerUser
}
// MaxGlobalStreamsPerUser returns the maximum number of streams a user is allowed to store
// across the cluster.
func (o *Overrides) MaxGlobalStreamsPerUser(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxGlobalStreamsPerUser
+ return o.getOverridesForUser(userID).MaxGlobalStreamsPerUser
}
// MaxChunksPerQuery returns the maximum number of chunks allowed per query.
func (o *Overrides) MaxChunksPerQuery(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxChunksPerQuery
+ return o.getOverridesForUser(userID).MaxChunksPerQuery
}
// MaxQueryLength returns the limit of the length (in time) of a query.
func (o *Overrides) MaxQueryLength(userID string) time.Duration {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxQueryLength
+ return o.getOverridesForUser(userID).MaxQueryLength
}
// MaxQueryParallelism returns the limit to the number of sub-queries the
// frontend will process in parallel.
func (o *Overrides) MaxQueryParallelism(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxQueryParallelism
+ return o.getOverridesForUser(userID).MaxQueryParallelism
}
// EnforceMetricName whether to enforce the presence of a metric name.
func (o *Overrides) EnforceMetricName(userID string) bool {
- return o.overridesManager.GetLimits(userID).(*Limits).EnforceMetricName
+ return o.getOverridesForUser(userID).EnforceMetricName
}
// CardinalityLimit whether to enforce the presence of a metric name.
func (o *Overrides) CardinalityLimit(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).CardinalityLimit
+ return o.getOverridesForUser(userID).CardinalityLimit
}
// MaxStreamsMatchersPerQuery returns the limit to number of streams matchers per query.
func (o *Overrides) MaxStreamsMatchersPerQuery(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxStreamsMatchersPerQuery
-}
-
-// Loads overrides and returns the limits as an interface to store them in OverridesManager.
-// We need to implement it here since OverridesManager must store type Limits in an interface but
-// it doesn't know its definition to initialize it.
-// We could have used yamlv3.Node for this but there is no way to enforce strict decoding due to a bug in it
-// TODO: Use yamlv3.Node to move this to OverridesManager after https://github.com/go-yaml/yaml/issues/460 is fixed
-func loadOverrides(filename string) (map[string]interface{}, error) {
- f, err := os.Open(filename)
- if err != nil {
- return nil, err
- }
-
- var overrides struct {
- Overrides map[string]*Limits `yaml:"overrides"`
- }
-
- decoder := yaml.NewDecoder(f)
- decoder.SetStrict(true)
- if err := decoder.Decode(&overrides); err != nil {
- return nil, err
- }
+ return o.getOverridesForUser(userID).MaxStreamsMatchersPerQuery
+}
- overridesAsInterface := map[string]interface{}{}
- for userID := range overrides.Overrides {
- overridesAsInterface[userID] = overrides.Overrides[userID]
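+// getOverridesForUser returns the limits for the given tenant from the
+// tenantLimits callback when one is configured, falling back to the defaults.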
+func (o *Overrides) getOverridesForUser(userID string) *Limits {
+ if o.tenantLimits != nil {
+ l := o.tenantLimits(userID)
+ if l != nil {
+ return l
+ }
}
-
- return overridesAsInterface, nil
+ return o.defaultLimits
}
diff --git a/vendor/cloud.google.com/go/CHANGES.md b/vendor/cloud.google.com/go/CHANGES.md
new file mode 100644
index 0000000000000..2f27e0fd023c6
--- /dev/null
+++ b/vendor/cloud.google.com/go/CHANGES.md
@@ -0,0 +1,1401 @@
+# Changes
+
+## v0.49.0
+
+- functions/metadata:
+ - Handle string resources in JSON unmarshaller.
+- Various updates to autogenerated clients.
+
+## v0.48.0
+
+- Various updates to autogenerated clients
+
+## v0.47.0
+
+This release drops support for Go 1.9 and Go 1.10: we continue to officially
+support Go 1.11, Go 1.12, and Go 1.13.
+
+- Various updates to autogenerated clients.
+- Add cloudbuild/apiv1 client.
+
+## v0.46.3
+
+This is an empty release that was created solely to aid in storage's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.46.2
+
+This is an empty release that was created solely to aid in spanner's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.46.1
+
+This is an empty release that was created solely to aid in firestore's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.46.0
+
+- spanner:
+ - Retry "Session not found" for read-only transactions.
+ - Retry aborted PDMLs.
+- spanner/spannertest:
+ - Fix a bug that was causing 0X-prefixed number to be parsed incorrectly.
+- storage:
+ - Add HMACKeyOptions.
+  - Remove *REGIONAL from StorageClass documentation. MULTI_REGIONAL,
+    DURABLE_REDUCED_AVAILABILITY, and REGIONAL are no longer best-practice
+    StorageClasses, but they are still acceptable values.
+- trace:
+ - Remove cloud.google.com/go/trace. Package cloud.google.com/go/trace has been
+ marked OBSOLETE for several years: it is now no longer provided. If you
+ relied on this package, please vendor it or switch to using
+ https://cloud.google.com/trace/docs/setup/go (which obsoleted it).
+
+## v0.45.1
+
+This is an empty release that was created solely to aid in pubsub's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.45.0
+
+- compute/metadata:
+ - Add Email method.
+- storage:
+ - Fix duplicated retry logic.
+ - Add ReaderObjectAttrs.StartOffset.
+ - Support reading last N bytes of a file when a negative range is given, such
+ as `obj.NewRangeReader(ctx, -10, -1)`.
+ - Add HMACKey listing functionality.
+- spanner/spannertest:
+ - Support primary keys with no columns.
+ - Fix MinInt64 parsing.
+ - Implement deletion of key ranges.
+ - Handle reads during a read-write transaction.
+ - Handle returning DATE values.
+- pubsub:
+ - Fix Ack/Modack request size calculation.
+- logging:
+ - Add auto-detection of monitored resources on GAE Standard.
+
+## v0.44.3
+
+This is an empty release that was created solely to aid in bigtable's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.44.2
+
+This is an empty release that was created solely to aid in bigquery's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.44.1
+
+This is an empty release that was created solely to aid in datastore's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.44.0
+
+- datastore:
+  - Interface elements whose underlying types are supported are now supported.
+ - Reduce time to initial retry from 1s to 100ms.
+- firestore:
+  - Add Increment transformation (a usage sketch follows this list).
+- storage:
+ - Allow emulator with STORAGE_EMULATOR_HOST.
+ - Add methods for HMAC key management.
+- pubsub:
+ - Add PublishCount and PublishLatency measurements.
+ - Add DefaultPublishViews and DefaultSubscribeViews for convenience of
+ importing all views.
+  - Add Subscription.PushConfig.AuthenticationMethod.
+- spanner:
+ - Allow emulator usage with SPANNER_EMULATOR_HOST.
+ - Add cloud.google.com/go/spanner/spannertest, a spanner emulator.
+ - Add cloud.google.com/go/spanner/spansql which contains types and a parser
+ for the Cloud Spanner SQL dialect.
+- asset:
+ - Add apiv1p2beta1 client.
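+
+A minimal sketch of the Increment transform in an update call; the project ID,
+collection, document, and field names below are placeholders:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+
+	"cloud.google.com/go/firestore"
+)
+
+func main() {
+	ctx := context.Background()
+	client, err := firestore.NewClient(ctx, "my-project") // placeholder project ID
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer client.Close()
+	// Atomically add 1 to the "count" field without a read-modify-write cycle.
+	_, err = client.Collection("counters").Doc("visits").Update(ctx, []firestore.Update{
+		{Path: "count", Value: firestore.Increment(1)},
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+```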
+
+## v0.43.0
+
+This is an empty release that was created solely to aid in logging's module
+carve-out. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
+
+## v0.42.0
+
+- bigtable:
+ - Add an admin method to update an instance and clusters.
+ - Fix bttest regex matching behavior for alternations (things like `|a`).
+ - Expose BlockAllFilter filter.
+- bigquery:
+ - Add Routines API support.
+- storage:
+ - Add read-only Bucket.LocationType.
+- logging:
+ - Add TraceSampled to Entry.
+ - Fix to properly extract {Trace, Span}Id from X-Cloud-Trace-Context.
+- pubsub:
+ - Add Cloud Key Management to TopicConfig.
+ - Change ExpirationPolicy to optional.Duration.
+- automl:
+ - Add apiv1beta1 client.
+- iam:
+ - Fix compilation problem with iam/credentials/apiv1.
+
+## v0.41.0
+
+- bigtable:
+ - Check results from PredicateFilter in bttest, which fixes certain false matches.
+- profiler:
+ - debugLog checks user defined logging options before logging.
+- spanner:
+ - PartitionedUpdates respect query parameters.
+ - StartInstance allows specifying cloud API access scopes.
+- bigquery:
+ - Use empty slice instead of nil for ValueSaver, fixing an issue with zero-length, repeated, nested fields causing panics.
+- firestore:
+ - Return same number of snapshots as doc refs (in the form of duplicate records) during GetAll.
+- replay:
+ - Change references to IPv4 addresses to localhost, making replay compatible with IPv6.
+
+## v0.40.0
+
+- all:
+ - Update to protobuf-golang v1.3.1.
+- datastore:
+ - Attempt to decode GAE-encoded keys if initial decoding attempt fails.
+ - Support integer time conversion.
+- pubsub:
+ - Add PublishSettings.BundlerByteLimit. If users receive pubsub.ErrOverflow,
+ this value should be adjusted higher.
+ - Use IPv6 compatible target in testutil.
+- bigtable:
+ - Fix Latin-1 regexp filters in bttest, allowing \C.
+ - Expose PassAllFilter.
+- profiler:
+ - Add log messages for slow path in start.
+ - Fix start to allow retry until success.
+- firestore:
+ - Add admin client.
+- containeranalysis:
+ - Add apiv1 client.
+- grafeas:
+ - Add apiv1 client.
+
+## 0.39.0
+
+- bigtable:
+ - Implement DeleteInstance in bttest.
+ - Return an error on invalid ReadRowsRequest.RowRange key ranges in bttest.
+- bigquery:
+  - Move RequirePartitionFilter outside of TimePartitioning.
+ - Expose models API.
+- firestore:
+ - Allow array values in create and update calls.
+ - Add CollectionGroup method.
+- pubsub:
+ - Add ExpirationPolicy to Subscription.
+- storage:
+ - Add V4 signing.
+- rpcreplay:
+ - Match streams by first sent request. This further improves rpcreplay's
+ ability to distinguish streams.
+- httpreplay:
+ - Set up Man-In-The-Middle config only once. This should improve proxy
+ creation when multiple proxies are used in a single process.
+ - Remove error on empty Content-Type, allowing requests with no Content-Type
+ header but a non-empty body.
+- all:
+ - Fix an edge case bug in auto-generated library pagination by properly
+ propagating pagetoken.
+
+## 0.38.0
+
+This update includes a substantial reduction in our transitive dependency list
+by way of updating to [email protected].
+
+- spanner:
+ - Error implements GRPCStatus, allowing status.Convert.
+- bigtable:
+ - Fix a bug in bttest that prevents single column queries returning results
+ that match other filters.
+ - Remove verbose retry logging.
+- logging:
+ - Ensure RequestUrl has proper UTF-8, removing the need for users to wrap and
+ rune replace manually.
+- recaptchaenterprise:
+ - Add v1beta1 client.
+- phishingprotection:
+ - Add v1beta1 client.
+
+## 0.37.4
+
+This patch release re-builds the go.sum. This was not possible in the
+previous release.
+
+- firestore:
+ - Add sentinel value DetectProjectID for auto-detecting project ID.
+ - Add OpenCensus tracing for public methods.
+ - Marked stable. All future changes come with a backwards compatibility
+ guarantee.
+ - Removed firestore/apiv1beta1. All users relying on this low-level library
+ should migrate to firestore/apiv1. Note that most users should use the
+ high-level firestore package instead.
+- pubsub:
+ - Allow large messages in synchronous pull case.
+ - Cap bundler byte limit. This should prevent OOM conditions when there are
+ a very large number of message publishes occurring.
+- storage:
+ - Add ETag to BucketAttrs and ObjectAttrs.
+- datastore:
+ - Removed some non-sensical OpenCensus traces.
+- webrisk:
+ - Add v1 client.
+- asset:
+ - Add v1 client.
+- cloudtasks:
+ - Add v2 client.
+
+## 0.37.3
+
+This patch release removes github.com/golang/lint from the transitive
+dependency list, resolving `go get -u` problems.
+
+Note: this release intentionally has a broken go.sum. Please use v0.37.4.
+
+## 0.37.2
+
+This patch release is mostly intended to bring in v0.3.0 of
+google.golang.org/api, which fixes a GCF deployment issue.
+
+Note: to date, we had accidentally marked Redis as stable. In this release, we've
+fixed that by downgrading its documentation to alpha, as it is in other languages
+and docs.
+
+- all:
+ - Document context in generated libraries.
+
+## 0.37.1
+
+Small go.mod version bumps to bring in v0.2.0 of google.golang.org/api, which
+introduces a new oauth2 url.
+
+## 0.37.0
+
+- spanner:
+ - Add BatchDML method.
+ - Reduced initial time between retries.
+- bigquery:
+ - Produce better error messages for InferSchema.
+ - Add logical type control for avro loads.
+ - Add support for the GEOGRAPHY type.
+- datastore:
+  - Add sentinel value DetectProjectID for auto-detecting project ID (usage sketched after this list).
+ - Allow flatten tag on struct pointers.
+ - Fixed a bug that caused queries to panic with invalid queries. Instead they
+ will now return an error.
+- profiler:
+ - Add ability to override GCE zone and instance.
+- pubsub:
+ - BEHAVIOR CHANGE: Refactor error code retry logic. RPCs should now more
+ consistently retry specific error codes based on whether they're idempotent
+ or non-idempotent.
+- httpreplay: Fixed a bug when a non-GET request had a zero-length body causing
+ the Content-Length header to be dropped.
+- iot:
+ - Add new apiv1 client.
+- securitycenter:
+ - Add new apiv1 client.
+- cloudscheduler:
+ - Add new apiv1 client.
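+
+A minimal sketch of the datastore DetectProjectID sentinel mentioned above:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+
+	"cloud.google.com/go/datastore"
+)
+
+func main() {
+	ctx := context.Background()
+	// DetectProjectID tells the client to resolve the project ID from the
+	// environment (for example, from the credentials or the metadata server).
+	client, err := datastore.NewClient(ctx, datastore.DetectProjectID)
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer client.Close()
+}
+```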
+
+## 0.36.0
+
+- spanner:
+ - Reduce minimum retry backoff from 1s to 100ms. This makes time between
+ retries much faster and should improve latency.
+- storage:
+ - Add support for Bucket Policy Only.
+- kms:
+ - Add ResourceIAM helper method.
+ - Deprecate KeyRingIAM and CryptoKeyIAM. Please use ResourceIAM.
+- firestore:
+ - Switch from v1beta1 API to v1 API.
+ - Allow emulator with FIRESTORE_EMULATOR_HOST.
+- bigquery:
+ - Add NumLongTermBytes to Table.
+ - Add TotalBytesProcessedAccuracy to QueryStatistics.
+- irm:
+ - Add new v1alpha2 client.
+- talent:
+ - Add new v4beta1 client.
+- rpcreplay:
+ - Fix connection to work with grpc >= 1.17.
+ - It is now required for an actual gRPC server to be running for Dial to
+ succeed.
+
+## 0.35.1
+
+- spanner:
+ - Adds OpenCensus views back to public API.
+
+## v0.35.0
+
+- all:
+ - Add go.mod and go.sum.
+ - Switch usage of gax-go to gax-go/v2.
+- bigquery:
+ - Fix bug where time partitioning could not be removed from a table.
+ - Fix panic that occurred with empty query parameters.
+- bttest:
+ - Fix bug where deleted rows were returned by ReadRows.
+- bigtable/emulator:
+ - Configure max message size to 256 MiB.
+- firestore:
+ - Allow non-transactional queries in transactions.
+ - Allow StartAt/EndBefore on direct children at any depth.
+ - QuerySnapshotIterator.Stop may be called in an error state.
+ - Fix bug the prevented reset of transaction write state in between retries.
+- functions/metadata:
+ - Make Metadata.Resource a pointer.
+- logging:
+ - Make SpanID available in logging.Entry.
+- metadata:
+ - Wrap !200 error code in a typed err.
+- profiler:
+ - Add function to check if function name is within a particular file in the
+ profile.
+ - Set parent field in create profile request.
+ - Return kubernetes client to start cluster, so client can be used to poll
+ cluster.
+ - Add function for checking if filename is in profile.
+- pubsub:
+ - Fix bug where messages expired without an initial modack in
+ synchronous=true mode.
+ - Receive does not retry ResourceExhausted errors.
+- spanner:
+ - client.Close now cancels existing requests and should be much faster for
+ large amounts of sessions.
+ - Correctly allow MinOpened sessions to be spun up.
+
+## v0.34.0
+
+- functions/metadata:
+ - Switch to using JSON in context.
+ - Make Resource a value.
+- vision: Fix ProductSearch return type.
+- datastore: Add an example for how to handle MultiError.
+
+## v0.33.1
+
+- compute: Removes an erroneously added go.mod.
+- logging: Populate source location in fromLogEntry.
+
+## v0.33.0
+
+- bttest:
+ - Add support for apply_label_transformer.
+- expr:
+ - Add expr library.
+- firestore:
+ - Support retrieval of missing documents.
+- kms:
+ - Add IAM methods.
+- pubsub:
+ - Clarify extension documentation.
+- scheduler:
+ - Add v1beta1 client.
+- vision:
+ - Add product search helper.
+ - Add new product search client.
+
+## v0.32.0
+
+Note: This release is the last to support Go 1.6 and 1.8.
+
+- bigquery:
+ - Add support for removing an expiration.
+ - Ignore NeverExpire in Table.Create.
+ - Validate table expiration time.
+- cbt:
+ - Add note about not supporting arbitrary bytes.
+- datastore:
+ - Align key checks.
+- firestore:
+ - Return an error when using Start/End without providing values.
+- pubsub:
+ - Add pstest Close method.
+ - Clarify MaxExtension documentation.
+- securitycenter:
+ - Add v1beta1 client.
+- spanner:
+ - Allow nil in mutations.
+ - Improve doc of SessionPoolConfig.MaxOpened.
+ - Increase session deletion timeout from 5s to 15s.
+
+## v0.31.0
+
+- bigtable:
+ - Group mutations across multiple requests.
+- bigquery:
+ - Link to bigquery troubleshooting errors page in bigquery.Error comment.
+- cbt:
+ - Fix go generate command.
+ - Document usage of both maxage + maxversions.
+- datastore:
+ - Passing nil keys results in ErrInvalidKey.
+- firestore:
+ - Clarify what Document.DataTo does with untouched struct fields.
+- profile:
+ - Validate service name in agent.
+- pubsub:
+ - Fix deadlock with pstest and ctx.Cancel.
+ - Fix a possible deadlock in pstest.
+- trace:
+ - Update doc URL with new fragment.
+
+Special thanks to @fastest963 for going above and beyond helping us to debug
+hard-to-reproduce Pub/Sub issues.
+
+## v0.30.0
+
+- spanner: DML support added. See https://godoc.org/cloud.google.com/go/spanner#hdr-DML_and_Partitioned_DML for more information.
+- bigtable: bttest supports row sample filter.
+- functions: metadata package added for accessing Cloud Functions resource metadata.
+
+## v0.29.0
+
+- bigtable:
+ - Add retry to all idempotent RPCs.
+ - cbt supports complex GC policies.
+ - Emulator supports arbitrary bytes in regex filters.
+- firestore: Add ArrayUnion and ArrayRemove.
+- logging: Add the ContextFunc option to supply the context used for
+ asynchronous RPCs.
+- profiler: Ignore NotDefinedError when fetching the instance name
+- pubsub:
+ - BEHAVIOR CHANGE: Receive doesn't retry if an RPC returns codes.Cancelled.
+  - BEHAVIOR CHANGE: Receive retries on Unavailable instead of returning.
+ - Fix deadlock.
+ - Restore Ack/Nack/Modacks metrics.
+ - Improve context handling in iterator.
+ - Implement synchronous mode for Receive.
+ - pstest: add Pull.
+- spanner: Add a metric for the number of sessions currently opened.
+- storage:
+ - Canceling the context releases all resources.
+ - Add additional RetentionPolicy attributes.
+- vision/apiv1: Add LocalizeObjects method.
+
+## v0.28.0
+
+- bigtable:
+ - Emulator returns Unimplemented for snapshot RPCs.
+- bigquery:
+ - Support zero-length repeated, nested fields.
+- cloud assets:
+ - Add v1beta client.
+- datastore:
+ - Don't nil out transaction ID on retry.
+- firestore:
+ - BREAKING CHANGE: When watching a query with Query.Snapshots, QuerySnapshotIterator.Next
+ returns a QuerySnapshot which contains read time, result size, change list and the DocumentIterator
+ (previously, QuerySnapshotIterator.Next returned just the DocumentIterator). See: https://godoc.org/cloud.google.com/go/firestore#Query.Snapshots.
+ - Add array-contains operator.
+- IAM:
+ - Add iam/credentials/apiv1 client.
+- pubsub:
+ - Canceling the context passed to Subscription.Receive causes Receive to return when
+ processing finishes on all messages currently in progress, even if new messages are arriving.
+- redis:
+ - Add redis/apiv1 client.
+- storage:
+ - Add Reader.Attrs.
+ - Deprecate several Reader getter methods: please use Reader.Attrs for these instead.
+ - Add ObjectHandle.Bucket and ObjectHandle.Object methods.
+
+## v0.27.0
+
+- bigquery:
+ - Allow modification of encryption configuration and partitioning options to a table via the Update call.
+ - Add a SchemaFromJSON function that converts a JSON table schema.
+- bigtable:
+ - Restore cbt count functionality.
+- containeranalysis:
+ - Add v1beta client.
+- spanner:
+ - Fix a case where an iterator might not be closed correctly.
+- storage:
+ - Add ServiceAccount method https://godoc.org/cloud.google.com/go/storage#Client.ServiceAccount.
+ - Add a method to Reader that returns the parsed value of the Last-Modified header.
+
+## v0.26.0
+
+- bigquery:
+ - Support filtering listed jobs by min/max creation time.
+ - Support data clustering (https://godoc.org/cloud.google.com/go/bigquery#Clustering).
+ - Include job creator email in Job struct.
+- bigtable:
+ - Add `RowSampleFilter`.
+ - emulator: BREAKING BEHAVIOR CHANGE: Regexps in row, family, column and value filters
+ must match the entire target string to succeed. Previously, the emulator was
+ succeeding on partial matches.
+ NOTE: As of this release, this change only affects the emulator when run
+ from this repo (bigtable/cmd/emulator/cbtemulator.go). The version launched
+ from `gcloud` will be updated in a subsequent `gcloud` release.
+- dataproc: Add apiv1beta2 client.
+- datastore: Save non-nil pointer fields on omitempty.
+- logging: populate Entry.Trace from the HTTP X-Cloud-Trace-Context header.
+- logging/logadmin: Support writer_identity and include_children.
+- pubsub:
+ - Support labels on topics and subscriptions.
+ - Support message storage policy for topics.
+ - Use the distribution of ack times to determine when to extend ack deadlines.
+ The only user-visible effect of this change should be that programs that
+ call only `Subscription.Receive` need no IAM permissions other than `Pub/Sub
+ Subscriber`.
+- storage:
+ - Support predefined ACLs.
+ - Support additional ACL fields other than Entity and Role.
+ - Support bucket websites.
+ - Support bucket logging.
+
+
+## v0.25.0
+
+- Added [Code of Conduct](https://github.com/googleapis/google-cloud-go/blob/master/CODE_OF_CONDUCT.md)
+- bigtable:
+ - cbt: Support a GC policy of "never".
+- errorreporting:
+ - Support User.
+ - Close now calls Flush.
+ - Use OnError (previously ignored).
+ - Pass through the RPC error as-is to OnError.
+- httpreplay: A tool for recording and replaying HTTP requests
+ (for the bigquery and storage clients in this repo).
+- kms: v1 client added
+- logging: add SourceLocation to Entry.
+- storage: improve CRC checking on read.
+
+## v0.24.0
+
+- bigquery: Support for the NUMERIC type.
+- bigtable:
+ - cbt: Optionally specify columns for read/lookup
+ - Support instance-level administration.
+- oslogin: New client for the OS Login API.
+- pubsub:
+ - The package is now stable. There will be no further breaking changes.
+ - Internal changes to improve Subscription.Receive behavior.
+- storage: Support updating bucket lifecycle config.
+- spanner: Support struct-typed parameter bindings.
+- texttospeech: New client for the Text-to-Speech API.
+
+## v0.23.0
+
+- bigquery: Add DDL stats to query statistics.
+- bigtable:
+ - cbt: Add cells-per-column limit for row lookup.
+ - cbt: Make it possible to combine read filters.
+- dlp: v2beta2 client removed. Use the v2 client instead.
+- firestore, spanner: Fix compilation errors due to protobuf changes.
+
+## v0.22.0
+
+- bigtable:
+ - cbt: Support cells per column limit for row read.
+ - bttest: Correctly handle empty RowSet.
+ - Fix ReadModifyWrite operation in emulator.
+ - Fix API path in GetCluster.
+
+- bigquery:
+ - BEHAVIOR CHANGE: Retry on 503 status code.
+ - Add dataset.DeleteWithContents.
+ - Add SchemaUpdateOptions for query jobs.
+ - Add Timeline to QueryStatistics.
+ - Add more stats to ExplainQueryStage.
+ - Support Parquet data format.
+
+- datastore:
+ - Support omitempty for times.
+
+- dlp:
+ - **BREAKING CHANGE:** Remove v1beta1 client. Please migrate to the v2 client,
+ which is now out of beta.
+ - Add v2 client.
+
+- firestore:
+ - BEHAVIOR CHANGE: Treat set({}, MergeAll) as valid.
+
+- iam:
+ - Support JWT signing via SignJwt callopt.
+
+- profiler:
+ - BEHAVIOR CHANGE: PollForSerialOutput returns an error when context.Done.
+ - BEHAVIOR CHANGE: Increase the initial backoff to 1 minute.
+ - Avoid returning empty serial port output.
+
+- pubsub:
+ - BEHAVIOR CHANGE: Don't backoff during next retryable error once stream is healthy.
+ - BEHAVIOR CHANGE: Don't backoff on EOF.
+ - pstest: Support Acknowledge and ModifyAckDeadline RPCs.
+
+- redis:
+ - Add v1 beta Redis client.
+
+- spanner:
+ - Support SessionLabels.
+
+- speech:
+ - Add api v1 beta1 client.
+
+- storage:
+ - BEHAVIOR CHANGE: Retry reads when retryable error occurs.
+ - Fix delete of object in requester-pays bucket.
+ - Support KMS integration.
+
+## v0.21.0
+
+- bigquery:
+ - Add OpenCensus tracing.
+
+- firestore:
+ - **BREAKING CHANGE:** If a document does not exist, return a DocumentSnapshot
+ whose Exists method returns false. DocumentRef.Get and Transaction.Get
+ return the non-nil DocumentSnapshot in addition to a NotFound error.
+ **DocumentRef.GetAll and Transaction.GetAll return a non-nil
+ DocumentSnapshot instead of nil.**
+  - Add DocumentIterator.Stop. **Call Stop whenever you are done with a
+    DocumentIterator** (see the sketch after this list).
+ - Added Query.Snapshots and DocumentRef.Snapshots, which provide realtime
+ notification of updates. See https://cloud.google.com/firestore/docs/query-data/listen.
+ - Canceling an RPC now always returns a grpc.Status with codes.Canceled.
+
+- spanner:
+ - Add `CommitTimestamp`, which supports inserting the commit timestamp of a
+ transaction into a column.
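+
+A minimal sketch of iterating query results with the new Stop method; the
+project ID and collection name are placeholders:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+
+	"cloud.google.com/go/firestore"
+	"google.golang.org/api/iterator"
+)
+
+func main() {
+	ctx := context.Background()
+	client, err := firestore.NewClient(ctx, "my-project") // placeholder project ID
+	if err != nil {
+		log.Fatal(err)
+	}
+	defer client.Close()
+	iter := client.Collection("cities").Documents(ctx)
+	defer iter.Stop() // release the iterator's resources when done
+	for {
+		doc, err := iter.Next()
+		if err == iterator.Done {
+			break
+		}
+		if err != nil {
+			log.Fatal(err)
+		}
+		fmt.Println(doc.Data())
+	}
+}
+```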
+
+## v0.20.0
+
+- bigquery: Support SchemaUpdateOptions for load jobs.
+
+- bigtable:
+ - Add SampleRowKeys.
+ - cbt: Support union, intersection GCPolicy.
+ - Retry admin RPCS.
+ - Add trace spans to retries.
+
+- datastore: Add OpenCensus tracing.
+
+- firestore:
+ - Fix queries involving Null and NaN.
+ - Allow Timestamp protobuffers for time values.
+
+- logging: Add a WriteTimeout option.
+
+- spanner: Support Batch API.
+
+- storage: Add OpenCensus tracing.
+
+## v0.19.0
+
+- bigquery:
+ - Support customer-managed encryption keys.
+
+- bigtable:
+ - Improved emulator support.
+ - Support GetCluster.
+
+- datastore:
+ - Add general mutations.
+ - Support pointer struct fields.
+ - Support transaction options.
+
+- firestore:
+ - Add Transaction.GetAll.
+ - Support document cursors.
+
+- logging:
+ - Support concurrent RPCs to the service.
+ - Support per-entry resources.
+
+- profiler:
+ - Add config options to disable heap and thread profiling.
+ - Read the project ID from $GOOGLE_CLOUD_PROJECT when it's set.
+
+- pubsub:
+ - BEHAVIOR CHANGE: Release flow control after ack/nack (instead of after the
+ callback returns).
+ - Add SubscriptionInProject.
+ - Add OpenCensus instrumentation for streaming pull.
+
+- storage:
+ - Support CORS.
+
+## v0.18.0
+
+- bigquery:
+ - Marked stable.
+ - Schema inference of nullable fields supported.
+ - Added TimePartitioning to QueryConfig.
+
+- firestore: Data provided to DocumentRef.Set with a Merge option can contain
+ Delete sentinels.
+
+- logging: Clients can accept parent resources other than projects.
+
+- pubsub:
+  - pubsub/pstest: A lightweight fake for pubsub. Experimental; feedback welcome.
+ - Support updating more subscription metadata: AckDeadline,
+ RetainAckedMessages and RetentionDuration.
+
+- oslogin/apiv1beta: New client for the Cloud OS Login API.
+
+- rpcreplay: A package for recording and replaying gRPC traffic.
+
+- spanner:
+ - Add a ReadWithOptions that supports a row limit, as well as an index.
+ - Support query plan and execution statistics.
+ - Added [OpenCensus](http://opencensus.io) support.
+
+- storage: Clarify checksum validation for gzipped files (it is not validated
+ when the file is served uncompressed).
+
+
+## v0.17.0
+
+- firestore BREAKING CHANGES:
+ - Remove UpdateMap and UpdateStruct; rename UpdatePaths to Update.
+ Change
+ `docref.UpdateMap(ctx, map[string]interface{}{"a.b", 1})`
+ to
+ `docref.Update(ctx, []firestore.Update{{Path: "a.b", Value: 1}})`
+
+ Change
+ `docref.UpdateStruct(ctx, []string{"Field"}, aStruct)`
+ to
+ `docref.Update(ctx, []firestore.Update{{Path: "Field", Value: aStruct.Field}})`
+ - Rename MergePaths to Merge; require args to be FieldPaths
+ - A value stored as an integer can be read into a floating-point field, and vice versa.
+- bigtable/cmd/cbt:
+ - Support deleting a column.
+ - Add regex option for row read.
+- spanner: Mark stable.
+- storage:
+ - Add Reader.ContentEncoding method.
+ - Fix handling of SignedURL headers.
+- bigquery:
+ - If Uploader.Put is called with no rows, it returns nil without making a
+ call.
+ - Schema inference supports the "nullable" option in struct tags for
+ non-required fields.
+ - TimePartitioning supports "Field".
+
+
+## v0.16.0
+
+- Other bigquery changes:
+ - `JobIterator.Next` returns `*Job`; removed `JobInfo` (BREAKING CHANGE).
+ - UseStandardSQL is deprecated; set UseLegacySQL to true if you need
+ Legacy SQL.
+ - Uploader.Put will generate a random insert ID if you do not provide one.
+ - Support time partitioning for load jobs.
+ - Support dry-run queries.
+ - A `Job` remembers its last retrieved status.
+ - Support retrieving job configuration.
+ - Support labels for jobs and tables.
+ - Support dataset access lists.
+ - Improve support for external data sources, including data from Bigtable and
+ Google Sheets, and tables with external data.
+ - Support updating a table's view configuration.
+ - Fix uploading civil times with nanoseconds.
+
+- storage:
+ - Support PubSub notifications.
+ - Support Requester Pays buckets.
+
+- profiler: Support goroutine and mutex profile types.
+
+## v0.15.0
+
+- firestore: beta release. See the
+ [announcement](https://firebase.googleblog.com/2017/10/introducing-cloud-firestore.html).
+
+- errorreporting: The existing package has been redesigned.
+
+- errors: This package has been removed. Use errorreporting.
+
+
+## v0.14.0
+
+- bigquery BREAKING CHANGES:
+ - Standard SQL is the default for queries and views.
+ - `Table.Create` takes `TableMetadata` as a second argument, instead of
+ options.
+ - `Dataset.Create` takes `DatasetMetadata` as a second argument.
+ - `DatasetMetadata` field `ID` renamed to `FullID`
+ - `TableMetadata` field `ID` renamed to `FullID`
+
+- Other bigquery changes:
+  - The client will append a random suffix to a provided job ID if you set
+    `AddJobIDSuffix` to true in a job config (sketched after this list).
+ - Listing jobs is supported.
+ - Better retry logic.
+
+- vision, language, speech: clients are now stable
+
+- monitoring: client is now beta
+
+- profiler:
+ - Rename InstanceName to Instance, ZoneName to Zone
+ - Auto-detect service name and version on AppEngine.
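+
+A minimal sketch of the job ID suffix option; the project ID, job ID prefix,
+and query are illustrative:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+
+	"cloud.google.com/go/bigquery"
+)
+
+func main() {
+	ctx := context.Background()
+	client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
+	if err != nil {
+		log.Fatal(err)
+	}
+	q := client.Query("SELECT 17")
+	q.JobID = "nightly-import" // caller-chosen prefix
+	q.AddJobIDSuffix = true    // the client appends a random suffix to keep job IDs unique
+	job, err := q.Run(ctx)
+	if err != nil {
+		log.Fatal(err)
+	}
+	log.Println("started job", job.ID())
+}
+```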
+
+## v0.13.0
+
+- bigquery: UseLegacySQL options for CreateTable and QueryConfig. Use these
+ options to continue using Legacy SQL after the client switches its default
+ to Standard SQL.
+
+- bigquery: Support for updating dataset labels.
+
+- bigquery: Set DatasetIterator.ProjectID to list datasets in a project other
+ than the client's. DatasetsInProject is no longer needed and is deprecated.
+
+- bigtable: Fail ListInstances when any zones fail.
+
+- spanner: support decoding of slices of basic types (e.g. []string, []int64,
+ etc.)
+
+- logging/logadmin: UpdateSink no longer creates a sink if it is missing
+ (actually a change to the underlying service, not the client)
+
+- profiler: Service and ServiceVersion replace Target in Config.
+
+## v0.12.0
+
+- pubsub: Subscription.Receive now uses streaming pull.
+
+- pubsub: add Client.TopicInProject to access topics in a different project
+ than the client.
+
+- errors: renamed errorreporting. The errors package will be removed shortly.
+
+- datastore: improved retry behavior.
+
+- bigquery: support updates to dataset metadata, with etags.
+
+- bigquery: add etag support to Table.Update (BREAKING: etag argument added).
+
+- bigquery: generate all job IDs on the client.
+
+- storage: support bucket lifecycle configurations.
+
+
+## v0.11.0
+
+- Clients for spanner, pubsub and video are now in beta.
+
+- New client for DLP.
+
+- spanner: performance and testing improvements.
+
+- storage: requester-pays buckets are supported.
+
+- storage, profiler, bigtable, bigquery: bug fixes and other minor improvements.
+
+- pubsub: bug fixes and other minor improvements
+
+## v0.10.0
+
+- pubsub: Subscription.ModifyPushConfig replaced with Subscription.Update.
+
+- pubsub: Subscription.Receive now runs concurrently for higher throughput.
+
+- vision: cloud.google.com/go/vision is deprecated. Use
+cloud.google.com/go/vision/apiv1 instead.
+
+- translation: now stable.
+
+- trace: several changes to the surface. See the link below.
+
+### Code changes required from v0.9.0
+
+- pubsub: Replace
+
+ ```
+ sub.ModifyPushConfig(ctx, pubsub.PushConfig{Endpoint: "https://example.com/push"})
+ ```
+
+ with
+
+ ```
+ sub.Update(ctx, pubsub.SubscriptionConfigToUpdate{
+ PushConfig: &pubsub.PushConfig{Endpoint: "https://example.com/push"},
+ })
+ ```
+
+- trace: traceGRPCServerInterceptor will be provided from *trace.Client.
+Given an initialized `*trace.Client` named `tc`, instead of
+
+ ```
+ s := grpc.NewServer(grpc.UnaryInterceptor(trace.GRPCServerInterceptor(tc)))
+ ```
+
+ write
+
+ ```
+ s := grpc.NewServer(grpc.UnaryInterceptor(tc.GRPCServerInterceptor()))
+ ```
+
+- trace: trace.GRPCClientInterceptor will also be provided from *trace.Client.
+Instead of
+
+ ```
+ conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(trace.GRPCClientInterceptor()))
+ ```
+
+ write
+
+ ```
+ conn, err := grpc.Dial(srv.Addr, grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor()))
+ ```
+
+- trace: We removed the deprecated `trace.EnableGRPCTracing`. Use the gRPC
+interceptor as a dial option as shown below when initializing Cloud package
+clients:
+
+ ```
+ c, err := pubsub.NewClient(ctx, "project-id", option.WithGRPCDialOption(grpc.WithUnaryInterceptor(tc.GRPCClientInterceptor())))
+ if err != nil {
+ ...
+ }
+ ```
+
+
+## v0.9.0
+
+- Breaking changes to some autogenerated clients.
+- rpcreplay package added.
+
+## v0.8.0
+
+- profiler package added.
+- storage:
+ - Retry Objects.Insert call.
+  - Add ProgressFunc to Writer.
+- pubsub: breaking changes:
+ - Publish is now asynchronous ([announcement](https://groups.google.com/d/topic/google-api-go-announce/aaqRDIQ3rvU/discussion)).
+ - Subscription.Pull replaced by Subscription.Receive, which takes a callback ([announcement](https://groups.google.com/d/topic/google-api-go-announce/8pt6oetAdKc/discussion)).
+ - Message.Done replaced with Message.Ack and Message.Nack.
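+
+A minimal sketch of the new callback-based flow; the project and subscription
+names are placeholders:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+
+	"cloud.google.com/go/pubsub"
+)
+
+func main() {
+	ctx := context.Background()
+	client, err := pubsub.NewClient(ctx, "my-project") // placeholder project ID
+	if err != nil {
+		log.Fatal(err)
+	}
+	sub := client.Subscription("my-sub") // placeholder subscription name
+	// Receive blocks, invoking the callback (possibly concurrently) until
+	// ctx is done or an unrecoverable error occurs.
+	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
+		log.Printf("got message: %s", m.Data)
+		m.Ack() // replaces the old m.Done(true); use m.Nack() to redeliver
+	})
+	if err != nil {
+		log.Fatal(err)
+	}
+}
+```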
+
+## v0.7.0
+
+- Release of a client library for Spanner. See
+the
+[blog
+post](https://cloudplatform.googleblog.com/2017/02/introducing-Cloud-Spanner-a-global-database-service-for-mission-critical-applications.html).
+Note that although the Spanner service is beta, the Go client library is alpha.
+
+## v0.6.0
+
+- Beta release of BigQuery, DataStore, Logging and Storage. See the
+[blog post](https://cloudplatform.googleblog.com/2016/12/announcing-new-google-cloud-client.html).
+
+- bigquery:
+ - struct support. Read a row directly into a struct with
+`RowIterator.Next`, and upload a row directly from a struct with `Uploader.Put`.
+You can also use field tags. See the [package documentation][cloud-bigquery-ref]
+for details.
+
+ - The `ValueList` type was removed. It is no longer necessary. Instead of
+ ```go
+ var v ValueList
+ ... it.Next(&v) ..
+ ```
+ use
+
+ ```go
+ var v []Value
+ ... it.Next(&v) ...
+ ```
+
+ - Previously, repeatedly calling `RowIterator.Next` on the same `[]Value` or
+ `ValueList` would append to the slice. Now each call resets the size to zero first.
+
+ - Schema inference will infer the SQL type BYTES for a struct field of
+ type []byte. Previously it inferred STRING.
+
+ - The types `uint`, `uint64` and `uintptr` are no longer supported in schema
+ inference. BigQuery's integer type is INT64, and those types may hold values
+ that are not correctly represented in a 64-bit signed integer.
+
+## v0.5.0
+
+- bigquery:
+ - The SQL types DATE, TIME and DATETIME are now supported. They correspond to
+ the `Date`, `Time` and `DateTime` types in the new `cloud.google.com/go/civil`
+ package.
+  - Support for query parameters (a usage sketch follows this list).
+ - Support deleting a dataset.
+ - Values from INTEGER columns will now be returned as int64, not int. This
+ will avoid errors arising from large values on 32-bit systems.
+- datastore:
+  - Nested Go structs are now encoded as Entity values, instead of a
+flattened list of the embedded struct's fields. This means that you may now have twice-nested slices, e.g.
+ ```go
+ type State struct {
+ Cities []struct{
+ Populations []int
+ }
+ }
+ ```
+ See [the announcement](https://groups.google.com/forum/#!topic/google-api-go-announce/79jtrdeuJAg) for
+more details.
+ - Contexts no longer hold namespaces; instead you must set a key's namespace
+ explicitly. Also, key functions have been changed and renamed.
+ - The WithNamespace function has been removed. To specify a namespace in a Query, use the Query.Namespace method:
+ ```go
+ q := datastore.NewQuery("Kind").Namespace("ns")
+ ```
+ - All the fields of Key are exported. That means you can construct any Key with a struct literal:
+ ```go
+ k := &Key{Kind: "Kind", ID: 37, Namespace: "ns"}
+ ```
+ - As a result of the above, the Key methods Kind, ID, d.Name, Parent, SetParent and Namespace have been removed.
+ - `NewIncompleteKey` has been removed, replaced by `IncompleteKey`. Replace
+ ```go
+ NewIncompleteKey(ctx, kind, parent)
+ ```
+ with
+ ```go
+ IncompleteKey(kind, parent)
+ ```
+ and if you do use namespaces, make sure you set the namespace on the returned key.
+ - `NewKey` has been removed, replaced by `NameKey` and `IDKey`. Replace
+ ```go
+ NewKey(ctx, kind, name, 0, parent)
+ NewKey(ctx, kind, "", id, parent)
+ ```
+ with
+ ```go
+ NameKey(kind, name, parent)
+ IDKey(kind, id, parent)
+ ```
+ and if you do use namespaces, make sure you set the namespace on the returned key.
+ - The `Done` variable has been removed. Replace `datastore.Done` with `iterator.Done`, from the package `google.golang.org/api/iterator`.
+ - The `Client.Close` method will have a return type of error. It will return the result of closing the underlying gRPC connection.
+ - See [the announcement](https://groups.google.com/forum/#!topic/google-api-go-announce/hqXtM_4Ix-0) for
+more details.
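+
+A minimal sketch of the new query parameter support; the project ID and query
+are illustrative:
+
+```go
+package main
+
+import (
+	"context"
+	"log"
+
+	"cloud.google.com/go/bigquery"
+)
+
+func main() {
+	ctx := context.Background()
+	client, err := bigquery.NewClient(ctx, "my-project") // placeholder project ID
+	if err != nil {
+		log.Fatal(err)
+	}
+	q := client.Query("SELECT word FROM corpus WHERE word = @w")
+	q.Parameters = []bigquery.QueryParameter{{Name: "w", Value: "hello"}}
+	it, err := q.Read(ctx)
+	if err != nil {
+		log.Fatal(err)
+	}
+	_ = it // iterate with it.Next, as shown in the v0.4.0 notes below
+}
+```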
+
+## v0.4.0
+
+- bigquery:
+  - `NewGCSReference` is now a function, not a method on `Client`.
+ - `Table.LoaderFrom` now accepts a `ReaderSource`, enabling
+ loading data into a table from a file or any `io.Reader`.
+ * Client.Table and Client.OpenTable have been removed.
+ Replace
+ ```go
+ client.OpenTable("project", "dataset", "table")
+ ```
+ with
+ ```go
+ client.DatasetInProject("project", "dataset").Table("table")
+ ```
+
+ * Client.CreateTable has been removed.
+ Replace
+ ```go
+ client.CreateTable(ctx, "project", "dataset", "table")
+ ```
+ with
+ ```go
+ client.DatasetInProject("project", "dataset").Table("table").Create(ctx)
+ ```
+
+  * Dataset.ListTables has been replaced with Dataset.Tables.
+ Replace
+ ```go
+ tables, err := ds.ListTables(ctx)
+ ```
+ with
+ ```go
+ it := ds.Tables(ctx)
+ for {
+ table, err := it.Next()
+ if err == iterator.Done {
+ break
+ }
+ if err != nil {
+ // TODO: Handle error.
+ }
+ // TODO: use table.
+ }
+ ```
+
+ * Client.Read has been replaced with Job.Read, Table.Read and Query.Read.
+ Replace
+ ```go
+ it, err := client.Read(ctx, job)
+ ```
+ with
+ ```go
+ it, err := job.Read(ctx)
+ ```
+ and similarly for reading from tables or queries.
+
+ * The iterator returned from the Read methods is now named RowIterator. Its
+ behavior is closer to the other iterators in these libraries. It no longer
+ supports the Schema method; see the next item.
+ Replace
+ ```go
+ for it.Next(ctx) {
+ var vals ValueList
+ if err := it.Get(&vals); err != nil {
+ // TODO: Handle error.
+ }
+ // TODO: use vals.
+ }
+ if err := it.Err(); err != nil {
+ // TODO: Handle error.
+ }
+ ```
+ with
+ ```
+ for {
+ var vals ValueList
+ err := it.Next(&vals)
+ if err == iterator.Done {
+ break
+ }
+ if err != nil {
+ // TODO: Handle error.
+ }
+ // TODO: use vals.
+ }
+ ```
+ Instead of the `RecordsPerRequest(n)` option, write
+ ```go
+ it.PageInfo().MaxSize = n
+ ```
+ Instead of the `StartIndex(i)` option, write
+ ```go
+ it.StartIndex = i
+ ```
+
+ * ValueLoader.Load now takes a Schema in addition to a slice of Values.
+ Replace
+ ```go
+ func (vl *myValueLoader) Load(v []bigquery.Value)
+ ```
+ with
+ ```go
+ func (vl *myValueLoader) Load(v []bigquery.Value, s bigquery.Schema)
+ ```
+
+
+  * Table.Patch is replaced by Table.Update.
+ Replace
+ ```go
+ p := table.Patch()
+ p.Description("new description")
+ metadata, err := p.Apply(ctx)
+ ```
+ with
+ ```go
+ metadata, err := table.Update(ctx, bigquery.TableMetadataToUpdate{
+ Description: "new description",
+ })
+ ```
+
+ * Client.Copy is replaced by separate methods for each of its four functions.
+ All options have been replaced by struct fields.
+
+ * To load data from Google Cloud Storage into a table, use Table.LoaderFrom.
+
+ Replace
+ ```go
+ client.Copy(ctx, table, gcsRef)
+ ```
+ with
+ ```go
+ table.LoaderFrom(gcsRef).Run(ctx)
+ ```
+ Instead of passing options to Copy, set fields on the Loader:
+ ```go
+ loader := table.LoaderFrom(gcsRef)
+ loader.WriteDisposition = bigquery.WriteTruncate
+ ```
+
+ * To extract data from a table into Google Cloud Storage, use
+ Table.ExtractorTo. Set fields on the returned Extractor instead of
+ passing options.
+
+ Replace
+ ```go
+ client.Copy(ctx, gcsRef, table)
+ ```
+ with
+ ```go
+ table.ExtractorTo(gcsRef).Run(ctx)
+ ```
+
+ * To copy data into a table from one or more other tables, use
+ Table.CopierFrom. Set fields on the returned Copier instead of passing options.
+
+ Replace
+ ```go
+ client.Copy(ctx, dstTable, srcTable)
+ ```
+ with
+ ```go
+    dstTable.CopierFrom(srcTable).Run(ctx)
+ ```
+
+ * To start a query job, create a Query and call its Run method. Set fields
+ on the query instead of passing options.
+
+ Replace
+ ```go
+ client.Copy(ctx, table, query)
+ ```
+ with
+ ```go
+ query.Run(ctx)
+ ```
+
+ * Table.NewUploader has been renamed to Table.Uploader. Instead of options,
+ configure an Uploader by setting its fields.
+ Replace
+ ```go
+ u := table.NewUploader(bigquery.UploadIgnoreUnknownValues())
+ ```
+ with
+ ```go
+    u := table.Uploader()
+ u.IgnoreUnknownValues = true
+ ```
+
+- pubsub: remove `pubsub.Done`. Use `iterator.Done` instead, where `iterator` is the package
+`google.golang.org/api/iterator`.
+
+## v0.3.0
+
+- storage:
+ * AdminClient replaced by methods on Client.
+ Replace
+ ```go
+ adminClient.CreateBucket(ctx, bucketName, attrs)
+ ```
+ with
+ ```go
+ client.Bucket(bucketName).Create(ctx, projectID, attrs)
+ ```
+
+ * BucketHandle.List replaced by BucketHandle.Objects.
+ Replace
+ ```go
+ for query != nil {
+ objs, err := bucket.List(d.ctx, query)
+ if err != nil { ... }
+ query = objs.Next
+ for _, obj := range objs.Results {
+ fmt.Println(obj)
+ }
+ }
+ ```
+ with
+ ```go
+ iter := bucket.Objects(d.ctx, query)
+ for {
+ obj, err := iter.Next()
+ if err == iterator.Done {
+ break
+ }
+ if err != nil { ... }
+ fmt.Println(obj)
+ }
+ ```
+ (The `iterator` package is at `google.golang.org/api/iterator`.)
+
+ Replace `Query.Cursor` with `ObjectIterator.PageInfo().Token`.
+
+ Replace `Query.MaxResults` with `ObjectIterator.PageInfo().MaxSize`.
+
+
+ * ObjectHandle.CopyTo replaced by ObjectHandle.CopierFrom.
+ Replace
+ ```go
+ attrs, err := src.CopyTo(ctx, dst, nil)
+ ```
+ with
+ ```go
+ attrs, err := dst.CopierFrom(src).Run(ctx)
+ ```
+
+ Replace
+ ```go
+    attrs, err := src.CopyTo(ctx, dst, &storage.ObjectAttrs{ContentType: "text/html"})
+ ```
+ with
+ ```go
+ c := dst.CopierFrom(src)
+    c.ContentType = "text/html"
+ attrs, err := c.Run(ctx)
+ ```
+
+ * ObjectHandle.ComposeFrom replaced by ObjectHandle.ComposerFrom.
+ Replace
+ ```go
+ attrs, err := dst.ComposeFrom(ctx, []*storage.ObjectHandle{src1, src2}, nil)
+ ```
+ with
+ ```go
+ attrs, err := dst.ComposerFrom(src1, src2).Run(ctx)
+ ```
+
+ * ObjectHandle.Update's ObjectAttrs argument replaced by ObjectAttrsToUpdate.
+ Replace
+ ```go
+    attrs, err := obj.Update(ctx, &storage.ObjectAttrs{ContentType: "text/html"})
+ ```
+ with
+ ```go
+    attrs, err := obj.Update(ctx, storage.ObjectAttrsToUpdate{ContentType: "text/html"})
+ ```
+
+ * ObjectHandle.WithConditions replaced by ObjectHandle.If.
+ Replace
+ ```go
+ obj.WithConditions(storage.Generation(gen), storage.IfMetaGenerationMatch(mgen))
+ ```
+ with
+ ```go
+ obj.Generation(gen).If(storage.Conditions{MetagenerationMatch: mgen})
+ ```
+
+ Replace
+ ```go
+ obj.WithConditions(storage.IfGenerationMatch(0))
+ ```
+ with
+ ```go
+ obj.If(storage.Conditions{DoesNotExist: true})
+ ```
+
+ * `storage.Done` replaced by `iterator.Done` (from package `google.golang.org/api/iterator`).
+
+- Package preview/logging deleted. Use logging instead.
+
+## v0.2.0
+
+- Logging client replaced with preview version (see below).
+
+- New clients for some of Google's Machine Learning APIs: Vision, Speech, and
+Natural Language.
+
+- Preview version of a new [Stackdriver Logging][cloud-logging] client in
+[`cloud.google.com/go/preview/logging`](https://godoc.org/cloud.google.com/go/preview/logging).
+This client uses gRPC as its transport layer, and supports log reading, sinks
+and metrics. It will replace the current client at `cloud.google.com/go/logging` shortly.
+
+
diff --git a/vendor/cloud.google.com/go/CODE_OF_CONDUCT.md b/vendor/cloud.google.com/go/CODE_OF_CONDUCT.md
new file mode 100644
index 0000000000000..8fd1bc9c22bde
--- /dev/null
+++ b/vendor/cloud.google.com/go/CODE_OF_CONDUCT.md
@@ -0,0 +1,44 @@
+# Contributor Code of Conduct
+
+As contributors and maintainers of this project,
+and in the interest of fostering an open and welcoming community,
+we pledge to respect all people who contribute through reporting issues,
+posting feature requests, updating documentation,
+submitting pull requests or patches, and other activities.
+
+We are committed to making participation in this project
+a harassment-free experience for everyone,
+regardless of level of experience, gender, gender identity and expression,
+sexual orientation, disability, personal appearance,
+body size, race, ethnicity, age, religion, or nationality.
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery
+* Personal attacks
+* Trolling or insulting/derogatory comments
+* Public or private harassment
+* Publishing other's private information,
+such as physical or electronic
+addresses, without explicit permission
+* Other unethical or unprofessional conduct.
+
+Project maintainers have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct.
+By adopting this Code of Conduct,
+project maintainers commit themselves to fairly and consistently
+applying these principles to every aspect of managing this project.
+Project maintainers who do not follow or enforce the Code of Conduct
+may be permanently removed from the project team.
+
+This code of conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community.
+
+Instances of abusive, harassing, or otherwise unacceptable behavior
+may be reported by opening an issue
+or contacting one or more of the project maintainers.
+
+This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
+available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
+
diff --git a/vendor/cloud.google.com/go/CONTRIBUTING.md b/vendor/cloud.google.com/go/CONTRIBUTING.md
new file mode 100644
index 0000000000000..fe82c7c38a011
--- /dev/null
+++ b/vendor/cloud.google.com/go/CONTRIBUTING.md
@@ -0,0 +1,306 @@
+# Contributing
+
+1. [Install Go](https://golang.org/dl/).
+ 1. Ensure that your `GOBIN` directory (by default `$(go env GOPATH)/bin`)
+ is in your `PATH`.
+ 1. Check it's working by running `go version`.
+        * If it doesn't work, check that the install location, usually
+ `/usr/local/go`, is on your `PATH`.
+
+1. Sign one of the
+[contributor license agreements](#contributor-license-agreements) below.
+
+1. Run `go get golang.org/x/review/git-codereview && go install golang.org/x/review/git-codereview`
+to install the code reviewing tool.
+
+ 1. Ensure it's working by running `git codereview` (check your `PATH` if
+ not).
+
+ 1. If you would like, you may want to set up aliases for `git-codereview`,
+ such that `git codereview change` becomes `git change`. See the
+ [godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
+
+ * Should you run into issues with the `git-codereview` tool, please note
+ that all error messages will assume that you have set up these aliases.
+
+1. Change to a directory of your choosing and clone the repo.
+
+ ```
+ cd ~/code
+ git clone https://code.googlesource.com/gocloud
+ ```
+
+ * If you have already checked out the source, make sure that the remote
+ `git` `origin` is https://code.googlesource.com/gocloud:
+
+ ```
+ git remote -v
+ # ...
+ git remote set-url origin https://code.googlesource.com/gocloud
+ ```
+
+ * The project uses [Go Modules](https://blog.golang.org/using-go-modules)
+ for dependency management See
+    for dependency management. See
+ work with modules.
+
+1. Change to the project directory:
+
+ ```
+ cd ~/code/gocloud
+ ```
+
+1. Make sure your `git` auth is configured correctly by visiting
+https://code.googlesource.com, clicking "Generate Password" at the top-right,
+and following the directions. Otherwise, `git codereview mail` in the next step
+will fail.
+
+1. Now you are ready to make changes. Don't create a new branch or make commits in the traditional
+way. Use the following `git codereview` commands to create a commit and create a Gerrit CL:
+
+ ```
+ git codereview change <branch-name> # Use this instead of git checkout -b <branch-name>
+ # Make changes.
+ git add ...
+ git codereview change # Use this instead of git commit
+ git codereview mail # If this fails, the error message will contain instructions to fix it.
+ ```
+
+ * This will create a new `git` branch for you to develop on. Once your
+ change is merged, you can delete this branch.
+
+1. As you make changes for code review, amend the commit and re-mail the
+change:
+
+ ```
+ # Make more changes.
+ git add ...
+ git codereview change
+ git codereview mail
+ ```
+
+ * **Warning**: do not change the `Change-Id` at the bottom of the commit
+ message - it's how Gerrit knows which change this is (or if it's new).
+
+   * When you fix issues from code review, respond to each code review
+   message, then click **Reply** at the top of the page.
+
+ * Each new mailed amendment will create a new patch set for
+ your change in Gerrit. Patch sets can be compared and reviewed.
+
+ * **Note**: if your change includes a breaking change, our breaking change
+ detector will cause CI/CD to fail. If your breaking change is acceptable
+ in some way, add a `BREAKING_CHANGE_ACCEPTABLE=<reason>` line to the commit
+ message to cause the detector not to be run and to make it clear why that is
+ acceptable.
+
+1. Finally, add reviewers to your CL when it's ready for review. Reviewers will
+not be added automatically. If you're not sure who to add for your code review,
+add deklerk@, tbp@, cbro@, and codyoss@.
+
+
+## Integration Tests
+
+In addition to the unit tests, you may run the integration test suite. These
+directions describe setting up your environment to run integration tests for
+_all_ packages: note that many of these instructions may be redundant if you
+intend only to run integration tests on a single package.
+
+#### GCP Setup
+
+To run the integration tests, creation and configuration of two projects in
+the Google Developers Console is required: one specifically for Firestore
+integration tests, and another for all other integration tests. We'll refer to
+these projects as "general project" and "Firestore project".
+
+After creating each project, you must [create a service account](https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount)
+for each project. Ensure the project-level **Owner**
+[IAM role](https://console.cloud.google.com/iam-admin/iam/project) is added to
+each service account. During the creation of the service account, you should
+download the JSON credential file for use later.
+
+Next, ensure the following APIs are enabled in the general project:
+
+- BigQuery API
+- BigQuery Data Transfer API
+- Cloud Dataproc API
+- Cloud Dataproc Control API Private
+- Cloud Datastore API
+- Cloud Firestore API
+- Cloud Key Management Service (KMS) API
+- Cloud Natural Language API
+- Cloud OS Login API
+- Cloud Pub/Sub API
+- Cloud Resource Manager API
+- Cloud Spanner API
+- Cloud Speech API
+- Cloud Translation API
+- Cloud Video Intelligence API
+- Cloud Vision API
+- Compute Engine API
+- Compute Engine Instance Group Manager API
+- Container Registry API
+- Firebase Rules API
+- Google Cloud APIs
+- Google Cloud Deployment Manager V2 API
+- Google Cloud SQL
+- Google Cloud Storage
+- Google Cloud Storage JSON API
+- Google Compute Engine Instance Group Updater API
+- Google Compute Engine Instance Groups API
+- Kubernetes Engine API
+- Stackdriver Error Reporting API
+
+Next, create a Datastore database in the general project, and a Firestore
+database in the Firestore project.
+
+Finally, in the general project, create an API key for the translate API:
+
+- Go to GCP Developer Console.
+- Navigate to APIs & Services > Credentials.
+- Click Create Credentials > API Key.
+- Save this key for use in `GCLOUD_TESTS_API_KEY` as described below.
+
+#### Local Setup
+
+Once the two projects are created and configured, set the following environment
+variables:
+
+- `GCLOUD_TESTS_GOLANG_PROJECT_ID`: Developers Console project's ID (e.g.
+bamboo-shift-455) for the general project.
+- `GCLOUD_TESTS_GOLANG_KEY`: The path to the JSON key file of the general
+project's service account.
+- `GCLOUD_TESTS_GOLANG_FIRESTORE_PROJECT_ID`: Developers Console project's ID
+(e.g. doorway-cliff-677) for the Firestore project.
+- `GCLOUD_TESTS_GOLANG_FIRESTORE_KEY`: The path to the JSON key file of the
+Firestore project's service account.
+- `GCLOUD_TESTS_GOLANG_KEYRING`: The full name of the keyring for the tests,
+in the form
+"projects/P/locations/L/keyRings/R". The creation of this is described below.
+- `GCLOUD_TESTS_API_KEY`: API key for using the Translate API.
+- `GCLOUD_TESTS_GOLANG_ZONE`: Compute Engine zone.
+
+Install the [gcloud command-line tool][gcloudcli] to your machine and use it to
+create some resources used in integration tests.
+
+From the project's root directory:
+
+``` sh
+# Sets the default project in your env.
+$ gcloud config set project $GCLOUD_TESTS_GOLANG_PROJECT_ID
+
+# Authenticates the gcloud tool with your account.
+$ gcloud auth login
+
+# Create the indexes used in the datastore integration tests.
+$ gcloud datastore indexes create datastore/testdata/index.yaml
+
+# Creates a Google Cloud storage bucket with the same name as your test project,
+# and with the Stackdriver Logging service account as owner, for the sink
+# integration tests in logging.
+$ gsutil mb gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
+$ gsutil acl ch -g [email protected]:O gs://$GCLOUD_TESTS_GOLANG_PROJECT_ID
+
+# Creates a PubSub topic for integration tests of storage notifications.
+$ gcloud beta pubsub topics create go-storage-notification-test
+# Next, go to the Pub/Sub dashboard in GCP console. Authorize the user
+# "service-<numberic project id>@gs-project-accounts.iam.gserviceaccount.com"
+# as a publisher to that topic.
+
+# Creates a Spanner instance for the spanner integration tests.
+$ gcloud beta spanner instances create go-integration-test --config regional-us-central1 --nodes 10 --description 'Instance for go client test'
+# NOTE: Spanner instances are priced by the node-hour, so you may want to
+# delete the instance after testing with 'gcloud beta spanner instances delete'.
+
+$ export MY_KEYRING=some-keyring-name
+$ export MY_LOCATION=global
+# Creates a KMS keyring, in the same location as the default location for your
+# project's buckets.
+$ gcloud kms keyrings create $MY_KEYRING --location $MY_LOCATION
+# Creates two keys in the keyring, named key1 and key2.
+$ gcloud kms keys create key1 --keyring $MY_KEYRING --location $MY_LOCATION --purpose encryption
+$ gcloud kms keys create key2 --keyring $MY_KEYRING --location $MY_LOCATION --purpose encryption
+# Sets the GCLOUD_TESTS_GOLANG_KEYRING environment variable.
+$ export GCLOUD_TESTS_GOLANG_KEYRING=projects/$GCLOUD_TESTS_GOLANG_PROJECT_ID/locations/$MY_LOCATION/keyRings/$MY_KEYRING
+# Authorizes Google Cloud Storage to encrypt and decrypt using key1.
+gsutil kms authorize -p $GCLOUD_TESTS_GOLANG_PROJECT_ID -k $GCLOUD_TESTS_GOLANG_KEYRING/cryptoKeys/key1
+```
+
+#### Running
+
+Once you've done the necessary setup, you can run the integration tests by
+running:
+
+``` sh
+$ go test -v cloud.google.com/go/...
+```
+
+#### Replay
+
+Some packages can record the RPCs during integration tests to a file for
+subsequent replay. To record, pass the `-record` flag to `go test`. The
+recording will be saved to the _package_`.replay` file. To replay integration
+tests from a saved recording, the replay file must be present, the `-short`
+flag must be passed to `go test`, and the `GCLOUD_TESTS_GOLANG_ENABLE_REPLAY`
+environment variable must have a non-empty value.
+
+## Contributor License Agreements
+
+Before we can accept your pull requests you'll need to sign a Contributor
+License Agreement (CLA):
+
+- **If you are an individual writing original source code** and **you own the
+intellectual property**, then you'll need to sign an [individual CLA][indvcla].
+- **If you work for a company that wants to allow you to contribute your
+work**, then you'll need to sign a [corporate CLA][corpcla].
+
+You can sign these electronically (just scroll to the bottom). After that,
+we'll be able to accept your pull requests.
+
+## Contributor Code of Conduct
+
+As contributors and maintainers of this project,
+and in the interest of fostering an open and welcoming community,
+we pledge to respect all people who contribute through reporting issues,
+posting feature requests, updating documentation,
+submitting pull requests or patches, and other activities.
+
+We are committed to making participation in this project
+a harassment-free experience for everyone,
+regardless of level of experience, gender, gender identity and expression,
+sexual orientation, disability, personal appearance,
+body size, race, ethnicity, age, religion, or nationality.
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery
+* Personal attacks
+* Trolling or insulting/derogatory comments
+* Public or private harassment
+* Publishing others' private information,
+such as physical or electronic addresses,
+without explicit permission
+* Other unethical or unprofessional conduct.
+
+Project maintainers have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct.
+By adopting this Code of Conduct,
+project maintainers commit themselves to fairly and consistently
+applying these principles to every aspect of managing this project.
+Project maintainers who do not follow or enforce the Code of Conduct
+may be permanently removed from the project team.
+
+This code of conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community.
+
+Instances of abusive, harassing, or otherwise unacceptable behavior
+may be reported by opening an issue
+or contacting one or more of the project maintainers.
+
+This Code of Conduct is adapted from the [Contributor Covenant](http://contributor-covenant.org), version 1.2.0,
+available at [http://contributor-covenant.org/version/1/2/0/](http://contributor-covenant.org/version/1/2/0/)
+
+[gcloudcli]: https://developers.google.com/cloud/sdk/gcloud/
+[indvcla]: https://developers.google.com/open-source/cla/individual
+[corpcla]: https://developers.google.com/open-source/cla/corporate
diff --git a/vendor/cloud.google.com/go/README.md b/vendor/cloud.google.com/go/README.md
new file mode 100644
index 0000000000000..49345c4e1ef36
--- /dev/null
+++ b/vendor/cloud.google.com/go/README.md
@@ -0,0 +1,179 @@
+# Google Cloud Client Libraries for Go
+
+[](https://godoc.org/cloud.google.com/go)
+
+Go packages for [Google Cloud Platform](https://cloud.google.com) services.
+
+``` go
+import "cloud.google.com/go"
+```
+
+To install the packages on your system, *do not clone the repo*. Instead:
+
+1. Change to your project directory:
+
+ ```
+ cd /my/cloud/project
+ ```
+1. Get the package you want to use. Some products have their own module, so it's
+ best to `go get` the package(s) you want to use:
+
+ ```
+ $ go get cloud.google.com/go/firestore # Replace with the package you want to use.
+ ```
+
+**NOTE:** Some of these packages are under development, and may occasionally
+make backwards-incompatible changes.
+
+**NOTE:** The GitHub repo is a mirror of [https://code.googlesource.com/gocloud](https://code.googlesource.com/gocloud).
+
+## Supported APIs
+
+Google API | Status | Package
+------------------------------------------------|--------------|-----------------------------------------------------------
+[Asset][cloud-asset] | alpha | [`cloud.google.com/go/asset/v1beta`](https://godoc.org/cloud.google.com/go/asset/v1beta)
+[Automl][cloud-automl] | stable | [`cloud.google.com/go/automl/apiv1`](https://godoc.org/cloud.google.com/go/automl/apiv1)
+[BigQuery][cloud-bigquery] | stable | [`cloud.google.com/go/bigquery`](https://godoc.org/cloud.google.com/go/bigquery)
+[Bigtable][cloud-bigtable] | stable | [`cloud.google.com/go/bigtable`](https://godoc.org/cloud.google.com/go/bigtable)
+[Cloudbuild][cloud-build] | alpha | [`cloud.google.com/go/cloudbuild/apiv1`](https://godoc.org/cloud.google.com/go/cloudbuild/apiv1)
+[Cloudtasks][cloud-tasks] | stable | [`cloud.google.com/go/cloudtasks/apiv2`](https://godoc.org/cloud.google.com/go/cloudtasks/apiv2)
+[Container][cloud-container] | stable | [`cloud.google.com/go/container/apiv1`](https://godoc.org/cloud.google.com/go/container/apiv1)
+[ContainerAnalysis][cloud-containeranalysis] | beta | [`cloud.google.com/go/containeranalysis/apiv1`](https://godoc.org/cloud.google.com/go/containeranalysis/apiv1)
+[Dataproc][cloud-dataproc] | stable | [`cloud.google.com/go/dataproc/apiv1`](https://godoc.org/cloud.google.com/go/dataproc/apiv1)
+[Datastore][cloud-datastore] | stable | [`cloud.google.com/go/datastore`](https://godoc.org/cloud.google.com/go/datastore)
+[Debugger][cloud-debugger] | alpha | [`cloud.google.com/go/debugger/apiv2`](https://godoc.org/cloud.google.com/go/debugger/apiv2)
+[Dialogflow][cloud-dialogflow] | alpha | [`cloud.google.com/go/dialogflow/apiv2`](https://godoc.org/cloud.google.com/go/dialogflow/apiv2)
+[Data Loss Prevention][cloud-dlp] | alpha | [`cloud.google.com/go/dlp/apiv2`](https://godoc.org/cloud.google.com/go/dlp/apiv2)
+[ErrorReporting][cloud-errors] | alpha | [`cloud.google.com/go/errorreporting`](https://godoc.org/cloud.google.com/go/errorreporting)
+[Firestore][cloud-firestore] | stable | [`cloud.google.com/go/firestore`](https://godoc.org/cloud.google.com/go/firestore)
+[IAM][cloud-iam] | stable | [`cloud.google.com/go/iam`](https://godoc.org/cloud.google.com/go/iam)
+[IoT][cloud-iot] | alpha | [`cloud.google.com/iot/apiv1`](https://godoc.org/cloud.google.com/iot/apiv1)
+[IRM][cloud-irm] | alpha | [`cloud.google.com/irm/apiv1alpha2`](https://godoc.org/cloud.google.com/irm/apiv1alpha2)
+[KMS][cloud-kms] | stable | [`cloud.google.com/go/kms`](https://godoc.org/cloud.google.com/go/kms)
+[Natural Language][cloud-natural-language] | stable | [`cloud.google.com/go/language/apiv1`](https://godoc.org/cloud.google.com/go/language/apiv1)
+[Logging][cloud-logging] | stable | [`cloud.google.com/go/logging`](https://godoc.org/cloud.google.com/go/logging)
+[Memorystore][cloud-memorystore] | alpha | [`cloud.google.com/go/redis/apiv1`](https://godoc.org/cloud.google.com/go/redis/apiv1)
+[Monitoring][cloud-monitoring] | alpha | [`cloud.google.com/go/monitoring/apiv3`](https://godoc.org/cloud.google.com/go/monitoring/apiv3)
+[OS Login][cloud-oslogin] | alpha | [`cloud.google.com/go/oslogin/apiv1`](https://godoc.org/cloud.google.com/go/oslogin/apiv1)
+[Pub/Sub][cloud-pubsub] | stable | [`cloud.google.com/go/pubsub`](https://godoc.org/cloud.google.com/go/pubsub)
+[Phishing Protection][cloud-phishingprotection] | alpha | [`cloud.google.com/go/phishingprotection/apiv1beta1`](https://godoc.org/cloud.google.com/go/phishingprotection/apiv1beta1)
+[reCAPTCHA Enterprise][cloud-recaptcha] | alpha | [`cloud.google.com/go/recaptchaenterprise/apiv1beta1`](https://godoc.org/cloud.google.com/go/recaptchaenterprise/apiv1beta1)
+[Recommender][cloud-recommender] | beta | [`cloud.google.com/go/recommender/apiv1beta1`](https://godoc.org/cloud.google.com/go/recommender/apiv1beta1)
+[Scheduler][cloud-scheduler] | stable | [`cloud.google.com/go/scheduler/apiv1`](https://godoc.org/cloud.google.com/go/scheduler/apiv1)
+[Securitycenter][cloud-securitycenter] | alpha | [`cloud.google.com/go/securitycenter/apiv1`](https://godoc.org/cloud.google.com/go/securitycenter/apiv1)
+[Spanner][cloud-spanner] | stable | [`cloud.google.com/go/spanner`](https://godoc.org/cloud.google.com/go/spanner)
+[Speech][cloud-speech] | stable | [`cloud.google.com/go/speech/apiv1`](https://godoc.org/cloud.google.com/go/speech/apiv1)
+[Storage][cloud-storage] | stable | [`cloud.google.com/go/storage`](https://godoc.org/cloud.google.com/go/storage)
+[Talent][cloud-talent] | alpha | [`cloud.google.com/go/talent/apiv4beta1`](https://godoc.org/cloud.google.com/go/talent/apiv4beta1)
+[Text To Speech][cloud-texttospeech] | alpha | [`cloud.google.com/go/texttospeech/apiv1`](https://godoc.org/cloud.google.com/go/texttospeech/apiv1)
+[Trace][cloud-trace] | alpha | [`cloud.google.com/go/trace/apiv2`](https://godoc.org/cloud.google.com/go/trace/apiv2)
+[Translate][cloud-translate] | stable | [`cloud.google.com/go/translate`](https://godoc.org/cloud.google.com/go/translate)
+[Video Intelligence][cloud-video] | alpha | [`cloud.google.com/go/videointelligence/apiv1beta1`](https://godoc.org/cloud.google.com/go/videointelligence/apiv1beta1)
+[Vision][cloud-vision] | stable | [`cloud.google.com/go/vision/apiv1`](https://godoc.org/cloud.google.com/go/vision/apiv1)
+[Webrisk][cloud-webrisk] | alpha | [`cloud.google.com/go/webrisk/apiv1beta1`](https://godoc.org/cloud.google.com/go/webrisk/apiv1beta1)
+
+> **Alpha status**: the API is still being actively developed. As a
+> result, it might change in backward-incompatible ways and is not recommended
+> for production use.
+>
+> **Beta status**: the API is largely complete, but still has outstanding
+> features and bugs to be addressed. There may be minor backwards-incompatible
+> changes where necessary.
+>
+> **Stable status**: the API is mature and ready for production use. We will
+> continue addressing bugs and feature requests.
+
+Documentation and examples are available at [godoc.org/cloud.google.com/go](https://godoc.org/cloud.google.com/go)
+
+## Go Versions Supported
+
+We support the two most recent major versions of Go. If Google App Engine uses
+an older version, we support that as well.
+
+## Authorization
+
+By default, each API will use [Google Application Default Credentials](https://developers.google.com/identity/protocols/application-default-credentials)
+for authorization credentials used in calling the API endpoints. This will allow your
+application to run in many environments without requiring explicit configuration.
+
+[snip]:# (auth)
+```go
+client, err := storage.NewClient(ctx)
+```
+
+To authorize using a
+[JSON key file](https://cloud.google.com/iam/docs/managing-service-account-keys),
+pass
+[`option.WithCredentialsFile`](https://godoc.org/google.golang.org/api/option#WithCredentialsFile)
+to the `NewClient` function of the desired package. For example:
+
+[snip]:# (auth-JSON)
+```go
+client, err := storage.NewClient(ctx, option.WithCredentialsFile("path/to/keyfile.json"))
+```
+
+You can exert more control over authorization by using the
+[`golang.org/x/oauth2`](https://godoc.org/golang.org/x/oauth2) package to
+create an `oauth2.TokenSource`. Then pass
+[`option.WithTokenSource`](https://godoc.org/google.golang.org/api/option#WithTokenSource)
+to the `NewClient` function:
+
+[snip]:# (auth-ts)
+```go
+tokenSource := ...
+client, err := storage.NewClient(ctx, option.WithTokenSource(tokenSource))
+```
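+
+Putting the default-credentials path together, a minimal end-to-end sketch
+(the bucket name is a placeholder) might look like:
+
+``` go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+
+	"cloud.google.com/go/storage"
+)
+
+func main() {
+	ctx := context.Background()
+
+	// Uses Application Default Credentials; no explicit configuration is
+	// needed on GCP or after `gcloud auth application-default login`.
+	client, err := storage.NewClient(ctx)
+	if err != nil {
+		log.Fatalf("storage.NewClient: %v", err)
+	}
+	defer client.Close()
+
+	// "my-bucket" is a placeholder; substitute a bucket you own.
+	attrs, err := client.Bucket("my-bucket").Attrs(ctx)
+	if err != nil {
+		log.Fatalf("Bucket.Attrs: %v", err)
+	}
+	fmt.Println(attrs.Name, attrs.Location)
+}
+```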
+
+## Contributing
+
+Contributions are welcome. Please see the
+[CONTRIBUTING](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CONTRIBUTING.md)
+document for details. We use Gerrit for our code reviews. Please don't open pull
+requests against this repo; new pull requests will be automatically closed.
+
+Please note that this project is released with a Contributor Code of Conduct.
+By participating in this project you agree to abide by its terms.
+See [Contributor Code of Conduct](https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/CONTRIBUTING.md#contributor-code-of-conduct)
+for more information.
+
+[cloud-asset]: https://cloud.google.com/security-command-center/docs/how-to-asset-inventory
+[cloud-automl]: https://cloud.google.com/automl
+[cloud-build]: https://cloud.google.com/cloud-build/
+[cloud-bigquery]: https://cloud.google.com/bigquery/
+[cloud-bigtable]: https://cloud.google.com/bigtable/
+[cloud-container]: https://cloud.google.com/containers/
+[cloud-containeranalysis]: https://cloud.google.com/container-registry/docs/container-analysis
+[cloud-dataproc]: https://cloud.google.com/dataproc/
+[cloud-datastore]: https://cloud.google.com/datastore/
+[cloud-dialogflow]: https://cloud.google.com/dialogflow-enterprise/
+[cloud-debugger]: https://cloud.google.com/debugger/
+[cloud-dlp]: https://cloud.google.com/dlp/
+[cloud-errors]: https://cloud.google.com/error-reporting/
+[cloud-firestore]: https://cloud.google.com/firestore/
+[cloud-iam]: https://cloud.google.com/iam/
+[cloud-iot]: https://cloud.google.com/iot-core/
+[cloud-irm]: https://cloud.google.com/incident-response/docs/concepts
+[cloud-kms]: https://cloud.google.com/kms/
+[cloud-pubsub]: https://cloud.google.com/pubsub/
+[cloud-storage]: https://cloud.google.com/storage/
+[cloud-language]: https://cloud.google.com/natural-language
+[cloud-logging]: https://cloud.google.com/logging/
+[cloud-natural-language]: https://cloud.google.com/natural-language/
+[cloud-memorystore]: https://cloud.google.com/memorystore/
+[cloud-monitoring]: https://cloud.google.com/monitoring/
+[cloud-oslogin]: https://cloud.google.com/compute/docs/oslogin/rest
+[cloud-phishingprotection]: https://cloud.google.com/phishing-protection/
+[cloud-securitycenter]: https://cloud.google.com/security-command-center/
+[cloud-scheduler]: https://cloud.google.com/scheduler
+[cloud-spanner]: https://cloud.google.com/spanner/
+[cloud-speech]: https://cloud.google.com/speech
+[cloud-talent]: https://cloud.google.com/solutions/talent-solution/
+[cloud-tasks]: https://cloud.google.com/tasks/
+[cloud-texttospeech]: https://cloud.google.com/texttospeech/
+[cloud-trace]: https://cloud.google.com/trace/
+[cloud-translate]: https://cloud.google.com/translate
+[cloud-recaptcha]: https://cloud.google.com/recaptcha-enterprise/
+[cloud-recommender]: https://cloud.google.com/recommendations/
+[cloud-video]: https://cloud.google.com/video-intelligence/
+[cloud-vision]: https://cloud.google.com/vision
+[cloud-webrisk]: https://cloud.google.com/web-risk/
diff --git a/vendor/cloud.google.com/go/RELEASING.md b/vendor/cloud.google.com/go/RELEASING.md
new file mode 100644
index 0000000000000..02added7992f8
--- /dev/null
+++ b/vendor/cloud.google.com/go/RELEASING.md
@@ -0,0 +1,153 @@
+# Setup from scratch
+
+1. [Install Go](https://golang.org/dl/).
+ 1. Ensure that your `GOBIN` directory (by default `$(go env GOPATH)/bin`)
+ is in your `PATH`.
+ 1. Check it's working by running `go version`.
+ * If it doesn't work, check the install location, usually
+ `/usr/local/go`, is on your `PATH`.
+
+1. Sign one of the
+[contributor license agreements](#contributor-license-agreements) below.
+
+1. Run `go get golang.org/x/review/git-codereview && go install golang.org/x/review/git-codereview`
+to install the code reviewing tool.
+
+ 1. Ensure it's working by running `git codereview` (check your `PATH` if
+ not).
+
+ 1. If you would like, you may want to set up aliases for `git-codereview`,
+ such that `git codereview change` becomes `git change`. See the
+ [godoc](https://godoc.org/golang.org/x/review/git-codereview) for details.
+
+ * Should you run into issues with the `git-codereview` tool, please note
+ that all error messages will assume that you have set up these aliases.
+
+1. Change to a directory of your choosing and clone the repo.
+
+ ```
+ cd ~/code
+ git clone https://code.googlesource.com/gocloud
+ ```
+
+ * If you have already checked out the source, make sure that the remote
+ `git` `origin` is https://code.googlesource.com/gocloud:
+
+ ```
+ git remote -v
+ # ...
+ git remote set-url origin https://code.googlesource.com/gocloud
+ ```
+
+ * The project uses [Go Modules](https://blog.golang.org/using-go-modules)
+      for dependency management. See
+ [`gopls`](https://github.com/golang/go/wiki/gopls) for making your editor
+ work with modules.
+
+1. Change to the project directory and add the github remote:
+
+ ```
+ cd ~/code/gocloud
+ git remote add github https://github.com/googleapis/google-cloud-go
+ ```
+
+1. Make sure your `git` auth is configured correctly by visiting
+https://code.googlesource.com, clicking "Generate Password" at the top-right,
+and following the directions. Otherwise, `git codereview mail` in the next step
+will fail.
+
+# Which module to release?
+
+The Go client libraries have several modules. Each module does not strictly
+correspond to a single library - they correspond to trees of directories. If a
+file needs to be released, you must release the closest ancestor module.
+
+To see all modules:
+
+```
+$ cat `find . -name go.mod` | grep module
+module cloud.google.com/go
+module cloud.google.com/go/bigtable
+module cloud.google.com/go/firestore
+module cloud.google.com/go/bigquery
+module cloud.google.com/go/storage
+module cloud.google.com/go/datastore
+module cloud.google.com/go/pubsub
+module cloud.google.com/go/spanner
+module cloud.google.com/go/logging
+```
+
+`cloud.google.com/go` is the repository root module; each of the other modules
+is a submodule.
+
+So, if you need to release a change in `bigtable/bttest/inmem.go`, the closest
+ancestor module is `cloud.google.com/go/bigtable` - so you should release a new
+version of the `cloud.google.com/go/bigtable` submodule.
+
+If you need to release a change in `asset/apiv1/asset_client.go`, the closest
+ancestor module is `cloud.google.com/go` - so you should release a new version
+of the `cloud.google.com/go` repository root module. Note: releasing
+`cloud.google.com/go` has no impact on any of the submodules, and vice-versa.
+They are released entirely independently.
+
+# How to release `cloud.google.com/go`
+
+1. Navigate to `~/code/gocloud/` and switch to master.
+1. `git pull`
+1. Run `git tag -l | grep -v beta | grep -v alpha` to see all existing releases.
+ The current latest tag `$CV` is the largest tag. It should look something
+ like `vX.Y.Z` (note: ignore all `LIB/vX.Y.Z` tags - these are tags for a
+ specific library, not the module root). We'll call the current version `$CV`
+ and the new version `$NV`.
+1. On master, run `git log $CV...` to list all the changes since the last
+   release. NOTE: You must manually filter out changes to submodules [1]
+   (the `git log` output will include submodule changes, which are not
+   part of your release).
+1. Edit `CHANGES.md` to include a summary of the changes.
+1. `cd internal/version && go generate && cd -`
+1. Mail the CL: `git add -A && git change <branch name> && git mail`
+1. Wait for the CL to be submitted. Once it's submitted, and without submitting
+ any other CLs in the meantime:
+ a. Switch to master.
+ b. `git pull`
+ c. Tag the repo with the next version: `git tag $NV`.
+ d. Push the tag to both remotes:
+ `git push origin $NV`
+ `git push github $NV`
+1. Update [the releases page](https://github.com/googleapis/google-cloud-go/releases)
+ with the new release, copying the contents of `CHANGES.md`.
+
+# How to release a submodule
+
+We have several submodules, including `cloud.google.com/go/logging`,
+`cloud.google.com/go/datastore`, and so on.
+
+To release a submodule:
+
+(these instructions assume we're releasing `cloud.google.com/go/datastore`; adjust accordingly)
+
+1. Navigate to `~/code/gocloud/` and switch to master.
+1. `git pull`
+1. Run `git tag -l | grep datastore | grep -v beta | grep -v alpha` to see all
+ existing releases. The current latest tag `$CV` is the largest tag. It
+ should look something like `datastore/vX.Y.Z`. We'll call the current version
+ `$CV` and the new version `$NV`.
+1. On master, run `git log $CV.. -- datastore/` to list all the changes to the
+ submodule directory since the last release.
+1. Edit `datastore/CHANGES.md` to include a summary of the changes.
+1. `cd internal/version && go generate && cd -`
+1. Mail the CL: `git add -A && git change <branch name> && git mail`
+1. Wait for the CL to be submitted. Once it's submitted, and without submitting
+ any other CLs in the meantime:
+ a. Switch to master.
+ b. `git pull`
+ c. Tag the repo with the next version: `git tag $NV`.
+ d. Push the tag to both remotes:
+ `git push origin $NV`
+ `git push github $NV`
+1. Update [the releases page](https://github.com/googleapis/google-cloud-go/releases)
+ with the new release, copying the contents of `datastore/CHANGES.md`.
+
+# Appendix
+
+1: This should get better as submodule tooling matures.
diff --git a/vendor/cloud.google.com/go/bigtable/.repo-metadata.json b/vendor/cloud.google.com/go/bigtable/.repo-metadata.json
new file mode 100644
index 0000000000000..f4958ef69510e
--- /dev/null
+++ b/vendor/cloud.google.com/go/bigtable/.repo-metadata.json
@@ -0,0 +1,12 @@
+{
+ "name": "bigtable",
+ "name_pretty": "Cloud Bigtable API",
+ "product_documentation": "https://cloud.google.com/bigtable",
+ "client_documentation": "https://godoc.org/cloud.google.com/go/bigtable",
+ "release_level": "ga",
+ "language": "go",
+ "repo": "googleapis/google-cloud-go",
+ "distribution_name": "cloud.google.com/go/bigtable",
+ "api_id": "bigtable.googleapis.com",
+ "requires_billing": true
+}
diff --git a/vendor/cloud.google.com/go/bigtable/CHANGES.md b/vendor/cloud.google.com/go/bigtable/CHANGES.md
new file mode 100644
index 0000000000000..d264e8d117e6d
--- /dev/null
+++ b/vendor/cloud.google.com/go/bigtable/CHANGES.md
@@ -0,0 +1,18 @@
+# Changes
+
+## v1.1.0
+
+* Add support to cbt tool to drop all rows from a table.
+
+* Adds a method to update an instance with clusters.
+
+* Adds StorageType to ClusterInfo.
+
+* Add support for the `-auth-token` flag to cbt tool.
+
+* Adds support for Table-level IAM, including some bug fixes.
+
+## v1.0.0
+
+This is the first tag to carve out bigtable as its own module. See:
+https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
diff --git a/vendor/cloud.google.com/go/bigtable/LICENSE b/vendor/cloud.google.com/go/bigtable/LICENSE
new file mode 100644
index 0000000000000..d645695673349
--- /dev/null
+++ b/vendor/cloud.google.com/go/bigtable/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/cloud.google.com/go/bigtable/admin.go b/vendor/cloud.google.com/go/bigtable/admin.go
index 94041588c7149..d776b5a53cba6 100644
--- a/vendor/cloud.google.com/go/bigtable/admin.go
+++ b/vendor/cloud.google.com/go/bigtable/admin.go
@@ -17,6 +17,7 @@ limitations under the License.
package bigtable
import (
+ "container/list"
"context"
"errors"
"fmt"
@@ -287,6 +288,18 @@ func (ac *AdminClient) DropRowRange(ctx context.Context, table, rowKeyPrefix str
return err
}
+// DropAllRows permanently deletes all rows from the specified table.
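+//
+// A hypothetical call site (sketch only; the table name is a placeholder):
+//
+//	if err := adminClient.DropAllRows(ctx, "mytable"); err != nil {
+//		// handle the error
+//	}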
+func (ac *AdminClient) DropAllRows(ctx context.Context, table string) error {
+ ctx = mergeOutgoingMetadata(ctx, withGoogleClientInfo(), ac.md)
+ prefix := ac.instancePrefix()
+ req := &btapb.DropRowRangeRequest{
+ Name: prefix + "/tables/" + table,
+ Target: &btapb.DropRowRangeRequest_DeleteAllDataFromTable{DeleteAllDataFromTable: true},
+ }
+ _, err := ac.tClient.DropRowRange(ctx, req)
+ return err
+}
+
// CreateTableFromSnapshot creates a table from snapshot.
// The table will be created in the same cluster as the snapshot.
//
@@ -544,6 +557,7 @@ func (ac *AdminClient) isConsistent(ctx context.Context, tableName, token string
// WaitForReplication waits until all the writes committed before the call started have been propagated to all the clusters in the instance via replication.
func (ac *AdminClient) WaitForReplication(ctx context.Context, table string) error {
+ ctx = mergeOutgoingMetadata(ctx, withGoogleClientInfo(), ac.md)
// Get the token.
prefix := ac.instancePrefix()
tableName := prefix + "/tables/" + table
@@ -572,6 +586,12 @@ func (ac *AdminClient) WaitForReplication(ctx context.Context, table string) err
}
}
+// TableIAM creates an IAM client specific to a given Instance and Table within the configured project.
+func (ac *AdminClient) TableIAM(tableID string) *iam.Handle {
+ return iam.InternalNewHandleGRPCClient(ac.tClient,
+ "projects/"+ac.project+"/instances/"+ac.instance+"/tables/"+tableID)
+}
+
const instanceAdminAddr = "bigtableadmin.googleapis.com:443"
// InstanceAdminClient is a client type for performing admin operations on instances.
@@ -640,6 +660,14 @@ func (st StorageType) proto() btapb.StorageType {
return btapb.StorageType_SSD
}
+func storageTypeFromProto(st btapb.StorageType) StorageType {
+ if st == btapb.StorageType_HDD {
+ return HDD
+ }
+
+ return SSD
+}
+
// InstanceType is the type of the instance
type InstanceType int32
@@ -720,26 +748,11 @@ func (iac *InstanceAdminClient) CreateInstanceWithClusters(ctx context.Context,
return longrunning.InternalNewOperation(iac.lroClient, lro).Wait(ctx, &resp)
}
-// UpdateInstanceWithClusters updates an instance and its clusters.
-// The provided InstanceWithClustersConfig is used as follows:
-// - InstanceID is required
-// - DisplayName and InstanceType are updated only if they are not empty
-// - ClusterID is required for any provided cluster
-// - All other cluster fields are ignored except for NumNodes, which if set will be updated
-//
-// This method may return an error after partially succeeding, for example if the instance is updated
-// but a cluster update fails. If an error is returned, InstanceInfo and Clusters may be called to
-// determine the current state.
-func (iac *InstanceAdminClient) UpdateInstanceWithClusters(ctx context.Context, conf *InstanceWithClustersConfig) error {
- ctx = mergeOutgoingMetadata(ctx, withGoogleClientInfo(), iac.md)
-
+// updateInstance updates a single instance based on config fields that operate
+// at an instance level: DisplayName and InstanceType.
+func (iac *InstanceAdminClient) updateInstance(ctx context.Context, conf *InstanceWithClustersConfig) (updated bool, err error) {
if conf.InstanceID == "" {
- return errors.New("InstanceID is required")
- }
- for _, cluster := range conf.Clusters {
- if cluster.ClusterID == "" {
- return errors.New("ClusterID is required for every cluster")
- }
+ return false, errors.New("InstanceID is required")
}
// Update the instance, if necessary
@@ -758,17 +771,46 @@ func (iac *InstanceAdminClient) UpdateInstanceWithClusters(ctx context.Context,
ireq.Instance.Type = btapb.Instance_Type(conf.InstanceType)
mask.Paths = append(mask.Paths, "type")
}
- updatedInstance := false
- if len(mask.Paths) > 0 {
- lro, err := iac.iClient.PartialUpdateInstance(ctx, ireq)
- if err != nil {
- return err
- }
- err = longrunning.InternalNewOperation(iac.lroClient, lro).Wait(ctx, nil)
- if err != nil {
- return err
+
+ if len(mask.Paths) == 0 {
+ return false, nil
+ }
+
+ lro, err := iac.iClient.PartialUpdateInstance(ctx, ireq)
+ if err != nil {
+ return false, err
+ }
+ err = longrunning.InternalNewOperation(iac.lroClient, lro).Wait(ctx, nil)
+ if err != nil {
+ return false, err
+ }
+
+ return true, nil
+}
+
+// UpdateInstanceWithClusters updates an instance and its clusters. Updatable
+// fields are the instance display name, the instance type, and cluster size.
+// The provided InstanceWithClustersConfig is used as follows:
+// - InstanceID is required
+// - DisplayName and InstanceType are updated only if they are not empty
+// - ClusterID is required for any provided cluster
+// - All other cluster fields are ignored except for NumNodes, which if set will be updated
+//
+// This method may return an error after partially succeeding, for example if the instance is updated
+// but a cluster update fails. If an error is returned, InstanceInfo and Clusters may be called to
+// determine the current state.
+func (iac *InstanceAdminClient) UpdateInstanceWithClusters(ctx context.Context, conf *InstanceWithClustersConfig) error {
+ ctx = mergeOutgoingMetadata(ctx, withGoogleClientInfo(), iac.md)
+
+ for _, cluster := range conf.Clusters {
+ if cluster.ClusterID == "" {
+ return errors.New("ClusterID is required for every cluster")
}
- updatedInstance = true
+ }
+
+ updatedInstance, err := iac.updateInstance(ctx, conf)
+ if err != nil {
+ return err
}
// Update any clusters
@@ -875,10 +917,11 @@ func (cc *ClusterConfig) proto(project string) *btapb.Cluster {
// ClusterInfo represents information about a cluster.
type ClusterInfo struct {
- Name string // name of the cluster
- Zone string // GCP zone of the cluster (e.g. "us-central1-a")
- ServeNodes int // number of allocated serve nodes
- State string // state of the cluster
+ Name string // name of the cluster
+ Zone string // GCP zone of the cluster (e.g. "us-central1-a")
+ ServeNodes int // number of allocated serve nodes
+ State string // state of the cluster
+ StorageType StorageType // the storage type of the cluster
}
// CreateCluster creates a new cluster in an instance.
@@ -940,10 +983,11 @@ func (iac *InstanceAdminClient) Clusters(ctx context.Context, instanceID string)
nameParts := strings.Split(c.Name, "/")
locParts := strings.Split(c.Location, "/")
cis = append(cis, &ClusterInfo{
- Name: nameParts[len(nameParts)-1],
- Zone: locParts[len(locParts)-1],
- ServeNodes: int(c.ServeNodes),
- State: c.State.String(),
+ Name: nameParts[len(nameParts)-1],
+ Zone: locParts[len(locParts)-1],
+ ServeNodes: int(c.ServeNodes),
+ State: c.State.String(),
+ StorageType: storageTypeFromProto(c.DefaultStorageType),
})
}
return cis, nil
@@ -966,10 +1010,11 @@ func (iac *InstanceAdminClient) GetCluster(ctx context.Context, instanceID, clus
nameParts := strings.Split(c.Name, "/")
locParts := strings.Split(c.Location, "/")
cis := &ClusterInfo{
- Name: nameParts[len(nameParts)-1],
- Zone: locParts[len(locParts)-1],
- ServeNodes: int(c.ServeNodes),
- State: c.State.String(),
+ Name: nameParts[len(nameParts)-1],
+ Zone: locParts[len(locParts)-1],
+ ServeNodes: int(c.ServeNodes),
+ State: c.State.String(),
+ StorageType: storageTypeFromProto(c.DefaultStorageType),
}
return cis, nil
}
@@ -977,7 +1022,6 @@ func (iac *InstanceAdminClient) GetCluster(ctx context.Context, instanceID, clus
// InstanceIAM returns the instance's IAM handle.
func (iac *InstanceAdminClient) InstanceIAM(instanceID string) *iam.Handle {
return iam.InternalNewHandleGRPCClient(iac.iClient, "projects/"+iac.project+"/instances/"+instanceID)
-
}
// Routing policies.
@@ -1204,3 +1248,147 @@ func (iac *InstanceAdminClient) DeleteAppProfile(ctx context.Context, instanceID
return err
}
+
+// UpdateInstanceResults contains information about the
+// changes made after invoking UpdateInstanceAndSyncClusters.
+type UpdateInstanceResults struct {
+ InstanceUpdated bool
+ CreatedClusters []string
+ DeletedClusters []string
+ UpdatedClusters []string
+}
+
+func (r *UpdateInstanceResults) String() string {
+ return fmt.Sprintf("Instance updated? %v Clusters added:%v Clusters deleted:%v Clusters updated:%v",
+ r.InstanceUpdated, r.CreatedClusters, r.DeletedClusters, r.UpdatedClusters)
+}
+
+func max(x, y int) int {
+ if x > y {
+ return x
+ }
+ return y
+}
+
+// UpdateInstanceAndSyncClusters updates an instance and its clusters, and will synchronize the
+// clusters in the instance with the provided clusters, creating and deleting them as necessary.
+// The provided InstanceWithClustersConfig is used as follows:
+// - InstanceID is required
+// - DisplayName and InstanceType are updated only if they are not empty
+// - ClusterID is required for any provided cluster
+// - Any cluster present in conf.Clusters but not part of the instance will be created using CreateCluster
+// and the given ClusterConfig.
+// - Any cluster missing from conf.Clusters but present in the instance will be removed from the instance
+// using DeleteCluster.
+// - Any cluster in conf.Clusters that also exists in the instance will be updated to contain the
+// provided number of nodes if set.
+//
+// This method may return an error after partially succeeding, for example if the instance is updated
+// but a cluster update fails. If an error is returned, InstanceInfo and Clusters may be called to
+// determine the current state. The returned UpdateInstanceResults will describe the work done by the
+// method, whether partial or complete.
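+//
+// A hypothetical invocation (sketch only; instance and cluster IDs are
+// placeholders):
+//
+//	conf := &bigtable.InstanceWithClustersConfig{
+//		InstanceID: "my-instance",
+//		Clusters: []bigtable.ClusterConfig{
+//			{ClusterID: "cluster-a", Zone: "us-central1-a", NumNodes: 3},
+//		},
+//	}
+//	results, err := bigtable.UpdateInstanceAndSyncClusters(ctx, iac, conf)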
+func UpdateInstanceAndSyncClusters(ctx context.Context, iac *InstanceAdminClient, conf *InstanceWithClustersConfig) (*UpdateInstanceResults, error) {
+ ctx = mergeOutgoingMetadata(ctx, withGoogleClientInfo(), iac.md)
+
+ // First fetch the existing clusters so we know what to remove, add or update.
+ existingClusters, err := iac.Clusters(ctx, conf.InstanceID)
+ if err != nil {
+ return nil, err
+ }
+
+ updatedInstance, err := iac.updateInstance(ctx, conf)
+ if err != nil {
+ return nil, err
+ }
+
+ results := &UpdateInstanceResults{InstanceUpdated: updatedInstance}
+
+ existingClusterNames := make(map[string]bool)
+ for _, cluster := range existingClusters {
+ existingClusterNames[cluster.Name] = true
+ }
+
+ // Synchronize clusters that were passed in with the existing clusters in the instance.
+ // First update any cluster we encounter that already exists in the instance.
+ // Collect the clusters that we will create and delete so that we can minimize disruption
+ // of the instance.
+ clustersToCreate := list.New()
+ clustersToDelete := list.New()
+ for _, cluster := range conf.Clusters {
+ _, clusterExists := existingClusterNames[cluster.ClusterID]
+ if !clusterExists {
+ // The cluster doesn't exist yet, so we must create it.
+ clustersToCreate.PushBack(cluster)
+ continue
+ }
+ delete(existingClusterNames, cluster.ClusterID)
+
+ if cluster.NumNodes <= 0 {
+ // We only synchronize clusters with a valid number of nodes.
+ continue
+ }
+
+ // We simply want to update this cluster
+ err = iac.UpdateCluster(ctx, conf.InstanceID, cluster.ClusterID, cluster.NumNodes)
+ if err != nil {
+ return results, fmt.Errorf("UpdateCluster %q failed %v; Progress: %v",
+ cluster.ClusterID, err, results)
+ }
+ results.UpdatedClusters = append(results.UpdatedClusters, cluster.ClusterID)
+ }
+
+ // Any cluster left in existingClusterNames was NOT in the given config and should be deleted.
+ for clusterToDelete := range existingClusterNames {
+ clustersToDelete.PushBack(clusterToDelete)
+ }
+
+ // Now that we have the clusters that we need to create and delete, we do so keeping the following
+ // in mind:
+ // - Don't delete the last cluster in the instance, as that will result in an error.
+ // - Attempt to offset each deletion with a creation before another deletion, so that instance
+ // capacity is never reduced more than necessary.
+ // Note that there is a limit on number of clusters in an instance which we are not aware of here,
+ // so delete a cluster before adding one (as long as there are > 1 clusters left) so that we are
+ // less likely to exceed the maximum number of clusters.
+ numExistingClusters := len(existingClusters)
+ nextCreation := clustersToCreate.Front()
+ nextDeletion := clustersToDelete.Front()
+ for {
+ // We are done when both lists are empty.
+ if nextCreation == nil && nextDeletion == nil {
+ break
+ }
+
+ // If there is more than one existing cluster, we always want to delete first if possible.
+ // If there are no more creations left, always go ahead with the deletion.
+ if (numExistingClusters > 1 && nextDeletion != nil) || nextCreation == nil {
+ clusterToDelete := nextDeletion.Value.(string)
+ err = iac.DeleteCluster(ctx, conf.InstanceID, clusterToDelete)
+ if err != nil {
+ return results, fmt.Errorf("DeleteCluster %q failed %v; Progress: %v",
+ clusterToDelete, err, results)
+ }
+ results.DeletedClusters = append(results.DeletedClusters, clusterToDelete)
+ numExistingClusters--
+ nextDeletion = nextDeletion.Next()
+ }
+
+ // Now create a new cluster if required.
+ if nextCreation != nil {
+ clusterToCreate := nextCreation.Value.(ClusterConfig)
+ // Assume the cluster config is well formed and rely on the underlying call to error out.
+ // Make sure to set the InstanceID, though, since we know what it must be.
+ clusterToCreate.InstanceID = conf.InstanceID
+ err = iac.CreateCluster(ctx, &clusterToCreate)
+ if err != nil {
+ return results, fmt.Errorf("CreateCluster %v failed %v; Progress: %v",
+ clusterToCreate, err, results)
+ }
+ results.CreatedClusters = append(results.CreatedClusters, clusterToCreate.ClusterID)
+ numExistingClusters++
+ nextCreation = nextCreation.Next()
+ }
+ }
+
+ return results, nil
+}
diff --git a/vendor/cloud.google.com/go/bigtable/bigtable.go b/vendor/cloud.google.com/go/bigtable/bigtable.go
index acf7d6de5c5cb..7c8ea33fea80f 100644
--- a/vendor/cloud.google.com/go/bigtable/bigtable.go
+++ b/vendor/cloud.google.com/go/bigtable/bigtable.go
@@ -603,6 +603,9 @@ func NewMutation() *Mutation {
// If the filter matches any cell in the row, mtrue is applied;
// otherwise, mfalse is applied.
// Either given mutation may be nil.
+//
+// The application of a ReadModifyWrite is atomic; concurrent ReadModifyWrites will
+// be executed serially by the server.
func NewCondMutation(cond Filter, mtrue, mfalse *Mutation) *Mutation {
return &Mutation{cond: cond, mtrue: mtrue, mfalse: mfalse}
}
diff --git a/vendor/cloud.google.com/go/bigtable/go.mod b/vendor/cloud.google.com/go/bigtable/go.mod
new file mode 100644
index 0000000000000..49746206314c9
--- /dev/null
+++ b/vendor/cloud.google.com/go/bigtable/go.mod
@@ -0,0 +1,20 @@
+module cloud.google.com/go/bigtable
+
+go 1.11
+
+require (
+ cloud.google.com/go v0.46.3
+ cloud.google.com/go/storage v1.0.0 // indirect
+ github.com/golang/protobuf v1.3.2
+ github.com/google/btree v1.0.0
+ github.com/google/go-cmp v0.3.0
+ github.com/googleapis/gax-go/v2 v2.0.5
+ golang.org/x/exp v0.0.0-20191029154019-8994fa331a53 // indirect
+ golang.org/x/lint v0.0.0-20190930215403-16217165b5de // indirect
+ golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
+ golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 // indirect
+ google.golang.org/api v0.13.0
+ google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6
+ google.golang.org/grpc v1.21.1
+ rsc.io/binaryregexp v0.2.0
+)
diff --git a/vendor/cloud.google.com/go/bigtable/go.sum b/vendor/cloud.google.com/go/bigtable/go.sum
new file mode 100644
index 0000000000000..a8c47147affe0
--- /dev/null
+++ b/vendor/cloud.google.com/go/bigtable/go.sum
@@ -0,0 +1,180 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2 h1:yH/xNMI6CEel8IuF+gaXLvg2N1JZ6pOMkkr25uH8+2k=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3 h1:AVXDdKsrtX33oR9fbCMu/+c1o8Ofjq6Ku/MInaLVg5Y=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go/bigquery v1.0.1 h1:hL+ycaJpVE9M7nLoiXb/Pn10ENE2u+oddxbD8uu0ZVU=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/datastore v1.0.0 h1:Kt+gOPPp2LEPWp8CSfxhsM8ik9CcyE/gYu+0r+RnZvM=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/pubsub v1.0.1 h1:W9tAK3E57P75u0XLLR82LZyw8VpAnhmyTOxW9qzmyj8=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/storage v1.0.0 h1:VV2nUM3wwLLGh9lSABFgZMjInyUbJeaRSE64WuAIQ+4=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
+github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
+github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024 h1:rBMNdlhTLzJjJSDIjNEXX1Pz3Hmwmz91v+zycvx9PJc=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522 h1:OeRHuibLsmZkFj773W4LcfAGsSxJgfPONhr8cmO+eLA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979 h1:Agxu5KLo8o7Bb634SVDnhIfpTvxmzUwhbYAzBvXt6h4=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191029154019-8994fa331a53 h1:QzIrbrpgiq5AZk7Vyo+9TfeGdhgmGZzAZbnKqRSnkc0=
+golang.org/x/exp v0.0.0-20191029154019-8994fa331a53/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422 h1:QzoH/1pFpZguR8NrRHLcO6jKqfv2zpuSqZLgdm7ZmjI=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac h1:8R1esu+8QioDxo4E4mX6bFztO+dMTM49DNAaWfO5OeY=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0 h1:Dh6fw+p6FyRl5x/FvNswO1ji0lIGzm3KP8Y9VkS9PTE=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff h1:On1qIo75ByTwFJ4/W2bIqHcwJ9XAqtSWUs8GwRrIhtc=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5 h1:hKsoRgsbwY1NafxrwTs+k64bikrLBkAgPir1TNCj3Zs=
+golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0 h1:VGGbLNyPF7dvYHhcUGYBBGCRDDK0RRJAI6KCvo0CL+E=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0 h1:jbyannxz0XFD3zdjgrSUsaJbgpH4eTrkdhRChkHPfO8=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0 h1:Q3Ui3V3/CVinFWFiW39Iw0kMuVrRzYX0wN6OPFp0lTA=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64 h1:iKtrH9Y8mcbADOP0YFaEMth7OfuHY9xHOwNj4znpM1A=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51 h1:Ex1mq5jaJof+kRnYi3SlYJ8KKa9Ao3NHyIT5XJ1gF6U=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6 h1:UXl+Zk3jqqcbEVV7ace5lrt4YdA4tXiz3f/KbmD29Vo=
+google.golang.org/genproto v0.0.0-20191028173616-919d9bdd9fe6/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.1 h1:j6XxA85m/6txkUCHvzlV5f+HBNl/1r5cZ2A/3IEFOO8=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a h1:LJwr7TCTghdatWv40WobzlKXc9c4s8oGa7QKJUtHhWA=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+rsc.io/binaryregexp v0.2.0 h1:HfqmD5MEmC0zvwBuF187nq9mdnXjXsSivRiXN7SmRkE=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
diff --git a/vendor/cloud.google.com/go/bigtable/go_mod_tidy_hack.go b/vendor/cloud.google.com/go/bigtable/go_mod_tidy_hack.go
new file mode 100644
index 0000000000000..90d00ac433ce5
--- /dev/null
+++ b/vendor/cloud.google.com/go/bigtable/go_mod_tidy_hack.go
@@ -0,0 +1,22 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// This file, and the cloud.google.com/go import, won't actually become part of
+// the resultant binary.
+// +build modhack
+
+package bigtable
+
+// Necessary for safely adding multi-module repo. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository
+import _ "cloud.google.com/go"
diff --git a/vendor/cloud.google.com/go/compute/metadata/.repo-metadata.json b/vendor/cloud.google.com/go/compute/metadata/.repo-metadata.json
new file mode 100644
index 0000000000000..ca022ccc41a05
--- /dev/null
+++ b/vendor/cloud.google.com/go/compute/metadata/.repo-metadata.json
@@ -0,0 +1,12 @@
+{
+ "name": "metadata",
+ "name_pretty": "Google Compute Engine Metadata API",
+ "product_documentation": "https://cloud.google.com/compute/docs/storing-retrieving-metadata",
+ "client_documentation": "https://godoc.org/cloud.google.com/go/compute/metadata",
+ "release_level": "ga",
+ "language": "go",
+ "repo": "googleapis/google-cloud-go",
+ "distribution_name": "cloud.google.com/go/compute/metadata",
+ "api_id": "compute:metadata",
+ "requires_billing": false
+}
diff --git a/vendor/cloud.google.com/go/compute/metadata/metadata.go b/vendor/cloud.google.com/go/compute/metadata/metadata.go
index 125b7033c96b9..4ff4e2f1ca6d3 100644
--- a/vendor/cloud.google.com/go/compute/metadata/metadata.go
+++ b/vendor/cloud.google.com/go/compute/metadata/metadata.go
@@ -227,6 +227,9 @@ func InternalIP() (string, error) { return defaultClient.InternalIP() }
// ExternalIP returns the instance's primary external (public) IP address.
func ExternalIP() (string, error) { return defaultClient.ExternalIP() }
+// Email calls Client.Email on the default client.
+func Email(serviceAccount string) (string, error) { return defaultClient.Email(serviceAccount) }
+
// Hostname returns the instance's hostname. This will be of the form
// "<instanceID>.c.<projID>.internal".
func Hostname() (string, error) { return defaultClient.Hostname() }
@@ -367,6 +370,16 @@ func (c *Client) InternalIP() (string, error) {
return c.getTrimmed("instance/network-interfaces/0/ip")
}
+// Email returns the email address associated with the service account.
+// The account may be empty or the string "default" to use the instance's
+// main account.
+func (c *Client) Email(serviceAccount string) (string, error) {
+ if serviceAccount == "" {
+ serviceAccount = "default"
+ }
+ return c.getTrimmed("instance/service-accounts/" + serviceAccount + "/email")
+}
+
// ExternalIP returns the instance's primary external (public) IP address.
func (c *Client) ExternalIP() (string, error) {
return c.getTrimmed("instance/network-interfaces/0/access-configs/0/external-ip")
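For reference, a short sketch of calling the Email helper added above from code running on a GCE instance. Only metadata.Email itself comes from this change; the OnGCE guard and the program scaffolding are assumptions for illustration:

```
package main

import (
	"fmt"
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// Email only works where the metadata server is reachable (GCE/GKE);
	// guard with OnGCE to fail fast elsewhere.
	if !metadata.OnGCE() {
		log.Fatal("not running on GCE; metadata server unavailable")
	}
	// Per the doc comment above, "" (or "default") resolves the
	// instance's primary service account.
	email, err := metadata.Email("")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("instance service account:", email)
}
```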
diff --git a/vendor/cloud.google.com/go/doc.go b/vendor/cloud.google.com/go/doc.go
new file mode 100644
index 0000000000000..237d84561ce0a
--- /dev/null
+++ b/vendor/cloud.google.com/go/doc.go
@@ -0,0 +1,100 @@
+// Copyright 2014 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+/*
+Package cloud is the root of the packages used to access Google Cloud
+Services. See https://godoc.org/cloud.google.com/go for a full list
+of sub-packages.
+
+
+Client Options
+
+All clients in sub-packages are configurable via client options. These options are
+described here: https://godoc.org/google.golang.org/api/option.
+
+
+Authentication and Authorization
+
+All the clients in sub-packages support authentication via Google Application Default
+Credentials (see https://cloud.google.com/docs/authentication/production), or
+by providing a JSON key file for a Service Account. See the authentication examples
+in this package for details.
+
+
+Timeouts and Cancellation
+
+By default, all requests in sub-packages will run indefinitely, retrying on transient
+errors when correctness allows. To set timeouts or arrange for cancellation, use
+contexts. See the examples for details.
+
+Do not attempt to control the initial connection (dialing) of a service by setting a
+timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts
+would be ineffective and would only interfere with credential refreshing, which uses
+the same context.
+
+
+Connection Pooling
+
+Connection pooling differs in clients based on their transport. Cloud
+clients either rely on HTTP or gRPC transports to communicate
+with Google Cloud.
+
+Cloud clients that use HTTP (bigquery, compute, storage, and translate) rely on the
+underlying HTTP transport to cache connections for later re-use. These are cached to
+the default http.MaxIdleConns and http.MaxIdleConnsPerHost settings in
+http.DefaultTransport.
+
+For gRPC clients (all others in this repo), connection pooling is configurable. Users
+of cloud client libraries may specify option.WithGRPCConnectionPool(n) as a client
+option to NewClient calls. This configures the underlying gRPC connections to be
+pooled and addressed in a round robin fashion.
+
+
+Using the Libraries with Docker
+
+Minimal docker images like Alpine lack CA certificates. This causes RPCs to appear to
+hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928
+for more information.
+
+
+Debugging
+
+To see gRPC logs, set the environment variable GRPC_GO_LOG_SEVERITY_LEVEL. See
+https://godoc.org/google.golang.org/grpc/grpclog for more information.
+
+For HTTP logging, set the GODEBUG environment variable to "http2debug=1" or "http2debug=2".
+
+
+Client Stability
+
+Clients in this repository are considered alpha or beta unless otherwise
+marked as stable in the README.md. Semver is not used to communicate stability
+of clients.
+
+Alpha and beta clients may change or go away without notice.
+
+Clients marked stable will maintain compatibility with future versions for as
+long as we can reasonably sustain. Incompatible changes might be made in some
+situations, including:
+
+- Security bugs may prompt backwards-incompatible changes.
+
+- Situations in which components are no longer feasible to maintain without
+making breaking changes, including removal.
+
+- Parts of the client surface may be outright unstable and subject to change.
+These parts of the surface will be labeled with the note, "It is EXPERIMENTAL
+and subject to change or removal without notice."
+*/
+package cloud // import "cloud.google.com/go"
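The timeout guidance in this doc comment boils down to: dial with a background context, then put deadlines on individual calls. A hedged sketch using the storage client vendored later in this diff (the bucket name "my-bucket" is a placeholder, not from the source):

```
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	// Per the package docs: no timeout on the dialing context, since
	// dialing is non-blocking and the context is reused for credential
	// refreshing.
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Individual RPCs get their own deadline instead.
	callCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	attrs, err := client.Bucket("my-bucket").Attrs(callCtx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("bucket created:", attrs.Created)
}
```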
diff --git a/vendor/cloud.google.com/go/gapics.txt b/vendor/cloud.google.com/go/gapics.txt
new file mode 100644
index 0000000000000..a3ed21fdab21f
--- /dev/null
+++ b/vendor/cloud.google.com/go/gapics.txt
@@ -0,0 +1,48 @@
+google/api/expr/artman_cel.yaml
+google/cloud/asset/artman_cloudasset_v1beta1.yaml
+google/cloud/asset/artman_cloudasset_v1p2beta1.yaml
+google/iam/credentials/artman_iamcredentials_v1.yaml
+google/cloud/automl/artman_automl_v1.yaml
+google/cloud/automl/artman_automl_v1beta1.yaml
+google/cloud/bigquery/datatransfer/artman_bigquerydatatransfer.yaml
+google/cloud/bigquery/storage/artman_bigquerystorage_v1beta1.yaml
+google/cloud/dataproc/artman_dataproc_v1.yaml
+google/cloud/dataproc/artman_dataproc_v1beta2.yaml
+google/cloud/dialogflow/v2/artman_dialogflow_v2.yaml
+google/cloud/iot/artman_cloudiot.yaml
+google/cloud/irm/artman_irm_v1alpha2.yaml
+google/cloud/kms/artman_cloudkms.yaml
+google/cloud/language/artman_language_v1beta2.yaml
+google/cloud/oslogin/artman_oslogin_v1.yaml
+google/cloud/oslogin/artman_oslogin_v1beta.yaml
+google/cloud/recaptchaenterprise/artman_recaptchaenterprise_v1beta1.yaml
+google/cloud/recommender/artman_recommender_v1beta1.yaml
+google/cloud/redis/artman_redis_v1beta1.yaml
+google/cloud/redis/artman_redis_v1.yaml
+google/cloud/securitycenter/artman_securitycenter_v1beta1.yaml
+google/cloud/securitycenter/artman_securitycenter_v1.yaml
+google/cloud/talent/artman_talent_v4beta1.yaml
+google/cloud/tasks/artman_cloudtasks_v2beta2.yaml
+google/cloud/tasks/artman_cloudtasks_v2beta3.yaml
+google/cloud/tasks/artman_cloudtasks_v2.yaml
+google/cloud/videointelligence/artman_videointelligence_v1.yaml
+google/cloud/videointelligence/artman_videointelligence_v1beta2.yaml
+google/cloud/vision/artman_vision_v1.yaml
+google/cloud/vision/artman_vision_v1p1beta1.yaml
+google/cloud/webrisk/artman_webrisk_v1beta1.yaml
+google/devtools/artman_clouddebugger.yaml
+google/devtools/cloudbuild/artman_cloudbuild.yaml
+google/devtools/clouderrorreporting/artman_errorreporting.yaml
+google/devtools/cloudtrace/artman_cloudtrace_v1.yaml
+google/devtools/cloudtrace/artman_cloudtrace_v2.yaml
+google/devtools/containeranalysis/artman_containeranalysis_v1beta1.yaml
+google/firestore/artman_firestore.yaml
+google/firestore/admin/artman_firestore_v1.yaml
+google/logging/artman_logging.yaml
+google/longrunning/artman_longrunning.yaml
+google/monitoring/artman_monitoring.yaml
+google/privacy/dlp/artman_dlp_v2.yaml
+google/pubsub/artman_pubsub.yaml
+google/spanner/admin/database/artman_spanner_admin_database.yaml
+google/spanner/admin/instance/artman_spanner_admin_instance.yaml
+google/spanner/artman_spanner.yaml
diff --git a/vendor/cloud.google.com/go/go.mod b/vendor/cloud.google.com/go/go.mod
new file mode 100644
index 0000000000000..053559a30aaf3
--- /dev/null
+++ b/vendor/cloud.google.com/go/go.mod
@@ -0,0 +1,28 @@
+module cloud.google.com/go
+
+go 1.11
+
+require (
+ cloud.google.com/go/bigquery v1.0.1
+ cloud.google.com/go/datastore v1.0.0
+ cloud.google.com/go/pubsub v1.0.1
+ cloud.google.com/go/storage v1.0.0
+ github.com/golang/mock v1.3.1
+ github.com/golang/protobuf v1.3.2
+ github.com/google/go-cmp v0.3.0
+ github.com/google/martian v2.1.0+incompatible
+ github.com/google/pprof v0.0.0-20190515194954-54271f7e092f
+ github.com/googleapis/gax-go/v2 v2.0.5
+ github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024
+ go.opencensus.io v0.22.0
+ golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136
+ golang.org/x/lint v0.0.0-20190930215403-16217165b5de
+ golang.org/x/net v0.0.0-20190620200207-3b0461eec859
+ golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
+ golang.org/x/text v0.3.2
+ golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2
+ google.golang.org/api v0.14.0
+ google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9
+ google.golang.org/grpc v1.21.1
+ honnef.co/go/tools v0.0.1-2019.2.3
+)
diff --git a/vendor/cloud.google.com/go/go.sum b/vendor/cloud.google.com/go/go.sum
new file mode 100644
index 0000000000000..1cc173da6569d
--- /dev/null
+++ b/vendor/cloud.google.com/go/go.sum
@@ -0,0 +1,209 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go/bigquery v1.0.1 h1:hL+ycaJpVE9M7nLoiXb/Pn10ENE2u+oddxbD8uu0ZVU=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/datastore v1.0.0 h1:Kt+gOPPp2LEPWp8CSfxhsM8ik9CcyE/gYu+0r+RnZvM=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/pubsub v1.0.1 h1:W9tAK3E57P75u0XLLR82LZyw8VpAnhmyTOxW9qzmyj8=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+cloud.google.com/go/storage v1.0.0 h1:VV2nUM3wwLLGh9lSABFgZMjInyUbJeaRSE64WuAIQ+4=
+cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.2.0 h1:28o5sBqPkBsMGnC6b4MvE2TzSr5/AT4c/1fLqVGIwlk=
+github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.3.1 h1:qGJ6qTW+x6xX/my+8YUVl4WNpX9B7+/l2tRsHGZ7f2s=
+github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c h1:964Od4U6p2jUkFxvCydnIczKteheJEzHRToSGK3Bnlw=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
+github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
+github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57 h1:eqyIo2HjKhKe/mJzTG8n4VqvLXIOEG+SLdDqX7xGtkY=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f h1:Jnx61latede7zDD3DiiP4gmNz33uK0U5HDUaF0a/HVQ=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/googleapis/gax-go/v2 v2.0.4 h1:hU4mGcQI4DaAYW+IbTun+2qEZVFxK0ySjQLTbS0VQKc=
+github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/hashicorp/golang-lru v0.5.0 h1:CL2msUPvZTLb5O648aiLNJw3hnBxN2+1Jq8rCOH9wdo=
+github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024 h1:rBMNdlhTLzJjJSDIjNEXX1Pz3Hmwmz91v+zycvx9PJc=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+go.opencensus.io v0.21.0 h1:mU6zScU4U1YAFPHEHYk+3JC4SY7JxgkqS10ZOSyksNg=
+go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4 h1:c2HOrn5iMezYjSlGPncknSEr/8x5LELb/ilJbXi9DEA=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522 h1:OeRHuibLsmZkFj773W4LcfAGsSxJgfPONhr8cmO+eLA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979 h1:Agxu5KLo8o7Bb634SVDnhIfpTvxmzUwhbYAzBvXt6h4=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136 h1:A1gGSx58LAGVHUUsOf7IiR0u8Xb6W51gRwfDBhkdcaw=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f h1:hX65Cu3JDlGH3uEdK7I99Ii+9kjD6mvnnpfLdEAH0x4=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422 h1:QzoH/1pFpZguR8NrRHLcO6jKqfv2zpuSqZLgdm7ZmjI=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac h1:8R1esu+8QioDxo4E4mX6bFztO+dMTM49DNAaWfO5OeY=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJVlRHBcsciT5bXOrH628=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c h1:uOCk1iQW6Vc18bnC13MfzScl+wdKBmM9Y9kU7Z83/lw=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421 h1:Wo7BWFiOk0QRFMLYMqJGFMd9CgUAcGx7V+qEg/h5IBI=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6 h1:bjcUS9ztw9kFmmIxJInhon/0Is3p+EHBKNgquIzo1OI=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b h1:ag/x1USPSsqHud38I9BAC88qdNLDHHtQ4mlgQIZPPNA=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2 h1:z99zHgr7hKfrUcX/KsoJk5FJfjTceCKIp96+biqP4To=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c h1:fqgJT0MGcGpPgpWU7VRdRjuArfcOvC4AoJmILihzhDg=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138 h1:H3uGjxCR/6Ds0Mjgyp7LMK81+LvmbvWWEnJhzk1Pi9E=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c h1:97SnQk1GYRXJgvwZ8fadnxDOWfKvkNQHH3CtZntPSrM=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0 h1:Dh6fw+p6FyRl5x/FvNswO1ji0lIGzm3KP8Y9VkS9PTE=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff h1:On1qIo75ByTwFJ4/W2bIqHcwJ9XAqtSWUs8GwRrIhtc=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2 h1:EtTFh6h4SAKemS+CURDMTDIANuduG5zKEXShyy18bGA=
+golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0 h1:VGGbLNyPF7dvYHhcUGYBBGCRDDK0RRJAI6KCvo0CL+E=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0 h1:jbyannxz0XFD3zdjgrSUsaJbgpH4eTrkdhRChkHPfO8=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.14.0 h1:uMf5uLi4eQMRrMKhCplNik4U4H8Z6C1br3zOtAa/aDE=
+google.golang.org/api v0.14.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0 h1:/wp5JvzpHIxhs/dumFmF7BXTf3Z+dd4uXta4kVyO508=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.5.0 h1:KxkO13IPW4Lslp2bz+KHP2E3gtFlrIGNThxkZQ3g+4c=
+google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19 h1:Lj2SnHtxkRGJDqnGaSjo+CCdIieEnwVazbOXILwQemk=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873 h1:nfPFGzJkUDX6uBmpN/pSw7MbOAWegH5QDQuoXFHedLg=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64 h1:iKtrH9Y8mcbADOP0YFaEMth7OfuHY9xHOwNj4znpM1A=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51 h1:Ex1mq5jaJof+kRnYi3SlYJ8KKa9Ao3NHyIT5XJ1gF6U=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9 h1:6XzpBoANz1NqMNfDXzc2QmHmbb1vyMsvRfoP5rM+K1I=
+google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/grpc v1.19.0 h1:cfg4PD8YEdSFnm7qLV4++93WcmhH2nIUhMjhdCvl3j8=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
+google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.1 h1:j6XxA85m/6txkUCHvzlV5f+HBNl/1r5cZ2A/3IEFOO8=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a h1:/8zB6iBfHCl1qAnEAWwGPNrUvapuy6CPla1VM0k8hQw=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a h1:LJwr7TCTghdatWv40WobzlKXc9c4s8oGa7QKJUtHhWA=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+rsc.io/binaryregexp v0.2.0 h1:HfqmD5MEmC0zvwBuF187nq9mdnXjXsSivRiXN7SmRkE=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
diff --git a/vendor/cloud.google.com/go/iam/.repo-metadata.json b/vendor/cloud.google.com/go/iam/.repo-metadata.json
new file mode 100644
index 0000000000000..0edb2d8b454c1
--- /dev/null
+++ b/vendor/cloud.google.com/go/iam/.repo-metadata.json
@@ -0,0 +1,12 @@
+{
+ "name": "iam",
+ "name_pretty": "Cloud Identify and Access Management API",
+ "product_documentation": "https://cloud.google.com/iam",
+ "client_documentation": "https://godoc.org/cloud.google.com/go/iam",
+ "release_level": "ga",
+ "language": "go",
+ "repo": "googleapis/google-cloud-go",
+ "distribution_name": "cloud.google.com/go/iam",
+ "api_id": "iam.googleapis.com",
+ "requires_billing": true
+}
diff --git a/vendor/cloud.google.com/go/internal/version/update_version.sh b/vendor/cloud.google.com/go/internal/version/update_version.sh
index fecf1f03fdeb8..d7c5a3e219c95 100644
--- a/vendor/cloud.google.com/go/internal/version/update_version.sh
+++ b/vendor/cloud.google.com/go/internal/version/update_version.sh
@@ -1,4 +1,17 @@
#!/bin/bash
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
today=$(date +%Y%m%d)
diff --git a/vendor/cloud.google.com/go/internal/version/version.go b/vendor/cloud.google.com/go/internal/version/version.go
index d291921b18f92..f817afb7301bf 100644
--- a/vendor/cloud.google.com/go/internal/version/version.go
+++ b/vendor/cloud.google.com/go/internal/version/version.go
@@ -26,7 +26,7 @@ import (
// Repo is the current version of the client libraries in this
// repo. It should be a date in YYYYMMDD format.
-const Repo = "20190802"
+const Repo = "20191119"
// Go returns the Go runtime version. The returned string
// has no whitespace.
diff --git a/vendor/cloud.google.com/go/issue_template.md b/vendor/cloud.google.com/go/issue_template.md
new file mode 100644
index 0000000000000..e2ccef3e78dfa
--- /dev/null
+++ b/vendor/cloud.google.com/go/issue_template.md
@@ -0,0 +1,17 @@
+(delete this for feature requests)
+
+## Client
+
+e.g. PubSub
+
+## Describe Your Environment
+
+e.g. Alpine Docker on GKE
+
+## Expected Behavior
+
+e.g. Messages arrive really fast.
+
+## Actual Behavior
+
+e.g. Messages arrive really slowly.
\ No newline at end of file
diff --git a/vendor/cloud.google.com/go/longrunning/.repo-metadata.json b/vendor/cloud.google.com/go/longrunning/.repo-metadata.json
new file mode 100644
index 0000000000000..4c10fbded1d39
--- /dev/null
+++ b/vendor/cloud.google.com/go/longrunning/.repo-metadata.json
@@ -0,0 +1,12 @@
+{
+ "name": "longrunning",
+ "name_pretty": "Longrunning operations API",
+ "product_documentation": "https://cloud.google.com/service-infrastructure/docs/service-management/reference/rpc/google.longrunning",
+ "client_documentation": "https://godoc.org/cloud.google.com/go/longrunning",
+ "release_level": "alpha",
+ "language": "go",
+ "repo": "googleapis/google-cloud-go",
+ "distribution_name": "cloud.google.com/go/longrunning",
+ "api_id": "longrunning.googleapis.com",
+ "requires_billing": true
+}
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/.repo-metadata.json b/vendor/cloud.google.com/go/longrunning/autogen/.repo-metadata.json
new file mode 100644
index 0000000000000..2dec58d3fa309
--- /dev/null
+++ b/vendor/cloud.google.com/go/longrunning/autogen/.repo-metadata.json
@@ -0,0 +1,12 @@
+{
+ "name": "longrunning",
+ "name_pretty": "Longrunning operations API",
+ "product_documentation": "https://cloud.google.com/service-infrastructure/docs/service-management/reference/rpc/google.longrunning",
+ "client_documentation": "https://godoc.org/cloud.google.com/go/longrunning/autogen",
+ "release_level": "alpha",
+ "language": "go",
+ "repo": "googleapis/google-cloud-go",
+ "distribution_name": "cloud.google.com/go",
+ "api_id": "longrunning.googleapis.com",
+ "requires_billing": true
+}
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/doc.go b/vendor/cloud.google.com/go/longrunning/autogen/doc.go
index 59e345888d2bb..6347fb320dd3c 100644
--- a/vendor/cloud.google.com/go/longrunning/autogen/doc.go
+++ b/vendor/cloud.google.com/go/longrunning/autogen/doc.go
@@ -96,4 +96,4 @@ func versionGo() string {
return "UNKNOWN"
}
-const versionClient = "20190801"
+const versionClient = "20191115"
diff --git a/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go b/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go
index 5ede122767fe4..8f24d0afc5d50 100644
--- a/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go
+++ b/vendor/cloud.google.com/go/longrunning/autogen/operations_client.go
@@ -46,6 +46,8 @@ func defaultOperationsClientOptions() []option.ClientOption {
return []option.ClientOption{
option.WithEndpoint("longrunning.googleapis.com:443"),
option.WithScopes(DefaultAuthScopes()...),
+ option.WithGRPCDialOption(grpc.WithDefaultCallOptions(
+ grpc.MaxCallRecvMsgSize(math.MaxInt32))),
}
}
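The two added lines lift gRPC's default 4 MiB cap on received messages for the generated client. A standalone sketch of the same dial option on a plain gRPC connection (the target address and WithInsecure are illustrative assumptions; real Google endpoints require TLS and credentials):

```
package main

import (
	"log"
	"math"

	"google.golang.org/grpc"
)

func main() {
	// Same idea as the generated client above: raise the client-side
	// receive limit so large responses are not rejected.
	conn, err := grpc.Dial(
		"localhost:50051", // placeholder target
		grpc.WithInsecure(),
		grpc.WithDefaultCallOptions(grpc.MaxCallRecvMsgSize(math.MaxInt32)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```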
diff --git a/vendor/cloud.google.com/go/manuals.txt b/vendor/cloud.google.com/go/manuals.txt
new file mode 100644
index 0000000000000..58b7bd129410e
--- /dev/null
+++ b/vendor/cloud.google.com/go/manuals.txt
@@ -0,0 +1,8 @@
+errorreporting/apiv1beta1
+firestore/apiv1beta1
+firestore/apiv1
+logging/apiv2
+longrunning/autogen
+pubsub/apiv1
+spanner/apiv1
+trace/apiv1
diff --git a/vendor/cloud.google.com/go/microgens.csv b/vendor/cloud.google.com/go/microgens.csv
new file mode 100644
index 0000000000000..a367dd480597c
--- /dev/null
+++ b/vendor/cloud.google.com/go/microgens.csv
@@ -0,0 +1,10 @@
+input directory path, go module;package flag, gRPC ServiceConfig path flag, API service config path flag, release level
+google/cloud/texttospeech/v1, --go-gapic-package cloud.google.com/go/texttospeech/apiv1;texttospeech, --grpc-service-config google/cloud/texttospeech/v1/texttospeech_grpc_service_config.json, --gapic-service-config google/cloud/texttospeech/v1/texttospeech_v1.yaml, --release-level alpha
+google/cloud/asset/v1, --go-gapic-package cloud.google.com/go/asset/apiv1;asset, --grpc-service-config google/cloud/asset/v1/cloudasset_grpc_service_config.json, --gapic-service-config google/cloud/asset/v1/cloudasset_v1.yaml, --release-level alpha
+google/cloud/language/v1, --go-gapic-package cloud.google.com/go/language/apiv1;language, --grpc-service-config google/cloud/language/v1/language_grpc_service_config.json, --gapic-service-config google/cloud/language/language_v1.yaml, --release-level alpha
+google/cloud/phishingprotection/v1beta1, --go-gapic-package cloud.google.com/go/phishingprotection/apiv1beta1;phishingprotection, --grpc-service-config google/cloud/phishingprotection/v1beta1/phishingprotection_grpc_service_config.json, --gapic-service-config google/cloud/phishingprotection/v1beta1/phishingprotection_v1beta1.yaml, --release-level beta
+google/cloud/translate/v3, --go-gapic-package cloud.google.com/go/translate/apiv3;translate, --grpc-service-config google/cloud/translate/v3/translate_grpc_service_config.json, --gapic-service-config google/cloud/translate/v3/translate_v3.yaml,
+google/cloud/scheduler/v1, --go-gapic-package cloud.google.com/go/scheduler/apiv1;scheduler, --grpc-service-config google/cloud/scheduler/v1/cloudscheduler_grpc_service_config.json, --gapic-service-config google/cloud/scheduler/v1/cloudscheduler_v1.yaml,
+google/cloud/scheduler/v1beta1, --go-gapic-package cloud.google.com/go/scheduler/apiv1beta1;scheduler, --grpc-service-config google/cloud/scheduler/v1beta1/cloudscheduler_grpc_service_config.json, --gapic-service-config google/cloud/scheduler/v1beta1/cloudscheduler_v1beta1.yaml, --release-level beta
+google/cloud/speech/v1, --go-gapic-package cloud.google.com/go/speech/apiv1;speech, --grpc-service-config google/cloud/speech/v1/speech_grpc_service_config.json, --gapic-service-config google/cloud/speech/v1/speech_v1.yaml,
+google/cloud/speech/v1p1beta1, --go-gapic-package cloud.google.com/go/speech/apiv1p1beta1;speech, --grpc-service-config google/cloud/speech/v1p1beta1/speech_grpc_service_config.json, --gapic-service-config google/cloud/speech/v1p1beta1/speech_v1p1beta1.yaml, --release-level beta
diff --git a/vendor/cloud.google.com/go/regen-gapic.sh b/vendor/cloud.google.com/go/regen-gapic.sh
new file mode 100644
index 0000000000000..8dae7afda9ed8
--- /dev/null
+++ b/vendor/cloud.google.com/go/regen-gapic.sh
@@ -0,0 +1,81 @@
+#!/bin/bash
+# Copyright 2019 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This script generates all GAPIC clients in this repo.
+# See instructions at go/yoshi-site.
+
+set -ex
+
+GOCLOUD_DIR="$(dirname "$0")"
+HOST_MOUNT="$PWD"
+
+# need to mount the /var/folders properly for macos
+# https://stackoverflow.com/questions/45122459/docker-mounts-denied-the-paths-are-not-shared-from-os-x-and-are-not-known/45123074
+if [[ "$OSTYPE" == "darwin"* ]] && [[ "$HOST_MOUNT" == "/var/folders"* ]]; then
+ HOST_MOUNT=/private$HOST_MOUNT
+fi
+
+microgen() {
+ input=$1
+ options="${@:2}"
+
+ # see https://github.com/googleapis/gapic-generator-go/blob/master/README.md#docker-wrapper for details
+ docker run \
+ --user $UID \
+ --mount type=bind,source=$HOST_MOUNT,destination=/conf,readonly \
+ --mount type=bind,source=$HOST_MOUNT/$input,destination=/in/$input,readonly \
+ --mount type=bind,source=/tmp,destination=/out \
+ --rm \
+ gcr.io/gapic-images/gapic-generator-go:0.9.1 \
+ $options
+}
+
+for gencfg in $(cat $GOCLOUD_DIR/gapics.txt); do
+ rm -rf artman-genfiles/*
+ artman --config "$gencfg" generate go_gapic
+ cp -r artman-genfiles/gapi-*/cloud.google.com/go/* $GOCLOUD_DIR
+done
+
+rm -rf /tmp/cloud.google.com
+{
+ # skip the first line with column titles
+ read -r
+ while IFS=, read -r input mod retrycfg apicfg release
+ do
+ microgen $input "$mod" "$retrycfg" "$apicfg" "$release"
+ done
+} < $GOCLOUD_DIR/microgens.csv
+
+# copy generated code if any was created
+[ -d "/tmp/cloud.google.com/go" ] && cp -r /tmp/cloud.google.com/go/* $GOCLOUD_DIR
+
+pushd $GOCLOUD_DIR
+ gofmt -s -d -l -w . && goimports -w .
+
+ # NOTE(pongad): `sed -i` doesn't work on Macs, because -i option needs an argument.
+ # `-i ''` doesn't work on GNU, since the empty string is treated as a file name.
+ # So we just create the backup and delete it after.
+ ver=$(date +%Y%m%d)
+ git ls-files -mo | while read modified; do
+ dir=${modified%/*.*}
+ find . -path "*/$dir/doc.go" -exec sed -i.backup -e "s/^const versionClient.*/const versionClient = \"$ver\"/" '{}' +
+ done
+popd
+
+for manualdir in $(cat $GOCLOUD_DIR/manuals.txt); do
+ find "$GOCLOUD_DIR/$manualdir" -name '*.go' -exec sed -i.backup -e 's/setGoogleClientInfo/SetGoogleClientInfo/g' '{}' '+'
+done
+
+find $GOCLOUD_DIR -name '*.backup' -delete
diff --git a/vendor/cloud.google.com/go/storage/.repo-metadata.json b/vendor/cloud.google.com/go/storage/.repo-metadata.json
new file mode 100644
index 0000000000000..a42f91cd35f1b
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/.repo-metadata.json
@@ -0,0 +1,12 @@
+{
+ "name": "storage",
+ "name_pretty": "storage",
+ "product_documentation": "https://cloud.google.com/storage",
+ "client_documentation": "https://godoc.org/cloud.google.com/go/storage",
+ "release_level": "ga",
+ "language": "go",
+ "repo": "googleapis/google-cloud-go",
+ "distribution_name": "cloud.google.com/go/storage",
+ "api_id": "storage:v2",
+ "requires_billing": true
+}
diff --git a/vendor/cloud.google.com/go/storage/CHANGES.md b/vendor/cloud.google.com/go/storage/CHANGES.md
new file mode 100644
index 0000000000000..f3468a9abadb0
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/CHANGES.md
@@ -0,0 +1,38 @@
+# Changes
+
+## v1.3.0
+
+- Use `storage.googleapis.com/storage/v1` by default for GCS requests
+ instead of `www.googleapis.com/storage/v1`.
+
+## v1.2.1
+
+- Fixed a bug where UniformBucketLevelAccess and BucketPolicyOnly were not
+ being sent in all cases.
+
+## v1.2.0
+
+- Add support for UniformBucketLevelAccess. This configures access checks
+ to use only bucket-level IAM policies.
+ See: https://godoc.org/cloud.google.com/go/storage#UniformBucketLevelAccess.
+- Fix userAgent to use correct version.
+
+## v1.1.2
+
+- Fix memory leak in BucketIterator and ObjectIterator.
+
+## v1.1.1
+
+- Send BucketPolicyOnly even when it's disabled.
+
+## v1.1.0
+
+- Performance improvements for ObjectIterator and BucketIterator.
+- Fix Bucket.ObjectIterator size calculation checks.
+- Added HMACKeyOptions to all the methods which allows for options such as
+ UserProject to be set per invocation and optionally be used.
+
+## v1.0.0
+
+This is the first tag to carve out storage as its own module. See:
+https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository.
\ No newline at end of file
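The v1.2.0 entry above introduces UniformBucketLevelAccess, which makes GCS ignore object ACLs and evaluate only bucket-level IAM policies. A minimal sketch of enabling it through the documented Update API ("my-bucket" is a placeholder):

```
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Enable uniform bucket-level access on an existing bucket.
	attrs, err := client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		UniformBucketLevelAccess: &storage.UniformBucketLevelAccess{Enabled: true},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uniform bucket-level access enabled: %v",
		attrs.UniformBucketLevelAccess.Enabled)
}
```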
diff --git a/vendor/cloud.google.com/go/storage/LICENSE b/vendor/cloud.google.com/go/storage/LICENSE
new file mode 100644
index 0000000000000..d645695673349
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/LICENSE
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/cloud.google.com/go/storage/bucket.go b/vendor/cloud.google.com/go/storage/bucket.go
index 07c470d3e0d4e..dccfc989cd001 100644
--- a/vendor/cloud.google.com/go/storage/bucket.go
+++ b/vendor/cloud.google.com/go/storage/bucket.go
@@ -232,10 +232,18 @@ type BucketAttrs struct {
// ACL is the list of access control rules on the bucket.
ACL []ACLRule
- // BucketPolicyOnly configures access checks to use only bucket-level IAM
- // policies.
+ // BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of
+ // UniformBucketLevelAccess is recommended above the use of this field.
+ // Setting BucketPolicyOnly.Enabled OR UniformBucketLevelAccess.Enabled to
+	// true will enable UniformBucketLevelAccess.
BucketPolicyOnly BucketPolicyOnly
+ // UniformBucketLevelAccess configures access checks to use only bucket-level IAM
+ // policies and ignore any ACL rules for the bucket.
+ // See https://cloud.google.com/storage/docs/uniform-bucket-level-access
+ // for more information.
+ UniformBucketLevelAccess UniformBucketLevelAccess
+
// DefaultObjectACL is the list of access controls to
// apply to new objects when no object ACL is provided.
DefaultObjectACL []ACLRule
@@ -267,14 +275,8 @@ type BucketAttrs struct {
// StorageClass is the default storage class of the bucket. This defines
// how objects in the bucket are stored and determines the SLA
- // and the cost of storage. Typical values are "MULTI_REGIONAL",
- // "REGIONAL", "NEARLINE", "COLDLINE", "STANDARD" and
- // "DURABLE_REDUCED_AVAILABILITY". Defaults to "STANDARD", which
- // is equivalent to "MULTI_REGIONAL" or "REGIONAL" depending on
- // the bucket's location settings.
- //
- // "DURABLE_REDUCED_AVAILABILITY", "MULTI_REGIONAL" and "REGIONAL"
- // are considered legacy storage classes.
+ // and the cost of storage. Typical values are "NEARLINE", "COLDLINE" and
+ // "STANDARD". Defaults to "STANDARD".
StorageClass string
// Created is the creation time of the bucket.
@@ -327,8 +329,8 @@ type BucketAttrs struct {
LocationType string
}
-// BucketPolicyOnly configures access checks to use only bucket-level IAM
-// policies.
+// BucketPolicyOnly is an alias for UniformBucketLevelAccess.
+// Use of UniformBucketLevelAccess is preferred above BucketPolicyOnly.
type BucketPolicyOnly struct {
// Enabled specifies whether access checks use only bucket-level IAM
// policies. Enabled may be disabled until the locked time.
@@ -338,6 +340,17 @@ type BucketPolicyOnly struct {
LockedTime time.Time
}
+// UniformBucketLevelAccess configures access checks to use only bucket-level IAM
+// policies.
+type UniformBucketLevelAccess struct {
+ // Enabled specifies whether access checks use only bucket-level IAM
+ // policies. Enabled may be disabled until the locked time.
+ Enabled bool
+ // LockedTime specifies the deadline for changing Enabled from true to
+ // false.
+ LockedTime time.Time
+}
+
// Lifecycle is the lifecycle configuration for objects in the bucket.
type Lifecycle struct {
Rules []LifecycleRule
@@ -446,8 +459,7 @@ type LifecycleCondition struct {
// MatchesStorageClasses is the condition matching the object's storage
// class.
//
- // Values include "MULTI_REGIONAL", "REGIONAL", "NEARLINE", "COLDLINE",
- // "STANDARD", and "DURABLE_REDUCED_AVAILABILITY".
+ // Values include "NEARLINE", "COLDLINE" and "STANDARD".
MatchesStorageClasses []string
// NumNewerVersions is the condition matching objects with a number of newer versions.
@@ -495,26 +507,27 @@ func newBucket(b *raw.Bucket) (*BucketAttrs, error) {
return nil, err
}
return &BucketAttrs{
- Name: b.Name,
- Location: b.Location,
- MetaGeneration: b.Metageneration,
- DefaultEventBasedHold: b.DefaultEventBasedHold,
- StorageClass: b.StorageClass,
- Created: convertTime(b.TimeCreated),
- VersioningEnabled: b.Versioning != nil && b.Versioning.Enabled,
- ACL: toBucketACLRules(b.Acl),
- DefaultObjectACL: toObjectACLRules(b.DefaultObjectAcl),
- Labels: b.Labels,
- RequesterPays: b.Billing != nil && b.Billing.RequesterPays,
- Lifecycle: toLifecycle(b.Lifecycle),
- RetentionPolicy: rp,
- CORS: toCORS(b.Cors),
- Encryption: toBucketEncryption(b.Encryption),
- Logging: toBucketLogging(b.Logging),
- Website: toBucketWebsite(b.Website),
- BucketPolicyOnly: toBucketPolicyOnly(b.IamConfiguration),
- Etag: b.Etag,
- LocationType: b.LocationType,
+ Name: b.Name,
+ Location: b.Location,
+ MetaGeneration: b.Metageneration,
+ DefaultEventBasedHold: b.DefaultEventBasedHold,
+ StorageClass: b.StorageClass,
+ Created: convertTime(b.TimeCreated),
+ VersioningEnabled: b.Versioning != nil && b.Versioning.Enabled,
+ ACL: toBucketACLRules(b.Acl),
+ DefaultObjectACL: toObjectACLRules(b.DefaultObjectAcl),
+ Labels: b.Labels,
+ RequesterPays: b.Billing != nil && b.Billing.RequesterPays,
+ Lifecycle: toLifecycle(b.Lifecycle),
+ RetentionPolicy: rp,
+ CORS: toCORS(b.Cors),
+ Encryption: toBucketEncryption(b.Encryption),
+ Logging: toBucketLogging(b.Logging),
+ Website: toBucketWebsite(b.Website),
+ BucketPolicyOnly: toBucketPolicyOnly(b.IamConfiguration),
+ UniformBucketLevelAccess: toUniformBucketLevelAccess(b.IamConfiguration),
+ Etag: b.Etag,
+ LocationType: b.LocationType,
}, nil
}
@@ -540,9 +553,9 @@ func (b *BucketAttrs) toRawBucket() *raw.Bucket {
bb = &raw.BucketBilling{RequesterPays: true}
}
var bktIAM *raw.BucketIamConfiguration
- if b.BucketPolicyOnly.Enabled {
+ if b.UniformBucketLevelAccess.Enabled || b.BucketPolicyOnly.Enabled {
bktIAM = &raw.BucketIamConfiguration{
- BucketPolicyOnly: &raw.BucketIamConfigurationBucketPolicyOnly{
+ UniformBucketLevelAccess: &raw.BucketIamConfigurationUniformBucketLevelAccess{
Enabled: true,
},
}
@@ -609,10 +622,20 @@ type BucketAttrsToUpdate struct {
// newly created objects in this bucket.
DefaultEventBasedHold optional.Bool
- // BucketPolicyOnly configures access checks to use only bucket-level IAM
- // policies.
+ // BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of
+ // UniformBucketLevelAccess is recommended above the use of this field.
+ // Setting BucketPolicyOnly.Enabled OR UniformBucketLevelAccess.Enabled to
+	// true will enable UniformBucketLevelAccess. If both BucketPolicyOnly and
+ // UniformBucketLevelAccess are set, the value of UniformBucketLevelAccess
+ // will take precedence.
BucketPolicyOnly *BucketPolicyOnly
+ // UniformBucketLevelAccess configures access checks to use only bucket-level IAM
+ // policies and ignore any ACL rules for the bucket.
+ // See https://cloud.google.com/storage/docs/uniform-bucket-level-access
+ // for more information.
+ UniformBucketLevelAccess *UniformBucketLevelAccess
+
// If set, updates the retention policy of the bucket. Using
// RetentionPolicy.RetentionPeriod = 0 will delete the existing policy.
//
@@ -701,8 +724,17 @@ func (ua *BucketAttrsToUpdate) toRawBucket() *raw.Bucket {
}
if ua.BucketPolicyOnly != nil {
rb.IamConfiguration = &raw.BucketIamConfiguration{
- BucketPolicyOnly: &raw.BucketIamConfigurationBucketPolicyOnly{
- Enabled: ua.BucketPolicyOnly.Enabled,
+ UniformBucketLevelAccess: &raw.BucketIamConfigurationUniformBucketLevelAccess{
+ Enabled: ua.BucketPolicyOnly.Enabled,
+ ForceSendFields: []string{"Enabled"},
+ },
+ }
+ }
+ if ua.UniformBucketLevelAccess != nil {
+ rb.IamConfiguration = &raw.BucketIamConfiguration{
+ UniformBucketLevelAccess: &raw.BucketIamConfigurationUniformBucketLevelAccess{
+ Enabled: ua.UniformBucketLevelAccess.Enabled,
+ ForceSendFields: []string{"Enabled"},
},
}
}
@@ -1041,8 +1073,26 @@ func toBucketPolicyOnly(b *raw.BucketIamConfiguration) BucketPolicyOnly {
}
}
+func toUniformBucketLevelAccess(b *raw.BucketIamConfiguration) UniformBucketLevelAccess {
+ if b == nil || b.UniformBucketLevelAccess == nil || !b.UniformBucketLevelAccess.Enabled {
+ return UniformBucketLevelAccess{}
+ }
+ lt, err := time.Parse(time.RFC3339, b.UniformBucketLevelAccess.LockedTime)
+ if err != nil {
+ return UniformBucketLevelAccess{
+ Enabled: true,
+ }
+ }
+ return UniformBucketLevelAccess{
+ Enabled: true,
+ LockedTime: lt,
+ }
+}
+
// Objects returns an iterator over the objects in the bucket that match the Query q.
// If q is nil, no filtering is done.
+//
+// Note: The returned iterator is not safe for concurrent operations without explicit synchronization.
func (b *BucketHandle) Objects(ctx context.Context, q *Query) *ObjectIterator {
it := &ObjectIterator{
ctx: ctx,
@@ -1059,6 +1109,8 @@ func (b *BucketHandle) Objects(ctx context.Context, q *Query) *ObjectIterator {
}
// An ObjectIterator is an iterator over ObjectAttrs.
+//
+// Note: This iterator is not safe for concurrent operations without explicit synchronization.
type ObjectIterator struct {
ctx context.Context
bucket *BucketHandle
@@ -1069,6 +1121,8 @@ type ObjectIterator struct {
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
+//
+// Note: This method is not safe for concurrent operations without explicit synchronization.
func (it *ObjectIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
// Next returns the next result. Its second return value is iterator.Done if
@@ -1078,6 +1132,8 @@ func (it *ObjectIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
// If Query.Delimiter is non-empty, some of the ObjectAttrs returned by Next will
// have a non-empty Prefix field, and a zero value for all other fields. These
// represent prefixes.
+//
+// Note: This method is not safe for concurrent operations without explicit synchronization.
func (it *ObjectIterator) Next() (*ObjectAttrs, error) {
if err := it.nextFunc(); err != nil {
return nil, err
@@ -1126,6 +1182,8 @@ func (it *ObjectIterator) fetch(pageSize int, pageToken string) (string, error)
// optionally set the iterator's Prefix field to restrict the list to buckets
// whose names begin with the prefix. By default, all buckets in the project
// are returned.
+//
+// Note: The returned iterator is not safe for concurrent operations without explicit synchronization.
func (c *Client) Buckets(ctx context.Context, projectID string) *BucketIterator {
it := &BucketIterator{
ctx: ctx,
@@ -1136,10 +1194,13 @@ func (c *Client) Buckets(ctx context.Context, projectID string) *BucketIterator
it.fetch,
func() int { return len(it.buckets) },
func() interface{} { b := it.buckets; it.buckets = nil; return b })
+
return it
}
// A BucketIterator is an iterator over BucketAttrs.
+//
+// Note: This iterator is not safe for concurrent operations without explicit synchronization.
type BucketIterator struct {
// Prefix restricts the iterator to buckets whose names begin with it.
Prefix string
@@ -1155,6 +1216,8 @@ type BucketIterator struct {
// Next returns the next result. Its second return value is iterator.Done if
// there are no more results. Once Next returns iterator.Done, all subsequent
// calls will return iterator.Done.
+//
+// Note: This method is not safe for concurrent operations without explicit synchronization.
func (it *BucketIterator) Next() (*BucketAttrs, error) {
if err := it.nextFunc(); err != nil {
return nil, err
@@ -1165,6 +1228,8 @@ func (it *BucketIterator) Next() (*BucketAttrs, error) {
}
// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
+//
+// Note: This method is not safe for concurrent operations without explicit synchronization.
func (it *BucketIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
func (it *BucketIterator) fetch(pageSize int, pageToken string) (token string, err error) {
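
The UniformBucketLevelAccess field added above is easiest to see in use. A minimal sketch of enabling it on an existing bucket, assuming default credentials; "my-bucket" is a placeholder name:

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Enable uniform bucket-level access; ACL rules are then ignored.
	attrs, err := client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		UniformBucketLevelAccess: &storage.UniformBucketLevelAccess{Enabled: true},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uniform bucket-level access enabled: %v", attrs.UniformBucketLevelAccess.Enabled)
}
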
diff --git a/vendor/cloud.google.com/go/storage/go.mod b/vendor/cloud.google.com/go/storage/go.mod
new file mode 100644
index 0000000000000..707aca68a3e88
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/go.mod
@@ -0,0 +1,17 @@
+module cloud.google.com/go/storage
+
+go 1.11
+
+require (
+ cloud.google.com/go v0.46.3
+ github.com/golang/protobuf v1.3.2
+ github.com/google/go-cmp v0.3.0
+ github.com/googleapis/gax-go/v2 v2.0.5
+ golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136 // indirect
+ golang.org/x/lint v0.0.0-20190930215403-16217165b5de // indirect
+ golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
+ golang.org/x/tools v0.0.0-20191111182352-50fa39b762bc // indirect
+ google.golang.org/api v0.13.0
+ google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a
+ google.golang.org/grpc v1.21.1
+)
diff --git a/vendor/cloud.google.com/go/storage/go.sum b/vendor/cloud.google.com/go/storage/go.sum
new file mode 100644
index 0000000000000..49e8e49663ead
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/go.sum
@@ -0,0 +1,169 @@
+cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
+cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
+cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
+cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
+cloud.google.com/go v0.46.3 h1:AVXDdKsrtX33oR9fbCMu/+c1o8Ofjq6Ku/MInaLVg5Y=
+cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
+cloud.google.com/go/bigquery v1.0.1 h1:hL+ycaJpVE9M7nLoiXb/Pn10ENE2u+oddxbD8uu0ZVU=
+cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
+cloud.google.com/go/datastore v1.0.0 h1:Kt+gOPPp2LEPWp8CSfxhsM8ik9CcyE/gYu+0r+RnZvM=
+cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
+cloud.google.com/go/pubsub v1.0.1 h1:W9tAK3E57P75u0XLLR82LZyw8VpAnhmyTOxW9qzmyj8=
+cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
+dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
+github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
+github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
+github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
+github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
+github.com/golang/mock v1.3.1/go.mod h1:sBzyDLLjw3U8JLTeZvSv8jJB+tU5PVekmnlKIyFUx0Y=
+github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.3.2 h1:6nsPYzhq5kReh6QImI3k5qWzO4PEbvbIW2cwSfR/6xs=
+github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
+github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
+github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/google/martian v2.1.0+incompatible h1:/CP5g8u/VJHijgedC/Legn3BAbAaWPgecwXBIDzw5no=
+github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
+github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
+github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
+github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
+github.com/googleapis/gax-go/v2 v2.0.5 h1:sjZBwGj9Jlw33ImPtvFviGYvseOtDM7hkSKB7+Tv3SM=
+github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
+github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/hashicorp/golang-lru v0.5.1 h1:0hERBMJE1eitiLkihrMvRVBYAkpHzc/J3QdDN+dAcgU=
+github.com/hashicorp/golang-lru v0.5.1/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024 h1:rBMNdlhTLzJjJSDIjNEXX1Pz3Hmwmz91v+zycvx9PJc=
+github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
+github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
+go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
+go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
+go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
+golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979 h1:Agxu5KLo8o7Bb634SVDnhIfpTvxmzUwhbYAzBvXt6h4=
+golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136 h1:A1gGSx58LAGVHUUsOf7IiR0u8Xb6W51gRwfDBhkdcaw=
+golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136/go.mod h1:JXzH8nQsPlswgeRAPE3MuO9GYsAcnJvJ4vnMwN/5qkY=
+golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
+golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
+golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
+golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
+golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac h1:8R1esu+8QioDxo4E4mX6bFztO+dMTM49DNAaWfO5OeY=
+golang.org/x/lint v0.0.0-20190909230951-414d861bb4ac/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
+golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
+golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
+golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
+golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859 h1:R/3boaszxrf1GEUWTVDzSKVwLmSJpwZ1yqXm8j0v2QI=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
+golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
+golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
+golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58 h1:8gQV6CLnAEikrhgkHFbMAEhagSSnXWGV915qUMm9mrU=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
+golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312151545-0bb0c0a6e846/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190312170243-e65039ee4138/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
+golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff h1:On1qIo75ByTwFJ4/W2bIqHcwJ9XAqtSWUs8GwRrIhtc=
+golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191012152004-8de300cfc20a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/tools v0.0.0-20191111182352-50fa39b762bc h1:V4uzEpfPev8S0NzQIqZE2HsWLn0BP+3KeizTW0xmsUg=
+golang.org/x/tools v0.0.0-20191111182352-50fa39b762bc/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
+google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
+google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
+google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.9.0 h1:jbyannxz0XFD3zdjgrSUsaJbgpH4eTrkdhRChkHPfO8=
+google.golang.org/api v0.9.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
+google.golang.org/api v0.13.0 h1:Q3Ui3V3/CVinFWFiW39Iw0kMuVrRzYX0wN6OPFp0lTA=
+google.golang.org/api v0.13.0/go.mod h1:iLdEw5Ide6rF15KTC1Kkl0iskquN2gFfn9o9XIsbkAI=
+google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
+google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
+google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
+google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
+google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
+google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51 h1:Ex1mq5jaJof+kRnYi3SlYJ8KKa9Ao3NHyIT5XJ1gF6U=
+google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a h1:Ob5/580gVHBJZgXnff1cZDbG+xLtMVE5mDRTe+nIsX4=
+google.golang.org/genproto v0.0.0-20191108220845-16a3f7862a1a/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
+google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+google.golang.org/grpc v1.21.1 h1:j6XxA85m/6txkUCHvzlV5f+HBNl/1r5cZ2A/3IEFOO8=
+google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
+gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
+honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
+honnef.co/go/tools v0.0.1-2019.2.3 h1:3JgtbtFHMiCmsznwGVTUWbgGov+pVqnlf1dEJTNAXeM=
+honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
+rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
diff --git a/vendor/cloud.google.com/go/storage/go_mod_tidy_hack.go b/vendor/cloud.google.com/go/storage/go_mod_tidy_hack.go
new file mode 100644
index 0000000000000..7df7a1d7155a2
--- /dev/null
+++ b/vendor/cloud.google.com/go/storage/go_mod_tidy_hack.go
@@ -0,0 +1,22 @@
+// Copyright 2019 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// This file, and the cloud.google.com/go import, won't actually become part of
+// the resultant binary.
+// +build modhack
+
+package storage
+
+// Necessary for safely adding multi-module repo. See: https://github.com/golang/go/wiki/Modules#is-it-possible-to-add-a-module-to-a-multi-module-repository
+import _ "cloud.google.com/go"
diff --git a/vendor/cloud.google.com/go/storage/hmac.go b/vendor/cloud.google.com/go/storage/hmac.go
index a906d44fe013e..7d8185f37b810 100644
--- a/vendor/cloud.google.com/go/storage/hmac.go
+++ b/vendor/cloud.google.com/go/storage/hmac.go
@@ -20,10 +20,13 @@ import (
"fmt"
"time"
+ "google.golang.org/api/iterator"
raw "google.golang.org/api/storage/v1"
)
// HMACState is the state of the HMAC key.
+//
+// This type is EXPERIMENTAL and subject to change or removal without notice.
type HMACState string
const (
@@ -47,7 +50,7 @@ const (
// HMAC keys are used to authenticate signed access to objects. To enable HMAC key
// authentication, please visit https://cloud.google.com/storage/docs/migrating.
//
-// This type is experimental and subject to change.
+// This type is EXPERIMENTAL and subject to change or removal without notice.
type HMACKey struct {
// The HMAC's secret key.
Secret string
@@ -82,7 +85,7 @@ type HMACKey struct {
// HMACKeyHandle helps provide access and management for HMAC keys.
//
-// This type is experimental and subject to change.
+// This type is EXPERIMENTAL and subject to change or removal without notice.
type HMACKeyHandle struct {
projectID string
accessID string
@@ -91,6 +94,8 @@ type HMACKeyHandle struct {
}
// HMACKeyHandle creates a handle that will be used for HMACKey operations.
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
func (c *Client) HMACKeyHandle(projectID, accessID string) *HMACKeyHandle {
return &HMACKeyHandle{
projectID: projectID,
@@ -101,8 +106,22 @@ func (c *Client) HMACKeyHandle(projectID, accessID string) *HMACKeyHandle {
// Get invokes an RPC to retrieve the HMAC key referenced by the
// HMACKeyHandle's accessID.
-func (hkh *HMACKeyHandle) Get(ctx context.Context) (*HMACKey, error) {
+//
+// Options such as UserProjectForHMACKeys can be used to set the
+// userProject to be billed against for operations.
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (hkh *HMACKeyHandle) Get(ctx context.Context, opts ...HMACKeyOption) (*HMACKey, error) {
call := hkh.raw.Get(hkh.projectID, hkh.accessID)
+
+ desc := new(hmacKeyDesc)
+ for _, opt := range opts {
+ opt.withHMACKeyDesc(desc)
+ }
+ if desc.userProjectID != "" {
+ call = call.UserProject(desc.userProjectID)
+ }
+
setClientHeader(call.Header())
var metadata *raw.HmacKeyMetadata
@@ -124,8 +143,17 @@ func (hkh *HMACKeyHandle) Get(ctx context.Context) (*HMACKey, error) {
// Delete invokes an RPC to delete the key referenced by accessID, on Google Cloud Storage.
// Only inactive HMAC keys can be deleted.
// After deletion, a key cannot be used to authenticate requests.
-func (hkh *HMACKeyHandle) Delete(ctx context.Context) error {
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (hkh *HMACKeyHandle) Delete(ctx context.Context, opts ...HMACKeyOption) error {
delCall := hkh.raw.Delete(hkh.projectID, hkh.accessID)
+ desc := new(hmacKeyDesc)
+ for _, opt := range opts {
+ opt.withHMACKeyDesc(desc)
+ }
+ if desc.userProjectID != "" {
+ delCall = delCall.UserProject(desc.userProjectID)
+ }
setClientHeader(delCall.Header())
return runWithRetry(ctx, func() error {
@@ -133,7 +161,7 @@ func (hkh *HMACKeyHandle) Delete(ctx context.Context) error {
})
}
-func pbHmacKeyToHMACKey(pb *raw.HmacKey, isCreate bool) (*HMACKey, error) {
+func pbHmacKeyToHMACKey(pb *raw.HmacKey, updatedTimeCanBeNil bool) (*HMACKey, error) {
pbmd := pb.Metadata
if pbmd == nil {
return nil, errors.New("field Metadata cannot be nil")
@@ -143,7 +171,7 @@ func pbHmacKeyToHMACKey(pb *raw.HmacKey, isCreate bool) (*HMACKey, error) {
return nil, fmt.Errorf("field CreatedTime: %v", err)
}
updatedTime, err := time.Parse(time.RFC3339, pbmd.Updated)
- if err != nil && !isCreate {
+ if err != nil && !updatedTimeCanBeNil {
return nil, fmt.Errorf("field UpdatedTime: %v", err)
}
@@ -164,7 +192,9 @@ func pbHmacKeyToHMACKey(pb *raw.HmacKey, isCreate bool) (*HMACKey, error) {
}
// CreateHMACKey invokes an RPC for Google Cloud Storage to create a new HMACKey.
-func (c *Client) CreateHMACKey(ctx context.Context, projectID, serviceAccountEmail string) (*HMACKey, error) {
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (c *Client) CreateHMACKey(ctx context.Context, projectID, serviceAccountEmail string, opts ...HMACKeyOption) (*HMACKey, error) {
if projectID == "" {
return nil, errors.New("storage: expecting a non-blank projectID")
}
@@ -174,6 +204,14 @@ func (c *Client) CreateHMACKey(ctx context.Context, projectID, serviceAccountEma
svc := raw.NewProjectsHmacKeysService(c.raw)
call := svc.Create(projectID, serviceAccountEmail)
+ desc := new(hmacKeyDesc)
+ for _, opt := range opts {
+ opt.withHMACKeyDesc(desc)
+ }
+ if desc.userProjectID != "" {
+ call = call.UserProject(desc.userProjectID)
+ }
+
setClientHeader(call.Header())
var hkPb *raw.HmacKey
@@ -190,6 +228,8 @@ func (c *Client) CreateHMACKey(ctx context.Context, projectID, serviceAccountEma
}
// HMACKeyAttrsToUpdate defines the attributes of an HMACKey that will be updated.
+//
+// This type is EXPERIMENTAL and subject to change or removal without notice.
type HMACKeyAttrsToUpdate struct {
// State is required and must be either StateActive or StateInactive.
State HMACState
@@ -199,7 +239,9 @@ type HMACKeyAttrsToUpdate struct {
}
// Update mutates the HMACKey referred to by accessID.
-func (h *HMACKeyHandle) Update(ctx context.Context, au HMACKeyAttrsToUpdate) (*HMACKey, error) {
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (h *HMACKeyHandle) Update(ctx context.Context, au HMACKeyAttrsToUpdate, opts ...HMACKeyOption) (*HMACKey, error) {
if au.State != Active && au.State != Inactive {
return nil, fmt.Errorf("storage: invalid state %q for update, must be either %q or %q", au.State, Active, Inactive)
}
@@ -208,6 +250,14 @@ func (h *HMACKeyHandle) Update(ctx context.Context, au HMACKeyAttrsToUpdate) (*H
Etag: au.Etag,
State: string(au.State),
})
+
+ desc := new(hmacKeyDesc)
+ for _, opt := range opts {
+ opt.withHMACKeyDesc(desc)
+ }
+ if desc.userProjectID != "" {
+ call = call.UserProject(desc.userProjectID)
+ }
setClientHeader(call.Header())
var metadata *raw.HmacKeyMetadata
@@ -225,3 +275,167 @@ func (h *HMACKeyHandle) Update(ctx context.Context, au HMACKeyAttrsToUpdate) (*H
}
return pbHmacKeyToHMACKey(hkPb, false)
}
+
+// An HMACKeysIterator is an iterator over HMACKeys.
+//
+// Note: This iterator is not safe for concurrent operations without explicit synchronization.
+//
+// This type is EXPERIMENTAL and subject to change or removal without notice.
+type HMACKeysIterator struct {
+ ctx context.Context
+ raw *raw.ProjectsHmacKeysService
+ projectID string
+ hmacKeys []*HMACKey
+ pageInfo *iterator.PageInfo
+ nextFunc func() error
+ index int
+ desc hmacKeyDesc
+}
+
+// ListHMACKeys returns an iterator for listing HMACKeys.
+//
+// Note: This iterator is not safe for concurrent operations without explicit synchronization.
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (c *Client) ListHMACKeys(ctx context.Context, projectID string, opts ...HMACKeyOption) *HMACKeysIterator {
+ it := &HMACKeysIterator{
+ ctx: ctx,
+ raw: raw.NewProjectsHmacKeysService(c.raw),
+ projectID: projectID,
+ }
+
+ for _, opt := range opts {
+ opt.withHMACKeyDesc(&it.desc)
+ }
+
+ it.pageInfo, it.nextFunc = iterator.NewPageInfo(
+ it.fetch,
+ func() int { return len(it.hmacKeys) - it.index },
+ func() interface{} {
+ prev := it.hmacKeys
+ it.hmacKeys = it.hmacKeys[:0]
+ it.index = 0
+ return prev
+ })
+ return it
+}
+
+// Next returns the next result. Its second return value is iterator.Done if
+// there are no more results. Once Next returns iterator.Done, all subsequent
+// calls will return iterator.Done.
+//
+// Note: This iterator is not safe for concurrent operations without explicit synchronization.
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (it *HMACKeysIterator) Next() (*HMACKey, error) {
+ if err := it.nextFunc(); err != nil {
+ return nil, err
+ }
+
+ key := it.hmacKeys[it.index]
+ it.index++
+
+ return key, nil
+}
+
+// PageInfo supports pagination. See the google.golang.org/api/iterator package for details.
+//
+// Note: This iterator is not safe for concurrent operations without explicit synchronization.
+//
+// This method is EXPERIMENTAL and subject to change or removal without notice.
+func (it *HMACKeysIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }
+
+func (it *HMACKeysIterator) fetch(pageSize int, pageToken string) (token string, err error) {
+ call := it.raw.List(it.projectID)
+ setClientHeader(call.Header())
+ if pageToken != "" {
+ call = call.PageToken(pageToken)
+ }
+ if it.desc.showDeletedKeys {
+ call = call.ShowDeletedKeys(true)
+ }
+ if it.desc.userProjectID != "" {
+ call = call.UserProject(it.desc.userProjectID)
+ }
+ if it.desc.forServiceAccountEmail != "" {
+ call = call.ServiceAccountEmail(it.desc.forServiceAccountEmail)
+ }
+ if pageSize > 0 {
+ call = call.MaxResults(int64(pageSize))
+ }
+
+ ctx := it.ctx
+ var resp *raw.HmacKeysMetadata
+ err = runWithRetry(it.ctx, func() error {
+ resp, err = call.Context(ctx).Do()
+ return err
+ })
+ if err != nil {
+ return "", err
+ }
+
+ for _, metadata := range resp.Items {
+ hkPb := &raw.HmacKey{
+ Metadata: metadata,
+ }
+ hkey, err := pbHmacKeyToHMACKey(hkPb, true)
+ if err != nil {
+ return "", err
+ }
+ it.hmacKeys = append(it.hmacKeys, hkey)
+ }
+ return resp.NextPageToken, nil
+}
+
+type hmacKeyDesc struct {
+ forServiceAccountEmail string
+ showDeletedKeys bool
+ userProjectID string
+}
+
+// HMACKeyOption configures the behavior of HMACKey related methods and actions.
+//
+// This interface is EXPERIMENTAL and subject to change or removal without notice.
+type HMACKeyOption interface {
+ withHMACKeyDesc(*hmacKeyDesc)
+}
+
+type hmacKeyDescFunc func(*hmacKeyDesc)
+
+func (hkdf hmacKeyDescFunc) withHMACKeyDesc(hkd *hmacKeyDesc) {
+ hkdf(hkd)
+}
+
+// ForHMACKeyServiceAccountEmail returns HMAC Keys that are
+// associated with the email address of a service account in the project.
+//
+// Only one service account email can be used as a filter, so if multiple
+// of these options are applied, the last email to be set will be used.
+//
+// This option is EXPERIMENTAL and subject to change or removal without notice.
+func ForHMACKeyServiceAccountEmail(serviceAccountEmail string) HMACKeyOption {
+ return hmacKeyDescFunc(func(hkd *hmacKeyDesc) {
+ hkd.forServiceAccountEmail = serviceAccountEmail
+ })
+}
+
+// ShowDeletedHMACKeys will also list keys whose state is "DELETED".
+//
+// This option is EXPERIMENTAL and subject to change or removal without notice.
+func ShowDeletedHMACKeys() HMACKeyOption {
+ return hmacKeyDescFunc(func(hkd *hmacKeyDesc) {
+ hkd.showDeletedKeys = true
+ })
+}
+
+// UserProjectForHMACKeys will bill the request against userProjectID
+// if userProjectID is non-empty.
+//
+// Note: This is a no-op right now and is only provided for API compatibility.
+//
+// This option is EXPERIMENTAL and subject to change or removal without notice.
+func UserProjectForHMACKeys(userProjectID string) HMACKeyOption {
+ return hmacKeyDescFunc(func(hkd *hmacKeyDesc) {
+ hkd.userProjectID = userProjectID
+ })
+}
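
The new HMACKeyOption plumbing above composes as variadic options on the HMAC methods. A sketch of listing keys for one service account, including deleted ones; the project ID and email are placeholders:

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func listHMACKeys(ctx context.Context, client *storage.Client, projectID string) error {
	it := client.ListHMACKeys(ctx, projectID,
		storage.ForHMACKeyServiceAccountEmail("sa@example.iam.gserviceaccount.com"),
		storage.ShowDeletedHMACKeys(),
	)
	for {
		key, err := it.Next()
		if err == iterator.Done {
			return nil
		}
		if err != nil {
			return err
		}
		// AccessID and State are fields of the HMACKey type in this version.
		fmt.Printf("%s: %s\n", key.AccessID, key.State)
	}
}
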
diff --git a/vendor/cloud.google.com/go/storage/reader.go b/vendor/cloud.google.com/go/storage/reader.go
index dbed8ac1bc99d..5c83651bd9b7f 100644
--- a/vendor/cloud.google.com/go/storage/reader.go
+++ b/vendor/cloud.google.com/go/storage/reader.go
@@ -43,6 +43,11 @@ type ReaderObjectAttrs struct {
// Size is the length of the object's content.
Size int64
+ // StartOffset is the byte offset within the object
+ // from which reading begins.
+ // This value is only non-zero for range requests.
+ StartOffset int64
+
// ContentType is the MIME type of the object's content.
ContentType string
@@ -78,7 +83,9 @@ func (o *ObjectHandle) NewReader(ctx context.Context) (*Reader, error) {
// NewRangeReader reads part of an object, reading at most length bytes
// starting at the given offset. If length is negative, the object is read
-// until the end.
+// until the end. If offset is negative, the object is read abs(offset) bytes
+// from the end, and length must also be negative to indicate all remaining
+// bytes will be read.
func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64) (r *Reader, err error) {
ctx = trace.StartSpan(ctx, "cloud.google.com/go/storage.Object.NewRangeReader")
defer func() { trace.EndSpan(ctx, err) }()
@@ -86,8 +93,8 @@ func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64)
if err := o.validate(); err != nil {
return nil, err
}
- if offset < 0 {
- return nil, fmt.Errorf("storage: invalid offset %d < 0", offset)
+ if offset < 0 && length >= 0 {
+ return nil, fmt.Errorf("storage: invalid offset %d < 0 requires negative length", offset)
}
if o.conds != nil {
if err := o.conds.validate("NewRangeReader"); err != nil {
@@ -124,7 +131,9 @@ func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64)
// have already read seen bytes.
reopen := func(seen int64) (*http.Response, error) {
start := offset + seen
- if length < 0 && start > 0 {
+ if length < 0 && start < 0 {
+ req.Header.Set("Range", fmt.Sprintf("bytes=%d", start))
+ } else if length < 0 && start > 0 {
req.Header.Set("Range", fmt.Sprintf("bytes=%d-", start))
} else if length > 0 {
// The end character isn't affected by how many bytes we've seen.
@@ -177,20 +186,28 @@ func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64)
return nil, err
}
var (
- size int64 // total size of object, even if a range was requested.
- checkCRC bool
- crc uint32
+ size int64 // total size of object, even if a range was requested.
+ checkCRC bool
+ crc uint32
+ startOffset int64 // non-zero if range request.
)
if res.StatusCode == http.StatusPartialContent {
cr := strings.TrimSpace(res.Header.Get("Content-Range"))
if !strings.HasPrefix(cr, "bytes ") || !strings.Contains(cr, "/") {
-
return nil, fmt.Errorf("storage: invalid Content-Range %q", cr)
}
size, err = strconv.ParseInt(cr[strings.LastIndex(cr, "/")+1:], 10, 64)
if err != nil {
return nil, fmt.Errorf("storage: invalid Content-Range %q", cr)
}
+
+ dashIndex := strings.Index(cr, "-")
+ if dashIndex >= 0 {
+ startOffset, err = strconv.ParseInt(cr[len("bytes="):dashIndex], 10, 64)
+ if err != nil {
+ return nil, fmt.Errorf("storage: invalid Content-Range %q: %v", cr, err)
+ }
+ }
} else {
size = res.ContentLength
// Check the CRC iff all of the following hold:
@@ -236,6 +253,7 @@ func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64)
ContentEncoding: res.Header.Get("Content-Encoding"),
CacheControl: res.Header.Get("Cache-Control"),
LastModified: lm,
+ StartOffset: startOffset,
Generation: gen,
Metageneration: metaGen,
}
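
The negative-offset semantics documented in the reader hunk above make "tail reads" a one-liner. A sketch, with placeholder bucket and object names:

import (
	"context"
	"io/ioutil"

	"cloud.google.com/go/storage"
)

// tail returns the final n bytes of an object. Per the doc comment above,
// a negative offset counts from the end and requires a negative length.
func tail(ctx context.Context, client *storage.Client, n int64) ([]byte, error) {
	r, err := client.Bucket("my-bucket").Object("my-object").NewRangeReader(ctx, -n, -1)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	// r.Attrs.StartOffset reports where the served range actually began.
	return ioutil.ReadAll(r)
}
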
diff --git a/vendor/cloud.google.com/go/storage/storage.go b/vendor/cloud.google.com/go/storage/storage.go
index d35bd7568e793..59a859bfdcd12 100644
--- a/vendor/cloud.google.com/go/storage/storage.go
+++ b/vendor/cloud.google.com/go/storage/storage.go
@@ -54,7 +54,7 @@ var (
ErrObjectNotExist = errors.New("storage: object doesn't exist")
)
-const userAgent = "gcloud-golang-storage/20151204"
+var userAgent = fmt.Sprintf("gcloud-golang-storage/%s", version.Repo)
const (
// ScopeFullControl grants permissions to manage your
@@ -94,11 +94,20 @@ type Client struct {
// NewClient creates a new Google Cloud Storage client.
// The default scope is ScopeFullControl. To use a different scope, like ScopeReadOnly, use option.WithScopes.
func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error) {
- o := []option.ClientOption{
- option.WithScopes(ScopeFullControl),
- option.WithUserAgent(userAgent),
+ var host, readHost, scheme string
+
+ if host = os.Getenv("STORAGE_EMULATOR_HOST"); host == "" {
+ scheme = "https"
+ readHost = "storage.googleapis.com"
+
+ opts = append(opts, option.WithScopes(ScopeFullControl), option.WithUserAgent(userAgent))
+ } else {
+ scheme = "http"
+ readHost = host
+
+ opts = append(opts, option.WithoutAuthentication())
}
- opts = append(o, opts...)
+
hc, ep, err := htransport.NewClient(ctx, opts...)
if err != nil {
return nil, fmt.Errorf("dialing: %v", err)
@@ -107,17 +116,14 @@ func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error
if err != nil {
return nil, fmt.Errorf("storage client: %v", err)
}
- if ep != "" {
- rawService.BasePath = ep
- }
- scheme := "https"
- var host, readHost string
- if host = os.Getenv("STORAGE_EMULATOR_HOST"); host != "" {
- scheme = "http"
- readHost = host
+ if ep == "" {
+ // Override the default value for BasePath from the raw client.
+ // TODO: remove when the raw client uses this endpoint as its default (~end of 2020)
+ rawService.BasePath = "https://storage.googleapis.com/storage/v1/"
} else {
- readHost = "storage.googleapis.com"
+ rawService.BasePath = ep
}
+
return &Client{
hc: hc,
raw: rawService,
@@ -992,10 +998,8 @@ type ObjectAttrs struct {
// StorageClass is the storage class of the object.
// This value defines how objects in the bucket are stored and
// determines the SLA and the cost of storage. Typical values are
- // "MULTI_REGIONAL", "REGIONAL", "NEARLINE", "COLDLINE", "STANDARD"
- // and "DURABLE_REDUCED_AVAILABILITY".
- // It defaults to "STANDARD", which is equivalent to "MULTI_REGIONAL"
- // or "REGIONAL" depending on the bucket's location settings.
+ // "NEARLINE", "COLDLINE" and "STANDARD".
+ // It defaults to "STANDARD".
StorageClass string
// Created is the time the object was created. This field is read-only.
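
With this change, `NewClient` switches to plain HTTP and skips authentication whenever `STORAGE_EMULATOR_HOST` is set. A minimal sketch of pointing the client at a local emulator; the host/port and bucket name are placeholders, assuming something like fake-gcs-server is listening there:

```go
package main

import (
	"context"
	"log"
	"os"

	"cloud.google.com/go/storage"
)

func main() {
	// With the env var set, NewClient uses http://, treats the emulator as
	// the read host, and applies option.WithoutAuthentication() internally.
	os.Setenv("STORAGE_EMULATOR_HOST", "localhost:4443") // placeholder address
	client, err := storage.NewClient(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	_ = client.Bucket("test-bucket") // placeholder bucket name
}
```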
diff --git a/vendor/cloud.google.com/go/storage/writer.go b/vendor/cloud.google.com/go/storage/writer.go
index 0f52414b84c7e..a116592128707 100644
--- a/vendor/cloud.google.com/go/storage/writer.go
+++ b/vendor/cloud.google.com/go/storage/writer.go
@@ -148,21 +148,16 @@ func (w *Writer) open() error {
call.UserProject(w.o.userProject)
}
setClientHeader(call.Header())
- // If the chunk size is zero, then no chunking is done on the Reader,
- // which means we cannot retry: the first call will read the data, and if
- // it fails, there is no way to re-read.
- if w.ChunkSize == 0 {
- resp, err = call.Do()
- } else {
- // We will only retry here if the initial POST, which obtains a URI for
- // the resumable upload, fails with a retryable error. The upload itself
- // has its own retry logic.
- err = runWithRetry(w.ctx, func() error {
- var err2 error
- resp, err2 = call.Do()
- return err2
- })
- }
+
+ // The internals that perform call.Do automatically retry
+ // uploading chunks, hence no need to add retries here.
+ // See issue https://github.com/googleapis/google-cloud-go/issues/1507.
+ //
+ // However, since this whole call's internals involve making the initial
+ // resumable upload session, the first HTTP request is not retried.
+ // TODO: Follow-up with google.golang.org/gensupport to solve
+ // https://github.com/googleapis/google-api-go-client/issues/392.
+ resp, err = call.Do()
}
if err != nil {
w.mu.Lock()
@@ -231,7 +226,7 @@ func (w *Writer) Close() error {
}
// monitorCancel is intended to be used as a background goroutine. It monitors the
-// the context, and when it observes that the context has been canceled, it manually
+// context, and when it observes that the context has been canceled, it manually
// closes things that do not take a context.
func (w *Writer) monitorCancel() {
select {
diff --git a/vendor/cloud.google.com/go/tidyall.sh b/vendor/cloud.google.com/go/tidyall.sh
new file mode 100644
index 0000000000000..e9bc9d6a3fb2f
--- /dev/null
+++ b/vendor/cloud.google.com/go/tidyall.sh
@@ -0,0 +1,28 @@
+#!/bin/bash
+# Copyright 2019 Google Inc.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Run at repo root.
+
+go mod tidy
+for i in `find . -name go.mod`; do
+pushd `dirname $i`;
+ # Update genproto and api to latest for every module (latest version is
+ # always correct version). tidy will remove the dependencies if they're not
+ # actually used by the client.
+ go get -u google.golang.org/api
+ go get -u google.golang.org/genproto
+ go mod tidy;
+popd;
+done
diff --git a/vendor/cloud.google.com/go/tools.go b/vendor/cloud.google.com/go/tools.go
new file mode 100644
index 0000000000000..fa01cc44ccc4b
--- /dev/null
+++ b/vendor/cloud.google.com/go/tools.go
@@ -0,0 +1,33 @@
+// +build tools
+
+// Copyright 2018 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// This package exists to cause `go mod` and `go get` to believe these tools
+// are dependencies, even though they are not runtime dependencies of any
+// package (these are tools used by our CI builds). This means they will appear
+// in our `go.mod` file, but will not be a part of the build. Also, since the
+// build target is something non-existent, these should not be included in any
+// binaries.
+
+package cloud
+
+import (
+ _ "github.com/golang/protobuf/protoc-gen-go"
+ _ "github.com/jstemmer/go-junit-report"
+ _ "golang.org/x/exp/cmd/apidiff"
+ _ "golang.org/x/lint/golint"
+ _ "golang.org/x/tools/cmd/goimports"
+ _ "honnef.co/go/tools/cmd/staticcheck"
+)

diff --git a/vendor/github.com/Azure/go-autorest/autorest/adal/token.go b/vendor/github.com/Azure/go-autorest/autorest/adal/token.go
index 7c7fca3718f3e..33bbd6ea1505d 100644
--- a/vendor/github.com/Azure/go-autorest/autorest/adal/token.go
+++ b/vendor/github.com/Azure/go-autorest/autorest/adal/token.go
@@ -106,6 +106,9 @@ type RefresherWithContext interface {
// a successful token refresh
type TokenRefreshCallback func(Token) error
+// TokenRefresh is a type representing a custom callback to refresh a token
+type TokenRefresh func(ctx context.Context, resource string) (*Token, error)
+
// Token encapsulates the access token used to authorize Azure requests.
// https://docs.microsoft.com/en-us/azure/active-directory/develop/v1-oauth2-client-creds-grant-flow#service-to-service-access-token-response
type Token struct {
@@ -344,10 +347,11 @@ func (secret ServicePrincipalAuthorizationCodeSecret) MarshalJSON() ([]byte, err
// ServicePrincipalToken encapsulates a Token created for a Service Principal.
type ServicePrincipalToken struct {
- inner servicePrincipalToken
- refreshLock *sync.RWMutex
- sender Sender
- refreshCallbacks []TokenRefreshCallback
+ inner servicePrincipalToken
+ refreshLock *sync.RWMutex
+ sender Sender
+ customRefreshFunc TokenRefresh
+ refreshCallbacks []TokenRefreshCallback
// MaxMSIRefreshAttempts is the maximum number of attempts to refresh an MSI token.
MaxMSIRefreshAttempts int
}
@@ -362,6 +366,11 @@ func (spt *ServicePrincipalToken) SetRefreshCallbacks(callbacks []TokenRefreshCa
spt.refreshCallbacks = callbacks
}
+// SetCustomRefreshFunc sets a custom refresh function used to refresh the token.
+func (spt *ServicePrincipalToken) SetCustomRefreshFunc(customRefreshFunc TokenRefresh) {
+ spt.customRefreshFunc = customRefreshFunc
+}
+
// MarshalJSON implements the json.Marshaler interface.
func (spt ServicePrincipalToken) MarshalJSON() ([]byte, error) {
return json.Marshal(spt.inner)
@@ -786,13 +795,13 @@ func (spt *ServicePrincipalToken) InvokeRefreshCallbacks(token Token) error {
}
// Refresh obtains a fresh token for the Service Principal.
-// This method is not safe for concurrent use and should be syncrhonized.
+// This method is safe for concurrent use.
func (spt *ServicePrincipalToken) Refresh() error {
return spt.RefreshWithContext(context.Background())
}
// RefreshWithContext obtains a fresh token for the Service Principal.
-// This method is not safe for concurrent use and should be syncrhonized.
+// This method is safe for concurrent use.
func (spt *ServicePrincipalToken) RefreshWithContext(ctx context.Context) error {
spt.refreshLock.Lock()
defer spt.refreshLock.Unlock()
@@ -800,13 +809,13 @@ func (spt *ServicePrincipalToken) RefreshWithContext(ctx context.Context) error
}
// RefreshExchange refreshes the token, but for a different resource.
-// This method is not safe for concurrent use and should be syncrhonized.
+// This method is safe for concurrent use.
func (spt *ServicePrincipalToken) RefreshExchange(resource string) error {
return spt.RefreshExchangeWithContext(context.Background(), resource)
}
// RefreshExchangeWithContext refreshes the token, but for a different resource.
-// This method is not safe for concurrent use and should be syncrhonized.
+// This method is safe for concurrent use.
func (spt *ServicePrincipalToken) RefreshExchangeWithContext(ctx context.Context, resource string) error {
spt.refreshLock.Lock()
defer spt.refreshLock.Unlock()
@@ -833,6 +842,15 @@ func isIMDS(u url.URL) bool {
}
func (spt *ServicePrincipalToken) refreshInternal(ctx context.Context, resource string) error {
+ if spt.customRefreshFunc != nil {
+ token, err := spt.customRefreshFunc(ctx, resource)
+ if err != nil {
+ return err
+ }
+ spt.inner.Token = *token
+ return spt.InvokeRefreshCallbacks(spt.inner.Token)
+ }
+
req, err := http.NewRequest(http.MethodPost, spt.inner.OauthConfig.TokenEndpoint.String(), nil)
if err != nil {
return fmt.Errorf("adal: Failed to build the refresh request. Error = '%v'", err)
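
The new `SetCustomRefreshFunc` hook lets callers bypass the default token-endpoint round trip entirely, as `refreshInternal` now short-circuits when it is set. A sketch under assumptions: `spt` is a `*adal.ServicePrincipalToken` obtained from the usual constructors, and `fetchToken` is a hypothetical external token source:

```go
// spt is a *adal.ServicePrincipalToken obtained elsewhere.
spt.SetCustomRefreshFunc(func(ctx context.Context, resource string) (*adal.Token, error) {
	raw, err := fetchToken(ctx, resource) // hypothetical external source
	if err != nil {
		return nil, err
	}
	// Only AccessToken is populated in this sketch; real callers would
	// fill in the expiry fields as well so refresh scheduling works.
	return &adal.Token{AccessToken: raw}, nil
})
```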
diff --git a/vendor/github.com/Azure/go-autorest/autorest/authorization_sas.go b/vendor/github.com/Azure/go-autorest/autorest/authorization_sas.go
new file mode 100644
index 0000000000000..89a659cb6646d
--- /dev/null
+++ b/vendor/github.com/Azure/go-autorest/autorest/authorization_sas.go
@@ -0,0 +1,67 @@
+package autorest
+
+// Copyright 2017 Microsoft Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import (
+ "fmt"
+ "net/http"
+ "strings"
+)
+
+// SASTokenAuthorizer implements an authorization for SAS Token Authentication
+// this can be used for interaction with Blob Storage Endpoints
+type SASTokenAuthorizer struct {
+ sasToken string
+}
+
+// NewSASTokenAuthorizer creates a SASTokenAuthorizer using the given credentials
+func NewSASTokenAuthorizer(sasToken string) (*SASTokenAuthorizer, error) {
+ if strings.TrimSpace(sasToken) == "" {
+ return nil, fmt.Errorf("sasToken cannot be empty")
+ }
+
+ token := sasToken
+ if strings.HasPrefix(sasToken, "?") {
+ token = strings.TrimPrefix(sasToken, "?")
+ }
+
+ return &SASTokenAuthorizer{
+ sasToken: token,
+ }, nil
+}
+
+// WithAuthorization returns a PrepareDecorator that adds a shared access signature token to the
+// URI's query parameters. This can be used for the Blob, Queue, and File Services.
+//
+// See https://docs.microsoft.com/en-us/rest/api/storageservices/delegate-access-with-shared-access-signature
+func (sas *SASTokenAuthorizer) WithAuthorization() PrepareDecorator {
+ return func(p Preparer) Preparer {
+ return PreparerFunc(func(r *http.Request) (*http.Request, error) {
+ r, err := p.Prepare(r)
+ if err != nil {
+ return r, err
+ }
+
+ if r.URL.RawQuery != "" {
+ r.URL.RawQuery = fmt.Sprintf("%s&%s", r.URL.RawQuery, sas.sasToken)
+ } else {
+ r.URL.RawQuery = sas.sasToken
+ }
+
+ r.RequestURI = r.URL.String()
+ return Prepare(r)
+ })
+ }
+}
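
Wiring the new SAS authorizer into an autorest client is a one-liner per request pipeline. A sketch with a placeholder token value; a real SAS token would come from the portal or a generation API, and a leading "?" is trimmed by the constructor as shown above:

```go
auth, err := autorest.NewSASTokenAuthorizer("?sv=2019-02-02&sig=placeholder")
if err != nil {
	log.Fatal(err)
}
client := autorest.NewClientWithUserAgent("example-agent")
client.Authorizer = auth // the decorator appends the token to each request's query string
```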
diff --git a/vendor/github.com/Azure/go-autorest/autorest/authorization_storage.go b/vendor/github.com/Azure/go-autorest/autorest/authorization_storage.go
new file mode 100644
index 0000000000000..33e5f12701745
--- /dev/null
+++ b/vendor/github.com/Azure/go-autorest/autorest/authorization_storage.go
@@ -0,0 +1,301 @@
+package autorest
+
+// Copyright 2017 Microsoft Corporation
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+import (
+ "bytes"
+ "crypto/hmac"
+ "crypto/sha256"
+ "encoding/base64"
+ "fmt"
+ "net/http"
+ "net/url"
+ "sort"
+ "strings"
+ "time"
+)
+
+// SharedKeyType defines the enumeration for the various shared key types.
+// See https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key for details on the shared key types.
+type SharedKeyType string
+
+const (
+ // SharedKey is used to authorize against blobs, files and queues services.
+ SharedKey SharedKeyType = "sharedKey"
+
+ // SharedKeyForTable is used to authorize against the table service.
+ SharedKeyForTable SharedKeyType = "sharedKeyTable"
+
+ // SharedKeyLite is used to authorize against blobs, files and queues services. It's provided for
+ // backwards compatibility with API versions before 2009-09-19. Prefer SharedKey instead.
+ SharedKeyLite SharedKeyType = "sharedKeyLite"
+
+ // SharedKeyLiteForTable is used to authorize against the table service. It's provided for
+ // backwards compatibility with older table API versions. Prefer SharedKeyForTable instead.
+ SharedKeyLiteForTable SharedKeyType = "sharedKeyLiteTable"
+)
+
+const (
+ headerAccept = "Accept"
+ headerAcceptCharset = "Accept-Charset"
+ headerContentEncoding = "Content-Encoding"
+ headerContentLength = "Content-Length"
+ headerContentMD5 = "Content-MD5"
+ headerContentLanguage = "Content-Language"
+ headerIfModifiedSince = "If-Modified-Since"
+ headerIfMatch = "If-Match"
+ headerIfNoneMatch = "If-None-Match"
+ headerIfUnmodifiedSince = "If-Unmodified-Since"
+ headerDate = "Date"
+ headerXMSDate = "X-Ms-Date"
+ headerXMSVersion = "x-ms-version"
+ headerRange = "Range"
+)
+
+const storageEmulatorAccountName = "devstoreaccount1"
+
+// SharedKeyAuthorizer implements an authorization for Shared Key
+// this can be used for interaction with Blob, File and Queue Storage Endpoints
+type SharedKeyAuthorizer struct {
+ accountName string
+ accountKey []byte
+ keyType SharedKeyType
+}
+
+// NewSharedKeyAuthorizer creates a SharedKeyAuthorizer using the provided credentials and shared key type.
+func NewSharedKeyAuthorizer(accountName, accountKey string, keyType SharedKeyType) (*SharedKeyAuthorizer, error) {
+ key, err := base64.StdEncoding.DecodeString(accountKey)
+ if err != nil {
+ return nil, fmt.Errorf("malformed storage account key: %v", err)
+ }
+ return &SharedKeyAuthorizer{
+ accountName: accountName,
+ accountKey: key,
+ keyType: keyType,
+ }, nil
+}
+
+// WithAuthorization returns a PrepareDecorator that adds an HTTP Authorization header whose
+// value is "<SharedKeyType> " followed by the computed key.
+// This can be used for the Blob, Queue, and File Services
+//
+// from: https://docs.microsoft.com/en-us/rest/api/storageservices/authorize-with-shared-key
+// You may use Shared Key authorization to authorize a request made against the
+// 2009-09-19 version and later of the Blob and Queue services,
+// and version 2014-02-14 and later of the File services.
+func (sk *SharedKeyAuthorizer) WithAuthorization() PrepareDecorator {
+ return func(p Preparer) Preparer {
+ return PreparerFunc(func(r *http.Request) (*http.Request, error) {
+ r, err := p.Prepare(r)
+ if err != nil {
+ return r, err
+ }
+
+			sk, err := buildSharedKey(sk.accountName, sk.accountKey, r, sk.keyType)
+			if err != nil {
+				return r, err
+			}
+			return Prepare(r, WithHeader(headerAuthorization, sk))
+ })
+ }
+}
+
+func buildSharedKey(accName string, accKey []byte, req *http.Request, keyType SharedKeyType) (string, error) {
+ canRes, err := buildCanonicalizedResource(accName, req.URL.String(), keyType)
+ if err != nil {
+ return "", err
+ }
+
+ if req.Header == nil {
+ req.Header = http.Header{}
+ }
+
+ // ensure date is set
+ if req.Header.Get(headerDate) == "" && req.Header.Get(headerXMSDate) == "" {
+ date := time.Now().UTC().Format(http.TimeFormat)
+ req.Header.Set(headerXMSDate, date)
+ }
+ canString, err := buildCanonicalizedString(req.Method, req.Header, canRes, keyType)
+ if err != nil {
+ return "", err
+ }
+ return createAuthorizationHeader(accName, accKey, canString, keyType), nil
+}
+
+func buildCanonicalizedResource(accountName, uri string, keyType SharedKeyType) (string, error) {
+ errMsg := "buildCanonicalizedResource error: %s"
+ u, err := url.Parse(uri)
+ if err != nil {
+ return "", fmt.Errorf(errMsg, err.Error())
+ }
+
+ cr := bytes.NewBufferString("")
+ if accountName != storageEmulatorAccountName {
+ cr.WriteString("/")
+ cr.WriteString(getCanonicalizedAccountName(accountName))
+ }
+
+ if len(u.Path) > 0 {
+ // Any portion of the CanonicalizedResource string that is derived from
+ // the resource's URI should be encoded exactly as it is in the URI.
+ // -- https://msdn.microsoft.com/en-gb/library/azure/dd179428.aspx
+ cr.WriteString(u.EscapedPath())
+ }
+
+ params, err := url.ParseQuery(u.RawQuery)
+ if err != nil {
+ return "", fmt.Errorf(errMsg, err.Error())
+ }
+
+ // See https://github.com/Azure/azure-storage-net/blob/master/Lib/Common/Core/Util/AuthenticationUtility.cs#L277
+ if keyType == SharedKey {
+ if len(params) > 0 {
+ cr.WriteString("\n")
+
+ keys := []string{}
+ for key := range params {
+ keys = append(keys, key)
+ }
+ sort.Strings(keys)
+
+ completeParams := []string{}
+ for _, key := range keys {
+ if len(params[key]) > 1 {
+ sort.Strings(params[key])
+ }
+
+ completeParams = append(completeParams, fmt.Sprintf("%s:%s", key, strings.Join(params[key], ",")))
+ }
+ cr.WriteString(strings.Join(completeParams, "\n"))
+ }
+ } else {
+ // search for "comp" parameter, if exists then add it to canonicalizedresource
+ if v, ok := params["comp"]; ok {
+ cr.WriteString("?comp=" + v[0])
+ }
+ }
+
+ return string(cr.Bytes()), nil
+}
+
+func getCanonicalizedAccountName(accountName string) string {
+ // since we may be trying to access a secondary storage account, we need to
+ // remove the -secondary part of the storage name
+ return strings.TrimSuffix(accountName, "-secondary")
+}
+
+func buildCanonicalizedString(verb string, headers http.Header, canonicalizedResource string, keyType SharedKeyType) (string, error) {
+ contentLength := headers.Get(headerContentLength)
+ if contentLength == "0" {
+ contentLength = ""
+ }
+ date := headers.Get(headerDate)
+ if v := headers.Get(headerXMSDate); v != "" {
+ if keyType == SharedKey || keyType == SharedKeyLite {
+ date = ""
+ } else {
+ date = v
+ }
+ }
+ var canString string
+ switch keyType {
+ case SharedKey:
+ canString = strings.Join([]string{
+ verb,
+ headers.Get(headerContentEncoding),
+ headers.Get(headerContentLanguage),
+ contentLength,
+ headers.Get(headerContentMD5),
+ headers.Get(headerContentType),
+ date,
+ headers.Get(headerIfModifiedSince),
+ headers.Get(headerIfMatch),
+ headers.Get(headerIfNoneMatch),
+ headers.Get(headerIfUnmodifiedSince),
+ headers.Get(headerRange),
+ buildCanonicalizedHeader(headers),
+ canonicalizedResource,
+ }, "\n")
+ case SharedKeyForTable:
+ canString = strings.Join([]string{
+ verb,
+ headers.Get(headerContentMD5),
+ headers.Get(headerContentType),
+ date,
+ canonicalizedResource,
+ }, "\n")
+ case SharedKeyLite:
+ canString = strings.Join([]string{
+ verb,
+ headers.Get(headerContentMD5),
+ headers.Get(headerContentType),
+ date,
+ buildCanonicalizedHeader(headers),
+ canonicalizedResource,
+ }, "\n")
+ case SharedKeyLiteForTable:
+ canString = strings.Join([]string{
+ date,
+ canonicalizedResource,
+ }, "\n")
+ default:
+ return "", fmt.Errorf("key type '%s' is not supported", keyType)
+ }
+ return canString, nil
+}
+
+func buildCanonicalizedHeader(headers http.Header) string {
+ cm := make(map[string]string)
+
+ for k := range headers {
+ headerName := strings.TrimSpace(strings.ToLower(k))
+ if strings.HasPrefix(headerName, "x-ms-") {
+ cm[headerName] = headers.Get(k)
+ }
+ }
+
+ if len(cm) == 0 {
+ return ""
+ }
+
+ keys := []string{}
+ for key := range cm {
+ keys = append(keys, key)
+ }
+
+ sort.Strings(keys)
+
+ ch := bytes.NewBufferString("")
+
+ for _, key := range keys {
+ ch.WriteString(key)
+ ch.WriteRune(':')
+ ch.WriteString(cm[key])
+ ch.WriteRune('\n')
+ }
+
+ return strings.TrimSuffix(string(ch.Bytes()), "\n")
+}
+
+func createAuthorizationHeader(accountName string, accountKey []byte, canonicalizedString string, keyType SharedKeyType) string {
+ h := hmac.New(sha256.New, accountKey)
+ h.Write([]byte(canonicalizedString))
+ signature := base64.StdEncoding.EncodeToString(h.Sum(nil))
+ var key string
+ switch keyType {
+ case SharedKey, SharedKeyForTable:
+ key = "SharedKey"
+ case SharedKeyLite, SharedKeyLiteForTable:
+ key = "SharedKeyLite"
+ }
+ return fmt.Sprintf("%s %s:%s", key, getCanonicalizedAccountName(accountName), signature)
+}
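
Usage of the shared-key authorizer mirrors the SAS one. A sketch; the account name reuses the emulator constant defined above, and the key is a placeholder that must be valid base64 since the constructor decodes it with `base64.StdEncoding`:

```go
// "bXkta2V5" is base64 for "my-key" — a placeholder, not a real key.
auth, err := autorest.NewSharedKeyAuthorizer("devstoreaccount1", "bXkta2V5", autorest.SharedKey)
if err != nil {
	log.Fatal(err)
}
client := autorest.NewClientWithUserAgent("example-agent")
client.Authorizer = auth // signs each request with the computed Authorization header
```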
diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go b/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go
index 3a0a439ff930c..26be936b7e5f2 100644
--- a/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go
+++ b/vendor/github.com/Azure/go-autorest/autorest/azure/azure.go
@@ -17,6 +17,7 @@ package azure
// limitations under the License.
import (
+ "bytes"
"encoding/json"
"fmt"
"io/ioutil"
@@ -143,7 +144,7 @@ type RequestError struct {
autorest.DetailedError
// The error returned by the Azure service.
- ServiceError *ServiceError `json:"error"`
+ ServiceError *ServiceError `json:"error" xml:"Error"`
// The request id (from the x-ms-request-id-header) of the request.
RequestID string
@@ -285,26 +286,34 @@ func WithErrorUnlessStatusCode(codes ...int) autorest.RespondDecorator {
var e RequestError
defer resp.Body.Close()
+ encodedAs := autorest.EncodedAsJSON
+ if strings.Contains(resp.Header.Get("Content-Type"), "xml") {
+ encodedAs = autorest.EncodedAsXML
+ }
+
// Copy and replace the Body in case it does not contain an error object.
// This will leave the Body available to the caller.
- b, decodeErr := autorest.CopyAndDecode(autorest.EncodedAsJSON, resp.Body, &e)
+ b, decodeErr := autorest.CopyAndDecode(encodedAs, resp.Body, &e)
resp.Body = ioutil.NopCloser(&b)
if decodeErr != nil {
return fmt.Errorf("autorest/azure: error response cannot be parsed: %q error: %v", b.String(), decodeErr)
}
if e.ServiceError == nil {
// Check if error is unwrapped ServiceError
- if err := json.Unmarshal(b.Bytes(), &e.ServiceError); err != nil {
+ decoder := autorest.NewDecoder(encodedAs, bytes.NewReader(b.Bytes()))
+ if err := decoder.Decode(&e.ServiceError); err != nil {
return err
}
}
if e.ServiceError.Message == "" {
// if we're here it means the returned error wasn't OData v4 compliant.
- // try to unmarshal the body as raw JSON in hopes of getting something.
+ // try to unmarshal the body in hopes of getting something.
rawBody := map[string]interface{}{}
- if err := json.Unmarshal(b.Bytes(), &rawBody); err != nil {
+ decoder := autorest.NewDecoder(encodedAs, bytes.NewReader(b.Bytes()))
+ if err := decoder.Decode(&rawBody); err != nil {
return err
}
+
e.ServiceError = &ServiceError{
Code: "Unknown",
Message: "Unknown service error",
diff --git a/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go b/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go
index 86ce9f2b5b1e5..c6d39f6866552 100644
--- a/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go
+++ b/vendor/github.com/Azure/go-autorest/autorest/azure/rp.go
@@ -47,11 +47,15 @@ func DoRetryWithRegistration(client autorest.Client) autorest.SendDecorator {
if resp.StatusCode != http.StatusConflict || client.SkipResourceProviderRegistration {
return resp, err
}
+
var re RequestError
- err = autorest.Respond(
- resp,
- autorest.ByUnmarshallingJSON(&re),
- )
+ if strings.Contains(r.Header.Get("Content-Type"), "xml") {
+ // XML errors (e.g. Storage Data Plane) only return the inner object
+ err = autorest.Respond(resp, autorest.ByUnmarshallingXML(&re.ServiceError))
+ } else {
+ err = autorest.Respond(resp, autorest.ByUnmarshallingJSON(&re))
+ }
+
if err != nil {
return resp, err
}
diff --git a/vendor/github.com/Azure/go-autorest/autorest/version.go b/vendor/github.com/Azure/go-autorest/autorest/version.go
index 7a71089c9c4cf..56a29b2c5d020 100644
--- a/vendor/github.com/Azure/go-autorest/autorest/version.go
+++ b/vendor/github.com/Azure/go-autorest/autorest/version.go
@@ -19,7 +19,7 @@ import (
"runtime"
)
-const number = "v13.0.2"
+const number = "v13.3.0"
var (
userAgent = fmt.Sprintf("Go/%s (%s-%s) go-autorest/%s",
diff --git a/vendor/github.com/BurntSushi/toml/.gitignore b/vendor/github.com/BurntSushi/toml/.gitignore
new file mode 100644
index 0000000000000..0cd3800377d4d
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/.gitignore
@@ -0,0 +1,5 @@
+TAGS
+tags
+.*.swp
+tomlcheck/tomlcheck
+toml.test
diff --git a/vendor/github.com/BurntSushi/toml/.travis.yml b/vendor/github.com/BurntSushi/toml/.travis.yml
new file mode 100644
index 0000000000000..8b8afc4f0e00d
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/.travis.yml
@@ -0,0 +1,15 @@
+language: go
+go:
+ - 1.1
+ - 1.2
+ - 1.3
+ - 1.4
+ - 1.5
+ - 1.6
+ - tip
+install:
+ - go install ./...
+ - go get github.com/BurntSushi/toml-test
+script:
+ - export PATH="$PATH:$HOME/gopath/bin"
+ - make test
diff --git a/vendor/github.com/BurntSushi/toml/COMPATIBLE b/vendor/github.com/BurntSushi/toml/COMPATIBLE
new file mode 100644
index 0000000000000..6efcfd0ce55ef
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/COMPATIBLE
@@ -0,0 +1,3 @@
+Compatible with TOML version
+[v0.4.0](https://github.com/toml-lang/toml/blob/v0.4.0/versions/en/toml-v0.4.0.md)
+
diff --git a/vendor/github.com/BurntSushi/toml/COPYING b/vendor/github.com/BurntSushi/toml/COPYING
new file mode 100644
index 0000000000000..01b5743200b84
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/COPYING
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2013 TOML authors
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/BurntSushi/toml/Makefile b/vendor/github.com/BurntSushi/toml/Makefile
new file mode 100644
index 0000000000000..3600848d331ab
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/Makefile
@@ -0,0 +1,19 @@
+install:
+ go install ./...
+
+test: install
+ go test -v
+ toml-test toml-test-decoder
+ toml-test -encoder toml-test-encoder
+
+fmt:
+ gofmt -w *.go */*.go
+ colcheck *.go */*.go
+
+tags:
+ find ./ -name '*.go' -print0 | xargs -0 gotags > TAGS
+
+push:
+ git push origin master
+ git push github master
+
diff --git a/vendor/github.com/BurntSushi/toml/README.md b/vendor/github.com/BurntSushi/toml/README.md
new file mode 100644
index 0000000000000..7c1b37ecc7a02
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/README.md
@@ -0,0 +1,218 @@
+## TOML parser and encoder for Go with reflection
+
+TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
+reflection interface similar to Go's standard library `json` and `xml`
+packages. This package also supports the `encoding.TextUnmarshaler` and
+`encoding.TextMarshaler` interfaces so that you can define custom data
+representations. (There is an example of this below.)
+
+Spec: https://github.com/toml-lang/toml
+
+Compatible with TOML version
+[v0.4.0](https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md)
+
+Documentation: https://godoc.org/github.com/BurntSushi/toml
+
+Installation:
+
+```bash
+go get github.com/BurntSushi/toml
+```
+
+Try the toml validator:
+
+```bash
+go get github.com/BurntSushi/toml/cmd/tomlv
+tomlv some-toml-file.toml
+```
+
+[](https://travis-ci.org/BurntSushi/toml) [](https://godoc.org/github.com/BurntSushi/toml)
+
+### Testing
+
+This package passes all tests in
+[toml-test](https://github.com/BurntSushi/toml-test) for both the decoder
+and the encoder.
+
+### Examples
+
+This package works similarly to how the Go standard library handles `XML`
+and `JSON`. Namely, data is loaded into Go values via reflection.
+
+For the simplest example, consider some TOML file as just a list of keys
+and values:
+
+```toml
+Age = 25
+Cats = [ "Cauchy", "Plato" ]
+Pi = 3.14
+Perfection = [ 6, 28, 496, 8128 ]
+DOB = 1987-07-05T05:45:00Z
+```
+
+Which could be defined in Go as:
+
+```go
+type Config struct {
+ Age int
+ Cats []string
+ Pi float64
+ Perfection []int
+ DOB time.Time // requires `import time`
+}
+```
+
+And then decoded with:
+
+```go
+var conf Config
+if _, err := toml.Decode(tomlData, &conf); err != nil {
+ // handle error
+}
+```
+
+You can also use struct tags if your struct field name doesn't map to a TOML
+key value directly:
+
+```toml
+some_key_NAME = "wat"
+```
+
+```go
+type TOML struct {
+ ObscureKey string `toml:"some_key_NAME"`
+}
+```
+
+### Using the `encoding.TextUnmarshaler` interface
+
+Here's an example that automatically parses duration strings into
+`time.Duration` values:
+
+```toml
+[[song]]
+name = "Thunder Road"
+duration = "4m49s"
+
+[[song]]
+name = "Stairway to Heaven"
+duration = "8m03s"
+```
+
+Which can be decoded with:
+
+```go
+type song struct {
+ Name string
+ Duration duration
+}
+type songs struct {
+ Song []song
+}
+var favorites songs
+if _, err := toml.Decode(blob, &favorites); err != nil {
+ log.Fatal(err)
+}
+
+for _, s := range favorites.Song {
+ fmt.Printf("%s (%s)\n", s.Name, s.Duration)
+}
+```
+
+And you'll also need a `duration` type that satisfies the
+`encoding.TextUnmarshaler` interface:
+
+```go
+type duration struct {
+ time.Duration
+}
+
+func (d *duration) UnmarshalText(text []byte) error {
+ var err error
+ d.Duration, err = time.ParseDuration(string(text))
+ return err
+}
+```
+
+### More complex usage
+
+Here's an example of how to load the example from the official spec page:
+
+```toml
+# This is a TOML document. Boom.
+
+title = "TOML Example"
+
+[owner]
+name = "Tom Preston-Werner"
+organization = "GitHub"
+bio = "GitHub Cofounder & CEO\nLikes tater tots and beer."
+dob = 1979-05-27T07:32:00Z # First class dates? Why not?
+
+[database]
+server = "192.168.1.1"
+ports = [ 8001, 8001, 8002 ]
+connection_max = 5000
+enabled = true
+
+[servers]
+
+ # You can indent as you please. Tabs or spaces. TOML don't care.
+ [servers.alpha]
+ ip = "10.0.0.1"
+ dc = "eqdc10"
+
+ [servers.beta]
+ ip = "10.0.0.2"
+ dc = "eqdc10"
+
+[clients]
+data = [ ["gamma", "delta"], [1, 2] ] # just an update to make sure parsers support it
+
+# Line breaks are OK when inside arrays
+hosts = [
+ "alpha",
+ "omega"
+]
+```
+
+And the corresponding Go types are:
+
+```go
+type tomlConfig struct {
+ Title string
+ Owner ownerInfo
+ DB database `toml:"database"`
+ Servers map[string]server
+ Clients clients
+}
+
+type ownerInfo struct {
+ Name string
+ Org string `toml:"organization"`
+ Bio string
+ DOB time.Time
+}
+
+type database struct {
+ Server string
+ Ports []int
+ ConnMax int `toml:"connection_max"`
+ Enabled bool
+}
+
+type server struct {
+ IP string
+ DC string
+}
+
+type clients struct {
+ Data [][]interface{}
+ Hosts []string
+}
+```
+
+Note that a case insensitive match will be tried if an exact match can't be
+found.
+
+A working example of the above can be found in `_examples/example.{go,toml}`.
diff --git a/vendor/github.com/BurntSushi/toml/decode.go b/vendor/github.com/BurntSushi/toml/decode.go
new file mode 100644
index 0000000000000..b0fd51d5b6ea5
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/decode.go
@@ -0,0 +1,509 @@
+package toml
+
+import (
+ "fmt"
+ "io"
+ "io/ioutil"
+ "math"
+ "reflect"
+ "strings"
+ "time"
+)
+
+func e(format string, args ...interface{}) error {
+ return fmt.Errorf("toml: "+format, args...)
+}
+
+// Unmarshaler is the interface implemented by objects that can unmarshal a
+// TOML description of themselves.
+type Unmarshaler interface {
+ UnmarshalTOML(interface{}) error
+}
+
+// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
+func Unmarshal(p []byte, v interface{}) error {
+ _, err := Decode(string(p), v)
+ return err
+}
+
+// Primitive is a TOML value that hasn't been decoded into a Go value.
+// When using the various `Decode*` functions, the type `Primitive` may
+// be given to any value, and its decoding will be delayed.
+//
+// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
+//
+// The underlying representation of a `Primitive` value is subject to change.
+// Do not rely on it.
+//
+// N.B. Primitive values are still parsed, so using them will only avoid
+// the overhead of reflection. They can be useful when you don't know the
+// exact type of TOML data until run time.
+type Primitive struct {
+ undecoded interface{}
+ context Key
+}
+
+// DEPRECATED!
+//
+// Use MetaData.PrimitiveDecode instead.
+func PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md := MetaData{decoded: make(map[string]bool)}
+ return md.unify(primValue.undecoded, rvalue(v))
+}
+
+// PrimitiveDecode is just like the other `Decode*` functions, except it
+// decodes a TOML value that has already been parsed. Valid primitive values
+// can *only* be obtained from values filled by the decoder functions,
+// including this method. (i.e., `v` may contain more `Primitive`
+// values.)
+//
+// Meta data for primitive values is included in the meta data returned by
+// the `Decode*` functions with one exception: keys returned by the Undecoded
+// method will only reflect keys that were decoded. Namely, any keys hidden
+// behind a Primitive will be considered undecoded. Executing this method will
+// update the undecoded keys in the meta data. (See the example.)
+func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
+ md.context = primValue.context
+ defer func() { md.context = nil }()
+ return md.unify(primValue.undecoded, rvalue(v))
+}
+
+// Decode will decode the contents of `data` in TOML format into a pointer
+// `v`.
+//
+// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
+// used interchangeably.)
+//
+// TOML arrays of tables correspond to either a slice of structs or a slice
+// of maps.
+//
+// TOML datetimes correspond to Go `time.Time` values.
+//
+// All other TOML types (float, string, int, bool and array) correspond
+// to the obvious Go types.
+//
+// An exception to the above rules is if a type implements the
+// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
+// (floats, strings, integers, booleans and datetimes) will be converted to
+// a byte string and given to the value's UnmarshalText method. See the
+// Unmarshaler example for a demonstration with time duration strings.
+//
+// Key mapping
+//
+// TOML keys can map to either keys in a Go map or field names in a Go
+// struct. The special `toml` struct tag may be used to map TOML keys to
+// struct fields that don't match the key name exactly. (See the example.)
+// A case insensitive match to struct names will be tried if an exact match
+// can't be found.
+//
+// The mapping between TOML values and Go values is loose. That is, there
+// may exist TOML values that cannot be placed into your representation, and
+// there may be parts of your representation that do not correspond to
+// TOML values. This loose mapping can be made stricter by using the IsDefined
+// and/or Undecoded methods on the MetaData returned.
+//
+// This decoder will not handle cyclic types. If a cyclic type is passed,
+// `Decode` will not terminate.
+func Decode(data string, v interface{}) (MetaData, error) {
+ rv := reflect.ValueOf(v)
+ if rv.Kind() != reflect.Ptr {
+ return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
+ }
+ if rv.IsNil() {
+ return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
+ }
+ p, err := parse(data)
+ if err != nil {
+ return MetaData{}, err
+ }
+ md := MetaData{
+ p.mapping, p.types, p.ordered,
+ make(map[string]bool, len(p.ordered)), nil,
+ }
+ return md, md.unify(p.mapping, indirect(rv))
+}
+
+// DecodeFile is just like Decode, except it will automatically read the
+// contents of the file at `fpath` and decode it for you.
+func DecodeFile(fpath string, v interface{}) (MetaData, error) {
+ bs, err := ioutil.ReadFile(fpath)
+ if err != nil {
+ return MetaData{}, err
+ }
+ return Decode(string(bs), v)
+}
+
+// DecodeReader is just like Decode, except it will consume all bytes
+// from the reader and decode it for you.
+func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
+ bs, err := ioutil.ReadAll(r)
+ if err != nil {
+ return MetaData{}, err
+ }
+ return Decode(string(bs), v)
+}
+
+// unify performs a sort of type unification based on the structure of `rv`,
+// which is the client representation.
+//
+// Any type mismatch produces an error. Finding a type that we don't know
+// how to handle produces an unsupported type error.
+func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
+
+ // Special case. Look for a `Primitive` value.
+ if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
+ // Save the undecoded data and the key context into the primitive
+ // value.
+ context := make(Key, len(md.context))
+ copy(context, md.context)
+ rv.Set(reflect.ValueOf(Primitive{
+ undecoded: data,
+ context: context,
+ }))
+ return nil
+ }
+
+ // Special case. Unmarshaler Interface support.
+ if rv.CanAddr() {
+ if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
+ return v.UnmarshalTOML(data)
+ }
+ }
+
+ // Special case. Handle time.Time values specifically.
+ // TODO: Remove this code when we decide to drop support for Go 1.1.
+ // This isn't necessary in Go 1.2 because time.Time satisfies the encoding
+ // interfaces.
+ if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
+ return md.unifyDatetime(data, rv)
+ }
+
+ // Special case. Look for a value satisfying the TextUnmarshaler interface.
+ if v, ok := rv.Interface().(TextUnmarshaler); ok {
+ return md.unifyText(data, v)
+ }
+ // BUG(burntsushi)
+ // The behavior here is incorrect whenever a Go type satisfies the
+ // encoding.TextUnmarshaler interface but also corresponds to a TOML
+ // hash or array. In particular, the unmarshaler should only be applied
+ // to primitive TOML values. But at this point, it will be applied to
+ // all kinds of values and produce an incorrect error whenever those values
+ // are hashes or arrays (including arrays of tables).
+
+ k := rv.Kind()
+
+ // laziness
+ if k >= reflect.Int && k <= reflect.Uint64 {
+ return md.unifyInt(data, rv)
+ }
+ switch k {
+ case reflect.Ptr:
+ elem := reflect.New(rv.Type().Elem())
+ err := md.unify(data, reflect.Indirect(elem))
+ if err != nil {
+ return err
+ }
+ rv.Set(elem)
+ return nil
+ case reflect.Struct:
+ return md.unifyStruct(data, rv)
+ case reflect.Map:
+ return md.unifyMap(data, rv)
+ case reflect.Array:
+ return md.unifyArray(data, rv)
+ case reflect.Slice:
+ return md.unifySlice(data, rv)
+ case reflect.String:
+ return md.unifyString(data, rv)
+ case reflect.Bool:
+ return md.unifyBool(data, rv)
+ case reflect.Interface:
+ // we only support empty interfaces.
+ if rv.NumMethod() > 0 {
+ return e("unsupported type %s", rv.Type())
+ }
+ return md.unifyAnything(data, rv)
+ case reflect.Float32:
+ fallthrough
+ case reflect.Float64:
+ return md.unifyFloat64(data, rv)
+ }
+ return e("unsupported type %s", rv.Kind())
+}
+
+func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
+ tmap, ok := mapping.(map[string]interface{})
+ if !ok {
+ if mapping == nil {
+ return nil
+ }
+ return e("type mismatch for %s: expected table but found %T",
+ rv.Type().String(), mapping)
+ }
+
+ for key, datum := range tmap {
+ var f *field
+ fields := cachedTypeFields(rv.Type())
+ for i := range fields {
+ ff := &fields[i]
+ if ff.name == key {
+ f = ff
+ break
+ }
+ if f == nil && strings.EqualFold(ff.name, key) {
+ f = ff
+ }
+ }
+ if f != nil {
+ subv := rv
+ for _, i := range f.index {
+ subv = indirect(subv.Field(i))
+ }
+ if isUnifiable(subv) {
+ md.decoded[md.context.add(key).String()] = true
+ md.context = append(md.context, key)
+ if err := md.unify(datum, subv); err != nil {
+ return err
+ }
+ md.context = md.context[0 : len(md.context)-1]
+ } else if f.name != "" {
+ // Bad user! No soup for you!
+ return e("cannot write unexported field %s.%s",
+ rv.Type().String(), f.name)
+ }
+ }
+ }
+ return nil
+}
+
+func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
+ tmap, ok := mapping.(map[string]interface{})
+ if !ok {
+ if tmap == nil {
+ return nil
+ }
+ return badtype("map", mapping)
+ }
+ if rv.IsNil() {
+ rv.Set(reflect.MakeMap(rv.Type()))
+ }
+ for k, v := range tmap {
+ md.decoded[md.context.add(k).String()] = true
+ md.context = append(md.context, k)
+
+ rvkey := indirect(reflect.New(rv.Type().Key()))
+ rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
+ if err := md.unify(v, rvval); err != nil {
+ return err
+ }
+ md.context = md.context[0 : len(md.context)-1]
+
+ rvkey.SetString(k)
+ rv.SetMapIndex(rvkey, rvval)
+ }
+ return nil
+}
+
+func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
+ datav := reflect.ValueOf(data)
+ if datav.Kind() != reflect.Slice {
+ if !datav.IsValid() {
+ return nil
+ }
+ return badtype("slice", data)
+ }
+ sliceLen := datav.Len()
+ if sliceLen != rv.Len() {
+ return e("expected array length %d; got TOML array of length %d",
+ rv.Len(), sliceLen)
+ }
+ return md.unifySliceArray(datav, rv)
+}
+
+func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
+ datav := reflect.ValueOf(data)
+ if datav.Kind() != reflect.Slice {
+ if !datav.IsValid() {
+ return nil
+ }
+ return badtype("slice", data)
+ }
+ n := datav.Len()
+ if rv.IsNil() || rv.Cap() < n {
+ rv.Set(reflect.MakeSlice(rv.Type(), n, n))
+ }
+ rv.SetLen(n)
+ return md.unifySliceArray(datav, rv)
+}
+
+func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
+ sliceLen := data.Len()
+ for i := 0; i < sliceLen; i++ {
+ v := data.Index(i).Interface()
+ sliceval := indirect(rv.Index(i))
+ if err := md.unify(v, sliceval); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
+ if _, ok := data.(time.Time); ok {
+ rv.Set(reflect.ValueOf(data))
+ return nil
+ }
+ return badtype("time.Time", data)
+}
+
+func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
+ if s, ok := data.(string); ok {
+ rv.SetString(s)
+ return nil
+ }
+ return badtype("string", data)
+}
+
+func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
+ if num, ok := data.(float64); ok {
+ switch rv.Kind() {
+ case reflect.Float32:
+ fallthrough
+ case reflect.Float64:
+ rv.SetFloat(num)
+ default:
+ panic("bug")
+ }
+ return nil
+ }
+ return badtype("float", data)
+}
+
+func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
+ if num, ok := data.(int64); ok {
+ if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
+ switch rv.Kind() {
+ case reflect.Int, reflect.Int64:
+ // No bounds checking necessary.
+ case reflect.Int8:
+ if num < math.MinInt8 || num > math.MaxInt8 {
+ return e("value %d is out of range for int8", num)
+ }
+ case reflect.Int16:
+ if num < math.MinInt16 || num > math.MaxInt16 {
+ return e("value %d is out of range for int16", num)
+ }
+ case reflect.Int32:
+ if num < math.MinInt32 || num > math.MaxInt32 {
+ return e("value %d is out of range for int32", num)
+ }
+ }
+ rv.SetInt(num)
+ } else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
+ unum := uint64(num)
+ switch rv.Kind() {
+ case reflect.Uint, reflect.Uint64:
+ // No bounds checking necessary.
+ case reflect.Uint8:
+ if num < 0 || unum > math.MaxUint8 {
+ return e("value %d is out of range for uint8", num)
+ }
+ case reflect.Uint16:
+ if num < 0 || unum > math.MaxUint16 {
+ return e("value %d is out of range for uint16", num)
+ }
+ case reflect.Uint32:
+ if num < 0 || unum > math.MaxUint32 {
+ return e("value %d is out of range for uint32", num)
+ }
+ }
+ rv.SetUint(unum)
+ } else {
+ panic("unreachable")
+ }
+ return nil
+ }
+ return badtype("integer", data)
+}
+
+func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
+ if b, ok := data.(bool); ok {
+ rv.SetBool(b)
+ return nil
+ }
+ return badtype("boolean", data)
+}
+
+func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
+ rv.Set(reflect.ValueOf(data))
+ return nil
+}
+
+func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
+ var s string
+ switch sdata := data.(type) {
+ case TextMarshaler:
+ text, err := sdata.MarshalText()
+ if err != nil {
+ return err
+ }
+ s = string(text)
+ case fmt.Stringer:
+ s = sdata.String()
+ case string:
+ s = sdata
+ case bool:
+ s = fmt.Sprintf("%v", sdata)
+ case int64:
+ s = fmt.Sprintf("%d", sdata)
+ case float64:
+ s = fmt.Sprintf("%f", sdata)
+ default:
+ return badtype("primitive (string-like)", data)
+ }
+ if err := v.UnmarshalText([]byte(s)); err != nil {
+ return err
+ }
+ return nil
+}
+
+// rvalue returns a reflect.Value of `v`. All pointers are resolved.
+func rvalue(v interface{}) reflect.Value {
+ return indirect(reflect.ValueOf(v))
+}
+
+// indirect returns the value pointed to by a pointer.
+// Pointers are followed until the value is not a pointer.
+// New values are allocated for each nil pointer.
+//
+// An exception to this rule is if the value satisfies an interface of
+// interest to us (like encoding.TextUnmarshaler).
+func indirect(v reflect.Value) reflect.Value {
+ if v.Kind() != reflect.Ptr {
+ if v.CanSet() {
+ pv := v.Addr()
+ if _, ok := pv.Interface().(TextUnmarshaler); ok {
+ return pv
+ }
+ }
+ return v
+ }
+ if v.IsNil() {
+ v.Set(reflect.New(v.Type().Elem()))
+ }
+ return indirect(reflect.Indirect(v))
+}
+
+func isUnifiable(rv reflect.Value) bool {
+ if rv.CanSet() {
+ return true
+ }
+ if _, ok := rv.Interface().(TextUnmarshaler); ok {
+ return true
+ }
+ return false
+}
+
+func badtype(expected string, data interface{}) error {
+ return e("cannot load TOML value of type %T into a Go %s", data, expected)
+}
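
decode.go's `Primitive` type defers decoding until the concrete Go type is known, as the doc comments above describe. A small self-contained sketch of that round trip:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type config struct {
	Name  string
	Extra toml.Primitive // decoded later, once the shape is known
}

func main() {
	blob := `Name = "demo"

[Extra]
Level = 3
`
	var c config
	md, err := toml.Decode(blob, &c)
	if err != nil {
		log.Fatal(err)
	}
	// Decode the deferred table now that we know what it should look like.
	var extra struct{ Level int }
	if err := md.PrimitiveDecode(c.Extra, &extra); err != nil {
		log.Fatal(err)
	}
	fmt.Println(c.Name, extra.Level) // demo 3
}
```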
diff --git a/vendor/github.com/BurntSushi/toml/decode_meta.go b/vendor/github.com/BurntSushi/toml/decode_meta.go
new file mode 100644
index 0000000000000..b9914a6798cf9
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/decode_meta.go
@@ -0,0 +1,121 @@
+package toml
+
+import "strings"
+
+// MetaData allows access to meta information about TOML data that may not
+// be inferrable via reflection. In particular, whether a key has been defined
+// and the TOML type of a key.
+type MetaData struct {
+ mapping map[string]interface{}
+ types map[string]tomlType
+ keys []Key
+ decoded map[string]bool
+ context Key // Used only during decoding.
+}
+
+// IsDefined returns true if the key given exists in the TOML data. The key
+// should be specified hierarchically. e.g.,
+//
+// // access the TOML key 'a.b.c'
+// IsDefined("a", "b", "c")
+//
+// IsDefined will return false if an empty key is given. Keys are case sensitive.
+func (md *MetaData) IsDefined(key ...string) bool {
+ if len(key) == 0 {
+ return false
+ }
+
+ var hash map[string]interface{}
+ var ok bool
+ var hashOrVal interface{} = md.mapping
+ for _, k := range key {
+ if hash, ok = hashOrVal.(map[string]interface{}); !ok {
+ return false
+ }
+ if hashOrVal, ok = hash[k]; !ok {
+ return false
+ }
+ }
+ return true
+}
+
+// Type returns a string representation of the type of the key specified.
+//
+// Type will return the empty string if given an empty key or a key that
+// does not exist. Keys are case sensitive.
+func (md *MetaData) Type(key ...string) string {
+ fullkey := strings.Join(key, ".")
+ if typ, ok := md.types[fullkey]; ok {
+ return typ.typeString()
+ }
+ return ""
+}
+
+// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
+// to get values of this type.
+type Key []string
+
+func (k Key) String() string {
+ return strings.Join(k, ".")
+}
+
+func (k Key) maybeQuotedAll() string {
+ var ss []string
+ for i := range k {
+ ss = append(ss, k.maybeQuoted(i))
+ }
+ return strings.Join(ss, ".")
+}
+
+func (k Key) maybeQuoted(i int) string {
+ quote := false
+ for _, c := range k[i] {
+ if !isBareKeyChar(c) {
+ quote = true
+ break
+ }
+ }
+ if quote {
+ return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
+ }
+ return k[i]
+}
+
+func (k Key) add(piece string) Key {
+ newKey := make(Key, len(k)+1)
+ copy(newKey, k)
+ newKey[len(k)] = piece
+ return newKey
+}
+
+// Keys returns a slice of every key in the TOML data, including key groups.
+// Each key is itself a slice, where the first element is the top of the
+// hierarchy and the last is the most specific.
+//
+// The list will have the same order as the keys appeared in the TOML data.
+//
+// All keys returned are non-empty.
+func (md *MetaData) Keys() []Key {
+ return md.keys
+}
+
+// Undecoded returns all keys that have not been decoded in the order in which
+// they appear in the original TOML document.
+//
+// This includes keys that haven't been decoded because of a Primitive value.
+// Once the Primitive value is decoded, the keys will be considered decoded.
+//
+// Also note that decoding into an empty interface will result in no decoding,
+// and so no keys will be considered decoded.
+//
+// In this sense, the Undecoded keys correspond to keys in the TOML document
+// that do not have a concrete type in your representation.
+func (md *MetaData) Undecoded() []Key {
+ undecoded := make([]Key, 0, len(md.keys))
+ for _, key := range md.keys {
+ if !md.decoded[key.String()] {
+ undecoded = append(undecoded, key)
+ }
+ }
+ return undecoded
+}
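
The `MetaData` helpers make it straightforward to warn on (or reject) configuration keys that a struct did not consume. A sketch continuing from a `toml.Decode` call like the one shown earlier; the printed type name is illustrative:

```go
md, err := toml.Decode(blob, &c)
if err != nil {
	log.Fatal(err)
}
if md.IsDefined("Extra", "Level") {
	fmt.Println("Extra.Level has TOML type", md.Type("Extra", "Level")) // e.g. "Integer"
}
for _, k := range md.Undecoded() {
	fmt.Println("unused key:", k) // keys present in the TOML but not in the struct
}
```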
diff --git a/vendor/github.com/BurntSushi/toml/doc.go b/vendor/github.com/BurntSushi/toml/doc.go
new file mode 100644
index 0000000000000..b371f396edcac
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/doc.go
@@ -0,0 +1,27 @@
+/*
+Package toml provides facilities for decoding and encoding TOML configuration
+files via reflection. There is also support for delaying decoding with
+the Primitive type, and querying the set of keys in a TOML document with the
+MetaData type.
+
+The specification implemented: https://github.com/toml-lang/toml
+
+The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
+whether a file is a valid TOML document. It can also be used to print the
+type of each key in a TOML document.
+
+Testing
+
+There are two important types of tests used for this package. The first is
+contained inside '*_test.go' files and uses the standard Go unit testing
+framework. These tests are primarily devoted to holistically testing the
+decoder and encoder.
+
+The second type of testing is used to verify the implementation's adherence
+to the TOML specification. These tests have been factored into their own
+project: https://github.com/BurntSushi/toml-test
+
+The reason the tests are in a separate project is so that they can be used by
+any implementation of TOML. Namely, it is language agnostic.
+*/
+package toml
diff --git a/vendor/github.com/BurntSushi/toml/encode.go b/vendor/github.com/BurntSushi/toml/encode.go
new file mode 100644
index 0000000000000..d905c21a24662
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encode.go
@@ -0,0 +1,568 @@
+package toml
+
+import (
+ "bufio"
+ "errors"
+ "fmt"
+ "io"
+ "reflect"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+)
+
+type tomlEncodeError struct{ error }
+
+var (
+ errArrayMixedElementTypes = errors.New(
+ "toml: cannot encode array with mixed element types")
+ errArrayNilElement = errors.New(
+ "toml: cannot encode array with nil element")
+ errNonString = errors.New(
+ "toml: cannot encode a map with non-string key type")
+ errAnonNonStruct = errors.New(
+ "toml: cannot encode an anonymous field that is not a struct")
+ errArrayNoTable = errors.New(
+ "toml: TOML array element cannot contain a table")
+ errNoKey = errors.New(
+ "toml: top-level values must be Go maps or structs")
+ errAnything = errors.New("") // used in testing
+)
+
+var quotedReplacer = strings.NewReplacer(
+ "\t", "\\t",
+ "\n", "\\n",
+ "\r", "\\r",
+ "\"", "\\\"",
+ "\\", "\\\\",
+)
+
+// Encoder controls the encoding of Go values to a TOML document to some
+// io.Writer.
+//
+// The indentation level can be controlled with the Indent field.
+type Encoder struct {
+ // A single indentation level. By default it is two spaces.
+ Indent string
+
+ // hasWritten is whether we have written any output to w yet.
+ hasWritten bool
+ w *bufio.Writer
+}
+
+// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
+// given. By default, a single indentation level is 2 spaces.
+func NewEncoder(w io.Writer) *Encoder {
+ return &Encoder{
+ w: bufio.NewWriter(w),
+ Indent: " ",
+ }
+}
+
+// Encode writes a TOML representation of the Go value to the underlying
+// io.Writer. If the value given cannot be encoded to a valid TOML document,
+// then an error is returned.
+//
+// The mapping between Go values and TOML values should be precisely the same
+// as for the Decode* functions. Similarly, the TextMarshaler interface is
+// supported by encoding the resulting bytes as strings. (If you want to write
+// arbitrary binary data then you will need to use something like base64 since
+// TOML does not have any binary types.)
+//
+// When encoding TOML hashes (i.e., Go maps or structs), keys without any
+// sub-hashes are encoded first.
+//
+// If a Go map is encoded, then its keys are sorted alphabetically for
+// deterministic output. More control over this behavior may be provided if
+// there is demand for it.
+//
+// Encoding Go values without a corresponding TOML representation---like map
+// types with non-string keys---will cause an error to be returned. Similarly
+// for mixed arrays/slices, arrays/slices with nil elements, embedded
+// non-struct types and nested slices containing maps or structs.
+// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
+// and so is []map[string][]string.)
+func (enc *Encoder) Encode(v interface{}) error {
+ rv := eindirect(reflect.ValueOf(v))
+ if err := enc.safeEncode(Key([]string{}), rv); err != nil {
+ return err
+ }
+ return enc.w.Flush()
+}
+
+func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ if terr, ok := r.(tomlEncodeError); ok {
+ err = terr.error
+ return
+ }
+ panic(r)
+ }
+ }()
+ enc.encode(key, rv)
+ return nil
+}
+
+func (enc *Encoder) encode(key Key, rv reflect.Value) {
+ // Special case. Time needs to be in ISO8601 format.
+	// Special case. If we can marshal the type to text, then we use that.
+	// Basically, this prevents the encoder from handling these types as
+ // generic structs (or whatever the underlying type of a TextMarshaler is).
+ switch rv.Interface().(type) {
+ case time.Time, TextMarshaler:
+ enc.keyEqElement(key, rv)
+ return
+ }
+
+ k := rv.Kind()
+ switch k {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
+ reflect.Uint64,
+ reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
+ enc.keyEqElement(key, rv)
+ case reflect.Array, reflect.Slice:
+ if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
+ enc.eArrayOfTables(key, rv)
+ } else {
+ enc.keyEqElement(key, rv)
+ }
+ case reflect.Interface:
+ if rv.IsNil() {
+ return
+ }
+ enc.encode(key, rv.Elem())
+ case reflect.Map:
+ if rv.IsNil() {
+ return
+ }
+ enc.eTable(key, rv)
+ case reflect.Ptr:
+ if rv.IsNil() {
+ return
+ }
+ enc.encode(key, rv.Elem())
+ case reflect.Struct:
+ enc.eTable(key, rv)
+ default:
+ panic(e("unsupported type for key '%s': %s", key, k))
+ }
+}
+
+// eElement encodes any value that can be an array element (primitives and
+// arrays).
+func (enc *Encoder) eElement(rv reflect.Value) {
+ switch v := rv.Interface().(type) {
+ case time.Time:
+ // Special case time.Time as a primitive. Has to come before
+ // TextMarshaler below because time.Time implements
+ // encoding.TextMarshaler, but we need to always use UTC.
+ enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
+ return
+ case TextMarshaler:
+ // Special case. Use text marshaler if it's available for this value.
+ if s, err := v.MarshalText(); err != nil {
+ encPanic(err)
+ } else {
+ enc.writeQuoted(string(s))
+ }
+ return
+ }
+ switch rv.Kind() {
+ case reflect.Bool:
+ enc.wf(strconv.FormatBool(rv.Bool()))
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64:
+ enc.wf(strconv.FormatInt(rv.Int(), 10))
+ case reflect.Uint, reflect.Uint8, reflect.Uint16,
+ reflect.Uint32, reflect.Uint64:
+ enc.wf(strconv.FormatUint(rv.Uint(), 10))
+ case reflect.Float32:
+ enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
+ case reflect.Float64:
+ enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
+ case reflect.Array, reflect.Slice:
+ enc.eArrayOrSliceElement(rv)
+ case reflect.Interface:
+ enc.eElement(rv.Elem())
+ case reflect.String:
+ enc.writeQuoted(rv.String())
+ default:
+ panic(e("unexpected primitive type: %s", rv.Kind()))
+ }
+}
+
+// By the TOML spec, all floats must have a decimal point with at least one
+// digit on either side.
+func floatAddDecimal(fstr string) string {
+ if !strings.Contains(fstr, ".") {
+ return fstr + ".0"
+ }
+ return fstr
+}
+
+func (enc *Encoder) writeQuoted(s string) {
+ enc.wf("\"%s\"", quotedReplacer.Replace(s))
+}
+
+func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
+ length := rv.Len()
+ enc.wf("[")
+ for i := 0; i < length; i++ {
+ elem := rv.Index(i)
+ enc.eElement(elem)
+ if i != length-1 {
+ enc.wf(", ")
+ }
+ }
+ enc.wf("]")
+}
+
+func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
+ if len(key) == 0 {
+ encPanic(errNoKey)
+ }
+ for i := 0; i < rv.Len(); i++ {
+ trv := rv.Index(i)
+ if isNil(trv) {
+ continue
+ }
+ panicIfInvalidKey(key)
+ enc.newline()
+ enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
+ enc.newline()
+ enc.eMapOrStruct(key, trv)
+ }
+}
+
+func (enc *Encoder) eTable(key Key, rv reflect.Value) {
+ panicIfInvalidKey(key)
+ if len(key) == 1 {
+ // Output an extra newline between top-level tables.
+ // (The newline isn't written if nothing else has been written though.)
+ enc.newline()
+ }
+ if len(key) > 0 {
+ enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
+ enc.newline()
+ }
+ enc.eMapOrStruct(key, rv)
+}
+
+func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
+ switch rv := eindirect(rv); rv.Kind() {
+ case reflect.Map:
+ enc.eMap(key, rv)
+ case reflect.Struct:
+ enc.eStruct(key, rv)
+ default:
+ panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
+ }
+}
+
+func (enc *Encoder) eMap(key Key, rv reflect.Value) {
+ rt := rv.Type()
+ if rt.Key().Kind() != reflect.String {
+ encPanic(errNonString)
+ }
+
+ // Sort keys so that we have deterministic output. And write keys directly
+ // underneath this key first, before writing sub-structs or sub-maps.
+ var mapKeysDirect, mapKeysSub []string
+ for _, mapKey := range rv.MapKeys() {
+ k := mapKey.String()
+ if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
+ mapKeysSub = append(mapKeysSub, k)
+ } else {
+ mapKeysDirect = append(mapKeysDirect, k)
+ }
+ }
+
+ var writeMapKeys = func(mapKeys []string) {
+ sort.Strings(mapKeys)
+ for _, mapKey := range mapKeys {
+ mrv := rv.MapIndex(reflect.ValueOf(mapKey))
+ if isNil(mrv) {
+ // Don't write anything for nil fields.
+ continue
+ }
+ enc.encode(key.add(mapKey), mrv)
+ }
+ }
+ writeMapKeys(mapKeysDirect)
+ writeMapKeys(mapKeysSub)
+}
+
+func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
+ // Write keys for fields directly under this key first, because if we write
+ // a field that creates a new table, then all keys under it will be in that
+ // table (not the one we're writing here).
+ rt := rv.Type()
+ var fieldsDirect, fieldsSub [][]int
+ var addFields func(rt reflect.Type, rv reflect.Value, start []int)
+ addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
+ for i := 0; i < rt.NumField(); i++ {
+ f := rt.Field(i)
+ // skip unexported fields
+ if f.PkgPath != "" && !f.Anonymous {
+ continue
+ }
+ frv := rv.Field(i)
+ if f.Anonymous {
+ t := f.Type
+ switch t.Kind() {
+ case reflect.Struct:
+ // Treat anonymous struct fields with
+ // tag names as though they are not
+ // anonymous, like encoding/json does.
+ if getOptions(f.Tag).name == "" {
+ addFields(t, frv, f.Index)
+ continue
+ }
+ case reflect.Ptr:
+ if t.Elem().Kind() == reflect.Struct &&
+ getOptions(f.Tag).name == "" {
+ if !frv.IsNil() {
+ addFields(t.Elem(), frv.Elem(), f.Index)
+ }
+ continue
+ }
+ // Fall through to the normal field encoding logic below
+ // for non-struct anonymous fields.
+ }
+ }
+
+ if typeIsHash(tomlTypeOfGo(frv)) {
+ fieldsSub = append(fieldsSub, append(start, f.Index...))
+ } else {
+ fieldsDirect = append(fieldsDirect, append(start, f.Index...))
+ }
+ }
+ }
+ addFields(rt, rv, nil)
+
+ var writeFields = func(fields [][]int) {
+ for _, fieldIndex := range fields {
+ sft := rt.FieldByIndex(fieldIndex)
+ sf := rv.FieldByIndex(fieldIndex)
+ if isNil(sf) {
+ // Don't write anything for nil fields.
+ continue
+ }
+
+ opts := getOptions(sft.Tag)
+ if opts.skip {
+ continue
+ }
+ keyName := sft.Name
+ if opts.name != "" {
+ keyName = opts.name
+ }
+ if opts.omitempty && isEmpty(sf) {
+ continue
+ }
+ if opts.omitzero && isZero(sf) {
+ continue
+ }
+
+ enc.encode(key.add(keyName), sf)
+ }
+ }
+ writeFields(fieldsDirect)
+ writeFields(fieldsSub)
+}
+
+// tomlTypeOfGo returns the TOML type of a Go value. The type may be `nil`,
+// which means no concrete TOML type could be found. Callers use it to detect
+// mixed array element types (which are forbidden) and nil values (which are
+// illegal as array elements).
+func tomlTypeOfGo(rv reflect.Value) tomlType {
+ if isNil(rv) || !rv.IsValid() {
+ return nil
+ }
+ switch rv.Kind() {
+ case reflect.Bool:
+ return tomlBool
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
+ reflect.Int64,
+ reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
+ reflect.Uint64:
+ return tomlInteger
+ case reflect.Float32, reflect.Float64:
+ return tomlFloat
+ case reflect.Array, reflect.Slice:
+ if typeEqual(tomlHash, tomlArrayType(rv)) {
+ return tomlArrayHash
+ }
+ return tomlArray
+ case reflect.Ptr, reflect.Interface:
+ return tomlTypeOfGo(rv.Elem())
+ case reflect.String:
+ return tomlString
+ case reflect.Map:
+ return tomlHash
+ case reflect.Struct:
+ switch rv.Interface().(type) {
+ case time.Time:
+ return tomlDatetime
+ case TextMarshaler:
+ return tomlString
+ default:
+ return tomlHash
+ }
+ default:
+ panic("unexpected reflect.Kind: " + rv.Kind().String())
+ }
+}
+
+// tomlArrayType returns the element type of a TOML array. The type returned
+// may be nil if it cannot be determined (e.g., a nil slice or a zero length
+// slice). This function may also panic if it finds a type that cannot be
+// expressed in TOML (such as nil elements, heterogeneous arrays or directly
+// nested arrays of tables).
+func tomlArrayType(rv reflect.Value) tomlType {
+ if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
+ return nil
+ }
+ firstType := tomlTypeOfGo(rv.Index(0))
+ if firstType == nil {
+ encPanic(errArrayNilElement)
+ }
+
+ rvlen := rv.Len()
+ for i := 1; i < rvlen; i++ {
+ elem := rv.Index(i)
+ switch elemType := tomlTypeOfGo(elem); {
+ case elemType == nil:
+ encPanic(errArrayNilElement)
+ case !typeEqual(firstType, elemType):
+ encPanic(errArrayMixedElementTypes)
+ }
+ }
+ // If we have a nested array, then we must make sure that the nested
+ // array contains ONLY primitives.
+ // This checks arbitrarily nested arrays.
+ if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
+ nest := tomlArrayType(eindirect(rv.Index(0)))
+ if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
+ encPanic(errArrayNoTable)
+ }
+ }
+ return firstType
+}
+
+type tagOptions struct {
+ skip bool // "-"
+ name string
+ omitempty bool
+ omitzero bool
+}
+
+func getOptions(tag reflect.StructTag) tagOptions {
+ t := tag.Get("toml")
+ if t == "-" {
+ return tagOptions{skip: true}
+ }
+ var opts tagOptions
+ parts := strings.Split(t, ",")
+ opts.name = parts[0]
+ for _, s := range parts[1:] {
+ switch s {
+ case "omitempty":
+ opts.omitempty = true
+ case "omitzero":
+ opts.omitzero = true
+ }
+ }
+ return opts
+}
+
+func isZero(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
+ return rv.Int() == 0
+ case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
+ return rv.Uint() == 0
+ case reflect.Float32, reflect.Float64:
+ return rv.Float() == 0.0
+ }
+ return false
+}
+
+func isEmpty(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
+ return rv.Len() == 0
+ case reflect.Bool:
+ return !rv.Bool()
+ }
+ return false
+}
+
+func (enc *Encoder) newline() {
+ if enc.hasWritten {
+ enc.wf("\n")
+ }
+}
+
+func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
+ if len(key) == 0 {
+ encPanic(errNoKey)
+ }
+ panicIfInvalidKey(key)
+ enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
+ enc.eElement(val)
+ enc.newline()
+}
+
+func (enc *Encoder) wf(format string, v ...interface{}) {
+ if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
+ encPanic(err)
+ }
+ enc.hasWritten = true
+}
+
+func (enc *Encoder) indentStr(key Key) string {
+ return strings.Repeat(enc.Indent, len(key)-1)
+}
+
+func encPanic(err error) {
+ panic(tomlEncodeError{err})
+}
+
+func eindirect(v reflect.Value) reflect.Value {
+ switch v.Kind() {
+ case reflect.Ptr, reflect.Interface:
+ return eindirect(v.Elem())
+ default:
+ return v
+ }
+}
+
+func isNil(rv reflect.Value) bool {
+ switch rv.Kind() {
+ case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
+ return rv.IsNil()
+ default:
+ return false
+ }
+}
+
+func panicIfInvalidKey(key Key) {
+ for _, k := range key {
+ if len(k) == 0 {
+ encPanic(e("Key '%s' is not a valid table name. Key names "+
+ "cannot be empty.", key.maybeQuotedAll()))
+ }
+ }
+}
+
+func isValidKeyName(s string) bool {
+ return len(s) != 0
+}
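For reviewers of this vendored encoder, here is a minimal usage sketch of the API added above (`NewEncoder`, `Encode`, the `Indent` field, and the `toml` struct tags parsed by `getOptions`). The `Config` struct and the import path are illustrative assumptions, not part of this diff:

```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumed import path for the vendored package
)

// Config is a hypothetical struct exercising direct keys, omitempty, and a
// map (which the encoder emits as a sub-table with sorted keys).
type Config struct {
	Title  string         `toml:"title"`
	Debug  bool           `toml:"debug,omitempty"` // false => omitted
	Limits map[string]int `toml:"limits"`
}

func main() {
	var buf bytes.Buffer
	enc := toml.NewEncoder(&buf)
	enc.Indent = "    " // override the default two-space indent
	cfg := Config{Title: "example", Limits: map[string]int{"b": 2, "a": 1}}
	if err := enc.Encode(cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Print(buf.String())
	// title = "example"
	//
	// [limits]
	//     a = 1
	//     b = 2
}
```

Note how the direct key (`title`) is written before the sub-table, matching the ordering logic in `eStruct`/`eMap` above.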
diff --git a/vendor/github.com/BurntSushi/toml/encoding_types.go b/vendor/github.com/BurntSushi/toml/encoding_types.go
new file mode 100644
index 0000000000000..d36e1dd6002be
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encoding_types.go
@@ -0,0 +1,19 @@
+// +build go1.2
+
+package toml
+
+// In order to support Go 1.1, we define our own TextMarshaler and
+// TextUnmarshaler types. For Go 1.2+, we just alias them with the
+// standard library interfaces.
+
+import (
+ "encoding"
+)
+
+// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
+// so that Go 1.1 can be supported.
+type TextMarshaler encoding.TextMarshaler
+
+// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
+// here so that Go 1.1 can be supported.
+type TextUnmarshaler encoding.TextUnmarshaler
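The `TextMarshaler` alias above is what lets arbitrary types opt into string encoding (see the special case in `eElement`). A hedged sketch, with a hypothetical `hostPort` type and the same assumed import path:

```go
package main

import (
	"fmt"
	"net"
	"os"

	"github.com/BurntSushi/toml" // assumed import path
)

// hostPort is a hypothetical type; because it implements MarshalText, the
// encoder emits it as a quoted TOML string rather than as a table.
type hostPort struct {
	Host string
	Port int
}

func (h hostPort) MarshalText() ([]byte, error) {
	return []byte(net.JoinHostPort(h.Host, fmt.Sprint(h.Port))), nil
}

func main() {
	v := struct {
		Addr hostPort `toml:"addr"`
	}{Addr: hostPort{Host: "localhost", Port: 8080}}
	if err := toml.NewEncoder(os.Stdout).Encode(v); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Output: addr = "localhost:8080"
}
```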
diff --git a/vendor/github.com/BurntSushi/toml/encoding_types_1.1.go b/vendor/github.com/BurntSushi/toml/encoding_types_1.1.go
new file mode 100644
index 0000000000000..e8d503d04690d
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/encoding_types_1.1.go
@@ -0,0 +1,18 @@
+// +build !go1.2
+
+package toml
+
+// These interfaces were introduced in Go 1.2, so we add them manually when
+// compiling for Go 1.1.
+
+// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
+// so that Go 1.1 can be supported.
+type TextMarshaler interface {
+ MarshalText() (text []byte, err error)
+}
+
+// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
+// here so that Go 1.1 can be supported.
+type TextUnmarshaler interface {
+ UnmarshalText(text []byte) error
+}
diff --git a/vendor/github.com/BurntSushi/toml/lex.go b/vendor/github.com/BurntSushi/toml/lex.go
new file mode 100644
index 0000000000000..e0a742a8870f1
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/lex.go
@@ -0,0 +1,953 @@
+package toml
+
+import (
+ "fmt"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+)
+
+type itemType int
+
+const (
+ itemError itemType = iota
+ itemNIL // used in the parser to indicate no type
+ itemEOF
+ itemText
+ itemString
+ itemRawString
+ itemMultilineString
+ itemRawMultilineString
+ itemBool
+ itemInteger
+ itemFloat
+ itemDatetime
+ itemArray // the start of an array
+ itemArrayEnd
+ itemTableStart
+ itemTableEnd
+ itemArrayTableStart
+ itemArrayTableEnd
+ itemKeyStart
+ itemCommentStart
+ itemInlineTableStart
+ itemInlineTableEnd
+)
+
+const (
+ eof = 0
+ comma = ','
+ tableStart = '['
+ tableEnd = ']'
+ arrayTableStart = '['
+ arrayTableEnd = ']'
+ tableSep = '.'
+ keySep = '='
+ arrayStart = '['
+ arrayEnd = ']'
+ commentStart = '#'
+ stringStart = '"'
+ stringEnd = '"'
+ rawStringStart = '\''
+ rawStringEnd = '\''
+ inlineTableStart = '{'
+ inlineTableEnd = '}'
+)
+
+type stateFn func(lx *lexer) stateFn
+
+type lexer struct {
+ input string
+ start int
+ pos int
+ line int
+ state stateFn
+ items chan item
+
+ // Allow for backing up up to three runes.
+ // This is necessary because TOML contains 3-rune tokens (""" and ''').
+ prevWidths [3]int
+ nprev int // how many of prevWidths are in use
+ // If we emit an eof, we can still back up, but it is not OK to call
+ // next again.
+ atEOF bool
+
+ // A stack of state functions used to maintain context.
+ // The idea is to reuse parts of the state machine in various places.
+ // For example, values can appear at the top level or within arbitrarily
+ // nested arrays. The last state on the stack is used after a value has
+ // been lexed. Similarly for comments.
+ stack []stateFn
+}
+
+type item struct {
+ typ itemType
+ val string
+ line int
+}
+
+func (lx *lexer) nextItem() item {
+ for {
+ select {
+ case item := <-lx.items:
+ return item
+ default:
+ lx.state = lx.state(lx)
+ }
+ }
+}
+
+func lex(input string) *lexer {
+ lx := &lexer{
+ input: input,
+ state: lexTop,
+ line: 1,
+ items: make(chan item, 10),
+ stack: make([]stateFn, 0, 10),
+ }
+ return lx
+}
+
+func (lx *lexer) push(state stateFn) {
+ lx.stack = append(lx.stack, state)
+}
+
+func (lx *lexer) pop() stateFn {
+ if len(lx.stack) == 0 {
+ return lx.errorf("BUG in lexer: no states to pop")
+ }
+ last := lx.stack[len(lx.stack)-1]
+ lx.stack = lx.stack[0 : len(lx.stack)-1]
+ return last
+}
+
+func (lx *lexer) current() string {
+ return lx.input[lx.start:lx.pos]
+}
+
+func (lx *lexer) emit(typ itemType) {
+ lx.items <- item{typ, lx.current(), lx.line}
+ lx.start = lx.pos
+}
+
+func (lx *lexer) emitTrim(typ itemType) {
+ lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
+ lx.start = lx.pos
+}
+
+func (lx *lexer) next() (r rune) {
+ if lx.atEOF {
+ panic("next called after EOF")
+ }
+ if lx.pos >= len(lx.input) {
+ lx.atEOF = true
+ return eof
+ }
+
+ if lx.input[lx.pos] == '\n' {
+ lx.line++
+ }
+ lx.prevWidths[2] = lx.prevWidths[1]
+ lx.prevWidths[1] = lx.prevWidths[0]
+ if lx.nprev < 3 {
+ lx.nprev++
+ }
+ r, w := utf8.DecodeRuneInString(lx.input[lx.pos:])
+ lx.prevWidths[0] = w
+ lx.pos += w
+ return r
+}
+
+// ignore skips over the pending input before this point.
+func (lx *lexer) ignore() {
+ lx.start = lx.pos
+}
+
+// backup steps back one rune. Can be called up to three times between calls
+// to next, since the lexer records the widths of the last three runes.
+func (lx *lexer) backup() {
+ if lx.atEOF {
+ lx.atEOF = false
+ return
+ }
+ if lx.nprev < 1 {
+ panic("backed up too far")
+ }
+ w := lx.prevWidths[0]
+ lx.prevWidths[0] = lx.prevWidths[1]
+ lx.prevWidths[1] = lx.prevWidths[2]
+ lx.nprev--
+ lx.pos -= w
+ if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
+ lx.line--
+ }
+}
+
+// accept consumes the next rune if it's equal to `valid`.
+func (lx *lexer) accept(valid rune) bool {
+ if lx.next() == valid {
+ return true
+ }
+ lx.backup()
+ return false
+}
+
+// peek returns but does not consume the next rune in the input.
+func (lx *lexer) peek() rune {
+ r := lx.next()
+ lx.backup()
+ return r
+}
+
+// skip ignores all input that matches the given predicate.
+func (lx *lexer) skip(pred func(rune) bool) {
+ for {
+ r := lx.next()
+ if pred(r) {
+ continue
+ }
+ lx.backup()
+ lx.ignore()
+ return
+ }
+}
+
+// errorf stops all lexing by emitting an error and returning `nil`.
+// Note that any value that is a character is escaped if it's a special
+// character (newlines, tabs, etc.).
+func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
+ lx.items <- item{
+ itemError,
+ fmt.Sprintf(format, values...),
+ lx.line,
+ }
+ return nil
+}
+
+// lexTop consumes elements at the top level of TOML data.
+func lexTop(lx *lexer) stateFn {
+ r := lx.next()
+ if isWhitespace(r) || isNL(r) {
+ return lexSkip(lx, lexTop)
+ }
+ switch r {
+ case commentStart:
+ lx.push(lexTop)
+ return lexCommentStart
+ case tableStart:
+ return lexTableStart
+ case eof:
+ if lx.pos > lx.start {
+ return lx.errorf("unexpected EOF")
+ }
+ lx.emit(itemEOF)
+ return nil
+ }
+
+ // At this point, the only valid item can be a key, so we back up
+ // and let the key lexer do the rest.
+ lx.backup()
+ lx.push(lexTopEnd)
+ return lexKeyStart
+}
+
+// lexTopEnd is entered whenever a top-level item has been consumed. (A value
+// or a table.) It must see only whitespace, and will turn back to lexTop
+// upon a newline. If it sees EOF, it will quit the lexer successfully.
+func lexTopEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == commentStart:
+ // a comment will read to a newline for us.
+ lx.push(lexTop)
+ return lexCommentStart
+ case isWhitespace(r):
+ return lexTopEnd
+ case isNL(r):
+ lx.ignore()
+ return lexTop
+ case r == eof:
+ lx.emit(itemEOF)
+ return nil
+ }
+ return lx.errorf("expected a top-level item to end with a newline, "+
+ "comment, or EOF, but got %q instead", r)
+}
+
+// lexTable lexes the beginning of a table. Namely, it makes sure that
+// it starts with a character other than '.' and ']'.
+// It assumes that '[' has already been consumed.
+// It also handles the case that this is an item in an array of tables.
+// e.g., '[[name]]'.
+func lexTableStart(lx *lexer) stateFn {
+ if lx.peek() == arrayTableStart {
+ lx.next()
+ lx.emit(itemArrayTableStart)
+ lx.push(lexArrayTableEnd)
+ } else {
+ lx.emit(itemTableStart)
+ lx.push(lexTableEnd)
+ }
+ return lexTableNameStart
+}
+
+func lexTableEnd(lx *lexer) stateFn {
+ lx.emit(itemTableEnd)
+ return lexTopEnd
+}
+
+func lexArrayTableEnd(lx *lexer) stateFn {
+ if r := lx.next(); r != arrayTableEnd {
+ return lx.errorf("expected end of table array name delimiter %q, "+
+ "but got %q instead", arrayTableEnd, r)
+ }
+ lx.emit(itemArrayTableEnd)
+ return lexTopEnd
+}
+
+func lexTableNameStart(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.peek(); {
+ case r == tableEnd || r == eof:
+ return lx.errorf("unexpected end of table name " +
+ "(table names cannot be empty)")
+ case r == tableSep:
+ return lx.errorf("unexpected table separator " +
+ "(table names cannot be empty)")
+ case r == stringStart || r == rawStringStart:
+ lx.ignore()
+ lx.push(lexTableNameEnd)
+ return lexValue // reuse string lexing
+ default:
+ return lexBareTableName
+ }
+}
+
+// lexBareTableName lexes the name of a table. It assumes that at least one
+// valid character for the table has already been read.
+func lexBareTableName(lx *lexer) stateFn {
+ r := lx.next()
+ if isBareKeyChar(r) {
+ return lexBareTableName
+ }
+ lx.backup()
+ lx.emit(itemText)
+ return lexTableNameEnd
+}
+
+// lexTableNameEnd reads the end of a piece of a table name, optionally
+// consuming whitespace.
+func lexTableNameEnd(lx *lexer) stateFn {
+ lx.skip(isWhitespace)
+ switch r := lx.next(); {
+ case isWhitespace(r):
+ return lexTableNameEnd
+ case r == tableSep:
+ lx.ignore()
+ return lexTableNameStart
+ case r == tableEnd:
+ return lx.pop()
+ default:
+ return lx.errorf("expected '.' or ']' to end table name, "+
+ "but got %q instead", r)
+ }
+}
+
+// lexKeyStart consumes a key name up until the first non-whitespace character.
+// lexKeyStart will ignore whitespace.
+func lexKeyStart(lx *lexer) stateFn {
+ r := lx.peek()
+ switch {
+ case r == keySep:
+ return lx.errorf("unexpected key separator %q", keySep)
+ case isWhitespace(r) || isNL(r):
+ lx.next()
+ return lexSkip(lx, lexKeyStart)
+ case r == stringStart || r == rawStringStart:
+ lx.ignore()
+ lx.emit(itemKeyStart)
+ lx.push(lexKeyEnd)
+ return lexValue // reuse string lexing
+ default:
+ lx.ignore()
+ lx.emit(itemKeyStart)
+ return lexBareKey
+ }
+}
+
+// lexBareKey consumes the text of a bare key. Assumes that the first character
+// (which is not whitespace) has not yet been consumed.
+func lexBareKey(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case isBareKeyChar(r):
+ return lexBareKey
+ case isWhitespace(r):
+ lx.backup()
+ lx.emit(itemText)
+ return lexKeyEnd
+ case r == keySep:
+ lx.backup()
+ lx.emit(itemText)
+ return lexKeyEnd
+ default:
+ return lx.errorf("bare keys cannot contain %q", r)
+ }
+}
+
+// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
+// separator).
+func lexKeyEnd(lx *lexer) stateFn {
+ switch r := lx.next(); {
+ case r == keySep:
+ return lexSkip(lx, lexValue)
+ case isWhitespace(r):
+ return lexSkip(lx, lexKeyEnd)
+ default:
+ return lx.errorf("expected key separator %q, but got %q instead",
+ keySep, r)
+ }
+}
+
+// lexValue starts the consumption of a value anywhere a value is expected.
+// lexValue will ignore whitespace.
+// After a value is lexed, the last state on the stack is popped and returned.
+func lexValue(lx *lexer) stateFn {
+ // We allow whitespace to precede a value, but NOT newlines.
+ // In array syntax, the array states are responsible for ignoring newlines.
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexValue)
+ case isDigit(r):
+ lx.backup() // avoid an extra state and use the same as above
+ return lexNumberOrDateStart
+ }
+ switch r {
+ case arrayStart:
+ lx.ignore()
+ lx.emit(itemArray)
+ return lexArrayValue
+ case inlineTableStart:
+ lx.ignore()
+ lx.emit(itemInlineTableStart)
+ return lexInlineTableValue
+ case stringStart:
+ if lx.accept(stringStart) {
+ if lx.accept(stringStart) {
+ lx.ignore() // Ignore """
+ return lexMultilineString
+ }
+ lx.backup()
+ }
+ lx.ignore() // ignore the '"'
+ return lexString
+ case rawStringStart:
+ if lx.accept(rawStringStart) {
+ if lx.accept(rawStringStart) {
+ lx.ignore() // Ignore '''
+ return lexMultilineRawString
+ }
+ lx.backup()
+ }
+ lx.ignore() // ignore the "'"
+ return lexRawString
+ case '+', '-':
+ return lexNumberStart
+ case '.': // special error case, be kind to users
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+ if unicode.IsLetter(r) {
+ // Be permissive here; lexBool will give a nice error if the
+ // user wrote something like
+ // x = foo
+ // (i.e. not 'true' or 'false' but is something else word-like.)
+ lx.backup()
+ return lexBool
+ }
+ return lx.errorf("expected value but found %q instead", r)
+}
+
+// lexArrayValue consumes one value in an array. It assumes that '[' or ','
+// have already been consumed. All whitespace and newlines are ignored.
+func lexArrayValue(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r) || isNL(r):
+ return lexSkip(lx, lexArrayValue)
+ case r == commentStart:
+ lx.push(lexArrayValue)
+ return lexCommentStart
+ case r == comma:
+ return lx.errorf("unexpected comma")
+ case r == arrayEnd:
+ // NOTE(caleb): The spec isn't clear about whether you can have
+ // a trailing comma or not, so we'll allow it.
+ return lexArrayEnd
+ }
+
+ lx.backup()
+ lx.push(lexArrayValueEnd)
+ return lexValue
+}
+
+// lexArrayValueEnd consumes everything between the end of an array value and
+// the next value (or the end of the array): it ignores whitespace and newlines
+// and expects either a ',' or a ']'.
+func lexArrayValueEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r) || isNL(r):
+ return lexSkip(lx, lexArrayValueEnd)
+ case r == commentStart:
+ lx.push(lexArrayValueEnd)
+ return lexCommentStart
+ case r == comma:
+ lx.ignore()
+ return lexArrayValue // move on to the next value
+ case r == arrayEnd:
+ return lexArrayEnd
+ }
+ return lx.errorf(
+ "expected a comma or array terminator %q, but got %q instead",
+ arrayEnd, r,
+ )
+}
+
+// lexArrayEnd finishes the lexing of an array.
+// It assumes that a ']' has just been consumed.
+func lexArrayEnd(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemArrayEnd)
+ return lx.pop()
+}
+
+// lexInlineTableValue consumes one key/value pair in an inline table.
+// It assumes that '{' or ',' have already been consumed. Whitespace is ignored.
+func lexInlineTableValue(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexInlineTableValue)
+ case isNL(r):
+ return lx.errorf("newlines not allowed within inline tables")
+ case r == commentStart:
+ lx.push(lexInlineTableValue)
+ return lexCommentStart
+ case r == comma:
+ return lx.errorf("unexpected comma")
+ case r == inlineTableEnd:
+ return lexInlineTableEnd
+ }
+ lx.backup()
+ lx.push(lexInlineTableValueEnd)
+ return lexKeyStart
+}
+
+// lexInlineTableValueEnd consumes everything between the end of an inline table
+// key/value pair and the next pair (or the end of the table):
+// it ignores whitespace and expects either a ',' or a '}'.
+func lexInlineTableValueEnd(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case isWhitespace(r):
+ return lexSkip(lx, lexInlineTableValueEnd)
+ case isNL(r):
+ return lx.errorf("newlines not allowed within inline tables")
+ case r == commentStart:
+ lx.push(lexInlineTableValueEnd)
+ return lexCommentStart
+ case r == comma:
+ lx.ignore()
+ return lexInlineTableValue
+ case r == inlineTableEnd:
+ return lexInlineTableEnd
+ }
+ return lx.errorf("expected a comma or an inline table terminator %q, "+
+ "but got %q instead", inlineTableEnd, r)
+}
+
+// lexInlineTableEnd finishes the lexing of an inline table.
+// It assumes that a '}' has just been consumed.
+func lexInlineTableEnd(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemInlineTableEnd)
+ return lx.pop()
+}
+
+// lexString consumes the inner contents of a string. It assumes that the
+// beginning '"' has already been consumed and ignored.
+func lexString(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == eof:
+ return lx.errorf("unexpected EOF")
+ case isNL(r):
+ return lx.errorf("strings cannot contain newlines")
+ case r == '\\':
+ lx.push(lexString)
+ return lexStringEscape
+ case r == stringEnd:
+ lx.backup()
+ lx.emit(itemString)
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ return lexString
+}
+
+// lexMultilineString consumes the inner contents of a string. It assumes that
+// the beginning '"""' has already been consumed and ignored.
+func lexMultilineString(lx *lexer) stateFn {
+ switch lx.next() {
+ case eof:
+ return lx.errorf("unexpected EOF")
+ case '\\':
+ return lexMultilineStringEscape
+ case stringEnd:
+ if lx.accept(stringEnd) {
+ if lx.accept(stringEnd) {
+ lx.backup()
+ lx.backup()
+ lx.backup()
+ lx.emit(itemMultilineString)
+ lx.next()
+ lx.next()
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ lx.backup()
+ }
+ }
+ return lexMultilineString
+}
+
+// lexRawString consumes a raw string. Nothing can be escaped in such a string.
+// It assumes that the beginning "'" has already been consumed and ignored.
+func lexRawString(lx *lexer) stateFn {
+ r := lx.next()
+ switch {
+ case r == eof:
+ return lx.errorf("unexpected EOF")
+ case isNL(r):
+ return lx.errorf("strings cannot contain newlines")
+ case r == rawStringEnd:
+ lx.backup()
+ lx.emit(itemRawString)
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ return lexRawString
+}
+
+// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
+// a string. It assumes that the beginning "'''" has already been consumed and
+// ignored.
+func lexMultilineRawString(lx *lexer) stateFn {
+ switch lx.next() {
+ case eof:
+ return lx.errorf("unexpected EOF")
+ case rawStringEnd:
+ if lx.accept(rawStringEnd) {
+ if lx.accept(rawStringEnd) {
+ lx.backup()
+ lx.backup()
+ lx.backup()
+ lx.emit(itemRawMultilineString)
+ lx.next()
+ lx.next()
+ lx.next()
+ lx.ignore()
+ return lx.pop()
+ }
+ lx.backup()
+ }
+ }
+ return lexMultilineRawString
+}
+
+// lexMultilineStringEscape consumes an escaped character. It assumes that the
+// preceding '\\' has already been consumed.
+func lexMultilineStringEscape(lx *lexer) stateFn {
+ // Handle the special case first:
+ if isNL(lx.next()) {
+ return lexMultilineString
+ }
+ lx.backup()
+ lx.push(lexMultilineString)
+ return lexStringEscape(lx)
+}
+
+func lexStringEscape(lx *lexer) stateFn {
+ r := lx.next()
+ switch r {
+ case 'b':
+ fallthrough
+ case 't':
+ fallthrough
+ case 'n':
+ fallthrough
+ case 'f':
+ fallthrough
+ case 'r':
+ fallthrough
+ case '"':
+ fallthrough
+ case '\\':
+ return lx.pop()
+ case 'u':
+ return lexShortUnicodeEscape
+ case 'U':
+ return lexLongUnicodeEscape
+ }
+ return lx.errorf("invalid escape character %q; only the following "+
+ "escape characters are allowed: "+
+ `\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX`, r)
+}
+
+func lexShortUnicodeEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 4; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(`expected four hexadecimal digits after '\u', `+
+ "but got %q instead", lx.current())
+ }
+ }
+ return lx.pop()
+}
+
+func lexLongUnicodeEscape(lx *lexer) stateFn {
+ var r rune
+ for i := 0; i < 8; i++ {
+ r = lx.next()
+ if !isHexadecimal(r) {
+ return lx.errorf(`expected eight hexadecimal digits after '\U', `+
+ "but got %q instead", lx.current())
+ }
+ }
+ return lx.pop()
+}
+
+// lexNumberOrDateStart consumes either an integer, a float, or datetime.
+func lexNumberOrDateStart(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumberOrDate
+ }
+ switch r {
+ case '_':
+ return lexNumber
+ case 'e', 'E':
+ return lexFloat
+ case '.':
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+ return lx.errorf("expected a digit but got %q", r)
+}
+
+// lexNumberOrDate consumes either an integer, float or datetime.
+func lexNumberOrDate(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumberOrDate
+ }
+ switch r {
+ case '-':
+ return lexDatetime
+ case '_':
+ return lexNumber
+ case '.', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexDatetime consumes a Datetime, to a first approximation.
+// The parser validates that it matches one of the accepted formats.
+func lexDatetime(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexDatetime
+ }
+ switch r {
+ case '-', 'T', ':', '.', 'Z', '+':
+ return lexDatetime
+ }
+
+ lx.backup()
+ lx.emit(itemDatetime)
+ return lx.pop()
+}
+
+// lexNumberStart consumes either an integer or a float. It assumes that a sign
+// has already been read, but that *no* digits have been consumed.
+// lexNumberStart will move to the appropriate integer or float states.
+func lexNumberStart(lx *lexer) stateFn {
+ // We MUST see a digit. Even floats have to start with a digit.
+ r := lx.next()
+ if !isDigit(r) {
+ if r == '.' {
+ return lx.errorf("floats must start with a digit, not '.'")
+ }
+ return lx.errorf("expected a digit but got %q", r)
+ }
+ return lexNumber
+}
+
+// lexNumber consumes an integer or a float after seeing the first digit.
+func lexNumber(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexNumber
+ }
+ switch r {
+ case '_':
+ return lexNumber
+ case '.', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemInteger)
+ return lx.pop()
+}
+
+// lexFloat consumes the elements of a float. It allows any sequence of
+// float-like characters, so floats emitted by the lexer are only a first
+// approximation and must be validated by the parser.
+func lexFloat(lx *lexer) stateFn {
+ r := lx.next()
+ if isDigit(r) {
+ return lexFloat
+ }
+ switch r {
+ case '_', '.', '-', '+', 'e', 'E':
+ return lexFloat
+ }
+
+ lx.backup()
+ lx.emit(itemFloat)
+ return lx.pop()
+}
+
+// lexBool consumes a bool string: 'true' or 'false'.
+func lexBool(lx *lexer) stateFn {
+ var rs []rune
+ for {
+ r := lx.next()
+ if !unicode.IsLetter(r) {
+ lx.backup()
+ break
+ }
+ rs = append(rs, r)
+ }
+ s := string(rs)
+ switch s {
+ case "true", "false":
+ lx.emit(itemBool)
+ return lx.pop()
+ }
+ return lx.errorf("expected value but found %q instead", s)
+}
+
+// lexCommentStart begins the lexing of a comment. It will emit
+// itemCommentStart and consume no characters, passing control to lexComment.
+func lexCommentStart(lx *lexer) stateFn {
+ lx.ignore()
+ lx.emit(itemCommentStart)
+ return lexComment
+}
+
+// lexComment lexes an entire comment. It assumes that '#' has been consumed.
+// It will consume *up to* the first newline character, and pass control
+// back to the last state on the stack.
+func lexComment(lx *lexer) stateFn {
+ r := lx.peek()
+ if isNL(r) || r == eof {
+ lx.emit(itemText)
+ return lx.pop()
+ }
+ lx.next()
+ return lexComment
+}
+
+// lexSkip ignores all slurped input and moves on to the next state.
+func lexSkip(lx *lexer, nextState stateFn) stateFn {
+ return func(lx *lexer) stateFn {
+ lx.ignore()
+ return nextState
+ }
+}
+
+// isWhitespace returns true if `r` is a whitespace character according
+// to the spec.
+func isWhitespace(r rune) bool {
+ return r == '\t' || r == ' '
+}
+
+func isNL(r rune) bool {
+ return r == '\n' || r == '\r'
+}
+
+func isDigit(r rune) bool {
+ return r >= '0' && r <= '9'
+}
+
+func isHexadecimal(r rune) bool {
+ return (r >= '0' && r <= '9') ||
+ (r >= 'a' && r <= 'f') ||
+ (r >= 'A' && r <= 'F')
+}
+
+func isBareKeyChar(r rune) bool {
+ return (r >= 'A' && r <= 'Z') ||
+ (r >= 'a' && r <= 'z') ||
+ (r >= '0' && r <= '9') ||
+ r == '_' ||
+ r == '-'
+}
+
+func (itype itemType) String() string {
+ switch itype {
+ case itemError:
+ return "Error"
+ case itemNIL:
+ return "NIL"
+ case itemEOF:
+ return "EOF"
+ case itemText:
+ return "Text"
+ case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
+ return "String"
+ case itemBool:
+ return "Bool"
+ case itemInteger:
+ return "Integer"
+ case itemFloat:
+ return "Float"
+ case itemDatetime:
+ return "DateTime"
+ case itemTableStart:
+ return "TableStart"
+ case itemTableEnd:
+ return "TableEnd"
+ case itemKeyStart:
+ return "KeyStart"
+ case itemArray:
+ return "Array"
+ case itemArrayEnd:
+ return "ArrayEnd"
+ case itemCommentStart:
+ return "CommentStart"
+ }
+ panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
+}
+
+func (item item) String() string {
+ return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
+}
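The lexer above is unexported, so it cannot be driven from outside the package. As orientation only, here is a self-contained miniature of the same state-function pattern (a `stateFn` consumes input, emits items on a channel, and returns the next state); all names here are invented for illustration:

```go
package main

import "fmt"

// stateFn mirrors the pattern used by the TOML lexer above: a state is a
// function that consumes input and returns the next state (nil to stop).
type stateFn func(*miniLexer) stateFn

type miniLexer struct {
	input string
	pos   int
	items chan string
}

// lexWord emits runs of non-space characters; lexSpace skips spaces.
func lexWord(lx *miniLexer) stateFn {
	start := lx.pos
	for lx.pos < len(lx.input) && lx.input[lx.pos] != ' ' {
		lx.pos++
	}
	lx.items <- lx.input[start:lx.pos]
	if lx.pos >= len(lx.input) {
		close(lx.items)
		return nil
	}
	return lexSpace
}

func lexSpace(lx *miniLexer) stateFn {
	for lx.pos < len(lx.input) && lx.input[lx.pos] == ' ' {
		lx.pos++
	}
	return lexWord
}

func main() {
	lx := &miniLexer{input: "key = value", items: make(chan string, 8)}
	for state := stateFn(lexWord); state != nil; {
		state = state(lx)
	}
	for it := range lx.items {
		fmt.Printf("item: %q\n", it)
	}
}
```

The real lexer adds a state stack (`push`/`pop`) so value lexing can be reused inside arrays and inline tables, and it tracks rune widths so it can back up over multi-byte UTF-8.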
diff --git a/vendor/github.com/BurntSushi/toml/parse.go b/vendor/github.com/BurntSushi/toml/parse.go
new file mode 100644
index 0000000000000..50869ef9266e4
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/parse.go
@@ -0,0 +1,592 @@
+package toml
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+ "unicode"
+ "unicode/utf8"
+)
+
+type parser struct {
+ mapping map[string]interface{}
+ types map[string]tomlType
+ lx *lexer
+
+ // A list of keys in the order that they appear in the TOML data.
+ ordered []Key
+
+ // the full key for the current hash in scope
+ context Key
+
+ // the base key name for everything except hashes
+ currentKey string
+
+ // rough approximation of line number
+ approxLine int
+
+ // A map of 'key.group.names' to whether they were created implicitly.
+ implicits map[string]bool
+}
+
+type parseError string
+
+func (pe parseError) Error() string {
+ return string(pe)
+}
+
+func parse(data string) (p *parser, err error) {
+ defer func() {
+ if r := recover(); r != nil {
+ var ok bool
+ if err, ok = r.(parseError); ok {
+ return
+ }
+ panic(r)
+ }
+ }()
+
+ p = &parser{
+ mapping: make(map[string]interface{}),
+ types: make(map[string]tomlType),
+ lx: lex(data),
+ ordered: make([]Key, 0),
+ implicits: make(map[string]bool),
+ }
+ for {
+ item := p.next()
+ if item.typ == itemEOF {
+ break
+ }
+ p.topLevel(item)
+ }
+
+ return p, nil
+}
+
+func (p *parser) panicf(format string, v ...interface{}) {
+ msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
+ p.approxLine, p.current(), fmt.Sprintf(format, v...))
+ panic(parseError(msg))
+}
+
+func (p *parser) next() item {
+ it := p.lx.nextItem()
+ if it.typ == itemError {
+ p.panicf("%s", it.val)
+ }
+ return it
+}
+
+func (p *parser) bug(format string, v ...interface{}) {
+ panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
+}
+
+func (p *parser) expect(typ itemType) item {
+ it := p.next()
+ p.assertEqual(typ, it.typ)
+ return it
+}
+
+func (p *parser) assertEqual(expected, got itemType) {
+ if expected != got {
+ p.bug("Expected '%s' but got '%s'.", expected, got)
+ }
+}
+
+func (p *parser) topLevel(item item) {
+ switch item.typ {
+ case itemCommentStart:
+ p.approxLine = item.line
+ p.expect(itemText)
+ case itemTableStart:
+ kg := p.next()
+ p.approxLine = kg.line
+
+ var key Key
+ for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
+ key = append(key, p.keyString(kg))
+ }
+ p.assertEqual(itemTableEnd, kg.typ)
+
+ p.establishContext(key, false)
+ p.setType("", tomlHash)
+ p.ordered = append(p.ordered, key)
+ case itemArrayTableStart:
+ kg := p.next()
+ p.approxLine = kg.line
+
+ var key Key
+ for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
+ key = append(key, p.keyString(kg))
+ }
+ p.assertEqual(itemArrayTableEnd, kg.typ)
+
+ p.establishContext(key, true)
+ p.setType("", tomlArrayHash)
+ p.ordered = append(p.ordered, key)
+ case itemKeyStart:
+ kname := p.next()
+ p.approxLine = kname.line
+ p.currentKey = p.keyString(kname)
+
+ val, typ := p.value(p.next())
+ p.setValue(p.currentKey, val)
+ p.setType(p.currentKey, typ)
+ p.ordered = append(p.ordered, p.context.add(p.currentKey))
+ p.currentKey = ""
+ default:
+ p.bug("Unexpected type at top level: %s", item.typ)
+ }
+}
+
+// Gets a string for a key (or part of a key in a table name).
+func (p *parser) keyString(it item) string {
+ switch it.typ {
+ case itemText:
+ return it.val
+ case itemString, itemMultilineString,
+ itemRawString, itemRawMultilineString:
+ s, _ := p.value(it)
+ return s.(string)
+ default:
+ p.bug("Unexpected key type: %s", it.typ)
+ panic("unreachable")
+ }
+}
+
+// value translates an expected value from the lexer into a Go value wrapped
+// as an empty interface.
+func (p *parser) value(it item) (interface{}, tomlType) {
+ switch it.typ {
+ case itemString:
+ return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
+ case itemMultilineString:
+ trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
+ return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
+ case itemRawString:
+ return it.val, p.typeOfPrimitive(it)
+ case itemRawMultilineString:
+ return stripFirstNewline(it.val), p.typeOfPrimitive(it)
+ case itemBool:
+ switch it.val {
+ case "true":
+ return true, p.typeOfPrimitive(it)
+ case "false":
+ return false, p.typeOfPrimitive(it)
+ }
+ p.bug("Expected boolean value, but got '%s'.", it.val)
+ case itemInteger:
+ if !numUnderscoresOK(it.val) {
+ p.panicf("Invalid integer %q: underscores must be surrounded by digits",
+ it.val)
+ }
+ val := strings.Replace(it.val, "_", "", -1)
+ num, err := strconv.ParseInt(val, 10, 64)
+ if err != nil {
+ // Distinguish integer values. Normally, it'd be a bug if the lexer
+ // provides an invalid integer, but it's possible that the number is
+ // out of range of valid values (which the lexer cannot determine).
+ // So mark the former as a bug but the latter as a legitimate user
+ // error.
+ if e, ok := err.(*strconv.NumError); ok &&
+ e.Err == strconv.ErrRange {
+
+ p.panicf("Integer '%s' is out of the range of 64-bit "+
+ "signed integers.", it.val)
+ } else {
+ p.bug("Expected integer value, but got '%s'.", it.val)
+ }
+ }
+ return num, p.typeOfPrimitive(it)
+ case itemFloat:
+ parts := strings.FieldsFunc(it.val, func(r rune) bool {
+ switch r {
+ case '.', 'e', 'E':
+ return true
+ }
+ return false
+ })
+ for _, part := range parts {
+ if !numUnderscoresOK(part) {
+ p.panicf("Invalid float %q: underscores must be "+
+ "surrounded by digits", it.val)
+ }
+ }
+ if !numPeriodsOK(it.val) {
+ // As a special case, numbers like '123.' or '1.e2',
+ // which are valid as far as Go/strconv are concerned,
+ // must be rejected because TOML says that a fractional
+ // part consists of '.' followed by 1+ digits.
+ p.panicf("Invalid float %q: '.' must be followed "+
+ "by one or more digits", it.val)
+ }
+ val := strings.Replace(it.val, "_", "", -1)
+ num, err := strconv.ParseFloat(val, 64)
+ if err != nil {
+ if e, ok := err.(*strconv.NumError); ok &&
+ e.Err == strconv.ErrRange {
+
+ p.panicf("Float '%s' is out of the range of 64-bit "+
+ "IEEE-754 floating-point numbers.", it.val)
+ } else {
+ p.panicf("Invalid float value: %q", it.val)
+ }
+ }
+ return num, p.typeOfPrimitive(it)
+ case itemDatetime:
+ var t time.Time
+ var ok bool
+ var err error
+ for _, format := range []string{
+ "2006-01-02T15:04:05Z07:00",
+ "2006-01-02T15:04:05",
+ "2006-01-02",
+ } {
+ t, err = time.ParseInLocation(format, it.val, time.Local)
+ if err == nil {
+ ok = true
+ break
+ }
+ }
+ if !ok {
+ p.panicf("Invalid TOML Datetime: %q.", it.val)
+ }
+ return t, p.typeOfPrimitive(it)
+ case itemArray:
+ array := make([]interface{}, 0)
+ types := make([]tomlType, 0)
+
+ for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
+ if it.typ == itemCommentStart {
+ p.expect(itemText)
+ continue
+ }
+
+ val, typ := p.value(it)
+ array = append(array, val)
+ types = append(types, typ)
+ }
+ return array, p.typeOfArray(types)
+ case itemInlineTableStart:
+ var (
+ hash = make(map[string]interface{})
+ outerContext = p.context
+ outerKey = p.currentKey
+ )
+
+ p.context = append(p.context, p.currentKey)
+ p.currentKey = ""
+ for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
+ if it.typ != itemKeyStart {
+ p.bug("Expected key start but instead found %q, around line %d",
+ it.val, p.approxLine)
+ }
+ if it.typ == itemCommentStart {
+ p.expect(itemText)
+ continue
+ }
+
+ // retrieve key
+ k := p.next()
+ p.approxLine = k.line
+ kname := p.keyString(k)
+
+ // retrieve value
+ p.currentKey = kname
+ val, typ := p.value(p.next())
+ // make sure we keep metadata up to date
+ p.setType(kname, typ)
+ p.ordered = append(p.ordered, p.context.add(p.currentKey))
+ hash[kname] = val
+ }
+ p.context = outerContext
+ p.currentKey = outerKey
+ return hash, tomlHash
+ }
+ p.bug("Unexpected value type: %s", it.typ)
+ panic("unreachable")
+}
+
+// numUnderscoresOK checks whether each underscore in s is surrounded by
+// characters that are not underscores.
+func numUnderscoresOK(s string) bool {
+ accept := false
+ for _, r := range s {
+ if r == '_' {
+ if !accept {
+ return false
+ }
+ accept = false
+ continue
+ }
+ accept = true
+ }
+ return accept
+}
+
+// numPeriodsOK checks whether every period in s is followed by a digit.
+func numPeriodsOK(s string) bool {
+ period := false
+ for _, r := range s {
+ if period && !isDigit(r) {
+ return false
+ }
+ period = r == '.'
+ }
+ return !period
+}
+
+// establishContext sets the current context of the parser,
+// where the context is either a hash or an array of hashes. Which one is
+// set depends on the value of the `array` parameter.
+//
+// Establishing the context also makes sure that the key isn't a duplicate, and
+// will create implicit hashes automatically.
+func (p *parser) establishContext(key Key, array bool) {
+ var ok bool
+
+ // Always start at the top level and drill down for our context.
+ hashContext := p.mapping
+ keyContext := make(Key, 0)
+
+ // We only need implicit hashes for key[0:-1]
+ for _, k := range key[0 : len(key)-1] {
+ _, ok = hashContext[k]
+ keyContext = append(keyContext, k)
+
+ // No key? Make an implicit hash and move on.
+ if !ok {
+ p.addImplicit(keyContext)
+ hashContext[k] = make(map[string]interface{})
+ }
+
+ // If the hash context is actually an array of tables, then set
+ // the hash context to the last element in that array.
+ //
+ // Otherwise, it better be a table, since this MUST be a key group (by
+ // virtue of it not being the last element in a key).
+ switch t := hashContext[k].(type) {
+ case []map[string]interface{}:
+ hashContext = t[len(t)-1]
+ case map[string]interface{}:
+ hashContext = t
+ default:
+ p.panicf("Key '%s' was already created as a hash.", keyContext)
+ }
+ }
+
+ p.context = keyContext
+ if array {
+ // If this is the first element for this array, then allocate a new
+ // list of tables for it.
+ k := key[len(key)-1]
+ if _, ok := hashContext[k]; !ok {
+ hashContext[k] = make([]map[string]interface{}, 0, 5)
+ }
+
+ // Add a new table. But make sure the key hasn't already been used
+ // for something else.
+ if hash, ok := hashContext[k].([]map[string]interface{}); ok {
+ hashContext[k] = append(hash, make(map[string]interface{}))
+ } else {
+ p.panicf("Key '%s' was already created and cannot be used as "+
+ "an array.", keyContext)
+ }
+ } else {
+ p.setValue(key[len(key)-1], make(map[string]interface{}))
+ }
+ p.context = append(p.context, key[len(key)-1])
+}
+
+// setValue sets the given key to the given value in the current context.
+// It will make sure that the key hasn't already been defined, and will
+// account for implicit key groups.
+func (p *parser) setValue(key string, value interface{}) {
+ var tmpHash interface{}
+ var ok bool
+
+ hash := p.mapping
+ keyContext := make(Key, 0)
+ for _, k := range p.context {
+ keyContext = append(keyContext, k)
+ if tmpHash, ok = hash[k]; !ok {
+ p.bug("Context for key '%s' has not been established.", keyContext)
+ }
+ switch t := tmpHash.(type) {
+ case []map[string]interface{}:
+ // The context is a table of hashes. Pick the most recent table
+ // defined as the current hash.
+ hash = t[len(t)-1]
+ case map[string]interface{}:
+ hash = t
+ default:
+ p.bug("Expected hash to have type 'map[string]interface{}', but "+
+ "it has '%T' instead.", tmpHash)
+ }
+ }
+ keyContext = append(keyContext, key)
+
+ if _, ok := hash[key]; ok {
+ // Typically, if the given key has already been set, then we have
+ // to raise an error since duplicate keys are disallowed. However,
+ // it's possible that a key was previously defined implicitly. In this
+ // case, it is allowed to be redefined concretely. (See the
+ // `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
+ //
+ // But we have to make sure to stop marking it as an implicit. (So that
+ // another redefinition provokes an error.)
+ //
+ // Note that since it has already been defined (as a hash), we don't
+ // want to overwrite it. So our business is done.
+ if p.isImplicit(keyContext) {
+ p.removeImplicit(keyContext)
+ return
+ }
+
+ // Otherwise, we have a concrete key trying to override a previous
+ // key, which is *always* wrong.
+ p.panicf("Key '%s' has already been defined.", keyContext)
+ }
+ hash[key] = value
+}
+
+// setType sets the type of a particular value at a given key.
+// It should be called immediately AFTER setValue.
+//
+// Note that if `key` is empty, then the type given will be applied to the
+// current context (which is either a table or an array of tables).
+func (p *parser) setType(key string, typ tomlType) {
+ keyContext := make(Key, 0, len(p.context)+1)
+ for _, k := range p.context {
+ keyContext = append(keyContext, k)
+ }
+ if len(key) > 0 { // allow type setting for hashes
+ keyContext = append(keyContext, key)
+ }
+ p.types[keyContext.String()] = typ
+}
+
+// addImplicit sets the given Key as having been created implicitly.
+func (p *parser) addImplicit(key Key) {
+ p.implicits[key.String()] = true
+}
+
+// removeImplicit stops tagging the given key as having been implicitly
+// created.
+func (p *parser) removeImplicit(key Key) {
+ p.implicits[key.String()] = false
+}
+
+// isImplicit returns true if the key group pointed to by the key was created
+// implicitly.
+func (p *parser) isImplicit(key Key) bool {
+ return p.implicits[key.String()]
+}
+
+// current returns the full key name of the current context.
+func (p *parser) current() string {
+ if len(p.currentKey) == 0 {
+ return p.context.String()
+ }
+ if len(p.context) == 0 {
+ return p.currentKey
+ }
+ return fmt.Sprintf("%s.%s", p.context, p.currentKey)
+}
+
+func stripFirstNewline(s string) string {
+ if len(s) == 0 || s[0] != '\n' {
+ return s
+ }
+ return s[1:]
+}
+
+func stripEscapedWhitespace(s string) string {
+ esc := strings.Split(s, "\\\n")
+ if len(esc) > 1 {
+ for i := 1; i < len(esc); i++ {
+ esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
+ }
+ }
+ return strings.Join(esc, "")
+}
+
+func (p *parser) replaceEscapes(str string) string {
+ var replaced []rune
+ s := []byte(str)
+ r := 0
+ for r < len(s) {
+ if s[r] != '\\' {
+ c, size := utf8.DecodeRune(s[r:])
+ r += size
+ replaced = append(replaced, c)
+ continue
+ }
+ r += 1
+ if r >= len(s) {
+ p.bug("Escape sequence at end of string.")
+ return ""
+ }
+ switch s[r] {
+ default:
+ p.bug("Expected valid escape code after \\, but got %q.", s[r])
+ return ""
+ case 'b':
+ replaced = append(replaced, rune(0x0008))
+ r += 1
+ case 't':
+ replaced = append(replaced, rune(0x0009))
+ r += 1
+ case 'n':
+ replaced = append(replaced, rune(0x000A))
+ r += 1
+ case 'f':
+ replaced = append(replaced, rune(0x000C))
+ r += 1
+ case 'r':
+ replaced = append(replaced, rune(0x000D))
+ r += 1
+ case '"':
+ replaced = append(replaced, rune(0x0022))
+ r += 1
+ case '\\':
+ replaced = append(replaced, rune(0x005C))
+ r += 1
+ case 'u':
+ // At this point, we know we have a Unicode escape of the form
+ // `uXXXX` at [r, r+5). (Because the lexer guarantees this
+ // for us.)
+ escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
+ replaced = append(replaced, escaped)
+ r += 5
+ case 'U':
+ // At this point, we know we have a Unicode escape of the form
+ // `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
+ // for us.)
+ escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
+ replaced = append(replaced, escaped)
+ r += 9
+ }
+ }
+ return string(replaced)
+}
+
+func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
+ s := string(bs)
+ hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
+ if err != nil {
+ p.bug("Could not parse '%s' as a hexadecimal number, but the "+
+ "lexer claims it's OK: %s", s, err)
+ }
+ if !utf8.ValidRune(rune(hex)) {
+ p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
+ }
+ return rune(hex)
+}
+
+func isStringType(ty itemType) bool {
+ return ty == itemString || ty == itemMultilineString ||
+ ty == itemRawString || ty == itemRawMultilineString
+}
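End to end, the parser above is reached through the package's `Decode` (defined in decode.go, outside this hunk). A minimal sketch, assuming that entry point and the import path:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml" // assumed import path
)

type server struct {
	Host  string `toml:"host"`
	Ports []int  `toml:"ports"`
}

func main() {
	const doc = `
# comments, tables, and arrays all flow through the lexer/parser above
host = "127.0.0.1"
ports = [8001, 8002]
`
	var s server
	if _, err := toml.Decode(doc, &s); err != nil {
		log.Fatal(err) // parse errors carry the approximate line number
	}
	fmt.Printf("%+v\n", s) // {Host:127.0.0.1 Ports:[8001 8002]}
}
```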
diff --git a/vendor/github.com/BurntSushi/toml/session.vim b/vendor/github.com/BurntSushi/toml/session.vim
new file mode 100644
index 0000000000000..562164be06030
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/session.vim
@@ -0,0 +1 @@
+au BufWritePost *.go silent!make tags > /dev/null 2>&1
diff --git a/vendor/github.com/BurntSushi/toml/type_check.go b/vendor/github.com/BurntSushi/toml/type_check.go
new file mode 100644
index 0000000000000..c73f8afc1a6db
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/type_check.go
@@ -0,0 +1,91 @@
+package toml
+
+// tomlType represents any Go type that corresponds to a TOML type.
+// While the first draft of the TOML spec has a simplistic type system that
+// probably doesn't need this level of sophistication, we seem to be militating
+// toward adding real composite types.
+type tomlType interface {
+ typeString() string
+}
+
+// typeEqual accepts any two types and returns true if they are equal.
+func typeEqual(t1, t2 tomlType) bool {
+ if t1 == nil || t2 == nil {
+ return false
+ }
+ return t1.typeString() == t2.typeString()
+}
+
+func typeIsHash(t tomlType) bool {
+ return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
+}
+
+type tomlBaseType string
+
+func (btype tomlBaseType) typeString() string {
+ return string(btype)
+}
+
+func (btype tomlBaseType) String() string {
+ return btype.typeString()
+}
+
+var (
+ tomlInteger tomlBaseType = "Integer"
+ tomlFloat tomlBaseType = "Float"
+ tomlDatetime tomlBaseType = "Datetime"
+ tomlString tomlBaseType = "String"
+ tomlBool tomlBaseType = "Bool"
+ tomlArray tomlBaseType = "Array"
+ tomlHash tomlBaseType = "Hash"
+ tomlArrayHash tomlBaseType = "ArrayHash"
+)
+
+// typeOfPrimitive returns a tomlType of any primitive value in TOML.
+// Primitive values are: Integer, Float, Datetime, String and Bool.
+//
+// Passing a lexer item other than the following will cause a BUG message
+// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
+func (p *parser) typeOfPrimitive(lexItem item) tomlType {
+ switch lexItem.typ {
+ case itemInteger:
+ return tomlInteger
+ case itemFloat:
+ return tomlFloat
+ case itemDatetime:
+ return tomlDatetime
+ case itemString:
+ return tomlString
+ case itemMultilineString:
+ return tomlString
+ case itemRawString:
+ return tomlString
+ case itemRawMultilineString:
+ return tomlString
+ case itemBool:
+ return tomlBool
+ }
+ p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
+ panic("unreachable")
+}
+
+// typeOfArray returns a tomlType for an array given a list of types of its
+// values.
+//
+// In the current spec, if an array is homogeneous, then its type is always
+// "Array". If the array is not homogeneous, an error is generated.
+func (p *parser) typeOfArray(types []tomlType) tomlType {
+ // Empty arrays are cool.
+ if len(types) == 0 {
+ return tomlArray
+ }
+
+ theType := types[0]
+ for _, t := range types[1:] {
+ if !typeEqual(theType, t) {
+ p.panicf("Array contains values of type '%s' and '%s', but "+
+ "arrays must be homogeneous.", theType, t)
+ }
+ }
+ return tomlArray
+}
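`typeOfArray`'s homogeneity check is what users see when a mixed array is rejected; a small sketch of the observable behavior (same assumptions as above about `Decode` and the import path):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml" // assumed import path
)

func main() {
	var v map[string]interface{}
	// An Integer mixed with a String in one array trips typeOfArray's
	// homogeneity check, which surfaces as a parse error from Decode.
	_, err := toml.Decode(`vals = [1, "two"]`, &v)
	fmt.Println(err) // ... arrays must be homogeneous.
}
```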
diff --git a/vendor/github.com/BurntSushi/toml/type_fields.go b/vendor/github.com/BurntSushi/toml/type_fields.go
new file mode 100644
index 0000000000000..608997c22f68c
--- /dev/null
+++ b/vendor/github.com/BurntSushi/toml/type_fields.go
@@ -0,0 +1,242 @@
+package toml
+
+// Struct field handling is adapted from code in encoding/json:
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the Go distribution.
+
+import (
+ "reflect"
+ "sort"
+ "sync"
+)
+
+// A field represents a single field found in a struct.
+type field struct {
+ name string // the name of the field (`toml` tag included)
+ tag bool // whether field has a `toml` tag
+ index []int // represents the depth of an anonymous field
+ typ reflect.Type // the type of the field
+}
+
+// byName sorts field by name, breaking ties with depth,
+// then breaking ties with "name came from toml tag", then
+// breaking ties with index sequence.
+type byName []field
+
+func (x byName) Len() int { return len(x) }
+
+func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+
+func (x byName) Less(i, j int) bool {
+ if x[i].name != x[j].name {
+ return x[i].name < x[j].name
+ }
+ if len(x[i].index) != len(x[j].index) {
+ return len(x[i].index) < len(x[j].index)
+ }
+ if x[i].tag != x[j].tag {
+ return x[i].tag
+ }
+ return byIndex(x).Less(i, j)
+}
+
+// byIndex sorts field by index sequence.
+type byIndex []field
+
+func (x byIndex) Len() int { return len(x) }
+
+func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+
+func (x byIndex) Less(i, j int) bool {
+ for k, xik := range x[i].index {
+ if k >= len(x[j].index) {
+ return false
+ }
+ if xik != x[j].index[k] {
+ return xik < x[j].index[k]
+ }
+ }
+ return len(x[i].index) < len(x[j].index)
+}
+
+// typeFields returns a list of fields that TOML should recognize for the given
+// type. The algorithm is breadth-first search over the set of structs to
+// include - the top struct and then any reachable anonymous structs.
+func typeFields(t reflect.Type) []field {
+ // Anonymous fields to explore at the current level and the next.
+ current := []field{}
+ next := []field{{typ: t}}
+
+ // Count of queued names for current level and the next.
+ count := map[reflect.Type]int{}
+ nextCount := map[reflect.Type]int{}
+
+ // Types already visited at an earlier level.
+ visited := map[reflect.Type]bool{}
+
+ // Fields found.
+ var fields []field
+
+ for len(next) > 0 {
+ current, next = next, current[:0]
+ count, nextCount = nextCount, map[reflect.Type]int{}
+
+ for _, f := range current {
+ if visited[f.typ] {
+ continue
+ }
+ visited[f.typ] = true
+
+ // Scan f.typ for fields to include.
+ for i := 0; i < f.typ.NumField(); i++ {
+ sf := f.typ.Field(i)
+ if sf.PkgPath != "" && !sf.Anonymous { // unexported
+ continue
+ }
+ opts := getOptions(sf.Tag)
+ if opts.skip {
+ continue
+ }
+ index := make([]int, len(f.index)+1)
+ copy(index, f.index)
+ index[len(f.index)] = i
+
+ ft := sf.Type
+ if ft.Name() == "" && ft.Kind() == reflect.Ptr {
+ // Follow pointer.
+ ft = ft.Elem()
+ }
+
+ // Record found field and index sequence.
+ if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
+ tagged := opts.name != ""
+ name := opts.name
+ if name == "" {
+ name = sf.Name
+ }
+ fields = append(fields, field{name, tagged, index, ft})
+ if count[f.typ] > 1 {
+ // If there were multiple instances, add a second,
+ // so that the annihilation code will see a duplicate.
+ // It only cares about the distinction between 1 or 2,
+ // so don't bother generating any more copies.
+ fields = append(fields, fields[len(fields)-1])
+ }
+ continue
+ }
+
+ // Record new anonymous struct to explore in next round.
+ nextCount[ft]++
+ if nextCount[ft] == 1 {
+ f := field{name: ft.Name(), index: index, typ: ft}
+ next = append(next, f)
+ }
+ }
+ }
+ }
+
+ sort.Sort(byName(fields))
+
+ // Delete all fields that are hidden by the Go rules for embedded fields,
+ // except that fields with TOML tags are promoted.
+
+ // The fields are sorted in primary order of name, secondary order
+ // of field index length. Loop over names; for each name, delete
+ // hidden fields by choosing the one dominant field that survives.
+ out := fields[:0]
+ for advance, i := 0, 0; i < len(fields); i += advance {
+ // One iteration per name.
+ // Find the sequence of fields with the name of this first field.
+ fi := fields[i]
+ name := fi.name
+ for advance = 1; i+advance < len(fields); advance++ {
+ fj := fields[i+advance]
+ if fj.name != name {
+ break
+ }
+ }
+ if advance == 1 { // Only one field with this name
+ out = append(out, fi)
+ continue
+ }
+ dominant, ok := dominantField(fields[i : i+advance])
+ if ok {
+ out = append(out, dominant)
+ }
+ }
+
+ fields = out
+ sort.Sort(byIndex(fields))
+
+ return fields
+}
+
+// dominantField looks through the fields, all of which are known to
+// have the same name, to find the single field that dominates the
+// others using Go's embedding rules, modified by the presence of
+// TOML tags. If there are multiple top-level fields, the boolean
+// will be false: This condition is an error in Go and we skip all
+// the fields.
+func dominantField(fields []field) (field, bool) {
+ // The fields are sorted in increasing index-length order. The winner
+ // must therefore be one with the shortest index length. Drop all
+ // longer entries, which is easy: just truncate the slice.
+ length := len(fields[0].index)
+ tagged := -1 // Index of first tagged field.
+ for i, f := range fields {
+ if len(f.index) > length {
+ fields = fields[:i]
+ break
+ }
+ if f.tag {
+ if tagged >= 0 {
+ // Multiple tagged fields at the same level: conflict.
+ // Return no field.
+ return field{}, false
+ }
+ tagged = i
+ }
+ }
+ if tagged >= 0 {
+ return fields[tagged], true
+ }
+ // All remaining fields have the same length. If there's more than one,
+ // we have a conflict (two fields named "X" at the same level) and we
+ // return no field.
+ if len(fields) > 1 {
+ return field{}, false
+ }
+ return fields[0], true
+}
+
+var fieldCache struct {
+ sync.RWMutex
+ m map[reflect.Type][]field
+}
+
+// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
+func cachedTypeFields(t reflect.Type) []field {
+ fieldCache.RLock()
+ f := fieldCache.m[t]
+ fieldCache.RUnlock()
+ if f != nil {
+ return f
+ }
+
+ // Compute fields without lock.
+ // Might duplicate effort but won't hold other computations back.
+ f = typeFields(t)
+ if f == nil {
+ f = []field{}
+ }
+
+ fieldCache.Lock()
+ if fieldCache.m == nil {
+ fieldCache.m = map[reflect.Type][]field{}
+ }
+ fieldCache.m[t] = f
+ fieldCache.Unlock()
+ return f
+}
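
A hedged sketch of the dominance rules above as seen through `toml.Decode` (struct names are illustrative): the field with the shorter index sequence wins, and a `toml` tag would similarly promote a field over an untagged sibling at the same depth:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Inner struct {
	Port int // deeper (index length 2): hidden by the outer field
}

type Config struct {
	Inner
	Port int // shallower (index length 1): dominates Inner.Port
}

func main() {
	var c Config
	if _, err := toml.Decode(`port = 8080`, &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Port, c.Inner.Port) // 8080 0
}
```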
diff --git a/vendor/github.com/alecthomas/units/bytes.go b/vendor/github.com/alecthomas/units/bytes.go
index eaadeb8005a58..61d0ca479abd4 100644
--- a/vendor/github.com/alecthomas/units/bytes.go
+++ b/vendor/github.com/alecthomas/units/bytes.go
@@ -27,6 +27,7 @@ var (
// ParseBase2Bytes supports both iB and B in base-2 multipliers. That is, KB
// and KiB are both 1024.
+// However "kB", which is the correct SI spelling for 1000 bytes, is rejected.
func ParseBase2Bytes(s string) (Base2Bytes, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
@@ -68,12 +69,13 @@ func ParseMetricBytes(s string) (MetricBytes, error) {
return MetricBytes(n), err
}
+// TODO: represents 1000B as uppercase "KB", while the SI standard requires "kB".
func (m MetricBytes) String() string {
return ToString(int64(m), 1000, "B", "B")
}
// ParseStrictBytes supports both iB and B suffixes for base 2 and metric,
-// respectively. That is, KiB represents 1024 and KB represents 1000.
+// respectively. That is, KiB represents 1024 and kB, KB represent 1000.
func ParseStrictBytes(s string) (int64, error) {
n, err := ParseUnit(s, bytesUnitMap)
if err != nil {
diff --git a/vendor/github.com/alecthomas/units/go.mod b/vendor/github.com/alecthomas/units/go.mod
index f5721732748c4..c7fb91f2b27d8 100644
--- a/vendor/github.com/alecthomas/units/go.mod
+++ b/vendor/github.com/alecthomas/units/go.mod
@@ -1 +1,3 @@
module github.com/alecthomas/units
+
+require github.com/stretchr/testify v1.4.0
diff --git a/vendor/github.com/alecthomas/units/go.sum b/vendor/github.com/alecthomas/units/go.sum
new file mode 100644
index 0000000000000..8fdee5854f19a
--- /dev/null
+++ b/vendor/github.com/alecthomas/units/go.sum
@@ -0,0 +1,11 @@
+github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
diff --git a/vendor/github.com/alecthomas/units/si.go b/vendor/github.com/alecthomas/units/si.go
index 8234a9d52cb3b..99b2fa4fcb02c 100644
--- a/vendor/github.com/alecthomas/units/si.go
+++ b/vendor/github.com/alecthomas/units/si.go
@@ -14,13 +14,37 @@ const (
)
func MakeUnitMap(suffix, shortSuffix string, scale int64) map[string]float64 {
- return map[string]float64{
- shortSuffix: 1,
- "K" + suffix: float64(scale),
+ res := map[string]float64{
+ shortSuffix: 1,
+ // see below for "k" / "K"
"M" + suffix: float64(scale * scale),
"G" + suffix: float64(scale * scale * scale),
"T" + suffix: float64(scale * scale * scale * scale),
"P" + suffix: float64(scale * scale * scale * scale * scale),
"E" + suffix: float64(scale * scale * scale * scale * scale * scale),
}
+
+ // Standard SI prefixes use lowercase "k" for kilo = 1000.
+ // For compatibility, and to be fool-proof, we accept both "k" and "K" in metric mode.
+ //
+ // However, official binary prefixes are always capitalized - "KiB" -
+ // and we specifically never parse "kB" as 1024B because:
+ //
+	// (1) people pedantic enough to use lowercase according to SI are unlikely to abuse "k" to mean 1024 :-)
+ //
+ // (2) Use of capital K for 1024 was an informal tradition predating IEC prefixes:
+ // "The binary meaning of the kilobyte for 1024 bytes typically uses the symbol KB, with an
+ // uppercase letter K."
+ // -- https://en.wikipedia.org/wiki/Kilobyte#Base_2_(1024_bytes)
+ // "Capitalization of the letter K became the de facto standard for binary notation, although this
+ // could not be extended to higher powers, and use of the lowercase k did persist.[13][14][15]"
+ // -- https://en.wikipedia.org/wiki/Binary_prefix#History
+ // See also the extensive https://en.wikipedia.org/wiki/Timeline_of_binary_prefixes.
+ if scale == 1024 {
+ res["K"+suffix] = float64(scale)
+ } else {
+ res["k"+suffix] = float64(scale)
+ res["K"+suffix] = float64(scale)
+ }
+ return res
}
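
A small usage sketch of the parsing rules above, using the package's exported parse functions shown in this diff:

```go
package main

import (
	"fmt"

	"github.com/alecthomas/units"
)

func main() {
	b, err := units.ParseBase2Bytes("1KiB") // 1024
	if err != nil {
		panic(err)
	}
	fmt.Println(int64(b))

	n, err := units.ParseStrictBytes("1kB") // 1000: lowercase SI kilo accepted in metric mode
	if err != nil {
		panic(err)
	}
	fmt.Println(n)

	// "kB" is never parsed as 1024 in base-2 mode.
	if _, err := units.ParseBase2Bytes("1kB"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```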
diff --git a/vendor/github.com/armon/go-metrics/go.mod b/vendor/github.com/armon/go-metrics/go.mod
index 88e1e98fbf4fe..5df13430f80fb 100644
--- a/vendor/github.com/armon/go-metrics/go.mod
+++ b/vendor/github.com/armon/go-metrics/go.mod
@@ -3,14 +3,17 @@ module github.com/armon/go-metrics
go 1.12
require (
- github.com/DataDog/datadog-go v2.2.0+incompatible
+ github.com/DataDog/datadog-go v3.2.0+incompatible
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible
github.com/circonus-labs/circonusllhist v0.1.3 // indirect
+ github.com/golang/protobuf v1.2.0
github.com/hashicorp/go-immutable-radix v1.0.0
github.com/hashicorp/go-retryablehttp v0.5.3 // indirect
github.com/pascaldekloe/goe v0.1.0
github.com/pkg/errors v0.8.1 // indirect
github.com/prometheus/client_golang v0.9.2
+ github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910
+ github.com/prometheus/common v0.0.0-20181126121408-4724e9255275
github.com/stretchr/testify v1.3.0 // indirect
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926 // indirect
)
diff --git a/vendor/github.com/armon/go-metrics/go.sum b/vendor/github.com/armon/go-metrics/go.sum
index 5ffd8329affa6..0c4c45c915ebb 100644
--- a/vendor/github.com/armon/go-metrics/go.sum
+++ b/vendor/github.com/armon/go-metrics/go.sum
@@ -1,5 +1,5 @@
-github.com/DataDog/datadog-go v2.2.0+incompatible h1:V5BKkxACZLjzHjSgBbr2gvLA2Ae49yhc6CSY7MLy5k4=
-github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
+github.com/DataDog/datadog-go v3.2.0+incompatible h1:qSG2N4FghB1He/r2mFrWKCaL7dXCilEuNEeAn20fdD4=
+github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973 h1:xJ4a3vCFaGF/jqvzLMYoU8P317H5OQ+Via4RmuPwCS0=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible h1:C29Ae4G5GtYyYMm1aztcyj/J5ckgJm2zwdDajFbx1NY=
@@ -36,6 +36,7 @@ github.com/prometheus/common v0.0.0-20181126121408-4724e9255275 h1:PnBWHBf+6L0jO
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a h1:9a8MnZMP0X2nLJdBg+pBmGgkJlSaKC2KaQmTCk1XDtE=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
+github.com/stretchr/objx v0.1.0 h1:4G4v2dO3VZwixGIRoQ5Lfboy6nUhCyYzaqnIAPPhYs4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
diff --git a/vendor/github.com/armon/go-metrics/prometheus/prometheus.go b/vendor/github.com/armon/go-metrics/prometheus/prometheus.go
index 9b339be3aaa1d..f648c6063922c 100644
--- a/vendor/github.com/armon/go-metrics/prometheus/prometheus.go
+++ b/vendor/github.com/armon/go-metrics/prometheus/prometheus.go
@@ -4,6 +4,7 @@ package prometheus
import (
"fmt"
+ "log"
"strings"
"sync"
"time"
@@ -12,6 +13,7 @@ import (
"github.com/armon/go-metrics"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/push"
)
var (
@@ -192,3 +194,58 @@ func (p *PrometheusSink) IncrCounterWithLabels(parts []string, val float32, labe
g.Add(float64(val))
p.updates[hash] = time.Now()
}
+
+type PrometheusPushSink struct {
+ *PrometheusSink
+ pusher *push.Pusher
+ address string
+ pushInterval time.Duration
+ stopChan chan struct{}
+}
+
+func NewPrometheusPushSink(address string, pushInterval time.Duration, name string) (*PrometheusPushSink, error) {
+
+ promSink := &PrometheusSink{
+ gauges: make(map[string]prometheus.Gauge),
+ summaries: make(map[string]prometheus.Summary),
+ counters: make(map[string]prometheus.Counter),
+ updates: make(map[string]time.Time),
+ expiration: 60 * time.Second,
+ }
+
+ pusher := push.New(address, name).Collector(promSink)
+
+ sink := &PrometheusPushSink{
+ promSink,
+ pusher,
+ address,
+		pushInterval,
+ make(chan struct{}),
+ }
+
+ sink.flushMetrics()
+ return sink, nil
+}
+
+func (s *PrometheusPushSink) flushMetrics() {
+ ticker := time.NewTicker(s.pushInterval)
+
+ go func() {
+ for {
+ select {
+ case <-ticker.C:
+ err := s.pusher.Push()
+ if err != nil {
+ log.Printf("[ERR] Error pushing to Prometheus! Err: %s", err)
+ }
+ case <-s.stopChan:
+ ticker.Stop()
+ return
+ }
+ }
+ }()
+}
+
+func (s *PrometheusPushSink) Shutdown() {
+ close(s.stopChan)
+}
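
A hedged usage sketch for the new push sink; the Pushgateway address, interval, and metric names are illustrative:

```go
package main

import (
	"time"

	metrics "github.com/armon/go-metrics"
	"github.com/armon/go-metrics/prometheus"
)

func main() {
	sink, err := prometheus.NewPrometheusPushSink("http://pushgateway:9091", 10*time.Second, "my_service")
	if err != nil {
		panic(err)
	}
	defer sink.Shutdown() // stops the background push loop

	// Register the sink as the global metrics destination.
	if _, err := metrics.NewGlobal(metrics.DefaultConfig("my_service"), sink); err != nil {
		panic(err)
	}
	metrics.IncrCounter([]string{"requests", "total"}, 1)
}
```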
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go
index 285e54d67993d..a4eb6a7f43aae 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/awsutil/path_value.go
@@ -70,7 +70,7 @@ func rValuesAtPath(v interface{}, path string, createPath, caseSensitive, nilTer
value = value.FieldByNameFunc(func(name string) bool {
if c == name {
return true
- } else if !caseSensitive && strings.ToLower(name) == strings.ToLower(c) {
+ } else if !caseSensitive && strings.EqualFold(name, c) {
return true
}
return false
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/config.go b/vendor/github.com/aws/aws-sdk-go/aws/config.go
index 8a7699b961997..93ebbcc13f8a8 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/config.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/config.go
@@ -249,6 +249,9 @@ type Config struct {
// STSRegionalEndpoint will enable regional or legacy endpoint resolving
STSRegionalEndpoint endpoints.STSRegionalEndpoint
+
+ // S3UsEast1RegionalEndpoint will enable regional or legacy endpoint resolving
+ S3UsEast1RegionalEndpoint endpoints.S3UsEast1RegionalEndpoint
}
// NewConfig returns a new Config pointer that can be chained with builder
@@ -430,6 +433,13 @@ func (c *Config) WithSTSRegionalEndpoint(sre endpoints.STSRegionalEndpoint) *Con
return c
}
+// WithS3UsEast1RegionalEndpoint will set whether or not to use regional endpoint flag
+// when resolving the endpoint for a service
+func (c *Config) WithS3UsEast1RegionalEndpoint(sre endpoints.S3UsEast1RegionalEndpoint) *Config {
+ c.S3UsEast1RegionalEndpoint = sre
+ return c
+}
+
func mergeInConfig(dst *Config, other *Config) {
if other == nil {
return
@@ -534,6 +544,10 @@ func mergeInConfig(dst *Config, other *Config) {
if other.STSRegionalEndpoint != endpoints.UnsetSTSEndpoint {
dst.STSRegionalEndpoint = other.STSRegionalEndpoint
}
+
+ if other.S3UsEast1RegionalEndpoint != endpoints.UnsetS3UsEast1Endpoint {
+ dst.S3UsEast1RegionalEndpoint = other.S3UsEast1RegionalEndpoint
+ }
}
// Copy will return a shallow copy of the Config object. If any additional
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go
index 87b9ff3ffec2b..343a2106f81a4 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/decode.go
@@ -83,6 +83,7 @@ func decodeV3Endpoints(modelDef modelDefinition, opts DecodeModelOptions) (Resol
p := &ps[i]
custAddEC2Metadata(p)
custAddS3DualStack(p)
+ custRegionalS3(p)
custRmIotDataService(p)
custFixAppAutoscalingChina(p)
custFixAppAutoscalingUsGov(p)
@@ -100,6 +101,33 @@ func custAddS3DualStack(p *partition) {
custAddDualstack(p, "s3-control")
}
+func custRegionalS3(p *partition) {
+ if p.ID != "aws" {
+ return
+ }
+
+ service, ok := p.Services["s3"]
+ if !ok {
+ return
+ }
+
+ // If global endpoint already exists no customization needed.
+ if _, ok := service.Endpoints["aws-global"]; ok {
+ return
+ }
+
+ service.PartitionEndpoint = "aws-global"
+ service.Endpoints["us-east-1"] = endpoint{}
+ service.Endpoints["aws-global"] = endpoint{
+ Hostname: "s3.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-1",
+ },
+ }
+
+ p.Services["s3"] = service
+}
+
func custAddDualstack(p *partition, svcName string) {
s, ok := p.Services[svcName]
if !ok {
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
index 28ccafc8edbf1..de07715d57a61 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/defaults.go
@@ -210,6 +210,7 @@ var awsPartition = partition{
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},
"me-south-1": endpoint{},
+ "sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{},
@@ -524,8 +525,11 @@ var awsPartition = partition{
"eu-north-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
+ "eu-west-3": endpoint{},
+ "me-south-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
+ "us-west-1": endpoint{},
"us-west-2": endpoint{},
},
},
@@ -877,11 +881,12 @@ var awsPartition = partition{
Region: "ca-central-1",
},
},
- "sa-east-1": endpoint{},
- "us-east-1": endpoint{},
- "us-east-2": endpoint{},
- "us-west-1": endpoint{},
- "us-west-2": endpoint{},
+ "me-south-1": endpoint{},
+ "sa-east-1": endpoint{},
+ "us-east-1": endpoint{},
+ "us-east-2": endpoint{},
+ "us-west-1": endpoint{},
+ "us-west-2": endpoint{},
},
},
"codedeploy": service{
@@ -1099,6 +1104,22 @@ var awsPartition = partition{
"us-west-2": endpoint{},
},
},
+ "dataexchange": service{
+
+ Endpoints: endpoints{
+ "ap-northeast-1": endpoint{},
+ "ap-northeast-2": endpoint{},
+ "ap-southeast-1": endpoint{},
+ "ap-southeast-2": endpoint{},
+ "eu-central-1": endpoint{},
+ "eu-west-1": endpoint{},
+ "eu-west-2": endpoint{},
+ "us-east-1": endpoint{},
+ "us-east-2": endpoint{},
+ "us-west-1": endpoint{},
+ "us-west-2": endpoint{},
+ },
+ },
"datapipeline": service{
Endpoints: endpoints{
@@ -1161,6 +1182,8 @@ var awsPartition = partition{
"ap-southeast-2": endpoint{},
"eu-central-1": endpoint{},
"eu-west-1": endpoint{},
+ "eu-west-2": endpoint{},
+ "eu-west-3": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
@@ -1277,6 +1300,12 @@ var awsPartition = partition{
Region: "eu-west-2",
},
},
+ "eu-west-3": endpoint{
+ Hostname: "rds.eu-west-3.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-west-3",
+ },
+ },
"us-east-1": endpoint{
Hostname: "rds.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
@@ -1300,6 +1329,7 @@ var awsPartition = partition{
"ds": service{
Endpoints: endpoints{
+ "ap-east-1": endpoint{},
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-south-1": endpoint{},
@@ -1665,6 +1695,7 @@ var awsPartition = partition{
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},
+ "me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
@@ -1679,11 +1710,16 @@ var awsPartition = partition{
Endpoints: endpoints{
"ap-northeast-1": endpoint{},
"ap-northeast-2": endpoint{},
+ "ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
"ap-southeast-2": endpoint{},
+ "ca-central-1": endpoint{},
"eu-central-1": endpoint{},
+ "eu-north-1": endpoint{},
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
+ "eu-west-3": endpoint{},
+ "sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{},
@@ -2156,6 +2192,9 @@ var awsPartition = partition{
Endpoints: endpoints{
"ap-northeast-1": endpoint{},
+ "ap-south-1": endpoint{},
+ "ap-southeast-1": endpoint{},
+ "ap-southeast-2": endpoint{},
"eu-west-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
@@ -2514,6 +2553,12 @@ var awsPartition = partition{
Region: "ap-southeast-2",
},
},
+ "ca-central-1": endpoint{
+ Hostname: "rds.ca-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ca-central-1",
+ },
+ },
"eu-central-1": endpoint{
Hostname: "rds.eu-central-1.amazonaws.com",
CredentialScope: credentialScope{
@@ -2538,6 +2583,12 @@ var awsPartition = partition{
Region: "eu-west-2",
},
},
+ "me-south-1": endpoint{
+ Hostname: "rds.me-south-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "me-south-1",
+ },
+ },
"us-east-1": endpoint{
Hostname: "rds.us-east-1.amazonaws.com",
CredentialScope: credentialScope{
@@ -2558,6 +2609,65 @@ var awsPartition = partition{
},
},
},
+ "oidc": service{
+
+ Endpoints: endpoints{
+ "ap-southeast-1": endpoint{
+ Hostname: "oidc.ap-southeast-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ap-southeast-1",
+ },
+ },
+ "ap-southeast-2": endpoint{
+ Hostname: "oidc.ap-southeast-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ap-southeast-2",
+ },
+ },
+ "ca-central-1": endpoint{
+ Hostname: "oidc.ca-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ca-central-1",
+ },
+ },
+ "eu-central-1": endpoint{
+ Hostname: "oidc.eu-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-central-1",
+ },
+ },
+ "eu-west-1": endpoint{
+ Hostname: "oidc.eu-west-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-west-1",
+ },
+ },
+ "eu-west-2": endpoint{
+ Hostname: "oidc.eu-west-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-west-2",
+ },
+ },
+ "us-east-1": endpoint{
+ Hostname: "oidc.us-east-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-1",
+ },
+ },
+ "us-east-2": endpoint{
+ Hostname: "oidc.us-east-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-2",
+ },
+ },
+ "us-west-2": endpoint{
+ Hostname: "oidc.us-west-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-west-2",
+ },
+ },
+ },
+ },
"opsworks": service{
Endpoints: endpoints{
@@ -2641,6 +2751,65 @@ var awsPartition = partition{
"us-west-2": endpoint{},
},
},
+ "portal.sso": service{
+
+ Endpoints: endpoints{
+ "ap-southeast-1": endpoint{
+ Hostname: "portal.sso.ap-southeast-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ap-southeast-1",
+ },
+ },
+ "ap-southeast-2": endpoint{
+ Hostname: "portal.sso.ap-southeast-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ap-southeast-2",
+ },
+ },
+ "ca-central-1": endpoint{
+ Hostname: "portal.sso.ca-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "ca-central-1",
+ },
+ },
+ "eu-central-1": endpoint{
+ Hostname: "portal.sso.eu-central-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-central-1",
+ },
+ },
+ "eu-west-1": endpoint{
+ Hostname: "portal.sso.eu-west-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-west-1",
+ },
+ },
+ "eu-west-2": endpoint{
+ Hostname: "portal.sso.eu-west-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "eu-west-2",
+ },
+ },
+ "us-east-1": endpoint{
+ Hostname: "portal.sso.us-east-1.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-1",
+ },
+ },
+ "us-east-2": endpoint{
+ Hostname: "portal.sso.us-east-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-2",
+ },
+ },
+ "us-west-2": endpoint{
+ Hostname: "portal.sso.us-west-2.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-west-2",
+ },
+ },
+ },
+ },
"projects.iot1click": service{
Endpoints: endpoints{
@@ -2907,7 +3076,7 @@ var awsPartition = partition{
},
},
"s3": service{
- PartitionEndpoint: "us-east-1",
+ PartitionEndpoint: "aws-global",
IsRegionalized: boxedTrue,
Defaults: endpoint{
Protocols: []string{"http", "https"},
@@ -2932,6 +3101,12 @@ var awsPartition = partition{
Hostname: "s3.ap-southeast-2.amazonaws.com",
SignatureVersions: []string{"s3", "s3v4"},
},
+ "aws-global": endpoint{
+ Hostname: "s3.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-1",
+ },
+ },
"ca-central-1": endpoint{},
"eu-central-1": endpoint{},
"eu-north-1": endpoint{},
@@ -2953,10 +3128,7 @@ var awsPartition = partition{
Hostname: "s3.sa-east-1.amazonaws.com",
SignatureVersions: []string{"s3", "s3v4"},
},
- "us-east-1": endpoint{
- Hostname: "s3.amazonaws.com",
- SignatureVersions: []string{"s3", "s3v4"},
- },
+ "us-east-1": endpoint{},
"us-east-2": endpoint{},
"us-west-1": endpoint{
Hostname: "s3.us-west-1.amazonaws.com",
@@ -3119,6 +3291,19 @@ var awsPartition = partition{
},
},
},
+ "savingsplans": service{
+ PartitionEndpoint: "aws-global",
+ IsRegionalized: boxedFalse,
+
+ Endpoints: endpoints{
+ "aws-global": endpoint{
+ Hostname: "savingsplans.amazonaws.com",
+ CredentialScope: credentialScope{
+ Region: "us-east-1",
+ },
+ },
+ },
+ },
"sdb": service{
Defaults: endpoint{
Protocols: []string{"http", "https"},
@@ -3248,6 +3433,9 @@ var awsPartition = partition{
"eu-west-3": endpoint{
Protocols: []string{"https"},
},
+ "me-south-1": endpoint{
+ Protocols: []string{"https"},
+ },
"sa-east-1": endpoint{
Protocols: []string{"https"},
},
@@ -3723,6 +3911,7 @@ var awsPartition = partition{
Protocols: []string{"https"},
},
Endpoints: endpoints{
+ "ap-east-1": endpoint{},
"ap-northeast-2": endpoint{},
"ap-south-1": endpoint{},
"ap-southeast-1": endpoint{},
@@ -3732,6 +3921,7 @@ var awsPartition = partition{
"eu-west-1": endpoint{},
"eu-west-2": endpoint{},
"eu-west-3": endpoint{},
+ "me-south-1": endpoint{},
"sa-east-1": endpoint{},
"us-east-1": endpoint{},
"us-east-2": endpoint{},
@@ -4040,6 +4230,12 @@ var awscnPartition = partition{
"cn-northwest-1": endpoint{},
},
},
+ "dax": service{
+
+ Endpoints: endpoints{
+ "cn-northwest-1": endpoint{},
+ },
+ },
"directconnect": service{
Endpoints: endpoints{
@@ -4427,6 +4623,12 @@ var awscnPartition = partition{
},
},
},
+ "workspaces": service{
+
+ Endpoints: endpoints{
+ "cn-northwest-1": endpoint{},
+ },
+ },
},
}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go
index fadff07d64cb1..1f53d9cb686d7 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/endpoints.go
@@ -50,12 +50,28 @@ type Options struct {
// STS Regional Endpoint flag helps with resolving the STS endpoint
STSRegionalEndpoint STSRegionalEndpoint
+
+ // S3 Regional Endpoint flag helps with resolving the S3 endpoint
+ S3UsEast1RegionalEndpoint S3UsEast1RegionalEndpoint
}
-// STSRegionalEndpoint is an enum type alias for int
-// It is used internally by the core sdk as STS Regional Endpoint flag value
+// STSRegionalEndpoint is an enum for the states of the STS Regional Endpoint
+// options.
type STSRegionalEndpoint int
+func (e STSRegionalEndpoint) String() string {
+ switch e {
+ case LegacySTSEndpoint:
+ return "legacy"
+ case RegionalSTSEndpoint:
+ return "regional"
+ case UnsetSTSEndpoint:
+ return ""
+ default:
+ return "unknown"
+ }
+}
+
const (
// UnsetSTSEndpoint represents that STS Regional Endpoint flag is not specified.
@@ -86,6 +102,55 @@ func GetSTSRegionalEndpoint(s string) (STSRegionalEndpoint, error) {
}
}
+// S3UsEast1RegionalEndpoint is an enum for the states of the S3 us-east-1
+// Regional Endpoint options.
+type S3UsEast1RegionalEndpoint int
+
+func (e S3UsEast1RegionalEndpoint) String() string {
+ switch e {
+ case LegacyS3UsEast1Endpoint:
+ return "legacy"
+ case RegionalS3UsEast1Endpoint:
+ return "regional"
+ case UnsetS3UsEast1Endpoint:
+ return ""
+ default:
+ return "unknown"
+ }
+}
+
+const (
+
+ // UnsetS3UsEast1Endpoint represents that S3 Regional Endpoint flag is not
+ // specified.
+ UnsetS3UsEast1Endpoint S3UsEast1RegionalEndpoint = iota
+
+ // LegacyS3UsEast1Endpoint represents when S3 Regional Endpoint flag is
+ // specified to use legacy endpoints.
+ LegacyS3UsEast1Endpoint
+
+ // RegionalS3UsEast1Endpoint represents when S3 Regional Endpoint flag is
+ // specified to use regional endpoints.
+ RegionalS3UsEast1Endpoint
+)
+
+// GetS3UsEast1RegionalEndpoint returns the S3UsEast1RegionalEndpoint flag based
+// on the input string provided in env config or shared config by the user.
+//
+// `legacy` and `regional` are the only valid strings (matched
+// case-insensitively) for resolving the S3 Regional Endpoint flag.
+func GetS3UsEast1RegionalEndpoint(s string) (S3UsEast1RegionalEndpoint, error) {
+ switch {
+ case strings.EqualFold(s, "legacy"):
+ return LegacyS3UsEast1Endpoint, nil
+ case strings.EqualFold(s, "regional"):
+ return RegionalS3UsEast1Endpoint, nil
+ default:
+ return UnsetS3UsEast1Endpoint,
+ fmt.Errorf("unable to resolve the value of S3UsEast1RegionalEndpoint for %v", s)
+ }
+}
+
// Set combines all of the option functions together.
func (o *Options) Set(optFns ...func(*Options)) {
for _, fn := range optFns {
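
A quick sketch of the new flag parser added above; input strings are matched case-insensitively:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	v, err := endpoints.GetS3UsEast1RegionalEndpoint("Regional")
	fmt.Println(v, err) // regional <nil>

	if _, err := endpoints.GetS3UsEast1RegionalEndpoint("bogus"); err != nil {
		fmt.Println(err) // unable to resolve the value of S3UsEast1RegionalEndpoint for bogus
	}
}
```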
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/legacy_regions.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/legacy_regions.go
new file mode 100644
index 0000000000000..df75e899adbe8
--- /dev/null
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/legacy_regions.go
@@ -0,0 +1,24 @@
+package endpoints
+
+var legacyGlobalRegions = map[string]map[string]struct{}{
+ "sts": {
+ "ap-northeast-1": {},
+ "ap-south-1": {},
+ "ap-southeast-1": {},
+ "ap-southeast-2": {},
+ "ca-central-1": {},
+ "eu-central-1": {},
+ "eu-north-1": {},
+ "eu-west-1": {},
+ "eu-west-2": {},
+ "eu-west-3": {},
+ "sa-east-1": {},
+ "us-east-1": {},
+ "us-east-2": {},
+ "us-west-1": {},
+ "us-west-2": {},
+ },
+ "s3": {
+ "us-east-1": {},
+ },
+}
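
A hedged sketch of how this legacy-region table affects resolution, assuming the default resolver and the `Options.S3UsEast1RegionalEndpoint` field added in this change:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/endpoints"
)

func main() {
	// With the flag unset (legacy), s3 in us-east-1 resolves to the global endpoint.
	legacy, _ := endpoints.DefaultResolver().EndpointFor("s3", "us-east-1")
	fmt.Println(legacy.URL) // https://s3.amazonaws.com

	// With the regional flag set, it resolves to the regional endpoint.
	regional, _ := endpoints.DefaultResolver().EndpointFor("s3", "us-east-1",
		func(o *endpoints.Options) {
			o.S3UsEast1RegionalEndpoint = endpoints.RegionalS3UsEast1Endpoint
		})
	fmt.Println(regional.URL) // https://s3.us-east-1.amazonaws.com
}
```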
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/sts_legacy_regions.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/sts_legacy_regions.go
deleted file mode 100644
index 26139621972d8..0000000000000
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/sts_legacy_regions.go
+++ /dev/null
@@ -1,19 +0,0 @@
-package endpoints
-
-var stsLegacyGlobalRegions = map[string]struct{}{
- "ap-northeast-1": {},
- "ap-south-1": {},
- "ap-southeast-1": {},
- "ap-southeast-2": {},
- "ca-central-1": {},
- "eu-central-1": {},
- "eu-north-1": {},
- "eu-west-1": {},
- "eu-west-2": {},
- "eu-west-3": {},
- "sa-east-1": {},
- "us-east-1": {},
- "us-east-2": {},
- "us-west-1": {},
- "us-west-2": {},
-}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go
index 7b09adff63e72..eb2ac83c99275 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/endpoints/v3model.go
@@ -110,8 +110,9 @@ func (p partition) EndpointFor(service, region string, opts ...func(*Options)) (
region = s.PartitionEndpoint
}
- if service == "sts" && opt.STSRegionalEndpoint != RegionalSTSEndpoint {
- if _, ok := stsLegacyGlobalRegions[region]; ok {
+ if (service == "sts" && opt.STSRegionalEndpoint != RegionalSTSEndpoint) ||
+ (service == "s3" && opt.S3UsEast1RegionalEndpoint != RegionalS3UsEast1Endpoint) {
+ if _, ok := legacyGlobalRegions[service][region]; ok {
region = "aws-global"
}
}
@@ -240,11 +241,23 @@ func (e endpoint) resolve(service, partitionID, region, dnsSuffix string, defs [
merged.mergeIn(e)
e = merged
- hostname := e.Hostname
+ signingRegion := e.CredentialScope.Region
+ if len(signingRegion) == 0 {
+ signingRegion = region
+ }
+ signingName := e.CredentialScope.Service
+ var signingNameDerived bool
+ if len(signingName) == 0 {
+ signingName = service
+ signingNameDerived = true
+ }
+
+ hostname := e.Hostname
// Offset the hostname for dualstack if enabled
if opts.UseDualStack && e.HasDualStack == boxedTrue {
hostname = e.DualStackHostname
+ region = signingRegion
}
u := strings.Replace(hostname, "{service}", service, 1)
@@ -254,18 +267,6 @@ func (e endpoint) resolve(service, partitionID, region, dnsSuffix string, defs [
scheme := getEndpointScheme(e.Protocols, opts.DisableSSL)
u = fmt.Sprintf("%s://%s", scheme, u)
- signingRegion := e.CredentialScope.Region
- if len(signingRegion) == 0 {
- signingRegion = region
- }
-
- signingName := e.CredentialScope.Service
- var signingNameDerived bool
- if len(signingName) == 0 {
- signingName = service
- signingNameDerived = true
- }
-
return ResolvedEndpoint{
URL: u,
PartitionID: partitionID,
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go
index 8e332cce6a6ed..52178141da626 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/request/request.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request.go
@@ -99,8 +99,12 @@ type Operation struct {
BeforePresignFn func(r *Request) error
}
-// New returns a new Request pointer for the service API
-// operation and parameters.
+// New returns a new Request pointer for the service API operation and
+// parameters.
+//
+// A Retryer should be provided to direct how the request is retried. If
+// Retryer is nil, a retryer that performs no retries will be used. You can use
+// NoOpRetryer in the Client package to disable retry behavior directly.
//
// Params is any value of input parameters to be the request payload.
// Data is pointer value to an object which the request's response
@@ -108,6 +112,10 @@ type Operation struct {
func New(cfg aws.Config, clientInfo metadata.ClientInfo, handlers Handlers,
retryer Retryer, operation *Operation, params interface{}, data interface{}) *Request {
+ if retryer == nil {
+ retryer = noOpRetryer{}
+ }
+
method := operation.HTTPMethod
if method == "" {
method = "POST"
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go
index f093fc542df0d..64784e16f3dec 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/request/request_pagination.go
@@ -17,11 +17,13 @@ import (
// does the pagination between API operations, and Paginator defines the
// configuration that will be used per page request.
//
-// cont := true
-// for p.Next() && cont {
+// for p.Next() {
// data := p.Page().(*s3.ListObjectsOutput)
// // process the page's data
+// // ...
+// // break out of loop to stop fetching additional pages
// }
+//
// return p.Err()
//
// See service client API operation Pages methods for examples how the SDK will
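
A hedged sketch of the early-exit pattern documented above, using DynamoDB's ListTablesPages as the paginated operation (the 100-table cutoff is illustrative):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	svc := dynamodb.New(session.Must(session.NewSession()))

	seen := 0
	err := svc.ListTablesPages(&dynamodb.ListTablesInput{},
		func(page *dynamodb.ListTablesOutput, lastPage bool) bool {
			seen += len(page.TableNames)
			// Returning false breaks out of the loop and stops fetching pages.
			return seen < 100
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("tables seen:", seen)
}
```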
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
index e84084da5effb..8015acc67ebf4 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/request/retryer.go
@@ -35,10 +35,41 @@ type Retryer interface {
}
// WithRetryer sets a Retryer value to the given Config returning the Config
-// value for chaining.
+// value for chaining. The value must not be nil.
func WithRetryer(cfg *aws.Config, retryer Retryer) *aws.Config {
+ if retryer == nil {
+ if cfg.Logger != nil {
+ cfg.Logger.Log("ERROR: Request.WithRetryer called with nil retryer. Replacing with retry disabled Retryer.")
+ }
+ retryer = noOpRetryer{}
+ }
cfg.Retryer = retryer
return cfg
+
+}
+
+// noOpRetryer is an internal no-op retryer used when a request is created
+// without a retryer.
+//
+// Provides a retryer that performs no retries.
+// It should be used when we do not want retries to be performed.
+type noOpRetryer struct{}
+
+// MaxRetries returns the maximum number of retries the service will make for
+// an individual API call; for NoOpRetryer this is always zero.
+func (d noOpRetryer) MaxRetries() int {
+ return 0
+}
+
+// ShouldRetry will always return false for NoOpRetryer, as it should never retry.
+func (d noOpRetryer) ShouldRetry(_ *Request) bool {
+ return false
+}
+
+// RetryRules returns the delay duration before retrying this request again;
+// since NoOpRetryer does not retry, RetryRules always returns 0.
+func (d noOpRetryer) RetryRules(_ *Request) time.Duration {
+ return 0
}
// retryableCodes is a collection of service response codes which are retry-able
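
A hedged sketch of disabling retries via WithRetryer; `client.NoOpRetryer` is the exported counterpart referenced in the comment above and is assumed to exist in this SDK version:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/client"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	cfg := request.WithRetryer(aws.NewConfig(), client.NoOpRetryer{})
	sess := session.Must(session.NewSession(cfg))
	_ = sess // clients built from sess will not retry failed requests
}
```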
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/credentials.go b/vendor/github.com/aws/aws-sdk-go/aws/session/credentials.go
index 7713ccfca5eb5..cc64e24f1d563 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/credentials.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/credentials.go
@@ -47,10 +47,10 @@ func resolveCredentials(cfg *aws.Config,
}
// WebIdentityEmptyRoleARNErr will occur if 'AWS_WEB_IDENTITY_TOKEN_FILE' was set but
-// 'AWS_IAM_ROLE_ARN' was not set.
+// 'AWS_ROLE_ARN' was not set.
var WebIdentityEmptyRoleARNErr = awserr.New(stscreds.ErrCodeWebIdentity, "role ARN is not set", nil)
-// WebIdentityEmptyTokenFilePathErr will occur if 'AWS_IAM_ROLE_ARN' was set but
+// WebIdentityEmptyTokenFilePathErr will occur if 'AWS_ROLE_ARN' was set but
// 'AWS_WEB_IDENTITY_TOKEN_FILE' was not set.
var WebIdentityEmptyTokenFilePathErr = awserr.New(stscreds.ErrCodeWebIdentity, "token file path is not set", nil)
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go
index 530cc3a9c0697..4092ab8fb7ef9 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/env_config.go
@@ -128,11 +128,19 @@ type envConfig struct {
// AWS_ROLE_SESSION_NAME=session_name
RoleSessionName string
- // Specifies the Regional Endpoint flag for the sdk to resolve the endpoint for a service
+ // Specifies the STS Regional Endpoint flag for the SDK to resolve the endpoint
+ // for a service.
//
- // AWS_STS_REGIONAL_ENDPOINTS =sts_regional_endpoint
+ // AWS_STS_REGIONAL_ENDPOINTS=regional
// This can take value as `regional` or `legacy`
STSRegionalEndpoint endpoints.STSRegionalEndpoint
+
+ // Specifies the S3 Regional Endpoint flag for the SDK to resolve the
+ // endpoint for a service.
+ //
+ // AWS_S3_US_EAST_1_REGIONAL_ENDPOINT=regional
+ // This can take value as `regional` or `legacy`
+ S3UsEast1RegionalEndpoint endpoints.S3UsEast1RegionalEndpoint
}
var (
@@ -190,6 +198,9 @@ var (
stsRegionalEndpointKey = []string{
"AWS_STS_REGIONAL_ENDPOINTS",
}
+ s3UsEast1RegionalEndpoint = []string{
+ "AWS_S3_US_EAST_1_REGIONAL_ENDPOINT",
+ }
)
// loadEnvConfig retrieves the SDK's environment configuration.
@@ -275,14 +286,24 @@ func envConfigLoad(enableSharedConfig bool) (envConfig, error) {
cfg.CustomCABundle = os.Getenv("AWS_CA_BUNDLE")
+ var err error
// STS Regional Endpoint variable
for _, k := range stsRegionalEndpointKey {
if v := os.Getenv(k); len(v) != 0 {
- STSRegionalEndpoint, err := endpoints.GetSTSRegionalEndpoint(v)
+ cfg.STSRegionalEndpoint, err = endpoints.GetSTSRegionalEndpoint(v)
+ if err != nil {
+ return cfg, fmt.Errorf("failed to load, %v from env config, %v", k, err)
+ }
+ }
+ }
+
+ // S3 Regional Endpoint variable
+ for _, k := range s3UsEast1RegionalEndpoint {
+ if v := os.Getenv(k); len(v) != 0 {
+ cfg.S3UsEast1RegionalEndpoint, err = endpoints.GetS3UsEast1RegionalEndpoint(v)
if err != nil {
return cfg, fmt.Errorf("failed to load, %v from env config, %v", k, err)
}
- cfg.STSRegionalEndpoint = STSRegionalEndpoint
}
}
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go
index 15fa647699f87..ab6daac7c3064 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/session.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/session.go
@@ -555,7 +555,20 @@ func mergeConfigSrcs(cfg, userCfg *aws.Config,
}
// Regional Endpoint flag for STS endpoint resolving
- mergeSTSRegionalEndpointConfig(cfg, envCfg, sharedCfg)
+ mergeSTSRegionalEndpointConfig(cfg, []endpoints.STSRegionalEndpoint{
+ userCfg.STSRegionalEndpoint,
+ envCfg.STSRegionalEndpoint,
+ sharedCfg.STSRegionalEndpoint,
+ endpoints.LegacySTSEndpoint,
+ })
+
+ // Regional Endpoint flag for S3 endpoint resolving
+ mergeS3UsEast1RegionalEndpointConfig(cfg, []endpoints.S3UsEast1RegionalEndpoint{
+ userCfg.S3UsEast1RegionalEndpoint,
+ envCfg.S3UsEast1RegionalEndpoint,
+ sharedCfg.S3UsEast1RegionalEndpoint,
+ endpoints.LegacyS3UsEast1Endpoint,
+ })
// Configure credentials if not already set by the user when creating the
// Session.
@@ -570,20 +583,22 @@ func mergeConfigSrcs(cfg, userCfg *aws.Config,
return nil
}
-// mergeSTSRegionalEndpointConfig function merges the STSRegionalEndpoint into cfg from
-// envConfig and SharedConfig with envConfig being given precedence over SharedConfig
-func mergeSTSRegionalEndpointConfig(cfg *aws.Config, envCfg envConfig, sharedCfg sharedConfig) error {
-
- cfg.STSRegionalEndpoint = envCfg.STSRegionalEndpoint
-
- if cfg.STSRegionalEndpoint == endpoints.UnsetSTSEndpoint {
- cfg.STSRegionalEndpoint = sharedCfg.STSRegionalEndpoint
+func mergeSTSRegionalEndpointConfig(cfg *aws.Config, values []endpoints.STSRegionalEndpoint) {
+ for _, v := range values {
+ if v != endpoints.UnsetSTSEndpoint {
+ cfg.STSRegionalEndpoint = v
+ break
+ }
}
+}
- if cfg.STSRegionalEndpoint == endpoints.UnsetSTSEndpoint {
- cfg.STSRegionalEndpoint = endpoints.LegacySTSEndpoint
+func mergeS3UsEast1RegionalEndpointConfig(cfg *aws.Config, values []endpoints.S3UsEast1RegionalEndpoint) {
+ for _, v := range values {
+ if v != endpoints.UnsetS3UsEast1Endpoint {
+ cfg.S3UsEast1RegionalEndpoint = v
+ break
+ }
}
- return nil
}
func initHandlers(s *Session) {
@@ -653,6 +668,11 @@ func (s *Session) resolveEndpoint(service, region string, cfg *aws.Config) (endp
// precedence.
opt.STSRegionalEndpoint = cfg.STSRegionalEndpoint
+ // Support for S3UsEast1RegionalEndpoint where the S3UsEast1RegionalEndpoint is
+ // provided in envConfig or sharedConfig with envConfig getting
+ // precedence.
+ opt.S3UsEast1RegionalEndpoint = cfg.S3UsEast1RegionalEndpoint
+
// Support the condition where the service is modeled but its
// endpoint metadata is not available.
opt.ResolveUnknownService = true
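
A hedged sketch of driving these flags from the environment variables read by envConfigLoad; the equivalent shared-config keys are `sts_regional_endpoints` and `s3_us_east_1_regional_endpoint`:

```go
package main

import (
	"os"

	"github.com/aws/aws-sdk-go/aws/session"
)

func main() {
	os.Setenv("AWS_STS_REGIONAL_ENDPOINTS", "regional")
	os.Setenv("AWS_S3_US_EAST_1_REGIONAL_ENDPOINT", "regional")

	// User config, env config, and shared config are merged in that order,
	// falling back to the legacy endpoints when all are unset.
	sess := session.Must(session.NewSession())
	_ = sess
}
```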
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go
index 8574668960b42..1d7b049cf7c7f 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/session/shared_config.go
@@ -44,6 +44,9 @@ const (
// Additional config fields for regional or legacy endpoints
stsRegionalEndpointSharedKey = `sts_regional_endpoints`
+ // Additional config fields for regional or legacy endpoints
+ s3UsEast1RegionalSharedKey = `s3_us_east_1_regional_endpoint`
+
// DefaultSharedConfigProfile is the default profile to be used when
// loading configuration from the config files if another profile name
// is not provided.
@@ -92,11 +95,17 @@ type sharedConfig struct {
CSMPort string
CSMClientID string
- // Specifies the Regional Endpoint flag for the sdk to resolve the endpoint for a service
+ // Specifies the Regional Endpoint flag for the SDK to resolve the endpoint for a service
//
- // sts_regional_endpoints = sts_regional_endpoint
+ // sts_regional_endpoints = regional
// This can take value as `LegacySTSEndpoint` or `RegionalSTSEndpoint`
STSRegionalEndpoint endpoints.STSRegionalEndpoint
+
+ // Specifies the Regional Endpoint flag for the SDK to resolve the endpoint for a service
+ //
+ // s3_us_east_1_regional_endpoint = regional
+ // This can take value as `LegacyS3UsEast1Endpoint` or `RegionalS3UsEast1Endpoint`
+ S3UsEast1RegionalEndpoint endpoints.S3UsEast1RegionalEndpoint
}
type sharedConfigFile struct {
@@ -259,10 +268,19 @@ func (cfg *sharedConfig) setFromIniFile(profile string, file sharedConfigFile, e
sre, err := endpoints.GetSTSRegionalEndpoint(v)
if err != nil {
return fmt.Errorf("failed to load %s from shared config, %s, %v",
- stsRegionalEndpointKey, file.Filename, err)
+ stsRegionalEndpointSharedKey, file.Filename, err)
}
cfg.STSRegionalEndpoint = sre
}
+
+ if v := section.String(s3UsEast1RegionalSharedKey); len(v) != 0 {
+ sre, err := endpoints.GetS3UsEast1RegionalEndpoint(v)
+ if err != nil {
+ return fmt.Errorf("failed to load %s from shared config, %s, %v",
+ s3UsEast1RegionalSharedKey, file.Filename, err)
+ }
+ cfg.S3UsEast1RegionalEndpoint = sre
+ }
}
updateString(&cfg.CredentialProcess, section, credentialProcessKey)
diff --git a/vendor/github.com/aws/aws-sdk-go/aws/version.go b/vendor/github.com/aws/aws-sdk-go/aws/version.go
index 8d4da6459ad27..b03cfb752dd30 100644
--- a/vendor/github.com/aws/aws-sdk-go/aws/version.go
+++ b/vendor/github.com/aws/aws-sdk-go/aws/version.go
@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"
// SDKVersion is the version of this SDK
-const SDKVersion = "1.25.22"
+const SDKVersion = "1.25.35"
diff --git a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go
index 7d560dc2002e4..4a7e21e00ff2c 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/applicationautoscaling/api.go
@@ -463,10 +463,12 @@ func (c *ApplicationAutoScaling) DescribeScalableTargetsPagesWithContext(ctx aws
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeScalableTargetsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeScalableTargetsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -617,10 +619,12 @@ func (c *ApplicationAutoScaling) DescribeScalingActivitiesPagesWithContext(ctx a
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeScalingActivitiesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeScalingActivitiesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -777,10 +781,12 @@ func (c *ApplicationAutoScaling) DescribeScalingPoliciesPagesWithContext(ctx aws
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeScalingPoliciesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeScalingPoliciesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -930,10 +936,12 @@ func (c *ApplicationAutoScaling) DescribeScheduledActionsPagesWithContext(ctx aw
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeScheduledActionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeScheduledActionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go
index 8c889ff345544..562870b35d141 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/api.go
@@ -229,10 +229,12 @@ func (c *DynamoDB) BatchGetItemPagesWithContext(ctx aws.Context, input *BatchGet
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*BatchGetItemOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*BatchGetItemOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -2687,10 +2689,12 @@ func (c *DynamoDB) ListTablesPagesWithContext(ctx aws.Context, input *ListTables
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ListTablesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ListTablesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -3185,10 +3189,12 @@ func (c *DynamoDB) QueryPagesWithContext(ctx aws.Context, input *QueryInput, fn
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*QueryOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*QueryOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -3695,10 +3701,12 @@ func (c *DynamoDB) ScanPagesWithContext(ctx aws.Context, input *ScanInput, fn fu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ScanOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ScanOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -3905,16 +3913,6 @@ func (c *DynamoDB) TransactGetItemsRequest(input *TransactGetItemsInput) (req *r
// cannot retrieve items from tables in more than one AWS account or Region.
// The aggregate size of the items in the transaction cannot exceed 4 MB.
//
-// All AWS Regions and AWS GovCloud (US) support up to 25 items per transaction
-// with up to 4 MB of data, except the following AWS Regions:
-//
-// * China (Beijing)
-//
-// * China (Ningxia)
-//
-// The China (Beijing) and China (Ningxia) Regions support up to 10 items per
-// transaction with up to 4 MB of data.
-//
// DynamoDB rejects the entire TransactGetItems request if any of the following
// is true:
//
@@ -3960,8 +3958,6 @@ func (c *DynamoDB) TransactGetItemsRequest(input *TransactGetItemsInput) (req *r
// index (LSI) becomes too large, or a similar validation error occurs because
// of changes made by the transaction.
//
-// * The aggregate size of the items in the transaction exceeds 4 MBs.
-//
// * There is a user error, such as an invalid data format.
//
// DynamoDB cancels a TransactGetItems request under the following circumstances:
@@ -3976,8 +3972,6 @@ func (c *DynamoDB) TransactGetItemsRequest(input *TransactGetItemsInput) (req *r
// * There is insufficient provisioned capacity for the transaction to be
// completed.
//
-// * The aggregate size of the items in the transaction exceeds 4 MBs.
-//
// * There is a user error, such as an invalid data format.
//
// If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons
@@ -4039,6 +4033,11 @@ func (c *DynamoDB) TransactGetItemsRequest(input *TransactGetItemsInput) (req *r
// Exponential Backoff (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff)
// in the Amazon DynamoDB Developer Guide.
//
+// * ErrCodeRequestLimitExceeded "RequestLimitExceeded"
+// Throughput exceeds the current throughput limit for your account. Please
+// contact AWS Support (https://aws.amazon.com/support) to request
+// a limit increase.
+//
// * ErrCodeInternalServerError "InternalServerError"
// An error occurred on the server side.
//
@@ -4136,16 +4135,6 @@ func (c *DynamoDB) TransactWriteItemsRequest(input *TransactWriteItemsInput) (re
// item. The aggregate size of the items in the transaction cannot exceed 4
// MB.
//
-// All AWS Regions and AWS GovCloud (US) support up to 25 items per transaction
-// with up to 4 MB of data, except the following AWS Regions:
-//
-// * China (Beijing)
-//
-// * China (Ningxia)
-//
-// The China (Beijing) and China (Ningxia) Regions support up to 10 items per
-// transaction with up to 4 MB of data.
-//
// The actions are completed atomically so that either all of them succeed,
// or all of them fail. They are defined by the following objects:
//
@@ -4226,8 +4215,6 @@ func (c *DynamoDB) TransactWriteItemsRequest(input *TransactWriteItemsInput) (re
// index (LSI) becomes too large, or a similar validation error occurs because
// of changes made by the transaction.
//
-// * The aggregate size of the items in the transaction exceeds 4 MBs.
-//
// * There is a user error, such as an invalid data format.
//
// DynamoDB cancels a TransactGetItems request under the following circumstances:
@@ -4242,8 +4229,6 @@ func (c *DynamoDB) TransactWriteItemsRequest(input *TransactWriteItemsInput) (re
// * There is insufficient provisioned capacity for the transaction to be
// completed.
//
-// * The aggregate size of the items in the transaction exceeds 4 MBs.
-//
// * There is a user error, such as an invalid data format.
//
// If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons
@@ -4312,6 +4297,11 @@ func (c *DynamoDB) TransactWriteItemsRequest(input *TransactWriteItemsInput) (re
// Exponential Backoff (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Programming.Errors.html#Programming.Errors.RetryAndBackoff)
// in the Amazon DynamoDB Developer Guide.
//
+// * ErrCodeRequestLimitExceeded "RequestLimitExceeded"
+// Throughput exceeds the current throughput limit for your account. Please
+// contact AWS Support (https://aws.amazon.com/support) to request
+// a limit increase.
+//
// * ErrCodeInternalServerError "InternalServerError"
// An error occurred on the server side.
//
@@ -5603,7 +5593,7 @@ func (s *AutoScalingPolicyDescription) SetTargetTrackingScalingPolicyConfigurati
return s
}
-// Represents the autoscaling policy to be modified.
+// Represents the auto scaling policy to be modified.
type AutoScalingPolicyUpdate struct {
_ struct{} `type:"structure"`
@@ -5659,15 +5649,15 @@ func (s *AutoScalingPolicyUpdate) SetTargetTrackingScalingPolicyConfiguration(v
return s
}
-// Represents the autoscaling settings for a global table or global secondary
+// Represents the auto scaling settings for a global table or global secondary
// index.
type AutoScalingSettingsDescription struct {
_ struct{} `type:"structure"`
- // Disabled autoscaling for this global table or global secondary index.
+ // Disabled auto scaling for this global table or global secondary index.
AutoScalingDisabled *bool `type:"boolean"`
- // Role ARN used for configuring autoScaling policy.
+ // Role ARN used for configuring the auto scaling policy.
AutoScalingRoleArn *string `type:"string"`
// The maximum capacity units that a global table or global secondary index
@@ -5722,15 +5712,15 @@ func (s *AutoScalingSettingsDescription) SetScalingPolicies(v []*AutoScalingPoli
return s
}
-// Represents the autoscaling settings to be modified for a global table or
+// Represents the auto scaling settings to be modified for a global table or
// global secondary index.
type AutoScalingSettingsUpdate struct {
_ struct{} `type:"structure"`
- // Disabled autoscaling for this global table or global secondary index.
+ // Disabled auto scaling for this global table or global secondary index.
AutoScalingDisabled *bool `type:"boolean"`
- // Role ARN used for configuring autoscaling policy.
+ // Role ARN used for configuring auto scaling policy.
AutoScalingRoleArn *string `min:"1" type:"string"`
// The maximum capacity units that a global table or global secondary index
@@ -5826,7 +5816,7 @@ type AutoScalingTargetTrackingScalingPolicyConfigurationDescription struct {
// subsequent scale in requests until it has expired. You should scale in conservatively
// to protect your application's availability. However, if another alarm triggers
// a scale out policy during the cooldown period after a scale-in, application
- // autoscaling scales out your scalable target immediately.
+ // auto scaling scales out your scalable target immediately.
ScaleInCooldown *int64 `type:"integer"`
// The amount of time, in seconds, after a scale out activity completes before
@@ -5894,7 +5884,7 @@ type AutoScalingTargetTrackingScalingPolicyConfigurationUpdate struct {
// subsequent scale in requests until it has expired. You should scale in conservatively
// to protect your application's availability. However, if another alarm triggers
// a scale out policy during the cooldown period after a scale-in, application
- // autoscaling scales out your scalable target immediately.
+ // auto scaling scales out your scalable target immediately.
ScaleInCooldown *int64 `type:"integer"`
// The amount of time, in seconds, after a scale out activity completes before
@@ -6897,7 +6887,7 @@ func (s *Condition) SetComparisonOperator(v string) *Condition {
}
// Represents a request to perform a check that an item exists or to check the
-// condition of specific attributes of the item..
+// condition of specific attributes of the item.
type ConditionCheck struct {
_ struct{} `type:"structure"`
@@ -7388,7 +7378,7 @@ func (s *CreateGlobalTableOutput) SetGlobalTableDescription(v *GlobalTableDescri
type CreateReplicaAction struct {
_ struct{} `type:"structure"`
- // The region of the replica to be added.
+ // The Region of the replica to be added.
//
// RegionName is a required field
RegionName *string `type:"string" required:"true"`
@@ -7435,11 +7425,11 @@ type CreateTableInput struct {
// Controls how you are charged for read and write throughput and how you manage
// capacity. This setting can be changed later.
//
- // * PROVISIONED - Sets the billing mode to PROVISIONED. We recommend using
- // PROVISIONED for predictable workloads.
+ // * PROVISIONED - We recommend using PROVISIONED for predictable workloads.
+ // PROVISIONED sets the billing mode to Provisioned Mode (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.ProvisionedThroughput.Manual).
//
- // * PAY_PER_REQUEST - Sets the billing mode to PAY_PER_REQUEST. We recommend
- // using PAY_PER_REQUEST for unpredictable workloads.
+ // * PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable
+ // workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand).
BillingMode *string `type:"string" enum:"BillingMode"`
// One or more global secondary indexes (the maximum is 20) to be created on
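As a caller-side sketch of the PAY_PER_REQUEST path the reworded docs describe — assuming the v1 client; the table and attribute names are illustrative:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// createOnDemandTable creates a table billed per request; the table and
// attribute names are illustrative.
func createOnDemandTable(svc *dynamodb.DynamoDB) error {
	_, err := svc.CreateTable(&dynamodb.CreateTableInput{
		TableName:   aws.String("events"),
		BillingMode: aws.String(dynamodb.BillingModePayPerRequest), // On-Demand Mode
		AttributeDefinitions: []*dynamodb.AttributeDefinition{
			{AttributeName: aws.String("pk"), AttributeType: aws.String(dynamodb.ScalarAttributeTypeS)},
		},
		KeySchema: []*dynamodb.KeySchemaElement{
			{AttributeName: aws.String("pk"), KeyType: aws.String(dynamodb.KeyTypeHash)},
		},
	})
	return err
}
```

Omitting BillingMode leaves the table in PROVISIONED mode, in which case ProvisionedThroughput must be supplied instead.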
@@ -8243,7 +8233,7 @@ func (s *DeleteItemOutput) SetItemCollectionMetrics(v *ItemCollectionMetrics) *D
type DeleteReplicaAction struct {
_ struct{} `type:"structure"`
- // The region of the replica to be removed.
+ // The Region of the replica to be removed.
//
// RegionName is a required field
RegionName *string `type:"string" required:"true"`
@@ -8918,7 +8908,7 @@ func (s *Endpoint) SetCachePeriodInMinutes(v int64) *Endpoint {
}
// Represents a condition to be compared with an attribute value. This condition
-// can be used with DeleteItem, PutItem or UpdateItem operations; if the comparison
+// can be used with DeleteItem, PutItem, or UpdateItem operations; if the comparison
// evaluates to true, the operation succeeds; if not, the operation fails. You
// can use ExpectedAttributeValue in one of two different ways:
//
@@ -9443,7 +9433,7 @@ type GlobalSecondaryIndex struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -9560,6 +9550,11 @@ type GlobalSecondaryIndexDescription struct {
// DynamoDB will do so. After all items have been processed, the backfilling
// operation is complete and Backfilling is false.
//
+ // You can delete an index that is being created during the Backfilling phase
+ // when IndexStatus is set to CREATING and Backfilling is true. You can't delete
+ // the index that is being created when IndexStatus is set to CREATING and Backfilling
+ // is false.
+ //
// For indexes that were created during a CreateTable operation, the Backfilling
// attribute does not appear in the DescribeTable output.
Backfilling *bool `type:"boolean"`
@@ -9598,7 +9593,7 @@ type GlobalSecondaryIndexDescription struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -9701,7 +9696,7 @@ type GlobalSecondaryIndexInfo struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -9847,7 +9842,7 @@ type GlobalTable struct {
// The global table name.
GlobalTableName *string `min:"3" type:"string"`
- // The regions where the global table has replicas.
+ // The Regions where the global table has replicas.
ReplicationGroup []*Replica `type:"list"`
}
@@ -9897,7 +9892,7 @@ type GlobalTableDescription struct {
// * ACTIVE - The global table is ready for use.
GlobalTableStatus *string `type:"string" enum:"GlobalTableStatus"`
- // The regions where the global table has replicas.
+ // The Regions where the global table has replicas.
ReplicationGroup []*ReplicaDescription `type:"list"`
}
@@ -9952,7 +9947,7 @@ type GlobalTableGlobalSecondaryIndexSettingsUpdate struct {
// IndexName is a required field
IndexName *string `min:"3" type:"string" required:"true"`
- // AutoScaling settings for managing a global secondary index's write capacity
+ // Auto scaling settings for managing a global secondary index's write capacity
// units.
ProvisionedWriteCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"`
@@ -10108,7 +10103,7 @@ type KeySchemaElement struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -10725,7 +10720,7 @@ type LocalSecondaryIndex struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -10839,7 +10834,7 @@ type LocalSecondaryIndexDescription struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -10916,7 +10911,7 @@ type LocalSecondaryIndexInfo struct {
// * RANGE - sort key
//
// The partition key of an item is also known as its hash attribute. The term
- // "hash attribute" derives from DynamoDB' usage of an internal hash function
+ // "hash attribute" derives from DynamoDB's usage of an internal hash function
// to evenly distribute data items across partitions, based on their partition
// key values.
//
@@ -10963,8 +10958,8 @@ func (s *LocalSecondaryIndexInfo) SetProjection(v *Projection) *LocalSecondaryIn
type PointInTimeRecoveryDescription struct {
_ struct{} `type:"structure"`
- // Specifies the earliest point in time you can restore your table to. It You
- // can restore your table to any point in time during the last 35 days.
+ // Specifies the earliest point in time you can restore your table to. You can
+ // restore your table to any point in time during the last 35 days.
EarliestRestorableDateTime *time.Time `type:"timestamp"`
// LatestRestorableDateTime is typically 5 minutes before the current time.
@@ -11067,7 +11062,7 @@ type Projection struct {
// * KEYS_ONLY - Only the index and primary keys are projected into the index.
//
// * INCLUDE - Only the specified table attributes are projected into the
- // index. The list of projected attributes are in NonKeyAttributes.
+ // index. The list of projected attributes is in NonKeyAttributes.
//
// * ALL - All of the table attributes are projected into the index.
ProjectionType *string `type:"string" enum:"ProjectionType"`
@@ -11662,7 +11657,7 @@ type PutRequest struct {
// A map of attribute name to attribute values, representing the primary key
// of an item to be processed by PutItem. All of the table's primary key attributes
// must be specified, and their data types must match those of the table's key
- // schema. If any attributes are present in the item which are part of an index
+ // schema. If any attributes are present in the item that are part of an index
// key schema for the table, their types must match the index key schema.
//
// Item is a required field
@@ -12225,7 +12220,7 @@ func (s *QueryOutput) SetScannedCount(v int64) *QueryOutput {
type Replica struct {
_ struct{} `type:"structure"`
- // The region where the replica needs to be created.
+ // The Region where the replica needs to be created.
RegionName *string `type:"string"`
}
@@ -12249,7 +12244,7 @@ func (s *Replica) SetRegionName(v string) *Replica {
type ReplicaDescription struct {
_ struct{} `type:"structure"`
- // The name of the region.
+ // The name of the Region.
RegionName *string `type:"string"`
}
@@ -12290,7 +12285,7 @@ type ReplicaGlobalSecondaryIndexSettingsDescription struct {
// * ACTIVE - The global secondary index is ready for use.
IndexStatus *string `type:"string" enum:"IndexStatus"`
- // Autoscaling settings for a global secondary index replica's read capacity
+ // Auto scaling settings for a global secondary index replica's read capacity
// units.
ProvisionedReadCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"`
@@ -12298,7 +12293,7 @@ type ReplicaGlobalSecondaryIndexSettingsDescription struct {
// DynamoDB returns a ThrottlingException.
ProvisionedReadCapacityUnits *int64 `min:"1" type:"long"`
- // AutoScaling settings for a global secondary index replica's write capacity
+ // Auto scaling settings for a global secondary index replica's write capacity
// units.
ProvisionedWriteCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"`
@@ -12364,7 +12359,7 @@ type ReplicaGlobalSecondaryIndexSettingsUpdate struct {
// IndexName is a required field
IndexName *string `min:"3" type:"string" required:"true"`
- // Autoscaling settings for managing a global secondary index replica's read
+ // Auto scaling settings for managing a global secondary index replica's read
// capacity units.
ProvisionedReadCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"`
@@ -12429,7 +12424,7 @@ func (s *ReplicaGlobalSecondaryIndexSettingsUpdate) SetProvisionedReadCapacityUn
type ReplicaSettingsDescription struct {
_ struct{} `type:"structure"`
- // The region name of the replica.
+ // The Region name of the replica.
//
// RegionName is a required field
RegionName *string `type:"string" required:"true"`
@@ -12440,7 +12435,7 @@ type ReplicaSettingsDescription struct {
// Replica global secondary index settings for the global table.
ReplicaGlobalSecondaryIndexSettings []*ReplicaGlobalSecondaryIndexSettingsDescription `type:"list"`
- // Autoscaling settings for a global table replica's read capacity units.
+ // Auto scaling settings for a global table replica's read capacity units.
ReplicaProvisionedReadCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"`
// The maximum number of strongly consistent reads consumed per second before
@@ -12449,7 +12444,7 @@ type ReplicaSettingsDescription struct {
// in the Amazon DynamoDB Developer Guide.
ReplicaProvisionedReadCapacityUnits *int64 `type:"long"`
- // AutoScaling settings for a global table replica's write capacity units.
+ // Auto scaling settings for a global table replica's write capacity units.
ReplicaProvisionedWriteCapacityAutoScalingSettings *AutoScalingSettingsDescription `type:"structure"`
// The maximum number of writes consumed per second before DynamoDB returns
@@ -12458,15 +12453,15 @@ type ReplicaSettingsDescription struct {
// in the Amazon DynamoDB Developer Guide.
ReplicaProvisionedWriteCapacityUnits *int64 `type:"long"`
- // The current state of the region:
+ // The current state of the Region:
//
- // * CREATING - The region is being created.
+ // * CREATING - The Region is being created.
//
- // * UPDATING - The region is being updated.
+ // * UPDATING - The Region is being updated.
//
- // * DELETING - The region is being deleted.
+ // * DELETING - The Region is being deleted.
//
- // * ACTIVE - The region is ready for use.
+ // * ACTIVE - The Region is ready for use.
ReplicaStatus *string `type:"string" enum:"ReplicaStatus"`
}
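The ReplicaStatus values above surface through DescribeGlobalTableSettings; a minimal sketch of reading them, assuming the v1 client and a hypothetical global table name:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// printReplicaStatus lists each replica Region and its lifecycle state for
// a hypothetical global table.
func printReplicaStatus(svc *dynamodb.DynamoDB) error {
	out, err := svc.DescribeGlobalTableSettings(&dynamodb.DescribeGlobalTableSettingsInput{
		GlobalTableName: aws.String("events-global"),
	})
	if err != nil {
		return err
	}
	for _, r := range out.ReplicaSettings {
		fmt.Println(aws.StringValue(r.RegionName), aws.StringValue(r.ReplicaStatus))
	}
	return nil
}
```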
@@ -12528,11 +12523,11 @@ func (s *ReplicaSettingsDescription) SetReplicaStatus(v string) *ReplicaSettings
return s
}
-// Represents the settings for a global table in a region that will be modified.
+// Represents the settings for a global table in a Region that will be modified.
type ReplicaSettingsUpdate struct {
_ struct{} `type:"structure"`
- // The region of the replica to be added.
+ // The Region of the replica to be added.
//
// RegionName is a required field
RegionName *string `type:"string" required:"true"`
@@ -12541,7 +12536,7 @@ type ReplicaSettingsUpdate struct {
// will be modified.
ReplicaGlobalSecondaryIndexSettingsUpdate []*ReplicaGlobalSecondaryIndexSettingsUpdate `min:"1" type:"list"`
- // Autoscaling settings for managing a global table replica's read capacity
+ // Auto scaling settings for managing a global table replica's read capacity
// units.
ReplicaProvisionedReadCapacityAutoScalingSettingsUpdate *AutoScalingSettingsUpdate `type:"structure"`
@@ -12693,10 +12688,10 @@ type RestoreSummary struct {
// RestoreInProgress is a required field
RestoreInProgress *bool `type:"boolean" required:"true"`
- // ARN of the backup from which the table was restored.
+ // The Amazon Resource Name (ARN) of the backup from which the table was restored.
SourceBackupArn *string `min:"37" type:"string"`
- // ARN of the source table of the backup that is being restored.
+ // The ARN of the source table of the backup that is being restored.
SourceTableArn *string `type:"string"`
}
@@ -12742,6 +12737,22 @@ type RestoreTableFromBackupInput struct {
// BackupArn is a required field
BackupArn *string `min:"37" type:"string" required:"true"`
+ // The billing mode of the restored table.
+ BillingModeOverride *string `type:"string" enum:"BillingMode"`
+
+ // List of global secondary indexes for the restored table. The indexes provided
+ // should match existing secondary indexes. You can choose to exclude some or
+ // all of the indexes at the time of restore.
+ GlobalSecondaryIndexOverride []*GlobalSecondaryIndex `type:"list"`
+
+ // List of local secondary indexes for the restored table. The indexes provided
+ // should match existing secondary indexes. You can choose to exclude some or
+ // all of the indexes at the time of restore.
+ LocalSecondaryIndexOverride []*LocalSecondaryIndex `type:"list"`
+
+ // Provisioned throughput settings for the restored table.
+ ProvisionedThroughputOverride *ProvisionedThroughput `type:"structure"`
+
// The name of the new table to which the backup must be restored.
//
// TargetTableName is a required field
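The override setters this diff adds chain, so a restore that swaps the billing mode and throughput can be built fluently. A minimal sketch, assuming the v1 client; the backup ARN and table names are placeholders:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// restoreWithOverrides restores a backup into a new table while overriding
// the billing mode and throughput. The backup ARN is a placeholder.
func restoreWithOverrides(svc *dynamodb.DynamoDB) error {
	input := (&dynamodb.RestoreTableFromBackupInput{}).
		SetBackupArn("arn:aws:dynamodb:us-east-1:111122223333:table/events/backup/0123456789012-abcdefgh").
		SetTargetTableName("events-restored").
		SetBillingModeOverride(dynamodb.BillingModeProvisioned).
		SetProvisionedThroughputOverride(&dynamodb.ProvisionedThroughput{
			ReadCapacityUnits:  aws.Int64(5),
			WriteCapacityUnits: aws.Int64(5),
		})
	_, err := svc.RestoreTableFromBackup(input)
	return err
}
```

RestoreTableToPointInTimeInput gains the same four overrides, so the equivalent point-in-time call looks the same apart from the restore-time fields.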
@@ -12773,6 +12784,31 @@ func (s *RestoreTableFromBackupInput) Validate() error {
if s.TargetTableName != nil && len(*s.TargetTableName) < 3 {
invalidParams.Add(request.NewErrParamMinLen("TargetTableName", 3))
}
+ if s.GlobalSecondaryIndexOverride != nil {
+ for i, v := range s.GlobalSecondaryIndexOverride {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "GlobalSecondaryIndexOverride", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.LocalSecondaryIndexOverride != nil {
+ for i, v := range s.LocalSecondaryIndexOverride {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LocalSecondaryIndexOverride", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.ProvisionedThroughputOverride != nil {
+ if err := s.ProvisionedThroughputOverride.Validate(); err != nil {
+ invalidParams.AddNested("ProvisionedThroughputOverride", err.(request.ErrInvalidParams))
+ }
+ }
if invalidParams.Len() > 0 {
return invalidParams
@@ -12786,6 +12822,30 @@ func (s *RestoreTableFromBackupInput) SetBackupArn(v string) *RestoreTableFromBa
return s
}
+// SetBillingModeOverride sets the BillingModeOverride field's value.
+func (s *RestoreTableFromBackupInput) SetBillingModeOverride(v string) *RestoreTableFromBackupInput {
+ s.BillingModeOverride = &v
+ return s
+}
+
+// SetGlobalSecondaryIndexOverride sets the GlobalSecondaryIndexOverride field's value.
+func (s *RestoreTableFromBackupInput) SetGlobalSecondaryIndexOverride(v []*GlobalSecondaryIndex) *RestoreTableFromBackupInput {
+ s.GlobalSecondaryIndexOverride = v
+ return s
+}
+
+// SetLocalSecondaryIndexOverride sets the LocalSecondaryIndexOverride field's value.
+func (s *RestoreTableFromBackupInput) SetLocalSecondaryIndexOverride(v []*LocalSecondaryIndex) *RestoreTableFromBackupInput {
+ s.LocalSecondaryIndexOverride = v
+ return s
+}
+
+// SetProvisionedThroughputOverride sets the ProvisionedThroughputOverride field's value.
+func (s *RestoreTableFromBackupInput) SetProvisionedThroughputOverride(v *ProvisionedThroughput) *RestoreTableFromBackupInput {
+ s.ProvisionedThroughputOverride = v
+ return s
+}
+
// SetTargetTableName sets the TargetTableName field's value.
func (s *RestoreTableFromBackupInput) SetTargetTableName(v string) *RestoreTableFromBackupInput {
s.TargetTableName = &v
@@ -12818,6 +12878,22 @@ func (s *RestoreTableFromBackupOutput) SetTableDescription(v *TableDescription)
type RestoreTableToPointInTimeInput struct {
_ struct{} `type:"structure"`
+ // The billing mode of the restored table.
+ BillingModeOverride *string `type:"string" enum:"BillingMode"`
+
+ // List of global secondary indexes for the restored table. The indexes provided
+ // should match existing secondary indexes. You can choose to exclude some or
+ // all of the indexes at the time of restore.
+ GlobalSecondaryIndexOverride []*GlobalSecondaryIndex `type:"list"`
+
+ // List of local secondary indexes for the restored table. The indexes provided
+ // should match existing secondary indexes. You can choose to exclude some or
+ // all of the indexes at the time of restore.
+ LocalSecondaryIndexOverride []*LocalSecondaryIndex `type:"list"`
+
+ // Provisioned throughput settings for the restored table.
+ ProvisionedThroughputOverride *ProvisionedThroughput `type:"structure"`
+
// Time in the past to restore the table to.
RestoreDateTime *time.Time `type:"timestamp"`
@@ -12861,6 +12937,31 @@ func (s *RestoreTableToPointInTimeInput) Validate() error {
if s.TargetTableName != nil && len(*s.TargetTableName) < 3 {
invalidParams.Add(request.NewErrParamMinLen("TargetTableName", 3))
}
+ if s.GlobalSecondaryIndexOverride != nil {
+ for i, v := range s.GlobalSecondaryIndexOverride {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "GlobalSecondaryIndexOverride", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.LocalSecondaryIndexOverride != nil {
+ for i, v := range s.LocalSecondaryIndexOverride {
+ if v == nil {
+ continue
+ }
+ if err := v.Validate(); err != nil {
+ invalidParams.AddNested(fmt.Sprintf("%s[%v]", "LocalSecondaryIndexOverride", i), err.(request.ErrInvalidParams))
+ }
+ }
+ }
+ if s.ProvisionedThroughputOverride != nil {
+ if err := s.ProvisionedThroughputOverride.Validate(); err != nil {
+ invalidParams.AddNested("ProvisionedThroughputOverride", err.(request.ErrInvalidParams))
+ }
+ }
if invalidParams.Len() > 0 {
return invalidParams
@@ -12868,6 +12969,30 @@ func (s *RestoreTableToPointInTimeInput) Validate() error {
return nil
}
+// SetBillingModeOverride sets the BillingModeOverride field's value.
+func (s *RestoreTableToPointInTimeInput) SetBillingModeOverride(v string) *RestoreTableToPointInTimeInput {
+ s.BillingModeOverride = &v
+ return s
+}
+
+// SetGlobalSecondaryIndexOverride sets the GlobalSecondaryIndexOverride field's value.
+func (s *RestoreTableToPointInTimeInput) SetGlobalSecondaryIndexOverride(v []*GlobalSecondaryIndex) *RestoreTableToPointInTimeInput {
+ s.GlobalSecondaryIndexOverride = v
+ return s
+}
+
+// SetLocalSecondaryIndexOverride sets the LocalSecondaryIndexOverride field's value.
+func (s *RestoreTableToPointInTimeInput) SetLocalSecondaryIndexOverride(v []*LocalSecondaryIndex) *RestoreTableToPointInTimeInput {
+ s.LocalSecondaryIndexOverride = v
+ return s
+}
+
+// SetProvisionedThroughputOverride sets the ProvisionedThroughputOverride field's value.
+func (s *RestoreTableToPointInTimeInput) SetProvisionedThroughputOverride(v *ProvisionedThroughput) *RestoreTableToPointInTimeInput {
+ s.ProvisionedThroughputOverride = v
+ return s
+}
+
// SetRestoreDateTime sets the RestoreDateTime field's value.
func (s *RestoreTableToPointInTimeInput) SetRestoreDateTime(v time.Time) *RestoreTableToPointInTimeInput {
s.RestoreDateTime = &v
@@ -12919,13 +13044,14 @@ func (s *RestoreTableToPointInTimeOutput) SetTableDescription(v *TableDescriptio
type SSEDescription struct {
_ struct{} `type:"structure"`
- // The KMS customer master key (CMK) ARN used for the KMS encryption.
+ // The KMS customer master key (CMK) ARN used for the AWS KMS encryption.
KMSMasterKeyArn *string `type:"string"`
// Server-side encryption type. The only supported value is:
//
- // * KMS - Server-side encryption which uses AWS Key Management Service.
- // Key is stored in your account and is managed by AWS KMS (KMS charges apply).
+ // * KMS - Server-side encryption that uses AWS Key Management Service. The
+ // key is stored in your account and is managed by AWS KMS (AWS KMS charges
+ // apply).
SSEType *string `type:"string" enum:"SSEType"`
// Represents the current state of server-side encryption. The only supported
@@ -12975,16 +13101,17 @@ type SSESpecification struct {
// (false) or not specified, server-side encryption is set to AWS owned CMK.
Enabled *bool `type:"boolean"`
- // The KMS Customer Master Key (CMK) which should be used for the KMS encryption.
+ // The KMS customer master key (CMK) that should be used for the AWS KMS encryption.
// To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name,
// or alias ARN. Note that you should only provide this parameter if the key
- // is different from the default DynamoDB Customer Master Key alias/aws/dynamodb.
+ // is different from the default DynamoDB customer master key alias/aws/dynamodb.
KMSMasterKeyId *string `type:"string"`
// Server-side encryption type. The only supported value is:
//
- // * KMS - Server-side encryption which uses AWS Key Management Service.
- // Key is stored in your account and is managed by AWS KMS (KMS charges apply).
+ // * KMS - Server-side encryption that uses AWS Key Management Service. The
+ // key is stored in your account and is managed by AWS KMS (AWS KMS charges
+ // apply).
SSEType *string `type:"string" enum:"SSEType"`
}
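A minimal sketch of supplying a non-default CMK the way the reworded docs describe — assuming the v1 client; the table name and key alias are hypothetical:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// enableKMSEncryption switches a table to a customer managed CMK; the
// table name and key alias are hypothetical.
func enableKMSEncryption(svc *dynamodb.DynamoDB) error {
	_, err := svc.UpdateTable(&dynamodb.UpdateTableInput{
		TableName: aws.String("events"),
		SSESpecification: &dynamodb.SSESpecification{
			Enabled:        aws.Bool(true),
			SSEType:        aws.String(dynamodb.SSETypeKms),
			KMSMasterKeyId: aws.String("alias/my-app-key"), // non-default CMK alias
		},
	})
	return err
}
```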
@@ -13502,7 +13629,7 @@ type SourceTableDetails struct {
// We recommend using PAY_PER_REQUEST for unpredictable workloads.
BillingMode *string `type:"string" enum:"BillingMode"`
- // Number of items in the table. Please note this is an approximate value.
+ // Number of items in the table. Note that this is an approximate value.
ItemCount *int64 `type:"long"`
// Schema of the table.
@@ -13533,7 +13660,7 @@ type SourceTableDetails struct {
// TableName is a required field
TableName *string `min:"3" type:"string" required:"true"`
- // Size of the table in bytes. Please note this is an approximate value.
+ // Size of the table in bytes. Note that this is an approximate value.
TableSizeBytes *int64 `type:"long"`
}
@@ -13607,7 +13734,7 @@ type SourceTableFeatureDetails struct {
_ struct{} `type:"structure"`
// Represents the GSI properties for the table when the backup was created.
- // It includes the IndexName, KeySchema, Projection and ProvisionedThroughput
+ // It includes the IndexName, KeySchema, Projection, and ProvisionedThroughput
// for the GSIs on the table at the time of backup.
GlobalSecondaryIndexes []*GlobalSecondaryIndexInfo `type:"list"`
@@ -13741,9 +13868,14 @@ type TableDescription struct {
//
// * Backfilling - If true, then the index is currently in the backfilling
// phase. Backfilling occurs only when a new global secondary index is added
- // to the table; it is the process by which DynamoDB populates the new index
+ // to the table. It is the process by which DynamoDB populates the new index
// with data from the table. (This attribute does not appear for indexes
- // that were created during a CreateTable operation.)
+ // that were created during a CreateTable operation.) You can delete an index
+ // that is being created during the Backfilling phase when IndexStatus is
+ // set to CREATING and Backfilling is true. You can't delete the index that
+ // is being created when IndexStatus is set to CREATING and Backfilling is
+ // false. (This attribute does not appear for indexes that were created during
+ // a CreateTable operation.)
//
// * IndexName - The name of the global secondary index.
//
@@ -13769,7 +13901,7 @@ type TableDescription struct {
// specification is composed of: ProjectionType - One of the following: KEYS_ONLY
// - Only the index and primary keys are projected into the index. INCLUDE
// - Only the specified table attributes are projected into the index. The
- // list of projected attributes are in NonKeyAttributes. ALL - All of the
+ // list of projected attributes is in NonKeyAttributes. ALL - All of the
// table attributes are projected into the index. NonKeyAttributes - A list
// of one or more non-key attribute names that are projected into the secondary
// index. The total count of attributes provided in NonKeyAttributes, summed
@@ -13817,11 +13949,11 @@ type TableDescription struct {
// However, the combination of the following three elements is guaranteed to
// be unique:
//
- // * the AWS customer ID.
+ // * AWS customer ID
//
- // * the table name.
+ // * Table name
//
- // * the StreamLabel.
+ // * StreamLabel
LatestStreamLabel *string `type:"string"`
// Represents one or more local secondary indexes on the table. Each index is
@@ -13842,7 +13974,7 @@ type TableDescription struct {
// specification is composed of: ProjectionType - One of the following: KEYS_ONLY
// - Only the index and primary keys are projected into the index. INCLUDE
// - Only the specified table attributes are projected into the index. The
- // list of projected attributes are in NonKeyAttributes. ALL - All of the
+ // list of projected attributes is in NonKeyAttributes. ALL - All of the
// table attributes are projected into the index. NonKeyAttributes - A list
// of one or more non-key attribute names that are projected into the secondary
// index. The total count of attributes provided in NonKeyAttributes, summed
@@ -15071,6 +15203,12 @@ type UpdateGlobalTableSettingsInput struct {
// The billing mode of the global table. If GlobalTableBillingMode is not specified,
// the global table defaults to PROVISIONED capacity billing mode.
+ //
+ // * PROVISIONED - We recommend using PROVISIONED for predictable workloads.
+ // PROVISIONED sets the billing mode to Provisioned Mode (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.ProvisionedThroughput.Manual).
+ //
+ // * PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable
+ // workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand).
GlobalTableBillingMode *string `type:"string" enum:"BillingMode"`
// Represents the settings of a global secondary index for a global table that
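A minimal sketch of switching a global table to the on-demand mode described in the new bullets — assuming the v1 client and a hypothetical global table name:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// switchGlobalTableToOnDemand updates the billing mode of a hypothetical
// global table to PAY_PER_REQUEST.
func switchGlobalTableToOnDemand(svc *dynamodb.DynamoDB) error {
	_, err := svc.UpdateGlobalTableSettings(&dynamodb.UpdateGlobalTableSettingsInput{
		GlobalTableName:        aws.String("events-global"),
		GlobalTableBillingMode: aws.String(dynamodb.BillingModePayPerRequest),
	})
	return err
}
```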
@@ -15622,11 +15760,11 @@ type UpdateTableInput struct {
// values are estimated based on the consumed read and write capacity of your
// table and global secondary indexes over the past 30 minutes.
//
- // * PROVISIONED - Sets the billing mode to PROVISIONED. We recommend using
- // PROVISIONED for predictable workloads.
+ // * PROVISIONED - We recommend using PROVISIONED for predictable workloads.
+ // PROVISIONED sets the billing mode to Provisioned Mode (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.ProvisionedThroughput.Manual).
//
- // * PAY_PER_REQUEST - Sets the billing mode to PAY_PER_REQUEST. We recommend
- // using PAY_PER_REQUEST for unpredictable workloads.
+ // * PAY_PER_REQUEST - We recommend using PAY_PER_REQUEST for unpredictable
+ // workloads. PAY_PER_REQUEST sets the billing mode to On-Demand Mode (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.ReadWriteCapacityMode.html#HowItWorks.OnDemand).
BillingMode *string `type:"string" enum:"BillingMode"`
// An array of one or more global secondary indexes for the table. For each
@@ -15639,6 +15777,9 @@ type UpdateTableInput struct {
//
// * Delete - remove a global secondary index from the table.
//
+ // You can create or delete only one global secondary index per UpdateTable
+ // operation.
+ //
// For more information, see Managing Global Secondary Indexes (https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.OnlineOps.html)
// in the Amazon DynamoDB Developer Guide.
GlobalSecondaryIndexUpdates []*GlobalSecondaryIndexUpdate `type:"list"`
@@ -15866,8 +16007,8 @@ func (s *UpdateTimeToLiveOutput) SetTimeToLiveSpecification(v *TimeToLiveSpecifi
// Represents an operation to perform - either DeleteItem or PutItem. You can
// only request one of these operations, not both, in a single WriteRequest.
-// If you do need to perform both of these operations, you will need to provide
-// two separate WriteRequest objects.
+// If you do need to perform both of these operations, you need to provide two
+// separate WriteRequest objects.
type WriteRequest struct {
_ struct{} `type:"structure"`
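The one-operation-per-WriteRequest constraint shows up in BatchWriteItem; a minimal sketch where a put and a delete ride in two separate WriteRequest values, assuming the v1 client and illustrative names:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// putAndDelete shows the constraint in practice: one WriteRequest carries a
// PutRequest, a second carries a DeleteRequest. Names are illustrative.
func putAndDelete(svc *dynamodb.DynamoDB) error {
	_, err := svc.BatchWriteItem(&dynamodb.BatchWriteItemInput{
		RequestItems: map[string][]*dynamodb.WriteRequest{
			"events": {
				{PutRequest: &dynamodb.PutRequest{
					Item: map[string]*dynamodb.AttributeValue{"pk": {S: aws.String("a")}},
				}},
				{DeleteRequest: &dynamodb.DeleteRequest{
					Key: map[string]*dynamodb.AttributeValue{"pk": {S: aws.String("b")}},
				}},
			},
		},
	})
	return err
}
```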
diff --git a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go
index 71f3e7d3d532e..e1b7931960d71 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/dynamodb/errors.go
@@ -184,8 +184,6 @@ const (
// index (LSI) becomes too large, or a similar validation error occurs because
// of changes made by the transaction.
//
- // * The aggregate size of the items in the transaction exceeds 4 MBs.
- //
// * There is a user error, such as an invalid data format.
//
// DynamoDB cancels a TransactGetItems request under the following circumstances:
@@ -200,8 +198,6 @@ const (
// * There is insufficient provisioned capacity for the transaction to be
// completed.
//
- // * The aggregate size of the items in the transaction exceeds 4 MBs.
- //
// * There is a user error, such as an invalid data format.
//
// If using Java, DynamoDB lists the cancellation reasons on the CancellationReasons
diff --git a/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go b/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go
index 3df136f884d1f..29e19cb10abb4 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/ec2/api.go
@@ -5619,7 +5619,7 @@ func (c *EC2) CreateSnapshotsRequest(input *CreateSnapshotsInput) (req *request.
// Creates crash-consistent snapshots of multiple EBS volumes and stores the
// data in S3. Volumes are chosen by specifying an instance. Any attached volumes
// will produce one snapshot each that is crash-consistent across the instance.
-// Boot volumes can be excluded by changing the paramaters.
+// Boot volumes can be excluded by changing the parameters.
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
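A minimal sketch of the boot-volume exclusion the corrected sentence refers to — assuming the v1 client; the instance ID is a placeholder:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// snapshotDataVolumes snapshots all attached volumes of an instance except
// the boot volume. The instance ID is a placeholder.
func snapshotDataVolumes(svc *ec2.EC2) error {
	_, err := svc.CreateSnapshots(&ec2.CreateSnapshotsInput{
		Description: aws.String("crash-consistent data volumes"),
		InstanceSpecification: &ec2.InstanceSpecification{
			InstanceId:        aws.String("i-0123456789abcdef0"),
			ExcludeBootVolume: aws.Bool(true), // skip the root/boot volume
		},
	})
	return err
}
```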
@@ -5958,8 +5958,10 @@ func (c *EC2) CreateTrafficMirrorFilterRequest(input *CreateTrafficMirrorFilterI
// A Traffic Mirror filter is a set of rules that defines the traffic to mirror.
//
// By default, no traffic is mirrored. To mirror traffic, use CreateTrafficMirrorFilterRule
+// (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTrafficMirrorFilterRule.htm)
// to add Traffic Mirror rules to the filter. The rules you add define what
// traffic gets mirrored. You can also use ModifyTrafficMirrorFilterNetworkServices
+// (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyTrafficMirrorFilterNetworkServices.html)
// to mirror supported network services.
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
@@ -6034,7 +6036,7 @@ func (c *EC2) CreateTrafficMirrorFilterRuleRequest(input *CreateTrafficMirrorFil
// CreateTrafficMirrorFilterRule API operation for Amazon Elastic Compute Cloud.
//
-// Creates a Traffic Mirror rule.
+// Creates a Traffic Mirror filter rule.
//
// A Traffic Mirror rule defines the Traffic Mirror source traffic to mirror.
//
@@ -6122,8 +6124,8 @@ func (c *EC2) CreateTrafficMirrorSessionRequest(input *CreateTrafficMirrorSessio
// can be in the same VPC, or in a different VPC connected via VPC peering or
// a transit gateway.
//
-// By default, no traffic is mirrored. Use CreateTrafficMirrorFilter to create
-// filter rules that specify the traffic to mirror.
+// By default, no traffic is mirrored. Use CreateTrafficMirrorFilter (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTrafficMirrorFilter.htm)
+// to create filter rules that specify the traffic to mirror.
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
@@ -6206,7 +6208,8 @@ func (c *EC2) CreateTrafficMirrorTargetRequest(input *CreateTrafficMirrorTargetI
//
// A Traffic Mirror target can be a network interface, or a Network Load Balancer.
//
-// To use the target in a Traffic Mirror session, use CreateTrafficMirrorSession.
+// To use the target in a Traffic Mirror session, use CreateTrafficMirrorSession
+// (https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_CreateTrafficMirrorSession.htm).
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
// with awserr.Error's Code and Message methods to get detailed information about
@@ -11343,10 +11346,12 @@ func (c *EC2) DescribeByoipCidrsPagesWithContext(ctx aws.Context, input *Describ
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeByoipCidrsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeByoipCidrsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
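This same rewrite repeats across every `*PagesWithContext` method below, and it fixes a subtle ordering bug: in the old `for p.Next() && cont` loop, `p.Next()` runs — and fetches another page — before the stale `cont` flag is checked, so one extra page request was issued after the callback asked to stop. The new loop breaks immediately. Caller code is unchanged; a minimal sketch, assuming the v1 client:

```go
package main

import (
	"github.com/aws/aws-sdk-go/service/ec2"
)

// countSomeInstances stops paginating once enough instances are seen;
// returning false from the callback now breaks out before another page
// request is issued.
func countSomeInstances(svc *ec2.EC2) (int, error) {
	count := 0
	err := svc.DescribeInstancesPages(&ec2.DescribeInstancesInput{},
		func(page *ec2.DescribeInstancesOutput, lastPage bool) bool {
			for _, r := range page.Reservations {
				count += len(r.Instances)
			}
			return count < 100 // false => stop pagination early
		})
	return count, err
}
```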
@@ -11474,10 +11479,12 @@ func (c *EC2) DescribeCapacityReservationsPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeCapacityReservationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeCapacityReservationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -11607,10 +11614,12 @@ func (c *EC2) DescribeClassicLinkInstancesPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeClassicLinkInstancesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeClassicLinkInstancesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -11737,10 +11746,12 @@ func (c *EC2) DescribeClientVpnAuthorizationRulesPagesWithContext(ctx aws.Contex
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeClientVpnAuthorizationRulesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeClientVpnAuthorizationRulesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -11868,10 +11879,12 @@ func (c *EC2) DescribeClientVpnConnectionsPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeClientVpnConnectionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeClientVpnConnectionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -11998,10 +12011,12 @@ func (c *EC2) DescribeClientVpnEndpointsPagesWithContext(ctx aws.Context, input
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeClientVpnEndpointsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeClientVpnEndpointsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -12128,10 +12143,12 @@ func (c *EC2) DescribeClientVpnRoutesPagesWithContext(ctx aws.Context, input *De
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeClientVpnRoutesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeClientVpnRoutesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -12258,10 +12275,12 @@ func (c *EC2) DescribeClientVpnTargetNetworksPagesWithContext(ctx aws.Context, i
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeClientVpnTargetNetworksOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeClientVpnTargetNetworksOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -12546,10 +12565,12 @@ func (c *EC2) DescribeDhcpOptionsPagesWithContext(ctx aws.Context, input *Descri
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeDhcpOptionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeDhcpOptionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -12676,10 +12697,12 @@ func (c *EC2) DescribeEgressOnlyInternetGatewaysPagesWithContext(ctx aws.Context
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeEgressOnlyInternetGatewaysOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeEgressOnlyInternetGatewaysOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -13179,10 +13202,12 @@ func (c *EC2) DescribeFleetsPagesWithContext(ctx aws.Context, input *DescribeFle
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeFleetsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeFleetsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -13311,10 +13336,12 @@ func (c *EC2) DescribeFlowLogsPagesWithContext(ctx aws.Context, input *DescribeF
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeFlowLogsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeFlowLogsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -13517,10 +13544,12 @@ func (c *EC2) DescribeFpgaImagesPagesWithContext(ctx aws.Context, input *Describ
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeFpgaImagesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeFpgaImagesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -13655,10 +13684,12 @@ func (c *EC2) DescribeHostReservationOfferingsPagesWithContext(ctx aws.Context,
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeHostReservationOfferingsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeHostReservationOfferingsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -13785,10 +13816,12 @@ func (c *EC2) DescribeHostReservationsPagesWithContext(ctx aws.Context, input *D
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeHostReservationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeHostReservationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -13919,10 +13952,12 @@ func (c *EC2) DescribeHostsPagesWithContext(ctx aws.Context, input *DescribeHost
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeHostsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeHostsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -14049,10 +14084,12 @@ func (c *EC2) DescribeIamInstanceProfileAssociationsPagesWithContext(ctx aws.Con
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeIamInstanceProfileAssociationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeIamInstanceProfileAssociationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -14521,10 +14558,12 @@ func (c *EC2) DescribeImportImageTasksPagesWithContext(ctx aws.Context, input *D
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeImportImageTasksOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeImportImageTasksOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -14651,10 +14690,12 @@ func (c *EC2) DescribeImportSnapshotTasksPagesWithContext(ctx aws.Context, input
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeImportSnapshotTasksOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeImportSnapshotTasksOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -14882,10 +14923,12 @@ func (c *EC2) DescribeInstanceCreditSpecificationsPagesWithContext(ctx aws.Conte
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeInstanceCreditSpecificationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeInstanceCreditSpecificationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15033,10 +15076,12 @@ func (c *EC2) DescribeInstanceStatusPagesWithContext(ctx aws.Context, input *Des
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeInstanceStatusOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeInstanceStatusOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15178,10 +15223,12 @@ func (c *EC2) DescribeInstancesPagesWithContext(ctx aws.Context, input *Describe
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeInstancesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeInstancesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15308,10 +15355,12 @@ func (c *EC2) DescribeInternetGatewaysPagesWithContext(ctx aws.Context, input *D
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeInternetGatewaysOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeInternetGatewaysOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15516,10 +15565,12 @@ func (c *EC2) DescribeLaunchTemplateVersionsPagesWithContext(ctx aws.Context, in
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeLaunchTemplateVersionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeLaunchTemplateVersionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15646,10 +15697,12 @@ func (c *EC2) DescribeLaunchTemplatesPagesWithContext(ctx aws.Context, input *De
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeLaunchTemplatesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeLaunchTemplatesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15778,10 +15831,12 @@ func (c *EC2) DescribeMovingAddressesPagesWithContext(ctx aws.Context, input *De
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeMovingAddressesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeMovingAddressesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -15908,10 +15963,12 @@ func (c *EC2) DescribeNatGatewaysPagesWithContext(ctx aws.Context, input *Descri
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeNatGatewaysOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeNatGatewaysOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -16041,10 +16098,12 @@ func (c *EC2) DescribeNetworkAclsPagesWithContext(ctx aws.Context, input *Descri
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeNetworkAclsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeNetworkAclsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -16246,10 +16305,12 @@ func (c *EC2) DescribeNetworkInterfacePermissionsPagesWithContext(ctx aws.Contex
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeNetworkInterfacePermissionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeNetworkInterfacePermissionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -16376,10 +16437,12 @@ func (c *EC2) DescribeNetworkInterfacesPagesWithContext(ctx aws.Context, input *
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeNetworkInterfacesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeNetworkInterfacesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -16587,10 +16650,12 @@ func (c *EC2) DescribePrefixListsPagesWithContext(ctx aws.Context, input *Descri
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribePrefixListsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribePrefixListsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -16731,10 +16796,12 @@ func (c *EC2) DescribePrincipalIdFormatPagesWithContext(ctx aws.Context, input *
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribePrincipalIdFormatOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribePrincipalIdFormatOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -16861,10 +16928,12 @@ func (c *EC2) DescribePublicIpv4PoolsPagesWithContext(ctx aws.Context, input *De
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribePublicIpv4PoolsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribePublicIpv4PoolsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -17250,10 +17319,12 @@ func (c *EC2) DescribeReservedInstancesModificationsPagesWithContext(ctx aws.Con
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeReservedInstancesModificationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeReservedInstancesModificationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -17391,10 +17462,12 @@ func (c *EC2) DescribeReservedInstancesOfferingsPagesWithContext(ctx aws.Context
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeReservedInstancesOfferingsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeReservedInstancesOfferingsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -17529,10 +17602,12 @@ func (c *EC2) DescribeRouteTablesPagesWithContext(ctx aws.Context, input *Descri
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeRouteTablesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeRouteTablesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -17667,10 +17742,12 @@ func (c *EC2) DescribeScheduledInstanceAvailabilityPagesWithContext(ctx aws.Cont
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeScheduledInstanceAvailabilityOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeScheduledInstanceAvailabilityOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -17797,10 +17874,12 @@ func (c *EC2) DescribeScheduledInstancesPagesWithContext(ctx aws.Context, input
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeScheduledInstancesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeScheduledInstancesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -18009,10 +18088,12 @@ func (c *EC2) DescribeSecurityGroupsPagesWithContext(ctx aws.Context, input *Des
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeSecurityGroupsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeSecurityGroupsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -18264,10 +18345,12 @@ func (c *EC2) DescribeSnapshotsPagesWithContext(ctx aws.Context, input *Describe
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeSnapshotsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeSnapshotsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -18626,10 +18709,12 @@ func (c *EC2) DescribeSpotFleetRequestsPagesWithContext(ctx aws.Context, input *
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeSpotFleetRequestsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeSpotFleetRequestsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -18772,10 +18857,12 @@ func (c *EC2) DescribeSpotInstanceRequestsPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeSpotInstanceRequestsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeSpotInstanceRequestsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -18909,10 +18996,12 @@ func (c *EC2) DescribeSpotPriceHistoryPagesWithContext(ctx aws.Context, input *D
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeSpotPriceHistoryOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeSpotPriceHistoryOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19042,10 +19131,12 @@ func (c *EC2) DescribeStaleSecurityGroupsPagesWithContext(ctx aws.Context, input
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeStaleSecurityGroupsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeStaleSecurityGroupsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19175,10 +19266,12 @@ func (c *EC2) DescribeSubnetsPagesWithContext(ctx aws.Context, input *DescribeSu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeSubnetsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeSubnetsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19308,10 +19401,12 @@ func (c *EC2) DescribeTagsPagesWithContext(ctx aws.Context, input *DescribeTagsI
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTagsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTagsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19438,10 +19533,12 @@ func (c *EC2) DescribeTrafficMirrorFiltersPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTrafficMirrorFiltersOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTrafficMirrorFiltersOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19569,10 +19666,12 @@ func (c *EC2) DescribeTrafficMirrorSessionsPagesWithContext(ctx aws.Context, inp
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTrafficMirrorSessionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTrafficMirrorSessionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19699,10 +19798,12 @@ func (c *EC2) DescribeTrafficMirrorTargetsPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTrafficMirrorTargetsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTrafficMirrorTargetsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19832,10 +19933,12 @@ func (c *EC2) DescribeTransitGatewayAttachmentsPagesWithContext(ctx aws.Context,
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTransitGatewayAttachmentsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTransitGatewayAttachmentsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -19963,10 +20066,12 @@ func (c *EC2) DescribeTransitGatewayRouteTablesPagesWithContext(ctx aws.Context,
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTransitGatewayRouteTablesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTransitGatewayRouteTablesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -20094,10 +20199,12 @@ func (c *EC2) DescribeTransitGatewayVpcAttachmentsPagesWithContext(ctx aws.Conte
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTransitGatewayVpcAttachmentsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTransitGatewayVpcAttachmentsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -20225,10 +20332,12 @@ func (c *EC2) DescribeTransitGatewaysPagesWithContext(ctx aws.Context, input *De
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeTransitGatewaysOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeTransitGatewaysOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -20468,10 +20577,12 @@ func (c *EC2) DescribeVolumeStatusPagesWithContext(ctx aws.Context, input *Descr
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVolumeStatusOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVolumeStatusOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -20608,10 +20719,12 @@ func (c *EC2) DescribeVolumesPagesWithContext(ctx aws.Context, input *DescribeVo
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVolumesOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVolumesOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -20751,10 +20864,12 @@ func (c *EC2) DescribeVolumesModificationsPagesWithContext(ctx aws.Context, inpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVolumesModificationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVolumesModificationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21036,10 +21151,12 @@ func (c *EC2) DescribeVpcClassicLinkDnsSupportPagesWithContext(ctx aws.Context,
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcClassicLinkDnsSupportOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcClassicLinkDnsSupportOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21167,10 +21284,12 @@ func (c *EC2) DescribeVpcEndpointConnectionNotificationsPagesWithContext(ctx aws
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcEndpointConnectionNotificationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcEndpointConnectionNotificationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21298,10 +21417,12 @@ func (c *EC2) DescribeVpcEndpointConnectionsPagesWithContext(ctx aws.Context, in
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcEndpointConnectionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcEndpointConnectionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21428,10 +21549,12 @@ func (c *EC2) DescribeVpcEndpointServiceConfigurationsPagesWithContext(ctx aws.C
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcEndpointServiceConfigurationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcEndpointServiceConfigurationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21559,10 +21682,12 @@ func (c *EC2) DescribeVpcEndpointServicePermissionsPagesWithContext(ctx aws.Cont
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcEndpointServicePermissionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcEndpointServicePermissionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21763,10 +21888,12 @@ func (c *EC2) DescribeVpcEndpointsPagesWithContext(ctx aws.Context, input *Descr
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcEndpointsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcEndpointsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -21893,10 +22020,12 @@ func (c *EC2) DescribeVpcPeeringConnectionsPagesWithContext(ctx aws.Context, inp
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcPeeringConnectionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcPeeringConnectionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -22023,10 +22152,12 @@ func (c *EC2) DescribeVpcsPagesWithContext(ctx aws.Context, input *DescribeVpcsI
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*DescribeVpcsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*DescribeVpcsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -25149,10 +25280,12 @@ func (c *EC2) GetTransitGatewayAttachmentPropagationsPagesWithContext(ctx aws.Co
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*GetTransitGatewayAttachmentPropagationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*GetTransitGatewayAttachmentPropagationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -25280,10 +25413,12 @@ func (c *EC2) GetTransitGatewayRouteTableAssociationsPagesWithContext(ctx aws.Co
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*GetTransitGatewayRouteTableAssociationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*GetTransitGatewayRouteTableAssociationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -25411,10 +25546,12 @@ func (c *EC2) GetTransitGatewayRouteTablePropagationsPagesWithContext(ctx aws.Co
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*GetTransitGatewayRouteTablePropagationsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*GetTransitGatewayRouteTablePropagationsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -26180,10 +26317,10 @@ func (c *EC2) ModifyFleetRequest(input *ModifyFleetInput) (req *request.Request,
//
// To scale up your EC2 Fleet, increase its target capacity. The EC2 Fleet launches
// the additional Spot Instances according to the allocation strategy for the
-// EC2 Fleet request. If the allocation strategy is lowestPrice, the EC2 Fleet
+// EC2 Fleet request. If the allocation strategy is lowest-price, the EC2 Fleet
// launches instances using the Spot Instance pool with the lowest price. If
// the allocation strategy is diversified, the EC2 Fleet distributes the instances
-// across the Spot Instance pools. If the allocation strategy is capacityOptimized,
+// across the Spot Instance pools. If the allocation strategy is capacity-optimized,
// EC2 Fleet launches instances from Spot Instance pools with optimal capacity
// for the number of instances that are launching.
//
@@ -26191,11 +26328,11 @@ func (c *EC2) ModifyFleetRequest(input *ModifyFleetInput) (req *request.Request,
// Fleet cancels any open requests that exceed the new target capacity. You
// can request that the EC2 Fleet terminate Spot Instances until the size of
// the fleet no longer exceeds the new target capacity. If the allocation strategy
-// is lowestPrice, the EC2 Fleet terminates the instances with the highest price
-// per unit. If the allocation strategy is capacityOptimized, the EC2 Fleet
-// terminates the instances in the Spot Instance pools that have the least available
-// Spot Instance capacity. If the allocation strategy is diversified, the EC2
-// Fleet terminates instances across the Spot Instance pools. Alternatively,
+// is lowest-price, the EC2 Fleet terminates the instances with the highest
+// price per unit. If the allocation strategy is capacity-optimized, the EC2
+// Fleet terminates the instances in the Spot Instance pools that have the least
+// available Spot Instance capacity. If the allocation strategy is diversified,
+// the EC2 Fleet terminates instances across the Spot Instance pools. Alternatively,
// you can request that the EC2 Fleet keep the fleet at its current size, but
// not replace any Spot Instances that are interrupted or that you terminate
// manually.
@@ -27624,7 +27761,7 @@ func (c *EC2) ModifyTrafficMirrorFilterNetworkServicesRequest(input *ModifyTraff
// to mirror network services, use RemoveNetworkServices to remove the network
// services from the Traffic Mirror filter.
//
-// FFor information about filter rule properties, see Network Services (https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html#traffic-mirroring-network-services)
+// For information about filter rule properties, see Network Services (https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-considerations.html)
// in the Traffic Mirroring User Guide .
//
// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
@@ -41196,7 +41333,7 @@ type CreateKeyPairOutput struct {
KeyFingerprint *string `locationName:"keyFingerprint" type:"string"`
// An unencrypted PEM encoded RSA private key.
- KeyMaterial *string `locationName:"keyMaterial" type:"string"`
+ KeyMaterial *string `locationName:"keyMaterial" type:"string" sensitive:"true"`
// The name of the key pair.
KeyName *string `locationName:"keyName" type:"string"`
@@ -43392,9 +43529,8 @@ type CreateTrafficMirrorSessionInput struct {
// The number of bytes in each packet to mirror. These are bytes after the VXLAN
// header. Do not specify this parameter when you want to mirror the entire
// packet. To mirror a subset of the packet, set this to the length (in bytes)
- // that you want to mirror. For example, if you set this value to 1network0,
- // then the first 100 bytes that meet the filter criteria are copied to the
- // target.
+ // that you want to mirror. For example, if you set this value to 100, then
+ // the first 100 bytes that meet the filter criteria are copied to the target.
//
// If you do not want to mirror the entire packet, use the PacketLength parameter
// to specify the number of bytes in each packet to mirror.
@@ -66863,8 +66999,7 @@ func (s *GroupIdentifier) SetGroupName(v string) *GroupIdentifier {
// Indicates whether your instance is configured for hibernation. This parameter
// is valid only if the instance meets the hibernation prerequisites (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#hibernating-prerequisites).
-// Hibernation is currently supported only for Amazon Linux. For more information,
-// see Hibernate Your Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)
+// For more information, see Hibernate Your Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)
// in the Amazon Elastic Compute Cloud User Guide.
type HibernationOptions struct {
_ struct{} `type:"structure"`
@@ -66892,8 +67027,7 @@ func (s *HibernationOptions) SetConfigured(v bool) *HibernationOptions {
// Indicates whether your instance is configured for hibernation. This parameter
// is valid only if the instance meets the hibernation prerequisites (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#hibernating-prerequisites).
-// Hibernation is currently supported only for Amazon Linux. For more information,
-// see Hibernate Your Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)
+// For more information, see Hibernate Your Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)
// in the Amazon Elastic Compute Cloud User Guide.
type HibernationOptionsRequest struct {
_ struct{} `type:"structure"`
@@ -68673,7 +68807,7 @@ type ImportInstanceLaunchSpecification struct {
SubnetId *string `locationName:"subnetId" type:"string"`
// The Base64-encoded user data to make available to the instance.
- UserData *UserData `locationName:"userData" type:"structure"`
+ UserData *UserData `locationName:"userData" type:"structure" sensitive:"true"`
}
// String returns the string representation
@@ -72302,7 +72436,6 @@ func (s *LaunchTemplateHibernationOptions) SetConfigured(v bool) *LaunchTemplate
// Indicates whether the instance is configured for hibernation. This parameter
// is valid only if the instance meets the hibernation prerequisites (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#hibernating-prerequisites).
-// Hibernation is currently supported only for Amazon Linux.
type LaunchTemplateHibernationOptionsRequest struct {
_ struct{} `type:"structure"`
@@ -80630,7 +80763,8 @@ type PurchaseReservedInstancesOfferingInput struct {
// prices.
LimitPrice *ReservedInstanceLimitPrice `locationName:"limitPrice" type:"structure"`
- // The time at which to purchase the Reserved Instance.
+ // The time at which to purchase the Reserved Instance, in UTC format (for example,
+ // YYYY-MM-DDTHH:MM:SSZ).
PurchaseTime *time.Time `type:"timestamp"`
// The ID of the Reserved Instance offering to purchase.
@@ -82377,8 +82511,7 @@ type RequestLaunchTemplateData struct {
// Indicates whether an instance is enabled for hibernation. This parameter
// is valid only if the instance meets the hibernation prerequisites (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html#hibernating-prerequisites).
- // Hibernation is currently supported only for Amazon Linux. For more information,
- // see Hibernate Your Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)
+ // For more information, see Hibernate Your Instance (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Hibernate.html)
// in the Amazon Elastic Compute Cloud User Guide.
HibernationOptions *LaunchTemplateHibernationOptionsRequest `type:"structure"`
@@ -89470,14 +89603,14 @@ type SpotOptions struct {
// Indicates how to allocate the target Spot Instance capacity across the Spot
// Instance pools specified by the EC2 Fleet.
//
- // If the allocation strategy is lowestPrice, EC2 Fleet launches instances from
- // the Spot Instance pools with the lowest price. This is the default allocation
+ // If the allocation strategy is lowest-price, EC2 Fleet launches instances
+ // from the Spot Instance pools with the lowest price. This is the default allocation
// strategy.
//
// If the allocation strategy is diversified, EC2 Fleet launches instances from
// all the Spot Instance pools that you specify.
//
- // If the allocation strategy is capacityOptimized, EC2 Fleet launches instances
+ // If the allocation strategy is capacity-optimized, EC2 Fleet launches instances
// from Spot Instance pools with optimal capacity for the number of instances
// that are launching.
AllocationStrategy *string `locationName:"allocationStrategy" type:"string" enum:"SpotAllocationStrategy"`
@@ -89486,7 +89619,7 @@ type SpotOptions struct {
InstanceInterruptionBehavior *string `locationName:"instanceInterruptionBehavior" type:"string" enum:"SpotInstanceInterruptionBehavior"`
// The number of Spot pools across which to allocate your target Spot capacity.
- // Valid only when AllocationStrategy is set to lowestPrice. EC2 Fleet selects
+ // Valid only when AllocationStrategy is set to lowest-price. EC2 Fleet selects
// the cheapest Spot pools and evenly allocates your target Spot capacity across
// the number of Spot pools that you specify.
InstancePoolsToUseCount *int64 `locationName:"instancePoolsToUseCount" type:"integer"`
@@ -89566,14 +89699,14 @@ type SpotOptionsRequest struct {
// Indicates how to allocate the target Spot Instance capacity across the Spot
// Instance pools specified by the EC2 Fleet.
//
- // If the allocation strategy is lowestPrice, EC2 Fleet launches instances from
- // the Spot Instance pools with the lowest price. This is the default allocation
+ // If the allocation strategy is lowest-price, EC2 Fleet launches instances
+ // from the Spot Instance pools with the lowest price. This is the default allocation
// strategy.
//
// If the allocation strategy is diversified, EC2 Fleet launches instances from
// all the Spot Instance pools that you specify.
//
- // If the allocation strategy is capacityOptimized, EC2 Fleet launches instances
+ // If the allocation strategy is capacity-optimized, EC2 Fleet launches instances
// from Spot Instance pools with optimal capacity for the number of instances
// that are launching.
AllocationStrategy *string `type:"string" enum:"SpotAllocationStrategy"`
@@ -93413,7 +93546,7 @@ func (s *UserBucketDetails) SetS3Key(v string) *UserBucketDetails {
// Describes the user data for an instance.
type UserData struct {
- _ struct{} `type:"structure"`
+ _ struct{} `type:"structure" sensitive:"true"`
// The user data. If you are using an AWS SDK or command line tool, Base64-encoding
// is performed for you, and you can load the text from a file. Otherwise, you
@@ -97183,6 +97316,12 @@ const (
// InstanceTypeU12tb1Metal is a InstanceType enum value
InstanceTypeU12tb1Metal = "u-12tb1.metal"
+ // InstanceTypeU18tb1Metal is a InstanceType enum value
+ InstanceTypeU18tb1Metal = "u-18tb1.metal"
+
+ // InstanceTypeU24tb1Metal is a InstanceType enum value
+ InstanceTypeU24tb1Metal = "u-24tb1.metal"
+
// InstanceTypeA1Medium is a InstanceType enum value
InstanceTypeA1Medium = "a1.medium"
diff --git a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go
index 91bf5225ddfe9..a979c59f1bb3d 100644
--- a/vendor/github.com/aws/aws-sdk-go/service/s3/api.go
+++ b/vendor/github.com/aws/aws-sdk-go/service/s3/api.go
@@ -4246,10 +4246,12 @@ func (c *S3) ListMultipartUploadsPagesWithContext(ctx aws.Context, input *ListMu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ListMultipartUploadsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ListMultipartUploadsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -4376,10 +4378,12 @@ func (c *S3) ListObjectVersionsPagesWithContext(ctx aws.Context, input *ListObje
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ListObjectVersionsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ListObjectVersionsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -4513,10 +4517,12 @@ func (c *S3) ListObjectsPagesWithContext(ctx aws.Context, input *ListObjectsInpu
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ListObjectsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ListObjectsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -4651,10 +4657,12 @@ func (c *S3) ListObjectsV2PagesWithContext(ctx aws.Context, input *ListObjectsV2
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ListObjectsV2Output), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ListObjectsV2Output), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -4781,10 +4789,12 @@ func (c *S3) ListPartsPagesWithContext(ctx aws.Context, input *ListPartsInput, f
},
}
- cont := true
- for p.Next() && cont {
- cont = fn(p.Page().(*ListPartsOutput), !p.HasNextPage())
+ for p.Next() {
+ if !fn(p.Page().(*ListPartsOutput), !p.HasNextPage()) {
+ break
+ }
}
+
return p.Err()
}
@@ -24704,6 +24714,9 @@ const (
// InventoryOptionalFieldObjectLockLegalHoldStatus is a InventoryOptionalField enum value
InventoryOptionalFieldObjectLockLegalHoldStatus = "ObjectLockLegalHoldStatus"
+
+ // InventoryOptionalFieldIntelligentTieringAccessTier is a InventoryOptionalField enum value
+ InventoryOptionalFieldIntelligentTieringAccessTier = "IntelligentTieringAccessTier"
)
const (
diff --git a/vendor/github.com/cespare/xxhash/v2/.travis.yml b/vendor/github.com/cespare/xxhash/v2/.travis.yml
new file mode 100644
index 0000000000000..c516ea88da735
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/.travis.yml
@@ -0,0 +1,8 @@
+language: go
+go:
+ - "1.x"
+ - master
+env:
+ - TAGS=""
+ - TAGS="-tags purego"
+script: go test $TAGS -v ./...
diff --git a/vendor/github.com/cespare/xxhash/v2/LICENSE.txt b/vendor/github.com/cespare/xxhash/v2/LICENSE.txt
new file mode 100644
index 0000000000000..24b53065f40b5
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/LICENSE.txt
@@ -0,0 +1,22 @@
+Copyright (c) 2016 Caleb Spare
+
+MIT License
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/vendor/github.com/cespare/xxhash/v2/README.md b/vendor/github.com/cespare/xxhash/v2/README.md
new file mode 100644
index 0000000000000..2fd8693c21b20
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/README.md
@@ -0,0 +1,67 @@
+# xxhash
+
+[](https://godoc.org/github.com/cespare/xxhash)
+[](https://travis-ci.org/cespare/xxhash)
+
+xxhash is a Go implementation of the 64-bit
+[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a
+high-quality hashing algorithm that is much faster than anything in the Go
+standard library.
+
+This package provides a straightforward API:
+
+```
+func Sum64(b []byte) uint64
+func Sum64String(s string) uint64
+type Digest struct{ ... }
+ func New() *Digest
+```
+
+The `Digest` type implements hash.Hash64. Its key methods are:
+
+```
+func (*Digest) Write([]byte) (int, error)
+func (*Digest) WriteString(string) (int, error)
+func (*Digest) Sum64() uint64
+```
+
+This package provides a fast pure-Go implementation and an even faster
+assembly implementation for amd64.
+
+## Compatibility
+
+This package is in a module and the latest code is in version 2 of the module.
+You need a version of Go with at least "minimal module compatibility" to use
+github.com/cespare/xxhash/v2:
+
+* 1.9.7+ for Go 1.9
+* 1.10.3+ for Go 1.10
+* Go 1.11 or later
+
+I recommend using the latest release of Go.
+
+## Benchmarks
+
+Here are some quick benchmarks comparing the pure-Go and assembly
+implementations of Sum64.
+
+| input size | purego | asm |
+| --- | --- | --- |
+| 5 B | 979.66 MB/s | 1291.17 MB/s |
+| 100 B | 7475.26 MB/s | 7973.40 MB/s |
+| 4 KB | 17573.46 MB/s | 17602.65 MB/s |
+| 10 MB | 17131.46 MB/s | 17142.16 MB/s |
+
+These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using
+the following commands under Go 1.11.2:
+
+```
+$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'
+$ go test -benchtime 10s -bench '/xxhash,direct,bytes'
+```
+
+## Projects using this package
+
+- [InfluxDB](https://github.com/influxdata/influxdb)
+- [Prometheus](https://github.com/prometheus/prometheus)
+- [FreeCache](https://github.com/coocood/freecache)
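
Since the README above documents the full public surface of the new vendored package, a short usage sketch may help; nothing here goes beyond the API listed above:

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	// One-shot hashing of a byte slice or a string.
	fmt.Println(xxhash.Sum64([]byte("hello")))
	fmt.Println(xxhash.Sum64String("hello"))

	// Streaming via Digest, which implements hash.Hash64.
	d := xxhash.New()
	d.WriteString("hel")
	d.WriteString("lo")
	fmt.Println(d.Sum64()) // same value as Sum64String("hello")
}
```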
diff --git a/vendor/github.com/cespare/xxhash/v2/go.mod b/vendor/github.com/cespare/xxhash/v2/go.mod
new file mode 100644
index 0000000000000..49f67608bf6bb
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/go.mod
@@ -0,0 +1,3 @@
+module github.com/cespare/xxhash/v2
+
+go 1.11
diff --git a/vendor/github.com/cespare/xxhash/v2/go.sum b/vendor/github.com/cespare/xxhash/v2/go.sum
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash.go b/vendor/github.com/cespare/xxhash/v2/xxhash.go
new file mode 100644
index 0000000000000..db0b35fbe39f0
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash.go
@@ -0,0 +1,236 @@
+// Package xxhash implements the 64-bit variant of xxHash (XXH64) as described
+// at http://cyan4973.github.io/xxHash/.
+package xxhash
+
+import (
+ "encoding/binary"
+ "errors"
+ "math/bits"
+)
+
+const (
+ prime1 uint64 = 11400714785074694791
+ prime2 uint64 = 14029467366897019727
+ prime3 uint64 = 1609587929392839161
+ prime4 uint64 = 9650029242287828579
+ prime5 uint64 = 2870177450012600261
+)
+
+// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where
+// possible in the Go code is worth a small (but measurable) performance boost
+// by avoiding some MOVQs. Vars are needed for the asm and also are useful for
+// convenience in the Go code in a few places where we need to intentionally
+// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the
+// result overflows a uint64).
+var (
+ prime1v = prime1
+ prime2v = prime2
+ prime3v = prime3
+ prime4v = prime4
+ prime5v = prime5
+)
+
+// Digest implements hash.Hash64.
+type Digest struct {
+ v1 uint64
+ v2 uint64
+ v3 uint64
+ v4 uint64
+ total uint64
+ mem [32]byte
+ n int // how much of mem is used
+}
+
+// New creates a new Digest that computes the 64-bit xxHash algorithm.
+func New() *Digest {
+ var d Digest
+ d.Reset()
+ return &d
+}
+
+// Reset clears the Digest's state so that it can be reused.
+func (d *Digest) Reset() {
+ d.v1 = prime1v + prime2
+ d.v2 = prime2
+ d.v3 = 0
+ d.v4 = -prime1v
+ d.total = 0
+ d.n = 0
+}
+
+// Size always returns 8 bytes.
+func (d *Digest) Size() int { return 8 }
+
+// BlockSize always returns 32 bytes.
+func (d *Digest) BlockSize() int { return 32 }
+
+// Write adds more data to d. It always returns len(b), nil.
+func (d *Digest) Write(b []byte) (n int, err error) {
+ n = len(b)
+ d.total += uint64(n)
+
+ if d.n+n < 32 {
+ // This new data doesn't even fill the current block.
+ copy(d.mem[d.n:], b)
+ d.n += n
+ return
+ }
+
+ if d.n > 0 {
+ // Finish off the partial block.
+ copy(d.mem[d.n:], b)
+ d.v1 = round(d.v1, u64(d.mem[0:8]))
+ d.v2 = round(d.v2, u64(d.mem[8:16]))
+ d.v3 = round(d.v3, u64(d.mem[16:24]))
+ d.v4 = round(d.v4, u64(d.mem[24:32]))
+ b = b[32-d.n:]
+ d.n = 0
+ }
+
+ if len(b) >= 32 {
+ // One or more full blocks left.
+ nw := writeBlocks(d, b)
+ b = b[nw:]
+ }
+
+ // Store any remaining partial block.
+ copy(d.mem[:], b)
+ d.n = len(b)
+
+ return
+}
+
+// Sum appends the current hash to b and returns the resulting slice.
+func (d *Digest) Sum(b []byte) []byte {
+ s := d.Sum64()
+ return append(
+ b,
+ byte(s>>56),
+ byte(s>>48),
+ byte(s>>40),
+ byte(s>>32),
+ byte(s>>24),
+ byte(s>>16),
+ byte(s>>8),
+ byte(s),
+ )
+}
+
+// Sum64 returns the current hash.
+func (d *Digest) Sum64() uint64 {
+ var h uint64
+
+ if d.total >= 32 {
+ v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
+ h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
+ h = mergeRound(h, v1)
+ h = mergeRound(h, v2)
+ h = mergeRound(h, v3)
+ h = mergeRound(h, v4)
+ } else {
+ h = d.v3 + prime5
+ }
+
+ h += d.total
+
+ i, end := 0, d.n
+ for ; i+8 <= end; i += 8 {
+ k1 := round(0, u64(d.mem[i:i+8]))
+ h ^= k1
+ h = rol27(h)*prime1 + prime4
+ }
+ if i+4 <= end {
+ h ^= uint64(u32(d.mem[i:i+4])) * prime1
+ h = rol23(h)*prime2 + prime3
+ i += 4
+ }
+ for i < end {
+ h ^= uint64(d.mem[i]) * prime5
+ h = rol11(h) * prime1
+ i++
+ }
+
+ h ^= h >> 33
+ h *= prime2
+ h ^= h >> 29
+ h *= prime3
+ h ^= h >> 32
+
+ return h
+}
+
+const (
+ magic = "xxh\x06"
+ marshaledSize = len(magic) + 8*5 + 32
+)
+
+// MarshalBinary implements the encoding.BinaryMarshaler interface.
+func (d *Digest) MarshalBinary() ([]byte, error) {
+ b := make([]byte, 0, marshaledSize)
+ b = append(b, magic...)
+ b = appendUint64(b, d.v1)
+ b = appendUint64(b, d.v2)
+ b = appendUint64(b, d.v3)
+ b = appendUint64(b, d.v4)
+ b = appendUint64(b, d.total)
+ b = append(b, d.mem[:d.n]...)
+ b = b[:len(b)+len(d.mem)-d.n]
+ return b, nil
+}
+
+// UnmarshalBinary implements the encoding.BinaryUnmarshaler interface.
+func (d *Digest) UnmarshalBinary(b []byte) error {
+ if len(b) < len(magic) || string(b[:len(magic)]) != magic {
+ return errors.New("xxhash: invalid hash state identifier")
+ }
+ if len(b) != marshaledSize {
+ return errors.New("xxhash: invalid hash state size")
+ }
+ b = b[len(magic):]
+ b, d.v1 = consumeUint64(b)
+ b, d.v2 = consumeUint64(b)
+ b, d.v3 = consumeUint64(b)
+ b, d.v4 = consumeUint64(b)
+ b, d.total = consumeUint64(b)
+ copy(d.mem[:], b)
+ b = b[len(d.mem):]
+ d.n = int(d.total % uint64(len(d.mem)))
+ return nil
+}
+
+func appendUint64(b []byte, x uint64) []byte {
+ var a [8]byte
+ binary.LittleEndian.PutUint64(a[:], x)
+ return append(b, a[:]...)
+}
+
+func consumeUint64(b []byte) ([]byte, uint64) {
+ x := u64(b)
+ return b[8:], x
+}
+
+func u64(b []byte) uint64 { return binary.LittleEndian.Uint64(b) }
+func u32(b []byte) uint32 { return binary.LittleEndian.Uint32(b) }
+
+func round(acc, input uint64) uint64 {
+ acc += input * prime2
+ acc = rol31(acc)
+ acc *= prime1
+ return acc
+}
+
+func mergeRound(acc, val uint64) uint64 {
+ val = round(0, val)
+ acc ^= val
+ acc = acc*prime1 + prime4
+ return acc
+}
+
+func rol1(x uint64) uint64 { return bits.RotateLeft64(x, 1) }
+func rol7(x uint64) uint64 { return bits.RotateLeft64(x, 7) }
+func rol11(x uint64) uint64 { return bits.RotateLeft64(x, 11) }
+func rol12(x uint64) uint64 { return bits.RotateLeft64(x, 12) }
+func rol18(x uint64) uint64 { return bits.RotateLeft64(x, 18) }
+func rol23(x uint64) uint64 { return bits.RotateLeft64(x, 23) }
+func rol27(x uint64) uint64 { return bits.RotateLeft64(x, 27) }
+func rol31(x uint64) uint64 { return bits.RotateLeft64(x, 31) }
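
Digest also satisfies encoding.BinaryMarshaler/BinaryUnmarshaler via the MarshalBinary/UnmarshalBinary methods above, so an in-progress hash can be checkpointed and resumed. A sketch using only the functions defined in this file:

```go
package main

import (
	"fmt"

	"github.com/cespare/xxhash/v2"
)

func main() {
	d1 := xxhash.New()
	d1.WriteString("partial inp")
	state, err := d1.MarshalBinary() // snapshot of v1..v4, total and mem
	if err != nil {
		panic(err)
	}

	d2 := xxhash.New()
	if err := d2.UnmarshalBinary(state); err != nil {
		panic(err)
	}
	d2.WriteString("ut")

	fmt.Println(d2.Sum64() == xxhash.Sum64String("partial input")) // true
}
```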
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go
new file mode 100644
index 0000000000000..ad14b807f4d96
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.go
@@ -0,0 +1,13 @@
+// +build !appengine
+// +build gc
+// +build !purego
+
+package xxhash
+
+// Sum64 computes the 64-bit xxHash digest of b.
+//
+//go:noescape
+func Sum64(b []byte) uint64
+
+//go:noescape
+func writeBlocks(d *Digest, b []byte) int
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
new file mode 100644
index 0000000000000..d580e32aed4af
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
@@ -0,0 +1,215 @@
+// +build !appengine
+// +build gc
+// +build !purego
+
+#include "textflag.h"
+
+// Register allocation:
+// AX h
+// CX pointer to advance through b
+// DX n
+// BX loop end
+// R8 v1, k1
+// R9 v2
+// R10 v3
+// R11 v4
+// R12 tmp
+// R13 prime1v
+// R14 prime2v
+// R15 prime4v
+
+// round reads from and advances the buffer pointer in CX.
+// It assumes that R13 has prime1v and R14 has prime2v.
+#define round(r) \
+ MOVQ (CX), R12 \
+ ADDQ $8, CX \
+ IMULQ R14, R12 \
+ ADDQ R12, r \
+ ROLQ $31, r \
+ IMULQ R13, r
+
+// mergeRound applies a merge round on the two registers acc and val.
+// It assumes that R13 has prime1v, R14 has prime2v, and R15 has prime4v.
+#define mergeRound(acc, val) \
+ IMULQ R14, val \
+ ROLQ $31, val \
+ IMULQ R13, val \
+ XORQ val, acc \
+ IMULQ R13, acc \
+ ADDQ R15, acc
+
+// func Sum64(b []byte) uint64
+TEXT ·Sum64(SB), NOSPLIT, $0-32
+ // Load fixed primes.
+ MOVQ ·prime1v(SB), R13
+ MOVQ ·prime2v(SB), R14
+ MOVQ ·prime4v(SB), R15
+
+ // Load slice.
+ MOVQ b_base+0(FP), CX
+ MOVQ b_len+8(FP), DX
+ LEAQ (CX)(DX*1), BX
+
+ // The first loop limit will be len(b)-32.
+ SUBQ $32, BX
+
+ // Check whether we have at least one block.
+ CMPQ DX, $32
+ JLT noBlocks
+
+ // Set up initial state (v1, v2, v3, v4).
+ MOVQ R13, R8
+ ADDQ R14, R8
+ MOVQ R14, R9
+ XORQ R10, R10
+ XORQ R11, R11
+ SUBQ R13, R11
+
+ // Loop until CX > BX.
+blockLoop:
+ round(R8)
+ round(R9)
+ round(R10)
+ round(R11)
+
+ CMPQ CX, BX
+ JLE blockLoop
+
+ MOVQ R8, AX
+ ROLQ $1, AX
+ MOVQ R9, R12
+ ROLQ $7, R12
+ ADDQ R12, AX
+ MOVQ R10, R12
+ ROLQ $12, R12
+ ADDQ R12, AX
+ MOVQ R11, R12
+ ROLQ $18, R12
+ ADDQ R12, AX
+
+ mergeRound(AX, R8)
+ mergeRound(AX, R9)
+ mergeRound(AX, R10)
+ mergeRound(AX, R11)
+
+ JMP afterBlocks
+
+noBlocks:
+ MOVQ ·prime5v(SB), AX
+
+afterBlocks:
+ ADDQ DX, AX
+
+ // Right now BX has len(b)-32, and we want to loop until CX > len(b)-8.
+ ADDQ $24, BX
+
+ CMPQ CX, BX
+ JG fourByte
+
+wordLoop:
+ // Calculate k1.
+ MOVQ (CX), R8
+ ADDQ $8, CX
+ IMULQ R14, R8
+ ROLQ $31, R8
+ IMULQ R13, R8
+
+ XORQ R8, AX
+ ROLQ $27, AX
+ IMULQ R13, AX
+ ADDQ R15, AX
+
+ CMPQ CX, BX
+ JLE wordLoop
+
+fourByte:
+ ADDQ $4, BX
+ CMPQ CX, BX
+ JG singles
+
+ MOVL (CX), R8
+ ADDQ $4, CX
+ IMULQ R13, R8
+ XORQ R8, AX
+
+ ROLQ $23, AX
+ IMULQ R14, AX
+ ADDQ ·prime3v(SB), AX
+
+singles:
+ ADDQ $4, BX
+ CMPQ CX, BX
+ JGE finalize
+
+singlesLoop:
+ MOVBQZX (CX), R12
+ ADDQ $1, CX
+ IMULQ ·prime5v(SB), R12
+ XORQ R12, AX
+
+ ROLQ $11, AX
+ IMULQ R13, AX
+
+ CMPQ CX, BX
+ JL singlesLoop
+
+finalize:
+ MOVQ AX, R12
+ SHRQ $33, R12
+ XORQ R12, AX
+ IMULQ R14, AX
+ MOVQ AX, R12
+ SHRQ $29, R12
+ XORQ R12, AX
+ IMULQ ·prime3v(SB), AX
+ MOVQ AX, R12
+ SHRQ $32, R12
+ XORQ R12, AX
+
+ MOVQ AX, ret+24(FP)
+ RET
+
+// writeBlocks uses the same registers as above except that it uses AX to store
+// the d pointer.
+
+// func writeBlocks(d *Digest, b []byte) int
+TEXT ·writeBlocks(SB), NOSPLIT, $0-40
+ // Load fixed primes needed for round.
+ MOVQ ·prime1v(SB), R13
+ MOVQ ·prime2v(SB), R14
+
+ // Load slice.
+ MOVQ b_base+8(FP), CX
+ MOVQ b_len+16(FP), DX
+ LEAQ (CX)(DX*1), BX
+ SUBQ $32, BX
+
+ // Load vN from d.
+ MOVQ d+0(FP), AX
+ MOVQ 0(AX), R8 // v1
+ MOVQ 8(AX), R9 // v2
+ MOVQ 16(AX), R10 // v3
+ MOVQ 24(AX), R11 // v4
+
+ // We don't need to check the loop condition here; this function is
+ // always called with at least one block of data to process.
+blockLoop:
+ round(R8)
+ round(R9)
+ round(R10)
+ round(R11)
+
+ CMPQ CX, BX
+ JLE blockLoop
+
+ // Copy vN back to d.
+ MOVQ R8, 0(AX)
+ MOVQ R9, 8(AX)
+ MOVQ R10, 16(AX)
+ MOVQ R11, 24(AX)
+
+ // The number of bytes written is CX minus the old base pointer.
+ SUBQ b_base+8(FP), CX
+ MOVQ CX, ret+32(FP)
+
+ RET
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_other.go b/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
new file mode 100644
index 0000000000000..4a5a821603e5b
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
@@ -0,0 +1,76 @@
+// +build !amd64 appengine !gc purego
+
+package xxhash
+
+// Sum64 computes the 64-bit xxHash digest of b.
+func Sum64(b []byte) uint64 {
+ // A simpler version would be
+ // d := New()
+ // d.Write(b)
+ // return d.Sum64()
+ // but this is faster, particularly for small inputs.
+
+ n := len(b)
+ var h uint64
+
+ if n >= 32 {
+ v1 := prime1v + prime2
+ v2 := prime2
+ v3 := uint64(0)
+ v4 := -prime1v
+ for len(b) >= 32 {
+ v1 = round(v1, u64(b[0:8:len(b)]))
+ v2 = round(v2, u64(b[8:16:len(b)]))
+ v3 = round(v3, u64(b[16:24:len(b)]))
+ v4 = round(v4, u64(b[24:32:len(b)]))
+ b = b[32:len(b):len(b)]
+ }
+ h = rol1(v1) + rol7(v2) + rol12(v3) + rol18(v4)
+ h = mergeRound(h, v1)
+ h = mergeRound(h, v2)
+ h = mergeRound(h, v3)
+ h = mergeRound(h, v4)
+ } else {
+ h = prime5
+ }
+
+ h += uint64(n)
+
+ i, end := 0, len(b)
+ for ; i+8 <= end; i += 8 {
+ k1 := round(0, u64(b[i:i+8:len(b)]))
+ h ^= k1
+ h = rol27(h)*prime1 + prime4
+ }
+ if i+4 <= end {
+ h ^= uint64(u32(b[i:i+4:len(b)])) * prime1
+ h = rol23(h)*prime2 + prime3
+ i += 4
+ }
+ for ; i < end; i++ {
+ h ^= uint64(b[i]) * prime5
+ h = rol11(h) * prime1
+ }
+
+ h ^= h >> 33
+ h *= prime2
+ h ^= h >> 29
+ h *= prime3
+ h ^= h >> 32
+
+ return h
+}
+
+func writeBlocks(d *Digest, b []byte) int {
+ v1, v2, v3, v4 := d.v1, d.v2, d.v3, d.v4
+ n := len(b)
+ for len(b) >= 32 {
+ v1 = round(v1, u64(b[0:8:len(b)]))
+ v2 = round(v2, u64(b[8:16:len(b)]))
+ v3 = round(v3, u64(b[16:24:len(b)]))
+ v4 = round(v4, u64(b[24:32:len(b)]))
+ b = b[32:len(b):len(b)]
+ }
+ d.v1, d.v2, d.v3, d.v4 = v1, v2, v3, v4
+ return n - len(b)
+}
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go b/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
new file mode 100644
index 0000000000000..fc9bea7a31f2b
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
@@ -0,0 +1,15 @@
+// +build appengine
+
+// This file contains the safe implementations of otherwise unsafe-using code.
+
+package xxhash
+
+// Sum64String computes the 64-bit xxHash digest of s.
+func Sum64String(s string) uint64 {
+ return Sum64([]byte(s))
+}
+
+// WriteString adds more data to d. It always returns len(s), nil.
+func (d *Digest) WriteString(s string) (n int, err error) {
+ return d.Write([]byte(s))
+}
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go b/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
new file mode 100644
index 0000000000000..53bf76efbc247
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
@@ -0,0 +1,46 @@
+// +build !appengine
+
+// This file encapsulates usage of unsafe.
+// xxhash_safe.go contains the safe implementations.
+
+package xxhash
+
+import (
+ "reflect"
+ "unsafe"
+)
+
+// Notes:
+//
+// See https://groups.google.com/d/msg/golang-nuts/dcjzJy-bSpw/tcZYBzQqAQAJ
+// for some discussion about these unsafe conversions.
+//
+// In the future it's possible that compiler optimizations will make these
+// unsafe operations unnecessary: https://golang.org/issue/2205.
+//
+// Both of these wrapper functions still incur function call overhead since they
+// will not be inlined. We could write Go/asm copies of Sum64 and Digest.Write
+// for strings to squeeze out a bit more speed. Mid-stack inlining should
+// eventually fix this.
+
+// Sum64String computes the 64-bit xxHash digest of s.
+// It may be faster than Sum64([]byte(s)) by avoiding a copy.
+func Sum64String(s string) uint64 {
+ var b []byte
+ bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
+ bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
+ bh.Len = len(s)
+ bh.Cap = len(s)
+ return Sum64(b)
+}
+
+// WriteString adds more data to d. It always returns len(s), nil.
+// It may be faster than Write([]byte(s)) by avoiding a copy.
+func (d *Digest) WriteString(s string) (n int, err error) {
+ var b []byte
+ bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
+ bh.Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data
+ bh.Len = len(s)
+ bh.Cap = len(s)
+ return d.Write(b)
+}
diff --git a/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/cache.go b/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/cache.go
index 540b03de4539f..17e1503ff3139 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/cache.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/cache.go
@@ -33,10 +33,10 @@ type Config struct {
Fifocache FifoCacheConfig `yaml:"fifocache,omitempty"`
// This is to name the cache metrics properly.
- Prefix string `yaml:"prefix,omitempty"`
+ Prefix string `yaml:"prefix,omitempty" doc:"hidden"`
// For tests to inject specific implementations.
- Cache Cache
+ Cache Cache `yaml:"-"`
}
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
diff --git a/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/redis_cache.go b/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/redis_cache.go
index 43e14dba3fc89..c6cbc66200664 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/redis_cache.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/chunk/cache/redis_cache.go
@@ -25,6 +25,8 @@ type RedisConfig struct {
Expiration time.Duration `yaml:"expiration,omitempty"`
MaxIdleConns int `yaml:"max_idle_conns,omitempty"`
MaxActiveConns int `yaml:"max_active_conns,omitempty"`
+ Password string `yaml:"password"`
+ EnableTLS bool `yaml:"enable_tls"`
}
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet
@@ -34,6 +36,8 @@ func (cfg *RedisConfig) RegisterFlagsWithPrefix(prefix, description string, f *f
f.DurationVar(&cfg.Expiration, prefix+"redis.expiration", 0, description+"How long keys stay in the redis.")
f.IntVar(&cfg.MaxIdleConns, prefix+"redis.max-idle-conns", 80, description+"Maximum number of idle connections in pool.")
f.IntVar(&cfg.MaxActiveConns, prefix+"redis.max-active-conns", 0, description+"Maximum number of active connections in pool.")
+ f.StringVar(&cfg.Password, prefix+"redis.password", "", description+"Password to use when connecting to redis.")
+ f.BoolVar(&cfg.EnableTLS, prefix+"redis.enable-tls", false, description+"Enables connecting to redis with TLS.")
}
// NewRedisCache creates a new RedisCache
@@ -44,7 +48,15 @@ func NewRedisCache(cfg RedisConfig, name string, pool *redis.Pool) *RedisCache {
MaxIdle: cfg.MaxIdleConns,
MaxActive: cfg.MaxActiveConns,
Dial: func() (redis.Conn, error) {
- c, err := redis.Dial("tcp", cfg.Endpoint)
+ options := make([]redis.DialOption, 0, 2)
+ if cfg.EnableTLS {
+ options = append(options, redis.DialUseTLS(true))
+ }
+ if cfg.Password != "" {
+ options = append(options, redis.DialPassword(cfg.Password))
+ }
+
+ c, err := redis.Dial("tcp", cfg.Endpoint, options...)
if err != nil {
return nil, err
}
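
The two new config fields map directly onto redigo dial options, as the Dial closure above shows. A standalone sketch of the same call (hypothetical endpoint and password; assumes the gomodule/redigo import path):

```go
package main

import (
	"fmt"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// Equivalent of enable_tls: true and password: "s3cr3t" in the config.
	c, err := redis.Dial("tcp", "redis.example.com:6380",
		redis.DialUseTLS(true),
		redis.DialPassword("s3cr3t"),
	)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer c.Close()

	pong, err := redis.String(c.Do("PING"))
	fmt.Println(pong, err)
}
```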
diff --git a/vendor/github.com/cortexproject/cortex/pkg/chunk/gcp/bigtable_index_client.go b/vendor/github.com/cortexproject/cortex/pkg/chunk/gcp/bigtable_index_client.go
index d18a2515e0c3b..ff44f128f6851 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/chunk/gcp/bigtable_index_client.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/chunk/gcp/bigtable_index_client.go
@@ -36,8 +36,8 @@ type Config struct {
GRPCClientConfig grpcclient.Config `yaml:"grpc_client_config"`
- ColumnKey bool
- DistributeKeys bool
+ ColumnKey bool `yaml:"-"`
+ DistributeKeys bool `yaml:"-"`
TableCacheEnabled bool
TableCacheExpiration time.Duration
diff --git a/vendor/github.com/cortexproject/cortex/pkg/chunk/schema_config.go b/vendor/github.com/cortexproject/cortex/pkg/chunk/schema_config.go
index 4e3e768339553..06312f827bfcf 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/chunk/schema_config.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/chunk/schema_config.go
@@ -424,8 +424,8 @@ func (cfg *PeriodicTableConfig) periodicTables(from, through model.Time, pCfg Pr
nowWeek = now / periodSecs
result = []TableDesc{}
)
- // If through ends on 00:00 of the day, don't include the upcoming day
- if through.Unix()%secondsInDay == 0 {
+ // If interval ends exactly on a period boundary, don’t include the upcoming period
+ if through.Unix()%periodSecs == 0 {
lastTable--
}
// Don't make tables further back than the configured retention
diff --git a/vendor/github.com/cortexproject/cortex/pkg/chunk/storage/caching_fixtures.go b/vendor/github.com/cortexproject/cortex/pkg/chunk/storage/caching_fixtures.go
index 47e90fe44e572..ece8043098692 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/chunk/storage/caching_fixtures.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/chunk/storage/caching_fixtures.go
@@ -41,5 +41,5 @@ func defaultLimits() (*validation.Overrides, error) {
var defaults validation.Limits
flagext.DefaultValues(&defaults)
defaults.CardinalityLimit = 5
- return validation.NewOverrides(defaults)
+ return validation.NewOverrides(defaults, nil)
}
diff --git a/vendor/github.com/cortexproject/cortex/pkg/chunk/table_manager.go b/vendor/github.com/cortexproject/cortex/pkg/chunk/table_manager.go
index eba9b23f3587a..6001cb7f12bfb 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/chunk/table_manager.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/chunk/table_manager.go
@@ -239,7 +239,8 @@ func (m *TableManager) calculateExpectedTables() []TableDesc {
result := []TableDesc{}
for i, config := range m.schemaCfg.Configs {
- if config.From.Time.Time().After(mtime.Now()) {
+ // Consider configs which we are about to hit and requires tables to be created due to grace period
+ if config.From.Time.Time().After(mtime.Now().Add(m.cfg.CreationGracePeriod)) {
continue
}
if config.IndexTables.Period == 0 { // non-periodic table
diff --git a/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor.go b/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor.go
index efba1fec25ed3..0eaec36d44d35 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor.go
@@ -3,7 +3,6 @@ package distributor
import (
"context"
"flag"
- "fmt"
"net/http"
"sort"
"strings"
@@ -189,7 +188,7 @@ func New(cfg Config, clientConfig ingester_client.Config, limits *validation.Ove
if !canJoinDistributorsRing {
ingestionRateStrategy = newInfiniteIngestionRateStrategy()
} else if limits.IngestionRateStrategy() == validation.GlobalIngestionRateStrategy {
- distributorsRing, err = ring.NewLifecycler(cfg.DistributorRing.ToLifecyclerConfig(), nil, "distributor", ring.DistributorRingKey)
+ distributorsRing, err = ring.NewLifecycler(cfg.DistributorRing.ToLifecyclerConfig(), nil, "distributor", ring.DistributorRingKey, true)
if err != nil {
return nil, err
}
@@ -227,7 +226,7 @@ func (d *Distributor) Stop() {
func (d *Distributor) tokenForLabels(userID string, labels []client.LabelAdapter) (uint32, error) {
if d.cfg.ShardByAllLabels {
- return shardByAllLabels(userID, labels)
+ return shardByAllLabels(userID, labels), nil
}
metricName, err := extract.MetricNameFromLabelAdapters(labels)
@@ -244,18 +243,15 @@ func shardByMetricName(userID string, metricName string) uint32 {
return h
}
-func shardByAllLabels(userID string, labels []client.LabelAdapter) (uint32, error) {
+// This function generates different values for different order of same labels.
+func shardByAllLabels(userID string, labels []client.LabelAdapter) uint32 {
h := client.HashNew32()
h = client.HashAdd32(h, userID)
- var lastLabelName string
for _, label := range labels {
- if strings.Compare(lastLabelName, label.Name) >= 0 {
- return 0, fmt.Errorf("Labels not sorted")
- }
h = client.HashAdd32(h, label.Name)
h = client.HashAdd32(h, label.Value)
}
- return h, nil
+ return h
}
// Remove the label labelname from a slice of LabelPairs if it exists.
@@ -374,6 +370,13 @@ func (d *Distributor) Push(ctx context.Context, req *client.WriteRequest) (*clie
continue
}
+ // We rely on sorted labels in different places:
+ // 1) When computing token for labels, and sharding by all labels. Here different order of labels returns
+ // different tokens, which is bad.
+ // 2) In validation code, when checking for duplicate label names. As duplicate label names are rejected
+ // later in the validation phase, we ignore them here.
+ sortLabelsIfNeeded(ts.Labels)
+
// Generate the sharding token based on the series labels without the HA replica
// label and dropped labels (if any)
key, err := d.tokenForLabels(userID, ts.Labels)
@@ -449,6 +452,28 @@ func (d *Distributor) Push(ctx context.Context, req *client.WriteRequest) (*clie
return &client.WriteResponse{}, lastPartialErr
}
+func sortLabelsIfNeeded(labels []client.LabelAdapter) {
+ // no need to run sort.Slice, if labels are already sorted, which is most of the time.
+ // we can avoid extra memory allocations (mostly interface-related) this way.
+ sorted := true
+ last := ""
+ for _, l := range labels {
+ if strings.Compare(last, l.Name) > 0 {
+ sorted = false
+ break
+ }
+ last = l.Name
+ }
+
+ if sorted {
+ return
+ }
+
+ sort.Slice(labels, func(i, j int) bool {
+ return strings.Compare(labels[i].Name, labels[j].Name) < 0
+ })
+}
+
func (d *Distributor) sendSamples(ctx context.Context, ingester ring.IngesterDesc, timeseries []client.PreallocTimeseries) error {
h, err := d.ingesterPool.GetClientFor(ingester.Addr)
if err != nil {
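
Why the sort matters before hashing: shardByAllLabels folds each name/value pair into the hash in sequence, so the token is order-sensitive. A self-contained sketch with a stand-in FNV hash (not the cortex HashAdd32 helpers):

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

type label struct{ name, value string }

// hashLabels folds pairs in order, like shardByAllLabels does.
func hashLabels(ls []label) uint32 {
	h := fnv.New32()
	for _, l := range ls {
		h.Write([]byte(l.name))
		h.Write([]byte(l.value))
	}
	return h.Sum32()
}

func main() {
	a := []label{{"job", "api"}, {"env", "prod"}}
	b := []label{{"env", "prod"}, {"job", "api"}}
	fmt.Println(hashLabels(a) == hashLabels(b)) // false: order changes the token

	for _, ls := range [][]label{a, b} {
		sort.Slice(ls, func(i, j int) bool { return ls[i].name < ls[j].name })
	}
	fmt.Println(hashLabels(a) == hashLabels(b)) // true once both are sorted
}
```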
diff --git a/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor_ring.go b/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor_ring.go
index 673da46e7e38f..d4787ff9633a7 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor_ring.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/distributor/distributor_ring.go
@@ -22,10 +22,10 @@ type RingConfig struct {
HeartbeatTimeout time.Duration `yaml:"heartbeat_timeout,omitempty"`
// Instance details
- InstanceID string `yaml:"instance_id"`
- InstanceInterfaceNames []string `yaml:"instance_interface_names"`
- InstancePort int `yaml:"instance_port"`
- InstanceAddr string `yaml:"instance_addr"`
+ InstanceID string `yaml:"instance_id" doc:"hidden"`
+ InstanceInterfaceNames []string `yaml:"instance_interface_names" doc:"hidden"`
+ InstancePort int `yaml:"instance_port" doc:"hidden"`
+ InstanceAddr string `yaml:"instance_addr" doc:"hidden"`
// Injected internally
ListenPort int `yaml:"-"`
@@ -40,7 +40,7 @@ func (cfg *RingConfig) RegisterFlags(f *flag.FlagSet) {
}
// Ring flags
- cfg.KVStore.RegisterFlagsWithPrefix("distributor.ring.", f)
+ cfg.KVStore.RegisterFlagsWithPrefix("distributor.ring.", "collectors/", f)
f.DurationVar(&cfg.HeartbeatPeriod, "distributor.ring.heartbeat-period", 5*time.Second, "Period at which to heartbeat to the ring.")
f.DurationVar(&cfg.HeartbeatTimeout, "distributor.ring.heartbeat-timeout", time.Minute, "The heartbeat timeout after which distributors are considered unhealthy within the ring.")
diff --git a/vendor/github.com/cortexproject/cortex/pkg/distributor/ha_tracker.go b/vendor/github.com/cortexproject/cortex/pkg/distributor/ha_tracker.go
index cba42bab8f7c5..e2499961b1b3b 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/distributor/ha_tracker.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/distributor/ha_tracker.go
@@ -110,13 +110,17 @@ func (cfg *HATrackerConfig) RegisterFlags(f *flag.FlagSet) {
f.DurationVar(&cfg.UpdateTimeoutJitterMax,
"distributor.ha-tracker.update-timeout-jitter-max",
5*time.Second,
- "To spread the HA deduping heartbeats out over time.")
+ "Maximum jitter applied to the update timeout, in order to spread the HA heartbeats over time.")
f.DurationVar(&cfg.FailoverTimeout,
"distributor.ha-tracker.failover-timeout",
30*time.Second,
"If we don't receive any samples from the accepted replica for a cluster in this amount of time we will failover to the next replica we receive a sample from. This value must be greater than the update timeout")
- // We want the ability to use different Consul instances for the ring and for HA cluster tracking.
- cfg.KVStore.RegisterFlagsWithPrefix("distributor.ha-tracker.", f)
+
+ // We want the ability to use different Consul instances for the ring and
+	// for HA cluster tracking. We also customize the default key prefix, in
+	// order not to clash with the ring key if they both share the same KVStore
+	// backend (i.e. run on the same Consul cluster).
+ cfg.KVStore.RegisterFlagsWithPrefix("distributor.ha-tracker.", "ha-tracker/", f)
}
// Validate config and returns error on failure
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ingester/client/compat.go b/vendor/github.com/cortexproject/cortex/pkg/ingester/client/compat.go
index 4174896e421b0..3e308f1ded717 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ingester/client/compat.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ingester/client/compat.go
@@ -190,10 +190,29 @@ func fromLabelMatchers(matchers []*LabelMatcher) ([]*labels.Matcher, error) {
// FromLabelAdaptersToLabels casts []LabelAdapter to labels.Labels.
// It uses unsafe, but as LabelAdapter == labels.Label this should be safe.
// This allows us to use labels.Labels directly in protos.
+//
+// Note: while resulting labels.Labels is supposedly sorted, this function
+// doesn't enforce that. If input is not sorted, output will be wrong.
func FromLabelAdaptersToLabels(ls []LabelAdapter) labels.Labels {
return *(*labels.Labels)(unsafe.Pointer(&ls))
}
+// FromLabelAdaptersToLabelsWithCopy converts []LabelAdapter to labels.Labels.
+// Do NOT use unsafe to convert between data types because this function may
+// get in input labels whose data structure is reused.
+func FromLabelAdaptersToLabelsWithCopy(input []LabelAdapter) labels.Labels {
+ result := make(labels.Labels, len(input))
+
+ for i, l := range input {
+ result[i] = labels.Label{
+ Name: l.Name,
+ Value: l.Value,
+ }
+ }
+
+ return result
+}
+
// FromLabelsToLabelAdapters casts labels.Labels to []LabelAdapter.
// It uses unsafe, but as LabelAdapter == labels.Label this should be safe.
// This allows us to use labels.Labels directly in protos.
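
The difference between the unsafe cast and the new copying variant matters when the caller reuses the input slice: the aliased view changes underneath its holder, while the copy does not. Illustrated with plain structs (not the actual LabelAdapter types):

```go
package main

import "fmt"

type Label struct{ Name, Value string }

func main() {
	buf := []Label{{"job", "api"}}

	alias := buf // like the unsafe cast: shares the backing array
	copied := make([]Label, len(buf))
	copy(copied, buf) // like FromLabelAdaptersToLabelsWithCopy

	buf[0].Value = "reused" // caller recycles its buffer

	fmt.Println(alias[0].Value)  // "reused" — aliased view mutated
	fmt.Println(copied[0].Value) // "api"    — deep copy unaffected
}
```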
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ingester/client/pool.go b/vendor/github.com/cortexproject/cortex/pkg/ingester/client/pool.go
index 91b7b9027115f..de9f96db3857a 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ingester/client/pool.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ingester/client/pool.go
@@ -32,7 +32,7 @@ type Factory func(addr string) (grpc_health_v1.HealthClient, error)
type PoolConfig struct {
ClientCleanupPeriod time.Duration `yaml:"client_cleanup_period,omitempty"`
HealthCheckIngesters bool `yaml:"health_check_ingesters,omitempty"`
- RemoteTimeout time.Duration
+ RemoteTimeout time.Duration `yaml:"-"`
}
// RegisterFlags adds the flags required to config this to the given FlagSet.
diff --git a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/query_range.go b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/query_range.go
index 2dfde812df45f..bac9d1d75406e 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/query_range.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/query_range.go
@@ -33,6 +33,9 @@ var (
// PrometheusCodec is a codec to encode and decode Prometheus query range requests and responses.
PrometheusCodec Codec = &prometheusCodec{}
+
+ // Name of the cache control header.
+ cachecontrolHeader = "Cache-Control"
)
// Codec is used to encode/decode query range requests and responses so they can be passed down to middlewares.
@@ -221,6 +224,10 @@ func (prometheusCodec) DecodeResponse(ctx context.Context, r *http.Response, _ R
if err := json.Unmarshal(buf, &resp); err != nil {
return nil, httpgrpc.Errorf(http.StatusInternalServerError, "error decoding response: %v", err)
}
+
+ for h, hv := range r.Header {
+ resp.Headers = append(resp.Headers, &PrometheusResponseHeader{Name: h, Values: hv})
+ }
return &resp, nil
}
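
With response headers now copied into the decoded PrometheusResponse, downstream middlewares can inspect them. A hypothetical helper (name and policy are illustrative) that honours Cache-Control: no-store using only identifiers defined in this diff:

```go
package queryrange

import "strings"

// isCacheable is a hypothetical helper: it reports whether a decoded
// response may be stored, based on the propagated Cache-Control header.
func isCacheable(resp *PrometheusResponse) bool {
	for _, h := range resp.Headers {
		if h.Name != cachecontrolHeader { // "Cache-Control", declared above
			continue
		}
		for _, v := range h.Values {
			if strings.Contains(strings.ToLower(v), "no-store") {
				return false
			}
		}
	}
	return true
}
```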
diff --git a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.pb.go b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.pb.go
index c06ab389058b4..b355ce69bab56 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.pb.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.pb.go
@@ -114,17 +114,69 @@ func (m *PrometheusRequest) GetQuery() string {
return ""
}
+type PrometheusResponseHeader struct {
+ Name string `protobuf:"bytes,1,opt,name=Name,json=name,proto3" json:"-"`
+ Values []string `protobuf:"bytes,2,rep,name=Values,json=values,proto3" json:"-"`
+}
+
+func (m *PrometheusResponseHeader) Reset() { *m = PrometheusResponseHeader{} }
+func (*PrometheusResponseHeader) ProtoMessage() {}
+func (*PrometheusResponseHeader) Descriptor() ([]byte, []int) {
+ return fileDescriptor_79b02382e213d0b2, []int{1}
+}
+func (m *PrometheusResponseHeader) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *PrometheusResponseHeader) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_PrometheusResponseHeader.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalTo(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *PrometheusResponseHeader) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_PrometheusResponseHeader.Merge(m, src)
+}
+func (m *PrometheusResponseHeader) XXX_Size() int {
+ return m.Size()
+}
+func (m *PrometheusResponseHeader) XXX_DiscardUnknown() {
+ xxx_messageInfo_PrometheusResponseHeader.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_PrometheusResponseHeader proto.InternalMessageInfo
+
+func (m *PrometheusResponseHeader) GetName() string {
+ if m != nil {
+ return m.Name
+ }
+ return ""
+}
+
+func (m *PrometheusResponseHeader) GetValues() []string {
+ if m != nil {
+ return m.Values
+ }
+ return nil
+}
+
type PrometheusResponse struct {
- Status string `protobuf:"bytes,1,opt,name=Status,json=status,proto3" json:"status"`
- Data PrometheusData `protobuf:"bytes,2,opt,name=Data,json=data,proto3" json:"data,omitempty"`
- ErrorType string `protobuf:"bytes,3,opt,name=ErrorType,json=errorType,proto3" json:"errorType,omitempty"`
- Error string `protobuf:"bytes,4,opt,name=Error,json=error,proto3" json:"error,omitempty"`
+ Status string `protobuf:"bytes,1,opt,name=Status,json=status,proto3" json:"status"`
+ Data PrometheusData `protobuf:"bytes,2,opt,name=Data,json=data,proto3" json:"data,omitempty"`
+ ErrorType string `protobuf:"bytes,3,opt,name=ErrorType,json=errorType,proto3" json:"errorType,omitempty"`
+ Error string `protobuf:"bytes,4,opt,name=Error,json=error,proto3" json:"error,omitempty"`
+ Headers []*PrometheusResponseHeader `protobuf:"bytes,5,rep,name=Headers,json=headers,proto3" json:"-"`
}
func (m *PrometheusResponse) Reset() { *m = PrometheusResponse{} }
func (*PrometheusResponse) ProtoMessage() {}
func (*PrometheusResponse) Descriptor() ([]byte, []int) {
- return fileDescriptor_79b02382e213d0b2, []int{1}
+ return fileDescriptor_79b02382e213d0b2, []int{2}
}
func (m *PrometheusResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -181,6 +233,13 @@ func (m *PrometheusResponse) GetError() string {
return ""
}
+func (m *PrometheusResponse) GetHeaders() []*PrometheusResponseHeader {
+ if m != nil {
+ return m.Headers
+ }
+ return nil
+}
+
type PrometheusData struct {
ResultType string `protobuf:"bytes,1,opt,name=ResultType,json=resultType,proto3" json:"resultType"`
Result []SampleStream `protobuf:"bytes,2,rep,name=Result,json=result,proto3" json:"result"`
@@ -189,7 +248,7 @@ type PrometheusData struct {
func (m *PrometheusData) Reset() { *m = PrometheusData{} }
func (*PrometheusData) ProtoMessage() {}
func (*PrometheusData) Descriptor() ([]byte, []int) {
- return fileDescriptor_79b02382e213d0b2, []int{2}
+ return fileDescriptor_79b02382e213d0b2, []int{3}
}
func (m *PrometheusData) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -240,7 +299,7 @@ type SampleStream struct {
func (m *SampleStream) Reset() { *m = SampleStream{} }
func (*SampleStream) ProtoMessage() {}
func (*SampleStream) Descriptor() ([]byte, []int) {
- return fileDescriptor_79b02382e213d0b2, []int{3}
+ return fileDescriptor_79b02382e213d0b2, []int{4}
}
func (m *SampleStream) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -285,7 +344,7 @@ type CachedResponse struct {
func (m *CachedResponse) Reset() { *m = CachedResponse{} }
func (*CachedResponse) ProtoMessage() {}
func (*CachedResponse) Descriptor() ([]byte, []int) {
- return fileDescriptor_79b02382e213d0b2, []int{4}
+ return fileDescriptor_79b02382e213d0b2, []int{5}
}
func (m *CachedResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -338,7 +397,7 @@ type Extent struct {
func (m *Extent) Reset() { *m = Extent{} }
func (*Extent) ProtoMessage() {}
func (*Extent) Descriptor() ([]byte, []int) {
- return fileDescriptor_79b02382e213d0b2, []int{5}
+ return fileDescriptor_79b02382e213d0b2, []int{6}
}
func (m *Extent) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -397,6 +456,7 @@ func (m *Extent) GetResponse() *types.Any {
func init() {
proto.RegisterType((*PrometheusRequest)(nil), "queryrange.PrometheusRequest")
+ proto.RegisterType((*PrometheusResponseHeader)(nil), "queryrange.PrometheusResponseHeader")
proto.RegisterType((*PrometheusResponse)(nil), "queryrange.PrometheusResponse")
proto.RegisterType((*PrometheusData)(nil), "queryrange.PrometheusData")
proto.RegisterType((*SampleStream)(nil), "queryrange.SampleStream")
@@ -407,53 +467,57 @@ func init() {
func init() { proto.RegisterFile("queryrange.proto", fileDescriptor_79b02382e213d0b2) }
var fileDescriptor_79b02382e213d0b2 = []byte{
- // 730 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0xbd, 0x6e, 0xe3, 0x46,
- 0x10, 0xd6, 0x5a, 0x12, 0x25, 0xad, 0x0d, 0xd9, 0x5e, 0x1b, 0x09, 0xe5, 0x82, 0x14, 0x54, 0x29,
- 0x40, 0x4c, 0x01, 0x0e, 0x02, 0xa4, 0x49, 0x60, 0x33, 0x76, 0x80, 0x04, 0x29, 0x8c, 0x75, 0xaa,
- 0x34, 0xc1, 0x4a, 0x9c, 0x50, 0xb4, 0xc5, 0x1f, 0x2f, 0x97, 0x81, 0x55, 0x04, 0x08, 0xf2, 0x04,
- 0x57, 0xde, 0x23, 0x5c, 0x71, 0x0f, 0x70, 0x6f, 0x70, 0x2e, 0x5d, 0x1a, 0x57, 0xf0, 0xce, 0x74,
- 0x73, 0x60, 0xe5, 0x47, 0x38, 0x70, 0x97, 0x94, 0x78, 0x77, 0xdd, 0x35, 0xe6, 0xcc, 0xb7, 0xdf,
- 0xcc, 0x7c, 0x33, 0x9e, 0x11, 0xde, 0xb9, 0x4e, 0x80, 0x2f, 0x39, 0x0b, 0x5c, 0xb0, 0x22, 0x1e,
- 0x8a, 0x90, 0xe0, 0x35, 0x72, 0x70, 0xe8, 0x7a, 0x62, 0x9e, 0x4c, 0xad, 0x59, 0xe8, 0x4f, 0xdc,
- 0xd0, 0x0d, 0x27, 0x92, 0x32, 0x4d, 0xfe, 0x96, 0x9e, 0x74, 0xa4, 0xa5, 0x42, 0x0f, 0x0c, 0x37,
- 0x0c, 0xdd, 0x05, 0xac, 0x59, 0x4e, 0xc2, 0x99, 0xf0, 0xc2, 0xa0, 0x7c, 0x3f, 0xae, 0xa5, 0x9b,
- 0x85, 0x5c, 0xc0, 0x4d, 0xc4, 0xc3, 0x4b, 0x98, 0x89, 0xd2, 0x9b, 0x44, 0x57, 0xee, 0xc4, 0x0b,
- 0x5c, 0x88, 0x05, 0xf0, 0xc9, 0x6c, 0xe1, 0x41, 0x50, 0x3d, 0x95, 0x19, 0x06, 0x9f, 0x56, 0x60,
- 0xc1, 0x52, 0x3d, 0x8d, 0x5e, 0x21, 0xbc, 0x7b, 0xce, 0x43, 0x1f, 0xc4, 0x1c, 0x92, 0x98, 0xc2,
- 0x75, 0x02, 0xb1, 0x20, 0x04, 0xb7, 0x22, 0x26, 0xe6, 0x3a, 0x1a, 0xa2, 0x71, 0x8f, 0x4a, 0x9b,
- 0xec, 0xe3, 0x76, 0x2c, 0x18, 0x17, 0xfa, 0xc6, 0x10, 0x8d, 0x9b, 0x54, 0x39, 0x64, 0x07, 0x37,
- 0x21, 0x70, 0xf4, 0xa6, 0xc4, 0x0a, 0xb3, 0x88, 0x8d, 0x05, 0x44, 0x7a, 0x4b, 0x42, 0xd2, 0x26,
- 0x3f, 0xe2, 0x8e, 0xf0, 0x7c, 0x08, 0x13, 0xa1, 0xb7, 0x87, 0x68, 0xbc, 0x79, 0x34, 0xb0, 0x94,
- 0x24, 0xab, 0x92, 0x64, 0x9d, 0x96, 0x4d, 0xdb, 0xdd, 0xdb, 0xd4, 0x6c, 0x3c, 0x7f, 0x6b, 0x22,
- 0x5a, 0xc5, 0x14, 0xa5, 0xe5, 0x78, 0x75, 0x4d, 0xea, 0x51, 0xce, 0x28, 0x43, 0x98, 0xd4, 0xa5,
- 0xc7, 0x51, 0x18, 0xc4, 0x40, 0x46, 0x58, 0xbb, 0x10, 0x4c, 0x24, 0xb1, 0x52, 0x6f, 0xe3, 0x3c,
- 0x35, 0xb5, 0x58, 0x22, 0xb4, 0xfc, 0x92, 0x5f, 0x70, 0xeb, 0x94, 0x09, 0x26, 0x5b, 0xd9, 0x3c,
- 0x3a, 0xb0, 0x6a, 0xff, 0xce, 0x75, 0xc6, 0x82, 0x61, 0x7f, 0x55, 0xa8, 0xc9, 0x53, 0xb3, 0xef,
- 0x30, 0xc1, 0xbe, 0x0d, 0x7d, 0x4f, 0x80, 0x1f, 0x89, 0x25, 0x6d, 0x15, 0x3e, 0xf9, 0x1e, 0xf7,
- 0xce, 0x38, 0x0f, 0xf9, 0x1f, 0xcb, 0x08, 0xe4, 0x0c, 0x7a, 0xf6, 0xd7, 0x79, 0x6a, 0xee, 0x41,
- 0x05, 0xd6, 0x22, 0x7a, 0x2b, 0x90, 0x7c, 0x83, 0xdb, 0x32, 0x4c, 0xce, 0xa8, 0x67, 0xef, 0xe5,
- 0xa9, 0xb9, 0x2d, 0x5f, 0x6b, 0xf4, 0xb6, 0x04, 0x46, 0xff, 0x23, 0xdc, 0xff, 0x58, 0x12, 0xb1,
- 0x30, 0xa6, 0x10, 0x27, 0x0b, 0x21, 0xab, 0xaa, 0x26, 0xfb, 0x79, 0x6a, 0x62, 0xbe, 0x42, 0x69,
- 0xcd, 0x26, 0xc7, 0x58, 0x53, 0x7c, 0x7d, 0x63, 0xd8, 0x1c, 0x6f, 0x1e, 0xe9, 0xf5, 0x76, 0x2f,
- 0x98, 0x1f, 0x2d, 0xe0, 0x42, 0x70, 0x60, 0xbe, 0xdd, 0x2f, 0x9b, 0xd5, 0x54, 0x34, 0x2d, 0xbf,
- 0xa3, 0xd7, 0x08, 0x6f, 0xd5, 0x89, 0xe4, 0x5f, 0xac, 0x2d, 0xd8, 0x14, 0x16, 0xc5, 0x8c, 0x8b,
- 0x94, 0xbb, 0x56, 0xb9, 0x6f, 0xbf, 0x17, 0xe8, 0x39, 0xf3, 0xb8, 0x4d, 0x8b, 0x5c, 0x6f, 0x52,
- 0xf3, 0x4b, 0xb6, 0x57, 0xa5, 0x39, 0x71, 0x58, 0x24, 0x80, 0x17, 0x7a, 0x7c, 0x10, 0xdc, 0x9b,
- 0xd1, 0xb2, 0x28, 0xf9, 0x01, 0x77, 0x62, 0x29, 0x27, 0x2e, 0x5b, 0xea, 0x57, 0xf5, 0x95, 0xca,
- 0x75, 0x23, 0xff, 0xb0, 0x45, 0x02, 0x31, 0xad, 0xe8, 0xa3, 0x4b, 0xdc, 0xff, 0x99, 0xcd, 0xe6,
- 0xe0, 0xac, 0xd6, 0x65, 0x80, 0x9b, 0x57, 0xb0, 0x2c, 0xc7, 0xd8, 0xc9, 0x53, 0xb3, 0x70, 0x69,
- 0xf1, 0xa7, 0xd8, 0x5a, 0xb8, 0x11, 0x10, 0x88, 0xaa, 0x0c, 0xa9, 0x4f, 0xee, 0x4c, 0x3e, 0xd9,
- 0xdb, 0x65, 0xa9, 0x8a, 0x4a, 0x2b, 0x63, 0xf4, 0x12, 0x61, 0x4d, 0x91, 0x88, 0x59, 0xdd, 0x4e,
- 0x51, 0xa6, 0x69, 0xf7, 0xf2, 0xd4, 0x54, 0x40, 0x75, 0x46, 0x03, 0x75, 0x46, 0xf2, 0xb4, 0x94,
- 0x0a, 0x08, 0x1c, 0x75, 0x4f, 0x43, 0xdc, 0x15, 0x9c, 0xcd, 0xe0, 0x2f, 0xcf, 0x29, 0xf7, 0xa5,
- 0x9d, 0xa7, 0x26, 0x3a, 0xa4, 0x1d, 0x09, 0xff, 0xea, 0x90, 0x9f, 0x70, 0x97, 0x97, 0xed, 0x94,
- 0xe7, 0xb5, 0xff, 0xd9, 0x79, 0x9d, 0x04, 0x4b, 0x7b, 0x2b, 0x4f, 0xcd, 0x15, 0x93, 0xae, 0xac,
- 0xdf, 0x5a, 0xdd, 0xe6, 0x4e, 0xcb, 0x3e, 0xbe, 0x7b, 0x30, 0x1a, 0xf7, 0x0f, 0x46, 0xe3, 0xe9,
- 0xc1, 0x40, 0xff, 0x65, 0x06, 0x7a, 0x91, 0x19, 0xe8, 0x36, 0x33, 0xd0, 0x5d, 0x66, 0xa0, 0x77,
- 0x99, 0x81, 0xde, 0x67, 0x46, 0xe3, 0x29, 0x33, 0xd0, 0xb3, 0x47, 0xa3, 0x71, 0xf7, 0x68, 0x34,
- 0xee, 0x1f, 0x8d, 0xc6, 0x9f, 0xb5, 0xdf, 0xbd, 0xa9, 0x26, 0xab, 0x7d, 0xf7, 0x21, 0x00, 0x00,
- 0xff, 0xff, 0x29, 0x0e, 0x39, 0xd1, 0x1e, 0x05, 0x00, 0x00,
+ // 792 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x54, 0x4f, 0x8f, 0xdb, 0x44,
+ 0x14, 0xcf, 0xac, 0x1d, 0x67, 0x33, 0xa9, 0xd2, 0xed, 0xb4, 0x02, 0x67, 0x25, 0xec, 0xc8, 0xe2,
+ 0x10, 0x24, 0xea, 0x48, 0x41, 0x48, 0x5c, 0x40, 0x5b, 0xd3, 0x45, 0x80, 0x10, 0xaa, 0x66, 0x2b,
+ 0x0e, 0x5c, 0xd0, 0x24, 0x7e, 0x38, 0x6e, 0xe3, 0x3f, 0x1d, 0x8f, 0x51, 0x73, 0x40, 0x42, 0x7c,
+ 0x02, 0x8e, 0x7c, 0x04, 0x90, 0xf8, 0x00, 0x7c, 0x03, 0x7a, 0xdc, 0x63, 0xc5, 0xc1, 0xb0, 0xd9,
+ 0x0b, 0xf2, 0xa9, 0x1f, 0x01, 0x79, 0x66, 0x9c, 0xb8, 0x94, 0x53, 0x2f, 0x9b, 0xf7, 0x7e, 0xef,
+ 0xdf, 0xef, 0xfd, 0x3c, 0x6f, 0xf1, 0xc9, 0x93, 0x12, 0xf8, 0x96, 0xb3, 0x34, 0x02, 0x3f, 0xe7,
+ 0x99, 0xc8, 0x08, 0x3e, 0x20, 0xa7, 0x77, 0xa3, 0x58, 0xac, 0xcb, 0xa5, 0xbf, 0xca, 0x92, 0x79,
+ 0x94, 0x45, 0xd9, 0x5c, 0xa6, 0x2c, 0xcb, 0x6f, 0xa5, 0x27, 0x1d, 0x69, 0xa9, 0xd2, 0x53, 0x27,
+ 0xca, 0xb2, 0x68, 0x03, 0x87, 0xac, 0xb0, 0xe4, 0x4c, 0xc4, 0x59, 0xaa, 0xe3, 0x67, 0x9d, 0x76,
+ 0xab, 0x8c, 0x0b, 0x78, 0x9a, 0xf3, 0xec, 0x11, 0xac, 0x84, 0xf6, 0xe6, 0xf9, 0xe3, 0x68, 0x1e,
+ 0xa7, 0x11, 0x14, 0x02, 0xf8, 0x7c, 0xb5, 0x89, 0x21, 0x6d, 0x43, 0xba, 0xc3, 0xe4, 0xbf, 0x13,
+ 0x58, 0xba, 0x55, 0x21, 0xef, 0x77, 0x84, 0x6f, 0x3d, 0xe0, 0x59, 0x02, 0x62, 0x0d, 0x65, 0x41,
+ 0xe1, 0x49, 0x09, 0x85, 0x20, 0x04, 0x9b, 0x39, 0x13, 0x6b, 0x1b, 0x4d, 0xd1, 0x6c, 0x48, 0xa5,
+ 0x4d, 0xee, 0xe0, 0x7e, 0x21, 0x18, 0x17, 0xf6, 0xd1, 0x14, 0xcd, 0x0c, 0xaa, 0x1c, 0x72, 0x82,
+ 0x0d, 0x48, 0x43, 0xdb, 0x90, 0x58, 0x63, 0x36, 0xb5, 0x85, 0x80, 0xdc, 0x36, 0x25, 0x24, 0x6d,
+ 0xf2, 0x21, 0x1e, 0x88, 0x38, 0x81, 0xac, 0x14, 0x76, 0x7f, 0x8a, 0x66, 0xa3, 0xc5, 0xc4, 0x57,
+ 0x94, 0xfc, 0x96, 0x92, 0x7f, 0x5f, 0x2f, 0x1d, 0x1c, 0x3f, 0xab, 0xdc, 0xde, 0xcf, 0x7f, 0xb9,
+ 0x88, 0xb6, 0x35, 0xcd, 0x68, 0x29, 0xaf, 0x6d, 0x49, 0x3e, 0xca, 0xf1, 0x1e, 0x62, 0xbb, 0xcb,
+ 0xbc, 0xc8, 0xb3, 0xb4, 0x80, 0x4f, 0x81, 0x85, 0xc0, 0xc9, 0x04, 0x9b, 0x5f, 0xb2, 0x04, 0xd4,
+ 0x02, 0x41, 0xbf, 0xae, 0x5c, 0x74, 0x97, 0x9a, 0x29, 0x4b, 0x80, 0xbc, 0x85, 0xad, 0xaf, 0xd8,
+ 0xa6, 0x84, 0xc2, 0x3e, 0x9a, 0x1a, 0x87, 0xa0, 0xf5, 0x9d, 0x04, 0xbd, 0x5f, 0x8f, 0x30, 0x79,
+ 0xb5, 0x2d, 0xf1, 0xb0, 0x75, 0x21, 0x98, 0x28, 0x0b, 0xdd, 0x12, 0xd7, 0x95, 0x6b, 0x15, 0x12,
+ 0xa1, 0xfa, 0x97, 0x7c, 0x82, 0xcd, 0xfb, 0x4c, 0x30, 0x29, 0xd0, 0x68, 0x71, 0xea, 0x77, 0x1e,
+ 0xc9, 0xa1, 0x63, 0x93, 0x11, 0xbc, 0xd1, 0xec, 0x58, 0x57, 0xee, 0x38, 0x64, 0x82, 0xbd, 0x9b,
+ 0x25, 0xb1, 0x80, 0x24, 0x17, 0x5b, 0x6a, 0x36, 0x3e, 0x79, 0x1f, 0x0f, 0xcf, 0x39, 0xcf, 0xf8,
+ 0xc3, 0x6d, 0x0e, 0x52, 0xd9, 0x61, 0xf0, 0x66, 0x5d, 0xb9, 0xb7, 0xa1, 0x05, 0x3b, 0x15, 0xc3,
+ 0x3d, 0x48, 0xde, 0xc1, 0x7d, 0x59, 0x26, 0x95, 0x1f, 0x06, 0xb7, 0xeb, 0xca, 0xbd, 0x29, 0xa3,
+ 0x9d, 0xf4, 0xbe, 0x04, 0xc8, 0x39, 0x1e, 0x28, 0xa1, 0x0a, 0xbb, 0x3f, 0x35, 0x66, 0xa3, 0xc5,
+ 0xdb, 0xff, 0x4f, 0xf6, 0x65, 0x55, 0x5b, 0xa9, 0x06, 0x6b, 0x55, 0xeb, 0xfd, 0x88, 0xf0, 0xf8,
+ 0xe5, 0xcd, 0x88, 0x8f, 0x31, 0x85, 0xa2, 0xdc, 0x08, 0x49, 0x5e, 0x69, 0x35, 0xae, 0x2b, 0x17,
+ 0xf3, 0x3d, 0x4a, 0x3b, 0x36, 0x39, 0xc3, 0x96, 0xca, 0x97, 0x5f, 0x63, 0xb4, 0xb0, 0xbb, 0x44,
+ 0x2e, 0x58, 0x92, 0x6f, 0xe0, 0x42, 0x70, 0x60, 0x49, 0x30, 0xd6, 0x9a, 0x59, 0xaa, 0x9a, 0xea,
+ 0x5f, 0xef, 0x0f, 0x84, 0x6f, 0x74, 0x13, 0xc9, 0xf7, 0xd8, 0xda, 0xb0, 0x25, 0x6c, 0x9a, 0x4f,
+ 0xd5, 0xb4, 0xbc, 0xe5, 0xeb, 0x63, 0xf8, 0xa2, 0x41, 0x1f, 0xb0, 0x98, 0x07, 0xb4, 0xe9, 0xf5,
+ 0x67, 0xe5, 0xbe, 0xce, 0x69, 0xa9, 0x36, 0xf7, 0x42, 0x96, 0x0b, 0xe0, 0x0d, 0x9f, 0x04, 0x04,
+ 0x8f, 0x57, 0x54, 0x0f, 0x25, 0x1f, 0xe0, 0x41, 0x21, 0xe9, 0x14, 0x7a, 0xa5, 0x71, 0x3b, 0x5f,
+ 0xb1, 0x3c, 0x2c, 0xa2, 0x5e, 0x1c, 0x6d, 0xd3, 0xbd, 0x47, 0x78, 0xfc, 0x31, 0x5b, 0xad, 0x21,
+ 0xdc, 0xbf, 0xba, 0x09, 0x36, 0x1e, 0xc3, 0x56, 0xcb, 0x38, 0xa8, 0x2b, 0xb7, 0x71, 0x69, 0xf3,
+ 0xa7, 0x39, 0x29, 0x78, 0x2a, 0x20, 0x15, 0xed, 0x18, 0xd2, 0x55, 0xee, 0x5c, 0x86, 0x82, 0x9b,
+ 0x7a, 0x54, 0x9b, 0x4a, 0x5b, 0xc3, 0xfb, 0x0d, 0x61, 0x4b, 0x25, 0x11, 0xb7, 0x3d, 0xec, 0x66,
+ 0x8c, 0x11, 0x0c, 0xeb, 0xca, 0x55, 0x40, 0x7b, 0xe3, 0x13, 0x75, 0xe3, 0xf2, 0xee, 0x15, 0x0b,
+ 0x48, 0x43, 0x75, 0xec, 0x53, 0x7c, 0x2c, 0x38, 0x5b, 0xc1, 0x37, 0x71, 0xa8, 0x9f, 0x5d, 0xfb,
+ 0x46, 0x24, 0xfc, 0x59, 0x48, 0x3e, 0xc2, 0xc7, 0x5c, 0xaf, 0xa3, 0x6f, 0xff, 0xce, 0x2b, 0xb7,
+ 0x7f, 0x2f, 0xdd, 0x06, 0x37, 0xea, 0xca, 0xdd, 0x67, 0xd2, 0xbd, 0xf5, 0xb9, 0x79, 0x6c, 0x9c,
+ 0x98, 0xc1, 0xd9, 0xe5, 0x95, 0xd3, 0x7b, 0x7e, 0xe5, 0xf4, 0x5e, 0x5c, 0x39, 0xe8, 0x87, 0x9d,
+ 0x83, 0x7e, 0xd9, 0x39, 0xe8, 0xd9, 0xce, 0x41, 0x97, 0x3b, 0x07, 0xfd, 0xbd, 0x73, 0xd0, 0x3f,
+ 0x3b, 0xa7, 0xf7, 0x62, 0xe7, 0xa0, 0x9f, 0xae, 0x9d, 0xde, 0xe5, 0xb5, 0xd3, 0x7b, 0x7e, 0xed,
+ 0xf4, 0xbe, 0xee, 0xfc, 0x53, 0x5e, 0x5a, 0x72, 0xda, 0x7b, 0xff, 0x06, 0x00, 0x00, 0xff, 0xff,
+ 0xa8, 0x2f, 0xc9, 0x09, 0xbb, 0x05, 0x00, 0x00,
}
func (this *PrometheusRequest) Equal(that interface{}) bool {
@@ -495,6 +559,38 @@ func (this *PrometheusRequest) Equal(that interface{}) bool {
}
return true
}
+func (this *PrometheusResponseHeader) Equal(that interface{}) bool {
+ if that == nil {
+ return this == nil
+ }
+
+ that1, ok := that.(*PrometheusResponseHeader)
+ if !ok {
+ that2, ok := that.(PrometheusResponseHeader)
+ if ok {
+ that1 = &that2
+ } else {
+ return false
+ }
+ }
+ if that1 == nil {
+ return this == nil
+ } else if this == nil {
+ return false
+ }
+ if this.Name != that1.Name {
+ return false
+ }
+ if len(this.Values) != len(that1.Values) {
+ return false
+ }
+ for i := range this.Values {
+ if this.Values[i] != that1.Values[i] {
+ return false
+ }
+ }
+ return true
+}
func (this *PrometheusResponse) Equal(that interface{}) bool {
if that == nil {
return this == nil
@@ -526,6 +622,14 @@ func (this *PrometheusResponse) Equal(that interface{}) bool {
if this.Error != that1.Error {
return false
}
+ if len(this.Headers) != len(that1.Headers) {
+ return false
+ }
+ for i := range this.Headers {
+ if !this.Headers[i].Equal(that1.Headers[i]) {
+ return false
+ }
+ }
return true
}
func (this *PrometheusData) Equal(that interface{}) bool {
@@ -677,16 +781,30 @@ func (this *PrometheusRequest) GoString() string {
s = append(s, "}")
return strings.Join(s, "")
}
+func (this *PrometheusResponseHeader) GoString() string {
+ if this == nil {
+ return "nil"
+ }
+ s := make([]string, 0, 6)
+ s = append(s, "&queryrange.PrometheusResponseHeader{")
+ s = append(s, "Name: "+fmt.Sprintf("%#v", this.Name)+",\n")
+ s = append(s, "Values: "+fmt.Sprintf("%#v", this.Values)+",\n")
+ s = append(s, "}")
+ return strings.Join(s, "")
+}
func (this *PrometheusResponse) GoString() string {
if this == nil {
return "nil"
}
- s := make([]string, 0, 8)
+ s := make([]string, 0, 9)
s = append(s, "&queryrange.PrometheusResponse{")
s = append(s, "Status: "+fmt.Sprintf("%#v", this.Status)+",\n")
s = append(s, "Data: "+strings.Replace(this.Data.GoString(), `&`, ``, 1)+",\n")
s = append(s, "ErrorType: "+fmt.Sprintf("%#v", this.ErrorType)+",\n")
s = append(s, "Error: "+fmt.Sprintf("%#v", this.Error)+",\n")
+ if this.Headers != nil {
+ s = append(s, "Headers: "+fmt.Sprintf("%#v", this.Headers)+",\n")
+ }
s = append(s, "}")
return strings.Join(s, "")
}
@@ -817,6 +935,45 @@ func (m *PrometheusRequest) MarshalTo(dAtA []byte) (int, error) {
return i, nil
}
+func (m *PrometheusResponseHeader) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalTo(dAtA)
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *PrometheusResponseHeader) MarshalTo(dAtA []byte) (int, error) {
+ var i int
+ _ = i
+ var l int
+ _ = l
+ if len(m.Name) > 0 {
+ dAtA[i] = 0xa
+ i++
+ i = encodeVarintQueryrange(dAtA, i, uint64(len(m.Name)))
+ i += copy(dAtA[i:], m.Name)
+ }
+ if len(m.Values) > 0 {
+ for _, s := range m.Values {
+ dAtA[i] = 0x12
+ i++
+ l = len(s)
+ for l >= 1<<7 {
+ dAtA[i] = uint8(uint64(l)&0x7f | 0x80)
+ l >>= 7
+ i++
+ }
+ dAtA[i] = uint8(l)
+ i++
+ i += copy(dAtA[i:], s)
+ }
+ }
+ return i, nil
+}
+
func (m *PrometheusResponse) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -858,6 +1015,18 @@ func (m *PrometheusResponse) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintQueryrange(dAtA, i, uint64(len(m.Error)))
i += copy(dAtA[i:], m.Error)
}
+ if len(m.Headers) > 0 {
+ for _, msg := range m.Headers {
+ dAtA[i] = 0x2a
+ i++
+ i = encodeVarintQueryrange(dAtA, i, uint64(msg.Size()))
+ n, err := msg.MarshalTo(dAtA[i:])
+ if err != nil {
+ return 0, err
+ }
+ i += n
+ }
+ }
return i, nil
}
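
The `0x2a` byte written before each repeated `Headers` element above is not arbitrary: a protobuf field key is `(field_number << 3) | wire_type`, and `Headers` is field 5 with wire type 2 (length-delimited). A tiny standalone check, stdlib only:

```go
package main

import "fmt"

func main() {
	const (
		fieldNum = 5 // Headers is field 5 in PrometheusResponse
		wireType = 2 // length-delimited (embedded message)
	)
	// A protobuf key is (field_number << 3) | wire_type, varint-encoded.
	key := fieldNum<<3 | wireType
	fmt.Printf("0x%x\n", key) // prints 0x2a, matching the generated marshaller
}
```
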
@@ -1056,6 +1225,25 @@ func (m *PrometheusRequest) Size() (n int) {
return n
}
+func (m *PrometheusResponseHeader) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Name)
+ if l > 0 {
+ n += 1 + l + sovQueryrange(uint64(l))
+ }
+ if len(m.Values) > 0 {
+ for _, s := range m.Values {
+ l = len(s)
+ n += 1 + l + sovQueryrange(uint64(l))
+ }
+ }
+ return n
+}
+
func (m *PrometheusResponse) Size() (n int) {
if m == nil {
return 0
@@ -1076,6 +1264,12 @@ func (m *PrometheusResponse) Size() (n int) {
if l > 0 {
n += 1 + l + sovQueryrange(uint64(l))
}
+ if len(m.Headers) > 0 {
+ for _, e := range m.Headers {
+ l = e.Size()
+ n += 1 + l + sovQueryrange(uint64(l))
+ }
+ }
return n
}
@@ -1189,6 +1383,17 @@ func (this *PrometheusRequest) String() string {
}, "")
return s
}
+func (this *PrometheusResponseHeader) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&PrometheusResponseHeader{`,
+ `Name:` + fmt.Sprintf("%v", this.Name) + `,`,
+ `Values:` + fmt.Sprintf("%v", this.Values) + `,`,
+ `}`,
+ }, "")
+ return s
+}
func (this *PrometheusResponse) String() string {
if this == nil {
return "nil"
@@ -1198,6 +1403,7 @@ func (this *PrometheusResponse) String() string {
`Data:` + strings.Replace(strings.Replace(this.Data.String(), "PrometheusData", "PrometheusData", 1), `&`, ``, 1) + `,`,
`ErrorType:` + fmt.Sprintf("%v", this.ErrorType) + `,`,
`Error:` + fmt.Sprintf("%v", this.Error) + `,`,
+ `Headers:` + strings.Replace(fmt.Sprintf("%v", this.Headers), "PrometheusResponseHeader", "PrometheusResponseHeader", 1) + `,`,
`}`,
}, "")
return s
@@ -1463,6 +1669,123 @@ func (m *PrometheusRequest) Unmarshal(dAtA []byte) error {
}
return nil
}
+func (m *PrometheusResponseHeader) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowQueryrange
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: PrometheusResponseHeader: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: PrometheusResponseHeader: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowQueryrange
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Name = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Values", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowQueryrange
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Values = append(m.Values, string(dAtA[iNdEx:postIndex]))
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipQueryrange(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
func (m *PrometheusResponse) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
@@ -1621,6 +1944,40 @@ func (m *PrometheusResponse) Unmarshal(dAtA []byte) error {
}
m.Error = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Headers", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowQueryrange
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthQueryrange
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Headers = append(m.Headers, &PrometheusResponseHeader{})
+ if err := m.Headers[len(m.Headers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipQueryrange(dAtA[iNdEx:])
diff --git a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.proto b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.proto
index 9761905265ad9..e6b8dc8ff71c6 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.proto
+++ b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/queryrange.proto
@@ -21,11 +21,17 @@ message PrometheusRequest {
string query = 6;
}
+message PrometheusResponseHeader {
+ string Name = 1 [(gogoproto.jsontag) = "-"];
+ repeated string Values = 2 [(gogoproto.jsontag) = "-"];
+}
+
message PrometheusResponse {
string Status = 1 [(gogoproto.jsontag) = "status"];
PrometheusData Data = 2 [(gogoproto.nullable) = false, (gogoproto.jsontag) = "data,omitempty"];
string ErrorType = 3 [(gogoproto.jsontag) = "errorType,omitempty"];
string Error = 4 [(gogoproto.jsontag) = "error,omitempty"];
+ repeated PrometheusResponseHeader Headers = 5 [(gogoproto.jsontag) = "-"];
}
message PrometheusData {
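
The `(gogoproto.jsontag) = "-"` annotations mean the new header fields ride along in the protobuf encoding (and therefore in the results cache) but are stripped when the response is rendered as JSON for the API client. A rough sketch of that asymmetry, assuming the vendored package path is importable as-is:

```go
package main

import (
	"encoding/json"
	"fmt"

	// Assumed importable path for the generated types shown above.
	"github.com/cortexproject/cortex/pkg/querier/queryrange"
)

func main() {
	resp := &queryrange.PrometheusResponse{
		Status: "success",
		Headers: []*queryrange.PrometheusResponseHeader{
			{Name: "Cache-Control", Values: []string{"no-store"}},
		},
	}
	j, _ := json.Marshal(resp)
	fmt.Println(string(j)) // Headers is absent: its jsontag is "-"

	pb, _ := resp.Marshal()
	fmt.Println(len(pb) > 0) // the protobuf encoding does carry field 5
}
```
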
diff --git a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/results_cache.go b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/results_cache.go
index dde104fcdbe01..802cb32fdee71 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/results_cache.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/querier/queryrange/results_cache.go
@@ -22,6 +22,11 @@ import (
"github.com/cortexproject/cortex/pkg/util/spanlogger"
)
+var (
+	// noCacheValue is the value of cachecontrolHeader that indicates the response must not be cached.
+ noCacheValue = "no-store"
+)
+
// ResultsCacheConfig is the config for the results cache.
type ResultsCacheConfig struct {
CacheConfig cache.Config `yaml:"cache"`
@@ -59,6 +64,7 @@ var PrometheusResponseExtractor = ExtractorFunc(func(start, end int64, from Resp
ResultType: promRes.Data.ResultType,
Result: extractMatrix(start, end, promRes.Data.Result),
},
+ Headers: promRes.Headers,
}
})
@@ -133,12 +139,41 @@ func (s resultsCache) Do(ctx context.Context, r Request) (Response, error) {
return response, err
}
+// shouldCacheResponse reports whether the response should be cached.
+func shouldCacheResponse(r Response) bool {
+ if promResp, ok := r.(*PrometheusResponse); ok {
+ shouldCache := true
+ outer:
+ for _, hv := range promResp.Headers {
+ if hv == nil {
+ continue
+ }
+ if hv.Name != cachecontrolHeader {
+ continue
+ }
+ for _, v := range hv.Values {
+ if v == noCacheValue {
+ shouldCache = false
+ break outer
+ }
+ }
+ }
+ return shouldCache
+ }
+ return true
+}
+
func (s resultsCache) handleMiss(ctx context.Context, r Request) (Response, []Extent, error) {
response, err := s.next.Do(ctx, r)
if err != nil {
return nil, nil, err
}
+ if !shouldCacheResponse(response) {
+ level.Debug(s.logger).Log("msg", fmt.Sprintf("%s header in response is equal to %s, not caching the response", cachecontrolHeader, noCacheValue))
+ return response, []Extent{}, nil
+ }
+
extent, err := toExtent(ctx, r, response)
if err != nil {
return nil, nil, err
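
In other words, handleMiss now honours a `Cache-Control: no-store` response header from downstream by returning the response with an empty extent list, so nothing is written back to the cache. A minimal sketch of just that decision, with the package's `cachecontrolHeader` constant assumed to equal "Cache-Control":

```go
package main

import "fmt"

// shouldCache mirrors the header check above; cachecontrolHeader is assumed
// to be the "Cache-Control" constant defined elsewhere in the package.
func shouldCache(headers map[string][]string) bool {
	for _, v := range headers["Cache-Control"] {
		if v == "no-store" { // noCacheValue
			return false
		}
	}
	return true
}

func main() {
	h := map[string][]string{"Cache-Control": {"no-store"}}
	fmt.Println(shouldCache(h)) // false: the response bypasses the cache
}
```
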
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/flush.go b/vendor/github.com/cortexproject/cortex/pkg/ring/flush.go
index ebf97960dee6f..7760fd8f6c4f5 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/flush.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/flush.go
@@ -1,6 +1,12 @@
package ring
-import "context"
+import (
+ "context"
+ "errors"
+)
+
+// ErrTransferDisabled is the error returned by TransferOut when the transfers are disabled.
+var ErrTransferDisabled = errors.New("transfers disabled")
// FlushTransferer controls the shutdown of an instance in the ring.
type FlushTransferer interface {
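
`ErrTransferDisabled` is a sentinel: a `FlushTransferer` can return it from `TransferOut` to tell the lifecycler (see the lifecycler.go hunk further down) that skipping the transfer is intentional, not a failure. A hedged sketch of such an implementation, with the interface's other methods elided via embedding:

```go
package main

import (
	"context"

	"github.com/cortexproject/cortex/pkg/ring"
)

// noopTransferer opts out of hand-over; only TransferOut is sketched here,
// the remaining FlushTransferer methods are satisfied by the embedded field.
type noopTransferer struct {
	ring.FlushTransferer // sketch only: supplies the rest of the interface
}

func (noopTransferer) TransferOut(ctx context.Context) error {
	// The lifecycler logs this at info level and skips the failure metric.
	return ring.ErrTransferDisabled
}
```
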
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/client.go b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/client.go
index ba3317365ccdf..269c7d9da327c 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/client.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/client.go
@@ -20,16 +20,24 @@ import (
var inmemoryStoreInit sync.Once
var inmemoryStore Client
-// Config is config for a KVStore currently used by ring and HA tracker,
-// where store can be consul or inmemory.
-type Config struct {
- Store string `yaml:"store,omitempty"`
+// StoreConfig is a configuration used for building a single store client, either
+// Consul, Etcd, Memberlist or MultiClient. It was extracted from Config to keep
+// the single-client config separate from the final client config (with all the wrappers).
+type StoreConfig struct {
Consul consul.Config `yaml:"consul,omitempty"`
Etcd etcd.Config `yaml:"etcd,omitempty"`
Memberlist memberlist.Config `yaml:"memberlist,omitempty"`
- Prefix string `yaml:"prefix,omitempty"`
+ Multi MultiConfig `yaml:"multi,omitempty"`
+}
+
+// Config is config for a KVStore currently used by ring and HA tracker,
+// where store can be consul or inmemory.
+type Config struct {
+ Store string `yaml:"store,omitempty"`
+ Prefix string `yaml:"prefix,omitempty"`
+ StoreConfig `yaml:",inline"`
- Mock Client
+ Mock Client `yaml:"-"`
}
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet.
@@ -37,19 +45,21 @@ type Config struct {
// store flag with the prefix ring, so ring.store. For everything else we pass the prefix
// to the Consul flags.
// If prefix is not an empty string it should end with a period.
-func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
+func (cfg *Config) RegisterFlagsWithPrefix(flagsPrefix, defaultPrefix string, f *flag.FlagSet) {
// We need Consul flags to not have the ring prefix to maintain compatibility.
// This needs to be fixed in the future (1.0 release maybe?) when we normalize flags.
// At the moment we have consul.<flag-name>, and ring.store, going forward it would
// be easier to have everything under ring, so ring.consul.<flag-name>
- cfg.Consul.RegisterFlags(f, prefix)
- cfg.Etcd.RegisterFlagsWithPrefix(f, prefix)
- cfg.Memberlist.RegisterFlags(f, prefix)
- if prefix == "" {
- prefix = "ring."
+ cfg.Consul.RegisterFlags(f, flagsPrefix)
+ cfg.Etcd.RegisterFlagsWithPrefix(f, flagsPrefix)
+ cfg.Multi.RegisterFlagsWithPrefix(f, flagsPrefix)
+ cfg.Memberlist.RegisterFlags(f, flagsPrefix)
+
+ if flagsPrefix == "" {
+ flagsPrefix = "ring."
}
- f.StringVar(&cfg.Prefix, prefix+"prefix", "collectors/", "The prefix for the keys in the store. Should end with a /.")
- f.StringVar(&cfg.Store, prefix+"store", "consul", "Backend storage to use for the ring (consul, etcd, inmemory, memberlist [experimental]).")
+ f.StringVar(&cfg.Prefix, flagsPrefix+"prefix", defaultPrefix, "The prefix for the keys in the store. Should end with a /.")
+ f.StringVar(&cfg.Store, flagsPrefix+"store", "consul", "Backend storage to use for the ring. Supported values are: consul, etcd, inmemory, multi, memberlist (experimental).")
}
// Client is a high-level client for key-value stores (such as Etcd and
@@ -86,10 +96,14 @@ func NewClient(cfg Config, codec codec.Codec) (Client, error) {
return cfg.Mock, nil
}
+ return createClient(cfg.Store, cfg.Prefix, cfg.StoreConfig, codec)
+}
+
+func createClient(name string, prefix string, cfg StoreConfig, codec codec.Codec) (Client, error) {
var client Client
var err error
- switch cfg.Store {
+ switch name {
case "consul":
client, err = consul.NewClient(cfg.Consul, codec)
@@ -108,17 +122,49 @@ func NewClient(cfg Config, codec codec.Codec) (Client, error) {
cfg.Memberlist.MetricsRegisterer = prometheus.DefaultRegisterer
client, err = memberlist.NewMemberlistClient(cfg.Memberlist, codec)
+ case "multi":
+ client, err = buildMultiClient(cfg, codec)
+
default:
- return nil, fmt.Errorf("invalid KV store type: %s", cfg.Store)
+ return nil, fmt.Errorf("invalid KV store type: %s", name)
}
if err != nil {
return nil, err
}
- if cfg.Prefix != "" {
- client = PrefixClient(client, cfg.Prefix)
+ if prefix != "" {
+ client = PrefixClient(client, prefix)
}
return metrics{client}, nil
}
+
+func buildMultiClient(cfg StoreConfig, codec codec.Codec) (Client, error) {
+ if cfg.Multi.Primary == "" || cfg.Multi.Secondary == "" {
+ return nil, fmt.Errorf("primary or secondary store not set")
+ }
+ if cfg.Multi.Primary == "multi" || cfg.Multi.Secondary == "multi" {
+ return nil, fmt.Errorf("primary and secondary stores cannot be multi-stores")
+ }
+ if cfg.Multi.Primary == cfg.Multi.Secondary {
+ return nil, fmt.Errorf("primary and secondary stores must be different")
+ }
+
+ primary, err := createClient(cfg.Multi.Primary, "", cfg, codec)
+ if err != nil {
+ return nil, err
+ }
+
+ secondary, err := createClient(cfg.Multi.Secondary, "", cfg, codec)
+ if err != nil {
+ return nil, err
+ }
+
+ clients := []kvclient{
+ {client: primary, name: cfg.Multi.Primary},
+ {client: secondary, name: cfg.Multi.Secondary},
+ }
+
+ return NewMultiClient(cfg.Multi, clients), nil
+}
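
After this refactor a mirrored two-store client goes through the same `NewClient` entry point as everything else; the multi-specific settings live on the embedded `StoreConfig`. A hedged sketch (the codec is whatever codec the caller already uses for the ring):

```go
package main

import (
	"github.com/cortexproject/cortex/pkg/ring/kv"
	"github.com/cortexproject/cortex/pkg/ring/kv/codec"
)

// newMultiKV builds a mirrored consul->etcd client through the usual entry
// point; the codec is left to the caller.
func newMultiKV(c codec.Codec) (kv.Client, error) {
	var cfg kv.Config
	cfg.Store = "multi"
	cfg.Prefix = "collectors/"
	cfg.Multi.Primary = "consul" // must differ from Secondary; neither may be "multi"
	cfg.Multi.Secondary = "etcd"
	cfg.Multi.MirrorEnabled = true // successful CAS writes are also sent to etcd
	return kv.NewClient(cfg, c)
}
```
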
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/memberlist_client.go b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/memberlist_client.go
index 44a2dbe28cc43..6958f9151ab87 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/memberlist_client.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/memberlist_client.go
@@ -50,14 +50,15 @@ type Config struct {
TCPTransport TCPTransportConfig `yaml:",inline"`
// Where to put custom metrics. Metrics are not registered, if this is nil.
- MetricsRegisterer prometheus.Registerer
- MetricsNamespace string
+ MetricsRegisterer prometheus.Registerer `yaml:"-"`
+ MetricsNamespace string `yaml:"-"`
}
// RegisterFlags registers flags.
func (cfg *Config) RegisterFlags(f *flag.FlagSet, prefix string) {
// "Defaults to hostname" -- memberlist sets it to hostname by default.
f.StringVar(&cfg.NodeName, prefix+"memberlist.nodename", "", "Name of the node in memberlist cluster. Defaults to hostname.") // memberlist.DefaultLANConfig will put hostname here.
+ f.DurationVar(&cfg.StreamTimeout, prefix+"memberlist.stream-timeout", 0, "The timeout for establishing a connection with a remote node, and for read/write operations. Uses memberlist LAN defaults if 0.")
f.IntVar(&cfg.RetransmitMult, prefix+"memberlist.retransmit-factor", 0, "Multiplication factor used when sending out messages (factor * log(N+1)).")
f.Var(&cfg.JoinMembers, prefix+"memberlist.join", "Other cluster members to join. Can be specified multiple times. Memberlist store is EXPERIMENTAL.")
f.BoolVar(&cfg.AbortIfJoinFails, prefix+"memberlist.abort-if-join-fails", true, "If this node fails to join memberlist cluster, abort.")
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/tcp_transport.go b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/tcp_transport.go
index 33af0aae121ec..2abf8949ac422 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/tcp_transport.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/memberlist/tcp_transport.go
@@ -49,14 +49,14 @@ type TCPTransportConfig struct {
// WriteTo is used to send "UDP" packets. Since we use TCP, we can detect more errors,
// but memberlist doesn't seem to cope with that very well.
- ReportWriteToErrors bool
+ ReportWriteToErrors bool `yaml:"-"`
// Transport logs lot of messages at debug level, so it deserves an extra flag for turning it on
- TransportDebug bool
+ TransportDebug bool `yaml:"-"`
// Where to put custom metrics. nil = don't register.
- MetricsRegisterer prometheus.Registerer
- MetricsNamespace string
+ MetricsRegisterer prometheus.Registerer `yaml:"-"`
+ MetricsNamespace string `yaml:"-"`
}
// RegisterFlags registers flags.
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/kv/multi.go b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/multi.go
new file mode 100644
index 0000000000000..73c141947917f
--- /dev/null
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/kv/multi.go
@@ -0,0 +1,361 @@
+package kv
+
+import (
+ "context"
+ "flag"
+ "fmt"
+ "sync"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/util"
+ "github.com/go-kit/kit/log"
+ "github.com/prometheus/client_golang/prometheus"
+
+ "github.com/go-kit/kit/log/level"
+ "github.com/uber-go/atomic"
+)
+
+var (
+ primaryStoreGauge = prometheus.NewGaugeVec(prometheus.GaugeOpts{
+ Name: "cortex_multikv_primary_store",
+ Help: "Selected primary KV store",
+ }, []string{"store"})
+
+ mirrorEnabledGauge = prometheus.NewGauge(prometheus.GaugeOpts{
+ Name: "cortex_multikv_mirror_enabled",
+ Help: "Is mirroring to secondary store enabled",
+ })
+
+ mirrorWritesCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "cortex_multikv_mirror_writes_total",
+ Help: "Number of mirror-writes to secondary store",
+ })
+
+ mirrorFailuresCounter = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "cortex_multikv_mirror_write_errors_total",
+ Help: "Number of failures to mirror-write to secondary store",
+ })
+)
+
+func init() {
+ prometheus.MustRegister(primaryStoreGauge, mirrorEnabledGauge, mirrorWritesCounter, mirrorFailuresCounter)
+}
+
+// MultiConfig is a configuration for MultiClient.
+type MultiConfig struct {
+ Primary string `yaml:"primary"`
+ Secondary string `yaml:"secondary"`
+
+ MirrorEnabled bool `yaml:"mirror_enabled"`
+ MirrorTimeout time.Duration `yaml:"mirror_timeout"`
+
+ // ConfigProvider returns channel with MultiRuntimeConfig updates.
+ ConfigProvider func() <-chan MultiRuntimeConfig `yaml:"-"`
+}
+
+// RegisterFlagsWithPrefix registers flags with prefix.
+func (cfg *MultiConfig) RegisterFlagsWithPrefix(f *flag.FlagSet, prefix string) {
+ f.StringVar(&cfg.Primary, prefix+"multi.primary", "", "Primary backend storage used by multi-client.")
+ f.StringVar(&cfg.Secondary, prefix+"multi.secondary", "", "Secondary backend storage used by multi-client.")
+ f.BoolVar(&cfg.MirrorEnabled, prefix+"multi.mirror-enabled", false, "Mirror writes to secondary store.")
+ f.DurationVar(&cfg.MirrorTimeout, prefix+"multi.mirror-timeout", 2*time.Second, "Timeout for storing value to secondary store.")
+}
+
+// MultiRuntimeConfig has values that can change at runtime (via overrides).
+type MultiRuntimeConfig struct {
+ // Primary store used by MultiClient. Can be updated in runtime to switch to a different store (eg. consul -> etcd,
+ // or to gossip). Doing this allows nice migration between stores. Empty values are ignored.
+ PrimaryStore string `yaml:"primary"`
+
+ // Mirroring enabled or not. Nil = no change.
+ Mirroring *bool `yaml:"mirror_enabled"`
+}
+
+type kvclient struct {
+ client Client
+ name string
+}
+
+type clientInProgress struct {
+ client int
+ cancel context.CancelFunc
+}
+
+// MultiClient implements kv.Client by forwarding all API calls to primary client.
+// Writes performed via CAS method are also (optionally) forwarded to secondary clients.
+type MultiClient struct {
+ // Available KV clients
+ clients []kvclient
+
+ mirrorTimeout time.Duration
+ mirroringEnabled *atomic.Bool
+
+ // logger with "multikv" component
+ logger log.Logger
+
+ // The primary client used for interaction.
+ primaryID *atomic.Int32
+
+ cancel context.CancelFunc
+
+ inProgressMu sync.Mutex
+ // Cancel functions for ongoing operations. key is a value from inProgressCnt.
+ // What we really need is a []context.CancelFunc, but functions cannot be compared against each other using ==,
+ // so we use this map instead.
+ inProgress map[int]clientInProgress
+ inProgressCnt int
+}
+
+// NewMultiClient creates new MultiClient with given KV Clients.
+// First client in the slice is the primary client.
+func NewMultiClient(cfg MultiConfig, clients []kvclient) *MultiClient {
+ c := &MultiClient{
+ clients: clients,
+ primaryID: atomic.NewInt32(0),
+ inProgress: map[int]clientInProgress{},
+
+ mirrorTimeout: cfg.MirrorTimeout,
+ mirroringEnabled: atomic.NewBool(cfg.MirrorEnabled),
+
+ logger: log.With(util.Logger, "component", "multikv"),
+ }
+
+ ctx, cancelFn := context.WithCancel(context.Background())
+ c.cancel = cancelFn
+
+ if cfg.ConfigProvider != nil {
+ go c.watchConfigChannel(ctx, cfg.ConfigProvider())
+ }
+
+ c.updatePrimaryStoreGauge()
+ c.updateMirrorEnabledGauge()
+ return c
+}
+
+func (m *MultiClient) watchConfigChannel(ctx context.Context, configChannel <-chan MultiRuntimeConfig) {
+ for {
+ select {
+ case cfg, ok := <-configChannel:
+ if !ok {
+ return
+ }
+
+ if cfg.Mirroring != nil {
+ enabled := *cfg.Mirroring
+ old := m.mirroringEnabled.Swap(enabled)
+ if old != enabled {
+ level.Info(m.logger).Log("msg", "toggled mirroring", "enabled", enabled)
+ }
+ m.updateMirrorEnabledGauge()
+ }
+
+ if cfg.PrimaryStore != "" {
+ switched, err := m.setNewPrimaryClient(cfg.PrimaryStore)
+ if switched {
+ level.Info(m.logger).Log("msg", "switched primary KV store", "primary", cfg.PrimaryStore)
+ }
+ if err != nil {
+ level.Error(m.logger).Log("msg", "failed to switch primary KV store", "primary", cfg.PrimaryStore, "err", err)
+ }
+ }
+
+ case <-ctx.Done():
+ return
+ }
+ }
+}
+
+func (m *MultiClient) getPrimaryClient() (int, kvclient) {
+ v := m.primaryID.Load()
+ return int(v), m.clients[v]
+}
+
+// returns true, if primary client has changed
+func (m *MultiClient) setNewPrimaryClient(store string) (bool, error) {
+ newPrimaryIx := -1
+ for ix, c := range m.clients {
+ if c.name == store {
+ newPrimaryIx = ix
+ break
+ }
+ }
+
+ if newPrimaryIx < 0 {
+ return false, fmt.Errorf("KV store not found")
+ }
+
+ prev := int(m.primaryID.Swap(int32(newPrimaryIx)))
+ if prev == newPrimaryIx {
+ return false, nil
+ }
+
+ defer m.updatePrimaryStoreGauge() // do as the last thing, after releasing the lock
+
+ // switching to new primary... cancel clients using previous one
+ m.inProgressMu.Lock()
+ defer m.inProgressMu.Unlock()
+
+ for _, inp := range m.inProgress {
+ if inp.client == prev {
+ inp.cancel()
+ }
+ }
+ return true, nil
+}
+
+func (m *MultiClient) updatePrimaryStoreGauge() {
+ _, pkv := m.getPrimaryClient()
+
+ for _, kv := range m.clients {
+ value := float64(0)
+ if pkv == kv {
+ value = 1
+ }
+
+ primaryStoreGauge.WithLabelValues(kv.name).Set(value)
+ }
+}
+
+func (m *MultiClient) updateMirrorEnabledGauge() {
+ if m.mirroringEnabled.Load() {
+ mirrorEnabledGauge.Set(1)
+ } else {
+ mirrorEnabledGauge.Set(0)
+ }
+}
+
+func (m *MultiClient) registerCancelFn(clientID int, fn context.CancelFunc) int {
+ m.inProgressMu.Lock()
+ defer m.inProgressMu.Unlock()
+
+ m.inProgressCnt++
+ id := m.inProgressCnt
+ m.inProgress[id] = clientInProgress{client: clientID, cancel: fn}
+ return id
+}
+
+func (m *MultiClient) unregisterCancelFn(id int) {
+ m.inProgressMu.Lock()
+ defer m.inProgressMu.Unlock()
+
+ delete(m.inProgress, id)
+}
+
+// Runs supplied fn with current primary client. If primary client changes, fn is restarted.
+// When fn finishes (with or without error), this method returns given error value.
+func (m *MultiClient) runWithPrimaryClient(origCtx context.Context, fn func(newCtx context.Context, primary kvclient) error) error {
+ cancelFn := context.CancelFunc(nil)
+ cancelFnID := 0
+
+ cleanup := func() {
+ if cancelFn != nil {
+ cancelFn()
+ }
+ if cancelFnID > 0 {
+ m.unregisterCancelFn(cancelFnID)
+ }
+ }
+
+ defer cleanup()
+
+ // This only loops if switchover to a new primary backend happens while calling 'fn', which is very rare.
+ for {
+ cleanup()
+ pid, kv := m.getPrimaryClient()
+
+ var cancelCtx context.Context
+ cancelCtx, cancelFn = context.WithCancel(origCtx)
+ cancelFnID = m.registerCancelFn(pid, cancelFn)
+
+ err := fn(cancelCtx, kv)
+
+ if err == nil {
+ return nil
+ }
+
+ if cancelCtx.Err() == context.Canceled && origCtx.Err() == nil {
+ // our context was cancelled, but outer context is not done yet. retry
+ continue
+ }
+
+ return err
+ }
+}
+
+// Get is a part of kv.Client interface.
+func (m *MultiClient) Get(ctx context.Context, key string) (interface{}, error) {
+ _, kv := m.getPrimaryClient()
+ val, err := kv.client.Get(ctx, key)
+ return val, err
+}
+
+// CAS is a part of kv.Client interface.
+func (m *MultiClient) CAS(ctx context.Context, key string, f func(in interface{}) (out interface{}, retry bool, err error)) error {
+ _, kv := m.getPrimaryClient()
+
+ updatedValue := interface{}(nil)
+ err := kv.client.CAS(ctx, key, func(in interface{}) (interface{}, bool, error) {
+ out, retry, err := f(in)
+ updatedValue = out
+ return out, retry, err
+ })
+
+ if err == nil && updatedValue != nil && m.mirroringEnabled.Load() {
+ m.writeToSecondary(ctx, kv, key, updatedValue)
+ }
+
+ return err
+}
+
+// WatchKey is a part of kv.Client interface.
+func (m *MultiClient) WatchKey(ctx context.Context, key string, f func(interface{}) bool) {
+ _ = m.runWithPrimaryClient(ctx, func(newCtx context.Context, primary kvclient) error {
+ primary.client.WatchKey(newCtx, key, f)
+ return newCtx.Err()
+ })
+}
+
+// WatchPrefix is a part of kv.Client interface.
+func (m *MultiClient) WatchPrefix(ctx context.Context, prefix string, f func(string, interface{}) bool) {
+ _ = m.runWithPrimaryClient(ctx, func(newCtx context.Context, primary kvclient) error {
+ primary.client.WatchPrefix(newCtx, prefix, f)
+ return newCtx.Err()
+ })
+}
+
+func (m *MultiClient) writeToSecondary(ctx context.Context, primary kvclient, key string, newValue interface{}) {
+ if m.mirrorTimeout > 0 {
+ var cfn context.CancelFunc
+ ctx, cfn = context.WithTimeout(ctx, m.mirrorTimeout)
+ defer cfn()
+ }
+
+ // let's propagate new value to all remaining clients
+ for _, kvc := range m.clients {
+ if kvc == primary {
+ continue
+ }
+
+ mirrorWritesCounter.Inc()
+ err := kvc.client.CAS(ctx, key, func(in interface{}) (out interface{}, retry bool, err error) {
+ // try once
+ return newValue, false, nil
+ })
+
+ if err != nil {
+ mirrorFailuresCounter.Inc()
+ level.Warn(m.logger).Log("msg", "failed to update value in secondary store", "key", key, "err", err, "primary", primary.name, "secondary", kvc.name)
+ } else {
+ level.Debug(m.logger).Log("msg", "stored updated value to secondary store", "key", key, "primary", primary.name, "secondary", kvc.name)
+ }
+ }
+}
+
+// Stop the multiClient and all configured clients.
+func (m *MultiClient) Stop() {
+ m.cancel()
+
+ for _, kv := range m.clients {
+ kv.client.Stop()
+ }
+}
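
The runtime knobs are fed in through `ConfigProvider`: sending a `MultiRuntimeConfig` on the provided channel switches the primary store or toggles mirroring without a restart, which is the intended migration path (e.g. consul to etcd). A sketch of the wiring, with the client construction elided:

```go
package main

import "github.com/cortexproject/cortex/pkg/ring/kv"

func runtimeSwitch() chan<- kv.MultiRuntimeConfig {
	updates := make(chan kv.MultiRuntimeConfig, 1)

	var cfg kv.MultiConfig
	cfg.Primary = "consul"
	cfg.Secondary = "etcd"
	// The multi client reads runtime changes from this channel.
	cfg.ConfigProvider = func() <-chan kv.MultiRuntimeConfig { return updates }

	// ...pass cfg via kv.Config{Store: "multi", ...} to kv.NewClient as above...

	// Later: promote etcd to primary and enable mirroring, no restart needed.
	mirror := true
	updates <- kv.MultiRuntimeConfig{PrimaryStore: "etcd", Mirroring: &mirror}
	return updates
}
```
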
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/lifecycler.go b/vendor/github.com/cortexproject/cortex/pkg/ring/lifecycler.go
index 5fd70d7f3a02e..bd1841a43685f 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/lifecycler.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/lifecycler.go
@@ -43,7 +43,7 @@ type LifecyclerConfig struct {
RingConfig Config `yaml:"ring,omitempty"`
// Config for the ingester lifecycle control
- ListenPort *int
+ ListenPort *int `yaml:"-"`
NumTokens int `yaml:"num_tokens,omitempty"`
HeartbeatPeriod time.Duration `yaml:"heartbeat_period,omitempty"`
ObservePeriod time.Duration `yaml:"observe_period,omitempty"`
@@ -54,10 +54,10 @@ type LifecyclerConfig struct {
TokensFilePath string `yaml:"tokens_file_path,omitempty"`
// For testing, you can override the address and ID of this ingester
- Addr string `yaml:"address"`
- Port int
- ID string
- SkipUnregister bool
+ Addr string `yaml:"address" doc:"hidden"`
+ Port int `doc:"hidden"`
+ ID string `doc:"hidden"`
+ SkipUnregister bool `yaml:"-"`
// graveyard for unused flags.
UnusedFlag bool `yaml:"claim_on_rollout,omitempty"` // DEPRECATED - left for backwards-compatibility
@@ -119,6 +119,9 @@ type Lifecycler struct {
RingName string
RingKey string
+ // Whether to flush if transfer fails on shutdown.
+ flushOnShutdown bool
+
// We need to remember the ingester state just in case consul goes away and comes
// back empty. And it changes during lifecycle of ingester.
stateMtx sync.RWMutex
@@ -136,7 +139,8 @@ type Lifecycler struct {
}
// NewLifecycler makes and starts a new Lifecycler.
-func NewLifecycler(cfg LifecyclerConfig, flushTransferer FlushTransferer, ringName, ringKey string) (*Lifecycler, error) {
+func NewLifecycler(cfg LifecyclerConfig, flushTransferer FlushTransferer, ringName, ringKey string, flushOnShutdown bool) (*Lifecycler, error) {
+
addr := cfg.Addr
if addr == "" {
var err error
@@ -166,10 +170,11 @@ func NewLifecycler(cfg LifecyclerConfig, flushTransferer FlushTransferer, ringNa
flushTransferer: flushTransferer,
KVStore: store,
- Addr: fmt.Sprintf("%s:%d", addr, port),
- ID: cfg.ID,
- RingName: ringName,
- RingKey: ringKey,
+ Addr: fmt.Sprintf("%s:%d", addr, port),
+ ID: cfg.ID,
+ RingName: ringName,
+ RingKey: ringKey,
+ flushOnShutdown: flushOnShutdown,
quit: make(chan struct{}),
actorChan: make(chan func()),
@@ -497,6 +502,12 @@ func (i *Lifecycler) initRing(ctx context.Context) error {
return ringDesc, true, nil
}
+	// If the ingester failed to clean up its ring entry, it can leave its state in LEAVING.
+ // Move it into ACTIVE to ensure the ingester joins the ring.
+ if ingesterDesc.State == LEAVING && len(ingesterDesc.Tokens) == i.cfg.NumTokens {
+ ingesterDesc.State = ACTIVE
+ }
+
// We exist in the ring, so assume the ring is right and copy out tokens & state out of there.
i.setState(ingesterDesc.State)
tokens, _ := ringDesc.TokensFor(i.ID)
@@ -688,12 +699,27 @@ func (i *Lifecycler) updateCounters(ringDesc *Desc) {
i.countersLock.Unlock()
}
+// FlushOnShutdown returns whether chunks should be flushed if the transfer fails on shutdown.
+func (i *Lifecycler) FlushOnShutdown() bool {
+ return i.flushOnShutdown
+}
+
+// SetFlushOnShutdown enables/disables flush on shutdown if transfer fails.
+// Passing 'true' enables it, and 'false' disables it.
+func (i *Lifecycler) SetFlushOnShutdown(flushOnShutdown bool) {
+ i.flushOnShutdown = flushOnShutdown
+}
+
func (i *Lifecycler) processShutdown(ctx context.Context) {
- flushRequired := true
+ flushRequired := i.flushOnShutdown
transferStart := time.Now()
if err := i.flushTransferer.TransferOut(ctx); err != nil {
- level.Error(util.Logger).Log("msg", "Failed to transfer chunks to another instance", "ring", i.RingName, "err", err)
- shutdownDuration.WithLabelValues("transfer", "fail", i.RingName).Observe(time.Since(transferStart).Seconds())
+ if err == ErrTransferDisabled {
+ level.Info(util.Logger).Log("msg", "transfers are disabled")
+ } else {
+ level.Error(util.Logger).Log("msg", "failed to transfer chunks to another instance", "ring", i.RingName, "err", err)
+ shutdownDuration.WithLabelValues("transfer", "fail", i.RingName).Observe(time.Since(transferStart).Seconds())
+ }
} else {
flushRequired = false
shutdownDuration.WithLabelValues("transfer", "success", i.RingName).Observe(time.Since(transferStart).Seconds())
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/model.go b/vendor/github.com/cortexproject/cortex/pkg/ring/model.go
index 191f7c5c3fef3..7b36a68258594 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/model.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/model.go
@@ -171,7 +171,7 @@ func (i *IngesterDesc) IsHealthy(op Operation, heartbeatTimeout time.Duration) b
healthy = true
}
- return healthy && time.Now().Sub(time.Unix(i.Timestamp, 0)) <= heartbeatTimeout
+ return healthy && time.Since(time.Unix(i.Timestamp, 0)) <= heartbeatTimeout
}
// Merge merges other ring into this one. Returns sub-ring that represents the change,
diff --git a/vendor/github.com/cortexproject/cortex/pkg/ring/ring.go b/vendor/github.com/cortexproject/cortex/pkg/ring/ring.go
index 986f1a57c7d5b..c6a37ecc6e87d 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/ring/ring.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/ring/ring.go
@@ -79,7 +79,7 @@ func (cfg *Config) RegisterFlags(f *flag.FlagSet) {
// RegisterFlagsWithPrefix adds the flags required to config this to the given FlagSet with a specified prefix
func (cfg *Config) RegisterFlagsWithPrefix(prefix string, f *flag.FlagSet) {
- cfg.KVStore.RegisterFlagsWithPrefix(prefix, f)
+ cfg.KVStore.RegisterFlagsWithPrefix(prefix, "collectors/", f)
f.DurationVar(&cfg.HeartbeatTimeout, prefix+"ring.heartbeat-timeout", time.Minute, "The heartbeat timeout after which ingesters are skipped for reads/writes.")
f.IntVar(&cfg.ReplicationFactor, prefix+"distributor.replication-factor", 3, "The number of ingesters to write to and read from.")
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/flagext/ignored.go b/vendor/github.com/cortexproject/cortex/pkg/util/flagext/ignored.go
new file mode 100644
index 0000000000000..2dd49a87ebdd5
--- /dev/null
+++ b/vendor/github.com/cortexproject/cortex/pkg/util/flagext/ignored.go
@@ -0,0 +1,22 @@
+package flagext
+
+import (
+ "flag"
+)
+
+type ignoredFlag struct {
+ name string
+}
+
+func (ignoredFlag) String() string {
+ return "ignored"
+}
+
+func (d ignoredFlag) Set(string) error {
+ return nil
+}
+
+// IgnoredFlag registers a flag whose value is ignored, without any warning.
+func IgnoredFlag(f *flag.FlagSet, name, message string) {
+ f.Var(ignoredFlag{name}, name, message)
+}
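
The typical use is keeping a retired flag parseable so existing invocations don't break, much like the `claim_on_rollout` graveyard entry in the lifecycler config (the flag name below is illustrative). A minimal sketch:

```go
package main

import (
	"flag"

	"github.com/cortexproject/cortex/pkg/util/flagext"
)

func main() {
	fs := flag.NewFlagSet("cortex", flag.ContinueOnError)
	// Keep a retired flag parseable so old invocations don't fail.
	flagext.IgnoredFlag(fs, "ingester.claim-on-rollout", "DEPRECATED, ignored")
	_ = fs.Parse([]string{"-ingester.claim-on-rollout=true"}) // value is discarded
}
```
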
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/limiter/rate_limiter.go b/vendor/github.com/cortexproject/cortex/pkg/util/limiter/rate_limiter.go
index 9677d1eca9814..48fc6a4262375 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/util/limiter/rate_limiter.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/util/limiter/rate_limiter.go
@@ -99,7 +99,7 @@ func (l *RateLimiter) recheckTenantLimiter(now time.Time, tenantID string) *rate
l.tenantsLock.Lock()
defer l.tenantsLock.Unlock()
- entry, _ := l.tenants[tenantID]
+ entry := l.tenants[tenantID]
// We check again if the recheck period elapsed, cause it may
// have already been rechecked in the meanwhile.
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/metrics_helper.go b/vendor/github.com/cortexproject/cortex/pkg/util/metrics_helper.go
new file mode 100644
index 0000000000000..4957c40dede9c
--- /dev/null
+++ b/vendor/github.com/cortexproject/cortex/pkg/util/metrics_helper.go
@@ -0,0 +1,296 @@
+package util
+
+import (
+ "bytes"
+ "errors"
+ "fmt"
+
+ "github.com/go-kit/kit/log/level"
+ "github.com/prometheus/client_golang/prometheus"
+ dto "github.com/prometheus/client_model/go"
+)
+
+// MetricFamiliesPerUser is a collection of metrics gathered via calling the Gatherer.Gather() method on different
+// gatherers, one per user.
+// First key = userID, second key = metric name.
+// Value = slice of gathered values with the same metric name.
+type MetricFamiliesPerUser map[string]map[string]*dto.MetricFamily
+
+func BuildMetricFamiliesPerUserFromUserRegistries(regs map[string]*prometheus.Registry) MetricFamiliesPerUser {
+ data := MetricFamiliesPerUser{}
+ for userID, r := range regs {
+ m, err := r.Gather()
+ if err == nil {
+ err = data.AddGatheredDataForUser(userID, m)
+ }
+
+ if err != nil {
+ level.Warn(Logger).Log("msg", "failed to gather metrics from registry", "user", userID, "err", err)
+ continue
+ }
+ }
+ return data
+}
+
+// AddGatheredDataForUser adds the user-specific output of the Gatherer.Gather method.
+// Gatherer.Gather specifies that the returned metric families are uniquely named, and we use that fact here.
+// If they are not, this method returns an error.
+func (d MetricFamiliesPerUser) AddGatheredDataForUser(userID string, metrics []*dto.MetricFamily) error {
+ // Keeping map of metric name to its family makes it easier to do searches later.
+ perMetricName := map[string]*dto.MetricFamily{}
+
+ for _, m := range metrics {
+ name := m.GetName()
+ // these errors should never happen when passing Gatherer.Gather() output.
+ if name == "" {
+ return errors.New("empty name for metric family")
+ }
+ if perMetricName[name] != nil {
+ return fmt.Errorf("non-unique name for metric family: %q", name)
+ }
+
+ perMetricName[name] = m
+ }
+
+ d[userID] = perMetricName
+ return nil
+}
+
+func (d MetricFamiliesPerUser) SendSumOfCounters(out chan<- prometheus.Metric, desc *prometheus.Desc, counter string) {
+ result := float64(0)
+ for _, perMetric := range d {
+ result += sum(perMetric[counter], counterValue)
+ }
+
+ out <- prometheus.MustNewConstMetric(desc, prometheus.CounterValue, result)
+}
+
+func (d MetricFamiliesPerUser) SendSumOfCountersWithLabels(out chan<- prometheus.Metric, desc *prometheus.Desc, counter string, labelNames ...string) {
+ result := d.sumOfSingleValuesWithLabels(counter, counterValue, labelNames)
+ for _, cr := range result {
+ out <- prometheus.MustNewConstMetric(desc, prometheus.CounterValue, cr.value, cr.labelValues...)
+ }
+}
+
+func (d MetricFamiliesPerUser) SendSumOfCountersPerUser(out chan<- prometheus.Metric, desc *prometheus.Desc, counter string) {
+ for user, perMetric := range d {
+ v := sum(perMetric[counter], counterValue)
+
+ out <- prometheus.MustNewConstMetric(desc, prometheus.CounterValue, v, user)
+ }
+}
+
+func (d MetricFamiliesPerUser) SendSumOfGauges(out chan<- prometheus.Metric, desc *prometheus.Desc, gauge string) {
+ result := float64(0)
+ for _, perMetric := range d {
+ result += sum(perMetric[gauge], gaugeValue)
+ }
+ out <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, result)
+}
+
+func (d MetricFamiliesPerUser) SendSumOfGaugesWithLabels(out chan<- prometheus.Metric, desc *prometheus.Desc, gauge string, labelNames ...string) {
+ result := d.sumOfSingleValuesWithLabels(gauge, gaugeValue, labelNames)
+ for _, cr := range result {
+ out <- prometheus.MustNewConstMetric(desc, prometheus.GaugeValue, cr.value, cr.labelValues...)
+ }
+}
+
+type singleResult struct {
+ value float64
+ labelValues []string
+}
+
+func (d MetricFamiliesPerUser) sumOfSingleValuesWithLabels(metric string, fn func(*dto.Metric) float64, labelNames []string) map[string]singleResult {
+ result := map[string]singleResult{}
+
+ for _, userMetrics := range d {
+ metricsPerLabelValue := getMetricsWithLabelNames(userMetrics[metric], labelNames)
+
+ for key, mlv := range metricsPerLabelValue {
+ for _, m := range mlv.metrics {
+ r := result[key]
+ if r.labelValues == nil {
+ r.labelValues = mlv.labelValues
+ }
+
+ r.value += fn(m)
+ result[key] = r
+ }
+ }
+ }
+
+ return result
+}
+
+func (d MetricFamiliesPerUser) SendSumOfSummaries(out chan<- prometheus.Metric, desc *prometheus.Desc, summaryName string) {
+ var (
+ sampleCount uint64
+ sampleSum float64
+ quantiles map[float64]float64
+ )
+
+ for _, userMetrics := range d {
+ for _, m := range userMetrics[summaryName].GetMetric() {
+ summary := m.GetSummary()
+ sampleCount += summary.GetSampleCount()
+ sampleSum += summary.GetSampleSum()
+ quantiles = mergeSummaryQuantiles(quantiles, summary.GetQuantile())
+ }
+ }
+
+ out <- prometheus.MustNewConstSummary(desc, sampleCount, sampleSum, quantiles)
+}
+
+func (d MetricFamiliesPerUser) SendSumOfSummariesWithLabels(out chan<- prometheus.Metric, desc *prometheus.Desc, summaryName string, labelNames ...string) {
+ type summaryResult struct {
+ sampleCount uint64
+ sampleSum float64
+ quantiles map[float64]float64
+ labelValues []string
+ }
+
+ result := map[string]summaryResult{}
+
+ for _, userMetrics := range d {
+ metricsPerLabelValue := getMetricsWithLabelNames(userMetrics[summaryName], labelNames)
+
+ for key, mwl := range metricsPerLabelValue {
+ for _, m := range mwl.metrics {
+ r := result[key]
+ if r.labelValues == nil {
+ r.labelValues = mwl.labelValues
+ }
+
+ summary := m.GetSummary()
+ r.sampleCount += summary.GetSampleCount()
+ r.sampleSum += summary.GetSampleSum()
+ r.quantiles = mergeSummaryQuantiles(r.quantiles, summary.GetQuantile())
+
+ result[key] = r
+ }
+ }
+ }
+
+ for _, sr := range result {
+ out <- prometheus.MustNewConstSummary(desc, sr.sampleCount, sr.sampleSum, sr.quantiles, sr.labelValues...)
+ }
+}
+
+func (d MetricFamiliesPerUser) SendSumOfHistograms(out chan<- prometheus.Metric, desc *prometheus.Desc, histogramName string) {
+ var (
+ sampleCount uint64
+ sampleSum float64
+ buckets map[float64]uint64
+ )
+
+ for _, userMetrics := range d {
+ for _, m := range userMetrics[histogramName].GetMetric() {
+ histo := m.GetHistogram()
+ sampleCount += histo.GetSampleCount()
+ sampleSum += histo.GetSampleSum()
+ buckets = mergeHistogramBuckets(buckets, histo.GetBucket())
+ }
+ }
+
+ out <- prometheus.MustNewConstHistogram(desc, sampleCount, sampleSum, buckets)
+}
+
+func mergeSummaryQuantiles(quantiles map[float64]float64, summaryQuantiles []*dto.Quantile) map[float64]float64 {
+ if len(summaryQuantiles) == 0 {
+ return quantiles
+ }
+
+ out := quantiles
+ if out == nil {
+ out = map[float64]float64{}
+ }
+
+ for _, q := range summaryQuantiles {
+ // we assume that all summaries have same quantiles
+ out[q.GetQuantile()] += q.GetValue()
+ }
+ return out
+}
+
+func mergeHistogramBuckets(buckets map[float64]uint64, histogramBuckets []*dto.Bucket) map[float64]uint64 {
+ if len(histogramBuckets) == 0 {
+ return buckets
+ }
+
+ out := buckets
+ if out == nil {
+ out = map[float64]uint64{}
+ }
+
+ for _, q := range histogramBuckets {
+ // we assume that all histograms have same buckets
+ out[q.GetUpperBound()] += q.GetCumulativeCount()
+ }
+ return out
+}
+
+type metricsWithLabels struct {
+ labelValues []string
+ metrics []*dto.Metric
+}
+
+func getMetricsWithLabelNames(mf *dto.MetricFamily, labelNames []string) map[string]metricsWithLabels {
+ result := map[string]metricsWithLabels{}
+
+ for _, m := range mf.GetMetric() {
+ lbls, include := getLabelValues(m, labelNames)
+ if !include {
+ continue
+ }
+
+ key := getLabelsString(lbls)
+ r := result[key]
+ if r.labelValues == nil {
+ r.labelValues = lbls
+ }
+ r.metrics = append(r.metrics, m)
+ result[key] = r
+ }
+ return result
+}
+
+func getLabelValues(m *dto.Metric, labelNames []string) ([]string, bool) {
+ all := map[string]string{}
+ for _, lp := range m.GetLabel() {
+ all[lp.GetName()] = lp.GetValue()
+ }
+
+ result := make([]string, 0, len(labelNames))
+ for _, ln := range labelNames {
+ lv, ok := all[ln]
+ if !ok {
+ // required labels not found
+ return nil, false
+ }
+ result = append(result, lv)
+ }
+ return result, true
+}
+
+func getLabelsString(labelValues []string) string {
+ buf := bytes.Buffer{}
+ for _, v := range labelValues {
+ buf.WriteString(v)
+ buf.WriteByte(0) // separator, not used in prometheus labels
+ }
+ return buf.String()
+}
+
+// sum returns the sum of values from all metrics in the same metric family (i.e. series with
+// the same metric name but different labels). The supplied function extracts the value.
+func sum(mf *dto.MetricFamily, fn func(*dto.Metric) float64) float64 {
+ result := float64(0)
+ for _, m := range mf.GetMetric() {
+ result += fn(m)
+ }
+ return result
+}
+
+// This works even if m is nil, m.Counter is nil or m.Counter.Value is nil (it returns 0 in those cases)
+func counterValue(m *dto.Metric) float64 { return m.GetCounter().GetValue() }
+func gaugeValue(m *dto.Metric) float64 { return m.GetGauge().GetValue() }
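
The intended consumer of `MetricFamiliesPerUser` is a `prometheus.Collector` that owns one registry per tenant and exposes cluster-wide aggregates. A hedged sketch of the collect path (the metric and desc names are made up for illustration):

```go
package main

import (
	"github.com/cortexproject/cortex/pkg/util"
	"github.com/prometheus/client_golang/prometheus"
)

var ingestedSamples = prometheus.NewDesc(
	"cortex_ingested_samples_total", // illustrative name
	"Samples ingested, summed across all per-user registries.",
	nil, nil,
)

type userMetricsCollector struct {
	regs map[string]*prometheus.Registry // one registry per userID
}

func (c userMetricsCollector) Describe(out chan<- *prometheus.Desc) {
	out <- ingestedSamples
}

func (c userMetricsCollector) Collect(out chan<- prometheus.Metric) {
	data := util.BuildMetricFamiliesPerUserFromUserRegistries(c.regs)
	// Sums the counter with this name across every user's registry.
	data.SendSumOfCounters(out, ingestedSamples, "ingested_samples_total")
}
```
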
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/runtimeconfig/manager.go b/vendor/github.com/cortexproject/cortex/pkg/util/runtimeconfig/manager.go
new file mode 100644
index 0000000000000..3f3a8bd6758ba
--- /dev/null
+++ b/vendor/github.com/cortexproject/cortex/pkg/util/runtimeconfig/manager.go
@@ -0,0 +1,172 @@
+package runtimeconfig
+
+import (
+ "flag"
+ "sync"
+ "time"
+
+ "github.com/cortexproject/cortex/pkg/util"
+ "github.com/go-kit/kit/log/level"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/client_golang/prometheus/promauto"
+)
+
+var overridesReloadSuccess = promauto.NewGauge(prometheus.GaugeOpts{
+ Name: "cortex_overrides_last_reload_successful",
+ Help: "Whether the last config reload attempt was successful.",
+})
+
+// Loader loads the configuration from file.
+type Loader func(filename string) (interface{}, error)
+
+// ManagerConfig holds the config for a Manager instance.
+// It holds config related to loading per-tenant config.
+type ManagerConfig struct {
+ ReloadPeriod time.Duration `yaml:"period"`
+ LoadPath string `yaml:"file"`
+ Loader Loader `yaml:"-"`
+}
+
+// RegisterFlags registers flags.
+func (mc *ManagerConfig) RegisterFlags(f *flag.FlagSet) {
+ f.StringVar(&mc.LoadPath, "runtime-config.file", "", "File with the configuration that can be updated in runtime.")
+ f.DurationVar(&mc.ReloadPeriod, "runtime-config.reload-period", 10*time.Second, "How often to check runtime config file.")
+}
+
+// Manager periodically reloads the configuration from a file, and keeps this
+// configuration available for clients.
+type Manager struct {
+ cfg ManagerConfig
+ quit chan struct{}
+
+ listenersMtx sync.Mutex
+ listeners []chan interface{}
+
+ configMtx sync.RWMutex
+ config interface{}
+}
+
+// NewRuntimeConfigManager creates an instance of Manager and starts the config reload loop based on the given config.
+func NewRuntimeConfigManager(cfg ManagerConfig) (*Manager, error) {
+ mgr := Manager{
+ cfg: cfg,
+ quit: make(chan struct{}),
+ }
+
+ if cfg.LoadPath != "" {
+ if err := mgr.loadConfig(); err != nil {
+ // Log but don't stop on error - we don't want to halt all ingesters because of a typo
+ level.Error(util.Logger).Log("msg", "failed to load config", "err", err)
+ }
+ go mgr.loop()
+ } else {
+ level.Info(util.Logger).Log("msg", "runtime config disabled: file not specified")
+ }
+
+ return &mgr, nil
+}
+
+// CreateListenerChannel creates a new channel that can be used to receive new config values.
+// If there is no receiver waiting for a value when the config manager tries to send an update,
+// or the channel buffer is full, the update is discarded.
+//
+// When the config manager is stopped, it closes all channels to notify receivers that they will
+// not receive any more updates.
+func (om *Manager) CreateListenerChannel(buffer int) <-chan interface{} {
+ ch := make(chan interface{}, buffer)
+
+ om.listenersMtx.Lock()
+ defer om.listenersMtx.Unlock()
+
+ om.listeners = append(om.listeners, ch)
+ return ch
+}
+
+// CloseListenerChannel removes the given channel from the list of channels notifications are sent to, and closes it.
+func (om *Manager) CloseListenerChannel(listener <-chan interface{}) {
+ om.listenersMtx.Lock()
+ defer om.listenersMtx.Unlock()
+
+ for ix, ch := range om.listeners {
+ if ch == listener {
+ om.listeners = append(om.listeners[:ix], om.listeners[ix+1:]...)
+ close(ch)
+ break
+ }
+ }
+}
+
+func (om *Manager) loop() {
+ ticker := time.NewTicker(om.cfg.ReloadPeriod)
+ defer ticker.Stop()
+
+ for {
+ select {
+ case <-ticker.C:
+ err := om.loadConfig()
+ if err != nil {
+ // Log but don't stop on error - we don't want to halt all ingesters because of a typo
+ level.Error(util.Logger).Log("msg", "failed to load config", "err", err)
+ }
+ case <-om.quit:
+ return
+ }
+ }
+}
+
+// loadConfig loads the configuration using the loader function and, if successful,
+// stores it as the current configuration and notifies listeners.
+func (om *Manager) loadConfig() error {
+ cfg, err := om.cfg.Loader(om.cfg.LoadPath)
+ if err != nil {
+ overridesReloadSuccess.Set(0)
+ return err
+ }
+ overridesReloadSuccess.Set(1)
+
+ om.setConfig(cfg)
+ om.callListeners(cfg)
+
+ return nil
+}
+
+func (om *Manager) setConfig(config interface{}) {
+ om.configMtx.Lock()
+ defer om.configMtx.Unlock()
+ om.config = config
+}
+
+func (om *Manager) callListeners(newValue interface{}) {
+ om.listenersMtx.Lock()
+ defer om.listenersMtx.Unlock()
+
+ for _, ch := range om.listeners {
+ select {
+ case ch <- newValue:
+ // ok
+ default:
+ // nobody is listening or buffer full.
+ }
+ }
+}
+
+// Stop stops the Manager
+func (om *Manager) Stop() {
+ close(om.quit)
+
+ om.listenersMtx.Lock()
+ defer om.listenersMtx.Unlock()
+
+ for _, ch := range om.listeners {
+ close(ch)
+ }
+ om.listeners = nil
+}
+
+// GetConfig returns the last loaded config value, possibly nil.
+func (om *Manager) GetConfig() interface{} {
+ om.configMtx.RLock()
+ defer om.configMtx.RUnlock()
+
+ return om.config
+}
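For orientation, a hedged usage sketch of the Manager API introduced above; the runtimeCfg type, its loader, and the overrides.yaml path are hypothetical stand-ins, not part of Cortex.

package main

import (
	"fmt"
	"io/ioutil"
	"time"

	"github.com/cortexproject/cortex/pkg/util/runtimeconfig"
	"gopkg.in/yaml.v2"
)

// runtimeCfg is a hypothetical config shape for this sketch.
type runtimeCfg struct {
	Message string `yaml:"message"`
}

// loadRuntimeCfg satisfies the Loader signature: read the file, decode YAML.
func loadRuntimeCfg(filename string) (interface{}, error) {
	buf, err := ioutil.ReadFile(filename)
	if err != nil {
		return nil, err
	}
	cfg := &runtimeCfg{}
	if err := yaml.Unmarshal(buf, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	mgr, err := runtimeconfig.NewRuntimeConfigManager(runtimeconfig.ManagerConfig{
		ReloadPeriod: 10 * time.Second,
		LoadPath:     "overrides.yaml", // hypothetical path
		Loader:       loadRuntimeCfg,
	})
	if err != nil {
		panic(err)
	}
	defer mgr.Stop()

	// Buffer of 1: a slow reader may miss intermediate values but never
	// blocks the reload loop; Stop() closes the channel and ends the range.
	for cfg := range mgr.CreateListenerChannel(1) {
		fmt.Println("new runtime config:", cfg.(*runtimeCfg).Message)
	}
}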
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/validation/limits.go b/vendor/github.com/cortexproject/cortex/pkg/util/validation/limits.go
index 2eb60d9a1630e..bae57b70660ce 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/util/validation/limits.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/util/validation/limits.go
@@ -3,11 +3,8 @@ package validation
import (
"errors"
"flag"
- "os"
"time"
- "gopkg.in/yaml.v2"
-
"github.com/cortexproject/cortex/pkg/util/flagext"
)
@@ -55,7 +52,7 @@ type Limits struct {
MaxQueryParallelism int `yaml:"max_query_parallelism"`
CardinalityLimit int `yaml:"cardinality_limit"`
- // Config for overrides, convenient if it goes here.
+ // Config for overrides, convenient if it goes here. [Deprecated in favor of RuntimeConfig flag in cortex.Config]
PerTenantOverrideConfig string `yaml:"per_tenant_override_config"`
PerTenantOverridePeriod time.Duration `yaml:"per_tenant_override_period"`
}
@@ -90,8 +87,8 @@ func (l *Limits) RegisterFlags(f *flag.FlagSet) {
f.IntVar(&l.MaxQueryParallelism, "querier.max-query-parallelism", 14, "Maximum number of queries will be scheduled in parallel by the frontend.")
f.IntVar(&l.CardinalityLimit, "store.cardinality-limit", 1e5, "Cardinality limit for index queries.")
- f.StringVar(&l.PerTenantOverrideConfig, "limits.per-user-override-config", "", "File name of per-user overrides.")
- f.DurationVar(&l.PerTenantOverridePeriod, "limits.per-user-override-period", 10*time.Second, "Period with which to reload the overrides.")
+ f.StringVar(&l.PerTenantOverrideConfig, "limits.per-user-override-config", "", "File name of per-user overrides. [deprecated, use -runtime-config.file instead]")
+ f.DurationVar(&l.PerTenantOverridePeriod, "limits.per-user-override-period", 10*time.Second, "Period with which to reload the overrides. [deprecated, use -runtime-config.reload-period instead]")
}
// Validate the limits config and returns an error if the validation
@@ -126,197 +123,169 @@ func (l *Limits) UnmarshalYAML(unmarshal func(interface{}) error) error {
// find a nicer way I'm afraid.
var defaultLimits *Limits
+// SetDefaultLimitsForYAMLUnmarshalling sets global default limits, used when loading
+// Limits from YAML files. This is used to ensure per-tenant limits are defaulted to
+// those values.
+func SetDefaultLimitsForYAMLUnmarshalling(defaults Limits) {
+ defaultLimits = &defaults
+}
+
+// TenantLimits is a function that returns limits for a given tenant, or
+// nil if there are no tenant-specific limits.
+type TenantLimits func(userID string) *Limits
+
// Overrides periodically fetch a set of per-user overrides, and provides convenience
// functions for fetching the correct value.
type Overrides struct {
- overridesManager *OverridesManager
+ defaultLimits *Limits
+ tenantLimits TenantLimits
}
// NewOverrides makes a new Overrides.
-// We store the supplied limits in a global variable to ensure per-tenant limits
-// are defaulted to those values. As such, the last call to NewOverrides will
-// become the new global defaults.
-func NewOverrides(defaults Limits) (*Overrides, error) {
- defaultLimits = &defaults
- overridesManagerConfig := OverridesManagerConfig{
- OverridesReloadPeriod: defaults.PerTenantOverridePeriod,
- OverridesLoadPath: defaults.PerTenantOverrideConfig,
- OverridesLoader: loadOverrides,
- Defaults: &defaults,
- }
-
- overridesManager, err := NewOverridesManager(overridesManagerConfig)
- if err != nil {
- return nil, err
- }
-
+func NewOverrides(defaults Limits, tenantLimits TenantLimits) (*Overrides, error) {
return &Overrides{
- overridesManager: overridesManager,
+ tenantLimits: tenantLimits,
+ defaultLimits: &defaults,
}, nil
}
-// Stop background reloading of overrides.
-func (o *Overrides) Stop() {
- o.overridesManager.Stop()
-}
-
// IngestionRate returns the limit on ingester rate (samples per second).
func (o *Overrides) IngestionRate(userID string) float64 {
- return o.overridesManager.GetLimits(userID).(*Limits).IngestionRate
+ return o.getOverridesForUser(userID).IngestionRate
}
// IngestionRateStrategy returns whether the ingestion rate limit should be individually applied
// to each distributor instance (local) or evenly shared across the cluster (global).
func (o *Overrides) IngestionRateStrategy() string {
// The ingestion rate strategy can't be overridden on a per-tenant basis
- defaultLimits := o.overridesManager.cfg.Defaults
- return defaultLimits.(*Limits).IngestionRateStrategy
+ return o.defaultLimits.IngestionRateStrategy
}
// IngestionBurstSize returns the burst size for ingestion rate.
func (o *Overrides) IngestionBurstSize(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).IngestionBurstSize
+ return o.getOverridesForUser(userID).IngestionBurstSize
}
// AcceptHASamples returns whether the distributor should track and accept samples from HA replicas for this user.
func (o *Overrides) AcceptHASamples(userID string) bool {
- return o.overridesManager.GetLimits(userID).(*Limits).AcceptHASamples
+ return o.getOverridesForUser(userID).AcceptHASamples
}
// HAClusterLabel returns the cluster label to look for when deciding whether to accept a sample from a Prometheus HA replica.
func (o *Overrides) HAClusterLabel(userID string) string {
- return o.overridesManager.GetLimits(userID).(*Limits).HAClusterLabel
+ return o.getOverridesForUser(userID).HAClusterLabel
}
// HAReplicaLabel returns the replica label to look for when deciding whether to accept a sample from a Prometheus HA replica.
func (o *Overrides) HAReplicaLabel(userID string) string {
- return o.overridesManager.GetLimits(userID).(*Limits).HAReplicaLabel
+ return o.getOverridesForUser(userID).HAReplicaLabel
}
// DropLabels returns the list of labels to be dropped when ingesting HA samples for the user.
func (o *Overrides) DropLabels(userID string) flagext.StringSlice {
- return o.overridesManager.GetLimits(userID).(*Limits).DropLabels
+ return o.getOverridesForUser(userID).DropLabels
}
// MaxLabelNameLength returns maximum length a label name can be.
func (o *Overrides) MaxLabelNameLength(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLabelNameLength
+ return o.getOverridesForUser(userID).MaxLabelNameLength
}
// MaxLabelValueLength returns maximum length a label value can be. This also is
// the maximum length of a metric name.
func (o *Overrides) MaxLabelValueLength(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLabelValueLength
+ return o.getOverridesForUser(userID).MaxLabelValueLength
}
// MaxLabelNamesPerSeries returns maximum number of label/value pairs timeseries.
func (o *Overrides) MaxLabelNamesPerSeries(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLabelNamesPerSeries
+ return o.getOverridesForUser(userID).MaxLabelNamesPerSeries
}
// RejectOldSamples returns true when we should reject samples older than certain
// age.
func (o *Overrides) RejectOldSamples(userID string) bool {
- return o.overridesManager.GetLimits(userID).(*Limits).RejectOldSamples
+ return o.getOverridesForUser(userID).RejectOldSamples
}
// RejectOldSamplesMaxAge returns the age at which samples should be rejected.
func (o *Overrides) RejectOldSamplesMaxAge(userID string) time.Duration {
- return o.overridesManager.GetLimits(userID).(*Limits).RejectOldSamplesMaxAge
+ return o.getOverridesForUser(userID).RejectOldSamplesMaxAge
}
// CreationGracePeriod is misnamed, and actually returns how far into the future
// we should accept samples.
func (o *Overrides) CreationGracePeriod(userID string) time.Duration {
- return o.overridesManager.GetLimits(userID).(*Limits).CreationGracePeriod
+ return o.getOverridesForUser(userID).CreationGracePeriod
}
// MaxSeriesPerQuery returns the maximum number of series a query is allowed to hit.
func (o *Overrides) MaxSeriesPerQuery(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxSeriesPerQuery
+ return o.getOverridesForUser(userID).MaxSeriesPerQuery
}
// MaxSamplesPerQuery returns the maximum number of samples in a query (from the ingester).
func (o *Overrides) MaxSamplesPerQuery(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxSamplesPerQuery
+ return o.getOverridesForUser(userID).MaxSamplesPerQuery
}
// MaxLocalSeriesPerUser returns the maximum number of series a user is allowed to store in a single ingester.
func (o *Overrides) MaxLocalSeriesPerUser(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLocalSeriesPerUser
+ return o.getOverridesForUser(userID).MaxLocalSeriesPerUser
}
// MaxLocalSeriesPerMetric returns the maximum number of series allowed per metric in a single ingester.
func (o *Overrides) MaxLocalSeriesPerMetric(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxLocalSeriesPerMetric
+ return o.getOverridesForUser(userID).MaxLocalSeriesPerMetric
}
// MaxGlobalSeriesPerUser returns the maximum number of series a user is allowed to store across the cluster.
func (o *Overrides) MaxGlobalSeriesPerUser(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxGlobalSeriesPerUser
+ return o.getOverridesForUser(userID).MaxGlobalSeriesPerUser
}
// MaxGlobalSeriesPerMetric returns the maximum number of series allowed per metric across the cluster.
func (o *Overrides) MaxGlobalSeriesPerMetric(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxGlobalSeriesPerMetric
+ return o.getOverridesForUser(userID).MaxGlobalSeriesPerMetric
}
// MaxChunksPerQuery returns the maximum number of chunks allowed per query.
func (o *Overrides) MaxChunksPerQuery(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxChunksPerQuery
+ return o.getOverridesForUser(userID).MaxChunksPerQuery
}
// MaxQueryLength returns the limit of the length (in time) of a query.
func (o *Overrides) MaxQueryLength(userID string) time.Duration {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxQueryLength
+ return o.getOverridesForUser(userID).MaxQueryLength
}
// MaxQueryParallelism returns the limit to the number of sub-queries the
// frontend will process in parallel.
func (o *Overrides) MaxQueryParallelism(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MaxQueryParallelism
+ return o.getOverridesForUser(userID).MaxQueryParallelism
}
// EnforceMetricName whether to enforce the presence of a metric name.
func (o *Overrides) EnforceMetricName(userID string) bool {
- return o.overridesManager.GetLimits(userID).(*Limits).EnforceMetricName
+ return o.getOverridesForUser(userID).EnforceMetricName
}
// CardinalityLimit returns the maximum number of timeseries allowed in a query.
func (o *Overrides) CardinalityLimit(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).CardinalityLimit
+ return o.getOverridesForUser(userID).CardinalityLimit
}
// MinChunkLength returns the minimum size of chunk that will be saved by ingesters
func (o *Overrides) MinChunkLength(userID string) int {
- return o.overridesManager.GetLimits(userID).(*Limits).MinChunkLength
-}
-
-// Loads overrides and returns the limits as an interface to store them in OverridesManager.
-// We need to implement it here since OverridesManager must store type Limits in an interface but
-// it doesn't know its definition to initialize it.
-// We could have used yamlv3.Node for this but there is no way to enforce strict decoding due to a bug in it
-// TODO: Use yamlv3.Node to move this to OverridesManager after https://github.com/go-yaml/yaml/issues/460 is fixed
-func loadOverrides(filename string) (map[string]interface{}, error) {
- f, err := os.Open(filename)
- if err != nil {
- return nil, err
- }
-
- var overrides struct {
- Overrides map[string]*Limits `yaml:"overrides"`
- }
-
- decoder := yaml.NewDecoder(f)
- decoder.SetStrict(true)
- if err := decoder.Decode(&overrides); err != nil {
- return nil, err
- }
+ return o.getOverridesForUser(userID).MinChunkLength
+}
- overridesAsInterface := map[string]interface{}{}
- for userID := range overrides.Overrides {
- overridesAsInterface[userID] = overrides.Overrides[userID]
+func (o *Overrides) getOverridesForUser(userID string) *Limits {
+ if o.tenantLimits != nil {
+ l := o.tenantLimits(userID)
+ if l != nil {
+ return l
+ }
}
-
- return overridesAsInterface, nil
+ return o.defaultLimits
}
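A minimal sketch of the new TenantLimits hook, assuming a hypothetical static per-tenant table in place of the runtime config manager that real callers would wire in:

package main

import (
	"fmt"

	"github.com/cortexproject/cortex/pkg/util/validation"
)

func main() {
	defaults := validation.Limits{MaxQueryParallelism: 14}
	// Hypothetical static table for illustration only.
	perTenant := map[string]*validation.Limits{
		"team-a": {MaxQueryParallelism: 32},
	}

	overrides, err := validation.NewOverrides(defaults, func(userID string) *validation.Limits {
		return perTenant[userID] // nil falls through to the defaults
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(overrides.MaxQueryParallelism("team-a")) // 32
	fmt.Println(overrides.MaxQueryParallelism("team-b")) // 14 (default)
}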
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/validation/override.go b/vendor/github.com/cortexproject/cortex/pkg/util/validation/override.go
deleted file mode 100644
index e6ef2d2a271e1..0000000000000
--- a/vendor/github.com/cortexproject/cortex/pkg/util/validation/override.go
+++ /dev/null
@@ -1,107 +0,0 @@
-package validation
-
-import (
- "sync"
- "time"
-
- "github.com/cortexproject/cortex/pkg/util"
- "github.com/go-kit/kit/log/level"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/client_golang/prometheus/promauto"
-)
-
-var overridesReloadSuccess = promauto.NewGauge(prometheus.GaugeOpts{
- Name: "cortex_overrides_last_reload_successful",
- Help: "Whether the last overrides reload attempt was successful.",
-})
-
-// OverridesLoader loads the overrides
-type OverridesLoader func(string) (map[string]interface{}, error)
-
-// OverridesManagerConfig holds the config for an OverridesManager instance.
-// It holds config related to loading per-tentant overrides and the default limits
-type OverridesManagerConfig struct {
- OverridesReloadPeriod time.Duration
- OverridesLoadPath string
- OverridesLoader OverridesLoader
- Defaults interface{}
-}
-
-// OverridesManager manages default and per user limits i.e overrides.
-// It can periodically keep reloading overrides based on config.
-type OverridesManager struct {
- cfg OverridesManagerConfig
- overrides map[string]interface{}
- overridesMtx sync.RWMutex
- quit chan struct{}
-}
-
-// NewOverridesManager creates an instance of OverridesManager and starts reload overrides loop based on config
-func NewOverridesManager(cfg OverridesManagerConfig) (*OverridesManager, error) {
- overridesManager := OverridesManager{
- cfg: cfg,
- quit: make(chan struct{}),
- }
-
- if cfg.OverridesLoadPath != "" {
- if err := overridesManager.loadOverrides(); err != nil {
- // Log but don't stop on error - we don't want to halt all ingesters because of a typo
- level.Error(util.Logger).Log("msg", "failed to load limit overrides", "err", err)
- }
- go overridesManager.loop()
- } else {
- level.Info(util.Logger).Log("msg", "per-tenant overrides disabled")
- }
-
- return &overridesManager, nil
-}
-
-func (om *OverridesManager) loop() {
- ticker := time.NewTicker(om.cfg.OverridesReloadPeriod)
- defer ticker.Stop()
-
- for {
- select {
- case <-ticker.C:
- err := om.loadOverrides()
- if err != nil {
- // Log but don't stop on error - we don't want to halt all ingesters because of a typo
- level.Error(util.Logger).Log("msg", "failed to load limit overrides", "err", err)
- }
- case <-om.quit:
- return
- }
- }
-}
-
-func (om *OverridesManager) loadOverrides() error {
- overrides, err := om.cfg.OverridesLoader(om.cfg.OverridesLoadPath)
- if err != nil {
- overridesReloadSuccess.Set(0)
- return err
- }
- overridesReloadSuccess.Set(1)
-
- om.overridesMtx.Lock()
- defer om.overridesMtx.Unlock()
- om.overrides = overrides
- return nil
-}
-
-// Stop stops the OverridesManager
-func (om *OverridesManager) Stop() {
- close(om.quit)
-}
-
-// GetLimits returns Limits for a specific userID if its set otherwise the default Limits
-func (om *OverridesManager) GetLimits(userID string) interface{} {
- om.overridesMtx.RLock()
- defer om.overridesMtx.RUnlock()
-
- override, ok := om.overrides[userID]
- if !ok {
- return om.cfg.Defaults
- }
-
- return override
-}
diff --git a/vendor/github.com/cortexproject/cortex/pkg/util/validation/validate.go b/vendor/github.com/cortexproject/cortex/pkg/util/validation/validate.go
index 0ffce09c1ae50..49782413c9951 100644
--- a/vendor/github.com/cortexproject/cortex/pkg/util/validation/validate.go
+++ b/vendor/github.com/cortexproject/cortex/pkg/util/validation/validate.go
@@ -1,7 +1,9 @@
package validation
import (
+ "fmt"
"net/http"
+ "strings"
"time"
"github.com/cortexproject/cortex/pkg/ingester/client"
@@ -14,14 +16,16 @@ import (
const (
discardReasonLabel = "reason"
- errMissingMetricName = "sample missing metric name"
- errInvalidMetricName = "sample invalid metric name: %.200q"
- errInvalidLabel = "sample invalid label: %.200q metric %.200q"
- errLabelNameTooLong = "label name too long: %.200q metric %.200q"
- errLabelValueTooLong = "label value too long: %.200q metric %.200q"
- errTooManyLabels = "sample for '%s' has %d label names; limit %d"
- errTooOld = "sample for '%s' has timestamp too old: %d"
- errTooNew = "sample for '%s' has timestamp too new: %d"
+ errMissingMetricName = "sample missing metric name"
+ errInvalidMetricName = "sample invalid metric name: %.200q"
+ errInvalidLabel = "sample invalid label: %.200q metric %.200q"
+ errLabelNameTooLong = "label name too long: %.200q metric %.200q"
+ errLabelValueTooLong = "label value too long: %.200q metric %.200q"
+ errTooManyLabels = "sample for '%s' has %d label names; limit %d"
+ errTooOld = "sample for '%s' has timestamp too old: %d"
+ errTooNew = "sample for '%s' has timestamp too new: %d"
+ errDuplicateLabelName = "duplicate label name: %.200q metric %.200q"
+ errLabelsNotSorted = "labels not sorted: %.200q metric %.200q"
// ErrQueryTooLong is used in chunk store and query frontend.
ErrQueryTooLong = "invalid query, length > limit (%s > %s)"
@@ -31,6 +35,8 @@ const (
tooFarInFuture = "too_far_in_future"
invalidLabel = "label_invalid"
labelNameTooLong = "label_name_too_long"
+ duplicateLabelNames = "duplicate_label_names"
+ labelsNotSorted = "labels_not_sorted"
labelValueTooLong = "label_value_too_long"
// RateLimited is one of the values for the reason to discard samples.
@@ -102,6 +108,7 @@ func ValidateLabels(cfg LabelValidationConfig, userID string, ls []client.LabelA
maxLabelNameLength := cfg.MaxLabelNameLength(userID)
maxLabelValueLength := cfg.MaxLabelValueLength(userID)
+ lastLabelName := ""
for _, l := range ls {
var errTemplate string
var reason string
@@ -118,11 +125,48 @@ func ValidateLabels(cfg LabelValidationConfig, userID string, ls []client.LabelA
reason = labelValueTooLong
errTemplate = errLabelValueTooLong
cause = l.Value
+ } else if cmp := strings.Compare(lastLabelName, l.Name); cmp >= 0 {
+ if cmp == 0 {
+ reason = duplicateLabelNames
+ errTemplate = errDuplicateLabelName
+ cause = l.Name
+ } else {
+ reason = labelsNotSorted
+ errTemplate = errLabelsNotSorted
+ cause = l.Name
+ }
}
if errTemplate != "" {
DiscardedSamples.WithLabelValues(reason, userID).Inc()
- return httpgrpc.Errorf(http.StatusBadRequest, errTemplate, cause, client.FromLabelAdaptersToMetric(ls).String())
+ return httpgrpc.Errorf(http.StatusBadRequest, errTemplate, cause, formatLabelSet(ls))
}
+ lastLabelName = l.Name
}
return nil
}
+
+// formatLabelSet formats label adapters as a metric name with labels, preserving
+// label order and keeping duplicates. If there are multiple "__name__" labels, only the
+// first one is used as the metric name; the others are included as regular labels.
+func formatLabelSet(ls []client.LabelAdapter) string {
+ metricName, hasMetricName := "", false
+
+ labelStrings := make([]string, 0, len(ls))
+ for _, l := range ls {
+ if l.Name == model.MetricNameLabel && !hasMetricName && l.Value != "" {
+ metricName = l.Value
+ hasMetricName = true
+ } else {
+ labelStrings = append(labelStrings, fmt.Sprintf("%s=%q", l.Name, l.Value))
+ }
+ }
+
+ if len(labelStrings) == 0 {
+ if hasMetricName {
+ return metricName
+ }
+ return "{}"
+ }
+
+ return fmt.Sprintf("%s{%s}", metricName, strings.Join(labelStrings, ", "))
+}
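A condensed, standalone model of the adjacent-comparison check added above: one pass over the supposedly sorted labels catches both unsorted and duplicate names. Plain string pairs stand in for client.LabelAdapter.

package main

import (
	"fmt"
	"strings"
)

type label struct{ Name, Value string }

// checkSorted walks adjacent names once; equal means duplicate, a name
// smaller than its predecessor means the set is not sorted.
func checkSorted(ls []label) error {
	last := ""
	for _, l := range ls {
		switch cmp := strings.Compare(last, l.Name); {
		case cmp == 0:
			return fmt.Errorf("duplicate label name: %q", l.Name)
		case cmp > 0:
			return fmt.Errorf("labels not sorted: %q", l.Name)
		}
		last = l.Name
	}
	return nil
}

func main() {
	fmt.Println(checkSorted([]label{{"a", "1"}, {"a", "2"}})) // duplicate label name
	fmt.Println(checkSorted([]label{{"b", "1"}, {"a", "2"}})) // labels not sorted
	fmt.Println(checkSorted([]label{{"a", "1"}, {"b", "2"}})) // <nil>
}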
diff --git a/vendor/github.com/gogo/protobuf/proto/encode.go b/vendor/github.com/gogo/protobuf/proto/encode.go
index 3abfed2cff04b..9581ccd3042fc 100644
--- a/vendor/github.com/gogo/protobuf/proto/encode.go
+++ b/vendor/github.com/gogo/protobuf/proto/encode.go
@@ -189,6 +189,8 @@ type Marshaler interface {
// prefixed by a varint-encoded length.
func (p *Buffer) EncodeMessage(pb Message) error {
siz := Size(pb)
+ sizVar := SizeVarint(uint64(siz))
+ p.grow(siz + sizVar)
p.EncodeVarint(uint64(siz))
return p.Marshal(pb)
}
diff --git a/vendor/github.com/gogo/protobuf/proto/properties.go b/vendor/github.com/gogo/protobuf/proto/properties.go
index 62c55624a8a8c..28da1475fb397 100644
--- a/vendor/github.com/gogo/protobuf/proto/properties.go
+++ b/vendor/github.com/gogo/protobuf/proto/properties.go
@@ -43,7 +43,6 @@ package proto
import (
"fmt"
"log"
- "os"
"reflect"
"sort"
"strconv"
@@ -205,7 +204,7 @@ func (p *Properties) Parse(s string) {
// "bytes,49,opt,name=foo,def=hello!"
fields := strings.Split(s, ",") // breaks def=, but handled below.
if len(fields) < 2 {
- fmt.Fprintf(os.Stderr, "proto: tag has too few fields: %q\n", s)
+ log.Printf("proto: tag has too few fields: %q", s)
return
}
@@ -225,7 +224,7 @@ func (p *Properties) Parse(s string) {
p.WireType = WireBytes
// no numeric converter for non-numeric types
default:
- fmt.Fprintf(os.Stderr, "proto: tag has unknown wire type: %q\n", s)
+ log.Printf("proto: tag has unknown wire type: %q", s)
return
}
diff --git a/vendor/github.com/gogo/protobuf/proto/table_marshal.go b/vendor/github.com/gogo/protobuf/proto/table_marshal.go
index db9927a0c754d..f8babdefab94e 100644
--- a/vendor/github.com/gogo/protobuf/proto/table_marshal.go
+++ b/vendor/github.com/gogo/protobuf/proto/table_marshal.go
@@ -2969,7 +2969,9 @@ func (p *Buffer) Marshal(pb Message) error {
if m, ok := pb.(newMarshaler); ok {
siz := m.XXX_Size()
p.grow(siz) // make sure buf has enough capacity
- p.buf, err = m.XXX_Marshal(p.buf, p.deterministic)
+ pp := p.buf[len(p.buf) : len(p.buf) : len(p.buf)+siz]
+ pp, err = m.XXX_Marshal(pp, p.deterministic)
+ p.buf = append(p.buf, pp...)
return err
}
if m, ok := pb.(Marshaler); ok {
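The three-index slice above caps the capacity handed to XXX_Marshal at exactly siz bytes, so an overrunning marshaller reallocates instead of writing past its window. A tiny standalone illustration of that behavior:

package main

import "fmt"

func main() {
	buf := make([]byte, 2, 8)
	copy(buf, "ab")

	// len 0, cap 2: appends beyond 2 bytes must reallocate rather than
	// scribble over whatever follows in buf's backing array.
	pp := buf[len(buf):len(buf) : len(buf)+2]
	pp = append(pp, 'c', 'd') // fits in the reserved window
	pp = append(pp, 'e')      // exceeds cap, forces a fresh allocation
	buf = append(buf, pp...)
	fmt.Println(string(buf)) // abcde
}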
diff --git a/vendor/github.com/gogo/protobuf/proto/text.go b/vendor/github.com/gogo/protobuf/proto/text.go
index 0407ba85d01cc..87416afe95507 100644
--- a/vendor/github.com/gogo/protobuf/proto/text.go
+++ b/vendor/github.com/gogo/protobuf/proto/text.go
@@ -476,6 +476,8 @@ func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
return nil
}
+var textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
+
// writeAny writes an arbitrary field.
func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Properties) error {
v = reflect.Indirect(v)
@@ -589,8 +591,8 @@ func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Propert
// mutating this value.
v = v.Addr()
}
- if etm, ok := v.Interface().(encoding.TextMarshaler); ok {
- text, err := etm.MarshalText()
+ if v.Type().Implements(textMarshalerType) {
+ text, err := v.Interface().(encoding.TextMarshaler).MarshalText()
if err != nil {
return err
}
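A small standalone sketch of the cached reflect.Type.Implements test introduced above; net.IP is merely a convenient stdlib type that happens to implement encoding.TextMarshaler.

package main

import (
	"encoding"
	"fmt"
	"net"
	"reflect"
)

// Computed once, exactly as in the patch above.
var textMarshalerType = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()

func main() {
	v := reflect.ValueOf(net.ParseIP("127.0.0.1"))
	if v.Type().Implements(textMarshalerType) { // static type check first
		text, err := v.Interface().(encoding.TextMarshaler).MarshalText()
		fmt.Println(string(text), err) // 127.0.0.1 <nil>
	}
}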
diff --git a/vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor/descriptor.pb.go b/vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor/descriptor.pb.go
index d1307d9223895..18b2a3318a573 100644
--- a/vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor/descriptor.pb.go
+++ b/vendor/github.com/gogo/protobuf/protoc-gen-gogo/descriptor/descriptor.pb.go
@@ -1364,8 +1364,8 @@ type FileOptions struct {
// determining the namespace.
PhpNamespace *string `protobuf:"bytes,41,opt,name=php_namespace,json=phpNamespace" json:"php_namespace,omitempty"`
// Use this option to change the namespace of php generated metadata classes.
- // Default is empty. When this option is empty, the proto file name will be used
- // for determining the namespace.
+ // Default is empty. When this option is empty, the proto file name will be
+ // used for determining the namespace.
PhpMetadataNamespace *string `protobuf:"bytes,44,opt,name=php_metadata_namespace,json=phpMetadataNamespace" json:"php_metadata_namespace,omitempty"`
// Use this option to change the package of ruby generated classes. Default
// is empty. When this option is not set, the package name will be used for
@@ -1615,7 +1615,7 @@ type MessageOptions struct {
//
// Implementations may choose not to generate the map_entry=true message, but
// use a native map in the target language to hold the keys and values.
- // The reflection APIs in such implementions still need to work as
+ // The reflection APIs in such implementations still need to work as
// if the field is a repeated message field.
//
// NOTE: Do not set the option in .proto files. Always use the maps syntax
@@ -2363,7 +2363,7 @@ type SourceCodeInfo struct {
// beginning of the "extend" block and is shared by all extensions within
// the block.
// - Just because a location's span is a subset of some other location's span
- // does not mean that it is a descendent. For example, a "group" defines
+ // does not mean that it is a descendant. For example, a "group" defines
// both a type and a field in a single declaration. Thus, the locations
// corresponding to the type and field and their components will overlap.
// - Code which tries to interpret locations should probably be designed to
diff --git a/vendor/github.com/gogo/protobuf/types/any.pb.go b/vendor/github.com/gogo/protobuf/types/any.pb.go
index 3074a3d8a030f..98e269d5439e3 100644
--- a/vendor/github.com/gogo/protobuf/types/any.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/any.pb.go
@@ -614,6 +614,7 @@ func (m *Any) Unmarshal(dAtA []byte) error {
func skipAny(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -645,10 +646,8 @@ func skipAny(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -669,55 +668,30 @@ func skipAny(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthAny
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthAny
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowAny
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipAny(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthAny
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupAny
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthAny
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthAny = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowAny = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthAny = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowAny = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupAny = fmt.Errorf("proto: unexpected end of group")
)
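The same depth-counter rewrite recurs verbatim in every generated skipXXX function below. A condensed standalone model of the control flow, with bare wire-type numbers standing in for decoded tags (3 = StartGroup, 4 = EndGroup; anything else is a field skipped in place):

package main

import "fmt"

// skipGroups returns how many tokens one top-level field spans, tracking
// group nesting iteratively instead of recursing per nested group.
func skipGroups(tokens []int) (int, error) {
	depth := 0
	for i, t := range tokens {
		switch t {
		case 3:
			depth++
		case 4:
			if depth == 0 {
				return 0, fmt.Errorf("unexpected end of group")
			}
			depth--
		}
		if depth == 0 {
			return i + 1, nil
		}
	}
	return 0, fmt.Errorf("unexpected EOF")
}

func main() {
	fmt.Println(skipGroups([]int{3, 3, 1, 4, 4})) // 5 <nil>: nested groups fully consumed
	fmt.Println(skipGroups([]int{3, 1}))          // 0, unexpected EOF: group never closed
}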
diff --git a/vendor/github.com/gogo/protobuf/types/api.pb.go b/vendor/github.com/gogo/protobuf/types/api.pb.go
index 61612e21a831f..58bf4b53b326a 100644
--- a/vendor/github.com/gogo/protobuf/types/api.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/api.pb.go
@@ -2060,6 +2060,7 @@ func (m *Mixin) Unmarshal(dAtA []byte) error {
func skipApi(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -2091,10 +2092,8 @@ func skipApi(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -2115,55 +2114,30 @@ func skipApi(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthApi
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthApi
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowApi
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipApi(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthApi
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupApi
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthApi
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthApi = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowApi = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthApi = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowApi = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupApi = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/duration.pb.go b/vendor/github.com/gogo/protobuf/types/duration.pb.go
index 32b957c2bd27f..3959f0669098d 100644
--- a/vendor/github.com/gogo/protobuf/types/duration.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/duration.pb.go
@@ -437,6 +437,7 @@ func (m *Duration) Unmarshal(dAtA []byte) error {
func skipDuration(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -468,10 +469,8 @@ func skipDuration(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -492,55 +491,30 @@ func skipDuration(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthDuration
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthDuration
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowDuration
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipDuration(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthDuration
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupDuration
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthDuration
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthDuration = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowDuration = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthDuration = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowDuration = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupDuration = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/empty.pb.go b/vendor/github.com/gogo/protobuf/types/empty.pb.go
index b061be5e4b3f5..17e3aa5583946 100644
--- a/vendor/github.com/gogo/protobuf/types/empty.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/empty.pb.go
@@ -382,6 +382,7 @@ func (m *Empty) Unmarshal(dAtA []byte) error {
func skipEmpty(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -413,10 +414,8 @@ func skipEmpty(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -437,55 +436,30 @@ func skipEmpty(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthEmpty
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthEmpty
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowEmpty
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipEmpty(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthEmpty
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupEmpty
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthEmpty
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthEmpty = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowEmpty = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthEmpty = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowEmpty = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupEmpty = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/field_mask.pb.go b/vendor/github.com/gogo/protobuf/types/field_mask.pb.go
index 61ef57e2ca881..7226b57f7353a 100644
--- a/vendor/github.com/gogo/protobuf/types/field_mask.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/field_mask.pb.go
@@ -658,6 +658,7 @@ func (m *FieldMask) Unmarshal(dAtA []byte) error {
func skipFieldMask(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -689,10 +690,8 @@ func skipFieldMask(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -713,55 +712,30 @@ func skipFieldMask(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthFieldMask
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthFieldMask
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowFieldMask
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipFieldMask(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthFieldMask
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupFieldMask
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthFieldMask
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthFieldMask = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowFieldMask = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthFieldMask = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowFieldMask = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupFieldMask = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/source_context.pb.go b/vendor/github.com/gogo/protobuf/types/source_context.pb.go
index 9b0752ed504af..61045ce10d5dc 100644
--- a/vendor/github.com/gogo/protobuf/types/source_context.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/source_context.pb.go
@@ -444,6 +444,7 @@ func (m *SourceContext) Unmarshal(dAtA []byte) error {
func skipSourceContext(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -475,10 +476,8 @@ func skipSourceContext(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -499,55 +498,30 @@ func skipSourceContext(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthSourceContext
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthSourceContext
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowSourceContext
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipSourceContext(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthSourceContext
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupSourceContext
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthSourceContext
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthSourceContext = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowSourceContext = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthSourceContext = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowSourceContext = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupSourceContext = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/struct.pb.go b/vendor/github.com/gogo/protobuf/types/struct.pb.go
index f0a2d36ebba2b..cea553eef6014 100644
--- a/vendor/github.com/gogo/protobuf/types/struct.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/struct.pb.go
@@ -177,22 +177,22 @@ type isValue_Kind interface {
}
type Value_NullValue struct {
- NullValue NullValue `protobuf:"varint,1,opt,name=null_value,json=nullValue,proto3,enum=google.protobuf.NullValue,oneof"`
+ NullValue NullValue `protobuf:"varint,1,opt,name=null_value,json=nullValue,proto3,enum=google.protobuf.NullValue,oneof" json:"null_value,omitempty"`
}
type Value_NumberValue struct {
- NumberValue float64 `protobuf:"fixed64,2,opt,name=number_value,json=numberValue,proto3,oneof"`
+ NumberValue float64 `protobuf:"fixed64,2,opt,name=number_value,json=numberValue,proto3,oneof" json:"number_value,omitempty"`
}
type Value_StringValue struct {
- StringValue string `protobuf:"bytes,3,opt,name=string_value,json=stringValue,proto3,oneof"`
+ StringValue string `protobuf:"bytes,3,opt,name=string_value,json=stringValue,proto3,oneof" json:"string_value,omitempty"`
}
type Value_BoolValue struct {
- BoolValue bool `protobuf:"varint,4,opt,name=bool_value,json=boolValue,proto3,oneof"`
+ BoolValue bool `protobuf:"varint,4,opt,name=bool_value,json=boolValue,proto3,oneof" json:"bool_value,omitempty"`
}
type Value_StructValue struct {
- StructValue *Struct `protobuf:"bytes,5,opt,name=struct_value,json=structValue,proto3,oneof"`
+ StructValue *Struct `protobuf:"bytes,5,opt,name=struct_value,json=structValue,proto3,oneof" json:"struct_value,omitempty"`
}
type Value_ListValue struct {
- ListValue *ListValue `protobuf:"bytes,6,opt,name=list_value,json=listValue,proto3,oneof"`
+ ListValue *ListValue `protobuf:"bytes,6,opt,name=list_value,json=listValue,proto3,oneof" json:"list_value,omitempty"`
}
func (*Value_NullValue) isValue_Kind() {}
@@ -1167,7 +1167,8 @@ func (m *Value) MarshalToSizedBuffer(dAtA []byte) (int, error) {
}
func (m *Value_NullValue) MarshalTo(dAtA []byte) (int, error) {
- return m.MarshalToSizedBuffer(dAtA[:m.Size()])
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Value_NullValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
@@ -1178,7 +1179,8 @@ func (m *Value_NullValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Value_NumberValue) MarshalTo(dAtA []byte) (int, error) {
- return m.MarshalToSizedBuffer(dAtA[:m.Size()])
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Value_NumberValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
@@ -1190,7 +1192,8 @@ func (m *Value_NumberValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Value_StringValue) MarshalTo(dAtA []byte) (int, error) {
- return m.MarshalToSizedBuffer(dAtA[:m.Size()])
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Value_StringValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
@@ -1203,7 +1206,8 @@ func (m *Value_StringValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Value_BoolValue) MarshalTo(dAtA []byte) (int, error) {
- return m.MarshalToSizedBuffer(dAtA[:m.Size()])
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Value_BoolValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
@@ -1219,7 +1223,8 @@ func (m *Value_BoolValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Value_StructValue) MarshalTo(dAtA []byte) (int, error) {
- return m.MarshalToSizedBuffer(dAtA[:m.Size()])
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Value_StructValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
@@ -1239,7 +1244,8 @@ func (m *Value_StructValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
func (m *Value_ListValue) MarshalTo(dAtA []byte) (int, error) {
- return m.MarshalToSizedBuffer(dAtA[:m.Size()])
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
}
func (m *Value_ListValue) MarshalToSizedBuffer(dAtA []byte) (int, error) {
@@ -2191,6 +2197,7 @@ func (m *ListValue) Unmarshal(dAtA []byte) error {
func skipStruct(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -2222,10 +2229,8 @@ func skipStruct(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -2246,55 +2251,30 @@ func skipStruct(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthStruct
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthStruct
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowStruct
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipStruct(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthStruct
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupStruct
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthStruct
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthStruct = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowStruct = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthStruct = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowStruct = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupStruct = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/timestamp.pb.go b/vendor/github.com/gogo/protobuf/types/timestamp.pb.go
index 63975b8e57d60..b818752670c8a 100644
--- a/vendor/github.com/gogo/protobuf/types/timestamp.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/timestamp.pb.go
@@ -98,11 +98,13 @@ const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
// 01:30 UTC on January 15, 2017.
//
// In JavaScript, one can convert a Date object to this format using the
-// standard [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString)
+// standard
+// [toISOString()](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toISOString)
// method. In Python, a standard `datetime.datetime` object can be converted
-// to this format using [`strftime`](https://docs.python.org/2/library/time.html#time.strftime)
-// with the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one
-// can use the Joda Time's [`ISODateTimeFormat.dateTime()`](
+// to this format using
+// [`strftime`](https://docs.python.org/2/library/time.html#time.strftime) with
+// the time format spec '%Y-%m-%dT%H:%M:%S.%fZ'. Likewise, in Java, one can use
+// the Joda Time's [`ISODateTimeFormat.dateTime()`](
// http://www.joda.org/joda-time/apidocs/org/joda/time/format/ISODateTimeFormat.html#dateTime%2D%2D
// ) to obtain a formatter capable of generating timestamps in this format.
//
@@ -457,6 +459,7 @@ func (m *Timestamp) Unmarshal(dAtA []byte) error {
func skipTimestamp(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -488,10 +491,8 @@ func skipTimestamp(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -512,55 +513,30 @@ func skipTimestamp(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthTimestamp
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthTimestamp
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowTimestamp
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipTimestamp(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthTimestamp
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupTimestamp
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthTimestamp
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthTimestamp = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowTimestamp = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthTimestamp = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowTimestamp = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupTimestamp = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/type.pb.go b/vendor/github.com/gogo/protobuf/types/type.pb.go
index a3a4f354e989a..13b7ec02f79aa 100644
--- a/vendor/github.com/gogo/protobuf/types/type.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/type.pb.go
@@ -3287,6 +3287,7 @@ func (m *Option) Unmarshal(dAtA []byte) error {
func skipType(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -3318,10 +3319,8 @@ func skipType(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -3342,55 +3341,30 @@ func skipType(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthType
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthType
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowType
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipType(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthType
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupType
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthType
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthType = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowType = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthType = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowType = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupType = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/gogo/protobuf/types/wrappers.pb.go b/vendor/github.com/gogo/protobuf/types/wrappers.pb.go
index 5628dffa403af..8f1edb57d309a 100644
--- a/vendor/github.com/gogo/protobuf/types/wrappers.pb.go
+++ b/vendor/github.com/gogo/protobuf/types/wrappers.pb.go
@@ -2647,6 +2647,7 @@ func (m *BytesValue) Unmarshal(dAtA []byte) error {
func skipWrappers(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
+ depth := 0
for iNdEx < l {
var wire uint64
for shift := uint(0); ; shift += 7 {
@@ -2678,10 +2679,8 @@ func skipWrappers(dAtA []byte) (n int, err error) {
break
}
}
- return iNdEx, nil
case 1:
iNdEx += 8
- return iNdEx, nil
case 2:
var length int
for shift := uint(0); ; shift += 7 {
@@ -2702,55 +2701,30 @@ func skipWrappers(dAtA []byte) (n int, err error) {
return 0, ErrInvalidLengthWrappers
}
iNdEx += length
- if iNdEx < 0 {
- return 0, ErrInvalidLengthWrappers
- }
- return iNdEx, nil
case 3:
- for {
- var innerWire uint64
- var start int = iNdEx
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return 0, ErrIntOverflowWrappers
- }
- if iNdEx >= l {
- return 0, io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- innerWire |= (uint64(b) & 0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- innerWireType := int(innerWire & 0x7)
- if innerWireType == 4 {
- break
- }
- next, err := skipWrappers(dAtA[start:])
- if err != nil {
- return 0, err
- }
- iNdEx = start + next
- if iNdEx < 0 {
- return 0, ErrInvalidLengthWrappers
- }
- }
- return iNdEx, nil
+ depth++
case 4:
- return iNdEx, nil
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupWrappers
+ }
+ depth--
case 5:
iNdEx += 4
- return iNdEx, nil
default:
return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
}
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthWrappers
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
}
- panic("unreachable")
+ return 0, io.ErrUnexpectedEOF
}
var (
- ErrInvalidLengthWrappers = fmt.Errorf("proto: negative length found during unmarshaling")
- ErrIntOverflowWrappers = fmt.Errorf("proto: integer overflow")
+ ErrInvalidLengthWrappers = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowWrappers = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupWrappers = fmt.Errorf("proto: unexpected end of group")
)
diff --git a/vendor/github.com/golang/protobuf/descriptor/descriptor.go b/vendor/github.com/golang/protobuf/descriptor/descriptor.go
new file mode 100644
index 0000000000000..ac7e51bfb19c6
--- /dev/null
+++ b/vendor/github.com/golang/protobuf/descriptor/descriptor.go
@@ -0,0 +1,93 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2016 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Package descriptor provides functions for obtaining protocol buffer
+// descriptors for generated Go types.
+//
+// These functions cannot go in package proto because they depend on the
+// generated protobuf descriptor messages, which themselves depend on proto.
+package descriptor
+
+import (
+ "bytes"
+ "compress/gzip"
+ "fmt"
+ "io/ioutil"
+
+ "github.com/golang/protobuf/proto"
+ protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
+)
+
+// extractFile extracts a FileDescriptorProto from a gzip'd buffer.
+func extractFile(gz []byte) (*protobuf.FileDescriptorProto, error) {
+ r, err := gzip.NewReader(bytes.NewReader(gz))
+ if err != nil {
+ return nil, fmt.Errorf("failed to open gzip reader: %v", err)
+ }
+ defer r.Close()
+
+ b, err := ioutil.ReadAll(r)
+ if err != nil {
+ return nil, fmt.Errorf("failed to uncompress descriptor: %v", err)
+ }
+
+ fd := new(protobuf.FileDescriptorProto)
+ if err := proto.Unmarshal(b, fd); err != nil {
+ return nil, fmt.Errorf("malformed FileDescriptorProto: %v", err)
+ }
+
+ return fd, nil
+}
+
+// Message is a proto.Message with a method to return its descriptor.
+//
+// Message types generated by the protocol compiler always satisfy
+// the Message interface.
+type Message interface {
+ proto.Message
+ Descriptor() ([]byte, []int)
+}
+
+// ForMessage returns a FileDescriptorProto and a DescriptorProto from within it
+// describing the given message.
+func ForMessage(msg Message) (fd *protobuf.FileDescriptorProto, md *protobuf.DescriptorProto) {
+ gz, path := msg.Descriptor()
+ fd, err := extractFile(gz)
+ if err != nil {
+ panic(fmt.Sprintf("invalid FileDescriptorProto for %T: %v", msg, err))
+ }
+
+ md = fd.MessageType[path[0]]
+ for _, i := range path[1:] {
+ md = md.NestedType[i]
+ }
+ return fd, md
+}
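
ForMessage walks the path returned by the generated Descriptor() method: the first index selects a top-level message in the file, and any remaining indices descend through NestedType. A hedged usage sketch — pb.MyMessage and its import path stand in for any type emitted by protoc-gen-go:

```go
package main

import (
	"fmt"

	"github.com/golang/protobuf/descriptor"
	pb "example.com/hypothetical/gen" // assumption: any protoc-gen-go output package
)

func main() {
	// ForMessage panics on a malformed descriptor, so compiler-generated
	// types can be passed directly without an error check.
	fd, md := descriptor.ForMessage(&pb.MyMessage{})
	fmt.Printf("message %s is defined in %s\n", md.GetName(), fd.GetName())
}
```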
diff --git a/vendor/github.com/golang/protobuf/protoc-gen-go/doc.go b/vendor/github.com/golang/protobuf/protoc-gen-go/doc.go
new file mode 100644
index 0000000000000..0d6055d610e3a
--- /dev/null
+++ b/vendor/github.com/golang/protobuf/protoc-gen-go/doc.go
@@ -0,0 +1,51 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+/*
+ A plugin for the Google protocol buffer compiler to generate Go code.
+ Run it by building this program and putting it in your path with the name
+ protoc-gen-go
+ That word 'go' at the end becomes part of the option string set for the
+ protocol compiler, so once the protocol compiler (protoc) is installed
+ you can run
+ protoc --go_out=output_directory input_directory/file.proto
+ to generate Go bindings for the protocol defined by file.proto.
+ With that input, the output will be written to
+ output_directory/file.pb.go
+
+ The generated code is documented in the package comment for
+ the library.
+
+ See the README and documentation for protocol buffers to learn more:
+ https://developers.google.com/protocol-buffers/
+
+*/
+package documentation
diff --git a/vendor/github.com/golang/protobuf/protoc-gen-go/grpc/grpc.go b/vendor/github.com/golang/protobuf/protoc-gen-go/grpc/grpc.go
new file mode 100644
index 0000000000000..5d1e3f0f61931
--- /dev/null
+++ b/vendor/github.com/golang/protobuf/protoc-gen-go/grpc/grpc.go
@@ -0,0 +1,537 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2015 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// Package grpc outputs gRPC service descriptions in Go code.
+// It runs as a plugin for the Go protocol buffer compiler plugin.
+// It is linked in to protoc-gen-go.
+package grpc
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+
+ pb "github.com/golang/protobuf/protoc-gen-go/descriptor"
+ "github.com/golang/protobuf/protoc-gen-go/generator"
+)
+
+// generatedCodeVersion indicates a version of the generated code.
+// It is incremented whenever an incompatibility between the generated code and
+// the grpc package is introduced; the generated code references
+// a constant, grpc.SupportPackageIsVersionN (where N is generatedCodeVersion).
+const generatedCodeVersion = 4
+
+// Paths for packages used by code generated in this file,
+// relative to the import_prefix of the generator.Generator.
+const (
+ contextPkgPath = "context"
+ grpcPkgPath = "google.golang.org/grpc"
+ codePkgPath = "google.golang.org/grpc/codes"
+ statusPkgPath = "google.golang.org/grpc/status"
+)
+
+func init() {
+ generator.RegisterPlugin(new(grpc))
+}
+
+// grpc is an implementation of the Go protocol buffer compiler's
+// plugin architecture. It generates bindings for gRPC support.
+type grpc struct {
+ gen *generator.Generator
+}
+
+// Name returns the name of this plugin, "grpc".
+func (g *grpc) Name() string {
+ return "grpc"
+}
+
+// The names for packages imported in the generated code.
+// They may vary from the final path component of the import path
+// if the name is used by other packages.
+var (
+ contextPkg string
+ grpcPkg string
+)
+
+// Init initializes the plugin.
+func (g *grpc) Init(gen *generator.Generator) {
+ g.gen = gen
+}
+
+// Given a type name defined in a .proto, return its object.
+// Also record that we're using it, to guarantee the associated import.
+func (g *grpc) objectNamed(name string) generator.Object {
+ g.gen.RecordTypeUse(name)
+ return g.gen.ObjectNamed(name)
+}
+
+// Given a type name defined in a .proto, return its name as we will print it.
+func (g *grpc) typeName(str string) string {
+ return g.gen.TypeName(g.objectNamed(str))
+}
+
+// P forwards to g.gen.P.
+func (g *grpc) P(args ...interface{}) { g.gen.P(args...) }
+
+// Generate generates code for the services in the given file.
+func (g *grpc) Generate(file *generator.FileDescriptor) {
+ if len(file.FileDescriptorProto.Service) == 0 {
+ return
+ }
+
+ contextPkg = string(g.gen.AddImport(contextPkgPath))
+ grpcPkg = string(g.gen.AddImport(grpcPkgPath))
+
+ g.P("// Reference imports to suppress errors if they are not otherwise used.")
+ g.P("var _ ", contextPkg, ".Context")
+ g.P("var _ ", grpcPkg, ".ClientConn")
+ g.P()
+
+ // Assert version compatibility.
+ g.P("// This is a compile-time assertion to ensure that this generated file")
+ g.P("// is compatible with the grpc package it is being compiled against.")
+ g.P("const _ = ", grpcPkg, ".SupportPackageIsVersion", generatedCodeVersion)
+ g.P()
+
+ for i, service := range file.FileDescriptorProto.Service {
+ g.generateService(file, service, i)
+ }
+}
+
+// GenerateImports generates the import declaration for this file.
+func (g *grpc) GenerateImports(file *generator.FileDescriptor) {
+}
+
+// reservedClientName records whether a client name is reserved on the client side.
+var reservedClientName = map[string]bool{
+ // TODO: do we need any in gRPC?
+}
+
+func unexport(s string) string { return strings.ToLower(s[:1]) + s[1:] }
+
+// deprecationComment is the standard comment added to deprecated
+// messages, fields, enums, and enum values.
+var deprecationComment = "// Deprecated: Do not use."
+
+// generateService generates all the code for the named service.
+func (g *grpc) generateService(file *generator.FileDescriptor, service *pb.ServiceDescriptorProto, index int) {
+ path := fmt.Sprintf("6,%d", index) // 6 means service.
+
+ origServName := service.GetName()
+ fullServName := origServName
+ if pkg := file.GetPackage(); pkg != "" {
+ fullServName = pkg + "." + fullServName
+ }
+ servName := generator.CamelCase(origServName)
+ deprecated := service.GetOptions().GetDeprecated()
+
+ g.P()
+ g.P(fmt.Sprintf(`// %sClient is the client API for %s service.
+//
+// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.`, servName, servName))
+
+ // Client interface.
+ if deprecated {
+ g.P("//")
+ g.P(deprecationComment)
+ }
+ g.P("type ", servName, "Client interface {")
+ for i, method := range service.Method {
+ g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
+ g.P(g.generateClientSignature(servName, method))
+ }
+ g.P("}")
+ g.P()
+
+ // Client structure.
+ g.P("type ", unexport(servName), "Client struct {")
+ g.P("cc *", grpcPkg, ".ClientConn")
+ g.P("}")
+ g.P()
+
+ // NewClient factory.
+ if deprecated {
+ g.P(deprecationComment)
+ }
+ g.P("func New", servName, "Client (cc *", grpcPkg, ".ClientConn) ", servName, "Client {")
+ g.P("return &", unexport(servName), "Client{cc}")
+ g.P("}")
+ g.P()
+
+ var methodIndex, streamIndex int
+ serviceDescVar := "_" + servName + "_serviceDesc"
+ // Client method implementations.
+ for _, method := range service.Method {
+ var descExpr string
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ // Unary RPC method
+ descExpr = fmt.Sprintf("&%s.Methods[%d]", serviceDescVar, methodIndex)
+ methodIndex++
+ } else {
+ // Streaming RPC method
+ descExpr = fmt.Sprintf("&%s.Streams[%d]", serviceDescVar, streamIndex)
+ streamIndex++
+ }
+ g.generateClientMethod(servName, fullServName, serviceDescVar, method, descExpr)
+ }
+
+ // Server interface.
+ serverType := servName + "Server"
+ g.P("// ", serverType, " is the server API for ", servName, " service.")
+ if deprecated {
+ g.P("//")
+ g.P(deprecationComment)
+ }
+ g.P("type ", serverType, " interface {")
+ for i, method := range service.Method {
+ g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
+ g.P(g.generateServerSignature(servName, method))
+ }
+ g.P("}")
+ g.P()
+
+	// Server Unimplemented struct for forward compatibility.
+ if deprecated {
+ g.P(deprecationComment)
+ }
+ g.generateUnimplementedServer(servName, service)
+
+ // Server registration.
+ if deprecated {
+ g.P(deprecationComment)
+ }
+ g.P("func Register", servName, "Server(s *", grpcPkg, ".Server, srv ", serverType, ") {")
+ g.P("s.RegisterService(&", serviceDescVar, `, srv)`)
+ g.P("}")
+ g.P()
+
+ // Server handler implementations.
+ var handlerNames []string
+ for _, method := range service.Method {
+ hname := g.generateServerMethod(servName, fullServName, method)
+ handlerNames = append(handlerNames, hname)
+ }
+
+ // Service descriptor.
+ g.P("var ", serviceDescVar, " = ", grpcPkg, ".ServiceDesc {")
+ g.P("ServiceName: ", strconv.Quote(fullServName), ",")
+ g.P("HandlerType: (*", serverType, ")(nil),")
+ g.P("Methods: []", grpcPkg, ".MethodDesc{")
+ for i, method := range service.Method {
+ if method.GetServerStreaming() || method.GetClientStreaming() {
+ continue
+ }
+ g.P("{")
+ g.P("MethodName: ", strconv.Quote(method.GetName()), ",")
+ g.P("Handler: ", handlerNames[i], ",")
+ g.P("},")
+ }
+ g.P("},")
+ g.P("Streams: []", grpcPkg, ".StreamDesc{")
+ for i, method := range service.Method {
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ continue
+ }
+ g.P("{")
+ g.P("StreamName: ", strconv.Quote(method.GetName()), ",")
+ g.P("Handler: ", handlerNames[i], ",")
+ if method.GetServerStreaming() {
+ g.P("ServerStreams: true,")
+ }
+ if method.GetClientStreaming() {
+ g.P("ClientStreams: true,")
+ }
+ g.P("},")
+ }
+ g.P("},")
+ g.P("Metadata: \"", file.GetName(), "\",")
+ g.P("}")
+ g.P()
+}
+
+// generateUnimplementedServer creates the unimplemented server struct
+func (g *grpc) generateUnimplementedServer(servName string, service *pb.ServiceDescriptorProto) {
+ serverType := servName + "Server"
+ g.P("// Unimplemented", serverType, " can be embedded to have forward compatible implementations.")
+ g.P("type Unimplemented", serverType, " struct {")
+ g.P("}")
+ g.P()
+ // Unimplemented<service_name>Server's concrete methods
+ for _, method := range service.Method {
+ g.generateServerMethodConcrete(servName, method)
+ }
+ g.P()
+}
+
+// generateServerMethodConcrete returns unimplemented methods which ensure forward compatibility
+func (g *grpc) generateServerMethodConcrete(servName string, method *pb.MethodDescriptorProto) {
+ header := g.generateServerSignatureWithParamNames(servName, method)
+ g.P("func (*Unimplemented", servName, "Server) ", header, " {")
+ var nilArg string
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ nilArg = "nil, "
+ }
+ methName := generator.CamelCase(method.GetName())
+ statusPkg := string(g.gen.AddImport(statusPkgPath))
+ codePkg := string(g.gen.AddImport(codePkgPath))
+ g.P("return ", nilArg, statusPkg, `.Errorf(`, codePkg, `.Unimplemented, "method `, methName, ` not implemented")`)
+ g.P("}")
+}
+
+// generateClientSignature returns the client-side signature for a method.
+func (g *grpc) generateClientSignature(servName string, method *pb.MethodDescriptorProto) string {
+ origMethName := method.GetName()
+ methName := generator.CamelCase(origMethName)
+ if reservedClientName[methName] {
+ methName += "_"
+ }
+ reqArg := ", in *" + g.typeName(method.GetInputType())
+ if method.GetClientStreaming() {
+ reqArg = ""
+ }
+ respName := "*" + g.typeName(method.GetOutputType())
+ if method.GetServerStreaming() || method.GetClientStreaming() {
+ respName = servName + "_" + generator.CamelCase(origMethName) + "Client"
+ }
+ return fmt.Sprintf("%s(ctx %s.Context%s, opts ...%s.CallOption) (%s, error)", methName, contextPkg, reqArg, grpcPkg, respName)
+}
+
+func (g *grpc) generateClientMethod(servName, fullServName, serviceDescVar string, method *pb.MethodDescriptorProto, descExpr string) {
+ sname := fmt.Sprintf("/%s/%s", fullServName, method.GetName())
+ methName := generator.CamelCase(method.GetName())
+ inType := g.typeName(method.GetInputType())
+ outType := g.typeName(method.GetOutputType())
+
+ if method.GetOptions().GetDeprecated() {
+ g.P(deprecationComment)
+ }
+ g.P("func (c *", unexport(servName), "Client) ", g.generateClientSignature(servName, method), "{")
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ g.P("out := new(", outType, ")")
+ // TODO: Pass descExpr to Invoke.
+ g.P(`err := c.cc.Invoke(ctx, "`, sname, `", in, out, opts...)`)
+ g.P("if err != nil { return nil, err }")
+ g.P("return out, nil")
+ g.P("}")
+ g.P()
+ return
+ }
+ streamType := unexport(servName) + methName + "Client"
+ g.P("stream, err := c.cc.NewStream(ctx, ", descExpr, `, "`, sname, `", opts...)`)
+ g.P("if err != nil { return nil, err }")
+ g.P("x := &", streamType, "{stream}")
+ if !method.GetClientStreaming() {
+ g.P("if err := x.ClientStream.SendMsg(in); err != nil { return nil, err }")
+ g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
+ }
+ g.P("return x, nil")
+ g.P("}")
+ g.P()
+
+ genSend := method.GetClientStreaming()
+ genRecv := method.GetServerStreaming()
+ genCloseAndRecv := !method.GetServerStreaming()
+
+ // Stream auxiliary types and methods.
+ g.P("type ", servName, "_", methName, "Client interface {")
+ if genSend {
+ g.P("Send(*", inType, ") error")
+ }
+ if genRecv {
+ g.P("Recv() (*", outType, ", error)")
+ }
+ if genCloseAndRecv {
+ g.P("CloseAndRecv() (*", outType, ", error)")
+ }
+ g.P(grpcPkg, ".ClientStream")
+ g.P("}")
+ g.P()
+
+ g.P("type ", streamType, " struct {")
+ g.P(grpcPkg, ".ClientStream")
+ g.P("}")
+ g.P()
+
+ if genSend {
+ g.P("func (x *", streamType, ") Send(m *", inType, ") error {")
+ g.P("return x.ClientStream.SendMsg(m)")
+ g.P("}")
+ g.P()
+ }
+ if genRecv {
+ g.P("func (x *", streamType, ") Recv() (*", outType, ", error) {")
+ g.P("m := new(", outType, ")")
+ g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
+ g.P("return m, nil")
+ g.P("}")
+ g.P()
+ }
+ if genCloseAndRecv {
+ g.P("func (x *", streamType, ") CloseAndRecv() (*", outType, ", error) {")
+ g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
+ g.P("m := new(", outType, ")")
+ g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
+ g.P("return m, nil")
+ g.P("}")
+ g.P()
+ }
+}
+
+// generateServerSignatureWithParamNames returns the server-side signature for a method with parameter names.
+func (g *grpc) generateServerSignatureWithParamNames(servName string, method *pb.MethodDescriptorProto) string {
+ origMethName := method.GetName()
+ methName := generator.CamelCase(origMethName)
+ if reservedClientName[methName] {
+ methName += "_"
+ }
+
+ var reqArgs []string
+ ret := "error"
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ reqArgs = append(reqArgs, "ctx "+contextPkg+".Context")
+ ret = "(*" + g.typeName(method.GetOutputType()) + ", error)"
+ }
+ if !method.GetClientStreaming() {
+ reqArgs = append(reqArgs, "req *"+g.typeName(method.GetInputType()))
+ }
+ if method.GetServerStreaming() || method.GetClientStreaming() {
+ reqArgs = append(reqArgs, "srv "+servName+"_"+generator.CamelCase(origMethName)+"Server")
+ }
+
+ return methName + "(" + strings.Join(reqArgs, ", ") + ") " + ret
+}
+
+// generateServerSignature returns the server-side signature for a method.
+func (g *grpc) generateServerSignature(servName string, method *pb.MethodDescriptorProto) string {
+ origMethName := method.GetName()
+ methName := generator.CamelCase(origMethName)
+ if reservedClientName[methName] {
+ methName += "_"
+ }
+
+ var reqArgs []string
+ ret := "error"
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ reqArgs = append(reqArgs, contextPkg+".Context")
+ ret = "(*" + g.typeName(method.GetOutputType()) + ", error)"
+ }
+ if !method.GetClientStreaming() {
+ reqArgs = append(reqArgs, "*"+g.typeName(method.GetInputType()))
+ }
+ if method.GetServerStreaming() || method.GetClientStreaming() {
+ reqArgs = append(reqArgs, servName+"_"+generator.CamelCase(origMethName)+"Server")
+ }
+
+ return methName + "(" + strings.Join(reqArgs, ", ") + ") " + ret
+}
+
+func (g *grpc) generateServerMethod(servName, fullServName string, method *pb.MethodDescriptorProto) string {
+ methName := generator.CamelCase(method.GetName())
+ hname := fmt.Sprintf("_%s_%s_Handler", servName, methName)
+ inType := g.typeName(method.GetInputType())
+ outType := g.typeName(method.GetOutputType())
+
+ if !method.GetServerStreaming() && !method.GetClientStreaming() {
+ g.P("func ", hname, "(srv interface{}, ctx ", contextPkg, ".Context, dec func(interface{}) error, interceptor ", grpcPkg, ".UnaryServerInterceptor) (interface{}, error) {")
+ g.P("in := new(", inType, ")")
+ g.P("if err := dec(in); err != nil { return nil, err }")
+ g.P("if interceptor == nil { return srv.(", servName, "Server).", methName, "(ctx, in) }")
+ g.P("info := &", grpcPkg, ".UnaryServerInfo{")
+ g.P("Server: srv,")
+ g.P("FullMethod: ", strconv.Quote(fmt.Sprintf("/%s/%s", fullServName, methName)), ",")
+ g.P("}")
+ g.P("handler := func(ctx ", contextPkg, ".Context, req interface{}) (interface{}, error) {")
+ g.P("return srv.(", servName, "Server).", methName, "(ctx, req.(*", inType, "))")
+ g.P("}")
+ g.P("return interceptor(ctx, in, info, handler)")
+ g.P("}")
+ g.P()
+ return hname
+ }
+ streamType := unexport(servName) + methName + "Server"
+ g.P("func ", hname, "(srv interface{}, stream ", grpcPkg, ".ServerStream) error {")
+ if !method.GetClientStreaming() {
+ g.P("m := new(", inType, ")")
+ g.P("if err := stream.RecvMsg(m); err != nil { return err }")
+ g.P("return srv.(", servName, "Server).", methName, "(m, &", streamType, "{stream})")
+ } else {
+ g.P("return srv.(", servName, "Server).", methName, "(&", streamType, "{stream})")
+ }
+ g.P("}")
+ g.P()
+
+ genSend := method.GetServerStreaming()
+ genSendAndClose := !method.GetServerStreaming()
+ genRecv := method.GetClientStreaming()
+
+ // Stream auxiliary types and methods.
+ g.P("type ", servName, "_", methName, "Server interface {")
+ if genSend {
+ g.P("Send(*", outType, ") error")
+ }
+ if genSendAndClose {
+ g.P("SendAndClose(*", outType, ") error")
+ }
+ if genRecv {
+ g.P("Recv() (*", inType, ", error)")
+ }
+ g.P(grpcPkg, ".ServerStream")
+ g.P("}")
+ g.P()
+
+ g.P("type ", streamType, " struct {")
+ g.P(grpcPkg, ".ServerStream")
+ g.P("}")
+ g.P()
+
+ if genSend {
+ g.P("func (x *", streamType, ") Send(m *", outType, ") error {")
+ g.P("return x.ServerStream.SendMsg(m)")
+ g.P("}")
+ g.P()
+ }
+ if genSendAndClose {
+ g.P("func (x *", streamType, ") SendAndClose(m *", outType, ") error {")
+ g.P("return x.ServerStream.SendMsg(m)")
+ g.P("}")
+ g.P()
+ }
+ if genRecv {
+ g.P("func (x *", streamType, ") Recv() (*", inType, ", error) {")
+ g.P("m := new(", inType, ")")
+ g.P("if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err }")
+ g.P("return m, nil")
+ g.P("}")
+ g.P()
+ }
+
+ return hname
+}
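
For a unary method the plugin emits a client interface, an unexported implementation holding the *grpc.ClientConn, and an Invoke call against the full /package.Service/Method name, plus the mirror-image server interface and handler. Roughly the shape it produces for a hypothetical `rpc Echo(Msg) returns (Msg)` on a Greeter service (a trimmed sketch, not verbatim generator output; Msg is a placeholder type):

```go
package pb

import (
	"context"

	grpc "google.golang.org/grpc"
)

type Msg struct{} // placeholder for the generated request/response message

// GreeterClient is the client API for the Greeter service.
type GreeterClient interface {
	Echo(ctx context.Context, in *Msg, opts ...grpc.CallOption) (*Msg, error)
}

type greeterClient struct {
	cc *grpc.ClientConn
}

func NewGreeterClient(cc *grpc.ClientConn) GreeterClient {
	return &greeterClient{cc}
}

func (c *greeterClient) Echo(ctx context.Context, in *Msg, opts ...grpc.CallOption) (*Msg, error) {
	out := new(Msg)
	// The method path is the fullServName/methodName pair built by the plugin.
	err := c.cc.Invoke(ctx, "/pkg.Greeter/Echo", in, out, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}
```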
diff --git a/vendor/github.com/golang/protobuf/protoc-gen-go/link_grpc.go b/vendor/github.com/golang/protobuf/protoc-gen-go/link_grpc.go
new file mode 100644
index 0000000000000..532a550050ee3
--- /dev/null
+++ b/vendor/github.com/golang/protobuf/protoc-gen-go/link_grpc.go
@@ -0,0 +1,34 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2015 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+package main
+
+import _ "github.com/golang/protobuf/protoc-gen-go/grpc"
diff --git a/vendor/github.com/golang/protobuf/protoc-gen-go/main.go b/vendor/github.com/golang/protobuf/protoc-gen-go/main.go
new file mode 100644
index 0000000000000..8e2486de0b2e2
--- /dev/null
+++ b/vendor/github.com/golang/protobuf/protoc-gen-go/main.go
@@ -0,0 +1,98 @@
+// Go support for Protocol Buffers - Google's data interchange format
+//
+// Copyright 2010 The Go Authors. All rights reserved.
+// https://github.com/golang/protobuf
+//
+// Redistribution and use in source and binary forms, with or without
+// modification, are permitted provided that the following conditions are
+// met:
+//
+// * Redistributions of source code must retain the above copyright
+// notice, this list of conditions and the following disclaimer.
+// * Redistributions in binary form must reproduce the above
+// copyright notice, this list of conditions and the following disclaimer
+// in the documentation and/or other materials provided with the
+// distribution.
+// * Neither the name of Google Inc. nor the names of its
+// contributors may be used to endorse or promote products derived from
+// this software without specific prior written permission.
+//
+// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+// protoc-gen-go is a plugin for the Google protocol buffer compiler to generate
+// Go code. Run it by building this program and putting it in your path with
+// the name
+// protoc-gen-go
+// That word 'go' at the end becomes part of the option string set for the
+// protocol compiler, so once the protocol compiler (protoc) is installed
+// you can run
+// protoc --go_out=output_directory input_directory/file.proto
+// to generate Go bindings for the protocol defined by file.proto.
+// With that input, the output will be written to
+// output_directory/file.pb.go
+//
+// The generated code is documented in the package comment for
+// the library.
+//
+// See the README and documentation for protocol buffers to learn more:
+// https://developers.google.com/protocol-buffers/
+package main
+
+import (
+ "io/ioutil"
+ "os"
+
+ "github.com/golang/protobuf/proto"
+ "github.com/golang/protobuf/protoc-gen-go/generator"
+)
+
+func main() {
+ // Begin by allocating a generator. The request and response structures are stored there
+ // so we can do error handling easily - the response structure contains the field to
+ // report failure.
+ g := generator.New()
+
+ data, err := ioutil.ReadAll(os.Stdin)
+ if err != nil {
+ g.Error(err, "reading input")
+ }
+
+ if err := proto.Unmarshal(data, g.Request); err != nil {
+ g.Error(err, "parsing input proto")
+ }
+
+ if len(g.Request.FileToGenerate) == 0 {
+ g.Fail("no files to generate")
+ }
+
+ g.CommandLineParameters(g.Request.GetParameter())
+
+ // Create a wrapped version of the Descriptors and EnumDescriptors that
+ // point to the file that defines them.
+ g.WrapTypes()
+
+ g.SetPackageNames()
+ g.BuildTypeNameMap()
+
+ g.GenerateAllFiles()
+
+ // Send back the results.
+ data, err = proto.Marshal(g.Response)
+ if err != nil {
+ g.Error(err, "failed to marshal output proto")
+ }
+ _, err = os.Stdout.Write(data)
+ if err != nil {
+ g.Error(err, "failed to write output proto")
+ }
+}
diff --git a/vendor/github.com/googleapis/gnostic/OpenAPIv2/OpenAPIv2.pb.go b/vendor/github.com/googleapis/gnostic/OpenAPIv2/OpenAPIv2.pb.go
index a030fa67653db..6199e7cb34814 100644
--- a/vendor/github.com/googleapis/gnostic/OpenAPIv2/OpenAPIv2.pb.go
+++ b/vendor/github.com/googleapis/gnostic/OpenAPIv2/OpenAPIv2.pb.go
@@ -1,80 +1,14 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: OpenAPIv2/OpenAPIv2.proto
-/*
-Package openapi_v2 is a generated protocol buffer package.
-
-It is generated from these files:
- OpenAPIv2/OpenAPIv2.proto
-
-It has these top-level messages:
- AdditionalPropertiesItem
- Any
- ApiKeySecurity
- BasicAuthenticationSecurity
- BodyParameter
- Contact
- Default
- Definitions
- Document
- Examples
- ExternalDocs
- FileSchema
- FormDataParameterSubSchema
- Header
- HeaderParameterSubSchema
- Headers
- Info
- ItemsItem
- JsonReference
- License
- NamedAny
- NamedHeader
- NamedParameter
- NamedPathItem
- NamedResponse
- NamedResponseValue
- NamedSchema
- NamedSecurityDefinitionsItem
- NamedString
- NamedStringArray
- NonBodyParameter
- Oauth2AccessCodeSecurity
- Oauth2ApplicationSecurity
- Oauth2ImplicitSecurity
- Oauth2PasswordSecurity
- Oauth2Scopes
- Operation
- Parameter
- ParameterDefinitions
- ParametersItem
- PathItem
- PathParameterSubSchema
- Paths
- PrimitivesItems
- Properties
- QueryParameterSubSchema
- Response
- ResponseDefinitions
- ResponseValue
- Responses
- Schema
- SchemaItem
- SecurityDefinitions
- SecurityDefinitionsItem
- SecurityRequirement
- StringArray
- Tag
- TypeItem
- VendorExtension
- Xml
-*/
package openapi_v2
-import proto "github.com/golang/protobuf/proto"
-import fmt "fmt"
-import math "math"
-import google_protobuf "github.com/golang/protobuf/ptypes/any"
+import (
+ fmt "fmt"
+ proto "github.com/golang/protobuf/proto"
+ any "github.com/golang/protobuf/ptypes/any"
+ math "math"
+)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -85,32 +19,57 @@ var _ = math.Inf
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
-const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
type AdditionalPropertiesItem struct {
// Types that are valid to be assigned to Oneof:
// *AdditionalPropertiesItem_Schema
// *AdditionalPropertiesItem_Boolean
- Oneof isAdditionalPropertiesItem_Oneof `protobuf_oneof:"oneof"`
+ Oneof isAdditionalPropertiesItem_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *AdditionalPropertiesItem) Reset() { *m = AdditionalPropertiesItem{} }
-func (m *AdditionalPropertiesItem) String() string { return proto.CompactTextString(m) }
-func (*AdditionalPropertiesItem) ProtoMessage() {}
-func (*AdditionalPropertiesItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
+func (m *AdditionalPropertiesItem) Reset() { *m = AdditionalPropertiesItem{} }
+func (m *AdditionalPropertiesItem) String() string { return proto.CompactTextString(m) }
+func (*AdditionalPropertiesItem) ProtoMessage() {}
+func (*AdditionalPropertiesItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{0}
+}
+
+func (m *AdditionalPropertiesItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_AdditionalPropertiesItem.Unmarshal(m, b)
+}
+func (m *AdditionalPropertiesItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_AdditionalPropertiesItem.Marshal(b, m, deterministic)
+}
+func (m *AdditionalPropertiesItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_AdditionalPropertiesItem.Merge(m, src)
+}
+func (m *AdditionalPropertiesItem) XXX_Size() int {
+ return xxx_messageInfo_AdditionalPropertiesItem.Size(m)
+}
+func (m *AdditionalPropertiesItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_AdditionalPropertiesItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_AdditionalPropertiesItem proto.InternalMessageInfo
type isAdditionalPropertiesItem_Oneof interface {
isAdditionalPropertiesItem_Oneof()
}
type AdditionalPropertiesItem_Schema struct {
- Schema *Schema `protobuf:"bytes,1,opt,name=schema,oneof"`
+ Schema *Schema `protobuf:"bytes,1,opt,name=schema,proto3,oneof"`
}
+
type AdditionalPropertiesItem_Boolean struct {
- Boolean bool `protobuf:"varint,2,opt,name=boolean,oneof"`
+ Boolean bool `protobuf:"varint,2,opt,name=boolean,proto3,oneof"`
}
-func (*AdditionalPropertiesItem_Schema) isAdditionalPropertiesItem_Oneof() {}
+func (*AdditionalPropertiesItem_Schema) isAdditionalPropertiesItem_Oneof() {}
+
func (*AdditionalPropertiesItem_Boolean) isAdditionalPropertiesItem_Oneof() {}
func (m *AdditionalPropertiesItem) GetOneof() isAdditionalPropertiesItem_Oneof {
@@ -134,90 +93,48 @@ func (m *AdditionalPropertiesItem) GetBoolean() bool {
return false
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*AdditionalPropertiesItem) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _AdditionalPropertiesItem_OneofMarshaler, _AdditionalPropertiesItem_OneofUnmarshaler, _AdditionalPropertiesItem_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*AdditionalPropertiesItem) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*AdditionalPropertiesItem_Schema)(nil),
(*AdditionalPropertiesItem_Boolean)(nil),
}
}
-func _AdditionalPropertiesItem_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*AdditionalPropertiesItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *AdditionalPropertiesItem_Schema:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Schema); err != nil {
- return err
- }
- case *AdditionalPropertiesItem_Boolean:
- t := uint64(0)
- if x.Boolean {
- t = 1
- }
- b.EncodeVarint(2<<3 | proto.WireVarint)
- b.EncodeVarint(t)
- case nil:
- default:
- return fmt.Errorf("AdditionalPropertiesItem.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _AdditionalPropertiesItem_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*AdditionalPropertiesItem)
- switch tag {
- case 1: // oneof.schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Schema)
- err := b.DecodeMessage(msg)
- m.Oneof = &AdditionalPropertiesItem_Schema{msg}
- return true, err
- case 2: // oneof.boolean
- if wire != proto.WireVarint {
- return true, proto.ErrInternalBadWireType
- }
- x, err := b.DecodeVarint()
- m.Oneof = &AdditionalPropertiesItem_Boolean{x != 0}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _AdditionalPropertiesItem_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*AdditionalPropertiesItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *AdditionalPropertiesItem_Schema:
- s := proto.Size(x.Schema)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *AdditionalPropertiesItem_Boolean:
- n += proto.SizeVarint(2<<3 | proto.WireVarint)
- n += 1
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
+type Any struct {
+ Value *any.Any `protobuf:"bytes,1,opt,name=value,proto3" json:"value,omitempty"`
+ Yaml string `protobuf:"bytes,2,opt,name=yaml,proto3" json:"yaml,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-type Any struct {
- Value *google_protobuf.Any `protobuf:"bytes,1,opt,name=value" json:"value,omitempty"`
- Yaml string `protobuf:"bytes,2,opt,name=yaml" json:"yaml,omitempty"`
+func (m *Any) Reset() { *m = Any{} }
+func (m *Any) String() string { return proto.CompactTextString(m) }
+func (*Any) ProtoMessage() {}
+func (*Any) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{1}
}
-func (m *Any) Reset() { *m = Any{} }
-func (m *Any) String() string { return proto.CompactTextString(m) }
-func (*Any) ProtoMessage() {}
-func (*Any) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
+func (m *Any) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Any.Unmarshal(m, b)
+}
+func (m *Any) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Any.Marshal(b, m, deterministic)
+}
+func (m *Any) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Any.Merge(m, src)
+}
+func (m *Any) XXX_Size() int {
+ return xxx_messageInfo_Any.Size(m)
+}
+func (m *Any) XXX_DiscardUnknown() {
+ xxx_messageInfo_Any.DiscardUnknown(m)
+}
-func (m *Any) GetValue() *google_protobuf.Any {
+var xxx_messageInfo_Any proto.InternalMessageInfo
+
+func (m *Any) GetValue() *any.Any {
if m != nil {
return m.Value
}
@@ -232,17 +149,40 @@ func (m *Any) GetYaml() string {
}
type ApiKeySecurity struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Name string `protobuf:"bytes,2,opt,name=name" json:"name,omitempty"`
- In string `protobuf:"bytes,3,opt,name=in" json:"in,omitempty"`
- Description string `protobuf:"bytes,4,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,5,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
+ In string `protobuf:"bytes,3,opt,name=in,proto3" json:"in,omitempty"`
+ Description string `protobuf:"bytes,4,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,5,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *ApiKeySecurity) Reset() { *m = ApiKeySecurity{} }
+func (m *ApiKeySecurity) String() string { return proto.CompactTextString(m) }
+func (*ApiKeySecurity) ProtoMessage() {}
+func (*ApiKeySecurity) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{2}
+}
+
+func (m *ApiKeySecurity) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ApiKeySecurity.Unmarshal(m, b)
+}
+func (m *ApiKeySecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ApiKeySecurity.Marshal(b, m, deterministic)
+}
+func (m *ApiKeySecurity) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ApiKeySecurity.Merge(m, src)
+}
+func (m *ApiKeySecurity) XXX_Size() int {
+ return xxx_messageInfo_ApiKeySecurity.Size(m)
+}
+func (m *ApiKeySecurity) XXX_DiscardUnknown() {
+ xxx_messageInfo_ApiKeySecurity.DiscardUnknown(m)
}
-func (m *ApiKeySecurity) Reset() { *m = ApiKeySecurity{} }
-func (m *ApiKeySecurity) String() string { return proto.CompactTextString(m) }
-func (*ApiKeySecurity) ProtoMessage() {}
-func (*ApiKeySecurity) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
+var xxx_messageInfo_ApiKeySecurity proto.InternalMessageInfo
func (m *ApiKeySecurity) GetType() string {
if m != nil {
@@ -280,15 +220,38 @@ func (m *ApiKeySecurity) GetVendorExtension() []*NamedAny {
}
type BasicAuthenticationSecurity struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Description string `protobuf:"bytes,2,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,3,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,3,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *BasicAuthenticationSecurity) Reset() { *m = BasicAuthenticationSecurity{} }
+func (m *BasicAuthenticationSecurity) String() string { return proto.CompactTextString(m) }
+func (*BasicAuthenticationSecurity) ProtoMessage() {}
+func (*BasicAuthenticationSecurity) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{3}
+}
+
+func (m *BasicAuthenticationSecurity) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_BasicAuthenticationSecurity.Unmarshal(m, b)
+}
+func (m *BasicAuthenticationSecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_BasicAuthenticationSecurity.Marshal(b, m, deterministic)
+}
+func (m *BasicAuthenticationSecurity) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_BasicAuthenticationSecurity.Merge(m, src)
+}
+func (m *BasicAuthenticationSecurity) XXX_Size() int {
+ return xxx_messageInfo_BasicAuthenticationSecurity.Size(m)
+}
+func (m *BasicAuthenticationSecurity) XXX_DiscardUnknown() {
+ xxx_messageInfo_BasicAuthenticationSecurity.DiscardUnknown(m)
}
-func (m *BasicAuthenticationSecurity) Reset() { *m = BasicAuthenticationSecurity{} }
-func (m *BasicAuthenticationSecurity) String() string { return proto.CompactTextString(m) }
-func (*BasicAuthenticationSecurity) ProtoMessage() {}
-func (*BasicAuthenticationSecurity) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
+var xxx_messageInfo_BasicAuthenticationSecurity proto.InternalMessageInfo
func (m *BasicAuthenticationSecurity) GetType() string {
if m != nil {
@@ -313,21 +276,44 @@ func (m *BasicAuthenticationSecurity) GetVendorExtension() []*NamedAny {
type BodyParameter struct {
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,1,opt,name=description" json:"description,omitempty"`
+ Description string `protobuf:"bytes,1,opt,name=description,proto3" json:"description,omitempty"`
// The name of the parameter.
- Name string `protobuf:"bytes,2,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,2,opt,name=name,proto3" json:"name,omitempty"`
// Determines the location of the parameter.
- In string `protobuf:"bytes,3,opt,name=in" json:"in,omitempty"`
+ In string `protobuf:"bytes,3,opt,name=in,proto3" json:"in,omitempty"`
// Determines whether or not this parameter is required or optional.
- Required bool `protobuf:"varint,4,opt,name=required" json:"required,omitempty"`
- Schema *Schema `protobuf:"bytes,5,opt,name=schema" json:"schema,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Required bool `protobuf:"varint,4,opt,name=required,proto3" json:"required,omitempty"`
+ Schema *Schema `protobuf:"bytes,5,opt,name=schema,proto3" json:"schema,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *BodyParameter) Reset() { *m = BodyParameter{} }
-func (m *BodyParameter) String() string { return proto.CompactTextString(m) }
-func (*BodyParameter) ProtoMessage() {}
-func (*BodyParameter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
+func (m *BodyParameter) Reset() { *m = BodyParameter{} }
+func (m *BodyParameter) String() string { return proto.CompactTextString(m) }
+func (*BodyParameter) ProtoMessage() {}
+func (*BodyParameter) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{4}
+}
+
+func (m *BodyParameter) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_BodyParameter.Unmarshal(m, b)
+}
+func (m *BodyParameter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_BodyParameter.Marshal(b, m, deterministic)
+}
+func (m *BodyParameter) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_BodyParameter.Merge(m, src)
+}
+func (m *BodyParameter) XXX_Size() int {
+ return xxx_messageInfo_BodyParameter.Size(m)
+}
+func (m *BodyParameter) XXX_DiscardUnknown() {
+ xxx_messageInfo_BodyParameter.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_BodyParameter proto.InternalMessageInfo
func (m *BodyParameter) GetDescription() string {
if m != nil {
@@ -374,18 +360,41 @@ func (m *BodyParameter) GetVendorExtension() []*NamedAny {
// Contact information for the owners of the API.
type Contact struct {
// The identifying name of the contact person/organization.
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// The URL pointing to the contact information.
- Url string `protobuf:"bytes,2,opt,name=url" json:"url,omitempty"`
+ Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"`
// The email address of the contact person/organization.
- Email string `protobuf:"bytes,3,opt,name=email" json:"email,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,4,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Email string `protobuf:"bytes,3,opt,name=email,proto3" json:"email,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,4,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *Contact) Reset() { *m = Contact{} }
-func (m *Contact) String() string { return proto.CompactTextString(m) }
-func (*Contact) ProtoMessage() {}
-func (*Contact) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
+func (m *Contact) Reset() { *m = Contact{} }
+func (m *Contact) String() string { return proto.CompactTextString(m) }
+func (*Contact) ProtoMessage() {}
+func (*Contact) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{5}
+}
+
+func (m *Contact) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Contact.Unmarshal(m, b)
+}
+func (m *Contact) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Contact.Marshal(b, m, deterministic)
+}
+func (m *Contact) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Contact.Merge(m, src)
+}
+func (m *Contact) XXX_Size() int {
+ return xxx_messageInfo_Contact.Size(m)
+}
+func (m *Contact) XXX_DiscardUnknown() {
+ xxx_messageInfo_Contact.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Contact proto.InternalMessageInfo
func (m *Contact) GetName() string {
if m != nil {
@@ -416,13 +425,36 @@ func (m *Contact) GetVendorExtension() []*NamedAny {
}
type Default struct {
- AdditionalProperties []*NamedAny `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedAny `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Default) Reset() { *m = Default{} }
+func (m *Default) String() string { return proto.CompactTextString(m) }
+func (*Default) ProtoMessage() {}
+func (*Default) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{6}
+}
+
+func (m *Default) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Default.Unmarshal(m, b)
+}
+func (m *Default) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Default.Marshal(b, m, deterministic)
+}
+func (m *Default) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Default.Merge(m, src)
+}
+func (m *Default) XXX_Size() int {
+ return xxx_messageInfo_Default.Size(m)
+}
+func (m *Default) XXX_DiscardUnknown() {
+ xxx_messageInfo_Default.DiscardUnknown(m)
}
-func (m *Default) Reset() { *m = Default{} }
-func (m *Default) String() string { return proto.CompactTextString(m) }
-func (*Default) ProtoMessage() {}
-func (*Default) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
+var xxx_messageInfo_Default proto.InternalMessageInfo
func (m *Default) GetAdditionalProperties() []*NamedAny {
if m != nil {
@@ -433,13 +465,36 @@ func (m *Default) GetAdditionalProperties() []*NamedAny {
// One or more JSON objects describing the schemas being consumed and produced by the API.
type Definitions struct {
- AdditionalProperties []*NamedSchema `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedSchema `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *Definitions) Reset() { *m = Definitions{} }
-func (m *Definitions) String() string { return proto.CompactTextString(m) }
-func (*Definitions) ProtoMessage() {}
-func (*Definitions) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
+func (m *Definitions) Reset() { *m = Definitions{} }
+func (m *Definitions) String() string { return proto.CompactTextString(m) }
+func (*Definitions) ProtoMessage() {}
+func (*Definitions) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{7}
+}
+
+func (m *Definitions) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Definitions.Unmarshal(m, b)
+}
+func (m *Definitions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Definitions.Marshal(b, m, deterministic)
+}
+func (m *Definitions) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Definitions.Merge(m, src)
+}
+func (m *Definitions) XXX_Size() int {
+ return xxx_messageInfo_Definitions.Size(m)
+}
+func (m *Definitions) XXX_DiscardUnknown() {
+ xxx_messageInfo_Definitions.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Definitions proto.InternalMessageInfo
func (m *Definitions) GetAdditionalProperties() []*NamedSchema {
if m != nil {
@@ -450,33 +505,56 @@ func (m *Definitions) GetAdditionalProperties() []*NamedSchema {
type Document struct {
// The Swagger version of this document.
- Swagger string `protobuf:"bytes,1,opt,name=swagger" json:"swagger,omitempty"`
- Info *Info `protobuf:"bytes,2,opt,name=info" json:"info,omitempty"`
+ Swagger string `protobuf:"bytes,1,opt,name=swagger,proto3" json:"swagger,omitempty"`
+ Info *Info `protobuf:"bytes,2,opt,name=info,proto3" json:"info,omitempty"`
// The host (name or ip) of the API. Example: 'swagger.io'
- Host string `protobuf:"bytes,3,opt,name=host" json:"host,omitempty"`
+ Host string `protobuf:"bytes,3,opt,name=host,proto3" json:"host,omitempty"`
// The base path to the API. Example: '/api'.
- BasePath string `protobuf:"bytes,4,opt,name=base_path,json=basePath" json:"base_path,omitempty"`
+ BasePath string `protobuf:"bytes,4,opt,name=base_path,json=basePath,proto3" json:"base_path,omitempty"`
// The transfer protocol of the API.
- Schemes []string `protobuf:"bytes,5,rep,name=schemes" json:"schemes,omitempty"`
+ Schemes []string `protobuf:"bytes,5,rep,name=schemes,proto3" json:"schemes,omitempty"`
// A list of MIME types accepted by the API.
- Consumes []string `protobuf:"bytes,6,rep,name=consumes" json:"consumes,omitempty"`
+ Consumes []string `protobuf:"bytes,6,rep,name=consumes,proto3" json:"consumes,omitempty"`
// A list of MIME types the API can produce.
- Produces []string `protobuf:"bytes,7,rep,name=produces" json:"produces,omitempty"`
- Paths *Paths `protobuf:"bytes,8,opt,name=paths" json:"paths,omitempty"`
- Definitions *Definitions `protobuf:"bytes,9,opt,name=definitions" json:"definitions,omitempty"`
- Parameters *ParameterDefinitions `protobuf:"bytes,10,opt,name=parameters" json:"parameters,omitempty"`
- Responses *ResponseDefinitions `protobuf:"bytes,11,opt,name=responses" json:"responses,omitempty"`
- Security []*SecurityRequirement `protobuf:"bytes,12,rep,name=security" json:"security,omitempty"`
- SecurityDefinitions *SecurityDefinitions `protobuf:"bytes,13,opt,name=security_definitions,json=securityDefinitions" json:"security_definitions,omitempty"`
- Tags []*Tag `protobuf:"bytes,14,rep,name=tags" json:"tags,omitempty"`
- ExternalDocs *ExternalDocs `protobuf:"bytes,15,opt,name=external_docs,json=externalDocs" json:"external_docs,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,16,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *Document) Reset() { *m = Document{} }
-func (m *Document) String() string { return proto.CompactTextString(m) }
-func (*Document) ProtoMessage() {}
-func (*Document) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
+ Produces []string `protobuf:"bytes,7,rep,name=produces,proto3" json:"produces,omitempty"`
+ Paths *Paths `protobuf:"bytes,8,opt,name=paths,proto3" json:"paths,omitempty"`
+ Definitions *Definitions `protobuf:"bytes,9,opt,name=definitions,proto3" json:"definitions,omitempty"`
+ Parameters *ParameterDefinitions `protobuf:"bytes,10,opt,name=parameters,proto3" json:"parameters,omitempty"`
+ Responses *ResponseDefinitions `protobuf:"bytes,11,opt,name=responses,proto3" json:"responses,omitempty"`
+ Security []*SecurityRequirement `protobuf:"bytes,12,rep,name=security,proto3" json:"security,omitempty"`
+ SecurityDefinitions *SecurityDefinitions `protobuf:"bytes,13,opt,name=security_definitions,json=securityDefinitions,proto3" json:"security_definitions,omitempty"`
+ Tags []*Tag `protobuf:"bytes,14,rep,name=tags,proto3" json:"tags,omitempty"`
+ ExternalDocs *ExternalDocs `protobuf:"bytes,15,opt,name=external_docs,json=externalDocs,proto3" json:"external_docs,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,16,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Document) Reset() { *m = Document{} }
+func (m *Document) String() string { return proto.CompactTextString(m) }
+func (*Document) ProtoMessage() {}
+func (*Document) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{8}
+}
+
+func (m *Document) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Document.Unmarshal(m, b)
+}
+func (m *Document) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Document.Marshal(b, m, deterministic)
+}
+func (m *Document) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Document.Merge(m, src)
+}
+func (m *Document) XXX_Size() int {
+ return xxx_messageInfo_Document.Size(m)
+}
+func (m *Document) XXX_DiscardUnknown() {
+ xxx_messageInfo_Document.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Document proto.InternalMessageInfo
func (m *Document) GetSwagger() string {
if m != nil {
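
The mechanical pattern of this regeneration is the same for every message: each field tag gains a proto3 marker, each struct grows the three XXX_ bookkeeping fields, and per-message XXX_Unmarshal/XXX_Marshal/XXX_Merge/XXX_Size/XXX_DiscardUnknown methods delegate to a lazily built proto.InternalMessageInfo table. Callers see no API change. A minimal round-trip sketch of the regenerated Document; the openapi_v2 import alias and vendored path are assumptions, not shown in this diff:

package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored path
)

func main() {
	doc := &openapi_v2.Document{
		Swagger:  "2.0",
		Host:     "swagger.io", // example value taken from the field comment above
		BasePath: "/api",
		Schemes:  []string{"https"},
	}

	// Marshal and Unmarshal now route through xxx_messageInfo_Document.
	raw, err := proto.Marshal(doc)
	if err != nil {
		log.Fatal(err)
	}

	out := &openapi_v2.Document{}
	if err := proto.Unmarshal(raw, out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.GetHost(), out.GetBasePath())
}
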
@@ -591,13 +669,36 @@ func (m *Document) GetVendorExtension() []*NamedAny {
}
type Examples struct {
- AdditionalProperties []*NamedAny `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedAny `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Examples) Reset() { *m = Examples{} }
+func (m *Examples) String() string { return proto.CompactTextString(m) }
+func (*Examples) ProtoMessage() {}
+func (*Examples) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{9}
+}
+
+func (m *Examples) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Examples.Unmarshal(m, b)
+}
+func (m *Examples) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Examples.Marshal(b, m, deterministic)
+}
+func (m *Examples) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Examples.Merge(m, src)
+}
+func (m *Examples) XXX_Size() int {
+ return xxx_messageInfo_Examples.Size(m)
+}
+func (m *Examples) XXX_DiscardUnknown() {
+ xxx_messageInfo_Examples.DiscardUnknown(m)
}
-func (m *Examples) Reset() { *m = Examples{} }
-func (m *Examples) String() string { return proto.CompactTextString(m) }
-func (*Examples) ProtoMessage() {}
-func (*Examples) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{9} }
+var xxx_messageInfo_Examples proto.InternalMessageInfo
func (m *Examples) GetAdditionalProperties() []*NamedAny {
if m != nil {
@@ -608,15 +709,38 @@ func (m *Examples) GetAdditionalProperties() []*NamedAny {
// information about external documentation
type ExternalDocs struct {
- Description string `protobuf:"bytes,1,opt,name=description" json:"description,omitempty"`
- Url string `protobuf:"bytes,2,opt,name=url" json:"url,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,3,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Description string `protobuf:"bytes,1,opt,name=description,proto3" json:"description,omitempty"`
+ Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,3,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *ExternalDocs) Reset() { *m = ExternalDocs{} }
-func (m *ExternalDocs) String() string { return proto.CompactTextString(m) }
-func (*ExternalDocs) ProtoMessage() {}
-func (*ExternalDocs) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{10} }
+func (m *ExternalDocs) Reset() { *m = ExternalDocs{} }
+func (m *ExternalDocs) String() string { return proto.CompactTextString(m) }
+func (*ExternalDocs) ProtoMessage() {}
+func (*ExternalDocs) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{10}
+}
+
+func (m *ExternalDocs) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ExternalDocs.Unmarshal(m, b)
+}
+func (m *ExternalDocs) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ExternalDocs.Marshal(b, m, deterministic)
+}
+func (m *ExternalDocs) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ExternalDocs.Merge(m, src)
+}
+func (m *ExternalDocs) XXX_Size() int {
+ return xxx_messageInfo_ExternalDocs.Size(m)
+}
+func (m *ExternalDocs) XXX_DiscardUnknown() {
+ xxx_messageInfo_ExternalDocs.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ExternalDocs proto.InternalMessageInfo
func (m *ExternalDocs) GetDescription() string {
if m != nil {
@@ -641,22 +765,45 @@ func (m *ExternalDocs) GetVendorExtension() []*NamedAny {
// A deterministic version of a JSON Schema object.
type FileSchema struct {
- Format string `protobuf:"bytes,1,opt,name=format" json:"format,omitempty"`
- Title string `protobuf:"bytes,2,opt,name=title" json:"title,omitempty"`
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
- Default *Any `protobuf:"bytes,4,opt,name=default" json:"default,omitempty"`
- Required []string `protobuf:"bytes,5,rep,name=required" json:"required,omitempty"`
- Type string `protobuf:"bytes,6,opt,name=type" json:"type,omitempty"`
- ReadOnly bool `protobuf:"varint,7,opt,name=read_only,json=readOnly" json:"read_only,omitempty"`
- ExternalDocs *ExternalDocs `protobuf:"bytes,8,opt,name=external_docs,json=externalDocs" json:"external_docs,omitempty"`
- Example *Any `protobuf:"bytes,9,opt,name=example" json:"example,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,10,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *FileSchema) Reset() { *m = FileSchema{} }
-func (m *FileSchema) String() string { return proto.CompactTextString(m) }
-func (*FileSchema) ProtoMessage() {}
-func (*FileSchema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{11} }
+ Format string `protobuf:"bytes,1,opt,name=format,proto3" json:"format,omitempty"`
+ Title string `protobuf:"bytes,2,opt,name=title,proto3" json:"title,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
+ Default *Any `protobuf:"bytes,4,opt,name=default,proto3" json:"default,omitempty"`
+ Required []string `protobuf:"bytes,5,rep,name=required,proto3" json:"required,omitempty"`
+ Type string `protobuf:"bytes,6,opt,name=type,proto3" json:"type,omitempty"`
+ ReadOnly bool `protobuf:"varint,7,opt,name=read_only,json=readOnly,proto3" json:"read_only,omitempty"`
+ ExternalDocs *ExternalDocs `protobuf:"bytes,8,opt,name=external_docs,json=externalDocs,proto3" json:"external_docs,omitempty"`
+ Example *Any `protobuf:"bytes,9,opt,name=example,proto3" json:"example,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,10,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *FileSchema) Reset() { *m = FileSchema{} }
+func (m *FileSchema) String() string { return proto.CompactTextString(m) }
+func (*FileSchema) ProtoMessage() {}
+func (*FileSchema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{11}
+}
+
+func (m *FileSchema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_FileSchema.Unmarshal(m, b)
+}
+func (m *FileSchema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_FileSchema.Marshal(b, m, deterministic)
+}
+func (m *FileSchema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_FileSchema.Merge(m, src)
+}
+func (m *FileSchema) XXX_Size() int {
+ return xxx_messageInfo_FileSchema.Size(m)
+}
+func (m *FileSchema) XXX_DiscardUnknown() {
+ xxx_messageInfo_FileSchema.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_FileSchema proto.InternalMessageInfo
func (m *FileSchema) GetFormat() string {
if m != nil {
@@ -730,39 +877,62 @@ func (m *FileSchema) GetVendorExtension() []*NamedAny {
type FormDataParameterSubSchema struct {
// Determines whether or not this parameter is required or optional.
- Required bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"`
+ Required bool `protobuf:"varint,1,opt,name=required,proto3" json:"required,omitempty"`
// Determines the location of the parameter.
- In string `protobuf:"bytes,2,opt,name=in" json:"in,omitempty"`
+ In string `protobuf:"bytes,2,opt,name=in,proto3" json:"in,omitempty"`
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
// The name of the parameter.
- Name string `protobuf:"bytes,4,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"`
// allows sending a parameter by name only or with an empty value.
- AllowEmptyValue bool `protobuf:"varint,5,opt,name=allow_empty_value,json=allowEmptyValue" json:"allow_empty_value,omitempty"`
- Type string `protobuf:"bytes,6,opt,name=type" json:"type,omitempty"`
- Format string `protobuf:"bytes,7,opt,name=format" json:"format,omitempty"`
- Items *PrimitivesItems `protobuf:"bytes,8,opt,name=items" json:"items,omitempty"`
- CollectionFormat string `protobuf:"bytes,9,opt,name=collection_format,json=collectionFormat" json:"collection_format,omitempty"`
- Default *Any `protobuf:"bytes,10,opt,name=default" json:"default,omitempty"`
- Maximum float64 `protobuf:"fixed64,11,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,12,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,13,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,14,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,15,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,16,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,17,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,18,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,19,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,20,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- Enum []*Any `protobuf:"bytes,21,rep,name=enum" json:"enum,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,22,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,23,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *FormDataParameterSubSchema) Reset() { *m = FormDataParameterSubSchema{} }
-func (m *FormDataParameterSubSchema) String() string { return proto.CompactTextString(m) }
-func (*FormDataParameterSubSchema) ProtoMessage() {}
-func (*FormDataParameterSubSchema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{12} }
+ AllowEmptyValue bool `protobuf:"varint,5,opt,name=allow_empty_value,json=allowEmptyValue,proto3" json:"allow_empty_value,omitempty"`
+ Type string `protobuf:"bytes,6,opt,name=type,proto3" json:"type,omitempty"`
+ Format string `protobuf:"bytes,7,opt,name=format,proto3" json:"format,omitempty"`
+ Items *PrimitivesItems `protobuf:"bytes,8,opt,name=items,proto3" json:"items,omitempty"`
+ CollectionFormat string `protobuf:"bytes,9,opt,name=collection_format,json=collectionFormat,proto3" json:"collection_format,omitempty"`
+ Default *Any `protobuf:"bytes,10,opt,name=default,proto3" json:"default,omitempty"`
+ Maximum float64 `protobuf:"fixed64,11,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,12,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,13,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,14,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,15,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,16,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,17,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,18,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,19,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,20,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ Enum []*Any `protobuf:"bytes,21,rep,name=enum,proto3" json:"enum,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,22,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,23,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *FormDataParameterSubSchema) Reset() { *m = FormDataParameterSubSchema{} }
+func (m *FormDataParameterSubSchema) String() string { return proto.CompactTextString(m) }
+func (*FormDataParameterSubSchema) ProtoMessage() {}
+func (*FormDataParameterSubSchema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{12}
+}
+
+func (m *FormDataParameterSubSchema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_FormDataParameterSubSchema.Unmarshal(m, b)
+}
+func (m *FormDataParameterSubSchema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_FormDataParameterSubSchema.Marshal(b, m, deterministic)
+}
+func (m *FormDataParameterSubSchema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_FormDataParameterSubSchema.Merge(m, src)
+}
+func (m *FormDataParameterSubSchema) XXX_Size() int {
+ return xxx_messageInfo_FormDataParameterSubSchema.Size(m)
+}
+func (m *FormDataParameterSubSchema) XXX_DiscardUnknown() {
+ xxx_messageInfo_FormDataParameterSubSchema.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_FormDataParameterSubSchema proto.InternalMessageInfo
func (m *FormDataParameterSubSchema) GetRequired() bool {
if m != nil {
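
Two properties of the generated accessors are worth noting, both visible in the getter bodies: they are nil-safe (the if m != nil guard), and under proto3 an unset scalar is just a zero value that is never encoded on the wire. A short sketch against the regenerated FormDataParameterSubSchema, with field values invented purely for illustration:

package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored path
)

func main() {
	// Nil-safe getters: a nil message yields zero values instead of panicking.
	var p *openapi_v2.FormDataParameterSubSchema
	fmt.Println(p.GetRequired(), p.GetMaximum()) // false 0

	// Hypothetical parameter. Fields left at their zero value
	// (MaxLength, Pattern, ...) are simply not serialized under proto3.
	p = &openapi_v2.FormDataParameterSubSchema{
		Required: true,
		In:       "formData",
		Name:     "page_size",
		Type:     "integer",
		Minimum:  1,
		Maximum:  500,
	}
	fmt.Println(p.GetName(), p.GetMaximum())
}
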
@@ -926,31 +1096,54 @@ func (m *FormDataParameterSubSchema) GetVendorExtension() []*NamedAny {
}
type Header struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Format string `protobuf:"bytes,2,opt,name=format" json:"format,omitempty"`
- Items *PrimitivesItems `protobuf:"bytes,3,opt,name=items" json:"items,omitempty"`
- CollectionFormat string `protobuf:"bytes,4,opt,name=collection_format,json=collectionFormat" json:"collection_format,omitempty"`
- Default *Any `protobuf:"bytes,5,opt,name=default" json:"default,omitempty"`
- Maximum float64 `protobuf:"fixed64,6,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,7,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,8,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,9,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,10,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,11,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,12,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,13,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,14,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,15,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- Enum []*Any `protobuf:"bytes,16,rep,name=enum" json:"enum,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,17,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- Description string `protobuf:"bytes,18,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,19,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *Header) Reset() { *m = Header{} }
-func (m *Header) String() string { return proto.CompactTextString(m) }
-func (*Header) ProtoMessage() {}
-func (*Header) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{13} }
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Format string `protobuf:"bytes,2,opt,name=format,proto3" json:"format,omitempty"`
+ Items *PrimitivesItems `protobuf:"bytes,3,opt,name=items,proto3" json:"items,omitempty"`
+ CollectionFormat string `protobuf:"bytes,4,opt,name=collection_format,json=collectionFormat,proto3" json:"collection_format,omitempty"`
+ Default *Any `protobuf:"bytes,5,opt,name=default,proto3" json:"default,omitempty"`
+ Maximum float64 `protobuf:"fixed64,6,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,7,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,8,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,9,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,10,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,11,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,12,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,13,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,14,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,15,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ Enum []*Any `protobuf:"bytes,16,rep,name=enum,proto3" json:"enum,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,17,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ Description string `protobuf:"bytes,18,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,19,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Header) Reset() { *m = Header{} }
+func (m *Header) String() string { return proto.CompactTextString(m) }
+func (*Header) ProtoMessage() {}
+func (*Header) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{13}
+}
+
+func (m *Header) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Header.Unmarshal(m, b)
+}
+func (m *Header) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Header.Marshal(b, m, deterministic)
+}
+func (m *Header) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Header.Merge(m, src)
+}
+func (m *Header) XXX_Size() int {
+ return xxx_messageInfo_Header.Size(m)
+}
+func (m *Header) XXX_DiscardUnknown() {
+ xxx_messageInfo_Header.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Header proto.InternalMessageInfo
func (m *Header) GetType() string {
if m != nil {
@@ -1087,37 +1280,60 @@ func (m *Header) GetVendorExtension() []*NamedAny {
type HeaderParameterSubSchema struct {
// Determines whether or not this parameter is required or optional.
- Required bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"`
+ Required bool `protobuf:"varint,1,opt,name=required,proto3" json:"required,omitempty"`
// Determines the location of the parameter.
- In string `protobuf:"bytes,2,opt,name=in" json:"in,omitempty"`
+ In string `protobuf:"bytes,2,opt,name=in,proto3" json:"in,omitempty"`
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
// The name of the parameter.
- Name string `protobuf:"bytes,4,opt,name=name" json:"name,omitempty"`
- Type string `protobuf:"bytes,5,opt,name=type" json:"type,omitempty"`
- Format string `protobuf:"bytes,6,opt,name=format" json:"format,omitempty"`
- Items *PrimitivesItems `protobuf:"bytes,7,opt,name=items" json:"items,omitempty"`
- CollectionFormat string `protobuf:"bytes,8,opt,name=collection_format,json=collectionFormat" json:"collection_format,omitempty"`
- Default *Any `protobuf:"bytes,9,opt,name=default" json:"default,omitempty"`
- Maximum float64 `protobuf:"fixed64,10,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,11,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,12,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,13,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,14,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,15,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,16,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,17,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,18,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,19,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- Enum []*Any `protobuf:"bytes,20,rep,name=enum" json:"enum,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,21,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,22,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *HeaderParameterSubSchema) Reset() { *m = HeaderParameterSubSchema{} }
-func (m *HeaderParameterSubSchema) String() string { return proto.CompactTextString(m) }
-func (*HeaderParameterSubSchema) ProtoMessage() {}
-func (*HeaderParameterSubSchema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{14} }
+ Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"`
+ Type string `protobuf:"bytes,5,opt,name=type,proto3" json:"type,omitempty"`
+ Format string `protobuf:"bytes,6,opt,name=format,proto3" json:"format,omitempty"`
+ Items *PrimitivesItems `protobuf:"bytes,7,opt,name=items,proto3" json:"items,omitempty"`
+ CollectionFormat string `protobuf:"bytes,8,opt,name=collection_format,json=collectionFormat,proto3" json:"collection_format,omitempty"`
+ Default *Any `protobuf:"bytes,9,opt,name=default,proto3" json:"default,omitempty"`
+ Maximum float64 `protobuf:"fixed64,10,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,11,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,12,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,13,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,14,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,15,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,16,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,17,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,18,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,19,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ Enum []*Any `protobuf:"bytes,20,rep,name=enum,proto3" json:"enum,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,21,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,22,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *HeaderParameterSubSchema) Reset() { *m = HeaderParameterSubSchema{} }
+func (m *HeaderParameterSubSchema) String() string { return proto.CompactTextString(m) }
+func (*HeaderParameterSubSchema) ProtoMessage() {}
+func (*HeaderParameterSubSchema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{14}
+}
+
+func (m *HeaderParameterSubSchema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_HeaderParameterSubSchema.Unmarshal(m, b)
+}
+func (m *HeaderParameterSubSchema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_HeaderParameterSubSchema.Marshal(b, m, deterministic)
+}
+func (m *HeaderParameterSubSchema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_HeaderParameterSubSchema.Merge(m, src)
+}
+func (m *HeaderParameterSubSchema) XXX_Size() int {
+ return xxx_messageInfo_HeaderParameterSubSchema.Size(m)
+}
+func (m *HeaderParameterSubSchema) XXX_DiscardUnknown() {
+ xxx_messageInfo_HeaderParameterSubSchema.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_HeaderParameterSubSchema proto.InternalMessageInfo
func (m *HeaderParameterSubSchema) GetRequired() bool {
if m != nil {
@@ -1274,13 +1490,36 @@ func (m *HeaderParameterSubSchema) GetVendorExtension() []*NamedAny {
}
type Headers struct {
- AdditionalProperties []*NamedHeader `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedHeader `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Headers) Reset() { *m = Headers{} }
+func (m *Headers) String() string { return proto.CompactTextString(m) }
+func (*Headers) ProtoMessage() {}
+func (*Headers) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{15}
+}
+
+func (m *Headers) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Headers.Unmarshal(m, b)
+}
+func (m *Headers) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Headers.Marshal(b, m, deterministic)
+}
+func (m *Headers) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Headers.Merge(m, src)
+}
+func (m *Headers) XXX_Size() int {
+ return xxx_messageInfo_Headers.Size(m)
+}
+func (m *Headers) XXX_DiscardUnknown() {
+ xxx_messageInfo_Headers.DiscardUnknown(m)
}
-func (m *Headers) Reset() { *m = Headers{} }
-func (m *Headers) String() string { return proto.CompactTextString(m) }
-func (*Headers) ProtoMessage() {}
-func (*Headers) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{15} }
+var xxx_messageInfo_Headers proto.InternalMessageInfo
func (m *Headers) GetAdditionalProperties() []*NamedHeader {
if m != nil {
@@ -1292,22 +1531,45 @@ func (m *Headers) GetAdditionalProperties() []*NamedHeader {
// General information about the API.
type Info struct {
// A unique and precise title of the API.
- Title string `protobuf:"bytes,1,opt,name=title" json:"title,omitempty"`
+ Title string `protobuf:"bytes,1,opt,name=title,proto3" json:"title,omitempty"`
// A semantic version number of the API.
- Version string `protobuf:"bytes,2,opt,name=version" json:"version,omitempty"`
+ Version string `protobuf:"bytes,2,opt,name=version,proto3" json:"version,omitempty"`
// A longer description of the API. Should be different from the title. GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
// The terms of service for the API.
- TermsOfService string `protobuf:"bytes,4,opt,name=terms_of_service,json=termsOfService" json:"terms_of_service,omitempty"`
- Contact *Contact `protobuf:"bytes,5,opt,name=contact" json:"contact,omitempty"`
- License *License `protobuf:"bytes,6,opt,name=license" json:"license,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,7,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ TermsOfService string `protobuf:"bytes,4,opt,name=terms_of_service,json=termsOfService,proto3" json:"terms_of_service,omitempty"`
+ Contact *Contact `protobuf:"bytes,5,opt,name=contact,proto3" json:"contact,omitempty"`
+ License *License `protobuf:"bytes,6,opt,name=license,proto3" json:"license,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,7,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Info) Reset() { *m = Info{} }
+func (m *Info) String() string { return proto.CompactTextString(m) }
+func (*Info) ProtoMessage() {}
+func (*Info) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{16}
+}
+
+func (m *Info) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Info.Unmarshal(m, b)
+}
+func (m *Info) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Info.Marshal(b, m, deterministic)
+}
+func (m *Info) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Info.Merge(m, src)
+}
+func (m *Info) XXX_Size() int {
+ return xxx_messageInfo_Info.Size(m)
+}
+func (m *Info) XXX_DiscardUnknown() {
+ xxx_messageInfo_Info.DiscardUnknown(m)
}
-func (m *Info) Reset() { *m = Info{} }
-func (m *Info) String() string { return proto.CompactTextString(m) }
-func (*Info) ProtoMessage() {}
-func (*Info) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{16} }
+var xxx_messageInfo_Info proto.InternalMessageInfo
func (m *Info) GetTitle() string {
if m != nil {
@@ -1359,13 +1621,36 @@ func (m *Info) GetVendorExtension() []*NamedAny {
}
type ItemsItem struct {
- Schema []*Schema `protobuf:"bytes,1,rep,name=schema" json:"schema,omitempty"`
+ Schema []*Schema `protobuf:"bytes,1,rep,name=schema,proto3" json:"schema,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *ItemsItem) Reset() { *m = ItemsItem{} }
-func (m *ItemsItem) String() string { return proto.CompactTextString(m) }
-func (*ItemsItem) ProtoMessage() {}
-func (*ItemsItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{17} }
+func (m *ItemsItem) Reset() { *m = ItemsItem{} }
+func (m *ItemsItem) String() string { return proto.CompactTextString(m) }
+func (*ItemsItem) ProtoMessage() {}
+func (*ItemsItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{17}
+}
+
+func (m *ItemsItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ItemsItem.Unmarshal(m, b)
+}
+func (m *ItemsItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ItemsItem.Marshal(b, m, deterministic)
+}
+func (m *ItemsItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ItemsItem.Merge(m, src)
+}
+func (m *ItemsItem) XXX_Size() int {
+ return xxx_messageInfo_ItemsItem.Size(m)
+}
+func (m *ItemsItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_ItemsItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ItemsItem proto.InternalMessageInfo
func (m *ItemsItem) GetSchema() []*Schema {
if m != nil {
@@ -1375,14 +1660,37 @@ func (m *ItemsItem) GetSchema() []*Schema {
}
type JsonReference struct {
- XRef string `protobuf:"bytes,1,opt,name=_ref,json=Ref" json:"_ref,omitempty"`
- Description string `protobuf:"bytes,2,opt,name=description" json:"description,omitempty"`
+ XRef string `protobuf:"bytes,1,opt,name=_ref,json=Ref,proto3" json:"_ref,omitempty"`
+ Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *JsonReference) Reset() { *m = JsonReference{} }
-func (m *JsonReference) String() string { return proto.CompactTextString(m) }
-func (*JsonReference) ProtoMessage() {}
-func (*JsonReference) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{18} }
+func (m *JsonReference) Reset() { *m = JsonReference{} }
+func (m *JsonReference) String() string { return proto.CompactTextString(m) }
+func (*JsonReference) ProtoMessage() {}
+func (*JsonReference) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{18}
+}
+
+func (m *JsonReference) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_JsonReference.Unmarshal(m, b)
+}
+func (m *JsonReference) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_JsonReference.Marshal(b, m, deterministic)
+}
+func (m *JsonReference) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_JsonReference.Merge(m, src)
+}
+func (m *JsonReference) XXX_Size() int {
+ return xxx_messageInfo_JsonReference.Size(m)
+}
+func (m *JsonReference) XXX_DiscardUnknown() {
+ xxx_messageInfo_JsonReference.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_JsonReference proto.InternalMessageInfo
func (m *JsonReference) GetXRef() string {
if m != nil {
@@ -1400,16 +1708,39 @@ func (m *JsonReference) GetDescription() string {
type License struct {
// The name of the license type. It's encouraged to use an OSI compatible license.
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// The URL pointing to the license.
- Url string `protobuf:"bytes,2,opt,name=url" json:"url,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,3,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Url string `protobuf:"bytes,2,opt,name=url,proto3" json:"url,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,3,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *License) Reset() { *m = License{} }
+func (m *License) String() string { return proto.CompactTextString(m) }
+func (*License) ProtoMessage() {}
+func (*License) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{19}
+}
+
+func (m *License) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_License.Unmarshal(m, b)
+}
+func (m *License) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_License.Marshal(b, m, deterministic)
+}
+func (m *License) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_License.Merge(m, src)
+}
+func (m *License) XXX_Size() int {
+ return xxx_messageInfo_License.Size(m)
+}
+func (m *License) XXX_DiscardUnknown() {
+ xxx_messageInfo_License.DiscardUnknown(m)
}
-func (m *License) Reset() { *m = License{} }
-func (m *License) String() string { return proto.CompactTextString(m) }
-func (*License) ProtoMessage() {}
-func (*License) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{19} }
+var xxx_messageInfo_License proto.InternalMessageInfo
func (m *License) GetName() string {
if m != nil {
@@ -1435,15 +1766,38 @@ func (m *License) GetVendorExtension() []*NamedAny {
// Automatically-generated message used to represent maps of Any as ordered (name,value) pairs.
type NamedAny struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *Any `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *Any `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NamedAny) Reset() { *m = NamedAny{} }
+func (m *NamedAny) String() string { return proto.CompactTextString(m) }
+func (*NamedAny) ProtoMessage() {}
+func (*NamedAny) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{20}
}
-func (m *NamedAny) Reset() { *m = NamedAny{} }
-func (m *NamedAny) String() string { return proto.CompactTextString(m) }
-func (*NamedAny) ProtoMessage() {}
-func (*NamedAny) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{20} }
+func (m *NamedAny) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedAny.Unmarshal(m, b)
+}
+func (m *NamedAny) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedAny.Marshal(b, m, deterministic)
+}
+func (m *NamedAny) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedAny.Merge(m, src)
+}
+func (m *NamedAny) XXX_Size() int {
+ return xxx_messageInfo_NamedAny.Size(m)
+}
+func (m *NamedAny) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedAny.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedAny proto.InternalMessageInfo
func (m *NamedAny) GetName() string {
if m != nil {
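
As the comment above says, NamedAny (and its NamedHeader/NamedParameter/... siblings below) is how gnostic keeps map entries in source order: a repeated (name, value) pair instead of a proto map, whose iteration order is unspecified. Collapsing the pairs back into an unordered Go map is a simple loop; a sketch, noting that Any's inner fields are not shown in this hunk, so only the pair structure is relied on:

package main

import openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored path

// toMap discards the source ordering and rebuilds a conventional Go map.
func toMap(pairs []*openapi_v2.NamedAny) map[string]*openapi_v2.Any {
	m := make(map[string]*openapi_v2.Any, len(pairs))
	for _, p := range pairs {
		m[p.GetName()] = p.GetValue()
	}
	return m
}
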
@@ -1462,15 +1816,38 @@ func (m *NamedAny) GetValue() *Any {
// Automatically-generated message used to represent maps of Header as ordered (name,value) pairs.
type NamedHeader struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *Header `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *Header `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *NamedHeader) Reset() { *m = NamedHeader{} }
-func (m *NamedHeader) String() string { return proto.CompactTextString(m) }
-func (*NamedHeader) ProtoMessage() {}
-func (*NamedHeader) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{21} }
+func (m *NamedHeader) Reset() { *m = NamedHeader{} }
+func (m *NamedHeader) String() string { return proto.CompactTextString(m) }
+func (*NamedHeader) ProtoMessage() {}
+func (*NamedHeader) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{21}
+}
+
+func (m *NamedHeader) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedHeader.Unmarshal(m, b)
+}
+func (m *NamedHeader) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedHeader.Marshal(b, m, deterministic)
+}
+func (m *NamedHeader) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedHeader.Merge(m, src)
+}
+func (m *NamedHeader) XXX_Size() int {
+ return xxx_messageInfo_NamedHeader.Size(m)
+}
+func (m *NamedHeader) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedHeader.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedHeader proto.InternalMessageInfo
func (m *NamedHeader) GetName() string {
if m != nil {
@@ -1489,15 +1866,38 @@ func (m *NamedHeader) GetValue() *Header {
// Automatically-generated message used to represent maps of Parameter as ordered (name,value) pairs.
type NamedParameter struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *Parameter `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *Parameter `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *NamedParameter) Reset() { *m = NamedParameter{} }
-func (m *NamedParameter) String() string { return proto.CompactTextString(m) }
-func (*NamedParameter) ProtoMessage() {}
-func (*NamedParameter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{22} }
+func (m *NamedParameter) Reset() { *m = NamedParameter{} }
+func (m *NamedParameter) String() string { return proto.CompactTextString(m) }
+func (*NamedParameter) ProtoMessage() {}
+func (*NamedParameter) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{22}
+}
+
+func (m *NamedParameter) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedParameter.Unmarshal(m, b)
+}
+func (m *NamedParameter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedParameter.Marshal(b, m, deterministic)
+}
+func (m *NamedParameter) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedParameter.Merge(m, src)
+}
+func (m *NamedParameter) XXX_Size() int {
+ return xxx_messageInfo_NamedParameter.Size(m)
+}
+func (m *NamedParameter) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedParameter.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedParameter proto.InternalMessageInfo
func (m *NamedParameter) GetName() string {
if m != nil {
@@ -1516,15 +1916,38 @@ func (m *NamedParameter) GetValue() *Parameter {
// Automatically-generated message used to represent maps of PathItem as ordered (name,value) pairs.
type NamedPathItem struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *PathItem `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *PathItem `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *NamedPathItem) Reset() { *m = NamedPathItem{} }
-func (m *NamedPathItem) String() string { return proto.CompactTextString(m) }
-func (*NamedPathItem) ProtoMessage() {}
-func (*NamedPathItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{23} }
+func (m *NamedPathItem) Reset() { *m = NamedPathItem{} }
+func (m *NamedPathItem) String() string { return proto.CompactTextString(m) }
+func (*NamedPathItem) ProtoMessage() {}
+func (*NamedPathItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{23}
+}
+
+func (m *NamedPathItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedPathItem.Unmarshal(m, b)
+}
+func (m *NamedPathItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedPathItem.Marshal(b, m, deterministic)
+}
+func (m *NamedPathItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedPathItem.Merge(m, src)
+}
+func (m *NamedPathItem) XXX_Size() int {
+ return xxx_messageInfo_NamedPathItem.Size(m)
+}
+func (m *NamedPathItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedPathItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedPathItem proto.InternalMessageInfo
func (m *NamedPathItem) GetName() string {
if m != nil {
@@ -1543,15 +1966,38 @@ func (m *NamedPathItem) GetValue() *PathItem {
// Automatically-generated message used to represent maps of Response as ordered (name,value) pairs.
type NamedResponse struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *Response `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *Response `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NamedResponse) Reset() { *m = NamedResponse{} }
+func (m *NamedResponse) String() string { return proto.CompactTextString(m) }
+func (*NamedResponse) ProtoMessage() {}
+func (*NamedResponse) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{24}
}
-func (m *NamedResponse) Reset() { *m = NamedResponse{} }
-func (m *NamedResponse) String() string { return proto.CompactTextString(m) }
-func (*NamedResponse) ProtoMessage() {}
-func (*NamedResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{24} }
+func (m *NamedResponse) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedResponse.Unmarshal(m, b)
+}
+func (m *NamedResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedResponse.Marshal(b, m, deterministic)
+}
+func (m *NamedResponse) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedResponse.Merge(m, src)
+}
+func (m *NamedResponse) XXX_Size() int {
+ return xxx_messageInfo_NamedResponse.Size(m)
+}
+func (m *NamedResponse) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedResponse.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedResponse proto.InternalMessageInfo
func (m *NamedResponse) GetName() string {
if m != nil {
@@ -1570,15 +2016,38 @@ func (m *NamedResponse) GetValue() *Response {
// Automatically-generated message used to represent maps of ResponseValue as ordered (name,value) pairs.
type NamedResponseValue struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *ResponseValue `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *ResponseValue `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *NamedResponseValue) Reset() { *m = NamedResponseValue{} }
-func (m *NamedResponseValue) String() string { return proto.CompactTextString(m) }
-func (*NamedResponseValue) ProtoMessage() {}
-func (*NamedResponseValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{25} }
+func (m *NamedResponseValue) Reset() { *m = NamedResponseValue{} }
+func (m *NamedResponseValue) String() string { return proto.CompactTextString(m) }
+func (*NamedResponseValue) ProtoMessage() {}
+func (*NamedResponseValue) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{25}
+}
+
+func (m *NamedResponseValue) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedResponseValue.Unmarshal(m, b)
+}
+func (m *NamedResponseValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedResponseValue.Marshal(b, m, deterministic)
+}
+func (m *NamedResponseValue) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedResponseValue.Merge(m, src)
+}
+func (m *NamedResponseValue) XXX_Size() int {
+ return xxx_messageInfo_NamedResponseValue.Size(m)
+}
+func (m *NamedResponseValue) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedResponseValue.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedResponseValue proto.InternalMessageInfo
func (m *NamedResponseValue) GetName() string {
if m != nil {
@@ -1597,15 +2066,38 @@ func (m *NamedResponseValue) GetValue() *ResponseValue {
// Automatically-generated message used to represent maps of Schema as ordered (name,value) pairs.
type NamedSchema struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *Schema `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *Schema `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NamedSchema) Reset() { *m = NamedSchema{} }
+func (m *NamedSchema) String() string { return proto.CompactTextString(m) }
+func (*NamedSchema) ProtoMessage() {}
+func (*NamedSchema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{26}
+}
+
+func (m *NamedSchema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedSchema.Unmarshal(m, b)
+}
+func (m *NamedSchema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedSchema.Marshal(b, m, deterministic)
+}
+func (m *NamedSchema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedSchema.Merge(m, src)
+}
+func (m *NamedSchema) XXX_Size() int {
+ return xxx_messageInfo_NamedSchema.Size(m)
+}
+func (m *NamedSchema) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedSchema.DiscardUnknown(m)
}
-func (m *NamedSchema) Reset() { *m = NamedSchema{} }
-func (m *NamedSchema) String() string { return proto.CompactTextString(m) }
-func (*NamedSchema) ProtoMessage() {}
-func (*NamedSchema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{26} }
+var xxx_messageInfo_NamedSchema proto.InternalMessageInfo
func (m *NamedSchema) GetName() string {
if m != nil {
@@ -1624,15 +2116,38 @@ func (m *NamedSchema) GetValue() *Schema {
// Automatically-generated message used to represent maps of SecurityDefinitionsItem as ordered (name,value) pairs.
type NamedSecurityDefinitionsItem struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *SecurityDefinitionsItem `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *SecurityDefinitionsItem `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *NamedSecurityDefinitionsItem) Reset() { *m = NamedSecurityDefinitionsItem{} }
-func (m *NamedSecurityDefinitionsItem) String() string { return proto.CompactTextString(m) }
-func (*NamedSecurityDefinitionsItem) ProtoMessage() {}
-func (*NamedSecurityDefinitionsItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{27} }
+func (m *NamedSecurityDefinitionsItem) Reset() { *m = NamedSecurityDefinitionsItem{} }
+func (m *NamedSecurityDefinitionsItem) String() string { return proto.CompactTextString(m) }
+func (*NamedSecurityDefinitionsItem) ProtoMessage() {}
+func (*NamedSecurityDefinitionsItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{27}
+}
+
+func (m *NamedSecurityDefinitionsItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedSecurityDefinitionsItem.Unmarshal(m, b)
+}
+func (m *NamedSecurityDefinitionsItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedSecurityDefinitionsItem.Marshal(b, m, deterministic)
+}
+func (m *NamedSecurityDefinitionsItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedSecurityDefinitionsItem.Merge(m, src)
+}
+func (m *NamedSecurityDefinitionsItem) XXX_Size() int {
+ return xxx_messageInfo_NamedSecurityDefinitionsItem.Size(m)
+}
+func (m *NamedSecurityDefinitionsItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedSecurityDefinitionsItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedSecurityDefinitionsItem proto.InternalMessageInfo
func (m *NamedSecurityDefinitionsItem) GetName() string {
if m != nil {
@@ -1651,15 +2166,38 @@ func (m *NamedSecurityDefinitionsItem) GetValue() *SecurityDefinitionsItem {
// Automatically-generated message used to represent maps of string as ordered (name,value) pairs.
type NamedString struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value string `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value string `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NamedString) Reset() { *m = NamedString{} }
+func (m *NamedString) String() string { return proto.CompactTextString(m) }
+func (*NamedString) ProtoMessage() {}
+func (*NamedString) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{28}
+}
+
+func (m *NamedString) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedString.Unmarshal(m, b)
+}
+func (m *NamedString) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedString.Marshal(b, m, deterministic)
+}
+func (m *NamedString) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedString.Merge(m, src)
+}
+func (m *NamedString) XXX_Size() int {
+ return xxx_messageInfo_NamedString.Size(m)
+}
+func (m *NamedString) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedString.DiscardUnknown(m)
}
-func (m *NamedString) Reset() { *m = NamedString{} }
-func (m *NamedString) String() string { return proto.CompactTextString(m) }
-func (*NamedString) ProtoMessage() {}
-func (*NamedString) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{28} }
+var xxx_messageInfo_NamedString proto.InternalMessageInfo
func (m *NamedString) GetName() string {
if m != nil {
@@ -1678,15 +2216,38 @@ func (m *NamedString) GetValue() string {
// Automatically-generated message used to represent maps of StringArray as ordered (name,value) pairs.
type NamedStringArray struct {
// Map key
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
// Mapped value
- Value *StringArray `protobuf:"bytes,2,opt,name=value" json:"value,omitempty"`
+ Value *StringArray `protobuf:"bytes,2,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NamedStringArray) Reset() { *m = NamedStringArray{} }
+func (m *NamedStringArray) String() string { return proto.CompactTextString(m) }
+func (*NamedStringArray) ProtoMessage() {}
+func (*NamedStringArray) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{29}
}
-func (m *NamedStringArray) Reset() { *m = NamedStringArray{} }
-func (m *NamedStringArray) String() string { return proto.CompactTextString(m) }
-func (*NamedStringArray) ProtoMessage() {}
-func (*NamedStringArray) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{29} }
+func (m *NamedStringArray) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NamedStringArray.Unmarshal(m, b)
+}
+func (m *NamedStringArray) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NamedStringArray.Marshal(b, m, deterministic)
+}
+func (m *NamedStringArray) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamedStringArray.Merge(m, src)
+}
+func (m *NamedStringArray) XXX_Size() int {
+ return xxx_messageInfo_NamedStringArray.Size(m)
+}
+func (m *NamedStringArray) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamedStringArray.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamedStringArray proto.InternalMessageInfo
func (m *NamedStringArray) GetName() string {
if m != nil {
@@ -1708,35 +2269,64 @@ type NonBodyParameter struct {
// *NonBodyParameter_FormDataParameterSubSchema
// *NonBodyParameter_QueryParameterSubSchema
// *NonBodyParameter_PathParameterSubSchema
- Oneof isNonBodyParameter_Oneof `protobuf_oneof:"oneof"`
+ Oneof isNonBodyParameter_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NonBodyParameter) Reset() { *m = NonBodyParameter{} }
+func (m *NonBodyParameter) String() string { return proto.CompactTextString(m) }
+func (*NonBodyParameter) ProtoMessage() {}
+func (*NonBodyParameter) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{30}
+}
+
+func (m *NonBodyParameter) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_NonBodyParameter.Unmarshal(m, b)
+}
+func (m *NonBodyParameter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_NonBodyParameter.Marshal(b, m, deterministic)
+}
+func (m *NonBodyParameter) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NonBodyParameter.Merge(m, src)
+}
+func (m *NonBodyParameter) XXX_Size() int {
+ return xxx_messageInfo_NonBodyParameter.Size(m)
+}
+func (m *NonBodyParameter) XXX_DiscardUnknown() {
+ xxx_messageInfo_NonBodyParameter.DiscardUnknown(m)
}
-func (m *NonBodyParameter) Reset() { *m = NonBodyParameter{} }
-func (m *NonBodyParameter) String() string { return proto.CompactTextString(m) }
-func (*NonBodyParameter) ProtoMessage() {}
-func (*NonBodyParameter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{30} }
+var xxx_messageInfo_NonBodyParameter proto.InternalMessageInfo
type isNonBodyParameter_Oneof interface {
isNonBodyParameter_Oneof()
}
type NonBodyParameter_HeaderParameterSubSchema struct {
- HeaderParameterSubSchema *HeaderParameterSubSchema `protobuf:"bytes,1,opt,name=header_parameter_sub_schema,json=headerParameterSubSchema,oneof"`
+ HeaderParameterSubSchema *HeaderParameterSubSchema `protobuf:"bytes,1,opt,name=header_parameter_sub_schema,json=headerParameterSubSchema,proto3,oneof"`
}
+
type NonBodyParameter_FormDataParameterSubSchema struct {
- FormDataParameterSubSchema *FormDataParameterSubSchema `protobuf:"bytes,2,opt,name=form_data_parameter_sub_schema,json=formDataParameterSubSchema,oneof"`
+ FormDataParameterSubSchema *FormDataParameterSubSchema `protobuf:"bytes,2,opt,name=form_data_parameter_sub_schema,json=formDataParameterSubSchema,proto3,oneof"`
}
+
type NonBodyParameter_QueryParameterSubSchema struct {
- QueryParameterSubSchema *QueryParameterSubSchema `protobuf:"bytes,3,opt,name=query_parameter_sub_schema,json=queryParameterSubSchema,oneof"`
+ QueryParameterSubSchema *QueryParameterSubSchema `protobuf:"bytes,3,opt,name=query_parameter_sub_schema,json=queryParameterSubSchema,proto3,oneof"`
}
+
type NonBodyParameter_PathParameterSubSchema struct {
- PathParameterSubSchema *PathParameterSubSchema `protobuf:"bytes,4,opt,name=path_parameter_sub_schema,json=pathParameterSubSchema,oneof"`
+ PathParameterSubSchema *PathParameterSubSchema `protobuf:"bytes,4,opt,name=path_parameter_sub_schema,json=pathParameterSubSchema,proto3,oneof"`
}
-func (*NonBodyParameter_HeaderParameterSubSchema) isNonBodyParameter_Oneof() {}
+func (*NonBodyParameter_HeaderParameterSubSchema) isNonBodyParameter_Oneof() {}
+
func (*NonBodyParameter_FormDataParameterSubSchema) isNonBodyParameter_Oneof() {}
-func (*NonBodyParameter_QueryParameterSubSchema) isNonBodyParameter_Oneof() {}
-func (*NonBodyParameter_PathParameterSubSchema) isNonBodyParameter_Oneof() {}
+
+func (*NonBodyParameter_QueryParameterSubSchema) isNonBodyParameter_Oneof() {}
+
+func (*NonBodyParameter_PathParameterSubSchema) isNonBodyParameter_Oneof() {}
func (m *NonBodyParameter) GetOneof() isNonBodyParameter_Oneof {
if m != nil {
@@ -1773,9 +2363,9 @@ func (m *NonBodyParameter) GetPathParameterSubSchema() *PathParameterSubSchema {
return nil
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*NonBodyParameter) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _NonBodyParameter_OneofMarshaler, _NonBodyParameter_OneofUnmarshaler, _NonBodyParameter_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*NonBodyParameter) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*NonBodyParameter_HeaderParameterSubSchema)(nil),
(*NonBodyParameter_FormDataParameterSubSchema)(nil),
(*NonBodyParameter_QueryParameterSubSchema)(nil),
@@ -1783,122 +2373,43 @@ func (*NonBodyParameter) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buff
}
}
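Calling code is unaffected by the swap from `XXX_OneofFuncs` to `XXX_OneofWrappers`: oneof cases are still set through the generated wrapper structs and read through the per-case getters. A minimal sketch, assuming the vendored `openapi_v2` import path and using only field names visible in this file:

```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored import path
)

func main() {
	p := &openapi_v2.NonBodyParameter{
		Oneof: &openapi_v2.NonBodyParameter_QueryParameterSubSchema{
			QueryParameterSubSchema: &openapi_v2.QueryParameterSubSchema{
				Name: "limit",
				In:   "query",
				Type: "integer",
			},
		},
	}
	// Each generated getter returns nil unless its case is the one stored
	// in Oneof, so callers can probe a single case without a type switch.
	if q := p.GetQueryParameterSubSchema(); q != nil {
		fmt.Printf("%s parameter %q (%s)\n", q.In, q.Name, q.Type)
	}
}
```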
-func _NonBodyParameter_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*NonBodyParameter)
- // oneof
- switch x := m.Oneof.(type) {
- case *NonBodyParameter_HeaderParameterSubSchema:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.HeaderParameterSubSchema); err != nil {
- return err
- }
- case *NonBodyParameter_FormDataParameterSubSchema:
- b.EncodeVarint(2<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.FormDataParameterSubSchema); err != nil {
- return err
- }
- case *NonBodyParameter_QueryParameterSubSchema:
- b.EncodeVarint(3<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.QueryParameterSubSchema); err != nil {
- return err
- }
- case *NonBodyParameter_PathParameterSubSchema:
- b.EncodeVarint(4<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.PathParameterSubSchema); err != nil {
- return err
- }
- case nil:
- default:
- return fmt.Errorf("NonBodyParameter.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _NonBodyParameter_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*NonBodyParameter)
- switch tag {
- case 1: // oneof.header_parameter_sub_schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(HeaderParameterSubSchema)
- err := b.DecodeMessage(msg)
- m.Oneof = &NonBodyParameter_HeaderParameterSubSchema{msg}
- return true, err
- case 2: // oneof.form_data_parameter_sub_schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(FormDataParameterSubSchema)
- err := b.DecodeMessage(msg)
- m.Oneof = &NonBodyParameter_FormDataParameterSubSchema{msg}
- return true, err
- case 3: // oneof.query_parameter_sub_schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(QueryParameterSubSchema)
- err := b.DecodeMessage(msg)
- m.Oneof = &NonBodyParameter_QueryParameterSubSchema{msg}
- return true, err
- case 4: // oneof.path_parameter_sub_schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(PathParameterSubSchema)
- err := b.DecodeMessage(msg)
- m.Oneof = &NonBodyParameter_PathParameterSubSchema{msg}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _NonBodyParameter_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*NonBodyParameter)
- // oneof
- switch x := m.Oneof.(type) {
- case *NonBodyParameter_HeaderParameterSubSchema:
- s := proto.Size(x.HeaderParameterSubSchema)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *NonBodyParameter_FormDataParameterSubSchema:
- s := proto.Size(x.FormDataParameterSubSchema)
- n += proto.SizeVarint(2<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *NonBodyParameter_QueryParameterSubSchema:
- s := proto.Size(x.QueryParameterSubSchema)
- n += proto.SizeVarint(3<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *NonBodyParameter_PathParameterSubSchema:
- s := proto.Size(x.PathParameterSubSchema)
- n += proto.SizeVarint(4<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
+type Oauth2AccessCodeSecurity struct {
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Flow string `protobuf:"bytes,2,opt,name=flow,proto3" json:"flow,omitempty"`
+ Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes,proto3" json:"scopes,omitempty"`
+ AuthorizationUrl string `protobuf:"bytes,4,opt,name=authorization_url,json=authorizationUrl,proto3" json:"authorization_url,omitempty"`
+ TokenUrl string `protobuf:"bytes,5,opt,name=token_url,json=tokenUrl,proto3" json:"token_url,omitempty"`
+ Description string `protobuf:"bytes,6,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,7,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-type Oauth2AccessCodeSecurity struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Flow string `protobuf:"bytes,2,opt,name=flow" json:"flow,omitempty"`
- Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes" json:"scopes,omitempty"`
- AuthorizationUrl string `protobuf:"bytes,4,opt,name=authorization_url,json=authorizationUrl" json:"authorization_url,omitempty"`
- TokenUrl string `protobuf:"bytes,5,opt,name=token_url,json=tokenUrl" json:"token_url,omitempty"`
- Description string `protobuf:"bytes,6,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,7,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+func (m *Oauth2AccessCodeSecurity) Reset() { *m = Oauth2AccessCodeSecurity{} }
+func (m *Oauth2AccessCodeSecurity) String() string { return proto.CompactTextString(m) }
+func (*Oauth2AccessCodeSecurity) ProtoMessage() {}
+func (*Oauth2AccessCodeSecurity) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{31}
}
-func (m *Oauth2AccessCodeSecurity) Reset() { *m = Oauth2AccessCodeSecurity{} }
-func (m *Oauth2AccessCodeSecurity) String() string { return proto.CompactTextString(m) }
-func (*Oauth2AccessCodeSecurity) ProtoMessage() {}
-func (*Oauth2AccessCodeSecurity) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{31} }
+func (m *Oauth2AccessCodeSecurity) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Oauth2AccessCodeSecurity.Unmarshal(m, b)
+}
+func (m *Oauth2AccessCodeSecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Oauth2AccessCodeSecurity.Marshal(b, m, deterministic)
+}
+func (m *Oauth2AccessCodeSecurity) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Oauth2AccessCodeSecurity.Merge(m, src)
+}
+func (m *Oauth2AccessCodeSecurity) XXX_Size() int {
+ return xxx_messageInfo_Oauth2AccessCodeSecurity.Size(m)
+}
+func (m *Oauth2AccessCodeSecurity) XXX_DiscardUnknown() {
+ xxx_messageInfo_Oauth2AccessCodeSecurity.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Oauth2AccessCodeSecurity proto.InternalMessageInfo
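The `XXX_Unmarshal`/`XXX_Marshal` methods added here delegate to the runtime's message-info table rather than hand-rolled code; callers keep using the top-level `proto` functions. A round-trip sketch, with the `openapi_v2` import path assumed from the vendored layout:

```go
package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored import path
)

func main() {
	in := &openapi_v2.Oauth2AccessCodeSecurity{
		Type:             "oauth2",
		Flow:             "accessCode",
		AuthorizationUrl: "https://example.com/authorize",
		TokenUrl:         "https://example.com/token",
	}
	// proto.Marshal routes through the XXX_Marshal plumbing above.
	raw, err := proto.Marshal(in)
	if err != nil {
		log.Fatal(err)
	}
	out := &openapi_v2.Oauth2AccessCodeSecurity{}
	if err := proto.Unmarshal(raw, out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.GetFlow(), out.GetTokenUrl())
}
```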
func (m *Oauth2AccessCodeSecurity) GetType() string {
if m != nil {
@@ -1950,18 +2461,41 @@ func (m *Oauth2AccessCodeSecurity) GetVendorExtension() []*NamedAny {
}
type Oauth2ApplicationSecurity struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Flow string `protobuf:"bytes,2,opt,name=flow" json:"flow,omitempty"`
- Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes" json:"scopes,omitempty"`
- TokenUrl string `protobuf:"bytes,4,opt,name=token_url,json=tokenUrl" json:"token_url,omitempty"`
- Description string `protobuf:"bytes,5,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Flow string `protobuf:"bytes,2,opt,name=flow,proto3" json:"flow,omitempty"`
+ Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes,proto3" json:"scopes,omitempty"`
+ TokenUrl string `protobuf:"bytes,4,opt,name=token_url,json=tokenUrl,proto3" json:"token_url,omitempty"`
+ Description string `protobuf:"bytes,5,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *Oauth2ApplicationSecurity) Reset() { *m = Oauth2ApplicationSecurity{} }
-func (m *Oauth2ApplicationSecurity) String() string { return proto.CompactTextString(m) }
-func (*Oauth2ApplicationSecurity) ProtoMessage() {}
-func (*Oauth2ApplicationSecurity) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{32} }
+func (m *Oauth2ApplicationSecurity) Reset() { *m = Oauth2ApplicationSecurity{} }
+func (m *Oauth2ApplicationSecurity) String() string { return proto.CompactTextString(m) }
+func (*Oauth2ApplicationSecurity) ProtoMessage() {}
+func (*Oauth2ApplicationSecurity) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{32}
+}
+
+func (m *Oauth2ApplicationSecurity) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Oauth2ApplicationSecurity.Unmarshal(m, b)
+}
+func (m *Oauth2ApplicationSecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Oauth2ApplicationSecurity.Marshal(b, m, deterministic)
+}
+func (m *Oauth2ApplicationSecurity) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Oauth2ApplicationSecurity.Merge(m, src)
+}
+func (m *Oauth2ApplicationSecurity) XXX_Size() int {
+ return xxx_messageInfo_Oauth2ApplicationSecurity.Size(m)
+}
+func (m *Oauth2ApplicationSecurity) XXX_DiscardUnknown() {
+ xxx_messageInfo_Oauth2ApplicationSecurity.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Oauth2ApplicationSecurity proto.InternalMessageInfo
func (m *Oauth2ApplicationSecurity) GetType() string {
if m != nil {
@@ -2006,18 +2540,41 @@ func (m *Oauth2ApplicationSecurity) GetVendorExtension() []*NamedAny {
}
type Oauth2ImplicitSecurity struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Flow string `protobuf:"bytes,2,opt,name=flow" json:"flow,omitempty"`
- Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes" json:"scopes,omitempty"`
- AuthorizationUrl string `protobuf:"bytes,4,opt,name=authorization_url,json=authorizationUrl" json:"authorization_url,omitempty"`
- Description string `protobuf:"bytes,5,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Flow string `protobuf:"bytes,2,opt,name=flow,proto3" json:"flow,omitempty"`
+ Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes,proto3" json:"scopes,omitempty"`
+ AuthorizationUrl string `protobuf:"bytes,4,opt,name=authorization_url,json=authorizationUrl,proto3" json:"authorization_url,omitempty"`
+ Description string `protobuf:"bytes,5,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Oauth2ImplicitSecurity) Reset() { *m = Oauth2ImplicitSecurity{} }
+func (m *Oauth2ImplicitSecurity) String() string { return proto.CompactTextString(m) }
+func (*Oauth2ImplicitSecurity) ProtoMessage() {}
+func (*Oauth2ImplicitSecurity) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{33}
+}
+
+func (m *Oauth2ImplicitSecurity) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Oauth2ImplicitSecurity.Unmarshal(m, b)
+}
+func (m *Oauth2ImplicitSecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Oauth2ImplicitSecurity.Marshal(b, m, deterministic)
+}
+func (m *Oauth2ImplicitSecurity) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Oauth2ImplicitSecurity.Merge(m, src)
+}
+func (m *Oauth2ImplicitSecurity) XXX_Size() int {
+ return xxx_messageInfo_Oauth2ImplicitSecurity.Size(m)
+}
+func (m *Oauth2ImplicitSecurity) XXX_DiscardUnknown() {
+ xxx_messageInfo_Oauth2ImplicitSecurity.DiscardUnknown(m)
}
-func (m *Oauth2ImplicitSecurity) Reset() { *m = Oauth2ImplicitSecurity{} }
-func (m *Oauth2ImplicitSecurity) String() string { return proto.CompactTextString(m) }
-func (*Oauth2ImplicitSecurity) ProtoMessage() {}
-func (*Oauth2ImplicitSecurity) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{33} }
+var xxx_messageInfo_Oauth2ImplicitSecurity proto.InternalMessageInfo
func (m *Oauth2ImplicitSecurity) GetType() string {
if m != nil {
@@ -2062,18 +2619,41 @@ func (m *Oauth2ImplicitSecurity) GetVendorExtension() []*NamedAny {
}
type Oauth2PasswordSecurity struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Flow string `protobuf:"bytes,2,opt,name=flow" json:"flow,omitempty"`
- Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes" json:"scopes,omitempty"`
- TokenUrl string `protobuf:"bytes,4,opt,name=token_url,json=tokenUrl" json:"token_url,omitempty"`
- Description string `protobuf:"bytes,5,opt,name=description" json:"description,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Flow string `protobuf:"bytes,2,opt,name=flow,proto3" json:"flow,omitempty"`
+ Scopes *Oauth2Scopes `protobuf:"bytes,3,opt,name=scopes,proto3" json:"scopes,omitempty"`
+ TokenUrl string `protobuf:"bytes,4,opt,name=token_url,json=tokenUrl,proto3" json:"token_url,omitempty"`
+ Description string `protobuf:"bytes,5,opt,name=description,proto3" json:"description,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Oauth2PasswordSecurity) Reset() { *m = Oauth2PasswordSecurity{} }
+func (m *Oauth2PasswordSecurity) String() string { return proto.CompactTextString(m) }
+func (*Oauth2PasswordSecurity) ProtoMessage() {}
+func (*Oauth2PasswordSecurity) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{34}
}
-func (m *Oauth2PasswordSecurity) Reset() { *m = Oauth2PasswordSecurity{} }
-func (m *Oauth2PasswordSecurity) String() string { return proto.CompactTextString(m) }
-func (*Oauth2PasswordSecurity) ProtoMessage() {}
-func (*Oauth2PasswordSecurity) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{34} }
+func (m *Oauth2PasswordSecurity) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Oauth2PasswordSecurity.Unmarshal(m, b)
+}
+func (m *Oauth2PasswordSecurity) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Oauth2PasswordSecurity.Marshal(b, m, deterministic)
+}
+func (m *Oauth2PasswordSecurity) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Oauth2PasswordSecurity.Merge(m, src)
+}
+func (m *Oauth2PasswordSecurity) XXX_Size() int {
+ return xxx_messageInfo_Oauth2PasswordSecurity.Size(m)
+}
+func (m *Oauth2PasswordSecurity) XXX_DiscardUnknown() {
+ xxx_messageInfo_Oauth2PasswordSecurity.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Oauth2PasswordSecurity proto.InternalMessageInfo
func (m *Oauth2PasswordSecurity) GetType() string {
if m != nil {
@@ -2118,13 +2698,36 @@ func (m *Oauth2PasswordSecurity) GetVendorExtension() []*NamedAny {
}
type Oauth2Scopes struct {
- AdditionalProperties []*NamedString `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedString `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *Oauth2Scopes) Reset() { *m = Oauth2Scopes{} }
-func (m *Oauth2Scopes) String() string { return proto.CompactTextString(m) }
-func (*Oauth2Scopes) ProtoMessage() {}
-func (*Oauth2Scopes) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{35} }
+func (m *Oauth2Scopes) Reset() { *m = Oauth2Scopes{} }
+func (m *Oauth2Scopes) String() string { return proto.CompactTextString(m) }
+func (*Oauth2Scopes) ProtoMessage() {}
+func (*Oauth2Scopes) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{35}
+}
+
+func (m *Oauth2Scopes) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Oauth2Scopes.Unmarshal(m, b)
+}
+func (m *Oauth2Scopes) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Oauth2Scopes.Marshal(b, m, deterministic)
+}
+func (m *Oauth2Scopes) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Oauth2Scopes.Merge(m, src)
+}
+func (m *Oauth2Scopes) XXX_Size() int {
+ return xxx_messageInfo_Oauth2Scopes.Size(m)
+}
+func (m *Oauth2Scopes) XXX_DiscardUnknown() {
+ xxx_messageInfo_Oauth2Scopes.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Oauth2Scopes proto.InternalMessageInfo
func (m *Oauth2Scopes) GetAdditionalProperties() []*NamedString {
if m != nil {
@@ -2134,32 +2737,55 @@ func (m *Oauth2Scopes) GetAdditionalProperties() []*NamedString {
}
type Operation struct {
- Tags []string `protobuf:"bytes,1,rep,name=tags" json:"tags,omitempty"`
+ Tags []string `protobuf:"bytes,1,rep,name=tags,proto3" json:"tags,omitempty"`
// A brief summary of the operation.
- Summary string `protobuf:"bytes,2,opt,name=summary" json:"summary,omitempty"`
+ Summary string `protobuf:"bytes,2,opt,name=summary,proto3" json:"summary,omitempty"`
// A longer description of the operation, GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
- ExternalDocs *ExternalDocs `protobuf:"bytes,4,opt,name=external_docs,json=externalDocs" json:"external_docs,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
+ ExternalDocs *ExternalDocs `protobuf:"bytes,4,opt,name=external_docs,json=externalDocs,proto3" json:"external_docs,omitempty"`
// A unique identifier of the operation.
- OperationId string `protobuf:"bytes,5,opt,name=operation_id,json=operationId" json:"operation_id,omitempty"`
+ OperationId string `protobuf:"bytes,5,opt,name=operation_id,json=operationId,proto3" json:"operation_id,omitempty"`
// A list of MIME types the API can produce.
- Produces []string `protobuf:"bytes,6,rep,name=produces" json:"produces,omitempty"`
+ Produces []string `protobuf:"bytes,6,rep,name=produces,proto3" json:"produces,omitempty"`
// A list of MIME types the API can consume.
- Consumes []string `protobuf:"bytes,7,rep,name=consumes" json:"consumes,omitempty"`
+ Consumes []string `protobuf:"bytes,7,rep,name=consumes,proto3" json:"consumes,omitempty"`
// The parameters needed to send a valid API call.
- Parameters []*ParametersItem `protobuf:"bytes,8,rep,name=parameters" json:"parameters,omitempty"`
- Responses *Responses `protobuf:"bytes,9,opt,name=responses" json:"responses,omitempty"`
+ Parameters []*ParametersItem `protobuf:"bytes,8,rep,name=parameters,proto3" json:"parameters,omitempty"`
+ Responses *Responses `protobuf:"bytes,9,opt,name=responses,proto3" json:"responses,omitempty"`
// The transfer protocol of the API.
- Schemes []string `protobuf:"bytes,10,rep,name=schemes" json:"schemes,omitempty"`
- Deprecated bool `protobuf:"varint,11,opt,name=deprecated" json:"deprecated,omitempty"`
- Security []*SecurityRequirement `protobuf:"bytes,12,rep,name=security" json:"security,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,13,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Schemes []string `protobuf:"bytes,10,rep,name=schemes,proto3" json:"schemes,omitempty"`
+ Deprecated bool `protobuf:"varint,11,opt,name=deprecated,proto3" json:"deprecated,omitempty"`
+ Security []*SecurityRequirement `protobuf:"bytes,12,rep,name=security,proto3" json:"security,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,13,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Operation) Reset() { *m = Operation{} }
+func (m *Operation) String() string { return proto.CompactTextString(m) }
+func (*Operation) ProtoMessage() {}
+func (*Operation) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{36}
+}
+
+func (m *Operation) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Operation.Unmarshal(m, b)
+}
+func (m *Operation) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Operation.Marshal(b, m, deterministic)
+}
+func (m *Operation) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Operation.Merge(m, src)
+}
+func (m *Operation) XXX_Size() int {
+ return xxx_messageInfo_Operation.Size(m)
+}
+func (m *Operation) XXX_DiscardUnknown() {
+ xxx_messageInfo_Operation.DiscardUnknown(m)
}
-func (m *Operation) Reset() { *m = Operation{} }
-func (m *Operation) String() string { return proto.CompactTextString(m) }
-func (*Operation) ProtoMessage() {}
-func (*Operation) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{36} }
+var xxx_messageInfo_Operation proto.InternalMessageInfo
func (m *Operation) GetTags() []string {
if m != nil {
@@ -2256,26 +2882,51 @@ type Parameter struct {
// Types that are valid to be assigned to Oneof:
// *Parameter_BodyParameter
// *Parameter_NonBodyParameter
- Oneof isParameter_Oneof `protobuf_oneof:"oneof"`
+ Oneof isParameter_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *Parameter) Reset() { *m = Parameter{} }
-func (m *Parameter) String() string { return proto.CompactTextString(m) }
-func (*Parameter) ProtoMessage() {}
-func (*Parameter) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{37} }
+func (m *Parameter) Reset() { *m = Parameter{} }
+func (m *Parameter) String() string { return proto.CompactTextString(m) }
+func (*Parameter) ProtoMessage() {}
+func (*Parameter) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{37}
+}
+
+func (m *Parameter) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Parameter.Unmarshal(m, b)
+}
+func (m *Parameter) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Parameter.Marshal(b, m, deterministic)
+}
+func (m *Parameter) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Parameter.Merge(m, src)
+}
+func (m *Parameter) XXX_Size() int {
+ return xxx_messageInfo_Parameter.Size(m)
+}
+func (m *Parameter) XXX_DiscardUnknown() {
+ xxx_messageInfo_Parameter.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Parameter proto.InternalMessageInfo
type isParameter_Oneof interface {
isParameter_Oneof()
}
type Parameter_BodyParameter struct {
- BodyParameter *BodyParameter `protobuf:"bytes,1,opt,name=body_parameter,json=bodyParameter,oneof"`
+ BodyParameter *BodyParameter `protobuf:"bytes,1,opt,name=body_parameter,json=bodyParameter,proto3,oneof"`
}
+
type Parameter_NonBodyParameter struct {
- NonBodyParameter *NonBodyParameter `protobuf:"bytes,2,opt,name=non_body_parameter,json=nonBodyParameter,oneof"`
+ NonBodyParameter *NonBodyParameter `protobuf:"bytes,2,opt,name=non_body_parameter,json=nonBodyParameter,proto3,oneof"`
}
-func (*Parameter_BodyParameter) isParameter_Oneof() {}
+func (*Parameter_BodyParameter) isParameter_Oneof() {}
+
func (*Parameter_NonBodyParameter) isParameter_Oneof() {}
func (m *Parameter) GetOneof() isParameter_Oneof {
@@ -2299,89 +2950,46 @@ func (m *Parameter) GetNonBodyParameter() *NonBodyParameter {
return nil
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*Parameter) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _Parameter_OneofMarshaler, _Parameter_OneofUnmarshaler, _Parameter_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*Parameter) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*Parameter_BodyParameter)(nil),
(*Parameter_NonBodyParameter)(nil),
}
}
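Where a caller needs to handle every case at once, the idiomatic consumer of this oneof is a type switch over the wrapper structs declared above. A sketch, assuming the vendored `openapi_v2` import path and a `GetName` getter on `BodyParameter` (its message is defined elsewhere in this file):

```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored import path
)

// describe dispatches on the concrete wrapper type held in the oneof.
func describe(p *openapi_v2.Parameter) string {
	switch x := p.GetOneof().(type) {
	case *openapi_v2.Parameter_BodyParameter:
		return fmt.Sprintf("body parameter %q", x.BodyParameter.GetName())
	case *openapi_v2.Parameter_NonBodyParameter:
		return "non-body parameter"
	case nil:
		return "unset"
	default:
		return fmt.Sprintf("unexpected case %T", x)
	}
}

func main() {
	fmt.Println(describe(&openapi_v2.Parameter{}))
}
```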
-func _Parameter_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*Parameter)
- // oneof
- switch x := m.Oneof.(type) {
- case *Parameter_BodyParameter:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.BodyParameter); err != nil {
- return err
- }
- case *Parameter_NonBodyParameter:
- b.EncodeVarint(2<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.NonBodyParameter); err != nil {
- return err
- }
- case nil:
- default:
- return fmt.Errorf("Parameter.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _Parameter_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*Parameter)
- switch tag {
- case 1: // oneof.body_parameter
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(BodyParameter)
- err := b.DecodeMessage(msg)
- m.Oneof = &Parameter_BodyParameter{msg}
- return true, err
- case 2: // oneof.non_body_parameter
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(NonBodyParameter)
- err := b.DecodeMessage(msg)
- m.Oneof = &Parameter_NonBodyParameter{msg}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _Parameter_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*Parameter)
- // oneof
- switch x := m.Oneof.(type) {
- case *Parameter_BodyParameter:
- s := proto.Size(x.BodyParameter)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *Parameter_NonBodyParameter:
- s := proto.Size(x.NonBodyParameter)
- n += proto.SizeVarint(2<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
-}
-
// One or more JSON representations for parameters
type ParameterDefinitions struct {
- AdditionalProperties []*NamedParameter `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedParameter `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *ParameterDefinitions) Reset() { *m = ParameterDefinitions{} }
-func (m *ParameterDefinitions) String() string { return proto.CompactTextString(m) }
-func (*ParameterDefinitions) ProtoMessage() {}
-func (*ParameterDefinitions) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{38} }
+func (m *ParameterDefinitions) Reset() { *m = ParameterDefinitions{} }
+func (m *ParameterDefinitions) String() string { return proto.CompactTextString(m) }
+func (*ParameterDefinitions) ProtoMessage() {}
+func (*ParameterDefinitions) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{38}
+}
+
+func (m *ParameterDefinitions) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ParameterDefinitions.Unmarshal(m, b)
+}
+func (m *ParameterDefinitions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ParameterDefinitions.Marshal(b, m, deterministic)
+}
+func (m *ParameterDefinitions) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ParameterDefinitions.Merge(m, src)
+}
+func (m *ParameterDefinitions) XXX_Size() int {
+ return xxx_messageInfo_ParameterDefinitions.Size(m)
+}
+func (m *ParameterDefinitions) XXX_DiscardUnknown() {
+ xxx_messageInfo_ParameterDefinitions.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ParameterDefinitions proto.InternalMessageInfo
func (m *ParameterDefinitions) GetAdditionalProperties() []*NamedParameter {
if m != nil {
@@ -2394,26 +3002,51 @@ type ParametersItem struct {
// Types that are valid to be assigned to Oneof:
// *ParametersItem_Parameter
// *ParametersItem_JsonReference
- Oneof isParametersItem_Oneof `protobuf_oneof:"oneof"`
+ Oneof isParametersItem_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *ParametersItem) Reset() { *m = ParametersItem{} }
+func (m *ParametersItem) String() string { return proto.CompactTextString(m) }
+func (*ParametersItem) ProtoMessage() {}
+func (*ParametersItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{39}
+}
+
+func (m *ParametersItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ParametersItem.Unmarshal(m, b)
+}
+func (m *ParametersItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ParametersItem.Marshal(b, m, deterministic)
+}
+func (m *ParametersItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ParametersItem.Merge(m, src)
+}
+func (m *ParametersItem) XXX_Size() int {
+ return xxx_messageInfo_ParametersItem.Size(m)
+}
+func (m *ParametersItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_ParametersItem.DiscardUnknown(m)
}
-func (m *ParametersItem) Reset() { *m = ParametersItem{} }
-func (m *ParametersItem) String() string { return proto.CompactTextString(m) }
-func (*ParametersItem) ProtoMessage() {}
-func (*ParametersItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{39} }
+var xxx_messageInfo_ParametersItem proto.InternalMessageInfo
type isParametersItem_Oneof interface {
isParametersItem_Oneof()
}
type ParametersItem_Parameter struct {
- Parameter *Parameter `protobuf:"bytes,1,opt,name=parameter,oneof"`
+ Parameter *Parameter `protobuf:"bytes,1,opt,name=parameter,proto3,oneof"`
}
+
type ParametersItem_JsonReference struct {
- JsonReference *JsonReference `protobuf:"bytes,2,opt,name=json_reference,json=jsonReference,oneof"`
+ JsonReference *JsonReference `protobuf:"bytes,2,opt,name=json_reference,json=jsonReference,proto3,oneof"`
}
-func (*ParametersItem_Parameter) isParametersItem_Oneof() {}
+func (*ParametersItem_Parameter) isParametersItem_Oneof() {}
+
func (*ParametersItem_JsonReference) isParametersItem_Oneof() {}
func (m *ParametersItem) GetOneof() isParametersItem_Oneof {
@@ -2437,98 +3070,55 @@ func (m *ParametersItem) GetJsonReference() *JsonReference {
return nil
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*ParametersItem) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _ParametersItem_OneofMarshaler, _ParametersItem_OneofUnmarshaler, _ParametersItem_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*ParametersItem) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*ParametersItem_Parameter)(nil),
(*ParametersItem_JsonReference)(nil),
}
}
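This oneof is the hook for `$ref` resolution: a parameter-list entry is either an inline `Parameter` or a `JsonReference` to be chased through the document. A resolution sketch, assuming the vendored `openapi_v2` import path and `JsonReference`'s `XRef` field (defined elsewhere in this file, following the same `_ref` mapping `PathItem` uses below):

```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored import path
)

// resolve returns the inline parameter, or the reference string a real
// resolver would look up in the document's definitions.
func resolve(item *openapi_v2.ParametersItem) (param *openapi_v2.Parameter, ref string) {
	if p := item.GetParameter(); p != nil {
		return p, ""
	}
	if r := item.GetJsonReference(); r != nil {
		return nil, r.GetXRef()
	}
	return nil, ""
}

func main() {
	_, ref := resolve(&openapi_v2.ParametersItem{
		Oneof: &openapi_v2.ParametersItem_JsonReference{
			JsonReference: &openapi_v2.JsonReference{XRef: "#/parameters/limit"},
		},
	})
	fmt.Println(ref)
}
```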
-func _ParametersItem_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*ParametersItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *ParametersItem_Parameter:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Parameter); err != nil {
- return err
- }
- case *ParametersItem_JsonReference:
- b.EncodeVarint(2<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.JsonReference); err != nil {
- return err
- }
- case nil:
- default:
- return fmt.Errorf("ParametersItem.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _ParametersItem_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*ParametersItem)
- switch tag {
- case 1: // oneof.parameter
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Parameter)
- err := b.DecodeMessage(msg)
- m.Oneof = &ParametersItem_Parameter{msg}
- return true, err
- case 2: // oneof.json_reference
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(JsonReference)
- err := b.DecodeMessage(msg)
- m.Oneof = &ParametersItem_JsonReference{msg}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _ParametersItem_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*ParametersItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *ParametersItem_Parameter:
- s := proto.Size(x.Parameter)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *ParametersItem_JsonReference:
- s := proto.Size(x.JsonReference)
- n += proto.SizeVarint(2<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
-}
-
type PathItem struct {
- XRef string `protobuf:"bytes,1,opt,name=_ref,json=Ref" json:"_ref,omitempty"`
- Get *Operation `protobuf:"bytes,2,opt,name=get" json:"get,omitempty"`
- Put *Operation `protobuf:"bytes,3,opt,name=put" json:"put,omitempty"`
- Post *Operation `protobuf:"bytes,4,opt,name=post" json:"post,omitempty"`
- Delete *Operation `protobuf:"bytes,5,opt,name=delete" json:"delete,omitempty"`
- Options *Operation `protobuf:"bytes,6,opt,name=options" json:"options,omitempty"`
- Head *Operation `protobuf:"bytes,7,opt,name=head" json:"head,omitempty"`
- Patch *Operation `protobuf:"bytes,8,opt,name=patch" json:"patch,omitempty"`
+ XRef string `protobuf:"bytes,1,opt,name=_ref,json=Ref,proto3" json:"_ref,omitempty"`
+ Get *Operation `protobuf:"bytes,2,opt,name=get,proto3" json:"get,omitempty"`
+ Put *Operation `protobuf:"bytes,3,opt,name=put,proto3" json:"put,omitempty"`
+ Post *Operation `protobuf:"bytes,4,opt,name=post,proto3" json:"post,omitempty"`
+ Delete *Operation `protobuf:"bytes,5,opt,name=delete,proto3" json:"delete,omitempty"`
+ Options *Operation `protobuf:"bytes,6,opt,name=options,proto3" json:"options,omitempty"`
+ Head *Operation `protobuf:"bytes,7,opt,name=head,proto3" json:"head,omitempty"`
+ Patch *Operation `protobuf:"bytes,8,opt,name=patch,proto3" json:"patch,omitempty"`
// The parameters needed to send a valid API call.
- Parameters []*ParametersItem `protobuf:"bytes,9,rep,name=parameters" json:"parameters,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,10,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Parameters []*ParametersItem `protobuf:"bytes,9,rep,name=parameters,proto3" json:"parameters,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,10,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *PathItem) Reset() { *m = PathItem{} }
-func (m *PathItem) String() string { return proto.CompactTextString(m) }
-func (*PathItem) ProtoMessage() {}
-func (*PathItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{40} }
+func (m *PathItem) Reset() { *m = PathItem{} }
+func (m *PathItem) String() string { return proto.CompactTextString(m) }
+func (*PathItem) ProtoMessage() {}
+func (*PathItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{40}
+}
+
+func (m *PathItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_PathItem.Unmarshal(m, b)
+}
+func (m *PathItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_PathItem.Marshal(b, m, deterministic)
+}
+func (m *PathItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_PathItem.Merge(m, src)
+}
+func (m *PathItem) XXX_Size() int {
+ return xxx_messageInfo_PathItem.Size(m)
+}
+func (m *PathItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_PathItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_PathItem proto.InternalMessageInfo
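`PathItem` spells out one field per HTTP verb instead of a map, so the usual consumer walks the fixed set of getters. A sketch, assuming only the vendored `openapi_v2` import path; every field name comes from the struct above:

```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored import path
)

// operations pairs each populated verb with its Operation, dropping the
// verbs that are unset (their getters return nil).
func operations(item *openapi_v2.PathItem) map[string]*openapi_v2.Operation {
	all := map[string]*openapi_v2.Operation{
		"get": item.GetGet(), "put": item.GetPut(), "post": item.GetPost(),
		"delete": item.GetDelete(), "options": item.GetOptions(),
		"head": item.GetHead(), "patch": item.GetPatch(),
	}
	for verb, op := range all {
		if op == nil {
			delete(all, verb)
		}
	}
	return all
}

func main() {
	item := &openapi_v2.PathItem{Get: &openapi_v2.Operation{OperationId: "listThings"}}
	for verb, op := range operations(item) {
		fmt.Println(verb, op.GetOperationId())
	}
}
```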
func (m *PathItem) GetXRef() string {
if m != nil {
@@ -2602,37 +3192,60 @@ func (m *PathItem) GetVendorExtension() []*NamedAny {
type PathParameterSubSchema struct {
// Determines whether or not this parameter is required or optional.
- Required bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"`
+ Required bool `protobuf:"varint,1,opt,name=required,proto3" json:"required,omitempty"`
// Determines the location of the parameter.
- In string `protobuf:"bytes,2,opt,name=in" json:"in,omitempty"`
+ In string `protobuf:"bytes,2,opt,name=in,proto3" json:"in,omitempty"`
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
// The name of the parameter.
- Name string `protobuf:"bytes,4,opt,name=name" json:"name,omitempty"`
- Type string `protobuf:"bytes,5,opt,name=type" json:"type,omitempty"`
- Format string `protobuf:"bytes,6,opt,name=format" json:"format,omitempty"`
- Items *PrimitivesItems `protobuf:"bytes,7,opt,name=items" json:"items,omitempty"`
- CollectionFormat string `protobuf:"bytes,8,opt,name=collection_format,json=collectionFormat" json:"collection_format,omitempty"`
- Default *Any `protobuf:"bytes,9,opt,name=default" json:"default,omitempty"`
- Maximum float64 `protobuf:"fixed64,10,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,11,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,12,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,13,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,14,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,15,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,16,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,17,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,18,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,19,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- Enum []*Any `protobuf:"bytes,20,rep,name=enum" json:"enum,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,21,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,22,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *PathParameterSubSchema) Reset() { *m = PathParameterSubSchema{} }
-func (m *PathParameterSubSchema) String() string { return proto.CompactTextString(m) }
-func (*PathParameterSubSchema) ProtoMessage() {}
-func (*PathParameterSubSchema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{41} }
+ Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"`
+ Type string `protobuf:"bytes,5,opt,name=type,proto3" json:"type,omitempty"`
+ Format string `protobuf:"bytes,6,opt,name=format,proto3" json:"format,omitempty"`
+ Items *PrimitivesItems `protobuf:"bytes,7,opt,name=items,proto3" json:"items,omitempty"`
+ CollectionFormat string `protobuf:"bytes,8,opt,name=collection_format,json=collectionFormat,proto3" json:"collection_format,omitempty"`
+ Default *Any `protobuf:"bytes,9,opt,name=default,proto3" json:"default,omitempty"`
+ Maximum float64 `protobuf:"fixed64,10,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,11,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,12,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,13,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,14,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,15,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,16,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,17,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,18,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,19,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ Enum []*Any `protobuf:"bytes,20,rep,name=enum,proto3" json:"enum,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,21,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,22,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *PathParameterSubSchema) Reset() { *m = PathParameterSubSchema{} }
+func (m *PathParameterSubSchema) String() string { return proto.CompactTextString(m) }
+func (*PathParameterSubSchema) ProtoMessage() {}
+func (*PathParameterSubSchema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{41}
+}
+
+func (m *PathParameterSubSchema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_PathParameterSubSchema.Unmarshal(m, b)
+}
+func (m *PathParameterSubSchema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_PathParameterSubSchema.Marshal(b, m, deterministic)
+}
+func (m *PathParameterSubSchema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_PathParameterSubSchema.Merge(m, src)
+}
+func (m *PathParameterSubSchema) XXX_Size() int {
+ return xxx_messageInfo_PathParameterSubSchema.Size(m)
+}
+func (m *PathParameterSubSchema) XXX_DiscardUnknown() {
+ xxx_messageInfo_PathParameterSubSchema.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_PathParameterSubSchema proto.InternalMessageInfo
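The long run of constraint fields above (`Maximum`, `ExclusiveMaximum`, `Pattern`, and friends) carries the JSON Schema validation keywords for the parameter. A minimal checker sketch for just the numeric keywords, assuming the vendored `openapi_v2` import path; since proto3 scalars have no presence bit here, the sketch treats zero-valued bounds as unset, which a real validator would handle more carefully:

```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2" // assumed vendored import path
)

// checkNumber validates v against the numeric constraints of a path
// parameter. Zero bounds are treated as "unset" for brevity only.
func checkNumber(p *openapi_v2.PathParameterSubSchema, v float64) error {
	if max := p.GetMaximum(); max != 0 {
		if v > max || (p.GetExclusiveMaximum() && v == max) {
			return fmt.Errorf("%v exceeds maximum %v", v, max)
		}
	}
	if min := p.GetMinimum(); min != 0 {
		if v < min || (p.GetExclusiveMinimum() && v == min) {
			return fmt.Errorf("%v below minimum %v", v, min)
		}
	}
	return nil
}

func main() {
	p := &openapi_v2.PathParameterSubSchema{Maximum: 100, ExclusiveMaximum: true}
	fmt.Println(checkNumber(p, 100))
}
```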
func (m *PathParameterSubSchema) GetRequired() bool {
if m != nil {
@@ -2790,14 +3403,37 @@ func (m *PathParameterSubSchema) GetVendorExtension() []*NamedAny {
// Relative paths to the individual endpoints. They must be relative to the 'basePath'.
type Paths struct {
- VendorExtension []*NamedAny `protobuf:"bytes,1,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
- Path []*NamedPathItem `protobuf:"bytes,2,rep,name=path" json:"path,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,1,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ Path []*NamedPathItem `protobuf:"bytes,2,rep,name=path,proto3" json:"path,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Paths) Reset() { *m = Paths{} }
+func (m *Paths) String() string { return proto.CompactTextString(m) }
+func (*Paths) ProtoMessage() {}
+func (*Paths) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{42}
+}
+
+func (m *Paths) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Paths.Unmarshal(m, b)
+}
+func (m *Paths) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Paths.Marshal(b, m, deterministic)
+}
+func (m *Paths) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Paths.Merge(m, src)
+}
+func (m *Paths) XXX_Size() int {
+ return xxx_messageInfo_Paths.Size(m)
+}
+func (m *Paths) XXX_DiscardUnknown() {
+ xxx_messageInfo_Paths.DiscardUnknown(m)
}
-func (m *Paths) Reset() { *m = Paths{} }
-func (m *Paths) String() string { return proto.CompactTextString(m) }
-func (*Paths) ProtoMessage() {}
-func (*Paths) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{42} }
+var xxx_messageInfo_Paths proto.InternalMessageInfo
func (m *Paths) GetVendorExtension() []*NamedAny {
if m != nil {
@@ -2814,30 +3450,53 @@ func (m *Paths) GetPath() []*NamedPathItem {
}
type PrimitivesItems struct {
- Type string `protobuf:"bytes,1,opt,name=type" json:"type,omitempty"`
- Format string `protobuf:"bytes,2,opt,name=format" json:"format,omitempty"`
- Items *PrimitivesItems `protobuf:"bytes,3,opt,name=items" json:"items,omitempty"`
- CollectionFormat string `protobuf:"bytes,4,opt,name=collection_format,json=collectionFormat" json:"collection_format,omitempty"`
- Default *Any `protobuf:"bytes,5,opt,name=default" json:"default,omitempty"`
- Maximum float64 `protobuf:"fixed64,6,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,7,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,8,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,9,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,10,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,11,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,12,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,13,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,14,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,15,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- Enum []*Any `protobuf:"bytes,16,rep,name=enum" json:"enum,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,17,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,18,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *PrimitivesItems) Reset() { *m = PrimitivesItems{} }
-func (m *PrimitivesItems) String() string { return proto.CompactTextString(m) }
-func (*PrimitivesItems) ProtoMessage() {}
-func (*PrimitivesItems) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{43} }
+ Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
+ Format string `protobuf:"bytes,2,opt,name=format,proto3" json:"format,omitempty"`
+ Items *PrimitivesItems `protobuf:"bytes,3,opt,name=items,proto3" json:"items,omitempty"`
+ CollectionFormat string `protobuf:"bytes,4,opt,name=collection_format,json=collectionFormat,proto3" json:"collection_format,omitempty"`
+ Default *Any `protobuf:"bytes,5,opt,name=default,proto3" json:"default,omitempty"`
+ Maximum float64 `protobuf:"fixed64,6,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,7,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,8,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,9,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,10,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,11,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,12,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,13,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,14,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,15,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ Enum []*Any `protobuf:"bytes,16,rep,name=enum,proto3" json:"enum,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,17,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,18,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *PrimitivesItems) Reset() { *m = PrimitivesItems{} }
+func (m *PrimitivesItems) String() string { return proto.CompactTextString(m) }
+func (*PrimitivesItems) ProtoMessage() {}
+func (*PrimitivesItems) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{43}
+}
+
+func (m *PrimitivesItems) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_PrimitivesItems.Unmarshal(m, b)
+}
+func (m *PrimitivesItems) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_PrimitivesItems.Marshal(b, m, deterministic)
+}
+func (m *PrimitivesItems) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_PrimitivesItems.Merge(m, src)
+}
+func (m *PrimitivesItems) XXX_Size() int {
+ return xxx_messageInfo_PrimitivesItems.Size(m)
+}
+func (m *PrimitivesItems) XXX_DiscardUnknown() {
+ xxx_messageInfo_PrimitivesItems.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_PrimitivesItems proto.InternalMessageInfo
func (m *PrimitivesItems) GetType() string {
if m != nil {
@@ -2966,13 +3625,36 @@ func (m *PrimitivesItems) GetVendorExtension() []*NamedAny {
}
type Properties struct {
- AdditionalProperties []*NamedSchema `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedSchema `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Properties) Reset() { *m = Properties{} }
+func (m *Properties) String() string { return proto.CompactTextString(m) }
+func (*Properties) ProtoMessage() {}
+func (*Properties) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{44}
+}
+
+func (m *Properties) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Properties.Unmarshal(m, b)
+}
+func (m *Properties) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Properties.Marshal(b, m, deterministic)
+}
+func (m *Properties) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Properties.Merge(m, src)
+}
+func (m *Properties) XXX_Size() int {
+ return xxx_messageInfo_Properties.Size(m)
+}
+func (m *Properties) XXX_DiscardUnknown() {
+ xxx_messageInfo_Properties.DiscardUnknown(m)
}
-func (m *Properties) Reset() { *m = Properties{} }
-func (m *Properties) String() string { return proto.CompactTextString(m) }
-func (*Properties) ProtoMessage() {}
-func (*Properties) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{44} }
+var xxx_messageInfo_Properties proto.InternalMessageInfo
func (m *Properties) GetAdditionalProperties() []*NamedSchema {
if m != nil {
@@ -2983,39 +3665,62 @@ func (m *Properties) GetAdditionalProperties() []*NamedSchema {
type QueryParameterSubSchema struct {
// Determines whether or not this parameter is required or optional.
- Required bool `protobuf:"varint,1,opt,name=required" json:"required,omitempty"`
+ Required bool `protobuf:"varint,1,opt,name=required,proto3" json:"required,omitempty"`
// Determines the location of the parameter.
- In string `protobuf:"bytes,2,opt,name=in" json:"in,omitempty"`
+ In string `protobuf:"bytes,2,opt,name=in,proto3" json:"in,omitempty"`
// A brief description of the parameter. This could contain examples of use. GitHub Flavored Markdown is allowed.
- Description string `protobuf:"bytes,3,opt,name=description" json:"description,omitempty"`
+ Description string `protobuf:"bytes,3,opt,name=description,proto3" json:"description,omitempty"`
// The name of the parameter.
- Name string `protobuf:"bytes,4,opt,name=name" json:"name,omitempty"`
+ Name string `protobuf:"bytes,4,opt,name=name,proto3" json:"name,omitempty"`
// allows sending a parameter by name only or with an empty value.
- AllowEmptyValue bool `protobuf:"varint,5,opt,name=allow_empty_value,json=allowEmptyValue" json:"allow_empty_value,omitempty"`
- Type string `protobuf:"bytes,6,opt,name=type" json:"type,omitempty"`
- Format string `protobuf:"bytes,7,opt,name=format" json:"format,omitempty"`
- Items *PrimitivesItems `protobuf:"bytes,8,opt,name=items" json:"items,omitempty"`
- CollectionFormat string `protobuf:"bytes,9,opt,name=collection_format,json=collectionFormat" json:"collection_format,omitempty"`
- Default *Any `protobuf:"bytes,10,opt,name=default" json:"default,omitempty"`
- Maximum float64 `protobuf:"fixed64,11,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,12,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,13,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,14,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,15,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,16,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,17,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,18,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,19,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,20,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- Enum []*Any `protobuf:"bytes,21,rep,name=enum" json:"enum,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,22,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,23,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *QueryParameterSubSchema) Reset() { *m = QueryParameterSubSchema{} }
-func (m *QueryParameterSubSchema) String() string { return proto.CompactTextString(m) }
-func (*QueryParameterSubSchema) ProtoMessage() {}
-func (*QueryParameterSubSchema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{45} }
+ AllowEmptyValue bool `protobuf:"varint,5,opt,name=allow_empty_value,json=allowEmptyValue,proto3" json:"allow_empty_value,omitempty"`
+ Type string `protobuf:"bytes,6,opt,name=type,proto3" json:"type,omitempty"`
+ Format string `protobuf:"bytes,7,opt,name=format,proto3" json:"format,omitempty"`
+ Items *PrimitivesItems `protobuf:"bytes,8,opt,name=items,proto3" json:"items,omitempty"`
+ CollectionFormat string `protobuf:"bytes,9,opt,name=collection_format,json=collectionFormat,proto3" json:"collection_format,omitempty"`
+ Default *Any `protobuf:"bytes,10,opt,name=default,proto3" json:"default,omitempty"`
+ Maximum float64 `protobuf:"fixed64,11,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,12,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,13,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,14,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,15,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,16,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,17,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,18,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,19,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,20,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ Enum []*Any `protobuf:"bytes,21,rep,name=enum,proto3" json:"enum,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,22,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,23,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *QueryParameterSubSchema) Reset() { *m = QueryParameterSubSchema{} }
+func (m *QueryParameterSubSchema) String() string { return proto.CompactTextString(m) }
+func (*QueryParameterSubSchema) ProtoMessage() {}
+func (*QueryParameterSubSchema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{45}
+}
+
+func (m *QueryParameterSubSchema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_QueryParameterSubSchema.Unmarshal(m, b)
+}
+func (m *QueryParameterSubSchema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_QueryParameterSubSchema.Marshal(b, m, deterministic)
+}
+func (m *QueryParameterSubSchema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_QueryParameterSubSchema.Merge(m, src)
+}
+func (m *QueryParameterSubSchema) XXX_Size() int {
+ return xxx_messageInfo_QueryParameterSubSchema.Size(m)
+}
+func (m *QueryParameterSubSchema) XXX_DiscardUnknown() {
+ xxx_messageInfo_QueryParameterSubSchema.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_QueryParameterSubSchema proto.InternalMessageInfo
func (m *QueryParameterSubSchema) GetRequired() bool {
if m != nil {
@@ -3179,17 +3884,40 @@ func (m *QueryParameterSubSchema) GetVendorExtension() []*NamedAny {
}
type Response struct {
- Description string `protobuf:"bytes,1,opt,name=description" json:"description,omitempty"`
- Schema *SchemaItem `protobuf:"bytes,2,opt,name=schema" json:"schema,omitempty"`
- Headers *Headers `protobuf:"bytes,3,opt,name=headers" json:"headers,omitempty"`
- Examples *Examples `protobuf:"bytes,4,opt,name=examples" json:"examples,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,5,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Description string `protobuf:"bytes,1,opt,name=description,proto3" json:"description,omitempty"`
+ Schema *SchemaItem `protobuf:"bytes,2,opt,name=schema,proto3" json:"schema,omitempty"`
+ Headers *Headers `protobuf:"bytes,3,opt,name=headers,proto3" json:"headers,omitempty"`
+ Examples *Examples `protobuf:"bytes,4,opt,name=examples,proto3" json:"examples,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,5,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Response) Reset() { *m = Response{} }
+func (m *Response) String() string { return proto.CompactTextString(m) }
+func (*Response) ProtoMessage() {}
+func (*Response) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{46}
+}
+
+func (m *Response) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Response.Unmarshal(m, b)
+}
+func (m *Response) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Response.Marshal(b, m, deterministic)
+}
+func (m *Response) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Response.Merge(m, src)
+}
+func (m *Response) XXX_Size() int {
+ return xxx_messageInfo_Response.Size(m)
+}
+func (m *Response) XXX_DiscardUnknown() {
+ xxx_messageInfo_Response.DiscardUnknown(m)
}
-func (m *Response) Reset() { *m = Response{} }
-func (m *Response) String() string { return proto.CompactTextString(m) }
-func (*Response) ProtoMessage() {}
-func (*Response) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{46} }
+var xxx_messageInfo_Response proto.InternalMessageInfo
func (m *Response) GetDescription() string {
if m != nil {
@@ -3228,13 +3956,36 @@ func (m *Response) GetVendorExtension() []*NamedAny {
// One or more JSON representations for parameters
type ResponseDefinitions struct {
- AdditionalProperties []*NamedResponse `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedResponse `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *ResponseDefinitions) Reset() { *m = ResponseDefinitions{} }
-func (m *ResponseDefinitions) String() string { return proto.CompactTextString(m) }
-func (*ResponseDefinitions) ProtoMessage() {}
-func (*ResponseDefinitions) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{47} }
+func (m *ResponseDefinitions) Reset() { *m = ResponseDefinitions{} }
+func (m *ResponseDefinitions) String() string { return proto.CompactTextString(m) }
+func (*ResponseDefinitions) ProtoMessage() {}
+func (*ResponseDefinitions) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{47}
+}
+
+func (m *ResponseDefinitions) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ResponseDefinitions.Unmarshal(m, b)
+}
+func (m *ResponseDefinitions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ResponseDefinitions.Marshal(b, m, deterministic)
+}
+func (m *ResponseDefinitions) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseDefinitions.Merge(m, src)
+}
+func (m *ResponseDefinitions) XXX_Size() int {
+ return xxx_messageInfo_ResponseDefinitions.Size(m)
+}
+func (m *ResponseDefinitions) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseDefinitions.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ResponseDefinitions proto.InternalMessageInfo
func (m *ResponseDefinitions) GetAdditionalProperties() []*NamedResponse {
if m != nil {
@@ -3247,26 +3998,51 @@ type ResponseValue struct {
// Types that are valid to be assigned to Oneof:
// *ResponseValue_Response
// *ResponseValue_JsonReference
- Oneof isResponseValue_Oneof `protobuf_oneof:"oneof"`
+ Oneof isResponseValue_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *ResponseValue) Reset() { *m = ResponseValue{} }
-func (m *ResponseValue) String() string { return proto.CompactTextString(m) }
-func (*ResponseValue) ProtoMessage() {}
-func (*ResponseValue) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{48} }
+func (m *ResponseValue) Reset() { *m = ResponseValue{} }
+func (m *ResponseValue) String() string { return proto.CompactTextString(m) }
+func (*ResponseValue) ProtoMessage() {}
+func (*ResponseValue) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{48}
+}
+
+func (m *ResponseValue) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_ResponseValue.Unmarshal(m, b)
+}
+func (m *ResponseValue) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_ResponseValue.Marshal(b, m, deterministic)
+}
+func (m *ResponseValue) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ResponseValue.Merge(m, src)
+}
+func (m *ResponseValue) XXX_Size() int {
+ return xxx_messageInfo_ResponseValue.Size(m)
+}
+func (m *ResponseValue) XXX_DiscardUnknown() {
+ xxx_messageInfo_ResponseValue.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_ResponseValue proto.InternalMessageInfo
type isResponseValue_Oneof interface {
isResponseValue_Oneof()
}
type ResponseValue_Response struct {
- Response *Response `protobuf:"bytes,1,opt,name=response,oneof"`
+ Response *Response `protobuf:"bytes,1,opt,name=response,proto3,oneof"`
}
+
type ResponseValue_JsonReference struct {
- JsonReference *JsonReference `protobuf:"bytes,2,opt,name=json_reference,json=jsonReference,oneof"`
+ JsonReference *JsonReference `protobuf:"bytes,2,opt,name=json_reference,json=jsonReference,proto3,oneof"`
}
-func (*ResponseValue_Response) isResponseValue_Oneof() {}
+func (*ResponseValue_Response) isResponseValue_Oneof() {}
+
func (*ResponseValue_JsonReference) isResponseValue_Oneof() {}
func (m *ResponseValue) GetOneof() isResponseValue_Oneof {
@@ -3290,90 +4066,47 @@ func (m *ResponseValue) GetJsonReference() *JsonReference {
return nil
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*ResponseValue) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _ResponseValue_OneofMarshaler, _ResponseValue_OneofUnmarshaler, _ResponseValue_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*ResponseValue) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*ResponseValue_Response)(nil),
(*ResponseValue_JsonReference)(nil),
}
}
-func _ResponseValue_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*ResponseValue)
- // oneof
- switch x := m.Oneof.(type) {
- case *ResponseValue_Response:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Response); err != nil {
- return err
- }
- case *ResponseValue_JsonReference:
- b.EncodeVarint(2<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.JsonReference); err != nil {
- return err
- }
- case nil:
- default:
- return fmt.Errorf("ResponseValue.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _ResponseValue_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*ResponseValue)
- switch tag {
- case 1: // oneof.response
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Response)
- err := b.DecodeMessage(msg)
- m.Oneof = &ResponseValue_Response{msg}
- return true, err
- case 2: // oneof.json_reference
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(JsonReference)
- err := b.DecodeMessage(msg)
- m.Oneof = &ResponseValue_JsonReference{msg}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _ResponseValue_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*ResponseValue)
- // oneof
- switch x := m.Oneof.(type) {
- case *ResponseValue_Response:
- s := proto.Size(x.Response)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *ResponseValue_JsonReference:
- s := proto.Size(x.JsonReference)
- n += proto.SizeVarint(2<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
-}
-
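(The deleted trio above, and the matching ones removed for SchemaItem and SecurityDefinitionsItem further down, are superseded by the single XXX_OneofWrappers method: the newer protobuf runtime derives oneof wire handling generically from the list of wrapper types, so per-message marshaler, unmarshaler, and sizer callbacks are no longer generated. Caller-facing usage is unchanged; a small sketch against the ResponseValue API above, with the package import path assumed from the vendored layout, follows.)

```go
package main

import (
	"fmt"

	openapi_v2 "github.com/googleapis/gnostic/OpenAPIv2"
)

func main() {
	// Populate the oneof by assigning one of its generated wrapper types.
	rv := &openapi_v2.ResponseValue{
		Oneof: &openapi_v2.ResponseValue_Response{
			Response: &openapi_v2.Response{Description: "OK"},
		},
	}
	// The generated getters return nil when a different case is set,
	// so callers can probe cases without writing a type switch.
	if r := rv.GetResponse(); r != nil {
		fmt.Println(r.Description)
	}
	if jr := rv.GetJsonReference(); jr == nil {
		fmt.Println("json_reference case not set")
	}
}
```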
// Response objects names can either be any valid HTTP status code or 'default'.
type Responses struct {
- ResponseCode []*NamedResponseValue `protobuf:"bytes,1,rep,name=response_code,json=responseCode" json:"response_code,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,2,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ ResponseCode []*NamedResponseValue `protobuf:"bytes,1,rep,name=response_code,json=responseCode,proto3" json:"response_code,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,2,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Responses) Reset() { *m = Responses{} }
+func (m *Responses) String() string { return proto.CompactTextString(m) }
+func (*Responses) ProtoMessage() {}
+func (*Responses) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{49}
+}
+
+func (m *Responses) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Responses.Unmarshal(m, b)
+}
+func (m *Responses) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Responses.Marshal(b, m, deterministic)
+}
+func (m *Responses) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Responses.Merge(m, src)
+}
+func (m *Responses) XXX_Size() int {
+ return xxx_messageInfo_Responses.Size(m)
+}
+func (m *Responses) XXX_DiscardUnknown() {
+ xxx_messageInfo_Responses.DiscardUnknown(m)
}
-func (m *Responses) Reset() { *m = Responses{} }
-func (m *Responses) String() string { return proto.CompactTextString(m) }
-func (*Responses) ProtoMessage() {}
-func (*Responses) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{49} }
+var xxx_messageInfo_Responses proto.InternalMessageInfo
func (m *Responses) GetResponseCode() []*NamedResponseValue {
if m != nil {
@@ -3391,43 +4124,66 @@ func (m *Responses) GetVendorExtension() []*NamedAny {
// A deterministic version of a JSON Schema object.
type Schema struct {
- XRef string `protobuf:"bytes,1,opt,name=_ref,json=Ref" json:"_ref,omitempty"`
- Format string `protobuf:"bytes,2,opt,name=format" json:"format,omitempty"`
- Title string `protobuf:"bytes,3,opt,name=title" json:"title,omitempty"`
- Description string `protobuf:"bytes,4,opt,name=description" json:"description,omitempty"`
- Default *Any `protobuf:"bytes,5,opt,name=default" json:"default,omitempty"`
- MultipleOf float64 `protobuf:"fixed64,6,opt,name=multiple_of,json=multipleOf" json:"multiple_of,omitempty"`
- Maximum float64 `protobuf:"fixed64,7,opt,name=maximum" json:"maximum,omitempty"`
- ExclusiveMaximum bool `protobuf:"varint,8,opt,name=exclusive_maximum,json=exclusiveMaximum" json:"exclusive_maximum,omitempty"`
- Minimum float64 `protobuf:"fixed64,9,opt,name=minimum" json:"minimum,omitempty"`
- ExclusiveMinimum bool `protobuf:"varint,10,opt,name=exclusive_minimum,json=exclusiveMinimum" json:"exclusive_minimum,omitempty"`
- MaxLength int64 `protobuf:"varint,11,opt,name=max_length,json=maxLength" json:"max_length,omitempty"`
- MinLength int64 `protobuf:"varint,12,opt,name=min_length,json=minLength" json:"min_length,omitempty"`
- Pattern string `protobuf:"bytes,13,opt,name=pattern" json:"pattern,omitempty"`
- MaxItems int64 `protobuf:"varint,14,opt,name=max_items,json=maxItems" json:"max_items,omitempty"`
- MinItems int64 `protobuf:"varint,15,opt,name=min_items,json=minItems" json:"min_items,omitempty"`
- UniqueItems bool `protobuf:"varint,16,opt,name=unique_items,json=uniqueItems" json:"unique_items,omitempty"`
- MaxProperties int64 `protobuf:"varint,17,opt,name=max_properties,json=maxProperties" json:"max_properties,omitempty"`
- MinProperties int64 `protobuf:"varint,18,opt,name=min_properties,json=minProperties" json:"min_properties,omitempty"`
- Required []string `protobuf:"bytes,19,rep,name=required" json:"required,omitempty"`
- Enum []*Any `protobuf:"bytes,20,rep,name=enum" json:"enum,omitempty"`
- AdditionalProperties *AdditionalPropertiesItem `protobuf:"bytes,21,opt,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
- Type *TypeItem `protobuf:"bytes,22,opt,name=type" json:"type,omitempty"`
- Items *ItemsItem `protobuf:"bytes,23,opt,name=items" json:"items,omitempty"`
- AllOf []*Schema `protobuf:"bytes,24,rep,name=all_of,json=allOf" json:"all_of,omitempty"`
- Properties *Properties `protobuf:"bytes,25,opt,name=properties" json:"properties,omitempty"`
- Discriminator string `protobuf:"bytes,26,opt,name=discriminator" json:"discriminator,omitempty"`
- ReadOnly bool `protobuf:"varint,27,opt,name=read_only,json=readOnly" json:"read_only,omitempty"`
- Xml *Xml `protobuf:"bytes,28,opt,name=xml" json:"xml,omitempty"`
- ExternalDocs *ExternalDocs `protobuf:"bytes,29,opt,name=external_docs,json=externalDocs" json:"external_docs,omitempty"`
- Example *Any `protobuf:"bytes,30,opt,name=example" json:"example,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,31,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
-}
-
-func (m *Schema) Reset() { *m = Schema{} }
-func (m *Schema) String() string { return proto.CompactTextString(m) }
-func (*Schema) ProtoMessage() {}
-func (*Schema) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{50} }
+ XRef string `protobuf:"bytes,1,opt,name=_ref,json=Ref,proto3" json:"_ref,omitempty"`
+ Format string `protobuf:"bytes,2,opt,name=format,proto3" json:"format,omitempty"`
+ Title string `protobuf:"bytes,3,opt,name=title,proto3" json:"title,omitempty"`
+ Description string `protobuf:"bytes,4,opt,name=description,proto3" json:"description,omitempty"`
+ Default *Any `protobuf:"bytes,5,opt,name=default,proto3" json:"default,omitempty"`
+ MultipleOf float64 `protobuf:"fixed64,6,opt,name=multiple_of,json=multipleOf,proto3" json:"multiple_of,omitempty"`
+ Maximum float64 `protobuf:"fixed64,7,opt,name=maximum,proto3" json:"maximum,omitempty"`
+ ExclusiveMaximum bool `protobuf:"varint,8,opt,name=exclusive_maximum,json=exclusiveMaximum,proto3" json:"exclusive_maximum,omitempty"`
+ Minimum float64 `protobuf:"fixed64,9,opt,name=minimum,proto3" json:"minimum,omitempty"`
+ ExclusiveMinimum bool `protobuf:"varint,10,opt,name=exclusive_minimum,json=exclusiveMinimum,proto3" json:"exclusive_minimum,omitempty"`
+ MaxLength int64 `protobuf:"varint,11,opt,name=max_length,json=maxLength,proto3" json:"max_length,omitempty"`
+ MinLength int64 `protobuf:"varint,12,opt,name=min_length,json=minLength,proto3" json:"min_length,omitempty"`
+ Pattern string `protobuf:"bytes,13,opt,name=pattern,proto3" json:"pattern,omitempty"`
+ MaxItems int64 `protobuf:"varint,14,opt,name=max_items,json=maxItems,proto3" json:"max_items,omitempty"`
+ MinItems int64 `protobuf:"varint,15,opt,name=min_items,json=minItems,proto3" json:"min_items,omitempty"`
+ UniqueItems bool `protobuf:"varint,16,opt,name=unique_items,json=uniqueItems,proto3" json:"unique_items,omitempty"`
+ MaxProperties int64 `protobuf:"varint,17,opt,name=max_properties,json=maxProperties,proto3" json:"max_properties,omitempty"`
+ MinProperties int64 `protobuf:"varint,18,opt,name=min_properties,json=minProperties,proto3" json:"min_properties,omitempty"`
+ Required []string `protobuf:"bytes,19,rep,name=required,proto3" json:"required,omitempty"`
+ Enum []*Any `protobuf:"bytes,20,rep,name=enum,proto3" json:"enum,omitempty"`
+ AdditionalProperties *AdditionalPropertiesItem `protobuf:"bytes,21,opt,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ Type *TypeItem `protobuf:"bytes,22,opt,name=type,proto3" json:"type,omitempty"`
+ Items *ItemsItem `protobuf:"bytes,23,opt,name=items,proto3" json:"items,omitempty"`
+ AllOf []*Schema `protobuf:"bytes,24,rep,name=all_of,json=allOf,proto3" json:"all_of,omitempty"`
+ Properties *Properties `protobuf:"bytes,25,opt,name=properties,proto3" json:"properties,omitempty"`
+ Discriminator string `protobuf:"bytes,26,opt,name=discriminator,proto3" json:"discriminator,omitempty"`
+ ReadOnly bool `protobuf:"varint,27,opt,name=read_only,json=readOnly,proto3" json:"read_only,omitempty"`
+ Xml *Xml `protobuf:"bytes,28,opt,name=xml,proto3" json:"xml,omitempty"`
+ ExternalDocs *ExternalDocs `protobuf:"bytes,29,opt,name=external_docs,json=externalDocs,proto3" json:"external_docs,omitempty"`
+ Example *Any `protobuf:"bytes,30,opt,name=example,proto3" json:"example,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,31,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Schema) Reset() { *m = Schema{} }
+func (m *Schema) String() string { return proto.CompactTextString(m) }
+func (*Schema) ProtoMessage() {}
+func (*Schema) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{50}
+}
+
+func (m *Schema) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Schema.Unmarshal(m, b)
+}
+func (m *Schema) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Schema.Marshal(b, m, deterministic)
+}
+func (m *Schema) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Schema.Merge(m, src)
+}
+func (m *Schema) XXX_Size() int {
+ return xxx_messageInfo_Schema.Size(m)
+}
+func (m *Schema) XXX_DiscardUnknown() {
+ xxx_messageInfo_Schema.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Schema proto.InternalMessageInfo
func (m *Schema) GetXRef() string {
if m != nil {
@@ -3650,26 +4406,51 @@ type SchemaItem struct {
// Types that are valid to be assigned to Oneof:
// *SchemaItem_Schema
// *SchemaItem_FileSchema
- Oneof isSchemaItem_Oneof `protobuf_oneof:"oneof"`
+ Oneof isSchemaItem_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *SchemaItem) Reset() { *m = SchemaItem{} }
-func (m *SchemaItem) String() string { return proto.CompactTextString(m) }
-func (*SchemaItem) ProtoMessage() {}
-func (*SchemaItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{51} }
+func (m *SchemaItem) Reset() { *m = SchemaItem{} }
+func (m *SchemaItem) String() string { return proto.CompactTextString(m) }
+func (*SchemaItem) ProtoMessage() {}
+func (*SchemaItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{51}
+}
+
+func (m *SchemaItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_SchemaItem.Unmarshal(m, b)
+}
+func (m *SchemaItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_SchemaItem.Marshal(b, m, deterministic)
+}
+func (m *SchemaItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_SchemaItem.Merge(m, src)
+}
+func (m *SchemaItem) XXX_Size() int {
+ return xxx_messageInfo_SchemaItem.Size(m)
+}
+func (m *SchemaItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_SchemaItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_SchemaItem proto.InternalMessageInfo
type isSchemaItem_Oneof interface {
isSchemaItem_Oneof()
}
type SchemaItem_Schema struct {
- Schema *Schema `protobuf:"bytes,1,opt,name=schema,oneof"`
+ Schema *Schema `protobuf:"bytes,1,opt,name=schema,proto3,oneof"`
}
+
type SchemaItem_FileSchema struct {
- FileSchema *FileSchema `protobuf:"bytes,2,opt,name=file_schema,json=fileSchema,oneof"`
+ FileSchema *FileSchema `protobuf:"bytes,2,opt,name=file_schema,json=fileSchema,proto3,oneof"`
}
-func (*SchemaItem_Schema) isSchemaItem_Oneof() {}
+func (*SchemaItem_Schema) isSchemaItem_Oneof() {}
+
func (*SchemaItem_FileSchema) isSchemaItem_Oneof() {}
func (m *SchemaItem) GetOneof() isSchemaItem_Oneof {
@@ -3693,88 +4474,45 @@ func (m *SchemaItem) GetFileSchema() *FileSchema {
return nil
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*SchemaItem) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _SchemaItem_OneofMarshaler, _SchemaItem_OneofUnmarshaler, _SchemaItem_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*SchemaItem) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*SchemaItem_Schema)(nil),
(*SchemaItem_FileSchema)(nil),
}
}
-func _SchemaItem_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*SchemaItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *SchemaItem_Schema:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Schema); err != nil {
- return err
- }
- case *SchemaItem_FileSchema:
- b.EncodeVarint(2<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.FileSchema); err != nil {
- return err
- }
- case nil:
- default:
- return fmt.Errorf("SchemaItem.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _SchemaItem_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*SchemaItem)
- switch tag {
- case 1: // oneof.schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Schema)
- err := b.DecodeMessage(msg)
- m.Oneof = &SchemaItem_Schema{msg}
- return true, err
- case 2: // oneof.file_schema
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(FileSchema)
- err := b.DecodeMessage(msg)
- m.Oneof = &SchemaItem_FileSchema{msg}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _SchemaItem_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*SchemaItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *SchemaItem_Schema:
- s := proto.Size(x.Schema)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *SchemaItem_FileSchema:
- s := proto.Size(x.FileSchema)
- n += proto.SizeVarint(2<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
+type SecurityDefinitions struct {
+ AdditionalProperties []*NamedSecurityDefinitionsItem `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-type SecurityDefinitions struct {
- AdditionalProperties []*NamedSecurityDefinitionsItem `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+func (m *SecurityDefinitions) Reset() { *m = SecurityDefinitions{} }
+func (m *SecurityDefinitions) String() string { return proto.CompactTextString(m) }
+func (*SecurityDefinitions) ProtoMessage() {}
+func (*SecurityDefinitions) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{52}
}
-func (m *SecurityDefinitions) Reset() { *m = SecurityDefinitions{} }
-func (m *SecurityDefinitions) String() string { return proto.CompactTextString(m) }
-func (*SecurityDefinitions) ProtoMessage() {}
-func (*SecurityDefinitions) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{52} }
+func (m *SecurityDefinitions) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_SecurityDefinitions.Unmarshal(m, b)
+}
+func (m *SecurityDefinitions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_SecurityDefinitions.Marshal(b, m, deterministic)
+}
+func (m *SecurityDefinitions) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_SecurityDefinitions.Merge(m, src)
+}
+func (m *SecurityDefinitions) XXX_Size() int {
+ return xxx_messageInfo_SecurityDefinitions.Size(m)
+}
+func (m *SecurityDefinitions) XXX_DiscardUnknown() {
+ xxx_messageInfo_SecurityDefinitions.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_SecurityDefinitions proto.InternalMessageInfo
func (m *SecurityDefinitions) GetAdditionalProperties() []*NamedSecurityDefinitionsItem {
if m != nil {
@@ -3791,43 +4529,76 @@ type SecurityDefinitionsItem struct {
// *SecurityDefinitionsItem_Oauth2PasswordSecurity
// *SecurityDefinitionsItem_Oauth2ApplicationSecurity
// *SecurityDefinitionsItem_Oauth2AccessCodeSecurity
- Oneof isSecurityDefinitionsItem_Oneof `protobuf_oneof:"oneof"`
+ Oneof isSecurityDefinitionsItem_Oneof `protobuf_oneof:"oneof"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *SecurityDefinitionsItem) Reset() { *m = SecurityDefinitionsItem{} }
+func (m *SecurityDefinitionsItem) String() string { return proto.CompactTextString(m) }
+func (*SecurityDefinitionsItem) ProtoMessage() {}
+func (*SecurityDefinitionsItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{53}
}
-func (m *SecurityDefinitionsItem) Reset() { *m = SecurityDefinitionsItem{} }
-func (m *SecurityDefinitionsItem) String() string { return proto.CompactTextString(m) }
-func (*SecurityDefinitionsItem) ProtoMessage() {}
-func (*SecurityDefinitionsItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{53} }
+func (m *SecurityDefinitionsItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_SecurityDefinitionsItem.Unmarshal(m, b)
+}
+func (m *SecurityDefinitionsItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_SecurityDefinitionsItem.Marshal(b, m, deterministic)
+}
+func (m *SecurityDefinitionsItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_SecurityDefinitionsItem.Merge(m, src)
+}
+func (m *SecurityDefinitionsItem) XXX_Size() int {
+ return xxx_messageInfo_SecurityDefinitionsItem.Size(m)
+}
+func (m *SecurityDefinitionsItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_SecurityDefinitionsItem.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_SecurityDefinitionsItem proto.InternalMessageInfo
type isSecurityDefinitionsItem_Oneof interface {
isSecurityDefinitionsItem_Oneof()
}
type SecurityDefinitionsItem_BasicAuthenticationSecurity struct {
- BasicAuthenticationSecurity *BasicAuthenticationSecurity `protobuf:"bytes,1,opt,name=basic_authentication_security,json=basicAuthenticationSecurity,oneof"`
+ BasicAuthenticationSecurity *BasicAuthenticationSecurity `protobuf:"bytes,1,opt,name=basic_authentication_security,json=basicAuthenticationSecurity,proto3,oneof"`
}
+
type SecurityDefinitionsItem_ApiKeySecurity struct {
- ApiKeySecurity *ApiKeySecurity `protobuf:"bytes,2,opt,name=api_key_security,json=apiKeySecurity,oneof"`
+ ApiKeySecurity *ApiKeySecurity `protobuf:"bytes,2,opt,name=api_key_security,json=apiKeySecurity,proto3,oneof"`
}
+
type SecurityDefinitionsItem_Oauth2ImplicitSecurity struct {
- Oauth2ImplicitSecurity *Oauth2ImplicitSecurity `protobuf:"bytes,3,opt,name=oauth2_implicit_security,json=oauth2ImplicitSecurity,oneof"`
+ Oauth2ImplicitSecurity *Oauth2ImplicitSecurity `protobuf:"bytes,3,opt,name=oauth2_implicit_security,json=oauth2ImplicitSecurity,proto3,oneof"`
}
+
type SecurityDefinitionsItem_Oauth2PasswordSecurity struct {
- Oauth2PasswordSecurity *Oauth2PasswordSecurity `protobuf:"bytes,4,opt,name=oauth2_password_security,json=oauth2PasswordSecurity,oneof"`
+ Oauth2PasswordSecurity *Oauth2PasswordSecurity `protobuf:"bytes,4,opt,name=oauth2_password_security,json=oauth2PasswordSecurity,proto3,oneof"`
}
+
type SecurityDefinitionsItem_Oauth2ApplicationSecurity struct {
- Oauth2ApplicationSecurity *Oauth2ApplicationSecurity `protobuf:"bytes,5,opt,name=oauth2_application_security,json=oauth2ApplicationSecurity,oneof"`
+ Oauth2ApplicationSecurity *Oauth2ApplicationSecurity `protobuf:"bytes,5,opt,name=oauth2_application_security,json=oauth2ApplicationSecurity,proto3,oneof"`
}
+
type SecurityDefinitionsItem_Oauth2AccessCodeSecurity struct {
- Oauth2AccessCodeSecurity *Oauth2AccessCodeSecurity `protobuf:"bytes,6,opt,name=oauth2_access_code_security,json=oauth2AccessCodeSecurity,oneof"`
+ Oauth2AccessCodeSecurity *Oauth2AccessCodeSecurity `protobuf:"bytes,6,opt,name=oauth2_access_code_security,json=oauth2AccessCodeSecurity,proto3,oneof"`
}
func (*SecurityDefinitionsItem_BasicAuthenticationSecurity) isSecurityDefinitionsItem_Oneof() {}
-func (*SecurityDefinitionsItem_ApiKeySecurity) isSecurityDefinitionsItem_Oneof() {}
-func (*SecurityDefinitionsItem_Oauth2ImplicitSecurity) isSecurityDefinitionsItem_Oneof() {}
-func (*SecurityDefinitionsItem_Oauth2PasswordSecurity) isSecurityDefinitionsItem_Oneof() {}
-func (*SecurityDefinitionsItem_Oauth2ApplicationSecurity) isSecurityDefinitionsItem_Oneof() {}
-func (*SecurityDefinitionsItem_Oauth2AccessCodeSecurity) isSecurityDefinitionsItem_Oneof() {}
+
+func (*SecurityDefinitionsItem_ApiKeySecurity) isSecurityDefinitionsItem_Oneof() {}
+
+func (*SecurityDefinitionsItem_Oauth2ImplicitSecurity) isSecurityDefinitionsItem_Oneof() {}
+
+func (*SecurityDefinitionsItem_Oauth2PasswordSecurity) isSecurityDefinitionsItem_Oneof() {}
+
+func (*SecurityDefinitionsItem_Oauth2ApplicationSecurity) isSecurityDefinitionsItem_Oneof() {}
+
+func (*SecurityDefinitionsItem_Oauth2AccessCodeSecurity) isSecurityDefinitionsItem_Oneof() {}
func (m *SecurityDefinitionsItem) GetOneof() isSecurityDefinitionsItem_Oneof {
if m != nil {
@@ -3878,9 +4649,9 @@ func (m *SecurityDefinitionsItem) GetOauth2AccessCodeSecurity() *Oauth2AccessCod
return nil
}
-// XXX_OneofFuncs is for the internal use of the proto package.
-func (*SecurityDefinitionsItem) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
- return _SecurityDefinitionsItem_OneofMarshaler, _SecurityDefinitionsItem_OneofUnmarshaler, _SecurityDefinitionsItem_OneofSizer, []interface{}{
+// XXX_OneofWrappers is for the internal use of the proto package.
+func (*SecurityDefinitionsItem) XXX_OneofWrappers() []interface{} {
+ return []interface{}{
(*SecurityDefinitionsItem_BasicAuthenticationSecurity)(nil),
(*SecurityDefinitionsItem_ApiKeySecurity)(nil),
(*SecurityDefinitionsItem_Oauth2ImplicitSecurity)(nil),
@@ -3890,152 +4661,37 @@ func (*SecurityDefinitionsItem) XXX_OneofFuncs() (func(msg proto.Message, b *pro
}
}
-func _SecurityDefinitionsItem_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
- m := msg.(*SecurityDefinitionsItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *SecurityDefinitionsItem_BasicAuthenticationSecurity:
- b.EncodeVarint(1<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.BasicAuthenticationSecurity); err != nil {
- return err
- }
- case *SecurityDefinitionsItem_ApiKeySecurity:
- b.EncodeVarint(2<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.ApiKeySecurity); err != nil {
- return err
- }
- case *SecurityDefinitionsItem_Oauth2ImplicitSecurity:
- b.EncodeVarint(3<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Oauth2ImplicitSecurity); err != nil {
- return err
- }
- case *SecurityDefinitionsItem_Oauth2PasswordSecurity:
- b.EncodeVarint(4<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Oauth2PasswordSecurity); err != nil {
- return err
- }
- case *SecurityDefinitionsItem_Oauth2ApplicationSecurity:
- b.EncodeVarint(5<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Oauth2ApplicationSecurity); err != nil {
- return err
- }
- case *SecurityDefinitionsItem_Oauth2AccessCodeSecurity:
- b.EncodeVarint(6<<3 | proto.WireBytes)
- if err := b.EncodeMessage(x.Oauth2AccessCodeSecurity); err != nil {
- return err
- }
- case nil:
- default:
- return fmt.Errorf("SecurityDefinitionsItem.Oneof has unexpected type %T", x)
- }
- return nil
-}
-
-func _SecurityDefinitionsItem_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
- m := msg.(*SecurityDefinitionsItem)
- switch tag {
- case 1: // oneof.basic_authentication_security
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(BasicAuthenticationSecurity)
- err := b.DecodeMessage(msg)
- m.Oneof = &SecurityDefinitionsItem_BasicAuthenticationSecurity{msg}
- return true, err
- case 2: // oneof.api_key_security
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(ApiKeySecurity)
- err := b.DecodeMessage(msg)
- m.Oneof = &SecurityDefinitionsItem_ApiKeySecurity{msg}
- return true, err
- case 3: // oneof.oauth2_implicit_security
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Oauth2ImplicitSecurity)
- err := b.DecodeMessage(msg)
- m.Oneof = &SecurityDefinitionsItem_Oauth2ImplicitSecurity{msg}
- return true, err
- case 4: // oneof.oauth2_password_security
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Oauth2PasswordSecurity)
- err := b.DecodeMessage(msg)
- m.Oneof = &SecurityDefinitionsItem_Oauth2PasswordSecurity{msg}
- return true, err
- case 5: // oneof.oauth2_application_security
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Oauth2ApplicationSecurity)
- err := b.DecodeMessage(msg)
- m.Oneof = &SecurityDefinitionsItem_Oauth2ApplicationSecurity{msg}
- return true, err
- case 6: // oneof.oauth2_access_code_security
- if wire != proto.WireBytes {
- return true, proto.ErrInternalBadWireType
- }
- msg := new(Oauth2AccessCodeSecurity)
- err := b.DecodeMessage(msg)
- m.Oneof = &SecurityDefinitionsItem_Oauth2AccessCodeSecurity{msg}
- return true, err
- default:
- return false, nil
- }
-}
-
-func _SecurityDefinitionsItem_OneofSizer(msg proto.Message) (n int) {
- m := msg.(*SecurityDefinitionsItem)
- // oneof
- switch x := m.Oneof.(type) {
- case *SecurityDefinitionsItem_BasicAuthenticationSecurity:
- s := proto.Size(x.BasicAuthenticationSecurity)
- n += proto.SizeVarint(1<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *SecurityDefinitionsItem_ApiKeySecurity:
- s := proto.Size(x.ApiKeySecurity)
- n += proto.SizeVarint(2<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *SecurityDefinitionsItem_Oauth2ImplicitSecurity:
- s := proto.Size(x.Oauth2ImplicitSecurity)
- n += proto.SizeVarint(3<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *SecurityDefinitionsItem_Oauth2PasswordSecurity:
- s := proto.Size(x.Oauth2PasswordSecurity)
- n += proto.SizeVarint(4<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *SecurityDefinitionsItem_Oauth2ApplicationSecurity:
- s := proto.Size(x.Oauth2ApplicationSecurity)
- n += proto.SizeVarint(5<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case *SecurityDefinitionsItem_Oauth2AccessCodeSecurity:
- s := proto.Size(x.Oauth2AccessCodeSecurity)
- n += proto.SizeVarint(6<<3 | proto.WireBytes)
- n += proto.SizeVarint(uint64(s))
- n += s
- case nil:
- default:
- panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
- }
- return n
+type SecurityRequirement struct {
+ AdditionalProperties []*NamedStringArray `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-type SecurityRequirement struct {
- AdditionalProperties []*NamedStringArray `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+func (m *SecurityRequirement) Reset() { *m = SecurityRequirement{} }
+func (m *SecurityRequirement) String() string { return proto.CompactTextString(m) }
+func (*SecurityRequirement) ProtoMessage() {}
+func (*SecurityRequirement) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{54}
}
-func (m *SecurityRequirement) Reset() { *m = SecurityRequirement{} }
-func (m *SecurityRequirement) String() string { return proto.CompactTextString(m) }
-func (*SecurityRequirement) ProtoMessage() {}
-func (*SecurityRequirement) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{54} }
+func (m *SecurityRequirement) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_SecurityRequirement.Unmarshal(m, b)
+}
+func (m *SecurityRequirement) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_SecurityRequirement.Marshal(b, m, deterministic)
+}
+func (m *SecurityRequirement) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_SecurityRequirement.Merge(m, src)
+}
+func (m *SecurityRequirement) XXX_Size() int {
+ return xxx_messageInfo_SecurityRequirement.Size(m)
+}
+func (m *SecurityRequirement) XXX_DiscardUnknown() {
+ xxx_messageInfo_SecurityRequirement.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_SecurityRequirement proto.InternalMessageInfo
func (m *SecurityRequirement) GetAdditionalProperties() []*NamedStringArray {
if m != nil {
@@ -4045,13 +4701,36 @@ func (m *SecurityRequirement) GetAdditionalProperties() []*NamedStringArray {
}
type StringArray struct {
- Value []string `protobuf:"bytes,1,rep,name=value" json:"value,omitempty"`
+ Value []string `protobuf:"bytes,1,rep,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *StringArray) Reset() { *m = StringArray{} }
+func (m *StringArray) String() string { return proto.CompactTextString(m) }
+func (*StringArray) ProtoMessage() {}
+func (*StringArray) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{55}
+}
+
+func (m *StringArray) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_StringArray.Unmarshal(m, b)
+}
+func (m *StringArray) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_StringArray.Marshal(b, m, deterministic)
+}
+func (m *StringArray) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_StringArray.Merge(m, src)
+}
+func (m *StringArray) XXX_Size() int {
+ return xxx_messageInfo_StringArray.Size(m)
+}
+func (m *StringArray) XXX_DiscardUnknown() {
+ xxx_messageInfo_StringArray.DiscardUnknown(m)
}
-func (m *StringArray) Reset() { *m = StringArray{} }
-func (m *StringArray) String() string { return proto.CompactTextString(m) }
-func (*StringArray) ProtoMessage() {}
-func (*StringArray) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{55} }
+var xxx_messageInfo_StringArray proto.InternalMessageInfo
func (m *StringArray) GetValue() []string {
if m != nil {
@@ -4061,16 +4740,39 @@ func (m *StringArray) GetValue() []string {
}
type Tag struct {
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
- Description string `protobuf:"bytes,2,opt,name=description" json:"description,omitempty"`
- ExternalDocs *ExternalDocs `protobuf:"bytes,3,opt,name=external_docs,json=externalDocs" json:"external_docs,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,4,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Description string `protobuf:"bytes,2,opt,name=description,proto3" json:"description,omitempty"`
+ ExternalDocs *ExternalDocs `protobuf:"bytes,3,opt,name=external_docs,json=externalDocs,proto3" json:"external_docs,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,4,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
-func (m *Tag) Reset() { *m = Tag{} }
-func (m *Tag) String() string { return proto.CompactTextString(m) }
-func (*Tag) ProtoMessage() {}
-func (*Tag) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{56} }
+func (m *Tag) Reset() { *m = Tag{} }
+func (m *Tag) String() string { return proto.CompactTextString(m) }
+func (*Tag) ProtoMessage() {}
+func (*Tag) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{56}
+}
+
+func (m *Tag) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Tag.Unmarshal(m, b)
+}
+func (m *Tag) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Tag.Marshal(b, m, deterministic)
+}
+func (m *Tag) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Tag.Merge(m, src)
+}
+func (m *Tag) XXX_Size() int {
+ return xxx_messageInfo_Tag.Size(m)
+}
+func (m *Tag) XXX_DiscardUnknown() {
+ xxx_messageInfo_Tag.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Tag proto.InternalMessageInfo
func (m *Tag) GetName() string {
if m != nil {
@@ -4101,13 +4803,36 @@ func (m *Tag) GetVendorExtension() []*NamedAny {
}
type TypeItem struct {
- Value []string `protobuf:"bytes,1,rep,name=value" json:"value,omitempty"`
+ Value []string `protobuf:"bytes,1,rep,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *TypeItem) Reset() { *m = TypeItem{} }
+func (m *TypeItem) String() string { return proto.CompactTextString(m) }
+func (*TypeItem) ProtoMessage() {}
+func (*TypeItem) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{57}
+}
+
+func (m *TypeItem) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_TypeItem.Unmarshal(m, b)
+}
+func (m *TypeItem) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_TypeItem.Marshal(b, m, deterministic)
+}
+func (m *TypeItem) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_TypeItem.Merge(m, src)
+}
+func (m *TypeItem) XXX_Size() int {
+ return xxx_messageInfo_TypeItem.Size(m)
+}
+func (m *TypeItem) XXX_DiscardUnknown() {
+ xxx_messageInfo_TypeItem.DiscardUnknown(m)
}
-func (m *TypeItem) Reset() { *m = TypeItem{} }
-func (m *TypeItem) String() string { return proto.CompactTextString(m) }
-func (*TypeItem) ProtoMessage() {}
-func (*TypeItem) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{57} }
+var xxx_messageInfo_TypeItem proto.InternalMessageInfo
func (m *TypeItem) GetValue() []string {
if m != nil {
@@ -4118,13 +4843,36 @@ func (m *TypeItem) GetValue() []string {
// Any property starting with x- is valid.
type VendorExtension struct {
- AdditionalProperties []*NamedAny `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties" json:"additional_properties,omitempty"`
+ AdditionalProperties []*NamedAny `protobuf:"bytes,1,rep,name=additional_properties,json=additionalProperties,proto3" json:"additional_properties,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *VendorExtension) Reset() { *m = VendorExtension{} }
+func (m *VendorExtension) String() string { return proto.CompactTextString(m) }
+func (*VendorExtension) ProtoMessage() {}
+func (*VendorExtension) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{58}
}
-func (m *VendorExtension) Reset() { *m = VendorExtension{} }
-func (m *VendorExtension) String() string { return proto.CompactTextString(m) }
-func (*VendorExtension) ProtoMessage() {}
-func (*VendorExtension) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{58} }
+func (m *VendorExtension) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_VendorExtension.Unmarshal(m, b)
+}
+func (m *VendorExtension) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_VendorExtension.Marshal(b, m, deterministic)
+}
+func (m *VendorExtension) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_VendorExtension.Merge(m, src)
+}
+func (m *VendorExtension) XXX_Size() int {
+ return xxx_messageInfo_VendorExtension.Size(m)
+}
+func (m *VendorExtension) XXX_DiscardUnknown() {
+ xxx_messageInfo_VendorExtension.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_VendorExtension proto.InternalMessageInfo
func (m *VendorExtension) GetAdditionalProperties() []*NamedAny {
if m != nil {
@@ -4134,18 +4882,41 @@ func (m *VendorExtension) GetAdditionalProperties() []*NamedAny {
}
type Xml struct {
- Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
- Namespace string `protobuf:"bytes,2,opt,name=namespace" json:"namespace,omitempty"`
- Prefix string `protobuf:"bytes,3,opt,name=prefix" json:"prefix,omitempty"`
- Attribute bool `protobuf:"varint,4,opt,name=attribute" json:"attribute,omitempty"`
- Wrapped bool `protobuf:"varint,5,opt,name=wrapped" json:"wrapped,omitempty"`
- VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension" json:"vendor_extension,omitempty"`
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
+ Prefix string `protobuf:"bytes,3,opt,name=prefix,proto3" json:"prefix,omitempty"`
+ Attribute bool `protobuf:"varint,4,opt,name=attribute,proto3" json:"attribute,omitempty"`
+ Wrapped bool `protobuf:"varint,5,opt,name=wrapped,proto3" json:"wrapped,omitempty"`
+ VendorExtension []*NamedAny `protobuf:"bytes,6,rep,name=vendor_extension,json=vendorExtension,proto3" json:"vendor_extension,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Xml) Reset() { *m = Xml{} }
+func (m *Xml) String() string { return proto.CompactTextString(m) }
+func (*Xml) ProtoMessage() {}
+func (*Xml) Descriptor() ([]byte, []int) {
+ return fileDescriptor_336adc04ae589d92, []int{59}
+}
+
+func (m *Xml) XXX_Unmarshal(b []byte) error {
+ return xxx_messageInfo_Xml.Unmarshal(m, b)
+}
+func (m *Xml) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ return xxx_messageInfo_Xml.Marshal(b, m, deterministic)
+}
+func (m *Xml) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Xml.Merge(m, src)
+}
+func (m *Xml) XXX_Size() int {
+ return xxx_messageInfo_Xml.Size(m)
+}
+func (m *Xml) XXX_DiscardUnknown() {
+ xxx_messageInfo_Xml.DiscardUnknown(m)
}
-func (m *Xml) Reset() { *m = Xml{} }
-func (m *Xml) String() string { return proto.CompactTextString(m) }
-func (*Xml) ProtoMessage() {}
-func (*Xml) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{59} }
+var xxx_messageInfo_Xml proto.InternalMessageInfo
func (m *Xml) GetName() string {
if m != nil {
@@ -4252,9 +5023,9 @@ func init() {
proto.RegisterType((*Xml)(nil), "openapi.v2.Xml")
}
-func init() { proto.RegisterFile("OpenAPIv2/OpenAPIv2.proto", fileDescriptor0) }
+func init() { proto.RegisterFile("OpenAPIv2/OpenAPIv2.proto", fileDescriptor_336adc04ae589d92) }
-var fileDescriptor0 = []byte{
+var fileDescriptor_336adc04ae589d92 = []byte{
// 3129 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x3b, 0x4b, 0x73, 0x1c, 0x57,
0xd5, 0xf3, 0x7e, 0x1c, 0x69, 0x46, 0xa3, 0x96, 0x2c, 0xb7, 0x24, 0xc7, 0x71, 0xe4, 0x3c, 0x6c,
diff --git a/vendor/github.com/googleapis/gnostic/compiler/reader.go b/vendor/github.com/googleapis/gnostic/compiler/reader.go
index af10e0d1bc6e6..25affd063523b 100644
--- a/vendor/github.com/googleapis/gnostic/compiler/reader.go
+++ b/vendor/github.com/googleapis/gnostic/compiler/reader.go
@@ -71,6 +71,17 @@ func RemoveFromInfoCache(filename string) {
delete(infoCache, filename)
}
+func GetInfoCache() map[string]interface{} {
+ if infoCache == nil {
+ initializeInfoCache()
+ }
+ return infoCache
+}
+
+func ClearInfoCache() {
+ infoCache = make(map[string]interface{})
+}
+
// FetchFile gets a specified file from the local filesystem or a remote location.
func FetchFile(fileurl string) ([]byte, error) {
var bytes []byte
@@ -168,7 +179,11 @@ func ReadInfoForRef(basefile string, ref string) (interface{}, error) {
parts := strings.Split(ref, "#")
var filename string
if parts[0] != "" {
- filename = basedir + parts[0]
+ filename = parts[0]
+ if _, err := url.ParseRequestURI(parts[0]); err != nil {
+			// It is not a URL, so the file is local
+ filename = basedir + parts[0]
+ }
} else {
filename = basefile
}
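(Two behavioral changes land in reader.go: GetInfoCache and ClearInfoCache expose the package-level parse cache to callers, and ReadInfoForRef now keeps the file part of a $ref verbatim when it parses as a URL instead of unconditionally prefixing basedir. A standalone sketch of that resolution rule is below; the helper name and sample paths are illustrative, and the branch mirrors the hunk above.)

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// resolveRefFile mirrors the branch added to ReadInfoForRef: the part of
// the ref before '#' is used as-is when it parses as a URL, and is
// resolved against basedir when it is a plain relative filename.
func resolveRefFile(basedir, basefile, ref string) string {
	parts := strings.Split(ref, "#")
	if parts[0] == "" {
		return basefile // same-document reference
	}
	if _, err := url.ParseRequestURI(parts[0]); err == nil {
		return parts[0] // absolute URL (or absolute path): fetch verbatim
	}
	return basedir + parts[0] // relative local file
}

func main() {
	fmt.Println(resolveRefFile("specs/", "specs/api.yaml", "#/definitions/Pet"))
	fmt.Println(resolveRefFile("specs/", "specs/api.yaml", "pet.yaml#/Pet"))
	fmt.Println(resolveRefFile("specs/", "specs/api.yaml", "https://example.com/pet.yaml#/Pet"))
}
```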
diff --git a/vendor/github.com/googleapis/gnostic/extensions/COMPILE-EXTENSION.sh b/vendor/github.com/googleapis/gnostic/extensions/COMPILE-EXTENSION.sh
deleted file mode 100644
index 68d02a02ac5da..0000000000000
--- a/vendor/github.com/googleapis/gnostic/extensions/COMPILE-EXTENSION.sh
+++ /dev/null
@@ -1,5 +0,0 @@
-go get github.com/golang/protobuf/protoc-gen-go
-
-protoc \
---go_out=Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any:. *.proto
-
diff --git a/vendor/github.com/googleapis/gnostic/extensions/extension.pb.go b/vendor/github.com/googleapis/gnostic/extensions/extension.pb.go
index e6927896f97d3..432dc06e6d08c 100644
--- a/vendor/github.com/googleapis/gnostic/extensions/extension.pb.go
+++ b/vendor/github.com/googleapis/gnostic/extensions/extension.pb.go
@@ -1,12 +1,14 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
-// source: extension.proto
+// source: extensions/extension.proto
package openapiextension_v1
-import proto "github.com/golang/protobuf/proto"
-import fmt "fmt"
-import math "math"
-import any "github.com/golang/protobuf/ptypes/any"
+import (
+ fmt "fmt"
+ proto "github.com/golang/protobuf/proto"
+ any "github.com/golang/protobuf/ptypes/any"
+ math "math"
+)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -17,7 +19,7 @@ var _ = math.Inf
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
-const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// The version number of OpenAPI compiler.
type Version struct {
@@ -36,16 +38,17 @@ func (m *Version) Reset() { *m = Version{} }
func (m *Version) String() string { return proto.CompactTextString(m) }
func (*Version) ProtoMessage() {}
func (*Version) Descriptor() ([]byte, []int) {
- return fileDescriptor_extension_d25f09c742c58c90, []int{0}
+ return fileDescriptor_661e47e790f76671, []int{0}
}
+
func (m *Version) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Version.Unmarshal(m, b)
}
func (m *Version) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Version.Marshal(b, m, deterministic)
}
-func (dst *Version) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Version.Merge(dst, src)
+func (m *Version) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Version.Merge(m, src)
}
func (m *Version) XXX_Size() int {
return xxx_messageInfo_Version.Size(m)
@@ -100,16 +103,17 @@ func (m *ExtensionHandlerRequest) Reset() { *m = ExtensionHandlerRequest
func (m *ExtensionHandlerRequest) String() string { return proto.CompactTextString(m) }
func (*ExtensionHandlerRequest) ProtoMessage() {}
func (*ExtensionHandlerRequest) Descriptor() ([]byte, []int) {
- return fileDescriptor_extension_d25f09c742c58c90, []int{1}
+ return fileDescriptor_661e47e790f76671, []int{1}
}
+
func (m *ExtensionHandlerRequest) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ExtensionHandlerRequest.Unmarshal(m, b)
}
func (m *ExtensionHandlerRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ExtensionHandlerRequest.Marshal(b, m, deterministic)
}
-func (dst *ExtensionHandlerRequest) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ExtensionHandlerRequest.Merge(dst, src)
+func (m *ExtensionHandlerRequest) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ExtensionHandlerRequest.Merge(m, src)
}
func (m *ExtensionHandlerRequest) XXX_Size() int {
return xxx_messageInfo_ExtensionHandlerRequest.Size(m)
@@ -159,16 +163,17 @@ func (m *ExtensionHandlerResponse) Reset() { *m = ExtensionHandlerRespon
func (m *ExtensionHandlerResponse) String() string { return proto.CompactTextString(m) }
func (*ExtensionHandlerResponse) ProtoMessage() {}
func (*ExtensionHandlerResponse) Descriptor() ([]byte, []int) {
- return fileDescriptor_extension_d25f09c742c58c90, []int{2}
+ return fileDescriptor_661e47e790f76671, []int{2}
}
+
func (m *ExtensionHandlerResponse) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_ExtensionHandlerResponse.Unmarshal(m, b)
}
func (m *ExtensionHandlerResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_ExtensionHandlerResponse.Marshal(b, m, deterministic)
}
-func (dst *ExtensionHandlerResponse) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ExtensionHandlerResponse.Merge(dst, src)
+func (m *ExtensionHandlerResponse) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_ExtensionHandlerResponse.Merge(m, src)
}
func (m *ExtensionHandlerResponse) XXX_Size() int {
return xxx_messageInfo_ExtensionHandlerResponse.Size(m)
@@ -216,16 +221,17 @@ func (m *Wrapper) Reset() { *m = Wrapper{} }
func (m *Wrapper) String() string { return proto.CompactTextString(m) }
func (*Wrapper) ProtoMessage() {}
func (*Wrapper) Descriptor() ([]byte, []int) {
- return fileDescriptor_extension_d25f09c742c58c90, []int{3}
+ return fileDescriptor_661e47e790f76671, []int{3}
}
+
func (m *Wrapper) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Wrapper.Unmarshal(m, b)
}
func (m *Wrapper) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_Wrapper.Marshal(b, m, deterministic)
}
-func (dst *Wrapper) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Wrapper.Merge(dst, src)
+func (m *Wrapper) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Wrapper.Merge(m, src)
}
func (m *Wrapper) XXX_Size() int {
return xxx_messageInfo_Wrapper.Size(m)
@@ -264,31 +270,31 @@ func init() {
proto.RegisterType((*Wrapper)(nil), "openapiextension.v1.Wrapper")
}
-func init() { proto.RegisterFile("extension.proto", fileDescriptor_extension_d25f09c742c58c90) }
-
-var fileDescriptor_extension_d25f09c742c58c90 = []byte{
- // 357 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x91, 0x4d, 0x4b, 0xc3, 0x40,
- 0x18, 0x84, 0x49, 0xbf, 0x62, 0x56, 0x6c, 0x65, 0x2d, 0x1a, 0xc5, 0x43, 0x09, 0x08, 0x45, 0x64,
- 0x4b, 0x15, 0xbc, 0xb7, 0x50, 0xd4, 0x8b, 0x2d, 0x7b, 0xa8, 0x37, 0xcb, 0x36, 0x7d, 0x9b, 0x46,
- 0x92, 0xdd, 0x75, 0xf3, 0x61, 0xfb, 0x57, 0x3c, 0xfa, 0x4b, 0x25, 0xbb, 0x49, 0x3d, 0xa8, 0xb7,
- 0xcc, 0xc3, 0x24, 0xef, 0xcc, 0x04, 0x75, 0x60, 0x9b, 0x02, 0x4f, 0x42, 0xc1, 0x89, 0x54, 0x22,
- 0x15, 0xf8, 0x44, 0x48, 0xe0, 0x4c, 0x86, 0x3f, 0x3c, 0x1f, 0x5e, 0x9c, 0x07, 0x42, 0x04, 0x11,
- 0x0c, 0xb4, 0x65, 0x99, 0xad, 0x07, 0x8c, 0xef, 0x8c, 0xdf, 0xf3, 0x91, 0x3d, 0x07, 0x55, 0x18,
- 0x71, 0x17, 0x35, 0x63, 0xf6, 0x26, 0x94, 0x6b, 0xf5, 0xac, 0x7e, 0x93, 0x1a, 0xa1, 0x69, 0xc8,
- 0x85, 0x72, 0x6b, 0x25, 0x2d, 0x44, 0x41, 0x25, 0x4b, 0xfd, 0x8d, 0x5b, 0x37, 0x54, 0x0b, 0x7c,
- 0x8a, 0x5a, 0x49, 0xb6, 0x5e, 0x87, 0x5b, 0xb7, 0xd1, 0xb3, 0xfa, 0x0e, 0x2d, 0x95, 0xf7, 0x69,
- 0xa1, 0xb3, 0x49, 0x15, 0xe8, 0x91, 0xf1, 0x55, 0x04, 0x8a, 0xc2, 0x7b, 0x06, 0x49, 0x8a, 0xef,
- 0x91, 0xfd, 0xa1, 0x98, 0x94, 0x60, 0xee, 0x1e, 0xde, 0x5e, 0x92, 0x3f, 0x2a, 0x90, 0x17, 0xe3,
- 0xa1, 0x95, 0x19, 0x3f, 0xa0, 0x63, 0x5f, 0xc4, 0x32, 0x8c, 0x40, 0x2d, 0x72, 0xd3, 0x40, 0x87,
- 0xf9, 0xef, 0x03, 0x65, 0x4b, 0xda, 0xa9, 0xde, 0x2a, 0x81, 0x97, 0x23, 0xf7, 0x77, 0xb6, 0x44,
- 0x0a, 0x9e, 0x00, 0x76, 0x91, 0xbd, 0xd1, 0x68, 0xa5, 0xc3, 0x1d, 0xd0, 0x4a, 0x16, 0x03, 0x80,
- 0x52, 0x7a, 0x96, 0x7a, 0xdf, 0xa1, 0x46, 0xe0, 0x6b, 0xd4, 0xcc, 0x59, 0x94, 0x41, 0x99, 0xa4,
- 0x4b, 0xcc, 0xf0, 0xa4, 0x1a, 0x9e, 0x8c, 0xf8, 0x8e, 0x1a, 0x8b, 0xf7, 0x8a, 0xec, 0xb2, 0x54,
- 0x71, 0xa6, 0xaa, 0x60, 0xe9, 0xe1, 0x2a, 0x89, 0xaf, 0x50, 0x7b, 0xdf, 0x62, 0xc1, 0x59, 0x0c,
- 0xfa, 0x37, 0x38, 0xf4, 0x68, 0x4f, 0x9f, 0x59, 0x0c, 0x18, 0xa3, 0xc6, 0x8e, 0xc5, 0x91, 0x3e,
- 0xeb, 0x50, 0xfd, 0x3c, 0xbe, 0x41, 0x6d, 0xa1, 0x02, 0x12, 0x70, 0x91, 0xa4, 0xa1, 0x4f, 0xf2,
- 0xe1, 0x18, 0x4f, 0x25, 0xf0, 0xd1, 0xec, 0x69, 0x5f, 0x77, 0x3e, 0x9c, 0x59, 0x5f, 0xb5, 0xfa,
- 0x74, 0x34, 0x59, 0xb6, 0x74, 0xc4, 0xbb, 0xef, 0x00, 0x00, 0x00, 0xff, 0xff, 0x84, 0x5c, 0x6b,
- 0x80, 0x51, 0x02, 0x00, 0x00,
+func init() { proto.RegisterFile("extensions/extension.proto", fileDescriptor_661e47e790f76671) }
+
+var fileDescriptor_661e47e790f76671 = []byte{
+ // 362 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x91, 0x4d, 0x4b, 0xeb, 0x40,
+ 0x18, 0x85, 0x49, 0xbf, 0x72, 0x33, 0x97, 0xdb, 0x2b, 0x63, 0xd1, 0x58, 0x5c, 0x94, 0x80, 0x50,
+ 0x44, 0xa6, 0x54, 0xc1, 0x7d, 0x0b, 0x45, 0xdd, 0xd8, 0x32, 0x8b, 0xba, 0xb3, 0x4c, 0xd3, 0xb7,
+ 0x69, 0x24, 0x99, 0x19, 0x27, 0x1f, 0xb6, 0x7f, 0xc5, 0xa5, 0xbf, 0x54, 0x32, 0x93, 0xc4, 0x85,
+ 0xba, 0x9b, 0xf3, 0x70, 0xda, 0xf7, 0x9c, 0x13, 0xd4, 0x87, 0x7d, 0x0a, 0x3c, 0x09, 0x05, 0x4f,
+ 0x46, 0xf5, 0x93, 0x48, 0x25, 0x52, 0x81, 0x8f, 0x85, 0x04, 0xce, 0x64, 0xf8, 0xc5, 0xf3, 0x71,
+ 0xff, 0x2c, 0x10, 0x22, 0x88, 0x60, 0xa4, 0x2d, 0xeb, 0x6c, 0x3b, 0x62, 0xfc, 0x60, 0xfc, 0x9e,
+ 0x8f, 0xec, 0x25, 0xa8, 0xc2, 0x88, 0x7b, 0xa8, 0x1d, 0xb3, 0x17, 0xa1, 0x5c, 0x6b, 0x60, 0x0d,
+ 0xdb, 0xd4, 0x08, 0x4d, 0x43, 0x2e, 0x94, 0xdb, 0x28, 0x69, 0x21, 0x0a, 0x2a, 0x59, 0xea, 0xef,
+ 0xdc, 0xa6, 0xa1, 0x5a, 0xe0, 0x13, 0xd4, 0x49, 0xb2, 0xed, 0x36, 0xdc, 0xbb, 0xad, 0x81, 0x35,
+ 0x74, 0x68, 0xa9, 0xbc, 0x77, 0x0b, 0x9d, 0xce, 0xaa, 0x40, 0xf7, 0x8c, 0x6f, 0x22, 0x50, 0x14,
+ 0x5e, 0x33, 0x48, 0x52, 0x7c, 0x8b, 0xec, 0x37, 0xc5, 0xa4, 0x04, 0x73, 0xf7, 0xef, 0xf5, 0x39,
+ 0xf9, 0xa1, 0x02, 0x79, 0x32, 0x1e, 0x5a, 0x99, 0xf1, 0x1d, 0x3a, 0xf2, 0x45, 0x2c, 0xc3, 0x08,
+ 0xd4, 0x2a, 0x37, 0x0d, 0x74, 0x98, 0xdf, 0xfe, 0xa0, 0x6c, 0x49, 0xff, 0x57, 0xbf, 0x2a, 0x81,
+ 0x97, 0x23, 0xf7, 0x7b, 0xb6, 0x44, 0x0a, 0x9e, 0x00, 0x76, 0x91, 0xbd, 0xd3, 0x68, 0xa3, 0xc3,
+ 0xfd, 0xa1, 0x95, 0x2c, 0x06, 0x00, 0xa5, 0xf4, 0x2c, 0xcd, 0xa1, 0x43, 0x8d, 0xc0, 0x97, 0xa8,
+ 0x9d, 0xb3, 0x28, 0x83, 0x32, 0x49, 0x8f, 0x98, 0xe1, 0x49, 0x35, 0x3c, 0x99, 0xf0, 0x03, 0x35,
+ 0x16, 0xef, 0x19, 0xd9, 0x65, 0xa9, 0xe2, 0x4c, 0x55, 0xc1, 0xd2, 0xc3, 0x55, 0x12, 0x5f, 0xa0,
+ 0x6e, 0xdd, 0x62, 0xc5, 0x59, 0x0c, 0xfa, 0x33, 0x38, 0xf4, 0x5f, 0x4d, 0x1f, 0x59, 0x0c, 0x18,
+ 0xa3, 0xd6, 0x81, 0xc5, 0x91, 0x3e, 0xeb, 0x50, 0xfd, 0x9e, 0x5e, 0xa1, 0xae, 0x50, 0x01, 0x09,
+ 0xb8, 0x48, 0xd2, 0xd0, 0x27, 0xf9, 0x78, 0x8a, 0xe7, 0x12, 0xf8, 0x64, 0xf1, 0x50, 0xd7, 0x5d,
+ 0x8e, 0x17, 0xd6, 0x47, 0xa3, 0x39, 0x9f, 0xcc, 0xd6, 0x1d, 0x1d, 0xf1, 0xe6, 0x33, 0x00, 0x00,
+ 0xff, 0xff, 0xeb, 0xf3, 0xfa, 0x65, 0x5c, 0x02, 0x00, 0x00,
}
diff --git a/vendor/github.com/gophercloud/gophercloud/.travis.yml b/vendor/github.com/gophercloud/gophercloud/.travis.yml
index 9153a00fc55f0..31f80f8dbb890 100644
--- a/vendor/github.com/gophercloud/gophercloud/.travis.yml
+++ b/vendor/github.com/gophercloud/gophercloud/.travis.yml
@@ -7,9 +7,9 @@ install:
- GO111MODULE=off go get github.com/mattn/goveralls
- GO111MODULE=off go get golang.org/x/tools/cmd/goimports
go:
-- "1.10"
- "1.11"
- "1.12"
+- "1.13"
- "tip"
env:
global:
diff --git a/vendor/github.com/gophercloud/gophercloud/CHANGELOG.md b/vendor/github.com/gophercloud/gophercloud/CHANGELOG.md
index d0b120de1b42a..e1044e68a406e 100644
--- a/vendor/github.com/gophercloud/gophercloud/CHANGELOG.md
+++ b/vendor/github.com/gophercloud/gophercloud/CHANGELOG.md
@@ -1,4 +1,80 @@
-## 0.4.0 (Unreleased)
+## 0.7.0 (Unreleased)
+
+## 0.6.0 (October 17, 2019)
+
+UPGRADE NOTES
+
+* The way reauthentication works has been refactored. This should not cause a problem, but please report bugs if it does. See [GH-1746](https://github.com/gophercloud/gophercloud/pull/1746) for more information.
+
+IMPROVEMENTS
+
+* Added `networking/v2/extensions/quotas.Get` [GH-1742](https://github.com/gophercloud/gophercloud/pull/1742)
+* Added `networking/v2/extensions/quotas.Update` [GH-1747](https://github.com/gophercloud/gophercloud/pull/1747)
+* Refactored the reauthentication implementation to use goroutines and added a check to prevent an infinite loop in certain situations. [GH-1746](https://github.com/gophercloud/gophercloud/pull/1746)
+
+BUG FIXES
+
+* Changed `Flavor` to `FlavorID` in `loadbalancer/v2/loadbalancers` [GH-1744](https://github.com/gophercloud/gophercloud/pull/1744)
+* Changed `Flavor` to `FlavorID` in `networking/v2/extensions/lbaas_v2/loadbalancers` [GH-1744](https://github.com/gophercloud/gophercloud/pull/1744)
+* The `go-yaml` dependency was updated to `v2.2.4` to fix possible DDoS vulnerabilities [GH-1751](https://github.com/gophercloud/gophercloud/pull/1751)
+
+## 0.5.0 (October 13, 2019)
+
+IMPROVEMENTS
+
+* Added `VolumeType` to `compute/v2/extensions/bootfromvolume.BlockDevice`[GH-1690](https://github.com/gophercloud/gophercloud/pull/1690)
+* Added `networking/v2/extensions/layer3/portforwarding.List` [GH-1688](https://github.com/gophercloud/gophercloud/pull/1688)
+* Added `networking/v2/extensions/layer3/portforwarding.Get` [GH-1698](https://github.com/gophercloud/gophercloud/pull/1696)
+* Added `compute/v2/extensions/tags.ReplaceAll` [GH-1696](https://github.com/gophercloud/gophercloud/pull/1696)
+* Added `compute/v2/extensions/tags.Add` [GH-1696](https://github.com/gophercloud/gophercloud/pull/1696)
+* Added `networking/v2/extensions/layer3/portforwarding.Update` [GH-1703](https://github.com/gophercloud/gophercloud/pull/1703)
+* Added `ExtractDomain` method to token results in `identity/v3/tokens` [GH-1712](https://github.com/gophercloud/gophercloud/pull/1712)
+* Added `AllowedCIDRs` to `loadbalancer/v2/listeners.CreateOpts` [GH-1710](https://github.com/gophercloud/gophercloud/pull/1710)
+* Added `AllowedCIDRs` to `loadbalancer/v2/listeners.UpdateOpts` [GH-1710](https://github.com/gophercloud/gophercloud/pull/1710)
+* Added `AllowedCIDRs` to `loadbalancer/v2/listeners.Listener` [GH-1710](https://github.com/gophercloud/gophercloud/pull/1710)
+* Added `compute/v2/extensions/tags.Add` [GH-1695](https://github.com/gophercloud/gophercloud/pull/1695)
+* Added `compute/v2/extensions/tags.ReplaceAll` [GH-1694](https://github.com/gophercloud/gophercloud/pull/1694)
+* Added `compute/v2/extensions/tags.Delete` [GH-1699](https://github.com/gophercloud/gophercloud/pull/1699)
+* Added `compute/v2/extensions/tags.DeleteAll` [GH-1700](https://github.com/gophercloud/gophercloud/pull/1700)
+* Added `ImageStatusImporting` as an image status [GH-1725](https://github.com/gophercloud/gophercloud/pull/1725)
+* Added `ByPath` to `baremetalintrospection/v1/introspection.RootDiskType` [GH-1730](https://github.com/gophercloud/gophercloud/pull/1730)
+* Added `AttachedVolumes` to `compute/v2/servers.Server` [GH-1732](https://github.com/gophercloud/gophercloud/pull/1732)
+* Enable unmarshaling server tags to a `compute/v2/servers.Server` struct [GH-1734]
+* Allow setting an empty members list in `loadbalancer/v2/pools.BatchUpdateMembers` [GH-1736](https://github.com/gophercloud/gophercloud/pull/1736)
+* Allow unsetting members' subnet ID and name in `loadbalancer/v2/pools.BatchUpdateMemberOpts` [GH-1738](https://github.com/gophercloud/gophercloud/pull/1738)
+
+BUG FIXES
+
+* Changed struct type for options in `networking/v2/extensions/lbaas_v2/listeners` to `UpdateOptsBuilder` interface instead of specific UpdateOpts type [GH-1705](https://github.com/gophercloud/gophercloud/pull/1705)
+* Changed struct type for options in `networking/v2/extensions/lbaas_v2/loadbalancers` to `UpdateOptsBuilder` interface instead of specific UpdateOpts type [GH-1706](https://github.com/gophercloud/gophercloud/pull/1706)
+* Fixed issue with `blockstorage/v1/volumes.Create` where the response was expected to be 202 [GH-1720](https://github.com/gophercloud/gophercloud/pull/1720)
+* Changed `DefaultTlsContainerRef` from `string` to `*string` in `loadbalancer/v2/listeners.UpdateOpts` to allow the value to be removed during update. [GH-1723](https://github.com/gophercloud/gophercloud/pull/1723)
+* Changed `SniContainerRefs` from `[]string{}` to `*[]string{}` in `loadbalancer/v2/listeners.UpdateOpts` to allow the value to be removed during update. [GH-1723](https://github.com/gophercloud/gophercloud/pull/1723)
+* Changed `DefaultTlsContainerRef` from `string` to `*string` in `networking/v2/extensions/lbaas_v2/listeners.UpdateOpts` to allow the value to be removed during update. [GH-1723](https://github.com/gophercloud/gophercloud/pull/1723)
+* Changed `SniContainerRefs` from `[]string{}` to `*[]string{}` in `networking/v2/extensions/lbaas_v2/listeners.UpdateOpts` to allow the value to be removed during update. [GH-1723](https://github.com/gophercloud/gophercloud/pull/1723)
+
+
+## 0.4.0 (September 3, 2019)
+
+IMPROVEMENTS
+
+* Added `blockstorage/extensions/quotasets.results.QuotaSet.Groups` [GH-1668](https://github.com/gophercloud/gophercloud/pull/1668)
+* Added `blockstorage/extensions/quotasets.results.QuotaUsageSet.Groups` [GH-1668](https://github.com/gophercloud/gophercloud/pull/1668)
+* Added `containerinfra/v1/clusters.CreateOpts.FixedNetwork` [GH-1674](https://github.com/gophercloud/gophercloud/pull/1674)
+* Added `containerinfra/v1/clusters.CreateOpts.FixedSubnet` [GH-1676](https://github.com/gophercloud/gophercloud/pull/1676)
+* Added `containerinfra/v1/clusters.CreateOpts.FloatingIPEnabled` [GH-1677](https://github.com/gophercloud/gophercloud/pull/1677)
+* Added `CreatedAt` and `UpdatedAt` to `loadbalancers/v2/loadbalancers.LoadBalancer` [GH-1681](https://github.com/gophercloud/gophercloud/pull/1681)
+* Added `networking/v2/extensions/layer3/portforwarding.Create` [GH-1651](https://github.com/gophercloud/gophercloud/pull/1651)
+* Added `networking/v2/extensions/agents.ListDHCPNetworks` [GH-1686](https://github.com/gophercloud/gophercloud/pull/1686)
+* Added `networking/v2/extensions/layer3/portforwarding.Delete` [GH-1652](https://github.com/gophercloud/gophercloud/pull/1652)
+* Added `compute/v2/extensions/tags.List` [GH-1679](https://github.com/gophercloud/gophercloud/pull/1679)
+* Added `compute/v2/extensions/tags.Check` [GH-1679](https://github.com/gophercloud/gophercloud/pull/1679)
+
+BUG FIXES
+
+* Changed `identity/v3/endpoints.ListOpts.RegionID` from `int` to `string` [GH-1664](https://github.com/gophercloud/gophercloud/pull/1664)
+* Fixed issue where older time formats in some networking APIs/resources were unable to be parsed [GH-1671](https://github.com/gophercloud/gophercloud/pull/1664)
+* Changed `SATA`, `SCSI`, and `SAS` types to `InterfaceType` in `baremetal/v1/nodes` [GH-1683]
## 0.3.0 (July 31, 2019)
diff --git a/vendor/github.com/gophercloud/gophercloud/go.mod b/vendor/github.com/gophercloud/gophercloud/go.mod
index d1ee3b472ec8f..1eebf17ed7678 100644
--- a/vendor/github.com/gophercloud/gophercloud/go.mod
+++ b/vendor/github.com/gophercloud/gophercloud/go.mod
@@ -1,7 +1,7 @@
module github.com/gophercloud/gophercloud
require (
- golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67
- golang.org/x/sys v0.0.0-20190209173611-3b5209105503 // indirect
- gopkg.in/yaml.v2 v2.2.2
+ golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2
+ gopkg.in/yaml.v2 v2.2.4
)
+
diff --git a/vendor/github.com/gophercloud/gophercloud/go.sum b/vendor/github.com/gophercloud/gophercloud/go.sum
index 33cb0be8aa31a..27dc9b30555c0 100644
--- a/vendor/github.com/gophercloud/gophercloud/go.sum
+++ b/vendor/github.com/gophercloud/gophercloud/go.sum
@@ -1,8 +1,8 @@
-golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67 h1:ng3VDlRp5/DHpSWl02R4rM9I+8M2rhmsuLwAMmkLQWE=
-golang.org/x/crypto v0.0.0-20190211182817-74369b46fc67/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
-golang.org/x/sys v0.0.0-20190209173611-3b5209105503 h1:5SvYFrOM3W8Mexn9/oA44Ji7vhXAZQ9hiP+1Q/DMrWg=
-golang.org/x/sys v0.0.0-20190209173611-3b5209105503/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a h1:1BGLXjeY4akVXGgbC9HugT3Jv3hCI0z56oJR5vAMgBU=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
-gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
-gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
+gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
+gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go
index f973d1ea0e1b5..cec633e77a688 100644
--- a/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go
+++ b/vendor/github.com/gophercloud/gophercloud/openstack/compute/v2/servers/results.go
@@ -212,10 +212,17 @@ type Server struct {
// to it.
SecurityGroups []map[string]interface{} `json:"security_groups"`
+ // AttachedVolumes includes the volume attachments of this instance
+ AttachedVolumes []AttachedVolume `json:"os-extended-volumes:volumes_attached"`
+
// Fault contains failure information about a server.
Fault Fault `json:"fault"`
}
+type AttachedVolume struct {
+ ID string `json:"id"`
+}
+
type Fault struct {
Code int `json:"code"`
Created time.Time `json:"created"`
diff --git a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go
index 6f26c96bcdc97..8af4d634cfa06 100644
--- a/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go
+++ b/vendor/github.com/gophercloud/gophercloud/openstack/identity/v3/tokens/results.go
@@ -144,6 +144,15 @@ func (r commonResult) ExtractProject() (*Project, error) {
return s.Project, err
}
+// ExtractDomain returns Domain to which User is authorized.
+func (r commonResult) ExtractDomain() (*Domain, error) {
+ var s struct {
+ Domain *Domain `json:"domain"`
+ }
+ err := r.ExtractInto(&s)
+ return s.Domain, err
+}
+
// CreateResult is the response from a Create request. Use ExtractToken()
// to interpret it as a Token, or ExtractServiceCatalog() to interpret it
// as a service catalog.
diff --git a/vendor/github.com/gophercloud/gophercloud/provider_client.go b/vendor/github.com/gophercloud/gophercloud/provider_client.go
index fce00462fd36b..885bf07a7bd89 100644
--- a/vendor/github.com/gophercloud/gophercloud/provider_client.go
+++ b/vendor/github.com/gophercloud/gophercloud/provider_client.go
@@ -94,9 +94,10 @@ type ProviderClient struct {
// reauthlock represents a set of attributes used to help in the reauthentication process.
type reauthlock struct {
sync.RWMutex
- reauthing bool
- reauthingErr error
- done *sync.Cond
+ // This channel is non-nil during reauthentication. It can be used to ask the
+ // goroutine doing Reauthenticate() for its result. Look at the implementation
+ // of Reauthenticate() for details.
+ ongoing chan<- (chan<- error)
}
// AuthenticatedHeaders returns a map of HTTP headers that are common for all
@@ -106,11 +107,15 @@ func (client *ProviderClient) AuthenticatedHeaders() (m map[string]string) {
return
}
if client.reauthmut != nil {
+ // If a Reauthenticate is in progress, wait for it to complete.
client.reauthmut.Lock()
- for client.reauthmut.reauthing {
- client.reauthmut.done.Wait()
- }
+ ongoing := client.reauthmut.ongoing
client.reauthmut.Unlock()
+ if ongoing != nil {
+ responseChannel := make(chan error)
+ ongoing <- responseChannel
+ _ = <-responseChannel
+ }
}
t := client.Token()
if t == "" {
@@ -223,7 +228,7 @@ func (client *ProviderClient) SetThrowaway(v bool) {
// this case, the reauthentication can be skipped if another thread has already
// reauthenticated in the meantime. If no previous token is known, an empty
// string should be passed instead to force unconditional reauthentication.
-func (client *ProviderClient) Reauthenticate(previousToken string) (err error) {
+func (client *ProviderClient) Reauthenticate(previousToken string) error {
if client.ReauthFunc == nil {
return nil
}
@@ -232,33 +237,50 @@ func (client *ProviderClient) Reauthenticate(previousToken string) (err error) {
return client.ReauthFunc()
}
+ messages := make(chan (chan<- error))
+
+ // Check if a Reauthenticate is in progress, or start one if not.
client.reauthmut.Lock()
- if client.reauthmut.reauthing {
- for !client.reauthmut.reauthing {
- client.reauthmut.done.Wait()
- }
- err = client.reauthmut.reauthingErr
- client.reauthmut.Unlock()
- return err
+ ongoing := client.reauthmut.ongoing
+ if ongoing == nil {
+ client.reauthmut.ongoing = messages
}
client.reauthmut.Unlock()
- client.reauthmut.Lock()
- client.reauthmut.reauthing = true
- client.reauthmut.done = sync.NewCond(client.reauthmut)
- client.reauthmut.reauthingErr = nil
- client.reauthmut.Unlock()
+ // If Reauthenticate is running elsewhere, wait for its result.
+ if ongoing != nil {
+ responseChannel := make(chan error)
+ ongoing <- responseChannel
+ return <-responseChannel
+ }
+ // Perform the actual reauthentication.
+ var err error
if previousToken == "" || client.TokenID == previousToken {
err = client.ReauthFunc()
+ } else {
+ err = nil
}
+ // Mark Reauthenticate as finished.
client.reauthmut.Lock()
- client.reauthmut.reauthing = false
- client.reauthmut.reauthingErr = err
- client.reauthmut.done.Broadcast()
+ client.reauthmut.ongoing = nil
client.reauthmut.Unlock()
- return
+
+ // Report result to all other interested goroutines.
+ //
+ // This happens in a separate goroutine because another goroutine might have
+ // acquired a copy of `client.reauthmut.ongoing` before we cleared it, but not
+ // have come around to sending its request. By answering in a goroutine, we
+ // can have that goroutine linger until all responseChannels have been sent.
+	// When GC has collected all sending ends of the channel, our receiving end
+ // will be closed and the goroutine will end.
+ go func() {
+ for responseChannel := range messages {
+ responseChannel <- err
+ }
+ }()
+ return err
}
// RequestOpts customizes the behavior of the provider.Request() method.
@@ -285,11 +307,26 @@ type RequestOpts struct {
ErrorContext error
}
+// requestState contains temporary state for a single ProviderClient.Request() call.
+type requestState struct {
+ // This flag indicates if we have reauthenticated during this request because of a 401 response.
+ // It ensures that we don't reauthenticate multiple times for a single request. If we
+ // reauthenticate, but keep getting 401 responses with the fresh token, reauthenticating some more
+ // will just get us into an infinite loop.
+ hasReauthenticated bool
+}
+
var applicationJSON = "application/json"
// Request performs an HTTP request using the ProviderClient's current HTTPClient. An authentication
// header will automatically be provided.
func (client *ProviderClient) Request(method, url string, options *RequestOpts) (*http.Response, error) {
+ return client.doRequest(method, url, options, &requestState{
+ hasReauthenticated: false,
+ })
+}
+
+func (client *ProviderClient) doRequest(method, url string, options *RequestOpts, state *requestState) (*http.Response, error) {
var body io.Reader
var contentType *string
@@ -392,7 +429,7 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts)
err = error400er.Error400(respErr)
}
case http.StatusUnauthorized:
- if client.ReauthFunc != nil {
+ if client.ReauthFunc != nil && !state.hasReauthenticated {
err = client.Reauthenticate(prereqtok)
if err != nil {
e := &ErrUnableToReauthenticate{}
@@ -404,7 +441,8 @@ func (client *ProviderClient) Request(method, url string, options *RequestOpts)
seeker.Seek(0, 0)
}
}
- resp, err = client.Request(method, url, options)
+ state.hasReauthenticated = true
+ resp, err = client.doRequest(method, url, options, state)
if err != nil {
switch err.(type) {
case *ErrUnexpectedResponseCode:
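
The hunk above replaces a sync.Cond-based wait with a channel-of-response-channels pattern: the first caller becomes the leader and runs the reauthentication, and every goroutine that picked up the in-flight channel receives the leader's result. A minimal standalone sketch of the same single-flight idea (type and function names here are illustrative, not gophercloud's):

```go
package main

import (
	"fmt"
	"sync"
)

// singleFlight coalesces concurrent calls: one goroutine becomes the leader
// and runs fn; the rest ask the in-flight call for its result over a channel
// of response channels, mirroring the refactored Reauthenticate above.
type singleFlight struct {
	mu      sync.Mutex
	ongoing chan<- (chan<- error) // non-nil while a call is in flight
}

func (s *singleFlight) Do(fn func() error) error {
	messages := make(chan (chan<- error))

	s.mu.Lock()
	ongoing := s.ongoing
	if ongoing == nil {
		s.ongoing = messages // no call in flight: we become the leader
	}
	s.mu.Unlock()

	if ongoing != nil {
		// Another goroutine is already running fn; wait for its answer.
		response := make(chan error)
		ongoing <- response
		return <-response
	}

	err := fn()

	s.mu.Lock()
	s.ongoing = nil
	s.mu.Unlock()

	// Answer goroutines that grabbed our channel before it was cleared;
	// this goroutine lingers only to serve any such stragglers.
	go func() {
		for response := range messages {
			response <- err
		}
	}()
	return err
}

func main() {
	var sf singleFlight
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(sf.Do(func() error { return nil })) // <nil>
		}()
	}
	wg.Wait()
}
```

The companion requestState change is what makes this safe against bad credentials: a 401 triggers at most one reauthentication per request, so a still-rejected fresh token cannot recurse forever.
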
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/internal/stream_chunk.pb.go b/vendor/github.com/grpc-ecosystem/grpc-gateway/internal/stream_chunk.pb.go
index 8858f069046f5..1eca68e3350b0 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/internal/stream_chunk.pb.go
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/internal/stream_chunk.pb.go
@@ -3,10 +3,12 @@
package internal
-import proto "github.com/golang/protobuf/proto"
-import fmt "fmt"
-import math "math"
-import any "github.com/golang/protobuf/ptypes/any"
+import (
+ fmt "fmt"
+ proto "github.com/golang/protobuf/proto"
+ any "github.com/golang/protobuf/ptypes/any"
+ math "math"
+)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -17,7 +19,7 @@ var _ = math.Inf
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
-const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
+const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// StreamError is a response type which is returned when
// streaming rpc returns an error.
@@ -36,16 +38,17 @@ func (m *StreamError) Reset() { *m = StreamError{} }
func (m *StreamError) String() string { return proto.CompactTextString(m) }
func (*StreamError) ProtoMessage() {}
func (*StreamError) Descriptor() ([]byte, []int) {
- return fileDescriptor_stream_chunk_a2afb657504565d7, []int{0}
+ return fileDescriptor_9d15b670e96bbb5a, []int{0}
}
+
func (m *StreamError) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_StreamError.Unmarshal(m, b)
}
func (m *StreamError) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
return xxx_messageInfo_StreamError.Marshal(b, m, deterministic)
}
-func (dst *StreamError) XXX_Merge(src proto.Message) {
- xxx_messageInfo_StreamError.Merge(dst, src)
+func (m *StreamError) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_StreamError.Merge(m, src)
}
func (m *StreamError) XXX_Size() int {
return xxx_messageInfo_StreamError.Size(m)
@@ -95,11 +98,9 @@ func init() {
proto.RegisterType((*StreamError)(nil), "grpc.gateway.runtime.StreamError")
}
-func init() {
- proto.RegisterFile("internal/stream_chunk.proto", fileDescriptor_stream_chunk_a2afb657504565d7)
-}
+func init() { proto.RegisterFile("internal/stream_chunk.proto", fileDescriptor_9d15b670e96bbb5a) }
-var fileDescriptor_stream_chunk_a2afb657504565d7 = []byte{
+var fileDescriptor_9d15b670e96bbb5a = []byte{
// 223 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x34, 0x90, 0x41, 0x4e, 0xc3, 0x30,
0x10, 0x45, 0x15, 0x4a, 0x69, 0x3b, 0xd9, 0x45, 0x5d, 0x18, 0xba, 0x20, 0x62, 0x95, 0x95, 0x23,
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/BUILD.bazel b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/BUILD.bazel
index 20862228ef872..819c45a7657fa 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/BUILD.bazel
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/BUILD.bazel
@@ -27,11 +27,12 @@ go_library(
deps = [
"//internal:go_default_library",
"//utilities:go_default_library",
+ "@com_github_golang_protobuf//descriptor:go_default_library_gen",
"@com_github_golang_protobuf//jsonpb:go_default_library_gen",
"@com_github_golang_protobuf//proto:go_default_library",
- "@com_github_golang_protobuf//protoc-gen-go/generator:go_default_library_gen",
"@go_googleapis//google/api:httpbody_go_proto",
"@io_bazel_rules_go//proto/wkt:any_go_proto",
+ "@io_bazel_rules_go//proto/wkt:descriptor_go_proto",
"@io_bazel_rules_go//proto/wkt:duration_go_proto",
"@io_bazel_rules_go//proto/wkt:field_mask_go_proto",
"@io_bazel_rules_go//proto/wkt:timestamp_go_proto",
@@ -48,6 +49,7 @@ go_test(
size = "small",
srcs = [
"context_test.go",
+ "convert_test.go",
"errors_test.go",
"fieldmask_test.go",
"handler_test.go",
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/context.go b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/context.go
index 896057e1e1e10..f8083821f3d4b 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/context.go
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/context.go
@@ -57,13 +57,39 @@ except that the forwarded destination is not another HTTP service but rather
a gRPC service.
*/
func AnnotateContext(ctx context.Context, mux *ServeMux, req *http.Request) (context.Context, error) {
+ ctx, md, err := annotateContext(ctx, mux, req)
+ if err != nil {
+ return nil, err
+ }
+ if md == nil {
+ return ctx, nil
+ }
+
+ return metadata.NewOutgoingContext(ctx, md), nil
+}
+
+// AnnotateIncomingContext adds context information such as metadata from the request.
+// Attach metadata as incoming context.
+func AnnotateIncomingContext(ctx context.Context, mux *ServeMux, req *http.Request) (context.Context, error) {
+ ctx, md, err := annotateContext(ctx, mux, req)
+ if err != nil {
+ return nil, err
+ }
+ if md == nil {
+ return ctx, nil
+ }
+
+ return metadata.NewIncomingContext(ctx, md), nil
+}
+
+func annotateContext(ctx context.Context, mux *ServeMux, req *http.Request) (context.Context, metadata.MD, error) {
var pairs []string
timeout := DefaultContextTimeout
if tm := req.Header.Get(metadataGrpcTimeout); tm != "" {
var err error
timeout, err = timeoutDecode(tm)
if err != nil {
- return nil, status.Errorf(codes.InvalidArgument, "invalid grpc-timeout: %s", tm)
+ return nil, nil, status.Errorf(codes.InvalidArgument, "invalid grpc-timeout: %s", tm)
}
}
@@ -80,7 +106,7 @@ func AnnotateContext(ctx context.Context, mux *ServeMux, req *http.Request) (con
if strings.HasSuffix(key, metadataHeaderBinarySuffix) {
b, err := decodeBinHeader(val)
if err != nil {
- return nil, status.Errorf(codes.InvalidArgument, "invalid binary header %s: %s", key, err)
+ return nil, nil, status.Errorf(codes.InvalidArgument, "invalid binary header %s: %s", key, err)
}
val = string(b)
@@ -111,13 +137,13 @@ func AnnotateContext(ctx context.Context, mux *ServeMux, req *http.Request) (con
ctx, _ = context.WithTimeout(ctx, timeout)
}
if len(pairs) == 0 {
- return ctx, nil
+ return ctx, nil, nil
}
md := metadata.Pairs(pairs...)
for _, mda := range mux.metadataAnnotators {
md = metadata.Join(md, mda(ctx, req))
}
- return metadata.NewOutgoingContext(ctx, md), nil
+ return ctx, md, nil
}
// ServerMetadata consists of metadata sent from gRPC server.
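
AnnotateIncomingContext differs from AnnotateContext only in where the metadata lands: outgoing metadata travels with RPCs the gateway issues as a gRPC client, while incoming metadata is what an in-process server handler reads. A small sketch using the real google.golang.org/grpc/metadata package (the header value is made up):

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

func main() {
	md := metadata.Pairs("grpcgateway-user-agent", "curl/7.64.1")

	// Outgoing: metadata the gateway sends when dialing a separate backend.
	out := metadata.NewOutgoingContext(context.Background(), md)

	// Incoming: metadata made visible to an in-process handler, as if the
	// request had arrived over the wire; this is what the new
	// AnnotateIncomingContext enables.
	in := metadata.NewIncomingContext(context.Background(), md)

	fromOut, _ := metadata.FromOutgoingContext(out)
	fromIn, _ := metadata.FromIncomingContext(in)
	fmt.Println(fromOut, fromIn)
}
```
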
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/convert.go b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/convert.go
index a5b3bd6a792c7..2c279344dc414 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/convert.go
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/convert.go
@@ -206,16 +206,22 @@ func BytesSlice(val, sep string) ([][]byte, error) {
// Timestamp converts the given RFC3339 formatted string into a timestamp.Timestamp.
func Timestamp(val string) (*timestamp.Timestamp, error) {
- var r *timestamp.Timestamp
- err := jsonpb.UnmarshalString(val, r)
- return r, err
+ var r timestamp.Timestamp
+ err := jsonpb.UnmarshalString(val, &r)
+ if err != nil {
+ return nil, err
+ }
+ return &r, nil
}
// Duration converts the given string into a timestamp.Duration.
func Duration(val string) (*duration.Duration, error) {
- var r *duration.Duration
- err := jsonpb.UnmarshalString(val, r)
- return r, err
+ var r duration.Duration
+ err := jsonpb.UnmarshalString(val, &r)
+ if err != nil {
+ return nil, err
+ }
+ return &r, nil
}
// Enum converts the given string into an int32 that should be type casted into the
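
The previous code handed jsonpb.UnmarshalString a nil typed pointer, giving the unmarshaler nothing to write into; the fix declares a value and passes its address. The same pitfall reproduced with the standard encoding/json package (the jsonpb call fails analogously):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type T struct{ A int }

func main() {
	// Buggy pattern: a nil *T gives Unmarshal nowhere to write.
	var bad *T
	err := json.Unmarshal([]byte(`{"A":1}`), bad)
	fmt.Println("nil pointer:", err) // json: Unmarshal(nil *main.T)

	// Fixed pattern, as in the Timestamp/Duration change above: declare a
	// value and pass its address.
	var good T
	if err := json.Unmarshal([]byte(`{"A":1}`), &good); err == nil {
		fmt.Println("value:", good.A)
	}
}
```
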
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/errors.go b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/errors.go
index ad945788dc60b..a36080713ce3f 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/errors.go
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/errors.go
@@ -66,12 +66,12 @@ var (
)
type errorBody struct {
- Error string `protobuf:"bytes,1,name=error" json:"error"`
+ Error string `protobuf:"bytes,100,name=error" json:"error"`
// This is to make the error more compatible with users that expect errors to be Status objects:
// https://github.com/grpc/grpc/blob/master/src/proto/grpc/status/status.proto
// It should be the exact same message as the Error field.
- Message string `protobuf:"bytes,1,name=message" json:"message"`
- Code int32 `protobuf:"varint,2,name=code" json:"code"`
+ Code int32 `protobuf:"varint,1,name=code" json:"code"`
+ Message string `protobuf:"bytes,2,name=message" json:"message"`
Details []*any.Any `protobuf:"bytes,3,rep,name=details" json:"details,omitempty"`
}
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/fieldmask.go b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/fieldmask.go
index e1cf7a91461f8..341aad5a3ea6c 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/fieldmask.go
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/fieldmask.go
@@ -5,12 +5,37 @@ import (
"io"
"strings"
- "github.com/golang/protobuf/protoc-gen-go/generator"
+ descriptor2 "github.com/golang/protobuf/descriptor"
+ "github.com/golang/protobuf/protoc-gen-go/descriptor"
"google.golang.org/genproto/protobuf/field_mask"
)
+func translateName(name string, md *descriptor.DescriptorProto) (string, *descriptor.DescriptorProto) {
+ // TODO - should really gate this with a test that the marshaller has used json names
+ if md != nil {
+ for _, f := range md.Field {
+ if f.JsonName != nil && f.Name != nil && *f.JsonName == name {
+ var subType *descriptor.DescriptorProto
+
+ // If the field has a TypeName then we retrieve the nested type for translating the embedded message names.
+ if f.TypeName != nil {
+ typeSplit := strings.Split(*f.TypeName, ".")
+ typeName := typeSplit[len(typeSplit)-1]
+ for _, t := range md.NestedType {
+ if typeName == *t.Name {
+ subType = t
+ }
+ }
+ }
+ return *f.Name, subType
+ }
+ }
+ }
+ return name, nil
+}
+
// FieldMaskFromRequestBody creates a FieldMask printing all complete paths from the JSON body.
-func FieldMaskFromRequestBody(r io.Reader) (*field_mask.FieldMask, error) {
+func FieldMaskFromRequestBody(r io.Reader, md *descriptor.DescriptorProto) (*field_mask.FieldMask, error) {
fm := &field_mask.FieldMask{}
var root interface{}
if err := json.NewDecoder(r).Decode(&root); err != nil {
@@ -20,7 +45,7 @@ func FieldMaskFromRequestBody(r io.Reader) (*field_mask.FieldMask, error) {
return nil, err
}
- queue := []fieldMaskPathItem{{node: root}}
+ queue := []fieldMaskPathItem{{node: root, md: md}}
for len(queue) > 0 {
// dequeue an item
item := queue[0]
@@ -29,7 +54,11 @@ func FieldMaskFromRequestBody(r io.Reader) (*field_mask.FieldMask, error) {
if m, ok := item.node.(map[string]interface{}); ok {
// if the item is an object, then enqueue all of its children
for k, v := range m {
- queue = append(queue, fieldMaskPathItem{path: append(item.path, generator.CamelCase(k)), node: v})
+ protoName, subMd := translateName(k, item.md)
+ if subMsg, ok := v.(descriptor2.Message); ok {
+ _, subMd = descriptor2.ForMessage(subMsg)
+ }
+ queue = append(queue, fieldMaskPathItem{path: append(item.path, protoName), node: v, md: subMd})
}
} else if len(item.path) > 0 {
// otherwise, it's a leaf node so print its path
@@ -47,24 +76,7 @@ type fieldMaskPathItem struct {
// a generic decoded json object the current item to inspect for further path extraction
node interface{}
-}
-
-// CamelCaseFieldMask updates the given FieldMask by converting all of its paths to CamelCase, using the same heuristic
-// that's used for naming protobuf fields in Go.
-func CamelCaseFieldMask(mask *field_mask.FieldMask) {
- if mask == nil || mask.Paths == nil {
- return
- }
-
- var newPaths []string
- for _, path := range mask.Paths {
- lowerCasedParts := strings.Split(path, ".")
- var camelCasedParts []string
- for _, part := range lowerCasedParts {
- camelCasedParts = append(camelCasedParts, generator.CamelCase(part))
- }
- newPaths = append(newPaths, strings.Join(camelCasedParts, "."))
- }
- mask.Paths = newPaths
+ // descriptor for parent message
+ md *descriptor.DescriptorProto
}
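
translateName walks the message descriptor to map JSON camelCase names back to their proto field names before mask paths are emitted. A simplified sketch of that traversal with the descriptor lookup stubbed out as a plain map (both field names in the map are invented for the example):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// jsonToProto stands in for the descriptor lookup done by translateName.
var jsonToProto = map[string]string{"displayName": "display_name", "userId": "user_id"}

// paths walks a decoded JSON body and emits one dotted path per leaf,
// translating JSON names to their proto forms along the way.
func paths(node interface{}, prefix []string, out *[]string) {
	m, ok := node.(map[string]interface{})
	if !ok {
		if len(prefix) > 0 {
			*out = append(*out, strings.Join(prefix, "."))
		}
		return
	}
	for k, v := range m {
		name := k
		if p, ok := jsonToProto[k]; ok {
			name = p
		}
		// Copy the prefix so sibling branches don't share a backing array.
		next := append(append([]string(nil), prefix...), name)
		paths(v, next, out)
	}
}

func main() {
	var body interface{}
	_ = json.Unmarshal([]byte(`{"displayName":"x","meta":{"userId":1}}`), &body)
	var out []string
	paths(body, nil, &out)
	fmt.Println(out) // [display_name meta.user_id] (map order may vary)
}
```
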
diff --git a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/query.go b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/query.go
index 5fbba5e8e8b5f..ee0207e461e0e 100644
--- a/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/query.go
+++ b/vendor/github.com/grpc-ecosystem/grpc-gateway/runtime/query.go
@@ -15,15 +15,13 @@ import (
"google.golang.org/grpc/grpclog"
)
+var valuesKeyRegexp = regexp.MustCompile("^(.*)\\[(.*)\\]$")
+
// PopulateQueryParameters populates "values" into "msg".
// A value is ignored if its key starts with one of the elements in "filter".
func PopulateQueryParameters(msg proto.Message, values url.Values, filter *utilities.DoubleArray) error {
for key, values := range values {
- re, err := regexp.Compile("^(.*)\\[(.*)\\]$")
- if err != nil {
- return err
- }
- match := re.FindStringSubmatch(key)
+ match := valuesKeyRegexp.FindStringSubmatch(key)
if len(match) == 3 {
key = match[1]
values = append([]string{match[2]}, values...)
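
Hoisting the regexp into a package-level MustCompile means the pattern is compiled once at program start rather than once per query key, and a malformed pattern now fails fast at init instead of surfacing as a per-request error. A sketch of the resulting shape (splitKey is a made-up stand-in for the inlined matching logic):

```go
package main

import (
	"fmt"
	"regexp"
)

// Compiled once at init time, exactly like valuesKeyRegexp above.
var keyRegexp = regexp.MustCompile(`^(.*)\[(.*)\]$`)

func splitKey(key string) (string, string, bool) {
	m := keyRegexp.FindStringSubmatch(key)
	if len(m) != 3 {
		return key, "", false
	}
	return m[1], m[2], true
}

func main() {
	fmt.Println(splitKey("filter[name]")) // filter name true
	fmt.Println(splitKey("plain"))        // plain  false
}
```
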
diff --git a/vendor/github.com/hashicorp/consul/api/agent.go b/vendor/github.com/hashicorp/consul/api/agent.go
index 04043ba842f92..73c6e5881d974 100644
--- a/vendor/github.com/hashicorp/consul/api/agent.go
+++ b/vendor/github.com/hashicorp/consul/api/agent.go
@@ -23,23 +23,11 @@ const (
// service proxies another service within Consul and speaks the connect
// protocol.
ServiceKindConnectProxy ServiceKind = "connect-proxy"
-)
-
-// ProxyExecMode is the execution mode for a managed Connect proxy.
-type ProxyExecMode string
-const (
- // ProxyExecModeDaemon indicates that the proxy command should be long-running
- // and should be started and supervised by the agent until it's target service
- // is deregistered.
- ProxyExecModeDaemon ProxyExecMode = "daemon"
-
- // ProxyExecModeScript indicates that the proxy command should be invoke to
- // completion on each change to the configuration of lifecycle event. The
- // script typically fetches the config and certificates from the agent API and
- // then configures an externally managed daemon, perhaps starting and stopping
- // it if necessary.
- ProxyExecModeScript ProxyExecMode = "script"
+ // ServiceKindMeshGateway is a Mesh Gateway for the Connect feature. This
+ // service will proxy connections based off the SNI header set by other
+ // connect proxies
+ ServiceKindMeshGateway ServiceKind = "mesh-gateway"
)
// UpstreamDestType is the type of upstream discovery mechanism.
@@ -64,6 +52,7 @@ type AgentCheck struct {
Output string
ServiceID string
ServiceName string
+ Type string
Definition HealthCheckDefinition
}
@@ -82,15 +71,14 @@ type AgentService struct {
Meta map[string]string
Port int
Address string
+ TaggedAddresses map[string]ServiceAddress `json:",omitempty"`
Weights AgentWeights
EnableTagOverride bool
- CreateIndex uint64 `json:",omitempty" bexpr:"-"`
- ModifyIndex uint64 `json:",omitempty" bexpr:"-"`
- ContentHash string `json:",omitempty" bexpr:"-"`
- // DEPRECATED (ProxyDestination) - remove this field
- ProxyDestination string `json:",omitempty" bexpr:"-"`
- Proxy *AgentServiceConnectProxyConfig `json:",omitempty"`
- Connect *AgentServiceConnect `json:",omitempty"`
+ CreateIndex uint64 `json:",omitempty" bexpr:"-"`
+ ModifyIndex uint64 `json:",omitempty" bexpr:"-"`
+ ContentHash string `json:",omitempty" bexpr:"-"`
+ Proxy *AgentServiceConnectProxyConfig `json:",omitempty"`
+ Connect *AgentServiceConnect `json:",omitempty"`
}
// AgentServiceChecksInfo returns information about a Service and its checks
@@ -103,28 +91,20 @@ type AgentServiceChecksInfo struct {
// AgentServiceConnect represents the Connect configuration of a service.
type AgentServiceConnect struct {
Native bool `json:",omitempty"`
- Proxy *AgentServiceConnectProxy `json:",omitempty" bexpr:"-"`
SidecarService *AgentServiceRegistration `json:",omitempty" bexpr:"-"`
}
-// AgentServiceConnectProxy represents the Connect Proxy configuration of a
-// service.
-type AgentServiceConnectProxy struct {
- ExecMode ProxyExecMode `json:",omitempty"`
- Command []string `json:",omitempty"`
- Config map[string]interface{} `json:",omitempty" bexpr:"-"`
- Upstreams []Upstream `json:",omitempty"`
-}
-
// AgentServiceConnectProxyConfig is the proxy configuration in a connect-proxy
// ServiceDefinition or response.
type AgentServiceConnectProxyConfig struct {
- DestinationServiceName string
+ DestinationServiceName string `json:",omitempty"`
DestinationServiceID string `json:",omitempty"`
LocalServiceAddress string `json:",omitempty"`
LocalServicePort int `json:",omitempty"`
Config map[string]interface{} `json:",omitempty" bexpr:"-"`
- Upstreams []Upstream
+ Upstreams []Upstream `json:",omitempty"`
+ MeshGateway MeshGatewayConfig `json:",omitempty"`
+ Expose ExposeConfig `json:",omitempty"`
}
// AgentMember represents a cluster member known to the agent
@@ -157,21 +137,20 @@ type MembersOpts struct {
// AgentServiceRegistration is used to register a new service
type AgentServiceRegistration struct {
- Kind ServiceKind `json:",omitempty"`
- ID string `json:",omitempty"`
- Name string `json:",omitempty"`
- Tags []string `json:",omitempty"`
- Port int `json:",omitempty"`
- Address string `json:",omitempty"`
- EnableTagOverride bool `json:",omitempty"`
- Meta map[string]string `json:",omitempty"`
- Weights *AgentWeights `json:",omitempty"`
+ Kind ServiceKind `json:",omitempty"`
+ ID string `json:",omitempty"`
+ Name string `json:",omitempty"`
+ Tags []string `json:",omitempty"`
+ Port int `json:",omitempty"`
+ Address string `json:",omitempty"`
+ TaggedAddresses map[string]ServiceAddress `json:",omitempty"`
+ EnableTagOverride bool `json:",omitempty"`
+ Meta map[string]string `json:",omitempty"`
+ Weights *AgentWeights `json:",omitempty"`
Check *AgentServiceCheck
Checks AgentServiceChecks
- // DEPRECATED (ProxyDestination) - remove this field
- ProxyDestination string `json:",omitempty"`
- Proxy *AgentServiceConnectProxyConfig `json:",omitempty"`
- Connect *AgentServiceConnect `json:",omitempty"`
+ Proxy *AgentServiceConnectProxyConfig `json:",omitempty"`
+ Connect *AgentServiceConnect `json:",omitempty"`
}
// AgentCheckRegistration is used to register a new check
@@ -276,12 +255,8 @@ type ConnectProxyConfig struct {
TargetServiceID string
TargetServiceName string
ContentHash string
- // DEPRECATED(managed-proxies) - this struct is re-used for sidecar configs
- // but they don't need ExecMode or Command
- ExecMode ProxyExecMode `json:",omitempty"`
- Command []string `json:",omitempty"`
- Config map[string]interface{} `bexpr:"-"`
- Upstreams []Upstream
+ Config map[string]interface{} `bexpr:"-"`
+ Upstreams []Upstream
}
// Upstream is the response structure for a proxy upstream configuration.
@@ -293,6 +268,7 @@ type Upstream struct {
LocalBindAddress string `json:",omitempty"`
LocalBindPort int `json:",omitempty"`
Config map[string]interface{} `json:",omitempty" bexpr:"-"`
+ MeshGateway MeshGatewayConfig `json:",omitempty"`
}
// Agent can be used to query the Agent endpoints
@@ -755,6 +731,19 @@ func (a *Agent) ForceLeave(node string) error {
return nil
}
+// ForceLeavePrune is used to have a failed agent removed
+// from the list of members.
+func (a *Agent) ForceLeavePrune(node string) error {
+ r := a.c.newRequest("PUT", "/v1/agent/force-leave/"+node)
+ r.params.Set("prune", "1")
+ _, resp, err := requireOK(a.c.doRequest(r))
+ if err != nil {
+ return err
+ }
+ resp.Body.Close()
+ return nil
+}
+
// ConnectAuthorize is used to authorize an incoming connection
// to a natively integrated Connect service.
func (a *Agent) ConnectAuthorize(auth *AgentAuthorizeParams) (*AgentAuthorize, error) {
@@ -815,31 +804,6 @@ func (a *Agent) ConnectCALeaf(serviceID string, q *QueryOptions) (*LeafCert, *Qu
return &out, qm, nil
}
-// ConnectProxyConfig gets the configuration for a local managed proxy instance.
-//
-// Note that this uses an unconventional blocking mechanism since it's
-// agent-local state. That means there is no persistent raft index so we block
-// based on object hash instead.
-func (a *Agent) ConnectProxyConfig(proxyServiceID string, q *QueryOptions) (*ConnectProxyConfig, *QueryMeta, error) {
- r := a.c.newRequest("GET", "/v1/agent/connect/proxy/"+proxyServiceID)
- r.setQueryOptions(q)
- rtt, resp, err := requireOK(a.c.doRequest(r))
- if err != nil {
- return nil, nil, err
- }
- defer resp.Body.Close()
-
- qm := &QueryMeta{}
- parseQueryMeta(resp, qm)
- qm.RequestTime = rtt
-
- var out ConnectProxyConfig
- if err := decodeBody(resp, &out); err != nil {
- return nil, nil, err
- }
- return &out, qm, nil
-}
-
// EnableServiceMaintenance toggles service maintenance mode on
// for the given service ID.
func (a *Agent) EnableServiceMaintenance(serviceID, reason string) error {
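
Besides dropping the managed-proxy fields, this hunk adds TaggedAddresses to both AgentService and AgentServiceRegistration. A sketch of how a registration carrying the new field serializes, using trimmed-down copies of the structs above (service names and addresses are made up):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative shapes only: trimmed copies of the updated structs above.
type ServiceAddress struct {
	Address string
	Port    int
}

type AgentServiceRegistration struct {
	Name            string                    `json:",omitempty"`
	Port            int                       `json:",omitempty"`
	TaggedAddresses map[string]ServiceAddress `json:",omitempty"`
}

func main() {
	reg := AgentServiceRegistration{
		Name: "web",
		Port: 8080,
		TaggedAddresses: map[string]ServiceAddress{
			"lan": {Address: "10.0.0.5", Port: 8080},
			"wan": {Address: "198.51.100.7", Port: 443},
		},
	}
	b, _ := json.MarshalIndent(reg, "", "  ")
	fmt.Println(string(b))
}
```
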
diff --git a/vendor/github.com/hashicorp/consul/api/api.go b/vendor/github.com/hashicorp/consul/api/api.go
index 4b17ff6cda222..433d0f4826182 100644
--- a/vendor/github.com/hashicorp/consul/api/api.go
+++ b/vendor/github.com/hashicorp/consul/api/api.go
@@ -89,7 +89,7 @@ type QueryOptions struct {
RequireConsistent bool
// UseCache requests that the agent cache results locally. See
- // https://www.consul.io/api/index.html#agent-caching for more details on the
+ // https://www.consul.io/api/features/caching.html for more details on the
// semantics.
UseCache bool
@@ -99,14 +99,14 @@ type QueryOptions struct {
// returned. Clients that wish to allow for stale results on error can set
// StaleIfError to a longer duration to change this behavior. It is ignored
// if the endpoint supports background refresh caching. See
- // https://www.consul.io/api/index.html#agent-caching for more details.
+ // https://www.consul.io/api/features/caching.html for more details.
MaxAge time.Duration
// StaleIfError specifies how stale the client will accept a cached response
// if the servers are unavailable to fetch a fresh one. Only makes sense when
// UseCache is true and MaxAge is set to a lower, non-zero value. It is
// ignored if the endpoint supports background refresh caching. See
- // https://www.consul.io/api/index.html#agent-caching for more details.
+ // https://www.consul.io/api/features/caching.html for more details.
StaleIfError time.Duration
// WaitIndex is used to enable a blocking query. Waits
@@ -143,6 +143,10 @@ type QueryOptions struct {
// a value from 0 to 5 (inclusive).
RelayFactor uint8
+ // LocalOnly is used in keyring list operation to force the keyring
+ // query to only hit local servers (no WAN traffic).
+ LocalOnly bool
+
// Connect filters prepared query execution to only include Connect-capable
// services. This currently affects prepared query execution.
Connect bool
@@ -655,6 +659,9 @@ func (r *request) setQueryOptions(q *QueryOptions) {
if q.RelayFactor != 0 {
r.params.Set("relay-factor", strconv.Itoa(int(q.RelayFactor)))
}
+ if q.LocalOnly {
+ r.params.Set("local-only", fmt.Sprintf("%t", q.LocalOnly))
+ }
if q.Connect {
r.params.Set("connect", "true")
}
@@ -672,6 +679,7 @@ func (r *request) setQueryOptions(q *QueryOptions) {
r.header.Set("Cache-Control", strings.Join(cc, ", "))
}
}
+
r.ctx = q.ctx
}
diff --git a/vendor/github.com/hashicorp/consul/api/catalog.go b/vendor/github.com/hashicorp/consul/api/catalog.go
index c175c3fff53b3..3fb055342c066 100644
--- a/vendor/github.com/hashicorp/consul/api/catalog.go
+++ b/vendor/github.com/hashicorp/consul/api/catalog.go
@@ -1,5 +1,10 @@
package api
+import (
+ "net"
+ "strconv"
+)
+
type Weights struct {
Passing int
Warning int
@@ -16,6 +21,11 @@ type Node struct {
ModifyIndex uint64
}
+type ServiceAddress struct {
+ Address string
+ Port int
+}
+
type CatalogService struct {
ID string
Node string
@@ -26,17 +36,16 @@ type CatalogService struct {
ServiceID string
ServiceName string
ServiceAddress string
+ ServiceTaggedAddresses map[string]ServiceAddress
ServiceTags []string
ServiceMeta map[string]string
ServicePort int
ServiceWeights Weights
ServiceEnableTagOverride bool
- // DEPRECATED (ProxyDestination) - remove the next comment!
- // We forgot to ever add ServiceProxyDestination here so no need to deprecate!
- ServiceProxy *AgentServiceConnectProxyConfig
- CreateIndex uint64
- Checks HealthChecks
- ModifyIndex uint64
+ ServiceProxy *AgentServiceConnectProxyConfig
+ CreateIndex uint64
+ Checks HealthChecks
+ ModifyIndex uint64
}
type CatalogNode struct {
@@ -242,3 +251,12 @@ func (c *Catalog) Node(node string, q *QueryOptions) (*CatalogNode, *QueryMeta,
}
return out, qm, nil
}
+
+func ParseServiceAddr(addrPort string) (ServiceAddress, error) {
+ port := 0
+ host, portStr, err := net.SplitHostPort(addrPort)
+ if err == nil {
+ port, err = strconv.Atoi(portStr)
+ }
+ return ServiceAddress{Address: host, Port: port}, err
+}
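
ParseServiceAddr leans on net.SplitHostPort, so a bare host without a port yields a zero-value address plus the split error. A self-contained copy of the helper to show that behaviour (duplicated here only so the snippet compiles without the consul/api dependency):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
)

// Mirrors the new api.ParseServiceAddr helper above.
type ServiceAddress struct {
	Address string
	Port    int
}

func ParseServiceAddr(addrPort string) (ServiceAddress, error) {
	port := 0
	host, portStr, err := net.SplitHostPort(addrPort)
	if err == nil {
		port, err = strconv.Atoi(portStr)
	}
	return ServiceAddress{Address: host, Port: port}, err
}

func main() {
	fmt.Println(ParseServiceAddr("10.0.0.1:8500")) // {10.0.0.1 8500} <nil>
	fmt.Println(ParseServiceAddr("10.0.0.1"))      // zero value plus an error
}
```
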
diff --git a/vendor/github.com/hashicorp/consul/api/config_entry.go b/vendor/github.com/hashicorp/consul/api/config_entry.go
index 0c18963fd60f0..5c05311be4902 100644
--- a/vendor/github.com/hashicorp/consul/api/config_entry.go
+++ b/vendor/github.com/hashicorp/consul/api/config_entry.go
@@ -12,8 +12,12 @@ import (
)
const (
- ServiceDefaults string = "service-defaults"
- ProxyDefaults string = "proxy-defaults"
+ ServiceDefaults string = "service-defaults"
+ ProxyDefaults string = "proxy-defaults"
+ ServiceRouter string = "service-router"
+ ServiceSplitter string = "service-splitter"
+ ServiceResolver string = "service-resolver"
+
ProxyConfigGlobal string = "global"
)
@@ -24,10 +28,70 @@ type ConfigEntry interface {
GetModifyIndex() uint64
}
+type MeshGatewayMode string
+
+const (
+ // MeshGatewayModeDefault represents no specific mode and should
+ // be used to indicate that a different layer of the configuration
+ // chain should take precedence
+ MeshGatewayModeDefault MeshGatewayMode = ""
+
+ // MeshGatewayModeNone represents that the Upstream Connect connections
+ // should be direct and not flow through a mesh gateway.
+ MeshGatewayModeNone MeshGatewayMode = "none"
+
+	// MeshGatewayModeLocal represents that the Upstream Connect connections
+	// should be made to a mesh gateway in the local datacenter.
+ MeshGatewayModeLocal MeshGatewayMode = "local"
+
+ // MeshGatewayModeRemote represents that the Upstream Connect connections
+ // should be made to a mesh gateway in a remote datacenter.
+ MeshGatewayModeRemote MeshGatewayMode = "remote"
+)
+
+// MeshGatewayConfig controls how Mesh Gateways are used for upstream Connect
+// services
+type MeshGatewayConfig struct {
+ // Mode is the mode that should be used for the upstream connection.
+ Mode MeshGatewayMode `json:",omitempty"`
+}
+
+// ExposeConfig describes HTTP paths to expose through Envoy outside of Connect.
+// Users can expose individual paths and/or all HTTP/GRPC paths for checks.
+type ExposeConfig struct {
+ // Checks defines whether paths associated with Consul checks will be exposed.
+ // This flag triggers exposing all HTTP and GRPC check paths registered for the service.
+ Checks bool `json:",omitempty"`
+
+ // Paths is the list of paths exposed through the proxy.
+ Paths []ExposePath `json:",omitempty"`
+}
+
+type ExposePath struct {
+ // ListenerPort defines the port of the proxy's listener for exposed paths.
+ ListenerPort int `json:",omitempty"`
+
+	// Path is the path to expose through the proxy, e.g. "/metrics".
+ Path string `json:",omitempty"`
+
+ // LocalPathPort is the port that the service is listening on for the given path.
+ LocalPathPort int `json:",omitempty"`
+
+ // Protocol describes the upstream's service protocol.
+ // Valid values are "http" and "http2", defaults to "http"
+ Protocol string `json:",omitempty"`
+
+ // ParsedFromCheck is set if this path was parsed from a registered check
+ ParsedFromCheck bool
+}
+
type ServiceConfigEntry struct {
Kind string
Name string
- Protocol string
+ Protocol string `json:",omitempty"`
+ MeshGateway MeshGatewayConfig `json:",omitempty"`
+ Expose ExposeConfig `json:",omitempty"`
+ ExternalSNI string `json:",omitempty"`
CreateIndex uint64
ModifyIndex uint64
}
@@ -51,7 +115,9 @@ func (s *ServiceConfigEntry) GetModifyIndex() uint64 {
type ProxyConfigEntry struct {
Kind string
Name string
- Config map[string]interface{}
+ Config map[string]interface{} `json:",omitempty"`
+ MeshGateway MeshGatewayConfig `json:",omitempty"`
+ Expose ExposeConfig `json:",omitempty"`
CreateIndex uint64
ModifyIndex uint64
}
@@ -80,14 +146,35 @@ type rawEntryListResponse struct {
func makeConfigEntry(kind, name string) (ConfigEntry, error) {
switch kind {
case ServiceDefaults:
- return &ServiceConfigEntry{Name: name}, nil
+ return &ServiceConfigEntry{Kind: kind, Name: name}, nil
case ProxyDefaults:
- return &ProxyConfigEntry{Name: name}, nil
+ return &ProxyConfigEntry{Kind: kind, Name: name}, nil
+ case ServiceRouter:
+ return &ServiceRouterConfigEntry{Kind: kind, Name: name}, nil
+ case ServiceSplitter:
+ return &ServiceSplitterConfigEntry{Kind: kind, Name: name}, nil
+ case ServiceResolver:
+ return &ServiceResolverConfigEntry{Kind: kind, Name: name}, nil
default:
return nil, fmt.Errorf("invalid config entry kind: %s", kind)
}
}
+func MakeConfigEntry(kind, name string) (ConfigEntry, error) {
+ return makeConfigEntry(kind, name)
+}
+
+// DecodeConfigEntry will decode the result of using json.Unmarshal of a config
+// entry into a map[string]interface{}.
+//
+// Important caveats:
+//
+// - This will NOT work if the map[string]interface{} was produced using HCL
+// decoding as that requires more extensive parsing to work around the issues
+// with map[string][]interface{} that arise.
+//
+// - This will only decode fields using their camel case json field
+// representations.
func DecodeConfigEntry(raw map[string]interface{}) (ConfigEntry, error) {
var entry ConfigEntry
@@ -132,7 +219,19 @@ func DecodeConfigEntryFromJSON(data []byte) (ConfigEntry, error) {
return DecodeConfigEntry(raw)
}
-// Config can be used to query the Config endpoints
+func decodeConfigEntrySlice(raw []map[string]interface{}) ([]ConfigEntry, error) {
+ var entries []ConfigEntry
+ for _, rawEntry := range raw {
+ entry, err := DecodeConfigEntry(rawEntry)
+ if err != nil {
+ return nil, err
+ }
+ entries = append(entries, entry)
+ }
+ return entries, nil
+}
+
+// ConfigEntries can be used to query the Config endpoints
type ConfigEntries struct {
c *Client
}
@@ -195,13 +294,9 @@ func (conf *ConfigEntries) List(kind string, q *QueryOptions) ([]ConfigEntry, *Q
return nil, nil, err
}
- var entries []ConfigEntry
- for _, rawEntry := range raw {
- entry, err := DecodeConfigEntry(rawEntry)
- if err != nil {
- return nil, nil, err
- }
- entries = append(entries, entry)
+ entries, err := decodeConfigEntrySlice(raw)
+ if err != nil {
+ return nil, nil, err
}
return entries, qm, nil
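
The new file added below serializes its time.Duration fields as human-readable strings ("5s", "250ms") via a local alias type, which strips the custom marshaler from the method set and so avoids infinite recursion. A minimal sketch of that pattern on a made-up single-field struct:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type Resolver struct {
	ConnectTimeout time.Duration `json:",omitempty"`
}

func (r Resolver) MarshalJSON() ([]byte, error) {
	type Alias Resolver // Alias has no MarshalJSON, so no recursion
	out := struct {
		ConnectTimeout string `json:",omitempty"`
		Alias
	}{
		ConnectTimeout: r.ConnectTimeout.String(),
		Alias:          Alias(r),
	}
	if r.ConnectTimeout == 0 {
		out.ConnectTimeout = "" // let omitempty drop the zero value
	}
	return json.Marshal(out)
}

func (r *Resolver) UnmarshalJSON(data []byte) error {
	type Alias Resolver
	aux := struct {
		ConnectTimeout string
		*Alias
	}{Alias: (*Alias)(r)}
	if err := json.Unmarshal(data, &aux); err != nil {
		return err
	}
	if aux.ConnectTimeout == "" {
		return nil
	}
	d, err := time.ParseDuration(aux.ConnectTimeout)
	if err != nil {
		return err
	}
	r.ConnectTimeout = d
	return nil
}

func main() {
	b, _ := json.Marshal(Resolver{ConnectTimeout: 5 * time.Second})
	fmt.Println(string(b)) // {"ConnectTimeout":"5s"}

	var r Resolver
	_ = json.Unmarshal([]byte(`{"ConnectTimeout":"250ms"}`), &r)
	fmt.Println(r.ConnectTimeout) // 250ms
}
```
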
diff --git a/vendor/github.com/hashicorp/consul/api/config_entry_discoverychain.go b/vendor/github.com/hashicorp/consul/api/config_entry_discoverychain.go
new file mode 100644
index 0000000000000..77acfbddf1ea0
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/api/config_entry_discoverychain.go
@@ -0,0 +1,200 @@
+package api
+
+import (
+ "encoding/json"
+ "time"
+)
+
+type ServiceRouterConfigEntry struct {
+ Kind string
+ Name string
+
+ Routes []ServiceRoute `json:",omitempty"`
+
+ CreateIndex uint64
+ ModifyIndex uint64
+}
+
+func (e *ServiceRouterConfigEntry) GetKind() string { return e.Kind }
+func (e *ServiceRouterConfigEntry) GetName() string { return e.Name }
+func (e *ServiceRouterConfigEntry) GetCreateIndex() uint64 { return e.CreateIndex }
+func (e *ServiceRouterConfigEntry) GetModifyIndex() uint64 { return e.ModifyIndex }
+
+type ServiceRoute struct {
+ Match *ServiceRouteMatch `json:",omitempty"`
+ Destination *ServiceRouteDestination `json:",omitempty"`
+}
+
+type ServiceRouteMatch struct {
+ HTTP *ServiceRouteHTTPMatch `json:",omitempty"`
+}
+
+type ServiceRouteHTTPMatch struct {
+ PathExact string `json:",omitempty"`
+ PathPrefix string `json:",omitempty"`
+ PathRegex string `json:",omitempty"`
+
+ Header []ServiceRouteHTTPMatchHeader `json:",omitempty"`
+ QueryParam []ServiceRouteHTTPMatchQueryParam `json:",omitempty"`
+ Methods []string `json:",omitempty"`
+}
+
+type ServiceRouteHTTPMatchHeader struct {
+ Name string
+ Present bool `json:",omitempty"`
+ Exact string `json:",omitempty"`
+ Prefix string `json:",omitempty"`
+ Suffix string `json:",omitempty"`
+ Regex string `json:",omitempty"`
+ Invert bool `json:",omitempty"`
+}
+
+type ServiceRouteHTTPMatchQueryParam struct {
+ Name string
+ Present bool `json:",omitempty"`
+ Exact string `json:",omitempty"`
+ Regex string `json:",omitempty"`
+}
+
+type ServiceRouteDestination struct {
+ Service string `json:",omitempty"`
+ ServiceSubset string `json:",omitempty"`
+ Namespace string `json:",omitempty"`
+ PrefixRewrite string `json:",omitempty"`
+ RequestTimeout time.Duration `json:",omitempty"`
+ NumRetries uint32 `json:",omitempty"`
+ RetryOnConnectFailure bool `json:",omitempty"`
+ RetryOnStatusCodes []uint32 `json:",omitempty"`
+}
+
+func (e *ServiceRouteDestination) MarshalJSON() ([]byte, error) {
+ type Alias ServiceRouteDestination
+ exported := &struct {
+ RequestTimeout string `json:",omitempty"`
+ *Alias
+ }{
+ RequestTimeout: e.RequestTimeout.String(),
+ Alias: (*Alias)(e),
+ }
+ if e.RequestTimeout == 0 {
+ exported.RequestTimeout = ""
+ }
+
+ return json.Marshal(exported)
+}
+
+func (e *ServiceRouteDestination) UnmarshalJSON(data []byte) error {
+ type Alias ServiceRouteDestination
+ aux := &struct {
+ RequestTimeout string
+ *Alias
+ }{
+ Alias: (*Alias)(e),
+ }
+ if err := json.Unmarshal(data, &aux); err != nil {
+ return err
+ }
+ var err error
+ if aux.RequestTimeout != "" {
+ if e.RequestTimeout, err = time.ParseDuration(aux.RequestTimeout); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+type ServiceSplitterConfigEntry struct {
+ Kind string
+ Name string
+
+ Splits []ServiceSplit `json:",omitempty"`
+
+ CreateIndex uint64
+ ModifyIndex uint64
+}
+
+func (e *ServiceSplitterConfigEntry) GetKind() string { return e.Kind }
+func (e *ServiceSplitterConfigEntry) GetName() string { return e.Name }
+func (e *ServiceSplitterConfigEntry) GetCreateIndex() uint64 { return e.CreateIndex }
+func (e *ServiceSplitterConfigEntry) GetModifyIndex() uint64 { return e.ModifyIndex }
+
+type ServiceSplit struct {
+ Weight float32
+ Service string `json:",omitempty"`
+ ServiceSubset string `json:",omitempty"`
+ Namespace string `json:",omitempty"`
+}
+
+type ServiceResolverConfigEntry struct {
+ Kind string
+ Name string
+
+ DefaultSubset string `json:",omitempty"`
+ Subsets map[string]ServiceResolverSubset `json:",omitempty"`
+ Redirect *ServiceResolverRedirect `json:",omitempty"`
+ Failover map[string]ServiceResolverFailover `json:",omitempty"`
+ ConnectTimeout time.Duration `json:",omitempty"`
+
+ CreateIndex uint64
+ ModifyIndex uint64
+}
+
+func (e *ServiceResolverConfigEntry) MarshalJSON() ([]byte, error) {
+ type Alias ServiceResolverConfigEntry
+ exported := &struct {
+ ConnectTimeout string `json:",omitempty"`
+ *Alias
+ }{
+ ConnectTimeout: e.ConnectTimeout.String(),
+ Alias: (*Alias)(e),
+ }
+ if e.ConnectTimeout == 0 {
+ exported.ConnectTimeout = ""
+ }
+
+ return json.Marshal(exported)
+}
+
+func (e *ServiceResolverConfigEntry) UnmarshalJSON(data []byte) error {
+ type Alias ServiceResolverConfigEntry
+ aux := &struct {
+ ConnectTimeout string
+ *Alias
+ }{
+ Alias: (*Alias)(e),
+ }
+ if err := json.Unmarshal(data, &aux); err != nil {
+ return err
+ }
+ var err error
+ if aux.ConnectTimeout != "" {
+ if e.ConnectTimeout, err = time.ParseDuration(aux.ConnectTimeout); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (e *ServiceResolverConfigEntry) GetKind() string { return e.Kind }
+func (e *ServiceResolverConfigEntry) GetName() string { return e.Name }
+func (e *ServiceResolverConfigEntry) GetCreateIndex() uint64 { return e.CreateIndex }
+func (e *ServiceResolverConfigEntry) GetModifyIndex() uint64 { return e.ModifyIndex }
+
+type ServiceResolverSubset struct {
+ Filter string `json:",omitempty"`
+ OnlyPassing bool `json:",omitempty"`
+}
+
+type ServiceResolverRedirect struct {
+ Service string `json:",omitempty"`
+ ServiceSubset string `json:",omitempty"`
+ Namespace string `json:",omitempty"`
+ Datacenter string `json:",omitempty"`
+}
+
+type ServiceResolverFailover struct {
+ Service string `json:",omitempty"`
+ ServiceSubset string `json:",omitempty"`
+ Namespace string `json:",omitempty"`
+ Datacenters []string `json:",omitempty"`
+}
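
A quick way to sanity-check the custom `MarshalJSON`/`UnmarshalJSON` pair above is a round-trip of the duration field; the entry values here are arbitrary.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	in := api.ServiceResolverConfigEntry{
		Kind:           "service-resolver",
		Name:           "web",
		ConnectTimeout: 15 * time.Second,
	}

	// MarshalJSON renders ConnectTimeout as the string "15s".
	data, err := json.Marshal(&in)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(data))

	// UnmarshalJSON parses the string back into a time.Duration.
	var out api.ServiceResolverConfigEntry
	if err := json.Unmarshal(data, &out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.ConnectTimeout == in.ConnectTimeout) // true
}
```
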
diff --git a/vendor/github.com/hashicorp/consul/api/connect_intention.go b/vendor/github.com/hashicorp/consul/api/connect_intention.go
index a996c03e5e7e6..d25cb844fb8b6 100644
--- a/vendor/github.com/hashicorp/consul/api/connect_intention.go
+++ b/vendor/github.com/hashicorp/consul/api/connect_intention.go
@@ -54,6 +54,13 @@ type Intention struct {
// or modified.
CreatedAt, UpdatedAt time.Time
+ // Hash of the contents of the intention
+ //
+ // This is needed mainly for replication purposes. When replicating from
+ // one DC to another keeping the content Hash will allow us to detect
+ // content changes more efficiently than checking every single field
+ Hash []byte
+
CreateIndex uint64
ModifyIndex uint64
}
diff --git a/vendor/github.com/hashicorp/consul/api/coordinate.go b/vendor/github.com/hashicorp/consul/api/coordinate.go
index 53318f11dd5e0..776630f67d724 100644
--- a/vendor/github.com/hashicorp/consul/api/coordinate.go
+++ b/vendor/github.com/hashicorp/consul/api/coordinate.go
@@ -84,7 +84,7 @@ func (c *Coordinate) Update(coord *CoordinateEntry, q *WriteOptions) (*WriteMeta
return wm, nil
}
-// Node is used to return the coordinates of a single in the LAN pool.
+// Node is used to return the coordinates of a single node in the LAN pool.
func (c *Coordinate) Node(node string, q *QueryOptions) ([]*CoordinateEntry, *QueryMeta, error) {
r := c.c.newRequest("GET", "/v1/coordinate/node/"+node)
r.setQueryOptions(q)
diff --git a/vendor/github.com/hashicorp/consul/api/discovery_chain.go b/vendor/github.com/hashicorp/consul/api/discovery_chain.go
new file mode 100644
index 0000000000000..407a3b08e37d4
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/api/discovery_chain.go
@@ -0,0 +1,230 @@
+package api
+
+import (
+ "encoding/json"
+ "fmt"
+ "time"
+)
+
+// DiscoveryChain can be used to query the discovery-chain endpoints
+type DiscoveryChain struct {
+ c *Client
+}
+
+// DiscoveryChain returns a handle to the discovery-chain endpoints
+func (c *Client) DiscoveryChain() *DiscoveryChain {
+ return &DiscoveryChain{c}
+}
+
+func (d *DiscoveryChain) Get(name string, opts *DiscoveryChainOptions, q *QueryOptions) (*DiscoveryChainResponse, *QueryMeta, error) {
+ if name == "" {
+ return nil, nil, fmt.Errorf("Name parameter must not be empty")
+ }
+
+ method := "GET"
+ if opts != nil && opts.requiresPOST() {
+ method = "POST"
+ }
+
+ r := d.c.newRequest(method, fmt.Sprintf("/v1/discovery-chain/%s", name))
+ r.setQueryOptions(q)
+
+ if opts != nil {
+ if opts.EvaluateInDatacenter != "" {
+ r.params.Set("compile-dc", opts.EvaluateInDatacenter)
+ }
+ // TODO(namespaces): handle possible EvaluateInNamespace here
+ }
+
+ if method == "POST" {
+ r.obj = opts
+ }
+
+ rtt, resp, err := requireOK(d.c.doRequest(r))
+ if err != nil {
+ return nil, nil, err
+ }
+ defer resp.Body.Close()
+
+ qm := &QueryMeta{}
+ parseQueryMeta(resp, qm)
+ qm.RequestTime = rtt
+
+ var out DiscoveryChainResponse
+
+ if err := decodeBody(resp, &out); err != nil {
+ return nil, nil, err
+ }
+
+ return &out, qm, nil
+}
+
+type DiscoveryChainOptions struct {
+ EvaluateInDatacenter string `json:"-"`
+
+ // OverrideMeshGateway allows for the mesh gateway setting to be overridden
+ // for any resolver in the compiled chain.
+ OverrideMeshGateway MeshGatewayConfig `json:",omitempty"`
+
+ // OverrideProtocol allows for the final protocol for the chain to be
+ // altered.
+ //
+ // - If the chain ordinarily would be TCP and an L7 protocol is passed here
+ // the chain will not include Routers or Splitters.
+ //
+ // - If the chain ordinarily would be L7 and TCP is passed here the chain
+ // will not include Routers or Splitters.
+ OverrideProtocol string `json:",omitempty"`
+
+ // OverrideConnectTimeout allows for the ConnectTimeout setting to be
+ // overridden for any resolver in the compiled chain.
+ OverrideConnectTimeout time.Duration `json:",omitempty"`
+}
+
+func (o *DiscoveryChainOptions) requiresPOST() bool {
+ if o == nil {
+ return false
+ }
+ return o.OverrideMeshGateway.Mode != "" ||
+ o.OverrideProtocol != "" ||
+ o.OverrideConnectTimeout != 0
+}
+
+type DiscoveryChainResponse struct {
+ Chain *CompiledDiscoveryChain
+}
+
+type CompiledDiscoveryChain struct {
+ ServiceName string
+ Namespace string
+ Datacenter string
+
+ // CustomizationHash is a unique hash of any data that affects the
+ // compilation of the discovery chain other than config entries or the
+ // name/namespace/datacenter evaluation criteria.
+ //
+ // If set, this value should be used to prefix/suffix any generated load
+ // balancer data plane objects to avoid sharing customized and
+ // non-customized versions.
+ CustomizationHash string
+
+ // Protocol is the overall protocol shared by everything in the chain.
+ Protocol string
+
+ // StartNode is the first key into the Nodes map that should be followed
+ // when walking the discovery chain.
+ StartNode string
+
+ // Nodes contains all nodes available for traversal in the chain keyed by a
+ // unique name. You can walk this by starting with StartNode.
+ //
+ // NOTE: The names should be treated as opaque values and are only
+ // guaranteed to be consistent within a single compilation.
+ Nodes map[string]*DiscoveryGraphNode
+
+ // Targets is a list of all targets used in this chain.
+ //
+ // NOTE: The names should be treated as opaque values and are only
+ // guaranteed to be consistent within a single compilation.
+ Targets map[string]*DiscoveryTarget
+}
+
+const (
+ DiscoveryGraphNodeTypeRouter = "router"
+ DiscoveryGraphNodeTypeSplitter = "splitter"
+ DiscoveryGraphNodeTypeResolver = "resolver"
+)
+
+// DiscoveryGraphNode is a single node in the compiled discovery chain.
+type DiscoveryGraphNode struct {
+ Type string
+ Name string // this is NOT necessarily a service
+
+ // fields for Type==router
+ Routes []*DiscoveryRoute
+
+ // fields for Type==splitter
+ Splits []*DiscoverySplit
+
+ // fields for Type==resolver
+ Resolver *DiscoveryResolver
+}
+
+// compiled form of ServiceRoute
+type DiscoveryRoute struct {
+ Definition *ServiceRoute
+ NextNode string
+}
+
+// compiled form of ServiceSplit
+type DiscoverySplit struct {
+ Weight float32
+ NextNode string
+}
+
+// compiled form of ServiceResolverConfigEntry
+type DiscoveryResolver struct {
+ Default bool
+ ConnectTimeout time.Duration
+ Target string
+ Failover *DiscoveryFailover
+}
+
+func (r *DiscoveryResolver) MarshalJSON() ([]byte, error) {
+ type Alias DiscoveryResolver
+ exported := &struct {
+ ConnectTimeout string `json:",omitempty"`
+ *Alias
+ }{
+ ConnectTimeout: r.ConnectTimeout.String(),
+ Alias: (*Alias)(r),
+ }
+ if r.ConnectTimeout == 0 {
+ exported.ConnectTimeout = ""
+ }
+
+ return json.Marshal(exported)
+}
+
+func (r *DiscoveryResolver) UnmarshalJSON(data []byte) error {
+ type Alias DiscoveryResolver
+ aux := &struct {
+ ConnectTimeout string
+ *Alias
+ }{
+ Alias: (*Alias)(r),
+ }
+ if err := json.Unmarshal(data, &aux); err != nil {
+ return err
+ }
+ var err error
+ if aux.ConnectTimeout != "" {
+ if r.ConnectTimeout, err = time.ParseDuration(aux.ConnectTimeout); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+// compiled form of ServiceResolverFailover
+type DiscoveryFailover struct {
+ Targets []string
+}
+
+// DiscoveryTarget represents all of the inputs necessary to use a resolver
+// config entry to execute a catalog query to generate a list of service
+// instances during discovery.
+type DiscoveryTarget struct {
+ ID string
+
+ Service string
+ ServiceSubset string
+ Namespace string
+ Datacenter string
+
+ MeshGateway MeshGatewayConfig
+ Subset ServiceResolverSubset
+ External bool
+ SNI string
+ Name string
+}
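
Tying the new endpoint together, a hedged usage sketch: per `requiresPOST` above, supplying any override switches the request from GET to POST. The agent address (from `DefaultConfig`) and the service name are assumptions.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// A non-zero override makes requiresPOST() return true, so this
	// compiles the chain via POST /v1/discovery-chain/web.
	opts := &api.DiscoveryChainOptions{
		OverrideConnectTimeout: 5 * time.Second,
	}
	resp, _, err := client.DiscoveryChain().Get("web", opts, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Chain.Protocol, resp.Chain.StartNode)
}
```
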
diff --git a/vendor/github.com/hashicorp/consul/api/go.mod b/vendor/github.com/hashicorp/consul/api/go.mod
index e198218915847..d02afa95a15bd 100644
--- a/vendor/github.com/hashicorp/consul/api/go.mod
+++ b/vendor/github.com/hashicorp/consul/api/go.mod
@@ -5,7 +5,7 @@ go 1.12
replace github.com/hashicorp/consul/sdk => ../sdk
require (
- github.com/hashicorp/consul/sdk v0.1.1
+ github.com/hashicorp/consul/sdk v0.3.0
github.com/hashicorp/go-cleanhttp v0.5.1
github.com/hashicorp/go-rootcerts v1.0.0
github.com/hashicorp/go-uuid v1.0.1
diff --git a/vendor/github.com/hashicorp/consul/api/go.sum b/vendor/github.com/hashicorp/consul/api/go.sum
index 372ebc1416bd6..72a87ea68dbc3 100644
--- a/vendor/github.com/hashicorp/consul/api/go.sum
+++ b/vendor/github.com/hashicorp/consul/api/go.sum
@@ -9,6 +9,8 @@ github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSs
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c h1:964Od4U6p2jUkFxvCydnIczKteheJEzHRToSGK3Bnlw=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
+github.com/hashicorp/consul/sdk v0.3.0 h1:UOxjlb4xVNF93jak1mzzoBatyFju9nrkxpVwIp/QqxQ=
+github.com/hashicorp/consul/sdk v0.3.0/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1 h1:dH3aiDG9Jvb5r5+bYHsikaOUIpcM0xvgMXVoDkXMzJM=
diff --git a/vendor/github.com/hashicorp/consul/api/health.go b/vendor/github.com/hashicorp/consul/api/health.go
index 9faf6b665aa89..ce8e69750559c 100644
--- a/vendor/github.com/hashicorp/consul/api/health.go
+++ b/vendor/github.com/hashicorp/consul/api/health.go
@@ -36,6 +36,7 @@ type HealthCheck struct {
ServiceID string
ServiceName string
ServiceTags []string
+ Type string
Definition HealthCheckDefinition
@@ -94,40 +95,63 @@ func (d *HealthCheckDefinition) MarshalJSON() ([]byte, error) {
return json.Marshal(out)
}
-func (d *HealthCheckDefinition) UnmarshalJSON(data []byte) error {
+func (t *HealthCheckDefinition) UnmarshalJSON(data []byte) (err error) {
type Alias HealthCheckDefinition
aux := &struct {
- Interval string
- Timeout string
- DeregisterCriticalServiceAfter string
+ IntervalDuration interface{}
+ TimeoutDuration interface{}
+ DeregisterCriticalServiceAfterDuration interface{}
*Alias
}{
- Alias: (*Alias)(d),
+ Alias: (*Alias)(t),
}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
// Parse the values into both the time.Duration and old ReadableDuration fields.
- var err error
- if aux.Interval != "" {
- if d.IntervalDuration, err = time.ParseDuration(aux.Interval); err != nil {
- return err
+
+ if aux.IntervalDuration == nil {
+ t.IntervalDuration = time.Duration(t.Interval)
+ } else {
+ switch v := aux.IntervalDuration.(type) {
+ case string:
+ if t.IntervalDuration, err = time.ParseDuration(v); err != nil {
+ return err
+ }
+ case float64:
+ t.IntervalDuration = time.Duration(v)
}
- d.Interval = ReadableDuration(d.IntervalDuration)
+ t.Interval = ReadableDuration(t.IntervalDuration)
}
- if aux.Timeout != "" {
- if d.TimeoutDuration, err = time.ParseDuration(aux.Timeout); err != nil {
- return err
+
+ if aux.TimeoutDuration == nil {
+ t.TimeoutDuration = time.Duration(t.Timeout)
+ } else {
+ switch v := aux.TimeoutDuration.(type) {
+ case string:
+ if t.TimeoutDuration, err = time.ParseDuration(v); err != nil {
+ return err
+ }
+ case float64:
+ t.TimeoutDuration = time.Duration(v)
}
- d.Timeout = ReadableDuration(d.TimeoutDuration)
+ t.Timeout = ReadableDuration(t.TimeoutDuration)
}
- if aux.DeregisterCriticalServiceAfter != "" {
- if d.DeregisterCriticalServiceAfterDuration, err = time.ParseDuration(aux.DeregisterCriticalServiceAfter); err != nil {
- return err
+ if aux.DeregisterCriticalServiceAfterDuration == nil {
+ t.DeregisterCriticalServiceAfterDuration = time.Duration(t.DeregisterCriticalServiceAfter)
+ } else {
+ switch v := aux.DeregisterCriticalServiceAfterDuration.(type) {
+ case string:
+ if t.DeregisterCriticalServiceAfterDuration, err = time.ParseDuration(v); err != nil {
+ return err
+ }
+ case float64:
+ t.DeregisterCriticalServiceAfterDuration = time.Duration(v)
}
- d.DeregisterCriticalServiceAfter = ReadableDuration(d.DeregisterCriticalServiceAfterDuration)
+ t.DeregisterCriticalServiceAfter = ReadableDuration(t.DeregisterCriticalServiceAfterDuration)
}
+
return nil
}
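
The reworked `UnmarshalJSON` above accepts each duration field either as a Go duration string or as a raw nanosecond count. A minimal sketch with fabricated payloads:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Both encodings should land on the same time.Duration value.
	payloads := []string{
		`{"IntervalDuration": "10s"}`,       // duration string
		`{"IntervalDuration": 10000000000}`, // raw nanoseconds as a number
	}
	for _, p := range payloads {
		var def api.HealthCheckDefinition
		if err := json.Unmarshal([]byte(p), &def); err != nil {
			log.Fatal(err)
		}
		fmt.Println(def.IntervalDuration) // 10s in both cases
	}
}
```
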
diff --git a/vendor/github.com/hashicorp/consul/api/operator_autopilot.go b/vendor/github.com/hashicorp/consul/api/operator_autopilot.go
index b179406dc12ce..0e4ef24649fcc 100644
--- a/vendor/github.com/hashicorp/consul/api/operator_autopilot.go
+++ b/vendor/github.com/hashicorp/consul/api/operator_autopilot.go
@@ -25,6 +25,10 @@ type AutopilotConfiguration struct {
// be behind before being considered unhealthy.
MaxTrailingLogs uint64
+ // MinQuorum sets the minimum number of servers allowed in a cluster before
+ // autopilot can prune dead servers.
+ MinQuorum uint
+
// ServerStabilizationTime is the minimum amount of time a server must be
// in a stable, healthy state before it can be added to the cluster. Only
// applicable with Raft protocol version 3 or higher.
@@ -130,19 +134,28 @@ func (d *ReadableDuration) MarshalJSON() ([]byte, error) {
return []byte(fmt.Sprintf(`"%s"`, d.Duration().String())), nil
}
-func (d *ReadableDuration) UnmarshalJSON(raw []byte) error {
+func (d *ReadableDuration) UnmarshalJSON(raw []byte) (err error) {
if d == nil {
return fmt.Errorf("cannot unmarshal to nil pointer")
}
+ var dur time.Duration
str := string(raw)
- if len(str) < 2 || str[0] != '"' || str[len(str)-1] != '"' {
- return fmt.Errorf("must be enclosed with quotes: %s", str)
- }
- dur, err := time.ParseDuration(str[1 : len(str)-1])
- if err != nil {
- return err
+ if len(str) >= 2 && str[0] == '"' && str[len(str)-1] == '"' {
+ // quoted string
+ dur, err = time.ParseDuration(str[1 : len(str)-1])
+ if err != nil {
+ return err
+ }
+ } else {
+ // no quotes, not a string
+ v, err := strconv.ParseFloat(str, 64)
+ if err != nil {
+ return err
+ }
+ dur = time.Duration(v)
}
+
*d = ReadableDuration(dur)
return nil
}
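
`ReadableDuration` gets the same loosening: its parser now takes either a quoted duration string or a bare number of nanoseconds. A quick sketch:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	var a, b api.ReadableDuration

	// Quoted duration string, as MarshalJSON produces.
	if err := json.Unmarshal([]byte(`"250ms"`), &a); err != nil {
		log.Fatal(err)
	}
	// Bare number, interpreted as nanoseconds.
	if err := json.Unmarshal([]byte(`250000000`), &b); err != nil {
		log.Fatal(err)
	}
	fmt.Println(a.Duration() == b.Duration()) // true
}
```
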
diff --git a/vendor/github.com/hashicorp/consul/api/operator_license.go b/vendor/github.com/hashicorp/consul/api/operator_license.go
new file mode 100644
index 0000000000000..25aa702e8ade1
--- /dev/null
+++ b/vendor/github.com/hashicorp/consul/api/operator_license.go
@@ -0,0 +1,111 @@
+package api
+
+import (
+ "io/ioutil"
+ "strings"
+ "time"
+)
+
+type License struct {
+ // The unique identifier of the license
+ LicenseID string `json:"license_id"`
+
+ // The customer ID associated with the license
+ CustomerID string `json:"customer_id"`
+
+ // If set, an identifier that should be used to lock the license to a
+ // particular site, cluster, etc.
+ InstallationID string `json:"installation_id"`
+
+ // The time at which the license was issued
+ IssueTime time.Time `json:"issue_time"`
+
+ // The time at which the license starts being valid
+ StartTime time.Time `json:"start_time"`
+
+ // The time after which the license expires
+ ExpirationTime time.Time `json:"expiration_time"`
+
+ // The time at which the license ceases to function and can
+ // no longer be used in any capacity
+ TerminationTime time.Time `json:"termination_time"`
+
+ // The product the license is valid for
+ Product string `json:"product"`
+
+ // License Specific Flags
+ Flags map[string]interface{} `json:"flags"`
+
+ // List of features enabled by the license
+ Features []string `json:"features"`
+}
+
+type LicenseReply struct {
+ Valid bool
+ License *License
+ Warnings []string
+}
+
+func (op *Operator) LicenseGet(q *QueryOptions) (*LicenseReply, error) {
+ var reply LicenseReply
+ if _, err := op.c.query("/v1/operator/license", &reply, q); err != nil {
+ return nil, err
+ } else {
+ return &reply, nil
+ }
+}
+
+func (op *Operator) LicenseGetSigned(q *QueryOptions) (string, error) {
+ r := op.c.newRequest("GET", "/v1/operator/license")
+ r.params.Set("signed", "1")
+ r.setQueryOptions(q)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return "", err
+ }
+ defer resp.Body.Close()
+
+ data, err := ioutil.ReadAll(resp.Body)
+ if err != nil {
+ return "", err
+ }
+
+ return string(data), nil
+}
+
+// LicenseReset will reset the license to the builtin one if it is still valid.
+// If the builtin license is invalid, the current license stays active.
+func (op *Operator) LicenseReset(opts *WriteOptions) (*LicenseReply, error) {
+ var reply LicenseReply
+ r := op.c.newRequest("DELETE", "/v1/operator/license")
+ r.setWriteOptions(opts)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ if err := decodeBody(resp, &reply); err != nil {
+ return nil, err
+ }
+
+ return &reply, nil
+}
+
+func (op *Operator) LicensePut(license string, opts *WriteOptions) (*LicenseReply, error) {
+ var reply LicenseReply
+ r := op.c.newRequest("PUT", "/v1/operator/license")
+ r.setWriteOptions(opts)
+ r.body = strings.NewReader(license)
+ _, resp, err := requireOK(op.c.doRequest(r))
+ if err != nil {
+ return nil, err
+ }
+ defer resp.Body.Close()
+
+ if err := decodeBody(resp, &reply); err != nil {
+ return nil, err
+ }
+
+ return &reply, nil
+}
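
A hedged usage sketch for the new license endpoints; note they only return meaningful data against a Consul Enterprise server, and the agent address from `DefaultConfig` is an assumption.

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	reply, err := client.Operator().LicenseGet(nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid:", reply.Valid)
	if reply.License != nil {
		fmt.Println("product:", reply.License.Product)
		fmt.Println("expires:", reply.License.ExpirationTime)
	}
	for _, w := range reply.Warnings {
		fmt.Println("warning:", w)
	}
}
```
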
diff --git a/vendor/github.com/hashicorp/memberlist/net.go b/vendor/github.com/hashicorp/memberlist/net.go
index f6a0d45fedb9b..068d8e1ade59e 100644
--- a/vendor/github.com/hashicorp/memberlist/net.go
+++ b/vendor/github.com/hashicorp/memberlist/net.go
@@ -522,7 +522,7 @@ func (m *Memberlist) handleIndirectPing(buf []byte, from net.Addr) {
// Send the ping.
addr := joinHostPort(net.IP(ind.Target).String(), ind.Port)
if err := m.encodeAndSendMsg(addr, pingMsg, &ping); err != nil {
- m.logger.Printf("[ERR] memberlist: Failed to send ping: %s %s", err, LogAddress(from))
+ m.logger.Printf("[ERR] memberlist: Failed to send indirect ping: %s %s", err, LogAddress(from))
}
// Setup a timer to fire off a nack if no ack is seen in time.
diff --git a/vendor/github.com/hashicorp/memberlist/state.go b/vendor/github.com/hashicorp/memberlist/state.go
index 1af62943e8c94..f5ed65a7821ef 100644
--- a/vendor/github.com/hashicorp/memberlist/state.go
+++ b/vendor/github.com/hashicorp/memberlist/state.go
@@ -6,6 +6,7 @@ import (
"math"
"math/rand"
"net"
+ "strings"
"sync/atomic"
"time"
@@ -242,6 +243,21 @@ func (m *Memberlist) probeNodeByAddr(addr string) {
m.probeNode(n)
}
+// failedRemote checks the error and decides if it indicates a failure on the
+// other end.
+func failedRemote(err error) bool {
+ switch t := err.(type) {
+ case *net.OpError:
+ if strings.HasPrefix(t.Net, "tcp") {
+ switch t.Op {
+ case "dial", "read", "write":
+ return true
+ }
+ }
+ }
+ return false
+}
+
// probeNode handles a single round of failure checking on a node.
func (m *Memberlist) probeNode(node *nodeState) {
defer metrics.MeasureSince([]string{"memberlist", "probeNode"}, time.Now())
@@ -272,10 +288,20 @@ func (m *Memberlist) probeNode(node *nodeState) {
// soon as possible.
deadline := sent.Add(probeInterval)
addr := node.Address()
+
+ // Arrange for our self-awareness to get updated.
+ var awarenessDelta int
+ defer func() {
+ m.awareness.ApplyDelta(awarenessDelta)
+ }()
if node.State == stateAlive {
if err := m.encodeAndSendMsg(addr, pingMsg, &ping); err != nil {
m.logger.Printf("[ERR] memberlist: Failed to send ping: %s", err)
- return
+ if failedRemote(err) {
+ goto HANDLE_REMOTE_FAILURE
+ } else {
+ return
+ }
}
} else {
var msgs [][]byte
@@ -296,7 +322,11 @@ func (m *Memberlist) probeNode(node *nodeState) {
compound := makeCompoundMessage(msgs)
if err := m.rawSendMsgPacket(addr, &node.Node, compound.Bytes()); err != nil {
m.logger.Printf("[ERR] memberlist: Failed to send compound ping and suspect message to %s: %s", addr, err)
- return
+ if failedRemote(err) {
+ goto HANDLE_REMOTE_FAILURE
+ } else {
+ return
+ }
}
}
@@ -305,10 +335,7 @@ func (m *Memberlist) probeNode(node *nodeState) {
// which will improve our health until we get to the failure scenarios
// at the end of this function, which will alter this delta variable
// accordingly.
- awarenessDelta := -1
- defer func() {
- m.awareness.ApplyDelta(awarenessDelta)
- }()
+ awarenessDelta = -1
// Wait for response or round-trip-time.
select {
@@ -333,9 +360,10 @@ func (m *Memberlist) probeNode(node *nodeState) {
// probe interval it will give the TCP fallback more time, which
// is more active in dealing with lost packets, and it gives more
// time to wait for indirect acks/nacks.
- m.logger.Printf("[DEBUG] memberlist: Failed ping: %v (timeout reached)", node.Name)
+ m.logger.Printf("[DEBUG] memberlist: Failed ping: %s (timeout reached)", node.Name)
}
+HANDLE_REMOTE_FAILURE:
// Get some random live nodes.
m.nodeLock.RLock()
kNodes := kRandomNodes(m.config.IndirectChecks, m.nodes, func(n *nodeState) bool {
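
For intuition on the `failedRemote` helper introduced above: only TCP `*net.OpError`s from dial, read, or write are treated as evidence of a remote failure, which is what routes probing into the new `HANDLE_REMOTE_FAILURE` path. A standalone sketch of the same classification:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

// Same shape as memberlist's failedRemote: a TCP dial/read/write
// *net.OpError is taken as evidence that the other end failed.
func failedRemote(err error) bool {
	if t, ok := err.(*net.OpError); ok && strings.HasPrefix(t.Net, "tcp") {
		switch t.Op {
		case "dial", "read", "write":
			return true
		}
	}
	return false
}

func main() {
	// Dialing a port with no listener typically yields a *net.OpError
	// with Op "dial", so this usually prints true.
	_, err := net.DialTimeout("tcp", "127.0.0.1:1", 200*time.Millisecond)
	fmt.Println(failedRemote(err))
}
```
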
diff --git a/vendor/github.com/jpillora/backoff/backoff.go b/vendor/github.com/jpillora/backoff/backoff.go
index b4941b6e24c37..d113e68906bbc 100644
--- a/vendor/github.com/jpillora/backoff/backoff.go
+++ b/vendor/github.com/jpillora/backoff/backoff.go
@@ -4,6 +4,7 @@ package backoff
import (
"math"
"math/rand"
+ "sync/atomic"
"time"
)
@@ -14,19 +15,19 @@ import (
// Backoff is not generally concurrent-safe, but the ForAttempt method can
// be used concurrently.
type Backoff struct {
- //Factor is the multiplying factor for each increment step
- attempt, Factor float64
- //Jitter eases contention by randomizing backoff steps
+ attempt uint64
+ // Factor is the multiplying factor for each increment step
+ Factor float64
+ // Jitter eases contention by randomizing backoff steps
Jitter bool
- //Min and Max are the minimum and maximum values of the counter
+ // Min and Max are the minimum and maximum values of the counter
Min, Max time.Duration
}
// Duration returns the duration for the current attempt before incrementing
// the attempt counter. See ForAttempt.
func (b *Backoff) Duration() time.Duration {
- d := b.ForAttempt(b.attempt)
- b.attempt++
+ d := b.ForAttempt(float64(atomic.AddUint64(&b.attempt, 1) - 1))
return d
}
@@ -80,12 +81,12 @@ func (b *Backoff) ForAttempt(attempt float64) time.Duration {
// Reset restarts the current attempt counter at zero.
func (b *Backoff) Reset() {
- b.attempt = 0
+ atomic.StoreUint64(&b.attempt, 0)
}
// Attempt returns the current attempt counter value.
func (b *Backoff) Attempt() float64 {
- return b.attempt
+ return float64(atomic.LoadUint64(&b.attempt))
}
// Copy returns a backoff with equals constraints as the original
diff --git a/vendor/github.com/jpillora/backoff/go.mod b/vendor/github.com/jpillora/backoff/go.mod
new file mode 100644
index 0000000000000..7c41bc6f5830e
--- /dev/null
+++ b/vendor/github.com/jpillora/backoff/go.mod
@@ -0,0 +1,3 @@
+module github.com/jpillora/backoff
+
+go 1.13
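
The switch to `sync/atomic` above makes the attempt counter itself safe to share between goroutines. A small sketch with arbitrary parameters:

```go
package main

import (
	"fmt"
	"sync"
	"time"

	"github.com/jpillora/backoff"
)

func main() {
	b := &backoff.Backoff{
		Min:    100 * time.Millisecond,
		Max:    10 * time.Second,
		Factor: 2,
	}

	// Duration() now increments the attempt counter atomically, so
	// concurrent callers no longer race on it.
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			_ = b.Duration()
		}()
	}
	wg.Wait()
	fmt.Println("attempts:", b.Attempt()) // 4
}
```
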
diff --git a/vendor/github.com/jstemmer/go-junit-report/.gitignore b/vendor/github.com/jstemmer/go-junit-report/.gitignore
new file mode 100644
index 0000000000000..720bda6070d29
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/.gitignore
@@ -0,0 +1 @@
+go-junit-report
diff --git a/vendor/github.com/jstemmer/go-junit-report/.travis.yml b/vendor/github.com/jstemmer/go-junit-report/.travis.yml
new file mode 100644
index 0000000000000..d0dff3ef8e515
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/.travis.yml
@@ -0,0 +1,16 @@
+language: go
+
+go:
+ - tip
+ - "1.13.x"
+ - "1.12.x"
+ - "1.11.x"
+ - "1.10.x"
+ - "1.9.x"
+ - "1.8.x"
+ - "1.7.x"
+ - "1.6.x"
+ - "1.5.x"
+ - "1.4.x"
+ - "1.3.x"
+ - "1.2.x"
diff --git a/vendor/github.com/jstemmer/go-junit-report/LICENSE b/vendor/github.com/jstemmer/go-junit-report/LICENSE
new file mode 100644
index 0000000000000..f346564cefd9a
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/LICENSE
@@ -0,0 +1,20 @@
+Copyright (c) 2012 Joel Stemmer
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/vendor/github.com/jstemmer/go-junit-report/README.md b/vendor/github.com/jstemmer/go-junit-report/README.md
new file mode 100644
index 0000000000000..5b5f608be3d9c
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/README.md
@@ -0,0 +1,49 @@
+# go-junit-report
+
+Converts `go test` output to an xml report, suitable for applications that
+expect junit xml reports (e.g. [Jenkins](http://jenkins-ci.org)).
+
+[![Build Status][travis-badge]][travis-link]
+[![Report Card][report-badge]][report-link]
+
+## Installation
+
+Go version 1.2 or higher is required. Install or update using the `go get`
+command:
+
+```bash
+go get -u github.com/jstemmer/go-junit-report
+```
+
+## Usage
+
+go-junit-report reads the `go test` verbose output from standard in and writes
+junit compatible XML to standard out.
+
+```bash
+go test -v 2>&1 | go-junit-report > report.xml
+```
+
+Note that it can also parse benchmark output with the `-bench` flag:
+```bash
+go test -v -bench . -count 5 2>&1 | go-junit-report > report.xml
+```
+
+## Contribution
+
+Create an Issue and discuss the fix or feature, then fork the package.
+Clone your fork to github.com/jstemmer/go-junit-report; this is necessary because the Go import path must match.
+Fix the bug or implement the feature, test it, and then commit the change.
+Reference the #Issue and describe the change in the commit message.
+Create a Pull Request; the owner or an administrator can then merge it.
+
+### Run Tests
+
+```bash
+go test
+```
+
+[travis-badge]: https://travis-ci.org/jstemmer/go-junit-report.svg
+[travis-link]: https://travis-ci.org/jstemmer/go-junit-report
+[report-badge]: https://goreportcard.com/badge/github.com/jstemmer/go-junit-report
+[report-link]: https://goreportcard.com/report/github.com/jstemmer/go-junit-report
diff --git a/vendor/github.com/jstemmer/go-junit-report/formatter/formatter.go b/vendor/github.com/jstemmer/go-junit-report/formatter/formatter.go
new file mode 100644
index 0000000000000..6e1a0f31d685c
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/formatter/formatter.go
@@ -0,0 +1,182 @@
+package formatter
+
+import (
+ "bufio"
+ "encoding/xml"
+ "fmt"
+ "io"
+ "runtime"
+ "strings"
+ "time"
+
+ "github.com/jstemmer/go-junit-report/parser"
+)
+
+// JUnitTestSuites is a collection of JUnit test suites.
+type JUnitTestSuites struct {
+ XMLName xml.Name `xml:"testsuites"`
+ Suites []JUnitTestSuite `xml:"testsuite"`
+}
+
+// JUnitTestSuite is a single JUnit test suite which may contain many
+// testcases.
+type JUnitTestSuite struct {
+ XMLName xml.Name `xml:"testsuite"`
+ Tests int `xml:"tests,attr"`
+ Failures int `xml:"failures,attr"`
+ Time string `xml:"time,attr"`
+ Name string `xml:"name,attr"`
+ Properties []JUnitProperty `xml:"properties>property,omitempty"`
+ TestCases []JUnitTestCase `xml:"testcase"`
+}
+
+// JUnitTestCase is a single test case with its result.
+type JUnitTestCase struct {
+ XMLName xml.Name `xml:"testcase"`
+ Classname string `xml:"classname,attr"`
+ Name string `xml:"name,attr"`
+ Time string `xml:"time,attr"`
+ SkipMessage *JUnitSkipMessage `xml:"skipped,omitempty"`
+ Failure *JUnitFailure `xml:"failure,omitempty"`
+}
+
+// JUnitSkipMessage contains the reason why a testcase was skipped.
+type JUnitSkipMessage struct {
+ Message string `xml:"message,attr"`
+}
+
+// JUnitProperty represents a key/value pair used to define properties.
+type JUnitProperty struct {
+ Name string `xml:"name,attr"`
+ Value string `xml:"value,attr"`
+}
+
+// JUnitFailure contains data related to a failed test.
+type JUnitFailure struct {
+ Message string `xml:"message,attr"`
+ Type string `xml:"type,attr"`
+ Contents string `xml:",chardata"`
+}
+
+// JUnitReportXML writes a JUnit xml representation of the given report to w
+// in the format described at http://windyroad.org/dl/Open%20Source/JUnit.xsd
+func JUnitReportXML(report *parser.Report, noXMLHeader bool, goVersion string, w io.Writer) error {
+ suites := JUnitTestSuites{}
+
+ // convert Report to JUnit test suites
+ for _, pkg := range report.Packages {
+ pkg.Benchmarks = mergeBenchmarks(pkg.Benchmarks)
+ ts := JUnitTestSuite{
+ Tests: len(pkg.Tests) + len(pkg.Benchmarks),
+ Failures: 0,
+ Time: formatTime(pkg.Duration),
+ Name: pkg.Name,
+ Properties: []JUnitProperty{},
+ TestCases: []JUnitTestCase{},
+ }
+
+ classname := pkg.Name
+ if idx := strings.LastIndex(classname, "/"); idx > -1 && idx < len(pkg.Name) {
+ classname = pkg.Name[idx+1:]
+ }
+
+ // properties
+ if goVersion == "" {
+ // if goVersion was not specified as a flag, fall back to version reported by runtime
+ goVersion = runtime.Version()
+ }
+ ts.Properties = append(ts.Properties, JUnitProperty{"go.version", goVersion})
+ if pkg.CoveragePct != "" {
+ ts.Properties = append(ts.Properties, JUnitProperty{"coverage.statements.pct", pkg.CoveragePct})
+ }
+
+ // individual test cases
+ for _, test := range pkg.Tests {
+ testCase := JUnitTestCase{
+ Classname: classname,
+ Name: test.Name,
+ Time: formatTime(test.Duration),
+ Failure: nil,
+ }
+
+ if test.Result == parser.FAIL {
+ ts.Failures++
+ testCase.Failure = &JUnitFailure{
+ Message: "Failed",
+ Type: "",
+ Contents: strings.Join(test.Output, "\n"),
+ }
+ }
+
+ if test.Result == parser.SKIP {
+ testCase.SkipMessage = &JUnitSkipMessage{strings.Join(test.Output, "\n")}
+ }
+
+ ts.TestCases = append(ts.TestCases, testCase)
+ }
+
+ // individual benchmarks
+ for _, benchmark := range pkg.Benchmarks {
+ benchmarkCase := JUnitTestCase{
+ Classname: classname,
+ Name: benchmark.Name,
+ Time: formatBenchmarkTime(benchmark.Duration),
+ }
+
+ ts.TestCases = append(ts.TestCases, benchmarkCase)
+ }
+
+ suites.Suites = append(suites.Suites, ts)
+ }
+
+ // to xml
+ bytes, err := xml.MarshalIndent(suites, "", "\t")
+ if err != nil {
+ return err
+ }
+
+ writer := bufio.NewWriter(w)
+
+ if !noXMLHeader {
+ writer.WriteString(xml.Header)
+ }
+
+ writer.Write(bytes)
+ writer.WriteByte('\n')
+ writer.Flush()
+
+ return nil
+}
+
+func mergeBenchmarks(benchmarks []*parser.Benchmark) []*parser.Benchmark {
+ var merged []*parser.Benchmark
+ benchmap := make(map[string][]*parser.Benchmark)
+ for _, bm := range benchmarks {
+ if _, ok := benchmap[bm.Name]; !ok {
+ merged = append(merged, &parser.Benchmark{Name: bm.Name})
+ }
+ benchmap[bm.Name] = append(benchmap[bm.Name], bm)
+ }
+
+ for _, bm := range merged {
+ for _, b := range benchmap[bm.Name] {
+ bm.Allocs += b.Allocs
+ bm.Bytes += b.Bytes
+ bm.Duration += b.Duration
+ }
+ n := len(benchmap[bm.Name])
+ bm.Allocs /= n
+ bm.Bytes /= n
+ bm.Duration /= time.Duration(n)
+ }
+
+ return merged
+}
+
+func formatTime(d time.Duration) string {
+ return fmt.Sprintf("%.3f", d.Seconds())
+}
+
+func formatBenchmarkTime(d time.Duration) string {
+ return fmt.Sprintf("%.9f", d.Seconds())
+}
diff --git a/vendor/github.com/jstemmer/go-junit-report/go-junit-report.go b/vendor/github.com/jstemmer/go-junit-report/go-junit-report.go
new file mode 100644
index 0000000000000..1332f3b65b17a
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/go-junit-report.go
@@ -0,0 +1,45 @@
+package main
+
+import (
+ "flag"
+ "fmt"
+ "os"
+
+ "github.com/jstemmer/go-junit-report/formatter"
+ "github.com/jstemmer/go-junit-report/parser"
+)
+
+var (
+ noXMLHeader = flag.Bool("no-xml-header", false, "do not print xml header")
+ packageName = flag.String("package-name", "", "specify a package name (compiled tests have no package name in output)")
+ goVersionFlag = flag.String("go-version", "", "specify the value to use for the go.version property in the generated XML")
+ setExitCode = flag.Bool("set-exit-code", false, "set exit code to 1 if tests failed")
+)
+
+func main() {
+ flag.Parse()
+
+ if flag.NArg() != 0 {
+ fmt.Fprintf(os.Stderr, "%s does not accept positional arguments\n", os.Args[0])
+ flag.Usage()
+ os.Exit(1)
+ }
+
+ // Read input
+ report, err := parser.Parse(os.Stdin, *packageName)
+ if err != nil {
+ fmt.Printf("Error reading input: %s\n", err)
+ os.Exit(1)
+ }
+
+ // Write xml
+ err = formatter.JUnitReportXML(report, *noXMLHeader, *goVersionFlag, os.Stdout)
+ if err != nil {
+ fmt.Printf("Error writing XML: %s\n", err)
+ os.Exit(1)
+ }
+
+ if *setExitCode && report.Failures() > 0 {
+ os.Exit(1)
+ }
+}
diff --git a/vendor/github.com/jstemmer/go-junit-report/go.mod b/vendor/github.com/jstemmer/go-junit-report/go.mod
new file mode 100644
index 0000000000000..de52369acc959
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/go.mod
@@ -0,0 +1,3 @@
+module github.com/jstemmer/go-junit-report
+
+go 1.2
diff --git a/vendor/github.com/jstemmer/go-junit-report/parser/parser.go b/vendor/github.com/jstemmer/go-junit-report/parser/parser.go
new file mode 100644
index 0000000000000..e268128a2dc82
--- /dev/null
+++ b/vendor/github.com/jstemmer/go-junit-report/parser/parser.go
@@ -0,0 +1,319 @@
+package parser
+
+import (
+ "bufio"
+ "io"
+ "regexp"
+ "strconv"
+ "strings"
+ "time"
+)
+
+// Result represents a test result.
+type Result int
+
+// Test result constants
+const (
+ PASS Result = iota
+ FAIL
+ SKIP
+)
+
+// Report is a collection of package tests.
+type Report struct {
+ Packages []Package
+}
+
+// Package contains the test results of a single package.
+type Package struct {
+ Name string
+ Duration time.Duration
+ Tests []*Test
+ Benchmarks []*Benchmark
+ CoveragePct string
+
+ // Time is deprecated, use Duration instead.
+ Time int // in milliseconds
+}
+
+// Test contains the results of a single test.
+type Test struct {
+ Name string
+ Duration time.Duration
+ Result Result
+ Output []string
+
+ SubtestIndent string
+
+ // Time is deprecated, use Duration instead.
+ Time int // in milliseconds
+}
+
+// Benchmark contains the results of a single benchmark.
+type Benchmark struct {
+ Name string
+ Duration time.Duration
+ // number of B/op
+ Bytes int
+ // number of allocs/op
+ Allocs int
+}
+
+var (
+ regexStatus = regexp.MustCompile(`--- (PASS|FAIL|SKIP): (.+) \((\d+\.\d+)(?: seconds|s)\)`)
+ regexIndent = regexp.MustCompile(`^([ \t]+)---`)
+ regexCoverage = regexp.MustCompile(`^coverage:\s+(\d+\.\d+)%\s+of\s+statements(?:\sin\s.+)?$`)
+ regexResult = regexp.MustCompile(`^(ok|FAIL)\s+([^ ]+)\s+(?:(\d+\.\d+)s|\(cached\)|(\[\w+ failed]))(?:\s+coverage:\s+(\d+\.\d+)%\sof\sstatements(?:\sin\s.+)?)?$`)
+ // regexBenchmark captures 3-5 groups: benchmark name, number of times ran, ns/op (with or without decimal), B/op (optional), and allocs/op (optional).
+ regexBenchmark = regexp.MustCompile(`^(Benchmark[^ -]+)(?:-\d+\s+|\s+)(\d+)\s+(\d+|\d+\.\d+)\sns/op(?:\s+(\d+)\sB/op)?(?:\s+(\d+)\sallocs/op)?`)
+ regexOutput = regexp.MustCompile(`( )*\t(.*)`)
+ regexSummary = regexp.MustCompile(`^(PASS|FAIL|SKIP)$`)
+ regexPackageWithTest = regexp.MustCompile(`^# ([^\[\]]+) \[[^\]]+\]$`)
+)
+
+// Parse parses go test output from reader r and returns a report with the
+// results. An optional pkgName can be given, which is used in case a package
+// result line is missing.
+func Parse(r io.Reader, pkgName string) (*Report, error) {
+ reader := bufio.NewReader(r)
+
+ report := &Report{make([]Package, 0)}
+
+ // keep track of tests we find
+ var tests []*Test
+
+ // keep track of benchmarks we find
+ var benchmarks []*Benchmark
+
+ // sum of tests' time; use this if the current test has no result line (i.e. a compiled test)
+ var testsTime time.Duration
+
+ // current test
+ var cur string
+
+ // coverage percentage report for current package
+ var coveragePct string
+
+ // stores mapping between package name and output of build failures
+ var packageCaptures = map[string][]string{}
+
+ // the name of the package whose build failure output is being captured
+ var capturedPackage string
+
+ // capture any non-test output
+ var buffers = map[string][]string{}
+
+ // parse lines
+ for {
+ l, _, err := reader.ReadLine()
+ if err != nil && err == io.EOF {
+ break
+ } else if err != nil {
+ return nil, err
+ }
+
+ line := string(l)
+
+ if strings.HasPrefix(line, "=== RUN ") {
+ // new test
+ cur = strings.TrimSpace(line[8:])
+ tests = append(tests, &Test{
+ Name: cur,
+ Result: FAIL,
+ Output: make([]string, 0),
+ })
+
+ // clear the current build package, so output lines won't be added to that build
+ capturedPackage = ""
+ } else if matches := regexBenchmark.FindStringSubmatch(line); len(matches) == 6 {
+ bytes, _ := strconv.Atoi(matches[4])
+ allocs, _ := strconv.Atoi(matches[5])
+
+ benchmarks = append(benchmarks, &Benchmark{
+ Name: matches[1],
+ Duration: parseNanoseconds(matches[3]),
+ Bytes: bytes,
+ Allocs: allocs,
+ })
+ } else if strings.HasPrefix(line, "=== PAUSE ") {
+ continue
+ } else if strings.HasPrefix(line, "=== CONT ") {
+ cur = strings.TrimSpace(line[8:])
+ continue
+ } else if matches := regexResult.FindStringSubmatch(line); len(matches) == 6 {
+ if matches[5] != "" {
+ coveragePct = matches[5]
+ }
+ if strings.HasSuffix(matches[4], "failed]") {
+ // the build of the package failed; inject a dummy test into the package
+ // that indicates the failure and contains the failure description.
+ tests = append(tests, &Test{
+ Name: matches[4],
+ Result: FAIL,
+ Output: packageCaptures[matches[2]],
+ })
+ } else if matches[1] == "FAIL" && !containsFailures(tests) && len(buffers[cur]) > 0 {
+ // This package didn't have any failing tests, but still it
+ // failed with some output. Create a dummy test with the
+ // output.
+ tests = append(tests, &Test{
+ Name: "Failure",
+ Result: FAIL,
+ Output: buffers[cur],
+ })
+ buffers[cur] = buffers[cur][0:0]
+ }
+
+ // all tests in this package are finished
+ report.Packages = append(report.Packages, Package{
+ Name: matches[2],
+ Duration: parseSeconds(matches[3]),
+ Tests: tests,
+ Benchmarks: benchmarks,
+ CoveragePct: coveragePct,
+
+ Time: int(parseSeconds(matches[3]) / time.Millisecond), // deprecated
+ })
+
+ buffers[cur] = buffers[cur][0:0]
+ tests = make([]*Test, 0)
+ benchmarks = make([]*Benchmark, 0)
+ coveragePct = ""
+ cur = ""
+ testsTime = 0
+ } else if matches := regexStatus.FindStringSubmatch(line); len(matches) == 4 {
+ cur = matches[2]
+ test := findTest(tests, cur)
+ if test == nil {
+ continue
+ }
+
+ // test status
+ if matches[1] == "PASS" {
+ test.Result = PASS
+ } else if matches[1] == "SKIP" {
+ test.Result = SKIP
+ } else {
+ test.Result = FAIL
+ }
+
+ if matches := regexIndent.FindStringSubmatch(line); len(matches) == 2 {
+ test.SubtestIndent = matches[1]
+ }
+
+ test.Output = buffers[cur]
+
+ test.Name = matches[2]
+ test.Duration = parseSeconds(matches[3])
+ testsTime += test.Duration
+
+ test.Time = int(test.Duration / time.Millisecond) // deprecated
+ } else if matches := regexCoverage.FindStringSubmatch(line); len(matches) == 2 {
+ coveragePct = matches[1]
+ } else if matches := regexOutput.FindStringSubmatch(line); capturedPackage == "" && len(matches) == 3 {
+ // Sub-tests start with one or more series of 4-space indents, followed by a hard tab,
+ // followed by the test output
+ // Top-level tests start with a hard tab.
+ test := findTest(tests, cur)
+ if test == nil {
+ continue
+ }
+ test.Output = append(test.Output, matches[2])
+ } else if strings.HasPrefix(line, "# ") {
+ // indicates a capture of build output of a package. set the current build package.
+ packageWithTestBinary := regexPackageWithTest.FindStringSubmatch(line)
+ if packageWithTestBinary != nil {
+ // Sometimes, the text after "# " shows the name of the test binary
+ // ("<package>.test") in addition to the package
+ // e.g.: "# package/name [package/name.test]"
+ capturedPackage = packageWithTestBinary[1]
+ } else {
+ capturedPackage = line[2:]
+ }
+ } else if capturedPackage != "" {
+ // current line is build failure capture for the current built package
+ packageCaptures[capturedPackage] = append(packageCaptures[capturedPackage], line)
+ } else if regexSummary.MatchString(line) {
+ // unset current test name so any additional output after the
+ // summary is captured separately.
+ cur = ""
+ } else {
+ // buffer anything else that we didn't recognize
+ buffers[cur] = append(buffers[cur], line)
+
+ // if we have a current test, also append to its output
+ test := findTest(tests, cur)
+ if test != nil {
+ if strings.HasPrefix(line, test.SubtestIndent+" ") {
+ test.Output = append(test.Output, strings.TrimPrefix(line, test.SubtestIndent+" "))
+ }
+ }
+ }
+ }
+
+ if len(tests) > 0 {
+ // no result line found
+ report.Packages = append(report.Packages, Package{
+ Name: pkgName,
+ Duration: testsTime,
+ Time: int(testsTime / time.Millisecond),
+ Tests: tests,
+ Benchmarks: benchmarks,
+ CoveragePct: coveragePct,
+ })
+ }
+
+ return report, nil
+}
+
+func parseSeconds(t string) time.Duration {
+ if t == "" {
+ return time.Duration(0)
+ }
+ // ignore error
+ d, _ := time.ParseDuration(t + "s")
+ return d
+}
+
+func parseNanoseconds(t string) time.Duration {
+ // note: if input < 1 ns precision, result will be 0s.
+ if t == "" {
+ return time.Duration(0)
+ }
+ // ignore error
+ d, _ := time.ParseDuration(t + "ns")
+ return d
+}
+
+func findTest(tests []*Test, name string) *Test {
+ for i := len(tests) - 1; i >= 0; i-- {
+ if tests[i].Name == name {
+ return tests[i]
+ }
+ }
+ return nil
+}
+
+func containsFailures(tests []*Test) bool {
+ for _, test := range tests {
+ if test.Result == FAIL {
+ return true
+ }
+ }
+ return false
+}
+
+// Failures counts the number of failed tests in this report
+func (r *Report) Failures() int {
+ count := 0
+
+ for _, p := range r.Packages {
+ for _, t := range p.Tests {
+ if t.Result == FAIL {
+ count++
+ }
+ }
+ }
+
+ return count
+}
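
To show how the two new packages fit together outside the CLI, here is a minimal sketch that parses a hand-written `go test -v` transcript and emits JUnit XML; the transcript content is fabricated.

```go
package main

import (
	"log"
	"os"
	"strings"

	"github.com/jstemmer/go-junit-report/formatter"
	"github.com/jstemmer/go-junit-report/parser"
)

func main() {
	// A tiny, fabricated `go test -v` transcript.
	input := strings.Join([]string{
		"=== RUN   TestExample",
		"--- PASS: TestExample (0.05s)",
		"PASS",
		"ok  \texample.com/pkg\t0.060s",
	}, "\n")

	report, err := parser.Parse(strings.NewReader(input), "")
	if err != nil {
		log.Fatal(err)
	}
	if err := formatter.JUnitReportXML(report, false, "", os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```
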
diff --git a/vendor/github.com/mattn/go-ieproxy/go.mod b/vendor/github.com/mattn/go-ieproxy/go.mod
new file mode 100644
index 0000000000000..4090bb8e7f031
--- /dev/null
+++ b/vendor/github.com/mattn/go-ieproxy/go.mod
@@ -0,0 +1,9 @@
+module github.com/mattn/go-ieproxy
+
+go 1.14
+
+require (
+ golang.org/x/net v0.0.0-20191112182307-2180aed22343
+ golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea
+ golang.org/x/text v0.3.2 // indirect
+)
diff --git a/vendor/github.com/mattn/go-ieproxy/go.sum b/vendor/github.com/mattn/go-ieproxy/go.sum
new file mode 100644
index 0000000000000..a87acb1eca7b0
--- /dev/null
+++ b/vendor/github.com/mattn/go-ieproxy/go.sum
@@ -0,0 +1,11 @@
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/net v0.0.0-20191112182307-2180aed22343 h1:00ohfJ4K98s3m6BGUoBd8nyfp4Yl0GoIKvw5abItTjI=
+golang.org/x/net v0.0.0-20191112182307-2180aed22343/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea h1:Mz1TMnfJDRJLk8S8OPCoJYgrsp/Se/2TBre2+vwX128=
+golang.org/x/sys v0.0.0-20191112214154-59a1497f0cea/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
+golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
+golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
diff --git a/vendor/github.com/mattn/go-ieproxy/ieproxy_windows.go b/vendor/github.com/mattn/go-ieproxy/ieproxy_windows.go
index a3d4c11c47e5d..79b4dabcef3ee 100644
--- a/vendor/github.com/mattn/go-ieproxy/ieproxy_windows.go
+++ b/vendor/github.com/mattn/go-ieproxy/ieproxy_windows.go
@@ -25,33 +25,59 @@ func getConf() ProxyConf {
}
func writeConf() {
- var (
- cfg *tWINHTTP_CURRENT_USER_IE_PROXY_CONFIG
- err error
- )
+ proxy := ""
+ proxyByPass := ""
+ autoConfigUrl := ""
+ autoDetect := false
+
+ // Try from IE first.
+ if ieCfg, err := getUserConfigFromWindowsSyscall(); err == nil {
+ defer globalFreeWrapper(ieCfg.lpszProxy)
+ defer globalFreeWrapper(ieCfg.lpszProxyBypass)
+ defer globalFreeWrapper(ieCfg.lpszAutoConfigUrl)
+
+ proxy = StringFromUTF16Ptr(ieCfg.lpszProxy)
+ proxyByPass = StringFromUTF16Ptr(ieCfg.lpszProxyBypass)
+ autoConfigUrl = StringFromUTF16Ptr(ieCfg.lpszAutoConfigUrl)
+ autoDetect = ieCfg.fAutoDetect
+ }
+
+ // Try WinHTTP default proxy.
+ if defaultCfg, err := getDefaultProxyConfiguration(); err == nil {
+ defer globalFreeWrapper(defaultCfg.lpszProxy)
+ defer globalFreeWrapper(defaultCfg.lpszProxyBypass)
+
+ newProxy := StringFromUTF16Ptr(defaultCfg.lpszProxy)
+ if proxy == "" {
+ proxy = newProxy
+ }
+
+ newProxyByPass := StringFromUTF16Ptr(defaultCfg.lpszProxyBypass)
+ if proxyByPass == "" {
+ proxyByPass = newProxyByPass
+ }
+ }
- if cfg, err = getUserConfigFromWindowsSyscall(); err != nil {
+ if proxy == "" && !autoDetect {
+ // Fall back to IE registry or manual detection if nothing is found there.
regedit, _ := readRegedit() // If the syscall fails, backup to manual detection.
windowsProxyConf = parseRegedit(regedit)
return
}
- defer globalFreeWrapper(cfg.lpszProxy)
- defer globalFreeWrapper(cfg.lpszProxyBypass)
- defer globalFreeWrapper(cfg.lpszAutoConfigUrl)
-
+ // Setting the proxy settings.
windowsProxyConf = ProxyConf{
Static: StaticProxyConf{
- Active: cfg.lpszProxy != nil,
+ Active: len(proxy) > 0,
},
Automatic: ProxyScriptConf{
- Active: cfg.lpszAutoConfigUrl != nil || cfg.fAutoDetect,
+ Active: len(autoConfigUrl) > 0 || autoDetect,
},
}
if windowsProxyConf.Static.Active {
protocol := make(map[string]string)
- for _, s := range strings.Split(StringFromUTF16Ptr(cfg.lpszProxy), ";") {
+ for _, s := range strings.Split(proxy, ";") {
s = strings.TrimSpace(s)
if s == "" {
continue
@@ -65,31 +91,38 @@ func writeConf() {
}
windowsProxyConf.Static.Protocols = protocol
- if cfg.lpszProxyBypass != nil {
- windowsProxyConf.Static.NoProxy = strings.Replace(StringFromUTF16Ptr(cfg.lpszProxyBypass), ";", ",", -1)
+ if len(proxyByPass) > 0 {
+ windowsProxyConf.Static.NoProxy = strings.Replace(proxyByPass, ";", ",", -1)
}
}
if windowsProxyConf.Automatic.Active {
- windowsProxyConf.Automatic.PreConfiguredURL = StringFromUTF16Ptr(cfg.lpszAutoConfigUrl)
+ windowsProxyConf.Automatic.PreConfiguredURL = autoConfigUrl
}
}
func getUserConfigFromWindowsSyscall() (*tWINHTTP_CURRENT_USER_IE_PROXY_CONFIG, error) {
- handle, _, err := winHttpOpen.Call(0, 0, 0, 0, 0)
- if handle == 0 {
- return &tWINHTTP_CURRENT_USER_IE_PROXY_CONFIG{}, err
+ if err := winHttpGetIEProxyConfigForCurrentUser.Find(); err != nil {
+ return nil, err
}
- defer winHttpCloseHandle.Call(handle)
-
- config := new(tWINHTTP_CURRENT_USER_IE_PROXY_CONFIG)
-
- ret, _, err := winHttpGetIEProxyConfigForCurrentUser.Call(uintptr(unsafe.Pointer(config)))
- if ret > 0 {
- err = nil
+ p := new(tWINHTTP_CURRENT_USER_IE_PROXY_CONFIG)
+ r, _, err := winHttpGetIEProxyConfigForCurrentUser.Call(uintptr(unsafe.Pointer(p)))
+ if rTrue(r) {
+ return p, nil
}
+ return nil, err
+}
- return config, err
+func getDefaultProxyConfiguration() (*tWINHTTP_PROXY_INFO, error) {
+ pInfo := new(tWINHTTP_PROXY_INFO)
+ if err := winHttpGetDefaultProxyConfiguration.Find(); err != nil {
+ return nil, err
+ }
+ r, _, err := winHttpGetDefaultProxyConfiguration.Call(uintptr(unsafe.Pointer(pInfo)))
+ if rTrue(r) {
+ return pInfo, nil
+ }
+ return nil, err
}
// OverrideEnvWithStaticProxy writes new values to the
diff --git a/vendor/github.com/mattn/go-ieproxy/kernel32_data_windows.go b/vendor/github.com/mattn/go-ieproxy/kernel32_data_windows.go
index cfb4349e23ea4..30ebbd22a076c 100644
--- a/vendor/github.com/mattn/go-ieproxy/kernel32_data_windows.go
+++ b/vendor/github.com/mattn/go-ieproxy/kernel32_data_windows.go
@@ -13,3 +13,7 @@ func globalFreeWrapper(ptr *uint16) {
_, _, _ = globalFree.Call(uintptr(unsafe.Pointer(ptr)))
}
}
+
+func rTrue(r uintptr) bool {
+ return r == 1
+}
diff --git a/vendor/github.com/mattn/go-ieproxy/GetProxyFunc.go b/vendor/github.com/mattn/go-ieproxy/proxy_middleman.go
similarity index 100%
rename from vendor/github.com/mattn/go-ieproxy/GetProxyFunc.go
rename to vendor/github.com/mattn/go-ieproxy/proxy_middleman.go
diff --git a/vendor/github.com/mattn/go-ieproxy/proxyMiddleman_unix.go b/vendor/github.com/mattn/go-ieproxy/proxy_middleman_unix.go
similarity index 100%
rename from vendor/github.com/mattn/go-ieproxy/proxyMiddleman_unix.go
rename to vendor/github.com/mattn/go-ieproxy/proxy_middleman_unix.go
diff --git a/vendor/github.com/mattn/go-ieproxy/proxyMiddleman_windows.go b/vendor/github.com/mattn/go-ieproxy/proxy_middleman_windows.go
similarity index 100%
rename from vendor/github.com/mattn/go-ieproxy/proxyMiddleman_windows.go
rename to vendor/github.com/mattn/go-ieproxy/proxy_middleman_windows.go
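
The renames do not change the package's public surface; a hedged sketch of the usual entry point, which resolves proxies from the system settings on Windows and from the environment elsewhere:

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/mattn/go-ieproxy"
)

func main() {
	// GetProxyFunc returns a Proxy callback suitable for http.Transport.
	transport := &http.Transport{Proxy: ieproxy.GetProxyFunc()}
	client := &http.Client{Transport: transport}

	resp, err := client.Get("https://example.com")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```
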
diff --git a/vendor/github.com/mattn/go-ieproxy/winhttp_data_windows.go b/vendor/github.com/mattn/go-ieproxy/winhttp_data_windows.go
index 560940df88f4e..4d3b1677805c0 100644
--- a/vendor/github.com/mattn/go-ieproxy/winhttp_data_windows.go
+++ b/vendor/github.com/mattn/go-ieproxy/winhttp_data_windows.go
@@ -7,6 +7,7 @@ var winHttpGetProxyForURL = winHttp.NewProc("WinHttpGetProxyForUrl")
var winHttpOpen = winHttp.NewProc("WinHttpOpen")
var winHttpCloseHandle = winHttp.NewProc("WinHttpCloseHandle")
var winHttpGetIEProxyConfigForCurrentUser = winHttp.NewProc("WinHttpGetIEProxyConfigForCurrentUser")
+var winHttpGetDefaultProxyConfiguration = winHttp.NewProc("WinHttpGetDefaultProxyConfiguration")
type tWINHTTP_AUTOPROXY_OPTIONS struct {
dwFlags autoProxyFlag
diff --git a/vendor/github.com/miekg/dns/generate.go b/vendor/github.com/miekg/dns/generate.go
index 97bc39f58a828..26edc4f402eba 100644
--- a/vendor/github.com/miekg/dns/generate.go
+++ b/vendor/github.com/miekg/dns/generate.go
@@ -49,11 +49,15 @@ func (zp *ZoneParser) generate(l lex) (RR, bool) {
if err != nil {
return zp.setParseError("bad stop in $GENERATE range", l)
}
- if end < 0 || start < 0 || end < start {
+ if end < 0 || start < 0 || end < start || (end-start)/step > 65535 {
return zp.setParseError("bad range in $GENERATE range", l)
}
- zp.c.Next() // _BLANK
+ // _BLANK
+ l, ok := zp.c.Next()
+ if !ok || l.value != zBlank {
+ return zp.setParseError("garbage after $GENERATE range", l)
+ }
// Create a complete new string, which we then parse again.
var s string
diff --git a/vendor/github.com/miekg/dns/labels.go b/vendor/github.com/miekg/dns/labels.go
index aa9dd41eec6aa..e32d2a1d2546f 100644
--- a/vendor/github.com/miekg/dns/labels.go
+++ b/vendor/github.com/miekg/dns/labels.go
@@ -13,29 +13,27 @@ func SplitDomainName(s string) (labels []string) {
if len(s) == 0 {
return nil
}
- if s == "." {
- return nil
- }
- // offset of the final '.' or the length of the name
- var fqdnEnd int
+ fqdnEnd := 0 // offset of the final '.' or the length of the name
+ idx := Split(s)
+ begin := 0
if IsFqdn(s) {
fqdnEnd = len(s) - 1
} else {
fqdnEnd = len(s)
}
- var (
- begin int
- off int
- end bool
- )
- for {
- off, end = NextLabel(s, off)
- if end {
- break
+
+ switch len(idx) {
+ case 0:
+ return nil
+ case 1:
+ // no-op
+ default:
+ for _, end := range idx[1:] {
+ labels = append(labels, s[begin:end-1])
+ begin = end
}
- labels = append(labels, s[begin:off-1])
- begin = off
}
+
return append(labels, s[begin:fqdnEnd])
}
@@ -54,50 +52,52 @@ func CompareDomainName(s1, s2 string) (n int) {
return 0
}
- j1 := len(s1)
- if s1[j1-1] == '.' {
- j1--
- }
- j2 := len(s2)
- if s2[j2-1] == '.' {
- j2--
+ l1 := Split(s1)
+ l2 := Split(s2)
+
+ j1 := len(l1) - 1 // end
+ i1 := len(l1) - 2 // start
+ j2 := len(l2) - 1
+ i2 := len(l2) - 2
+ // the second check can be done here: last/only label
+ // before we fall through into the for-loop below
+ if equal(s1[l1[j1]:], s2[l2[j2]:]) {
+ n++
+ } else {
+ return
}
- var i1, i2 int
for {
- i1 = prevLabel(s1, j1-1)
- i2 = prevLabel(s2, j2-1)
- if equal(s1[i1:j1], s2[i2:j2]) {
- n++
- } else {
+ if i1 < 0 || i2 < 0 {
break
}
- if i1 == 0 || i2 == 0 {
+ if equal(s1[l1[i1]:l1[j1]], s2[l2[i2]:l2[j2]]) {
+ n++
+ } else {
break
}
- j1 = i1 - 2
- j2 = i2 - 2
+ j1--
+ i1--
+ j2--
+ i2--
}
return
}
// CountLabel counts the the number of labels in the string s.
// s must be a syntactically valid domain name.
-func CountLabel(s string) int {
+func CountLabel(s string) (labels int) {
if s == "." {
- return 0
+ return
}
- labels := 1
- for i := 0; i < len(s)-1; i++ {
- c := s[i]
- if c == '\\' {
- i++
- continue
- }
- if c == '.' {
- labels++
+ off := 0
+ end := false
+ for {
+ off, end = NextLabel(s, off)
+ labels++
+ if end {
+ return
}
}
- return labels
}
// Split splits a name s into its label indexes.
@@ -126,70 +126,40 @@ func Split(s string) []int {
// The bool end is true when the end of the string has been reached.
// Also see PrevLabel.
func NextLabel(s string, offset int) (i int, end bool) {
+ quote := false
for i = offset; i < len(s)-1; i++ {
- c := s[i]
- if c == '\\' {
- i++
- continue
- }
- if c == '.' {
+ switch s[i] {
+ case '\\':
+ quote = !quote
+ default:
+ quote = false
+ case '.':
+ if quote {
+ quote = !quote
+ continue
+ }
return i + 1, false
}
}
return i + 1, true
}
-func prevLabel(s string, offset int) int {
- for i := offset; i >= 0; i-- {
- if s[i] == '.' {
- if i == 0 || s[i-1] != '\\' {
- return i + 1 // the '.' is not escaped
- }
- // We are at '\.' and need to check if the '\' itself is escaped.
- // We do this by walking backwards from '\.' and counting the
- // number of '\' we encounter. If the number of '\' is even
- // (though here it's actually odd since we start at '\.') the '\'
- // is escaped.
- j := i - 2
- for ; j >= 0 && s[j] == '\\'; j-- {
- }
- // An odd number here indicates that the '\' preceding the '.'
- // is escaped.
- if (i-j)&1 == 1 {
- return i + 1
- }
- i = j + 1
- }
- }
- return 0
-}
-
// PrevLabel returns the index of the label when starting from the right and
// jumping n labels to the left.
// The bool start is true when the start of the string has been overshot.
// Also see NextLabel.
func PrevLabel(s string, n int) (i int, start bool) {
- if s == "." {
- return 0, true
- }
if n == 0 {
return len(s), false
}
- i = len(s) - 1
- if s[i] == '.' {
- i--
- }
- for ; n > 0; n-- {
- i = prevLabel(s, i)
- if i == 0 {
- break
- }
- i -= 2
+ lab := Split(s)
+ if lab == nil {
+ return 0, true
}
- if n > 0 {
+ if n > len(lab) {
return 0, true
}
- return i + 2, false
+ return lab[len(lab)-n], false
}
// equal compares a and b while ignoring case. It returns true when equal otherwise false.
@@ -200,19 +170,18 @@ func equal(a, b string) bool {
if la != lb {
return false
}
- if a != b {
- // case-insensitive comparison
- for i := la - 1; i >= 0; i-- {
- ai := a[i]
- bi := b[i]
- if ai != bi {
- if bi < ai {
- bi, ai = ai, bi
- }
- if !('A' <= ai && ai <= 'Z' && bi == ai+'a'-'A') {
- return false
- }
- }
+
+ for i := la - 1; i >= 0; i-- {
+ ai := a[i]
+ bi := b[i]
+ if ai >= 'A' && ai <= 'Z' {
+ ai |= 'a' - 'A'
+ }
+ if bi >= 'A' && bi <= 'Z' {
+ bi |= 'a' - 'A'
+ }
+ if ai != bi {
+ return false
}
}
return true
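
For reference, a minimal standalone sketch (not part of the vendored diff) of the ASCII case-folding trick the rewritten `equal` relies on: setting the 0x20 bit via `c |= 'a' - 'A'` maps an uppercase ASCII letter to its lowercase form, so both bytes can be normalized before comparison.

```go
package main

import "fmt"

// foldASCII lowercases a single ASCII letter by setting the 0x20 bit,
// mirroring the per-byte normalization in the vendored equal().
func foldASCII(c byte) byte {
	if c >= 'A' && c <= 'Z' {
		c |= 'a' - 'A'
	}
	return c
}

func main() {
	fmt.Println(foldASCII('M') == foldASCII('m')) // true
	fmt.Println(foldASCII('.'))                   // 46: non-letters pass through unchanged
}
```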
diff --git a/vendor/github.com/miekg/dns/msg.go b/vendor/github.com/miekg/dns/msg.go
index a2e051b6a48fd..e04fb5d77a78f 100644
--- a/vendor/github.com/miekg/dns/msg.go
+++ b/vendor/github.com/miekg/dns/msg.go
@@ -754,24 +754,13 @@ func (dns *Msg) Pack() (msg []byte, err error) {
return dns.PackBuffer(nil)
}
-var compressionPackPool = sync.Pool{
- New: func() interface{} {
- return make(map[string]uint16)
- },
-}
-
// PackBuffer packs a Msg, using the given buffer buf. If buf is too small a new buffer is allocated.
func (dns *Msg) PackBuffer(buf []byte) (msg []byte, err error) {
// If this message can't be compressed, avoid filling the
// compression map and creating garbage.
if dns.Compress && dns.isCompressible() {
- compression := compressionPackPool.Get().(map[string]uint16)
- msg, err := dns.packBufferWithCompressionMap(buf, compressionMap{int: compression}, true)
- for k := range compression {
- delete(compression, k)
- }
- compressionPackPool.Put(compression)
- return msg, err
+ compression := make(map[string]uint16) // Compression pointer mappings.
+ return dns.packBufferWithCompressionMap(buf, compressionMap{int: compression}, true)
}
return dns.packBufferWithCompressionMap(buf, compressionMap{}, false)
@@ -983,12 +972,6 @@ func (dns *Msg) isCompressible() bool {
len(dns.Ns) > 0 || len(dns.Extra) > 0
}
-var compressionPool = sync.Pool{
- New: func() interface{} {
- return make(map[string]struct{})
- },
-}
-
// Len returns the message length when in (un)compressed wire format.
// If dns.Compress is true compression it is taken into account. Len()
// is provided to be a faster way to get the size of the resulting packet,
@@ -997,13 +980,8 @@ func (dns *Msg) Len() int {
// If this message can't be compressed, avoid filling the
// compression map and creating garbage.
if dns.Compress && dns.isCompressible() {
- compression := compressionPool.Get().(map[string]struct{})
- n := msgLenWithCompressionMap(dns, compression)
- for k := range compression {
- delete(compression, k)
- }
- compressionPool.Put(compression)
- return n
+ compression := make(map[string]struct{})
+ return msgLenWithCompressionMap(dns, compression)
}
return msgLenWithCompressionMap(dns, nil)
diff --git a/vendor/github.com/miekg/dns/msg_truncate.go b/vendor/github.com/miekg/dns/msg_truncate.go
index a711f006d02d0..89d40757dbddf 100644
--- a/vendor/github.com/miekg/dns/msg_truncate.go
+++ b/vendor/github.com/miekg/dns/msg_truncate.go
@@ -54,7 +54,7 @@ func (dns *Msg) Truncate(size int) {
size -= Len(edns0)
}
- compression := compressionPool.Get().(map[string]struct{})
+ compression := make(map[string]struct{})
l = headerSize
for _, r := range dns.Question {
@@ -88,11 +88,6 @@ func (dns *Msg) Truncate(size int) {
// Add the OPT record back onto the additional section.
dns.Extra = append(dns.Extra, edns0)
}
-
- for k := range compression {
- delete(compression, k)
- }
- compressionPool.Put(compression)
}
func truncateLoop(rrs []RR, size, l int, compression map[string]struct{}) (int, int) {
diff --git a/vendor/github.com/miekg/dns/scan.go b/vendor/github.com/miekg/dns/scan.go
index 7da14c88f2acc..0f9361af019ae 100644
--- a/vendor/github.com/miekg/dns/scan.go
+++ b/vendor/github.com/miekg/dns/scan.go
@@ -134,7 +134,7 @@ func ReadRR(r io.Reader, file string) (RR, error) {
}
// ParseZone reads a RFC 1035 style zonefile from r. It returns
-// *Tokens on the returned channel, each consisting of either a
+// Tokens on the returned channel, each consisting of either a
// parsed RR and optional comment or a nil RR and an error. The
// channel is closed by ParseZone when the end of r is reached.
//
@@ -143,7 +143,8 @@ func ReadRR(r io.Reader, file string) (RR, error) {
// origin, as if the file would start with an $ORIGIN directive.
//
// The directives $INCLUDE, $ORIGIN, $TTL and $GENERATE are all
-// supported.
+// supported. Note that $GENERATE's range supports up to a maximum
+// of 65535 steps.
//
// Basic usage pattern when reading from a string (z) containing the
// zone data:
@@ -203,6 +204,7 @@ func parseZone(r io.Reader, origin, file string, t chan *Token) {
//
// The directives $INCLUDE, $ORIGIN, $TTL and $GENERATE are all
// supported. Although $INCLUDE is disabled by default.
+// Note that $GENERATE's range supports up to a maximum of 65535 steps.
//
// Basic usage pattern when reading from a string (z) containing the
// zone data:
@@ -968,6 +970,11 @@ func (zl *zlexer) Next() (lex, bool) {
// was inside braces and we delayed adding it until now.
com[comi] = ' ' // convert newline to space
comi++
+ if comi >= len(com) {
+ l.token = "comment length insufficient for parsing"
+ l.err = true
+ return *l, true
+ }
}
com[comi] = ';'
diff --git a/vendor/github.com/miekg/dns/version.go b/vendor/github.com/miekg/dns/version.go
index b945243ec81e6..0a833dc93babf 100644
--- a/vendor/github.com/miekg/dns/version.go
+++ b/vendor/github.com/miekg/dns/version.go
@@ -3,7 +3,7 @@ package dns
import "fmt"
// Version is current version of this library.
-var Version = V{1, 1, 19}
+var Version = V{1, 1, 22}
// V holds the version of this library.
type V struct {
diff --git a/vendor/github.com/prometheus/client_golang/api/prometheus/v1/api.go b/vendor/github.com/prometheus/client_golang/api/prometheus/v1/api.go
index 1845ef6f06c18..00cd76ea2712e 100644
--- a/vendor/github.com/prometheus/client_golang/api/prometheus/v1/api.go
+++ b/vendor/github.com/prometheus/client_golang/api/prometheus/v1/api.go
@@ -100,7 +100,6 @@ func marshalPointJSON(ptr unsafe.Pointer, stream *json.Stream) {
if abs != 0 {
if abs < 1e-6 || abs >= 1e21 {
fmt = 'e'
- fmt = 'e'
}
}
buf = strconv.AppendFloat(buf, float64(p.Value), fmt, -1, 64)
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/desc.go b/vendor/github.com/prometheus/client_golang/prometheus/desc.go
index 1d034f871cb9c..e3232d79f44b9 100644
--- a/vendor/github.com/prometheus/client_golang/prometheus/desc.go
+++ b/vendor/github.com/prometheus/client_golang/prometheus/desc.go
@@ -19,6 +19,7 @@ import (
"sort"
"strings"
+ "github.com/cespare/xxhash/v2"
"github.com/golang/protobuf/proto"
"github.com/prometheus/common/model"
@@ -126,24 +127,24 @@ func NewDesc(fqName, help string, variableLabels []string, constLabels Labels) *
return d
}
- vh := hashNew()
+ xxh := xxhash.New()
for _, val := range labelValues {
- vh = hashAdd(vh, val)
- vh = hashAddByte(vh, separatorByte)
+ xxh.WriteString(val)
+ xxh.Write(separatorByteSlice)
}
- d.id = vh
+ d.id = xxh.Sum64()
// Sort labelNames so that order doesn't matter for the hash.
sort.Strings(labelNames)
// Now hash together (in this order) the help string and the sorted
// label names.
- lh := hashNew()
- lh = hashAdd(lh, help)
- lh = hashAddByte(lh, separatorByte)
+ xxh.Reset()
+ xxh.WriteString(help)
+ xxh.Write(separatorByteSlice)
for _, labelName := range labelNames {
- lh = hashAdd(lh, labelName)
- lh = hashAddByte(lh, separatorByte)
+ xxh.WriteString(labelName)
+ xxh.Write(separatorByteSlice)
}
- d.dimHash = lh
+ d.dimHash = xxh.Sum64()
d.constLabelPairs = make([]*dto.LabelPair, 0, len(constLabels))
for n, v := range constLabels {
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go
index d7ea67bd2bafd..fd6ab7beca9aa 100644
--- a/vendor/github.com/prometheus/client_golang/prometheus/histogram.go
+++ b/vendor/github.com/prometheus/client_golang/prometheus/histogram.go
@@ -187,7 +187,7 @@ func newHistogram(desc *Desc, opts HistogramOpts, labelValues ...string) Histogr
desc: desc,
upperBounds: opts.Buckets,
labelPairs: makeLabelPairs(desc, labelValues),
- counts: [2]*histogramCounts{&histogramCounts{}, &histogramCounts{}},
+ counts: [2]*histogramCounts{{}, {}},
}
for i, upperBound := range h.upperBounds {
if i < len(h.upperBounds)-1 {
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/metric.go b/vendor/github.com/prometheus/client_golang/prometheus/metric.go
index 55e6d86d596f7..0df1eff8814fb 100644
--- a/vendor/github.com/prometheus/client_golang/prometheus/metric.go
+++ b/vendor/github.com/prometheus/client_golang/prometheus/metric.go
@@ -18,11 +18,12 @@ import (
"time"
"github.com/golang/protobuf/proto"
+ "github.com/prometheus/common/model"
dto "github.com/prometheus/client_model/go"
)
-const separatorByte byte = 255
+var separatorByteSlice = []byte{model.SeparatorByte} // For convenient use with xxhash.
// A Metric models a single sample value with its meta data being exported to
// Prometheus. Implementations of Metric in this package are Gauge, Counter,
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go
index fa535684f96cb..d1354b1016ae9 100644
--- a/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go
+++ b/vendor/github.com/prometheus/client_golang/prometheus/promhttp/delegator.go
@@ -62,6 +62,8 @@ func (r *responseWriterDelegator) WriteHeader(code int) {
}
func (r *responseWriterDelegator) Write(b []byte) (int, error) {
+ // If applicable, call WriteHeader here so that observeWriteHeader is
+ // handled appropriately.
if !r.wroteHeader {
r.WriteHeader(http.StatusOK)
}
@@ -82,12 +84,19 @@ func (d closeNotifierDelegator) CloseNotify() <-chan bool {
return d.ResponseWriter.(http.CloseNotifier).CloseNotify()
}
func (d flusherDelegator) Flush() {
+ // If applicable, call WriteHeader here so that observeWriteHeader is
+ // handled appropriately.
+ if !d.wroteHeader {
+ d.WriteHeader(http.StatusOK)
+ }
d.ResponseWriter.(http.Flusher).Flush()
}
func (d hijackerDelegator) Hijack() (net.Conn, *bufio.ReadWriter, error) {
return d.ResponseWriter.(http.Hijacker).Hijack()
}
func (d readerFromDelegator) ReadFrom(re io.Reader) (int64, error) {
+ // If applicable, call WriteHeader here so that observeWriteHeader is
+ // handled appropriately.
if !d.wroteHeader {
d.WriteHeader(http.StatusOK)
}
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/push/push.go b/vendor/github.com/prometheus/client_golang/prometheus/push/push.go
new file mode 100644
index 0000000000000..77ce9c83734fd
--- /dev/null
+++ b/vendor/github.com/prometheus/client_golang/prometheus/push/push.go
@@ -0,0 +1,309 @@
+// Copyright 2015 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package push provides functions to push metrics to a Pushgateway. It uses a
+// builder approach. Create a Pusher with New and then add the various options
+// by using its methods, finally calling Add or Push, like this:
+//
+// // Easy case:
+// push.New("http://example.org/metrics", "my_job").Gatherer(myRegistry).Push()
+//
+// // Complex case:
+// push.New("http://example.org/metrics", "my_job").
+// Collector(myCollector1).
+// Collector(myCollector2).
+// Grouping("zone", "xy").
+// Client(&myHTTPClient).
+// BasicAuth("top", "secret").
+// Add()
+//
+// See the examples section for more detailed examples.
+//
+// See the documentation of the Pushgateway to understand the meaning of
+// the grouping key and the differences between Push and Add:
+// https://github.com/prometheus/pushgateway
+package push
+
+import (
+ "bytes"
+ "encoding/base64"
+ "fmt"
+ "io/ioutil"
+ "net/http"
+ "net/url"
+ "strings"
+
+ "github.com/prometheus/common/expfmt"
+ "github.com/prometheus/common/model"
+
+ "github.com/prometheus/client_golang/prometheus"
+)
+
+const (
+ contentTypeHeader = "Content-Type"
+ // base64Suffix is appended to a label name in the request URL path to
+ // mark the following label value as base64 encoded.
+ base64Suffix = "@base64"
+)
+
+// HTTPDoer is an interface for the one method of http.Client that is used by Pusher
+type HTTPDoer interface {
+ Do(*http.Request) (*http.Response, error)
+}
+
+// Pusher manages a push to the Pushgateway. Use New to create one, configure it
+// with its methods, and finally use the Add or Push method to push.
+type Pusher struct {
+ error error
+
+ url, job string
+ grouping map[string]string
+
+ gatherers prometheus.Gatherers
+ registerer prometheus.Registerer
+
+ client HTTPDoer
+ useBasicAuth bool
+ username, password string
+
+ expfmt expfmt.Format
+}
+
+// New creates a new Pusher to push to the provided URL with the provided job
+// name. You can use just host:port or ip:port as url, in which case “http://”
+// is added automatically. Alternatively, include the scheme in the
+// URL. However, do not include the “/metrics/job/…” part.
+func New(url, job string) *Pusher {
+ var (
+ reg = prometheus.NewRegistry()
+ err error
+ )
+ if !strings.Contains(url, "://") {
+ url = "http://" + url
+ }
+ if strings.HasSuffix(url, "/") {
+ url = url[:len(url)-1]
+ }
+
+ return &Pusher{
+ error: err,
+ url: url,
+ job: job,
+ grouping: map[string]string{},
+ gatherers: prometheus.Gatherers{reg},
+ registerer: reg,
+ client: &http.Client{},
+ expfmt: expfmt.FmtProtoDelim,
+ }
+}
+
+// Push collects/gathers all metrics from all Collectors and Gatherers added to
+// this Pusher. Then, it pushes them to the Pushgateway configured while
+// creating this Pusher, using the configured job name and any added grouping
+// labels as grouping key. All previously pushed metrics with the same job and
+// other grouping labels will be replaced with the metrics pushed by this
+// call. (It uses HTTP method “PUT” to push to the Pushgateway.)
+//
+// Push returns the first error encountered by any method call (including this
+// one) in the lifetime of the Pusher.
+func (p *Pusher) Push() error {
+ return p.push(http.MethodPut)
+}
+
+// Add works like push, but only previously pushed metrics with the same name
+// (and the same job and other grouping labels) will be replaced. (It uses HTTP
+// method “POST” to push to the Pushgateway.)
+func (p *Pusher) Add() error {
+ return p.push(http.MethodPost)
+}
+
+// Gatherer adds a Gatherer to the Pusher, from which metrics will be gathered
+// to push them to the Pushgateway. The gathered metrics must not contain a job
+// label of their own.
+//
+// For convenience, this method returns a pointer to the Pusher itself.
+func (p *Pusher) Gatherer(g prometheus.Gatherer) *Pusher {
+ p.gatherers = append(p.gatherers, g)
+ return p
+}
+
+// Collector adds a Collector to the Pusher, from which metrics will be
+// collected to push them to the Pushgateway. The collected metrics must not
+// contain a job label of their own.
+//
+// For convenience, this method returns a pointer to the Pusher itself.
+func (p *Pusher) Collector(c prometheus.Collector) *Pusher {
+ if p.error == nil {
+ p.error = p.registerer.Register(c)
+ }
+ return p
+}
+
+// Grouping adds a label pair to the grouping key of the Pusher, replacing any
+// previously added label pair with the same label name. Note that setting any
+// labels in the grouping key that are already contained in the metrics to push
+// will lead to an error.
+//
+// For convenience, this method returns a pointer to the Pusher itself.
+func (p *Pusher) Grouping(name, value string) *Pusher {
+ if p.error == nil {
+ if !model.LabelName(name).IsValid() {
+ p.error = fmt.Errorf("grouping label has invalid name: %s", name)
+ return p
+ }
+ p.grouping[name] = value
+ }
+ return p
+}
+
+// Client sets a custom HTTP client for the Pusher. For convenience, this method
+// returns a pointer to the Pusher itself.
+// Pusher only needs one method of the custom HTTP client: Do(*http.Request).
+// Thus, rather than requiring a fully fledged http.Client,
+// the provided client only needs to implement the HTTPDoer interface.
+// Since *http.Client naturally implements that interface, it can still be used normally.
+func (p *Pusher) Client(c HTTPDoer) *Pusher {
+ p.client = c
+ return p
+}
+
+// BasicAuth configures the Pusher to use HTTP Basic Authentication with the
+// provided username and password. For convenience, this method returns a
+// pointer to the Pusher itself.
+func (p *Pusher) BasicAuth(username, password string) *Pusher {
+ p.useBasicAuth = true
+ p.username = username
+ p.password = password
+ return p
+}
+
+// Format configures the Pusher to use an encoding format given by the
+// provided expfmt.Format. The default format is expfmt.FmtProtoDelim and
+// should be used with the standard Prometheus Pushgateway. Custom
+// implementations may require different formats. For convenience, this
+// method returns a pointer to the Pusher itself.
+func (p *Pusher) Format(format expfmt.Format) *Pusher {
+ p.expfmt = format
+ return p
+}
+
+// Delete sends a “DELETE” request to the Pushgateway configured while creating
+// this Pusher, using the configured job name and any added grouping labels as
+// grouping key. Any added Gatherers and Collectors added to this Pusher are
+// ignored by this method.
+//
+// Delete returns the first error encountered by any method call (including this
+// one) in the lifetime of the Pusher.
+func (p *Pusher) Delete() error {
+ if p.error != nil {
+ return p.error
+ }
+ req, err := http.NewRequest(http.MethodDelete, p.fullURL(), nil)
+ if err != nil {
+ return err
+ }
+ if p.useBasicAuth {
+ req.SetBasicAuth(p.username, p.password)
+ }
+ resp, err := p.client.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ if resp.StatusCode != http.StatusAccepted {
+ body, _ := ioutil.ReadAll(resp.Body) // Ignore any further error as this is for an error message only.
+ return fmt.Errorf("unexpected status code %d while deleting %s: %s", resp.StatusCode, p.fullURL(), body)
+ }
+ return nil
+}
+
+func (p *Pusher) push(method string) error {
+ if p.error != nil {
+ return p.error
+ }
+ mfs, err := p.gatherers.Gather()
+ if err != nil {
+ return err
+ }
+ buf := &bytes.Buffer{}
+ enc := expfmt.NewEncoder(buf, p.expfmt)
+ // Check for pre-existing grouping labels:
+ for _, mf := range mfs {
+ for _, m := range mf.GetMetric() {
+ for _, l := range m.GetLabel() {
+ if l.GetName() == "job" {
+ return fmt.Errorf("pushed metric %s (%s) already contains a job label", mf.GetName(), m)
+ }
+ if _, ok := p.grouping[l.GetName()]; ok {
+ return fmt.Errorf(
+ "pushed metric %s (%s) already contains grouping label %s",
+ mf.GetName(), m, l.GetName(),
+ )
+ }
+ }
+ }
+ enc.Encode(mf)
+ }
+ req, err := http.NewRequest(method, p.fullURL(), buf)
+ if err != nil {
+ return err
+ }
+ if p.useBasicAuth {
+ req.SetBasicAuth(p.username, p.password)
+ }
+ req.Header.Set(contentTypeHeader, string(p.expfmt))
+ resp, err := p.client.Do(req)
+ if err != nil {
+ return err
+ }
+ defer resp.Body.Close()
+ // Pushgateway 0.10+ responds with StatusOK, earlier versions with StatusAccepted.
+ if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusAccepted {
+ body, _ := ioutil.ReadAll(resp.Body) // Ignore any further error as this is for an error message only.
+ return fmt.Errorf("unexpected status code %d while pushing to %s: %s", resp.StatusCode, p.fullURL(), body)
+ }
+ return nil
+}
+
+// fullURL assembles the URL used to push/delete metrics and returns it as a
+// string. The job name and any grouping label values containing a '/' will
+// trigger a base64 encoding of the affected component and proper suffixing of
+// the preceding component. If the component does not contain a '/' but does
+// contain other special characters, the usual url.QueryEscape is used for compatibility with
+// older versions of the Pushgateway and for better readability.
+func (p *Pusher) fullURL() string {
+ urlComponents := []string{}
+ if encodedJob, base64 := encodeComponent(p.job); base64 {
+ urlComponents = append(urlComponents, "job"+base64Suffix, encodedJob)
+ } else {
+ urlComponents = append(urlComponents, "job", encodedJob)
+ }
+ for ln, lv := range p.grouping {
+ if encodedLV, base64 := encodeComponent(lv); base64 {
+ urlComponents = append(urlComponents, ln+base64Suffix, encodedLV)
+ } else {
+ urlComponents = append(urlComponents, ln, encodedLV)
+ }
+ }
+ return fmt.Sprintf("%s/metrics/%s", p.url, strings.Join(urlComponents, "/"))
+}
+
+// encodeComponent encodes the provided string with base64.RawURLEncoding in
+// case it contains '/'. If not, it uses url.QueryEscape instead. It returns
+// true in the former case.
+func encodeComponent(s string) (string, bool) {
+ if strings.Contains(s, "/") {
+ return base64.RawURLEncoding.EncodeToString([]byte(s)), true
+ }
+ return url.QueryEscape(s), false
+}
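
As a worked example of the fullURL encoding rule above (illustrative only, not part of the vendored file): a job name containing '/' is encoded with the URL-safe base64 alphabet and the path key gains the “@base64” suffix.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// "my/job" cannot be a single URL path segment, so the Pusher would
	// emit /metrics/job@base64/<encoded>; names without '/' use QueryEscape.
	fmt.Println(base64.RawURLEncoding.EncodeToString([]byte("my/job")))
	// Output: bXkvam9i
}
```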
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/registry.go b/vendor/github.com/prometheus/client_golang/prometheus/registry.go
index 6c32516aa2e31..c05d6ee1b3853 100644
--- a/vendor/github.com/prometheus/client_golang/prometheus/registry.go
+++ b/vendor/github.com/prometheus/client_golang/prometheus/registry.go
@@ -25,6 +25,7 @@ import (
"sync"
"unicode/utf8"
+ "github.com/cespare/xxhash/v2"
"github.com/golang/protobuf/proto"
"github.com/prometheus/common/expfmt"
@@ -74,7 +75,7 @@ func NewRegistry() *Registry {
// NewPedanticRegistry returns a registry that checks during collection if each
// collected Metric is consistent with its reported Desc, and if the Desc has
// actually been registered with the registry. Unchecked Collectors (those whose
-// Describe methed does not yield any descriptors) are excluded from the check.
+// Describe method does not yield any descriptors) are excluded from the check.
//
// Usually, a Registry will be happy as long as the union of all collected
// Metrics is consistent and valid even if some metrics are not consistent with
@@ -266,7 +267,7 @@ func (r *Registry) Register(c Collector) error {
descChan = make(chan *Desc, capDescChan)
newDescIDs = map[uint64]struct{}{}
newDimHashesByName = map[string]uint64{}
- collectorID uint64 // Just a sum of all desc IDs.
+ collectorID uint64 // All desc IDs XOR'd together.
duplicateDescErr error
)
go func() {
@@ -293,12 +294,12 @@ func (r *Registry) Register(c Collector) error {
if _, exists := r.descIDs[desc.id]; exists {
duplicateDescErr = fmt.Errorf("descriptor %s already exists with the same fully-qualified name and const label values", desc)
}
- // If it is not a duplicate desc in this collector, add it to
+ // If it is not a duplicate desc in this collector, XOR it to
// the collectorID. (We allow duplicate descs within the same
// collector, but their existence must be a no-op.)
if _, exists := newDescIDs[desc.id]; !exists {
newDescIDs[desc.id] = struct{}{}
- collectorID += desc.id
+ collectorID ^= desc.id
}
// Are all the label names and the help string consistent with
@@ -360,7 +361,7 @@ func (r *Registry) Unregister(c Collector) bool {
var (
descChan = make(chan *Desc, capDescChan)
descIDs = map[uint64]struct{}{}
- collectorID uint64 // Just a sum of the desc IDs.
+ collectorID uint64 // All desc IDs XOR'd together.
)
go func() {
c.Describe(descChan)
@@ -368,7 +369,7 @@ func (r *Registry) Unregister(c Collector) bool {
}()
for desc := range descChan {
if _, exists := descIDs[desc.id]; !exists {
- collectorID += desc.id
+ collectorID ^= desc.id
descIDs[desc.id] = struct{}{}
}
}
@@ -875,9 +876,9 @@ func checkMetricConsistency(
}
// Is the metric unique (i.e. no other metric with the same name and the same labels)?
- h := hashNew()
- h = hashAdd(h, name)
- h = hashAddByte(h, separatorByte)
+ h := xxhash.New()
+ h.WriteString(name)
+ h.Write(separatorByteSlice)
// Make sure label pairs are sorted. We depend on it for the consistency
// check.
if !sort.IsSorted(labelPairSorter(dtoMetric.Label)) {
@@ -888,18 +889,19 @@ func checkMetricConsistency(
dtoMetric.Label = copiedLabels
}
for _, lp := range dtoMetric.Label {
- h = hashAdd(h, lp.GetName())
- h = hashAddByte(h, separatorByte)
- h = hashAdd(h, lp.GetValue())
- h = hashAddByte(h, separatorByte)
+ h.WriteString(lp.GetName())
+ h.Write(separatorByteSlice)
+ h.WriteString(lp.GetValue())
+ h.Write(separatorByteSlice)
}
- if _, exists := metricHashes[h]; exists {
+ hSum := h.Sum64()
+ if _, exists := metricHashes[hSum]; exists {
return fmt.Errorf(
"collected metric %q { %s} was collected before with the same name and label values",
name, dtoMetric,
)
}
- metricHashes[h] = struct{}{}
+ metricHashes[hSum] = struct{}{}
return nil
}
diff --git a/vendor/github.com/prometheus/client_golang/prometheus/summary.go b/vendor/github.com/prometheus/client_golang/prometheus/summary.go
index c970fdee0e4ff..ae42e761a19a2 100644
--- a/vendor/github.com/prometheus/client_golang/prometheus/summary.go
+++ b/vendor/github.com/prometheus/client_golang/prometheus/summary.go
@@ -208,7 +208,7 @@ func newSummary(desc *Desc, opts SummaryOpts, labelValues ...string) Summary {
s := &noObjectivesSummary{
desc: desc,
labelPairs: makeLabelPairs(desc, labelValues),
- counts: [2]*summaryCounts{&summaryCounts{}, &summaryCounts{}},
+ counts: [2]*summaryCounts{{}, {}},
}
s.init(s) // Init self-collection.
return s
diff --git a/vendor/github.com/prometheus/procfs/CONTRIBUTING.md b/vendor/github.com/prometheus/procfs/CONTRIBUTING.md
index 40503edbf18fc..943de7615eecb 100644
--- a/vendor/github.com/prometheus/procfs/CONTRIBUTING.md
+++ b/vendor/github.com/prometheus/procfs/CONTRIBUTING.md
@@ -2,17 +2,120 @@
Prometheus uses GitHub to manage reviews of pull requests.
+* If you are a new contributor, see: [Steps to Contribute](#steps-to-contribute)
+
* If you have a trivial fix or improvement, go ahead and create a pull request,
- addressing (with `@...`) the maintainer of this repository (see
+ addressing (with `@...`) a suitable maintainer of this repository (see
[MAINTAINERS.md](MAINTAINERS.md)) in the description of the pull request.
* If you plan to do something more involved, first discuss your ideas
on our [mailing list](https://groups.google.com/forum/?fromgroups#!forum/prometheus-developers).
This will avoid unnecessary work and surely give you and us a good deal
- of inspiration.
+ of inspiration. Also please see our [non-goals issue](https://github.com/prometheus/docs/issues/149) on areas that the Prometheus community doesn't plan to work on.
* Relevant coding style guidelines are the [Go Code Review
Comments](https://code.google.com/p/go-wiki/wiki/CodeReviewComments)
and the _Formatting and style_ section of Peter Bourgon's [Go: Best
Practices for Production
- Environments](http://peter.bourgon.org/go-in-production/#formatting-and-style).
+ Environments](https://peter.bourgon.org/go-in-production/#formatting-and-style).
+
+* Be sure to sign off on the [DCO](https://github.com/probot/dco#how-it-works)
+
+## Steps to Contribute
+
+Should you wish to work on an issue, please claim it first by commenting on the GitHub issue that you want to work on it. This is to prevent duplicated efforts from contributors on the same issue.
+
+Please check the [`help-wanted`](https://github.com/prometheus/procfs/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22) label to find issues that are good for getting started. If you have questions about one of the issues, with or without the tag, please comment on them and one of the maintainers will clarify it. For a quicker response, contact us over [IRC](https://prometheus.io/community).
+
+For quickly compiling and testing your changes do:
+```
+make test # Make sure all the tests pass before you commit and push :)
+```
+
+We use [`golangci-lint`](https://github.com/golangci/golangci-lint) for linting the code. If it reports an issue and you think that the warning needs to be disregarded or is a false-positive, you can add a special comment `//nolint:linter1[,linter2,...]` before the offending line. Use this sparingly though, fixing the code to comply with the linter's recommendation is in general the preferred course of action.
+
+## Pull Request Checklist
+
+* Branch from the master branch and, if needed, rebase to the current master branch before submitting your pull request. If it doesn't merge cleanly with master you may be asked to rebase your changes.
+
+* Commits should be as small as possible, while ensuring that each commit is correct independently (i.e., each commit should compile and pass tests).
+
+* If your patch is not getting reviewed or you need a specific person to review it, you can @-reply a reviewer asking for a review in the pull request or a comment, or you can ask for a review on IRC channel [#prometheus](https://webchat.freenode.net/?channels=#prometheus) on irc.freenode.net (for the easiest start, [join via Riot](https://riot.im/app/#/room/#prometheus:matrix.org)).
+
+* Add tests relevant to the fixed bug or new feature.
+
+## Dependency management
+
+The Prometheus project uses [Go modules](https://golang.org/cmd/go/#hdr-Modules__module_versions__and_more) to manage dependencies on external packages. This requires a working Go environment with version 1.12 or greater installed.
+
+All dependencies are vendored in the `vendor/` directory.
+
+To add or update a new dependency, use the `go get` command:
+
+```bash
+# Pick the latest tagged release.
+go get example.com/some/module/pkg
+
+# Pick a specific version.
+go get example.com/some/module/[email protected]
+```
+
+Tidy up the `go.mod` and `go.sum` files and copy the new/updated dependency to the `vendor/` directory:
+
+
+```bash
+# The GO111MODULE variable can be omitted when the code isn't located in GOPATH.
+GO111MODULE=on go mod tidy
+
+GO111MODULE=on go mod vendor
+```
+
+You have to commit the changes to `go.mod`, `go.sum` and the `vendor/` directory before submitting the pull request.
+
+
+## API Implementation Guidelines
+
+### Naming and Documentation
+
+Public functions and structs should normally be named according to the file(s) being read and parsed. For example,
+the `fs.BuddyInfo()` function reads the file `/proc/buddyinfo`. In addition, the godoc for each public function
+should contain the path to the file(s) being read and a URL of the linux kernel documentation describing the file(s).
+
+### Reading vs. Parsing
+
+Most functionality in this library consists of reading files and then parsing the text into structured data. In most
+cases reading and parsing should be separated into different functions/methods with a public `fs.Thing()` method and
+a private `parseThing(r Reader)` function. This provides a logical separation and allows parsing to be tested
+directly without the need to read from the filesystem. Using a `Reader` argument is preferred over other data types
+such as `string` or `*File` because it provides the most flexibility regarding the data source. When a set of files
+in a directory needs to be parsed, then a `path` string parameter to the parse function can be used instead.
+
+### /proc and /sys filesystem I/O
+
+The `proc` and `sys` filesystems are pseudo file systems and work a bit differently from standard disk I/O.
+Many of the files are changing continuously and the data being read can in some cases change between subsequent
+reads in the same file. Also, most of the files are relatively small (less than a few KBs), and system calls
+to the `stat` function will often return the wrong size. Therefore, for most files it's recommended to read the
+full file in a single operation using an internal utility function called `util.ReadFileNoStat`.
+This function is similar to `ioutil.ReadFile`, but it avoids the system call to `stat` to get the current size of
+the file.
+
+Note that parsing the file's contents can still be performed one line at a time. This is done by first reading
+the full file, and then using a scanner on the `[]byte` or `string` containing the data.
+
+```
+ data, err := util.ReadFileNoStat("/proc/cpuinfo")
+ if err != nil {
+ return err
+ }
+ reader := bytes.NewReader(data)
+ scanner := bufio.NewScanner(reader)
+```
+
+The `/sys` filesystem contains many very small files which contain only a single numeric or text value. These files
+can be read using an internal function called `util.SysReadFile` which is similar to `ioutil.ReadFile` but does
+not bother to check the size of the file before reading.
+```
+ data, err := util.SysReadFile("/sys/class/power_supply/BAT0/capacity")
+```
+
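
As a hedged illustration of the “Reading vs. Parsing” guideline added above — the `Thing`/`parseThing` names are hypothetical and not part of the procfs API:

```go
package procfs

import (
	"bufio"
	"bytes"
	"io"

	"github.com/prometheus/procfs/internal/util"
)

// Thing is a hypothetical example type showing the read/parse split.
type Thing struct{ Lines []string }

// Thing reads the whole pseudo-file in one operation, per the I/O guideline,
// then delegates to a parser that is testable without the filesystem.
func (fs FS) Thing() (Thing, error) {
	data, err := util.ReadFileNoStat(fs.proc.Path("thing"))
	if err != nil {
		return Thing{}, err
	}
	return parseThing(bytes.NewReader(data))
}

// parseThing accepts any io.Reader so tests can feed it fixture bytes directly.
func parseThing(r io.Reader) (Thing, error) {
	var t Thing
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		t.Lines = append(t.Lines, scanner.Text())
	}
	return t, scanner.Err()
}
```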
diff --git a/vendor/github.com/prometheus/procfs/README.md b/vendor/github.com/prometheus/procfs/README.md
index 6f8850feb6783..55d1e3261c9d9 100644
--- a/vendor/github.com/prometheus/procfs/README.md
+++ b/vendor/github.com/prometheus/procfs/README.md
@@ -1,6 +1,6 @@
# procfs
-This procfs package provides functions to retrieve system, kernel and process
+This package provides functions to retrieve system, kernel, and process
metrics from the pseudo-filesystems /proc and /sys.
*WARNING*: This package is a work in progress. Its API may still break in
@@ -13,7 +13,8 @@ backwards-incompatible ways without warnings. Use it at your own risk.
## Usage
The procfs library is organized by packages based on whether the gathered data is coming from
-/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc, /sys, or both. For example, current cpu statistics are gathered from
+/proc, /sys, or both. Each package contains an `FS` type which represents the path to either /proc,
+/sys, or both. For example, cpu statistics are gathered from
`/proc/stat` and are available via the root procfs package. First, the proc filesystem mount
point is initialized, and then the stat information is read.
@@ -29,10 +30,17 @@ Some sub-packages such as `blockdevice`, require access to both the proc and sys
stats, err := fs.ProcDiskstats()
```
+## Package Organization
+
+The packages in this project are organized according to (1) whether the data comes from the `/proc` or
+`/sys` filesystem and (2) the type of information being retrieved. For example, most process information
+can be gathered from the functions in the root `procfs` package. Information about block devices such as disk drives
+is available in the `blockdevice` sub-package.
+
## Building and Testing
-The procfs library is normally built as part of another application. However, when making
-changes to the library, the `make test` command can be used to run the API test suite.
+The procfs library is intended to be built as part of another application, so there are no distributable binaries.
+However, most of the API includes unit tests which can be run with `make test`.
### Updating Test Fixtures
diff --git a/vendor/github.com/prometheus/procfs/arp.go b/vendor/github.com/prometheus/procfs/arp.go
new file mode 100644
index 0000000000000..916c9182a8b10
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/arp.go
@@ -0,0 +1,85 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+ "fmt"
+ "io/ioutil"
+ "net"
+ "strings"
+)
+
+// ARPEntry contains a single row of the columnar data represented in
+// /proc/net/arp.
+type ARPEntry struct {
+ // IP address
+ IPAddr net.IP
+ // MAC address
+ HWAddr net.HardwareAddr
+ // Name of the device
+ Device string
+}
+
+// GatherARPEntries retrieves all the ARP entries, parses the relevant columns,
+// and then returns a slice of ARPEntry values.
+func (fs FS) GatherARPEntries() ([]ARPEntry, error) {
+ data, err := ioutil.ReadFile(fs.proc.Path("net/arp"))
+ if err != nil {
+ return nil, fmt.Errorf("error reading arp %s: %s", fs.proc.Path("net/arp"), err)
+ }
+
+ return parseARPEntries(data)
+}
+
+func parseARPEntries(data []byte) ([]ARPEntry, error) {
+ lines := strings.Split(string(data), "\n")
+ entries := make([]ARPEntry, 0)
+ var err error
+ const (
+ expectedDataWidth = 6
+ expectedHeaderWidth = 9
+ )
+ for _, line := range lines {
+ columns := strings.Fields(line)
+ width := len(columns)
+
+ if width == expectedHeaderWidth || width == 0 {
+ continue
+ } else if width == expectedDataWidth {
+ entry, err := parseARPEntry(columns)
+ if err != nil {
+ return []ARPEntry{}, fmt.Errorf("failed to parse ARP entry: %s", err)
+ }
+ entries = append(entries, entry)
+ } else {
+ return []ARPEntry{}, fmt.Errorf("%d columns were detected, but %d were expected", width, expectedDataWidth)
+ }
+
+ }
+
+ return entries, err
+}
+
+func parseARPEntry(columns []string) (ARPEntry, error) {
+ ip := net.ParseIP(columns[0])
+ mac := net.HardwareAddr(columns[3])
+
+ entry := ARPEntry{
+ IPAddr: ip,
+ HWAddr: mac,
+ Device: columns[5],
+ }
+
+ return entry, nil
+}
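
For context on the expectedHeaderWidth/expectedDataWidth constants above, a representative /proc/net/arp layout: strings.Fields splits the header row into 9 tokens and each data row into 6.

```
IP address       HW type     Flags       HW address            Mask     Device
192.168.224.1    0x1         0x2         00:50:56:c0:00:08     *        ens33
```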
diff --git a/vendor/github.com/prometheus/procfs/buddyinfo.go b/vendor/github.com/prometheus/procfs/buddyinfo.go
index 63d4229a45a0f..10bd067a0a55d 100644
--- a/vendor/github.com/prometheus/procfs/buddyinfo.go
+++ b/vendor/github.com/prometheus/procfs/buddyinfo.go
@@ -31,7 +31,7 @@ type BuddyInfo struct {
Sizes []float64
}
-// NewBuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem.
+// BuddyInfo reads the buddyinfo statistics from the specified `proc` filesystem.
func (fs FS) BuddyInfo() ([]BuddyInfo, error) {
file, err := os.Open(fs.proc.Path("buddyinfo"))
if err != nil {
diff --git a/vendor/github.com/prometheus/procfs/cpuinfo.go b/vendor/github.com/prometheus/procfs/cpuinfo.go
new file mode 100644
index 0000000000000..2e02215528f09
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/cpuinfo.go
@@ -0,0 +1,167 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+ "bufio"
+ "bytes"
+ "strconv"
+ "strings"
+
+ "github.com/prometheus/procfs/internal/util"
+)
+
+// CPUInfo contains general information about a system CPU found in /proc/cpuinfo
+type CPUInfo struct {
+ Processor uint
+ VendorID string
+ CPUFamily string
+ Model string
+ ModelName string
+ Stepping string
+ Microcode string
+ CPUMHz float64
+ CacheSize string
+ PhysicalID string
+ Siblings uint
+ CoreID string
+ CPUCores uint
+ APICID string
+ InitialAPICID string
+ FPU string
+ FPUException string
+ CPUIDLevel uint
+ WP string
+ Flags []string
+ Bugs []string
+ BogoMips float64
+ CLFlushSize uint
+ CacheAlignment uint
+ AddressSizes string
+ PowerManagement string
+}
+
+// CPUInfo returns information about current system CPUs.
+// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
+func (fs FS) CPUInfo() ([]CPUInfo, error) {
+ data, err := util.ReadFileNoStat(fs.proc.Path("cpuinfo"))
+ if err != nil {
+ return nil, err
+ }
+ return parseCPUInfo(data)
+}
+
+// parseCPUInfo parses data from /proc/cpuinfo
+func parseCPUInfo(info []byte) ([]CPUInfo, error) {
+ cpuinfo := []CPUInfo{}
+ i := -1
+ scanner := bufio.NewScanner(bytes.NewReader(info))
+ for scanner.Scan() {
+ line := scanner.Text()
+ if strings.TrimSpace(line) == "" {
+ continue
+ }
+ field := strings.SplitN(line, ": ", 2)
+ switch strings.TrimSpace(field[0]) {
+ case "processor":
+ cpuinfo = append(cpuinfo, CPUInfo{}) // start of the next processor
+ i++
+ v, err := strconv.ParseUint(field[1], 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].Processor = uint(v)
+ case "vendor_id":
+ cpuinfo[i].VendorID = field[1]
+ case "cpu family":
+ cpuinfo[i].CPUFamily = field[1]
+ case "model":
+ cpuinfo[i].Model = field[1]
+ case "model name":
+ cpuinfo[i].ModelName = field[1]
+ case "stepping":
+ cpuinfo[i].Stepping = field[1]
+ case "microcode":
+ cpuinfo[i].Microcode = field[1]
+ case "cpu MHz":
+ v, err := strconv.ParseFloat(field[1], 64)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].CPUMHz = v
+ case "cache size":
+ cpuinfo[i].CacheSize = field[1]
+ case "physical id":
+ cpuinfo[i].PhysicalID = field[1]
+ case "siblings":
+ v, err := strconv.ParseUint(field[1], 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].Siblings = uint(v)
+ case "core id":
+ cpuinfo[i].CoreID = field[1]
+ case "cpu cores":
+ v, err := strconv.ParseUint(field[1], 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].CPUCores = uint(v)
+ case "apicid":
+ cpuinfo[i].APICID = field[1]
+ case "initial apicid":
+ cpuinfo[i].InitialAPICID = field[1]
+ case "fpu":
+ cpuinfo[i].FPU = field[1]
+ case "fpu_exception":
+ cpuinfo[i].FPUException = field[1]
+ case "cpuid level":
+ v, err := strconv.ParseUint(field[1], 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].CPUIDLevel = uint(v)
+ case "wp":
+ cpuinfo[i].WP = field[1]
+ case "flags":
+ cpuinfo[i].Flags = strings.Fields(field[1])
+ case "bugs":
+ cpuinfo[i].Bugs = strings.Fields(field[1])
+ case "bogomips":
+ v, err := strconv.ParseFloat(field[1], 64)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].BogoMips = v
+ case "clflush size":
+ v, err := strconv.ParseUint(field[1], 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].CLFlushSize = uint(v)
+ case "cache_alignment":
+ v, err := strconv.ParseUint(field[1], 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ cpuinfo[i].CacheAlignment = uint(v)
+ case "address sizes":
+ cpuinfo[i].AddressSizes = field[1]
+ case "power management":
+ cpuinfo[i].PowerManagement = field[1]
+ }
+ }
+ return cpuinfo, nil
+
+}
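
A brief usage sketch for the new CPUInfo accessor (mount point and printed fields are illustrative only):

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	fs, err := procfs.NewFS("/proc")
	if err != nil {
		log.Fatal(err)
	}
	cpus, err := fs.CPUInfo()
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range cpus {
		fmt.Printf("cpu%d: %s @ %.3f MHz\n", c.Processor, c.ModelName, c.CPUMHz)
	}
}
```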
diff --git a/vendor/github.com/prometheus/procfs/crypto.go b/vendor/github.com/prometheus/procfs/crypto.go
new file mode 100644
index 0000000000000..19d4041b29a9e
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/crypto.go
@@ -0,0 +1,131 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+ "bytes"
+ "fmt"
+ "io/ioutil"
+ "strconv"
+ "strings"
+
+ "github.com/prometheus/procfs/internal/util"
+)
+
+// Crypto holds info parsed from /proc/crypto.
+type Crypto struct {
+ Alignmask *uint64
+ Async bool
+ Blocksize *uint64
+ Chunksize *uint64
+ Ctxsize *uint64
+ Digestsize *uint64
+ Driver string
+ Geniv string
+ Internal string
+ Ivsize *uint64
+ Maxauthsize *uint64
+ MaxKeysize *uint64
+ MinKeysize *uint64
+ Module string
+ Name string
+ Priority *int64
+ Refcnt *int64
+ Seedsize *uint64
+ Selftest string
+ Type string
+ Walksize *uint64
+}
+
+// Crypto parses a crypto file (/proc/crypto) and returns a slice of
+// structs containing the relevant info. More information available here:
+// https://kernel.readthedocs.io/en/sphinx-samples/crypto-API.html
+func (fs FS) Crypto() ([]Crypto, error) {
+ data, err := ioutil.ReadFile(fs.proc.Path("crypto"))
+ if err != nil {
+ return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err)
+ }
+ crypto, err := parseCrypto(data)
+ if err != nil {
+ return nil, fmt.Errorf("error parsing crypto %s: %s", fs.proc.Path("crypto"), err)
+ }
+ return crypto, nil
+}
+
+func parseCrypto(cryptoData []byte) ([]Crypto, error) {
+ crypto := []Crypto{}
+
+ cryptoBlocks := bytes.Split(cryptoData, []byte("\n\n"))
+
+ for _, block := range cryptoBlocks {
+ var newCryptoElem Crypto
+
+ lines := strings.Split(string(block), "\n")
+ for _, line := range lines {
+ if strings.TrimSpace(line) == "" || line[0] == ' ' {
+ continue
+ }
+ fields := strings.Split(line, ":")
+ key := strings.TrimSpace(fields[0])
+ value := strings.TrimSpace(fields[1])
+ vp := util.NewValueParser(value)
+
+ switch strings.TrimSpace(key) {
+ case "async":
+ b, err := strconv.ParseBool(value)
+ if err == nil {
+ newCryptoElem.Async = b
+ }
+ case "blocksize":
+ newCryptoElem.Blocksize = vp.PUInt64()
+ case "chunksize":
+ newCryptoElem.Chunksize = vp.PUInt64()
+ case "digestsize":
+ newCryptoElem.Digestsize = vp.PUInt64()
+ case "driver":
+ newCryptoElem.Driver = value
+ case "geniv":
+ newCryptoElem.Geniv = value
+ case "internal":
+ newCryptoElem.Internal = value
+ case "ivsize":
+ newCryptoElem.Ivsize = vp.PUInt64()
+ case "maxauthsize":
+ newCryptoElem.Maxauthsize = vp.PUInt64()
+ case "max keysize":
+ newCryptoElem.MaxKeysize = vp.PUInt64()
+ case "min keysize":
+ newCryptoElem.MinKeysize = vp.PUInt64()
+ case "module":
+ newCryptoElem.Module = value
+ case "name":
+ newCryptoElem.Name = value
+ case "priority":
+ newCryptoElem.Priority = vp.PInt64()
+ case "refcnt":
+ newCryptoElem.Refcnt = vp.PInt64()
+ case "seedsize":
+ newCryptoElem.Seedsize = vp.PUInt64()
+ case "selftest":
+ newCryptoElem.Selftest = value
+ case "type":
+ newCryptoElem.Type = value
+ case "walksize":
+ newCryptoElem.Walksize = vp.PUInt64()
+ }
+ }
+ crypto = append(crypto, newCryptoElem)
+ }
+ return crypto, nil
+}
diff --git a/vendor/github.com/prometheus/procfs/fixtures.ttar b/vendor/github.com/prometheus/procfs/fixtures.ttar
index 6b42e7ba14b3f..38b71fe3244a8 100644
--- a/vendor/github.com/prometheus/procfs/fixtures.ttar
+++ b/vendor/github.com/prometheus/procfs/fixtures.ttar
@@ -47,6 +47,48 @@ SymlinkTo: ../../symlinktargets/ghi
Path: fixtures/proc/26231/fd/3
SymlinkTo: ../../symlinktargets/uvw
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/proc/26231/fdinfo
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26231/fdinfo/0
+Lines: 6
+pos: 0
+flags: 02004000
+mnt_id: 13
+inotify wd:3 ino:1 sdev:34 mask:fce ignored_mask:0 fhandle-bytes:c fhandle-type:81 f_handle:000000000100000000000000
+inotify wd:2 ino:1300016 sdev:fd00002 mask:fce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:16003001ed3f022a
+inotify wd:1 ino:2e0001 sdev:fd00000 mask:fce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:01002e00138e7c65
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26231/fdinfo/1
+Lines: 4
+pos: 0
+flags: 02004002
+mnt_id: 13
+eventfd-count: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26231/fdinfo/10
+Lines: 3
+pos: 0
+flags: 02004002
+mnt_id: 9
+Mode: 400
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26231/fdinfo/2
+Lines: 3
+pos: 0
+flags: 02004002
+mnt_id: 9
+Mode: 400
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26231/fdinfo/3
+Lines: 3
+pos: 0
+flags: 02004002
+mnt_id: 9
+Mode: 400
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Path: fixtures/proc/26231/io
Lines: 7
rchar: 750339
@@ -126,6 +168,11 @@ SymlinkTo: net:[4026531993]
Path: fixtures/proc/26231/root
SymlinkTo: /
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26231/schedstat
+Lines: 1
+411605849 93680043 79
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Path: fixtures/proc/26231/stat
Lines: 1
26231 (vim) R 5392 7446 5392 34835 7446 4218880 32533 309516 26 82 1677 44 158 99 20 0 1 0 82375 56274944 1981 18446744073709551615 4194304 6294284 140736914091744 140736914087944 139965136429984 0 0 12288 1870679807 0 0 0 17 0 0 0 31 0 0 8391624 8481048 16420864 140736914093252 140736914093279 140736914093279 140736914096107 0
@@ -137,10 +184,10 @@ Lines: 53
Name: prometheus
Umask: 0022
State: S (sleeping)
-Tgid: 1
+Tgid: 26231
Ngid: 0
-Pid: 1
-PPid: 0
+Pid: 26231
+PPid: 1
TracerPid: 0
Uid: 0 0 0 0
Gid: 0 0 0 0
@@ -258,6 +305,18 @@ Lines: 1
com.github.uiautomatorNULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTENULLBYTEEOF
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/26233/schedstat
+Lines: 8
+ ____________________________________
+< this is a malformed schedstat file >
+ ------------------------------------
+ \ ^__^
+ \ (oo)\_______
+ (__)\ )\/\
+ ||----w |
+ || ||
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Directory: fixtures/proc/584
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@@ -274,6 +333,1201 @@ Node 0, zone DMA32 759 572 791 475 194 45 12 0
Node 0, zone Normal 4381 1093 185 1530 567 102 4 0 0 0 0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/cpuinfo
+Lines: 216
+processor : 0
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 799.998
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 0
+cpu cores : 4
+apicid : 0
+initial apicid : 0
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 1
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 800.037
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 1
+cpu cores : 4
+apicid : 2
+initial apicid : 2
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 2
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 800.010
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 2
+cpu cores : 4
+apicid : 4
+initial apicid : 4
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 3
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 800.028
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 3
+cpu cores : 4
+apicid : 6
+initial apicid : 6
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 4
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 799.989
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 0
+cpu cores : 4
+apicid : 1
+initial apicid : 1
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 5
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 800.083
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 1
+cpu cores : 4
+apicid : 3
+initial apicid : 3
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 6
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 800.017
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 2
+cpu cores : 4
+apicid : 5
+initial apicid : 5
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+processor : 7
+vendor_id : GenuineIntel
+cpu family : 6
+model : 142
+model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
+stepping : 10
+microcode : 0xb4
+cpu MHz : 800.030
+cache size : 8192 KB
+physical id : 0
+siblings : 8
+core id : 3
+cpu cores : 4
+apicid : 7
+initial apicid : 7
+fpu : yes
+fpu_exception : yes
+cpuid level : 22
+wp : yes
+flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
+bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs
+bogomips : 4224.00
+clflush size : 64
+cache_alignment : 64
+address sizes : 39 bits physical, 48 bits virtual
+power management:
+
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/crypto
+Lines: 971
+name : ccm(aes)
+driver : ccm_base(ctr(aes-aesni),cbcmac(aes-aesni))
+module : ccm
+priority : 300
+refcnt : 4
+selftest : passed
+internal : no
+type : aead
+async : no
+blocksize : 1
+ivsize : 16
+maxauthsize : 16
+geniv : <none>
+
+name : cbcmac(aes)
+driver : cbcmac(aes-aesni)
+module : ccm
+priority : 300
+refcnt : 7
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 16
+
+name : ecdh
+driver : ecdh-generic
+module : ecdh_generic
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : kpp
+
+name : ecb(arc4)
+driver : ecb(arc4)-generic
+module : arc4
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : no
+blocksize : 1
+min keysize : 1
+max keysize : 256
+ivsize : 0
+chunksize : 1
+walksize : 1
+
+name : arc4
+driver : arc4-generic
+module : arc4
+priority : 0
+refcnt : 3
+selftest : passed
+internal : no
+type : cipher
+blocksize : 1
+min keysize : 1
+max keysize : 256
+
+name : crct10dif
+driver : crct10dif-pclmul
+module : crct10dif_pclmul
+priority : 200
+refcnt : 2
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 2
+
+name : crc32
+driver : crc32-pclmul
+module : crc32_pclmul
+priority : 200
+refcnt : 1
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 4
+
+name : __ghash
+driver : cryptd(__ghash-pclmulqdqni)
+module : kernel
+priority : 50
+refcnt : 1
+selftest : passed
+internal : yes
+type : ahash
+async : yes
+blocksize : 16
+digestsize : 16
+
+name : ghash
+driver : ghash-clmulni
+module : ghash_clmulni_intel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : no
+type : ahash
+async : yes
+blocksize : 16
+digestsize : 16
+
+name : __ghash
+driver : __ghash-pclmulqdqni
+module : ghash_clmulni_intel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : yes
+type : shash
+blocksize : 16
+digestsize : 16
+
+name : crc32c
+driver : crc32c-intel
+module : crc32c_intel
+priority : 200
+refcnt : 5
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 4
+
+name : cbc(aes)
+driver : cbc(aes-aesni)
+module : kernel
+priority : 300
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : no
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : ctr(aes)
+driver : ctr(aes-aesni)
+module : kernel
+priority : 300
+refcnt : 5
+selftest : passed
+internal : no
+type : skcipher
+async : no
+blocksize : 1
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : pkcs1pad(rsa,sha256)
+driver : pkcs1pad(rsa-generic,sha256)
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : akcipher
+
+name : __xts(aes)
+driver : cryptd(__xts-aes-aesni)
+module : kernel
+priority : 451
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : yes
+blocksize : 16
+min keysize : 32
+max keysize : 64
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : xts(aes)
+driver : xts-aes-aesni
+module : kernel
+priority : 401
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : yes
+blocksize : 16
+min keysize : 32
+max keysize : 64
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : __ctr(aes)
+driver : cryptd(__ctr-aes-aesni)
+module : kernel
+priority : 450
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : yes
+blocksize : 1
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : ctr(aes)
+driver : ctr-aes-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : yes
+blocksize : 1
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : __cbc(aes)
+driver : cryptd(__cbc-aes-aesni)
+module : kernel
+priority : 450
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : yes
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : cbc(aes)
+driver : cbc-aes-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : yes
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : __ecb(aes)
+driver : cryptd(__ecb-aes-aesni)
+module : kernel
+priority : 450
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : yes
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 0
+chunksize : 16
+walksize : 16
+
+name : ecb(aes)
+driver : ecb-aes-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : yes
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 0
+chunksize : 16
+walksize : 16
+
+name : __generic-gcm-aes-aesni
+driver : cryptd(__driver-generic-gcm-aes-aesni)
+module : kernel
+priority : 50
+refcnt : 1
+selftest : passed
+internal : yes
+type : aead
+async : yes
+blocksize : 1
+ivsize : 12
+maxauthsize : 16
+geniv : <none>
+
+name : gcm(aes)
+driver : generic-gcm-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : no
+type : aead
+async : yes
+blocksize : 1
+ivsize : 12
+maxauthsize : 16
+geniv : <none>
+
+name : __generic-gcm-aes-aesni
+driver : __driver-generic-gcm-aes-aesni
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : yes
+type : aead
+async : no
+blocksize : 1
+ivsize : 12
+maxauthsize : 16
+geniv : <none>
+
+name : __gcm-aes-aesni
+driver : cryptd(__driver-gcm-aes-aesni)
+module : kernel
+priority : 50
+refcnt : 1
+selftest : passed
+internal : yes
+type : aead
+async : yes
+blocksize : 1
+ivsize : 8
+maxauthsize : 16
+geniv : <none>
+
+name : rfc4106(gcm(aes))
+driver : rfc4106-gcm-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : no
+type : aead
+async : yes
+blocksize : 1
+ivsize : 8
+maxauthsize : 16
+geniv : <none>
+
+name : __gcm-aes-aesni
+driver : __driver-gcm-aes-aesni
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : yes
+type : aead
+async : no
+blocksize : 1
+ivsize : 8
+maxauthsize : 16
+geniv : <none>
+
+name : __xts(aes)
+driver : __xts-aes-aesni
+module : kernel
+priority : 401
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : no
+blocksize : 16
+min keysize : 32
+max keysize : 64
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : __ctr(aes)
+driver : __ctr-aes-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : no
+blocksize : 1
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : __cbc(aes)
+driver : __cbc-aes-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : no
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 16
+chunksize : 16
+walksize : 16
+
+name : __ecb(aes)
+driver : __ecb-aes-aesni
+module : kernel
+priority : 400
+refcnt : 1
+selftest : passed
+internal : yes
+type : skcipher
+async : no
+blocksize : 16
+min keysize : 16
+max keysize : 32
+ivsize : 0
+chunksize : 16
+walksize : 16
+
+name : __aes
+driver : __aes-aesni
+module : kernel
+priority : 300
+refcnt : 1
+selftest : passed
+internal : yes
+type : cipher
+blocksize : 16
+min keysize : 16
+max keysize : 32
+
+name : aes
+driver : aes-aesni
+module : kernel
+priority : 300
+refcnt : 8
+selftest : passed
+internal : no
+type : cipher
+blocksize : 16
+min keysize : 16
+max keysize : 32
+
+name : hmac(sha1)
+driver : hmac(sha1-generic)
+module : kernel
+priority : 100
+refcnt : 9
+selftest : passed
+internal : no
+type : shash
+blocksize : 64
+digestsize : 20
+
+name : ghash
+driver : ghash-generic
+module : kernel
+priority : 100
+refcnt : 3
+selftest : passed
+internal : no
+type : shash
+blocksize : 16
+digestsize : 16
+
+name : jitterentropy_rng
+driver : jitterentropy_rng
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_hmac_sha256
+module : kernel
+priority : 221
+refcnt : 2
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_hmac_sha512
+module : kernel
+priority : 220
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_hmac_sha384
+module : kernel
+priority : 219
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_hmac_sha1
+module : kernel
+priority : 218
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_sha256
+module : kernel
+priority : 217
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_sha512
+module : kernel
+priority : 216
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_sha384
+module : kernel
+priority : 215
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_sha1
+module : kernel
+priority : 214
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_ctr_aes256
+module : kernel
+priority : 213
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_ctr_aes192
+module : kernel
+priority : 212
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_nopr_ctr_aes128
+module : kernel
+priority : 211
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : hmac(sha256)
+driver : hmac(sha256-generic)
+module : kernel
+priority : 100
+refcnt : 10
+selftest : passed
+internal : no
+type : shash
+blocksize : 64
+digestsize : 32
+
+name : stdrng
+driver : drbg_pr_hmac_sha256
+module : kernel
+priority : 210
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_hmac_sha512
+module : kernel
+priority : 209
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_hmac_sha384
+module : kernel
+priority : 208
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_hmac_sha1
+module : kernel
+priority : 207
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_sha256
+module : kernel
+priority : 206
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_sha512
+module : kernel
+priority : 205
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_sha384
+module : kernel
+priority : 204
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_sha1
+module : kernel
+priority : 203
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_ctr_aes256
+module : kernel
+priority : 202
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_ctr_aes192
+module : kernel
+priority : 201
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : stdrng
+driver : drbg_pr_ctr_aes128
+module : kernel
+priority : 200
+refcnt : 1
+selftest : passed
+internal : no
+type : rng
+seedsize : 0
+
+name : 842
+driver : 842-scomp
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : scomp
+
+name : 842
+driver : 842-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : compression
+
+name : lzo-rle
+driver : lzo-rle-scomp
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : scomp
+
+name : lzo-rle
+driver : lzo-rle-generic
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : compression
+
+name : lzo
+driver : lzo-scomp
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : scomp
+
+name : lzo
+driver : lzo-generic
+module : kernel
+priority : 0
+refcnt : 9
+selftest : passed
+internal : no
+type : compression
+
+name : crct10dif
+driver : crct10dif-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 2
+
+name : crc32c
+driver : crc32c-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 4
+
+name : zlib-deflate
+driver : zlib-deflate-scomp
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : scomp
+
+name : deflate
+driver : deflate-scomp
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : scomp
+
+name : deflate
+driver : deflate-generic
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : compression
+
+name : aes
+driver : aes-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : cipher
+blocksize : 16
+min keysize : 16
+max keysize : 32
+
+name : sha224
+driver : sha224-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : shash
+blocksize : 64
+digestsize : 28
+
+name : sha256
+driver : sha256-generic
+module : kernel
+priority : 100
+refcnt : 11
+selftest : passed
+internal : no
+type : shash
+blocksize : 64
+digestsize : 32
+
+name : sha1
+driver : sha1-generic
+module : kernel
+priority : 100
+refcnt : 11
+selftest : passed
+internal : no
+type : shash
+blocksize : 64
+digestsize : 20
+
+name : md5
+driver : md5-generic
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : shash
+blocksize : 64
+digestsize : 16
+
+name : ecb(cipher_null)
+driver : ecb-cipher_null
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : skcipher
+async : no
+blocksize : 1
+min keysize : 0
+max keysize : 0
+ivsize : 0
+chunksize : 1
+walksize : 1
+
+name : digest_null
+driver : digest_null-generic
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : shash
+blocksize : 1
+digestsize : 0
+
+name : compress_null
+driver : compress_null-generic
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : compression
+
+name : cipher_null
+driver : cipher_null-generic
+module : kernel
+priority : 0
+refcnt : 1
+selftest : passed
+internal : no
+type : cipher
+blocksize : 1
+min keysize : 0
+max keysize : 0
+
+name : rsa
+driver : rsa-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : akcipher
+
+name : dh
+driver : dh-generic
+module : kernel
+priority : 100
+refcnt : 1
+selftest : passed
+internal : no
+type : kpp
+
+name : aes
+driver : aes-asm
+module : kernel
+priority : 200
+refcnt : 1
+selftest : passed
+internal : no
+type : cipher
+blocksize : 16
+min keysize : 16
+max keysize : 32
+
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Path: fixtures/proc/diskstats
Lines: 49
1 0 ram0 0 0 0 0 0 0 0 0 0 0 0
@@ -420,9 +1674,61 @@ md101 : active (read-only) raid0 sdb[2] sdd[1] sdc[0]
unused devices: <none>
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/meminfo
+Lines: 42
+MemTotal: 15666184 kB
+MemFree: 440324 kB
+Buffers: 1020128 kB
+Cached: 12007640 kB
+SwapCached: 0 kB
+Active: 6761276 kB
+Inactive: 6532708 kB
+Active(anon): 267256 kB
+Inactive(anon): 268 kB
+Active(file): 6494020 kB
+Inactive(file): 6532440 kB
+Unevictable: 0 kB
+Mlocked: 0 kB
+SwapTotal: 0 kB
+SwapFree: 0 kB
+Dirty: 768 kB
+Writeback: 0 kB
+AnonPages: 266216 kB
+Mapped: 44204 kB
+Shmem: 1308 kB
+Slab: 1807264 kB
+SReclaimable: 1738124 kB
+SUnreclaim: 69140 kB
+KernelStack: 1616 kB
+PageTables: 5288 kB
+NFS_Unstable: 0 kB
+Bounce: 0 kB
+WritebackTmp: 0 kB
+CommitLimit: 7833092 kB
+Committed_AS: 530844 kB
+VmallocTotal: 34359738367 kB
+VmallocUsed: 36596 kB
+VmallocChunk: 34359637840 kB
+HardwareCorrupted: 0 kB
+AnonHugePages: 12288 kB
+HugePages_Total: 0
+HugePages_Free: 0
+HugePages_Rsvd: 0
+HugePages_Surp: 0
+Hugepagesize: 2048 kB
+DirectMap4k: 91136 kB
+DirectMap2M: 16039936 kB
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Directory: fixtures/proc/net
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/net/arp
+Lines: 2
+IP address HW type Flags HW address Mask Device
+192.168.224.1 0x1 0x2 00:50:56:c0:00:08 * ens33
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Path: fixtures/proc/net/dev
Lines: 6
Inter-| Receive | Transmit
@@ -495,6 +1801,11 @@ proc4 2 2 10853
proc4ops 72 0 0 0 1098 2 0 0 0 0 8179 5896 0 0 0 0 5900 0 0 2 0 2 0 9609 0 2 150 1272 0 0 0 1236 0 0 0 0 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/net/softnet_stat
+Lines: 1
+00015c73 00020e76 F0000769 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Path: fixtures/proc/net/unix
Lines: 6
Num RefCount Protocol Flags Type St Inode Path
@@ -567,6 +1878,16 @@ some avg10=0.10 avg60=2.00 avg300=3.85 total=15
full avg10=0.20 avg60=3.00 avg300=4.95 total=25
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/proc/schedstat
+Lines: 6
+version 15
+timestamp 15819019232
+cpu0 498494191 0 3533438552 2553969831 3853684107 2465731542 2045936778163039 343796328169361 4767485306
+domain0 00000000,00000003 212499247 210112015 1861015 1860405436 536440 369895 32599 210079416 25368550 24241256 384652 927363878 807233 6366 1647 24239609 2122447165 1886868564 121112060 2848625533 125678146 241025 1032026 1885836538 2545 12 2533 0 0 0 0 0 0 1387952561 21076581 0
+cpu1 518377256 0 4155211005 2778589869 10466382 2867629021 1904686152592476 364107263788241 5145567945
+domain0 00000000,00000003 217653037 215526982 1577949 1580427380 557469 393576 28538 215498444 28721913 27662819 371153 870843407 745912 5523 1639 27661180 2331056874 2107732788 111442342 652402556 123615235 196159 1045245 2106687543 2400 3 2397 0 0 0 0 0 0 1437804657 26220076 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Path: fixtures/proc/self
SymlinkTo: 26231
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@@ -619,532 +1940,1055 @@ Path: fixtures/proc/symlinktargets/xyz
Lines: 0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/block
+Directory: fixtures/proc/sys
Mode: 775
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/block/dm-0
-Mode: 775
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/block/dm-0/stat
-Lines: 1
-6447303 0 710266738 1529043 953216 0 31201176 4557464 0 796160 6088971
-Mode: 664
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/block/sda
+Directory: fixtures/proc/sys/vm
Mode: 775
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/block/sda/stat
+Path: fixtures/proc/sys/vm/admin_reserve_kbytes
Lines: 1
-9652963 396792 759304206 412943 8422549 6731723 286915323 13947418 0 5658367 19174573 1 2 3 12
-Mode: 664
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class
-Mode: 775
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband/mlx4_0
-Mode: 755
+8192
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/board_id
+Path: fixtures/proc/sys/vm/block_dump
Lines: 1
-SM_1141000001000
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/fw_ver
+Path: fixtures/proc/sys/vm/compact_unevictable_allowed
Lines: 1
-2.31.5050
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/hca_type
+Path: fixtures/proc/sys/vm/dirty_background_bytes
Lines: 1
-MT4099
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband/mlx4_0/ports
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband/mlx4_0/ports/1
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/excessive_buffer_overrun_errors
+Path: fixtures/proc/sys/vm/dirty_background_ratio
Lines: 1
-0
+10
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/link_downed
+Path: fixtures/proc/sys/vm/dirty_bytes
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/link_error_recovery
+Path: fixtures/proc/sys/vm/dirty_expire_centisecs
Lines: 1
-0
+3000
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/local_link_integrity_errors
+Path: fixtures/proc/sys/vm/dirty_ratio
Lines: 1
-0
+20
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_constraint_errors
+Path: fixtures/proc/sys/vm/dirty_writeback_centisecs
Lines: 1
-0
+500
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_data
+Path: fixtures/proc/sys/vm/dirtytime_expire_seconds
Lines: 1
-2221223609
+43200
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_errors
+Path: fixtures/proc/sys/vm/drop_caches
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_packets
+Path: fixtures/proc/sys/vm/extfrag_threshold
Lines: 1
-87169372
+500
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_remote_physical_errors
+Path: fixtures/proc/sys/vm/hugetlb_shm_group
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_switch_relay_errors
+Path: fixtures/proc/sys/vm/laptop_mode
Lines: 1
-0
+5
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_constraint_errors
+Path: fixtures/proc/sys/vm/legacy_va_layout
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_data
+Path: fixtures/proc/sys/vm/lowmem_reserve_ratio
Lines: 1
-26509113295
+256 256 32 0 0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_discards
+Path: fixtures/proc/sys/vm/max_map_count
Lines: 1
-0
+65530
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_packets
+Path: fixtures/proc/sys/vm/memory_failure_early_kill
Lines: 1
-85734114
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_wait
+Path: fixtures/proc/sys/vm/memory_failure_recovery
Lines: 1
-3599
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/symbol_error
+Path: fixtures/proc/sys/vm/min_free_kbytes
Lines: 1
-0
+67584
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/phys_state
+Path: fixtures/proc/sys/vm/min_slab_ratio
Lines: 1
-5: LinkUp
+5
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/rate
+Path: fixtures/proc/sys/vm/min_unmapped_ratio
Lines: 1
-40 Gb/sec (4X QDR)
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/state
+Path: fixtures/proc/sys/vm/mmap_min_addr
Lines: 1
-4: ACTIVE
+65536
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband/mlx4_0/ports/2
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/excessive_buffer_overrun_errors
+Path: fixtures/proc/sys/vm/nr_hugepages
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/link_downed
+Path: fixtures/proc/sys/vm/nr_hugepages_mempolicy
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/link_error_recovery
+Path: fixtures/proc/sys/vm/nr_overcommit_hugepages
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/local_link_integrity_errors
+Path: fixtures/proc/sys/vm/numa_stat
Lines: 1
-0
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_constraint_errors
+Path: fixtures/proc/sys/vm/numa_zonelist_order
Lines: 1
-0
+Node
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_data
+Path: fixtures/proc/sys/vm/oom_dump_tasks
Lines: 1
-2460436784
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_errors
+Path: fixtures/proc/sys/vm/oom_kill_allocating_task
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_packets
+Path: fixtures/proc/sys/vm/overcommit_kbytes
Lines: 1
-89332064
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_remote_physical_errors
+Path: fixtures/proc/sys/vm/overcommit_memory
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_switch_relay_errors
+Path: fixtures/proc/sys/vm/overcommit_ratio
Lines: 1
-0
+50
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_constraint_errors
+Path: fixtures/proc/sys/vm/page-cluster
Lines: 1
-0
+3
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_data
+Path: fixtures/proc/sys/vm/panic_on_oom
Lines: 1
-26540356890
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_discards
+Path: fixtures/proc/sys/vm/percpu_pagelist_fraction
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_packets
+Path: fixtures/proc/sys/vm/stat_interval
Lines: 1
-88622850
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_wait
+Path: fixtures/proc/sys/vm/swappiness
Lines: 1
-3846
+60
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/symbol_error
+Path: fixtures/proc/sys/vm/user_reserve_kbytes
Lines: 1
-0
+131072
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/phys_state
+Path: fixtures/proc/sys/vm/vfs_cache_pressure
Lines: 1
-5: LinkUp
+100
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/rate
+Path: fixtures/proc/sys/vm/watermark_boost_factor
Lines: 1
-40 Gb/sec (4X QDR)
+15000
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/state
+Path: fixtures/proc/sys/vm/watermark_scale_factor
Lines: 1
-4: ACTIVE
+10
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/net
-Mode: 775
+Path: fixtures/proc/sys/vm/zone_reclaim_mode
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/net/eth0
+Path: fixtures/proc/zoneinfo
+Lines: 262
+Node 0, zone DMA
+ per-node stats
+ nr_inactive_anon 230981
+ nr_active_anon 547580
+ nr_inactive_file 316904
+ nr_active_file 346282
+ nr_unevictable 115467
+ nr_slab_reclaimable 131220
+ nr_slab_unreclaimable 47320
+ nr_isolated_anon 0
+ nr_isolated_file 0
+ workingset_nodes 11627
+ workingset_refault 466886
+ workingset_activate 276925
+ workingset_restore 84055
+ workingset_nodereclaim 487
+ nr_anon_pages 795576
+ nr_mapped 215483
+ nr_file_pages 761874
+ nr_dirty 908
+ nr_writeback 0
+ nr_writeback_temp 0
+ nr_shmem 224925
+ nr_shmem_hugepages 0
+ nr_shmem_pmdmapped 0
+ nr_anon_transparent_hugepages 0
+ nr_unstable 0
+ nr_vmscan_write 12950
+ nr_vmscan_immediate_reclaim 3033
+ nr_dirtied 8007423
+ nr_written 7752121
+ nr_kernel_misc_reclaimable 0
+ pages free 3952
+ min 33
+ low 41
+ high 49
+ spanned 4095
+ present 3975
+ managed 3956
+ protection: (0, 2877, 7826, 7826, 7826)
+ nr_free_pages 3952
+ nr_zone_inactive_anon 0
+ nr_zone_active_anon 0
+ nr_zone_inactive_file 0
+ nr_zone_active_file 0
+ nr_zone_unevictable 0
+ nr_zone_write_pending 0
+ nr_mlock 0
+ nr_page_table_pages 0
+ nr_kernel_stack 0
+ nr_bounce 0
+ nr_zspages 0
+ nr_free_cma 0
+ numa_hit 1
+ numa_miss 0
+ numa_foreign 0
+ numa_interleave 0
+ numa_local 1
+ numa_other 0
+ pagesets
+ cpu: 0
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 1
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 2
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 3
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 4
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 5
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 6
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ cpu: 7
+ count: 0
+ high: 0
+ batch: 1
+ vm stats threshold: 8
+ node_unreclaimable: 0
+ start_pfn: 1
+Node 0, zone DMA32
+ pages free 204252
+ min 19510
+ low 21059
+ high 22608
+ spanned 1044480
+ present 759231
+ managed 742806
+ protection: (0, 0, 4949, 4949, 4949)
+ nr_free_pages 204252
+ nr_zone_inactive_anon 118558
+ nr_zone_active_anon 106598
+ nr_zone_inactive_file 75475
+ nr_zone_active_file 70293
+ nr_zone_unevictable 66195
+ nr_zone_write_pending 64
+ nr_mlock 4
+ nr_page_table_pages 1756
+ nr_kernel_stack 2208
+ nr_bounce 0
+ nr_zspages 0
+ nr_free_cma 0
+ numa_hit 113952967
+ numa_miss 0
+ numa_foreign 0
+ numa_interleave 0
+ numa_local 113952967
+ numa_other 0
+ pagesets
+ cpu: 0
+ count: 345
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 1
+ count: 356
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 2
+ count: 325
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 3
+ count: 346
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 4
+ count: 321
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 5
+ count: 316
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 6
+ count: 373
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ cpu: 7
+ count: 339
+ high: 378
+ batch: 63
+ vm stats threshold: 48
+ node_unreclaimable: 0
+ start_pfn: 4096
+Node 0, zone Normal
+ pages free 18553
+ min 11176
+ low 13842
+ high 16508
+ spanned 1308160
+ present 1308160
+ managed 1268711
+ protection: (0, 0, 0, 0, 0)
+ nr_free_pages 18553
+ nr_zone_inactive_anon 112423
+ nr_zone_active_anon 440982
+ nr_zone_inactive_file 241429
+ nr_zone_active_file 275989
+ nr_zone_unevictable 49272
+ nr_zone_write_pending 844
+ nr_mlock 154
+ nr_page_table_pages 9750
+ nr_kernel_stack 15136
+ nr_bounce 0
+ nr_zspages 0
+ nr_free_cma 0
+ numa_hit 162718019
+ numa_miss 0
+ numa_foreign 0
+ numa_interleave 26812
+ numa_local 162718019
+ numa_other 0
+ pagesets
+ cpu: 0
+ count: 316
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 1
+ count: 366
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 2
+ count: 60
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 3
+ count: 256
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 4
+ count: 253
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 5
+ count: 159
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 6
+ count: 311
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ cpu: 7
+ count: 264
+ high: 378
+ batch: 63
+ vm stats threshold: 56
+ node_unreclaimable: 0
+ start_pfn: 1048576
+Node 0, zone Movable
+ pages free 0
+ min 0
+ low 0
+ high 0
+ spanned 0
+ present 0
+ managed 0
+ protection: (0, 0, 0, 0, 0)
+Node 0, zone Device
+ pages free 0
+ min 0
+ low 0
+ high 0
+ spanned 0
+ present 0
+ managed 0
+ protection: (0, 0, 0, 0, 0)
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/addr_assign_type
-Lines: 1
-3
-Mode: 644
+Directory: fixtures/sys/block
+Mode: 775
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/addr_len
+Directory: fixtures/sys/block/dm-0
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/block/dm-0/stat
Lines: 1
-6
-Mode: 644
+6447303 0 710266738 1529043 953216 0 31201176 4557464 0 796160 6088971
+Mode: 664
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/address
+Directory: fixtures/sys/block/sda
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/block/sda/stat
Lines: 1
-01:01:01:01:01:01
-Mode: 644
+9652963 396792 759304206 412943 8422549 6731723 286915323 13947418 0 5658367 19174573 1 2 3 12
+Mode: 664
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/broadcast
+Directory: fixtures/sys/class
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/infiniband
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/infiniband/mlx4_0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/infiniband/mlx4_0/board_id
Lines: 1
-ff:ff:ff:ff:ff:ff
+SM_1141000001000
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/carrier
+Path: fixtures/sys/class/infiniband/mlx4_0/fw_ver
Lines: 1
-1
+2.31.5050
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/carrier_changes
+Path: fixtures/sys/class/infiniband/mlx4_0/hca_type
Lines: 1
-2
+MT4099
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/carrier_down_count
+Directory: fixtures/sys/class/infiniband/mlx4_0/ports
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/infiniband/mlx4_0/ports/1
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/excessive_buffer_overrun_errors
Lines: 1
-1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/carrier_up_count
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/link_downed
Lines: 1
-1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/dev_id
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/link_error_recovery
Lines: 1
-0x20
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/dormant
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/local_link_integrity_errors
Lines: 1
-1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/duplex
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_constraint_errors
Lines: 1
-full
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/flags
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_data
Lines: 1
-0x1303
+2221223609
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/ifalias
-Lines: 0
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_errors
+Lines: 1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/ifindex
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_packets
Lines: 1
-2
+87169372
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/iflink
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_remote_physical_errors
Lines: 1
-2
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/link_mode
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_rcv_switch_relay_errors
Lines: 1
-1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/mtu
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_constraint_errors
Lines: 1
-1500
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/name_assign_type
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_data
Lines: 1
-2
+26509113295
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/netdev_group
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_discards
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/operstate
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_packets
Lines: 1
-up
-Mode: 644
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/phys_port_id
-Lines: 0
+85734114
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/phys_port_name
-Lines: 0
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/port_xmit_wait
+Lines: 1
+3599
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/phys_switch_id
-Lines: 0
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/counters/symbol_error
+Lines: 1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/speed
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/phys_state
Lines: 1
-1000
+5: LinkUp
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/tx_queue_len
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/rate
Lines: 1
-1000
+40 Gb/sec (4X QDR)
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/net/eth0/type
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/1/state
Lines: 1
-1
+4: ACTIVE
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/power_supply
+Directory: fixtures/sys/class/infiniband/mlx4_0/ports/2
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/power_supply/AC
+Directory: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/AC/online
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/excessive_buffer_overrun_errors
Lines: 1
0
-Mode: 444
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/AC/type
-Lines: 1
-Mains
-Mode: 444
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/AC/uevent
-Lines: 2
-POWER_SUPPLY_NAME=AC
-POWER_SUPPLY_ONLINE=0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/power_supply/BAT0
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/alarm
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/link_downed
Lines: 1
-2503000
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/capacity
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/link_error_recovery
Lines: 1
-98
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/capacity_level
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/local_link_integrity_errors
Lines: 1
-Normal
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/charge_start_threshold
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_constraint_errors
Lines: 1
-95
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/charge_stop_threshold
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_data
Lines: 1
-100
+2460436784
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/cycle_count
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_errors
Lines: 1
0
-Mode: 444
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/energy_full
-Lines: 1
-50060000
-Mode: 444
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/energy_full_design
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_packets
Lines: 1
-47520000
-Mode: 444
+89332064
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/energy_now
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_remote_physical_errors
Lines: 1
-49450000
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/manufacturer
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_rcv_switch_relay_errors
Lines: 1
-LGC
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/model_name
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_constraint_errors
Lines: 1
-LNV-45N1
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/power_now
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_data
Lines: 1
-4830000
-Mode: 444
+26540356890
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/present
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_discards
Lines: 1
-1
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/serial_number
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_packets
Lines: 1
-38109
-Mode: 444
+88622850
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/status
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/port_xmit_wait
Lines: 1
-Discharging
-Mode: 444
+3846
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/technology
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/counters/symbol_error
Lines: 1
-Li-ion
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/type
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/phys_state
Lines: 1
-Battery
-Mode: 444
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/uevent
-Lines: 16
-POWER_SUPPLY_NAME=BAT0
-POWER_SUPPLY_STATUS=Discharging
-POWER_SUPPLY_PRESENT=1
-POWER_SUPPLY_TECHNOLOGY=Li-ion
-POWER_SUPPLY_CYCLE_COUNT=0
-POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000
-POWER_SUPPLY_VOLTAGE_NOW=12229000
-POWER_SUPPLY_POWER_NOW=4830000
-POWER_SUPPLY_ENERGY_FULL_DESIGN=47520000
-POWER_SUPPLY_ENERGY_FULL=50060000
-POWER_SUPPLY_ENERGY_NOW=49450000
-POWER_SUPPLY_CAPACITY=98
-POWER_SUPPLY_CAPACITY_LEVEL=Normal
-POWER_SUPPLY_MODEL_NAME=LNV-45N1
-POWER_SUPPLY_MANUFACTURER=LGC
-POWER_SUPPLY_SERIAL_NUMBER=38109
+5: LinkUp
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/voltage_min_design
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/rate
Lines: 1
-10800000
-Mode: 444
+40 Gb/sec (4X QDR)
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/power_supply/BAT0/voltage_now
+Path: fixtures/sys/class/infiniband/mlx4_0/ports/2/state
Lines: 1
-12229000
-Mode: 444
+4: ACTIVE
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/class/thermal
+Directory: fixtures/sys/class/net
Mode: 775
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/net/eth0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/addr_assign_type
+Lines: 1
+3
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/addr_len
+Lines: 1
+6
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/address
+Lines: 1
+01:01:01:01:01:01
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/broadcast
+Lines: 1
+ff:ff:ff:ff:ff:ff
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/carrier
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/carrier_changes
+Lines: 1
+2
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/carrier_down_count
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/carrier_up_count
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/dev_id
+Lines: 1
+0x20
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/device
+SymlinkTo: ../../../devices/pci0000:00/0000:00:1f.6/
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/dormant
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/duplex
+Lines: 1
+full
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/flags
+Lines: 1
+0x1303
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/ifalias
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/ifindex
+Lines: 1
+2
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/iflink
+Lines: 1
+2
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/link_mode
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/mtu
+Lines: 1
+1500
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/name_assign_type
+Lines: 1
+2
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/netdev_group
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/operstate
+Lines: 1
+up
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/phys_port_id
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/phys_port_name
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/phys_switch_id
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/speed
+Lines: 1
+1000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/tx_queue_len
+Lines: 1
+1000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/net/eth0/type
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/power_supply
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/power_supply/AC
+SymlinkTo: ../../devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/power_supply/BAT0
+SymlinkTo: ../../devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/powercap
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/powercap/intel-rapl
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl/enabled
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl/uevent
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/powercap/intel-rapl:0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_max_power_uw
+Lines: 1
+95000000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_name
+Lines: 1
+long_term
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
+Lines: 1
+4090000000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_0_time_window_us
+Lines: 1
+999424
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_max_power_uw
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_name
+Lines: 1
+short_term
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw
+Lines: 1
+4090000000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/constraint_1_time_window_us
+Lines: 1
+2440
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/enabled
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/energy_uj
+Lines: 1
+240422366267
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/max_energy_range_uj
+Lines: 1
+262143328850
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/name
+Lines: 1
+package-0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0/uevent
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/powercap/intel-rapl:0:0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_max_power_uw
+Lines: 0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_name
+Lines: 1
+long_term
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_power_limit_uw
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/constraint_0_time_window_us
+Lines: 1
+976
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/enabled
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/energy_uj
+Lines: 1
+118821284256
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/max_energy_range_uj
+Lines: 1
+262143328850
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/name
+Lines: 1
+core
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/powercap/intel-rapl:0:0/uevent
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/thermal
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/thermal/cooling_device0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/cooling_device0/cur_state
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/cooling_device0/max_state
+Lines: 1
+50
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/cooling_device0/type
+Lines: 1
+Processor
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/class/thermal/cooling_device1
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/cooling_device1/cur_state
+Lines: 1
+-1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/cooling_device1/max_state
+Lines: 1
+27
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/cooling_device1/type
+Lines: 1
+intel_powerclamp
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Directory: fixtures/sys/class/thermal/thermal_zone0
Mode: 775
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
@@ -1166,910 +3010,2022 @@ Mode: 664
Directory: fixtures/sys/class/thermal/thermal_zone1
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/thermal/thermal_zone1/mode
+Path: fixtures/sys/class/thermal/thermal_zone1/mode
+Lines: 1
+enabled
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/thermal_zone1/passive
+Lines: 1
+0
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/thermal_zone1/policy
+Lines: 1
+step_wise
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/thermal_zone1/temp
+Lines: 1
+44000
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/class/thermal/thermal_zone1/type
+Lines: 1
+acpitz
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/device
+SymlinkTo: ../../../ACPI0003:00
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/online
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/async
+Lines: 1
+disabled
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/autosuspend_delay_ms
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/control
+Lines: 1
+auto
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/runtime_active_kids
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/runtime_active_time
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/runtime_enabled
+Lines: 1
+disabled
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/runtime_status
+Lines: 1
+unsupported
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/runtime_suspended_time
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/runtime_usage
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup
+Lines: 1
+enabled
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_abort_count
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_active
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_active_count
+Lines: 1
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_count
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_expire_count
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_last_time_ms
+Lines: 1
+10598
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_max_time_ms
+Lines: 1
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_prevent_sleep_time_ms
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/power/wakeup_total_time_ms
+Lines: 1
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/subsystem
+SymlinkTo: ../../../../../../../../../class/power_supply
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/type
+Lines: 1
+Mains
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/ACPI0003:00/power_supply/AC/uevent
+Lines: 2
+POWER_SUPPLY_NAME=AC
+POWER_SUPPLY_ONLINE=0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/alarm
+Lines: 1
+2369000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/capacity
+Lines: 1
+98
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/capacity_level
+Lines: 1
+Normal
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/charge_start_threshold
+Lines: 1
+95
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/charge_stop_threshold
+Lines: 1
+100
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/cycle_count
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/device
+SymlinkTo: ../../../PNP0C0A:00
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/energy_full
+Lines: 1
+50060000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/energy_full_design
+Lines: 1
+47520000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/energy_now
+Lines: 1
+49450000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/manufacturer
+Lines: 1
+LGC
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/model_name
+Lines: 1
+LNV-45N1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/async
+Lines: 1
+disabled
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/autosuspend_delay_ms
+Lines: 0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/control
+Lines: 1
+auto
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/runtime_active_kids
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/runtime_active_time
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/runtime_enabled
+Lines: 1
+disabled
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/runtime_status
+Lines: 1
+unsupported
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/runtime_suspended_time
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power/runtime_usage
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/power_now
+Lines: 1
+4830000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/present
+Lines: 1
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/serial_number
+Lines: 1
+38109
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/status
+Lines: 1
+Discharging
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/subsystem
+SymlinkTo: ../../../../../../../../../class/power_supply
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/technology
+Lines: 1
+Li-ion
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/type
+Lines: 1
+Battery
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/uevent
+Lines: 16
+POWER_SUPPLY_NAME=BAT0
+POWER_SUPPLY_STATUS=Discharging
+POWER_SUPPLY_PRESENT=1
+POWER_SUPPLY_TECHNOLOGY=Li-ion
+POWER_SUPPLY_CYCLE_COUNT=0
+POWER_SUPPLY_VOLTAGE_MIN_DESIGN=10800000
+POWER_SUPPLY_VOLTAGE_NOW=11750000
+POWER_SUPPLY_POWER_NOW=5064000
+POWER_SUPPLY_ENERGY_FULL_DESIGN=47520000
+POWER_SUPPLY_ENERGY_FULL=47390000
+POWER_SUPPLY_ENERGY_NOW=40730000
+POWER_SUPPLY_CAPACITY=85
+POWER_SUPPLY_CAPACITY_LEVEL=Normal
+POWER_SUPPLY_MODEL_NAME=LNV-45N1
+POWER_SUPPLY_MANUFACTURER=LGC
+POWER_SUPPLY_SERIAL_NUMBER=38109
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/voltage_min_design
+Lines: 1
+10800000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/device:00/PNP0C09:00/PNP0C0A:00/power_supply/BAT0/voltage_now
+Lines: 1
+12229000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/dirty_data
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/bypassed
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_hit_ratio
+Lines: 1
+100
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_hits
+Lines: 1
+289
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_readaheads
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/bypassed
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_hit_ratio
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_readaheads
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/bypassed
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_hit_ratio
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_readaheads
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/bypassed
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_hit_ratio
+Lines: 1
+100
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_hits
+Lines: 1
+546
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_readaheads
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/io_errors
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/metadata_written
+Lines: 1
+512
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/priority_stats
+Lines: 5
+Unused: 99%
+Metadata: 0%
+Average: 10473
+Sectors per Q: 64
+Quantiles: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946]
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/written
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/pci0000:00/0000:00:1f.6
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/ari_enabled
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/broken_parity_status
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/class
+Lines: 1
+0x020000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/consistent_dma_mask_bits
+Lines: 1
+64
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/d3cold_allowed
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/device
+Lines: 1
+0x15d7
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/dma_mask_bits
+Lines: 1
+64
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/driver_override
+Lines: 1
+(null)
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/enable
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/irq
+Lines: 1
+140
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/local_cpulist
+Lines: 1
+0-7
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/local_cpus
+Lines: 1
+ff
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/modalias
+Lines: 1
+pci:v00008086d000015D7sv000017AAsd0000225Abc02sc00i00
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/msi_bus
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/numa_node
+Lines: 1
+-1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/resource
+Lines: 13
+0x00000000ec200000 0x00000000ec21ffff 0x0000000000040200
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+0x0000000000000000 0x0000000000000000 0x0000000000000000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/revision
+Lines: 1
+0x21
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/subsystem_device
+Lines: 1
+0x225a
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/subsystem_vendor
+Lines: 1
+0x17aa
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/uevent
+Lines: 6
+DRIVER=e1000e
+PCI_CLASS=20000
+PCI_ID=8086:15D7
+PCI_SUBSYS_ID=17AA:225A
+PCI_SLOT_NAME=0000:00:1f.6
+MODALIAS=pci:v00008086d000015D7sv000017AAsd0000225Abc02sc00i00
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/pci0000:00/0000:00:1f.6/vendor
+Lines: 1
+0x8086
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/rbd
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/rbd/0
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/rbd/0/name
+Lines: 1
+demo
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/rbd/0/pool
+Lines: 1
+iscsi-images
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/rbd/1
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/rbd/1/name
+Lines: 1
+wrong
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/rbd/1/pool
+Lines: 1
+wrong-images
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/clocksource
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/clocksource/clocksource0
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/clocksource/clocksource0/available_clocksource
+Lines: 1
+tsc hpet acpi_pm
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/clocksource/clocksource0/current_clocksource
+Lines: 1
+tsc
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu0
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/cpufreq
+SymlinkTo: ../cpufreq/policy0
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu0/thermal_throttle
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/thermal_throttle/core_throttle_count
+Lines: 1
+10084
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/thermal_throttle/package_throttle_count
+Lines: 1
+34818
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu0/topology
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/topology/core_id
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/topology/core_siblings
+Lines: 1
+ff
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/topology/core_siblings_list
+Lines: 1
+0-7
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/topology/physical_package_id
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/topology/thread_siblings
+Lines: 1
+11
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu0/topology/thread_siblings_list
+Lines: 1
+0,4
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu1
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu1/cpufreq
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_cur_freq
+Lines: 1
+1200195
+Mode: 400
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_max_freq
+Lines: 1
+3300000
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_min_freq
+Lines: 1
+1200000
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_transition_latency
+Lines: 1
+4294967295
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/related_cpus
+Lines: 1
+1
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors
+Lines: 1
+performance powersave
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver
+Lines: 1
+intel_pstate
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
+Lines: 1
+powersave
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq
+Lines: 1
+3300000
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq
+Lines: 1
+1200000
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_setspeed
+Lines: 1
+<unsupported>
+Mode: 664
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu1/thermal_throttle
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/thermal_throttle/core_throttle_count
+Lines: 1
+523
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/thermal_throttle/package_throttle_count
+Lines: 1
+34818
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpu1/topology
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/topology/core_id
+Lines: 1
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/topology/core_siblings
+Lines: 1
+ff
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/topology/core_siblings_list
+Lines: 1
+0-7
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/topology/physical_package_id
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/topology/thread_siblings
+Lines: 1
+22
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpu1/topology/thread_siblings_list
+Lines: 1
+1,5
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpufreq
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpufreq/policy0
+Mode: 775
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/affected_cpus
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_max_freq
+Lines: 1
+2400000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_min_freq
+Lines: 1
+800000
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_transition_latency
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/related_cpus
+Lines: 1
+0
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_available_governors
+Lines: 1
+performance powersave
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_cur_freq
+Lines: 1
+1219917
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_driver
+Lines: 1
+intel_pstate
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_governor
+Lines: 1
+powersave
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_max_freq
+Lines: 1
+2400000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_min_freq
+Lines: 1
+800000
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
+Lines: 1
+<unsupported>
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/devices/system/cpu/cpufreq/policy1
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/bcache
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/average_key_size
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0
+Mode: 777
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/dirty_data
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/bypassed
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_hit_ratio
+Lines: 1
+100
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_hits
+Lines: 1
+289
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_misses
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_readaheads
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/bypassed
Lines: 1
-enabled
-Mode: 664
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/thermal/thermal_zone1/passive
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_bypass_hits
Lines: 1
0
-Mode: 664
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/thermal/thermal_zone1/policy
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_bypass_misses
Lines: 1
-step_wise
-Mode: 664
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/thermal/thermal_zone1/temp
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_hit_ratio
Lines: 1
-44000
-Mode: 664
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/class/thermal/thermal_zone1/type
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_hits
Lines: 1
-acpitz
-Mode: 664
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_misses
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_readaheads
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/bypassed
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_hit_ratio
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_hits
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/dirty_data
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_readaheads
+Lines: 1
+0
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/bypassed
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/bypassed
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_bypass_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_bypass_hits
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_bypass_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_bypass_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_hit_ratio
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_hit_ratio
Lines: 1
100
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_hits
Lines: 1
-289
+546
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_miss_collisions
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_miss_collisions
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_day/cache_readaheads
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_readaheads
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/bypassed
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/btree_cache_size
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_bypass_hits
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0
+Mode: 777
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/io_errors
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_bypass_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/metadata_written
Lines: 1
-0
+512
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_hit_ratio
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/priority_stats
+Lines: 5
+Unused: 99%
+Metadata: 0%
+Average: 10473
+Sectors per Q: 64
+Quantiles: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946]
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/written
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache_available_percent
+Lines: 1
+100
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/congested
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_miss_collisions
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/active_journal_entries
+Lines: 1
+1
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/btree_nodes
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/btree_read_average_duration_us
+Lines: 1
+1305
+Mode: 644
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/cache_read_races
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_five_minute/cache_readaheads
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/root_usage_percent
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/bypassed
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/bypassed
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_bypass_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_bypass_hits
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_bypass_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_bypass_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_hit_ratio
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_hit_ratio
Lines: 1
-0
+100
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_hits
Lines: 1
-0
+289
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_miss_collisions
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_miss_collisions
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_hour/cache_readaheads
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_readaheads
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/bypassed
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/bypassed
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_bypass_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_bypass_hits
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_bypass_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_bypass_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_hit_ratio
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_hit_ratio
Lines: 1
-100
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_hits
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_hits
Lines: 1
-546
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_miss_collisions
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_miss_collisions
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_misses
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_misses
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata4/host3/target3:0:0/3:0:0:0/block/sdb/bcache/stats_total/cache_readaheads
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_readaheads
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/bypassed
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_bypass_hits
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_bypass_misses
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache
-Mode: 755
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_hit_ratio
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/io_errors
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_hits
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/metadata_written
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_miss_collisions
Lines: 1
-512
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/priority_stats
-Lines: 5
-Unused: 99%
-Metadata: 0%
-Average: 10473
-Sectors per Q: 64
-Quantiles: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946]
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_misses
+Lines: 1
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/pci0000:00/0000:00:0d.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc/bcache/written
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_readaheads
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/rbd
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/rbd/0
+Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/rbd/0/name
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/bypassed
Lines: 1
-demo
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/rbd/0/pool
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_bypass_hits
Lines: 1
-iscsi-images
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/rbd/1
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/rbd/1/name
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_bypass_misses
Lines: 1
-wrong
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/rbd/1/pool
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_hit_ratio
Lines: 1
-wrong-images
+100
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system
-Mode: 775
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/clocksource
-Mode: 775
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_hits
+Lines: 1
+546
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/clocksource/clocksource0
-Mode: 775
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_miss_collisions
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/clocksource/clocksource0/available_clocksource
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_misses
Lines: 1
-tsc hpet acpi_pm
-Mode: 444
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/clocksource/clocksource0/current_clocksource
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_readaheads
Lines: 1
-tsc
+0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu
-Mode: 775
+Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/tree_depth
+Lines: 1
+0
+Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu/cpu0
-Mode: 775
+Directory: fixtures/sys/fs/btrfs
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu0/cpufreq
-SymlinkTo: ../cpufreq/policy0
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu/cpu1
-Mode: 775
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu/cpu1/cpufreq
-Mode: 775
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_cur_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_may_use
Lines: 1
-1200195
-Mode: 400
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_max_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_pinned
Lines: 1
-3300000
-Mode: 664
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_min_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_readonly
Lines: 1
-1200000
-Mode: 664
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/cpuinfo_transition_latency
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_reserved
Lines: 1
-4294967295
-Mode: 664
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/related_cpus
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/bytes_used
Lines: 1
-1
-Mode: 664
+808189952
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/disk_total
Lines: 1
-performance powersave
-Mode: 664
+2147483648
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/disk_used
Lines: 1
-intel_pstate
-Mode: 664
+808189952
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/flags
Lines: 1
-powersave
-Mode: 664
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq
-Lines: 1
-3300000
-Mode: 664
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/raid0
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/raid0/total_bytes
Lines: 1
-1200000
-Mode: 664
+2147483648
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpu1/cpufreq/scaling_setspeed
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/raid0/used_bytes
Lines: 1
-<unsupported>
-Mode: 664
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu/cpufreq
-Mode: 775
+808189952
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu/cpufreq/policy0
-Mode: 775
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/total_bytes
+Lines: 1
+2147483648
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/affected_cpus
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/data/total_bytes_pinned
Lines: 1
0
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_max_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/global_rsv_reserved
Lines: 1
-2400000
+16777216
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_min_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/global_rsv_size
Lines: 1
-800000
+16777216
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/cpuinfo_transition_latency
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_may_use
Lines: 1
-0
+16777216
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/related_cpus
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_pinned
Lines: 1
0
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_available_governors
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_readonly
Lines: 1
-performance powersave
+131072
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_cur_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_reserved
Lines: 1
-1219917
+0
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_driver
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/bytes_used
Lines: 1
-intel_pstate
+933888
Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_governor
-Lines: 1
-powersave
-Mode: 644
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_max_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/disk_total
Lines: 1
-2400000
-Mode: 644
+2147483648
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_min_freq
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/disk_used
Lines: 1
-800000
-Mode: 644
+1867776
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/devices/system/cpu/cpufreq/policy0/scaling_setspeed
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/flags
Lines: 1
-<unsupported>
-Mode: 644
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/devices/system/cpu/cpufreq/policy1
-Mode: 755
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs
-Mode: 755
+4
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/raid1
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74
-Mode: 755
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/raid1/total_bytes
+Lines: 1
+1073741824
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/average_key_size
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/raid1/used_bytes
Lines: 1
-0
-Mode: 644
+933888
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0
-Mode: 777
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/total_bytes
+Lines: 1
+1073741824
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/dirty_data
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/metadata/total_bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/bypassed
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_may_use
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_readonly
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_reserved
Lines: 1
-100
-Mode: 644
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_hits
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/bytes_used
Lines: 1
-289
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_miss_collisions
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/disk_total
Lines: 1
-0
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_misses
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/disk_used
Lines: 1
-0
-Mode: 644
+32768
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_day/cache_readaheads
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/flags
Lines: 1
-0
-Mode: 644
+2
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/raid1
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/bypassed
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/raid1/total_bytes
Lines: 1
-0
-Mode: 644
+8388608
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/raid1/used_bytes
Lines: 1
-0
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/total_bytes
Lines: 1
-0
-Mode: 644
+8388608
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/allocation/system/total_bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_hits
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/clone_alignment
Lines: 1
-0
-Mode: 644
+4096
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_miss_collisions
-Lines: 1
-0
-Mode: 644
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_misses
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop25
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop25/size
Lines: 1
-0
-Mode: 644
+20971520
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_five_minute/cache_readaheads
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop26
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/devices/loop26/size
Lines: 1
-0
-Mode: 644
+20971520
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour
+Directory: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/bypassed
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/big_metadata
+Lines: 1
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/extended_iref
Lines: 1
-0
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/mixed_backref
Lines: 1
-0
-Mode: 644
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/features/skinny_metadata
Lines: 1
-0
-Mode: 644
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/label
Lines: 1
-0
+fixture
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_hits
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/metadata_uuid
Lines: 1
-0
-Mode: 644
+0abb23a9-579b-43e6-ad30-227ef47fcb9d
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_miss_collisions
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/nodesize
Lines: 1
-0
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_misses
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/quota_override
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_hour/cache_readaheads
+Path: fixtures/sys/fs/btrfs/0abb23a9-579b-43e6-ad30-227ef47fcb9d/sectorsize
Lines: 1
-0
-Mode: 644
+4096
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/bypassed
-Lines: 1
-0
-Mode: 644
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_bypass_hits
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data
+Mode: 755
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_may_use
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_readonly
Lines: 1
-100
-Mode: 644
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_reserved
Lines: 1
-546
-Mode: 644
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_miss_collisions
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/bytes_used
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/disk_total
Lines: 1
-0
-Mode: 644
+644087808
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/bdev0/stats_total/cache_readaheads
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/disk_used
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/btree_cache_size
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/flags
Lines: 1
-0
-Mode: 644
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0
-Mode: 777
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/raid5
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/io_errors
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/raid5/total_bytes
Lines: 1
-0
-Mode: 644
+644087808
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/metadata_written
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/raid5/used_bytes
Lines: 1
-512
-Mode: 644
+0
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/priority_stats
-Lines: 5
-Unused: 99%
-Metadata: 0%
-Average: 10473
-Sectors per Q: 64
-Quantiles: [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946 20946]
-Mode: 644
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/total_bytes
+Lines: 1
+644087808
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache0/written
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/data/total_bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/cache_available_percent
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/global_rsv_reserved
Lines: 1
-100
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/congested
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/global_rsv_size
Lines: 1
-0
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/active_journal_entries
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_may_use
Lines: 1
-1
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/btree_nodes
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/btree_read_average_duration_us
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_readonly
Lines: 1
-1305
-Mode: 644
+262144
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/internal/cache_read_races
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_reserved
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/root_usage_percent
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/bytes_used
Lines: 1
-0
-Mode: 644
-# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day
-Mode: 755
+114688
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/bypassed
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/disk_total
Lines: 1
-0
-Mode: 644
+429391872
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/disk_used
Lines: 1
-0
-Mode: 644
+114688
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/flags
Lines: 1
-0
-Mode: 644
+4
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_hit_ratio
-Lines: 1
-100
-Mode: 644
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/raid6
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/raid6/total_bytes
Lines: 1
-289
-Mode: 644
+429391872
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_miss_collisions
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/raid6/used_bytes
Lines: 1
-0
-Mode: 644
+114688
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/total_bytes
Lines: 1
-0
-Mode: 644
+429391872
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_day/cache_readaheads
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/metadata/total_bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/bypassed
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_may_use
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_readonly
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_reserved
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/bytes_used
Lines: 1
-0
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_miss_collisions
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/disk_total
Lines: 1
-0
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/disk_used
Lines: 1
-0
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_five_minute/cache_readaheads
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/flags
Lines: 1
-0
-Mode: 644
+2
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/raid6
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/bypassed
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/raid6/total_bytes
Lines: 1
-0
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/raid6/used_bytes
Lines: 1
-0
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/total_bytes
Lines: 1
-0
-Mode: 644
+16777216
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/allocation/system/total_bytes_pinned
Lines: 1
0
-Mode: 644
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/clone_alignment
Lines: 1
-0
-Mode: 644
+4096
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_miss_collisions
-Lines: 1
-0
-Mode: 644
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices
+Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_misses
-Lines: 1
-0
-Mode: 644
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop22
+SymlinkTo: ../../../../devices/virtual/block/loop22
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_hour/cache_readaheads
-Lines: 1
-0
-Mode: 644
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop23
+SymlinkTo: ../../../../devices/virtual/block/loop23
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Directory: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop24
+SymlinkTo: ../../../../devices/virtual/block/loop24
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/devices/loop25
+SymlinkTo: ../../../../devices/virtual/block/loop25
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Directory: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features
Mode: 755
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/bypassed
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/big_metadata
Lines: 1
-0
-Mode: 644
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_bypass_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/extended_iref
Lines: 1
-0
+1
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_bypass_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/mixed_backref
Lines: 1
-0
-Mode: 644
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_hit_ratio
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/raid56
Lines: 1
-100
-Mode: 644
+1
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_hits
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/features/skinny_metadata
Lines: 1
-546
+1
+Mode: 444
+# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/label
+Lines: 0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_miss_collisions
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/metadata_uuid
Lines: 1
-0
-Mode: 644
+7f07c59f-6136-449c-ab87-e1cf2328731b
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_misses
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/nodesize
Lines: 1
-0
-Mode: 644
+16384
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/stats_total/cache_readaheads
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/quota_override
Lines: 1
0
Mode: 644
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
-Path: fixtures/sys/fs/bcache/deaddd54-c735-46d5-868e-f331c5fd7c74/tree_depth
+Path: fixtures/sys/fs/btrfs/7f07c59f-6136-449c-ab87-e1cf2328731b/sectorsize
Lines: 1
-0
-Mode: 644
+4096
+Mode: 444
# ttar - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Directory: fixtures/sys/fs/xfs
Mode: 755
diff --git a/vendor/github.com/prometheus/procfs/go.mod b/vendor/github.com/prometheus/procfs/go.mod
index b2f8cca933397..0e04e5d1fdace 100644
--- a/vendor/github.com/prometheus/procfs/go.mod
+++ b/vendor/github.com/prometheus/procfs/go.mod
@@ -1,6 +1,8 @@
module github.com/prometheus/procfs
+go 1.12
+
require (
- github.com/google/go-cmp v0.3.0
- golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4
+ github.com/google/go-cmp v0.3.1
+ golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e
)
diff --git a/vendor/github.com/prometheus/procfs/go.sum b/vendor/github.com/prometheus/procfs/go.sum
index db54133d7ca5b..33b824b01bcf4 100644
--- a/vendor/github.com/prometheus/procfs/go.sum
+++ b/vendor/github.com/prometheus/procfs/go.sum
@@ -1,4 +1,4 @@
-github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
-github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4 h1:YUO/7uOKsKeq9UokNS62b8FYywz3ker1l1vDZRCRefw=
-golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
+github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e h1:vcxGaoTs7kV8m5Np9uUNQin4BrLOthgV7252N8V+FwY=
+golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
diff --git a/vendor/github.com/prometheus/procfs/internal/fs/fs.go b/vendor/github.com/prometheus/procfs/internal/fs/fs.go
index 7ddfd6b6ed627..565e89e42c4dc 100644
--- a/vendor/github.com/prometheus/procfs/internal/fs/fs.go
+++ b/vendor/github.com/prometheus/procfs/internal/fs/fs.go
@@ -26,7 +26,7 @@ const (
// DefaultSysMountPoint is the common mount point of the sys filesystem.
DefaultSysMountPoint = "/sys"
- // DefaultConfigfsMountPoint is the commont mount point of the configfs
+ // DefaultConfigfsMountPoint is the common mount point of the configfs
DefaultConfigfsMountPoint = "/sys/kernel/config"
)
diff --git a/vendor/github.com/prometheus/procfs/internal/util/parse.go b/vendor/github.com/prometheus/procfs/internal/util/parse.go
new file mode 100644
index 0000000000000..755591d9a5e96
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/internal/util/parse.go
@@ -0,0 +1,88 @@
+// Copyright 2018 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package util
+
+import (
+ "io/ioutil"
+ "strconv"
+ "strings"
+)
+
+// ParseUint32s parses a slice of strings into a slice of uint32s.
+func ParseUint32s(ss []string) ([]uint32, error) {
+ us := make([]uint32, 0, len(ss))
+ for _, s := range ss {
+ u, err := strconv.ParseUint(s, 10, 32)
+ if err != nil {
+ return nil, err
+ }
+
+ us = append(us, uint32(u))
+ }
+
+ return us, nil
+}
+
+// ParseUint64s parses a slice of strings into a slice of uint64s.
+func ParseUint64s(ss []string) ([]uint64, error) {
+ us := make([]uint64, 0, len(ss))
+ for _, s := range ss {
+ u, err := strconv.ParseUint(s, 10, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ us = append(us, u)
+ }
+
+ return us, nil
+}
+
+// ParsePInt64s parses a slice of strings into a slice of int64 pointers.
+func ParsePInt64s(ss []string) ([]*int64, error) {
+ us := make([]*int64, 0, len(ss))
+ for _, s := range ss {
+ u, err := strconv.ParseInt(s, 10, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ us = append(us, &u)
+ }
+
+ return us, nil
+}
+
+// ReadUintFromFile reads a file and attempts to parse a uint64 from it.
+func ReadUintFromFile(path string) (uint64, error) {
+ data, err := ioutil.ReadFile(path)
+ if err != nil {
+ return 0, err
+ }
+ return strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
+}
+
+// ParseBool parses a string into a boolean pointer.
+func ParseBool(b string) *bool {
+ var truth bool
+ switch b {
+ case "enabled":
+ truth = true
+ case "disabled":
+ truth = false
+ default:
+ return nil
+ }
+ return &truth
+}
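
The helpers above are intentionally tiny building blocks. As a minimal sketch of how they compose — note that `internal/util` is only importable from inside the procfs module itself, and `demoParseHelpers` is a hypothetical name used purely for illustration:

```go
package procfs

import (
	"fmt"

	"github.com/prometheus/procfs/internal/util"
)

// demoParseHelpers is illustrative only; it is not part of the library.
func demoParseHelpers() {
	// ParseUint64s converts a slice of whitespace-split numeric fields in one pass.
	vals, err := util.ParseUint64s([]string{"1", "42", "65536"})
	if err != nil {
		panic(err)
	}
	fmt.Println(vals) // [1 42 65536]

	// ParseBool maps the sysfs "enabled"/"disabled" convention onto *bool;
	// anything else yields nil rather than an error.
	if b := util.ParseBool("enabled"); b != nil && *b {
		fmt.Println("feature enabled")
	}
}
```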
diff --git a/vendor/github.com/prometheus/procfs/internal/util/readfile.go b/vendor/github.com/prometheus/procfs/internal/util/readfile.go
new file mode 100644
index 0000000000000..8051161b2aa4d
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/internal/util/readfile.go
@@ -0,0 +1,38 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package util
+
+import (
+ "io"
+ "io/ioutil"
+ "os"
+)
+
+// ReadFileNoStat uses ioutil.ReadAll to read contents of entire file.
+// This is similar to ioutil.ReadFile but without the call to os.Stat, because
+// many files in /proc and /sys report incorrect file sizes (either 0 or 4096).
+// Reads a max file size of 512kB. For files larger than this, a scanner
+// should be used.
+func ReadFileNoStat(filename string) ([]byte, error) {
+ const maxBufferSize = 1024 * 512
+
+ f, err := os.Open(filename)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+
+ reader := io.LimitReader(f, maxBufferSize)
+ return ioutil.ReadAll(reader)
+}
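
`ReadFileNoStat` matters because the kernel reports a bogus size (0 or 4096) for most `/proc` and `/sys` entries, so size-based reads truncate or over-allocate. A standalone sketch of the same open-then-`LimitReader` pattern using only the standard library, assuming a Linux host:

```go
package main

import (
	"fmt"
	"io"
	"io/ioutil"
	"os"
)

// readProcFile mirrors util.ReadFileNoStat: open without stat, then read
// through a LimitReader so a misbehaving /proc entry cannot return unbounded data.
func readProcFile(name string) ([]byte, error) {
	f, err := os.Open(name)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	return ioutil.ReadAll(io.LimitReader(f, 512*1024))
}

func main() {
	b, err := readProcFile("/proc/self/cmdline")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("read %d bytes\n", len(b))
}
```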
diff --git a/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go b/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go
new file mode 100644
index 0000000000000..c07de0b6c9c6a
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/internal/util/sysreadfile.go
@@ -0,0 +1,48 @@
+// Copyright 2018 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +build linux,!appengine
+
+package util
+
+import (
+ "bytes"
+ "os"
+ "syscall"
+)
+
+// SysReadFile is a simplified ioutil.ReadFile that invokes syscall.Read directly.
+// https://github.com/prometheus/node_exporter/pull/728/files
+//
+// Note that this function will not read files larger than 128 bytes.
+func SysReadFile(file string) (string, error) {
+ f, err := os.Open(file)
+ if err != nil {
+ return "", err
+ }
+ defer f.Close()
+
+ // On some machines, hwmon drivers are broken and return EAGAIN. This causes
+ // Go's ioutil.ReadFile implementation to poll forever.
+ //
+ // Since we either want to read data or bail immediately, do the simplest
+ // possible read using syscall directly.
+ const sysFileBufferSize = 128
+ b := make([]byte, sysFileBufferSize)
+ n, err := syscall.Read(int(f.Fd()), b)
+ if err != nil {
+ return "", err
+ }
+
+ return string(bytes.TrimSpace(b[:n])), nil
+}
diff --git a/vendor/github.com/prometheus/procfs/internal/util/sysreadfile_compat.go b/vendor/github.com/prometheus/procfs/internal/util/sysreadfile_compat.go
new file mode 100644
index 0000000000000..bd55b45377dbc
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/internal/util/sysreadfile_compat.go
@@ -0,0 +1,26 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +build linux,appengine !linux
+
+package util
+
+import (
+ "fmt"
+)
+
+// SysReadFile is here implemented as a noop for builds that do not support
+// the read syscall. For example Windows, or Linux on Google App Engine.
+func SysReadFile(file string) (string, error) {
+ return "", fmt.Errorf("not supported on this platform")
+}
diff --git a/vendor/github.com/prometheus/procfs/internal/util/valueparser.go b/vendor/github.com/prometheus/procfs/internal/util/valueparser.go
new file mode 100644
index 0000000000000..ac93cb42d2c7f
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/internal/util/valueparser.go
@@ -0,0 +1,77 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package util
+
+import (
+ "strconv"
+)
+
+// TODO(mdlayher): util packages are an anti-pattern and this should be moved
+// somewhere else that is more focused in the future.
+
+// A ValueParser enables parsing a single string into a variety of data types
+// in a concise and safe way. The Err method must be invoked after invoking
+// any other methods to ensure a value was successfully parsed.
+type ValueParser struct {
+ v string
+ err error
+}
+
+// NewValueParser creates a ValueParser using the input string.
+func NewValueParser(v string) *ValueParser {
+ return &ValueParser{v: v}
+}
+
+// PInt64 interprets the underlying value as an int64 and returns a pointer to
+// that value.
+func (vp *ValueParser) PInt64() *int64 {
+ if vp.err != nil {
+ return nil
+ }
+
+ // A base value of zero makes ParseInt infer the correct base using the
+ // string's prefix, if any.
+ const base = 0
+ v, err := strconv.ParseInt(vp.v, base, 64)
+ if err != nil {
+ vp.err = err
+ return nil
+ }
+
+ return &v
+}
+
+// PUInt64 interprets the underlying value as a uint64 and returns a pointer to
+// that value.
+func (vp *ValueParser) PUInt64() *uint64 {
+ if vp.err != nil {
+ return nil
+ }
+
+	// A base value of zero makes ParseUint infer the correct base using the
+	// string's prefix, if any.
+ const base = 0
+ v, err := strconv.ParseUint(vp.v, base, 64)
+ if err != nil {
+ vp.err = err
+ return nil
+ }
+
+ return &v
+}
+
+// Err returns the last error, if any, encountered by the ValueParser.
+func (vp *ValueParser) Err() error {
+ return vp.err
+}
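
The point of `ValueParser` is deferred error handling: the parse methods return nil on failure and record the error, so a call site checks `Err` once at the end. A minimal sketch of the intended pattern (again, `internal/util` is only importable from inside the module; `demoValueParser` is a hypothetical name):

```go
package procfs

import (
	"fmt"

	"github.com/prometheus/procfs/internal/util"
)

// demoValueParser is illustrative only; it is not part of the library.
func demoValueParser() {
	vp := util.NewValueParser("0x10")
	v := vp.PInt64() // base 0, so the 0x prefix selects hexadecimal
	if err := vp.Err(); err != nil {
		fmt.Println("parse failed:", err)
		return
	}
	fmt.Println(*v) // 16
}
```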
diff --git a/vendor/github.com/prometheus/procfs/ipvs.go b/vendor/github.com/prometheus/procfs/ipvs.go
index 2d6cb8d1c6c8b..89e447746cfe1 100644
--- a/vendor/github.com/prometheus/procfs/ipvs.go
+++ b/vendor/github.com/prometheus/procfs/ipvs.go
@@ -15,6 +15,7 @@ package procfs
import (
"bufio"
+ "bytes"
"encoding/hex"
"errors"
"fmt"
@@ -24,6 +25,8 @@ import (
"os"
"strconv"
"strings"
+
+ "github.com/prometheus/procfs/internal/util"
)
// IPVSStats holds IPVS statistics, as exposed by the kernel in `/proc/net/ip_vs_stats`.
@@ -64,17 +67,16 @@ type IPVSBackendStatus struct {
// IPVSStats reads the IPVS statistics from the specified `proc` filesystem.
func (fs FS) IPVSStats() (IPVSStats, error) {
- file, err := os.Open(fs.proc.Path("net/ip_vs_stats"))
+ data, err := util.ReadFileNoStat(fs.proc.Path("net/ip_vs_stats"))
if err != nil {
return IPVSStats{}, err
}
- defer file.Close()
- return parseIPVSStats(file)
+ return parseIPVSStats(bytes.NewReader(data))
}
// parseIPVSStats performs the actual parsing of `ip_vs_stats`.
-func parseIPVSStats(file io.Reader) (IPVSStats, error) {
+func parseIPVSStats(r io.Reader) (IPVSStats, error) {
var (
statContent []byte
statLines []string
@@ -82,7 +84,7 @@ func parseIPVSStats(file io.Reader) (IPVSStats, error) {
stats IPVSStats
)
- statContent, err := ioutil.ReadAll(file)
+ statContent, err := ioutil.ReadAll(r)
if err != nil {
return IPVSStats{}, err
}
diff --git a/vendor/github.com/prometheus/procfs/meminfo.go b/vendor/github.com/prometheus/procfs/meminfo.go
new file mode 100644
index 0000000000000..35b7a0d7268df
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/meminfo.go
@@ -0,0 +1,456 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+ "bufio"
+ "bytes"
+ "log"
+ "strconv"
+ "strings"
+
+ "github.com/prometheus/procfs/internal/util"
+)
+
+// Meminfo represents memory statistics.
+type Meminfo struct {
+ // Total usable ram (i.e. physical ram minus a few reserved
+ // bits and the kernel binary code)
+ MemTotal uint64
+ // The sum of LowFree+HighFree
+ MemFree uint64
+ // An estimate of how much memory is available for starting
+ // new applications, without swapping. Calculated from
+ // MemFree, SReclaimable, the size of the file LRU lists, and
+ // the low watermarks in each zone. The estimate takes into
+ // account that the system needs some page cache to function
+ // well, and that not all reclaimable slab will be
+ // reclaimable, due to items being in use. The impact of those
+ // factors will vary from system to system.
+ MemAvailable uint64
+ // Relatively temporary storage for raw disk blocks shouldn't
+ // get tremendously large (20MB or so)
+ Buffers uint64
+ Cached uint64
+ // Memory that once was swapped out, is swapped back in but
+ // still also is in the swapfile (if memory is needed it
+ // doesn't need to be swapped out AGAIN because it is already
+ // in the swapfile. This saves I/O)
+ SwapCached uint64
+ // Memory that has been used more recently and usually not
+ // reclaimed unless absolutely necessary.
+ Active uint64
+ // Memory which has been less recently used. It is more
+ // eligible to be reclaimed for other purposes
+ Inactive uint64
+ ActiveAnon uint64
+ InactiveAnon uint64
+ ActiveFile uint64
+ InactiveFile uint64
+ Unevictable uint64
+ Mlocked uint64
+ // total amount of swap space available
+ SwapTotal uint64
+ // Memory which has been evicted from RAM, and is temporarily
+ // on the disk
+ SwapFree uint64
+ // Memory which is waiting to get written back to the disk
+ Dirty uint64
+ // Memory which is actively being written back to the disk
+ Writeback uint64
+ // Non-file backed pages mapped into userspace page tables
+ AnonPages uint64
+ // files which have been mapped, such as libraries
+ Mapped uint64
+ Shmem uint64
+ // in-kernel data structures cache
+ Slab uint64
+ // Part of Slab, that might be reclaimed, such as caches
+ SReclaimable uint64
+ // Part of Slab, that cannot be reclaimed on memory pressure
+ SUnreclaim uint64
+ KernelStack uint64
+ // amount of memory dedicated to the lowest level of page
+ // tables.
+ PageTables uint64
+ // NFS pages sent to the server, but not yet committed to
+ // stable storage
+ NFSUnstable uint64
+ // Memory used for block device "bounce buffers"
+ Bounce uint64
+ // Memory used by FUSE for temporary writeback buffers
+ WritebackTmp uint64
+ // Based on the overcommit ratio ('vm.overcommit_ratio'),
+ // this is the total amount of memory currently available to
+ // be allocated on the system. This limit is only adhered to
+ // if strict overcommit accounting is enabled (mode 2 in
+ // 'vm.overcommit_memory').
+ // The CommitLimit is calculated with the following formula:
+ // CommitLimit = ([total RAM pages] - [total huge TLB pages]) *
+ // overcommit_ratio / 100 + [total swap pages]
+ // For example, on a system with 1G of physical RAM and 7G
+ // of swap with a `vm.overcommit_ratio` of 30 it would
+ // yield a CommitLimit of 7.3G.
+ // For more details, see the memory overcommit documentation
+ // in vm/overcommit-accounting.
+ CommitLimit uint64
+ // The amount of memory presently allocated on the system.
+ // The committed memory is a sum of all of the memory which
+ // has been allocated by processes, even if it has not been
+ // "used" by them as of yet. A process which malloc()'s 1G
+ // of memory, but only touches 300M of it will show up as
+ // using 1G. This 1G is memory which has been "committed" to
+ // by the VM and can be used at any time by the allocating
+ // application. With strict overcommit enabled on the system
+ // (mode 2 in 'vm.overcommit_memory'),allocations which would
+ // exceed the CommitLimit (detailed above) will not be permitted.
+ // This is useful if one needs to guarantee that processes will
+ // not fail due to lack of memory once that memory has been
+ // successfully allocated.
+ CommittedAS uint64
+ // total size of vmalloc memory area
+ VmallocTotal uint64
+ // amount of vmalloc area which is used
+ VmallocUsed uint64
+ // largest contiguous block of vmalloc area which is free
+ VmallocChunk uint64
+ HardwareCorrupted uint64
+ AnonHugePages uint64
+ ShmemHugePages uint64
+ ShmemPmdMapped uint64
+ CmaTotal uint64
+ CmaFree uint64
+ HugePagesTotal uint64
+ HugePagesFree uint64
+ HugePagesRsvd uint64
+ HugePagesSurp uint64
+ Hugepagesize uint64
+ DirectMap4k uint64
+ DirectMap2M uint64
+ DirectMap1G uint64
+}
+
+// Meminfo returns information about current kernel/system memory statistics.
+// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
+func (fs FS) Meminfo() (Meminfo, error) {
+ data, err := util.ReadFileNoStat(fs.proc.Path("meminfo"))
+ if err != nil {
+ return Meminfo{}, err
+ }
+ return parseMemInfo(data)
+}
+
+func parseMemInfo(info []byte) (m Meminfo, err error) {
+ scanner := bufio.NewScanner(bytes.NewReader(info))
+
+ var line string
+ for scanner.Scan() {
+		line = scanner.Text()
+
+		field := strings.Fields(line)
+ switch field[0] {
+ case "MemTotal:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.MemTotal = v
+ case "MemFree:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.MemFree = v
+ case "MemAvailable:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.MemAvailable = v
+ case "Buffers:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Buffers = v
+ case "Cached:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Cached = v
+ case "SwapCached:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.SwapCached = v
+ case "Active:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Active = v
+ case "Inactive:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Inactive = v
+ case "Active(anon):":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.ActiveAnon = v
+ case "Inactive(anon):":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.InactiveAnon = v
+ case "Active(file):":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.ActiveFile = v
+ case "Inactive(file):":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.InactiveFile = v
+ case "Unevictable:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Unevictable = v
+ case "Mlocked:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Mlocked = v
+ case "SwapTotal:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.SwapTotal = v
+ case "SwapFree:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.SwapFree = v
+ case "Dirty:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Dirty = v
+ case "Writeback:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Writeback = v
+ case "AnonPages:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.AnonPages = v
+ case "Mapped:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Mapped = v
+ case "Shmem:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Shmem = v
+ case "Slab:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Slab = v
+ case "SReclaimable:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.SReclaimable = v
+ case "SUnreclaim:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.SUnreclaim = v
+ case "KernelStack:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.KernelStack = v
+ case "PageTables:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.PageTables = v
+ case "NFS_Unstable:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.NFSUnstable = v
+ case "Bounce:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Bounce = v
+ case "WritebackTmp:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.WritebackTmp = v
+ case "CommitLimit:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.CommitLimit = v
+ case "Committed_AS:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.CommittedAS = v
+ case "VmallocTotal:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.VmallocTotal = v
+ case "VmallocUsed:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.VmallocUsed = v
+ case "VmallocChunk:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.VmallocChunk = v
+ case "HardwareCorrupted:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.HardwareCorrupted = v
+ case "AnonHugePages:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.AnonHugePages = v
+ case "ShmemHugePages:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.ShmemHugePages = v
+ case "ShmemPmdMapped:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.ShmemPmdMapped = v
+ case "CmaTotal:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.CmaTotal = v
+ case "CmaFree:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.CmaFree = v
+ case "HugePages_Total:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.HugePagesTotal = v
+ case "HugePages_Free:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.HugePagesFree = v
+ case "HugePages_Rsvd:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.HugePagesRsvd = v
+ case "HugePages_Surp:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.HugePagesSurp = v
+ case "Hugepagesize:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.Hugepagesize = v
+ case "DirectMap4k:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.DirectMap4k = v
+ case "DirectMap2M:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.DirectMap2M = v
+ case "DirectMap1G:":
+ v, err := strconv.ParseUint(field[1], 0, 64)
+ if err != nil {
+ return Meminfo{}, err
+ }
+ m.DirectMap1G = v
+ }
+ }
+ return m, nil
+}
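
A minimal consumer sketch for the new API, assuming a Linux host with `/proc` mounted at the default location (values in `/proc/meminfo` are reported by the kernel in kB):

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	fs, err := procfs.NewDefaultFS()
	if err != nil {
		log.Fatal(err)
	}
	mi, err := fs.Meminfo()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("total=%d kB free=%d kB available=%d kB\n",
		mi.MemTotal, mi.MemFree, mi.MemAvailable)
}
```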
diff --git a/vendor/github.com/prometheus/procfs/mountinfo.go b/vendor/github.com/prometheus/procfs/mountinfo.go
index 61fa618874c4a..bb01bb5a2a9f3 100644
--- a/vendor/github.com/prometheus/procfs/mountinfo.go
+++ b/vendor/github.com/prometheus/procfs/mountinfo.go
@@ -15,19 +15,13 @@ package procfs
import (
"bufio"
+ "bytes"
"fmt"
- "io"
- "os"
"strconv"
"strings"
-)
-var validOptionalFields = map[string]bool{
- "shared": true,
- "master": true,
- "propagate_from": true,
- "unbindable": true,
-}
+ "github.com/prometheus/procfs/internal/util"
+)
// A MountInfo is a type that describes the details, options
// for each mount, parsed from /proc/self/mountinfo.
@@ -58,18 +52,10 @@ type MountInfo struct {
SuperOptions map[string]string
}
-// Returns part of the mountinfo line, if it exists, else an empty string.
-func getStringSliceElement(parts []string, idx int, defaultValue string) string {
- if idx >= len(parts) {
- return defaultValue
- }
- return parts[idx]
-}
-
// Reads each line of the mountinfo file, and returns a list of formatted MountInfo structs.
-func parseMountInfo(r io.Reader) ([]*MountInfo, error) {
+func parseMountInfo(info []byte) ([]*MountInfo, error) {
mounts := []*MountInfo{}
- scanner := bufio.NewScanner(r)
+ scanner := bufio.NewScanner(bytes.NewReader(info))
for scanner.Scan() {
mountString := scanner.Text()
parsedMounts, err := parseMountInfoString(mountString)
@@ -89,57 +75,75 @@ func parseMountInfo(r io.Reader) ([]*MountInfo, error) {
func parseMountInfoString(mountString string) (*MountInfo, error) {
var err error
- // OptionalFields can be zero, hence these checks to ensure we do not populate the wrong values in the wrong spots
- separatorIndex := strings.Index(mountString, "-")
- if separatorIndex == -1 {
- return nil, fmt.Errorf("no separator found in mountinfo string: %s", mountString)
+ mountInfo := strings.Split(mountString, " ")
+ mountInfoLength := len(mountInfo)
+ if mountInfoLength < 11 {
+ return nil, fmt.Errorf("couldn't find enough fields in mount string: %s", mountString)
}
- beforeFields := strings.Fields(mountString[:separatorIndex])
- afterFields := strings.Fields(mountString[separatorIndex+1:])
- if (len(beforeFields) + len(afterFields)) < 7 {
- return nil, fmt.Errorf("too few fields")
+
+ if mountInfo[mountInfoLength-4] != "-" {
+ return nil, fmt.Errorf("couldn't find separator in expected field: %s", mountInfo[mountInfoLength-4])
}
mount := &MountInfo{
- MajorMinorVer: getStringSliceElement(beforeFields, 2, ""),
- Root: getStringSliceElement(beforeFields, 3, ""),
- MountPoint: getStringSliceElement(beforeFields, 4, ""),
- Options: mountOptionsParser(getStringSliceElement(beforeFields, 5, "")),
+ MajorMinorVer: mountInfo[2],
+ Root: mountInfo[3],
+ MountPoint: mountInfo[4],
+ Options: mountOptionsParser(mountInfo[5]),
OptionalFields: nil,
- FSType: getStringSliceElement(afterFields, 0, ""),
- Source: getStringSliceElement(afterFields, 1, ""),
- SuperOptions: mountOptionsParser(getStringSliceElement(afterFields, 2, "")),
+ FSType: mountInfo[mountInfoLength-3],
+ Source: mountInfo[mountInfoLength-2],
+ SuperOptions: mountOptionsParser(mountInfo[mountInfoLength-1]),
}
- mount.MountId, err = strconv.Atoi(getStringSliceElement(beforeFields, 0, ""))
+ mount.MountId, err = strconv.Atoi(mountInfo[0])
if err != nil {
return nil, fmt.Errorf("failed to parse mount ID")
}
- mount.ParentId, err = strconv.Atoi(getStringSliceElement(beforeFields, 1, ""))
+ mount.ParentId, err = strconv.Atoi(mountInfo[1])
if err != nil {
return nil, fmt.Errorf("failed to parse parent ID")
}
// Has optional fields, which is a space separated list of values.
// Example: shared:2 master:7
- if len(beforeFields) > 6 {
- mount.OptionalFields = make(map[string]string)
- optionalFields := beforeFields[6:]
- for _, field := range optionalFields {
- optionSplit := strings.Split(field, ":")
- target, value := optionSplit[0], ""
- if len(optionSplit) == 2 {
- value = optionSplit[1]
- }
- // Checks if the 'keys' in the optional fields in the mountinfo line are acceptable.
- // Allowed 'keys' are shared, master, propagate_from, unbindable.
- if _, ok := validOptionalFields[target]; ok {
- mount.OptionalFields[target] = value
- }
+ if mountInfo[6] != "" {
+ mount.OptionalFields, err = mountOptionsParseOptionalFields(mountInfo[6 : mountInfoLength-4])
+ if err != nil {
+ return nil, err
}
}
return mount, nil
}
+// mountOptionsIsValidField checks a string against a valid list of optional fields keys.
+func mountOptionsIsValidField(s string) bool {
+ switch s {
+ case
+ "shared",
+ "master",
+ "propagate_from",
+ "unbindable":
+ return true
+ }
+ return false
+}
+
+// mountOptionsParseOptionalFields parses a list of optional fields strings into a map of strings.
+func mountOptionsParseOptionalFields(o []string) (map[string]string, error) {
+ optionalFields := make(map[string]string)
+ for _, field := range o {
+ optionSplit := strings.SplitN(field, ":", 2)
+ value := ""
+ if len(optionSplit) == 2 {
+ value = optionSplit[1]
+ }
+ if mountOptionsIsValidField(optionSplit[0]) {
+ optionalFields[optionSplit[0]] = value
+ }
+ }
+ return optionalFields, nil
+}
+
// Parses the mount options, superblock options.
func mountOptionsParser(mountOptions string) map[string]string {
opts := make(map[string]string)
@@ -159,20 +163,18 @@ func mountOptionsParser(mountOptions string) map[string]string {
// Retrieves mountinfo information from `/proc/self/mountinfo`.
func GetMounts() ([]*MountInfo, error) {
- f, err := os.Open("/proc/self/mountinfo")
+ data, err := util.ReadFileNoStat("/proc/self/mountinfo")
if err != nil {
return nil, err
}
- defer f.Close()
- return parseMountInfo(f)
+ return parseMountInfo(data)
}
// Retrieves mountinfo information from a processes' `/proc/<pid>/mountinfo`.
func GetProcMounts(pid int) ([]*MountInfo, error) {
- f, err := os.Open(fmt.Sprintf("/proc/%d/mountinfo", pid))
+ data, err := util.ReadFileNoStat(fmt.Sprintf("/proc/%d/mountinfo", pid))
if err != nil {
return nil, err
}
- defer f.Close()
- return parseMountInfo(f)
+ return parseMountInfo(data)
}
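
The rewrite anchors `FSType`, `Source`, and `SuperOptions` relative to the end of the line because the optional-fields block before the `-` separator is variable-length. A short usage sketch against the exported entry point, assuming Linux:

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	mounts, err := procfs.GetMounts()
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range mounts {
		_, ro := m.Options["ro"] // flag-style options parse to empty-string values
		fmt.Printf("%-24s %-8s ro=%v\n", m.MountPoint, m.FSType, ro)
	}
}
```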
diff --git a/vendor/github.com/prometheus/procfs/net_dev.go b/vendor/github.com/prometheus/procfs/net_dev.go
index a0b7a01196af7..47a710befb93f 100644
--- a/vendor/github.com/prometheus/procfs/net_dev.go
+++ b/vendor/github.com/prometheus/procfs/net_dev.go
@@ -183,7 +183,6 @@ func (netDev NetDev) Total() NetDevLine {
names = append(names, ifc.Name)
total.RxBytes += ifc.RxBytes
total.RxPackets += ifc.RxPackets
- total.RxPackets += ifc.RxPackets
total.RxErrors += ifc.RxErrors
total.RxDropped += ifc.RxDropped
total.RxFIFO += ifc.RxFIFO
diff --git a/vendor/github.com/prometheus/procfs/net_softnet.go b/vendor/github.com/prometheus/procfs/net_softnet.go
new file mode 100644
index 0000000000000..6fcad20afc75f
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/net_softnet.go
@@ -0,0 +1,91 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+ "fmt"
+ "io/ioutil"
+ "strconv"
+ "strings"
+)
+
+// For the proc file format details,
+// see https://elixir.bootlin.com/linux/v4.17/source/net/core/net-procfs.c#L162
+// and https://elixir.bootlin.com/linux/v4.17/source/include/linux/netdevice.h#L2810.
+
+// SoftnetEntry contains a single row of data from /proc/net/softnet_stat
+type SoftnetEntry struct {
+ // Number of processed packets
+ Processed uint
+ // Number of dropped packets
+ Dropped uint
+ // Number of times processing packets ran out of quota
+ TimeSqueezed uint
+}
+
+// GatherSoftnetStats reads /proc/net/softnet_stat, parses the relevant columns,
+// and returns a slice of SoftnetEntry structs.
+func (fs FS) GatherSoftnetStats() ([]SoftnetEntry, error) {
+ data, err := ioutil.ReadFile(fs.proc.Path("net/softnet_stat"))
+ if err != nil {
+ return nil, fmt.Errorf("error reading softnet %s: %s", fs.proc.Path("net/softnet_stat"), err)
+ }
+
+ return parseSoftnetEntries(data)
+}
+
+func parseSoftnetEntries(data []byte) ([]SoftnetEntry, error) {
+ lines := strings.Split(string(data), "\n")
+ entries := make([]SoftnetEntry, 0)
+ var err error
+ const (
+ expectedColumns = 11
+ )
+ for _, line := range lines {
+ columns := strings.Fields(line)
+ width := len(columns)
+ if width == 0 {
+ continue
+ }
+ if width != expectedColumns {
+ return []SoftnetEntry{}, fmt.Errorf("%d columns were detected, but %d were expected", width, expectedColumns)
+ }
+ var entry SoftnetEntry
+ if entry, err = parseSoftnetEntry(columns); err != nil {
+ return []SoftnetEntry{}, err
+ }
+ entries = append(entries, entry)
+ }
+
+ return entries, nil
+}
+
+func parseSoftnetEntry(columns []string) (SoftnetEntry, error) {
+ var err error
+ var processed, dropped, timeSqueezed uint64
+ if processed, err = strconv.ParseUint(columns[0], 16, 32); err != nil {
+ return SoftnetEntry{}, fmt.Errorf("Unable to parse column 0: %s", err)
+ }
+ if dropped, err = strconv.ParseUint(columns[1], 16, 32); err != nil {
+ return SoftnetEntry{}, fmt.Errorf("Unable to parse column 1: %s", err)
+ }
+ if timeSqueezed, err = strconv.ParseUint(columns[2], 16, 32); err != nil {
+ return SoftnetEntry{}, fmt.Errorf("Unable to parse column 2: %s", err)
+ }
+ return SoftnetEntry{
+ Processed: uint(processed),
+ Dropped: uint(dropped),
+ TimeSqueezed: uint(timeSqueezed),
+ }, nil
+}
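
The kernel prints each `softnet_stat` counter as a fixed-width hex field, one row per CPU, which is why the parser uses `strconv.ParseUint(..., 16, 32)`. A minimal sketch of reading it through the new API, assuming Linux:

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	fs, err := procfs.NewDefaultFS()
	if err != nil {
		log.Fatal(err)
	}
	entries, err := fs.GatherSoftnetStats()
	if err != nil {
		log.Fatal(err)
	}
	// Row i corresponds to CPU i.
	for cpu, e := range entries {
		fmt.Printf("cpu%d processed=%d dropped=%d squeezed=%d\n",
			cpu, e.Processed, e.Dropped, e.TimeSqueezed)
	}
}
```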
diff --git a/vendor/github.com/prometheus/procfs/proc.go b/vendor/github.com/prometheus/procfs/proc.go
index 41c148d0661fc..330e472c70fd2 100644
--- a/vendor/github.com/prometheus/procfs/proc.go
+++ b/vendor/github.com/prometheus/procfs/proc.go
@@ -22,6 +22,7 @@ import (
"strings"
"github.com/prometheus/procfs/internal/fs"
+ "github.com/prometheus/procfs/internal/util"
)
// Proc provides information about a running process.
@@ -121,13 +122,7 @@ func (fs FS) AllProcs() (Procs, error) {
// CmdLine returns the command line of a process.
func (p Proc) CmdLine() ([]string, error) {
- f, err := os.Open(p.path("cmdline"))
- if err != nil {
- return nil, err
- }
- defer f.Close()
-
- data, err := ioutil.ReadAll(f)
+ data, err := util.ReadFileNoStat(p.path("cmdline"))
if err != nil {
return nil, err
}
@@ -141,13 +136,7 @@ func (p Proc) CmdLine() ([]string, error) {
// Comm returns the command name of a process.
func (p Proc) Comm() (string, error) {
- f, err := os.Open(p.path("comm"))
- if err != nil {
- return "", err
- }
- defer f.Close()
-
- data, err := ioutil.ReadAll(f)
+ data, err := util.ReadFileNoStat(p.path("comm"))
if err != nil {
return "", err
}
@@ -252,13 +241,11 @@ func (p Proc) MountStats() ([]*Mount, error) {
// It supplies information missing in `/proc/self/mounts` and
// fixes various other problems with that file too.
func (p Proc) MountInfo() ([]*MountInfo, error) {
- f, err := os.Open(p.path("mountinfo"))
+ data, err := util.ReadFileNoStat(p.path("mountinfo"))
if err != nil {
return nil, err
}
- defer f.Close()
-
- return parseMountInfo(f)
+ return parseMountInfo(data)
}
func (p Proc) fileDescriptors() ([]string, error) {
@@ -279,3 +266,33 @@ func (p Proc) fileDescriptors() ([]string, error) {
func (p Proc) path(pa ...string) string {
return p.fs.Path(append([]string{strconv.Itoa(p.PID)}, pa...)...)
}
+
+// FileDescriptorsInfo retrieves information about all file descriptors of
+// the process.
+func (p Proc) FileDescriptorsInfo() (ProcFDInfos, error) {
+ names, err := p.fileDescriptors()
+ if err != nil {
+ return nil, err
+ }
+
+ var fdinfos ProcFDInfos
+
+ for _, n := range names {
+ fdinfo, err := p.FDInfo(n)
+ if err != nil {
+ continue
+ }
+ fdinfos = append(fdinfos, *fdinfo)
+ }
+
+ return fdinfos, nil
+}
+
+// Schedstat returns task scheduling information for the process.
+func (p Proc) Schedstat() (ProcSchedstat, error) {
+ contents, err := ioutil.ReadFile(p.path("schedstat"))
+ if err != nil {
+ return ProcSchedstat{}, err
+ }
+ return parseProcSchedstat(string(contents))
+}
diff --git a/vendor/github.com/prometheus/procfs/proc_environ.go b/vendor/github.com/prometheus/procfs/proc_environ.go
index 7172bb586e440..6134b3580c453 100644
--- a/vendor/github.com/prometheus/procfs/proc_environ.go
+++ b/vendor/github.com/prometheus/procfs/proc_environ.go
@@ -14,22 +14,16 @@
package procfs
import (
- "io/ioutil"
- "os"
"strings"
+
+ "github.com/prometheus/procfs/internal/util"
)
// Environ reads process environments from /proc/<pid>/environ
func (p Proc) Environ() ([]string, error) {
environments := make([]string, 0)
- f, err := os.Open(p.path("environ"))
- if err != nil {
- return environments, err
- }
- defer f.Close()
-
- data, err := ioutil.ReadAll(f)
+ data, err := util.ReadFileNoStat(p.path("environ"))
if err != nil {
return environments, err
}
diff --git a/vendor/github.com/prometheus/procfs/proc_fdinfo.go b/vendor/github.com/prometheus/procfs/proc_fdinfo.go
new file mode 100644
index 0000000000000..4e7597f86b6ea
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/proc_fdinfo.go
@@ -0,0 +1,125 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+	"bufio"
+	"bytes"
+	"fmt"
+	"regexp"
+
+	"github.com/prometheus/procfs/internal/util"
+)
+
+// Regexp variables
+var (
+ rPos = regexp.MustCompile(`^pos:\s+(\d+)$`)
+ rFlags = regexp.MustCompile(`^flags:\s+(\d+)$`)
+ rMntID = regexp.MustCompile(`^mnt_id:\s+(\d+)$`)
+ rInotify = regexp.MustCompile(`^inotify`)
+)
+
+// ProcFDInfo represents file descriptor information.
+type ProcFDInfo struct {
+ // File descriptor
+ FD string
+ // File offset
+ Pos string
+ // File access mode and status flags
+ Flags string
+ // Mount point ID
+ MntID string
+	// List of inotify lines (structured) in the fdinfo file (kernel 3.8+ only)
+ InotifyInfos []InotifyInfo
+}
+
+// FDInfo constructor. On kernels older than 3.8, InotifyInfos will always be empty.
+func (p Proc) FDInfo(fd string) (*ProcFDInfo, error) {
+ data, err := util.ReadFileNoStat(p.path("fdinfo", fd))
+ if err != nil {
+ return nil, err
+ }
+
+ var text, pos, flags, mntid string
+ var inotify []InotifyInfo
+
+ scanner := bufio.NewScanner(bytes.NewReader(data))
+ for scanner.Scan() {
+ text = scanner.Text()
+ if rPos.MatchString(text) {
+ pos = rPos.FindStringSubmatch(text)[1]
+ } else if rFlags.MatchString(text) {
+ flags = rFlags.FindStringSubmatch(text)[1]
+ } else if rMntID.MatchString(text) {
+ mntid = rMntID.FindStringSubmatch(text)[1]
+ } else if rInotify.MatchString(text) {
+ newInotify, err := parseInotifyInfo(text)
+ if err != nil {
+ return nil, err
+ }
+ inotify = append(inotify, *newInotify)
+ }
+ }
+
+ i := &ProcFDInfo{
+ FD: fd,
+ Pos: pos,
+ Flags: flags,
+ MntID: mntid,
+ InotifyInfos: inotify,
+ }
+
+ return i, nil
+}
+
+// InotifyInfo represents a single inotify line in the fdinfo file.
+type InotifyInfo struct {
+ // Watch descriptor number
+ WD string
+ // Inode number
+ Ino string
+ // Device ID
+ Sdev string
+ // Mask of events being monitored
+ Mask string
+}
+
+// InotifyInfo constructor. Only available on kernel 3.8+.
+func parseInotifyInfo(line string) (*InotifyInfo, error) {
+ r := regexp.MustCompile(`^inotify\s+wd:([0-9a-f]+)\s+ino:([0-9a-f]+)\s+sdev:([0-9a-f]+)\s+mask:([0-9a-f]+)`)
+	m := r.FindStringSubmatch(line)
+	if m == nil {
+		return nil, fmt.Errorf("invalid inotify line %q in fdinfo", line)
+	}
+ i := &InotifyInfo{
+ WD: m[1],
+ Ino: m[2],
+ Sdev: m[3],
+ Mask: m[4],
+ }
+ return i, nil
+}
+
+// ProcFDInfos represents a list of ProcFDInfo structs.
+type ProcFDInfos []ProcFDInfo
+
+func (p ProcFDInfos) Len() int { return len(p) }
+func (p ProcFDInfos) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
+func (p ProcFDInfos) Less(i, j int) bool { return p[i].FD < p[j].FD }
+
+// InotifyWatchLen returns the total number of inotify watches
+func (p ProcFDInfos) InotifyWatchLen() (int, error) {
+ length := 0
+ for _, f := range p {
+ length += len(f.InotifyInfos)
+ }
+
+ return length, nil
+}
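
A quick sketch tying the new fdinfo support together — enumerate the current process's descriptors and total its inotify watches (Linux assumed; `InotifyWatchLen` currently never returns a non-nil error, but the sketch checks it anyway):

```go
package main

import (
	"fmt"
	"log"

	"github.com/prometheus/procfs"
)

func main() {
	p, err := procfs.Self()
	if err != nil {
		log.Fatal(err)
	}
	infos, err := p.FileDescriptorsInfo()
	if err != nil {
		log.Fatal(err)
	}
	watches, err := infos.InotifyWatchLen()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d fds, %d inotify watches\n", len(infos), watches)
}
```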
diff --git a/vendor/github.com/prometheus/procfs/proc_io.go b/vendor/github.com/prometheus/procfs/proc_io.go
index 0ff89b1cef194..776f34971730d 100644
--- a/vendor/github.com/prometheus/procfs/proc_io.go
+++ b/vendor/github.com/prometheus/procfs/proc_io.go
@@ -15,8 +15,8 @@ package procfs
import (
"fmt"
- "io/ioutil"
- "os"
+
+ "github.com/prometheus/procfs/internal/util"
)
// ProcIO models the content of /proc/<pid>/io.
@@ -43,13 +43,7 @@ type ProcIO struct {
func (p Proc) IO() (ProcIO, error) {
pio := ProcIO{}
- f, err := os.Open(p.path("io"))
- if err != nil {
- return pio, err
- }
- defer f.Close()
-
- data, err := ioutil.ReadAll(f)
+ data, err := util.ReadFileNoStat(p.path("io"))
if err != nil {
return pio, err
}
diff --git a/vendor/github.com/prometheus/procfs/proc_psi.go b/vendor/github.com/prometheus/procfs/proc_psi.go
index 46fe266263764..0d7bee54cac3a 100644
--- a/vendor/github.com/prometheus/procfs/proc_psi.go
+++ b/vendor/github.com/prometheus/procfs/proc_psi.go
@@ -24,11 +24,13 @@ package procfs
// > full avg10=0.00 avg60=0.13 avg300=0.96 total=8183134
import (
+ "bufio"
+ "bytes"
"fmt"
"io"
- "io/ioutil"
- "os"
"strings"
+
+ "github.com/prometheus/procfs/internal/util"
)
const lineFormat = "avg10=%f avg60=%f avg300=%f total=%d"
@@ -55,24 +57,21 @@ type PSIStats struct {
// resource from /proc/pressure/<resource>. At time of writing this can be
// either "cpu", "memory" or "io".
func (fs FS) PSIStatsForResource(resource string) (PSIStats, error) {
- file, err := os.Open(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource)))
+ data, err := util.ReadFileNoStat(fs.proc.Path(fmt.Sprintf("%s/%s", "pressure", resource)))
if err != nil {
return PSIStats{}, fmt.Errorf("psi_stats: unavailable for %s", resource)
}
- defer file.Close()
- return parsePSIStats(resource, file)
+ return parsePSIStats(resource, bytes.NewReader(data))
}
// parsePSIStats parses the specified file for pressure stall information
-func parsePSIStats(resource string, file io.Reader) (PSIStats, error) {
+func parsePSIStats(resource string, r io.Reader) (PSIStats, error) {
psiStats := PSIStats{}
- stats, err := ioutil.ReadAll(file)
- if err != nil {
- return psiStats, fmt.Errorf("psi_stats: unable to read data for %s", resource)
- }
- for _, l := range strings.Split(string(stats), "\n") {
+ scanner := bufio.NewScanner(r)
+ for scanner.Scan() {
+ l := scanner.Text()
prefix := strings.Split(l, " ")[0]
switch prefix {
case "some":
diff --git a/vendor/github.com/prometheus/procfs/proc_stat.go b/vendor/github.com/prometheus/procfs/proc_stat.go
index dbde1fa0d6468..4517d2e9dd013 100644
--- a/vendor/github.com/prometheus/procfs/proc_stat.go
+++ b/vendor/github.com/prometheus/procfs/proc_stat.go
@@ -16,10 +16,10 @@ package procfs
import (
"bytes"
"fmt"
- "io/ioutil"
"os"
"github.com/prometheus/procfs/internal/fs"
+ "github.com/prometheus/procfs/internal/util"
)
// Originally, this USER_HZ value was dynamically retrieved via a sysconf call
@@ -113,13 +113,7 @@ func (p Proc) NewStat() (ProcStat, error) {
// Stat returns the current status information of the process.
func (p Proc) Stat() (ProcStat, error) {
- f, err := os.Open(p.path("stat"))
- if err != nil {
- return ProcStat{}, err
- }
- defer f.Close()
-
- data, err := ioutil.ReadAll(f)
+ data, err := util.ReadFileNoStat(p.path("stat"))
if err != nil {
return ProcStat{}, err
}
diff --git a/vendor/github.com/prometheus/procfs/proc_status.go b/vendor/github.com/prometheus/procfs/proc_status.go
index 6b4b61f71cd9d..e30c2b88f473b 100644
--- a/vendor/github.com/prometheus/procfs/proc_status.go
+++ b/vendor/github.com/prometheus/procfs/proc_status.go
@@ -15,13 +15,13 @@ package procfs
import (
"bytes"
- "io/ioutil"
- "os"
"strconv"
"strings"
+
+ "github.com/prometheus/procfs/internal/util"
)
-// ProcStat provides status information about the process,
+// ProcStatus provides status information about the process,
 // read from /proc/[pid]/status.
type ProcStatus struct {
// The process ID.
@@ -29,6 +29,9 @@ type ProcStatus struct {
// The process name.
Name string
+ // Thread group ID.
+ TGID int
+
// Peak virtual memory size.
VmPeak uint64
// Virtual memory size.
@@ -72,13 +75,7 @@ type ProcStatus struct {
// NewStatus returns the current status information of the process.
func (p Proc) NewStatus() (ProcStatus, error) {
- f, err := os.Open(p.path("status"))
- if err != nil {
- return ProcStatus{}, err
- }
- defer f.Close()
-
- data, err := ioutil.ReadAll(f)
+ data, err := util.ReadFileNoStat(p.path("status"))
if err != nil {
return ProcStatus{}, err
}
@@ -113,6 +110,8 @@ func (p Proc) NewStatus() (ProcStatus, error) {
func (s *ProcStatus) fillStatus(k string, vString string, vUint uint64, vUintBytes uint64) {
switch k {
+ case "Tgid":
+ s.TGID = int(vUint)
case "Name":
s.Name = vString
case "VmPeak":
diff --git a/vendor/github.com/prometheus/procfs/schedstat.go b/vendor/github.com/prometheus/procfs/schedstat.go
new file mode 100644
index 0000000000000..a4c4089ac529e
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/schedstat.go
@@ -0,0 +1,118 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package procfs
+
+import (
+ "bufio"
+ "errors"
+ "os"
+ "regexp"
+ "strconv"
+)
+
+var (
+ cpuLineRE = regexp.MustCompile(`cpu(\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+) (\d+)`)
+ procLineRE = regexp.MustCompile(`(\d+) (\d+) (\d+)`)
+)
+
+// Schedstat contains scheduler statistics from /proc/schedstat
+//
+// See
+// https://www.kernel.org/doc/Documentation/scheduler/sched-stats.txt
+// for a detailed description of what these numbers mean.
+//
+// Note the current kernel documentation claims some of the time units are in
+// jiffies when they are actually in nanoseconds since 2.6.23 with the
+// introduction of CFS. A fix to the documentation is pending. See
+// https://lore.kernel.org/patchwork/project/lkml/list/?series=403473
+type Schedstat struct {
+ CPUs []*SchedstatCPU
+}
+
+// SchedstatCPU contains the values from one "cpu<N>" line
+type SchedstatCPU struct {
+ CPUNum string
+
+ RunningNanoseconds uint64
+ WaitingNanoseconds uint64
+ RunTimeslices uint64
+}
+
+// ProcSchedstat contains the values from /proc/<pid>/schedstat
+type ProcSchedstat struct {
+ RunningNanoseconds uint64
+ WaitingNanoseconds uint64
+ RunTimeslices uint64
+}
+
+// Schedstat reads data from /proc/schedstat
+func (fs FS) Schedstat() (*Schedstat, error) {
+ file, err := os.Open(fs.proc.Path("schedstat"))
+ if err != nil {
+ return nil, err
+ }
+ defer file.Close()
+
+ stats := &Schedstat{}
+ scanner := bufio.NewScanner(file)
+
+ for scanner.Scan() {
+ match := cpuLineRE.FindStringSubmatch(scanner.Text())
+ if match != nil {
+ cpu := &SchedstatCPU{}
+ cpu.CPUNum = match[1]
+
+ cpu.RunningNanoseconds, err = strconv.ParseUint(match[8], 10, 64)
+ if err != nil {
+ continue
+ }
+
+ cpu.WaitingNanoseconds, err = strconv.ParseUint(match[9], 10, 64)
+ if err != nil {
+ continue
+ }
+
+ cpu.RunTimeslices, err = strconv.ParseUint(match[10], 10, 64)
+ if err != nil {
+ continue
+ }
+
+ stats.CPUs = append(stats.CPUs, cpu)
+ }
+ }
+
+ return stats, nil
+}
+
+func parseProcSchedstat(contents string) (stats ProcSchedstat, err error) {
+ match := procLineRE.FindStringSubmatch(contents)
+
+ if match != nil {
+ stats.RunningNanoseconds, err = strconv.ParseUint(match[1], 10, 64)
+ if err != nil {
+ return
+ }
+
+ stats.WaitingNanoseconds, err = strconv.ParseUint(match[2], 10, 64)
+ if err != nil {
+ return
+ }
+
+ stats.RunTimeslices, err = strconv.ParseUint(match[3], 10, 64)
+ return
+ }
+
+ err = errors.New("could not parse schedstat")
+ return
+}
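
Since the counters are nanoseconds (despite older kernel docs saying jiffies), they convert directly to `time.Duration`. A minimal sketch for the per-process accessor added above, assuming Linux:

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/prometheus/procfs"
)

func main() {
	p, err := procfs.Self()
	if err != nil {
		log.Fatal(err)
	}
	s, err := p.Schedstat() // /proc/<pid>/schedstat: three space-separated fields
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("on-cpu=%v runqueue-wait=%v timeslices=%d\n",
		time.Duration(s.RunningNanoseconds),
		time.Duration(s.WaitingNanoseconds),
		s.RunTimeslices)
}
```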
diff --git a/vendor/github.com/prometheus/procfs/stat.go b/vendor/github.com/prometheus/procfs/stat.go
index 6661ee03a66f2..b2a6fc994c11c 100644
--- a/vendor/github.com/prometheus/procfs/stat.go
+++ b/vendor/github.com/prometheus/procfs/stat.go
@@ -15,13 +15,14 @@ package procfs
import (
"bufio"
+ "bytes"
"fmt"
"io"
- "os"
"strconv"
"strings"
"github.com/prometheus/procfs/internal/fs"
+ "github.com/prometheus/procfs/internal/util"
)
// CPUStat shows how much time the cpu spend in various stages.
@@ -164,16 +165,15 @@ func (fs FS) NewStat() (Stat, error) {
// Stat returns information about current cpu/process statistics.
// See https://www.kernel.org/doc/Documentation/filesystems/proc.txt
func (fs FS) Stat() (Stat, error) {
-
- f, err := os.Open(fs.proc.Path("stat"))
+ fileName := fs.proc.Path("stat")
+ data, err := util.ReadFileNoStat(fileName)
if err != nil {
return Stat{}, err
}
- defer f.Close()
stat := Stat{}
- scanner := bufio.NewScanner(f)
+ scanner := bufio.NewScanner(bytes.NewReader(data))
for scanner.Scan() {
line := scanner.Text()
parts := strings.Fields(scanner.Text())
@@ -237,7 +237,7 @@ func (fs FS) Stat() (Stat, error) {
}
if err := scanner.Err(); err != nil {
- return Stat{}, fmt.Errorf("couldn't parse %s: %s", f.Name(), err)
+ return Stat{}, fmt.Errorf("couldn't parse %s: %s", fileName, err)
}
return stat, nil
diff --git a/vendor/github.com/prometheus/procfs/vm.go b/vendor/github.com/prometheus/procfs/vm.go
new file mode 100644
index 0000000000000..cb13891414b16
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/vm.go
@@ -0,0 +1,210 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +build !windows
+
+package procfs
+
+import (
+ "fmt"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/prometheus/procfs/internal/util"
+)
+
+// The VM interface is described at
+// https://www.kernel.org/doc/Documentation/sysctl/vm.txt
+// Each setting is exposed as a single file.
+// Each file contains one line with a single numerical value, except lowmem_reserve_ratio,
+// which holds an array, and numa_zonelist_order (deprecated), which is a string.
+type VM struct {
+ AdminReserveKbytes *int64 // /proc/sys/vm/admin_reserve_kbytes
+ BlockDump *int64 // /proc/sys/vm/block_dump
+ CompactUnevictableAllowed *int64 // /proc/sys/vm/compact_unevictable_allowed
+ DirtyBackgroundBytes *int64 // /proc/sys/vm/dirty_background_bytes
+ DirtyBackgroundRatio *int64 // /proc/sys/vm/dirty_background_ratio
+ DirtyBytes *int64 // /proc/sys/vm/dirty_bytes
+ DirtyExpireCentisecs *int64 // /proc/sys/vm/dirty_expire_centisecs
+ DirtyRatio *int64 // /proc/sys/vm/dirty_ratio
+ DirtytimeExpireSeconds *int64 // /proc/sys/vm/dirtytime_expire_seconds
+ DirtyWritebackCentisecs *int64 // /proc/sys/vm/dirty_writeback_centisecs
+ DropCaches *int64 // /proc/sys/vm/drop_caches
+ ExtfragThreshold *int64 // /proc/sys/vm/extfrag_threshold
+ HugetlbShmGroup *int64 // /proc/sys/vm/hugetlb_shm_group
+ LaptopMode *int64 // /proc/sys/vm/laptop_mode
+ LegacyVaLayout *int64 // /proc/sys/vm/legacy_va_layout
+ LowmemReserveRatio []*int64 // /proc/sys/vm/lowmem_reserve_ratio
+ MaxMapCount *int64 // /proc/sys/vm/max_map_count
+ MemoryFailureEarlyKill *int64 // /proc/sys/vm/memory_failure_early_kill
+ MemoryFailureRecovery *int64 // /proc/sys/vm/memory_failure_recovery
+ MinFreeKbytes *int64 // /proc/sys/vm/min_free_kbytes
+ MinSlabRatio *int64 // /proc/sys/vm/min_slab_ratio
+ MinUnmappedRatio *int64 // /proc/sys/vm/min_unmapped_ratio
+ MmapMinAddr *int64 // /proc/sys/vm/mmap_min_addr
+ NrHugepages *int64 // /proc/sys/vm/nr_hugepages
+ NrHugepagesMempolicy *int64 // /proc/sys/vm/nr_hugepages_mempolicy
+ NrOvercommitHugepages *int64 // /proc/sys/vm/nr_overcommit_hugepages
+ NumaStat *int64 // /proc/sys/vm/numa_stat
+ NumaZonelistOrder string // /proc/sys/vm/numa_zonelist_order
+ OomDumpTasks *int64 // /proc/sys/vm/oom_dump_tasks
+ OomKillAllocatingTask *int64 // /proc/sys/vm/oom_kill_allocating_task
+ OvercommitKbytes *int64 // /proc/sys/vm/overcommit_kbytes
+ OvercommitMemory *int64 // /proc/sys/vm/overcommit_memory
+ OvercommitRatio *int64 // /proc/sys/vm/overcommit_ratio
+ PageCluster *int64 // /proc/sys/vm/page-cluster
+ PanicOnOom *int64 // /proc/sys/vm/panic_on_oom
+ PercpuPagelistFraction *int64 // /proc/sys/vm/percpu_pagelist_fraction
+ StatInterval *int64 // /proc/sys/vm/stat_interval
+ Swappiness *int64 // /proc/sys/vm/swappiness
+ UserReserveKbytes *int64 // /proc/sys/vm/user_reserve_kbytes
+ VfsCachePressure *int64 // /proc/sys/vm/vfs_cache_pressure
+ WatermarkBoostFactor *int64 // /proc/sys/vm/watermark_boost_factor
+ WatermarkScaleFactor *int64 // /proc/sys/vm/watermark_scale_factor
+ ZoneReclaimMode *int64 // /proc/sys/vm/zone_reclaim_mode
+}
+
+// VM reads the VM statistics from the specified `proc` filesystem.
+func (fs FS) VM() (*VM, error) {
+ path := fs.proc.Path("sys/vm")
+ file, err := os.Stat(path)
+ if err != nil {
+ return nil, err
+ }
+ if !file.Mode().IsDir() {
+ return nil, fmt.Errorf("%s is not a directory", path)
+ }
+
+ files, err := ioutil.ReadDir(path)
+ if err != nil {
+ return nil, err
+ }
+
+ var vm VM
+ for _, f := range files {
+ if f.IsDir() {
+ continue
+ }
+
+ name := filepath.Join(path, f.Name())
+		// Ignore read errors, as some files
+		// in /proc/sys/vm are write-only.
+ value, err := util.SysReadFile(name)
+ if err != nil {
+ continue
+ }
+ vp := util.NewValueParser(value)
+
+ switch f.Name() {
+ case "admin_reserve_kbytes":
+ vm.AdminReserveKbytes = vp.PInt64()
+ case "block_dump":
+ vm.BlockDump = vp.PInt64()
+ case "compact_unevictable_allowed":
+ vm.CompactUnevictableAllowed = vp.PInt64()
+ case "dirty_background_bytes":
+ vm.DirtyBackgroundBytes = vp.PInt64()
+ case "dirty_background_ratio":
+ vm.DirtyBackgroundRatio = vp.PInt64()
+ case "dirty_bytes":
+ vm.DirtyBytes = vp.PInt64()
+ case "dirty_expire_centisecs":
+ vm.DirtyExpireCentisecs = vp.PInt64()
+ case "dirty_ratio":
+ vm.DirtyRatio = vp.PInt64()
+ case "dirtytime_expire_seconds":
+ vm.DirtytimeExpireSeconds = vp.PInt64()
+ case "dirty_writeback_centisecs":
+ vm.DirtyWritebackCentisecs = vp.PInt64()
+ case "drop_caches":
+ vm.DropCaches = vp.PInt64()
+ case "extfrag_threshold":
+ vm.ExtfragThreshold = vp.PInt64()
+ case "hugetlb_shm_group":
+ vm.HugetlbShmGroup = vp.PInt64()
+ case "laptop_mode":
+ vm.LaptopMode = vp.PInt64()
+ case "legacy_va_layout":
+ vm.LegacyVaLayout = vp.PInt64()
+ case "lowmem_reserve_ratio":
+ stringSlice := strings.Fields(value)
+ pint64Slice := make([]*int64, 0, len(stringSlice))
+ for _, value := range stringSlice {
+ vp := util.NewValueParser(value)
+ pint64Slice = append(pint64Slice, vp.PInt64())
+ }
+ vm.LowmemReserveRatio = pint64Slice
+ case "max_map_count":
+ vm.MaxMapCount = vp.PInt64()
+ case "memory_failure_early_kill":
+ vm.MemoryFailureEarlyKill = vp.PInt64()
+ case "memory_failure_recovery":
+ vm.MemoryFailureRecovery = vp.PInt64()
+ case "min_free_kbytes":
+ vm.MinFreeKbytes = vp.PInt64()
+ case "min_slab_ratio":
+ vm.MinSlabRatio = vp.PInt64()
+ case "min_unmapped_ratio":
+ vm.MinUnmappedRatio = vp.PInt64()
+ case "mmap_min_addr":
+ vm.MmapMinAddr = vp.PInt64()
+ case "nr_hugepages":
+ vm.NrHugepages = vp.PInt64()
+ case "nr_hugepages_mempolicy":
+ vm.NrHugepagesMempolicy = vp.PInt64()
+ case "nr_overcommit_hugepages":
+ vm.NrOvercommitHugepages = vp.PInt64()
+ case "numa_stat":
+ vm.NumaStat = vp.PInt64()
+ case "numa_zonelist_order":
+ vm.NumaZonelistOrder = value
+ case "oom_dump_tasks":
+ vm.OomDumpTasks = vp.PInt64()
+ case "oom_kill_allocating_task":
+ vm.OomKillAllocatingTask = vp.PInt64()
+ case "overcommit_kbytes":
+ vm.OvercommitKbytes = vp.PInt64()
+ case "overcommit_memory":
+ vm.OvercommitMemory = vp.PInt64()
+ case "overcommit_ratio":
+ vm.OvercommitRatio = vp.PInt64()
+ case "page-cluster":
+ vm.PageCluster = vp.PInt64()
+ case "panic_on_oom":
+ vm.PanicOnOom = vp.PInt64()
+ case "percpu_pagelist_fraction":
+ vm.PercpuPagelistFraction = vp.PInt64()
+ case "stat_interval":
+ vm.StatInterval = vp.PInt64()
+ case "swappiness":
+ vm.Swappiness = vp.PInt64()
+ case "user_reserve_kbytes":
+ vm.UserReserveKbytes = vp.PInt64()
+ case "vfs_cache_pressure":
+ vm.VfsCachePressure = vp.PInt64()
+ case "watermark_boost_factor":
+ vm.WatermarkBoostFactor = vp.PInt64()
+ case "watermark_scale_factor":
+ vm.WatermarkScaleFactor = vp.PInt64()
+ case "zone_reclaim_mode":
+ vm.ZoneReclaimMode = vp.PInt64()
+ }
+ if err := vp.Err(); err != nil {
+ return nil, err
+ }
+ }
+
+ return &vm, nil
+}
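
A short sketch of consuming the new VM reader. Every numeric field is a *int64 that stays nil when the corresponding /proc/sys/vm file could not be read (some are write-only), so callers should nil-check before dereferencing; fs here is assumed to come from procfs.NewFS("/proc").

```go
// Assumes: fs, _ := procfs.NewFS("/proc")
vm, err := fs.VM()
if err != nil {
	return err
}
if vm.Swappiness != nil {
	fmt.Printf("vm.swappiness = %d\n", *vm.Swappiness)
}
if vm.OvercommitMemory != nil {
	fmt.Printf("vm.overcommit_memory = %d\n", *vm.OvercommitMemory)
}
```
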
diff --git a/vendor/github.com/prometheus/procfs/zoneinfo.go b/vendor/github.com/prometheus/procfs/zoneinfo.go
new file mode 100644
index 0000000000000..e941503d5cdd0
--- /dev/null
+++ b/vendor/github.com/prometheus/procfs/zoneinfo.go
@@ -0,0 +1,196 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// +build !windows
+
+package procfs
+
+import (
+ "bytes"
+ "fmt"
+ "io/ioutil"
+ "regexp"
+ "strings"
+
+ "github.com/prometheus/procfs/internal/util"
+)
+
+// Zoneinfo holds info parsed from /proc/zoneinfo.
+type Zoneinfo struct {
+ Node string
+ Zone string
+ NrFreePages *int64
+ Min *int64
+ Low *int64
+ High *int64
+ Scanned *int64
+ Spanned *int64
+ Present *int64
+ Managed *int64
+ NrActiveAnon *int64
+ NrInactiveAnon *int64
+ NrIsolatedAnon *int64
+ NrAnonPages *int64
+ NrAnonTransparentHugepages *int64
+ NrActiveFile *int64
+ NrInactiveFile *int64
+ NrIsolatedFile *int64
+ NrFilePages *int64
+ NrSlabReclaimable *int64
+ NrSlabUnreclaimable *int64
+ NrMlockStack *int64
+ NrKernelStack *int64
+ NrMapped *int64
+ NrDirty *int64
+ NrWriteback *int64
+ NrUnevictable *int64
+ NrShmem *int64
+ NrDirtied *int64
+ NrWritten *int64
+ NumaHit *int64
+ NumaMiss *int64
+ NumaForeign *int64
+ NumaInterleave *int64
+ NumaLocal *int64
+ NumaOther *int64
+ Protection []*int64
+}
+
+var nodeZoneRE = regexp.MustCompile(`(\d+), zone\s+(\w+)`)
+
+// Zoneinfo parses a zoneinfo file (/proc/zoneinfo) and returns a slice of
+// structs containing the relevant info. More information available here:
+// https://www.kernel.org/doc/Documentation/sysctl/vm.txt
+func (fs FS) Zoneinfo() ([]Zoneinfo, error) {
+ data, err := ioutil.ReadFile(fs.proc.Path("zoneinfo"))
+ if err != nil {
+ return nil, fmt.Errorf("error reading zoneinfo %s: %s", fs.proc.Path("zoneinfo"), err)
+ }
+ zoneinfo, err := parseZoneinfo(data)
+ if err != nil {
+ return nil, fmt.Errorf("error parsing zoneinfo %s: %s", fs.proc.Path("zoneinfo"), err)
+ }
+ return zoneinfo, nil
+}
+
+func parseZoneinfo(zoneinfoData []byte) ([]Zoneinfo, error) {
+
+ zoneinfo := []Zoneinfo{}
+
+ zoneinfoBlocks := bytes.Split(zoneinfoData, []byte("\nNode"))
+ for _, block := range zoneinfoBlocks {
+ var zoneinfoElement Zoneinfo
+ lines := strings.Split(string(block), "\n")
+ for _, line := range lines {
+
+ if nodeZone := nodeZoneRE.FindStringSubmatch(line); nodeZone != nil {
+ zoneinfoElement.Node = nodeZone[1]
+ zoneinfoElement.Zone = nodeZone[2]
+ continue
+ }
+ if strings.HasPrefix(strings.TrimSpace(line), "per-node stats") {
+ zoneinfoElement.Zone = ""
+ continue
+ }
+ parts := strings.Fields(strings.TrimSpace(line))
+ if len(parts) < 2 {
+ continue
+ }
+ vp := util.NewValueParser(parts[1])
+ switch parts[0] {
+ case "nr_free_pages":
+ zoneinfoElement.NrFreePages = vp.PInt64()
+ case "min":
+ zoneinfoElement.Min = vp.PInt64()
+ case "low":
+ zoneinfoElement.Low = vp.PInt64()
+ case "high":
+ zoneinfoElement.High = vp.PInt64()
+ case "scanned":
+ zoneinfoElement.Scanned = vp.PInt64()
+ case "spanned":
+ zoneinfoElement.Spanned = vp.PInt64()
+ case "present":
+ zoneinfoElement.Present = vp.PInt64()
+ case "managed":
+ zoneinfoElement.Managed = vp.PInt64()
+ case "nr_active_anon":
+ zoneinfoElement.NrActiveAnon = vp.PInt64()
+ case "nr_inactive_anon":
+ zoneinfoElement.NrInactiveAnon = vp.PInt64()
+ case "nr_isolated_anon":
+ zoneinfoElement.NrIsolatedAnon = vp.PInt64()
+ case "nr_anon_pages":
+ zoneinfoElement.NrAnonPages = vp.PInt64()
+ case "nr_anon_transparent_hugepages":
+ zoneinfoElement.NrAnonTransparentHugepages = vp.PInt64()
+ case "nr_active_file":
+ zoneinfoElement.NrActiveFile = vp.PInt64()
+ case "nr_inactive_file":
+ zoneinfoElement.NrInactiveFile = vp.PInt64()
+ case "nr_isolated_file":
+ zoneinfoElement.NrIsolatedFile = vp.PInt64()
+ case "nr_file_pages":
+ zoneinfoElement.NrFilePages = vp.PInt64()
+ case "nr_slab_reclaimable":
+ zoneinfoElement.NrSlabReclaimable = vp.PInt64()
+ case "nr_slab_unreclaimable":
+ zoneinfoElement.NrSlabUnreclaimable = vp.PInt64()
+ case "nr_mlock_stack":
+ zoneinfoElement.NrMlockStack = vp.PInt64()
+ case "nr_kernel_stack":
+ zoneinfoElement.NrKernelStack = vp.PInt64()
+ case "nr_mapped":
+ zoneinfoElement.NrMapped = vp.PInt64()
+ case "nr_dirty":
+ zoneinfoElement.NrDirty = vp.PInt64()
+ case "nr_writeback":
+ zoneinfoElement.NrWriteback = vp.PInt64()
+ case "nr_unevictable":
+ zoneinfoElement.NrUnevictable = vp.PInt64()
+ case "nr_shmem":
+ zoneinfoElement.NrShmem = vp.PInt64()
+ case "nr_dirtied":
+ zoneinfoElement.NrDirtied = vp.PInt64()
+ case "nr_written":
+ zoneinfoElement.NrWritten = vp.PInt64()
+ case "numa_hit":
+ zoneinfoElement.NumaHit = vp.PInt64()
+ case "numa_miss":
+ zoneinfoElement.NumaMiss = vp.PInt64()
+ case "numa_foreign":
+ zoneinfoElement.NumaForeign = vp.PInt64()
+ case "numa_interleave":
+ zoneinfoElement.NumaInterleave = vp.PInt64()
+ case "numa_local":
+ zoneinfoElement.NumaLocal = vp.PInt64()
+ case "numa_other":
+ zoneinfoElement.NumaOther = vp.PInt64()
+ case "protection:":
+ protectionParts := strings.Split(line, ":")
+ protectionValues := strings.Replace(protectionParts[1], "(", "", 1)
+ protectionValues = strings.Replace(protectionValues, ")", "", 1)
+ protectionValues = strings.TrimSpace(protectionValues)
+ protectionStringMap := strings.Split(protectionValues, ", ")
+ val, err := util.ParsePInt64s(protectionStringMap)
+ if err == nil {
+ zoneinfoElement.Protection = val
+ }
+ }
+
+ }
+
+ zoneinfo = append(zoneinfo, zoneinfoElement)
+ }
+ return zoneinfo, nil
+}
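
As with VM, the Zoneinfo fields are pointers and may be nil. A sketch that aggregates free pages per NUMA node, again assuming fs from procfs.NewFS:

```go
// Assumes: fs, _ := procfs.NewFS("/proc")
zones, err := fs.Zoneinfo()
if err != nil {
	return err
}
freeByNode := map[string]int64{}
for _, z := range zones {
	if z.NrFreePages != nil {
		freeByNode[z.Node] += *z.NrFreePages
	}
}
```
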
diff --git a/vendor/github.com/prometheus/prometheus/NOTICE b/vendor/github.com/prometheus/prometheus/NOTICE
index e36e57e527697..30ce2a82630e7 100644
--- a/vendor/github.com/prometheus/prometheus/NOTICE
+++ b/vendor/github.com/prometheus/prometheus/NOTICE
@@ -85,3 +85,9 @@ go-zookeeper - Native ZooKeeper client for Go
https://github.com/samuel/go-zookeeper
Copyright (c) 2013, Samuel Stauffer <[email protected]>
See https://github.com/samuel/go-zookeeper/blob/master/LICENSE for license details.
+
+We also use code from a large number of npm packages. For details, see:
+- https://github.com/prometheus/prometheus/blob/master/web/ui/react-app/package.json
+- https://github.com/prometheus/prometheus/blob/master/web/ui/react-app/package-lock.json
+- The individual package licenses as copied from the node_modules directory can be found in
+ the npm_licenses.tar.bz2 archive in release tarballs and Docker images.
diff --git a/vendor/github.com/prometheus/prometheus/discovery/README.md b/vendor/github.com/prometheus/prometheus/discovery/README.md
index 4782510f011e3..060c7d52c8b26 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/README.md
+++ b/vendor/github.com/prometheus/prometheus/discovery/README.md
@@ -2,10 +2,6 @@
This directory contains the service discovery (SD) component of Prometheus.
-There is currently a moratorium on new service discovery mechanisms being added
-to Prometheus due to a lack of developer capacity. In the meantime `file_sd`
-remains available.
-
## Design of a Prometheus SD
There are many requests to add new SDs to Prometheus; this section looks at
@@ -19,6 +15,10 @@ use across multiple organizations. It should allow discovering of machines
and/or services running somewhere. When exactly an SD is popular enough to
justify being added to Prometheus natively is an open question.
+Note: As part of lifting the past moratorium on new SD implementations, it was
+agreed that, in addition to the existing requirements, new service discovery
+implementations will be required to have a committed maintainer with push access (i.e., on -team).
+
It should not be a brand new SD mechanism, or a variant of an established
mechanism. We want to integrate Prometheus with the SD that's already there in
your infrastructure, not invent yet more ways to do service discovery. We also
diff --git a/vendor/github.com/prometheus/prometheus/discovery/consul/consul.go b/vendor/github.com/prometheus/prometheus/discovery/consul/consul.go
index ccc09b072d72f..f0db8d762b6ba 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/consul/consul.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/consul/consul.go
@@ -357,6 +357,13 @@ func (d *Discovery) watchServices(ctx context.Context, ch chan<- []*targetgroup.
elapsed := time.Since(t0)
rpcDuration.WithLabelValues("catalog", "services").Observe(elapsed.Seconds())
+	// Check the context first in order to exit early.
+ select {
+ case <-ctx.Done():
+ return
+ default:
+ }
+
if err != nil {
level.Error(d.logger).Log("msg", "Error refreshing service list", "err", err)
rpcFailuresCount.Inc()
diff --git a/vendor/github.com/prometheus/prometheus/discovery/dns/dns.go b/vendor/github.com/prometheus/prometheus/discovery/dns/dns.go
index 8710bf3277f8b..014d5239e4cc3 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/dns/dns.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/dns/dns.go
@@ -151,7 +151,7 @@ func (d *Discovery) refresh(ctx context.Context) ([]*targetgroup.Group, error) {
wg.Add(len(d.names))
for _, name := range d.names {
go func(n string) {
- if err := d.refreshOne(ctx, n, ch); err != nil {
+ if err := d.refreshOne(ctx, n, ch); err != nil && err != context.Canceled {
level.Error(d.logger).Log("msg", "Error refreshing DNS targets", "err", err)
}
wg.Done()
diff --git a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/endpoints.go b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/endpoints.go
index 83e39d973d9d6..93acec58457f5 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/endpoints.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/endpoints.go
@@ -128,7 +128,9 @@ func (e *Endpoints) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
defer e.queue.ShutDown()
if !cache.WaitForCacheSync(ctx.Done(), e.endpointsInf.HasSynced, e.serviceInf.HasSynced, e.podInf.HasSynced) {
- level.Error(e.logger).Log("msg", "endpoints informer unable to sync cache")
+ if ctx.Err() != context.Canceled {
+ level.Error(e.logger).Log("msg", "endpoints informer unable to sync cache")
+ }
return
}
diff --git a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/ingress.go b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/ingress.go
index 9ad4677db7ff3..10c729ede6fbc 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/ingress.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/ingress.go
@@ -70,7 +70,9 @@ func (i *Ingress) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
defer i.queue.ShutDown()
if !cache.WaitForCacheSync(ctx.Done(), i.informer.HasSynced) {
- level.Error(i.logger).Log("msg", "ingress informer unable to sync cache")
+ if ctx.Err() != context.Canceled {
+ level.Error(i.logger).Log("msg", "ingress informer unable to sync cache")
+ }
return
}
@@ -142,7 +144,8 @@ const (
)
func ingressLabels(ingress *v1beta1.Ingress) model.LabelSet {
- ls := make(model.LabelSet, len(ingress.Labels)+len(ingress.Annotations)+2)
+ // Each label and annotation will create two key-value pairs in the map.
+ ls := make(model.LabelSet, 2*(len(ingress.Labels)+len(ingress.Annotations))+2)
ls[ingressNameLabel] = lv(ingress.Name)
ls[namespaceLabel] = lv(ingress.Namespace)
diff --git a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/node.go b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/node.go
index 973be2809ba72..08c933b389575 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/node.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/node.go
@@ -79,7 +79,9 @@ func (n *Node) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
defer n.queue.ShutDown()
if !cache.WaitForCacheSync(ctx.Done(), n.informer.HasSynced) {
- level.Error(n.logger).Log("msg", "node informer unable to sync cache")
+ if ctx.Err() != context.Canceled {
+ level.Error(n.logger).Log("msg", "node informer unable to sync cache")
+ }
return
}
@@ -149,7 +151,8 @@ const (
)
func nodeLabels(n *apiv1.Node) model.LabelSet {
- ls := make(model.LabelSet, len(n.Labels)+len(n.Annotations)+1)
+ // Each label and annotation will create two key-value pairs in the map.
+ ls := make(model.LabelSet, 2*(len(n.Labels)+len(n.Annotations))+1)
ls[nodeNameLabel] = lv(n.Name)
diff --git a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/pod.go b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/pod.go
index 4f522e96a795b..baf58d24bb17c 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/pod.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/pod.go
@@ -82,7 +82,9 @@ func (p *Pod) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
defer p.queue.ShutDown()
if !cache.WaitForCacheSync(ctx.Done(), p.informer.HasSynced) {
- level.Error(p.logger).Log("msg", "pod informer unable to sync cache")
+ if ctx.Err() != context.Canceled {
+ level.Error(p.logger).Log("msg", "pod informer unable to sync cache")
+ }
return
}
diff --git a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/service.go b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/service.go
index 25471558cce6b..ca01a5b38c19a 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/kubernetes/service.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/kubernetes/service.go
@@ -75,7 +75,9 @@ func (s *Service) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
defer s.queue.ShutDown()
if !cache.WaitForCacheSync(ctx.Done(), s.informer.HasSynced) {
- level.Error(s.logger).Log("msg", "service informer unable to sync cache")
+ if ctx.Err() != context.Canceled {
+ level.Error(s.logger).Log("msg", "service informer unable to sync cache")
+ }
return
}
@@ -147,7 +149,8 @@ const (
)
func serviceLabels(svc *apiv1.Service) model.LabelSet {
- ls := make(model.LabelSet, len(svc.Labels)+len(svc.Annotations)+2)
+ // Each label and annotation will create two key-value pairs in the map.
+ ls := make(model.LabelSet, 2*(len(svc.Labels)+len(svc.Annotations))+2)
ls[serviceNameLabel] = lv(svc.Name)
ls[namespaceLabel] = lv(svc.Namespace)
diff --git a/vendor/github.com/prometheus/prometheus/discovery/manager.go b/vendor/github.com/prometheus/prometheus/discovery/manager.go
index 4625e42a3178a..5457bd9b2e7ac 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/manager.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/manager.go
@@ -41,10 +41,10 @@ import (
)
var (
- failedConfigs = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "prometheus_sd_configs_failed_total",
- Help: "Total number of service discovery configurations that failed to load.",
+ failedConfigs = prometheus.NewGaugeVec(
+ prometheus.GaugeOpts{
+ Name: "prometheus_sd_failed_configs",
+ Help: "Current number of service discovery configurations that failed to load.",
},
[]string{"name"},
)
@@ -194,10 +194,14 @@ func (m *Manager) ApplyConfig(cfg map[string]sd_config.ServiceDiscoveryConfig) e
m.targets = make(map[poolKey]map[string]*targetgroup.Group)
m.providers = nil
m.discoverCancel = nil
+
+ failedCount := 0
for name, scfg := range cfg {
- m.registerProviders(scfg, name)
+ failedCount += m.registerProviders(scfg, name)
discoveredTargets.WithLabelValues(m.name, name).Set(0)
}
+ failedConfigs.WithLabelValues(m.name).Set(float64(failedCount))
+
for _, prov := range m.providers {
m.startProvider(m.ctx, prov)
}
@@ -317,8 +321,12 @@ func (m *Manager) allGroups() map[string][]*targetgroup.Group {
return tSets
}
-func (m *Manager) registerProviders(cfg sd_config.ServiceDiscoveryConfig, setName string) {
- var added bool
+// registerProviders returns the number of failed SD configs.
+func (m *Manager) registerProviders(cfg sd_config.ServiceDiscoveryConfig, setName string) int {
+ var (
+ failedCount int
+ added bool
+ )
add := func(cfg interface{}, newDiscoverer func() (Discoverer, error)) {
t := reflect.TypeOf(cfg).String()
for _, p := range m.providers {
@@ -332,7 +340,7 @@ func (m *Manager) registerProviders(cfg sd_config.ServiceDiscoveryConfig, setNam
d, err := newDiscoverer()
if err != nil {
level.Error(m.logger).Log("msg", "Cannot create service discovery", "err", err, "type", t)
- failedConfigs.WithLabelValues(m.name).Inc()
+ failedCount++
return
}
@@ -421,6 +429,7 @@ func (m *Manager) registerProviders(cfg sd_config.ServiceDiscoveryConfig, setNam
return &StaticProvider{TargetGroups: []*targetgroup.Group{{}}}, nil
})
}
+ return failedCount
}
// StaticProvider holds a list of target groups that never change.
diff --git a/vendor/github.com/prometheus/prometheus/discovery/refresh/refresh.go b/vendor/github.com/prometheus/prometheus/discovery/refresh/refresh.go
index ebc99e2e044b5..c48524c508e49 100644
--- a/vendor/github.com/prometheus/prometheus/discovery/refresh/refresh.go
+++ b/vendor/github.com/prometheus/prometheus/discovery/refresh/refresh.go
@@ -75,7 +75,9 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
// Get an initial set right away.
tgs, err := d.refresh(ctx)
if err != nil {
- level.Error(d.logger).Log("msg", "Unable to refresh target groups", "err", err.Error())
+ if ctx.Err() != context.Canceled {
+ level.Error(d.logger).Log("msg", "Unable to refresh target groups", "err", err.Error())
+ }
} else {
select {
case ch <- tgs:
@@ -92,7 +94,9 @@ func (d *Discovery) Run(ctx context.Context, ch chan<- []*targetgroup.Group) {
case <-ticker.C:
tgs, err := d.refresh(ctx)
if err != nil {
- level.Error(d.logger).Log("msg", "Unable to refresh target groups", "err", err.Error())
+ if ctx.Err() != context.Canceled {
+ level.Error(d.logger).Log("msg", "Unable to refresh target groups", "err", err.Error())
+ }
continue
}
diff --git a/vendor/github.com/prometheus/prometheus/pkg/exemplar/exemplar.go b/vendor/github.com/prometheus/prometheus/pkg/exemplar/exemplar.go
new file mode 100644
index 0000000000000..c6ea0db94da19
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/pkg/exemplar/exemplar.go
@@ -0,0 +1,24 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package exemplar
+
+import "github.com/prometheus/prometheus/pkg/labels"
+
+// Exemplar is additional information associated with a time series.
+type Exemplar struct {
+ Labels labels.Labels
+ Value float64
+ HasTs bool
+ Ts int64
+}
diff --git a/vendor/github.com/prometheus/prometheus/pkg/labels/labels.go b/vendor/github.com/prometheus/prometheus/pkg/labels/labels.go
index b1d434881a050..93c0a338b911c 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/labels/labels.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/labels/labels.go
@@ -201,6 +201,24 @@ func (ls Labels) Has(name string) bool {
return false
}
+// WithoutEmpty returns the labelset without empty labels.
+// May return the same labelset.
+func (ls Labels) WithoutEmpty() Labels {
+ for _, v := range ls {
+ if v.Value != "" {
+ continue
+ }
+ els := make(Labels, 0, len(ls)-1)
+ for _, v := range ls {
+ if v.Value != "" {
+ els = append(els, v)
+ }
+ }
+ return els
+ }
+ return ls
+}
+
// Equal returns whether the two label sets are equal.
func Equal(ls, o Labels) bool {
if len(ls) != len(o) {
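
The new WithoutEmpty helper drops labels whose value is the empty string and returns the receiver unchanged (no allocation) when nothing is empty. A small usage sketch:

```go
ls := labels.Labels{
	{Name: "instance", Value: ""},
	{Name: "job", Value: "node"},
}
fmt.Println(ls.WithoutEmpty()) // {job="node"}
```
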
diff --git a/vendor/github.com/prometheus/prometheus/pkg/labels/matcher.go b/vendor/github.com/prometheus/prometheus/pkg/labels/matcher.go
index 7fa5d947e7eb3..2702d9ddbe00f 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/labels/matcher.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/labels/matcher.go
@@ -68,6 +68,15 @@ func NewMatcher(t MatchType, n, v string) (*Matcher, error) {
return m, nil
}
+// MustNewMatcher panics on error - only for use in tests!
+func MustNewMatcher(mt MatchType, name, val string) *Matcher {
+ m, err := NewMatcher(mt, name, val)
+ if err != nil {
+ panic(err)
+ }
+ return m
+}
+
func (m *Matcher) String() string {
return fmt.Sprintf("%s%s%q", m.Name, m.Type, m.Value)
}
@@ -86,3 +95,18 @@ func (m *Matcher) Matches(s string) bool {
}
panic("labels.Matcher.Matches: invalid match type")
}
+
+// Inverse returns a matcher that matches the opposite.
+func (m *Matcher) Inverse() (*Matcher, error) {
+ switch m.Type {
+ case MatchEqual:
+ return NewMatcher(MatchNotEqual, m.Name, m.Value)
+ case MatchNotEqual:
+ return NewMatcher(MatchEqual, m.Name, m.Value)
+ case MatchRegexp:
+ return NewMatcher(MatchNotRegexp, m.Name, m.Value)
+ case MatchNotRegexp:
+ return NewMatcher(MatchRegexp, m.Name, m.Value)
+ }
+	panic("labels.Matcher.Inverse: invalid match type")
+}
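
Sketch of the two new Matcher helpers together: MustNewMatcher is meant for tests, and Inverse flips the match type while keeping name and value.

```go
m := labels.MustNewMatcher(labels.MatchRegexp, "env", "prod.*")
inv, err := m.Inverse()
if err != nil {
	panic(err)
}
fmt.Println(inv) // env!~"prod.*"
```
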
diff --git a/vendor/github.com/prometheus/prometheus/pkg/labels/test_utils.go b/vendor/github.com/prometheus/prometheus/pkg/labels/test_utils.go
new file mode 100644
index 0000000000000..319ee6184ec0c
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/pkg/labels/test_utils.go
@@ -0,0 +1,87 @@
+// Copyright 2017 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package labels
+
+import (
+ "bufio"
+ "os"
+ "sort"
+ "strings"
+
+ "github.com/pkg/errors"
+)
+
+// Slice is a sortable slice of label sets.
+type Slice []Labels
+
+func (s Slice) Len() int { return len(s) }
+func (s Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+func (s Slice) Less(i, j int) bool { return Compare(s[i], s[j]) < 0 }
+
+// Selector holds constraints for matching against a label set.
+type Selector []*Matcher
+
+// Matches returns whether the labels satisfy all matchers.
+func (s Selector) Matches(labels Labels) bool {
+ for _, m := range s {
+ if v := labels.Get(m.Name); !m.Matches(v) {
+ return false
+ }
+ }
+ return true
+}
+
+// ReadLabels reads up to n label sets from the JSON-formatted file fn. It is mostly
+// useful for loading test data.
+func ReadLabels(fn string, n int) ([]Labels, error) {
+ f, err := os.Open(fn)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+
+ scanner := bufio.NewScanner(f)
+
+ var mets []Labels
+ hashes := map[uint64]struct{}{}
+ i := 0
+
+ for scanner.Scan() && i < n {
+ m := make(Labels, 0, 10)
+
+ r := strings.NewReplacer("\"", "", "{", "", "}", "")
+ s := r.Replace(scanner.Text())
+
+ labelChunks := strings.Split(s, ",")
+ for _, labelChunk := range labelChunks {
+ split := strings.Split(labelChunk, ":")
+ m = append(m, Label{Name: split[0], Value: split[1]})
+ }
+		// Order of the k/v labels matters; don't assume we'll always receive them already sorted.
+ sort.Sort(m)
+
+ h := m.Hash()
+ if _, ok := hashes[h]; ok {
+ continue
+ }
+ mets = append(mets, m)
+ hashes[h] = struct{}{}
+ i++
+ }
+
+ if i != n {
+ return mets, errors.Errorf("requested %d metrics but found %d", n, i)
+ }
+ return mets, nil
+}
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/README.md b/vendor/github.com/prometheus/prometheus/pkg/textparse/README.md
new file mode 100644
index 0000000000000..697966f0975a0
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/README.md
@@ -0,0 +1,6 @@
+# Making changes to textparse lexers
+In the rare case that you need to update the textparse lexers, edit promlex.l or openmetricslex.l and then run the following command:
+`golex -o=promlex.l.go promlex.l`
+
+Note that you need golex installed:
+`go get -u modernc.org/golex`
\ No newline at end of file
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/interface.go b/vendor/github.com/prometheus/prometheus/pkg/textparse/interface.go
index 330dffa8ee8ce..cfcd05e210f04 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/textparse/interface.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/interface.go
@@ -16,6 +16,7 @@ package textparse
import (
"mime"
+ "github.com/prometheus/prometheus/pkg/exemplar"
"github.com/prometheus/prometheus/pkg/labels"
)
@@ -50,6 +51,10 @@ type Parser interface {
// It returns the string from which the metric was parsed.
Metric(l *labels.Labels) string
+ // Exemplar writes the exemplar of the current sample into the passed
+ // exemplar. It returns if an exemplar exists or not.
+	// exemplar. It returns whether an exemplar exists.
+
// Next advances the parser to the next sample. It returns false if no
// more samples were read or an error occurred.
Next() (Entry, error)
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l b/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l
index a259885f10a2c..b02e975835763 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l
@@ -36,7 +36,7 @@ M [a-zA-Z_:]
C [^\n]
S [ ]
-%x sComment sMeta1 sMeta2 sLabels sLValue sValue sTimestamp
+%x sComment sMeta1 sMeta2 sLabels sLValue sValue sTimestamp sExemplar sEValue sETimestamp
%yyc c
%yyn c = l.next()
@@ -62,8 +62,17 @@ S [ ]
<sLValue>\"(\\.|[^\\"\n])*\" l.state = sLabels; return tLValue
<sValue>{S}[^ \n]+ l.state = sTimestamp; return tValue
<sTimestamp>{S}[^ \n]+ return tTimestamp
-<sTimestamp>{S}#{S}{C}*\n l.state = sInit; return tLinebreak
<sTimestamp>\n l.state = sInit; return tLinebreak
+<sTimestamp>{S}#{S}\{ l.state = sExemplar; return tComment
+
+<sExemplar>{L}({L}|{D})* return tLName
+<sExemplar>\} l.state = sEValue; return tBraceClose
+<sExemplar>= l.state = sEValue; return tEqual
+<sEValue>\"(\\.|[^\\"\n])*\" l.state = sExemplar; return tLValue
+<sExemplar>, return tComma
+<sEValue>{S}[^ \n]+ l.state = sETimestamp; return tValue
+<sETimestamp>{S}[^ \n]+ return tTimestamp
+<sETimestamp>\n l.state = sInit; return tLinebreak
%%
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l.go b/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l.go
index 150d44dd05eca..e20d127cda6bc 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricslex.l.go
@@ -1,4 +1,4 @@
-// CAUTION: Generated file - DO NOT EDIT.
+// Code generated by golex. DO NOT EDIT.
// Copyright 2018 The Prometheus Authors
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -16,7 +16,7 @@
package textparse
import (
- "github.com/pkg/errors"
+ "fmt"
)
// Lex is called by the parser generated by "go tool yacc" to obtain each
@@ -33,7 +33,7 @@ yystate0:
switch yyt := l.state; yyt {
default:
- panic(errors.Errorf(`invalid start condition %d`, yyt))
+ panic(fmt.Errorf(`invalid start condition %d`, yyt))
case 0: // start condition: INITIAL
goto yystart1
case 1: // start condition: sComment
@@ -50,6 +50,12 @@ yystate0:
goto yystart39
case 7: // start condition: sTimestamp
goto yystart43
+ case 8: // start condition: sExemplar
+ goto yystart50
+ case 9: // start condition: sEValue
+ goto yystart55
+ case 10: // start condition: sETimestamp
+ goto yystart61
}
goto yystate0 // silence unused label error
@@ -427,7 +433,7 @@ yystart43:
yystate44:
c = l.next()
- goto yyrule18
+ goto yyrule17
yystate45:
c = l.next()
@@ -465,15 +471,143 @@ yystate48:
switch {
default:
goto yyabort
- case c == '\n':
+ case c == '{':
goto yystate49
- case c >= '\x01' && c <= '\t' || c >= '\v' && c <= 'ÿ':
- goto yystate48
}
yystate49:
c = l.next()
- goto yyrule17
+ goto yyrule18
+
+ goto yystate50 // silence unused label error
+yystate50:
+ c = l.next()
+yystart50:
+ switch {
+ default:
+ goto yyabort
+ case c == ',':
+ goto yystate51
+ case c == '=':
+ goto yystate52
+ case c == '}':
+ goto yystate54
+ case c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z':
+ goto yystate53
+ }
+
+yystate51:
+ c = l.next()
+ goto yyrule23
+
+yystate52:
+ c = l.next()
+ goto yyrule21
+
+yystate53:
+ c = l.next()
+ switch {
+ default:
+ goto yyrule19
+ case c >= '0' && c <= '9' || c >= 'A' && c <= 'Z' || c == '_' || c >= 'a' && c <= 'z':
+ goto yystate53
+ }
+
+yystate54:
+ c = l.next()
+ goto yyrule20
+
+ goto yystate55 // silence unused label error
+yystate55:
+ c = l.next()
+yystart55:
+ switch {
+ default:
+ goto yyabort
+ case c == ' ':
+ goto yystate56
+ case c == '"':
+ goto yystate58
+ }
+
+yystate56:
+ c = l.next()
+ switch {
+ default:
+ goto yyabort
+ case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ':
+ goto yystate57
+ }
+
+yystate57:
+ c = l.next()
+ switch {
+ default:
+ goto yyrule24
+ case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ':
+ goto yystate57
+ }
+
+yystate58:
+ c = l.next()
+ switch {
+ default:
+ goto yyabort
+ case c == '"':
+ goto yystate59
+ case c == '\\':
+ goto yystate60
+ case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '!' || c >= '#' && c <= '[' || c >= ']' && c <= 'ÿ':
+ goto yystate58
+ }
+
+yystate59:
+ c = l.next()
+ goto yyrule22
+
+yystate60:
+ c = l.next()
+ switch {
+ default:
+ goto yyabort
+ case c >= '\x01' && c <= '\t' || c >= '\v' && c <= 'ÿ':
+ goto yystate58
+ }
+
+ goto yystate61 // silence unused label error
+yystate61:
+ c = l.next()
+yystart61:
+ switch {
+ default:
+ goto yyabort
+ case c == ' ':
+ goto yystate63
+ case c == '\n':
+ goto yystate62
+ }
+
+yystate62:
+ c = l.next()
+ goto yyrule26
+
+yystate63:
+ c = l.next()
+ switch {
+ default:
+ goto yyabort
+ case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ':
+ goto yystate64
+ }
+
+yystate64:
+ c = l.next()
+ switch {
+ default:
+ goto yyrule25
+ case c >= '\x01' && c <= '\t' || c >= '\v' && c <= '\x1f' || c >= '!' && c <= 'ÿ':
+ goto yystate64
+ }
yyrule1: // #{S}
{
@@ -564,13 +698,55 @@ yyrule16: // {S}[^ \n]+
{
return tTimestamp
}
-yyrule17: // {S}#{S}{C}*\n
+yyrule17: // \n
{
l.state = sInit
return tLinebreak
goto yystate0
}
-yyrule18: // \n
+yyrule18: // {S}#{S}\{
+ {
+ l.state = sExemplar
+ return tComment
+ goto yystate0
+ }
+yyrule19: // {L}({L}|{D})*
+ {
+ return tLName
+ }
+yyrule20: // \}
+ {
+ l.state = sEValue
+ return tBraceClose
+ goto yystate0
+ }
+yyrule21: // =
+ {
+ l.state = sEValue
+ return tEqual
+ goto yystate0
+ }
+yyrule22: // \"(\\.|[^\\"\n])*\"
+ {
+ l.state = sExemplar
+ return tLValue
+ goto yystate0
+ }
+yyrule23: // ,
+ {
+ return tComma
+ }
+yyrule24: // {S}[^ \n]+
+ {
+ l.state = sETimestamp
+ return tValue
+ goto yystate0
+ }
+yyrule25: // {S}[^ \n]+
+ {
+ return tTimestamp
+ }
+yyrule26: // \n
{
l.state = sInit
return tLinebreak
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricsparse.go b/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricsparse.go
index ed76bc39578ec..19a8a32ea8f11 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricsparse.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/openmetricsparse.go
@@ -17,6 +17,8 @@
package textparse
import (
+ "bytes"
+ "fmt"
"io"
"math"
"sort"
@@ -25,10 +27,13 @@ import (
"github.com/pkg/errors"
+ "github.com/prometheus/prometheus/pkg/exemplar"
"github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/pkg/value"
)
+var allowedSuffixes = [][]byte{[]byte("_total"), []byte("_bucket")}
+
type openMetricsLexer struct {
b []byte
i int
@@ -85,6 +90,12 @@ type OpenMetricsParser struct {
hasTS bool
start int
offsets []int
+
+ eOffsets []int
+ exemplar []byte
+ exemplarVal float64
+ exemplarTs int64
+ hasExemplarTs bool
}
// NewOpenMetricsParser returns a new parser of the byte slice.
@@ -96,7 +107,8 @@ func NewOpenMetricsParser(b []byte) Parser {
// of the current sample.
func (p *OpenMetricsParser) Series() ([]byte, *int64, float64) {
if p.hasTS {
- return p.series, &p.ts, p.val
+ ts := p.ts
+ return p.series, &ts, p.val
}
return p.series, nil, p.val
}
@@ -170,6 +182,38 @@ func (p *OpenMetricsParser) Metric(l *labels.Labels) string {
return s
}
+// Exemplar writes the exemplar of the current sample into the passed
+// exemplar. It returns whether an exemplar exists.
+func (p *OpenMetricsParser) Exemplar(e *exemplar.Exemplar) bool {
+ if len(p.exemplar) == 0 {
+ return false
+ }
+
+ // Allocate the full immutable string immediately, so we just
+ // have to create references on it below.
+ s := string(p.exemplar)
+
+ e.Value = p.exemplarVal
+ if p.hasExemplarTs {
+ e.HasTs = true
+ e.Ts = p.exemplarTs
+ }
+
+ for i := 0; i < len(p.eOffsets); i += 4 {
+ a := p.eOffsets[i] - p.start
+ b := p.eOffsets[i+1] - p.start
+ c := p.eOffsets[i+2] - p.start
+ d := p.eOffsets[i+3] - p.start
+
+ e.Labels = append(e.Labels, labels.Label{Name: s[a:b], Value: s[c:d]})
+ }
+
+ // Sort the labels.
+ sort.Sort(e.Labels)
+
+ return true
+}
+
// nextToken returns the next token from the openMetricsLexer.
func (p *OpenMetricsParser) nextToken() token {
tok := p.l.Lex()
@@ -183,6 +227,10 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
p.start = p.l.i
p.offsets = p.offsets[:0]
+ p.eOffsets = p.eOffsets[:0]
+ p.exemplar = p.exemplar[:0]
+ p.exemplarVal = 0
+ p.hasExemplarTs = false
switch t := p.nextToken(); t {
case tEofWord:
@@ -191,7 +239,7 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
}
return EntryInvalid, io.EOF
case tEOF:
- return EntryInvalid, parseError("unexpected end of data", t)
+ return EntryInvalid, io.EOF
case tHelp, tType, tUnit:
switch t := p.nextToken(); t {
case tMName:
@@ -258,26 +306,29 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
t2 := p.nextToken()
if t2 == tBraceOpen {
- if err := p.parseLVals(); err != nil {
+ offsets, err := p.parseLVals()
+ if err != nil {
return EntryInvalid, err
}
+ p.offsets = append(p.offsets, offsets...)
p.series = p.l.b[p.start:p.l.i]
t2 = p.nextToken()
}
- if t2 != tValue {
- return EntryInvalid, parseError("expected value after metric", t)
- }
- if p.val, err = parseFloat(yoloString(p.l.buf()[1:])); err != nil {
+ p.val, err = p.getFloatValue(t2, "metric")
+ if err != nil {
return EntryInvalid, err
}
- // Ensure canonical NaN value.
- if math.IsNaN(p.val) {
- p.val = math.Float64frombits(value.NormalNaN)
- }
+
p.hasTS = false
- switch p.nextToken() {
+ switch t2 := p.nextToken(); t2 {
+ case tEOF:
+ return EntryInvalid, io.EOF
case tLinebreak:
break
+ case tComment:
+ if err := p.parseComment(); err != nil {
+ return EntryInvalid, err
+ }
case tTimestamp:
p.hasTS = true
var ts float64
@@ -286,11 +337,17 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
return EntryInvalid, err
}
p.ts = int64(ts * 1000)
- if t2 := p.nextToken(); t2 != tLinebreak {
- return EntryInvalid, parseError("expected next entry after timestamp", t)
+ switch t3 := p.nextToken(); t3 {
+ case tLinebreak:
+ case tComment:
+ if err := p.parseComment(); err != nil {
+ return EntryInvalid, err
+ }
+ default:
+ return EntryInvalid, parseError("expected next entry after timestamp", t3)
}
default:
- return EntryInvalid, parseError("expected timestamp or new record", t)
+ return EntryInvalid, parseError("expected timestamp or # symbol", t2)
}
return EntrySeries, nil
@@ -300,50 +357,121 @@ func (p *OpenMetricsParser) Next() (Entry, error) {
return EntryInvalid, err
}
-func (p *OpenMetricsParser) parseLVals() error {
+func (p *OpenMetricsParser) parseComment() error {
+ // Validate the name of the metric. It must have _total or _bucket as
+ // suffix for exemplars to be supported.
+ if err := p.validateNameForExemplar(p.series[:p.offsets[0]-p.start]); err != nil {
+ return err
+ }
+
+ // Parse the labels.
+ offsets, err := p.parseLVals()
+ if err != nil {
+ return err
+ }
+ p.eOffsets = append(p.eOffsets, offsets...)
+ p.exemplar = p.l.b[p.start:p.l.i]
+
+ // Get the value.
+ p.exemplarVal, err = p.getFloatValue(p.nextToken(), "exemplar labels")
+ if err != nil {
+ return err
+ }
+
+ // Read the optional timestamp.
+ p.hasExemplarTs = false
+ switch t2 := p.nextToken(); t2 {
+ case tEOF:
+ return io.EOF
+ case tLinebreak:
+ break
+ case tTimestamp:
+ p.hasExemplarTs = true
+ var ts float64
+ // A float is enough to hold what we need for millisecond resolution.
+ if ts, err = parseFloat(yoloString(p.l.buf()[1:])); err != nil {
+ return err
+ }
+ p.exemplarTs = int64(ts * 1000)
+ switch t3 := p.nextToken(); t3 {
+ case tLinebreak:
+ default:
+ return parseError("expected next entry after exemplar timestamp", t3)
+ }
+ default:
+		return parseError("expected timestamp or linebreak", t2)
+ }
+ return nil
+}
+
+func (p *OpenMetricsParser) parseLVals() ([]int, error) {
+ var offsets []int
first := true
for {
t := p.nextToken()
switch t {
case tBraceClose:
- return nil
+ return offsets, nil
case tComma:
if first {
- return parseError("expected label name or left brace", t)
+ return nil, parseError("expected label name or left brace", t)
}
t = p.nextToken()
if t != tLName {
- return parseError("expected label name", t)
+ return nil, parseError("expected label name", t)
}
case tLName:
if !first {
- return parseError("expected comma", t)
+ return nil, parseError("expected comma", t)
}
default:
if first {
- return parseError("expected label name or left brace", t)
+ return nil, parseError("expected label name or left brace", t)
}
- return parseError("expected comma or left brace", t)
+ return nil, parseError("expected comma or left brace", t)
}
first = false
// t is now a label name.
- p.offsets = append(p.offsets, p.l.start, p.l.i)
+ offsets = append(offsets, p.l.start, p.l.i)
if t := p.nextToken(); t != tEqual {
- return parseError("expected equal", t)
+ return nil, parseError("expected equal", t)
}
if t := p.nextToken(); t != tLValue {
- return parseError("expected label value", t)
+ return nil, parseError("expected label value", t)
}
if !utf8.Valid(p.l.buf()) {
- return errors.New("invalid UTF-8 label value")
+ return nil, errors.New("invalid UTF-8 label value")
}
// The openMetricsLexer ensures the value string is quoted. Strip first
// and last character.
- p.offsets = append(p.offsets, p.l.start+1, p.l.i-1)
+ offsets = append(offsets, p.l.start+1, p.l.i-1)
+ }
+}
+
+func (p *OpenMetricsParser) getFloatValue(t token, after string) (float64, error) {
+ if t != tValue {
+ return 0, parseError(fmt.Sprintf("expected value after %v", after), t)
+ }
+ val, err := parseFloat(yoloString(p.l.buf()[1:]))
+ if err != nil {
+ return 0, err
+ }
+ // Ensure canonical NaN value.
+	if math.IsNaN(val) {
+ val = math.Float64frombits(value.NormalNaN)
+ }
+ return val, nil
+}
+
+func (p *OpenMetricsParser) validateNameForExemplar(name []byte) error {
+ for _, suffix := range allowedSuffixes {
+ if bytes.HasSuffix(name, suffix) {
+ return nil
+ }
}
+ return fmt.Errorf("metric name %v does not support exemplars", string(name))
}
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/promlex.l.go b/vendor/github.com/prometheus/prometheus/pkg/textparse/promlex.l.go
index f24f0452fdf26..690ec4e05bb45 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/textparse/promlex.l.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/promlex.l.go
@@ -28,6 +28,9 @@ const (
sLValue
sValue
sTimestamp
+ sExemplar
+ sEValue
+ sETimestamp
)
// Lex is called by the parser generated by "go tool yacc" to obtain each
diff --git a/vendor/github.com/prometheus/prometheus/pkg/textparse/promparse.go b/vendor/github.com/prometheus/prometheus/pkg/textparse/promparse.go
index 69fa51c3dddc8..6c254b5261ea4 100644
--- a/vendor/github.com/prometheus/prometheus/pkg/textparse/promparse.go
+++ b/vendor/github.com/prometheus/prometheus/pkg/textparse/promparse.go
@@ -28,6 +28,7 @@ import (
"github.com/pkg/errors"
+ "github.com/prometheus/prometheus/pkg/exemplar"
"github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/pkg/value"
)
@@ -234,6 +235,12 @@ func (p *PromParser) Metric(l *labels.Labels) string {
return s
}
+// Exemplar writes the exemplar of the current sample into the passed
+// exemplar. It returns whether an exemplar exists.
+func (p *PromParser) Exemplar(e *exemplar.Exemplar) bool {
+ return false
+}
+
// nextToken returns the next token from the promlexer. It skips over tabs
// and spaces.
func (p *PromParser) nextToken() token {
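
To show the new exemplar plumbing end to end, a hedged sketch of draining an OpenMetrics payload (imports elided; the sample input is invented for illustration, and per validateNameForExemplar above only names ending in _total or _bucket carry exemplars). PromParser.Exemplar always reports false, so this only yields results for the OpenMetrics parser.

```go
p := textparse.NewOpenMetricsParser([]byte(
	"foo_total 17 # {trace_id=\"abc\"} 0.5 1520879607\n# EOF\n"))
for {
	et, err := p.Next()
	if err == io.EOF {
		break
	}
	if err != nil {
		log.Fatal(err)
	}
	if et != textparse.EntrySeries {
		continue
	}
	var e exemplar.Exemplar
	if p.Exemplar(&e) {
		fmt.Printf("exemplar %v = %v (ts=%d)\n", e.Labels, e.Value, e.Ts)
	}
}
```
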
diff --git a/vendor/github.com/prometheus/prometheus/promql/ast.go b/vendor/github.com/prometheus/prometheus/promql/ast.go
index 3cc699aa3153b..973971d2929d1 100644
--- a/vendor/github.com/prometheus/prometheus/promql/ast.go
+++ b/vendor/github.com/prometheus/prometheus/promql/ast.go
@@ -248,61 +248,10 @@ func Walk(v Visitor, node Node, path []Node) error {
}
path = append(path, node)
- switch n := node.(type) {
- case *EvalStmt:
- if err := Walk(v, n.Expr, path); err != nil {
- return err
- }
-
- case Expressions:
- for _, e := range n {
- if err := Walk(v, e, path); err != nil {
- return err
- }
- }
- case *AggregateExpr:
- if n.Param != nil {
- if err := Walk(v, n.Param, path); err != nil {
- return err
- }
- }
- if err := Walk(v, n.Expr, path); err != nil {
- return err
- }
-
- case *BinaryExpr:
- if err := Walk(v, n.LHS, path); err != nil {
- return err
- }
- if err := Walk(v, n.RHS, path); err != nil {
- return err
- }
-
- case *Call:
- if err := Walk(v, n.Args, path); err != nil {
- return err
- }
-
- case *SubqueryExpr:
- if err := Walk(v, n.Expr, path); err != nil {
- return err
- }
-
- case *ParenExpr:
- if err := Walk(v, n.Expr, path); err != nil {
+ for _, e := range Children(node) {
+ if err := Walk(v, e, path); err != nil {
return err
}
-
- case *UnaryExpr:
- if err := Walk(v, n.Expr, path); err != nil {
- return err
- }
-
- case *MatrixSelector, *NumberLiteral, *StringLiteral, *VectorSelector:
- // nothing to do
-
- default:
- panic(errors.Errorf("promql.Walk: unhandled node type %T", node))
}
_, err = v.Visit(nil, nil)
@@ -326,3 +275,51 @@ func Inspect(node Node, f inspector) {
//nolint: errcheck
Walk(inspector(f), node, nil)
}
+
+// Children returns a list of all child nodes of a syntax tree node.
+func Children(node Node) []Node {
+	// For some reason these switches have significantly better performance than interfaces.
+ switch n := node.(type) {
+ case *EvalStmt:
+ return []Node{n.Expr}
+ case Expressions:
+		// Go cannot implicitly convert a slice of a concrete type to a slice of interfaces.
+ ret := make([]Node, len(n))
+ for i, e := range n {
+ ret[i] = e
+ }
+ return ret
+ case *AggregateExpr:
+ // While this does not look nice, it should avoid unnecessary allocations
+ // caused by slice resizing
+ if n.Expr == nil && n.Param == nil {
+ return nil
+ } else if n.Expr == nil {
+ return []Node{n.Param}
+ } else if n.Param == nil {
+ return []Node{n.Expr}
+ } else {
+ return []Node{n.Expr, n.Param}
+ }
+ case *BinaryExpr:
+ return []Node{n.LHS, n.RHS}
+ case *Call:
+		// Go cannot implicitly convert a slice of a concrete type to a slice of interfaces.
+ ret := make([]Node, len(n.Args))
+ for i, e := range n.Args {
+ ret[i] = e
+ }
+ return ret
+ case *SubqueryExpr:
+ return []Node{n.Expr}
+ case *ParenExpr:
+ return []Node{n.Expr}
+ case *UnaryExpr:
+ return []Node{n.Expr}
+ case *MatrixSelector, *NumberLiteral, *StringLiteral, *VectorSelector:
+ // nothing to do
+ return []Node{}
+ default:
+ panic(errors.Errorf("promql.Children: unhandled node type %T", node))
+ }
+}
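
Walk is now a thin loop over the new Children helper, so Inspect visits exactly the nodes Children reports. A sketch counting vector selectors in a query (imports elided):

```go
expr, err := promql.ParseExpr(`sum(up) / count(up)`)
if err != nil {
	log.Fatal(err)
}
selectors := 0
promql.Inspect(expr, func(node promql.Node, _ []promql.Node) error {
	if _, ok := node.(*promql.VectorSelector); ok {
		selectors++
	}
	return nil
})
fmt.Println(selectors) // 2
```
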
diff --git a/vendor/github.com/prometheus/prometheus/promql/lex.go b/vendor/github.com/prometheus/prometheus/promql/lex.go
index dde2799f30fb8..87e315cc7e80d 100644
--- a/vendor/github.com/prometheus/prometheus/promql/lex.go
+++ b/vendor/github.com/prometheus/prometheus/promql/lex.go
@@ -66,7 +66,7 @@ func (i ItemType) isAggregatorWithParam() bool {
// Returns false otherwise.
func (i ItemType) isKeyword() bool { return i > keywordsStart && i < keywordsEnd }
-// isCompairsonOperator returns true if the item corresponds to a comparison operator.
+// isComparisonOperator returns true if the item corresponds to a comparison operator.
// Returns false otherwise.
func (i ItemType) isComparisonOperator() bool {
switch i {
@@ -317,13 +317,13 @@ type Pos int
// lexer holds the state of the scanner.
type lexer struct {
- input string // The string being scanned.
- state stateFn // The next lexing function to enter.
- pos Pos // Current position in the input.
- start Pos // Start position of this item.
- width Pos // Width of last rune read from input.
- lastPos Pos // Position of most recent item returned by nextItem.
- items chan item // Channel of scanned items.
+ input string // The string being scanned.
+ state stateFn // The next lexing function to enter.
+ pos Pos // Current position in the input.
+ start Pos // Start position of this item.
+ width Pos // Width of last rune read from input.
+ lastPos Pos // Position of most recent item returned by nextItem.
+ items []item // Slice buffer of scanned items.
parenDepth int // Nesting depth of ( ) exprs.
braceOpen bool // Whether a { is opened.
@@ -362,7 +362,7 @@ func (l *lexer) backup() {
// emit passes an item back to the client.
func (l *lexer) emit(t ItemType) {
- l.items <- item{t, l.start, l.input[l.start:l.pos]}
+ l.items = append(l.items, item{t, l.start, l.input[l.start:l.pos]})
l.start = l.pos
}
@@ -408,13 +408,21 @@ func (l *lexer) linePosition() int {
// errorf returns an error token and terminates the scan by passing
// back a nil pointer that will be the next state, terminating l.nextItem.
func (l *lexer) errorf(format string, args ...interface{}) stateFn {
- l.items <- item{ItemError, l.start, fmt.Sprintf(format, args...)}
+ l.items = append(l.items, item{ItemError, l.start, fmt.Sprintf(format, args...)})
return nil
}
// nextItem returns the next item from the input.
func (l *lexer) nextItem() item {
- item := <-l.items
+ for len(l.items) == 0 {
+ if l.state != nil {
+ l.state = l.state(l)
+ } else {
+ l.emit(ItemEOF)
+ }
+ }
+ item := l.items[0]
+ l.items = l.items[1:]
l.lastPos = item.pos
return item
}
@@ -423,9 +431,8 @@ func (l *lexer) nextItem() item {
func lex(input string) *lexer {
l := &lexer{
input: input,
- items: make(chan item),
+ state: lexStatements,
}
- go l.run()
return l
}
@@ -434,7 +441,6 @@ func (l *lexer) run() {
for l.state = lexStatements; l.state != nil; {
l.state = l.state(l)
}
- close(l.items)
}
// Release resources used by lexer.
@@ -550,6 +556,9 @@ func lexStatements(l *lexer) stateFn {
}
l.gotColon = false
l.emit(ItemLeftBracket)
+ if isSpace(l.peek()) {
+ skipSpaces(l)
+ }
l.bracketOpen = true
return lexDuration
case r == ']':
@@ -715,6 +724,14 @@ func digitVal(ch rune) int {
return 16 // Larger than any legal digit val.
}
+// skipSpaces skips the spaces until a non-space is encountered.
+func skipSpaces(l *lexer) {
+ for isSpace(l.peek()) {
+ l.next()
+ }
+ l.ignore()
+}
+
// lexString scans a quoted string. The initial quote has already been seen.
func lexString(l *lexer) stateFn {
Loop:
diff --git a/vendor/github.com/prometheus/prometheus/promql/parse.go b/vendor/github.com/prometheus/prometheus/promql/parse.go
index 4bb4a11c6fb4b..8d6c6121e9d5b 100644
--- a/vendor/github.com/prometheus/prometheus/promql/parse.go
+++ b/vendor/github.com/prometheus/prometheus/promql/parse.go
@@ -32,9 +32,9 @@ import (
)
type parser struct {
- lex *lexer
- token [3]item
- peekCount int
+ lex *lexer
+ token item
+ peeking bool
}
// ParseErr wraps a parsing error with line and position context.
@@ -106,9 +106,6 @@ func (p *parser) parseExpr() (expr Expr, err error) {
defer p.recover(&err)
for p.peek().typ != ItemEOF {
- if p.peek().typ == ItemComment {
- continue
- }
if expr != nil {
p.errorf("could not parse remaining input %.15q...", p.lex.input[p.lex.lastPos:])
}
@@ -248,41 +245,42 @@ func (p *parser) typecheck(node Node) (err error) {
// next returns the next token.
func (p *parser) next() item {
- if p.peekCount > 0 {
- p.peekCount--
- } else {
+ if !p.peeking {
t := p.lex.nextItem()
// Skip comments.
for t.typ == ItemComment {
t = p.lex.nextItem()
}
- p.token[0] = t
+ p.token = t
}
- if p.token[p.peekCount].typ == ItemError {
- p.errorf("%s", p.token[p.peekCount].val)
+
+ p.peeking = false
+
+ if p.token.typ == ItemError {
+ p.errorf("%s", p.token.val)
}
- return p.token[p.peekCount]
+ return p.token
}
// peek returns but does not consume the next token.
func (p *parser) peek() item {
- if p.peekCount > 0 {
- return p.token[p.peekCount-1]
+ if p.peeking {
+ return p.token
}
- p.peekCount = 1
+ p.peeking = true
t := p.lex.nextItem()
// Skip comments.
for t.typ == ItemComment {
t = p.lex.nextItem()
}
- p.token[0] = t
- return p.token[0]
+ p.token = t
+ return p.token
}
// backup backs the input stream up one token.
func (p *parser) backup() {
- p.peekCount++
+ p.peeking = true
}
// errorf formats the error and terminates processing.
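
On the parser side, the three-token ring buffer (`token [3]item` plus `peekCount`) collapses into one buffered token guarded by a `peeking` flag, since only a single level of lookahead was ever used. The idiom in isolation (a sketch with illustrative names; the token source stands in for the lexer's `nextItem`):

```go
package main

import "fmt"

// oneTokenBuffer: peek() fetches and holds the next token, next() consumes
// it, and backup() re-arms the held token. Exactly one level of lookahead.
type oneTokenBuffer struct {
	src     func() string // token source, e.g. a lexer's nextItem
	token   string
	peeking bool
}

func (p *oneTokenBuffer) next() string {
	if !p.peeking {
		p.token = p.src()
	}
	p.peeking = false
	return p.token
}

func (p *oneTokenBuffer) peek() string {
	if !p.peeking {
		p.token = p.src()
		p.peeking = true
	}
	return p.token
}

func (p *oneTokenBuffer) backup() { p.peeking = true }

func main() {
	toks := []string{"rate", "(", "x", ")"}
	i := 0
	p := &oneTokenBuffer{src: func() string { t := toks[i]; i++; return t }}
	fmt.Println(p.peek()) // rate (fetched, not consumed)
	fmt.Println(p.next()) // rate (consumed)
	p.backup()
	fmt.Println(p.next()) // rate (backup made it available again)
	fmt.Println(p.next()) // (
}
```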
diff --git a/vendor/github.com/prometheus/prometheus/promql/printer.go b/vendor/github.com/prometheus/prometheus/promql/printer.go
index adcf2e699a425..846353c88d952 100644
--- a/vendor/github.com/prometheus/prometheus/promql/printer.go
+++ b/vendor/github.com/prometheus/prometheus/promql/printer.go
@@ -38,39 +38,10 @@ func tree(node Node, level string) string {
level += " · · ·"
- switch n := node.(type) {
- case *EvalStmt:
- t += tree(n.Expr, level)
-
- case Expressions:
- for _, e := range n {
- t += tree(e, level)
- }
- case *AggregateExpr:
- t += tree(n.Expr, level)
-
- case *BinaryExpr:
- t += tree(n.LHS, level)
- t += tree(n.RHS, level)
-
- case *Call:
- t += tree(n.Args, level)
-
- case *ParenExpr:
- t += tree(n.Expr, level)
-
- case *UnaryExpr:
- t += tree(n.Expr, level)
-
- case *SubqueryExpr:
- t += tree(n.Expr, level)
-
- case *MatrixSelector, *NumberLiteral, *StringLiteral, *VectorSelector:
- // nothing to do
-
- default:
- panic("promql.Tree: not all node types covered")
+ for _, e := range Children(node) {
+ t += tree(e, level)
}
+
return t
}
@@ -157,7 +128,11 @@ func (node *SubqueryExpr) String() string {
if node.Step != 0 {
step = model.Duration(node.Step).String()
}
- return fmt.Sprintf("%s[%s:%s]", node.Expr.String(), model.Duration(node.Range), step)
+ offset := ""
+ if node.Offset != time.Duration(0) {
+ offset = fmt.Sprintf(" offset %s", model.Duration(node.Offset))
+ }
+ return fmt.Sprintf("%s[%s:%s]%s", node.Expr.String(), model.Duration(node.Range), step, offset)
}
func (node *NumberLiteral) String() string {
diff --git a/vendor/github.com/prometheus/prometheus/promql/query_logger.go b/vendor/github.com/prometheus/prometheus/promql/query_logger.go
index 256c3d775c448..1014ade4078b9 100644
--- a/vendor/github.com/prometheus/prometheus/promql/query_logger.go
+++ b/vendor/github.com/prometheus/prometheus/promql/query_logger.go
@@ -41,12 +41,14 @@ const (
entrySize int = 1000
)
-func parseBrokenJson(brokenJson []byte, logger log.Logger) (bool, string) {
+func parseBrokenJson(brokenJson []byte) (bool, string) {
queries := strings.ReplaceAll(string(brokenJson), "\x00", "")
- queries = queries[:len(queries)-1] + "]"
+ if len(queries) > 0 {
+ queries = queries[:len(queries)-1] + "]"
+ }
// Conditional because of implementation detail: len() = 1 implies file consisted of a single char: '['.
- if len(queries) == 1 {
+ if len(queries) <= 1 {
return false, "[]"
}
@@ -68,7 +70,7 @@ func logUnfinishedQueries(filename string, filesize int, logger log.Logger) {
return
}
- queriesExist, queries := parseBrokenJson(brokenJson, logger)
+ queriesExist, queries := parseBrokenJson(brokenJson)
if !queriesExist {
return
}
diff --git a/vendor/github.com/prometheus/prometheus/promql/test.go b/vendor/github.com/prometheus/prometheus/promql/test.go
index 292203100fcb5..d2c3486149f4a 100644
--- a/vendor/github.com/prometheus/prometheus/promql/test.go
+++ b/vendor/github.com/prometheus/prometheus/promql/test.go
@@ -461,7 +461,7 @@ func (t *Test) exec(tc testCommand) error {
}
// Check query returns same result in range mode,
- /// by checking against the middle step.
+ // by checking against the middle step.
q, err = t.queryEngine.NewRangeQuery(t.storage, cmd.expr, cmd.start.Add(-time.Minute), cmd.start.Add(time.Minute), time.Minute)
if err != nil {
return err
diff --git a/vendor/github.com/prometheus/prometheus/storage/tsdb/tsdb.go b/vendor/github.com/prometheus/prometheus/storage/tsdb/tsdb.go
index e72242d126b42..d982ef5cf2ac5 100644
--- a/vendor/github.com/prometheus/prometheus/storage/tsdb/tsdb.go
+++ b/vendor/github.com/prometheus/prometheus/storage/tsdb/tsdb.go
@@ -17,7 +17,6 @@ import (
"context"
"sync"
"time"
- "unsafe"
"github.com/alecthomas/units"
"github.com/go-kit/kit/log"
@@ -27,7 +26,6 @@ import (
"github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/storage"
"github.com/prometheus/prometheus/tsdb"
- tsdbLabels "github.com/prometheus/prometheus/tsdb/labels"
)
// ErrNotReady is returned if the underlying storage is not ready yet.
@@ -244,12 +242,7 @@ type querier struct {
q tsdb.Querier
}
-func (q querier) Select(_ *storage.SelectParams, oms ...*labels.Matcher) (storage.SeriesSet, storage.Warnings, error) {
- ms := make([]tsdbLabels.Matcher, 0, len(oms))
-
- for _, om := range oms {
- ms = append(ms, convertMatcher(om))
- }
+func (q querier) Select(_ *storage.SelectParams, ms ...*labels.Matcher) (storage.SeriesSet, storage.Warnings, error) {
set, err := q.q.Select(ms...)
if err != nil {
return nil, nil, err
@@ -279,15 +272,15 @@ type series struct {
s tsdb.Series
}
-func (s series) Labels() labels.Labels { return toLabels(s.s.Labels()) }
-func (s series) Iterator() storage.SeriesIterator { return storage.SeriesIterator(s.s.Iterator()) }
+func (s series) Labels() labels.Labels { return s.s.Labels() }
+func (s series) Iterator() storage.SeriesIterator { return s.s.Iterator() }
type appender struct {
a tsdb.Appender
}
func (a appender) Add(lset labels.Labels, t int64, v float64) (uint64, error) {
- ref, err := a.a.Add(toTSDBLabels(lset), t, v)
+ ref, err := a.a.Add(lset, t, v)
switch errors.Cause(err) {
case tsdb.ErrNotFound:
@@ -320,36 +313,3 @@ func (a appender) AddFast(_ labels.Labels, ref uint64, t int64, v float64) error
func (a appender) Commit() error { return a.a.Commit() }
func (a appender) Rollback() error { return a.a.Rollback() }
-
-func convertMatcher(m *labels.Matcher) tsdbLabels.Matcher {
- switch m.Type {
- case labels.MatchEqual:
- return tsdbLabels.NewEqualMatcher(m.Name, m.Value)
-
- case labels.MatchNotEqual:
- return tsdbLabels.Not(tsdbLabels.NewEqualMatcher(m.Name, m.Value))
-
- case labels.MatchRegexp:
- res, err := tsdbLabels.NewRegexpMatcher(m.Name, "^(?:"+m.Value+")$")
- if err != nil {
- panic(err)
- }
- return res
-
- case labels.MatchNotRegexp:
- res, err := tsdbLabels.NewRegexpMatcher(m.Name, "^(?:"+m.Value+")$")
- if err != nil {
- panic(err)
- }
- return tsdbLabels.Not(res)
- }
- panic("storage.convertMatcher: invalid matcher type")
-}
-
-func toTSDBLabels(l labels.Labels) tsdbLabels.Labels {
- return *(*tsdbLabels.Labels)(unsafe.Pointer(&l))
-}
-
-func toLabels(l tsdbLabels.Labels) labels.Labels {
- return *(*labels.Labels)(unsafe.Pointer(&l))
-}
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/MAINTAINERS.md b/vendor/github.com/prometheus/prometheus/tsdb/MAINTAINERS.md
deleted file mode 100644
index dcb57a80dfd9e..0000000000000
--- a/vendor/github.com/prometheus/prometheus/tsdb/MAINTAINERS.md
+++ /dev/null
@@ -1,4 +0,0 @@
-Maintainers of this repository:
-
-* Krasi Georgiev <[email protected]> @krasi-georgiev
-* Goutham Veeramachaneni <[email protected]> @gouthamve
\ No newline at end of file
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/README.md b/vendor/github.com/prometheus/prometheus/tsdb/README.md
index c62d616d63ab0..61f867088203c 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/README.md
+++ b/vendor/github.com/prometheus/prometheus/tsdb/README.md
@@ -1,8 +1,6 @@
# TSDB
-[](https://travis-ci.org/prometheus/tsdb)
-[](https://godoc.org/github.com/prometheus/tsdb)
-[](https://goreportcard.com/report/github.com/prometheus/tsdb)
+[](https://godoc.org/github.com/prometheus/prometheus/tsdb)
This repository contains the Prometheus storage layer that is used in its 2.x releases.
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/block.go b/vendor/github.com/prometheus/prometheus/tsdb/block.go
index 0030fbd6854cb..e61d43ae4fe6a 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/block.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/block.go
@@ -26,12 +26,13 @@ import (
"github.com/go-kit/kit/log/level"
"github.com/oklog/ulid"
"github.com/pkg/errors"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/prometheus/prometheus/tsdb/chunks"
tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
"github.com/prometheus/prometheus/tsdb/fileutil"
"github.com/prometheus/prometheus/tsdb/index"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
)
// IndexWriter serializes the index for a block of series data.
@@ -135,8 +136,8 @@ type BlockReader interface {
// Chunks returns a ChunkReader over the block's data.
Chunks() (ChunkReader, error)
- // Tombstones returns a TombstoneReader over the block's deleted data.
- Tombstones() (TombstoneReader, error)
+ // Tombstones returns a tombstones.Reader over the block's deleted data.
+ Tombstones() (tombstones.Reader, error)
// Meta provides meta information about the block reader.
Meta() BlockMeta
@@ -274,12 +275,12 @@ type Block struct {
meta BlockMeta
// Symbol Table Size in bytes.
- // We maintain this variable to avoid recalculation everytime.
+ // We maintain this variable to avoid recalculation every time.
symbolTableSize uint64
chunkr ChunkReader
indexr IndexReader
- tombstones TombstoneReader
+ tombstones tombstones.Reader
logger log.Logger
@@ -321,7 +322,7 @@ func OpenBlock(logger log.Logger, dir string, pool chunkenc.Pool) (pb *Block, er
}
closers = append(closers, ir)
- tr, sizeTomb, err := readTombstones(dir)
+ tr, sizeTomb, err := tombstones.ReadTombstones(dir)
if err != nil {
return nil, err
}
@@ -412,11 +413,11 @@ func (pb *Block) Chunks() (ChunkReader, error) {
}
// Tombstones returns a new TombstoneReader against the block data.
-func (pb *Block) Tombstones() (TombstoneReader, error) {
+func (pb *Block) Tombstones() (tombstones.Reader, error) {
if err := pb.startRead(); err != nil {
return nil, err
}
- return blockTombstoneReader{TombstoneReader: pb.tombstones, b: pb}, nil
+ return blockTombstoneReader{Reader: pb.tombstones, b: pb}, nil
}
// GetSymbolTableSize returns the Symbol Table Size in the index of this block.
@@ -483,7 +484,7 @@ func (r blockIndexReader) Close() error {
}
type blockTombstoneReader struct {
- TombstoneReader
+ tombstones.Reader
b *Block
}
@@ -503,7 +504,7 @@ func (r blockChunkReader) Close() error {
}
// Delete matching series between mint and maxt in the block.
-func (pb *Block) Delete(mint, maxt int64, ms ...labels.Matcher) error {
+func (pb *Block) Delete(mint, maxt int64, ms ...*labels.Matcher) error {
pb.mtx.Lock()
defer pb.mtx.Unlock()
@@ -519,7 +520,7 @@ func (pb *Block) Delete(mint, maxt int64, ms ...labels.Matcher) error {
ir := pb.indexr
// Choose only valid postings which have chunks in the time-range.
- stones := newMemTombstones()
+ stones := tombstones.NewMemTombstones()
var lset labels.Labels
var chks []chunks.Meta
@@ -535,7 +536,7 @@ Outer:
if chk.OverlapsClosedInterval(mint, maxt) {
// Delete only until the current values and not beyond.
tmin, tmax := clampInterval(mint, maxt, chks[0].MinTime, chks[len(chks)-1].MaxTime)
- stones.addInterval(p.At(), Interval{tmin, tmax})
+ stones.AddInterval(p.At(), tombstones.Interval{Mint: tmin, Maxt: tmax})
continue Outer
}
}
@@ -545,9 +546,9 @@ Outer:
return p.Err()
}
- err = pb.tombstones.Iter(func(id uint64, ivs Intervals) error {
+ err = pb.tombstones.Iter(func(id uint64, ivs tombstones.Intervals) error {
for _, iv := range ivs {
- stones.addInterval(id, iv)
+ stones.AddInterval(id, iv)
}
return nil
})
@@ -557,7 +558,7 @@ Outer:
pb.tombstones = stones
pb.meta.Stats.NumTombstones = pb.tombstones.Total()
- n, err := writeTombstoneFile(pb.logger, pb.dir, pb.tombstones)
+ n, err := tombstones.WriteFile(pb.logger, pb.dir, pb.tombstones)
if err != nil {
return err
}
@@ -575,7 +576,7 @@ Outer:
func (pb *Block) CleanTombstones(dest string, c Compactor) (*ulid.ULID, error) {
numStones := 0
- if err := pb.tombstones.Iter(func(id uint64, ivs Intervals) error {
+ if err := pb.tombstones.Iter(func(id uint64, ivs tombstones.Intervals) error {
numStones += len(ivs)
return nil
}); err != nil {
@@ -610,7 +611,7 @@ func (pb *Block) Snapshot(dir string) error {
for _, fname := range []string{
metaFilename,
indexFilename,
- tombstoneFilename,
+ tombstones.TombstonesFilename,
} {
if err := os.Link(filepath.Join(pb.dir, fname), filepath.Join(blockDir, fname)); err != nil {
return errors.Wrapf(err, "create snapshot %s", fname)
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/xor.go b/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/xor.go
index ca20309f687f2..ce6e0a9512f68 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/xor.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/chunkenc/xor.go
@@ -14,7 +14,7 @@
// The code in this file was largely written by Damian Gryski as part of
// https://github.com/dgryski/go-tsz and published under the license below.
// It was modified to accommodate reading from byte slices without modifying
-// the underlying bytes, which would panic when reading from mmaped
+// the underlying bytes, which would panic when reading from mmap'd
// read-only byte slices.
// Copyright (c) 2015,2016 Damian Gryski <[email protected]>
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/compact.go b/vendor/github.com/prometheus/prometheus/tsdb/compact.go
index f88d27fba39fd..e446235fbb20b 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/compact.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/compact.go
@@ -29,12 +29,13 @@ import (
"github.com/oklog/ulid"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/prometheus/prometheus/tsdb/chunks"
tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
"github.com/prometheus/prometheus/tsdb/fileutil"
"github.com/prometheus/prometheus/tsdb/index"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
)
// ExponentialBlockRanges returns the time ranges based on the stepSize.
@@ -607,7 +608,7 @@ func (c *LeveledCompactor) write(dest string, meta *BlockMeta, blocks ...BlockRe
}
// Create an empty tombstones file.
- if _, err := writeTombstoneFile(c.logger, tmp, newMemTombstones()); err != nil {
+ if _, err := tombstones.WriteFile(c.logger, tmp, tombstones.NewMemTombstones()); err != nil {
return errors.Wrap(err, "write new tombstones file")
}
@@ -768,7 +769,7 @@ func (c *LeveledCompactor) populateBlock(blocks []BlockReader, meta *BlockMeta,
//
// TODO think how to avoid the typecasting to verify when it is head block.
if _, isHeadChunk := chk.Chunk.(*safeChunk); isHeadChunk && chk.MaxTime >= meta.MaxTime {
- dranges = append(dranges, Interval{Mint: meta.MaxTime, Maxt: math.MaxInt64})
+ dranges = append(dranges, tombstones.Interval{Mint: meta.MaxTime, Maxt: math.MaxInt64})
} else
// Sanity check for disk blocks.
@@ -876,15 +877,15 @@ type compactionSeriesSet struct {
p index.Postings
index IndexReader
chunks ChunkReader
- tombstones TombstoneReader
+ tombstones tombstones.Reader
l labels.Labels
c []chunks.Meta
- intervals Intervals
+ intervals tombstones.Intervals
err error
}
-func newCompactionSeriesSet(i IndexReader, c ChunkReader, t TombstoneReader, p index.Postings) *compactionSeriesSet {
+func newCompactionSeriesSet(i IndexReader, c ChunkReader, t tombstones.Reader, p index.Postings) *compactionSeriesSet {
return &compactionSeriesSet{
index: i,
chunks: c,
@@ -914,7 +915,7 @@ func (c *compactionSeriesSet) Next() bool {
if len(c.intervals) > 0 {
chks := make([]chunks.Meta, 0, len(c.c))
for _, chk := range c.c {
- if !(Interval{chk.MinTime, chk.MaxTime}.isSubrange(c.intervals)) {
+ if !(tombstones.Interval{Mint: chk.MinTime, Maxt: chk.MaxTime}.IsSubrange(c.intervals)) {
chks = append(chks, chk)
}
}
@@ -942,7 +943,7 @@ func (c *compactionSeriesSet) Err() error {
return c.p.Err()
}
-func (c *compactionSeriesSet) At() (labels.Labels, []chunks.Meta, Intervals) {
+func (c *compactionSeriesSet) At() (labels.Labels, []chunks.Meta, tombstones.Intervals) {
return c.l, c.c, c.intervals
}
@@ -952,7 +953,7 @@ type compactionMerger struct {
aok, bok bool
l labels.Labels
c []chunks.Meta
- intervals Intervals
+ intervals tombstones.Intervals
}
func newCompactionMerger(a, b ChunkSeriesSet) (*compactionMerger, error) {
@@ -1008,7 +1009,7 @@ func (c *compactionMerger) Next() bool {
_, cb, rb := c.b.At()
for _, r := range rb {
- ra = ra.add(r)
+ ra = ra.Add(r)
}
c.l = append(c.l[:0], l...)
@@ -1029,6 +1030,6 @@ func (c *compactionMerger) Err() error {
return c.b.Err()
}
-func (c *compactionMerger) At() (labels.Labels, []chunks.Meta, Intervals) {
+func (c *compactionMerger) At() (labels.Labels, []chunks.Meta, tombstones.Intervals) {
return c.l, c.c, c.intervals
}
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/db.go b/vendor/github.com/prometheus/prometheus/tsdb/db.go
index e29aedf5bace2..715a97b28c6ed 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/db.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/db.go
@@ -34,21 +34,26 @@ import (
"github.com/oklog/ulid"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
"github.com/prometheus/prometheus/tsdb/fileutil"
_ "github.com/prometheus/prometheus/tsdb/goversion"
- "github.com/prometheus/prometheus/tsdb/labels"
"github.com/prometheus/prometheus/tsdb/wal"
"golang.org/x/sync/errgroup"
)
+// Default duration of a block in milliseconds - 2h.
+const (
+ DefaultBlockDuration = int64(2 * 60 * 60 * 1000)
+)
+
// DefaultOptions used for the DB. They are sane for setups using
// millisecond precision timestamps.
var DefaultOptions = &Options{
WALSegmentSize: wal.DefaultSegmentSize,
RetentionDuration: 15 * 24 * 60 * 60 * 1000, // 15 days in milliseconds
- BlockRanges: ExponentialBlockRanges(int64(2*time.Hour)/1e6, 3, 5),
+ BlockRanges: ExponentialBlockRanges(DefaultBlockDuration, 3, 5),
NoLockfile: false,
AllowOverlappingBlocks: false,
WALCompression: false,
@@ -216,7 +221,7 @@ func newDBMetrics(db *DB, r prometheus.Registerer) *dbMetrics {
db.mtx.RLock()
defer db.mtx.RUnlock()
if len(db.blocks) == 0 {
- return float64(db.head.minTime)
+ return float64(db.head.MinTime())
}
return float64(db.blocks[0].meta.MinTime)
})
@@ -260,7 +265,7 @@ func newDBMetrics(db *DB, r prometheus.Registerer) *dbMetrics {
var ErrClosed = errors.New("db already closed")
// DBReadOnly provides APIs for read only operations on a database.
-// Current implementation doesn't support concurency so
+// Current implementation doesn't support concurrency so
// all API calls should happen in the same go routine.
type DBReadOnly struct {
logger log.Logger
@@ -272,7 +277,7 @@ type DBReadOnly struct {
// OpenDBReadOnly opens DB in the given directory for read only operations.
func OpenDBReadOnly(dir string, l log.Logger) (*DBReadOnly, error) {
if _, err := os.Stat(dir); err != nil {
- return nil, errors.Wrap(err, "openning the db dir")
+ return nil, errors.Wrap(err, "opening the db dir")
}
if l == nil {
@@ -359,7 +364,7 @@ func (db *DBReadOnly) Querier(mint, maxt int64) (Querier, error) {
maxBlockTime = blocks[len(blocks)-1].Meta().MaxTime
}
- // Also add the WAL if the current blocks don't cover the requestes time range.
+	// Also add the WAL if the current blocks don't cover the requested time range.
if maxBlockTime <= maxt {
w, err := wal.Open(db.logger, nil, filepath.Join(db.dir, "wal"))
if err != nil {
@@ -933,7 +938,11 @@ func (db *DB) beyondSizeRetention(blocks []*Block) (deleteable map[ulid.ULID]*Bl
}
deleteable = make(map[ulid.ULID]*Block)
- blocksSize := int64(0)
+
+ walSize, _ := db.Head().wal.Size()
+ // Initializing size counter with WAL size,
+ // as that is part of the retention strategy.
+ blocksSize := walSize
for i, block := range blocks {
blocksSize += block.Size()
if blocksSize > db.opts.MaxBytes {
@@ -1243,7 +1252,7 @@ func rangeForTimestamp(t int64, width int64) (maxt int64) {
}
// Delete implements deletion of metrics. It only has atomicity guarantees on a per-block basis.
-func (db *DB) Delete(mint, maxt int64, ms ...labels.Matcher) error {
+func (db *DB) Delete(mint, maxt int64, ms ...*labels.Matcher) error {
db.cmtx.Lock()
defer db.cmtx.Unlock()
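
The `beyondSizeRetention` hunk above starts the size counter at the WAL's on-disk size, so `MaxBytes` now budgets blocks and WAL together. The selection logic, sketched standalone (types and values are illustrative; the real code receives blocks already ordered newest-first):

```go
package main

import "fmt"

type block struct {
	id   string
	size int64
}

// blocksToDelete marks the first block that pushes total usage (WAL
// included) past maxBytes, plus every older block after it.
func blocksToDelete(walSize int64, newestFirst []block, maxBytes int64) []string {
	var doomed []string
	total := walSize // the WAL is part of the retention budget now
	for i, b := range newestFirst {
		total += b.size
		if total > maxBytes {
			for _, old := range newestFirst[i:] {
				doomed = append(doomed, old.id)
			}
			break
		}
	}
	return doomed
}

func main() {
	blocks := []block{{"01D", 400}, {"01C", 400}, {"01B", 400}, {"01A", 400}}
	fmt.Println(blocksToDelete(300, blocks, 1200))
	// 01D and 01C fit in the 1200-byte budget alongside the 300-byte WAL,
	// so this prints [01B 01A].
}
```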
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/fileutil/dir.go b/vendor/github.com/prometheus/prometheus/tsdb/fileutil/dir.go
new file mode 100644
index 0000000000000..e6ac4ec989229
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/tsdb/fileutil/dir.go
@@ -0,0 +1,33 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package fileutil
+
+import (
+ "os"
+ "path/filepath"
+)
+
+func DirSize(dir string) (int64, error) {
+ var size int64
+ err := filepath.Walk(dir, func(filePath string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+ if !info.IsDir() {
+ size += info.Size()
+ }
+ return nil
+ })
+ return size, err
+}
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/goversion/goversion.go b/vendor/github.com/prometheus/prometheus/tsdb/goversion/goversion.go
index 8b194d4a2be7a..93ff6ef8b9f94 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/goversion/goversion.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/goversion/goversion.go
@@ -13,7 +13,7 @@
// +build go1.12
-// Package goversion enforces the go version suported by the tsdb module.
+// Package goversion enforces the go version supported by the tsdb module.
package goversion
const _SoftwareRequiresGOVERSION1_12 = uint8(0)
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/head.go b/vendor/github.com/prometheus/prometheus/tsdb/head.go
index 6c9975a136285..34d40bb713035 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/head.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/head.go
@@ -28,11 +28,13 @@ import (
"github.com/oklog/ulid"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/prometheus/prometheus/tsdb/chunks"
"github.com/prometheus/prometheus/tsdb/encoding"
"github.com/prometheus/prometheus/tsdb/index"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/tsdb/record"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
"github.com/prometheus/prometheus/tsdb/wal"
)
@@ -54,22 +56,25 @@ var (
// emptyTombstoneReader is a no-op Tombstone Reader.
// This is used by head to satisfy the Tombstones() function call.
- emptyTombstoneReader = newMemTombstones()
+ emptyTombstoneReader = tombstones.NewMemTombstones()
)
// Head handles reads and writes of time series data within a time window.
type Head struct {
- chunkRange int64
+ // Keep all 64bit atomically accessed variables at the top of this struct.
+ // See https://golang.org/pkg/sync/atomic/#pkg-note-BUG for more info.
+ chunkRange int64
+ numSeries uint64
+ minTime, maxTime int64 // Current min and max of the samples included in the head.
+ minValidTime int64 // Mint allowed to be added to the head. It shouldn't be lower than the maxt of the last persisted block.
+ lastSeriesID uint64
+
metrics *headMetrics
wal *wal.WAL
logger log.Logger
appendPool sync.Pool
+ seriesPool sync.Pool
bytesPool sync.Pool
- numSeries uint64
-
- minTime, maxTime int64 // Current min and max of the samples included in the head.
- minValidTime int64 // Mint allowed to be added to the head. It shouldn't be lower than the maxt of the last persisted block.
- lastSeriesID uint64
// All series addressable by their ID or hash.
series *stripeSeries
@@ -82,6 +87,10 @@ type Head struct {
deleted map[uint64]int // Deleted series, and what WAL segment they must be kept until.
postings *index.MemPostings // postings lists for terms
+
+ cardinalityMutex sync.Mutex
+ cardinalityCache *index.PostingsStats // posting stats cache which will expire after 30sec
+	lastPostingsStatsCall time.Duration // last posting stats call (PostingsCardinalityStats()) time for caching
}
type headMetrics struct {
@@ -226,6 +235,26 @@ func newHeadMetrics(h *Head, r prometheus.Registerer) *headMetrics {
return m
}
+const cardinalityCacheExpirationTime = time.Duration(30) * time.Second
+
+// PostingsCardinalityStats returns the top 10 highest cardinality stats by label and value names.
+func (h *Head) PostingsCardinalityStats(statsByLabelName string) *index.PostingsStats {
+ h.cardinalityMutex.Lock()
+ defer h.cardinalityMutex.Unlock()
+ currentTime := time.Duration(time.Now().Unix()) * time.Second
+ seconds := currentTime - h.lastPostingsStatsCall
+ if seconds > cardinalityCacheExpirationTime {
+ h.cardinalityCache = nil
+ }
+ if h.cardinalityCache != nil {
+ return h.cardinalityCache
+ }
+ h.cardinalityCache = h.postings.Stats(statsByLabelName)
+ h.lastPostingsStatsCall = time.Duration(time.Now().Unix()) * time.Second
+
+ return h.cardinalityCache
+}
+
// NewHead opens the head block in dir.
func NewHead(r prometheus.Registerer, l log.Logger, wal *wal.WAL, chunkRange int64) (*Head, error) {
if l == nil {
@@ -256,7 +285,7 @@ func NewHead(r prometheus.Registerer, l log.Logger, wal *wal.WAL, chunkRange int
// Samples before the mint timestamp are discarded.
func (h *Head) processWALSamples(
minValidTime int64,
- input <-chan []RefSample, output chan<- []RefSample,
+ input <-chan []record.RefSample, output chan<- []record.RefSample,
) (unknownRefs uint64) {
defer close(output)
@@ -328,11 +357,10 @@ func (h *Head) loadWAL(r *wal.Reader, multiRef map[uint64]uint64) (err error) {
// They are connected through a ring of channels which ensures that all sample batches
// read from the WAL are processed in order.
var (
- wg sync.WaitGroup
- multiRefLock sync.Mutex
- n = runtime.GOMAXPROCS(0)
- inputs = make([]chan []RefSample, n)
- outputs = make([]chan []RefSample, n)
+ wg sync.WaitGroup
+ n = runtime.GOMAXPROCS(0)
+ inputs = make([]chan []record.RefSample, n)
+ outputs = make([]chan []record.RefSample, n)
)
wg.Add(n)
@@ -349,10 +377,10 @@ func (h *Head) loadWAL(r *wal.Reader, multiRef map[uint64]uint64) (err error) {
}()
for i := 0; i < n; i++ {
- outputs[i] = make(chan []RefSample, 300)
- inputs[i] = make(chan []RefSample, 300)
+ outputs[i] = make(chan []record.RefSample, 300)
+ inputs[i] = make(chan []record.RefSample, 300)
- go func(input <-chan []RefSample, output chan<- []RefSample) {
+ go func(input <-chan []record.RefSample, output chan<- []record.RefSample) {
unknown := h.processWALSamples(h.minValidTime, input, output)
atomic.AddUint64(&unknownRefs, unknown)
wg.Done()
@@ -360,55 +388,106 @@ func (h *Head) loadWAL(r *wal.Reader, multiRef map[uint64]uint64) (err error) {
}
var (
- dec RecordDecoder
- series []RefSeries
- samples []RefSample
- tstones []Stone
- allStones = newMemTombstones()
+ dec record.Decoder
+ allStones = tombstones.NewMemTombstones()
+ shards = make([][]record.RefSample, n)
)
defer func() {
if err := allStones.Close(); err != nil {
level.Warn(h.logger).Log("msg", "closing memTombstones during wal read", "err", err)
}
}()
- for r.Next() {
- series, samples, tstones = series[:0], samples[:0], tstones[:0]
- rec := r.Record()
-
- switch dec.Type(rec) {
- case RecordSeries:
- series, err = dec.Series(rec, series)
- if err != nil {
- return &wal.CorruptionErr{
- Err: errors.Wrap(err, "decode series"),
+
+ var (
+ decoded = make(chan interface{}, 10)
+ errCh = make(chan error, 1)
+ seriesPool = sync.Pool{
+ New: func() interface{} {
+ return []record.RefSeries{}
+ },
+ }
+ samplesPool = sync.Pool{
+ New: func() interface{} {
+ return []record.RefSample{}
+ },
+ }
+ tstonesPool = sync.Pool{
+ New: func() interface{} {
+ return []tombstones.Stone{}
+ },
+ }
+ )
+ go func() {
+ defer close(decoded)
+ for r.Next() {
+ rec := r.Record()
+ switch dec.Type(rec) {
+ case record.Series:
+ series := seriesPool.Get().([]record.RefSeries)[:0]
+ series, err = dec.Series(rec, series)
+ if err != nil {
+ errCh <- &wal.CorruptionErr{
+ Err: errors.Wrap(err, "decode series"),
+ Segment: r.Segment(),
+ Offset: r.Offset(),
+ }
+ return
+ }
+ decoded <- series
+ case record.Samples:
+ samples := samplesPool.Get().([]record.RefSample)[:0]
+ samples, err = dec.Samples(rec, samples)
+ if err != nil {
+ errCh <- &wal.CorruptionErr{
+ Err: errors.Wrap(err, "decode samples"),
+ Segment: r.Segment(),
+ Offset: r.Offset(),
+ }
+ return
+ }
+ decoded <- samples
+ case record.Tombstones:
+ tstones := tstonesPool.Get().([]tombstones.Stone)[:0]
+ tstones, err = dec.Tombstones(rec, tstones)
+ if err != nil {
+ errCh <- &wal.CorruptionErr{
+ Err: errors.Wrap(err, "decode tombstones"),
+ Segment: r.Segment(),
+ Offset: r.Offset(),
+ }
+ return
+ }
+ decoded <- tstones
+ default:
+ errCh <- &wal.CorruptionErr{
+ Err: errors.Errorf("invalid record type %v", dec.Type(rec)),
Segment: r.Segment(),
Offset: r.Offset(),
}
+ return
}
- for _, s := range series {
+ }
+ }()
+
+ for d := range decoded {
+ switch v := d.(type) {
+ case []record.RefSeries:
+ for _, s := range v {
series, created := h.getOrCreateWithID(s.Ref, s.Labels.Hash(), s.Labels)
if !created {
// There's already a different ref for this series.
- multiRefLock.Lock()
multiRef[s.Ref] = series.ref
- multiRefLock.Unlock()
}
if h.lastSeriesID < s.Ref {
h.lastSeriesID = s.Ref
}
}
- case RecordSamples:
- samples, err = dec.Samples(rec, samples)
- s := samples
- if err != nil {
- return &wal.CorruptionErr{
- Err: errors.Wrap(err, "decode samples"),
- Segment: r.Segment(),
- Offset: r.Offset(),
- }
- }
+ //lint:ignore SA6002 relax staticcheck verification.
+ seriesPool.Put(v)
+ case []record.RefSample:
+ samples := v
// We split up the samples into chunks of 5000 samples or less.
// With O(300 * #cores) in-flight sample batches, large scrapes could otherwise
// cause thousands of very large in flight buffers occupying large amounts
@@ -418,9 +497,8 @@ func (h *Head) loadWAL(r *wal.Reader, multiRef map[uint64]uint64) (err error) {
if len(samples) < m {
m = len(samples)
}
- shards := make([][]RefSample, n)
for i := 0; i < n; i++ {
- var buf []RefSample
+ var buf []record.RefSample
select {
case buf = <-outputs[i]:
default:
@@ -439,37 +517,34 @@ func (h *Head) loadWAL(r *wal.Reader, multiRef map[uint64]uint64) (err error) {
}
samples = samples[m:]
}
- samples = s // Keep whole slice for reuse.
- case RecordTombstones:
- tstones, err = dec.Tombstones(rec, tstones)
- if err != nil {
- return &wal.CorruptionErr{
- Err: errors.Wrap(err, "decode tombstones"),
- Segment: r.Segment(),
- Offset: r.Offset(),
- }
- }
- for _, s := range tstones {
- for _, itv := range s.intervals {
+ //lint:ignore SA6002 relax staticcheck verification.
+ samplesPool.Put(v)
+ case []tombstones.Stone:
+ for _, s := range v {
+ for _, itv := range s.Intervals {
if itv.Maxt < h.minValidTime {
continue
}
- if m := h.series.getByID(s.ref); m == nil {
+ if m := h.series.getByID(s.Ref); m == nil {
unknownRefs++
continue
}
- allStones.addInterval(s.ref, itv)
+ allStones.AddInterval(s.Ref, itv)
}
}
+ //lint:ignore SA6002 relax staticcheck verification.
+ tstonesPool.Put(v)
default:
- return &wal.CorruptionErr{
- Err: errors.Errorf("invalid record type %v", dec.Type(rec)),
- Segment: r.Segment(),
- Offset: r.Offset(),
- }
+ panic(fmt.Errorf("unexpected decoded type: %T", d))
}
}
+ select {
+ case err := <-errCh:
+ return err
+ default:
+ }
+
// Signal termination to each worker and wait for it to close its output channel.
for i := 0; i < n; i++ {
close(inputs[i])
@@ -482,7 +557,7 @@ func (h *Head) loadWAL(r *wal.Reader, multiRef map[uint64]uint64) (err error) {
return errors.Wrap(r.Err(), "read records")
}
- if err := allStones.Iter(func(ref uint64, dranges Intervals) error {
+ if err := allStones.Iter(func(ref uint64, dranges tombstones.Intervals) error {
return h.chunkRewrite(ref, dranges)
}); err != nil {
return errors.Wrap(r.Err(), "deleting samples from tombstones")
@@ -508,8 +583,8 @@ func (h *Head) Init(minValidTime int64) error {
level.Info(h.logger).Log("msg", "replaying WAL, this may take awhile")
// Backfill the checkpoint first if it exists.
- dir, startFrom, err := LastCheckpoint(h.wal.Dir())
- if err != nil && err != ErrNotFound {
+ dir, startFrom, err := wal.LastCheckpoint(h.wal.Dir())
+ if err != nil && err != record.ErrNotFound {
return errors.Wrap(err, "find last checkpoint")
}
multiRef := map[uint64]uint64{}
@@ -629,7 +704,7 @@ func (h *Head) Truncate(mint int64) (err error) {
return ok
}
h.metrics.checkpointCreationTotal.Inc()
- if _, err = Checkpoint(h.wal, first, last, keep, mint); err != nil {
+ if _, err = wal.Checkpoint(h.wal, first, last, keep, mint); err != nil {
h.metrics.checkpointCreationFail.Inc()
return errors.Wrap(err, "create checkpoint")
}
@@ -651,7 +726,7 @@ func (h *Head) Truncate(mint int64) (err error) {
h.deletedMtx.Unlock()
h.metrics.checkpointDeleteTotal.Inc()
- if err := DeleteCheckpoints(h.wal.Dir(), last); err != nil {
+ if err := wal.DeleteCheckpoints(h.wal.Dir(), last); err != nil {
// Leftover old checkpoints do not cause problems down the line beyond
// occupying disk space.
// They will just be ignored since a higher checkpoint exists.
@@ -693,7 +768,7 @@ func (h *rangeHead) Chunks() (ChunkReader, error) {
return h.head.chunksRange(h.mint, h.maxt), nil
}
-func (h *rangeHead) Tombstones() (TombstoneReader, error) {
+func (h *rangeHead) Tombstones() (tombstones.Reader, error) {
return emptyTombstoneReader, nil
}
@@ -779,6 +854,7 @@ func (h *Head) appender() *headAppender {
mint: math.MaxInt64,
maxt: math.MinInt64,
samples: h.getAppendBuffer(),
+ sampleSeries: h.getSeriesBuffer(),
}
}
@@ -789,19 +865,32 @@ func max(a, b int64) int64 {
return b
}
-func (h *Head) getAppendBuffer() []RefSample {
+func (h *Head) getAppendBuffer() []record.RefSample {
b := h.appendPool.Get()
if b == nil {
- return make([]RefSample, 0, 512)
+ return make([]record.RefSample, 0, 512)
}
- return b.([]RefSample)
+ return b.([]record.RefSample)
}
-func (h *Head) putAppendBuffer(b []RefSample) {
+func (h *Head) putAppendBuffer(b []record.RefSample) {
//lint:ignore SA6002 safe to ignore and actually fixing it has some performance penalty.
h.appendPool.Put(b[:0])
}
+func (h *Head) getSeriesBuffer() []*memSeries {
+ b := h.seriesPool.Get()
+ if b == nil {
+ return make([]*memSeries, 0, 512)
+ }
+ return b.([]*memSeries)
+}
+
+func (h *Head) putSeriesBuffer(b []*memSeries) {
+ //lint:ignore SA6002 safe to ignore and actually fixing it has some performance penalty.
+ h.seriesPool.Put(b[:0])
+}
+
func (h *Head) getBytesBuffer() []byte {
b := h.bytesPool.Get()
if b == nil {
@@ -820,8 +909,9 @@ type headAppender struct {
minValidTime int64 // No samples below this timestamp are allowed.
mint, maxt int64
- series []RefSeries
- samples []RefSample
+ series []record.RefSeries
+ samples []record.RefSample
+ sampleSeries []*memSeries
}
func (a *headAppender) Add(lset labels.Labels, t int64, v float64) (uint64, error) {
@@ -834,7 +924,7 @@ func (a *headAppender) Add(lset labels.Labels, t int64, v float64) (uint64, erro
s, created := a.head.getOrCreate(lset.Hash(), lset)
if created {
- a.series = append(a.series, RefSeries{
+ a.series = append(a.series, record.RefSeries{
Ref: s.ref,
Labels: lset,
})
@@ -866,12 +956,12 @@ func (a *headAppender) AddFast(ref uint64, t int64, v float64) error {
a.maxt = t
}
- a.samples = append(a.samples, RefSample{
- Ref: ref,
- T: t,
- V: v,
- series: s,
+ a.samples = append(a.samples, record.RefSample{
+ Ref: ref,
+ T: t,
+ V: v,
})
+ a.sampleSeries = append(a.sampleSeries, s)
return nil
}
@@ -884,7 +974,7 @@ func (a *headAppender) log() error {
defer func() { a.head.putBytesBuffer(buf) }()
var rec []byte
- var enc RecordEncoder
+ var enc record.Encoder
if len(a.series) > 0 {
rec = enc.Series(a.series, buf)
@@ -908,18 +998,20 @@ func (a *headAppender) log() error {
func (a *headAppender) Commit() error {
defer a.head.metrics.activeAppenders.Dec()
defer a.head.putAppendBuffer(a.samples)
+ defer a.head.putSeriesBuffer(a.sampleSeries)
if err := a.log(); err != nil {
return errors.Wrap(err, "write to WAL")
}
total := len(a.samples)
-
- for _, s := range a.samples {
- s.series.Lock()
- ok, chunkCreated := s.series.append(s.T, s.V)
- s.series.pendingCommit = false
- s.series.Unlock()
+ var series *memSeries
+ for i, s := range a.samples {
+ series = a.sampleSeries[i]
+ series.Lock()
+ ok, chunkCreated := series.append(s.T, s.V)
+ series.pendingCommit = false
+ series.Unlock()
if !ok {
total--
@@ -938,10 +1030,12 @@ func (a *headAppender) Commit() error {
func (a *headAppender) Rollback() error {
a.head.metrics.activeAppenders.Dec()
- for _, s := range a.samples {
- s.series.Lock()
- s.series.pendingCommit = false
- s.series.Unlock()
+ var series *memSeries
+ for i := range a.samples {
+ series = a.sampleSeries[i]
+ series.Lock()
+ series.pendingCommit = false
+ series.Unlock()
}
a.head.putAppendBuffer(a.samples)
@@ -953,7 +1047,7 @@ func (a *headAppender) Rollback() error {
// Delete all samples in the range of [mint, maxt] for series that satisfy the given
// label matchers.
-func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error {
+func (h *Head) Delete(mint, maxt int64, ms ...*labels.Matcher) error {
// Do not delete anything beyond the currently valid range.
mint, maxt = clampInterval(mint, maxt, h.MinTime(), h.MaxTime())
@@ -964,7 +1058,7 @@ func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error {
return errors.Wrap(err, "select series")
}
- var stones []Stone
+ var stones []tombstones.Stone
dirty := false
for p.Next() {
series := h.series.getByID(p.At())
@@ -976,9 +1070,9 @@ func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error {
// Delete only until the current values and not beyond.
t0, t1 = clampInterval(mint, maxt, t0, t1)
if h.wal != nil {
- stones = append(stones, Stone{p.At(), Intervals{{t0, t1}}})
+ stones = append(stones, tombstones.Stone{Ref: p.At(), Intervals: tombstones.Intervals{{Mint: t0, Maxt: t1}}})
}
- if err := h.chunkRewrite(p.At(), Intervals{{t0, t1}}); err != nil {
+ if err := h.chunkRewrite(p.At(), tombstones.Intervals{{Mint: t0, Maxt: t1}}); err != nil {
return errors.Wrap(err, "delete samples")
}
dirty = true
@@ -986,7 +1080,7 @@ func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error {
if p.Err() != nil {
return p.Err()
}
- var enc RecordEncoder
+ var enc record.Encoder
if h.wal != nil {
// Although we don't store the stones in the head
// we need to write them to the WAL to mark these as deleted
@@ -1005,7 +1099,7 @@ func (h *Head) Delete(mint, maxt int64, ms ...labels.Matcher) error {
// chunkRewrite re-writes the chunks which overlaps with deleted ranges
// and removes the samples in the deleted ranges.
// Chunks is deleted if no samples are left at the end.
-func (h *Head) chunkRewrite(ref uint64, dranges Intervals) (err error) {
+func (h *Head) chunkRewrite(ref uint64, dranges tombstones.Intervals) (err error) {
if len(dranges) == 0 {
return nil
}
@@ -1097,7 +1191,7 @@ func (h *Head) gc() {
}
// Tombstones returns a new reader over the head's tombstones
-func (h *Head) Tombstones() (TombstoneReader, error) {
+func (h *Head) Tombstones() (tombstones.Reader, error) {
return emptyTombstoneReader, nil
}
@@ -1422,7 +1516,7 @@ type seriesHashmap map[uint64][]*memSeries
func (m seriesHashmap) get(hash uint64, lset labels.Labels) *memSeries {
for _, s := range m[hash] {
- if s.lset.Equals(lset) {
+ if labels.Equal(s.lset, lset) {
return s
}
}
@@ -1432,7 +1526,7 @@ func (m seriesHashmap) get(hash uint64, lset labels.Labels) *memSeries {
func (m seriesHashmap) set(hash uint64, s *memSeries) {
l := m[hash]
for i, prev := range l {
- if prev.lset.Equals(s.lset) {
+ if labels.Equal(prev.lset, s.lset) {
l[i] = s
return
}
@@ -1443,7 +1537,7 @@ func (m seriesHashmap) set(hash uint64, s *memSeries) {
func (m seriesHashmap) del(hash uint64, lset labels.Labels) {
var rem []*memSeries
for _, s := range m[hash] {
- if !s.lset.Equals(lset) {
+ if !labels.Equal(s.lset, lset) {
rem = append(rem, s)
}
}
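
The rewritten `loadWAL` above splits replay into a producer/consumer pipeline: one goroutine decodes raw records into typed batches drawn from `sync.Pool`s and ships them over a buffered channel, while the caller type-switches on each batch and recycles its buffer. A compressed sketch of that shape (record contents are illustrative):

```go
package main

import (
	"fmt"
	"sync"
)

var samplePool = sync.Pool{New: func() interface{} { return []int{} }}

func main() {
	decoded := make(chan interface{}, 10)

	// Producer: decode records into pooled batches and hand them off.
	go func() {
		defer close(decoded)
		for batch := 0; batch < 3; batch++ {
			buf := samplePool.Get().([]int)[:0] // reuse backing array, length 0
			for i := 0; i < 4; i++ {
				buf = append(buf, batch*4+i)
			}
			decoded <- buf
		}
	}()

	// Consumer: process each typed batch, then return it to its pool.
	for d := range decoded {
		switch v := d.(type) {
		case []int:
			fmt.Println("samples:", v)
			//lint:ignore SA6002 recycling the backing array is the point.
			samplePool.Put(v[:0])
		default:
			panic(fmt.Errorf("unexpected decoded type: %T", d))
		}
	}
}
```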
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/index/index.go b/vendor/github.com/prometheus/prometheus/tsdb/index/index.go
index 48ab96ba76a17..ded55d6f370b6 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/index/index.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/index/index.go
@@ -27,11 +27,11 @@ import (
"strings"
"github.com/pkg/errors"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/chunks"
"github.com/prometheus/prometheus/tsdb/encoding"
tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
"github.com/prometheus/prometheus/tsdb/fileutil"
- "github.com/prometheus/prometheus/tsdb/labels"
)
const (
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go b/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go
index 6bc07eb3ec956..3438a5c3ddb3f 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/index/postings.go
@@ -21,7 +21,7 @@ import (
"strings"
"sync"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/pkg/labels"
)
var allPostingsKey = labels.Label{}
@@ -79,6 +79,57 @@ func (p *MemPostings) SortedKeys() []labels.Label {
return keys
}
+// PostingsStats contains cardinality based statistics for postings.
+type PostingsStats struct {
+ CardinalityMetricsStats []Stat
+ CardinalityLabelStats []Stat
+ LabelValueStats []Stat
+ LabelValuePairsStats []Stat
+}
+
+// Stats calculates the cardinality statistics from postings.
+func (p *MemPostings) Stats(label string) *PostingsStats {
+ const maxNumOfRecords = 10
+ var size uint64
+
+ p.mtx.RLock()
+
+ metrics := &maxHeap{}
+ labels := &maxHeap{}
+ labelValueLength := &maxHeap{}
+ labelValuePairs := &maxHeap{}
+
+ metrics.init(maxNumOfRecords)
+ labels.init(maxNumOfRecords)
+ labelValueLength.init(maxNumOfRecords)
+ labelValuePairs.init(maxNumOfRecords)
+
+ for n, e := range p.m {
+ if n == "" {
+ continue
+ }
+ labels.push(Stat{Name: n, Count: uint64(len(e))})
+ size = 0
+ for name, values := range e {
+ if n == label {
+ metrics.push(Stat{Name: name, Count: uint64(len(values))})
+ }
+ labelValuePairs.push(Stat{Name: n + "=" + name, Count: uint64(len(values))})
+ size += uint64(len(name))
+ }
+ labelValueLength.push(Stat{Name: n, Count: size})
+ }
+
+ p.mtx.RUnlock()
+
+ return &PostingsStats{
+ CardinalityMetricsStats: metrics.get(),
+ CardinalityLabelStats: labels.get(),
+ LabelValueStats: labelValueLength.get(),
+ LabelValuePairsStats: labelValuePairs.get(),
+ }
+}
+
// Get returns a postings list for the given label pair.
func (p *MemPostings) Get(name, value string) Postings {
var lp []uint64
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/index/postingsstats.go b/vendor/github.com/prometheus/prometheus/tsdb/index/postingsstats.go
new file mode 100644
index 0000000000000..5cb17bd0c0c15
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/tsdb/index/postingsstats.go
@@ -0,0 +1,69 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package index
+
+import (
+ "math"
+ "sort"
+)
+
+// Stat holds values for a single cardinality statistic.
+type Stat struct {
+ Name string
+ Count uint64
+}
+
+type maxHeap struct {
+ maxLength int
+ minValue uint64
+ minIndex int
+ Items []Stat
+}
+
+func (m *maxHeap) init(len int) {
+ m.maxLength = len
+ m.minValue = math.MaxUint64
+ m.Items = make([]Stat, 0, len)
+}
+
+func (m *maxHeap) push(item Stat) {
+ if len(m.Items) < m.maxLength {
+ if item.Count < m.minValue {
+ m.minValue = item.Count
+ m.minIndex = len(m.Items)
+ }
+ m.Items = append(m.Items, item)
+ return
+ }
+ if item.Count < m.minValue {
+ return
+ }
+
+ m.Items[m.minIndex] = item
+ m.minValue = item.Count
+
+ for i, stat := range m.Items {
+ if stat.Count < m.minValue {
+ m.minValue = stat.Count
+ m.minIndex = i
+ }
+ }
+
+}
+
+func (m *maxHeap) get() []Stat {
+ sort.Slice(m.Items, func(i, j int) bool {
+ return m.Items[i].Count > m.Items[j].Count
+ })
+ return m.Items
+}
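
Despite its name, the `maxHeap` above is really a fixed-size top-K buffer: `push` overwrites the current minimum (a linear rescan, fine for K=10) and `get` sorts descending. The same idea in a standalone sketch, since the type itself is unexported:

```go
package main

import (
	"fmt"
	"sort"
)

type stat struct {
	name  string
	count uint64
}

// topK keeps the k largest counts by replacing the smallest kept entry,
// mirroring the fixed-size buffer above (O(k) per push once full).
type topK struct {
	k     int
	items []stat
}

func (t *topK) push(s stat) {
	if len(t.items) < t.k {
		t.items = append(t.items, s)
		return
	}
	min := 0
	for i := range t.items {
		if t.items[i].count < t.items[min].count {
			min = i
		}
	}
	if s.count > t.items[min].count {
		t.items[min] = s
	}
}

func (t *topK) get() []stat {
	sort.Slice(t.items, func(i, j int) bool { return t.items[i].count > t.items[j].count })
	return t.items
}

func main() {
	t := &topK{k: 2}
	for _, s := range []stat{{"job", 3}, {"instance", 12}, {"__name__", 40}, {"le", 7}} {
		t.push(s)
	}
	fmt.Println(t.get()) // [{__name__ 40} {instance 12}]
}
```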
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/labels/labels.go b/vendor/github.com/prometheus/prometheus/tsdb/labels/labels.go
deleted file mode 100644
index aab8e42be9030..0000000000000
--- a/vendor/github.com/prometheus/prometheus/tsdb/labels/labels.go
+++ /dev/null
@@ -1,233 +0,0 @@
-// Copyright 2017 The Prometheus Authors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package labels
-
-import (
- "bufio"
- "bytes"
- "os"
- "sort"
- "strconv"
- "strings"
-
- "github.com/cespare/xxhash"
- "github.com/pkg/errors"
-)
-
-const sep = '\xff'
-
-// Label is a key/value pair of strings.
-type Label struct {
- Name, Value string
-}
-
-// Labels is a sorted set of labels. Order has to be guaranteed upon
-// instantiation.
-type Labels []Label
-
-func (ls Labels) Len() int { return len(ls) }
-func (ls Labels) Swap(i, j int) { ls[i], ls[j] = ls[j], ls[i] }
-func (ls Labels) Less(i, j int) bool { return ls[i].Name < ls[j].Name }
-
-func (ls Labels) String() string {
- var b bytes.Buffer
-
- b.WriteByte('{')
- for i, l := range ls {
- if i > 0 {
- b.WriteByte(',')
- }
- b.WriteString(l.Name)
- b.WriteByte('=')
- b.WriteString(strconv.Quote(l.Value))
- }
- b.WriteByte('}')
-
- return b.String()
-}
-
-// Hash returns a hash value for the label set.
-func (ls Labels) Hash() uint64 {
- b := make([]byte, 0, 1024)
-
- for _, v := range ls {
- b = append(b, v.Name...)
- b = append(b, sep)
- b = append(b, v.Value...)
- b = append(b, sep)
- }
- return xxhash.Sum64(b)
-}
-
-// Get returns the value for the label with the given name.
-// Returns an empty string if the label doesn't exist.
-func (ls Labels) Get(name string) string {
- for _, l := range ls {
- if l.Name == name {
- return l.Value
- }
- }
- return ""
-}
-
-// Equals returns whether the two label sets are equal.
-func (ls Labels) Equals(o Labels) bool {
- if len(ls) != len(o) {
- return false
- }
- for i, l := range ls {
- if o[i] != l {
- return false
- }
- }
- return true
-}
-
-// Map returns a string map of the labels.
-func (ls Labels) Map() map[string]string {
- m := make(map[string]string, len(ls))
- for _, l := range ls {
- m[l.Name] = l.Value
- }
- return m
-}
-
-// WithoutEmpty returns the labelset without empty labels.
-// May return the same labelset.
-func (ls Labels) WithoutEmpty() Labels {
- for _, v := range ls {
- if v.Value == "" {
- els := make(Labels, 0, len(ls)-1)
- for _, v := range ls {
- if v.Value != "" {
- els = append(els, v)
- }
- }
- return els
- }
- }
- return ls
-}
-
-// New returns a sorted Labels from the given labels.
-// The caller has to guarantee that all label names are unique.
-func New(ls ...Label) Labels {
- set := make(Labels, 0, len(ls))
- for _, l := range ls {
- set = append(set, l)
- }
- sort.Sort(set)
-
- return set
-}
-
-// FromMap returns new sorted Labels from the given map.
-func FromMap(m map[string]string) Labels {
- l := make(Labels, 0, len(m))
- for k, v := range m {
- if v != "" {
- l = append(l, Label{Name: k, Value: v})
- }
- }
- sort.Sort(l)
-
- return l
-}
-
-// FromStrings creates new labels from pairs of strings.
-func FromStrings(ss ...string) Labels {
- if len(ss)%2 != 0 {
- panic("invalid number of strings")
- }
- var res Labels
- for i := 0; i < len(ss); i += 2 {
- if ss[i+1] != "" {
- res = append(res, Label{Name: ss[i], Value: ss[i+1]})
- }
- }
-
- sort.Sort(res)
- return res
-}
-
-// Compare compares the two label sets.
-// The result will be 0 if a==b, <0 if a < b, and >0 if a > b.
-func Compare(a, b Labels) int {
- l := len(a)
- if len(b) < l {
- l = len(b)
- }
-
- for i := 0; i < l; i++ {
- if d := strings.Compare(a[i].Name, b[i].Name); d != 0 {
- return d
- }
- if d := strings.Compare(a[i].Value, b[i].Value); d != 0 {
- return d
- }
- }
- // If all labels so far were in common, the set with fewer labels comes first.
- return len(a) - len(b)
-}
-
-// Slice is a sortable slice of label sets.
-type Slice []Labels
-
-func (s Slice) Len() int { return len(s) }
-func (s Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
-func (s Slice) Less(i, j int) bool { return Compare(s[i], s[j]) < 0 }
-
-// ReadLabels reads up to n label sets in a JSON formatted file fn. It is mostly useful
-// to load testing data.
-func ReadLabels(fn string, n int) ([]Labels, error) {
- f, err := os.Open(fn)
- if err != nil {
- return nil, err
- }
- defer f.Close()
-
- scanner := bufio.NewScanner(f)
-
- var mets []Labels
- hashes := map[uint64]struct{}{}
- i := 0
-
- for scanner.Scan() && i < n {
- m := make(Labels, 0, 10)
-
- r := strings.NewReplacer("\"", "", "{", "", "}", "")
- s := r.Replace(scanner.Text())
-
- labelChunks := strings.Split(s, ",")
- for _, labelChunk := range labelChunks {
- split := strings.Split(labelChunk, ":")
- m = append(m, Label{Name: split[0], Value: split[1]})
- }
- // Order of the k/v labels matters, don't assume we'll always receive them already sorted.
- sort.Sort(m)
-
- h := m.Hash()
- if _, ok := hashes[h]; ok {
- continue
- }
- mets = append(mets, m)
- hashes[h] = struct{}{}
- i++
- }
-
- if i != n {
- return mets, errors.Errorf("requested %d metrics but found %d", n, i)
- }
- return mets, nil
-}
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/labels/selector.go b/vendor/github.com/prometheus/prometheus/tsdb/labels/selector.go
deleted file mode 100644
index c94ebb332197a..0000000000000
--- a/vendor/github.com/prometheus/prometheus/tsdb/labels/selector.go
+++ /dev/null
@@ -1,109 +0,0 @@
-// Copyright 2017 The Prometheus Authors
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package labels
-
-import (
- "fmt"
- "regexp"
-)
-
-// Selector holds constraints for matching against a label set.
-type Selector []Matcher
-
-// Matches returns whether the labels satisfy all matchers.
-func (s Selector) Matches(labels Labels) bool {
- for _, m := range s {
- if v := labels.Get(m.Name()); !m.Matches(v) {
- return false
- }
- }
- return true
-}
-
-// Matcher specifies a constraint for the value of a label.
-type Matcher interface {
- // Name returns the label name the matcher should apply to.
- Name() string
- // Matches checks whether a value fulfills the constraints.
- Matches(v string) bool
- // String returns a human readable matcher.
- String() string
-}
-
-// EqualMatcher matches on equality.
-type EqualMatcher struct {
- name, value string
-}
-
-// Name implements Matcher interface.
-func (m EqualMatcher) Name() string { return m.name }
-
-// Matches implements Matcher interface.
-func (m EqualMatcher) Matches(v string) bool { return v == m.value }
-
-// String implements Matcher interface.
-func (m EqualMatcher) String() string { return fmt.Sprintf("%s=%q", m.name, m.value) }
-
-// Value returns the matched value.
-func (m EqualMatcher) Value() string { return m.value }
-
-// NewEqualMatcher returns a new matcher matching an exact label value.
-func NewEqualMatcher(name, value string) Matcher {
- return &EqualMatcher{name: name, value: value}
-}
-
-type RegexpMatcher struct {
- name string
- re *regexp.Regexp
-}
-
-func (m RegexpMatcher) Name() string { return m.name }
-func (m RegexpMatcher) Matches(v string) bool { return m.re.MatchString(v) }
-func (m RegexpMatcher) String() string { return fmt.Sprintf("%s=~%q", m.name, m.re.String()) }
-func (m RegexpMatcher) Value() string { return m.re.String() }
-
-// NewRegexpMatcher returns a new matcher verifying that a value matches
-// the regular expression pattern.
-func NewRegexpMatcher(name, pattern string) (Matcher, error) {
- re, err := regexp.Compile(pattern)
- if err != nil {
- return nil, err
- }
- return &RegexpMatcher{name: name, re: re}, nil
-}
-
-// NewMustRegexpMatcher returns a new matcher verifying that a value matches
-// the regular expression pattern. Will panic if the pattern is not a valid
-// regular expression.
-func NewMustRegexpMatcher(name, pattern string) Matcher {
- re, err := regexp.Compile(pattern)
- if err != nil {
- panic(err)
- }
- return &RegexpMatcher{name: name, re: re}
-
-}
-
-// NotMatcher inverts the matching result for a matcher.
-type NotMatcher struct {
- Matcher
-}
-
-func (m NotMatcher) Matches(v string) bool { return !m.Matcher.Matches(v) }
-func (m NotMatcher) String() string { return fmt.Sprintf("not(%s)", m.Matcher.String()) }
-
-// Not inverts the matcher's matching result.
-func Not(m Matcher) Matcher {
- return &NotMatcher{m}
-}
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/querier.go b/vendor/github.com/prometheus/prometheus/tsdb/querier.go
index 357232e46150d..59ffdaa85b420 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/querier.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/querier.go
@@ -20,18 +20,19 @@ import (
"unicode/utf8"
"github.com/pkg/errors"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/chunkenc"
"github.com/prometheus/prometheus/tsdb/chunks"
tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
"github.com/prometheus/prometheus/tsdb/index"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
)
// Querier provides querying access over time series data of a fixed
// time range.
type Querier interface {
// Select returns a set of series that matches the given label matchers.
- Select(...labels.Matcher) (SeriesSet, error)
+ Select(...*labels.Matcher) (SeriesSet, error)
// LabelValues returns all potential values for a label name.
LabelValues(string) ([]string, error)
@@ -111,28 +112,22 @@ func (q *querier) LabelValuesFor(string, labels.Label) ([]string, error) {
return nil, fmt.Errorf("not implemented")
}
-func (q *querier) Select(ms ...labels.Matcher) (SeriesSet, error) {
- return q.sel(q.blocks, ms)
-}
-
-func (q *querier) sel(qs []Querier, ms []labels.Matcher) (SeriesSet, error) {
- if len(qs) == 0 {
+func (q *querier) Select(ms ...*labels.Matcher) (SeriesSet, error) {
+ if len(q.blocks) == 0 {
return EmptySeriesSet(), nil
}
- if len(qs) == 1 {
- return qs[0].Select(ms...)
+ ss := make([]SeriesSet, len(q.blocks))
+ var s SeriesSet
+ var err error
+ for i, b := range q.blocks {
+ s, err = b.Select(ms...)
+ if err != nil {
+ return nil, err
+ }
+ ss[i] = s
}
- l := len(qs) / 2
- a, err := q.sel(qs[:l], ms)
- if err != nil {
- return nil, err
- }
- b, err := q.sel(qs[l:], ms)
- if err != nil {
- return nil, err
- }
- return newMergedSeriesSet(a, b), nil
+ return NewMergedSeriesSet(ss), nil
}
func (q *querier) Close() error {
@@ -150,11 +145,11 @@ type verticalQuerier struct {
querier
}
-func (q *verticalQuerier) Select(ms ...labels.Matcher) (SeriesSet, error) {
+func (q *verticalQuerier) Select(ms ...*labels.Matcher) (SeriesSet, error) {
return q.sel(q.blocks, ms)
}
-func (q *verticalQuerier) sel(qs []Querier, ms []labels.Matcher) (SeriesSet, error) {
+func (q *verticalQuerier) sel(qs []Querier, ms []*labels.Matcher) (SeriesSet, error) {
if len(qs) == 0 {
return EmptySeriesSet(), nil
}
@@ -204,14 +199,14 @@ func NewBlockQuerier(b BlockReader, mint, maxt int64) (Querier, error) {
type blockQuerier struct {
index IndexReader
chunks ChunkReader
- tombstones TombstoneReader
+ tombstones tombstones.Reader
closed bool
mint, maxt int64
}
-func (q *blockQuerier) Select(ms ...labels.Matcher) (SeriesSet, error) {
+func (q *blockQuerier) Select(ms ...*labels.Matcher) (SeriesSet, error) {
base, err := LookupChunkSeries(q.index, q.tombstones, ms...)
if err != nil {
return nil, err
@@ -325,26 +320,31 @@ func findSetMatches(pattern string) []string {
// PostingsForMatchers assembles a single postings iterator against the index reader
// based on the given matchers.
-func PostingsForMatchers(ix IndexReader, ms ...labels.Matcher) (index.Postings, error) {
+func PostingsForMatchers(ix IndexReader, ms ...*labels.Matcher) (index.Postings, error) {
var its, notIts []index.Postings
// See which label must be non-empty.
// Optimization for case like {l=~".", l!="1"}.
labelMustBeSet := make(map[string]bool, len(ms))
for _, m := range ms {
if !m.Matches("") {
- labelMustBeSet[m.Name()] = true
+ labelMustBeSet[m.Name] = true
}
}
for _, m := range ms {
- if labelMustBeSet[m.Name()] {
+ if labelMustBeSet[m.Name] {
// If this matcher must be non-empty, we can be smarter.
matchesEmpty := m.Matches("")
- nm, isNot := m.(*labels.NotMatcher)
+ isNot := m.Type == labels.MatchNotEqual || m.Type == labels.MatchNotRegexp
if isNot && matchesEmpty { // l!="foo"
// If the label can't be empty and is a Not and the inner matcher
// doesn't match empty, then subtract it out at the end.
- it, err := postingsForMatcher(ix, nm.Matcher)
+ inverse, err := m.Inverse()
+ if err != nil {
+ return nil, err
+ }
+
+ it, err := postingsForMatcher(ix, inverse)
if err != nil {
return nil, err
}
@@ -352,7 +352,12 @@ func PostingsForMatchers(ix IndexReader, ms ...labels.Matcher) (index.Postings,
} else if isNot && !matchesEmpty { // l!=""
// If the label can't be empty and is a Not, but the inner matcher can
// be empty we need to use inversePostingsForMatcher.
- it, err := inversePostingsForMatcher(ix, nm.Matcher)
+ inverse, err := m.Inverse()
+ if err != nil {
+ return nil, err
+ }
+
+ it, err := inversePostingsForMatcher(ix, inverse)
if err != nil {
return nil, err
}
@@ -396,23 +401,23 @@ func PostingsForMatchers(ix IndexReader, ms ...labels.Matcher) (index.Postings,
return ix.SortedPostings(it), nil
}
-func postingsForMatcher(ix IndexReader, m labels.Matcher) (index.Postings, error) {
+func postingsForMatcher(ix IndexReader, m *labels.Matcher) (index.Postings, error) {
// This method will not return postings for missing labels.
// Fast-path for equal matching.
- if em, ok := m.(*labels.EqualMatcher); ok {
- return ix.Postings(em.Name(), em.Value())
+ if m.Type == labels.MatchEqual {
+ return ix.Postings(m.Name, m.Value)
}
// Fast-path for set matching.
- if em, ok := m.(*labels.RegexpMatcher); ok {
- setMatches := findSetMatches(em.Value())
+ if m.Type == labels.MatchRegexp {
+ setMatches := findSetMatches(m.Value)
if len(setMatches) > 0 {
- return postingsForSetMatcher(ix, em.Name(), setMatches)
+ return postingsForSetMatcher(ix, m.Name, setMatches)
}
}
- tpls, err := ix.LabelValues(m.Name())
+ tpls, err := ix.LabelValues(m.Name)
if err != nil {
return nil, err
}
@@ -435,7 +440,7 @@ func postingsForMatcher(ix IndexReader, m labels.Matcher) (index.Postings, error
var rit []index.Postings
for _, v := range res {
- it, err := ix.Postings(m.Name(), v)
+ it, err := ix.Postings(m.Name, v)
if err != nil {
return nil, err
}
@@ -446,8 +451,8 @@ func postingsForMatcher(ix IndexReader, m labels.Matcher) (index.Postings, error
}
// inversePostingsForMatcher returns the postings for the series with the label name set but not matching the matcher.
-func inversePostingsForMatcher(ix IndexReader, m labels.Matcher) (index.Postings, error) {
- tpls, err := ix.LabelValues(m.Name())
+func inversePostingsForMatcher(ix IndexReader, m *labels.Matcher) (index.Postings, error) {
+ tpls, err := ix.LabelValues(m.Name)
if err != nil {
return nil, err
}
@@ -466,7 +471,7 @@ func inversePostingsForMatcher(ix IndexReader, m labels.Matcher) (index.Postings
var rit []index.Postings
for _, v := range res {
- it, err := ix.Postings(m.Name(), v)
+ it, err := ix.Postings(m.Name, v)
if err != nil {
return nil, err
}
@@ -531,29 +536,28 @@ func EmptySeriesSet() SeriesSet {
return emptySeriesSet
}
-// mergedSeriesSet takes two series sets as a single series set. The input series sets
-// must be sorted and sequential in time, i.e. if they have the same label set,
-// the datapoints of a must be before the datapoints of b.
+// mergedSeriesSet merges a slice of series sets into a single series set. The input
+// series sets must be sorted and sequential in time.
type mergedSeriesSet struct {
- a, b SeriesSet
-
- cur Series
- adone, bdone bool
-}
-
-// NewMergedSeriesSet takes two series sets as a single series set. The input series sets
-// must be sorted and sequential in time, i.e. if they have the same label set,
-// the datapoints of a must be before the datapoints of b.
-func NewMergedSeriesSet(a, b SeriesSet) SeriesSet {
- return newMergedSeriesSet(a, b)
+ all []SeriesSet
+ buf []SeriesSet // A buffer for keeping the order of SeriesSet slice during forwarding the SeriesSet.
+ ids []int // The indices of chosen SeriesSet for the current run.
+ done bool
+ err error
+ cur Series
}
-func newMergedSeriesSet(a, b SeriesSet) *mergedSeriesSet {
- s := &mergedSeriesSet{a: a, b: b}
- // Initialize first elements of both sets as Next() needs
+func NewMergedSeriesSet(all []SeriesSet) SeriesSet {
+ if len(all) == 1 {
+ return all[0]
+ }
+ s := &mergedSeriesSet{all: all}
+ // Initialize first elements of all sets as Next() needs
// one element look-ahead.
- s.adone = !s.a.Next()
- s.bdone = !s.b.Next()
+ s.nextAll()
+ if len(s.all) == 0 {
+ s.done = true
+ }
return s
}
@@ -563,40 +567,93 @@ func (s *mergedSeriesSet) At() Series {
}
func (s *mergedSeriesSet) Err() error {
- if s.a.Err() != nil {
- return s.a.Err()
+ return s.err
+}
+
+// nextAll calls Next() on every SeriesSet.
+// Because the order of the SeriesSet slice will affect the results,
+// we need to use a buffer slice to hold the order.
+func (s *mergedSeriesSet) nextAll() {
+ s.buf = s.buf[:0]
+ for _, ss := range s.all {
+ if ss.Next() {
+ s.buf = append(s.buf, ss)
+ } else if ss.Err() != nil {
+ s.done = true
+ s.err = ss.Err()
+ break
+ }
}
- return s.b.Err()
+ s.all, s.buf = s.buf, s.all
}
-func (s *mergedSeriesSet) compare() int {
- if s.adone {
- return 1
+// nextWithID calls Next() on the SeriesSets at the indices held in s.ids.
+// Because the order of the SeriesSet slice will affect the results,
+// we need to use a buffer slice to hold the order.
+func (s *mergedSeriesSet) nextWithID() {
+ if len(s.ids) == 0 {
+ return
}
- if s.bdone {
- return -1
+
+ s.buf = s.buf[:0]
+ i1 := 0
+ i2 := 0
+ for i1 < len(s.all) {
+ if i2 < len(s.ids) && i1 == s.ids[i2] {
+ if !s.all[s.ids[i2]].Next() {
+ if s.all[s.ids[i2]].Err() != nil {
+ s.done = true
+ s.err = s.all[s.ids[i2]].Err()
+ break
+ }
+ i2++
+ i1++
+ continue
+ }
+ i2++
+ }
+ s.buf = append(s.buf, s.all[i1])
+ i1++
}
- return labels.Compare(s.a.At().Labels(), s.b.At().Labels())
+ s.all, s.buf = s.buf, s.all
}
func (s *mergedSeriesSet) Next() bool {
- if s.adone && s.bdone || s.Err() != nil {
+ if s.done {
return false
}
- d := s.compare()
+ s.nextWithID()
+ if s.done {
+ return false
+ }
+ s.ids = s.ids[:0]
+ if len(s.all) == 0 {
+ s.done = true
+ return false
+ }
- // Both sets contain the current series. Chain them into a single one.
- if d > 0 {
- s.cur = s.b.At()
- s.bdone = !s.b.Next()
- } else if d < 0 {
- s.cur = s.a.At()
- s.adone = !s.a.Next()
+ // Here we are looking for a set of series sets with the lowest labels,
+ // and we will cache their indexes in s.ids.
+ s.ids = append(s.ids, 0)
+ for i := 1; i < len(s.all); i++ {
+ cmp := labels.Compare(s.all[s.ids[0]].At().Labels(), s.all[i].At().Labels())
+ if cmp > 0 {
+ s.ids = s.ids[:1]
+ s.ids[0] = i
+ } else if cmp == 0 {
+ s.ids = append(s.ids, i)
+ }
+ }
+
+ if len(s.ids) > 1 {
+ series := make([]Series, len(s.ids))
+ for i, idx := range s.ids {
+ series[i] = s.all[idx].At()
+ }
+ s.cur = &chainedSeries{series: series}
} else {
- s.cur = &chainedSeries{series: []Series{s.a.At(), s.b.At()}}
- s.adone = !s.a.Next()
- s.bdone = !s.b.Next()
+ s.cur = s.all[s.ids[0]].At()
}
return true
}
@@ -671,7 +728,7 @@ func (s *mergedVerticalSeriesSet) Next() bool {
// actual series itself.
type ChunkSeriesSet interface {
Next() bool
- At() (labels.Labels, []chunks.Meta, Intervals)
+ At() (labels.Labels, []chunks.Meta, tombstones.Intervals)
Err() error
}
@@ -680,19 +737,19 @@ type ChunkSeriesSet interface {
type baseChunkSeries struct {
p index.Postings
index IndexReader
- tombstones TombstoneReader
+ tombstones tombstones.Reader
lset labels.Labels
chks []chunks.Meta
- intervals Intervals
+ intervals tombstones.Intervals
err error
}
// LookupChunkSeries retrieves all series for the given matchers and returns a ChunkSeriesSet
// over them. It drops chunks based on tombstones in the given reader.
-func LookupChunkSeries(ir IndexReader, tr TombstoneReader, ms ...labels.Matcher) (ChunkSeriesSet, error) {
+func LookupChunkSeries(ir IndexReader, tr tombstones.Reader, ms ...*labels.Matcher) (ChunkSeriesSet, error) {
if tr == nil {
- tr = newMemTombstones()
+ tr = tombstones.NewMemTombstones()
}
p, err := PostingsForMatchers(ir, ms...)
if err != nil {
@@ -705,7 +762,7 @@ func LookupChunkSeries(ir IndexReader, tr TombstoneReader, ms ...labels.Matcher)
}, nil
}
-func (s *baseChunkSeries) At() (labels.Labels, []chunks.Meta, Intervals) {
+func (s *baseChunkSeries) At() (labels.Labels, []chunks.Meta, tombstones.Intervals) {
return s.lset, s.chks, s.intervals
}
@@ -741,7 +798,7 @@ func (s *baseChunkSeries) Next() bool {
// Only those chunks that are not entirely deleted.
chks := make([]chunks.Meta, 0, len(s.chks))
for _, chk := range s.chks {
- if !(Interval{chk.MinTime, chk.MaxTime}.isSubrange(s.intervals)) {
+ if !(tombstones.Interval{Mint: chk.MinTime, Maxt: chk.MaxTime}.IsSubrange(s.intervals)) {
chks = append(chks, chk)
}
}
@@ -768,10 +825,10 @@ type populatedChunkSeries struct {
err error
chks []chunks.Meta
lset labels.Labels
- intervals Intervals
+ intervals tombstones.Intervals
}
-func (s *populatedChunkSeries) At() (labels.Labels, []chunks.Meta, Intervals) {
+func (s *populatedChunkSeries) At() (labels.Labels, []chunks.Meta, tombstones.Intervals) {
return s.lset, s.chks, s.intervals
}
@@ -866,7 +923,7 @@ type chunkSeries struct {
mint, maxt int64
- intervals Intervals
+ intervals tombstones.Intervals
}
func (s *chunkSeries) Labels() labels.Labels {
@@ -905,7 +962,7 @@ func (s *chainedSeries) Iterator() SeriesIterator {
return newChainedSeriesIterator(s.series...)
}
-// chainedSeriesIterator implements a series iterater over a list
+// chainedSeriesIterator implements a series iterator over a list
// of time-sorted, non-overlapping iterators.
type chainedSeriesIterator struct {
series []Series // series in time order
@@ -976,7 +1033,7 @@ func (s *verticalChainedSeries) Iterator() SeriesIterator {
return newVerticalMergeSeriesIterator(s.series...)
}
-// verticalMergeSeriesIterator implements a series iterater over a list
+// verticalMergeSeriesIterator implements a series iterator over a list
// of time-sorted, time-overlapping iterators.
type verticalMergeSeriesIterator struct {
a, b SeriesIterator
@@ -1067,10 +1124,10 @@ type chunkSeriesIterator struct {
maxt, mint int64
- intervals Intervals
+ intervals tombstones.Intervals
}
-func newChunkSeriesIterator(cs []chunks.Meta, dranges Intervals, mint, maxt int64) *chunkSeriesIterator {
+func newChunkSeriesIterator(cs []chunks.Meta, dranges tombstones.Intervals, mint, maxt int64) *chunkSeriesIterator {
csi := &chunkSeriesIterator{
chunks: cs,
i: 0,
@@ -1169,7 +1226,7 @@ func (it *chunkSeriesIterator) Err() error {
type deletedIterator struct {
it chunkenc.Iterator
- intervals Intervals
+ intervals tombstones.Intervals
}
func (it *deletedIterator) At() (int64, float64) {
@@ -1182,7 +1239,7 @@ Outer:
ts, _ := it.it.At()
for _, tr := range it.intervals {
- if tr.inBounds(ts) {
+ if tr.InBounds(ts) {
continue Outer
}
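
The rewritten querier replaces the old pairwise merge of two SeriesSets with a k-way merge across all blocks: each step collects the indices of every set positioned at the smallest label set (the role of s.ids), chains those series, and advances only them. Below is a minimal, self-contained sketch of that selection-and-advance loop, using plain string keys instead of the real SeriesSet API:

```go
package main

import "fmt"

// iter stands in for a label-sorted SeriesSet: Next advances, At returns the key.
type iter struct {
	keys []string
	pos  int
}

func (it *iter) Next() bool { it.pos++; return it.pos <= len(it.keys) }
func (it *iter) At() string { return it.keys[it.pos-1] }

// merge walks any number of sorted iterators at once. Each step gathers the
// indices of every iterator sitting on the smallest key, emits that key once,
// and advances only the chosen iterators.
func merge(its []*iter) []string {
	live := make([]*iter, 0, len(its))
	for _, it := range its {
		if it.Next() {
			live = append(live, it)
		}
	}
	var out []string
	for len(live) > 0 {
		ids := []int{0}
		for i := 1; i < len(live); i++ {
			if live[i].At() < live[ids[0]].At() {
				ids = []int{i}
			} else if live[i].At() == live[ids[0]].At() {
				ids = append(ids, i)
			}
		}
		out = append(out, live[ids[0]].At())
		next := make([]*iter, 0, len(live))
		j := 0
		for i, it := range live {
			if j < len(ids) && i == ids[j] {
				j++
				if !it.Next() {
					continue // this iterator is exhausted; drop it
				}
			}
			next = append(next, it)
		}
		live = next
	}
	return out
}

func main() {
	out := merge([]*iter{
		{keys: []string{"a", "c"}},
		{keys: []string{"a", "b", "d"}},
	})
	fmt.Println(out) // [a b c d]: the shared "a" is emitted once, as a chained series would be
}
```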
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/record.go b/vendor/github.com/prometheus/prometheus/tsdb/record/record.go
similarity index 64%
rename from vendor/github.com/prometheus/prometheus/tsdb/record.go
rename to vendor/github.com/prometheus/prometheus/tsdb/record/record.go
index 174d3bd14f5ed..d63198f977a9a 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/record.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/record/record.go
@@ -12,54 +12,73 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-package tsdb
+package record
import (
"math"
"sort"
"github.com/pkg/errors"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/encoding"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
)
-// RecordType represents the data type of a record.
-type RecordType uint8
+// Type represents the data type of a record.
+type Type uint8
const (
- // RecordInvalid is returned for unrecognised WAL record types.
- RecordInvalid RecordType = 255
- // RecordSeries is used to match WAL records of type Series.
- RecordSeries RecordType = 1
- // RecordSamples is used to match WAL records of type Samples.
- RecordSamples RecordType = 2
- // RecordTombstones is used to match WAL records of type Tombstones.
- RecordTombstones RecordType = 3
+ // Invalid is returned for unrecognised WAL record types.
+ Invalid Type = 255
+ // Series is used to match WAL records of type Series.
+ Series Type = 1
+ // Samples is used to match WAL records of type Samples.
+ Samples Type = 2
+ // Tombstones is used to match WAL records of type Tombstones.
+ Tombstones Type = 3
)
-// RecordDecoder decodes series, sample, and tombstone records.
+var (
+ // ErrNotFound is returned if a looked up resource was not found. Duplicate ErrNotFound from head.go.
+ ErrNotFound = errors.New("not found")
+)
+
+// RefSeries is the series labels with the series ID.
+type RefSeries struct {
+ Ref uint64
+ Labels labels.Labels
+}
+
+// RefSample is a timestamp/value pair associated with a reference to a series.
+type RefSample struct {
+ Ref uint64
+ T int64
+ V float64
+}
+
+// Decoder decodes series, sample, and tombstone records.
// The zero value is ready to use.
-type RecordDecoder struct {
+type Decoder struct {
}
// Type returns the type of the record.
-// Return RecordInvalid if no valid record type is found.
-func (d *RecordDecoder) Type(rec []byte) RecordType {
+// Returns Invalid if no valid record type is found.
+func (d *Decoder) Type(rec []byte) Type {
if len(rec) < 1 {
- return RecordInvalid
+ return Invalid
}
- switch t := RecordType(rec[0]); t {
- case RecordSeries, RecordSamples, RecordTombstones:
+ switch t := Type(rec[0]); t {
+ case Series, Samples, Tombstones:
return t
}
- return RecordInvalid
+ return Invalid
}
// Series appends series in rec to the given slice.
-func (d *RecordDecoder) Series(rec []byte, series []RefSeries) ([]RefSeries, error) {
+func (d *Decoder) Series(rec []byte, series []RefSeries) ([]RefSeries, error) {
dec := encoding.Decbuf{B: rec}
- if RecordType(dec.Byte()) != RecordSeries {
+ if Type(dec.Byte()) != Series {
return nil, errors.New("invalid record type")
}
for len(dec.B) > 0 && dec.Err() == nil {
@@ -88,10 +107,10 @@ func (d *RecordDecoder) Series(rec []byte, series []RefSeries) ([]RefSeries, err
}
// Samples appends samples in rec to the given slice.
-func (d *RecordDecoder) Samples(rec []byte, samples []RefSample) ([]RefSample, error) {
+func (d *Decoder) Samples(rec []byte, samples []RefSample) ([]RefSample, error) {
dec := encoding.Decbuf{B: rec}
- if RecordType(dec.Byte()) != RecordSamples {
+ if Type(dec.Byte()) != Samples {
return nil, errors.New("invalid record type")
}
if dec.Len() == 0 {
@@ -123,16 +142,16 @@ func (d *RecordDecoder) Samples(rec []byte, samples []RefSample) ([]RefSample, e
}
// Tombstones appends tombstones in rec to the given slice.
-func (d *RecordDecoder) Tombstones(rec []byte, tstones []Stone) ([]Stone, error) {
+func (d *Decoder) Tombstones(rec []byte, tstones []tombstones.Stone) ([]tombstones.Stone, error) {
dec := encoding.Decbuf{B: rec}
- if RecordType(dec.Byte()) != RecordTombstones {
+ if Type(dec.Byte()) != Tombstones {
return nil, errors.New("invalid record type")
}
for dec.Len() > 0 && dec.Err() == nil {
- tstones = append(tstones, Stone{
- ref: dec.Be64(),
- intervals: Intervals{
+ tstones = append(tstones, tombstones.Stone{
+ Ref: dec.Be64(),
+ Intervals: tombstones.Intervals{
{Mint: dec.Varint64(), Maxt: dec.Varint64()},
},
})
@@ -146,15 +165,15 @@ func (d *RecordDecoder) Tombstones(rec []byte, tstones []Stone) ([]Stone, error)
return tstones, nil
}
-// RecordEncoder encodes series, sample, and tombstones records.
+// Encoder encodes series, sample, and tombstones records.
// The zero value is ready to use.
-type RecordEncoder struct {
+type Encoder struct {
}
// Series appends the encoded series to b and returns the resulting slice.
-func (e *RecordEncoder) Series(series []RefSeries, b []byte) []byte {
+func (e *Encoder) Series(series []RefSeries, b []byte) []byte {
buf := encoding.Encbuf{B: b}
- buf.PutByte(byte(RecordSeries))
+ buf.PutByte(byte(Series))
for _, s := range series {
buf.PutBE64(s.Ref)
@@ -169,9 +188,9 @@ func (e *RecordEncoder) Series(series []RefSeries, b []byte) []byte {
}
// Samples appends the encoded samples to b and returns the resulting slice.
-func (e *RecordEncoder) Samples(samples []RefSample, b []byte) []byte {
+func (e *Encoder) Samples(samples []RefSample, b []byte) []byte {
buf := encoding.Encbuf{B: b}
- buf.PutByte(byte(RecordSamples))
+ buf.PutByte(byte(Samples))
if len(samples) == 0 {
return buf.Get()
@@ -193,13 +212,13 @@ func (e *RecordEncoder) Samples(samples []RefSample, b []byte) []byte {
}
// Tombstones appends the encoded tombstones to b and returns the resulting slice.
-func (e *RecordEncoder) Tombstones(tstones []Stone, b []byte) []byte {
+func (e *Encoder) Tombstones(tstones []tombstones.Stone, b []byte) []byte {
buf := encoding.Encbuf{B: b}
- buf.PutByte(byte(RecordTombstones))
+ buf.PutByte(byte(Tombstones))
for _, s := range tstones {
- for _, iv := range s.intervals {
- buf.PutBE64(s.ref)
+ for _, iv := range s.Intervals {
+ buf.PutBE64(s.Ref)
buf.PutVarint64(iv.Mint)
buf.PutVarint64(iv.Maxt)
}
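
The codec now lives in its own record package with exported Encoder and Decoder types. Here is a small round-trip sketch against the signatures shown above, assuming the vendored module path resolves in your build:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/record"
)

func main() {
	var (
		enc record.Encoder // zero value is ready to use
		dec record.Decoder
	)

	// Encode two samples for series ref 1 into a fresh buffer.
	rec := enc.Samples([]record.RefSample{
		{Ref: 1, T: 1000, V: 1.5},
		{Ref: 1, T: 2000, V: 2.5},
	}, nil)

	// The first byte of the record identifies its type.
	fmt.Println(dec.Type(rec) == record.Samples) // true

	// Decode back; the slice argument lets callers reuse buffers.
	samples, err := dec.Samples(rec, nil)
	if err != nil {
		panic(err)
	}
	fmt.Println(samples) // [{1 1000 1.5} {1 2000 2.5}]
}
```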
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/tombstones.go b/vendor/github.com/prometheus/prometheus/tsdb/tombstones/tombstones.go
similarity index 74%
rename from vendor/github.com/prometheus/prometheus/tsdb/tombstones.go
rename to vendor/github.com/prometheus/prometheus/tsdb/tombstones/tombstones.go
index 8910f7b8b3140..9cdf05a1e95a7 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/tombstones.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/tombstones/tombstones.go
@@ -11,11 +11,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-package tsdb
+package tombstones
import (
"encoding/binary"
"fmt"
+ "hash"
+ "hash/crc32"
"io"
"io/ioutil"
"os"
@@ -30,7 +32,7 @@ import (
"github.com/prometheus/prometheus/tsdb/fileutil"
)
-const tombstoneFilename = "tombstones"
+const TombstonesFilename = "tombstones"
const (
// MagicTombstone is 4 bytes at the head of a tombstone file.
@@ -39,8 +41,23 @@ const (
tombstoneFormatV1 = 1
)
-// TombstoneReader gives access to tombstone intervals by series reference.
-type TombstoneReader interface {
+// The table gets initialized with sync.Once but may still cause a race
+// with any other use of the crc32 package anywhere. Thus we initialize it
+// before.
+var castagnoliTable *crc32.Table
+
+func init() {
+ castagnoliTable = crc32.MakeTable(crc32.Castagnoli)
+}
+
+// newCRC32 initializes a CRC32 hash with a preconfigured polynomial, so the
+// polynomial may be easily changed in one location at a later time, if necessary.
+func newCRC32() hash.Hash32 {
+ return crc32.New(castagnoliTable)
+}
+
+// Reader gives access to tombstone intervals by series reference.
+type Reader interface {
// Get returns deletion intervals for the series with the given reference.
Get(ref uint64) (Intervals, error)
@@ -54,8 +71,8 @@ type TombstoneReader interface {
Close() error
}
-func writeTombstoneFile(logger log.Logger, dir string, tr TombstoneReader) (int64, error) {
- path := filepath.Join(dir, tombstoneFilename)
+func WriteFile(logger log.Logger, dir string, tr Reader) (int64, error) {
+ path := filepath.Join(dir, TombstonesFilename)
tmp := path + ".tmp"
hash := newCRC32()
var size int
@@ -129,14 +146,14 @@ func writeTombstoneFile(logger log.Logger, dir string, tr TombstoneReader) (int6
// Stone holds the information on the posting and time-range
// that is deleted.
type Stone struct {
- ref uint64
- intervals Intervals
+ Ref uint64
+ Intervals Intervals
}
-func readTombstones(dir string) (TombstoneReader, int64, error) {
- b, err := ioutil.ReadFile(filepath.Join(dir, tombstoneFilename))
+func ReadTombstones(dir string) (Reader, int64, error) {
+ b, err := ioutil.ReadFile(filepath.Join(dir, TombstonesFilename))
if os.IsNotExist(err) {
- return newMemTombstones(), 0, nil
+ return NewMemTombstones(), 0, nil
} else if err != nil {
return nil, 0, err
}
@@ -166,7 +183,7 @@ func readTombstones(dir string) (TombstoneReader, int64, error) {
return nil, 0, errors.New("checksum did not match")
}
- stonesMap := newMemTombstones()
+ stonesMap := NewMemTombstones()
for d.Len() > 0 {
k := d.Uvarint64()
@@ -176,7 +193,7 @@ func readTombstones(dir string) (TombstoneReader, int64, error) {
return nil, 0, d.Err()
}
- stonesMap.addInterval(k, Interval{mint, maxt})
+ stonesMap.AddInterval(k, Interval{mint, maxt})
}
return stonesMap, int64(len(b)), nil
@@ -187,12 +204,22 @@ type memTombstones struct {
mtx sync.RWMutex
}
-// newMemTombstones creates new in memory TombstoneReader
+// NewMemTombstones creates a new in-memory tombstone Reader
// that allows adding new intervals.
-func newMemTombstones() *memTombstones {
+func NewMemTombstones() *memTombstones {
return &memTombstones{intvlGroups: make(map[uint64]Intervals)}
}
+func NewTestMemTombstones(intervals []Intervals) *memTombstones {
+ ret := NewMemTombstones()
+ for i, intervalsGroup := range intervals {
+ for _, interval := range intervalsGroup {
+ ret.AddInterval(uint64(i+1), interval)
+ }
+ }
+ return ret
+}
+
func (t *memTombstones) Get(ref uint64) (Intervals, error) {
t.mtx.RLock()
defer t.mtx.RUnlock()
@@ -221,12 +248,12 @@ func (t *memTombstones) Total() uint64 {
return total
}
-// addInterval to an existing memTombstones
-func (t *memTombstones) addInterval(ref uint64, itvs ...Interval) {
+// AddInterval to an existing memTombstones.
+func (t *memTombstones) AddInterval(ref uint64, itvs ...Interval) {
t.mtx.Lock()
defer t.mtx.Unlock()
for _, itv := range itvs {
- t.intvlGroups[ref] = t.intvlGroups[ref].add(itv)
+ t.intvlGroups[ref] = t.intvlGroups[ref].Add(itv)
}
}
@@ -239,13 +266,13 @@ type Interval struct {
Mint, Maxt int64
}
-func (tr Interval) inBounds(t int64) bool {
+func (tr Interval) InBounds(t int64) bool {
return t >= tr.Mint && t <= tr.Maxt
}
-func (tr Interval) isSubrange(dranges Intervals) bool {
+func (tr Interval) IsSubrange(dranges Intervals) bool {
for _, r := range dranges {
- if r.inBounds(tr.Mint) && r.inBounds(tr.Maxt) {
+ if r.InBounds(tr.Mint) && r.InBounds(tr.Maxt) {
return true
}
}
@@ -256,12 +283,12 @@ func (tr Interval) isSubrange(dranges Intervals) bool {
// Intervals represents a set of increasing and non-overlapping time-intervals.
type Intervals []Interval
-// add the new time-range to the existing ones.
+// Add the new time-range to the existing ones.
// The existing ones must be sorted.
-func (itvs Intervals) add(n Interval) Intervals {
+func (itvs Intervals) Add(n Interval) Intervals {
for i, r := range itvs {
// TODO(gouthamve): Make this codepath easier to digest.
- if r.inBounds(n.Mint-1) || r.inBounds(n.Mint) {
+ if r.InBounds(n.Mint-1) || r.InBounds(n.Mint) {
if n.Maxt > r.Maxt {
itvs[i].Maxt = n.Maxt
}
@@ -282,7 +309,7 @@ func (itvs Intervals) add(n Interval) Intervals {
return itvs
}
- if r.inBounds(n.Maxt+1) || r.inBounds(n.Maxt) {
+ if r.InBounds(n.Maxt+1) || r.InBounds(n.Maxt) {
if n.Mint < r.Maxt {
itvs[i].Mint = n.Mint
}
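
With Interval, Add, InBounds, and IsSubrange now exported from the tombstones package, callers can maintain deletion ranges directly. A sketch using only the API visible in this diff:

```go
package main

import (
	"fmt"

	"github.com/prometheus/prometheus/tsdb/tombstones"
)

func main() {
	var itvs tombstones.Intervals
	itvs = itvs.Add(tombstones.Interval{Mint: 0, Maxt: 10})
	itvs = itvs.Add(tombstones.Interval{Mint: 5, Maxt: 20}) // overlaps, so it merges
	itvs = itvs.Add(tombstones.Interval{Mint: 30, Maxt: 40})
	fmt.Println(itvs) // [{0 20} {30 40}]

	// IsSubrange reports whether a chunk's time range is fully deleted,
	// which is how baseChunkSeries decides to drop a chunk entirely.
	chunk := tombstones.Interval{Mint: 2, Maxt: 8}
	fmt.Println(chunk.IsSubrange(itvs)) // true
}
```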
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/tsdbblockutil.go b/vendor/github.com/prometheus/prometheus/tsdb/tsdbblockutil.go
new file mode 100644
index 0000000000000..eb175601950b8
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/tsdb/tsdbblockutil.go
@@ -0,0 +1,84 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package tsdb
+
+import (
+	"context"
+	"fmt"
+	"os"
+	"path/filepath"
+
+	"github.com/go-kit/kit/log"
+	"github.com/prometheus/prometheus/pkg/labels"
+)
+
+var InvalidTimesError = fmt.Errorf("max time is less than min time")
+
+type MetricSample struct {
+ TimestampMs int64
+ Value float64
+ Labels labels.Labels
+}
+
+// CreateHead creates a TSDB writer head to write the sample data to.
+func CreateHead(samples []*MetricSample, chunkRange int64, logger log.Logger) (*Head, error) {
+ head, err := NewHead(nil, logger, nil, chunkRange)
+ if err != nil {
+ return nil, err
+ }
+ app := head.Appender()
+ for _, sample := range samples {
+ _, err = app.Add(sample.Labels, sample.TimestampMs, sample.Value)
+ if err != nil {
+ return nil, err
+ }
+ }
+ err = app.Commit()
+ if err != nil {
+ return nil, err
+ }
+ return head, nil
+}
+
+// CreateBlock creates a block covering the given time range from the samples passed to it, and writes it to disk.
+func CreateBlock(samples []*MetricSample, dir string, mint, maxt int64, logger log.Logger) (string, error) {
+ chunkRange := maxt - mint
+ if chunkRange == 0 {
+ chunkRange = DefaultBlockDuration
+ }
+ if chunkRange < 0 {
+ return "", InvalidTimesError
+ }
+ head, err := CreateHead(samples, chunkRange, logger)
+ if err != nil {
+ return "", err
+ }
+
+ compactor, err := NewLeveledCompactor(context.Background(), nil, logger, DefaultOptions.BlockRanges, nil)
+ if err != nil {
+ return "", err
+ }
+
+ err = os.MkdirAll(dir, 0777)
+ if err != nil {
+ return "", err
+ }
+
+ ulid, err := compactor.Write(dir, head, mint, maxt, nil)
+ if err != nil {
+ return "", err
+ }
+
+ return filepath.Join(dir, ulid.String()), nil
+}
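
A sketch of driving the new CreateBlock helper end to end; the label values and time bounds are illustrative, and mint/maxt are assumed to bracket all sample timestamps:

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/tsdb"
)

func main() {
	dir, err := ioutil.TempDir("", "blockdemo")
	if err != nil {
		panic(err)
	}
	samples := []*tsdb.MetricSample{
		{TimestampMs: 1000, Value: 1, Labels: labels.FromStrings("__name__", "up")},
		{TimestampMs: 2000, Value: 0, Labels: labels.FromStrings("__name__", "up")},
	}
	// Persist one block spanning [1000, 3000) and print where it landed.
	path, err := tsdb.CreateBlock(samples, dir, 1000, 3000, log.NewNopLogger())
	if err != nil {
		panic(err)
	}
	fmt.Println(path) // e.g. <tmpdir>/<ulid>
}
```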
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/wal.go b/vendor/github.com/prometheus/prometheus/tsdb/wal.go
index 187feabd9df4a..17f8963b7eecd 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/wal.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/wal.go
@@ -31,9 +31,11 @@ import (
"github.com/go-kit/kit/log/level"
"github.com/pkg/errors"
"github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/prometheus/pkg/labels"
"github.com/prometheus/prometheus/tsdb/encoding"
"github.com/prometheus/prometheus/tsdb/fileutil"
- "github.com/prometheus/prometheus/tsdb/labels"
+ "github.com/prometheus/prometheus/tsdb/record"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
"github.com/prometheus/prometheus/tsdb/wal"
)
@@ -89,9 +91,9 @@ func newWalMetrics(wal *SegmentWAL, r prometheus.Registerer) *walMetrics {
// DEPRECATED: use wal pkg combined with the record codex instead.
type WAL interface {
Reader() WALReader
- LogSeries([]RefSeries) error
- LogSamples([]RefSample) error
- LogDeletes([]Stone) error
+ LogSeries([]record.RefSeries) error
+ LogSamples([]record.RefSample) error
+ LogDeletes([]tombstones.Stone) error
Truncate(mint int64, keep func(uint64) bool) error
Close() error
}
@@ -99,27 +101,12 @@ type WAL interface {
// WALReader reads entries from a WAL.
type WALReader interface {
Read(
- seriesf func([]RefSeries),
- samplesf func([]RefSample),
- deletesf func([]Stone),
+ seriesf func([]record.RefSeries),
+ samplesf func([]record.RefSample),
+ deletesf func([]tombstones.Stone),
) error
}
-// RefSeries is the series labels with the series ID.
-type RefSeries struct {
- Ref uint64
- Labels labels.Labels
-}
-
-// RefSample is a timestamp/value pair associated with a reference to a series.
-type RefSample struct {
- Ref uint64
- T int64
- V float64
-
- series *memSeries
-}
-
// segmentFile wraps a file object of a segment and tracks the highest timestamp
// it contains. During WAL truncating, all segments with no higher timestamp than
// the truncation threshold can be compacted.
@@ -240,9 +227,9 @@ type repairingWALReader struct {
}
func (r *repairingWALReader) Read(
- seriesf func([]RefSeries),
- samplesf func([]RefSample),
- deletesf func([]Stone),
+ seriesf func([]record.RefSeries),
+ samplesf func([]record.RefSample),
+ deletesf func([]tombstones.Stone),
) error {
err := r.r.Read(seriesf, samplesf, deletesf)
if err == nil {
@@ -348,8 +335,8 @@ func (w *SegmentWAL) Truncate(mint int64, keep func(uint64) bool) error {
var (
csf = newSegmentFile(f)
crc32 = newCRC32()
- decSeries = []RefSeries{}
- activeSeries = []RefSeries{}
+ decSeries = []record.RefSeries{}
+ activeSeries = []record.RefSeries{}
)
for r.next() {
@@ -433,7 +420,7 @@ func (w *SegmentWAL) Truncate(mint int64, keep func(uint64) bool) error {
// LogSeries writes a batch of new series labels to the log.
// The series have to be ordered.
-func (w *SegmentWAL) LogSeries(series []RefSeries) error {
+func (w *SegmentWAL) LogSeries(series []record.RefSeries) error {
buf := w.getBuffer()
flag := w.encodeSeries(buf, series)
@@ -460,7 +447,7 @@ func (w *SegmentWAL) LogSeries(series []RefSeries) error {
}
// LogSamples writes a batch of new samples to the log.
-func (w *SegmentWAL) LogSamples(samples []RefSample) error {
+func (w *SegmentWAL) LogSamples(samples []record.RefSample) error {
buf := w.getBuffer()
flag := w.encodeSamples(buf, samples)
@@ -486,7 +473,7 @@ func (w *SegmentWAL) LogSamples(samples []RefSample) error {
}
// LogDeletes write a batch of new deletes to the log.
-func (w *SegmentWAL) LogDeletes(stones []Stone) error {
+func (w *SegmentWAL) LogDeletes(stones []tombstones.Stone) error {
buf := w.getBuffer()
flag := w.encodeDeletes(buf, stones)
@@ -504,7 +491,7 @@ func (w *SegmentWAL) LogDeletes(stones []Stone) error {
tf := w.head()
for _, s := range stones {
- for _, iv := range s.intervals {
+ for _, iv := range s.Intervals {
if tf.maxTime < iv.Maxt {
tf.maxTime = iv.Maxt
}
@@ -797,7 +784,7 @@ const (
walDeletesSimple = 1
)
-func (w *SegmentWAL) encodeSeries(buf *encoding.Encbuf, series []RefSeries) uint8 {
+func (w *SegmentWAL) encodeSeries(buf *encoding.Encbuf, series []record.RefSeries) uint8 {
for _, s := range series {
buf.PutBE64(s.Ref)
buf.PutUvarint(len(s.Labels))
@@ -810,7 +797,7 @@ func (w *SegmentWAL) encodeSeries(buf *encoding.Encbuf, series []RefSeries) uint
return walSeriesSimple
}
-func (w *SegmentWAL) encodeSamples(buf *encoding.Encbuf, samples []RefSample) uint8 {
+func (w *SegmentWAL) encodeSamples(buf *encoding.Encbuf, samples []record.RefSample) uint8 {
if len(samples) == 0 {
return walSamplesSimple
}
@@ -831,10 +818,10 @@ func (w *SegmentWAL) encodeSamples(buf *encoding.Encbuf, samples []RefSample) ui
return walSamplesSimple
}
-func (w *SegmentWAL) encodeDeletes(buf *encoding.Encbuf, stones []Stone) uint8 {
+func (w *SegmentWAL) encodeDeletes(buf *encoding.Encbuf, stones []tombstones.Stone) uint8 {
for _, s := range stones {
- for _, iv := range s.intervals {
- buf.PutBE64(s.ref)
+ for _, iv := range s.Intervals {
+ buf.PutBE64(s.Ref)
buf.PutVarint64(iv.Mint)
buf.PutVarint64(iv.Maxt)
}
@@ -877,9 +864,9 @@ func (r *walReader) Err() error {
}
func (r *walReader) Read(
- seriesf func([]RefSeries),
- samplesf func([]RefSample),
- deletesf func([]Stone),
+ seriesf func([]record.RefSeries),
+ samplesf func([]record.RefSample),
+ deletesf func([]tombstones.Stone),
) error {
// Concurrency for replaying the WAL is very limited. We at least split out decoding and
// processing into separate threads.
@@ -898,19 +885,19 @@ func (r *walReader) Read(
for x := range datac {
switch v := x.(type) {
- case []RefSeries:
+ case []record.RefSeries:
if seriesf != nil {
seriesf(v)
}
//lint:ignore SA6002 safe to ignore and actually fixing it has some performance penalty.
seriesPool.Put(v[:0])
- case []RefSample:
+ case []record.RefSample:
if samplesf != nil {
samplesf(v)
}
//lint:ignore SA6002 safe to ignore and actually fixing it has some performance penalty.
samplePool.Put(v[:0])
- case []Stone:
+ case []tombstones.Stone:
if deletesf != nil {
deletesf(v)
}
@@ -928,14 +915,14 @@ func (r *walReader) Read(
et, flag, b := r.at()
// In decoding below we never return a walCorruptionErr for now.
- // Those should generally be catched by entry decoding before.
+ // Those should generally be caught by entry decoding before.
switch et {
case WALEntrySeries:
- var series []RefSeries
+ var series []record.RefSeries
if v := seriesPool.Get(); v == nil {
- series = make([]RefSeries, 0, 512)
+ series = make([]record.RefSeries, 0, 512)
} else {
- series = v.([]RefSeries)
+ series = v.([]record.RefSeries)
}
err = r.decodeSeries(flag, b, &series)
@@ -952,11 +939,11 @@ func (r *walReader) Read(
}
}
case WALEntrySamples:
- var samples []RefSample
+ var samples []record.RefSample
if v := samplePool.Get(); v == nil {
- samples = make([]RefSample, 0, 512)
+ samples = make([]record.RefSample, 0, 512)
} else {
- samples = v.([]RefSample)
+ samples = v.([]record.RefSample)
}
err = r.decodeSamples(flag, b, &samples)
@@ -974,11 +961,11 @@ func (r *walReader) Read(
}
}
case WALEntryDeletes:
- var deletes []Stone
+ var deletes []tombstones.Stone
if v := deletePool.Get(); v == nil {
- deletes = make([]Stone, 0, 512)
+ deletes = make([]tombstones.Stone, 0, 512)
} else {
- deletes = v.([]Stone)
+ deletes = v.([]tombstones.Stone)
}
err = r.decodeDeletes(flag, b, &deletes)
@@ -991,7 +978,7 @@ func (r *walReader) Read(
// Update the times for the WAL segment file.
cf := r.current()
for _, s := range deletes {
- for _, iv := range s.intervals {
+ for _, iv := range s.Intervals {
if cf.maxTime < iv.Maxt {
cf.maxTime = iv.Maxt
}
@@ -1128,7 +1115,7 @@ func (r *walReader) entry(cr io.Reader) (WALEntryType, byte, []byte, error) {
return etype, flag, buf, nil
}
-func (r *walReader) decodeSeries(flag byte, b []byte, res *[]RefSeries) error {
+func (r *walReader) decodeSeries(flag byte, b []byte, res *[]record.RefSeries) error {
dec := encoding.Decbuf{B: b}
for len(dec.B) > 0 && dec.Err() == nil {
@@ -1142,7 +1129,7 @@ func (r *walReader) decodeSeries(flag byte, b []byte, res *[]RefSeries) error {
}
sort.Sort(lset)
- *res = append(*res, RefSeries{
+ *res = append(*res, record.RefSeries{
Ref: ref,
Labels: lset,
})
@@ -1156,7 +1143,7 @@ func (r *walReader) decodeSeries(flag byte, b []byte, res *[]RefSeries) error {
return nil
}
-func (r *walReader) decodeSamples(flag byte, b []byte, res *[]RefSample) error {
+func (r *walReader) decodeSamples(flag byte, b []byte, res *[]record.RefSample) error {
if len(b) == 0 {
return nil
}
@@ -1172,7 +1159,7 @@ func (r *walReader) decodeSamples(flag byte, b []byte, res *[]RefSample) error {
dtime := dec.Varint64()
val := dec.Be64()
- *res = append(*res, RefSample{
+ *res = append(*res, record.RefSample{
Ref: uint64(int64(baseRef) + dref),
T: baseTime + dtime,
V: math.Float64frombits(val),
@@ -1188,13 +1175,13 @@ func (r *walReader) decodeSamples(flag byte, b []byte, res *[]RefSample) error {
return nil
}
-func (r *walReader) decodeDeletes(flag byte, b []byte, res *[]Stone) error {
+func (r *walReader) decodeDeletes(flag byte, b []byte, res *[]tombstones.Stone) error {
dec := &encoding.Decbuf{B: b}
for dec.Len() > 0 && dec.Err() == nil {
- *res = append(*res, Stone{
- ref: dec.Be64(),
- intervals: Intervals{
+ *res = append(*res, tombstones.Stone{
+ Ref: dec.Be64(),
+ Intervals: tombstones.Intervals{
{Mint: dec.Varint64(), Maxt: dec.Varint64()},
},
})
@@ -1274,23 +1261,23 @@ func MigrateWAL(logger log.Logger, dir string) (err error) {
rdr := w.Reader()
var (
- enc RecordEncoder
+ enc record.Encoder
b []byte
)
decErr := rdr.Read(
- func(s []RefSeries) {
+ func(s []record.RefSeries) {
if err != nil {
return
}
err = repl.Log(enc.Series(s, b[:0]))
},
- func(s []RefSample) {
+ func(s []record.RefSample) {
if err != nil {
return
}
err = repl.Log(enc.Samples(s, b[:0]))
},
- func(s []Stone) {
+ func(s []tombstones.Stone) {
if err != nil {
return
}
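
Callers holding a directory written by the deprecated SegmentWAL can convert it with MigrateWAL, whose signature appears above. A minimal sketch with a placeholder path:

```go
package main

import (
	"github.com/go-kit/kit/log"
	"github.com/prometheus/prometheus/tsdb"
)

func main() {
	// "data/wal" is a placeholder; point it at a directory written by the
	// deprecated SegmentWAL. MigrateWAL re-logs its records through the
	// record.Encoder into the new wal format.
	if err := tsdb.MigrateWAL(log.NewNopLogger(), "data/wal"); err != nil {
		panic(err)
	}
}
```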
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/checkpoint.go b/vendor/github.com/prometheus/prometheus/tsdb/wal/checkpoint.go
similarity index 88%
rename from vendor/github.com/prometheus/prometheus/tsdb/checkpoint.go
rename to vendor/github.com/prometheus/prometheus/tsdb/wal/checkpoint.go
index d82e567f93ed0..8d3cc8ab6cb56 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/checkpoint.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/wal/checkpoint.go
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-package tsdb
+package wal
import (
"fmt"
@@ -27,7 +27,8 @@ import (
"github.com/pkg/errors"
tsdb_errors "github.com/prometheus/prometheus/tsdb/errors"
"github.com/prometheus/prometheus/tsdb/fileutil"
- "github.com/prometheus/prometheus/tsdb/wal"
+ "github.com/prometheus/prometheus/tsdb/record"
+ "github.com/prometheus/prometheus/tsdb/tombstones"
)
// CheckpointStats returns stats about a created checkpoint.
@@ -63,7 +64,7 @@ func LastCheckpoint(dir string) (string, int, error) {
}
return filepath.Join(dir, fi.Name()), idx, nil
}
- return "", 0, ErrNotFound
+ return "", 0, record.ErrNotFound
}
// DeleteCheckpoints deletes all checkpoints in a directory below a given index.
@@ -99,15 +100,15 @@ const checkpointPrefix = "checkpoint."
// segmented format as the original WAL itself.
// This makes it easy to read it through the WAL package and concatenate
// it with the original WAL.
-func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64) (*CheckpointStats, error) {
+func Checkpoint(w *WAL, from, to int, keep func(id uint64) bool, mint int64) (*CheckpointStats, error) {
stats := &CheckpointStats{}
var sgmReader io.ReadCloser
{
- var sgmRange []wal.SegmentRange
+ var sgmRange []SegmentRange
dir, idx, err := LastCheckpoint(w.Dir())
- if err != nil && err != ErrNotFound {
+ if err != nil && err != record.ErrNotFound {
return nil, errors.Wrap(err, "find last checkpoint")
}
last := idx + 1
@@ -118,11 +119,11 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
// Ignore WAL files below the checkpoint. They shouldn't exist to begin with.
from = last
- sgmRange = append(sgmRange, wal.SegmentRange{Dir: dir, Last: math.MaxInt32})
+ sgmRange = append(sgmRange, SegmentRange{Dir: dir, Last: math.MaxInt32})
}
- sgmRange = append(sgmRange, wal.SegmentRange{Dir: w.Dir(), First: from, Last: to})
- sgmReader, err = wal.NewSegmentsRangeReader(sgmRange...)
+ sgmRange = append(sgmRange, SegmentRange{Dir: w.Dir(), First: from, Last: to})
+ sgmReader, err = NewSegmentsRangeReader(sgmRange...)
if err != nil {
return nil, errors.Wrap(err, "create segment reader")
}
@@ -135,7 +136,7 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
if err := os.MkdirAll(cpdirtmp, 0777); err != nil {
return nil, errors.Wrap(err, "create checkpoint dir")
}
- cp, err := wal.New(nil, nil, cpdirtmp, w.CompressionEnabled())
+ cp, err := New(nil, nil, cpdirtmp, w.CompressionEnabled())
if err != nil {
return nil, errors.Wrap(err, "open checkpoint")
}
@@ -146,14 +147,14 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
os.RemoveAll(cpdirtmp)
}()
- r := wal.NewReader(sgmReader)
+ r := NewReader(sgmReader)
var (
- series []RefSeries
- samples []RefSample
- tstones []Stone
- dec RecordDecoder
- enc RecordEncoder
+ series []record.RefSeries
+ samples []record.RefSample
+ tstones []tombstones.Stone
+ dec record.Decoder
+ enc record.Encoder
buf []byte
recs [][]byte
)
@@ -167,7 +168,7 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
rec := r.Record()
switch dec.Type(rec) {
- case RecordSeries:
+ case record.Series:
series, err = dec.Series(rec, series)
if err != nil {
return nil, errors.Wrap(err, "decode series")
@@ -185,7 +186,7 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
stats.TotalSeries += len(series)
stats.DroppedSeries += len(series) - len(repl)
- case RecordSamples:
+ case record.Samples:
samples, err = dec.Samples(rec, samples)
if err != nil {
return nil, errors.Wrap(err, "decode samples")
@@ -203,7 +204,7 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
stats.TotalSamples += len(samples)
stats.DroppedSamples += len(samples) - len(repl)
- case RecordTombstones:
+ case record.Tombstones:
tstones, err = dec.Tombstones(rec, tstones)
if err != nil {
return nil, errors.Wrap(err, "decode deletes")
@@ -211,7 +212,7 @@ func Checkpoint(w *wal.WAL, from, to int, keep func(id uint64) bool, mint int64)
// Drop irrelevant tombstones in place.
repl := tstones[:0]
for _, s := range tstones {
- for _, iv := range s.intervals {
+ for _, iv := range s.Intervals {
if iv.Maxt >= mint {
repl = append(repl, s)
break
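
A sketch of the checkpoint-then-truncate sequence this function supports. The segment indices, keep function, and mint are illustrative; Prometheus' head supplies its live series refs and truncation time here:

```go
package main

import "github.com/prometheus/prometheus/tsdb/wal"

// checkpointOldSegments folds everything up to the second-to-last segment
// into a checkpoint, then drops the now-redundant segments.
func checkpointOldSegments(w *wal.WAL, mint int64) error {
	first, last, err := w.Segments()
	if err != nil {
		return err
	}
	if last-first < 2 {
		return nil // not enough segments to be worth checkpointing
	}
	keep := func(id uint64) bool { return true } // this sketch keeps every series
	if _, err := wal.Checkpoint(w, first, last-2, keep, mint); err != nil {
		return err
	}
	// Truncate drops all segments before the given index; everything below
	// last-1 is now covered by the checkpoint.
	return w.Truncate(last - 1)
}

func main() {}
```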
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/wal/live_reader.go b/vendor/github.com/prometheus/prometheus/tsdb/wal/live_reader.go
index 6d29036884cae..446e859940420 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/wal/live_reader.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/wal/live_reader.go
@@ -32,7 +32,7 @@ type liveReaderMetrics struct {
readerCorruptionErrors *prometheus.CounterVec
}
-// NewLiveReaderMetrics instatiates, registers and returns metrics to be injected
+// NewLiveReaderMetrics instantiates, registers and returns metrics to be injected
// at LiveReader instantiation.
func NewLiveReaderMetrics(reg prometheus.Registerer) *liveReaderMetrics {
m := &liveReaderMetrics{
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/wal/wal.go b/vendor/github.com/prometheus/prometheus/tsdb/wal/wal.go
index 6d64bb37ef922..b0b6931744722 100644
--- a/vendor/github.com/prometheus/prometheus/tsdb/wal/wal.go
+++ b/vendor/github.com/prometheus/prometheus/tsdb/wal/wal.go
@@ -177,6 +177,10 @@ type WAL struct {
compress bool
snappyBuf []byte
+ metrics *walMetrics
+}
+
+type walMetrics struct {
fsyncDuration prometheus.Summary
pageFlushes prometheus.Counter
pageCompletions prometheus.Counter
@@ -185,6 +189,49 @@ type WAL struct {
currentSegment prometheus.Gauge
}
+func newWALMetrics(w *WAL, r prometheus.Registerer) *walMetrics {
+ m := &walMetrics{}
+
+ m.fsyncDuration = prometheus.NewSummary(prometheus.SummaryOpts{
+ Name: "prometheus_tsdb_wal_fsync_duration_seconds",
+ Help: "Duration of WAL fsync.",
+ Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
+ })
+ m.pageFlushes = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "prometheus_tsdb_wal_page_flushes_total",
+ Help: "Total number of page flushes.",
+ })
+ m.pageCompletions = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "prometheus_tsdb_wal_completed_pages_total",
+ Help: "Total number of completed pages.",
+ })
+ m.truncateFail = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "prometheus_tsdb_wal_truncations_failed_total",
+ Help: "Total number of WAL truncations that failed.",
+ })
+ m.truncateTotal = prometheus.NewCounter(prometheus.CounterOpts{
+ Name: "prometheus_tsdb_wal_truncations_total",
+ Help: "Total number of WAL truncations attempted.",
+ })
+ m.currentSegment = prometheus.NewGauge(prometheus.GaugeOpts{
+ Name: "prometheus_tsdb_wal_segment_current",
+ Help: "WAL segment index that TSDB is currently writing to.",
+ })
+
+ if r != nil {
+ r.MustRegister(
+ m.fsyncDuration,
+ m.pageFlushes,
+ m.pageCompletions,
+ m.truncateFail,
+ m.truncateTotal,
+ m.currentSegment,
+ )
+ }
+
+ return m
+}
+
// New returns a new WAL over the given directory.
func New(logger log.Logger, reg prometheus.Registerer, dir string, compress bool) (*WAL, error) {
return NewSize(logger, reg, dir, DefaultSegmentSize, compress)
@@ -211,7 +258,7 @@ func NewSize(logger log.Logger, reg prometheus.Registerer, dir string, segmentSi
stopc: make(chan chan struct{}),
compress: compress,
}
- registerMetrics(reg, w)
+ w.metrics = newWALMetrics(w, reg)
_, last, err := w.Segments()
if err != nil {
@@ -249,41 +296,9 @@ func Open(logger log.Logger, reg prometheus.Registerer, dir string) (*WAL, error
logger: logger,
}
- registerMetrics(reg, w)
return w, nil
}
-func registerMetrics(reg prometheus.Registerer, w *WAL) {
- w.fsyncDuration = prometheus.NewSummary(prometheus.SummaryOpts{
- Name: "prometheus_tsdb_wal_fsync_duration_seconds",
- Help: "Duration of WAL fsync.",
- Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
- })
- w.pageFlushes = prometheus.NewCounter(prometheus.CounterOpts{
- Name: "prometheus_tsdb_wal_page_flushes_total",
- Help: "Total number of page flushes.",
- })
- w.pageCompletions = prometheus.NewCounter(prometheus.CounterOpts{
- Name: "prometheus_tsdb_wal_completed_pages_total",
- Help: "Total number of completed pages.",
- })
- w.truncateFail = prometheus.NewCounter(prometheus.CounterOpts{
- Name: "prometheus_tsdb_wal_truncations_failed_total",
- Help: "Total number of WAL truncations that failed.",
- })
- w.truncateTotal = prometheus.NewCounter(prometheus.CounterOpts{
- Name: "prometheus_tsdb_wal_truncations_total",
- Help: "Total number of WAL truncations attempted.",
- })
- w.currentSegment = prometheus.NewGauge(prometheus.GaugeOpts{
- Name: "prometheus_tsdb_wal_segment_current",
- Help: "WAL segment index that TSDB is currently writing to.",
- })
- if reg != nil {
- reg.MustRegister(w.fsyncDuration, w.pageFlushes, w.pageCompletions, w.truncateFail, w.truncateTotal, w.currentSegment)
- }
-}
-
// CompressionEnabled returns if compression is enabled on this WAL.
func (w *WAL) CompressionEnabled() bool {
return w.compress
@@ -476,7 +491,7 @@ func (w *WAL) setSegment(segment *Segment) error {
return err
}
w.donePages = int(stat.Size() / pageSize)
- w.currentSegment.Set(float64(segment.Index()))
+ w.metrics.currentSegment.Set(float64(segment.Index()))
return nil
}
@@ -484,7 +499,7 @@ func (w *WAL) setSegment(segment *Segment) error {
// the page, the remaining bytes will be set to zero and a new page will be started.
// If clear is true, this is enforced regardless of how many bytes are left in the page.
func (w *WAL) flushPage(clear bool) error {
- w.pageFlushes.Inc()
+ w.metrics.pageFlushes.Inc()
p := w.page
clear = clear || p.full()
@@ -504,7 +519,7 @@ func (w *WAL) flushPage(clear bool) error {
if clear {
p.reset()
w.donePages++
- w.pageCompletions.Inc()
+ w.metrics.pageCompletions.Inc()
}
return nil
}
@@ -671,10 +686,10 @@ func (w *WAL) Segments() (first, last int, err error) {
// Truncate drops all segments before i.
func (w *WAL) Truncate(i int) (err error) {
- w.truncateTotal.Inc()
+ w.metrics.truncateTotal.Inc()
defer func() {
if err != nil {
- w.truncateFail.Inc()
+ w.metrics.truncateFail.Inc()
}
}()
refs, err := listSegments(w.dir)
@@ -695,7 +710,7 @@ func (w *WAL) Truncate(i int) (err error) {
func (w *WAL) fsync(f *Segment) error {
start := time.Now()
err := f.File.Sync()
- w.fsyncDuration.Observe(time.Since(start).Seconds())
+ w.metrics.fsyncDuration.Observe(time.Since(start).Seconds())
return err
}
@@ -861,3 +876,9 @@ func (r *segmentBufReader) Read(b []byte) (n int, err error) {
r.buf.Reset(r.segs[r.cur])
return n, nil
}
+
+// Size returns the size of the WAL on disk.
+// We compute it by adding the sizes of all the files under the WAL dir.
+func (w *WAL) Size() (int64, error) {
+ return fileutil.DirSize(w.Dir())
+}
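
Putting the wal package and the record codec together: the sketch below writes one series record, reports the new Size() helper, and reads the record back through NewSegmentsRangeReader and NewReader, mirroring the loop in checkpoint.go. The Reader's Next/Record iteration is assumed from its usage there:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"math"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/prometheus/pkg/labels"
	"github.com/prometheus/prometheus/tsdb/record"
	"github.com/prometheus/prometheus/tsdb/wal"
)

func main() {
	dir, err := ioutil.TempDir("", "waldemo")
	if err != nil {
		panic(err)
	}

	// Write one series record through the wal package (no registry, no compression).
	w, err := wal.New(log.NewNopLogger(), nil, dir, false)
	if err != nil {
		panic(err)
	}
	var enc record.Encoder
	if err := w.Log(enc.Series([]record.RefSeries{
		{Ref: 1, Labels: labels.FromStrings("job", "demo")},
	}, nil)); err != nil {
		panic(err)
	}
	sz, err := w.Size() // bytes on disk under the WAL dir
	if err != nil {
		panic(err)
	}
	fmt.Println("wal size:", sz)
	if err := w.Close(); err != nil {
		panic(err)
	}

	// Read every record back and dispatch on its type, as Checkpoint does.
	sr, err := wal.NewSegmentsRangeReader(wal.SegmentRange{Dir: dir, First: 0, Last: math.MaxInt32})
	if err != nil {
		panic(err)
	}
	defer sr.Close()
	r := wal.NewReader(sr)
	var dec record.Decoder
	for r.Next() {
		rec := r.Record()
		if dec.Type(rec) == record.Series {
			series, err := dec.Series(rec, nil)
			if err != nil {
				panic(err)
			}
			fmt.Println(series)
		}
	}
}
```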
diff --git a/vendor/github.com/prometheus/prometheus/tsdb/wal/watcher.go b/vendor/github.com/prometheus/prometheus/tsdb/wal/watcher.go
new file mode 100644
index 0000000000000..11c9bfddcb397
--- /dev/null
+++ b/vendor/github.com/prometheus/prometheus/tsdb/wal/watcher.go
@@ -0,0 +1,595 @@
+// Copyright 2018 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package wal
+
+import (
+ "fmt"
+ "io"
+ "math"
+ "os"
+ "path"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+
+ "github.com/go-kit/kit/log"
+ "github.com/go-kit/kit/log/level"
+ "github.com/pkg/errors"
+ "github.com/prometheus/client_golang/prometheus"
+ "github.com/prometheus/prometheus/pkg/timestamp"
+ "github.com/prometheus/prometheus/tsdb/fileutil"
+ "github.com/prometheus/prometheus/tsdb/record"
+)
+
+const (
+ readPeriod = 10 * time.Millisecond
+ checkpointPeriod = 5 * time.Second
+ segmentCheckPeriod = 100 * time.Millisecond
+ consumer = "consumer"
+)
+
+// WriteTo is an interface used by the Watcher to send the samples it's read
+// from the WAL on to somewhere else. Functions will be called concurrently
+// and it is left to the implementer to make sure they are safe.
+type WriteTo interface {
+ Append([]record.RefSample) bool
+ StoreSeries([]record.RefSeries, int)
+ // SeriesReset is called after reading a checkpoint to allow the deletion
+ // of all series created in a segment lower than the argument.
+ SeriesReset(int)
+}
+
+type WatcherMetrics struct {
+ recordsRead *prometheus.CounterVec
+ recordDecodeFails *prometheus.CounterVec
+ samplesSentPreTailing *prometheus.CounterVec
+ currentSegment *prometheus.GaugeVec
+}
+
+// Watcher watches the TSDB WAL for a given WriteTo.
+type Watcher struct {
+ name string
+ writer WriteTo
+ logger log.Logger
+ walDir string
+ lastCheckpoint string
+ metrics *WatcherMetrics
+ readerMetrics *liveReaderMetrics
+
+ StartTime int64
+
+ recordsReadMetric *prometheus.CounterVec
+ recordDecodeFailsMetric prometheus.Counter
+ samplesSentPreTailing prometheus.Counter
+ currentSegmentMetric prometheus.Gauge
+
+ quit chan struct{}
+ done chan struct{}
+
+ // For testing, stop when we hit this segment.
+ MaxSegment int
+}
+
+func NewWatcherMetrics(reg prometheus.Registerer) *WatcherMetrics {
+ m := &WatcherMetrics{
+ recordsRead: prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Namespace: "prometheus",
+ Subsystem: "wal_watcher",
+ Name: "records_read_total",
+ Help: "Number of records read by the WAL watcher from the WAL.",
+ },
+ []string{consumer, "type"},
+ ),
+ recordDecodeFails: prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Namespace: "prometheus",
+ Subsystem: "wal_watcher",
+ Name: "record_decode_failures_total",
+ Help: "Number of records read by the WAL watcher that resulted in an error when decoding.",
+ },
+ []string{consumer},
+ ),
+ samplesSentPreTailing: prometheus.NewCounterVec(
+ prometheus.CounterOpts{
+ Namespace: "prometheus",
+ Subsystem: "wal_watcher",
+ Name: "samples_sent_pre_tailing_total",
+ Help: "Number of sample records read by the WAL watcher and sent to remote write during replay of existing WAL.",
+ },
+ []string{consumer},
+ ),
+ currentSegment: prometheus.NewGaugeVec(
+ prometheus.GaugeOpts{
+ Namespace: "prometheus",
+ Subsystem: "wal_watcher",
+ Name: "current_segment",
+ Help: "Current segment the WAL watcher is reading records from.",
+ },
+ []string{consumer},
+ ),
+ }
+
+ if reg != nil {
+ _ = reg.Register(m.recordsRead)
+ _ = reg.Register(m.recordDecodeFails)
+ _ = reg.Register(m.samplesSentPreTailing)
+ _ = reg.Register(m.currentSegment)
+ }
+
+ return m
+}
+
+// NewWatcher creates a new WAL watcher for a given WriteTo.
+func NewWatcher(reg prometheus.Registerer, metrics *WatcherMetrics, logger log.Logger, name string, writer WriteTo, walDir string) *Watcher {
+ if logger == nil {
+ logger = log.NewNopLogger()
+ }
+ return &Watcher{
+ logger: logger,
+ writer: writer,
+ metrics: metrics,
+ readerMetrics: NewLiveReaderMetrics(reg),
+ walDir: path.Join(walDir, "wal"),
+ name: name,
+ quit: make(chan struct{}),
+ done: make(chan struct{}),
+
+ MaxSegment: -1,
+ }
+}
+
+func (w *Watcher) setMetrics() {
+	// Set up the WAL Watcher's metrics. We do this here rather than in the
+	// constructor because of the ordering of creating Queue Managers,
+ // stopping them, and then starting new ones in storage/remote/storage.go ApplyConfig.
+ if w.metrics != nil {
+ w.recordsReadMetric = w.metrics.recordsRead.MustCurryWith(prometheus.Labels{consumer: w.name})
+ w.recordDecodeFailsMetric = w.metrics.recordDecodeFails.WithLabelValues(w.name)
+ w.samplesSentPreTailing = w.metrics.samplesSentPreTailing.WithLabelValues(w.name)
+ w.currentSegmentMetric = w.metrics.currentSegment.WithLabelValues(w.name)
+ }
+}
+
+// Start the Watcher.
+func (w *Watcher) Start() {
+ w.setMetrics()
+ level.Info(w.logger).Log("msg", "starting WAL watcher", "queue", w.name)
+
+ go w.loop()
+}
+
+// Stop the Watcher.
+func (w *Watcher) Stop() {
+ close(w.quit)
+ <-w.done
+
+ // Records read metric has series and samples.
+ w.metrics.recordsRead.DeleteLabelValues(w.name, "series")
+ w.metrics.recordsRead.DeleteLabelValues(w.name, "samples")
+ w.metrics.recordDecodeFails.DeleteLabelValues(w.name)
+ w.metrics.samplesSentPreTailing.DeleteLabelValues(w.name)
+ w.metrics.currentSegment.DeleteLabelValues(w.name)
+
+ level.Info(w.logger).Log("msg", "WAL watcher stopped", "queue", w.name)
+}
+
+func (w *Watcher) loop() {
+ defer close(w.done)
+
+ // We may encounter failures processing the WAL; we should wait and retry.
+ for !isClosed(w.quit) {
+ w.StartTime = timestamp.FromTime(time.Now())
+ if err := w.Run(); err != nil {
+ level.Error(w.logger).Log("msg", "error tailing WAL", "err", err)
+ }
+
+ select {
+ case <-w.quit:
+ return
+ case <-time.After(5 * time.Second):
+ }
+ }
+}
+
+// Run the watcher, which will tail the WAL until the quit channel is closed
+// or an error case is hit.
+func (w *Watcher) Run() error {
+ _, lastSegment, err := w.firstAndLast()
+ if err != nil {
+ return errors.Wrap(err, "wal.Segments")
+ }
+
+ // Backfill from the checkpoint first if it exists.
+ lastCheckpoint, checkpointIndex, err := LastCheckpoint(w.walDir)
+ if err != nil && err != record.ErrNotFound {
+ return errors.Wrap(err, "tsdb.LastCheckpoint")
+ }
+
+ if err == nil {
+ if err = w.readCheckpoint(lastCheckpoint); err != nil {
+ return errors.Wrap(err, "readCheckpoint")
+ }
+ }
+ w.lastCheckpoint = lastCheckpoint
+
+ currentSegment, err := w.findSegmentForIndex(checkpointIndex)
+ if err != nil {
+ return err
+ }
+
+ level.Debug(w.logger).Log("msg", "tailing WAL", "lastCheckpoint", lastCheckpoint, "checkpointIndex", checkpointIndex, "currentSegment", currentSegment, "lastSegment", lastSegment)
+ for !isClosed(w.quit) {
+ w.currentSegmentMetric.Set(float64(currentSegment))
+ level.Debug(w.logger).Log("msg", "processing segment", "currentSegment", currentSegment)
+
+ // On start, after reading the existing WAL for series records, we have a pointer to what is the latest segment.
+ // On subsequent calls to this function, currentSegment will have been incremented and we should open that segment.
+ if err := w.watch(currentSegment, currentSegment >= lastSegment); err != nil {
+ return err
+ }
+
+ // For testing: stop when you hit a specific segment.
+ if currentSegment == w.MaxSegment {
+ return nil
+ }
+
+ currentSegment++
+ }
+
+ return nil
+}
+
+// findSegmentForIndex finds the first segment greater than or equal to index.
+func (w *Watcher) findSegmentForIndex(index int) (int, error) {
+ refs, err := w.segments(w.walDir)
+ if err != nil {
+ return -1, err
+ }
+
+ for _, r := range refs {
+ if r >= index {
+ return r, nil
+ }
+ }
+
+ return -1, errors.New("failed to find segment for index")
+}
+
+func (w *Watcher) firstAndLast() (int, int, error) {
+ refs, err := w.segments(w.walDir)
+ if err != nil {
+ return -1, -1, err
+ }
+
+ if len(refs) == 0 {
+ return -1, -1, nil
+ }
+ return refs[0], refs[len(refs)-1], nil
+}
+
+// Copied from tsdb/wal/wal.go so we do not have to open a WAL.
+// Plan is to move WAL watcher to TSDB and dedupe these implementations.
+func (w *Watcher) segments(dir string) ([]int, error) {
+ files, err := fileutil.ReadDir(dir)
+ if err != nil {
+ return nil, err
+ }
+
+ var refs []int
+ var last int
+ for _, fn := range files {
+ k, err := strconv.Atoi(fn)
+ if err != nil {
+ continue
+ }
+ if len(refs) > 0 && k > last+1 {
+ return nil, errors.New("segments are not sequential")
+ }
+ refs = append(refs, k)
+ last = k
+ }
+ sort.Ints(refs)
+
+ return refs, nil
+}
+
+// Use tail true to indicate that the reader is currently on a segment that is
+// actively being written to. If false, assume it's a full segment and we're
+// replaying it on start to cache the series records.
+func (w *Watcher) watch(segmentNum int, tail bool) error {
+ segment, err := OpenReadSegment(SegmentName(w.walDir, segmentNum))
+ if err != nil {
+ return err
+ }
+ defer segment.Close()
+
+ reader := NewLiveReader(w.logger, w.readerMetrics, segment)
+
+ readTicker := time.NewTicker(readPeriod)
+ defer readTicker.Stop()
+
+ checkpointTicker := time.NewTicker(checkpointPeriod)
+ defer checkpointTicker.Stop()
+
+ segmentTicker := time.NewTicker(segmentCheckPeriod)
+ defer segmentTicker.Stop()
+
+ // If we're replaying the segment we need to know the size of the file to know
+ // when to return from watch and move on to the next segment.
+ size := int64(math.MaxInt64)
+ if !tail {
+ segmentTicker.Stop()
+ checkpointTicker.Stop()
+ var err error
+ size, err = getSegmentSize(w.walDir, segmentNum)
+ if err != nil {
+ return errors.Wrap(err, "getSegmentSize")
+ }
+ }
+
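+	// gcSem is a single-slot semaphore: at most one checkpoint garbage-collection pass runs at a time.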
+ gcSem := make(chan struct{}, 1)
+ for {
+ select {
+ case <-w.quit:
+ return nil
+
+ case <-checkpointTicker.C:
+ // Periodically check if there is a new checkpoint so we can garbage
+ // collect labels. As this is considered an optimisation, we ignore
+ // errors during checkpoint processing. Doing the process asynchronously
+ // allows the current WAL segment to be processed while reading the
+ // checkpoint.
+ select {
+ case gcSem <- struct{}{}:
+ go func() {
+ defer func() {
+ <-gcSem
+ }()
+ if err := w.garbageCollectSeries(segmentNum); err != nil {
+						if err := w.garbageCollectSeries(segmentNum); err != nil {
+ }
+ }()
+ default:
+ // Currently doing a garbage collect, try again later.
+ }
+
+ case <-segmentTicker.C:
+ _, last, err := w.firstAndLast()
+ if err != nil {
+ return errors.Wrap(err, "segments")
+ }
+
+ // Check if new segments exists.
+ if last <= segmentNum {
+ continue
+ }
+
+ err = w.readSegment(reader, segmentNum, tail)
+
+ // Ignore errors reading to end of segment whilst replaying the WAL.
+ if !tail {
+ if err != nil && err != io.EOF {
+ level.Warn(w.logger).Log("msg", "ignoring error reading to end of segment, may have dropped data", "err", err)
+ } else if reader.Offset() != size {
+ level.Warn(w.logger).Log("msg", "expected to have read whole segment, may have dropped data", "segment", segmentNum, "read", reader.Offset(), "size", size)
+ }
+ return nil
+ }
+
+ // Otherwise, when we are tailing, non-EOFs are fatal.
+ if err != io.EOF {
+ return err
+ }
+
+ return nil
+
+ case <-readTicker.C:
+ err = w.readSegment(reader, segmentNum, tail)
+
+ // Ignore all errors reading to end of segment whilst replaying the WAL.
+ if !tail {
+ if err != nil && err != io.EOF {
+ level.Warn(w.logger).Log("msg", "ignoring error reading to end of segment, may have dropped data", "segment", segmentNum, "err", err)
+ } else if reader.Offset() != size {
+ level.Warn(w.logger).Log("msg", "expected to have read whole segment, may have dropped data", "segment", segmentNum, "read", reader.Offset(), "size", size)
+ }
+ return nil
+ }
+
+ // Otherwise, when we are tailing, non-EOFs are fatal.
+ if err != io.EOF {
+ return err
+ }
+ }
+ }
+}
+
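+// garbageCollectSeries reads any newly written checkpoint and clears cached series references from segments older than it.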
+func (w *Watcher) garbageCollectSeries(segmentNum int) error {
+ dir, _, err := LastCheckpoint(w.walDir)
+ if err != nil && err != record.ErrNotFound {
+ return errors.Wrap(err, "tsdb.LastCheckpoint")
+ }
+
+ if dir == "" || dir == w.lastCheckpoint {
+ return nil
+ }
+ w.lastCheckpoint = dir
+
+ index, err := checkpointNum(dir)
+ if err != nil {
+ return errors.Wrap(err, "error parsing checkpoint filename")
+ }
+
+ if index >= segmentNum {
+ level.Debug(w.logger).Log("msg", "current segment is behind the checkpoint, skipping reading of checkpoint", "current", fmt.Sprintf("%08d", segmentNum), "checkpoint", dir)
+ return nil
+ }
+
+ level.Debug(w.logger).Log("msg", "new checkpoint detected", "new", dir, "currentSegment", segmentNum)
+
+ if err = w.readCheckpoint(dir); err != nil {
+ return errors.Wrap(err, "readCheckpoint")
+ }
+
+ // Clear series with a checkpoint or segment index # lower than the checkpoint we just read.
+ w.writer.SeriesReset(index)
+ return nil
+}
+
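+// readSegment consumes records from the reader, storing series records and, when tailing, forwarding samples newer than the watcher's start time to the writer.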
+func (w *Watcher) readSegment(r *LiveReader, segmentNum int, tail bool) error {
+ var (
+ dec record.Decoder
+ series []record.RefSeries
+ samples []record.RefSample
+ send []record.RefSample
+ )
+
+ for r.Next() && !isClosed(w.quit) {
+ rec := r.Record()
+ w.recordsReadMetric.WithLabelValues(recordType(dec.Type(rec))).Inc()
+
+ switch dec.Type(rec) {
+ case record.Series:
+ series, err := dec.Series(rec, series[:0])
+ if err != nil {
+ w.recordDecodeFailsMetric.Inc()
+ return err
+ }
+ w.writer.StoreSeries(series, segmentNum)
+
+ case record.Samples:
+ // If we're not tailing a segment we can ignore any samples records we see.
+ // This speeds up replay of the WAL by > 10x.
+ if !tail {
+ break
+ }
+ samples, err := dec.Samples(rec, samples[:0])
+ if err != nil {
+ w.recordDecodeFailsMetric.Inc()
+ return err
+ }
+ for _, s := range samples {
+ if s.T > w.StartTime {
+ send = append(send, s)
+ }
+ }
+ if len(send) > 0 {
+ // Blocks until the sample is sent to all remote write endpoints or closed (because enqueue blocks).
+ w.writer.Append(send)
+ send = send[:0]
+ }
+
+ case record.Tombstones:
+ // noop
+ case record.Invalid:
+ return errors.New("invalid record")
+
+ default:
+ w.recordDecodeFailsMetric.Inc()
+ return errors.New("unknown TSDB record type")
+ }
+ }
+ return r.Err()
+}
+
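+// recordType returns the metric label value used for a TSDB record type.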
+func recordType(rt record.Type) string {
+ switch rt {
+ case record.Invalid:
+ return "invalid"
+ case record.Series:
+ return "series"
+ case record.Samples:
+ return "samples"
+ case record.Tombstones:
+ return "tombstones"
+ default:
+ return "unknown"
+ }
+}
+
+// Read all the series records from a Checkpoint directory.
+func (w *Watcher) readCheckpoint(checkpointDir string) error {
+ level.Debug(w.logger).Log("msg", "reading checkpoint", "dir", checkpointDir)
+ index, err := checkpointNum(checkpointDir)
+ if err != nil {
+ return errors.Wrap(err, "checkpointNum")
+ }
+
+ // Ensure we read the whole contents of every segment in the checkpoint dir.
+ segs, err := w.segments(checkpointDir)
+ if err != nil {
+ return errors.Wrap(err, "Unable to get segments checkpoint dir")
+ }
+ for _, seg := range segs {
+ size, err := getSegmentSize(checkpointDir, seg)
+ if err != nil {
+ return errors.Wrap(err, "getSegmentSize")
+ }
+
+ sr, err := OpenReadSegment(SegmentName(checkpointDir, seg))
+ if err != nil {
+ return errors.Wrap(err, "unable to open segment")
+ }
+ defer sr.Close()
+
+ r := NewLiveReader(w.logger, w.readerMetrics, sr)
+ if err := w.readSegment(r, index, false); err != io.EOF && err != nil {
+ return errors.Wrap(err, "readSegment")
+ }
+
+ if r.Offset() != size {
+ return fmt.Errorf("readCheckpoint wasn't able to read all data from the checkpoint %s/%08d, size: %d, totalRead: %d", checkpointDir, seg, size, r.Offset())
+ }
+ }
+
+ level.Debug(w.logger).Log("msg", "read series references from checkpoint", "checkpoint", checkpointDir)
+ return nil
+}
+
+func checkpointNum(dir string) (int, error) {
+ // Checkpoint dir names are in the format checkpoint.000001
+ // dir may contain a hidden directory, so only check the base directory
+ chunks := strings.Split(path.Base(dir), ".")
+ if len(chunks) != 2 {
+ return 0, errors.Errorf("invalid checkpoint dir string: %s", dir)
+ }
+
+ result, err := strconv.Atoi(chunks[1])
+ if err != nil {
+ return 0, errors.Errorf("invalid checkpoint dir string: %s", dir)
+ }
+
+ return result, nil
+}
+
+// Get size of segment.
+func getSegmentSize(dir string, index int) (int64, error) {
+ i := int64(-1)
+ fi, err := os.Stat(SegmentName(dir, index))
+ if err == nil {
+ i = fi.Size()
+ }
+ return i, err
+}
+
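+// isClosed reports, without blocking, whether the channel has been closed.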
+func isClosed(c chan struct{}) bool {
+ select {
+ case <-c:
+ return true
+ default:
+ return false
+ }
+}
diff --git a/vendor/github.com/prometheus/prometheus/util/testutil/directory.go b/vendor/github.com/prometheus/prometheus/util/testutil/directory.go
index 5f1c31554ce21..3cc43d2207d12 100644
--- a/vendor/github.com/prometheus/prometheus/util/testutil/directory.go
+++ b/vendor/github.com/prometheus/prometheus/util/testutil/directory.go
@@ -133,20 +133,6 @@ func NewTemporaryDirectory(name string, t T) (handler TemporaryDirectory) {
return
}
-// DirSize returns the size in bytes of all files in a directory.
-func DirSize(t *testing.T, path string) int64 {
- var size int64
- err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
- Ok(t, err)
- if !info.IsDir() {
- size += info.Size()
- }
- return nil
- })
- Ok(t, err)
- return size
-}
-
 // DirHash returns a hash of all file attributes and their content within a directory.
func DirHash(t *testing.T, path string) []byte {
hash := sha256.New()
diff --git a/vendor/github.com/prometheus/prometheus/util/testutil/testing.go b/vendor/github.com/prometheus/prometheus/util/testutil/testing.go
index 39b44e53617e9..6fac7b801e93f 100644
--- a/vendor/github.com/prometheus/prometheus/util/testutil/testing.go
+++ b/vendor/github.com/prometheus/prometheus/util/testutil/testing.go
@@ -67,7 +67,7 @@ func NotOk(tb TB, err error, a ...interface{}) {
func Equals(tb TB, exp, act interface{}, msgAndArgs ...interface{}) {
tb.Helper()
if !reflect.DeepEqual(exp, act) {
- tb.Fatalf("\033[31m%s\n\nexp: %#v\n\ngot: %#v%s\033[39m\n", formatMessage(msgAndArgs), exp, act)
+ tb.Fatalf("\033[31m%s\n\nexp: %#v\n\ngot: %#v\033[39m\n", formatMessage(msgAndArgs), exp, act)
}
}
diff --git a/vendor/github.com/samuel/go-zookeeper/zk/conn.go b/vendor/github.com/samuel/go-zookeeper/zk/conn.go
index 155c19b682625..da9503a271619 100644
--- a/vendor/github.com/samuel/go-zookeeper/zk/conn.go
+++ b/vendor/github.com/samuel/go-zookeeper/zk/conn.go
@@ -695,20 +695,28 @@ func (c *Conn) authenticate() error {
binary.BigEndian.PutUint32(buf[:4], uint32(n))
- c.conn.SetWriteDeadline(time.Now().Add(c.recvTimeout * 10))
+ if err := c.conn.SetWriteDeadline(time.Now().Add(c.recvTimeout * 10)); err != nil {
+ return err
+ }
_, err = c.conn.Write(buf[:n+4])
- c.conn.SetWriteDeadline(time.Time{})
if err != nil {
return err
}
+ if err := c.conn.SetWriteDeadline(time.Time{}); err != nil {
+ return err
+ }
// Receive and decode a connect response.
- c.conn.SetReadDeadline(time.Now().Add(c.recvTimeout * 10))
+ if err := c.conn.SetReadDeadline(time.Now().Add(c.recvTimeout * 10)); err != nil {
+ return err
+ }
_, err = io.ReadFull(c.conn, buf[:4])
- c.conn.SetReadDeadline(time.Time{})
if err != nil {
return err
}
+ if err := c.conn.SetReadDeadline(time.Time{}); err != nil {
+ return err
+ }
blen := int(binary.BigEndian.Uint32(buf[:4]))
if cap(buf) < blen {
@@ -770,14 +778,18 @@ func (c *Conn) sendData(req *request) error {
c.requests[req.xid] = req
c.requestsLock.Unlock()
- c.conn.SetWriteDeadline(time.Now().Add(c.recvTimeout))
+ if err := c.conn.SetWriteDeadline(time.Now().Add(c.recvTimeout)); err != nil {
+ return err
+ }
_, err = c.conn.Write(c.buf[:n+4])
- c.conn.SetWriteDeadline(time.Time{})
if err != nil {
req.recvChan <- response{-1, err}
c.conn.Close()
return err
}
+ if err := c.conn.SetWriteDeadline(time.Time{}); err != nil {
+ return err
+ }
return nil
}
@@ -800,13 +812,17 @@ func (c *Conn) sendLoop() error {
binary.BigEndian.PutUint32(c.buf[:4], uint32(n))
- c.conn.SetWriteDeadline(time.Now().Add(c.recvTimeout))
+ if err := c.conn.SetWriteDeadline(time.Now().Add(c.recvTimeout)); err != nil {
+ return err
+ }
_, err = c.conn.Write(c.buf[:n+4])
- c.conn.SetWriteDeadline(time.Time{})
if err != nil {
c.conn.Close()
return err
}
+ if err := c.conn.SetWriteDeadline(time.Time{}); err != nil {
+ return err
+ }
case <-c.closeChan:
return nil
}
@@ -838,10 +854,12 @@ func (c *Conn) recvLoop(conn net.Conn) error {
}
_, err = io.ReadFull(conn, buf[:blen])
- conn.SetReadDeadline(time.Time{})
if err != nil {
return err
}
+ if err := conn.SetReadDeadline(time.Time{}); err != nil {
+ return err
+ }
res := responseHeader{}
_, err = decodePacket(buf[:16], &res)
@@ -874,7 +892,7 @@ func (c *Conn) recvLoop(conn net.Conn) error {
c.watchersLock.Lock()
for _, t := range wTypes {
wpt := watchPathType{res.Path, t}
- if watchers := c.watchers[wpt]; watchers != nil && len(watchers) > 0 {
+ if watchers, ok := c.watchers[wpt]; ok {
for _, ch := range watchers {
ch <- ev
close(ch)
diff --git a/vendor/github.com/samuel/go-zookeeper/zk/constants.go b/vendor/github.com/samuel/go-zookeeper/zk/constants.go
index 07370bb9cae1e..ccafcfc977ab6 100644
--- a/vendor/github.com/samuel/go-zookeeper/zk/constants.go
+++ b/vendor/github.com/samuel/go-zookeeper/zk/constants.go
@@ -144,7 +144,7 @@ func (e ErrCode) toError() error {
if err, ok := errCodeToError[e]; ok {
return err
}
- return errors.New(fmt.Sprintf("unknown error: %v", e))
+ return fmt.Errorf("unknown error: %v", e)
}
const (
diff --git a/vendor/github.com/samuel/go-zookeeper/zk/flw.go b/vendor/github.com/samuel/go-zookeeper/zk/flw.go
index 3e97f96876c59..1fb8b2aed0206 100644
--- a/vendor/github.com/samuel/go-zookeeper/zk/flw.go
+++ b/vendor/github.com/samuel/go-zookeeper/zk/flw.go
@@ -255,12 +255,16 @@ func fourLetterWord(server, command string, timeout time.Duration) ([]byte, erro
// once the command has been processed, but better safe than sorry
defer conn.Close()
- conn.SetWriteDeadline(time.Now().Add(timeout))
+ if err := conn.SetWriteDeadline(time.Now().Add(timeout)); err != nil {
+ return nil, err
+ }
_, err = conn.Write([]byte(command))
if err != nil {
return nil, err
}
- conn.SetReadDeadline(time.Now().Add(timeout))
+ if err := conn.SetReadDeadline(time.Now().Add(timeout)); err != nil {
+ return nil, err
+ }
return ioutil.ReadAll(conn)
}
diff --git a/vendor/github.com/spf13/pflag/.travis.yml b/vendor/github.com/spf13/pflag/.travis.yml
index f8a63b308ba56..00d04cb9b0269 100644
--- a/vendor/github.com/spf13/pflag/.travis.yml
+++ b/vendor/github.com/spf13/pflag/.travis.yml
@@ -3,8 +3,9 @@ sudo: false
language: go
go:
- - 1.7.3
- - 1.8.1
+ - 1.9.x
+ - 1.10.x
+ - 1.11.x
- tip
matrix:
@@ -12,7 +13,7 @@ matrix:
- go: tip
install:
- - go get github.com/golang/lint/golint
+ - go get golang.org/x/lint/golint
- export PATH=$GOPATH/bin:$PATH
- go install ./...
diff --git a/vendor/github.com/spf13/pflag/README.md b/vendor/github.com/spf13/pflag/README.md
index b052414d12951..7eacc5bdbe5f4 100644
--- a/vendor/github.com/spf13/pflag/README.md
+++ b/vendor/github.com/spf13/pflag/README.md
@@ -86,8 +86,8 @@ fmt.Println("ip has value ", *ip)
fmt.Println("flagvar has value ", flagvar)
```
-There are helpers function to get values later if you have the FlagSet but
-it was difficult to keep up with all of the flag pointers in your code.
+There are helper functions available to get the value stored in a Flag if you have a FlagSet but find
+it difficult to keep up with all of the pointers in your code.
If you have a pflag.FlagSet with a flag called 'flagname' of type int you
can use GetInt() to get the int value. But notice that 'flagname' must exist
and it must be an int. GetString("flagname") will fail.
diff --git a/vendor/github.com/spf13/pflag/bool_slice.go b/vendor/github.com/spf13/pflag/bool_slice.go
index 5af02f1a75a9d..3731370d6a572 100644
--- a/vendor/github.com/spf13/pflag/bool_slice.go
+++ b/vendor/github.com/spf13/pflag/bool_slice.go
@@ -71,6 +71,44 @@ func (s *boolSliceValue) String() string {
return "[" + out + "]"
}
+func (s *boolSliceValue) fromString(val string) (bool, error) {
+ return strconv.ParseBool(val)
+}
+
+func (s *boolSliceValue) toString(val bool) string {
+ return strconv.FormatBool(val)
+}
+
+func (s *boolSliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *boolSliceValue) Replace(val []string) error {
+ out := make([]bool, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *boolSliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
func boolSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
diff --git a/vendor/github.com/spf13/pflag/count.go b/vendor/github.com/spf13/pflag/count.go
index aa126e44d1c83..a0b2679f71c75 100644
--- a/vendor/github.com/spf13/pflag/count.go
+++ b/vendor/github.com/spf13/pflag/count.go
@@ -46,7 +46,7 @@ func (f *FlagSet) GetCount(name string) (int, error) {
// CountVar defines a count flag with specified name, default value, and usage string.
// The argument p points to an int variable in which to store the value of the flag.
-// A count flag will add 1 to its value evey time it is found on the command line
+// A count flag will add 1 to its value every time it is found on the command line
func (f *FlagSet) CountVar(p *int, name string, usage string) {
f.CountVarP(p, name, "", usage)
}
@@ -69,7 +69,7 @@ func CountVarP(p *int, name, shorthand string, usage string) {
// Count defines a count flag with specified name, default value, and usage string.
// The return value is the address of an int variable that stores the value of the flag.
-// A count flag will add 1 to its value evey time it is found on the command line
+// A count flag will add 1 to its value every time it is found on the command line
func (f *FlagSet) Count(name string, usage string) *int {
p := new(int)
f.CountVarP(p, name, "", usage)
diff --git a/vendor/github.com/spf13/pflag/duration_slice.go b/vendor/github.com/spf13/pflag/duration_slice.go
index 52c6b6dc1041f..badadda53fdbc 100644
--- a/vendor/github.com/spf13/pflag/duration_slice.go
+++ b/vendor/github.com/spf13/pflag/duration_slice.go
@@ -51,6 +51,44 @@ func (s *durationSliceValue) String() string {
return "[" + strings.Join(out, ",") + "]"
}
+func (s *durationSliceValue) fromString(val string) (time.Duration, error) {
+ return time.ParseDuration(val)
+}
+
+func (s *durationSliceValue) toString(val time.Duration) string {
+ return fmt.Sprintf("%s", val)
+}
+
+func (s *durationSliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *durationSliceValue) Replace(val []string) error {
+ out := make([]time.Duration, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *durationSliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
func durationSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
diff --git a/vendor/github.com/spf13/pflag/flag.go b/vendor/github.com/spf13/pflag/flag.go
index 9beeda8ecca93..24a5036e95b61 100644
--- a/vendor/github.com/spf13/pflag/flag.go
+++ b/vendor/github.com/spf13/pflag/flag.go
@@ -57,9 +57,9 @@ that give one-letter shorthands for flags. You can use these by appending
var ip = flag.IntP("flagname", "f", 1234, "help message")
var flagvar bool
func init() {
- flag.BoolVarP("boolname", "b", true, "help message")
+ flag.BoolVarP(&flagvar, "boolname", "b", true, "help message")
}
- flag.VarP(&flagVar, "varname", "v", 1234, "help message")
+ flag.VarP(&flagval, "varname", "v", "help message")
Shorthand letters can be used with single dashes on the command line.
Boolean shorthand flags can be combined with other shorthand flags.
@@ -190,6 +190,18 @@ type Value interface {
Type() string
}
+// SliceValue is a secondary interface to all flags which hold a list
+// of values. This allows full control over the value of list flags,
+// and avoids complicated marshalling and unmarshalling to csv.
+type SliceValue interface {
+ // Append adds the specified value to the end of the flag value list.
+ Append(string) error
+ // Replace will fully overwrite any data currently in the flag value list.
+ Replace([]string) error
+ // GetSlice returns the flag value list as an array of strings.
+ GetSlice() []string
+}
+
// sortFlags returns the flags as a slice in lexicographical sorted order.
func sortFlags(flags map[NormalizedName]*Flag) []*Flag {
list := make(sort.StringSlice, len(flags))
diff --git a/vendor/github.com/spf13/pflag/float32_slice.go b/vendor/github.com/spf13/pflag/float32_slice.go
new file mode 100644
index 0000000000000..caa352741a603
--- /dev/null
+++ b/vendor/github.com/spf13/pflag/float32_slice.go
@@ -0,0 +1,174 @@
+package pflag
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+// -- float32Slice Value
+type float32SliceValue struct {
+ value *[]float32
+ changed bool
+}
+
+func newFloat32SliceValue(val []float32, p *[]float32) *float32SliceValue {
+ isv := new(float32SliceValue)
+ isv.value = p
+ *isv.value = val
+ return isv
+}
+
+func (s *float32SliceValue) Set(val string) error {
+ ss := strings.Split(val, ",")
+ out := make([]float32, len(ss))
+ for i, d := range ss {
+ var err error
+ var temp64 float64
+ temp64, err = strconv.ParseFloat(d, 32)
+ if err != nil {
+ return err
+ }
+ out[i] = float32(temp64)
+
+ }
+ if !s.changed {
+ *s.value = out
+ } else {
+ *s.value = append(*s.value, out...)
+ }
+ s.changed = true
+ return nil
+}
+
+func (s *float32SliceValue) Type() string {
+ return "float32Slice"
+}
+
+func (s *float32SliceValue) String() string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = fmt.Sprintf("%f", d)
+ }
+ return "[" + strings.Join(out, ",") + "]"
+}
+
+func (s *float32SliceValue) fromString(val string) (float32, error) {
+ t64, err := strconv.ParseFloat(val, 32)
+ if err != nil {
+ return 0, err
+ }
+ return float32(t64), nil
+}
+
+func (s *float32SliceValue) toString(val float32) string {
+ return fmt.Sprintf("%f", val)
+}
+
+func (s *float32SliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *float32SliceValue) Replace(val []string) error {
+ out := make([]float32, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *float32SliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
+func float32SliceConv(val string) (interface{}, error) {
+ val = strings.Trim(val, "[]")
+ // Empty string would cause a slice with one (empty) entry
+ if len(val) == 0 {
+ return []float32{}, nil
+ }
+ ss := strings.Split(val, ",")
+ out := make([]float32, len(ss))
+ for i, d := range ss {
+ var err error
+ var temp64 float64
+ temp64, err = strconv.ParseFloat(d, 32)
+ if err != nil {
+ return nil, err
+ }
+ out[i] = float32(temp64)
+
+ }
+ return out, nil
+}
+
+// GetFloat32Slice return the []float32 value of a flag with the given name
+func (f *FlagSet) GetFloat32Slice(name string) ([]float32, error) {
+ val, err := f.getFlagType(name, "float32Slice", float32SliceConv)
+ if err != nil {
+ return []float32{}, err
+ }
+ return val.([]float32), nil
+}
+
+// Float32SliceVar defines a float32Slice flag with specified name, default value, and usage string.
+// The argument p points to a []float32 variable in which to store the value of the flag.
+func (f *FlagSet) Float32SliceVar(p *[]float32, name string, value []float32, usage string) {
+ f.VarP(newFloat32SliceValue(value, p), name, "", usage)
+}
+
+// Float32SliceVarP is like Float32SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Float32SliceVarP(p *[]float32, name, shorthand string, value []float32, usage string) {
+ f.VarP(newFloat32SliceValue(value, p), name, shorthand, usage)
+}
+
+// Float32SliceVar defines a float32[] flag with specified name, default value, and usage string.
+// The argument p points to a float32[] variable in which to store the value of the flag.
+func Float32SliceVar(p *[]float32, name string, value []float32, usage string) {
+ CommandLine.VarP(newFloat32SliceValue(value, p), name, "", usage)
+}
+
+// Float32SliceVarP is like Float32SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func Float32SliceVarP(p *[]float32, name, shorthand string, value []float32, usage string) {
+ CommandLine.VarP(newFloat32SliceValue(value, p), name, shorthand, usage)
+}
+
+// Float32Slice defines a []float32 flag with specified name, default value, and usage string.
+// The return value is the address of a []float32 variable that stores the value of the flag.
+func (f *FlagSet) Float32Slice(name string, value []float32, usage string) *[]float32 {
+ p := []float32{}
+ f.Float32SliceVarP(&p, name, "", value, usage)
+ return &p
+}
+
+// Float32SliceP is like Float32Slice, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Float32SliceP(name, shorthand string, value []float32, usage string) *[]float32 {
+ p := []float32{}
+ f.Float32SliceVarP(&p, name, shorthand, value, usage)
+ return &p
+}
+
+// Float32Slice defines a []float32 flag with specified name, default value, and usage string.
+// The return value is the address of a []float32 variable that stores the value of the flag.
+func Float32Slice(name string, value []float32, usage string) *[]float32 {
+ return CommandLine.Float32SliceP(name, "", value, usage)
+}
+
+// Float32SliceP is like Float32Slice, but accepts a shorthand letter that can be used after a single dash.
+func Float32SliceP(name, shorthand string, value []float32, usage string) *[]float32 {
+ return CommandLine.Float32SliceP(name, shorthand, value, usage)
+}
diff --git a/vendor/github.com/spf13/pflag/float64_slice.go b/vendor/github.com/spf13/pflag/float64_slice.go
new file mode 100644
index 0000000000000..85bf3073d5064
--- /dev/null
+++ b/vendor/github.com/spf13/pflag/float64_slice.go
@@ -0,0 +1,166 @@
+package pflag
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+// -- float64Slice Value
+type float64SliceValue struct {
+ value *[]float64
+ changed bool
+}
+
+func newFloat64SliceValue(val []float64, p *[]float64) *float64SliceValue {
+ isv := new(float64SliceValue)
+ isv.value = p
+ *isv.value = val
+ return isv
+}
+
+func (s *float64SliceValue) Set(val string) error {
+ ss := strings.Split(val, ",")
+ out := make([]float64, len(ss))
+ for i, d := range ss {
+ var err error
+ out[i], err = strconv.ParseFloat(d, 64)
+ if err != nil {
+ return err
+ }
+
+ }
+ if !s.changed {
+ *s.value = out
+ } else {
+ *s.value = append(*s.value, out...)
+ }
+ s.changed = true
+ return nil
+}
+
+func (s *float64SliceValue) Type() string {
+ return "float64Slice"
+}
+
+func (s *float64SliceValue) String() string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = fmt.Sprintf("%f", d)
+ }
+ return "[" + strings.Join(out, ",") + "]"
+}
+
+func (s *float64SliceValue) fromString(val string) (float64, error) {
+ return strconv.ParseFloat(val, 64)
+}
+
+func (s *float64SliceValue) toString(val float64) string {
+ return fmt.Sprintf("%f", val)
+}
+
+func (s *float64SliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *float64SliceValue) Replace(val []string) error {
+ out := make([]float64, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *float64SliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
+func float64SliceConv(val string) (interface{}, error) {
+ val = strings.Trim(val, "[]")
+ // Empty string would cause a slice with one (empty) entry
+ if len(val) == 0 {
+ return []float64{}, nil
+ }
+ ss := strings.Split(val, ",")
+ out := make([]float64, len(ss))
+ for i, d := range ss {
+ var err error
+ out[i], err = strconv.ParseFloat(d, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ }
+ return out, nil
+}
+
+// GetFloat64Slice return the []float64 value of a flag with the given name
+func (f *FlagSet) GetFloat64Slice(name string) ([]float64, error) {
+ val, err := f.getFlagType(name, "float64Slice", float64SliceConv)
+ if err != nil {
+ return []float64{}, err
+ }
+ return val.([]float64), nil
+}
+
+// Float64SliceVar defines a float64Slice flag with specified name, default value, and usage string.
+// The argument p points to a []float64 variable in which to store the value of the flag.
+func (f *FlagSet) Float64SliceVar(p *[]float64, name string, value []float64, usage string) {
+ f.VarP(newFloat64SliceValue(value, p), name, "", usage)
+}
+
+// Float64SliceVarP is like Float64SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Float64SliceVarP(p *[]float64, name, shorthand string, value []float64, usage string) {
+ f.VarP(newFloat64SliceValue(value, p), name, shorthand, usage)
+}
+
+// Float64SliceVar defines a float64[] flag with specified name, default value, and usage string.
+// The argument p points to a float64[] variable in which to store the value of the flag.
+func Float64SliceVar(p *[]float64, name string, value []float64, usage string) {
+ CommandLine.VarP(newFloat64SliceValue(value, p), name, "", usage)
+}
+
+// Float64SliceVarP is like Float64SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func Float64SliceVarP(p *[]float64, name, shorthand string, value []float64, usage string) {
+ CommandLine.VarP(newFloat64SliceValue(value, p), name, shorthand, usage)
+}
+
+// Float64Slice defines a []float64 flag with specified name, default value, and usage string.
+// The return value is the address of a []float64 variable that stores the value of the flag.
+func (f *FlagSet) Float64Slice(name string, value []float64, usage string) *[]float64 {
+ p := []float64{}
+ f.Float64SliceVarP(&p, name, "", value, usage)
+ return &p
+}
+
+// Float64SliceP is like Float64Slice, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Float64SliceP(name, shorthand string, value []float64, usage string) *[]float64 {
+ p := []float64{}
+ f.Float64SliceVarP(&p, name, shorthand, value, usage)
+ return &p
+}
+
+// Float64Slice defines a []float64 flag with specified name, default value, and usage string.
+// The return value is the address of a []float64 variable that stores the value of the flag.
+func Float64Slice(name string, value []float64, usage string) *[]float64 {
+ return CommandLine.Float64SliceP(name, "", value, usage)
+}
+
+// Float64SliceP is like Float64Slice, but accepts a shorthand letter that can be used after a single dash.
+func Float64SliceP(name, shorthand string, value []float64, usage string) *[]float64 {
+ return CommandLine.Float64SliceP(name, shorthand, value, usage)
+}
diff --git a/vendor/github.com/spf13/pflag/go.mod b/vendor/github.com/spf13/pflag/go.mod
new file mode 100644
index 0000000000000..b2287eec13482
--- /dev/null
+++ b/vendor/github.com/spf13/pflag/go.mod
@@ -0,0 +1,3 @@
+module github.com/spf13/pflag
+
+go 1.12
diff --git a/vendor/github.com/spf13/pflag/go.sum b/vendor/github.com/spf13/pflag/go.sum
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/vendor/github.com/spf13/pflag/int32_slice.go b/vendor/github.com/spf13/pflag/int32_slice.go
new file mode 100644
index 0000000000000..ff128ff06d869
--- /dev/null
+++ b/vendor/github.com/spf13/pflag/int32_slice.go
@@ -0,0 +1,174 @@
+package pflag
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+// -- int32Slice Value
+type int32SliceValue struct {
+ value *[]int32
+ changed bool
+}
+
+func newInt32SliceValue(val []int32, p *[]int32) *int32SliceValue {
+ isv := new(int32SliceValue)
+ isv.value = p
+ *isv.value = val
+ return isv
+}
+
+func (s *int32SliceValue) Set(val string) error {
+ ss := strings.Split(val, ",")
+ out := make([]int32, len(ss))
+ for i, d := range ss {
+ var err error
+ var temp64 int64
+ temp64, err = strconv.ParseInt(d, 0, 32)
+ if err != nil {
+ return err
+ }
+ out[i] = int32(temp64)
+
+ }
+ if !s.changed {
+ *s.value = out
+ } else {
+ *s.value = append(*s.value, out...)
+ }
+ s.changed = true
+ return nil
+}
+
+func (s *int32SliceValue) Type() string {
+ return "int32Slice"
+}
+
+func (s *int32SliceValue) String() string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = fmt.Sprintf("%d", d)
+ }
+ return "[" + strings.Join(out, ",") + "]"
+}
+
+func (s *int32SliceValue) fromString(val string) (int32, error) {
+ t64, err := strconv.ParseInt(val, 0, 32)
+ if err != nil {
+ return 0, err
+ }
+ return int32(t64), nil
+}
+
+func (s *int32SliceValue) toString(val int32) string {
+ return fmt.Sprintf("%d", val)
+}
+
+func (s *int32SliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *int32SliceValue) Replace(val []string) error {
+ out := make([]int32, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *int32SliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
+func int32SliceConv(val string) (interface{}, error) {
+ val = strings.Trim(val, "[]")
+ // Empty string would cause a slice with one (empty) entry
+ if len(val) == 0 {
+ return []int32{}, nil
+ }
+ ss := strings.Split(val, ",")
+ out := make([]int32, len(ss))
+ for i, d := range ss {
+ var err error
+ var temp64 int64
+ temp64, err = strconv.ParseInt(d, 0, 32)
+ if err != nil {
+ return nil, err
+ }
+ out[i] = int32(temp64)
+
+ }
+ return out, nil
+}
+
+// GetInt32Slice return the []int32 value of a flag with the given name
+func (f *FlagSet) GetInt32Slice(name string) ([]int32, error) {
+ val, err := f.getFlagType(name, "int32Slice", int32SliceConv)
+ if err != nil {
+ return []int32{}, err
+ }
+ return val.([]int32), nil
+}
+
+// Int32SliceVar defines a int32Slice flag with specified name, default value, and usage string.
+// The argument p points to a []int32 variable in which to store the value of the flag.
+func (f *FlagSet) Int32SliceVar(p *[]int32, name string, value []int32, usage string) {
+ f.VarP(newInt32SliceValue(value, p), name, "", usage)
+}
+
+// Int32SliceVarP is like Int32SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Int32SliceVarP(p *[]int32, name, shorthand string, value []int32, usage string) {
+ f.VarP(newInt32SliceValue(value, p), name, shorthand, usage)
+}
+
+// Int32SliceVar defines a int32[] flag with specified name, default value, and usage string.
+// The argument p points to a int32[] variable in which to store the value of the flag.
+func Int32SliceVar(p *[]int32, name string, value []int32, usage string) {
+ CommandLine.VarP(newInt32SliceValue(value, p), name, "", usage)
+}
+
+// Int32SliceVarP is like Int32SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func Int32SliceVarP(p *[]int32, name, shorthand string, value []int32, usage string) {
+ CommandLine.VarP(newInt32SliceValue(value, p), name, shorthand, usage)
+}
+
+// Int32Slice defines a []int32 flag with specified name, default value, and usage string.
+// The return value is the address of a []int32 variable that stores the value of the flag.
+func (f *FlagSet) Int32Slice(name string, value []int32, usage string) *[]int32 {
+ p := []int32{}
+ f.Int32SliceVarP(&p, name, "", value, usage)
+ return &p
+}
+
+// Int32SliceP is like Int32Slice, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Int32SliceP(name, shorthand string, value []int32, usage string) *[]int32 {
+ p := []int32{}
+ f.Int32SliceVarP(&p, name, shorthand, value, usage)
+ return &p
+}
+
+// Int32Slice defines a []int32 flag with specified name, default value, and usage string.
+// The return value is the address of a []int32 variable that stores the value of the flag.
+func Int32Slice(name string, value []int32, usage string) *[]int32 {
+ return CommandLine.Int32SliceP(name, "", value, usage)
+}
+
+// Int32SliceP is like Int32Slice, but accepts a shorthand letter that can be used after a single dash.
+func Int32SliceP(name, shorthand string, value []int32, usage string) *[]int32 {
+ return CommandLine.Int32SliceP(name, shorthand, value, usage)
+}
diff --git a/vendor/github.com/spf13/pflag/int64_slice.go b/vendor/github.com/spf13/pflag/int64_slice.go
new file mode 100644
index 0000000000000..25464638f3ab6
--- /dev/null
+++ b/vendor/github.com/spf13/pflag/int64_slice.go
@@ -0,0 +1,166 @@
+package pflag
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+// -- int64Slice Value
+type int64SliceValue struct {
+ value *[]int64
+ changed bool
+}
+
+func newInt64SliceValue(val []int64, p *[]int64) *int64SliceValue {
+ isv := new(int64SliceValue)
+ isv.value = p
+ *isv.value = val
+ return isv
+}
+
+func (s *int64SliceValue) Set(val string) error {
+ ss := strings.Split(val, ",")
+ out := make([]int64, len(ss))
+ for i, d := range ss {
+ var err error
+ out[i], err = strconv.ParseInt(d, 0, 64)
+ if err != nil {
+ return err
+ }
+
+ }
+ if !s.changed {
+ *s.value = out
+ } else {
+ *s.value = append(*s.value, out...)
+ }
+ s.changed = true
+ return nil
+}
+
+func (s *int64SliceValue) Type() string {
+ return "int64Slice"
+}
+
+func (s *int64SliceValue) String() string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = fmt.Sprintf("%d", d)
+ }
+ return "[" + strings.Join(out, ",") + "]"
+}
+
+func (s *int64SliceValue) fromString(val string) (int64, error) {
+ return strconv.ParseInt(val, 0, 64)
+}
+
+func (s *int64SliceValue) toString(val int64) string {
+ return fmt.Sprintf("%d", val)
+}
+
+func (s *int64SliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *int64SliceValue) Replace(val []string) error {
+ out := make([]int64, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *int64SliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
+func int64SliceConv(val string) (interface{}, error) {
+ val = strings.Trim(val, "[]")
+ // Empty string would cause a slice with one (empty) entry
+ if len(val) == 0 {
+ return []int64{}, nil
+ }
+ ss := strings.Split(val, ",")
+ out := make([]int64, len(ss))
+ for i, d := range ss {
+ var err error
+ out[i], err = strconv.ParseInt(d, 0, 64)
+ if err != nil {
+ return nil, err
+ }
+
+ }
+ return out, nil
+}
+
+// GetInt64Slice return the []int64 value of a flag with the given name
+func (f *FlagSet) GetInt64Slice(name string) ([]int64, error) {
+ val, err := f.getFlagType(name, "int64Slice", int64SliceConv)
+ if err != nil {
+ return []int64{}, err
+ }
+ return val.([]int64), nil
+}
+
+// Int64SliceVar defines a int64Slice flag with specified name, default value, and usage string.
+// The argument p points to a []int64 variable in which to store the value of the flag.
+func (f *FlagSet) Int64SliceVar(p *[]int64, name string, value []int64, usage string) {
+ f.VarP(newInt64SliceValue(value, p), name, "", usage)
+}
+
+// Int64SliceVarP is like Int64SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Int64SliceVarP(p *[]int64, name, shorthand string, value []int64, usage string) {
+ f.VarP(newInt64SliceValue(value, p), name, shorthand, usage)
+}
+
+// Int64SliceVar defines a int64[] flag with specified name, default value, and usage string.
+// The argument p points to a int64[] variable in which to store the value of the flag.
+func Int64SliceVar(p *[]int64, name string, value []int64, usage string) {
+ CommandLine.VarP(newInt64SliceValue(value, p), name, "", usage)
+}
+
+// Int64SliceVarP is like Int64SliceVar, but accepts a shorthand letter that can be used after a single dash.
+func Int64SliceVarP(p *[]int64, name, shorthand string, value []int64, usage string) {
+ CommandLine.VarP(newInt64SliceValue(value, p), name, shorthand, usage)
+}
+
+// Int64Slice defines a []int64 flag with specified name, default value, and usage string.
+// The return value is the address of a []int64 variable that stores the value of the flag.
+func (f *FlagSet) Int64Slice(name string, value []int64, usage string) *[]int64 {
+ p := []int64{}
+ f.Int64SliceVarP(&p, name, "", value, usage)
+ return &p
+}
+
+// Int64SliceP is like Int64Slice, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) Int64SliceP(name, shorthand string, value []int64, usage string) *[]int64 {
+ p := []int64{}
+ f.Int64SliceVarP(&p, name, shorthand, value, usage)
+ return &p
+}
+
+// Int64Slice defines a []int64 flag with specified name, default value, and usage string.
+// The return value is the address of a []int64 variable that stores the value of the flag.
+func Int64Slice(name string, value []int64, usage string) *[]int64 {
+ return CommandLine.Int64SliceP(name, "", value, usage)
+}
+
+// Int64SliceP is like Int64Slice, but accepts a shorthand letter that can be used after a single dash.
+func Int64SliceP(name, shorthand string, value []int64, usage string) *[]int64 {
+ return CommandLine.Int64SliceP(name, shorthand, value, usage)
+}
diff --git a/vendor/github.com/spf13/pflag/int_slice.go b/vendor/github.com/spf13/pflag/int_slice.go
index 1e7c9edde955c..e71c39d91aa71 100644
--- a/vendor/github.com/spf13/pflag/int_slice.go
+++ b/vendor/github.com/spf13/pflag/int_slice.go
@@ -51,6 +51,36 @@ func (s *intSliceValue) String() string {
return "[" + strings.Join(out, ",") + "]"
}
+func (s *intSliceValue) Append(val string) error {
+ i, err := strconv.Atoi(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *intSliceValue) Replace(val []string) error {
+ out := make([]int, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = strconv.Atoi(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *intSliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = strconv.Itoa(d)
+ }
+ return out
+}
+
func intSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
diff --git a/vendor/github.com/spf13/pflag/ip_slice.go b/vendor/github.com/spf13/pflag/ip_slice.go
index 7dd196fe3fb11..775faae4fd8d2 100644
--- a/vendor/github.com/spf13/pflag/ip_slice.go
+++ b/vendor/github.com/spf13/pflag/ip_slice.go
@@ -72,9 +72,47 @@ func (s *ipSliceValue) String() string {
return "[" + out + "]"
}
+func (s *ipSliceValue) fromString(val string) (net.IP, error) {
+ return net.ParseIP(strings.TrimSpace(val)), nil
+}
+
+func (s *ipSliceValue) toString(val net.IP) string {
+ return val.String()
+}
+
+func (s *ipSliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *ipSliceValue) Replace(val []string) error {
+ out := make([]net.IP, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *ipSliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
func ipSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
- // Emtpy string would cause a slice with one (empty) entry
+ // Empty string would cause a slice with one (empty) entry
if len(val) == 0 {
return []net.IP{}, nil
}
diff --git a/vendor/github.com/spf13/pflag/string_array.go b/vendor/github.com/spf13/pflag/string_array.go
index fa7bc60187a79..4894af818023b 100644
--- a/vendor/github.com/spf13/pflag/string_array.go
+++ b/vendor/github.com/spf13/pflag/string_array.go
@@ -23,6 +23,32 @@ func (s *stringArrayValue) Set(val string) error {
return nil
}
+func (s *stringArrayValue) Append(val string) error {
+ *s.value = append(*s.value, val)
+ return nil
+}
+
+func (s *stringArrayValue) Replace(val []string) error {
+ out := make([]string, len(val))
+ for i, d := range val {
+ var err error
+ out[i] = d
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *stringArrayValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = d
+ }
+ return out
+}
+
func (s *stringArrayValue) Type() string {
return "stringArray"
}
diff --git a/vendor/github.com/spf13/pflag/string_slice.go b/vendor/github.com/spf13/pflag/string_slice.go
index 0cd3ccc083e28..3cb2e69dba026 100644
--- a/vendor/github.com/spf13/pflag/string_slice.go
+++ b/vendor/github.com/spf13/pflag/string_slice.go
@@ -62,6 +62,20 @@ func (s *stringSliceValue) String() string {
return "[" + str + "]"
}
+func (s *stringSliceValue) Append(val string) error {
+ *s.value = append(*s.value, val)
+ return nil
+}
+
+func (s *stringSliceValue) Replace(val []string) error {
+ *s.value = val
+ return nil
+}
+
+func (s *stringSliceValue) GetSlice() []string {
+ return *s.value
+}
+
func stringSliceConv(sval string) (interface{}, error) {
sval = sval[1 : len(sval)-1]
// An empty string would cause a slice with one (empty) string
@@ -84,7 +98,7 @@ func (f *FlagSet) GetStringSlice(name string) ([]string, error) {
// The argument p points to a []string variable in which to store the value of the flag.
// Compared to StringArray flags, StringSlice flags take comma-separated value as arguments and split them accordingly.
// For example:
-// --ss="v1,v2" -ss="v3"
+// --ss="v1,v2" --ss="v3"
// will result in
// []string{"v1", "v2", "v3"}
func (f *FlagSet) StringSliceVar(p *[]string, name string, value []string, usage string) {
@@ -100,7 +114,7 @@ func (f *FlagSet) StringSliceVarP(p *[]string, name, shorthand string, value []s
// The argument p points to a []string variable in which to store the value of the flag.
// Compared to StringArray flags, StringSlice flags take comma-separated value as arguments and split them accordingly.
// For example:
-// --ss="v1,v2" -ss="v3"
+// --ss="v1,v2" --ss="v3"
// will result in
// []string{"v1", "v2", "v3"}
func StringSliceVar(p *[]string, name string, value []string, usage string) {
@@ -116,7 +130,7 @@ func StringSliceVarP(p *[]string, name, shorthand string, value []string, usage
// The return value is the address of a []string variable that stores the value of the flag.
// Compared to StringArray flags, StringSlice flags take comma-separated value as arguments and split them accordingly.
// For example:
-// --ss="v1,v2" -ss="v3"
+// --ss="v1,v2" --ss="v3"
// will result in
// []string{"v1", "v2", "v3"}
func (f *FlagSet) StringSlice(name string, value []string, usage string) *[]string {
@@ -136,7 +150,7 @@ func (f *FlagSet) StringSliceP(name, shorthand string, value []string, usage str
// The return value is the address of a []string variable that stores the value of the flag.
// Compared to StringArray flags, StringSlice flags take comma-separated value as arguments and split them accordingly.
// For example:
-// --ss="v1,v2" -ss="v3"
+// --ss="v1,v2" --ss="v3"
// will result in
// []string{"v1", "v2", "v3"}
func StringSlice(name string, value []string, usage string) *[]string {
diff --git a/vendor/github.com/spf13/pflag/string_to_int64.go b/vendor/github.com/spf13/pflag/string_to_int64.go
new file mode 100644
index 0000000000000..a807a04a0bad8
--- /dev/null
+++ b/vendor/github.com/spf13/pflag/string_to_int64.go
@@ -0,0 +1,149 @@
+package pflag
+
+import (
+ "bytes"
+ "fmt"
+ "strconv"
+ "strings"
+)
+
+// -- stringToInt64 Value
+type stringToInt64Value struct {
+ value *map[string]int64
+ changed bool
+}
+
+func newStringToInt64Value(val map[string]int64, p *map[string]int64) *stringToInt64Value {
+ ssv := new(stringToInt64Value)
+ ssv.value = p
+ *ssv.value = val
+ return ssv
+}
+
+// Format: a=1,b=2
+func (s *stringToInt64Value) Set(val string) error {
+ ss := strings.Split(val, ",")
+ out := make(map[string]int64, len(ss))
+ for _, pair := range ss {
+ kv := strings.SplitN(pair, "=", 2)
+ if len(kv) != 2 {
+ return fmt.Errorf("%s must be formatted as key=value", pair)
+ }
+ var err error
+ out[kv[0]], err = strconv.ParseInt(kv[1], 10, 64)
+ if err != nil {
+ return err
+ }
+ }
+ if !s.changed {
+ *s.value = out
+ } else {
+ for k, v := range out {
+ (*s.value)[k] = v
+ }
+ }
+ s.changed = true
+ return nil
+}
+
+func (s *stringToInt64Value) Type() string {
+ return "stringToInt64"
+}
+
+func (s *stringToInt64Value) String() string {
+ var buf bytes.Buffer
+ i := 0
+ for k, v := range *s.value {
+ if i > 0 {
+ buf.WriteRune(',')
+ }
+ buf.WriteString(k)
+ buf.WriteRune('=')
+ buf.WriteString(strconv.FormatInt(v, 10))
+ i++
+ }
+ return "[" + buf.String() + "]"
+}
+
+func stringToInt64Conv(val string) (interface{}, error) {
+ val = strings.Trim(val, "[]")
+ // An empty string would cause an empty map
+ if len(val) == 0 {
+ return map[string]int64{}, nil
+ }
+ ss := strings.Split(val, ",")
+ out := make(map[string]int64, len(ss))
+ for _, pair := range ss {
+ kv := strings.SplitN(pair, "=", 2)
+ if len(kv) != 2 {
+ return nil, fmt.Errorf("%s must be formatted as key=value", pair)
+ }
+ var err error
+ out[kv[0]], err = strconv.ParseInt(kv[1], 10, 64)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return out, nil
+}
+
+// GetStringToInt64 return the map[string]int64 value of a flag with the given name
+func (f *FlagSet) GetStringToInt64(name string) (map[string]int64, error) {
+ val, err := f.getFlagType(name, "stringToInt64", stringToInt64Conv)
+ if err != nil {
+ return map[string]int64{}, err
+ }
+ return val.(map[string]int64), nil
+}
+
+// StringToInt64Var defines a string flag with specified name, default value, and usage string.
+// The argument p points to a map[string]int64 variable in which to store the values of the multiple flags.
+// The value of each argument will not be split on commas.
+func (f *FlagSet) StringToInt64Var(p *map[string]int64, name string, value map[string]int64, usage string) {
+ f.VarP(newStringToInt64Value(value, p), name, "", usage)
+}
+
+// StringToInt64VarP is like StringToInt64Var, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) StringToInt64VarP(p *map[string]int64, name, shorthand string, value map[string]int64, usage string) {
+ f.VarP(newStringToInt64Value(value, p), name, shorthand, usage)
+}
+
+// StringToInt64Var defines a string flag with specified name, default value, and usage string.
+// The argument p points to a map[string]int64 variable in which to store the value of the flag.
+// The value of each argument will not be split on commas.
+func StringToInt64Var(p *map[string]int64, name string, value map[string]int64, usage string) {
+ CommandLine.VarP(newStringToInt64Value(value, p), name, "", usage)
+}
+
+// StringToInt64VarP is like StringToInt64Var, but accepts a shorthand letter that can be used after a single dash.
+func StringToInt64VarP(p *map[string]int64, name, shorthand string, value map[string]int64, usage string) {
+ CommandLine.VarP(newStringToInt64Value(value, p), name, shorthand, usage)
+}
+
+// StringToInt64 defines a string flag with specified name, default value, and usage string.
+// The return value is the address of a map[string]int64 variable that stores the value of the flag.
+// The value of each argument will not be split on commas.
+func (f *FlagSet) StringToInt64(name string, value map[string]int64, usage string) *map[string]int64 {
+ p := map[string]int64{}
+ f.StringToInt64VarP(&p, name, "", value, usage)
+ return &p
+}
+
+// StringToInt64P is like StringToInt64, but accepts a shorthand letter that can be used after a single dash.
+func (f *FlagSet) StringToInt64P(name, shorthand string, value map[string]int64, usage string) *map[string]int64 {
+ p := map[string]int64{}
+ f.StringToInt64VarP(&p, name, shorthand, value, usage)
+ return &p
+}
+
+// StringToInt64 defines a string flag with specified name, default value, and usage string.
+// The return value is the address of a map[string]int64 variable that stores the value of the flag.
+// The value of each argument will not be split on commas.
+func StringToInt64(name string, value map[string]int64, usage string) *map[string]int64 {
+ return CommandLine.StringToInt64P(name, "", value, usage)
+}
+
+// StringToInt64P is like StringToInt64, but accepts a shorthand letter that can be used after a single dash.
+func StringToInt64P(name, shorthand string, value map[string]int64, usage string) *map[string]int64 {
+ return CommandLine.StringToInt64P(name, shorthand, value, usage)
+}
diff --git a/vendor/github.com/spf13/pflag/uint_slice.go b/vendor/github.com/spf13/pflag/uint_slice.go
index edd94c600af09..5fa924835ed3f 100644
--- a/vendor/github.com/spf13/pflag/uint_slice.go
+++ b/vendor/github.com/spf13/pflag/uint_slice.go
@@ -50,6 +50,48 @@ func (s *uintSliceValue) String() string {
return "[" + strings.Join(out, ",") + "]"
}
+func (s *uintSliceValue) fromString(val string) (uint, error) {
+ t, err := strconv.ParseUint(val, 10, 0)
+ if err != nil {
+ return 0, err
+ }
+ return uint(t), nil
+}
+
+func (s *uintSliceValue) toString(val uint) string {
+ return fmt.Sprintf("%d", val)
+}
+
+func (s *uintSliceValue) Append(val string) error {
+ i, err := s.fromString(val)
+ if err != nil {
+ return err
+ }
+ *s.value = append(*s.value, i)
+ return nil
+}
+
+func (s *uintSliceValue) Replace(val []string) error {
+ out := make([]uint, len(val))
+ for i, d := range val {
+ var err error
+ out[i], err = s.fromString(d)
+ if err != nil {
+ return err
+ }
+ }
+ *s.value = out
+ return nil
+}
+
+func (s *uintSliceValue) GetSlice() []string {
+ out := make([]string, len(*s.value))
+ for i, d := range *s.value {
+ out[i] = s.toString(d)
+ }
+ return out
+}
+
func uintSliceConv(val string) (interface{}, error) {
val = strings.Trim(val, "[]")
// Empty string would cause a slice with one (empty) entry
diff --git a/vendor/github.com/uber-go/atomic/.codecov.yml b/vendor/github.com/uber-go/atomic/.codecov.yml
new file mode 100644
index 0000000000000..6d4d1be7b5745
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/.codecov.yml
@@ -0,0 +1,15 @@
+coverage:
+ range: 80..100
+ round: down
+ precision: 2
+
+ status:
+ project: # measuring the overall project coverage
+ default: # context, you can create multiple ones with custom titles
+ enabled: yes # must be yes|true to enable this status
+ target: 100 # specify the target coverage for each commit status
+ # option: "auto" (must increase from parent commit or pull request base)
+ # option: "X%" a static target percentage to hit
+ if_not_found: success # if parent is not found report status as success, error, or failure
+ if_ci_failed: error # if ci fails report status as success, error, or failure
+
diff --git a/vendor/github.com/uber-go/atomic/.gitignore b/vendor/github.com/uber-go/atomic/.gitignore
new file mode 100644
index 0000000000000..0a4504f11095f
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/.gitignore
@@ -0,0 +1,11 @@
+.DS_Store
+/vendor
+/cover
+cover.out
+lint.log
+
+# Binaries
+*.test
+
+# Profiling output
+*.prof
diff --git a/vendor/github.com/uber-go/atomic/.travis.yml b/vendor/github.com/uber-go/atomic/.travis.yml
new file mode 100644
index 0000000000000..0f3769e5fa6b2
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/.travis.yml
@@ -0,0 +1,27 @@
+sudo: false
+language: go
+go_import_path: go.uber.org/atomic
+
+go:
+ - 1.11.x
+ - 1.12.x
+
+matrix:
+ include:
+ - go: 1.12.x
+ env: NO_TEST=yes LINT=yes
+
+cache:
+ directories:
+ - vendor
+
+install:
+ - make install_ci
+
+script:
+ - test -n "$NO_TEST" || make test_ci
+ - test -n "$NO_TEST" || scripts/test-ubergo.sh
+ - test -z "$LINT" || make install_lint lint
+
+after_success:
+ - bash <(curl -s https://codecov.io/bash)
diff --git a/vendor/github.com/uber-go/atomic/LICENSE.txt b/vendor/github.com/uber-go/atomic/LICENSE.txt
new file mode 100644
index 0000000000000..8765c9fbc6191
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/LICENSE.txt
@@ -0,0 +1,19 @@
+Copyright (c) 2016 Uber Technologies, Inc.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in
+all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+THE SOFTWARE.
diff --git a/vendor/github.com/uber-go/atomic/Makefile b/vendor/github.com/uber-go/atomic/Makefile
new file mode 100644
index 0000000000000..1ef263075d762
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/Makefile
@@ -0,0 +1,51 @@
+# Many Go tools take file globs or directories as arguments instead of packages.
+PACKAGE_FILES ?= *.go
+
+# For pre go1.6
+export GO15VENDOREXPERIMENT=1
+
+
+.PHONY: build
+build:
+ go build -i ./...
+
+
+.PHONY: install
+install:
+ glide --version || go get github.com/Masterminds/glide
+ glide install
+
+
+.PHONY: test
+test:
+ go test -cover -race ./...
+
+
+.PHONY: install_ci
+install_ci: install
+ go get github.com/wadey/gocovmerge
+ go get github.com/mattn/goveralls
+ go get golang.org/x/tools/cmd/cover
+
+.PHONY: install_lint
+install_lint:
+ go get golang.org/x/lint/golint
+
+
+.PHONY: lint
+lint:
+ @rm -rf lint.log
+ @echo "Checking formatting..."
+ @gofmt -d -s $(PACKAGE_FILES) 2>&1 | tee lint.log
+ @echo "Checking vet..."
+	@go vet ./... 2>&1 | tee -a lint.log
+ @echo "Checking lint..."
+ @golint $$(go list ./...) 2>&1 | tee -a lint.log
+ @echo "Checking for unresolved FIXMEs..."
+ @git grep -i fixme | grep -v -e vendor -e Makefile | tee -a lint.log
+ @[ ! -s lint.log ]
+
+
+.PHONY: test_ci
+test_ci: install_ci build
+ ./scripts/cover.sh $(shell go list $(PACKAGES))
diff --git a/vendor/github.com/uber-go/atomic/README.md b/vendor/github.com/uber-go/atomic/README.md
new file mode 100644
index 0000000000000..62eb8e5760969
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/README.md
@@ -0,0 +1,36 @@
+# atomic [![GoDoc][doc-img]][doc] [![Build Status][ci-img]][ci] [![Coverage Status][cov-img]][cov] [![Go Report Card][reportcard-img]][reportcard]
+
+Simple wrappers for primitive types to enforce atomic access.
+
+## Installation
+`go get -u go.uber.org/atomic`
+
+## Usage
+The standard library's `sync/atomic` is powerful, but it's easy to forget which
+variables must be accessed atomically. `go.uber.org/atomic` preserves all the
+functionality of the standard library, but wraps the primitive types to
+provide a safer, more convenient API.
+
+```go
+var atom atomic.Uint32
+atom.Store(42)
+atom.Sub(2)
+atom.CAS(40, 11)
+```
+
+See the [documentation][doc] for a complete API specification.
+
+## Development Status
+Stable.
+
+___
+Released under the [MIT License](LICENSE.txt).
+
+[doc-img]: https://godoc.org/github.com/uber-go/atomic?status.svg
+[doc]: https://godoc.org/go.uber.org/atomic
+[ci-img]: https://travis-ci.com/uber-go/atomic.svg?branch=master
+[ci]: https://travis-ci.com/uber-go/atomic
+[cov-img]: https://codecov.io/gh/uber-go/atomic/branch/master/graph/badge.svg
+[cov]: https://codecov.io/gh/uber-go/atomic
+[reportcard-img]: https://goreportcard.com/badge/go.uber.org/atomic
+[reportcard]: https://goreportcard.com/report/go.uber.org/atomic
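
Beyond the numeric wrappers shown in the README snippet, the package also ships type-safe `Bool` and `Error` wrappers (see atomic.go and error.go below); a small sketch, using the canonical `go.uber.org/atomic` import path:

```go
package main

import (
	"errors"
	"fmt"

	"go.uber.org/atomic"
)

func main() {
	ready := atomic.NewBool(false)
	ready.Store(true)
	prev := ready.Toggle()          // negates and returns the previous value
	fmt.Println(prev, ready.Load()) // true false

	var lastErr atomic.Error // zero value is usable
	lastErr.Store(errors.New("boom"))
	fmt.Println(lastErr.Load()) // boom
}
```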
diff --git a/vendor/github.com/uber-go/atomic/atomic.go b/vendor/github.com/uber-go/atomic/atomic.go
new file mode 100644
index 0000000000000..1db6849fca0aa
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/atomic.go
@@ -0,0 +1,351 @@
+// Copyright (c) 2016 Uber Technologies, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+
+// Package atomic provides simple wrappers around numerics to enforce atomic
+// access.
+package atomic
+
+import (
+ "math"
+ "sync/atomic"
+ "time"
+)
+
+// Int32 is an atomic wrapper around an int32.
+type Int32 struct{ v int32 }
+
+// NewInt32 creates an Int32.
+func NewInt32(i int32) *Int32 {
+ return &Int32{i}
+}
+
+// Load atomically loads the wrapped value.
+func (i *Int32) Load() int32 {
+ return atomic.LoadInt32(&i.v)
+}
+
+// Add atomically adds to the wrapped int32 and returns the new value.
+func (i *Int32) Add(n int32) int32 {
+ return atomic.AddInt32(&i.v, n)
+}
+
+// Sub atomically subtracts from the wrapped int32 and returns the new value.
+func (i *Int32) Sub(n int32) int32 {
+ return atomic.AddInt32(&i.v, -n)
+}
+
+// Inc atomically increments the wrapped int32 and returns the new value.
+func (i *Int32) Inc() int32 {
+ return i.Add(1)
+}
+
+// Dec atomically decrements the wrapped int32 and returns the new value.
+func (i *Int32) Dec() int32 {
+ return i.Sub(1)
+}
+
+// CAS is an atomic compare-and-swap.
+func (i *Int32) CAS(old, new int32) bool {
+ return atomic.CompareAndSwapInt32(&i.v, old, new)
+}
+
+// Store atomically stores the passed value.
+func (i *Int32) Store(n int32) {
+ atomic.StoreInt32(&i.v, n)
+}
+
+// Swap atomically swaps the wrapped int32 and returns the old value.
+func (i *Int32) Swap(n int32) int32 {
+ return atomic.SwapInt32(&i.v, n)
+}
+
+// Int64 is an atomic wrapper around an int64.
+type Int64 struct{ v int64 }
+
+// NewInt64 creates an Int64.
+func NewInt64(i int64) *Int64 {
+ return &Int64{i}
+}
+
+// Load atomically loads the wrapped value.
+func (i *Int64) Load() int64 {
+ return atomic.LoadInt64(&i.v)
+}
+
+// Add atomically adds to the wrapped int64 and returns the new value.
+func (i *Int64) Add(n int64) int64 {
+ return atomic.AddInt64(&i.v, n)
+}
+
+// Sub atomically subtracts from the wrapped int64 and returns the new value.
+func (i *Int64) Sub(n int64) int64 {
+ return atomic.AddInt64(&i.v, -n)
+}
+
+// Inc atomically increments the wrapped int64 and returns the new value.
+func (i *Int64) Inc() int64 {
+ return i.Add(1)
+}
+
+// Dec atomically decrements the wrapped int64 and returns the new value.
+func (i *Int64) Dec() int64 {
+ return i.Sub(1)
+}
+
+// CAS is an atomic compare-and-swap.
+func (i *Int64) CAS(old, new int64) bool {
+ return atomic.CompareAndSwapInt64(&i.v, old, new)
+}
+
+// Store atomically stores the passed value.
+func (i *Int64) Store(n int64) {
+ atomic.StoreInt64(&i.v, n)
+}
+
+// Swap atomically swaps the wrapped int64 and returns the old value.
+func (i *Int64) Swap(n int64) int64 {
+ return atomic.SwapInt64(&i.v, n)
+}
+
+// Uint32 is an atomic wrapper around an uint32.
+type Uint32 struct{ v uint32 }
+
+// NewUint32 creates a Uint32.
+func NewUint32(i uint32) *Uint32 {
+ return &Uint32{i}
+}
+
+// Load atomically loads the wrapped value.
+func (i *Uint32) Load() uint32 {
+ return atomic.LoadUint32(&i.v)
+}
+
+// Add atomically adds to the wrapped uint32 and returns the new value.
+func (i *Uint32) Add(n uint32) uint32 {
+ return atomic.AddUint32(&i.v, n)
+}
+
+// Sub atomically subtracts from the wrapped uint32 and returns the new value.
+func (i *Uint32) Sub(n uint32) uint32 {
+ return atomic.AddUint32(&i.v, ^(n - 1))
+}
+
+// Inc atomically increments the wrapped uint32 and returns the new value.
+func (i *Uint32) Inc() uint32 {
+ return i.Add(1)
+}
+
+// Dec atomically decrements the wrapped uint32 and returns the new value.
+func (i *Uint32) Dec() uint32 {
+ return i.Sub(1)
+}
+
+// CAS is an atomic compare-and-swap.
+func (i *Uint32) CAS(old, new uint32) bool {
+ return atomic.CompareAndSwapUint32(&i.v, old, new)
+}
+
+// Store atomically stores the passed value.
+func (i *Uint32) Store(n uint32) {
+ atomic.StoreUint32(&i.v, n)
+}
+
+// Swap atomically swaps the wrapped uint32 and returns the old value.
+func (i *Uint32) Swap(n uint32) uint32 {
+ return atomic.SwapUint32(&i.v, n)
+}
+
+// Uint64 is an atomic wrapper around a uint64.
+type Uint64 struct{ v uint64 }
+
+// NewUint64 creates a Uint64.
+func NewUint64(i uint64) *Uint64 {
+ return &Uint64{i}
+}
+
+// Load atomically loads the wrapped value.
+func (i *Uint64) Load() uint64 {
+ return atomic.LoadUint64(&i.v)
+}
+
+// Add atomically adds to the wrapped uint64 and returns the new value.
+func (i *Uint64) Add(n uint64) uint64 {
+ return atomic.AddUint64(&i.v, n)
+}
+
+// Sub atomically subtracts from the wrapped uint64 and returns the new value.
+func (i *Uint64) Sub(n uint64) uint64 {
+ return atomic.AddUint64(&i.v, ^(n - 1))
+}
+
+// Inc atomically increments the wrapped uint64 and returns the new value.
+func (i *Uint64) Inc() uint64 {
+ return i.Add(1)
+}
+
+// Dec atomically decrements the wrapped uint64 and returns the new value.
+func (i *Uint64) Dec() uint64 {
+ return i.Sub(1)
+}
+
+// CAS is an atomic compare-and-swap.
+func (i *Uint64) CAS(old, new uint64) bool {
+ return atomic.CompareAndSwapUint64(&i.v, old, new)
+}
+
+// Store atomically stores the passed value.
+func (i *Uint64) Store(n uint64) {
+ atomic.StoreUint64(&i.v, n)
+}
+
+// Swap atomically swaps the wrapped uint64 and returns the old value.
+func (i *Uint64) Swap(n uint64) uint64 {
+ return atomic.SwapUint64(&i.v, n)
+}
+
+// Bool is an atomic Boolean.
+type Bool struct{ v uint32 }
+
+// NewBool creates a Bool.
+func NewBool(initial bool) *Bool {
+ return &Bool{boolToInt(initial)}
+}
+
+// Load atomically loads the Boolean.
+func (b *Bool) Load() bool {
+ return truthy(atomic.LoadUint32(&b.v))
+}
+
+// CAS is an atomic compare-and-swap.
+func (b *Bool) CAS(old, new bool) bool {
+ return atomic.CompareAndSwapUint32(&b.v, boolToInt(old), boolToInt(new))
+}
+
+// Store atomically stores the passed value.
+func (b *Bool) Store(new bool) {
+ atomic.StoreUint32(&b.v, boolToInt(new))
+}
+
+// Swap sets the given value and returns the previous value.
+func (b *Bool) Swap(new bool) bool {
+ return truthy(atomic.SwapUint32(&b.v, boolToInt(new)))
+}
+
+// Toggle atomically negates the Boolean and returns the previous value.
+func (b *Bool) Toggle() bool {
+ return truthy(atomic.AddUint32(&b.v, 1) - 1)
+}
+
+func truthy(n uint32) bool {
+ return n&1 == 1
+}
+
+func boolToInt(b bool) uint32 {
+ if b {
+ return 1
+ }
+ return 0
+}
+
+// Float64 is an atomic wrapper around float64.
+type Float64 struct {
+ v uint64
+}
+
+// NewFloat64 creates a Float64.
+func NewFloat64(f float64) *Float64 {
+ return &Float64{math.Float64bits(f)}
+}
+
+// Load atomically loads the wrapped value.
+func (f *Float64) Load() float64 {
+ return math.Float64frombits(atomic.LoadUint64(&f.v))
+}
+
+// Store atomically stores the passed value.
+func (f *Float64) Store(s float64) {
+ atomic.StoreUint64(&f.v, math.Float64bits(s))
+}
+
+// Add atomically adds to the wrapped float64 and returns the new value.
+func (f *Float64) Add(s float64) float64 {
+ for {
+ old := f.Load()
+ new := old + s
+ if f.CAS(old, new) {
+ return new
+ }
+ }
+}
+
+// Sub atomically subtracts from the wrapped float64 and returns the new value.
+func (f *Float64) Sub(s float64) float64 {
+ return f.Add(-s)
+}
+
+// CAS is an atomic compare-and-swap.
+func (f *Float64) CAS(old, new float64) bool {
+ return atomic.CompareAndSwapUint64(&f.v, math.Float64bits(old), math.Float64bits(new))
+}
+
+// Duration is an atomic wrapper around time.Duration
+// https://godoc.org/time#Duration
+type Duration struct {
+ v Int64
+}
+
+// NewDuration creates a Duration.
+func NewDuration(d time.Duration) *Duration {
+ return &Duration{v: *NewInt64(int64(d))}
+}
+
+// Load atomically loads the wrapped value.
+func (d *Duration) Load() time.Duration {
+ return time.Duration(d.v.Load())
+}
+
+// Store atomically stores the passed value.
+func (d *Duration) Store(n time.Duration) {
+ d.v.Store(int64(n))
+}
+
+// Add atomically adds to the wrapped time.Duration and returns the new value.
+func (d *Duration) Add(n time.Duration) time.Duration {
+ return time.Duration(d.v.Add(int64(n)))
+}
+
+// Sub atomically subtracts from the wrapped time.Duration and returns the new value.
+func (d *Duration) Sub(n time.Duration) time.Duration {
+ return time.Duration(d.v.Sub(int64(n)))
+}
+
+// Swap atomically swaps the wrapped time.Duration and returns the old value.
+func (d *Duration) Swap(n time.Duration) time.Duration {
+ return time.Duration(d.v.Swap(int64(n)))
+}
+
+// CAS is an atomic compare-and-swap.
+func (d *Duration) CAS(old, new time.Duration) bool {
+ return d.v.CAS(int64(old), int64(new))
+}
+
+// Value shadows the type of the same name from sync/atomic
+// https://godoc.org/sync/atomic#Value
+type Value struct{ atomic.Value }
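
`Float64.Add` above is a textbook CAS-retry loop; the same pattern works for any custom read-modify-write on these wrappers. A hedged sketch of an atomic max built on `Float64`:

```go
package main

import (
	"fmt"

	"go.uber.org/atomic"
)

// storeMax retries compare-and-swap until it either observes a value
// already >= v or successfully publishes v, mirroring Float64.Add.
func storeMax(f *atomic.Float64, v float64) {
	for {
		old := f.Load()
		if old >= v {
			return
		}
		if f.CAS(old, v) {
			return
		}
	}
}

func main() {
	f := atomic.NewFloat64(1.5)
	storeMax(f, 3.0)
	storeMax(f, 2.0)      // no-op: current value is already larger
	fmt.Println(f.Load()) // 3
}
```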
diff --git a/vendor/github.com/uber-go/atomic/error.go b/vendor/github.com/uber-go/atomic/error.go
new file mode 100644
index 0000000000000..0489d19badbde
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/error.go
@@ -0,0 +1,55 @@
+// Copyright (c) 2016 Uber Technologies, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+
+package atomic
+
+// Error is an atomic type-safe wrapper around Value for errors
+type Error struct{ v Value }
+
+// errorHolder is non-nil holder for error object.
+// atomic.Value panics on saving nil object, so err object needs to be
+// wrapped with valid object first.
+type errorHolder struct{ err error }
+
+// NewError creates new atomic error object
+func NewError(err error) *Error {
+ e := &Error{}
+ if err != nil {
+ e.Store(err)
+ }
+ return e
+}
+
+// Load atomically loads the wrapped error
+func (e *Error) Load() error {
+ v := e.v.Load()
+ if v == nil {
+ return nil
+ }
+
+ eh := v.(errorHolder)
+ return eh.err
+}
+
+// Store atomically stores error.
+// NOTE: a holder object is allocated on each Store call.
+func (e *Error) Store(err error) {
+ e.v.Store(errorHolder{err: err})
+}
diff --git a/vendor/go.uber.org/atomic/glide.lock b/vendor/github.com/uber-go/atomic/glide.lock
similarity index 100%
rename from vendor/go.uber.org/atomic/glide.lock
rename to vendor/github.com/uber-go/atomic/glide.lock
diff --git a/vendor/go.uber.org/atomic/glide.yaml b/vendor/github.com/uber-go/atomic/glide.yaml
similarity index 100%
rename from vendor/go.uber.org/atomic/glide.yaml
rename to vendor/github.com/uber-go/atomic/glide.yaml
diff --git a/vendor/github.com/uber-go/atomic/string.go b/vendor/github.com/uber-go/atomic/string.go
new file mode 100644
index 0000000000000..ede8136face10
--- /dev/null
+++ b/vendor/github.com/uber-go/atomic/string.go
@@ -0,0 +1,49 @@
+// Copyright (c) 2016 Uber Technologies, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+
+package atomic
+
+// String is an atomic type-safe wrapper around Value for strings.
+type String struct{ v Value }
+
+// NewString creates a String.
+func NewString(str string) *String {
+ s := &String{}
+ if str != "" {
+ s.Store(str)
+ }
+ return s
+}
+
+// Load atomically loads the wrapped string.
+func (s *String) Load() string {
+ v := s.v.Load()
+ if v == nil {
+ return ""
+ }
+ return v.(string)
+}
+
+// Store atomically stores the passed string.
+// Note: Converting the string to an interface{} to store in the Value
+// requires an allocation.
+func (s *String) Store(str string) {
+ s.v.Store(str)
+}
diff --git a/vendor/github.com/uber/jaeger-client-go/CHANGELOG.md b/vendor/github.com/uber/jaeger-client-go/CHANGELOG.md
index c4590bf93e207..5e7e9d5e5eada 100644
--- a/vendor/github.com/uber/jaeger-client-go/CHANGELOG.md
+++ b/vendor/github.com/uber/jaeger-client-go/CHANGELOG.md
@@ -1,6 +1,16 @@
Changes by Version
==================
+2.20.1 (2019-11-08)
+-------------------
+
+Minor patch via https://github.com/jaegertracing/jaeger-client-go/pull/468
+
+- Make `AdaptiveSamplerUpdater` usable with default values; Resolves #467
+- Create `OperationNameLateBinding` sampler option and config option
+- Make `SamplerOptions` var of public type, so that its functions are discoverable via godoc
+
+
2.20.0 (2019-11-06)
-------------------
diff --git a/vendor/github.com/uber/jaeger-client-go/config/config.go b/vendor/github.com/uber/jaeger-client-go/config/config.go
index 965f7c3ee7d7e..a0c32d804473a 100644
--- a/vendor/github.com/uber/jaeger-client-go/config/config.go
+++ b/vendor/github.com/uber/jaeger-client-go/config/config.go
@@ -76,17 +76,26 @@ type SamplerConfig struct {
// Can be set by exporting an environment variable named JAEGER_SAMPLER_MANAGER_HOST_PORT
SamplingServerURL string `yaml:"samplingServerURL"`
- // MaxOperations is the maximum number of operations that the sampler
- // will keep track of. If an operation is not tracked, a default probabilistic
- // sampler will be used rather than the per operation specific sampler.
- // Can be set by exporting an environment variable named JAEGER_SAMPLER_MAX_OPERATIONS
- MaxOperations int `yaml:"maxOperations"`
-
// SamplingRefreshInterval controls how often the remotely controlled sampler will poll
// jaeger-agent for the appropriate sampling strategy.
// Can be set by exporting an environment variable named JAEGER_SAMPLER_REFRESH_INTERVAL
SamplingRefreshInterval time.Duration `yaml:"samplingRefreshInterval"`
+ // MaxOperations is the maximum number of operations that the PerOperationSampler
+ // will keep track of. If an operation is not tracked, a default probabilistic
+ // sampler will be used rather than the per operation specific sampler.
+ // Can be set by exporting an environment variable named JAEGER_SAMPLER_MAX_OPERATIONS.
+ MaxOperations int `yaml:"maxOperations"`
+
+ // Opt-in feature for applications that require late binding of span name via explicit
+ // call to SetOperationName when using PerOperationSampler. When this feature is enabled,
+ // the sampler will return retryable=true from OnCreateSpan(), thus leaving the sampling
+ // decision as non-final (and the span as writeable). This may lead to degraded performance
+ // in applications that always provide the correct span name on trace creation.
+ //
+ // For backwards compatibility this option is off by default.
+ OperationNameLateBinding bool `yaml:"operationNameLateBinding"`
+
// Options can be used to programmatically pass additional options to the Remote sampler.
Options []jaeger.SamplerOption
}
@@ -335,7 +344,7 @@ func (sc *SamplerConfig) NewSampler(
return jaeger.NewProbabilisticSampler(sc.Param)
}
return nil, fmt.Errorf(
- "Invalid Param for probabilistic sampler: %v. Expecting value between 0 and 1",
+ "invalid Param for probabilistic sampler; expecting value between 0 and 1, received %v",
sc.Param,
)
}
@@ -353,17 +362,14 @@ func (sc *SamplerConfig) NewSampler(
jaeger.SamplerOptions.Metrics(metrics),
jaeger.SamplerOptions.InitialSampler(initSampler),
jaeger.SamplerOptions.SamplingServerURL(sc.SamplingServerURL),
- }
- if sc.MaxOperations != 0 {
- options = append(options, jaeger.SamplerOptions.MaxOperations(sc.MaxOperations))
- }
- if sc.SamplingRefreshInterval != 0 {
- options = append(options, jaeger.SamplerOptions.SamplingRefreshInterval(sc.SamplingRefreshInterval))
+ jaeger.SamplerOptions.MaxOperations(sc.MaxOperations),
+ jaeger.SamplerOptions.OperationNameLateBinding(sc.OperationNameLateBinding),
+ jaeger.SamplerOptions.SamplingRefreshInterval(sc.SamplingRefreshInterval),
}
options = append(options, sc.Options...)
return jaeger.NewRemotelyControlledSampler(serviceName, options...), nil
}
- return nil, fmt.Errorf("Unknown sampler type %v", sc.Type)
+ return nil, fmt.Errorf("unknown sampler type (%s)", sc.Type)
}
// NewReporter instantiates a new reporter that submits spans to the collector
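
A minimal sketch (service name and initial rate are placeholders) of wiring the new `OperationNameLateBinding` and relocated `MaxOperations` options through the config layer:

```go
package main

import (
	"log"

	jaeger "github.com/uber/jaeger-client-go"
	"github.com/uber/jaeger-client-go/config"
)

func main() {
	cfg := config.Configuration{
		ServiceName: "my-service", // placeholder
		Sampler: &config.SamplerConfig{
			Type:          jaeger.SamplerTypeRemote,
			Param:         0.001, // initial probabilistic rate until a remote strategy arrives
			MaxOperations: 500,
			// Keep spans writeable until SetOperationName is called; off by
			// default for backwards compatibility, as documented above.
			OperationNameLateBinding: true,
		},
	}
	tracer, closer, err := cfg.NewTracer()
	if err != nil {
		log.Fatalf("init tracer: %v", err)
	}
	defer closer.Close()
	_ = tracer // e.g. opentracing.SetGlobalTracer(tracer)
}
```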
diff --git a/vendor/github.com/uber/jaeger-client-go/constants.go b/vendor/github.com/uber/jaeger-client-go/constants.go
index 0da47b02fa68e..5d27b628d7670 100644
--- a/vendor/github.com/uber/jaeger-client-go/constants.go
+++ b/vendor/github.com/uber/jaeger-client-go/constants.go
@@ -22,7 +22,7 @@ import (
const (
// JaegerClientVersion is the version of the client library reported as Span tag.
- JaegerClientVersion = "Go-2.20.0"
+ JaegerClientVersion = "Go-2.20.1"
// JaegerClientVersionTagKey is the name of the tag used to report client version.
JaegerClientVersionTagKey = "jaeger.version"
diff --git a/vendor/github.com/uber/jaeger-client-go/sampler.go b/vendor/github.com/uber/jaeger-client-go/sampler.go
index 6195d59c58653..f47004b1f68f3 100644
--- a/vendor/github.com/uber/jaeger-client-go/sampler.go
+++ b/vendor/github.com/uber/jaeger-client-go/sampler.go
@@ -363,6 +363,9 @@ type PerOperationSamplerParams struct {
// NewPerOperationSampler returns a new PerOperationSampler.
func NewPerOperationSampler(params PerOperationSamplerParams) *PerOperationSampler {
+ if params.MaxOperations <= 0 {
+ params.MaxOperations = defaultMaxOperations
+ }
samplers := make(map[string]*GuaranteedThroughputProbabilisticSampler)
for _, strategy := range params.Strategies.PerOperationStrategies {
sampler := newGuaranteedThroughputProbabilisticSampler(
diff --git a/vendor/github.com/uber/jaeger-client-go/sampler_remote.go b/vendor/github.com/uber/jaeger-client-go/sampler_remote.go
index 9bd0c98227a02..4448b8f643936 100644
--- a/vendor/github.com/uber/jaeger-client-go/sampler_remote.go
+++ b/vendor/github.com/uber/jaeger-client-go/sampler_remote.go
@@ -258,8 +258,9 @@ func (u *RateLimitingSamplerUpdater) Update(sampler SamplerV2, strategy interfac
// -----------------------
// AdaptiveSamplerUpdater is used by RemotelyControlledSampler to parse sampling configuration.
+// Fields have the same meaning as in PerOperationSamplerParams.
type AdaptiveSamplerUpdater struct {
- MaxOperations int // required
+ MaxOperations int
OperationNameLateBinding bool
}
diff --git a/vendor/github.com/uber/jaeger-client-go/sampler_remote_options.go b/vendor/github.com/uber/jaeger-client-go/sampler_remote_options.go
index 7a292effc90cf..3b5c6aa9c221b 100644
--- a/vendor/github.com/uber/jaeger-client-go/sampler_remote_options.go
+++ b/vendor/github.com/uber/jaeger-client-go/sampler_remote_options.go
@@ -23,12 +23,17 @@ import (
// SamplerOption is a function that sets some option on the sampler
type SamplerOption func(options *samplerOptions)
-// SamplerOptions is a factory for all available SamplerOption's
-var SamplerOptions samplerOptions
+// SamplerOptions is a factory for all available SamplerOption's.
+var SamplerOptions SamplerOptionsFactory
+
+// SamplerOptionsFactory is a factory for all available SamplerOption's.
+// The type acts as a namespace for factory functions. It is public to
+// make the functions discoverable via godoc. Recommended to be used
+// via global SamplerOptions variable.
+type SamplerOptionsFactory struct{}
type samplerOptions struct {
metrics *Metrics
- maxOperations int
sampler SamplerV2
logger Logger
samplingServerURL string
@@ -36,11 +41,12 @@ type samplerOptions struct {
samplingFetcher SamplingStrategyFetcher
samplingParser SamplingStrategyParser
updaters []SamplerUpdater
+ posParams PerOperationSamplerParams
}
// Metrics creates a SamplerOption that initializes Metrics on the sampler,
// which is used to emit statistics.
-func (samplerOptions) Metrics(m *Metrics) SamplerOption {
+func (SamplerOptionsFactory) Metrics(m *Metrics) SamplerOption {
return func(o *samplerOptions) {
o.metrics = m
}
@@ -48,22 +54,30 @@ func (samplerOptions) Metrics(m *Metrics) SamplerOption {
// MaxOperations creates a SamplerOption that sets the maximum number of
// operations the sampler will keep track of.
-func (samplerOptions) MaxOperations(maxOperations int) SamplerOption {
+func (SamplerOptionsFactory) MaxOperations(maxOperations int) SamplerOption {
+ return func(o *samplerOptions) {
+ o.posParams.MaxOperations = maxOperations
+ }
+}
+
+// OperationNameLateBinding creates a SamplerOption that sets the respective
+// field in the PerOperationSamplerParams.
+func (SamplerOptionsFactory) OperationNameLateBinding(enable bool) SamplerOption {
return func(o *samplerOptions) {
- o.maxOperations = maxOperations
+ o.posParams.OperationNameLateBinding = enable
}
}
// InitialSampler creates a SamplerOption that sets the initial sampler
// to use before a remote sampler is created and used.
-func (samplerOptions) InitialSampler(sampler Sampler) SamplerOption {
+func (SamplerOptionsFactory) InitialSampler(sampler Sampler) SamplerOption {
return func(o *samplerOptions) {
o.sampler = samplerV1toV2(sampler)
}
}
// Logger creates a SamplerOption that sets the logger used by the sampler.
-func (samplerOptions) Logger(logger Logger) SamplerOption {
+func (SamplerOptionsFactory) Logger(logger Logger) SamplerOption {
return func(o *samplerOptions) {
o.logger = logger
}
@@ -71,7 +85,7 @@ func (samplerOptions) Logger(logger Logger) SamplerOption {
// SamplingServerURL creates a SamplerOption that sets the sampling server url
// of the local agent that contains the sampling strategies.
-func (samplerOptions) SamplingServerURL(samplingServerURL string) SamplerOption {
+func (SamplerOptionsFactory) SamplingServerURL(samplingServerURL string) SamplerOption {
return func(o *samplerOptions) {
o.samplingServerURL = samplingServerURL
}
@@ -79,28 +93,28 @@ func (samplerOptions) SamplingServerURL(samplingServerURL string) SamplerOption
// SamplingRefreshInterval creates a SamplerOption that sets how often the
// sampler will poll local agent for the appropriate sampling strategy.
-func (samplerOptions) SamplingRefreshInterval(samplingRefreshInterval time.Duration) SamplerOption {
+func (SamplerOptionsFactory) SamplingRefreshInterval(samplingRefreshInterval time.Duration) SamplerOption {
return func(o *samplerOptions) {
o.samplingRefreshInterval = samplingRefreshInterval
}
}
// SamplingStrategyFetcher creates a SamplerOption that initializes sampling strategy fetcher.
-func (samplerOptions) SamplingStrategyFetcher(fetcher SamplingStrategyFetcher) SamplerOption {
+func (SamplerOptionsFactory) SamplingStrategyFetcher(fetcher SamplingStrategyFetcher) SamplerOption {
return func(o *samplerOptions) {
o.samplingFetcher = fetcher
}
}
// SamplingStrategyParser creates a SamplerOption that initializes sampling strategy parser.
-func (samplerOptions) SamplingStrategyParser(parser SamplingStrategyParser) SamplerOption {
+func (SamplerOptionsFactory) SamplingStrategyParser(parser SamplingStrategyParser) SamplerOption {
return func(o *samplerOptions) {
o.samplingParser = parser
}
}
// Updaters creates a SamplerOption that initializes sampler updaters.
-func (samplerOptions) Updaters(updaters ...SamplerUpdater) SamplerOption {
+func (SamplerOptionsFactory) Updaters(updaters ...SamplerUpdater) SamplerOption {
return func(o *samplerOptions) {
o.updaters = updaters
}
@@ -116,9 +130,6 @@ func (o *samplerOptions) applyOptionsAndDefaults(opts ...SamplerOption) *sampler
if o.logger == nil {
o.logger = log.NullLogger
}
- if o.maxOperations <= 0 {
- o.maxOperations = defaultMaxOperations
- }
if o.samplingServerURL == "" {
o.samplingServerURL = DefaultSamplingServerURL
}
@@ -139,7 +150,10 @@ func (o *samplerOptions) applyOptionsAndDefaults(opts ...SamplerOption) *sampler
}
if o.updaters == nil {
o.updaters = []SamplerUpdater{
- &AdaptiveSamplerUpdater{MaxOperations: o.maxOperations},
+ &AdaptiveSamplerUpdater{
+ MaxOperations: o.posParams.MaxOperations,
+ OperationNameLateBinding: o.posParams.OperationNameLateBinding,
+ },
new(ProbabilisticSamplerUpdater),
new(RateLimitingSamplerUpdater),
}
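
Because the factory functions now hang off the exported `SamplerOptionsFactory` type, they stay reachable through the global `SamplerOptions` variable exactly as before; a sketch with placeholder values:

```go
package main

import (
	jaeger "github.com/uber/jaeger-client-go"
)

func main() {
	sampler := jaeger.NewRemotelyControlledSampler(
		"my-service", // placeholder service name
		jaeger.SamplerOptions.SamplingServerURL("http://localhost:5778/sampling"),
		jaeger.SamplerOptions.MaxOperations(200),
		jaeger.SamplerOptions.OperationNameLateBinding(true),
	)
	defer sampler.Close()
	_ = sampler
}
```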
diff --git a/vendor/go.opencensus.io/Gopkg.lock b/vendor/go.opencensus.io/Gopkg.lock
deleted file mode 100644
index 3be12ac8f2478..0000000000000
--- a/vendor/go.opencensus.io/Gopkg.lock
+++ /dev/null
@@ -1,231 +0,0 @@
-# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
-
-
-[[projects]]
- branch = "master"
- digest = "1:eee9386329f4fcdf8d6c0def0c9771b634bdd5ba460d888aa98c17d59b37a76c"
- name = "git.apache.org/thrift.git"
- packages = ["lib/go/thrift"]
- pruneopts = "UT"
- revision = "6e67faa92827ece022380b211c2caaadd6145bf5"
- source = "github.com/apache/thrift"
-
-[[projects]]
- branch = "master"
- digest = "1:d6afaeed1502aa28e80a4ed0981d570ad91b2579193404256ce672ed0a609e0d"
- name = "github.com/beorn7/perks"
- packages = ["quantile"]
- pruneopts = "UT"
- revision = "3a771d992973f24aa725d07868b467d1ddfceafb"
-
-[[projects]]
- digest = "1:4c0989ca0bcd10799064318923b9bc2db6b4d6338dd75f3f2d86c3511aaaf5cf"
- name = "github.com/golang/protobuf"
- packages = [
- "proto",
- "ptypes",
- "ptypes/any",
- "ptypes/duration",
- "ptypes/timestamp",
- ]
- pruneopts = "UT"
- revision = "aa810b61a9c79d51363740d207bb46cf8e620ed5"
- version = "v1.2.0"
-
-[[projects]]
- digest = "1:ff5ebae34cfbf047d505ee150de27e60570e8c394b3b8fdbb720ff6ac71985fc"
- name = "github.com/matttproud/golang_protobuf_extensions"
- packages = ["pbutil"]
- pruneopts = "UT"
- revision = "c12348ce28de40eed0136aa2b644d0ee0650e56c"
- version = "v1.0.1"
-
-[[projects]]
- digest = "1:824c8f3aa4c5f23928fa84ebbd5ed2e9443b3f0cb958a40c1f2fbed5cf5e64b1"
- name = "github.com/openzipkin/zipkin-go"
- packages = [
- ".",
- "idgenerator",
- "model",
- "propagation",
- "reporter",
- "reporter/http",
- ]
- pruneopts = "UT"
- revision = "d455a5674050831c1e187644faa4046d653433c2"
- version = "v0.1.1"
-
-[[projects]]
- digest = "1:d14a5f4bfecf017cb780bdde1b6483e5deb87e12c332544d2c430eda58734bcb"
- name = "github.com/prometheus/client_golang"
- packages = [
- "prometheus",
- "prometheus/promhttp",
- ]
- pruneopts = "UT"
- revision = "c5b7fccd204277076155f10851dad72b76a49317"
- version = "v0.8.0"
-
-[[projects]]
- branch = "master"
- digest = "1:2d5cd61daa5565187e1d96bae64dbbc6080dacf741448e9629c64fd93203b0d4"
- name = "github.com/prometheus/client_model"
- packages = ["go"]
- pruneopts = "UT"
- revision = "5c3871d89910bfb32f5fcab2aa4b9ec68e65a99f"
-
-[[projects]]
- branch = "master"
- digest = "1:63b68062b8968092eb86bedc4e68894bd096ea6b24920faca8b9dcf451f54bb5"
- name = "github.com/prometheus/common"
- packages = [
- "expfmt",
- "internal/bitbucket.org/ww/goautoneg",
- "model",
- ]
- pruneopts = "UT"
- revision = "c7de2306084e37d54b8be01f3541a8464345e9a5"
-
-[[projects]]
- branch = "master"
- digest = "1:8c49953a1414305f2ff5465147ee576dd705487c35b15918fcd4efdc0cb7a290"
- name = "github.com/prometheus/procfs"
- packages = [
- ".",
- "internal/util",
- "nfs",
- "xfs",
- ]
- pruneopts = "UT"
- revision = "05ee40e3a273f7245e8777337fc7b46e533a9a92"
-
-[[projects]]
- branch = "master"
- digest = "1:deafe4ab271911fec7de5b693d7faae3f38796d9eb8622e2b9e7df42bb3dfea9"
- name = "golang.org/x/net"
- packages = [
- "context",
- "http/httpguts",
- "http2",
- "http2/hpack",
- "idna",
- "internal/timeseries",
- "trace",
- ]
- pruneopts = "UT"
- revision = "922f4815f713f213882e8ef45e0d315b164d705c"
-
-[[projects]]
- branch = "master"
- digest = "1:e0140c0c868c6e0f01c0380865194592c011fe521d6e12d78bfd33e756fe018a"
- name = "golang.org/x/sync"
- packages = ["semaphore"]
- pruneopts = "UT"
- revision = "1d60e4601c6fd243af51cc01ddf169918a5407ca"
-
-[[projects]]
- branch = "master"
- digest = "1:a3f00ac457c955fe86a41e1495e8f4c54cb5399d609374c5cc26aa7d72e542c8"
- name = "golang.org/x/sys"
- packages = ["unix"]
- pruneopts = "UT"
- revision = "3b58ed4ad3395d483fc92d5d14123ce2c3581fec"
-
-[[projects]]
- digest = "1:a2ab62866c75542dd18d2b069fec854577a20211d7c0ea6ae746072a1dccdd18"
- name = "golang.org/x/text"
- packages = [
- "collate",
- "collate/build",
- "internal/colltab",
- "internal/gen",
- "internal/tag",
- "internal/triegen",
- "internal/ucd",
- "language",
- "secure/bidirule",
- "transform",
- "unicode/bidi",
- "unicode/cldr",
- "unicode/norm",
- "unicode/rangetable",
- ]
- pruneopts = "UT"
- revision = "f21a4dfb5e38f5895301dc265a8def02365cc3d0"
- version = "v0.3.0"
-
-[[projects]]
- branch = "master"
- digest = "1:c0c17c94fe8bc1ab34e7f586a4a8b788c5e1f4f9f750ff23395b8b2f5a523530"
- name = "google.golang.org/api"
- packages = ["support/bundler"]
- pruneopts = "UT"
- revision = "e21acd801f91da814261b938941d193bb036441a"
-
-[[projects]]
- branch = "master"
- digest = "1:077c1c599507b3b3e9156d17d36e1e61928ee9b53a5b420f10f28ebd4a0b275c"
- name = "google.golang.org/genproto"
- packages = ["googleapis/rpc/status"]
- pruneopts = "UT"
- revision = "c66870c02cf823ceb633bcd05be3c7cda29976f4"
-
-[[projects]]
- digest = "1:3dd7996ce6bf52dec6a2f69fa43e7c4cefea1d4dfa3c8ab7a5f8a9f7434e239d"
- name = "google.golang.org/grpc"
- packages = [
- ".",
- "balancer",
- "balancer/base",
- "balancer/roundrobin",
- "codes",
- "connectivity",
- "credentials",
- "encoding",
- "encoding/proto",
- "grpclog",
- "internal",
- "internal/backoff",
- "internal/channelz",
- "internal/envconfig",
- "internal/grpcrand",
- "internal/transport",
- "keepalive",
- "metadata",
- "naming",
- "peer",
- "resolver",
- "resolver/dns",
- "resolver/passthrough",
- "stats",
- "status",
- "tap",
- ]
- pruneopts = "UT"
- revision = "32fb0ac620c32ba40a4626ddf94d90d12cce3455"
- version = "v1.14.0"
-
-[solve-meta]
- analyzer-name = "dep"
- analyzer-version = 1
- input-imports = [
- "git.apache.org/thrift.git/lib/go/thrift",
- "github.com/golang/protobuf/proto",
- "github.com/openzipkin/zipkin-go",
- "github.com/openzipkin/zipkin-go/model",
- "github.com/openzipkin/zipkin-go/reporter",
- "github.com/openzipkin/zipkin-go/reporter/http",
- "github.com/prometheus/client_golang/prometheus",
- "github.com/prometheus/client_golang/prometheus/promhttp",
- "golang.org/x/net/context",
- "golang.org/x/net/http2",
- "google.golang.org/api/support/bundler",
- "google.golang.org/grpc",
- "google.golang.org/grpc/codes",
- "google.golang.org/grpc/grpclog",
- "google.golang.org/grpc/metadata",
- "google.golang.org/grpc/stats",
- "google.golang.org/grpc/status",
- ]
- solver-name = "gps-cdcl"
- solver-version = 1
diff --git a/vendor/go.opencensus.io/Gopkg.toml b/vendor/go.opencensus.io/Gopkg.toml
deleted file mode 100644
index a9f3cd68eb302..0000000000000
--- a/vendor/go.opencensus.io/Gopkg.toml
+++ /dev/null
@@ -1,36 +0,0 @@
-# For v0.x.y dependencies, prefer adding a constraints of the form: version=">= 0.x.y"
-# to avoid locking to a particular minor version which can cause dep to not be
-# able to find a satisfying dependency graph.
-
-[[constraint]]
- branch = "master"
- name = "git.apache.org/thrift.git"
- source = "github.com/apache/thrift"
-
-[[constraint]]
- name = "github.com/golang/protobuf"
- version = "1.0.0"
-
-[[constraint]]
- name = "github.com/openzipkin/zipkin-go"
- version = ">=0.1.0"
-
-[[constraint]]
- name = "github.com/prometheus/client_golang"
- version = ">=0.8.0"
-
-[[constraint]]
- branch = "master"
- name = "golang.org/x/net"
-
-[[constraint]]
- branch = "master"
- name = "google.golang.org/api"
-
-[[constraint]]
- name = "google.golang.org/grpc"
- version = "1.11.3"
-
-[prune]
- go-tests = true
- unused-packages = true
diff --git a/vendor/go.opencensus.io/README.md b/vendor/go.opencensus.io/README.md
index a8cd09eafbf0b..1d7e837116f0e 100644
--- a/vendor/go.opencensus.io/README.md
+++ b/vendor/go.opencensus.io/README.md
@@ -9,6 +9,8 @@ OpenCensus Go is a Go implementation of OpenCensus, a toolkit for
collecting application performance and behavior monitoring data.
Currently it consists of three major components: tags, stats and tracing.
+#### OpenCensus and OpenTracing have merged to form OpenTelemetry, which serves as the next major version of OpenCensus and OpenTracing. OpenTelemetry will offer backwards compatibility with existing OpenCensus integrations, and we will continue to make security patches to existing OpenCensus libraries for two years. Read more about the merger [here](https://medium.com/opentracing/a-roadmap-to-convergence-b074e5815289).
+
## Installation
```
@@ -57,6 +59,7 @@ can implement their own exporters by implementing the exporter interfaces
* [Datadog][exporter-datadog] for stats and traces
* [Graphite][exporter-graphite] for stats
* [Honeycomb][exporter-honeycomb] for traces
+* [New Relic][exporter-newrelic] for stats and traces
## Overview
@@ -261,3 +264,4 @@ release in which the functionality was marked *Deprecated*.
[exporter-datadog]: https://github.com/DataDog/opencensus-go-exporter-datadog
[exporter-graphite]: https://github.com/census-ecosystem/opencensus-go-exporter-graphite
[exporter-honeycomb]: https://github.com/honeycombio/opencensus-exporter
+[exporter-newrelic]: https://github.com/newrelic/newrelic-opencensus-exporter-go
diff --git a/vendor/go.opencensus.io/appveyor.yml b/vendor/go.opencensus.io/appveyor.yml
index 12bd7c4c73d7f..d08f0edaff974 100644
--- a/vendor/go.opencensus.io/appveyor.yml
+++ b/vendor/go.opencensus.io/appveyor.yml
@@ -6,13 +6,12 @@ clone_folder: c:\gopath\src\go.opencensus.io
environment:
GOPATH: 'c:\gopath'
- GOVERSION: '1.11'
GO111MODULE: 'on'
CGO_ENABLED: '0' # See: https://github.com/appveyor/ci/issues/2613
-install:
- - set PATH=%GOPATH%\bin;c:\go\bin;%PATH%
- - choco upgrade golang --version 1.11.5 # Temporary fix because of a go.sum bug in 1.11
+stack: go 1.11
+
+before_test:
- go version
- go env
diff --git a/vendor/go.opencensus.io/go.mod b/vendor/go.opencensus.io/go.mod
index 7c1886e9ef23d..c867df5f5c463 100644
--- a/vendor/go.opencensus.io/go.mod
+++ b/vendor/go.opencensus.io/go.mod
@@ -4,6 +4,7 @@ require (
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6
github.com/golang/protobuf v1.3.1
github.com/google/go-cmp v0.3.0
+ github.com/stretchr/testify v1.4.0
golang.org/x/net v0.0.0-20190620200207-3b0461eec859
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd // indirect
golang.org/x/text v0.3.2 // indirect
diff --git a/vendor/go.opencensus.io/go.sum b/vendor/go.opencensus.io/go.sum
index 212b6b73be14f..ed2a1d844f099 100644
--- a/vendor/go.opencensus.io/go.sum
+++ b/vendor/go.opencensus.io/go.sum
@@ -1,6 +1,8 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
+github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b h1:VKtxabqXZkF25pY9ekfRL6a582T4P37/31XEstQ5p58=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6 h1:ZgQEtGgCBiWRM39fZuwSd1LwSqqSW0hOdXCYYDX0R3I=
@@ -12,6 +14,11 @@ github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
+github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -60,4 +67,7 @@ google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZi
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
+gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/vendor/go.opencensus.io/plugin/ochttp/server.go b/vendor/go.opencensus.io/plugin/ochttp/server.go
index dc6563a15d80e..c7ea642357265 100644
--- a/vendor/go.opencensus.io/plugin/ochttp/server.go
+++ b/vendor/go.opencensus.io/plugin/ochttp/server.go
@@ -70,6 +70,12 @@ type Handler struct {
// from the information found in the incoming HTTP Request. By default the
// name equals the URL Path.
FormatSpanName func(*http.Request) string
+
+ // IsHealthEndpoint holds the function to use for determining if the
+ // incoming HTTP request should be considered a health check. This is in
+ // addition to the private isHealthEndpoint func which may also indicate
+ // tracing should be skipped.
+ IsHealthEndpoint func(*http.Request) bool
}
func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
@@ -87,7 +93,7 @@ func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
func (h *Handler) startTrace(w http.ResponseWriter, r *http.Request) (*http.Request, func()) {
- if isHealthEndpoint(r.URL.Path) {
+ if h.IsHealthEndpoint != nil && h.IsHealthEndpoint(r) || isHealthEndpoint(r.URL.Path) {
return r, func() {}
}
var name string
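
A sketch (probe path and port are arbitrary) of the new hook in use; requests it matches skip trace creation entirely, in addition to the package's built-in health-path heuristic:

```go
package main

import (
	"net/http"

	"go.opencensus.io/plugin/ochttp"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, _ *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	handler := &ochttp.Handler{
		Handler: mux,
		// Arbitrary example predicate: never trace the probe path.
		IsHealthEndpoint: func(r *http.Request) bool {
			return r.URL.Path == "/healthz"
		},
	}
	_ = http.ListenAndServe(":8080", handler)
}
```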
diff --git a/vendor/go.opencensus.io/stats/units.go b/vendor/go.opencensus.io/stats/units.go
index 6931a5f296616..736399652cc80 100644
--- a/vendor/go.opencensus.io/stats/units.go
+++ b/vendor/go.opencensus.io/stats/units.go
@@ -22,4 +22,5 @@ const (
UnitDimensionless = "1"
UnitBytes = "By"
UnitMilliseconds = "ms"
+ UnitSeconds = "s"
)
diff --git a/vendor/go.opencensus.io/stats/view/aggregation.go b/vendor/go.opencensus.io/stats/view/aggregation.go
index 8bd25314e20ec..9d7093728ed87 100644
--- a/vendor/go.opencensus.io/stats/view/aggregation.go
+++ b/vendor/go.opencensus.io/stats/view/aggregation.go
@@ -99,13 +99,14 @@ func Sum() *Aggregation {
// If len(bounds) is 1 then there is no finite buckets, and that single
// element is the common boundary of the overflow and underflow buckets.
func Distribution(bounds ...float64) *Aggregation {
- return &Aggregation{
+ agg := &Aggregation{
Type: AggTypeDistribution,
Buckets: bounds,
- newData: func() AggregationData {
- return newDistributionData(bounds)
- },
}
+ agg.newData = func() AggregationData {
+ return newDistributionData(agg)
+ }
+ return agg
}
// LastValue only reports the last value recorded using this
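
For context, a hedged sketch of how `Distribution` is consumed when registering a view (measure name and bucket bounds are arbitrary); each call now carries its own bounds on the returned `Aggregation`, which is what the fix above relies on:

```go
package main

import (
	"log"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
)

var latency = stats.Float64("demo/latency", "request latency", stats.UnitMilliseconds)

func main() {
	v := &view.View{
		Name:        "demo/latency_distribution",
		Description: "latency distribution over arbitrary bounds",
		Measure:     latency,
		Aggregation: view.Distribution(5, 10, 25, 50, 100),
	}
	if err := view.Register(v); err != nil {
		log.Fatal(err)
	}
}
```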
diff --git a/vendor/go.opencensus.io/stats/view/aggregation_data.go b/vendor/go.opencensus.io/stats/view/aggregation_data.go
index d500e67f7335e..f331d456e9bb1 100644
--- a/vendor/go.opencensus.io/stats/view/aggregation_data.go
+++ b/vendor/go.opencensus.io/stats/view/aggregation_data.go
@@ -128,12 +128,12 @@ type DistributionData struct {
bounds []float64 // histogram distribution of the values
}
-func newDistributionData(bounds []float64) *DistributionData {
- bucketCount := len(bounds) + 1
+func newDistributionData(agg *Aggregation) *DistributionData {
+ bucketCount := len(agg.Buckets) + 1
return &DistributionData{
CountPerBucket: make([]int64, bucketCount),
ExemplarsPerBucket: make([]*metricdata.Exemplar, bucketCount),
- bounds: bounds,
+ bounds: agg.Buckets,
Min: math.MaxFloat64,
Max: math.SmallestNonzeroFloat64,
}
diff --git a/vendor/go.uber.org/atomic/.gitignore b/vendor/go.uber.org/atomic/.gitignore
index 0a4504f11095f..c3fa253893f06 100644
--- a/vendor/go.uber.org/atomic/.gitignore
+++ b/vendor/go.uber.org/atomic/.gitignore
@@ -1,6 +1,7 @@
+/bin
.DS_Store
/vendor
-/cover
+cover.html
cover.out
lint.log
diff --git a/vendor/go.uber.org/atomic/.travis.yml b/vendor/go.uber.org/atomic/.travis.yml
index 0f3769e5fa6b2..4e73268b60297 100644
--- a/vendor/go.uber.org/atomic/.travis.yml
+++ b/vendor/go.uber.org/atomic/.travis.yml
@@ -2,26 +2,26 @@ sudo: false
language: go
go_import_path: go.uber.org/atomic
-go:
- - 1.11.x
- - 1.12.x
+env:
+ global:
+ - GO111MODULE=on
matrix:
include:
- go: 1.12.x
- env: NO_TEST=yes LINT=yes
+ - go: 1.13.x
+ env: LINT=1
cache:
directories:
- vendor
-install:
- - make install_ci
+before_install:
+ - go version
script:
- - test -n "$NO_TEST" || make test_ci
- - test -n "$NO_TEST" || scripts/test-ubergo.sh
- - test -z "$LINT" || make install_lint lint
+ - test -z "$LINT" || make lint
+ - make cover
after_success:
- bash <(curl -s https://codecov.io/bash)
diff --git a/vendor/go.uber.org/atomic/CHANGELOG.md b/vendor/go.uber.org/atomic/CHANGELOG.md
new file mode 100644
index 0000000000000..a88b023e4869c
--- /dev/null
+++ b/vendor/go.uber.org/atomic/CHANGELOG.md
@@ -0,0 +1,54 @@
+# Changelog
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+## [1.5.0] - 2019-10-29
+### Changed
+- With Go modules, only the `go.uber.org/atomic` import path is supported now.
+ If you need to use the old import path, please add a `replace` directive to
+ your `go.mod`.
+
+## [1.4.0] - 2019-05-01
+### Added
+ - Add `atomic.Error` type for atomic operations on `error` values.
+
+## [1.3.2] - 2018-05-02
+### Added
+- Add `atomic.Duration` type for atomic operations on `time.Duration` values.
+
+## [1.3.1] - 2017-11-14
+### Fixed
+- Revert optimization for `atomic.String.Store("")` which caused data races.
+
+## [1.3.0] - 2017-11-13
+### Added
+- Add `atomic.Bool.CAS` for compare-and-swap semantics on bools.
+
+### Changed
+- Optimize `atomic.String.Store("")` by avoiding an allocation.
+
+## [1.2.0] - 2017-04-12
+### Added
+- Shadow `atomic.Value` from `sync/atomic`.
+
+## [1.1.0] - 2017-03-10
+### Added
+- Add atomic `Float64` type.
+
+### Changed
+- Support new `go.uber.org/atomic` import path.
+
+## [1.0.0] - 2016-07-18
+
+- Initial release.
+
+[1.5.0]: https://github.com/uber-go/atomic/compare/v1.4.0...v1.5.0
+[1.4.0]: https://github.com/uber-go/atomic/compare/v1.3.2...v1.4.0
+[1.3.2]: https://github.com/uber-go/atomic/compare/v1.3.1...v1.3.2
+[1.3.1]: https://github.com/uber-go/atomic/compare/v1.3.0...v1.3.1
+[1.3.0]: https://github.com/uber-go/atomic/compare/v1.2.0...v1.3.0
+[1.2.0]: https://github.com/uber-go/atomic/compare/v1.1.0...v1.2.0
+[1.1.0]: https://github.com/uber-go/atomic/compare/v1.0.0...v1.1.0
+[1.0.0]: https://github.com/uber-go/atomic/releases/tag/v1.0.0
diff --git a/vendor/go.uber.org/atomic/Makefile b/vendor/go.uber.org/atomic/Makefile
index 1ef263075d762..39af0fb63f2f3 100644
--- a/vendor/go.uber.org/atomic/Makefile
+++ b/vendor/go.uber.org/atomic/Makefile
@@ -1,51 +1,35 @@
-# Many Go tools take file globs or directories as arguments instead of packages.
-PACKAGE_FILES ?= *.go
+# Directory to place `go install`ed binaries into.
+export GOBIN ?= $(shell pwd)/bin
-# For pre go1.6
-export GO15VENDOREXPERIMENT=1
+GOLINT = $(GOBIN)/golint
+GO_FILES ?= *.go
.PHONY: build
build:
- go build -i ./...
-
-
-.PHONY: install
-install:
- glide --version || go get github.com/Masterminds/glide
- glide install
-
+ go build ./...
.PHONY: test
test:
- go test -cover -race ./...
+ go test -race ./...
+.PHONY: gofmt
+gofmt:
+ $(eval FMT_LOG := $(shell mktemp -t gofmt.XXXXX))
+ gofmt -e -s -l $(GO_FILES) > $(FMT_LOG) || true
+ @[ ! -s "$(FMT_LOG)" ] || (echo "gofmt failed:" && cat $(FMT_LOG) && false)
-.PHONY: install_ci
-install_ci: install
- go get github.com/wadey/gocovmerge
- go get github.com/mattn/goveralls
- go get golang.org/x/tools/cmd/cover
-
-.PHONY: install_lint
-install_lint:
- go get golang.org/x/lint/golint
+$(GOLINT):
+ go install golang.org/x/lint/golint
+.PHONY: golint
+golint: $(GOLINT)
+ $(GOLINT) ./...
.PHONY: lint
-lint:
- @rm -rf lint.log
- @echo "Checking formatting..."
- @gofmt -d -s $(PACKAGE_FILES) 2>&1 | tee lint.log
- @echo "Checking vet..."
- @go vet ./... 2>&1 | tee -a lint.log;)
- @echo "Checking lint..."
- @golint $$(go list ./...) 2>&1 | tee -a lint.log
- @echo "Checking for unresolved FIXMEs..."
- @git grep -i fixme | grep -v -e vendor -e Makefile | tee -a lint.log
- @[ ! -s lint.log ]
-
-
-.PHONY: test_ci
-test_ci: install_ci build
- ./scripts/cover.sh $(shell go list $(PACKAGES))
+lint: gofmt golint
+
+.PHONY: cover
+cover:
+ go test -coverprofile=cover.out -coverpkg ./... -v ./...
+ go tool cover -html=cover.out -o cover.html
diff --git a/vendor/go.uber.org/atomic/README.md b/vendor/go.uber.org/atomic/README.md
index 62eb8e5760969..3cc368ba30eeb 100644
--- a/vendor/go.uber.org/atomic/README.md
+++ b/vendor/go.uber.org/atomic/README.md
@@ -3,9 +3,22 @@
Simple wrappers for primitive types to enforce atomic access.
## Installation
-`go get -u go.uber.org/atomic`
+
+```shell
+$ go get -u go.uber.org/atomic@v1
+```
+
+Note: If you are using Go modules, this package will fail to compile with the
+import path `github.com/uber-go/atomic`. To continue using that import path,
+you will have to add a `replace` directive to your `go.mod`, replacing
+`github.com/uber-go/atomic` with `go.uber.org/atomic`.
+
+```shell
+$ go mod edit -replace github.com/uber-go/atomic=go.uber.org/atomic@v1
+```
## Usage
+
The standard library's `sync/atomic` is powerful, but it's easy to forget which
variables must be accessed atomically. `go.uber.org/atomic` preserves all the
functionality of the standard library, but wraps the primitive types to
@@ -21,9 +34,11 @@ atom.CAS(40, 11)
See the [documentation][doc] for a complete API specification.
## Development Status
+
Stable.
-___
+---
+
Released under the [MIT License](LICENSE.txt).
[doc-img]: https://godoc.org/github.com/uber-go/atomic?status.svg
diff --git a/vendor/go.uber.org/atomic/go.mod b/vendor/go.uber.org/atomic/go.mod
new file mode 100644
index 0000000000000..a935daebb9f4d
--- /dev/null
+++ b/vendor/go.uber.org/atomic/go.mod
@@ -0,0 +1,10 @@
+module go.uber.org/atomic
+
+require (
+ github.com/davecgh/go-spew v1.1.1 // indirect
+ github.com/stretchr/testify v1.3.0
+ golang.org/x/lint v0.0.0-20190930215403-16217165b5de
+ golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c // indirect
+)
+
+go 1.13
diff --git a/vendor/go.uber.org/atomic/go.sum b/vendor/go.uber.org/atomic/go.sum
new file mode 100644
index 0000000000000..51b2b62afbcfe
--- /dev/null
+++ b/vendor/go.uber.org/atomic/go.sum
@@ -0,0 +1,22 @@
+github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
+github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
+github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
+github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
+github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de h1:5hukYrvBGR8/eNkX5mdUezrA6JiaEZDtJb9Ei+1LlBs=
+golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd h1:/e+gpKk9r3dJobndpTytxS2gOy6m5uvpg+ISQoEcusQ=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
+golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c h1:IGkKhmfzcztjm6gYkykvu/NiS8kaqbCWAEWWAyf8J5U=
+golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
+golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
diff --git a/vendor/go.uber.org/atomic/tools.go b/vendor/go.uber.org/atomic/tools.go
new file mode 100644
index 0000000000000..654f5b2fe5bac
--- /dev/null
+++ b/vendor/go.uber.org/atomic/tools.go
@@ -0,0 +1,28 @@
+// Copyright (c) 2019 Uber Technologies, Inc.
+//
+// Permission is hereby granted, free of charge, to any person obtaining a copy
+// of this software and associated documentation files (the "Software"), to deal
+// in the Software without restriction, including without limitation the rights
+// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+// copies of the Software, and to permit persons to whom the Software is
+// furnished to do so, subject to the following conditions:
+//
+// The above copyright notice and this permission notice shall be included in
+// all copies or substantial portions of the Software.
+//
+// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+// THE SOFTWARE.
+
+// +build tools
+
+package atomic
+
+import (
+ // Tools used during development.
+ _ "golang.org/x/lint/golint"
+)
diff --git a/vendor/golang.org/x/exp/AUTHORS b/vendor/golang.org/x/exp/AUTHORS
new file mode 100644
index 0000000000000..15167cd746c56
--- /dev/null
+++ b/vendor/golang.org/x/exp/AUTHORS
@@ -0,0 +1,3 @@
+# This source code refers to The Go Authors for copyright purposes.
+# The master list of authors is in the main Go distribution,
+# visible at http://tip.golang.org/AUTHORS.
diff --git a/vendor/golang.org/x/exp/CONTRIBUTORS b/vendor/golang.org/x/exp/CONTRIBUTORS
new file mode 100644
index 0000000000000..1c4577e968061
--- /dev/null
+++ b/vendor/golang.org/x/exp/CONTRIBUTORS
@@ -0,0 +1,3 @@
+# This source code was written by the Go contributors.
+# The master list of contributors is in the main Go distribution,
+# visible at http://tip.golang.org/CONTRIBUTORS.
diff --git a/vendor/golang.org/x/exp/LICENSE b/vendor/golang.org/x/exp/LICENSE
new file mode 100644
index 0000000000000..6a66aea5eafe0
--- /dev/null
+++ b/vendor/golang.org/x/exp/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/golang.org/x/exp/PATENTS b/vendor/golang.org/x/exp/PATENTS
new file mode 100644
index 0000000000000..733099041f84f
--- /dev/null
+++ b/vendor/golang.org/x/exp/PATENTS
@@ -0,0 +1,22 @@
+Additional IP Rights Grant (Patents)
+
+"This implementation" means the copyrightable works distributed by
+Google as part of the Go project.
+
+Google hereby grants to You a perpetual, worldwide, non-exclusive,
+no-charge, royalty-free, irrevocable (except as stated in this section)
+patent license to make, have made, use, offer to sell, sell, import,
+transfer and otherwise run, modify and propagate the contents of this
+implementation of Go, where such license applies only to those patent
+claims, both currently owned or controlled by Google and acquired in
+the future, licensable by Google that are necessarily infringed by this
+implementation of Go. This grant does not include claims that would be
+infringed only as a consequence of further modification of this
+implementation. If you or your agent or exclusive licensee institute or
+order or agree to the institution of patent litigation against any
+entity (including a cross-claim or counterclaim in a lawsuit) alleging
+that this implementation of Go or any code incorporated within this
+implementation of Go constitutes direct or contributory patent
+infringement, or inducement of patent infringement, then any patent
+rights granted to you under this License for this implementation of Go
+shall terminate as of the date such litigation is filed.
diff --git a/vendor/golang.org/x/exp/apidiff/README.md b/vendor/golang.org/x/exp/apidiff/README.md
new file mode 100644
index 0000000000000..3d9576c28663a
--- /dev/null
+++ b/vendor/golang.org/x/exp/apidiff/README.md
@@ -0,0 +1,624 @@
+# Checking Go Package API Compatibility
+
+The `apidiff` tool in this directory determines whether two versions of the same
+package are compatible. The goal is to help the developer make an informed
+choice of semantic version after they have changed the code of their module.
+
+`apidiff` reports two kinds of changes: incompatible ones, which require
+incrementing the major part of the semantic version, and compatible ones, which
+require a minor version increment. If no API changes are reported but there are
+code changes that could affect client code, then the patch version should
+be incremented.
+
+Because `apidiff` ignores package import paths, it may be used to display API
+differences between any two packages, not just different versions of the same
+package.
+
+The current version of `apidiff` compares only packages, not modules.
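+
+The same check is available programmatically via the `apidiff` package. A
+minimal sketch (the loading of the two `*types.Package` values is elided, and
+the variable names are illustrative):
+
+```
+// oldPkg and newPkg are assumed to be type-checked *types.Package values,
+// e.g. loaded with golang.org/x/tools/go/packages.
+report := apidiff.Changes(oldPkg, newPkg)
+for _, c := range report.Changes {
+	fmt.Println(c.Message, c.Compatible)
+}
+```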
+
+
+## Compatibility Desiderata
+
+Any tool that checks compatibility can offer only an approximation. No tool can
+detect behavioral changes; and even if it could, whether a behavioral change is
+a breaking change or not depends on many factors, such as whether it closes a
+security hole or fixes a bug. Even a change that causes some code to fail to
+compile may not be considered a breaking change by the developers or their
+users. It may only affect code marked as experimental or unstable, for
+example, or the break may only manifest in unlikely cases.
+
+For a tool to be useful, its notion of compatibility must be relaxed enough to
+allow reasonable changes, like adding a field to a struct, but strict enough to
+catch significant breaking changes. A tool that is too lax will miss important
+incompatibilities, and users will stop trusting it; one that is too strict may
+generate so much noise that users will ignore it.
+
+To a first approximation, this tool reports a change as incompatible if it could
+cause client code to stop compiling. But `apidiff` ignores five ways in which
+code may fail to compile after a change. Three of them are mentioned in the
+[Go 1 Compatibility Guarantee](https://golang.org/doc/go1compat).
+
+### Unkeyed Struct Literals
+
+Code that uses an unkeyed struct literal would fail to compile if a field was
+added to the struct, making any such addition an incompatible change. An example:
+
+```
+// old
+type Point struct { X, Y int }
+
+// new
+type Point struct { X, Y, Z int }
+
+// client
+p := pkg.Point{1, 2} // fails in new because there are more fields than expressions
+```
+Here and below, we provide three snippets: the code in the old version of the
+package, the code in the new version, and the code written in a client of the package,
+which refers to it by the name `pkg`. The client code compiles against the old
+code but not the new.
+
+### Embedding and Shadowing
+
+Adding an exported field to a struct can break code that embeds that struct,
+because the newly added field may conflict with an identically named field
+at the same struct depth. A selector referring to the latter would become
+ambiguous and thus erroneous.
+
+
+```
+// old
+type Point struct { X, Y int }
+
+// new
+type Point struct { X, Y, Z int }
+
+// client
+type z struct { Z int }
+
+var v struct {
+ pkg.Point
+ z
+}
+
+_ = v.Z // fails in new
+```
+In the new version, the last line fails to compile because there are two embedded `Z`
+fields at the same depth, one from `z` and one from `pkg.Point`.
+
+
+### Using an Identical Type Externally
+
+If it is possible for client code to write a type expression representing the
+underlying type of a defined type in a package, then external code can use it in
+assignments involving the package type, making any change to that type incompatible.
+```
+// old
+type Point struct { X, Y int }
+
+// new
+type Point struct { X, Y, Z int }
+
+// client
+var p struct { X, Y int } = pkg.Point{} // fails in new because of Point's extra field
+```
+Here, the external code could have used the provided name `Point`, but chose not
+to. I'll have more to say about this and related examples later.
+
+### unsafe.Sizeof and Friends
+
+Since `unsafe.Sizeof`, `unsafe.Offsetof` and `unsafe.Alignof` are constant
+expressions, they can be used in an array type literal:
+
+```
+// old
+type S struct{ X int }
+
+// new
+type S struct{ X, y int }
+
+// client
+var a [unsafe.Sizeof(pkg.S{})]int = [8]int{} // fails in new because S's size is not 8
+```
+Use of these operations could make many changes to a type potentially incompatible.
+
+
+### Type Switches
+
+A package change that merges two different types (with same underlying type)
+into a single new type may break type switches in clients that refer to both
+original types:
+
+```
+// old
+type T1 int
+type T2 int
+
+// new
+type T1 int
+type T2 = T1
+
+// client
+switch x.(type) {
+case T1:
+case T2:
+} // fails with new because two cases have the same type
+```
+This sort of incompatibility is sufficiently esoteric to ignore; the tool allows
+merging types.
+
+## First Attempt at a Definition
+
+Our first attempt at defining compatibility captures the idea that all the
+exported names in the old package must have compatible equivalents in the new
+package.
+
+A new package is compatible with an old one if and only if:
+- For every exported package-level name in the old package, the same name is
+ declared in the new at package level, and
+- the names denote the same kind of object (e.g. both are variables), and
+- the types of the objects are compatible.
+
+We will work out the details (and make some corrections) below, but it is clear
+already that we will need to determine what makes two types compatible. And
+whatever the definition of type compatibility, it's certainly true that if two
+types are the same, they are compatible. So we will need to decide what makes an
+old and new type the same. We will call this sameness relation _correspondence_.
+
+## Type Correspondence
+
+Go already has a definition of when two types are the same:
+[type identity](https://golang.org/ref/spec#Type_identity).
+But identity isn't adequate for our purpose: it says that two defined
+types are identical if they arise from the same definition, but it's unclear
+what "same" means when talking about two different packages (or two versions of
+a single package).
+
+The obvious change to the definition of identity is to require that old and new
+[defined types](https://golang.org/ref/spec#Type_definitions)
+have the same name instead. But that doesn't work either, for two
+reasons. First, type aliases can equate two defined types with different names:
+
+```
+// old
+type E int
+
+// new
+type t int
+type E = t
+```
+Second, an unexported type can be renamed:
+
+```
+// old
+type u1 int
+var V u1
+
+// new
+type u2 int
+var V u2
+```
+Here, even though `u1` and `u2` are unexported, their exported fields and
+methods are visible to clients, so they are part of the API. But since the name
+`u1` is not visible to clients, it can be changed compatibly. We say that `u1`
+and `u2` are _exposed_: a type is exposed if a client package can declare variables of that type.
+
+We will say that an old defined type _corresponds_ to a new one if they have the
+same name, or one can be renamed to the other without otherwise changing the
+API. In the first example above, old `E` and new `t` correspond. In the second,
+old `u1` and new `u2` correspond.
+
+Two or more old defined types can correspond to a single new type: we consider
+"merging" two types into one to be a compatible change. As mentioned above,
+code that uses both names in a type switch will fail, but we deliberately ignore
+this case. However, a single old type can correspond to only one new type.
+
+So far, we've explained what correspondence means for defined types. To extend
+the definition to all types, we parallel the language's definition of type
+identity. So, for instance, an old and a new slice type correspond if their
+element types correspond.
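+
+For example (a hypothetical package), the old and new `[]E` correspond because
+old `E` corresponds to new `t`:
+
+```
+// old
+type E int
+var V []E
+
+// new
+type t int
+type E = t
+var V []E
+```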
+
+## Definition of Compatibility
+
+We can now present the definition of compatibility used by `apidiff`.
+
+### Package Compatibility
+
+> A new package is compatible with an old one if:
+>1. Each exported name in the old package's scope also appears in the new
+>package's scope, and the object (constant, variable, function or type) denoted
+>by that name in the old package is compatible with the object denoted by the
+>name in the new package, and
+>2. For every exposed type that implements an exposed interface in the old package,
+> its corresponding type should implement the corresponding interface in the new package.
+>
+>Otherwise the packages are incompatible.
+
+As an aside, the tool also finds exported names in the new package that are not
+exported in the old, and marks them as compatible changes.
+
+Clause 2 is discussed further in "Whole-Package Compatibility."
+
+### Object Compatibility
+
+This section provides compatibility rules for constants, variables, functions
+and types.
+
+#### Constants
+
+>A new exported constant is compatible with an old one of the same name if and only if
+>1. Their types correspond, and
+>2. Their values are identical.
+
+It is tempting to allow changing a typed constant to an untyped one. That may
+seem harmless, but it can break code like this:
+
+```
+// old
+const C int64 = 1
+
+// new
+const C = 1
+
+// client
+var x = C // old type is int64, new is int
+var y int64 = x // fails with new: different types in assignment
+```
+
+A change to the value of a constant can break compatibility if the value is used
+in an array type:
+
+```
+// old
+const C = 1
+
+// new
+const C = 2
+
+// client
+var a [C]int = [1]int{} // fails with new because [2]int and [1]int are different types
+```
+Changes to constant values are rare, and determining whether they are compatible
+or not is better left to the user, so the tool reports them.
+
+#### Variables
+
+>A new exported variable is compatible with an old one of the same name if and
+>only if their types correspond.
+
+Correspondence doesn't look past names, so this rule does not prevent adding a
+field to `MyStruct` if the package declares `var V MyStruct`. It does, however, mean that
+
+```
+var V struct { X int }
+```
+is incompatible with
+```
+var V struct { X, Y int }
+```
+I discuss this at length below in the section "Compatibility, Types and Names."
+
+#### Functions
+
+>A new exported function or variable is compatible with an old function of the
+>same name if and only if their types (signatures) correspond.
+
+This rule captures the fact that, although many signature changes are compatible
+for all call sites, none are compatible for assignment:
+
+```
+var v func(int) = pkg.F
+```
+Here, `F` must be of type `func(int)` and not, for instance, `func(...int)` or `func(interface{})`.
+
+Note that the rule permits changing a function to a variable. This is a common
+practice, usually done for test stubbing, and cannot break any code at compile
+time.
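+
+A hypothetical example of that practice:
+
+```
+// old
+func F(int)
+
+// new
+var F func(int)
+
+// client
+pkg.F(1) // compiles with both old and new
+```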
+
+#### Exported Types
+
+> A new exported type is compatible with an old one if and only if their
+> names are the same and their types correspond.
+
+This rule seems far too strict. But, ignoring aliases for the moment, it demands only
+that the old and new _defined_ types correspond. Consider:
+```
+// old
+type T struct { X int }
+
+// new
+type T struct { X, Y int }
+```
+The addition of `Y` is a compatible change, because this rule does not require
+that the struct literals correspond, only that the defined types
+denoted by `T` correspond. (Remember that correspondence stops at type
+names.)
+
+If one type is an alias that refers to the corresponding defined type, the
+situation is the same:
+
+```
+// old
+type T struct { X int }
+
+// new
+type u struct { X, Y int }
+type T = u
+```
+Here, the only requirement is that old `T` corresponds to new `u`, not that the
+struct types correspond. (We can't tell from this snippet that the old `T` and
+the new `u` do correspond; that depends on whether `u` replaces `T` throughout
+the API.)
+
+However, the following change is incompatible, because the names do not
+denote corresponding types:
+
+```
+// old
+type T = struct { X int }
+
+// new
+type T = struct { X, Y int }
+```
+### Type Literal Compatibility
+
+Only five kinds of types can differ compatibly: defined types, structs,
+interfaces, channels and numeric types. We only consider the compatibility of
+the last four when they are the underlying type of a defined type. See
+"Compatibility, Types and Names" for a rationale.
+
+We justify the compatibility rules by enumerating all the ways a type
+can be used, and by showing that the allowed changes cannot break any code that
+uses values of the type in those ways.
+
+Values of all types can be used in assignments (including argument passing and
+function return), but we do not require that old and new types are assignment
+compatible. That is because we assume that the old and new packages are never
+used together: any given binary will link in either the old package or the new.
+So in describing how a type can be used in the sections below, we omit
+assignment.
+
+Any type can also be used in a type assertion or conversion. The changes we allow
+below may affect the run-time behavior of these operations, but they cannot affect
+whether they compile. The only such breaking change would be to change
+the type `T` in an assertion `x.T` so that it no longer implements the interface
+type of `x`; but the rules for interfaces below disallow that.
+
+> A new type is compatible with an old one if and only if they correspond, or
+> one of the cases below applies.
+
+#### Defined Types
+
+Other than assignment, the only ways to use a defined type are to access its
+methods, or to make use of the properties of its underlying type. Rule 2 below
+covers the latter, and rules 3 and 4 cover the former.
+
+> A new defined type is compatible with an old one if and only if all of the
+> following hold:
+>1. They correspond.
+>2. Their underlying types are compatible.
+>3. The new exported value method set is a superset of the old.
+>4. The new exported pointer method set is a superset of the old.
+
+An exported method set is a method set with all unexported methods removed.
+When comparing methods of a method set, we require identical names and
+corresponding signatures.
+
+Removing an exported method is clearly a breaking change. But removing an
+unexported one (or changing its signature) can be breaking as well, if it
+results in the type no longer implementing an interface. See "Whole-Package
+Compatibility," below.
+
+#### Channels
+
+> A new channel type is compatible with an old one if
+> 1. The element types correspond, and
+> 2. Either the directions are the same, or the new type has no direction.
+
+Other than assignment, the only ways to use values of a channel type are to send
+and receive on them, to close them, and to use them as map keys. Changes to a
+channel type cannot cause code that closes a channel or uses it as a map key to
+fail to compile, so we need not consider those operations.
+
+Rule 1 ensures that any operations on the values sent or received will compile.
+Rule 2 captures the fact that any program that compiles with a directed channel
+must use either only sends, or only receives, so allowing the other operation
+by removing the channel direction cannot break any code.
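+
+For example (hypothetical), removing the direction from a defined channel type
+is a compatible change:
+
+```
+// old
+type C <-chan int
+
+// new
+type C chan int
+
+// client
+var c pkg.C
+_ = <-c // receives compile with both old and new
+```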
+
+
+#### Interfaces
+
+> A new interface is compatible with an old one if and only if:
+> 1. The old interface does not have an unexported method, and it corresponds
+> to the new interface (i.e. they have the same method set), or
+> 2. The old interface has an unexported method and the new exported method set is a
+> superset of the old.
+
+Other than assignment, the only ways to use an interface are to implement it,
+embed it, or call one of its methods. (Interface values can also be used as map
+keys, but that cannot cause a compile-time error.)
+
+Certainly, removing an exported method from an interface could break a client
+call, so neither rule allows it.
+
+Rule 1 also disallows adding a method to an interface without an existing unexported
+method. Such an interface can be implemented in client code. If adding a method
+were allowed, a type that implements the old interface could fail to implement
+the new one:
+
+```
+type I interface { M1() } // old
+type I interface { M1(); M2() } // new
+
+// client
+type t struct{}
+func (t) M1() {}
+var i pkg.I = t{} // fails with new, because t lacks M2
+```
+
+Rule 2 is based on the observation that if an interface has an unexported
+method, the only way a client can implement it is to embed it.
+Adding a method is compatible in this case, because the embedding struct will
+continue to implement the interface. Adding a method also cannot break any call
+sites, since no program that compiles could have any such call sites.
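+
+A hypothetical example of rule 2:
+
+```
+// old
+type I interface { m(); M1() }
+
+// new
+type I interface { m(); M1(); M2() }
+
+// client
+type t struct{ pkg.I }
+var i pkg.I = t{} // compiles with both old and new, thanks to embedding
+```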
+
+#### Structs
+
+> A new struct is compatible with an old one if all of the following hold:
+> 1. The new set of top-level exported fields is a superset of the old.
+> 2. The new set of _selectable_ exported fields is a superset of the old.
+> 3. If the old struct is comparable, so is the new one.
+
+The set of selectable exported fields is the set of exported fields `F`
+such that `x.F` is a valid selector expression for a value `x` of the struct
+type. `F` may be at the top level of the struct, or it may be a field of an
+embedded struct.
+
+Two fields are the same if they have the same name and corresponding types.
+
+Other than assignment, there are only four ways to use a struct: write a struct
+literal, select a field, use a value of the struct as a map key, or compare two
+values for equality. The first clause ensures that struct literals compile; the
+second, that selections compile; and the third, that equality expressions and
+map index expressions compile.
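+
+For instance (names illustrative), embedding a struct adds a top-level field
+`E` and a selectable field `Y` while preserving comparability, so this change
+is compatible:
+
+```
+// old
+type S struct { X int }
+
+// new
+type E struct { Y int }
+type S struct { X int; E }
+
+// client
+var s pkg.S
+_ = s.X // compiles with both; s.Y additionally compiles with new
+```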
+
+#### Numeric Types
+
+> A new numeric type is compatible with an old one if and only if they are
+> both unsigned integers, both signed integers, both floats or both complex
+> types, and the new one is at least as large as the old on both 32-bit and
+> 64-bit architectures.
+
+Other than in assignments, numeric types appear in arithmetic and comparison
+expressions. Since all arithmetic operations but shifts (see below) require that
+operand types be identical, and by assumption the old and new types underlie
+defined types (see "Compatibility, Types and Names," below), there is no way for
+client code to write an arithmetic expression that compiles with operands of the
+old type but not the new.
+
+Numeric types can also appear in type switches and type assertions. Again, since
+the old and new types underlie defined types, type switches and type assertions
+that compiled using the old defined type will continue to compile with the new
+defined type.
+
+Going from an unsigned to a signed integer type is an incompatible change for
+the sole reason that only an unsigned type can appear as the right operand of a
+shift. If this rule is relaxed, then changes from an unsigned type to a larger
+signed type would be compatible. See [this
+issue](https://github.com/golang/go/issues/19113).
+
+Only integer types can be used in bitwise and shift operations, and for indexing
+slices and arrays. That is why switching from an integer to a floating-point
+type--even one that can represent all values of the integer type--is an
+incompatible change.
+
+
+Conversions from floating-point to complex types or vice versa are not permitted
+(the predeclared functions real, imag, and complex must be used instead). To
+prevent valid floating-point or complex conversions from becoming invalid,
+changing a floating-point type to a complex type or vice versa is considered an
+incompatible change.
+
+Although conversions between any two integer types are valid, assigning a
+constant value to a variable of integer type that is too small to represent the
+constant is not permitted. That is why the only compatible changes are to
+a new type whose values are a superset of the old. The requirement that the new
+set of values must include the old on both 32-bit and 64-bit machines allows
+conversions from `int32` to `int` and from `int` to `int64`, but not the other
+direction; and similarly for `uint`.
+
+Changing a type to or from `uintptr` is considered an incompatible change. Since
+its size is not specified, there is no way to know whether the new type's values
+are a superset of the old type's.
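+
+To illustrate (a hypothetical change), widening `int32` to `int64` would be
+compatible, but the narrowing below is not:
+
+```
+// old
+type N int64
+
+// new
+type N int32
+
+// client
+var n pkg.N = 1 << 40 // fails with new: 1<<40 overflows int32
+```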
+
+## Whole-Package Compatibility
+
+Some changes that are compatible for a single type are not compatible when the
+package is considered as a whole. For example, if you remove an unexported
+method on a defined type, it may no longer implement an interface of the
+package. This can break client code:
+
+```
+// old
+type T int
+func (T) m() {}
+type I interface { m() }
+
+// new
+type T int // no method m anymore
+
+// client
+var i pkg.I = pkg.T{} // fails with new because T lacks m
+```
+
+Similarly, adding a method to an interface can cause defined types
+in the package to stop implementing it.
+
+The second clause in the definition for package compatibility handles these
+cases. To repeat:
+> 2. For every exposed type that implements an exposed interface in the old package,
+> its corresponding type should implement the corresponding interface in the new package.
+
+Recall that a type is exposed if it is part of the package's API, even if it is
+unexported.
+
+Other incompatibilities that involve more than one type in the package can arise
+whenever two types with identical underlying types exist in the old or new
+package. Here, a change "splits" an identical underlying type into two, breaking
+conversions:
+
+```
+// old
+type B struct { X int }
+type C struct { X int }
+
+// new
+type B struct { X int }
+type C struct { X, Y int }
+
+// client
+var b B
+_ = C(b) // fails with new: cannot convert B to C
+```
+Finally, changes that are compatible for the package in which they occur can
+break downstream packages. That can happen even if they involve unexported
+methods, thanks to embedding.
+
+The definitions given here don't account for these sorts of problems.
+
+
+## Compatibility, Types and Names
+
+The above definitions state that the only types that can differ compatibly are
+defined types and the types that underlie them. Changes to other type literals
+are considered incompatible. For instance, it is considered an incompatible
+change to add a field to the struct in this variable declaration:
+
+```
+var V struct { X int }
+```
+or this alias definition:
+```
+type T = struct { X int }
+```
+
+We make this choice to keep the definition of compatibility (relatively) simple.
+A more precise definition could, for instance, distinguish between
+
+```
+func F(struct { X int })
+```
+where any changes to the struct are incompatible, and
+
+```
+func F(struct { X, u int })
+```
+where adding a field is compatible (since clients cannot write the signature,
+and thus cannot assign `F` to a variable of the signature type). The definition
+should then also allow other function signature changes that only require
+call-site compatibility, like
+
+```
+func F(struct { X, u int }, ...int)
+```
+The result would be a much more complex definition with little benefit, since
+the examples in this section rarely arise in practice.
diff --git a/vendor/golang.org/x/exp/apidiff/apidiff.go b/vendor/golang.org/x/exp/apidiff/apidiff.go
new file mode 100644
index 0000000000000..76669d8b053d7
--- /dev/null
+++ b/vendor/golang.org/x/exp/apidiff/apidiff.go
@@ -0,0 +1,220 @@
+// TODO: test swap corresponding types (e.g. u1 <-> u2 and u2 <-> u1)
+// TODO: test exported alias refers to something in another package -- does correspondence work then?
+// TODO: CODE COVERAGE
+// TODO: note that we may miss correspondences because we bail early when we compare a signature (e.g. when lengths differ; we could do up to the shorter)
+// TODO: if you add an unexported method to an exposed interface, you have to check that
+// every exposed type that previously implemented the interface still does. Otherwise
+// an external assignment of the exposed type to the interface type could fail.
+// TODO: check constant values: large values aren't representable by some types.
+// TODO: Document all the incompatibilities we don't check for.
+
+package apidiff
+
+import (
+ "fmt"
+ "go/constant"
+ "go/token"
+ "go/types"
+)
+
+// Changes reports on the differences between the APIs of the old and new packages.
+// It classifies each difference as either compatible or incompatible (breaking). For
+// a detailed discussion of what constitutes an incompatible change, see the package
+// documentation.
+func Changes(old, new *types.Package) Report {
+ d := newDiffer(old, new)
+ d.checkPackage()
+ r := Report{}
+ for _, m := range d.incompatibles.collect() {
+ r.Changes = append(r.Changes, Change{Message: m, Compatible: false})
+ }
+ for _, m := range d.compatibles.collect() {
+ r.Changes = append(r.Changes, Change{Message: m, Compatible: true})
+ }
+ return r
+}
+
+type differ struct {
+ old, new *types.Package
+ // Correspondences between named types.
+ // Even though it is the named types (*types.Named) that correspond, we use
+ // *types.TypeName as a map key because they are canonical.
+ // The values can be either named types or basic types.
+ correspondMap map[*types.TypeName]types.Type
+
+ // Messages.
+ incompatibles messageSet
+ compatibles messageSet
+}
+
+func newDiffer(old, new *types.Package) *differ {
+ return &differ{
+ old: old,
+ new: new,
+ correspondMap: map[*types.TypeName]types.Type{},
+ incompatibles: messageSet{},
+ compatibles: messageSet{},
+ }
+}
+
+func (d *differ) incompatible(obj types.Object, part, format string, args ...interface{}) {
+ addMessage(d.incompatibles, obj, part, format, args)
+}
+
+func (d *differ) compatible(obj types.Object, part, format string, args ...interface{}) {
+ addMessage(d.compatibles, obj, part, format, args)
+}
+
+func addMessage(ms messageSet, obj types.Object, part, format string, args []interface{}) {
+ ms.add(obj, part, fmt.Sprintf(format, args...))
+}
+
+func (d *differ) checkPackage() {
+ // Old changes.
+ for _, name := range d.old.Scope().Names() {
+ oldobj := d.old.Scope().Lookup(name)
+ if !oldobj.Exported() {
+ continue
+ }
+ newobj := d.new.Scope().Lookup(name)
+ if newobj == nil {
+ d.incompatible(oldobj, "", "removed")
+ continue
+ }
+ d.checkObjects(oldobj, newobj)
+ }
+ // New additions.
+ for _, name := range d.new.Scope().Names() {
+ newobj := d.new.Scope().Lookup(name)
+ if newobj.Exported() && d.old.Scope().Lookup(name) == nil {
+ d.compatible(newobj, "", "added")
+ }
+ }
+
+ // Whole-package satisfaction.
+ // For every old exposed interface oIface and its corresponding new interface nIface...
+ for otn1, nt1 := range d.correspondMap {
+ oIface, ok := otn1.Type().Underlying().(*types.Interface)
+ if !ok {
+ continue
+ }
+ nIface, ok := nt1.Underlying().(*types.Interface)
+ if !ok {
+ // If nt1 isn't an interface but otn1 is, then that's an incompatibility that
+ // we've already noticed, so there's no need to do anything here.
+ continue
+ }
+ // For every old type that implements oIface, its corresponding new type must implement
+ // nIface.
+ for otn2, nt2 := range d.correspondMap {
+ if otn1 == otn2 {
+ continue
+ }
+ if types.Implements(otn2.Type(), oIface) && !types.Implements(nt2, nIface) {
+ d.incompatible(otn2, "", "no longer implements %s", objectString(otn1))
+ }
+ }
+ }
+}
+
+func (d *differ) checkObjects(old, new types.Object) {
+ switch old := old.(type) {
+ case *types.Const:
+ if new, ok := new.(*types.Const); ok {
+ d.constChanges(old, new)
+ return
+ }
+ case *types.Var:
+ if new, ok := new.(*types.Var); ok {
+ d.checkCorrespondence(old, "", old.Type(), new.Type())
+ return
+ }
+ case *types.Func:
+ switch new := new.(type) {
+ case *types.Func:
+ d.checkCorrespondence(old, "", old.Type(), new.Type())
+ return
+ case *types.Var:
+ d.compatible(old, "", "changed from func to var")
+ d.checkCorrespondence(old, "", old.Type(), new.Type())
+ return
+
+ }
+ case *types.TypeName:
+ if new, ok := new.(*types.TypeName); ok {
+ d.checkCorrespondence(old, "", old.Type(), new.Type())
+ return
+ }
+ default:
+ panic("unexpected obj type")
+ }
+	// We get here only if the kind of object changed (e.g. from var to func).
+ d.incompatible(old, "", "changed from %s to %s",
+ objectKindString(old), objectKindString(new))
+}
+
+// Compare two constants.
+func (d *differ) constChanges(old, new *types.Const) {
+ ot := old.Type()
+ nt := new.Type()
+ // Check for change of type.
+ if !d.correspond(ot, nt) {
+ d.typeChanged(old, "", ot, nt)
+ return
+ }
+ // Check for change of value.
+ // We know the types are the same, so constant.Compare shouldn't panic.
+ if !constant.Compare(old.Val(), token.EQL, new.Val()) {
+ d.incompatible(old, "", "value changed from %s to %s", old.Val(), new.Val())
+ }
+}
+
+func objectKindString(obj types.Object) string {
+ switch obj.(type) {
+ case *types.Const:
+ return "const"
+ case *types.Var:
+ return "var"
+ case *types.Func:
+ return "func"
+ case *types.TypeName:
+ return "type"
+ default:
+ return "???"
+ }
+}
+
+func (d *differ) checkCorrespondence(obj types.Object, part string, old, new types.Type) {
+ if !d.correspond(old, new) {
+ d.typeChanged(obj, part, old, new)
+ }
+}
+
+func (d *differ) typeChanged(obj types.Object, part string, old, new types.Type) {
+ old = removeNamesFromSignature(old)
+ new = removeNamesFromSignature(new)
+ olds := types.TypeString(old, types.RelativeTo(d.old))
+ news := types.TypeString(new, types.RelativeTo(d.new))
+ d.incompatible(obj, part, "changed from %s to %s", olds, news)
+}
+
+// go/types always includes the argument and result names when formatting a signature.
+// Since these can change without affecting compatibility, we don't want users to
+// be distracted by them, so we remove them.
+func removeNamesFromSignature(t types.Type) types.Type {
+ sig, ok := t.(*types.Signature)
+ if !ok {
+ return t
+ }
+
+ dename := func(p *types.Tuple) *types.Tuple {
+ var vars []*types.Var
+ for i := 0; i < p.Len(); i++ {
+ v := p.At(i)
+ vars = append(vars, types.NewVar(v.Pos(), v.Pkg(), "", v.Type()))
+ }
+ return types.NewTuple(vars...)
+ }
+
+ return types.NewSignature(sig.Recv(), dename(sig.Params()), dename(sig.Results()), sig.Variadic())
+}
diff --git a/vendor/golang.org/x/exp/apidiff/compatibility.go b/vendor/golang.org/x/exp/apidiff/compatibility.go
new file mode 100644
index 0000000000000..f78da8f3c92d9
--- /dev/null
+++ b/vendor/golang.org/x/exp/apidiff/compatibility.go
@@ -0,0 +1,361 @@
+package apidiff
+
+import (
+ "fmt"
+ "go/types"
+ "reflect"
+)
+
+func (d *differ) checkCompatible(otn *types.TypeName, old, new types.Type) {
+ switch old := old.(type) {
+ case *types.Interface:
+ if new, ok := new.(*types.Interface); ok {
+ d.checkCompatibleInterface(otn, old, new)
+ return
+ }
+
+ case *types.Struct:
+ if new, ok := new.(*types.Struct); ok {
+ d.checkCompatibleStruct(otn, old, new)
+ return
+ }
+
+ case *types.Chan:
+ if new, ok := new.(*types.Chan); ok {
+ d.checkCompatibleChan(otn, old, new)
+ return
+ }
+
+ case *types.Basic:
+ if new, ok := new.(*types.Basic); ok {
+ d.checkCompatibleBasic(otn, old, new)
+ return
+ }
+
+ case *types.Named:
+ panic("unreachable")
+
+ default:
+ d.checkCorrespondence(otn, "", old, new)
+ return
+
+ }
+ // Here if old and new are different kinds of types.
+ d.typeChanged(otn, "", old, new)
+}
+
+func (d *differ) checkCompatibleChan(otn *types.TypeName, old, new *types.Chan) {
+ d.checkCorrespondence(otn, ", element type", old.Elem(), new.Elem())
+ if old.Dir() != new.Dir() {
+ if new.Dir() == types.SendRecv {
+ d.compatible(otn, "", "removed direction")
+ } else {
+ d.incompatible(otn, "", "changed direction")
+ }
+ }
+}
+
+func (d *differ) checkCompatibleBasic(otn *types.TypeName, old, new *types.Basic) {
+ // Certain changes to numeric types are compatible. Approximately, the info must
+ // be the same, and the new values must be a superset of the old.
+ if old.Kind() == new.Kind() {
+ // old and new are identical
+ return
+ }
+ if compatibleBasics[[2]types.BasicKind{old.Kind(), new.Kind()}] {
+ d.compatible(otn, "", "changed from %s to %s", old, new)
+ } else {
+ d.typeChanged(otn, "", old, new)
+ }
+}
+
+// All pairs (old, new) of compatible basic types.
+var compatibleBasics = map[[2]types.BasicKind]bool{
+ {types.Uint8, types.Uint16}: true,
+ {types.Uint8, types.Uint32}: true,
+ {types.Uint8, types.Uint}: true,
+ {types.Uint8, types.Uint64}: true,
+ {types.Uint16, types.Uint32}: true,
+ {types.Uint16, types.Uint}: true,
+ {types.Uint16, types.Uint64}: true,
+ {types.Uint32, types.Uint}: true,
+ {types.Uint32, types.Uint64}: true,
+ {types.Uint, types.Uint64}: true,
+ {types.Int8, types.Int16}: true,
+ {types.Int8, types.Int32}: true,
+ {types.Int8, types.Int}: true,
+ {types.Int8, types.Int64}: true,
+ {types.Int16, types.Int32}: true,
+ {types.Int16, types.Int}: true,
+ {types.Int16, types.Int64}: true,
+ {types.Int32, types.Int}: true,
+ {types.Int32, types.Int64}: true,
+ {types.Int, types.Int64}: true,
+ {types.Float32, types.Float64}: true,
+ {types.Complex64, types.Complex128}: true,
+}
+
+// Interface compatibility:
+// If the old interface has an unexported method, the new interface is compatible
+// if its exported method set is a superset of the old. (Users could not implement,
+// only embed.)
+//
+// If the old interface did not have an unexported method, the new interface is
+// compatible if its exported method set is the same as the old, and it has no
+// unexported methods. (Adding an unexported method makes the interface
+// unimplementable outside the package.)
+//
+// TODO: must also check that if any methods were added or removed, every exposed
+// type in the package that implemented the interface in old still implements it in
+// new. Otherwise external assignments could fail.
+func (d *differ) checkCompatibleInterface(otn *types.TypeName, old, new *types.Interface) {
+ // Method sets are checked in checkCompatibleDefined.
+
+ // Does the old interface have an unexported method?
+ if unexportedMethod(old) != nil {
+ d.checkMethodSet(otn, old, new, additionsCompatible)
+ } else {
+ // Perform an equivalence check, but with more information.
+ d.checkMethodSet(otn, old, new, additionsIncompatible)
+ if u := unexportedMethod(new); u != nil {
+ d.incompatible(otn, u.Name(), "added unexported method")
+ }
+ }
+}
+
+// Return an unexported method from the method set of t, or nil if there are none.
+func unexportedMethod(t *types.Interface) *types.Func {
+ for i := 0; i < t.NumMethods(); i++ {
+ if m := t.Method(i); !m.Exported() {
+ return m
+ }
+ }
+ return nil
+}
+
+// We need to check three things for structs:
+// 1. The set of exported fields must be compatible. This ensures that keyed struct
+// literals continue to compile. (There is no compatibility guarantee for unkeyed
+// struct literals.)
+// 2. The set of exported *selectable* fields must be compatible. This includes the exported
+// fields of all embedded structs. This ensures that selections continue to compile.
+// 3. If the old struct is comparable, so must the new one be. This ensures that equality
+// expressions and uses of struct values as map keys continue to compile.
+//
+// An unexported embedded struct can't appear in a struct literal outside the
+// package, so it doesn't have to be present, or have the same name, in the new
+// struct.
+//
+// Field tags are ignored: they have no compile-time implications.
+func (d *differ) checkCompatibleStruct(obj types.Object, old, new *types.Struct) {
+ d.checkCompatibleObjectSets(obj, exportedFields(old), exportedFields(new))
+ d.checkCompatibleObjectSets(obj, exportedSelectableFields(old), exportedSelectableFields(new))
+ // Removing comparability from a struct is an incompatible change.
+ if types.Comparable(old) && !types.Comparable(new) {
+ d.incompatible(obj, "", "old is comparable, new is not")
+ }
+}
+
+// exportedFields collects all the immediate fields of the struct that are exported.
+// This is also the set of exported keys for keyed struct literals.
+func exportedFields(s *types.Struct) map[string]types.Object {
+ m := map[string]types.Object{}
+ for i := 0; i < s.NumFields(); i++ {
+ f := s.Field(i)
+ if f.Exported() {
+ m[f.Name()] = f
+ }
+ }
+ return m
+}
+
+// exportedSelectableFields collects all the exported fields of the struct, including
+// exported fields of embedded structs.
+//
+// We traverse the struct breadth-first, because of the rule that a lower-depth field
+// shadows one at a higher depth.
+func exportedSelectableFields(s *types.Struct) map[string]types.Object {
+ var (
+ m = map[string]types.Object{}
+ next []*types.Struct // embedded structs at the next depth
+ seen []*types.Struct // to handle recursive embedding
+ )
+ for cur := []*types.Struct{s}; len(cur) > 0; cur, next = next, nil {
+ seen = append(seen, cur...)
+ // We only want to consider unambiguous fields. Ambiguous fields (where there
+ // is more than one field of the same name at the same level) are legal, but
+ // cannot be selected.
+ for name, f := range unambiguousFields(cur) {
+ // Record an exported field we haven't seen before. If we have seen it,
+			// it occurred at a lower depth, so it shadows this field.
+ if f.Exported() && m[name] == nil {
+ m[name] = f
+ }
+ // Remember embedded structs for processing at the next depth,
+ // but only if we haven't seen the struct at this depth or above.
+ if !f.Anonymous() {
+ continue
+ }
+ t := f.Type().Underlying()
+ if p, ok := t.(*types.Pointer); ok {
+ t = p.Elem().Underlying()
+ }
+ if t, ok := t.(*types.Struct); ok && !contains(seen, t) {
+ next = append(next, t)
+ }
+ }
+ }
+ return m
+}
+
+func contains(ts []*types.Struct, t *types.Struct) bool {
+ for _, s := range ts {
+ if types.Identical(s, t) {
+ return true
+ }
+ }
+ return false
+}
+
+// Given a set of structs at the same depth, the unambiguous fields are the ones whose
+// names appear exactly once.
+func unambiguousFields(structs []*types.Struct) map[string]*types.Var {
+ fields := map[string]*types.Var{}
+ seen := map[string]bool{}
+ for _, s := range structs {
+ for i := 0; i < s.NumFields(); i++ {
+ f := s.Field(i)
+ name := f.Name()
+ if seen[name] {
+ delete(fields, name)
+ } else {
+ seen[name] = true
+ fields[name] = f
+ }
+ }
+ }
+ return fields
+}
+
+// Anything removed or changed from the old set is an incompatible change.
+// Anything added to the new set is a compatible change.
+func (d *differ) checkCompatibleObjectSets(obj types.Object, old, new map[string]types.Object) {
+ for name, oldo := range old {
+ newo := new[name]
+ if newo == nil {
+ d.incompatible(obj, name, "removed")
+ } else {
+ d.checkCorrespondence(obj, name, oldo.Type(), newo.Type())
+ }
+ }
+ for name := range new {
+ if old[name] == nil {
+ d.compatible(obj, name, "added")
+ }
+ }
+}
+
+func (d *differ) checkCompatibleDefined(otn *types.TypeName, old *types.Named, new types.Type) {
+ // We've already checked that old and new correspond.
+ d.checkCompatible(otn, old.Underlying(), new.Underlying())
+ // If there are different kinds of types (e.g. struct and interface), don't bother checking
+ // the method sets.
+ if reflect.TypeOf(old.Underlying()) != reflect.TypeOf(new.Underlying()) {
+ return
+ }
+ // Interface method sets are checked in checkCompatibleInterface.
+ if _, ok := old.Underlying().(*types.Interface); ok {
+ return
+ }
+
+	// A new method set is compatible with an old one if the new exported methods are a superset of the old.
+ d.checkMethodSet(otn, old, new, additionsCompatible)
+ d.checkMethodSet(otn, types.NewPointer(old), types.NewPointer(new), additionsCompatible)
+}
+
+const (
+ additionsCompatible = true
+ additionsIncompatible = false
+)
+
+func (d *differ) checkMethodSet(otn *types.TypeName, oldt, newt types.Type, addcompat bool) {
+ // TODO: find a way to use checkCompatibleObjectSets for this.
+ oldMethodSet := exportedMethods(oldt)
+ newMethodSet := exportedMethods(newt)
+ msname := otn.Name()
+ if _, ok := oldt.(*types.Pointer); ok {
+ msname = "*" + msname
+ }
+ for name, oldMethod := range oldMethodSet {
+ newMethod := newMethodSet[name]
+ if newMethod == nil {
+ var part string
+ // Due to embedding, it's possible that the method's receiver type is not
+ // the same as the defined type whose method set we're looking at. So for
+ // a type T with removed method M that is embedded in some other type U,
+ // we will generate two "removed" messages for T.M, one for its own type
+ // T and one for the embedded type U. We want both messages to appear,
+ // but the messageSet dedup logic will allow only one message for a given
+ // object. So use the part string to distinguish them.
+ if receiverNamedType(oldMethod).Obj() != otn {
+ part = fmt.Sprintf(", method set of %s", msname)
+ }
+ d.incompatible(oldMethod, part, "removed")
+ } else {
+ obj := oldMethod
+ // If a value method is changed to a pointer method and has a signature
+ // change, then we can get two messages for the same method definition: one
+ // for the value method set that says it's removed, and another for the
+ // pointer method set that says it changed. To keep both messages (since
+ // messageSet dedups), use newMethod for the second. (Slight hack.)
+ if !hasPointerReceiver(oldMethod) && hasPointerReceiver(newMethod) {
+ obj = newMethod
+ }
+ d.checkCorrespondence(obj, "", oldMethod.Type(), newMethod.Type())
+ }
+ }
+
+ // Check for added methods.
+ for name, newMethod := range newMethodSet {
+ if oldMethodSet[name] == nil {
+ if addcompat {
+ d.compatible(newMethod, "", "added")
+ } else {
+ d.incompatible(newMethod, "", "added")
+ }
+ }
+ }
+}
+
+// exportedMethods collects all the exported methods of type's method set.
+func exportedMethods(t types.Type) map[string]types.Object {
+ m := map[string]types.Object{}
+ ms := types.NewMethodSet(t)
+ for i := 0; i < ms.Len(); i++ {
+ obj := ms.At(i).Obj()
+ if obj.Exported() {
+ m[obj.Name()] = obj
+ }
+ }
+ return m
+}
+
+func receiverType(method types.Object) types.Type {
+ return method.Type().(*types.Signature).Recv().Type()
+}
+
+func receiverNamedType(method types.Object) *types.Named {
+ switch t := receiverType(method).(type) {
+ case *types.Pointer:
+ return t.Elem().(*types.Named)
+ case *types.Named:
+ return t
+ default:
+ panic("unreachable")
+ }
+}
+
+func hasPointerReceiver(method types.Object) bool {
+ _, ok := receiverType(method).(*types.Pointer)
+ return ok
+}
diff --git a/vendor/golang.org/x/exp/apidiff/correspondence.go b/vendor/golang.org/x/exp/apidiff/correspondence.go
new file mode 100644
index 0000000000000..bd14c094b5637
--- /dev/null
+++ b/vendor/golang.org/x/exp/apidiff/correspondence.go
@@ -0,0 +1,219 @@
+package apidiff
+
+import (
+ "go/types"
+ "sort"
+)
+
+// Two types correspond if they are identical except for defined types,
+// which must correspond.
+//
+// Two defined types correspond if they can be interchanged in the old and new APIs,
+// possibly after a renaming.
+//
+// This is not a pure function. If we come across named types while traversing,
+// we establish correspondence.
+func (d *differ) correspond(old, new types.Type) bool {
+ return d.corr(old, new, nil)
+}
+
+// corr determines whether old and new correspond. The argument p is a list of
+// known interface identities, to avoid infinite recursion.
+//
+// corr calls itself recursively as much as possible, to establish more
+// correspondences and so check more of the API. E.g. if the new function has more
+// parameters than the old, compare all the old ones before returning false.
+//
+// Compare this to the implementation of go/types.Identical.
+func (d *differ) corr(old, new types.Type, p *ifacePair) bool {
+ // Structure copied from types.Identical.
+ switch old := old.(type) {
+ case *types.Basic:
+ return types.Identical(old, new)
+
+ case *types.Array:
+ if new, ok := new.(*types.Array); ok {
+ return d.corr(old.Elem(), new.Elem(), p) && old.Len() == new.Len()
+ }
+
+ case *types.Slice:
+ if new, ok := new.(*types.Slice); ok {
+ return d.corr(old.Elem(), new.Elem(), p)
+ }
+
+ case *types.Map:
+ if new, ok := new.(*types.Map); ok {
+ return d.corr(old.Key(), new.Key(), p) && d.corr(old.Elem(), new.Elem(), p)
+ }
+
+ case *types.Chan:
+ if new, ok := new.(*types.Chan); ok {
+ return d.corr(old.Elem(), new.Elem(), p) && old.Dir() == new.Dir()
+ }
+
+ case *types.Pointer:
+ if new, ok := new.(*types.Pointer); ok {
+ return d.corr(old.Elem(), new.Elem(), p)
+ }
+
+ case *types.Signature:
+ if new, ok := new.(*types.Signature); ok {
+ pe := d.corr(old.Params(), new.Params(), p)
+ re := d.corr(old.Results(), new.Results(), p)
+ return old.Variadic() == new.Variadic() && pe && re
+ }
+
+ case *types.Tuple:
+ if new, ok := new.(*types.Tuple); ok {
+ for i := 0; i < old.Len(); i++ {
+ if i >= new.Len() || !d.corr(old.At(i).Type(), new.At(i).Type(), p) {
+ return false
+ }
+ }
+ return old.Len() == new.Len()
+ }
+
+ case *types.Struct:
+ if new, ok := new.(*types.Struct); ok {
+ for i := 0; i < old.NumFields(); i++ {
+ if i >= new.NumFields() {
+ return false
+ }
+ of := old.Field(i)
+ nf := new.Field(i)
+ if of.Anonymous() != nf.Anonymous() ||
+ old.Tag(i) != new.Tag(i) ||
+ !d.corr(of.Type(), nf.Type(), p) ||
+ !d.corrFieldNames(of, nf) {
+ return false
+ }
+ }
+ return old.NumFields() == new.NumFields()
+ }
+
+ case *types.Interface:
+ if new, ok := new.(*types.Interface); ok {
+ // Deal with circularity. See the comment in types.Identical.
+ q := &ifacePair{old, new, p}
+ for p != nil {
+ if p.identical(q) {
+ return true // same pair was compared before
+ }
+ p = p.prev
+ }
+ oldms := d.sortedMethods(old)
+ newms := d.sortedMethods(new)
+ for i, om := range oldms {
+ if i >= len(newms) {
+ return false
+ }
+ nm := newms[i]
+ if d.methodID(om) != d.methodID(nm) || !d.corr(om.Type(), nm.Type(), q) {
+ return false
+ }
+ }
+ return old.NumMethods() == new.NumMethods()
+ }
+
+ case *types.Named:
+ if new, ok := new.(*types.Named); ok {
+ return d.establishCorrespondence(old, new)
+ }
+ if new, ok := new.(*types.Basic); ok {
+ // Basic types are defined types, too, so we have to support them.
+
+ return d.establishCorrespondence(old, new)
+ }
+
+ default:
+ panic("unknown type kind")
+ }
+ return false
+}
+
+// Compare old and new field names. We are determining correspondence across packages,
+// so just compare names, not packages. For an unexported, embedded field of named
+// type (non-named embedded fields are possible with aliases), we check that the type
+// names correspond. We check the types for correspondence before this is called, so
+// we've established correspondence.
+func (d *differ) corrFieldNames(of, nf *types.Var) bool {
+ if of.Anonymous() && nf.Anonymous() && !of.Exported() && !nf.Exported() {
+ if on, ok := of.Type().(*types.Named); ok {
+ nn := nf.Type().(*types.Named)
+ return d.establishCorrespondence(on, nn)
+ }
+ }
+ return of.Name() == nf.Name()
+}
+
+// Establish that old corresponds with new if it does not already
+// correspond to something else.
+func (d *differ) establishCorrespondence(old *types.Named, new types.Type) bool {
+ oldname := old.Obj()
+ oldc := d.correspondMap[oldname]
+ if oldc == nil {
+ // For now, assume the types don't correspond unless they are from the old
+ // and new packages, respectively.
+ //
+ // This is too conservative. For instance,
+ // [old] type A = q.B; [new] type A q.C
+ // could be OK if in package q, B is an alias for C.
+ // Or, using p as the name of the current old/new packages:
+ // [old] type A = q.B; [new] type A int
+ // could be OK if in q,
+ // [old] type B int; [new] type B = p.A
+ // In this case, p.A and q.B name the same type in both old and new worlds.
+ // Note that this case doesn't imply circular package imports: it's possible
+ // that in the old world, p imports q, but in the new, q imports p.
+ //
+ // However, if we didn't do something here, then we'd incorrectly allow cases
+ // like the first one above in which q.B is not an alias for q.C
+ //
+ // What we should do is check that the old type, in the new world's package
+ // of the same path, doesn't correspond to something other than the new type.
+ // That is a bit hard, because there is no easy way to find a new package
+ // matching an old one.
+ if newn, ok := new.(*types.Named); ok {
+ if old.Obj().Pkg() != d.old || newn.Obj().Pkg() != d.new {
+ return old.Obj().Id() == newn.Obj().Id()
+ }
+ }
+ // If there is no correspondence, create one.
+ d.correspondMap[oldname] = new
+ // Check that the corresponding types are compatible.
+ d.checkCompatibleDefined(oldname, old, new)
+ return true
+ }
+ return types.Identical(oldc, new)
+}
+
+func (d *differ) sortedMethods(iface *types.Interface) []*types.Func {
+ ms := make([]*types.Func, iface.NumMethods())
+ for i := 0; i < iface.NumMethods(); i++ {
+ ms[i] = iface.Method(i)
+ }
+ sort.Slice(ms, func(i, j int) bool { return d.methodID(ms[i]) < d.methodID(ms[j]) })
+ return ms
+}
+
+func (d *differ) methodID(m *types.Func) string {
+ // If the method belongs to one of the two packages being compared, use
+ // just its name even if it's unexported. That lets us treat unexported names
+ // from the old and new packages as equal.
+ if m.Pkg() == d.old || m.Pkg() == d.new {
+ return m.Name()
+ }
+ return m.Id()
+}
+
+// Copied from the go/types package:
+
+// An ifacePair is a node in a stack of interface type pairs compared for identity.
+type ifacePair struct {
+ x, y *types.Interface
+ prev *ifacePair
+}
+
+func (p *ifacePair) identical(q *ifacePair) bool {
+ return p.x == q.x && p.y == q.y || p.x == q.y && p.y == q.x
+}
diff --git a/vendor/golang.org/x/exp/apidiff/messageset.go b/vendor/golang.org/x/exp/apidiff/messageset.go
new file mode 100644
index 0000000000000..135479053d460
--- /dev/null
+++ b/vendor/golang.org/x/exp/apidiff/messageset.go
@@ -0,0 +1,79 @@
+// TODO: show that two-non-empty dotjoin can happen, by using an anon struct as a field type
+// TODO: don't report removed/changed methods for both value and pointer method sets?
+
+package apidiff
+
+import (
+ "fmt"
+ "go/types"
+ "sort"
+ "strings"
+)
+
+// There can be at most one message for each object or part thereof.
+// Parts include interface methods and struct fields.
+//
+// The part thing is necessary. Method (Func) objects have sufficient info, but field
+// Vars do not: they just have a field name and a type, without the enclosing struct.
+type messageSet map[types.Object]map[string]string
+
+// Add a message for obj and part, overwriting a previous message
+// (shouldn't happen).
+// obj is required but part can be empty.
+func (m messageSet) add(obj types.Object, part, msg string) {
+ s := m[obj]
+ if s == nil {
+ s = map[string]string{}
+ m[obj] = s
+ }
+ if f, ok := s[part]; ok && f != msg {
+ fmt.Printf("! second, different message for obj %s, part %q\n", obj, part)
+ fmt.Printf(" first: %s\n", f)
+ fmt.Printf(" second: %s\n", msg)
+ }
+ s[part] = msg
+}
+
+func (m messageSet) collect() []string {
+ var s []string
+ for obj, parts := range m {
+ // Format each object name relative to its own package.
+ objstring := objectString(obj)
+ for part, msg := range parts {
+ var p string
+
+ if strings.HasPrefix(part, ",") {
+ p = objstring + part
+ } else {
+ p = dotjoin(objstring, part)
+ }
+ s = append(s, p+": "+msg)
+ }
+ }
+ sort.Strings(s)
+ return s
+}
+
+func objectString(obj types.Object) string {
+ if f, ok := obj.(*types.Func); ok {
+ sig := f.Type().(*types.Signature)
+ if recv := sig.Recv(); recv != nil {
+ tn := types.TypeString(recv.Type(), types.RelativeTo(obj.Pkg()))
+ if tn[0] == '*' {
+ tn = "(" + tn + ")"
+ }
+ return fmt.Sprintf("%s.%s", tn, obj.Name())
+ }
+ }
+ return obj.Name()
+}
+
+func dotjoin(s1, s2 string) string {
+ if s1 == "" {
+ return s2
+ }
+ if s2 == "" {
+ return s1
+ }
+ return s1 + "." + s2
+}
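+
+// For illustration: dotjoin("T", "F") == "T.F", and dotjoin("", "F") == "F".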
diff --git a/vendor/golang.org/x/exp/apidiff/report.go b/vendor/golang.org/x/exp/apidiff/report.go
new file mode 100644
index 0000000000000..ce79e2790a012
--- /dev/null
+++ b/vendor/golang.org/x/exp/apidiff/report.go
@@ -0,0 +1,71 @@
+package apidiff
+
+import (
+ "bytes"
+ "fmt"
+ "io"
+)
+
+// Report describes the changes detected by Changes.
+type Report struct {
+ Changes []Change
+}
+
+// A Change describes a single API change.
+type Change struct {
+ Message string
+ Compatible bool
+}
+
+func (r Report) messages(compatible bool) []string {
+ var msgs []string
+ for _, c := range r.Changes {
+ if c.Compatible == compatible {
+ msgs = append(msgs, c.Message)
+ }
+ }
+ return msgs
+}
+
+func (r Report) String() string {
+ var buf bytes.Buffer
+ if err := r.Text(&buf); err != nil {
+ return fmt.Sprintf("!!%v", err)
+ }
+ return buf.String()
+}
+
+func (r Report) Text(w io.Writer) error {
+ if err := r.TextIncompatible(w, true); err != nil {
+ return err
+ }
+ return r.TextCompatible(w)
+}
+
+func (r Report) TextIncompatible(w io.Writer, withHeader bool) error {
+ if withHeader {
+ return r.writeMessages(w, "Incompatible changes:", r.messages(false))
+ }
+ return r.writeMessages(w, "", r.messages(false))
+}
+
+func (r Report) TextCompatible(w io.Writer) error {
+ return r.writeMessages(w, "Compatible changes:", r.messages(true))
+}
+
+func (r Report) writeMessages(w io.Writer, header string, msgs []string) error {
+ if len(msgs) == 0 {
+ return nil
+ }
+ if header != "" {
+ if _, err := fmt.Fprintf(w, "%s\n", header); err != nil {
+ return err
+ }
+ }
+ for _, m := range msgs {
+ if _, err := fmt.Fprintf(w, "- %s\n", m); err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/vendor/golang.org/x/exp/cmd/apidiff/main.go b/vendor/golang.org/x/exp/cmd/apidiff/main.go
new file mode 100644
index 0000000000000..a5446b7bcfd78
--- /dev/null
+++ b/vendor/golang.org/x/exp/cmd/apidiff/main.go
@@ -0,0 +1,142 @@
+// Command apidiff determines whether two versions of a package are compatible
+package main
+
+import (
+ "bufio"
+ "flag"
+ "fmt"
+ "go/token"
+ "go/types"
+ "os"
+
+ "golang.org/x/exp/apidiff"
+ "golang.org/x/tools/go/gcexportdata"
+ "golang.org/x/tools/go/packages"
+)
+
+var (
+ exportDataOutfile = flag.String("w", "", "file for export data")
+ incompatibleOnly = flag.Bool("incompatible", false, "display only incompatible changes")
+)
+
+func main() {
+ flag.Usage = func() {
+ w := flag.CommandLine.Output()
+ fmt.Fprintf(w, "usage:\n")
+ fmt.Fprintf(w, "apidiff OLD NEW\n")
+ fmt.Fprintf(w, " compares OLD and NEW package APIs\n")
+ fmt.Fprintf(w, " where OLD and NEW are either import paths or files of export data\n")
+ fmt.Fprintf(w, "apidiff -w FILE IMPORT_PATH\n")
+ fmt.Fprintf(w, " writes export data of the package at IMPORT_PATH to FILE\n")
+ fmt.Fprintf(w, " NOTE: In a GOPATH-less environment, this option consults the\n")
+ fmt.Fprintf(w, " module cache by default, unless used in the directory that\n")
+ fmt.Fprintf(w, " contains the go.mod module definition that IMPORT_PATH belongs\n")
+ fmt.Fprintf(w, " to. In most cases users want the latter behavior, so be sure\n")
+ fmt.Fprintf(w, " to cd to the exact directory which contains the module\n")
+ fmt.Fprintf(w, " definition of IMPORT_PATH.\n")
+ flag.PrintDefaults()
+ }
+
+ flag.Parse()
+ if *exportDataOutfile != "" {
+ if len(flag.Args()) != 1 {
+ flag.Usage()
+ os.Exit(2)
+ }
+ pkg := mustLoadPackage(flag.Arg(0))
+ if err := writeExportData(pkg, *exportDataOutfile); err != nil {
+ die("writing export data: %v", err)
+ }
+ } else {
+ if len(flag.Args()) != 2 {
+ flag.Usage()
+ os.Exit(2)
+ }
+ oldpkg := mustLoadOrRead(flag.Arg(0))
+ newpkg := mustLoadOrRead(flag.Arg(1))
+
+ report := apidiff.Changes(oldpkg, newpkg)
+ var err error
+ if *incompatibleOnly {
+ err = report.TextIncompatible(os.Stdout, false)
+ } else {
+ err = report.Text(os.Stdout)
+ }
+ if err != nil {
+ die("writing report: %v", err)
+ }
+ }
+}
+
+func mustLoadOrRead(importPathOrFile string) *types.Package {
+ fileInfo, err := os.Stat(importPathOrFile)
+ if err == nil && fileInfo.Mode().IsRegular() {
+ pkg, err := readExportData(importPathOrFile)
+ if err != nil {
+ die("reading export data from %s: %v", importPathOrFile, err)
+ }
+ return pkg
+ } else {
+ return mustLoadPackage(importPathOrFile).Types
+ }
+}
+
+func mustLoadPackage(importPath string) *packages.Package {
+ pkg, err := loadPackage(importPath)
+ if err != nil {
+ die("loading %s: %v", importPath, err)
+ }
+ return pkg
+}
+
+func loadPackage(importPath string) (*packages.Package, error) {
+ cfg := &packages.Config{Mode: packages.LoadTypes}
+ pkgs, err := packages.Load(cfg, importPath)
+ if err != nil {
+ return nil, err
+ }
+ if len(pkgs) == 0 {
+ return nil, fmt.Errorf("found no packages for import %s", importPath)
+ }
+ if len(pkgs[0].Errors) > 0 {
+ return nil, pkgs[0].Errors[0]
+ }
+ return pkgs[0], nil
+}
+
+func readExportData(filename string) (*types.Package, error) {
+ f, err := os.Open(filename)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+ r := bufio.NewReader(f)
+ m := map[string]*types.Package{}
+ pkgPath, err := r.ReadString('\n')
+ if err != nil {
+ return nil, err
+ }
+ pkgPath = pkgPath[:len(pkgPath)-1] // remove delimiter
+ return gcexportdata.Read(r, token.NewFileSet(), m, pkgPath)
+}
+
+func writeExportData(pkg *packages.Package, filename string) error {
+ f, err := os.Create(filename)
+ if err != nil {
+ return err
+ }
+ // Include the package path in the file. The exportdata format does
+ // not record the path of the package being written.
+ fmt.Fprintln(f, pkg.PkgPath)
+ err1 := gcexportdata.Write(f, pkg.Fset, pkg.Types)
+ err2 := f.Close()
+ if err1 != nil {
+ return err1
+ }
+ return err2
+}
+
+func die(format string, args ...interface{}) {
+ fmt.Fprintf(os.Stderr, format+"\n", args...)
+ os.Exit(1)
+}
diff --git a/vendor/golang.org/x/lint/.travis.yml b/vendor/golang.org/x/lint/.travis.yml
new file mode 100644
index 0000000000000..50553ebd004a3
--- /dev/null
+++ b/vendor/golang.org/x/lint/.travis.yml
@@ -0,0 +1,19 @@
+sudo: false
+language: go
+go:
+ - 1.10.x
+ - 1.11.x
+ - master
+
+go_import_path: golang.org/x/lint
+
+install:
+ - go get -t -v ./...
+
+script:
+ - go test -v -race ./...
+
+matrix:
+ allow_failures:
+ - go: master
+ fast_finish: true
diff --git a/vendor/golang.org/x/lint/CONTRIBUTING.md b/vendor/golang.org/x/lint/CONTRIBUTING.md
new file mode 100644
index 0000000000000..1fadda62d2fc8
--- /dev/null
+++ b/vendor/golang.org/x/lint/CONTRIBUTING.md
@@ -0,0 +1,15 @@
+# Contributing to Golint
+
+## Before filing an issue:
+
+### Are you having trouble building golint?
+
+Check you have the latest version of its dependencies. Run
+```
+go get -u golang.org/x/lint/golint
+```
+If you still have problems, consider searching for existing issues before filing a new issue.
+
+## Before sending a pull request:
+
+Have you understood the purpose of golint? Make sure to carefully read `README`.
diff --git a/vendor/golang.org/x/lint/LICENSE b/vendor/golang.org/x/lint/LICENSE
new file mode 100644
index 0000000000000..65d761bc9f28c
--- /dev/null
+++ b/vendor/golang.org/x/lint/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2013 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/golang.org/x/lint/README.md b/vendor/golang.org/x/lint/README.md
new file mode 100644
index 0000000000000..4968b13aef7e8
--- /dev/null
+++ b/vendor/golang.org/x/lint/README.md
@@ -0,0 +1,88 @@
+Golint is a linter for Go source code.
+
+[](https://travis-ci.org/golang/lint)
+
+## Installation
+
+Golint requires a
+[supported release of Go](https://golang.org/doc/devel/release.html#policy).
+
+ go get -u golang.org/x/lint/golint
+
+To find out where `golint` was installed you can run `go list -f {{.Target}} golang.org/x/lint/golint`. For `golint` to be used globally, add that directory to your `$PATH` environment variable.
+
+## Usage
+
+Invoke `golint` with one or more filenames, directories, or packages named
+by its import path. Golint uses the same
+[import path syntax](https://golang.org/cmd/go/#hdr-Import_path_syntax) as
+the `go` command and therefore
+also supports relative import paths like `./...`. Additionally, the `...`
+wildcard can be used as a suffix on relative and absolute file paths to
+recurse into them.
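+
+For example, both of these invocations are valid (illustrative):
+
+    golint ./...
+    golint github.com/golang/lint/golint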
+
+The output of this tool is a list of suggestions in Vim quickfix format,
+which is accepted by lots of different editors.
+
+## Purpose
+
+Golint differs from gofmt. Gofmt reformats Go source code, whereas
+golint prints out style mistakes.
+
+Golint differs from govet. Govet is concerned with correctness, whereas
+golint is concerned with coding style. Golint is in use at Google, and it
+seeks to match the accepted style of the open source Go project.
+
+The suggestions made by golint are exactly that: suggestions.
+Golint is not perfect, and has both false positives and false negatives.
+Do not treat its output as a gold standard. We will not be adding pragmas
+or other knobs to suppress specific warnings, so do not expect or require
+code to be completely "lint-free".
+In short, this tool is not, and will never be, trustworthy enough for its
+suggestions to be enforced automatically, for example as part of a build process.
+Golint makes suggestions for many of the mechanically checkable items listed in
+[Effective Go](https://golang.org/doc/effective_go.html) and the
+[CodeReviewComments wiki page](https://golang.org/wiki/CodeReviewComments).
+
+## Scope
+
+Golint is meant to carry out the stylistic conventions put forth in
+[Effective Go](https://golang.org/doc/effective_go.html) and
+[CodeReviewComments](https://golang.org/wiki/CodeReviewComments).
+Changes that are not aligned with those documents will not be considered.
+
+## Contributions
+
+Contributions to this project are welcome provided they are [in scope](#scope),
+though please send mail before starting work on anything major.
+Contributors retain their copyright, so we need you to fill out
+[a short form](https://developers.google.com/open-source/cla/individual)
+before we can accept your contribution.
+
+## Vim
+
+Add this to your ~/.vimrc:
+
+ set rtp+=$GOPATH/src/golang.org/x/lint/misc/vim
+
+If you have multiple entries in your GOPATH, replace `$GOPATH` with the right value.
+
+Running `:Lint` will run golint on the current file and populate the quickfix list.
+
+Optionally, add this to your `~/.vimrc` to automatically run `golint` on `:w`
+
+ autocmd BufWritePost,FileWritePost *.go execute 'Lint' | cwindow
+
+
+## Emacs
+
+Add this to your `.emacs` file:
+
+ (add-to-list 'load-path (concat (getenv "GOPATH") "/src/golang.org/x/lint/misc/emacs/"))
+ (require 'golint)
+
+If you have multiple entries in your GOPATH, replace `$GOPATH` with the right value.
+
+Running M-x golint will run golint on the current file.
+
+For more usage, see [Compilation-Mode](http://www.gnu.org/software/emacs/manual/html_node/emacs/Compilation-Mode.html).
diff --git a/vendor/golang.org/x/lint/go.mod b/vendor/golang.org/x/lint/go.mod
new file mode 100644
index 0000000000000..d5ba4dbfd6cf0
--- /dev/null
+++ b/vendor/golang.org/x/lint/go.mod
@@ -0,0 +1,3 @@
+module golang.org/x/lint
+
+require golang.org/x/tools v0.0.0-20190311212946-11955173bddd
diff --git a/vendor/golang.org/x/lint/go.sum b/vendor/golang.org/x/lint/go.sum
new file mode 100644
index 0000000000000..7d0e2e61884be
--- /dev/null
+++ b/vendor/golang.org/x/lint/go.sum
@@ -0,0 +1,6 @@
+golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
+golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
+golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
+golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd h1:/e+gpKk9r3dJobndpTytxS2gOy6m5uvpg+ISQoEcusQ=
+golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
diff --git a/vendor/golang.org/x/lint/golint/golint.go b/vendor/golang.org/x/lint/golint/golint.go
new file mode 100644
index 0000000000000..ac024b6d26f18
--- /dev/null
+++ b/vendor/golang.org/x/lint/golint/golint.go
@@ -0,0 +1,159 @@
+// Copyright (c) 2013 The Go Authors. All rights reserved.
+//
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file or at
+// https://developers.google.com/open-source/licenses/bsd.
+
+// golint lints the Go source files named on its command line.
+package main
+
+import (
+ "flag"
+ "fmt"
+ "go/build"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "golang.org/x/lint"
+)
+
+var (
+ minConfidence = flag.Float64("min_confidence", 0.8, "minimum confidence of a problem to print it")
+ setExitStatus = flag.Bool("set_exit_status", false, "set exit status to 1 if any issues are found")
+ suggestions int
+)
+
+func usage() {
+ fmt.Fprintf(os.Stderr, "Usage of %s:\n", os.Args[0])
+ fmt.Fprintf(os.Stderr, "\tgolint [flags] # runs on package in current directory\n")
+ fmt.Fprintf(os.Stderr, "\tgolint [flags] [packages]\n")
+ fmt.Fprintf(os.Stderr, "\tgolint [flags] [directories] # where a '/...' suffix includes all sub-directories\n")
+ fmt.Fprintf(os.Stderr, "\tgolint [flags] [files] # all must belong to a single package\n")
+ fmt.Fprintf(os.Stderr, "Flags:\n")
+ flag.PrintDefaults()
+}
+
+func main() {
+ flag.Usage = usage
+ flag.Parse()
+
+ if flag.NArg() == 0 {
+ lintDir(".")
+ } else {
+ // dirsRun, filesRun, and pkgsRun indicate whether golint is applied to
+ // directory, file or package targets. The distinction affects which
+ // checks are run. It is not valid to mix target types.
+ var dirsRun, filesRun, pkgsRun int
+ var args []string
+ for _, arg := range flag.Args() {
+ if strings.HasSuffix(arg, "/...") && isDir(arg[:len(arg)-len("/...")]) {
+ dirsRun = 1
+ for _, dirname := range allPackagesInFS(arg) {
+ args = append(args, dirname)
+ }
+ } else if isDir(arg) {
+ dirsRun = 1
+ args = append(args, arg)
+ } else if exists(arg) {
+ filesRun = 1
+ args = append(args, arg)
+ } else {
+ pkgsRun = 1
+ args = append(args, arg)
+ }
+ }
+
+ if dirsRun+filesRun+pkgsRun != 1 {
+ usage()
+ os.Exit(2)
+ }
+ switch {
+ case dirsRun == 1:
+ for _, dir := range args {
+ lintDir(dir)
+ }
+ case filesRun == 1:
+ lintFiles(args...)
+ case pkgsRun == 1:
+ for _, pkg := range importPaths(args) {
+ lintPackage(pkg)
+ }
+ }
+ }
+
+ if *setExitStatus && suggestions > 0 {
+ fmt.Fprintf(os.Stderr, "Found %d lint suggestions; failing.\n", suggestions)
+ os.Exit(1)
+ }
+}
+
+func isDir(filename string) bool {
+ fi, err := os.Stat(filename)
+ return err == nil && fi.IsDir()
+}
+
+func exists(filename string) bool {
+ _, err := os.Stat(filename)
+ return err == nil
+}
+
+func lintFiles(filenames ...string) {
+ files := make(map[string][]byte)
+ for _, filename := range filenames {
+ src, err := ioutil.ReadFile(filename)
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err)
+ continue
+ }
+ files[filename] = src
+ }
+
+ l := new(lint.Linter)
+ ps, err := l.LintFiles(files)
+ if err != nil {
+ fmt.Fprintf(os.Stderr, "%v\n", err)
+ return
+ }
+ for _, p := range ps {
+ if p.Confidence >= *minConfidence {
+ fmt.Printf("%v: %s\n", p.Position, p.Text)
+ suggestions++
+ }
+ }
+}
+
+func lintDir(dirname string) {
+ pkg, err := build.ImportDir(dirname, 0)
+ lintImportedPackage(pkg, err)
+}
+
+func lintPackage(pkgname string) {
+ pkg, err := build.Import(pkgname, ".", 0)
+ lintImportedPackage(pkg, err)
+}
+
+func lintImportedPackage(pkg *build.Package, err error) {
+ if err != nil {
+ if _, nogo := err.(*build.NoGoError); nogo {
+ // Don't complain if the failure is due to no Go source files.
+ return
+ }
+ fmt.Fprintln(os.Stderr, err)
+ return
+ }
+
+ var files []string
+ files = append(files, pkg.GoFiles...)
+ files = append(files, pkg.CgoFiles...)
+ files = append(files, pkg.TestGoFiles...)
+ if pkg.Dir != "." {
+ for i, f := range files {
+ files[i] = filepath.Join(pkg.Dir, f)
+ }
+ }
+ // TODO(dsymonds): Do foo_test too (pkg.XTestGoFiles)
+
+ lintFiles(files...)
+}
diff --git a/vendor/golang.org/x/lint/golint/import.go b/vendor/golang.org/x/lint/golint/import.go
new file mode 100644
index 0000000000000..2ba9dea779273
--- /dev/null
+++ b/vendor/golang.org/x/lint/golint/import.go
@@ -0,0 +1,309 @@
+package main
+
+/*
+
+This file holds a direct copy of the import path matching code of
+https://github.com/golang/go/blob/master/src/cmd/go/main.go. It can be
+replaced when https://golang.org/issue/8768 is resolved.
+
+It has been updated to follow upstream changes in a few ways.
+
+*/
+
+import (
+ "fmt"
+ "go/build"
+ "log"
+ "os"
+ "path"
+ "path/filepath"
+ "regexp"
+ "runtime"
+ "strings"
+)
+
+var (
+ buildContext = build.Default
+ goroot = filepath.Clean(runtime.GOROOT())
+ gorootSrc = filepath.Join(goroot, "src")
+)
+
+// importPathsNoDotExpansion returns the import paths to use for the given
+// command line, but it does no ... expansion.
+func importPathsNoDotExpansion(args []string) []string {
+ if len(args) == 0 {
+ return []string{"."}
+ }
+ var out []string
+ for _, a := range args {
+ // Arguments are supposed to be import paths, but
+ // as a courtesy to Windows developers, rewrite \ to /
+ // in command-line arguments. Handles .\... and so on.
+ if filepath.Separator == '\\' {
+ a = strings.Replace(a, `\`, `/`, -1)
+ }
+
+ // Put argument in canonical form, but preserve leading ./.
+ if strings.HasPrefix(a, "./") {
+ a = "./" + path.Clean(a)
+ if a == "./." {
+ a = "."
+ }
+ } else {
+ a = path.Clean(a)
+ }
+ if a == "all" || a == "std" {
+ out = append(out, allPackages(a)...)
+ continue
+ }
+ out = append(out, a)
+ }
+ return out
+}
+
+// importPaths returns the import paths to use for the given command line.
+func importPaths(args []string) []string {
+ args = importPathsNoDotExpansion(args)
+ var out []string
+ for _, a := range args {
+ if strings.Contains(a, "...") {
+ if build.IsLocalImport(a) {
+ out = append(out, allPackagesInFS(a)...)
+ } else {
+ out = append(out, allPackages(a)...)
+ }
+ continue
+ }
+ out = append(out, a)
+ }
+ return out
+}
+
+// matchPattern(pattern)(name) reports whether
+// name matches pattern. Pattern is a limited glob
+// pattern in which '...' means 'any string' and there
+// is no other special syntax.
+func matchPattern(pattern string) func(name string) bool {
+ re := regexp.QuoteMeta(pattern)
+ re = strings.Replace(re, `\.\.\.`, `.*`, -1)
+ // Special case: foo/... matches foo too.
+ if strings.HasSuffix(re, `/.*`) {
+ re = re[:len(re)-len(`/.*`)] + `(/.*)?`
+ }
+ reg := regexp.MustCompile(`^` + re + `$`)
+ return func(name string) bool {
+ return reg.MatchString(name)
+ }
+}
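+
+// For illustration, given the special case above:
+// matchPattern("net/...") matches "net" and "net/http", but not "netchan".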
+
+// hasPathPrefix reports whether the path s begins with the
+// elements in prefix.
+func hasPathPrefix(s, prefix string) bool {
+ switch {
+ default:
+ return false
+ case len(s) == len(prefix):
+ return s == prefix
+ case len(s) > len(prefix):
+ if prefix != "" && prefix[len(prefix)-1] == '/' {
+ return strings.HasPrefix(s, prefix)
+ }
+ return s[len(prefix)] == '/' && s[:len(prefix)] == prefix
+ }
+}
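+
+// For illustration: hasPathPrefix("a/b", "a") == true, but
+// hasPathPrefix("ab", "a") == false, since "b" does not begin a new path element.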
+
+// treeCanMatchPattern(pattern)(name) reports whether
+// name or children of name can possibly match pattern.
+// Pattern is the same limited glob accepted by matchPattern.
+func treeCanMatchPattern(pattern string) func(name string) bool {
+ wildCard := false
+ if i := strings.Index(pattern, "..."); i >= 0 {
+ wildCard = true
+ pattern = pattern[:i]
+ }
+ return func(name string) bool {
+ return len(name) <= len(pattern) && hasPathPrefix(pattern, name) ||
+ wildCard && strings.HasPrefix(name, pattern)
+ }
+}
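+
+// For illustration: treeCanMatchPattern("net/http")("net") == true, since a
+// child of "net" may still match, while treeCanMatchPattern("net/http")("os") == false.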
+
+// allPackages returns all the packages that can be found
+// under the $GOPATH directories and $GOROOT matching pattern.
+// The pattern is either "all" (all packages), "std" (standard packages)
+// or a path including "...".
+func allPackages(pattern string) []string {
+ pkgs := matchPackages(pattern)
+ if len(pkgs) == 0 {
+ fmt.Fprintf(os.Stderr, "warning: %q matched no packages\n", pattern)
+ }
+ return pkgs
+}
+
+func matchPackages(pattern string) []string {
+ match := func(string) bool { return true }
+ treeCanMatch := func(string) bool { return true }
+ if pattern != "all" && pattern != "std" {
+ match = matchPattern(pattern)
+ treeCanMatch = treeCanMatchPattern(pattern)
+ }
+
+ have := map[string]bool{
+ "builtin": true, // ignore pseudo-package that exists only for documentation
+ }
+ if !buildContext.CgoEnabled {
+ have["runtime/cgo"] = true // ignore during walk
+ }
+ var pkgs []string
+
+ // Commands
+ cmd := filepath.Join(goroot, "src/cmd") + string(filepath.Separator)
+ filepath.Walk(cmd, func(path string, fi os.FileInfo, err error) error {
+ if err != nil || !fi.IsDir() || path == cmd {
+ return nil
+ }
+ name := path[len(cmd):]
+ if !treeCanMatch(name) {
+ return filepath.SkipDir
+ }
+ // Commands are all in cmd/, not in subdirectories.
+ if strings.Contains(name, string(filepath.Separator)) {
+ return filepath.SkipDir
+ }
+
+ // We use, e.g., cmd/gofmt as the pseudo import path for gofmt.
+ name = "cmd/" + name
+ if have[name] {
+ return nil
+ }
+ have[name] = true
+ if !match(name) {
+ return nil
+ }
+ _, err = buildContext.ImportDir(path, 0)
+ if err != nil {
+ if _, noGo := err.(*build.NoGoError); !noGo {
+ log.Print(err)
+ }
+ return nil
+ }
+ pkgs = append(pkgs, name)
+ return nil
+ })
+
+ for _, src := range buildContext.SrcDirs() {
+ if (pattern == "std" || pattern == "cmd") && src != gorootSrc {
+ continue
+ }
+ src = filepath.Clean(src) + string(filepath.Separator)
+ root := src
+ if pattern == "cmd" {
+ root += "cmd" + string(filepath.Separator)
+ }
+ filepath.Walk(root, func(path string, fi os.FileInfo, err error) error {
+ if err != nil || !fi.IsDir() || path == src {
+ return nil
+ }
+
+ // Avoid .foo, _foo, and testdata directory trees.
+ _, elem := filepath.Split(path)
+ if strings.HasPrefix(elem, ".") || strings.HasPrefix(elem, "_") || elem == "testdata" {
+ return filepath.SkipDir
+ }
+
+ name := filepath.ToSlash(path[len(src):])
+ if pattern == "std" && (strings.Contains(name, ".") || name == "cmd") {
+ // The name "std" is only the standard library.
+ // If the name is cmd, it's the root of the command tree.
+ return filepath.SkipDir
+ }
+ if !treeCanMatch(name) {
+ return filepath.SkipDir
+ }
+ if have[name] {
+ return nil
+ }
+ have[name] = true
+ if !match(name) {
+ return nil
+ }
+ _, err = buildContext.ImportDir(path, 0)
+ if err != nil {
+ if _, noGo := err.(*build.NoGoError); noGo {
+ return nil
+ }
+ }
+ pkgs = append(pkgs, name)
+ return nil
+ })
+ }
+ return pkgs
+}
+
+// allPackagesInFS is like allPackages but is passed a pattern
+// beginning ./ or ../, meaning it should scan the tree rooted
+// at the given directory. The pattern also contains "..." wildcards.
+func allPackagesInFS(pattern string) []string {
+ pkgs := matchPackagesInFS(pattern)
+ if len(pkgs) == 0 {
+ fmt.Fprintf(os.Stderr, "warning: %q matched no packages\n", pattern)
+ }
+ return pkgs
+}
+
+func matchPackagesInFS(pattern string) []string {
+ // Find directory to begin the scan.
+ // Could be smarter but this one optimization
+ // is enough for now, since ... is usually at the
+ // end of a path.
+ i := strings.Index(pattern, "...")
+ dir, _ := path.Split(pattern[:i])
+
+ // pattern begins with ./ or ../.
+ // path.Clean will discard the ./ but not the ../.
+ // We need to preserve the ./ for pattern matching
+ // and in the returned import paths.
+ prefix := ""
+ if strings.HasPrefix(pattern, "./") {
+ prefix = "./"
+ }
+ match := matchPattern(pattern)
+
+ var pkgs []string
+ filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
+ if err != nil || !fi.IsDir() {
+ return nil
+ }
+ if path == dir {
+ // filepath.Walk starts at dir and recurses. For the recursive case,
+ // the path is the result of filepath.Join, which calls filepath.Clean.
+ // The initial case is not Cleaned, though, so we do this explicitly.
+ //
+ // This converts a path like "./io/" to "io". Without this step, running
+ // "cd $GOROOT/src/pkg; go list ./io/..." would incorrectly skip the io
+ // package, because prepending the prefix "./" to the unclean path would
+ // result in "././io", and match("././io") returns false.
+ path = filepath.Clean(path)
+ }
+
+ // Avoid .foo, _foo, and testdata directory trees, but do not avoid "." or "..".
+ _, elem := filepath.Split(path)
+ dot := strings.HasPrefix(elem, ".") && elem != "." && elem != ".."
+ if dot || strings.HasPrefix(elem, "_") || elem == "testdata" {
+ return filepath.SkipDir
+ }
+
+ name := prefix + filepath.ToSlash(path)
+ if !match(name) {
+ return nil
+ }
+ if _, err = build.ImportDir(path, 0); err != nil {
+ if _, noGo := err.(*build.NoGoError); !noGo {
+ log.Print(err)
+ }
+ return nil
+ }
+ pkgs = append(pkgs, name)
+ return nil
+ })
+ return pkgs
+}
diff --git a/vendor/golang.org/x/lint/golint/importcomment.go b/vendor/golang.org/x/lint/golint/importcomment.go
new file mode 100644
index 0000000000000..d5b32f7346494
--- /dev/null
+++ b/vendor/golang.org/x/lint/golint/importcomment.go
@@ -0,0 +1,13 @@
+// Copyright (c) 2018 The Go Authors. All rights reserved.
+//
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file or at
+// https://developers.google.com/open-source/licenses/bsd.
+
+// +build go1.12
+
+// Require use of the correct import path only for Go 1.12+ users, so
+// any breakages coincide with people updating their CI configs or
+// whatnot.
+
+package main // import "golang.org/x/lint/golint"
diff --git a/vendor/golang.org/x/lint/lint.go b/vendor/golang.org/x/lint/lint.go
new file mode 100644
index 0000000000000..532a75ad24780
--- /dev/null
+++ b/vendor/golang.org/x/lint/lint.go
@@ -0,0 +1,1614 @@
+// Copyright (c) 2013 The Go Authors. All rights reserved.
+//
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file or at
+// https://developers.google.com/open-source/licenses/bsd.
+
+// Package lint contains a linter for Go source code.
+package lint // import "golang.org/x/lint"
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/ast"
+ "go/parser"
+ "go/printer"
+ "go/token"
+ "go/types"
+ "regexp"
+ "sort"
+ "strconv"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+
+ "golang.org/x/tools/go/ast/astutil"
+ "golang.org/x/tools/go/gcexportdata"
+)
+
+const styleGuideBase = "https://golang.org/wiki/CodeReviewComments"
+
+// A Linter lints Go source code.
+type Linter struct {
+}
+
+// Problem represents a problem in some source code.
+type Problem struct {
+ Position token.Position // position in source file
+ Text string // the prose that describes the problem
+ Link string // (optional) the link to the style guide for the problem
+ Confidence float64 // a value in (0,1] estimating the confidence in this problem's correctness
+ LineText string // the source line
+ Category string // a short name for the general category of the problem
+
+ // If the problem has a suggested fix (the minority case),
+ // ReplacementLine is a full replacement for the relevant line of the source file.
+ ReplacementLine string
+}
+
+func (p *Problem) String() string {
+ if p.Link != "" {
+ return p.Text + "\n\n" + p.Link
+ }
+ return p.Text
+}
+
+type byPosition []Problem
+
+func (p byPosition) Len() int { return len(p) }
+func (p byPosition) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
+
+func (p byPosition) Less(i, j int) bool {
+ pi, pj := p[i].Position, p[j].Position
+
+ if pi.Filename != pj.Filename {
+ return pi.Filename < pj.Filename
+ }
+ if pi.Line != pj.Line {
+ return pi.Line < pj.Line
+ }
+ if pi.Column != pj.Column {
+ return pi.Column < pj.Column
+ }
+
+ return p[i].Text < p[j].Text
+}
+
+// Lint lints src.
+func (l *Linter) Lint(filename string, src []byte) ([]Problem, error) {
+ return l.LintFiles(map[string][]byte{filename: src})
+}
+
+// LintFiles lints a set of files of a single package.
+// The argument is a map of filename to source.
+func (l *Linter) LintFiles(files map[string][]byte) ([]Problem, error) {
+ pkg := &pkg{
+ fset: token.NewFileSet(),
+ files: make(map[string]*file),
+ }
+ var pkgName string
+ for filename, src := range files {
+ if isGenerated(src) {
+ continue // See issue #239
+ }
+ f, err := parser.ParseFile(pkg.fset, filename, src, parser.ParseComments)
+ if err != nil {
+ return nil, err
+ }
+ if pkgName == "" {
+ pkgName = f.Name.Name
+ } else if f.Name.Name != pkgName {
+ return nil, fmt.Errorf("%s is in package %s, not %s", filename, f.Name.Name, pkgName)
+ }
+ pkg.files[filename] = &file{
+ pkg: pkg,
+ f: f,
+ fset: pkg.fset,
+ src: src,
+ filename: filename,
+ }
+ }
+ if len(pkg.files) == 0 {
+ return nil, nil
+ }
+ return pkg.lint(), nil
+}
+
+var (
+ genHdr = []byte("// Code generated ")
+ genFtr = []byte(" DO NOT EDIT.")
+)
+
+// isGenerated reports whether the source file is generated code
+// according to the rules from https://golang.org/s/generatedcode.
+func isGenerated(src []byte) bool {
+ sc := bufio.NewScanner(bytes.NewReader(src))
+ for sc.Scan() {
+ b := sc.Bytes()
+ if bytes.HasPrefix(b, genHdr) && bytes.HasSuffix(b, genFtr) && len(b) >= len(genHdr)+len(genFtr) {
+ return true
+ }
+ }
+ return false
+}
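+
+// For illustration, a line such as
+// "// Code generated by protoc-gen-go. DO NOT EDIT." satisfies these rules.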
+
+// pkg represents a package being linted.
+type pkg struct {
+ fset *token.FileSet
+ files map[string]*file
+
+ typesPkg *types.Package
+ typesInfo *types.Info
+
+ // sortable is the set of types in the package that implement sort.Interface.
+ sortable map[string]bool
+ // main is whether this is a "main" package.
+ main bool
+
+ problems []Problem
+}
+
+func (p *pkg) lint() []Problem {
+ if err := p.typeCheck(); err != nil {
+ /* TODO(dsymonds): Consider reporting these errors when golint operates on entire packages.
+ if e, ok := err.(types.Error); ok {
+ pos := p.fset.Position(e.Pos)
+ conf := 1.0
+ if strings.Contains(e.Msg, "can't find import: ") {
+ // Golint is probably being run in a context that doesn't support
+ // typechecking (e.g. package files aren't found), so don't warn about it.
+ conf = 0
+ }
+ if conf > 0 {
+ p.errorfAt(pos, conf, category("typechecking"), e.Msg)
+ }
+
+ // TODO(dsymonds): Abort if !e.Soft?
+ }
+ */
+ }
+
+ p.scanSortable()
+ p.main = p.isMain()
+
+ for _, f := range p.files {
+ f.lint()
+ }
+
+ sort.Sort(byPosition(p.problems))
+
+ return p.problems
+}
+
+// file represents a file being linted.
+type file struct {
+ pkg *pkg
+ f *ast.File
+ fset *token.FileSet
+ src []byte
+ filename string
+}
+
+func (f *file) isTest() bool { return strings.HasSuffix(f.filename, "_test.go") }
+
+func (f *file) lint() {
+ f.lintPackageComment()
+ f.lintImports()
+ f.lintBlankImports()
+ f.lintExported()
+ f.lintNames()
+ f.lintElses()
+ f.lintRanges()
+ f.lintErrorf()
+ f.lintErrors()
+ f.lintErrorStrings()
+ f.lintReceiverNames()
+ f.lintIncDec()
+ f.lintErrorReturn()
+ f.lintUnexportedReturn()
+ f.lintTimeNames()
+ f.lintContextKeyTypes()
+ f.lintContextArgs()
+}
+
+type link string
+type category string
+
+// The variadic arguments may start with link and category types,
+// and must end with a format string and any arguments.
+// It returns the new Problem.
+func (f *file) errorf(n ast.Node, confidence float64, args ...interface{}) *Problem {
+ pos := f.fset.Position(n.Pos())
+ if pos.Filename == "" {
+ pos.Filename = f.filename
+ }
+ return f.pkg.errorfAt(pos, confidence, args...)
+}
+
+func (p *pkg) errorfAt(pos token.Position, confidence float64, args ...interface{}) *Problem {
+ problem := Problem{
+ Position: pos,
+ Confidence: confidence,
+ }
+ if pos.Filename != "" {
+ // The file might not exist in our mapping if a //line directive was encountered.
+ if f, ok := p.files[pos.Filename]; ok {
+ problem.LineText = srcLine(f.src, pos)
+ }
+ }
+
+argLoop:
+ for len(args) > 1 { // always leave at least the format string in args
+ switch v := args[0].(type) {
+ case link:
+ problem.Link = string(v)
+ case category:
+ problem.Category = string(v)
+ default:
+ break argLoop
+ }
+ args = args[1:]
+ }
+
+ problem.Text = fmt.Sprintf(args[0].(string), args[1:]...)
+
+ p.problems = append(p.problems, problem)
+ return &p.problems[len(p.problems)-1]
+}
+
+var newImporter = func(fset *token.FileSet) types.ImporterFrom {
+ return gcexportdata.NewImporter(fset, make(map[string]*types.Package))
+}
+
+func (p *pkg) typeCheck() error {
+ config := &types.Config{
+ // By setting a no-op error reporter, the type checker does as much work as possible.
+ Error: func(error) {},
+ Importer: newImporter(p.fset),
+ }
+ info := &types.Info{
+ Types: make(map[ast.Expr]types.TypeAndValue),
+ Defs: make(map[*ast.Ident]types.Object),
+ Uses: make(map[*ast.Ident]types.Object),
+ Scopes: make(map[ast.Node]*types.Scope),
+ }
+ var anyFile *file
+ var astFiles []*ast.File
+ for _, f := range p.files {
+ anyFile = f
+ astFiles = append(astFiles, f.f)
+ }
+ pkg, err := config.Check(anyFile.f.Name.Name, p.fset, astFiles, info)
+ // Remember the typechecking info, even if config.Check failed,
+ // since we will get partial information.
+ p.typesPkg = pkg
+ p.typesInfo = info
+ return err
+}
+
+func (p *pkg) typeOf(expr ast.Expr) types.Type {
+ if p.typesInfo == nil {
+ return nil
+ }
+ return p.typesInfo.TypeOf(expr)
+}
+
+func (p *pkg) isNamedType(typ types.Type, importPath, name string) bool {
+ n, ok := typ.(*types.Named)
+ if !ok {
+ return false
+ }
+ tn := n.Obj()
+ return tn != nil && tn.Pkg() != nil && tn.Pkg().Path() == importPath && tn.Name() == name
+}
+
+// scopeOf returns the tightest scope encompassing id.
+func (p *pkg) scopeOf(id *ast.Ident) *types.Scope {
+ var scope *types.Scope
+ if obj := p.typesInfo.ObjectOf(id); obj != nil {
+ scope = obj.Parent()
+ }
+ if scope == p.typesPkg.Scope() {
+ // We were given a top-level identifier.
+ // Use the file-level scope instead of the package-level scope.
+ pos := id.Pos()
+ for _, f := range p.files {
+ if f.f.Pos() <= pos && pos < f.f.End() {
+ scope = p.typesInfo.Scopes[f.f]
+ break
+ }
+ }
+ }
+ return scope
+}
+
+func (p *pkg) scanSortable() {
+ p.sortable = make(map[string]bool)
+
+ // bitfield for which methods exist on each type.
+ const (
+ Len = 1 << iota
+ Less
+ Swap
+ )
+ nmap := map[string]int{"Len": Len, "Less": Less, "Swap": Swap}
+ has := make(map[string]int)
+ for _, f := range p.files {
+ f.walk(func(n ast.Node) bool {
+ fn, ok := n.(*ast.FuncDecl)
+ if !ok || fn.Recv == nil || len(fn.Recv.List) == 0 {
+ return true
+ }
+ // TODO(dsymonds): We could check the signature to be more precise.
+ recv := receiverType(fn)
+ if i, ok := nmap[fn.Name.Name]; ok {
+ has[recv] |= i
+ }
+ return false
+ })
+ }
+ for typ, ms := range has {
+ if ms == Len|Less|Swap {
+ p.sortable[typ] = true
+ }
+ }
+}
+
+func (p *pkg) isMain() bool {
+ for _, f := range p.files {
+ if f.isMain() {
+ return true
+ }
+ }
+ return false
+}
+
+func (f *file) isMain() bool {
+ if f.f.Name.Name == "main" {
+ return true
+ }
+ return false
+}
+
+// lintPackageComment checks package comments. It complains if
+// there is no package comment, or if it is not of the right form.
+// This has a notable false positive in that a package comment
+// could rightfully appear in a different file of the same package,
+// but that's not easy to fix since this linter is file-oriented.
+func (f *file) lintPackageComment() {
+ if f.isTest() {
+ return
+ }
+
+ const ref = styleGuideBase + "#package-comments"
+ prefix := "Package " + f.f.Name.Name + " "
+
+ // Look for a detached package comment.
+ // First, scan for the last comment that occurs before the "package" keyword.
+ var lastCG *ast.CommentGroup
+ for _, cg := range f.f.Comments {
+ if cg.Pos() > f.f.Package {
+ // Gone past "package" keyword.
+ break
+ }
+ lastCG = cg
+ }
+ if lastCG != nil && strings.HasPrefix(lastCG.Text(), prefix) {
+ endPos := f.fset.Position(lastCG.End())
+ pkgPos := f.fset.Position(f.f.Package)
+ if endPos.Line+1 < pkgPos.Line {
+ // There isn't a great place to anchor this error;
+ // the start of the blank lines between the doc and the package statement
+ // is at least pointing at the location of the problem.
+ pos := token.Position{
+ Filename: endPos.Filename,
+ // Offset not set; it is non-trivial, and doesn't appear to be needed.
+ Line: endPos.Line + 1,
+ Column: 1,
+ }
+ f.pkg.errorfAt(pos, 0.9, link(ref), category("comments"), "package comment is detached; there should be no blank lines between it and the package statement")
+ return
+ }
+ }
+
+ if f.f.Doc == nil {
+ f.errorf(f.f, 0.2, link(ref), category("comments"), "should have a package comment, unless it's in another file for this package")
+ return
+ }
+ s := f.f.Doc.Text()
+ if ts := strings.TrimLeft(s, " \t"); ts != s {
+ f.errorf(f.f.Doc, 1, link(ref), category("comments"), "package comment should not have leading space")
+ s = ts
+ }
+ // Only non-main packages need to keep to this form.
+ if !f.pkg.main && !strings.HasPrefix(s, prefix) {
+ f.errorf(f.f.Doc, 1, link(ref), category("comments"), `package comment should be of the form "%s..."`, prefix)
+ }
+}
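+
+// For illustration: in package "bytes", the doc comment must begin with
+// "Package bytes " to satisfy the check above.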
+
+// lintBlankImports complains if a non-main package has blank imports that are
+// not documented.
+func (f *file) lintBlankImports() {
+ // In package main and in tests, we don't complain about blank imports.
+ if f.pkg.main || f.isTest() {
+ return
+ }
+
+ // The first element of each contiguous group of blank imports should have
+ // an explanatory comment of some kind.
+ for i, imp := range f.f.Imports {
+ pos := f.fset.Position(imp.Pos())
+
+ if !isBlank(imp.Name) {
+ continue // Ignore non-blank imports.
+ }
+ if i > 0 {
+ prev := f.f.Imports[i-1]
+ prevPos := f.fset.Position(prev.Pos())
+ if isBlank(prev.Name) && prevPos.Line+1 == pos.Line {
+ continue // A subsequent blank in a group.
+ }
+ }
+
+ // This is the first blank import of a group.
+ if imp.Doc == nil && imp.Comment == nil {
+ ref := ""
+ f.errorf(imp, 1, link(ref), category("imports"), "a blank import should be only in a main or test package, or have a comment justifying it")
+ }
+ }
+}
+
+// lintImports examines import blocks.
+func (f *file) lintImports() {
+ for i, is := range f.f.Imports {
+ _ = i
+ if is.Name != nil && is.Name.Name == "." && !f.isTest() {
+ f.errorf(is, 1, link(styleGuideBase+"#import-dot"), category("imports"), "should not use dot imports")
+ }
+
+ }
+}
+
+const docCommentsLink = styleGuideBase + "#doc-comments"
+
+// lintExported examines the exported names.
+// It complains if any required doc comments are missing,
+// or if they are not of the right form. The exact rules are in
+// lintFuncDoc, lintTypeDoc and lintValueSpecDoc; this function
+// also tracks the GenDecl structure being traversed to permit
+// doc comments for constants to be on top of the const block.
+// It also complains if the names stutter when combined with
+// the package name.
+func (f *file) lintExported() {
+ if f.isTest() {
+ return
+ }
+
+ var lastGen *ast.GenDecl // last GenDecl entered.
+
+ // Set of GenDecls that have already had missing comments flagged.
+ genDeclMissingComments := make(map[*ast.GenDecl]bool)
+
+ f.walk(func(node ast.Node) bool {
+ switch v := node.(type) {
+ case *ast.GenDecl:
+ if v.Tok == token.IMPORT {
+ return false
+ }
+ // token.CONST, token.TYPE or token.VAR
+ lastGen = v
+ return true
+ case *ast.FuncDecl:
+ f.lintFuncDoc(v)
+ if v.Recv == nil {
+ // Only check for stutter on functions, not methods.
+ // Method names are not used package-qualified.
+ f.checkStutter(v.Name, "func")
+ }
+ // Don't proceed inside funcs.
+ return false
+ case *ast.TypeSpec:
+ // inside a GenDecl, which usually has the doc
+ doc := v.Doc
+ if doc == nil {
+ doc = lastGen.Doc
+ }
+ f.lintTypeDoc(v, doc)
+ f.checkStutter(v.Name, "type")
+ // Don't proceed inside types.
+ return false
+ case *ast.ValueSpec:
+ f.lintValueSpecDoc(v, lastGen, genDeclMissingComments)
+ return false
+ }
+ return true
+ })
+}
+
+var (
+ allCapsRE = regexp.MustCompile(`^[A-Z0-9_]+$`)
+ anyCapsRE = regexp.MustCompile(`[A-Z]`)
+)
+
+// knownNameExceptions is a set of names that are known to be exempt from naming checks.
+// This is usually because they are constrained by having to match names in the
+// standard library.
+var knownNameExceptions = map[string]bool{
+ "LastInsertId": true, // must match database/sql
+ "kWh": true,
+}
+
+func isInTopLevel(f *ast.File, ident *ast.Ident) bool {
+ path, _ := astutil.PathEnclosingInterval(f, ident.Pos(), ident.End())
+ for _, f := range path {
+ switch f.(type) {
+ case *ast.File, *ast.GenDecl, *ast.ValueSpec, *ast.Ident:
+ continue
+ }
+ return false
+ }
+ return true
+}
+
+// lintNames examines all names in the file.
+// It complains if any use underscores or incorrect known initialisms.
+func (f *file) lintNames() {
+ // Package names need slightly different handling than other names.
+ if strings.Contains(f.f.Name.Name, "_") && !strings.HasSuffix(f.f.Name.Name, "_test") {
+ f.errorf(f.f, 1, link("http://golang.org/doc/effective_go.html#package-names"), category("naming"), "don't use an underscore in package name")
+ }
+ if anyCapsRE.MatchString(f.f.Name.Name) {
+ f.errorf(f.f, 1, link("http://golang.org/doc/effective_go.html#package-names"), category("mixed-caps"), "don't use MixedCaps in package name; %s should be %s", f.f.Name.Name, strings.ToLower(f.f.Name.Name))
+ }
+
+ check := func(id *ast.Ident, thing string) {
+ if id.Name == "_" {
+ return
+ }
+ if knownNameExceptions[id.Name] {
+ return
+ }
+
+ // Handle two common styles from other languages that don't belong in Go.
+ if len(id.Name) >= 5 && allCapsRE.MatchString(id.Name) && strings.Contains(id.Name, "_") {
+ capCount := 0
+ for _, c := range id.Name {
+ if 'A' <= c && c <= 'Z' {
+ capCount++
+ }
+ }
+ if capCount >= 2 {
+ f.errorf(id, 0.8, link(styleGuideBase+"#mixed-caps"), category("naming"), "don't use ALL_CAPS in Go names; use CamelCase")
+ return
+ }
+ }
+ if thing == "const" || (thing == "var" && isInTopLevel(f.f, id)) {
+ if len(id.Name) > 2 && id.Name[0] == 'k' && id.Name[1] >= 'A' && id.Name[1] <= 'Z' {
+ should := string(id.Name[1]+'a'-'A') + id.Name[2:]
+ f.errorf(id, 0.8, link(styleGuideBase+"#mixed-caps"), category("naming"), "don't use leading k in Go names; %s %s should be %s", thing, id.Name, should)
+ }
+ }
+
+ should := lintName(id.Name)
+ if id.Name == should {
+ return
+ }
+
+ if len(id.Name) > 2 && strings.Contains(id.Name[1:], "_") {
+ f.errorf(id, 0.9, link("http://golang.org/doc/effective_go.html#mixed-caps"), category("naming"), "don't use underscores in Go names; %s %s should be %s", thing, id.Name, should)
+ return
+ }
+ f.errorf(id, 0.8, link(styleGuideBase+"#initialisms"), category("naming"), "%s %s should be %s", thing, id.Name, should)
+ }
+ checkList := func(fl *ast.FieldList, thing string) {
+ if fl == nil {
+ return
+ }
+ for _, f := range fl.List {
+ for _, id := range f.Names {
+ check(id, thing)
+ }
+ }
+ }
+ f.walk(func(node ast.Node) bool {
+ switch v := node.(type) {
+ case *ast.AssignStmt:
+ if v.Tok == token.ASSIGN {
+ return true
+ }
+ for _, exp := range v.Lhs {
+ if id, ok := exp.(*ast.Ident); ok {
+ check(id, "var")
+ }
+ }
+ case *ast.FuncDecl:
+ if f.isTest() && (strings.HasPrefix(v.Name.Name, "Example") || strings.HasPrefix(v.Name.Name, "Test") || strings.HasPrefix(v.Name.Name, "Benchmark")) {
+ return true
+ }
+
+ thing := "func"
+ if v.Recv != nil {
+ thing = "method"
+ }
+
+ // Exclude naming warnings for functions that are exported to C but
+ // not exported in the Go API.
+ // See https://github.com/golang/lint/issues/144.
+ if ast.IsExported(v.Name.Name) || !isCgoExported(v) {
+ check(v.Name, thing)
+ }
+
+ checkList(v.Type.Params, thing+" parameter")
+ checkList(v.Type.Results, thing+" result")
+ case *ast.GenDecl:
+ if v.Tok == token.IMPORT {
+ return true
+ }
+ var thing string
+ switch v.Tok {
+ case token.CONST:
+ thing = "const"
+ case token.TYPE:
+ thing = "type"
+ case token.VAR:
+ thing = "var"
+ }
+ for _, spec := range v.Specs {
+ switch s := spec.(type) {
+ case *ast.TypeSpec:
+ check(s.Name, thing)
+ case *ast.ValueSpec:
+ for _, id := range s.Names {
+ check(id, thing)
+ }
+ }
+ }
+ case *ast.InterfaceType:
+ // Do not check interface method names.
+ // They are often constrained by the method names of concrete types.
+ for _, x := range v.Methods.List {
+ ft, ok := x.Type.(*ast.FuncType)
+ if !ok { // might be an embedded interface name
+ continue
+ }
+ checkList(ft.Params, "interface method parameter")
+ checkList(ft.Results, "interface method result")
+ }
+ case *ast.RangeStmt:
+ if v.Tok == token.ASSIGN {
+ return true
+ }
+ if id, ok := v.Key.(*ast.Ident); ok {
+ check(id, "range var")
+ }
+ if id, ok := v.Value.(*ast.Ident); ok {
+ check(id, "range var")
+ }
+ case *ast.StructType:
+ for _, f := range v.Fields.List {
+ for _, id := range f.Names {
+ check(id, "struct field")
+ }
+ }
+ }
+ return true
+ })
+}
+
+// lintName returns a different name if it should be different.
+func lintName(name string) (should string) {
+ // Fast path for simple cases: "_" and all lowercase.
+ if name == "_" {
+ return name
+ }
+ allLower := true
+ for _, r := range name {
+ if !unicode.IsLower(r) {
+ allLower = false
+ break
+ }
+ }
+ if allLower {
+ return name
+ }
+
+ // Split camelCase at any lower->upper transition, and split on underscores.
+ // Check each word for common initialisms.
+ runes := []rune(name)
+ w, i := 0, 0 // index of start of word, scan
+ for i+1 <= len(runes) {
+ eow := false // whether we hit the end of a word
+ if i+1 == len(runes) {
+ eow = true
+ } else if runes[i+1] == '_' {
+ // underscore; shift the remainder forward over any run of underscores
+ eow = true
+ n := 1
+ for i+n+1 < len(runes) && runes[i+n+1] == '_' {
+ n++
+ }
+
+ // Leave at most one underscore if the underscore is between two digits
+ if i+n+1 < len(runes) && unicode.IsDigit(runes[i]) && unicode.IsDigit(runes[i+n+1]) {
+ n--
+ }
+
+ copy(runes[i+1:], runes[i+n+1:])
+ runes = runes[:len(runes)-n]
+ } else if unicode.IsLower(runes[i]) && !unicode.IsLower(runes[i+1]) {
+ // lower->non-lower
+ eow = true
+ }
+ i++
+ if !eow {
+ continue
+ }
+
+ // [w,i) is a word.
+ word := string(runes[w:i])
+ if u := strings.ToUpper(word); commonInitialisms[u] {
+ // Keep consistent case, which is lowercase only at the start.
+ if w == 0 && unicode.IsLower(runes[w]) {
+ u = strings.ToLower(u)
+ }
+ // All the common initialisms are ASCII,
+ // so we can replace the bytes exactly.
+ copy(runes[w:], []rune(u))
+ } else if w > 0 && strings.ToLower(word) == word {
+ // already all lowercase, and not the first word, so uppercase the first character.
+ runes[w] = unicode.ToUpper(runes[w])
+ }
+ w = i
+ }
+ return string(runes)
+}
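+
+// For illustration: lintName("foo_bar") == "fooBar", and
+// lintName("HttpServer") == "HTTPServer" (see commonInitialisms below).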
+
+// commonInitialisms is a set of common initialisms.
+// Only add entries that are highly unlikely to be non-initialisms.
+// For instance, "ID" is fine (Freudian code is rare), but "AND" is not.
+var commonInitialisms = map[string]bool{
+ "ACL": true,
+ "API": true,
+ "ASCII": true,
+ "CPU": true,
+ "CSS": true,
+ "DNS": true,
+ "EOF": true,
+ "GUID": true,
+ "HTML": true,
+ "HTTP": true,
+ "HTTPS": true,
+ "ID": true,
+ "IP": true,
+ "JSON": true,
+ "LHS": true,
+ "QPS": true,
+ "RAM": true,
+ "RHS": true,
+ "RPC": true,
+ "SLA": true,
+ "SMTP": true,
+ "SQL": true,
+ "SSH": true,
+ "TCP": true,
+ "TLS": true,
+ "TTL": true,
+ "UDP": true,
+ "UI": true,
+ "UID": true,
+ "UUID": true,
+ "URI": true,
+ "URL": true,
+ "UTF8": true,
+ "VM": true,
+ "XML": true,
+ "XMPP": true,
+ "XSRF": true,
+ "XSS": true,
+}
+
+// lintTypeDoc examines the doc comment on a type.
+// It complains if it is missing from an exported type,
+// or if it is not of the standard form.
+func (f *file) lintTypeDoc(t *ast.TypeSpec, doc *ast.CommentGroup) {
+ if !ast.IsExported(t.Name.Name) {
+ return
+ }
+ if doc == nil {
+ f.errorf(t, 1, link(docCommentsLink), category("comments"), "exported type %v should have comment or be unexported", t.Name)
+ return
+ }
+
+ s := doc.Text()
+ articles := [...]string{"A", "An", "The"}
+ for _, a := range articles {
+ if strings.HasPrefix(s, a+" ") {
+ s = s[len(a)+1:]
+ break
+ }
+ }
+ if !strings.HasPrefix(s, t.Name.Name+" ") {
+ f.errorf(doc, 1, link(docCommentsLink), category("comments"), `comment on exported type %v should be of the form "%v ..." (with optional leading article)`, t.Name, t.Name)
+ }
+}
+
+var commonMethods = map[string]bool{
+ "Error": true,
+ "Read": true,
+ "ServeHTTP": true,
+ "String": true,
+ "Write": true,
+}
+
+// lintFuncDoc examines doc comments on functions and methods.
+// It complains if they are missing, or not of the right form.
+// It has specific exclusions for well-known methods (see commonMethods above).
+func (f *file) lintFuncDoc(fn *ast.FuncDecl) {
+ if !ast.IsExported(fn.Name.Name) {
+ // func is unexported
+ return
+ }
+ kind := "function"
+ name := fn.Name.Name
+ if fn.Recv != nil && len(fn.Recv.List) > 0 {
+ // method
+ kind = "method"
+ recv := receiverType(fn)
+ if !ast.IsExported(recv) {
+ // receiver is unexported
+ return
+ }
+ if commonMethods[name] {
+ return
+ }
+ switch name {
+ case "Len", "Less", "Swap":
+ if f.pkg.sortable[recv] {
+ return
+ }
+ }
+ name = recv + "." + name
+ }
+ if fn.Doc == nil {
+ f.errorf(fn, 1, link(docCommentsLink), category("comments"), "exported %s %s should have comment or be unexported", kind, name)
+ return
+ }
+ s := fn.Doc.Text()
+ prefix := fn.Name.Name + " "
+ if !strings.HasPrefix(s, prefix) {
+ f.errorf(fn.Doc, 1, link(docCommentsLink), category("comments"), `comment on exported %s %s should be of the form "%s..."`, kind, name, prefix)
+ }
+}
+
+// lintValueSpecDoc examines package-global variables and constants.
+// It complains if they are not individually declared,
+// or if they are not suitably documented in the right form (unless they are in a block that is commented).
+func (f *file) lintValueSpecDoc(vs *ast.ValueSpec, gd *ast.GenDecl, genDeclMissingComments map[*ast.GenDecl]bool) {
+ kind := "var"
+ if gd.Tok == token.CONST {
+ kind = "const"
+ }
+
+ if len(vs.Names) > 1 {
+ // Check that none are exported except for the first.
+ for _, n := range vs.Names[1:] {
+ if ast.IsExported(n.Name) {
+ f.errorf(vs, 1, category("comments"), "exported %s %s should have its own declaration", kind, n.Name)
+ return
+ }
+ }
+ }
+
+ // Only one name.
+ name := vs.Names[0].Name
+ if !ast.IsExported(name) {
+ return
+ }
+
+ if vs.Doc == nil && gd.Doc == nil {
+ if genDeclMissingComments[gd] {
+ return
+ }
+ block := ""
+ if kind == "const" && gd.Lparen.IsValid() {
+ block = " (or a comment on this block)"
+ }
+ f.errorf(vs, 1, link(docCommentsLink), category("comments"), "exported %s %s should have comment%s or be unexported", kind, name, block)
+ genDeclMissingComments[gd] = true
+ return
+ }
+ // If this GenDecl has parens and a comment, we don't check its comment form.
+ if gd.Lparen.IsValid() && gd.Doc != nil {
+ return
+ }
+ // The relevant text to check will be on either vs.Doc or gd.Doc.
+ // Use vs.Doc preferentially.
+ doc := vs.Doc
+ if doc == nil {
+ doc = gd.Doc
+ }
+ prefix := name + " "
+ if !strings.HasPrefix(doc.Text(), prefix) {
+ f.errorf(doc, 1, link(docCommentsLink), category("comments"), `comment on exported %s %s should be of the form "%s..."`, kind, name, prefix)
+ }
+}
+
+func (f *file) checkStutter(id *ast.Ident, thing string) {
+ pkg, name := f.f.Name.Name, id.Name
+ if !ast.IsExported(name) {
+ // unexported name
+ return
+ }
+ // A name stutters if the package name is a strict prefix
+ // and the next character of the name starts a new word.
+ if len(name) <= len(pkg) {
+ // name is too short to stutter.
+ // This permits the name to be the same as the package name.
+ return
+ }
+ if !strings.EqualFold(pkg, name[:len(pkg)]) {
+ return
+ }
+ // We can assume the name is well-formed UTF-8.
+ // If the next rune after the package name is uppercase or an underscore
+ // then it's starting a new word and thus this name stutters.
+ rem := name[len(pkg):]
+ if next, _ := utf8.DecodeRuneInString(rem); next == '_' || unicode.IsUpper(next) {
+ f.errorf(id, 0.8, link(styleGuideBase+"#package-names"), category("naming"), "%s name will be used as %s.%s by other packages, and that stutters; consider calling this %s", thing, pkg, name, rem)
+ }
+}
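+
+// For illustration: in package "strings", a hypothetical exported type named
+// "StringsReader" would be flagged, with "Reader" suggested instead.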
+
+// zeroLiteral is a set of ast.BasicLit values that are zero values.
+// It is not exhaustive.
+var zeroLiteral = map[string]bool{
+ "false": true, // bool
+ // runes
+ `'\x00'`: true,
+ `'\000'`: true,
+ // strings
+ `""`: true,
+ "``": true,
+ // numerics
+ "0": true,
+ "0.": true,
+ "0.0": true,
+ "0i": true,
+}
+
+// lintElses examines else blocks. It complains about any else block whose if block ends in a return.
+func (f *file) lintElses() {
+ // We don't want to flag if { } else if { } else { } constructions.
+ // They will appear as an IfStmt whose Else field is also an IfStmt.
+ // Record such a node so we ignore it when we visit it.
+ ignore := make(map[*ast.IfStmt]bool)
+
+ f.walk(func(node ast.Node) bool {
+ ifStmt, ok := node.(*ast.IfStmt)
+ if !ok || ifStmt.Else == nil {
+ return true
+ }
+ if elseif, ok := ifStmt.Else.(*ast.IfStmt); ok {
+ ignore[elseif] = true
+ return true
+ }
+ if ignore[ifStmt] {
+ return true
+ }
+ if _, ok := ifStmt.Else.(*ast.BlockStmt); !ok {
+ // only care about elses without conditions
+ return true
+ }
+ if len(ifStmt.Body.List) == 0 {
+ return true
+ }
+ shortDecl := false // does the if statement have a ":=" initialization statement?
+ if ifStmt.Init != nil {
+ if as, ok := ifStmt.Init.(*ast.AssignStmt); ok && as.Tok == token.DEFINE {
+ shortDecl = true
+ }
+ }
+ lastStmt := ifStmt.Body.List[len(ifStmt.Body.List)-1]
+ if _, ok := lastStmt.(*ast.ReturnStmt); ok {
+ extra := ""
+ if shortDecl {
+ extra = " (move short variable declaration to its own line if necessary)"
+ }
+ f.errorf(ifStmt.Else, 1, link(styleGuideBase+"#indent-error-flow"), category("indent"), "if block ends with a return statement, so drop this else and outdent its block"+extra)
+ }
+ return true
+ })
+}
+
+// lintRanges examines range clauses. It complains about redundant constructions.
+func (f *file) lintRanges() {
+ f.walk(func(node ast.Node) bool {
+ rs, ok := node.(*ast.RangeStmt)
+ if !ok {
+ return true
+ }
+
+ if isIdent(rs.Key, "_") && (rs.Value == nil || isIdent(rs.Value, "_")) {
+ p := f.errorf(rs.Key, 1, category("range-loop"), "should omit values from range; this loop is equivalent to `for range ...`")
+
+ newRS := *rs // shallow copy
+ newRS.Value = nil
+ newRS.Key = nil
+ p.ReplacementLine = f.firstLineOf(&newRS, rs)
+
+ return true
+ }
+
+ if isIdent(rs.Value, "_") {
+ p := f.errorf(rs.Value, 1, category("range-loop"), "should omit 2nd value from range; this loop is equivalent to `for %s %s range ...`", f.render(rs.Key), rs.Tok)
+
+ newRS := *rs // shallow copy
+ newRS.Value = nil
+ p.ReplacementLine = f.firstLineOf(&newRS, rs)
+ }
+
+ return true
+ })
+}
+
+// lintErrorf examines errors.New and testing.Error calls. It complains if the call's only argument is an fmt.Sprintf invocation.
+func (f *file) lintErrorf() {
+ f.walk(func(node ast.Node) bool {
+ ce, ok := node.(*ast.CallExpr)
+ if !ok || len(ce.Args) != 1 {
+ return true
+ }
+ isErrorsNew := isPkgDot(ce.Fun, "errors", "New")
+ var isTestingError bool
+ se, ok := ce.Fun.(*ast.SelectorExpr)
+ if ok && se.Sel.Name == "Error" {
+ if typ := f.pkg.typeOf(se.X); typ != nil {
+ isTestingError = typ.String() == "*testing.T"
+ }
+ }
+ if !isErrorsNew && !isTestingError {
+ return true
+ }
+ if !f.imports("errors") {
+ return true
+ }
+ arg := ce.Args[0]
+ ce, ok = arg.(*ast.CallExpr)
+ if !ok || !isPkgDot(ce.Fun, "fmt", "Sprintf") {
+ return true
+ }
+ errorfPrefix := "fmt"
+ if isTestingError {
+ errorfPrefix = f.render(se.X)
+ }
+ p := f.errorf(node, 1, category("errors"), "should replace %s(fmt.Sprintf(...)) with %s.Errorf(...)", f.render(se), errorfPrefix)
+
+ m := f.srcLineWithMatch(ce, `^(.*)`+f.render(se)+`\(fmt\.Sprintf\((.*)\)\)(.*)$`)
+ if m != nil {
+ p.ReplacementLine = m[1] + errorfPrefix + ".Errorf(" + m[2] + ")" + m[3]
+ }
+
+ return true
+ })
+}
+
+// lintErrors examines global error vars. It complains if they aren't named in the standard way.
+func (f *file) lintErrors() {
+ for _, decl := range f.f.Decls {
+ gd, ok := decl.(*ast.GenDecl)
+ if !ok || gd.Tok != token.VAR {
+ continue
+ }
+ for _, spec := range gd.Specs {
+ spec := spec.(*ast.ValueSpec)
+ if len(spec.Names) != 1 || len(spec.Values) != 1 {
+ continue
+ }
+ ce, ok := spec.Values[0].(*ast.CallExpr)
+ if !ok {
+ continue
+ }
+ if !isPkgDot(ce.Fun, "errors", "New") && !isPkgDot(ce.Fun, "fmt", "Errorf") {
+ continue
+ }
+
+ id := spec.Names[0]
+ prefix := "err"
+ if id.IsExported() {
+ prefix = "Err"
+ }
+ if !strings.HasPrefix(id.Name, prefix) {
+ f.errorf(id, 0.9, category("naming"), "error var %s should have name of the form %sFoo", id.Name, prefix)
+ }
+ }
+ }
+}
+
+func lintErrorString(s string) (isClean bool, conf float64) {
+ const basicConfidence = 0.8
+ const capConfidence = basicConfidence - 0.2
+ first, firstN := utf8.DecodeRuneInString(s)
+ last, _ := utf8.DecodeLastRuneInString(s)
+ if last == '.' || last == ':' || last == '!' || last == '\n' {
+ return false, basicConfidence
+ }
+ if unicode.IsUpper(first) {
+ // People use proper nouns and exported Go identifiers in error strings,
+ // so decrease the confidence of warnings for capitalization.
+ if len(s) <= firstN {
+ return false, capConfidence
+ }
+ // Flag strings starting with something that doesn't look like an initialism.
+ if second, _ := utf8.DecodeRuneInString(s[firstN:]); !unicode.IsUpper(second) {
+ return false, capConfidence
+ }
+ }
+ return true, 0
+}
+
+// lintErrorStrings examines error strings.
+// It complains if they are capitalized or end in punctuation or a newline.
+func (f *file) lintErrorStrings() {
+ f.walk(func(node ast.Node) bool {
+ ce, ok := node.(*ast.CallExpr)
+ if !ok {
+ return true
+ }
+ if !isPkgDot(ce.Fun, "errors", "New") && !isPkgDot(ce.Fun, "fmt", "Errorf") {
+ return true
+ }
+ if len(ce.Args) < 1 {
+ return true
+ }
+ str, ok := ce.Args[0].(*ast.BasicLit)
+ if !ok || str.Kind != token.STRING {
+ return true
+ }
+ s, _ := strconv.Unquote(str.Value) // can assume well-formed Go
+ if s == "" {
+ return true
+ }
+ clean, conf := lintErrorString(s)
+ if clean {
+ return true
+ }
+
+ f.errorf(str, conf, link(styleGuideBase+"#error-strings"), category("errors"),
+ "error strings should not be capitalized or end with punctuation or a newline")
+ return true
+ })
+}
+
+// lintReceiverNames examines receiver names. It complains about inconsistent
+// names used for the same type and names such as "this".
+func (f *file) lintReceiverNames() {
+ typeReceiver := map[string]string{}
+ f.walk(func(n ast.Node) bool {
+ fn, ok := n.(*ast.FuncDecl)
+ if !ok || fn.Recv == nil || len(fn.Recv.List) == 0 {
+ return true
+ }
+ names := fn.Recv.List[0].Names
+ if len(names) < 1 {
+ return true
+ }
+ name := names[0].Name
+ const ref = styleGuideBase + "#receiver-names"
+ if name == "_" {
+ f.errorf(n, 1, link(ref), category("naming"), `receiver name should not be an underscore, omit the name if it is unused`)
+ return true
+ }
+ if name == "this" || name == "self" {
+ f.errorf(n, 1, link(ref), category("naming"), `receiver name should be a reflection of its identity; don't use generic names such as "this" or "self"`)
+ return true
+ }
+ recv := receiverType(fn)
+ if prev, ok := typeReceiver[recv]; ok && prev != name {
+ f.errorf(n, 1, link(ref), category("naming"), "receiver name %s should be consistent with previous receiver name %s for %s", name, prev, recv)
+ return true
+ }
+ typeReceiver[recv] = name
+ return true
+ })
+}
+
+// lintIncDec examines statements that increment or decrement a variable.
+// It complains if they don't use x++ or x--.
+func (f *file) lintIncDec() {
+ f.walk(func(n ast.Node) bool {
+ as, ok := n.(*ast.AssignStmt)
+ if !ok {
+ return true
+ }
+ if len(as.Lhs) != 1 {
+ return true
+ }
+ if !isOne(as.Rhs[0]) {
+ return true
+ }
+ var suffix string
+ switch as.Tok {
+ case token.ADD_ASSIGN:
+ suffix = "++"
+ case token.SUB_ASSIGN:
+ suffix = "--"
+ default:
+ return true
+ }
+ f.errorf(as, 0.8, category("unary-op"), "should replace %s with %s%s", f.render(as), f.render(as.Lhs[0]), suffix)
+ return true
+ })
+}
+
+// lintErrorReturn examines function declarations that return an error.
+// It complains if the error isn't the last parameter.
+func (f *file) lintErrorReturn() {
+ f.walk(func(n ast.Node) bool {
+ fn, ok := n.(*ast.FuncDecl)
+ if !ok || fn.Type.Results == nil {
+ return true
+ }
+ ret := fn.Type.Results.List
+ if len(ret) <= 1 {
+ return true
+ }
+ if isIdent(ret[len(ret)-1].Type, "error") {
+ return true
+ }
+ // An error return parameter should be the last parameter.
+ // Flag any error parameters found before the last.
+ for _, r := range ret[:len(ret)-1] {
+ if isIdent(r.Type, "error") {
+ f.errorf(fn, 0.9, category("arg-order"), "error should be the last type when returning multiple items")
+ break // only flag one
+ }
+ }
+ return true
+ })
+}
+
+// lintUnexportedReturn examines exported function declarations.
+// It complains if any return an unexported type.
+func (f *file) lintUnexportedReturn() {
+ f.walk(func(n ast.Node) bool {
+ fn, ok := n.(*ast.FuncDecl)
+ if !ok {
+ return true
+ }
+ if fn.Type.Results == nil {
+ return false
+ }
+ if !fn.Name.IsExported() {
+ return false
+ }
+ thing := "func"
+ if fn.Recv != nil && len(fn.Recv.List) > 0 {
+ thing = "method"
+ if !ast.IsExported(receiverType(fn)) {
+ // Don't report exported methods of unexported types,
+ // such as private implementations of sort.Interface.
+ return false
+ }
+ }
+ for _, ret := range fn.Type.Results.List {
+ typ := f.pkg.typeOf(ret.Type)
+ if exportedType(typ) {
+ continue
+ }
+ f.errorf(ret.Type, 0.8, category("unexported-type-in-api"),
+ "exported %s %s returns unexported type %s, which can be annoying to use",
+ thing, fn.Name.Name, typ)
+ break // only flag one
+ }
+ return false
+ })
+}
+
+// exportedType reports whether typ is an exported type.
+// It is imprecise, and will err on the side of returning true,
+// such as for composite types.
+func exportedType(typ types.Type) bool {
+ switch T := typ.(type) {
+ case *types.Named:
+ // Builtin types have no package.
+ return T.Obj().Pkg() == nil || T.Obj().Exported()
+ case *types.Map:
+ return exportedType(T.Key()) && exportedType(T.Elem())
+ case interface {
+ Elem() types.Type
+ }: // array, slice, pointer, chan
+ return exportedType(T.Elem())
+ }
+ // Be conservative about other types, such as struct, interface, etc.
+ return true
+}
+
+// timeSuffixes is a list of name suffixes that imply a time unit.
+// This is not an exhaustive list.
+var timeSuffixes = []string{
+ "Sec", "Secs", "Seconds",
+ "Msec", "Msecs",
+ "Milli", "Millis", "Milliseconds",
+ "Usec", "Usecs", "Microseconds",
+ "MS", "Ms",
+}
+
+func (f *file) lintTimeNames() {
+ f.walk(func(node ast.Node) bool {
+ v, ok := node.(*ast.ValueSpec)
+ if !ok {
+ return true
+ }
+ for _, name := range v.Names {
+ origTyp := f.pkg.typeOf(name)
+ // Look for time.Duration or *time.Duration;
+ // the latter is common when using flag.Duration.
+ typ := origTyp
+ if pt, ok := typ.(*types.Pointer); ok {
+ typ = pt.Elem()
+ }
+ if !f.pkg.isNamedType(typ, "time", "Duration") {
+ continue
+ }
+ suffix := ""
+ for _, suf := range timeSuffixes {
+ if strings.HasSuffix(name.Name, suf) {
+ suffix = suf
+ break
+ }
+ }
+ if suffix == "" {
+ continue
+ }
+ f.errorf(v, 0.9, category("time"), "var %s is of type %v; don't use unit-specific suffix %q", name.Name, origTyp, suffix)
+ }
+ return true
+ })
+}
+
+// lintContextKeyTypes checks for call expressions to context.WithValue with
+// basic types used for the key argument.
+// See: https://golang.org/issue/17293
+func (f *file) lintContextKeyTypes() {
+ f.walk(func(node ast.Node) bool {
+ switch node := node.(type) {
+ case *ast.CallExpr:
+ f.checkContextKeyType(node)
+ }
+
+ return true
+ })
+}
+
+// checkContextKeyType reports an error if the call expression calls
+// context.WithValue with a key argument of basic type.
+func (f *file) checkContextKeyType(x *ast.CallExpr) {
+ sel, ok := x.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ pkg, ok := sel.X.(*ast.Ident)
+ if !ok || pkg.Name != "context" {
+ return
+ }
+ if sel.Sel.Name != "WithValue" {
+ return
+ }
+
+ // key is second argument to context.WithValue
+ if len(x.Args) != 3 {
+ return
+ }
+ key := f.pkg.typesInfo.Types[x.Args[1]]
+
+ if ktyp, ok := key.Type.(*types.Basic); ok && ktyp.Kind() != types.Invalid {
+ f.errorf(x, 1.0, category("context"), fmt.Sprintf("should not use basic type %s as key in context.WithValue", key.Type))
+ }
+}
+
+// lintContextArgs examines function declarations that contain an
+// argument with a type of context.Context
+// It complains if that argument isn't the first parameter.
+func (f *file) lintContextArgs() {
+ f.walk(func(n ast.Node) bool {
+ fn, ok := n.(*ast.FuncDecl)
+ if !ok || len(fn.Type.Params.List) <= 1 {
+ return true
+ }
+ // A context.Context should be the first parameter of a function.
+ // Flag any that show up after the first.
+ for _, arg := range fn.Type.Params.List[1:] {
+ if isPkgDot(arg.Type, "context", "Context") {
+ f.errorf(fn, 0.9, link("https://golang.org/pkg/context/"), category("arg-order"), "context.Context should be the first parameter of a function")
+ break // only flag one
+ }
+ }
+ return true
+ })
+}
+
+// containsComments returns whether the interval [start, end) contains any
+// comments without "// MATCH " prefix.
+func (f *file) containsComments(start, end token.Pos) bool {
+ for _, cgroup := range f.f.Comments {
+ comments := cgroup.List
+ if comments[0].Slash >= end {
+ // All comments starting with this group are after end pos.
+ return false
+ }
+ if comments[len(comments)-1].Slash < start {
+ // Comments group ends before start pos.
+ continue
+ }
+ for _, c := range comments {
+ if start <= c.Slash && c.Slash < end && !strings.HasPrefix(c.Text, "// MATCH ") {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// receiverType returns the named type of the method receiver, sans "*",
+// or "invalid-type" if fn.Recv is ill formed.
+func receiverType(fn *ast.FuncDecl) string {
+ switch e := fn.Recv.List[0].Type.(type) {
+ case *ast.Ident:
+ return e.Name
+ case *ast.StarExpr:
+ if id, ok := e.X.(*ast.Ident); ok {
+ return id.Name
+ }
+ }
+ // The parser accepts much more than just the legal forms.
+ return "invalid-type"
+}
+
+func (f *file) walk(fn func(ast.Node) bool) {
+ ast.Walk(walker(fn), f.f)
+}
+
+func (f *file) render(x interface{}) string {
+ var buf bytes.Buffer
+ if err := printer.Fprint(&buf, f.fset, x); err != nil {
+ panic(err)
+ }
+ return buf.String()
+}
+
+func (f *file) debugRender(x interface{}) string {
+ var buf bytes.Buffer
+ if err := ast.Fprint(&buf, f.fset, x, nil); err != nil {
+ panic(err)
+ }
+ return buf.String()
+}
+
+// walker adapts a function to satisfy the ast.Visitor interface.
+// The function returns whether the walk should proceed into the node's children.
+type walker func(ast.Node) bool
+
+func (w walker) Visit(node ast.Node) ast.Visitor {
+ if w(node) {
+ return w
+ }
+ return nil
+}
+
+func isIdent(expr ast.Expr, ident string) bool {
+ id, ok := expr.(*ast.Ident)
+ return ok && id.Name == ident
+}
+
+// isBlank returns whether id is the blank identifier "_".
+// If id == nil, the answer is false.
+func isBlank(id *ast.Ident) bool { return id != nil && id.Name == "_" }
+
+func isPkgDot(expr ast.Expr, pkg, name string) bool {
+ sel, ok := expr.(*ast.SelectorExpr)
+ return ok && isIdent(sel.X, pkg) && isIdent(sel.Sel, name)
+}
+
+func isOne(expr ast.Expr) bool {
+ lit, ok := expr.(*ast.BasicLit)
+ return ok && lit.Kind == token.INT && lit.Value == "1"
+}
+
+func isCgoExported(f *ast.FuncDecl) bool {
+ if f.Recv != nil || f.Doc == nil {
+ return false
+ }
+
+ cgoExport := regexp.MustCompile(fmt.Sprintf("(?m)^//export %s$", regexp.QuoteMeta(f.Name.Name)))
+ for _, c := range f.Doc.List {
+ if cgoExport.MatchString(c.Text) {
+ return true
+ }
+ }
+ return false
+}
+
+var basicTypeKinds = map[types.BasicKind]string{
+ types.UntypedBool: "bool",
+ types.UntypedInt: "int",
+ types.UntypedRune: "rune",
+ types.UntypedFloat: "float64",
+ types.UntypedComplex: "complex128",
+ types.UntypedString: "string",
+}
+
+// isUntypedConst reports whether expr is an untyped constant,
+// and indicates what its default type is.
+// scope may be nil.
+func (f *file) isUntypedConst(expr ast.Expr) (defType string, ok bool) {
+ // Re-evaluate expr outside of its context to see if it's untyped.
+ // (An expr evaluated within, for example, an assignment context will get the type of the LHS.)
+ exprStr := f.render(expr)
+ tv, err := types.Eval(f.fset, f.pkg.typesPkg, expr.Pos(), exprStr)
+ if err != nil {
+ return "", false
+ }
+ if b, ok := tv.Type.(*types.Basic); ok {
+ if dt, ok := basicTypeKinds[b.Kind()]; ok {
+ return dt, true
+ }
+ }
+
+ return "", false
+}
+
+// firstLineOf renders the given node and returns its first line.
+// It will also match the indentation of another node.
+func (f *file) firstLineOf(node, match ast.Node) string {
+ line := f.render(node)
+ if i := strings.Index(line, "\n"); i >= 0 {
+ line = line[:i]
+ }
+ return f.indentOf(match) + line
+}
+
+func (f *file) indentOf(node ast.Node) string {
+ line := srcLine(f.src, f.fset.Position(node.Pos()))
+ for i, r := range line {
+ switch r {
+ case ' ', '\t':
+ default:
+ return line[:i]
+ }
+ }
+ return line // unusual or empty line
+}
+
+func (f *file) srcLineWithMatch(node ast.Node, pattern string) (m []string) {
+ line := srcLine(f.src, f.fset.Position(node.Pos()))
+ line = strings.TrimSuffix(line, "\n")
+ rx := regexp.MustCompile(pattern)
+ return rx.FindStringSubmatch(line)
+}
+
+// imports returns true if the current file imports the specified package path.
+func (f *file) imports(importPath string) bool {
+ all := astutil.Imports(f.fset, f.f)
+ for _, p := range all {
+ for _, i := range p {
+ uq, err := strconv.Unquote(i.Path.Value)
+ if err == nil && importPath == uq {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// srcLine returns the complete line at p, including the terminating newline.
+func srcLine(src []byte, p token.Position) string {
+ // Run to end of line in both directions if not at line start/end.
+ lo, hi := p.Offset, p.Offset+1
+ for lo > 0 && src[lo-1] != '\n' {
+ lo--
+ }
+ for hi < len(src) && src[hi-1] != '\n' {
+ hi++
+ }
+ return string(src[lo:hi])
+}
diff --git a/vendor/golang.org/x/tools/AUTHORS b/vendor/golang.org/x/tools/AUTHORS
new file mode 100644
index 0000000000000..15167cd746c56
--- /dev/null
+++ b/vendor/golang.org/x/tools/AUTHORS
@@ -0,0 +1,3 @@
+# This source code refers to The Go Authors for copyright purposes.
+# The master list of authors is in the main Go distribution,
+# visible at http://tip.golang.org/AUTHORS.
diff --git a/vendor/golang.org/x/tools/CONTRIBUTORS b/vendor/golang.org/x/tools/CONTRIBUTORS
new file mode 100644
index 0000000000000..1c4577e968061
--- /dev/null
+++ b/vendor/golang.org/x/tools/CONTRIBUTORS
@@ -0,0 +1,3 @@
+# This source code was written by the Go contributors.
+# The master list of contributors is in the main Go distribution,
+# visible at http://tip.golang.org/CONTRIBUTORS.
diff --git a/vendor/golang.org/x/tools/LICENSE b/vendor/golang.org/x/tools/LICENSE
new file mode 100644
index 0000000000000..6a66aea5eafe0
--- /dev/null
+++ b/vendor/golang.org/x/tools/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2009 The Go Authors. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/golang.org/x/tools/PATENTS b/vendor/golang.org/x/tools/PATENTS
new file mode 100644
index 0000000000000..733099041f84f
--- /dev/null
+++ b/vendor/golang.org/x/tools/PATENTS
@@ -0,0 +1,22 @@
+Additional IP Rights Grant (Patents)
+
+"This implementation" means the copyrightable works distributed by
+Google as part of the Go project.
+
+Google hereby grants to You a perpetual, worldwide, non-exclusive,
+no-charge, royalty-free, irrevocable (except as stated in this section)
+patent license to make, have made, use, offer to sell, sell, import,
+transfer and otherwise run, modify and propagate the contents of this
+implementation of Go, where such license applies only to those patent
+claims, both currently owned or controlled by Google and acquired in
+the future, licensable by Google that are necessarily infringed by this
+implementation of Go. This grant does not include claims that would be
+infringed only as a consequence of further modification of this
+implementation. If you or your agent or exclusive licensee institute or
+order or agree to the institution of patent litigation against any
+entity (including a cross-claim or counterclaim in a lawsuit) alleging
+that this implementation of Go or any code incorporated within this
+implementation of Go constitutes direct or contributory patent
+infringement, or inducement of patent infringement, then any patent
+rights granted to you under this License for this implementation of Go
+shall terminate as of the date such litigation is filed.
diff --git a/vendor/golang.org/x/tools/cmd/goimports/doc.go b/vendor/golang.org/x/tools/cmd/goimports/doc.go
new file mode 100644
index 0000000000000..7033e4d4cff54
--- /dev/null
+++ b/vendor/golang.org/x/tools/cmd/goimports/doc.go
@@ -0,0 +1,43 @@
+/*
+
+Command goimports updates your Go import lines,
+adding missing ones and removing unreferenced ones.
+
+ $ go get golang.org/x/tools/cmd/goimports
+
+In addition to fixing imports, goimports also formats
+your code in the same style as gofmt so it can be used
+as a replacement for your editor's gofmt-on-save hook.
+
+For emacs, make sure you have the latest go-mode.el:
+ https://github.com/dominikh/go-mode.el
+Then in your .emacs file:
+ (setq gofmt-command "goimports")
+ (add-hook 'before-save-hook 'gofmt-before-save)
+
+For vim, set "gofmt_command" to "goimports":
+ https://golang.org/change/39c724dd7f252
+ https://golang.org/wiki/IDEsAndTextEditorPlugins
+ etc
+
+For GoSublime, follow the steps described here:
+ http://michaelwhatcott.com/gosublime-goimports/
+
+For other editors, you probably know what to do.
+
+To exclude directories in your $GOPATH from being scanned for Go
+files, goimports respects a configuration file at
+$GOPATH/src/.goimportsignore which may contain blank lines, comment
+lines (beginning with '#'), or lines naming a directory relative to
+the configuration file to ignore when scanning. No globbing or regex
+patterns are allowed. Use the "-v" verbose flag to verify it's
+working and see what goimports is doing.
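+
+As a hypothetical illustration (the directory names below are invented), a
+minimal .goimportsignore might contain:
+
+	# generated and vendored trees; skip these when scanning
+	vendor
+	internal/generated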
+
+File bugs or feature requests at:
+
+ https://golang.org/issues/new?title=x/tools/cmd/goimports:+
+
+Happy hacking!
+
+*/
+package main // import "golang.org/x/tools/cmd/goimports"
diff --git a/vendor/golang.org/x/tools/cmd/goimports/goimports.go b/vendor/golang.org/x/tools/cmd/goimports/goimports.go
new file mode 100644
index 0000000000000..a476a7f3c30a7
--- /dev/null
+++ b/vendor/golang.org/x/tools/cmd/goimports/goimports.go
@@ -0,0 +1,377 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package main
+
+import (
+ "bufio"
+ "bytes"
+ "errors"
+ "flag"
+ "fmt"
+ "go/build"
+ "go/scanner"
+ "io"
+ "io/ioutil"
+ "log"
+ "os"
+ "os/exec"
+ "path/filepath"
+ "runtime"
+ "runtime/pprof"
+ "strings"
+
+ "golang.org/x/tools/internal/imports"
+)
+
+var (
+ // main operation modes
+	list   = flag.Bool("l", false, "list files whose formatting differs from goimports'")
+ write = flag.Bool("w", false, "write result to (source) file instead of stdout")
+ doDiff = flag.Bool("d", false, "display diffs instead of rewriting files")
+ srcdir = flag.String("srcdir", "", "choose imports as if source code is from `dir`. When operating on a single file, dir may instead be the complete file name.")
+
+ verbose bool // verbose logging
+
+ cpuProfile = flag.String("cpuprofile", "", "CPU profile output")
+ memProfile = flag.String("memprofile", "", "memory profile output")
+ memProfileRate = flag.Int("memrate", 0, "if > 0, sets runtime.MemProfileRate")
+
+ options = &imports.Options{
+ TabWidth: 8,
+ TabIndent: true,
+ Comments: true,
+ Fragment: true,
+ // This environment, and its caches, will be reused for the whole run.
+ Env: &imports.ProcessEnv{
+ GOPATH: build.Default.GOPATH,
+ GOROOT: build.Default.GOROOT,
+ },
+ }
+ exitCode = 0
+)
+
+func init() {
+ flag.BoolVar(&options.AllErrors, "e", false, "report all errors (not just the first 10 on different lines)")
+ flag.StringVar(&options.Env.LocalPrefix, "local", "", "put imports beginning with this string after 3rd-party packages; comma-separated list")
+ flag.BoolVar(&options.FormatOnly, "format-only", false, "if true, don't fix imports and only format. In this mode, goimports is effectively gofmt, with the addition that imports are grouped into sections.")
+}
+
+func report(err error) {
+ scanner.PrintError(os.Stderr, err)
+ exitCode = 2
+}
+
+func usage() {
+ fmt.Fprintf(os.Stderr, "usage: goimports [flags] [path ...]\n")
+ flag.PrintDefaults()
+ os.Exit(2)
+}
+
+func isGoFile(f os.FileInfo) bool {
+ // ignore non-Go files
+ name := f.Name()
+ return !f.IsDir() && !strings.HasPrefix(name, ".") && strings.HasSuffix(name, ".go")
+}
+
+// argumentType is which mode goimports was invoked as.
+type argumentType int
+
+const (
+ // fromStdin means the user is piping their source into goimports.
+ fromStdin argumentType = iota
+
+ // singleArg is the common case from editors, when goimports is run on
+ // a single file.
+ singleArg
+
+ // multipleArg is when the user ran "goimports file1.go file2.go"
+ // or ran goimports on a directory tree.
+ multipleArg
+)
+
+func processFile(filename string, in io.Reader, out io.Writer, argType argumentType) error {
+ opt := options
+ if argType == fromStdin {
+ nopt := *options
+ nopt.Fragment = true
+ opt = &nopt
+ }
+
+ if in == nil {
+ f, err := os.Open(filename)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ in = f
+ }
+
+ src, err := ioutil.ReadAll(in)
+ if err != nil {
+ return err
+ }
+
+ target := filename
+ if *srcdir != "" {
+		// Determine whether the provided -srcdir is a directory or file
+ // and then use it to override the target.
+ //
+ // See https://github.com/dominikh/go-mode.el/issues/146
+ if isFile(*srcdir) {
+ if argType == multipleArg {
+ return errors.New("-srcdir value can't be a file when passing multiple arguments or when walking directories")
+ }
+ target = *srcdir
+ } else if argType == singleArg && strings.HasSuffix(*srcdir, ".go") && !isDir(*srcdir) {
+ // For a file which doesn't exist on disk yet, but might shortly.
+ // e.g. user in editor opens $DIR/newfile.go and newfile.go doesn't yet exist on disk.
+ // The goimports on-save hook writes the buffer to a temp file
+ // first and runs goimports before the actual save to newfile.go.
+ // The editor's buffer is named "newfile.go" so that is passed to goimports as:
+ // goimports -srcdir=/gopath/src/pkg/newfile.go /tmp/gofmtXXXXXXXX.go
+ // and then the editor reloads the result from the tmp file and writes
+ // it to newfile.go.
+ target = *srcdir
+ } else {
+ // Pretend that file is from *srcdir in order to decide
+ // visible imports correctly.
+ target = filepath.Join(*srcdir, filepath.Base(filename))
+ }
+ }
+
+ res, err := imports.Process(target, src, opt)
+ if err != nil {
+ return err
+ }
+
+ if !bytes.Equal(src, res) {
+ // formatting has changed
+ if *list {
+ fmt.Fprintln(out, filename)
+ }
+ if *write {
+ if argType == fromStdin {
+ // filename is "<standard input>"
+ return errors.New("can't use -w on stdin")
+ }
+ err = ioutil.WriteFile(filename, res, 0)
+ if err != nil {
+ return err
+ }
+ }
+ if *doDiff {
+ if argType == fromStdin {
+ filename = "stdin.go" // because <standard input>.orig looks silly
+ }
+ data, err := diff(src, res, filename)
+ if err != nil {
+ return fmt.Errorf("computing diff: %s", err)
+ }
+ fmt.Printf("diff -u %s %s\n", filepath.ToSlash(filename+".orig"), filepath.ToSlash(filename))
+ out.Write(data)
+ }
+ }
+
+ if !*list && !*write && !*doDiff {
+ _, err = out.Write(res)
+ }
+
+ return err
+}
+
+func visitFile(path string, f os.FileInfo, err error) error {
+ if err == nil && isGoFile(f) {
+ err = processFile(path, nil, os.Stdout, multipleArg)
+ }
+ if err != nil {
+ report(err)
+ }
+ return nil
+}
+
+func walkDir(path string) {
+ filepath.Walk(path, visitFile)
+}
+
+func main() {
+ runtime.GOMAXPROCS(runtime.NumCPU())
+
+ // call gofmtMain in a separate function
+ // so that it can use defer and have them
+ // run before the exit.
+ gofmtMain()
+ os.Exit(exitCode)
+}
+
+// parseFlags parses command line flags and returns the paths to process.
+// It's a var so that custom implementations can replace it in other files.
+var parseFlags = func() []string {
+ flag.BoolVar(&verbose, "v", false, "verbose logging")
+
+ flag.Parse()
+ return flag.Args()
+}
+
+func bufferedFileWriter(dest string) (w io.Writer, close func()) {
+ f, err := os.Create(dest)
+ if err != nil {
+ log.Fatal(err)
+ }
+ bw := bufio.NewWriter(f)
+ return bw, func() {
+ if err := bw.Flush(); err != nil {
+ log.Fatalf("error flushing %v: %v", dest, err)
+ }
+ if err := f.Close(); err != nil {
+ log.Fatal(err)
+ }
+ }
+}
+
+func gofmtMain() {
+ flag.Usage = usage
+ paths := parseFlags()
+
+ if *cpuProfile != "" {
+ bw, flush := bufferedFileWriter(*cpuProfile)
+ pprof.StartCPUProfile(bw)
+ defer flush()
+ defer pprof.StopCPUProfile()
+ }
+ // doTrace is a conditionally compiled wrapper around runtime/trace. It is
+ // used to allow goimports to compile under gccgo, which does not support
+ // runtime/trace. See https://golang.org/issue/15544.
+ defer doTrace()()
+ if *memProfileRate > 0 {
+ runtime.MemProfileRate = *memProfileRate
+ bw, flush := bufferedFileWriter(*memProfile)
+ defer func() {
+ runtime.GC() // materialize all statistics
+ if err := pprof.WriteHeapProfile(bw); err != nil {
+ log.Fatal(err)
+ }
+ flush()
+ }()
+ }
+
+ if verbose {
+ log.SetFlags(log.LstdFlags | log.Lmicroseconds)
+ options.Env.Debug = true
+ }
+ if options.TabWidth < 0 {
+ fmt.Fprintf(os.Stderr, "negative tabwidth %d\n", options.TabWidth)
+ exitCode = 2
+ return
+ }
+
+ if len(paths) == 0 {
+ if err := processFile("<standard input>", os.Stdin, os.Stdout, fromStdin); err != nil {
+ report(err)
+ }
+ return
+ }
+
+ argType := singleArg
+ if len(paths) > 1 {
+ argType = multipleArg
+ }
+
+ for _, path := range paths {
+ switch dir, err := os.Stat(path); {
+ case err != nil:
+ report(err)
+ case dir.IsDir():
+ walkDir(path)
+ default:
+ if err := processFile(path, nil, os.Stdout, argType); err != nil {
+ report(err)
+ }
+ }
+ }
+}
+
+func writeTempFile(dir, prefix string, data []byte) (string, error) {
+ file, err := ioutil.TempFile(dir, prefix)
+ if err != nil {
+ return "", err
+ }
+ _, err = file.Write(data)
+ if err1 := file.Close(); err == nil {
+ err = err1
+ }
+ if err != nil {
+ os.Remove(file.Name())
+ return "", err
+ }
+ return file.Name(), nil
+}
+
+func diff(b1, b2 []byte, filename string) (data []byte, err error) {
+ f1, err := writeTempFile("", "gofmt", b1)
+ if err != nil {
+ return
+ }
+ defer os.Remove(f1)
+
+ f2, err := writeTempFile("", "gofmt", b2)
+ if err != nil {
+ return
+ }
+ defer os.Remove(f2)
+
+ cmd := "diff"
+ if runtime.GOOS == "plan9" {
+ cmd = "/bin/ape/diff"
+ }
+
+ data, err = exec.Command(cmd, "-u", f1, f2).CombinedOutput()
+ if len(data) > 0 {
+ // diff exits with a non-zero status when the files don't match.
+ // Ignore that failure as long as we get output.
+ return replaceTempFilename(data, filename)
+ }
+ return
+}
+
+// replaceTempFilename replaces temporary filenames in diff with actual one.
+//
+// --- /tmp/gofmt316145376 2017-02-03 19:13:00.280468375 -0500
+// +++ /tmp/gofmt617882815 2017-02-03 19:13:00.280468375 -0500
+// ...
+// ->
+// --- path/to/file.go.orig 2017-02-03 19:13:00.280468375 -0500
+// +++ path/to/file.go 2017-02-03 19:13:00.280468375 -0500
+// ...
+func replaceTempFilename(diff []byte, filename string) ([]byte, error) {
+ bs := bytes.SplitN(diff, []byte{'\n'}, 3)
+ if len(bs) < 3 {
+ return nil, fmt.Errorf("got unexpected diff for %s", filename)
+ }
+ // Preserve timestamps.
+ var t0, t1 []byte
+ if i := bytes.LastIndexByte(bs[0], '\t'); i != -1 {
+ t0 = bs[0][i:]
+ }
+ if i := bytes.LastIndexByte(bs[1], '\t'); i != -1 {
+ t1 = bs[1][i:]
+ }
+ // Always print filepath with slash separator.
+ f := filepath.ToSlash(filename)
+ bs[0] = []byte(fmt.Sprintf("--- %s%s", f+".orig", t0))
+ bs[1] = []byte(fmt.Sprintf("+++ %s%s", f, t1))
+ return bytes.Join(bs, []byte{'\n'}), nil
+}
+
+// isFile reports whether name is a file.
+func isFile(name string) bool {
+ fi, err := os.Stat(name)
+ return err == nil && fi.Mode().IsRegular()
+}
+
+// isDir reports whether name is a directory.
+func isDir(name string) bool {
+ fi, err := os.Stat(name)
+ return err == nil && fi.IsDir()
+}
diff --git a/vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go b/vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go
new file mode 100644
index 0000000000000..21d867eaab552
--- /dev/null
+++ b/vendor/golang.org/x/tools/cmd/goimports/goimports_gc.go
@@ -0,0 +1,26 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build gc
+
+package main
+
+import (
+ "flag"
+ "runtime/trace"
+)
+
+var traceProfile = flag.String("trace", "", "trace profile output")
+
+func doTrace() func() {
+ if *traceProfile != "" {
+ bw, flush := bufferedFileWriter(*traceProfile)
+ trace.Start(bw)
+ return func() {
+ flush()
+ trace.Stop()
+ }
+ }
+ return func() {}
+}
diff --git a/vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go b/vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go
new file mode 100644
index 0000000000000..f5531ceb31738
--- /dev/null
+++ b/vendor/golang.org/x/tools/cmd/goimports/goimports_not_gc.go
@@ -0,0 +1,11 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !gc
+
+package main
+
+func doTrace() func() {
+ return func() {}
+}
diff --git a/vendor/golang.org/x/tools/go/analysis/analysis.go b/vendor/golang.org/x/tools/go/analysis/analysis.go
new file mode 100644
index 0000000000000..ea605f4fd463d
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/analysis/analysis.go
@@ -0,0 +1,221 @@
+package analysis
+
+import (
+ "flag"
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+ "reflect"
+)
+
+// An Analyzer describes an analysis function and its options.
+type Analyzer struct {
+ // The Name of the analyzer must be a valid Go identifier
+ // as it may appear in command-line flags, URLs, and so on.
+ Name string
+
+ // Doc is the documentation for the analyzer.
+ // The part before the first "\n\n" is the title
+ // (no capital or period, max ~60 letters).
+ Doc string
+
+ // Flags defines any flags accepted by the analyzer.
+ // The manner in which these flags are exposed to the user
+ // depends on the driver which runs the analyzer.
+ Flags flag.FlagSet
+
+ // Run applies the analyzer to a package.
+ // It returns an error if the analyzer failed.
+ //
+ // On success, the Run function may return a result
+ // computed by the Analyzer; its type must match ResultType.
+ // The driver makes this result available as an input to
+ // another Analyzer that depends directly on this one (see
+ // Requires) when it analyzes the same package.
+ //
+ // To pass analysis results between packages (and thus
+ // potentially between address spaces), use Facts, which are
+ // serializable.
+ Run func(*Pass) (interface{}, error)
+
+ // RunDespiteErrors allows the driver to invoke
+ // the Run method of this analyzer even on a
+ // package that contains parse or type errors.
+ RunDespiteErrors bool
+
+ // Requires is a set of analyzers that must run successfully
+ // before this one on a given package. This analyzer may inspect
+ // the outputs produced by each analyzer in Requires.
+ // The graph over analyzers implied by Requires edges must be acyclic.
+ //
+ // Requires establishes a "horizontal" dependency between
+ // analysis passes (different analyzers, same package).
+ Requires []*Analyzer
+
+ // ResultType is the type of the optional result of the Run function.
+ ResultType reflect.Type
+
+ // FactTypes indicates that this analyzer imports and exports
+ // Facts of the specified concrete types.
+ // An analyzer that uses facts may assume that its import
+ // dependencies have been similarly analyzed before it runs.
+ // Facts must be pointers.
+ //
+ // FactTypes establishes a "vertical" dependency between
+ // analysis passes (same analyzer, different packages).
+ FactTypes []Fact
+}
+
+func (a *Analyzer) String() string { return a.Name }
+
+// A Pass provides information to the Run function that
+// applies a specific analyzer to a single Go package.
+//
+// It forms the interface between the analysis logic and the driver
+// program, and has both input and output components.
+//
+// As in a compiler, one pass may depend on the result computed by another.
+//
+// The Run function should not call any of the Pass functions concurrently.
+type Pass struct {
+ Analyzer *Analyzer // the identity of the current analyzer
+
+ // syntax and type information
+ Fset *token.FileSet // file position information
+ Files []*ast.File // the abstract syntax tree of each file
+ OtherFiles []string // names of non-Go files of this package
+ Pkg *types.Package // type information about the package
+ TypesInfo *types.Info // type information about the syntax trees
+ TypesSizes types.Sizes // function for computing sizes of types
+
+ // Report reports a Diagnostic, a finding about a specific location
+ // in the analyzed source code such as a potential mistake.
+ // It may be called by the Run function.
+ Report func(Diagnostic)
+
+ // ResultOf provides the inputs to this analysis pass, which are
+ // the corresponding results of its prerequisite analyzers.
+	// The map keys are the elements of Analyzer.Requires,
+ // and the type of each corresponding value is the required
+ // analysis's ResultType.
+ ResultOf map[*Analyzer]interface{}
+
+ // -- facts --
+
+ // ImportObjectFact retrieves a fact associated with obj.
+ // Given a value ptr of type *T, where *T satisfies Fact,
+ // ImportObjectFact copies the value to *ptr.
+ //
+ // ImportObjectFact panics if called after the pass is complete.
+ // ImportObjectFact is not concurrency-safe.
+ ImportObjectFact func(obj types.Object, fact Fact) bool
+
+ // ImportPackageFact retrieves a fact associated with package pkg,
+ // which must be this package or one of its dependencies.
+ // See comments for ImportObjectFact.
+ ImportPackageFact func(pkg *types.Package, fact Fact) bool
+
+ // ExportObjectFact associates a fact of type *T with the obj,
+ // replacing any previous fact of that type.
+ //
+ // ExportObjectFact panics if it is called after the pass is
+ // complete, or if obj does not belong to the package being analyzed.
+ // ExportObjectFact is not concurrency-safe.
+ ExportObjectFact func(obj types.Object, fact Fact)
+
+ // ExportPackageFact associates a fact with the current package.
+ // See comments for ExportObjectFact.
+ ExportPackageFact func(fact Fact)
+
+ // AllPackageFacts returns a new slice containing all package facts of the analysis's FactTypes
+ // in unspecified order.
+ // WARNING: This is an experimental API and may change in the future.
+ AllPackageFacts func() []PackageFact
+
+ // AllObjectFacts returns a new slice containing all object facts of the analysis's FactTypes
+ // in unspecified order.
+ // WARNING: This is an experimental API and may change in the future.
+ AllObjectFacts func() []ObjectFact
+
+ /* Further fields may be added in future. */
+ // For example, suggested or applied refactorings.
+}
+
+// PackageFact is a package together with an associated fact.
+// WARNING: This is an experimental API and may change in the future.
+type PackageFact struct {
+ Package *types.Package
+ Fact Fact
+}
+
+// ObjectFact is an object together with an associated fact.
+// WARNING: This is an experimental API and may change in the future.
+type ObjectFact struct {
+ Object types.Object
+ Fact Fact
+}
+
+// Reportf is a helper function that reports a Diagnostic using the
+// specified position and formatted error message.
+func (pass *Pass) Reportf(pos token.Pos, format string, args ...interface{}) {
+ msg := fmt.Sprintf(format, args...)
+ pass.Report(Diagnostic{Pos: pos, Message: msg})
+}
+
+// The Range interface provides a range. It's equivalent to and satisfied by
+// ast.Node.
+type Range interface {
+ Pos() token.Pos // position of first character belonging to the node
+ End() token.Pos // position of first character immediately after the node
+}
+
+// ReportRangef is a helper function that reports a Diagnostic using the
+// range provided. ast.Node values can be passed in as the range because
+// they satisfy the Range interface.
+func (pass *Pass) ReportRangef(rng Range, format string, args ...interface{}) {
+ msg := fmt.Sprintf(format, args...)
+ pass.Report(Diagnostic{Pos: rng.Pos(), End: rng.End(), Message: msg})
+}
+
+func (pass *Pass) String() string {
+ return fmt.Sprintf("%s@%s", pass.Analyzer.Name, pass.Pkg.Path())
+}
+
+// A Fact is an intermediate fact produced during analysis.
+//
+// Each fact is associated with a named declaration (a types.Object) or
+// with a package as a whole. A single object or package may have
+// multiple associated facts, but only one of any particular fact type.
+//
+// A Fact represents a predicate such as "never returns", but does not
+// represent the subject of the predicate such as "function F" or "package P".
+//
+// Facts may be produced in one analysis pass and consumed by another
+// analysis pass even if these are in different address spaces.
+// If package P imports Q, all facts about Q produced during
+// analysis of that package will be available during later analysis of P.
+// Facts are analogous to type export data in a build system:
+// just as export data enables separate compilation of several passes,
+// facts enable "separate analysis".
+//
+// Each pass (a, p) starts with the set of facts produced by the
+// same analyzer a applied to the packages directly imported by p.
+// The analysis may add facts to the set, and they may be exported in turn.
+// An analysis's Run function may retrieve facts by calling
+// Pass.Import{Object,Package}Fact and update them using
+// Pass.Export{Object,Package}Fact.
+//
+// A fact is logically private to its Analyzer. To pass values
+// between different analyzers, use the results mechanism;
+// see Analyzer.Requires, Analyzer.ResultType, and Pass.ResultOf.
+//
+// A Fact type must be a pointer.
+// Facts are encoded and decoded using encoding/gob.
+// A Fact may implement the GobEncoder/GobDecoder interfaces
+// to customize its encoding. Fact encoding should not fail.
+//
+// A Fact should not be modified once exported.
+type Fact interface {
+ AFact() // dummy method to avoid type errors
+}
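+
+// As an illustrative sketch (the type name isNoReturn is hypothetical), a
+// client analyzer would declare a fact type like so:
+//
+//	type isNoReturn struct{} // => this function never returns
+//
+//	func (*isNoReturn) AFact() {}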
diff --git a/vendor/golang.org/x/tools/go/analysis/diagnostic.go b/vendor/golang.org/x/tools/go/analysis/diagnostic.go
new file mode 100644
index 0000000000000..57eaf6faa2ac7
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/analysis/diagnostic.go
@@ -0,0 +1,61 @@
+package analysis
+
+import "go/token"
+
+// A Diagnostic is a message associated with a source location or range.
+//
+// An Analyzer may return a variety of diagnostics; the optional Category,
+// which should be a constant, may be used to classify them.
+// It is primarily intended to make it easy to look up documentation.
+//
+// If End is provided, the diagnostic is specified to apply to the range between
+// Pos and End.
+type Diagnostic struct {
+ Pos token.Pos
+ End token.Pos // optional
+ Category string // optional
+ Message string
+
+ // SuggestedFixes contains suggested fixes for a diagnostic which can be used to perform
+ // edits to a file that address the diagnostic.
+ // TODO(matloob): Should multiple SuggestedFixes be allowed for a diagnostic?
+ // Diagnostics should not contain SuggestedFixes that overlap.
+ // Experimental: This API is experimental and may change in the future.
+ SuggestedFixes []SuggestedFix // optional
+
+ // Experimental: This API is experimental and may change in the future.
+ Related []RelatedInformation // optional
+}
+
+// RelatedInformation contains information related to a diagnostic.
+// For example, a diagnostic that flags duplicated declarations of a
+// variable may include one RelatedInformation per existing
+// declaration.
+type RelatedInformation struct {
+ Pos token.Pos
+ End token.Pos
+ Message string
+}
+
+// A SuggestedFix is a code change associated with a Diagnostic that a user can choose
+// to apply to their code. Usually the SuggestedFix is meant to fix the issue flagged
+// by the diagnostic.
+// TextEdits for a SuggestedFix should not overlap. TextEdits for a SuggestedFix
+// should not contain edits for other packages.
+// Experimental: This API is experimental and may change in the future.
+type SuggestedFix struct {
+ // A description for this suggested fix to be shown to a user deciding
+ // whether to accept it.
+ Message string
+ TextEdits []TextEdit
+}
+
+// A TextEdit represents the replacement of the code between Pos and End with the new text.
+// Each TextEdit should apply to a single file. End should not be earlier in the file than Pos.
+// Experimental: This API is experimental and may change in the future.
+type TextEdit struct {
+ // For a pure insertion, End can either be set to Pos or token.NoPos.
+ Pos token.Pos
+ End token.Pos
+ NewText []byte
+}
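+
+// As a rough usage sketch (the variable stmt is hypothetical), a pass could
+// attach a fix that rewrites "x += 1" to "x++":
+//
+//	pass.Report(analysis.Diagnostic{
+//		Pos:     stmt.Pos(),
+//		Message: "should replace x += 1 with x++",
+//		SuggestedFixes: []analysis.SuggestedFix{{
+//			Message:   "replace with x++",
+//			TextEdits: []analysis.TextEdit{{Pos: stmt.Pos(), End: stmt.End(), NewText: []byte("x++")}},
+//		}},
+//	})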
diff --git a/vendor/golang.org/x/tools/go/analysis/doc.go b/vendor/golang.org/x/tools/go/analysis/doc.go
new file mode 100644
index 0000000000000..a2353fc88b9c6
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/analysis/doc.go
@@ -0,0 +1,336 @@
+/*
+
+The analysis package defines the interface between a modular static
+analysis and an analysis driver program.
+
+Background
+
+A static analysis is a function that inspects a package of Go code and
+reports a set of diagnostics (typically mistakes in the code), and
+perhaps produces other results as well, such as suggested refactorings
+or other facts. An analysis that reports mistakes is informally called a
+"checker". For example, the printf checker reports mistakes in
+fmt.Printf format strings.
+
+A "modular" analysis is one that inspects one package at a time but can
+save information from a lower-level package and use it when inspecting a
+higher-level package, analogous to separate compilation in a toolchain.
+The printf checker is modular: when it discovers that a function such as
+log.Fatalf delegates to fmt.Printf, it records this fact, and checks
+calls to that function too, including calls made from another package.
+
+By implementing a common interface, checkers from a variety of sources
+can be easily selected, incorporated, and reused in a wide range of
+driver programs including command-line tools (such as vet), text editors and
+IDEs, build and test systems (such as go build, Bazel, or Buck), test
+frameworks, code review tools, code-base indexers (such as SourceGraph),
+documentation viewers (such as godoc), batch pipelines for large code
+bases, and so on.
+
+
+Analyzer
+
+The primary type in the API is Analyzer. An Analyzer statically
+describes an analysis function: its name, documentation, flags,
+relationship to other analyzers, and of course, its logic.
+
+To define an analysis, a user declares a (logically constant) variable
+of type Analyzer. Here is a typical example from one of the analyzers in
+the go/analysis/passes/ subdirectory:
+
+ package unusedresult
+
+ var Analyzer = &analysis.Analyzer{
+ Name: "unusedresult",
+ Doc: "check for unused results of calls to some functions",
+ Run: run,
+ ...
+ }
+
+ func run(pass *analysis.Pass) (interface{}, error) {
+ ...
+ }
+
+
+An analysis driver is a program such as vet that runs a set of
+analyses and prints the diagnostics that they report.
+The driver program must import the list of Analyzers it needs.
+Typically each Analyzer resides in a separate package.
+To add a new Analyzer to an existing driver, add another item to the list:
+
+ import ( "unusedresult"; "nilness"; "printf" )
+
+ var analyses = []*analysis.Analyzer{
+ unusedresult.Analyzer,
+ nilness.Analyzer,
+ printf.Analyzer,
+ }
+
+A driver may use the name, flags, and documentation to provide on-line
+help that describes the analyses it performs.
+The doc comment contains a brief one-line summary,
+optionally followed by paragraphs of explanation.
+The vet command, shown below, is an example of a driver that runs
+multiple analyzers. It is based on the multichecker package
+(see the "Standalone commands" section for details).
+
+ $ go build golang.org/x/tools/go/analysis/cmd/vet
+ $ ./vet help
+ vet is a tool for static analysis of Go programs.
+
+ Usage: vet [-flag] [package]
+
+ Registered analyzers:
+
+ asmdecl report mismatches between assembly files and Go declarations
+ assign check for useless assignments
+ atomic check for common mistakes using the sync/atomic package
+ ...
+ unusedresult check for unused results of calls to some functions
+
+ $ ./vet help unusedresult
+ unusedresult: check for unused results of calls to some functions
+
+ Analyzer flags:
+
+ -unusedresult.funcs value
+ comma-separated list of functions whose results must be used (default Error,String)
+ -unusedresult.stringmethods value
+ comma-separated list of names of methods of type func() string whose results must be used
+
+ Some functions like fmt.Errorf return a result and have no side effects,
+ so it is always a mistake to discard the result. This analyzer reports
+ calls to certain functions in which the result of the call is ignored.
+
+ The set of functions may be controlled using flags.
+
+The Analyzer type has more fields besides those shown above:
+
+ type Analyzer struct {
+ Name string
+ Doc string
+ Flags flag.FlagSet
+ Run func(*Pass) (interface{}, error)
+ RunDespiteErrors bool
+ ResultType reflect.Type
+ Requires []*Analyzer
+ FactTypes []Fact
+ }
+
+The Flags field declares a set of named (global) flag variables that
+control analysis behavior. Unlike vet, analysis flags are not declared
+directly in the command line FlagSet; it is up to the driver to set the
+flag variables. A driver for a single analysis, a, might expose its flag
+f directly on the command line as -f, whereas a driver for multiple
+analyses might prefix the flag name by the analysis name (-a.f) to avoid
+ambiguity. An IDE might expose the flags through a graphical interface,
+and a batch pipeline might configure them from a config file.
+See the "findcall" analyzer for an example of flags in action.
+
+The RunDespiteErrors flag indicates whether the analysis is equipped to
+handle ill-typed code. If not, the driver will skip the analysis if
+there were parse or type errors.
+The optional ResultType field specifies the type of the result value
+computed by this analysis and made available to other analyses.
+The Requires field specifies a list of analyses upon which
+this one depends and whose results it may access, and it constrains the
+order in which a driver may run analyses.
+The FactTypes field is discussed in the section on Modularity.
+The analysis package provides a Validate function to perform basic
+sanity checks on an Analyzer, such as that its Requires graph is
+acyclic, its fact and result types are unique, and so on.
+
+Finally, the Run field contains a function to be called by the driver to
+execute the analysis on a single package. The driver passes it an
+instance of the Pass type.
+
+
+Pass
+
+A Pass describes a single unit of work: the application of a particular
+Analyzer to a particular package of Go code.
+The Pass provides information to the Analyzer's Run function about the
+package being analyzed, and provides operations to the Run function for
+reporting diagnostics and other information back to the driver.
+
+ type Pass struct {
+ Fset *token.FileSet
+ Files []*ast.File
+ OtherFiles []string
+ Pkg *types.Package
+ TypesInfo *types.Info
+ ResultOf map[*Analyzer]interface{}
+ Report func(Diagnostic)
+ ...
+ }
+
+The Fset, Files, Pkg, and TypesInfo fields provide the syntax trees,
+type information, and source positions for a single package of Go code.
+
+The OtherFiles field provides the names, but not the contents, of non-Go
+files such as assembly that are part of this package. See the "asmdecl"
+or "buildtags" analyzers for examples of loading non-Go files and reporting
+diagnostics against them.
+
+The ResultOf field provides the results computed by the analyzers
+required by this one, as expressed in its Analyzer.Requires field. The
+driver runs the required analyzers first and makes their results
+available in this map. Each Analyzer must return a value of the type
+described in its Analyzer.ResultType field.
+For example, the "ctrlflow" analyzer returns a *ctrlflow.CFGs, which
+provides a control-flow graph for each function in the package (see
+golang.org/x/tools/go/cfg); the "inspect" analyzer returns a value that
+enables other Analyzers to traverse the syntax trees of the package more
+efficiently; and the "buildssa" analyzer constructs an SSA-form
+intermediate representation.
+Each of these Analyzers extends the capabilities of later Analyzers
+without adding a dependency to the core API, so an analysis tool pays
+only for the extensions it needs.
+
+The Report function emits a diagnostic, a message associated with a
+source position. For most analyses, diagnostics are their primary
+result.
+For convenience, Pass provides a helper method, Reportf, to report a new
+diagnostic by formatting a string.
+Diagnostic is defined as:
+
+ type Diagnostic struct {
+ Pos token.Pos
+ Category string // optional
+ Message string
+ }
+
+The optional Category field is a short identifier that classifies the
+kind of message when an analysis produces several kinds of diagnostic.
+
+Most Analyzers inspect typed Go syntax trees, but a few, such as asmdecl
+and buildtag, inspect the raw text of Go source files or even non-Go
+files such as assembly. To report a diagnostic against a line of a
+raw text file, use the following sequence:
+
+ content, err := ioutil.ReadFile(filename)
+ if err != nil { ... }
+ tf := fset.AddFile(filename, -1, len(content))
+ tf.SetLinesForContent(content)
+ ...
+ pass.Reportf(tf.LineStart(line), "oops")
+
+
+Modular analysis with Facts
+
+To improve efficiency and scalability, large programs are routinely
+built using separate compilation: units of the program are compiled
+separately, and recompiled only when one of their dependencies changes;
+independent modules may be compiled in parallel. The same technique may
+be applied to static analyses, for the same benefits. Such analyses are
+described as "modular".
+
+A compiler’s type checker is an example of a modular static analysis.
+Many other checkers we would like to apply to Go programs can be
+understood as alternative or non-standard type systems. For example,
+vet's printf checker infers whether a function has the "printf wrapper"
+type, and it applies stricter checks to calls of such functions. In
+addition, it records which functions are printf wrappers for use by
+later analysis passes to identify other printf wrappers by induction.
+A result such as “f is a printf wrapper” that is not interesting by
+itself but serves as a stepping stone to an interesting result (such as
+a diagnostic) is called a "fact".
+
+The analysis API allows an analysis to define new types of facts, to
+associate facts of these types with objects (named entities) declared
+within the current package, or with the package as a whole, and to query
+for an existing fact of a given type associated with an object or
+package.
+
+An Analyzer that uses facts must declare their types:
+
+ var Analyzer = &analysis.Analyzer{
+ Name: "printf",
+ FactTypes: []analysis.Fact{new(isWrapper)},
+ ...
+ }
+
+ type isWrapper struct{} // => *types.Func f “is a printf wrapper”
+
+The driver program ensures that facts for a pass’s dependencies are
+generated before analyzing the package and is responsible for propagating
+facts from one package to another, possibly across address spaces.
+Consequently, Facts must be serializable. The API requires that drivers
+use the gob encoding, an efficient, robust, self-describing binary
+protocol. A fact type may implement the GobEncoder/GobDecoder interfaces
+if the default encoding is unsuitable. Facts should be stateless.
+
+The Pass type has functions to import and export facts,
+associated either with an object or with a package:
+
+ type Pass struct {
+ ...
+ ExportObjectFact func(types.Object, Fact)
+ ImportObjectFact func(types.Object, Fact) bool
+
+ ExportPackageFact func(fact Fact)
+ ImportPackageFact func(*types.Package, Fact) bool
+ }
+
+An Analyzer may only export facts associated with the current package or
+its objects, though it may import facts from any package or object that
+is an import dependency of the current package.
+
+Conceptually, ExportObjectFact(obj, fact) inserts fact into a hidden map keyed by
+the pair (obj, TypeOf(fact)), and the ImportObjectFact function
+retrieves the entry from this map and copies its value into the variable
+pointed to by fact. This scheme assumes that the concrete type of fact
+is a pointer; this assumption is checked by the Validate function.
+See the "printf" analyzer for an example of object facts in action.
+
+Some driver implementations (such as those based on Bazel and Blaze) do
+not currently apply analyzers to packages of the standard library.
+Therefore, for best results, analyzer authors should not rely on
+analysis facts being available for standard packages.
+For example, although the printf checker is capable of deducing during
+analysis of the log package that log.Printf is a printf wrapper,
+this fact is built in to the analyzer so that it correctly checks
+calls to log.Printf even when run in a driver that does not apply
+it to standard packages. We would like to remove this limitation in future.
+
+
+Testing an Analyzer
+
+The analysistest subpackage provides utilities for testing an Analyzer.
+In a few lines of code, it is possible to run an analyzer on a package
+of testdata files and check that it reported all the expected
+diagnostics and facts (and no more). Expectations are expressed using
+"// want ..." comments in the input code.
+
+
+Standalone commands
+
+Analyzers are provided in the form of packages that a driver program is
+expected to import. The vet command imports a set of several analyzers,
+but users may wish to define their own analysis commands that perform
+additional checks. To simplify the task of creating an analysis command,
+either for a single analyzer or for a whole suite, we provide the
+singlechecker and multichecker subpackages.
+
+The singlechecker package provides the main function for a command that
+runs one analyzer. By convention, each analyzer such as
+go/analysis/passes/findcall should be accompanied by a singlechecker-based
+command such as go/analysis/passes/findcall/cmd/findcall, defined in its
+entirety as:
+
+ package main
+
+ import (
+ "golang.org/x/tools/go/analysis/passes/findcall"
+ "golang.org/x/tools/go/analysis/singlechecker"
+ )
+
+ func main() { singlechecker.Main(findcall.Analyzer) }
+
+A tool that provides multiple analyzers can use multichecker in a
+similar way, giving it the list of Analyzers.
+
+
+
+*/
+package analysis
diff --git a/vendor/golang.org/x/tools/go/analysis/passes/inspect/inspect.go b/vendor/golang.org/x/tools/go/analysis/passes/inspect/inspect.go
new file mode 100644
index 0000000000000..2856df137c562
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/analysis/passes/inspect/inspect.go
@@ -0,0 +1,49 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package inspect defines an Analyzer that provides an AST inspector
+// (golang.org/x/tools/go/ast/inspector.Inspector) for the syntax trees
+// of a package. It is only a building block for other analyzers.
+//
+// Example of use in another analysis:
+//
+// import (
+// "golang.org/x/tools/go/analysis"
+// "golang.org/x/tools/go/analysis/passes/inspect"
+// "golang.org/x/tools/go/ast/inspector"
+// )
+//
+// var Analyzer = &analysis.Analyzer{
+// ...
+// Requires: []*analysis.Analyzer{inspect.Analyzer},
+// }
+//
+// func run(pass *analysis.Pass) (interface{}, error) {
+// inspect := pass.ResultOf[inspect.Analyzer].(*inspector.Inspector)
+// inspect.Preorder(nil, func(n ast.Node) {
+// ...
+// })
+//		return nil, nil
+// }
+//
+package inspect
+
+import (
+ "reflect"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/ast/inspector"
+)
+
+var Analyzer = &analysis.Analyzer{
+ Name: "inspect",
+ Doc: "optimize AST traversal for later passes",
+ Run: run,
+ RunDespiteErrors: true,
+ ResultType: reflect.TypeOf(new(inspector.Inspector)),
+}
+
+func run(pass *analysis.Pass) (interface{}, error) {
+ return inspector.New(pass.Files), nil
+}
diff --git a/vendor/golang.org/x/tools/go/analysis/validate.go b/vendor/golang.org/x/tools/go/analysis/validate.go
new file mode 100644
index 0000000000000..be98143461e1a
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/analysis/validate.go
@@ -0,0 +1,97 @@
+package analysis
+
+import (
+ "fmt"
+ "reflect"
+ "unicode"
+)
+
+// Validate reports an error if any of the analyzers are misconfigured.
+// Checks include:
+// that the name is a valid identifier;
+// that the Requires graph is acyclic;
+// that analyzer fact types are unique;
+// that each fact type is a pointer.
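+//
+// A driver might call it up front (a sketch; myAnalyzers is assumed):
+//
+//	if err := analysis.Validate(myAnalyzers); err != nil {
+//		log.Fatal(err)
+//	}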
+func Validate(analyzers []*Analyzer) error {
+ // Map each fact type to its sole generating analyzer.
+ factTypes := make(map[reflect.Type]*Analyzer)
+
+ // Traverse the Requires graph, depth first.
+ const (
+ white = iota
+ grey
+ black
+ finished
+ )
+ color := make(map[*Analyzer]uint8)
+ var visit func(a *Analyzer) error
+ visit = func(a *Analyzer) error {
+ if a == nil {
+ return fmt.Errorf("nil *Analyzer")
+ }
+ if color[a] == white {
+ color[a] = grey
+
+ // names
+ if !validIdent(a.Name) {
+ return fmt.Errorf("invalid analyzer name %q", a)
+ }
+
+ if a.Doc == "" {
+ return fmt.Errorf("analyzer %q is undocumented", a)
+ }
+
+ // fact types
+ for _, f := range a.FactTypes {
+ if f == nil {
+ return fmt.Errorf("analyzer %s has nil FactType", a)
+ }
+ t := reflect.TypeOf(f)
+ if prev := factTypes[t]; prev != nil {
+ return fmt.Errorf("fact type %s registered by two analyzers: %v, %v",
+ t, a, prev)
+ }
+ if t.Kind() != reflect.Ptr {
+ return fmt.Errorf("%s: fact type %s is not a pointer", a, t)
+ }
+ factTypes[t] = a
+ }
+
+ // recursion
+ for i, req := range a.Requires {
+ if err := visit(req); err != nil {
+ return fmt.Errorf("%s.Requires[%d]: %v", a.Name, i, err)
+ }
+ }
+ color[a] = black
+ }
+
+ return nil
+ }
+ for _, a := range analyzers {
+ if err := visit(a); err != nil {
+ return err
+ }
+ }
+
+ // Reject duplicates among analyzers.
+ // Precondition: color[a] == black.
+ // Postcondition: color[a] == finished.
+ for _, a := range analyzers {
+ if color[a] == finished {
+ return fmt.Errorf("duplicate analyzer: %s", a.Name)
+ }
+ color[a] = finished
+ }
+
+ return nil
+}
+
+func validIdent(name string) bool {
+ for i, r := range name {
+ if !(r == '_' || unicode.IsLetter(r) || i > 0 && unicode.IsDigit(r)) {
+ return false
+ }
+ }
+ return name != ""
+}
diff --git a/vendor/golang.org/x/tools/go/ast/astutil/enclosing.go b/vendor/golang.org/x/tools/go/ast/astutil/enclosing.go
new file mode 100644
index 0000000000000..6b7052b892ca0
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/ast/astutil/enclosing.go
@@ -0,0 +1,627 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package astutil
+
+// This file defines utilities for working with source positions.
+
+import (
+ "fmt"
+ "go/ast"
+ "go/token"
+ "sort"
+)
+
+// PathEnclosingInterval returns the node that encloses the source
+// interval [start, end), and all its ancestors up to the AST root.
+//
+// The definition of "enclosing" used by this function considers
+// additional whitespace abutting a node to be enclosed by it.
+// In this example:
+//
+// z := x + y // add them
+// <-A->
+// <----B----->
+//
+// the ast.BinaryExpr(+) node is considered to enclose interval B
+// even though its [Pos()..End()) is actually only interval A.
+// This behaviour makes user interfaces more tolerant of imperfect
+// input.
+//
+// This function treats tokens as nodes, though they are not included
+// in the result. e.g. PathEnclosingInterval("+") returns the
+// enclosing ast.BinaryExpr("x + y").
+//
+// If start==end, the 1-char interval following start is used instead.
+//
+// The 'exact' result is true if the interval contains only path[0]
+// and perhaps some adjacent whitespace. It is false if the interval
+// overlaps multiple children of path[0], or if it contains only
+// interior whitespace of path[0].
+// In this example:
+//
+// z := x + y // add them
+// <--C--> <---E-->
+// ^
+// D
+//
+// intervals C, D and E are inexact. C is contained by the
+// z-assignment statement, because it spans three of its children (:=,
+// x, +). So too is the 1-char interval D, because it contains only
+// interior whitespace of the assignment. E is considered interior
+// whitespace of the BlockStmt containing the assignment.
+//
+// Precondition: [start, end) both lie within the same file as root.
+// TODO(adonovan): return (nil, false) in this case and remove precond.
+// Requires FileSet; see loader.tokenFileContainsPos.
+//
+// Postcondition: path is never nil; it always contains at least 'root'.
+//
+func PathEnclosingInterval(root *ast.File, start, end token.Pos) (path []ast.Node, exact bool) {
+ // fmt.Printf("EnclosingInterval %d %d\n", start, end) // debugging
+
+ // Precondition: node.[Pos..End) and adjoining whitespace contain [start, end).
+ var visit func(node ast.Node) bool
+ visit = func(node ast.Node) bool {
+ path = append(path, node)
+
+ nodePos := node.Pos()
+ nodeEnd := node.End()
+
+ // fmt.Printf("visit(%T, %d, %d)\n", node, nodePos, nodeEnd) // debugging
+
+ // Intersect [start, end) with interval of node.
+ if start < nodePos {
+ start = nodePos
+ }
+ if end > nodeEnd {
+ end = nodeEnd
+ }
+
+ // Find sole child that contains [start, end).
+ children := childrenOf(node)
+ l := len(children)
+ for i, child := range children {
+ // [childPos, childEnd) is unaugmented interval of child.
+ childPos := child.Pos()
+ childEnd := child.End()
+
+ // [augPos, augEnd) is whitespace-augmented interval of child.
+ augPos := childPos
+ augEnd := childEnd
+ if i > 0 {
+ augPos = children[i-1].End() // start of preceding whitespace
+ }
+ if i < l-1 {
+ nextChildPos := children[i+1].Pos()
+ // Does [start, end) lie between child and next child?
+ if start >= augEnd && end <= nextChildPos {
+ return false // inexact match
+ }
+ augEnd = nextChildPos // end of following whitespace
+ }
+
+ // fmt.Printf("\tchild %d: [%d..%d)\tcontains interval [%d..%d)?\n",
+ // i, augPos, augEnd, start, end) // debugging
+
+ // Does augmented child strictly contain [start, end)?
+ if augPos <= start && end <= augEnd {
+ _, isToken := child.(tokenNode)
+ return isToken || visit(child)
+ }
+
+ // Does [start, end) overlap multiple children?
+ // i.e. left-augmented child contains start
+ // but LR-augmented child does not contain end.
+ if start < childEnd && end > augEnd {
+ break
+ }
+ }
+
+ // No single child contained [start, end),
+ // so node is the result. Is it exact?
+
+ // (It's tempting to put this condition before the
+ // child loop, but it gives the wrong result in the
+ // case where a node (e.g. ExprStmt) and its sole
+ // child have equal intervals.)
+ if start == nodePos && end == nodeEnd {
+ return true // exact match
+ }
+
+ return false // inexact: overlaps multiple children
+ }
+
+ if start > end {
+ start, end = end, start
+ }
+
+ if start < root.End() && end > root.Pos() {
+ if start == end {
+ end = start + 1 // empty interval => interval of size 1
+ }
+ exact = visit(root)
+
+ // Reverse the path:
+ for i, l := 0, len(path); i < l/2; i++ {
+ path[i], path[l-1-i] = path[l-1-i], path[i]
+ }
+ } else {
+ // Selection lies within whitespace preceding the
+ // first (or following the last) declaration in the file.
+ // The result nonetheless always includes the ast.File.
+ path = append(path, root)
+ }
+
+ return
+}
+
+// tokenNode is a dummy implementation of ast.Node for a single token.
+// They are used transiently by PathEnclosingInterval but never escape
+// this package.
+//
+type tokenNode struct {
+ pos token.Pos
+ end token.Pos
+}
+
+func (n tokenNode) Pos() token.Pos {
+ return n.pos
+}
+
+func (n tokenNode) End() token.Pos {
+ return n.end
+}
+
+func tok(pos token.Pos, len int) ast.Node {
+ return tokenNode{pos, pos + token.Pos(len)}
+}
+
+// childrenOf returns the direct non-nil children of ast.Node n.
+// It may include fake ast.Node implementations for bare tokens.
+// It is not safe to call (e.g.) ast.Walk on such nodes.
+//
+func childrenOf(n ast.Node) []ast.Node {
+ var children []ast.Node
+
+ // First add nodes for all true subtrees.
+ ast.Inspect(n, func(node ast.Node) bool {
+ if node == n { // push n
+ return true // recur
+ }
+ if node != nil { // push child
+ children = append(children, node)
+ }
+ return false // no recursion
+ })
+
+ // Then add fake Nodes for bare tokens.
+ switch n := n.(type) {
+ case *ast.ArrayType:
+ children = append(children,
+ tok(n.Lbrack, len("[")),
+ tok(n.Elt.End(), len("]")))
+
+ case *ast.AssignStmt:
+ children = append(children,
+ tok(n.TokPos, len(n.Tok.String())))
+
+ case *ast.BasicLit:
+ children = append(children,
+ tok(n.ValuePos, len(n.Value)))
+
+ case *ast.BinaryExpr:
+ children = append(children, tok(n.OpPos, len(n.Op.String())))
+
+ case *ast.BlockStmt:
+ children = append(children,
+ tok(n.Lbrace, len("{")),
+ tok(n.Rbrace, len("}")))
+
+ case *ast.BranchStmt:
+ children = append(children,
+ tok(n.TokPos, len(n.Tok.String())))
+
+ case *ast.CallExpr:
+ children = append(children,
+ tok(n.Lparen, len("(")),
+ tok(n.Rparen, len(")")))
+ if n.Ellipsis != 0 {
+ children = append(children, tok(n.Ellipsis, len("...")))
+ }
+
+ case *ast.CaseClause:
+ if n.List == nil {
+ children = append(children,
+ tok(n.Case, len("default")))
+ } else {
+ children = append(children,
+ tok(n.Case, len("case")))
+ }
+ children = append(children, tok(n.Colon, len(":")))
+
+ case *ast.ChanType:
+ switch n.Dir {
+ case ast.RECV:
+ children = append(children, tok(n.Begin, len("<-chan")))
+ case ast.SEND:
+ children = append(children, tok(n.Begin, len("chan<-")))
+ case ast.RECV | ast.SEND:
+ children = append(children, tok(n.Begin, len("chan")))
+ }
+
+ case *ast.CommClause:
+ if n.Comm == nil {
+ children = append(children,
+ tok(n.Case, len("default")))
+ } else {
+ children = append(children,
+ tok(n.Case, len("case")))
+ }
+ children = append(children, tok(n.Colon, len(":")))
+
+ case *ast.Comment:
+ // nop
+
+ case *ast.CommentGroup:
+ // nop
+
+ case *ast.CompositeLit:
+ children = append(children,
+ tok(n.Lbrace, len("{")),
+			tok(n.Rbrace, len("}")))
+
+ case *ast.DeclStmt:
+ // nop
+
+ case *ast.DeferStmt:
+ children = append(children,
+ tok(n.Defer, len("defer")))
+
+ case *ast.Ellipsis:
+ children = append(children,
+ tok(n.Ellipsis, len("...")))
+
+ case *ast.EmptyStmt:
+ // nop
+
+ case *ast.ExprStmt:
+ // nop
+
+ case *ast.Field:
+ // TODO(adonovan): Field.{Doc,Comment,Tag}?
+
+ case *ast.FieldList:
+ children = append(children,
+ tok(n.Opening, len("(")),
+ tok(n.Closing, len(")")))
+
+ case *ast.File:
+ // TODO test: Doc
+ children = append(children,
+ tok(n.Package, len("package")))
+
+ case *ast.ForStmt:
+ children = append(children,
+ tok(n.For, len("for")))
+
+ case *ast.FuncDecl:
+ // TODO(adonovan): FuncDecl.Comment?
+
+ // Uniquely, FuncDecl breaks the invariant that
+ // preorder traversal yields tokens in lexical order:
+ // in fact, FuncDecl.Recv precedes FuncDecl.Type.Func.
+ //
+ // As a workaround, we inline the case for FuncType
+ // here and order things correctly.
+ //
+ children = nil // discard ast.Walk(FuncDecl) info subtrees
+ children = append(children, tok(n.Type.Func, len("func")))
+ if n.Recv != nil {
+ children = append(children, n.Recv)
+ }
+ children = append(children, n.Name)
+ if n.Type.Params != nil {
+ children = append(children, n.Type.Params)
+ }
+ if n.Type.Results != nil {
+ children = append(children, n.Type.Results)
+ }
+ if n.Body != nil {
+ children = append(children, n.Body)
+ }
+
+ case *ast.FuncLit:
+ // nop
+
+ case *ast.FuncType:
+ if n.Func != 0 {
+ children = append(children,
+ tok(n.Func, len("func")))
+ }
+
+ case *ast.GenDecl:
+ children = append(children,
+ tok(n.TokPos, len(n.Tok.String())))
+ if n.Lparen != 0 {
+ children = append(children,
+ tok(n.Lparen, len("(")),
+ tok(n.Rparen, len(")")))
+ }
+
+ case *ast.GoStmt:
+ children = append(children,
+ tok(n.Go, len("go")))
+
+ case *ast.Ident:
+ children = append(children,
+ tok(n.NamePos, len(n.Name)))
+
+ case *ast.IfStmt:
+ children = append(children,
+ tok(n.If, len("if")))
+
+ case *ast.ImportSpec:
+ // TODO(adonovan): ImportSpec.{Doc,EndPos}?
+
+ case *ast.IncDecStmt:
+ children = append(children,
+ tok(n.TokPos, len(n.Tok.String())))
+
+ case *ast.IndexExpr:
+ children = append(children,
+			tok(n.Lbrack, len("[")),
+			tok(n.Rbrack, len("]")))
+
+ case *ast.InterfaceType:
+ children = append(children,
+ tok(n.Interface, len("interface")))
+
+ case *ast.KeyValueExpr:
+ children = append(children,
+ tok(n.Colon, len(":")))
+
+ case *ast.LabeledStmt:
+ children = append(children,
+ tok(n.Colon, len(":")))
+
+ case *ast.MapType:
+ children = append(children,
+ tok(n.Map, len("map")))
+
+ case *ast.ParenExpr:
+ children = append(children,
+ tok(n.Lparen, len("(")),
+ tok(n.Rparen, len(")")))
+
+ case *ast.RangeStmt:
+ children = append(children,
+ tok(n.For, len("for")),
+ tok(n.TokPos, len(n.Tok.String())))
+
+ case *ast.ReturnStmt:
+ children = append(children,
+ tok(n.Return, len("return")))
+
+ case *ast.SelectStmt:
+ children = append(children,
+ tok(n.Select, len("select")))
+
+ case *ast.SelectorExpr:
+ // nop
+
+ case *ast.SendStmt:
+ children = append(children,
+ tok(n.Arrow, len("<-")))
+
+ case *ast.SliceExpr:
+ children = append(children,
+ tok(n.Lbrack, len("[")),
+ tok(n.Rbrack, len("]")))
+
+ case *ast.StarExpr:
+ children = append(children, tok(n.Star, len("*")))
+
+ case *ast.StructType:
+ children = append(children, tok(n.Struct, len("struct")))
+
+ case *ast.SwitchStmt:
+ children = append(children, tok(n.Switch, len("switch")))
+
+ case *ast.TypeAssertExpr:
+ children = append(children,
+ tok(n.Lparen-1, len(".")),
+ tok(n.Lparen, len("(")),
+ tok(n.Rparen, len(")")))
+
+ case *ast.TypeSpec:
+ // TODO(adonovan): TypeSpec.{Doc,Comment}?
+
+ case *ast.TypeSwitchStmt:
+ children = append(children, tok(n.Switch, len("switch")))
+
+ case *ast.UnaryExpr:
+ children = append(children, tok(n.OpPos, len(n.Op.String())))
+
+ case *ast.ValueSpec:
+ // TODO(adonovan): ValueSpec.{Doc,Comment}?
+
+ case *ast.BadDecl, *ast.BadExpr, *ast.BadStmt:
+ // nop
+ }
+
+ // TODO(adonovan): opt: merge the logic of ast.Inspect() into
+ // the switch above so we can make interleaved callbacks for
+ // both Nodes and Tokens in the right order and avoid the need
+ // to sort.
+ sort.Sort(byPos(children))
+
+ return children
+}
+
+type byPos []ast.Node
+
+func (sl byPos) Len() int {
+ return len(sl)
+}
+func (sl byPos) Less(i, j int) bool {
+ return sl[i].Pos() < sl[j].Pos()
+}
+func (sl byPos) Swap(i, j int) {
+ sl[i], sl[j] = sl[j], sl[i]
+}
+
+// NodeDescription returns a description of the concrete type of n suitable
+// for a user interface.
+//
+// TODO(adonovan): in some cases (e.g. Field, FieldList, Ident,
+// StarExpr) we could be much more specific given the path to the AST
+// root. Perhaps we should do that.
+//
+func NodeDescription(n ast.Node) string {
+ switch n := n.(type) {
+ case *ast.ArrayType:
+ return "array type"
+ case *ast.AssignStmt:
+ return "assignment"
+ case *ast.BadDecl:
+ return "bad declaration"
+ case *ast.BadExpr:
+ return "bad expression"
+ case *ast.BadStmt:
+ return "bad statement"
+ case *ast.BasicLit:
+ return "basic literal"
+ case *ast.BinaryExpr:
+ return fmt.Sprintf("binary %s operation", n.Op)
+ case *ast.BlockStmt:
+ return "block"
+ case *ast.BranchStmt:
+ switch n.Tok {
+ case token.BREAK:
+ return "break statement"
+ case token.CONTINUE:
+ return "continue statement"
+ case token.GOTO:
+ return "goto statement"
+ case token.FALLTHROUGH:
+ return "fall-through statement"
+ }
+ case *ast.CallExpr:
+ if len(n.Args) == 1 && !n.Ellipsis.IsValid() {
+ return "function call (or conversion)"
+ }
+ return "function call"
+ case *ast.CaseClause:
+ return "case clause"
+ case *ast.ChanType:
+ return "channel type"
+ case *ast.CommClause:
+ return "communication clause"
+ case *ast.Comment:
+ return "comment"
+ case *ast.CommentGroup:
+ return "comment group"
+ case *ast.CompositeLit:
+ return "composite literal"
+ case *ast.DeclStmt:
+ return NodeDescription(n.Decl) + " statement"
+ case *ast.DeferStmt:
+ return "defer statement"
+ case *ast.Ellipsis:
+ return "ellipsis"
+ case *ast.EmptyStmt:
+ return "empty statement"
+ case *ast.ExprStmt:
+ return "expression statement"
+ case *ast.Field:
+ // Can be any of these:
+ // struct {x, y int} -- struct field(s)
+ // struct {T} -- anon struct field
+ // interface {I} -- interface embedding
+ // interface {f()} -- interface method
+ // func (A) func(B) C -- receiver, param(s), result(s)
+ return "field/method/parameter"
+ case *ast.FieldList:
+ return "field/method/parameter list"
+ case *ast.File:
+ return "source file"
+ case *ast.ForStmt:
+ return "for loop"
+ case *ast.FuncDecl:
+ return "function declaration"
+ case *ast.FuncLit:
+ return "function literal"
+ case *ast.FuncType:
+ return "function type"
+ case *ast.GenDecl:
+ switch n.Tok {
+ case token.IMPORT:
+ return "import declaration"
+ case token.CONST:
+ return "constant declaration"
+ case token.TYPE:
+ return "type declaration"
+ case token.VAR:
+ return "variable declaration"
+ }
+ case *ast.GoStmt:
+ return "go statement"
+ case *ast.Ident:
+ return "identifier"
+ case *ast.IfStmt:
+ return "if statement"
+ case *ast.ImportSpec:
+ return "import specification"
+ case *ast.IncDecStmt:
+ if n.Tok == token.INC {
+ return "increment statement"
+ }
+ return "decrement statement"
+ case *ast.IndexExpr:
+ return "index expression"
+ case *ast.InterfaceType:
+ return "interface type"
+ case *ast.KeyValueExpr:
+ return "key/value association"
+ case *ast.LabeledStmt:
+ return "statement label"
+ case *ast.MapType:
+ return "map type"
+ case *ast.Package:
+ return "package"
+ case *ast.ParenExpr:
+ return "parenthesized " + NodeDescription(n.X)
+ case *ast.RangeStmt:
+ return "range loop"
+ case *ast.ReturnStmt:
+ return "return statement"
+ case *ast.SelectStmt:
+ return "select statement"
+ case *ast.SelectorExpr:
+ return "selector"
+ case *ast.SendStmt:
+ return "channel send"
+ case *ast.SliceExpr:
+ return "slice expression"
+ case *ast.StarExpr:
+ return "*-operation" // load/store expr or pointer type
+ case *ast.StructType:
+ return "struct type"
+ case *ast.SwitchStmt:
+ return "switch statement"
+ case *ast.TypeAssertExpr:
+ return "type assertion"
+ case *ast.TypeSpec:
+ return "type specification"
+ case *ast.TypeSwitchStmt:
+ return "type switch"
+ case *ast.UnaryExpr:
+ return fmt.Sprintf("unary %s operation", n.Op)
+ case *ast.ValueSpec:
+ return "value specification"
+
+ }
+ panic(fmt.Sprintf("unexpected node type: %T", n))
+}
diff --git a/vendor/golang.org/x/tools/go/ast/astutil/imports.go b/vendor/golang.org/x/tools/go/ast/astutil/imports.go
new file mode 100644
index 0000000000000..3e4b195368b35
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/ast/astutil/imports.go
@@ -0,0 +1,481 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package astutil contains common utilities for working with the Go AST.
+package astutil // import "golang.org/x/tools/go/ast/astutil"
+
+import (
+ "fmt"
+ "go/ast"
+ "go/token"
+ "strconv"
+ "strings"
+)
+
+// AddImport adds the import path to the file f, if absent.
+func AddImport(fset *token.FileSet, f *ast.File, path string) (added bool) {
+ return AddNamedImport(fset, f, "", path)
+}
+
+// AddNamedImport adds the import with the given name and path to the file f, if absent.
+// If name is not empty, it is used to rename the import.
+//
+// For example, calling
+// AddNamedImport(fset, f, "pathpkg", "path")
+// adds
+// import pathpkg "path"
+func AddNamedImport(fset *token.FileSet, f *ast.File, name, path string) (added bool) {
+ if imports(f, name, path) {
+ return false
+ }
+
+ newImport := &ast.ImportSpec{
+ Path: &ast.BasicLit{
+ Kind: token.STRING,
+ Value: strconv.Quote(path),
+ },
+ }
+ if name != "" {
+ newImport.Name = &ast.Ident{Name: name}
+ }
+
+ // Find an import decl to add to.
+ // The goal is to find an existing import
+ // whose import path has the longest shared
+ // prefix with path.
+ var (
+ bestMatch = -1 // length of longest shared prefix
+ lastImport = -1 // index in f.Decls of the file's final import decl
+ impDecl *ast.GenDecl // import decl containing the best match
+ impIndex = -1 // spec index in impDecl containing the best match
+
+ isThirdPartyPath = isThirdParty(path)
+ )
+ for i, decl := range f.Decls {
+ gen, ok := decl.(*ast.GenDecl)
+ if ok && gen.Tok == token.IMPORT {
+ lastImport = i
+ // Do not add to import "C", to avoid disrupting the
+ // association with its doc comment, breaking cgo.
+ if declImports(gen, "C") {
+ continue
+ }
+
+ // Match an empty import decl if that's all that is available.
+ if len(gen.Specs) == 0 && bestMatch == -1 {
+ impDecl = gen
+ }
+
+ // Compute longest shared prefix with imports in this group and find best
+ // matched import spec.
+ // 1. Always prefer import spec with longest shared prefix.
+ // 2. While match length is 0,
+ // - for stdlib package: prefer first import spec.
+ // - for third party package: prefer first third party import spec.
+ // We cannot use last import spec as best match for third party package
+ // because grouped imports are usually placed last by goimports -local
+ // flag.
+ // See issue #19190.
+ seenAnyThirdParty := false
+ for j, spec := range gen.Specs {
+ impspec := spec.(*ast.ImportSpec)
+ p := importPath(impspec)
+ n := matchLen(p, path)
+ if n > bestMatch || (bestMatch == 0 && !seenAnyThirdParty && isThirdPartyPath) {
+ bestMatch = n
+ impDecl = gen
+ impIndex = j
+ }
+ seenAnyThirdParty = seenAnyThirdParty || isThirdParty(p)
+ }
+ }
+ }
+
+ // If no import decl found, add one after the last import.
+ if impDecl == nil {
+ impDecl = &ast.GenDecl{
+ Tok: token.IMPORT,
+ }
+ if lastImport >= 0 {
+ impDecl.TokPos = f.Decls[lastImport].End()
+ } else {
+ // There are no existing imports.
+ // Our new import, preceded by a blank line, goes after the package declaration
+ // and after the comment, if any, that starts on the same line as the
+ // package declaration.
+ impDecl.TokPos = f.Package
+
+ file := fset.File(f.Package)
+ pkgLine := file.Line(f.Package)
+ for _, c := range f.Comments {
+ if file.Line(c.Pos()) > pkgLine {
+ break
+ }
+ // +2 for a blank line
+ impDecl.TokPos = c.End() + 2
+ }
+ }
+ f.Decls = append(f.Decls, nil)
+ copy(f.Decls[lastImport+2:], f.Decls[lastImport+1:])
+ f.Decls[lastImport+1] = impDecl
+ }
+
+ // Insert new import at insertAt.
+ insertAt := 0
+ if impIndex >= 0 {
+ // insert after the found import
+ insertAt = impIndex + 1
+ }
+ impDecl.Specs = append(impDecl.Specs, nil)
+ copy(impDecl.Specs[insertAt+1:], impDecl.Specs[insertAt:])
+ impDecl.Specs[insertAt] = newImport
+ pos := impDecl.Pos()
+ if insertAt > 0 {
+ // If there is a comment after an existing import, preserve the comment
+ // position by adding the new import after the comment.
+ if spec, ok := impDecl.Specs[insertAt-1].(*ast.ImportSpec); ok && spec.Comment != nil {
+ pos = spec.Comment.End()
+ } else {
+ // Assign same position as the previous import,
+ // so that the sorter sees it as being in the same block.
+ pos = impDecl.Specs[insertAt-1].Pos()
+ }
+ }
+ if newImport.Name != nil {
+ newImport.Name.NamePos = pos
+ }
+ newImport.Path.ValuePos = pos
+ newImport.EndPos = pos
+
+ // Clean up parens. impDecl contains at least one spec.
+ if len(impDecl.Specs) == 1 {
+ // Remove unneeded parens.
+ impDecl.Lparen = token.NoPos
+ } else if !impDecl.Lparen.IsValid() {
+ // impDecl needs parens added.
+ impDecl.Lparen = impDecl.Specs[0].Pos()
+ }
+
+ f.Imports = append(f.Imports, newImport)
+
+ if len(f.Decls) <= 1 {
+ return true
+ }
+
+ // Merge all the import declarations into the first one.
+ var first *ast.GenDecl
+ for i := 0; i < len(f.Decls); i++ {
+ decl := f.Decls[i]
+ gen, ok := decl.(*ast.GenDecl)
+ if !ok || gen.Tok != token.IMPORT || declImports(gen, "C") {
+ continue
+ }
+ if first == nil {
+ first = gen
+ continue // Don't touch the first one.
+ }
+ // We now know there is more than one package in this import
+ // declaration. Ensure that it ends up parenthesized.
+ first.Lparen = first.Pos()
+ // Move the imports of the other import declaration to the first one.
+ for _, spec := range gen.Specs {
+ spec.(*ast.ImportSpec).Path.ValuePos = first.Pos()
+ first.Specs = append(first.Specs, spec)
+ }
+ f.Decls = append(f.Decls[:i], f.Decls[i+1:]...)
+ i--
+ }
+
+ return true
+}
+
+func isThirdParty(importPath string) bool {
+ // Third party package import path usually contains "." (".com", ".org", ...)
+ // This logic is taken from golang.org/x/tools/imports package.
+ return strings.Contains(importPath, ".")
+}
+
+// DeleteImport deletes the import path from the file f, if present.
+// If there are duplicate import declarations, all matching ones are deleted.
+func DeleteImport(fset *token.FileSet, f *ast.File, path string) (deleted bool) {
+ return DeleteNamedImport(fset, f, "", path)
+}
+
+// DeleteNamedImport deletes the import with the given name and path from the file f, if present.
+// If there are duplicate import declarations, all matching ones are deleted.
+func DeleteNamedImport(fset *token.FileSet, f *ast.File, name, path string) (deleted bool) {
+ var delspecs []*ast.ImportSpec
+ var delcomments []*ast.CommentGroup
+
+ // Find the import nodes that import path, if any.
+ for i := 0; i < len(f.Decls); i++ {
+ decl := f.Decls[i]
+ gen, ok := decl.(*ast.GenDecl)
+ if !ok || gen.Tok != token.IMPORT {
+ continue
+ }
+ for j := 0; j < len(gen.Specs); j++ {
+ spec := gen.Specs[j]
+ impspec := spec.(*ast.ImportSpec)
+ if importName(impspec) != name || importPath(impspec) != path {
+ continue
+ }
+
+ // We found an import spec that imports path.
+ // Delete it.
+ delspecs = append(delspecs, impspec)
+ deleted = true
+ copy(gen.Specs[j:], gen.Specs[j+1:])
+ gen.Specs = gen.Specs[:len(gen.Specs)-1]
+
+ // If this was the last import spec in this decl,
+ // delete the decl, too.
+ if len(gen.Specs) == 0 {
+ copy(f.Decls[i:], f.Decls[i+1:])
+ f.Decls = f.Decls[:len(f.Decls)-1]
+ i--
+ break
+ } else if len(gen.Specs) == 1 {
+ if impspec.Doc != nil {
+ delcomments = append(delcomments, impspec.Doc)
+ }
+ if impspec.Comment != nil {
+ delcomments = append(delcomments, impspec.Comment)
+ }
+ for _, cg := range f.Comments {
+ // Found comment on the same line as the import spec.
+ if cg.End() < impspec.Pos() && fset.Position(cg.End()).Line == fset.Position(impspec.Pos()).Line {
+ delcomments = append(delcomments, cg)
+ break
+ }
+ }
+
+ spec := gen.Specs[0].(*ast.ImportSpec)
+
+ // Move the documentation right after the import decl.
+ if spec.Doc != nil {
+ for fset.Position(gen.TokPos).Line+1 < fset.Position(spec.Doc.Pos()).Line {
+ fset.File(gen.TokPos).MergeLine(fset.Position(gen.TokPos).Line)
+ }
+ }
+ for _, cg := range f.Comments {
+ if cg.End() < spec.Pos() && fset.Position(cg.End()).Line == fset.Position(spec.Pos()).Line {
+ for fset.Position(gen.TokPos).Line+1 < fset.Position(spec.Pos()).Line {
+ fset.File(gen.TokPos).MergeLine(fset.Position(gen.TokPos).Line)
+ }
+ break
+ }
+ }
+ }
+ if j > 0 {
+ lastImpspec := gen.Specs[j-1].(*ast.ImportSpec)
+ lastLine := fset.Position(lastImpspec.Path.ValuePos).Line
+ line := fset.Position(impspec.Path.ValuePos).Line
+
+ // We deleted an entry but now there may be
+ // a blank line-sized hole where the import was.
+ if line-lastLine > 1 {
+ // There was a blank line immediately preceding the deleted import,
+ // so there's no need to close the hole.
+ // Do nothing.
+ } else if line != fset.File(gen.Rparen).LineCount() {
+ // There was no blank line. Close the hole.
+ fset.File(gen.Rparen).MergeLine(line)
+ }
+ }
+ j--
+ }
+ }
+
+ // Delete imports from f.Imports.
+ for i := 0; i < len(f.Imports); i++ {
+ imp := f.Imports[i]
+ for j, del := range delspecs {
+ if imp == del {
+ copy(f.Imports[i:], f.Imports[i+1:])
+ f.Imports = f.Imports[:len(f.Imports)-1]
+ copy(delspecs[j:], delspecs[j+1:])
+ delspecs = delspecs[:len(delspecs)-1]
+ i--
+ break
+ }
+ }
+ }
+
+ // Delete comments from f.Comments.
+ for i := 0; i < len(f.Comments); i++ {
+ cg := f.Comments[i]
+ for j, del := range delcomments {
+ if cg == del {
+ copy(f.Comments[i:], f.Comments[i+1:])
+ f.Comments = f.Comments[:len(f.Comments)-1]
+ copy(delcomments[j:], delcomments[j+1:])
+ delcomments = delcomments[:len(delcomments)-1]
+ i--
+ break
+ }
+ }
+ }
+
+ if len(delspecs) > 0 {
+ panic(fmt.Sprintf("deleted specs from Decls but not Imports: %v", delspecs))
+ }
+
+ return
+}
+
+// RewriteImport rewrites any import of path oldPath to path newPath.
+func RewriteImport(fset *token.FileSet, f *ast.File, oldPath, newPath string) (rewrote bool) {
+ for _, imp := range f.Imports {
+ if importPath(imp) == oldPath {
+ rewrote = true
+ // record old End, because the default is to compute
+ // it using the length of imp.Path.Value.
+ imp.EndPos = imp.End()
+ imp.Path.Value = strconv.Quote(newPath)
+ }
+ }
+ return
+}
+
+// UsesImport reports whether a given import is used.
+func UsesImport(f *ast.File, path string) (used bool) {
+ spec := importSpec(f, path)
+ if spec == nil {
+ return
+ }
+
+ name := spec.Name.String()
+ switch name {
+ case "<nil>":
+ // If the package name is not explicitly specified,
+ // make an educated guess. This is not guaranteed to be correct.
+ lastSlash := strings.LastIndex(path, "/")
+ if lastSlash == -1 {
+ name = path
+ } else {
+ name = path[lastSlash+1:]
+ }
+ case "_", ".":
+ // Not sure if this import is used - err on the side of caution.
+ return true
+ }
+
+ ast.Walk(visitFn(func(n ast.Node) {
+ sel, ok := n.(*ast.SelectorExpr)
+ if ok && isTopName(sel.X, name) {
+ used = true
+ }
+ }), f)
+
+ return
+}
+
+type visitFn func(node ast.Node)
+
+func (fn visitFn) Visit(node ast.Node) ast.Visitor {
+ fn(node)
+ return fn
+}
+
+// imports reports whether f has an import with the specified name and path.
+func imports(f *ast.File, name, path string) bool {
+ for _, s := range f.Imports {
+ if importName(s) == name && importPath(s) == path {
+ return true
+ }
+ }
+ return false
+}
+
+// importSpec returns the import spec if f imports path,
+// or nil otherwise.
+func importSpec(f *ast.File, path string) *ast.ImportSpec {
+ for _, s := range f.Imports {
+ if importPath(s) == path {
+ return s
+ }
+ }
+ return nil
+}
+
+// importName returns the name of s,
+// or "" if the import is not named.
+func importName(s *ast.ImportSpec) string {
+ if s.Name == nil {
+ return ""
+ }
+ return s.Name.Name
+}
+
+// importPath returns the unquoted import path of s,
+// or "" if the path is not properly quoted.
+func importPath(s *ast.ImportSpec) string {
+ t, err := strconv.Unquote(s.Path.Value)
+ if err != nil {
+ return ""
+ }
+ return t
+}
+
+// declImports reports whether gen contains an import of path.
+func declImports(gen *ast.GenDecl, path string) bool {
+ if gen.Tok != token.IMPORT {
+ return false
+ }
+ for _, spec := range gen.Specs {
+ impspec := spec.(*ast.ImportSpec)
+ if importPath(impspec) == path {
+ return true
+ }
+ }
+ return false
+}
+
+// matchLen returns the length of the longest path segment prefix shared by x and y.
+func matchLen(x, y string) int {
+ n := 0
+ for i := 0; i < len(x) && i < len(y) && x[i] == y[i]; i++ {
+ if x[i] == '/' {
+ n++
+ }
+ }
+ return n
+}
+
+// isTopName returns true if n is a top-level unresolved identifier with the given name.
+func isTopName(n ast.Expr, name string) bool {
+ id, ok := n.(*ast.Ident)
+ return ok && id.Name == name && id.Obj == nil
+}
+
+// Imports returns the file imports grouped by paragraph.
+func Imports(fset *token.FileSet, f *ast.File) [][]*ast.ImportSpec {
+ var groups [][]*ast.ImportSpec
+
+ for _, decl := range f.Decls {
+ genDecl, ok := decl.(*ast.GenDecl)
+ if !ok || genDecl.Tok != token.IMPORT {
+ break
+ }
+
+ group := []*ast.ImportSpec{}
+
+ var lastLine int
+ for _, spec := range genDecl.Specs {
+ importSpec := spec.(*ast.ImportSpec)
+ pos := importSpec.Path.ValuePos
+ line := fset.Position(pos).Line
+ if lastLine > 0 && pos > 0 && line-lastLine > 1 {
+ groups = append(groups, group)
+ group = []*ast.ImportSpec{}
+ }
+ group = append(group, importSpec)
+ lastLine = line
+ }
+ groups = append(groups, group)
+ }
+
+ return groups
+}
diff --git a/vendor/golang.org/x/tools/go/ast/astutil/rewrite.go b/vendor/golang.org/x/tools/go/ast/astutil/rewrite.go
new file mode 100644
index 0000000000000..cf72ea990bda2
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/ast/astutil/rewrite.go
@@ -0,0 +1,477 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package astutil
+
+import (
+ "fmt"
+ "go/ast"
+ "reflect"
+ "sort"
+)
+
+// An ApplyFunc is invoked by Apply for each node n, even if n is nil,
+// before and/or after the node's children, using a Cursor describing
+// the current node and providing operations on it.
+//
+// The return value of ApplyFunc controls the syntax tree traversal.
+// See Apply for details.
+type ApplyFunc func(*Cursor) bool
+
+// Apply traverses a syntax tree recursively, starting with root,
+// and calling pre and post for each node as described below.
+// Apply returns the syntax tree, possibly modified.
+//
+// If pre is not nil, it is called for each node before the node's
+// children are traversed (pre-order). If pre returns false, no
+// children are traversed, and post is not called for that node.
+//
+// If post is not nil, and a prior call of pre didn't return false,
+// post is called for each node after its children are traversed
+// (post-order). If post returns false, traversal is terminated and
+// Apply returns immediately.
+//
+// Only fields that refer to AST nodes are considered children;
+// i.e., token.Pos, Scopes, Objects, and fields of basic types
+// (strings, etc.) are ignored.
+//
+// Children are traversed in the order in which they appear in the
+// respective node's struct definition. A package's files are
+// traversed in the filenames' alphabetical order.
+//
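+// As a caller-side sketch (file is an assumed *ast.File variable), a
+// rewrite that parenthesizes every binary expression might look like:
+//
+//	file = astutil.Apply(file, nil, func(c *astutil.Cursor) bool {
+//		if be, ok := c.Node().(*ast.BinaryExpr); ok {
+//			c.Replace(&ast.ParenExpr{X: be})
+//		}
+//		return true
+//	}).(*ast.File)
+//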
+func Apply(root ast.Node, pre, post ApplyFunc) (result ast.Node) {
+ parent := &struct{ ast.Node }{root}
+ defer func() {
+ if r := recover(); r != nil && r != abort {
+ panic(r)
+ }
+ result = parent.Node
+ }()
+ a := &application{pre: pre, post: post}
+ a.apply(parent, "Node", nil, root)
+ return
+}
+
+var abort = new(int) // singleton, to signal termination of Apply
+
+// A Cursor describes a node encountered during Apply.
+// Information about the node and its parent is available
+// from the Node, Parent, Name, and Index methods.
+//
+// If p is a variable of type and value of the current parent node
+// c.Parent(), and f is the field identifier with name c.Name(),
+// the following invariants hold:
+//
+// p.f == c.Node() if c.Index() < 0
+// p.f[c.Index()] == c.Node() if c.Index() >= 0
+//
+// The methods Replace, Delete, InsertBefore, and InsertAfter
+// can be used to change the AST without disrupting Apply.
+type Cursor struct {
+ parent ast.Node
+ name string
+ iter *iterator // valid if non-nil
+ node ast.Node
+}
+
+// Node returns the current Node.
+func (c *Cursor) Node() ast.Node { return c.node }
+
+// Parent returns the parent of the current Node.
+func (c *Cursor) Parent() ast.Node { return c.parent }
+
+// Name returns the name of the parent Node field that contains the current Node.
+// If the parent is a *ast.Package and the current Node is a *ast.File, Name returns
+// the filename for the current Node.
+func (c *Cursor) Name() string { return c.name }
+
+// Index reports the index >= 0 of the current Node in the slice of Nodes that
+// contains it, or a value < 0 if the current Node is not part of a slice.
+// The index of the current node changes if InsertBefore is called while
+// processing the current node.
+func (c *Cursor) Index() int {
+ if c.iter != nil {
+ return c.iter.index
+ }
+ return -1
+}
+
+// field returns the current node's parent field value.
+func (c *Cursor) field() reflect.Value {
+ return reflect.Indirect(reflect.ValueOf(c.parent)).FieldByName(c.name)
+}
+
+// Replace replaces the current Node with n.
+// The replacement node is not walked by Apply.
+func (c *Cursor) Replace(n ast.Node) {
+ if _, ok := c.node.(*ast.File); ok {
+ file, ok := n.(*ast.File)
+ if !ok {
+ panic("attempt to replace *ast.File with non-*ast.File")
+ }
+ c.parent.(*ast.Package).Files[c.name] = file
+ return
+ }
+
+ v := c.field()
+ if i := c.Index(); i >= 0 {
+ v = v.Index(i)
+ }
+ v.Set(reflect.ValueOf(n))
+}
+
+// Delete deletes the current Node from its containing slice.
+// If the current Node is not part of a slice, Delete panics.
+// As a special case, if the current node is a package file,
+// Delete removes it from the package's Files map.
+func (c *Cursor) Delete() {
+ if _, ok := c.node.(*ast.File); ok {
+ delete(c.parent.(*ast.Package).Files, c.name)
+ return
+ }
+
+ i := c.Index()
+ if i < 0 {
+ panic("Delete node not contained in slice")
+ }
+ v := c.field()
+ l := v.Len()
+ reflect.Copy(v.Slice(i, l), v.Slice(i+1, l))
+ v.Index(l - 1).Set(reflect.Zero(v.Type().Elem()))
+ v.SetLen(l - 1)
+ c.iter.step--
+}
+
+// InsertAfter inserts n after the current Node in its containing slice.
+// If the current Node is not part of a slice, InsertAfter panics.
+// Apply does not walk n.
+func (c *Cursor) InsertAfter(n ast.Node) {
+ i := c.Index()
+ if i < 0 {
+ panic("InsertAfter node not contained in slice")
+ }
+ v := c.field()
+ v.Set(reflect.Append(v, reflect.Zero(v.Type().Elem())))
+ l := v.Len()
+ reflect.Copy(v.Slice(i+2, l), v.Slice(i+1, l))
+ v.Index(i + 1).Set(reflect.ValueOf(n))
+ c.iter.step++
+}
+
+// InsertBefore inserts n before the current Node in its containing slice.
+// If the current Node is not part of a slice, InsertBefore panics.
+// Apply will not walk n.
+func (c *Cursor) InsertBefore(n ast.Node) {
+ i := c.Index()
+ if i < 0 {
+ panic("InsertBefore node not contained in slice")
+ }
+ v := c.field()
+ v.Set(reflect.Append(v, reflect.Zero(v.Type().Elem())))
+ l := v.Len()
+ reflect.Copy(v.Slice(i+1, l), v.Slice(i, l))
+ v.Index(i).Set(reflect.ValueOf(n))
+ c.iter.index++
+}
+
+// application carries all the shared data so we can pass it around cheaply.
+type application struct {
+ pre, post ApplyFunc
+ cursor Cursor
+ iter iterator
+}
+
+func (a *application) apply(parent ast.Node, name string, iter *iterator, n ast.Node) {
+ // convert typed nil into untyped nil
+ if v := reflect.ValueOf(n); v.Kind() == reflect.Ptr && v.IsNil() {
+ n = nil
+ }
+
+ // avoid heap-allocating a new cursor for each apply call; reuse a.cursor instead
+ saved := a.cursor
+ a.cursor.parent = parent
+ a.cursor.name = name
+ a.cursor.iter = iter
+ a.cursor.node = n
+
+ if a.pre != nil && !a.pre(&a.cursor) {
+ a.cursor = saved
+ return
+ }
+
+ // walk children
+ // (the order of the cases matches the order of the corresponding node types in go/ast)
+ switch n := n.(type) {
+ case nil:
+ // nothing to do
+
+ // Comments and fields
+ case *ast.Comment:
+ // nothing to do
+
+ case *ast.CommentGroup:
+ if n != nil {
+ a.applyList(n, "List")
+ }
+
+ case *ast.Field:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.applyList(n, "Names")
+ a.apply(n, "Type", nil, n.Type)
+ a.apply(n, "Tag", nil, n.Tag)
+ a.apply(n, "Comment", nil, n.Comment)
+
+ case *ast.FieldList:
+ a.applyList(n, "List")
+
+ // Expressions
+ case *ast.BadExpr, *ast.Ident, *ast.BasicLit:
+ // nothing to do
+
+ case *ast.Ellipsis:
+ a.apply(n, "Elt", nil, n.Elt)
+
+ case *ast.FuncLit:
+ a.apply(n, "Type", nil, n.Type)
+ a.apply(n, "Body", nil, n.Body)
+
+ case *ast.CompositeLit:
+ a.apply(n, "Type", nil, n.Type)
+ a.applyList(n, "Elts")
+
+ case *ast.ParenExpr:
+ a.apply(n, "X", nil, n.X)
+
+ case *ast.SelectorExpr:
+ a.apply(n, "X", nil, n.X)
+ a.apply(n, "Sel", nil, n.Sel)
+
+ case *ast.IndexExpr:
+ a.apply(n, "X", nil, n.X)
+ a.apply(n, "Index", nil, n.Index)
+
+ case *ast.SliceExpr:
+ a.apply(n, "X", nil, n.X)
+ a.apply(n, "Low", nil, n.Low)
+ a.apply(n, "High", nil, n.High)
+ a.apply(n, "Max", nil, n.Max)
+
+ case *ast.TypeAssertExpr:
+ a.apply(n, "X", nil, n.X)
+ a.apply(n, "Type", nil, n.Type)
+
+ case *ast.CallExpr:
+ a.apply(n, "Fun", nil, n.Fun)
+ a.applyList(n, "Args")
+
+ case *ast.StarExpr:
+ a.apply(n, "X", nil, n.X)
+
+ case *ast.UnaryExpr:
+ a.apply(n, "X", nil, n.X)
+
+ case *ast.BinaryExpr:
+ a.apply(n, "X", nil, n.X)
+ a.apply(n, "Y", nil, n.Y)
+
+ case *ast.KeyValueExpr:
+ a.apply(n, "Key", nil, n.Key)
+ a.apply(n, "Value", nil, n.Value)
+
+ // Types
+ case *ast.ArrayType:
+ a.apply(n, "Len", nil, n.Len)
+ a.apply(n, "Elt", nil, n.Elt)
+
+ case *ast.StructType:
+ a.apply(n, "Fields", nil, n.Fields)
+
+ case *ast.FuncType:
+ a.apply(n, "Params", nil, n.Params)
+ a.apply(n, "Results", nil, n.Results)
+
+ case *ast.InterfaceType:
+ a.apply(n, "Methods", nil, n.Methods)
+
+ case *ast.MapType:
+ a.apply(n, "Key", nil, n.Key)
+ a.apply(n, "Value", nil, n.Value)
+
+ case *ast.ChanType:
+ a.apply(n, "Value", nil, n.Value)
+
+ // Statements
+ case *ast.BadStmt:
+ // nothing to do
+
+ case *ast.DeclStmt:
+ a.apply(n, "Decl", nil, n.Decl)
+
+ case *ast.EmptyStmt:
+ // nothing to do
+
+ case *ast.LabeledStmt:
+ a.apply(n, "Label", nil, n.Label)
+ a.apply(n, "Stmt", nil, n.Stmt)
+
+ case *ast.ExprStmt:
+ a.apply(n, "X", nil, n.X)
+
+ case *ast.SendStmt:
+ a.apply(n, "Chan", nil, n.Chan)
+ a.apply(n, "Value", nil, n.Value)
+
+ case *ast.IncDecStmt:
+ a.apply(n, "X", nil, n.X)
+
+ case *ast.AssignStmt:
+ a.applyList(n, "Lhs")
+ a.applyList(n, "Rhs")
+
+ case *ast.GoStmt:
+ a.apply(n, "Call", nil, n.Call)
+
+ case *ast.DeferStmt:
+ a.apply(n, "Call", nil, n.Call)
+
+ case *ast.ReturnStmt:
+ a.applyList(n, "Results")
+
+ case *ast.BranchStmt:
+ a.apply(n, "Label", nil, n.Label)
+
+ case *ast.BlockStmt:
+ a.applyList(n, "List")
+
+ case *ast.IfStmt:
+ a.apply(n, "Init", nil, n.Init)
+ a.apply(n, "Cond", nil, n.Cond)
+ a.apply(n, "Body", nil, n.Body)
+ a.apply(n, "Else", nil, n.Else)
+
+ case *ast.CaseClause:
+ a.applyList(n, "List")
+ a.applyList(n, "Body")
+
+ case *ast.SwitchStmt:
+ a.apply(n, "Init", nil, n.Init)
+ a.apply(n, "Tag", nil, n.Tag)
+ a.apply(n, "Body", nil, n.Body)
+
+ case *ast.TypeSwitchStmt:
+ a.apply(n, "Init", nil, n.Init)
+ a.apply(n, "Assign", nil, n.Assign)
+ a.apply(n, "Body", nil, n.Body)
+
+ case *ast.CommClause:
+ a.apply(n, "Comm", nil, n.Comm)
+ a.applyList(n, "Body")
+
+ case *ast.SelectStmt:
+ a.apply(n, "Body", nil, n.Body)
+
+ case *ast.ForStmt:
+ a.apply(n, "Init", nil, n.Init)
+ a.apply(n, "Cond", nil, n.Cond)
+ a.apply(n, "Post", nil, n.Post)
+ a.apply(n, "Body", nil, n.Body)
+
+ case *ast.RangeStmt:
+ a.apply(n, "Key", nil, n.Key)
+ a.apply(n, "Value", nil, n.Value)
+ a.apply(n, "X", nil, n.X)
+ a.apply(n, "Body", nil, n.Body)
+
+ // Declarations
+ case *ast.ImportSpec:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.apply(n, "Name", nil, n.Name)
+ a.apply(n, "Path", nil, n.Path)
+ a.apply(n, "Comment", nil, n.Comment)
+
+ case *ast.ValueSpec:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.applyList(n, "Names")
+ a.apply(n, "Type", nil, n.Type)
+ a.applyList(n, "Values")
+ a.apply(n, "Comment", nil, n.Comment)
+
+ case *ast.TypeSpec:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.apply(n, "Name", nil, n.Name)
+ a.apply(n, "Type", nil, n.Type)
+ a.apply(n, "Comment", nil, n.Comment)
+
+ case *ast.BadDecl:
+ // nothing to do
+
+ case *ast.GenDecl:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.applyList(n, "Specs")
+
+ case *ast.FuncDecl:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.apply(n, "Recv", nil, n.Recv)
+ a.apply(n, "Name", nil, n.Name)
+ a.apply(n, "Type", nil, n.Type)
+ a.apply(n, "Body", nil, n.Body)
+
+ // Files and packages
+ case *ast.File:
+ a.apply(n, "Doc", nil, n.Doc)
+ a.apply(n, "Name", nil, n.Name)
+ a.applyList(n, "Decls")
+ // Don't walk n.Comments; they have either been walked already if
+ // they are Doc comments, or they can be easily walked explicitly.
+
+ case *ast.Package:
+ // collect and sort names for reproducible behavior
+ var names []string
+ for name := range n.Files {
+ names = append(names, name)
+ }
+ sort.Strings(names)
+ for _, name := range names {
+ a.apply(n, name, nil, n.Files[name])
+ }
+
+ default:
+ panic(fmt.Sprintf("Apply: unexpected node type %T", n))
+ }
+
+ if a.post != nil && !a.post(&a.cursor) {
+ panic(abort)
+ }
+
+ a.cursor = saved
+}
+
+// An iterator controls iteration over a slice of nodes.
+type iterator struct {
+ index, step int
+}
+
+func (a *application) applyList(parent ast.Node, name string) {
+ // avoid heap-allocating a new iterator for each applyList call; reuse a.iter instead
+ saved := a.iter
+ a.iter.index = 0
+ for {
+ // must reload parent.name each time, since cursor modifications might change it
+ v := reflect.Indirect(reflect.ValueOf(parent)).FieldByName(name)
+ if a.iter.index >= v.Len() {
+ break
+ }
+
+ // element x may be nil in a bad AST - be cautious
+ var x ast.Node
+ if e := v.Index(a.iter.index); e.IsValid() {
+ x = e.Interface().(ast.Node)
+ }
+
+ a.iter.step = 1
+ a.apply(parent, name, &a.iter, x)
+ a.iter.index += a.iter.step
+ }
+ a.iter = saved
+}
diff --git a/vendor/golang.org/x/tools/go/ast/astutil/util.go b/vendor/golang.org/x/tools/go/ast/astutil/util.go
new file mode 100644
index 0000000000000..7630629824af1
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/ast/astutil/util.go
@@ -0,0 +1,14 @@
+package astutil
+
+import "go/ast"
+
+// Unparen returns e with any enclosing parentheses stripped.
+func Unparen(e ast.Expr) ast.Expr {
+ for {
+ p, ok := e.(*ast.ParenExpr)
+ if !ok {
+ return e
+ }
+ e = p.X
+ }
+}
diff --git a/vendor/golang.org/x/tools/go/ast/inspector/inspector.go b/vendor/golang.org/x/tools/go/ast/inspector/inspector.go
new file mode 100644
index 0000000000000..ddbdd3f08fc2e
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/ast/inspector/inspector.go
@@ -0,0 +1,182 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package inspector provides helper functions for traversal over the
+// syntax trees of a package, including node filtering by type, and
+// materialization of the traversal stack.
+//
+// During construction, the inspector does a complete traversal and
+// builds a list of push/pop events and their node type. Subsequent
+// method calls that request a traversal scan this list, rather than walk
+// the AST, and perform type filtering using efficient bit sets.
+//
+// Experiments suggest the inspector's traversals are about 2.5x faster
+// than ast.Inspect, but it may take around 5 traversals for this
+// benefit to amortize the inspector's construction cost.
+// If efficiency is the primary concern, do not use Inspector for
+// one-off traversals.
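+//
+// A typical use, sketched under the assumption that files holds the
+// package's parsed *ast.File values:
+//
+//	in := inspector.New(files)
+//	in.Preorder([]ast.Node{(*ast.CallExpr)(nil)}, func(n ast.Node) {
+//		call := n.(*ast.CallExpr)
+//		_ = call // examine each call expression
+//	})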
+package inspector
+
+// There are four orthogonal features in a traversal:
+// 1 type filtering
+// 2 pruning
+// 3 postorder calls to f
+// 4 stack
+// Rather than offer all of them in the API,
+// only a few combinations are exposed:
+// - Preorder is the fastest and has fewest features,
+// but is the most commonly needed traversal.
+// - Nodes and WithStack both provide pruning and postorder calls,
+// even though few clients need it, because supporting two versions
+// is not justified.
+// More combinations could be supported by expressing them as
+// wrappers around a more generic traversal, but this was measured
+// and found to degrade performance significantly (30%).
+
+import (
+ "go/ast"
+)
+
+// An Inspector provides methods for inspecting
+// (traversing) the syntax trees of a package.
+type Inspector struct {
+ events []event
+}
+
+// New returns an Inspector for the specified syntax trees.
+func New(files []*ast.File) *Inspector {
+ return &Inspector{traverse(files)}
+}
+
+// An event represents a push or a pop
+// of an ast.Node during a traversal.
+type event struct {
+ node ast.Node
+ typ uint64 // typeOf(node)
+ index int // 1 + index of corresponding pop event, or 0 if this is a pop
+}
+
+// Preorder visits all the nodes of the files supplied to New in
+// depth-first order. It calls f(n) for each node n before it visits
+// n's children.
+//
+// The types argument, if non-empty, enables type-based filtering of
+// events. The function f is called only for nodes whose type
+// matches an element of the types slice.
+func (in *Inspector) Preorder(types []ast.Node, f func(ast.Node)) {
+ // Because it avoids postorder calls to f, and the pruning
+ // check, Preorder is almost twice as fast as Nodes. The two
+ // features seem to contribute similar slowdowns (~1.4x each).
+
+ mask := maskOf(types)
+ for i := 0; i < len(in.events); {
+ ev := in.events[i]
+ if ev.typ&mask != 0 {
+ if ev.index > 0 {
+ f(ev.node)
+ }
+ }
+ i++
+ }
+}
+
+// Nodes visits the nodes of the files supplied to New in depth-first
+// order. It calls f(n, true) for each node n before it visits n's
+// children. If f returns true, Nodes invokes f recursively for each
+// of the non-nil children of the node, followed by a call of
+// f(n, false).
+//
+// The types argument, if non-empty, enables type-based filtering of
+// events. The function f is called only for nodes whose type
+// matches an element of the types slice.
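+//
+// For example, this sketch prunes traversal of function literal
+// bodies by returning false on push:
+//
+//	in.Nodes([]ast.Node{(*ast.FuncLit)(nil)}, func(n ast.Node, push bool) bool {
+//		return false // never descend into a function literal
+//	})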
+func (in *Inspector) Nodes(types []ast.Node, f func(n ast.Node, push bool) (prune bool)) {
+ mask := maskOf(types)
+ for i := 0; i < len(in.events); {
+ ev := in.events[i]
+ if ev.typ&mask != 0 {
+ if ev.index > 0 {
+ // push
+ if !f(ev.node, true) {
+ i = ev.index // jump to corresponding pop + 1
+ continue
+ }
+ } else {
+ // pop
+ f(ev.node, false)
+ }
+ }
+ i++
+ }
+}
+
+// WithStack visits nodes in a similar manner to Nodes, but it
+// supplies each call to f an additional argument, the current
+// traversal stack. The stack's first element is the outermost node,
+// an *ast.File; its last is the innermost, n.
+func (in *Inspector) WithStack(types []ast.Node, f func(n ast.Node, push bool, stack []ast.Node) (prune bool)) {
+ mask := maskOf(types)
+ var stack []ast.Node
+ for i := 0; i < len(in.events); {
+ ev := in.events[i]
+ if ev.index > 0 {
+ // push
+ stack = append(stack, ev.node)
+ if ev.typ&mask != 0 {
+ if !f(ev.node, true, stack) {
+ i = ev.index
+ stack = stack[:len(stack)-1]
+ continue
+ }
+ }
+ } else {
+ // pop
+ if ev.typ&mask != 0 {
+ f(ev.node, false, stack)
+ }
+ stack = stack[:len(stack)-1]
+ }
+ i++
+ }
+}
+
+// traverse builds the table of events representing a traversal.
+func traverse(files []*ast.File) []event {
+ // Preallocate approximate number of events
+ // based on source file extent.
+ // This makes traverse faster by 4x (!).
+ var extent int
+ for _, f := range files {
+ extent += int(f.End() - f.Pos())
+ }
+ // This estimate is based on the net/http package.
+ events := make([]event, 0, extent*33/100)
+
+ var stack []event
+ for _, f := range files {
+ ast.Inspect(f, func(n ast.Node) bool {
+ if n != nil {
+ // push
+ ev := event{
+ node: n,
+ typ: typeOf(n),
+ index: len(events), // push event temporarily holds own index
+ }
+ stack = append(stack, ev)
+ events = append(events, ev)
+ } else {
+ // pop
+ ev := stack[len(stack)-1]
+ stack = stack[:len(stack)-1]
+
+ events[ev.index].index = len(events) + 1 // make push refer to pop
+
+ ev.index = 0 // turn ev into a pop event
+ events = append(events, ev)
+ }
+ return true
+ })
+ }
+
+ return events
+}
diff --git a/vendor/golang.org/x/tools/go/ast/inspector/typeof.go b/vendor/golang.org/x/tools/go/ast/inspector/typeof.go
new file mode 100644
index 0000000000000..d61301b133dfd
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/ast/inspector/typeof.go
@@ -0,0 +1,216 @@
+package inspector
+
+// This file defines func typeOf(ast.Node) uint64.
+//
+// The initial map-based implementation was too slow;
+// see https://go-review.googlesource.com/c/tools/+/135655/1/go/ast/inspector/inspector.go#196
+
+import "go/ast"
+
+const (
+ nArrayType = iota
+ nAssignStmt
+ nBadDecl
+ nBadExpr
+ nBadStmt
+ nBasicLit
+ nBinaryExpr
+ nBlockStmt
+ nBranchStmt
+ nCallExpr
+ nCaseClause
+ nChanType
+ nCommClause
+ nComment
+ nCommentGroup
+ nCompositeLit
+ nDeclStmt
+ nDeferStmt
+ nEllipsis
+ nEmptyStmt
+ nExprStmt
+ nField
+ nFieldList
+ nFile
+ nForStmt
+ nFuncDecl
+ nFuncLit
+ nFuncType
+ nGenDecl
+ nGoStmt
+ nIdent
+ nIfStmt
+ nImportSpec
+ nIncDecStmt
+ nIndexExpr
+ nInterfaceType
+ nKeyValueExpr
+ nLabeledStmt
+ nMapType
+ nPackage
+ nParenExpr
+ nRangeStmt
+ nReturnStmt
+ nSelectStmt
+ nSelectorExpr
+ nSendStmt
+ nSliceExpr
+ nStarExpr
+ nStructType
+ nSwitchStmt
+ nTypeAssertExpr
+ nTypeSpec
+ nTypeSwitchStmt
+ nUnaryExpr
+ nValueSpec
+)
+
+// typeOf returns a distinct single-bit value that represents the type of n.
+//
+// Various implementations were benchmarked with BenchmarkNewInspector:
+// GOGC=off
+// - type switch 4.9-5.5ms 2.1ms
+// - binary search over a sorted list of types 5.5-5.9ms 2.5ms
+// - linear scan, frequency-ordered list 5.9-6.1ms 2.7ms
+// - linear scan, unordered list 6.4ms 2.7ms
+// - hash table 6.5ms 3.1ms
+// A perfect hash seemed like overkill.
+//
+// The compiler's switch statement is the clear winner
+// as it produces a binary tree in code,
+// with constant conditions and good branch prediction.
+// (Sadly it is the most verbose in source code.)
+// Binary search suffered from poor branch prediction.
+//
+func typeOf(n ast.Node) uint64 {
+ // Fast path: nearly half of all nodes are identifiers.
+ if _, ok := n.(*ast.Ident); ok {
+ return 1 << nIdent
+ }
+
+ // These cases include all nodes encountered by ast.Inspect.
+ switch n.(type) {
+ case *ast.ArrayType:
+ return 1 << nArrayType
+ case *ast.AssignStmt:
+ return 1 << nAssignStmt
+ case *ast.BadDecl:
+ return 1 << nBadDecl
+ case *ast.BadExpr:
+ return 1 << nBadExpr
+ case *ast.BadStmt:
+ return 1 << nBadStmt
+ case *ast.BasicLit:
+ return 1 << nBasicLit
+ case *ast.BinaryExpr:
+ return 1 << nBinaryExpr
+ case *ast.BlockStmt:
+ return 1 << nBlockStmt
+ case *ast.BranchStmt:
+ return 1 << nBranchStmt
+ case *ast.CallExpr:
+ return 1 << nCallExpr
+ case *ast.CaseClause:
+ return 1 << nCaseClause
+ case *ast.ChanType:
+ return 1 << nChanType
+ case *ast.CommClause:
+ return 1 << nCommClause
+ case *ast.Comment:
+ return 1 << nComment
+ case *ast.CommentGroup:
+ return 1 << nCommentGroup
+ case *ast.CompositeLit:
+ return 1 << nCompositeLit
+ case *ast.DeclStmt:
+ return 1 << nDeclStmt
+ case *ast.DeferStmt:
+ return 1 << nDeferStmt
+ case *ast.Ellipsis:
+ return 1 << nEllipsis
+ case *ast.EmptyStmt:
+ return 1 << nEmptyStmt
+ case *ast.ExprStmt:
+ return 1 << nExprStmt
+ case *ast.Field:
+ return 1 << nField
+ case *ast.FieldList:
+ return 1 << nFieldList
+ case *ast.File:
+ return 1 << nFile
+ case *ast.ForStmt:
+ return 1 << nForStmt
+ case *ast.FuncDecl:
+ return 1 << nFuncDecl
+ case *ast.FuncLit:
+ return 1 << nFuncLit
+ case *ast.FuncType:
+ return 1 << nFuncType
+ case *ast.GenDecl:
+ return 1 << nGenDecl
+ case *ast.GoStmt:
+ return 1 << nGoStmt
+ case *ast.Ident:
+ return 1 << nIdent
+ case *ast.IfStmt:
+ return 1 << nIfStmt
+ case *ast.ImportSpec:
+ return 1 << nImportSpec
+ case *ast.IncDecStmt:
+ return 1 << nIncDecStmt
+ case *ast.IndexExpr:
+ return 1 << nIndexExpr
+ case *ast.InterfaceType:
+ return 1 << nInterfaceType
+ case *ast.KeyValueExpr:
+ return 1 << nKeyValueExpr
+ case *ast.LabeledStmt:
+ return 1 << nLabeledStmt
+ case *ast.MapType:
+ return 1 << nMapType
+ case *ast.Package:
+ return 1 << nPackage
+ case *ast.ParenExpr:
+ return 1 << nParenExpr
+ case *ast.RangeStmt:
+ return 1 << nRangeStmt
+ case *ast.ReturnStmt:
+ return 1 << nReturnStmt
+ case *ast.SelectStmt:
+ return 1 << nSelectStmt
+ case *ast.SelectorExpr:
+ return 1 << nSelectorExpr
+ case *ast.SendStmt:
+ return 1 << nSendStmt
+ case *ast.SliceExpr:
+ return 1 << nSliceExpr
+ case *ast.StarExpr:
+ return 1 << nStarExpr
+ case *ast.StructType:
+ return 1 << nStructType
+ case *ast.SwitchStmt:
+ return 1 << nSwitchStmt
+ case *ast.TypeAssertExpr:
+ return 1 << nTypeAssertExpr
+ case *ast.TypeSpec:
+ return 1 << nTypeSpec
+ case *ast.TypeSwitchStmt:
+ return 1 << nTypeSwitchStmt
+ case *ast.UnaryExpr:
+ return 1 << nUnaryExpr
+ case *ast.ValueSpec:
+ return 1 << nValueSpec
+ }
+ return 0
+}
+
+func maskOf(nodes []ast.Node) uint64 {
+ if nodes == nil {
+ return 1<<64 - 1 // match all node types
+ }
+ var mask uint64
+ for _, n := range nodes {
+ mask |= typeOf(n)
+ }
+ return mask
+}
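+
+// A worked sketch of the mask arithmetic: typed nil pointers are enough to
+// select node kinds, e.g.
+//
+//	maskOf([]ast.Node{(*ast.Ident)(nil), (*ast.CallExpr)(nil)})
+//
+// returns 1<<nIdent | 1<<nCallExpr, while maskOf(nil) returns a mask with
+// all 64 bits set and therefore matches every node type.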
diff --git a/vendor/golang.org/x/tools/go/buildutil/allpackages.go b/vendor/golang.org/x/tools/go/buildutil/allpackages.go
new file mode 100644
index 0000000000000..c0cb03e7bee37
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/buildutil/allpackages.go
@@ -0,0 +1,198 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package buildutil provides utilities related to the go/build
+// package in the standard library.
+//
+// All I/O is done via the build.Context file system interface, which must
+// be concurrency-safe.
+package buildutil // import "golang.org/x/tools/go/buildutil"
+
+import (
+ "go/build"
+ "os"
+ "path/filepath"
+ "sort"
+ "strings"
+ "sync"
+)
+
+// AllPackages returns the package path of each Go package in any source
+// directory of the specified build context (e.g. $GOROOT or an element
+// of $GOPATH). Errors are ignored. The results are sorted.
+// All package paths are canonical, and thus may contain "/vendor/".
+//
+// The result may include import paths for directories that contain no
+// *.go files, such as "archive" (in $GOROOT/src).
+//
+// All I/O is done via the build.Context file system interface,
+// which must be concurrency-safe.
+//
+func AllPackages(ctxt *build.Context) []string {
+ var list []string
+ ForEachPackage(ctxt, func(pkg string, _ error) {
+ list = append(list, pkg)
+ })
+ sort.Strings(list)
+ return list
+}
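+
+// A minimal usage sketch (assuming the standard build.Default context):
+//
+//	for _, path := range buildutil.AllPackages(&build.Default) {
+//		fmt.Println(path)
+//	}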
+
+// ForEachPackage calls the found function with the package path of
+// each Go package it finds in any source directory of the specified
+// build context (e.g. $GOROOT or an element of $GOPATH).
+// All package paths are canonical, and thus may contain "/vendor/".
+//
+// If the package directory exists but could not be read, the second
+// argument to the found function provides the error.
+//
+// All I/O is done via the build.Context file system interface,
+// which must be concurrency-safe.
+//
+func ForEachPackage(ctxt *build.Context, found func(importPath string, err error)) {
+ ch := make(chan item)
+
+ var wg sync.WaitGroup
+ for _, root := range ctxt.SrcDirs() {
+ root := root
+ wg.Add(1)
+ go func() {
+ allPackages(ctxt, root, ch)
+ wg.Done()
+ }()
+ }
+ go func() {
+ wg.Wait()
+ close(ch)
+ }()
+
+ // All calls to found occur in the caller's goroutine.
+ for i := range ch {
+ found(i.importPath, i.err)
+ }
+}
+
+type item struct {
+ importPath string
+ err error // (optional)
+}
+
+// We use a process-wide counting semaphore to limit
+// the number of parallel calls to ReadDir.
+var ioLimit = make(chan bool, 20)
+
+func allPackages(ctxt *build.Context, root string, ch chan<- item) {
+ root = filepath.Clean(root) + string(os.PathSeparator)
+
+ var wg sync.WaitGroup
+
+ var walkDir func(dir string)
+ walkDir = func(dir string) {
+ // Avoid .foo, _foo, and testdata directory trees.
+ base := filepath.Base(dir)
+ if base == "" || base[0] == '.' || base[0] == '_' || base == "testdata" {
+ return
+ }
+
+ pkg := filepath.ToSlash(strings.TrimPrefix(dir, root))
+
+ // Prune search if we encounter any of these import paths.
+ switch pkg {
+ case "builtin":
+ return
+ }
+
+ ioLimit <- true
+ files, err := ReadDir(ctxt, dir)
+ <-ioLimit
+ if pkg != "" || err != nil {
+ ch <- item{pkg, err}
+ }
+ for _, fi := range files {
+ fi := fi
+ if fi.IsDir() {
+ wg.Add(1)
+ go func() {
+ walkDir(filepath.Join(dir, fi.Name()))
+ wg.Done()
+ }()
+ }
+ }
+ }
+
+ walkDir(root)
+ wg.Wait()
+}
+
+// ExpandPatterns returns the set of packages matched by patterns,
+// which may have the following forms:
+//
+// golang.org/x/tools/cmd/guru # a single package
+// golang.org/x/tools/... # all packages beneath dir
+// ... # the entire workspace.
+//
+// Order is significant: a pattern preceded by '-' removes matching
+// packages from the set. For example, these patterns match all encoding
+// packages except encoding/xml:
+//
+// encoding/... -encoding/xml
+//
+// A trailing slash in a pattern is ignored. (Path components of Go
+// package names are separated by slash, not the platform's path separator.)
+//
+func ExpandPatterns(ctxt *build.Context, patterns []string) map[string]bool {
+ // TODO(adonovan): support other features of 'go list':
+ // - "std"/"cmd"/"all" meta-packages
+ // - "..." not at the end of a pattern
+ // - relative patterns using "./" or "../" prefix
+
+ pkgs := make(map[string]bool)
+ doPkg := func(pkg string, neg bool) {
+ if neg {
+ delete(pkgs, pkg)
+ } else {
+ pkgs[pkg] = true
+ }
+ }
+
+ // Scan entire workspace if wildcards are present.
+ // TODO(adonovan): opt: scan only the necessary subtrees of the workspace.
+ var all []string
+ for _, arg := range patterns {
+ if strings.HasSuffix(arg, "...") {
+ all = AllPackages(ctxt)
+ break
+ }
+ }
+
+ for _, arg := range patterns {
+ if arg == "" {
+ continue
+ }
+
+ neg := arg[0] == '-'
+ if neg {
+ arg = arg[1:]
+ }
+
+ if arg == "..." {
+ // ... matches all packages
+ for _, pkg := range all {
+ doPkg(pkg, neg)
+ }
+ } else if dir := strings.TrimSuffix(arg, "/..."); dir != arg {
+ // dir/... matches all packages beneath dir
+ for _, pkg := range all {
+ if strings.HasPrefix(pkg, dir) &&
+ (len(pkg) == len(dir) || pkg[len(dir)] == '/') {
+ doPkg(pkg, neg)
+ }
+ }
+ } else {
+ // single package
+ doPkg(strings.TrimSuffix(arg, "/"), neg)
+ }
+ }
+
+ return pkgs
+}
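+
+// A worked sketch of the doc comment's example (results assume a standard
+// GOROOT):
+//
+//	pkgs := buildutil.ExpandPatterns(&build.Default,
+//		[]string{"encoding/...", "-encoding/xml"})
+//
+// after which pkgs["encoding/json"] is true and pkgs["encoding/xml"] is false.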
diff --git a/vendor/golang.org/x/tools/go/buildutil/fakecontext.go b/vendor/golang.org/x/tools/go/buildutil/fakecontext.go
new file mode 100644
index 0000000000000..8b7f066739f1d
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/buildutil/fakecontext.go
@@ -0,0 +1,109 @@
+package buildutil
+
+import (
+ "fmt"
+ "go/build"
+ "io"
+ "io/ioutil"
+ "os"
+ "path"
+ "path/filepath"
+ "sort"
+ "strings"
+ "time"
+)
+
+// FakeContext returns a build.Context for the fake file tree specified
+// by pkgs, which maps package import paths to a mapping from file base
+// names to contents.
+//
+// The fake Context has a GOROOT of "/go" and no GOPATH, and overrides
+// the necessary file access methods to read from memory instead of the
+// real file system.
+//
+// Unlike a real file tree, the fake one has only two levels---packages
+// and files---so ReadDir("/go/src/") returns all packages under
+// /go/src/ including, for instance, "math" and "math/big".
+// ReadDir("/go/src/math/big") would return all the files in the
+// "math/big" package.
+//
+func FakeContext(pkgs map[string]map[string]string) *build.Context {
+ clean := func(filename string) string {
+ f := path.Clean(filepath.ToSlash(filename))
+ // Removing "/go/src" while respecting segment
+ // boundaries has this unfortunate corner case:
+ if f == "/go/src" {
+ return ""
+ }
+ return strings.TrimPrefix(f, "/go/src/")
+ }
+
+ ctxt := build.Default // copy
+ ctxt.GOROOT = "/go"
+ ctxt.GOPATH = ""
+ ctxt.Compiler = "gc"
+ ctxt.IsDir = func(dir string) bool {
+ dir = clean(dir)
+ if dir == "" {
+ return true // needed by (*build.Context).SrcDirs
+ }
+ return pkgs[dir] != nil
+ }
+ ctxt.ReadDir = func(dir string) ([]os.FileInfo, error) {
+ dir = clean(dir)
+ var fis []os.FileInfo
+ if dir == "" {
+ // enumerate packages
+ for importPath := range pkgs {
+ fis = append(fis, fakeDirInfo(importPath))
+ }
+ } else {
+ // enumerate files of package
+ for basename := range pkgs[dir] {
+ fis = append(fis, fakeFileInfo(basename))
+ }
+ }
+ sort.Sort(byName(fis))
+ return fis, nil
+ }
+ ctxt.OpenFile = func(filename string) (io.ReadCloser, error) {
+ filename = clean(filename)
+ dir, base := path.Split(filename)
+ content, ok := pkgs[path.Clean(dir)][base]
+ if !ok {
+ return nil, fmt.Errorf("file not found: %s", filename)
+ }
+ return ioutil.NopCloser(strings.NewReader(content)), nil
+ }
+ ctxt.IsAbsPath = func(path string) bool {
+ path = filepath.ToSlash(path)
+		// Don't rely on the default (filepath.IsAbs) since on
+ // Windows, it reports virtual paths as non-absolute.
+ return strings.HasPrefix(path, "/")
+ }
+ return &ctxt
+}
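+
+// A minimal usage sketch (package path and file name are illustrative):
+//
+//	ctxt := buildutil.FakeContext(map[string]map[string]string{
+//		"a/b": {"x.go": "package b"},
++//	})
+//	rc, err := ctxt.OpenFile("/go/src/a/b/x.go") // reads "package b" from memory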
+
+type byName []os.FileInfo
+
+func (s byName) Len() int { return len(s) }
+func (s byName) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+func (s byName) Less(i, j int) bool { return s[i].Name() < s[j].Name() }
+
+type fakeFileInfo string
+
+func (fi fakeFileInfo) Name() string { return string(fi) }
+func (fakeFileInfo) Sys() interface{} { return nil }
+func (fakeFileInfo) ModTime() time.Time { return time.Time{} }
+func (fakeFileInfo) IsDir() bool { return false }
+func (fakeFileInfo) Size() int64 { return 0 }
+func (fakeFileInfo) Mode() os.FileMode { return 0644 }
+
+type fakeDirInfo string
+
+func (fd fakeDirInfo) Name() string { return string(fd) }
+func (fakeDirInfo) Sys() interface{} { return nil }
+func (fakeDirInfo) ModTime() time.Time { return time.Time{} }
+func (fakeDirInfo) IsDir() bool { return true }
+func (fakeDirInfo) Size() int64 { return 0 }
+func (fakeDirInfo) Mode() os.FileMode { return 0755 }
diff --git a/vendor/golang.org/x/tools/go/buildutil/overlay.go b/vendor/golang.org/x/tools/go/buildutil/overlay.go
new file mode 100644
index 0000000000000..8e239086bd444
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/buildutil/overlay.go
@@ -0,0 +1,103 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package buildutil
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/build"
+ "io"
+ "io/ioutil"
+ "path/filepath"
+ "strconv"
+ "strings"
+)
+
+// OverlayContext overlays a build.Context with additional files from
+// a map. Files in the map take precedence over other files.
+//
+// In addition to plain string comparison, two file names are
+// considered equal if their base names match and their directory
+// components point at the same directory on the file system. That is,
+// symbolic links are followed for directories, but not files.
+//
+// A common use case for OverlayContext is to allow editors to pass in
+// a set of unsaved, modified files.
+//
+// Currently, only the Context.OpenFile function will respect the
+// overlay. This may change in the future.
+func OverlayContext(orig *build.Context, overlay map[string][]byte) *build.Context {
+ // TODO(dominikh): Implement IsDir, HasSubdir and ReadDir
+
+ rc := func(data []byte) (io.ReadCloser, error) {
+ return ioutil.NopCloser(bytes.NewBuffer(data)), nil
+ }
+
+ copy := *orig // make a copy
+	ctxt := &copy
+ ctxt.OpenFile = func(path string) (io.ReadCloser, error) {
+ // Fast path: names match exactly.
+ if content, ok := overlay[path]; ok {
+ return rc(content)
+ }
+
+ // Slow path: check for same file under a different
+ // alias, perhaps due to a symbolic link.
+ for filename, content := range overlay {
+ if sameFile(path, filename) {
+ return rc(content)
+ }
+ }
+
+ return OpenFile(orig, path)
+ }
+ return ctxt
+}
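+
+// A minimal usage sketch (the file name is illustrative), e.g. serving an
+// editor's unsaved buffer instead of the on-disk file:
+//
+//	overlay := map[string][]byte{
+//		"/home/user/hello.go": []byte("package main"),
+//	}
+//	ctxt := buildutil.OverlayContext(&build.Default, overlay)
+//	rc, err := buildutil.OpenFile(ctxt, "/home/user/hello.go") // served from the overlay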
+
+// ParseOverlayArchive parses an archive containing Go files and their
+// contents. The result is intended to be used with OverlayContext.
+//
+//
+// Archive format
+//
+// The archive consists of a series of files. Each file consists of a
+// name, a decimal file size and the file contents, separated by
+// newlines. No newline follows after the file contents.
+func ParseOverlayArchive(archive io.Reader) (map[string][]byte, error) {
+ overlay := make(map[string][]byte)
+ r := bufio.NewReader(archive)
+ for {
+ // Read file name.
+ filename, err := r.ReadString('\n')
+ if err != nil {
+ if err == io.EOF {
+ break // OK
+ }
+ return nil, fmt.Errorf("reading archive file name: %v", err)
+ }
+ filename = filepath.Clean(strings.TrimSpace(filename))
+
+ // Read file size.
+ sz, err := r.ReadString('\n')
+ if err != nil {
+ return nil, fmt.Errorf("reading size of archive file %s: %v", filename, err)
+ }
+ sz = strings.TrimSpace(sz)
+ size, err := strconv.ParseUint(sz, 10, 32)
+ if err != nil {
+ return nil, fmt.Errorf("parsing size of archive file %s: %v", filename, err)
+ }
+
+ // Read file content.
+ content := make([]byte, size)
+ if _, err := io.ReadFull(r, content); err != nil {
+ return nil, fmt.Errorf("reading archive file %s: %v", filename, err)
+ }
+ overlay[filename] = content
+ }
+
+ return overlay, nil
+}
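+
+// A worked example of the format: an archive holding "a.go" (10 bytes,
+// newline included in the count) and "b.go" (12 bytes) looks like
+//
+//	a.go
+//	10
+//	package a
+//	b.go
+//	12
+//	package bee
+//
+// Since the size is exact, any trailing newline belongs to the contents,
+// not to the framing.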
diff --git a/vendor/golang.org/x/tools/go/buildutil/tags.go b/vendor/golang.org/x/tools/go/buildutil/tags.go
new file mode 100644
index 0000000000000..486606f3768db
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/buildutil/tags.go
@@ -0,0 +1,75 @@
+package buildutil
+
+// This logic was copied from stringsFlag from $GOROOT/src/cmd/go/build.go.
+
+import "fmt"
+
+const TagsFlagDoc = "a list of `build tags` to consider satisfied during the build. " +
+ "For more information about build tags, see the description of " +
+ "build constraints in the documentation for the go/build package"
+
+// TagsFlag is an implementation of the flag.Value and flag.Getter interfaces that parses
+// a flag value in the same manner as go build's -tags flag and
+// populates a []string slice.
+//
+// See $GOROOT/src/go/build/doc.go for description of build tags.
+// See $GOROOT/src/cmd/go/doc.go for description of 'go build -tags' flag.
+//
+// Example:
+// flag.Var((*buildutil.TagsFlag)(&build.Default.BuildTags), "tags", buildutil.TagsFlagDoc)
+type TagsFlag []string
+
+func (v *TagsFlag) Set(s string) error {
+ var err error
+ *v, err = splitQuotedFields(s)
+ if *v == nil {
+ *v = []string{}
+ }
+ return err
+}
+
+func (v *TagsFlag) Get() interface{} { return *v }
+
+func splitQuotedFields(s string) ([]string, error) {
+ // Split fields allowing '' or "" around elements.
+ // Quotes further inside the string do not count.
+ var f []string
+ for len(s) > 0 {
+ for len(s) > 0 && isSpaceByte(s[0]) {
+ s = s[1:]
+ }
+ if len(s) == 0 {
+ break
+ }
+		// Accept a quoted string. No unescaping inside.
+ if s[0] == '"' || s[0] == '\'' {
+ quote := s[0]
+ s = s[1:]
+ i := 0
+ for i < len(s) && s[i] != quote {
+ i++
+ }
+ if i >= len(s) {
+ return nil, fmt.Errorf("unterminated %c string", quote)
+ }
+ f = append(f, s[:i])
+ s = s[i+1:]
+ continue
+ }
+ i := 0
+ for i < len(s) && !isSpaceByte(s[i]) {
+ i++
+ }
+ f = append(f, s[:i])
+ s = s[i:]
+ }
+ return f, nil
+}
+
+func (v *TagsFlag) String() string {
+ return "<tagsFlag>"
+}
+
+func isSpaceByte(c byte) bool {
+ return c == ' ' || c == '\t' || c == '\n' || c == '\r'
+}
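+
+// A worked sketch of the quoting rules implemented above:
+//
+//	var tags buildutil.TagsFlag
+//	_ = tags.Set(`foo 'bar baz' "qux"`)
+//	// tags == TagsFlag{"foo", "bar baz", "qux"}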
diff --git a/vendor/golang.org/x/tools/go/buildutil/util.go b/vendor/golang.org/x/tools/go/buildutil/util.go
new file mode 100644
index 0000000000000..fc923d7a70201
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/buildutil/util.go
@@ -0,0 +1,212 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package buildutil
+
+import (
+ "fmt"
+ "go/ast"
+ "go/build"
+ "go/parser"
+ "go/token"
+ "io"
+ "io/ioutil"
+ "os"
+ "path"
+ "path/filepath"
+ "strings"
+)
+
+// ParseFile behaves like parser.ParseFile,
+// but uses the build context's file system interface, if any.
+//
+// If file is not absolute (as defined by IsAbsPath), the (dir, file)
+// components are joined using JoinPath; dir must be absolute.
+//
+// The displayPath function, if provided, is used to transform the
+// filename that will be attached to the ASTs.
+//
+// TODO(adonovan): call this from go/loader.parseFiles when the tree thaws.
+//
+func ParseFile(fset *token.FileSet, ctxt *build.Context, displayPath func(string) string, dir string, file string, mode parser.Mode) (*ast.File, error) {
+ if !IsAbsPath(ctxt, file) {
+ file = JoinPath(ctxt, dir, file)
+ }
+ rd, err := OpenFile(ctxt, file)
+ if err != nil {
+ return nil, err
+ }
+ defer rd.Close() // ignore error
+ if displayPath != nil {
+ file = displayPath(file)
+ }
+ return parser.ParseFile(fset, file, rd, mode)
+}
+
+// ContainingPackage returns the package containing filename.
+//
+// If filename is not absolute, it is interpreted relative to working directory dir.
+// All I/O is via the build context's file system interface, if any.
+//
+// The '...Files []string' fields of the resulting build.Package are not
+// populated (build.FindOnly mode).
+//
+func ContainingPackage(ctxt *build.Context, dir, filename string) (*build.Package, error) {
+ if !IsAbsPath(ctxt, filename) {
+ filename = JoinPath(ctxt, dir, filename)
+ }
+
+ // We must not assume the file tree uses
+ // "/" always,
+ // `\` always,
+ // or os.PathSeparator (which varies by platform),
+ // but to make any progress, we are forced to assume that
+ // paths will not use `\` unless the PathSeparator
+ // is also `\`, thus we can rely on filepath.ToSlash for some sanity.
+
+ dirSlash := path.Dir(filepath.ToSlash(filename)) + "/"
+
+ // We assume that no source root (GOPATH[i] or GOROOT) contains any other.
+ for _, srcdir := range ctxt.SrcDirs() {
+ srcdirSlash := filepath.ToSlash(srcdir) + "/"
+ if importPath, ok := HasSubdir(ctxt, srcdirSlash, dirSlash); ok {
+ return ctxt.Import(importPath, dir, build.FindOnly)
+ }
+ }
+
+ return nil, fmt.Errorf("can't find package containing %s", filename)
+}
+
+// -- Effective methods of file system interface -------------------------
+
+// (go/build.Context defines these as methods, but does not export them.)
+
+// HasSubdir calls ctxt.HasSubdir (if not nil) or else uses
+// the local file system to answer the question.
+func HasSubdir(ctxt *build.Context, root, dir string) (rel string, ok bool) {
+ if f := ctxt.HasSubdir; f != nil {
+ return f(root, dir)
+ }
+
+ // Try using paths we received.
+ if rel, ok = hasSubdir(root, dir); ok {
+ return
+ }
+
+ // Try expanding symlinks and comparing
+ // expanded against unexpanded and
+ // expanded against expanded.
+ rootSym, _ := filepath.EvalSymlinks(root)
+ dirSym, _ := filepath.EvalSymlinks(dir)
+
+ if rel, ok = hasSubdir(rootSym, dir); ok {
+ return
+ }
+ if rel, ok = hasSubdir(root, dirSym); ok {
+ return
+ }
+ return hasSubdir(rootSym, dirSym)
+}
+
+func hasSubdir(root, dir string) (rel string, ok bool) {
+ const sep = string(filepath.Separator)
+ root = filepath.Clean(root)
+ if !strings.HasSuffix(root, sep) {
+ root += sep
+ }
+
+ dir = filepath.Clean(dir)
+ if !strings.HasPrefix(dir, root) {
+ return "", false
+ }
+
+ return filepath.ToSlash(dir[len(root):]), true
+}
+
+// FileExists returns true if the specified file exists,
+// using the build context's file system interface.
+func FileExists(ctxt *build.Context, path string) bool {
+ if ctxt.OpenFile != nil {
+ r, err := ctxt.OpenFile(path)
+ if err != nil {
+ return false
+ }
+ r.Close() // ignore error
+ return true
+ }
+ _, err := os.Stat(path)
+ return err == nil
+}
+
+// OpenFile behaves like os.Open,
+// but uses the build context's file system interface, if any.
+func OpenFile(ctxt *build.Context, path string) (io.ReadCloser, error) {
+ if ctxt.OpenFile != nil {
+ return ctxt.OpenFile(path)
+ }
+ return os.Open(path)
+}
+
+// IsAbsPath behaves like filepath.IsAbs,
+// but uses the build context's file system interface, if any.
+func IsAbsPath(ctxt *build.Context, path string) bool {
+ if ctxt.IsAbsPath != nil {
+ return ctxt.IsAbsPath(path)
+ }
+ return filepath.IsAbs(path)
+}
+
+// JoinPath behaves like filepath.Join,
+// but uses the build context's file system interface, if any.
+func JoinPath(ctxt *build.Context, path ...string) string {
+ if ctxt.JoinPath != nil {
+ return ctxt.JoinPath(path...)
+ }
+ return filepath.Join(path...)
+}
+
+// IsDir behaves like os.Stat plus IsDir,
+// but uses the build context's file system interface, if any.
+func IsDir(ctxt *build.Context, path string) bool {
+ if ctxt.IsDir != nil {
+ return ctxt.IsDir(path)
+ }
+ fi, err := os.Stat(path)
+ return err == nil && fi.IsDir()
+}
+
+// ReadDir behaves like ioutil.ReadDir,
+// but uses the build context's file system interface, if any.
+func ReadDir(ctxt *build.Context, path string) ([]os.FileInfo, error) {
+ if ctxt.ReadDir != nil {
+ return ctxt.ReadDir(path)
+ }
+ return ioutil.ReadDir(path)
+}
+
+// SplitPathList behaves like filepath.SplitList,
+// but uses the build context's file system interface, if any.
+func SplitPathList(ctxt *build.Context, s string) []string {
+ if ctxt.SplitPathList != nil {
+ return ctxt.SplitPathList(s)
+ }
+ return filepath.SplitList(s)
+}
+
+// sameFile returns true if x and y have the same basename and denote
+// the same file.
+//
+func sameFile(x, y string) bool {
+ if path.Clean(x) == path.Clean(y) {
+ return true
+ }
+ if filepath.Base(x) == filepath.Base(y) { // (optimisation)
+ if xi, err := os.Stat(x); err == nil {
+ if yi, err := os.Stat(y); err == nil {
+ return os.SameFile(xi, yi)
+ }
+ }
+ }
+ return false
+}
diff --git a/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go b/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go
new file mode 100644
index 0000000000000..f8363d8faae37
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/gcexportdata/gcexportdata.go
@@ -0,0 +1,109 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package gcexportdata provides functions for locating, reading, and
+// writing export data files containing type information produced by the
+// gc compiler. This package supports go1.7 export data format and all
+// later versions.
+//
+// Although it might seem convenient for this package to live alongside
+// go/types in the standard library, this would cause version skew
+// problems for developer tools that use it, since they must be able to
+// consume the outputs of the gc compiler both before and after a Go
+// update such as from Go 1.7 to Go 1.8. Because this package lives in
+// golang.org/x/tools, sites can update their version of this repo some
+// time before the Go 1.8 release and rebuild and redeploy their
+// developer tools, which will then be able to consume both Go 1.7 and
+// Go 1.8 export data files, so they will work before and after the
+// Go update. (See discussion at https://golang.org/issue/15651.)
+//
+package gcexportdata // import "golang.org/x/tools/go/gcexportdata"
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/token"
+ "go/types"
+ "io"
+ "io/ioutil"
+
+ "golang.org/x/tools/go/internal/gcimporter"
+)
+
+// Find returns the name of an object (.o) or archive (.a) file
+// containing type information for the specified import path,
+// using the workspace layout conventions of go/build.
+// If no file was found, an empty filename is returned.
+//
+// A relative srcDir is interpreted relative to the current working directory.
+//
+// Find also returns the package's resolved (canonical) import path,
+// reflecting the effects of srcDir and vendoring on importPath.
+func Find(importPath, srcDir string) (filename, path string) {
+ return gcimporter.FindPkg(importPath, srcDir)
+}
+
+// NewReader returns a reader for the export data section of an object
+// (.o) or archive (.a) file read from r. The new reader may provide
+// additional trailing data beyond the end of the export data.
+func NewReader(r io.Reader) (io.Reader, error) {
+ buf := bufio.NewReader(r)
+ _, err := gcimporter.FindExportData(buf)
+ // If we ever switch to a zip-like archive format with the ToC
+ // at the end, we can return the correct portion of export data,
+ // but for now we must return the entire rest of the file.
+ return buf, err
+}
+
+// Read reads export data from in, decodes it, and returns type
+// information for the package.
+// The package name is specified by path.
+// File position information is added to fset.
+//
+// Read may inspect and add to the imports map to ensure that references
+// within the export data to other packages are consistent. The caller
+// must ensure that imports[path] does not exist, or exists but is
+// incomplete (see types.Package.Complete), and Read inserts the
+// resulting package into this map entry.
+//
+// On return, the state of the reader is undefined.
+func Read(in io.Reader, fset *token.FileSet, imports map[string]*types.Package, path string) (*types.Package, error) {
+ data, err := ioutil.ReadAll(in)
+ if err != nil {
+ return nil, fmt.Errorf("reading export data for %q: %v", path, err)
+ }
+
+ if bytes.HasPrefix(data, []byte("!<arch>")) {
+ return nil, fmt.Errorf("can't read export data for %q directly from an archive file (call gcexportdata.NewReader first to extract export data)", path)
+ }
+
+ // The App Engine Go runtime v1.6 uses the old export data format.
+ // TODO(adonovan): delete once v1.7 has been around for a while.
+ if bytes.HasPrefix(data, []byte("package ")) {
+ return gcimporter.ImportData(imports, path, path, bytes.NewReader(data))
+ }
+
+ // The indexed export format starts with an 'i'; the older
+ // binary export format starts with a 'c', 'd', or 'v'
+ // (from "version"). Select appropriate importer.
+ if len(data) > 0 && data[0] == 'i' {
+ _, pkg, err := gcimporter.IImportData(fset, imports, data[1:], path)
+ return pkg, err
+ }
+
+ _, pkg, err := gcimporter.BImportData(fset, imports, data, path)
+ return pkg, err
+}
+
+// Write writes encoded type information for the specified package to out.
+// The FileSet provides file position information for named objects.
+func Write(out io.Writer, fset *token.FileSet, pkg *types.Package) error {
+ b, err := gcimporter.IExportData(fset, pkg)
+ if err != nil {
+ return err
+ }
+ _, err = out.Write(b)
+ return err
+}
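+
+// A minimal end-to-end sketch combining Find, NewReader and Read (error
+// handling elided; the import path "fmt" is illustrative):
+//
+//	filename, path := gcexportdata.Find("fmt", "")
+//	f, _ := os.Open(filename)
+//	r, _ := gcexportdata.NewReader(f)
+//	fset := token.NewFileSet()
+//	imports := make(map[string]*types.Package)
+//	pkg, _ := gcexportdata.Read(r, fset, imports, path)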
diff --git a/vendor/golang.org/x/tools/go/gcexportdata/importer.go b/vendor/golang.org/x/tools/go/gcexportdata/importer.go
new file mode 100644
index 0000000000000..efe221e7e1423
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/gcexportdata/importer.go
@@ -0,0 +1,73 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package gcexportdata
+
+import (
+ "fmt"
+ "go/token"
+ "go/types"
+ "os"
+)
+
+// NewImporter returns a new instance of the types.Importer interface
+// that reads type information from export data files written by gc.
+// The Importer also satisfies types.ImporterFrom.
+//
+// Export data files are located using "go build" workspace conventions
+// and the build.Default context.
+//
+// Use this importer instead of go/importer.For("gc", ...) to avoid the
+// version-skew problems described in the documentation of this package,
+// or to control the FileSet or access the imports map populated during
+// package loading.
+//
+func NewImporter(fset *token.FileSet, imports map[string]*types.Package) types.ImporterFrom {
+ return importer{fset, imports}
+}
+
+type importer struct {
+ fset *token.FileSet
+ imports map[string]*types.Package
+}
+
+func (imp importer) Import(importPath string) (*types.Package, error) {
+ return imp.ImportFrom(importPath, "", 0)
+}
+
+func (imp importer) ImportFrom(importPath, srcDir string, mode types.ImportMode) (_ *types.Package, err error) {
+ filename, path := Find(importPath, srcDir)
+ if filename == "" {
+ if importPath == "unsafe" {
+ // Even for unsafe, call Find first in case
+ // the package was vendored.
+ return types.Unsafe, nil
+ }
+ return nil, fmt.Errorf("can't find import: %s", importPath)
+ }
+
+ if pkg, ok := imp.imports[path]; ok && pkg.Complete() {
+ return pkg, nil // cache hit
+ }
+
+ // open file
+ f, err := os.Open(filename)
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ f.Close()
+ if err != nil {
+ // add file name to error
+ err = fmt.Errorf("reading export data: %s: %v", filename, err)
+ }
+ }()
+
+ r, err := NewReader(f)
+ if err != nil {
+ return nil, err
+ }
+
+ return Read(r, imp.fset, imp.imports, path)
+}
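+
+// A minimal sketch wiring the importer into go/types (error handling
+// elided; files is an assumed []*ast.File of the package being checked):
+//
+//	fset := token.NewFileSet()
+//	imports := make(map[string]*types.Package)
+//	conf := types.Config{Importer: gcexportdata.NewImporter(fset, imports)}
+//	pkg, _ := conf.Check("p", fset, files, nil)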
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/bexport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/bexport.go
new file mode 100644
index 0000000000000..a807d0aaa2813
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/bexport.go
@@ -0,0 +1,852 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Binary package export.
+// This file was derived from $GOROOT/src/cmd/compile/internal/gc/bexport.go;
+// see that file for specification of the format.
+
+package gcimporter
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "math"
+ "math/big"
+ "sort"
+ "strings"
+)
+
+// If debugFormat is set, each integer and string value is preceded by a marker
+// and position information in the encoding. This mechanism permits an importer
+// to recognize immediately when it is out of sync. The importer recognizes this
+// mode automatically (i.e., it can import export data produced with debugging
+// support even if debugFormat is not set at the time of import). This mode will
+// lead to massively larger export data (by a factor of 2 to 3) and should only
+// be enabled during development and debugging.
+//
+// NOTE: This flag is the first flag to enable if importing dies because of
+// (suspected) format errors, and whenever a change is made to the format.
+const debugFormat = false // default: false
+
+// If trace is set, debugging output is printed to std out.
+const trace = false // default: false
+
+// Current export format version. Increase with each format change.
+// Note: The latest binary (non-indexed) export format is at version 6.
+// This exporter is still at level 4, but it doesn't matter since
+// the binary importer can handle older versions just fine.
+// 6: package height (CL 105038) -- NOT IMPLEMENTED HERE
+// 5: improved position encoding efficiency (issue 20080, CL 41619) -- NOT IMPLEMENTED HERE
+// 4: type name objects support type aliases, uses aliasTag
+// 3: Go1.8 encoding (same as version 2, aliasTag defined but never used)
+// 2: removed unused bool in ODCL export (compiler only)
+// 1: header format change (more regular), export package for _ struct fields
+// 0: Go1.7 encoding
+const exportVersion = 4
+
+// trackAllTypes enables cycle tracking for all types, not just named
+// types. The existing compiler invariants assume that unnamed types
+// that are not completely set up are not used, or else there are spurious
+// errors.
+// If disabled, only named types are tracked, possibly leading to slightly
+// less efficient encoding in rare cases. It also prevents the export of
+// some corner-case type declarations (but those are not handled correctly
+// with the textual export format either).
+// TODO(gri) enable and remove once issues caused by it are fixed
+const trackAllTypes = false
+
+type exporter struct {
+ fset *token.FileSet
+ out bytes.Buffer
+
+ // object -> index maps, indexed in order of serialization
+ strIndex map[string]int
+ pkgIndex map[*types.Package]int
+ typIndex map[types.Type]int
+
+ // position encoding
+ posInfoFormat bool
+ prevFile string
+ prevLine int
+
+ // debugging support
+ written int // bytes written
+ indent int // for trace
+}
+
+// internalError represents an error generated inside this package.
+type internalError string
+
+func (e internalError) Error() string { return "gcimporter: " + string(e) }
+
+func internalErrorf(format string, args ...interface{}) error {
+ return internalError(fmt.Sprintf(format, args...))
+}
+
+// BExportData returns binary export data for pkg.
+// If no file set is provided, position info will be missing.
+func BExportData(fset *token.FileSet, pkg *types.Package) (b []byte, err error) {
+ defer func() {
+ if e := recover(); e != nil {
+ if ierr, ok := e.(internalError); ok {
+ err = ierr
+ return
+ }
+ // Not an internal error; panic again.
+ panic(e)
+ }
+ }()
+
+ p := exporter{
+ fset: fset,
+ strIndex: map[string]int{"": 0}, // empty string is mapped to 0
+ pkgIndex: make(map[*types.Package]int),
+ typIndex: make(map[types.Type]int),
+ posInfoFormat: true, // TODO(gri) might become a flag, eventually
+ }
+
+ // write version info
+ // The version string must start with "version %d" where %d is the version
+ // number. Additional debugging information may follow after a blank; that
+ // text is ignored by the importer.
+ p.rawStringln(fmt.Sprintf("version %d", exportVersion))
+ var debug string
+ if debugFormat {
+ debug = "debug"
+ }
+ p.rawStringln(debug) // cannot use p.bool since it's affected by debugFormat; also want to see this clearly
+ p.bool(trackAllTypes)
+ p.bool(p.posInfoFormat)
+
+ // --- generic export data ---
+
+ // populate type map with predeclared "known" types
+ for index, typ := range predeclared() {
+ p.typIndex[typ] = index
+ }
+ if len(p.typIndex) != len(predeclared()) {
+ return nil, internalError("duplicate entries in type map?")
+ }
+
+ // write package data
+ p.pkg(pkg, true)
+ if trace {
+ p.tracef("\n")
+ }
+
+ // write objects
+ objcount := 0
+ scope := pkg.Scope()
+ for _, name := range scope.Names() {
+ if !ast.IsExported(name) {
+ continue
+ }
+ if trace {
+ p.tracef("\n")
+ }
+ p.obj(scope.Lookup(name))
+ objcount++
+ }
+
+ // indicate end of list
+ if trace {
+ p.tracef("\n")
+ }
+ p.tag(endTag)
+
+ // for self-verification only (redundant)
+ p.int(objcount)
+
+ if trace {
+ p.tracef("\n")
+ }
+
+ // --- end of export data ---
+
+ return p.out.Bytes(), nil
+}
+
+func (p *exporter) pkg(pkg *types.Package, emptypath bool) {
+ if pkg == nil {
+ panic(internalError("unexpected nil pkg"))
+ }
+
+ // if we saw the package before, write its index (>= 0)
+ if i, ok := p.pkgIndex[pkg]; ok {
+ p.index('P', i)
+ return
+ }
+
+ // otherwise, remember the package, write the package tag (< 0) and package data
+ if trace {
+ p.tracef("P%d = { ", len(p.pkgIndex))
+ defer p.tracef("} ")
+ }
+ p.pkgIndex[pkg] = len(p.pkgIndex)
+
+ p.tag(packageTag)
+ p.string(pkg.Name())
+ if emptypath {
+ p.string("")
+ } else {
+ p.string(pkg.Path())
+ }
+}
+
+func (p *exporter) obj(obj types.Object) {
+ switch obj := obj.(type) {
+ case *types.Const:
+ p.tag(constTag)
+ p.pos(obj)
+ p.qualifiedName(obj)
+ p.typ(obj.Type())
+ p.value(obj.Val())
+
+ case *types.TypeName:
+ if obj.IsAlias() {
+ p.tag(aliasTag)
+ p.pos(obj)
+ p.qualifiedName(obj)
+ } else {
+ p.tag(typeTag)
+ }
+ p.typ(obj.Type())
+
+ case *types.Var:
+ p.tag(varTag)
+ p.pos(obj)
+ p.qualifiedName(obj)
+ p.typ(obj.Type())
+
+ case *types.Func:
+ p.tag(funcTag)
+ p.pos(obj)
+ p.qualifiedName(obj)
+ sig := obj.Type().(*types.Signature)
+ p.paramList(sig.Params(), sig.Variadic())
+ p.paramList(sig.Results(), false)
+
+ default:
+ panic(internalErrorf("unexpected object %v (%T)", obj, obj))
+ }
+}
+
+func (p *exporter) pos(obj types.Object) {
+ if !p.posInfoFormat {
+ return
+ }
+
+ file, line := p.fileLine(obj)
+ if file == p.prevFile {
+ // common case: write line delta
+ // delta == 0 means different file or no line change
+ delta := line - p.prevLine
+ p.int(delta)
+ if delta == 0 {
+ p.int(-1) // -1 means no file change
+ }
+ } else {
+ // different file
+ p.int(0)
+ // Encode filename as length of common prefix with previous
+ // filename, followed by (possibly empty) suffix. Filenames
+ // frequently share path prefixes, so this can save a lot
+ // of space and make export data size less dependent on file
+ // path length. The suffix is unlikely to be empty because
+ // file names tend to end in ".go".
+ n := commonPrefixLen(p.prevFile, file)
+ p.int(n) // n >= 0
+ p.string(file[n:]) // write suffix only
+ p.prevFile = file
+ p.int(line)
+ }
+ p.prevLine = line
+}
+
+func (p *exporter) fileLine(obj types.Object) (file string, line int) {
+ if p.fset != nil {
+ pos := p.fset.Position(obj.Pos())
+ file = pos.Filename
+ line = pos.Line
+ }
+ return
+}
+
+func commonPrefixLen(a, b string) int {
+ if len(a) > len(b) {
+ a, b = b, a
+ }
+ // len(a) <= len(b)
+ i := 0
+ for i < len(a) && a[i] == b[i] {
+ i++
+ }
+ return i
+}
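+
+// A worked example of the delta encoding above: with
+// prevFile = "/go/src/a/b.go" and file = "/go/src/a/c.go",
+// commonPrefixLen returns 10 (the length of "/go/src/a/"), so only the
+// 4-byte suffix "c.go" is written.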
+
+func (p *exporter) qualifiedName(obj types.Object) {
+ p.string(obj.Name())
+ p.pkg(obj.Pkg(), false)
+}
+
+func (p *exporter) typ(t types.Type) {
+ if t == nil {
+ panic(internalError("nil type"))
+ }
+
+ // Possible optimization: Anonymous pointer types *T where
+ // T is a named type are common. We could canonicalize all
+ // such types *T to a single type PT = *T. This would lead
+ // to at most one *T entry in typIndex, and all future *T's
+ // would be encoded as the respective index directly. Would
+ // save 1 byte (pointerTag) per *T and reduce the typIndex
+ // size (at the cost of a canonicalization map). We can do
+ // this later, without encoding format change.
+
+ // if we saw the type before, write its index (>= 0)
+ if i, ok := p.typIndex[t]; ok {
+ p.index('T', i)
+ return
+ }
+
+ // otherwise, remember the type, write the type tag (< 0) and type data
+ if trackAllTypes {
+ if trace {
+ p.tracef("T%d = {>\n", len(p.typIndex))
+ defer p.tracef("<\n} ")
+ }
+ p.typIndex[t] = len(p.typIndex)
+ }
+
+ switch t := t.(type) {
+ case *types.Named:
+ if !trackAllTypes {
+ // if we don't track all types, track named types now
+ p.typIndex[t] = len(p.typIndex)
+ }
+
+ p.tag(namedTag)
+ p.pos(t.Obj())
+ p.qualifiedName(t.Obj())
+ p.typ(t.Underlying())
+ if !types.IsInterface(t) {
+ p.assocMethods(t)
+ }
+
+ case *types.Array:
+ p.tag(arrayTag)
+ p.int64(t.Len())
+ p.typ(t.Elem())
+
+ case *types.Slice:
+ p.tag(sliceTag)
+ p.typ(t.Elem())
+
+ case *dddSlice:
+ p.tag(dddTag)
+ p.typ(t.elem)
+
+ case *types.Struct:
+ p.tag(structTag)
+ p.fieldList(t)
+
+ case *types.Pointer:
+ p.tag(pointerTag)
+ p.typ(t.Elem())
+
+ case *types.Signature:
+ p.tag(signatureTag)
+ p.paramList(t.Params(), t.Variadic())
+ p.paramList(t.Results(), false)
+
+ case *types.Interface:
+ p.tag(interfaceTag)
+ p.iface(t)
+
+ case *types.Map:
+ p.tag(mapTag)
+ p.typ(t.Key())
+ p.typ(t.Elem())
+
+ case *types.Chan:
+ p.tag(chanTag)
+ p.int(int(3 - t.Dir())) // hack
+ p.typ(t.Elem())
+
+ default:
+ panic(internalErrorf("unexpected type %T: %s", t, t))
+ }
+}
+
+func (p *exporter) assocMethods(named *types.Named) {
+ // Sort methods (for determinism).
+ var methods []*types.Func
+ for i := 0; i < named.NumMethods(); i++ {
+ methods = append(methods, named.Method(i))
+ }
+ sort.Sort(methodsByName(methods))
+
+ p.int(len(methods))
+
+ if trace && methods != nil {
+ p.tracef("associated methods {>\n")
+ }
+
+ for i, m := range methods {
+ if trace && i > 0 {
+ p.tracef("\n")
+ }
+
+ p.pos(m)
+ name := m.Name()
+ p.string(name)
+ if !exported(name) {
+ p.pkg(m.Pkg(), false)
+ }
+
+ sig := m.Type().(*types.Signature)
+ p.paramList(types.NewTuple(sig.Recv()), false)
+ p.paramList(sig.Params(), sig.Variadic())
+ p.paramList(sig.Results(), false)
+ p.int(0) // dummy value for go:nointerface pragma - ignored by importer
+ }
+
+ if trace && methods != nil {
+ p.tracef("<\n} ")
+ }
+}
+
+type methodsByName []*types.Func
+
+func (x methodsByName) Len() int { return len(x) }
+func (x methodsByName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+func (x methodsByName) Less(i, j int) bool { return x[i].Name() < x[j].Name() }
+
+func (p *exporter) fieldList(t *types.Struct) {
+ if trace && t.NumFields() > 0 {
+ p.tracef("fields {>\n")
+ defer p.tracef("<\n} ")
+ }
+
+ p.int(t.NumFields())
+ for i := 0; i < t.NumFields(); i++ {
+ if trace && i > 0 {
+ p.tracef("\n")
+ }
+ p.field(t.Field(i))
+ p.string(t.Tag(i))
+ }
+}
+
+func (p *exporter) field(f *types.Var) {
+ if !f.IsField() {
+ panic(internalError("field expected"))
+ }
+
+ p.pos(f)
+ p.fieldName(f)
+ p.typ(f.Type())
+}
+
+func (p *exporter) iface(t *types.Interface) {
+ // TODO(gri): enable importer to load embedded interfaces,
+ // then emit Embeddeds and ExplicitMethods separately here.
+ p.int(0)
+
+ n := t.NumMethods()
+ if trace && n > 0 {
+ p.tracef("methods {>\n")
+ defer p.tracef("<\n} ")
+ }
+ p.int(n)
+ for i := 0; i < n; i++ {
+ if trace && i > 0 {
+ p.tracef("\n")
+ }
+ p.method(t.Method(i))
+ }
+}
+
+func (p *exporter) method(m *types.Func) {
+ sig := m.Type().(*types.Signature)
+ if sig.Recv() == nil {
+ panic(internalError("method expected"))
+ }
+
+ p.pos(m)
+ p.string(m.Name())
+ if m.Name() != "_" && !ast.IsExported(m.Name()) {
+ p.pkg(m.Pkg(), false)
+ }
+
+ // interface method; no need to encode receiver.
+ p.paramList(sig.Params(), sig.Variadic())
+ p.paramList(sig.Results(), false)
+}
+
+func (p *exporter) fieldName(f *types.Var) {
+ name := f.Name()
+
+ if f.Anonymous() {
+ // anonymous field - we distinguish between 3 cases:
+ // 1) field name matches base type name and is exported
+ // 2) field name matches base type name and is not exported
+ // 3) field name doesn't match base type name (alias name)
+ bname := basetypeName(f.Type())
+ if name == bname {
+ if ast.IsExported(name) {
+ name = "" // 1) we don't need to know the field name or package
+ } else {
+ name = "?" // 2) use unexported name "?" to force package export
+ }
+ } else {
+ // 3) indicate alias and export name as is
+ // (this requires an extra "@" but this is a rare case)
+ p.string("@")
+ }
+ }
+
+ p.string(name)
+ if name != "" && !ast.IsExported(name) {
+ p.pkg(f.Pkg(), false)
+ }
+}
+
+func basetypeName(typ types.Type) string {
+ switch typ := deref(typ).(type) {
+ case *types.Basic:
+ return typ.Name()
+ case *types.Named:
+ return typ.Obj().Name()
+ default:
+ return "" // unnamed type
+ }
+}
+
+func (p *exporter) paramList(params *types.Tuple, variadic bool) {
+ // use negative length to indicate unnamed parameters
+ // (look at the first parameter only since either all
+ // names are present or all are absent)
+ n := params.Len()
+ if n > 0 && params.At(0).Name() == "" {
+ n = -n
+ }
+ p.int(n)
+ for i := 0; i < params.Len(); i++ {
+ q := params.At(i)
+ t := q.Type()
+ if variadic && i == params.Len()-1 {
+ t = &dddSlice{t.(*types.Slice).Elem()}
+ }
+ p.typ(t)
+ if n > 0 {
+ name := q.Name()
+ p.string(name)
+ if name != "_" {
+ p.pkg(q.Pkg(), false)
+ }
+ }
+ p.string("") // no compiler-specific info
+ }
+}
+
+func (p *exporter) value(x constant.Value) {
+ if trace {
+ p.tracef("= ")
+ }
+
+ switch x.Kind() {
+ case constant.Bool:
+ tag := falseTag
+ if constant.BoolVal(x) {
+ tag = trueTag
+ }
+ p.tag(tag)
+
+ case constant.Int:
+ if v, exact := constant.Int64Val(x); exact {
+ // common case: x fits into an int64 - use compact encoding
+ p.tag(int64Tag)
+ p.int64(v)
+ return
+ }
+ // uncommon case: large x - use float encoding
+ // (powers of 2 will be encoded efficiently with exponent)
+ p.tag(floatTag)
+ p.float(constant.ToFloat(x))
+
+ case constant.Float:
+ p.tag(floatTag)
+ p.float(x)
+
+ case constant.Complex:
+ p.tag(complexTag)
+ p.float(constant.Real(x))
+ p.float(constant.Imag(x))
+
+ case constant.String:
+ p.tag(stringTag)
+ p.string(constant.StringVal(x))
+
+ case constant.Unknown:
+ // package contains type errors
+ p.tag(unknownTag)
+
+ default:
+ panic(internalErrorf("unexpected value %v (%T)", x, x))
+ }
+}
+
+func (p *exporter) float(x constant.Value) {
+ if x.Kind() != constant.Float {
+ panic(internalErrorf("unexpected constant %v, want float", x))
+ }
+ // extract sign (there is no -0)
+ sign := constant.Sign(x)
+ if sign == 0 {
+ // x == 0
+ p.int(0)
+ return
+ }
+ // x != 0
+
+ var f big.Float
+ if v, exact := constant.Float64Val(x); exact {
+ // float64
+ f.SetFloat64(v)
+ } else if num, denom := constant.Num(x), constant.Denom(x); num.Kind() == constant.Int {
+ // TODO(gri): add big.Rat accessor to constant.Value.
+ r := valueToRat(num)
+ f.SetRat(r.Quo(r, valueToRat(denom)))
+ } else {
+ // Value too large to represent as a fraction => inaccessible.
+ // TODO(gri): add big.Float accessor to constant.Value.
+ f.SetFloat64(math.MaxFloat64) // FIXME
+ }
+
+ // extract exponent such that 0.5 <= m < 1.0
+ var m big.Float
+ exp := f.MantExp(&m)
+
+ // extract mantissa as *big.Int
+ // - set exponent large enough so mant satisfies mant.IsInt()
+ // - get *big.Int from mant
+ m.SetMantExp(&m, int(m.MinPrec()))
+ mant, acc := m.Int(nil)
+ if acc != big.Exact {
+ panic(internalError("internal error"))
+ }
+
+ p.int(sign)
+ p.int(exp)
+ p.string(string(mant.Bytes()))
+}
+
+func valueToRat(x constant.Value) *big.Rat {
+ // Convert little-endian to big-endian.
+ // I can't believe this is necessary.
+ bytes := constant.Bytes(x)
+ for i := 0; i < len(bytes)/2; i++ {
+ bytes[i], bytes[len(bytes)-1-i] = bytes[len(bytes)-1-i], bytes[i]
+ }
+ return new(big.Rat).SetInt(new(big.Int).SetBytes(bytes))
+}
+
+func (p *exporter) bool(b bool) bool {
+ if trace {
+ p.tracef("[")
+ defer p.tracef("= %v] ", b)
+ }
+
+ x := 0
+ if b {
+ x = 1
+ }
+ p.int(x)
+ return b
+}
+
+// ----------------------------------------------------------------------------
+// Low-level encoders
+
+func (p *exporter) index(marker byte, index int) {
+ if index < 0 {
+ panic(internalError("invalid index < 0"))
+ }
+ if debugFormat {
+ p.marker('t')
+ }
+ if trace {
+ p.tracef("%c%d ", marker, index)
+ }
+ p.rawInt64(int64(index))
+}
+
+func (p *exporter) tag(tag int) {
+ if tag >= 0 {
+ panic(internalError("invalid tag >= 0"))
+ }
+ if debugFormat {
+ p.marker('t')
+ }
+ if trace {
+ p.tracef("%s ", tagString[-tag])
+ }
+ p.rawInt64(int64(tag))
+}
+
+func (p *exporter) int(x int) {
+ p.int64(int64(x))
+}
+
+func (p *exporter) int64(x int64) {
+ if debugFormat {
+ p.marker('i')
+ }
+ if trace {
+ p.tracef("%d ", x)
+ }
+ p.rawInt64(x)
+}
+
+func (p *exporter) string(s string) {
+ if debugFormat {
+ p.marker('s')
+ }
+ if trace {
+ p.tracef("%q ", s)
+ }
+ // if we saw the string before, write its index (>= 0)
+ // (the empty string is mapped to 0)
+ if i, ok := p.strIndex[s]; ok {
+ p.rawInt64(int64(i))
+ return
+ }
+ // otherwise, remember string and write its negative length and bytes
+ p.strIndex[s] = len(p.strIndex)
+ p.rawInt64(-int64(len(s)))
+ for i := 0; i < len(s); i++ {
+ p.rawByte(s[i])
+ }
+}
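+
+// A worked example of the interning above: the first occurrence of "foo"
+// is written as the negative length -3 followed by the bytes 'f' 'o' 'o';
+// every later occurrence is written as the single non-negative index that
+// was assigned on first use.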
+
+// marker emits a marker byte and position information which makes
+// it easy for a reader to detect if it is "out of sync". Used for
+// debugFormat format only.
+func (p *exporter) marker(m byte) {
+ p.rawByte(m)
+ // Enable this for help tracking down the location
+ // of an incorrect marker when running in debugFormat.
+ if false && trace {
+ p.tracef("#%d ", p.written)
+ }
+ p.rawInt64(int64(p.written))
+}
+
+// rawInt64 should only be used by low-level encoders.
+func (p *exporter) rawInt64(x int64) {
+ var tmp [binary.MaxVarintLen64]byte
+ n := binary.PutVarint(tmp[:], x)
+ for i := 0; i < n; i++ {
+ p.rawByte(tmp[i])
+ }
+}
+
+// rawStringln should only be used to emit the initial version string.
+func (p *exporter) rawStringln(s string) {
+ for i := 0; i < len(s); i++ {
+ p.rawByte(s[i])
+ }
+ p.rawByte('\n')
+}
+
+// rawByte is the bottleneck interface to write to p.out.
+// rawByte escapes b as follows (any encoding that hides
+// '$' would do):
+//
+// '$' => '|' 'S'
+// '|' => '|' '|'
+//
+// Necessary so other tools can find the end of the
+// export data by searching for "$$".
+// rawByte should only be used by low-level encoders.
+func (p *exporter) rawByte(b byte) {
+ switch b {
+ case '$':
+ // write '$' as '|' 'S'
+ b = 'S'
+ fallthrough
+ case '|':
+ // write '|' as '|' '|'
+ p.out.WriteByte('|')
+ p.written++
+ }
+ p.out.WriteByte(b)
+ p.written++
+}
+
+// tracef is like fmt.Printf but it rewrites the format string
+// to take care of indentation.
+func (p *exporter) tracef(format string, args ...interface{}) {
+ if strings.ContainsAny(format, "<>\n") {
+ var buf bytes.Buffer
+ for i := 0; i < len(format); i++ {
+ // no need to deal with runes
+ ch := format[i]
+ switch ch {
+ case '>':
+ p.indent++
+ continue
+ case '<':
+ p.indent--
+ continue
+ }
+ buf.WriteByte(ch)
+ if ch == '\n' {
+ for j := p.indent; j > 0; j-- {
+ buf.WriteString(". ")
+ }
+ }
+ }
+ format = buf.String()
+ }
+ fmt.Printf(format, args...)
+}
+
+// Debugging support.
+// (tagString is only used when tracing is enabled)
+var tagString = [...]string{
+ // Packages
+ -packageTag: "package",
+
+ // Types
+ -namedTag: "named type",
+ -arrayTag: "array",
+ -sliceTag: "slice",
+ -dddTag: "ddd",
+ -structTag: "struct",
+ -pointerTag: "pointer",
+ -signatureTag: "signature",
+ -interfaceTag: "interface",
+ -mapTag: "map",
+ -chanTag: "chan",
+
+ // Values
+ -falseTag: "false",
+ -trueTag: "true",
+ -int64Tag: "int64",
+ -floatTag: "float",
+ -fractionTag: "fraction",
+ -complexTag: "complex",
+ -stringTag: "string",
+ -unknownTag: "unknown",
+
+ // Type aliases
+ -aliasTag: "alias",
+}
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/bimport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/bimport.go
new file mode 100644
index 0000000000000..e9f73d14a182e
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/bimport.go
@@ -0,0 +1,1039 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file is a copy of $GOROOT/src/go/internal/gcimporter/bimport.go.
+
+package gcimporter
+
+import (
+ "encoding/binary"
+ "fmt"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "sort"
+ "strconv"
+ "strings"
+ "sync"
+ "unicode"
+ "unicode/utf8"
+)
+
+type importer struct {
+ imports map[string]*types.Package
+ data []byte
+ importpath string
+ buf []byte // for reading strings
+ version int // export format version
+
+ // object lists
+ strList []string // in order of appearance
+ pathList []string // in order of appearance
+ pkgList []*types.Package // in order of appearance
+ typList []types.Type // in order of appearance
+ interfaceList []*types.Interface // for delayed completion only
+ trackAllTypes bool
+
+ // position encoding
+ posInfoFormat bool
+ prevFile string
+ prevLine int
+ fake fakeFileSet
+
+ // debugging support
+ debugFormat bool
+ read int // bytes read
+}
+
+// BImportData imports a package from the serialized package data
+// and returns the number of bytes consumed and a reference to the package.
+// If the export data version is not recognized or the format is otherwise
+// compromised, an error is returned.
+func BImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (_ int, pkg *types.Package, err error) {
+ // catch panics and return them as errors
+ const currentVersion = 6
+ version := -1 // unknown version
+ defer func() {
+ if e := recover(); e != nil {
+ // Return a (possibly nil or incomplete) package unchanged (see #16088).
+ if version > currentVersion {
+ err = fmt.Errorf("cannot import %q (%v), export data is newer version - update tool", path, e)
+ } else {
+ err = fmt.Errorf("cannot import %q (%v), possibly version skew - reinstall package", path, e)
+ }
+ }
+ }()
+
+ p := importer{
+ imports: imports,
+ data: data,
+ importpath: path,
+ version: version,
+ strList: []string{""}, // empty string is mapped to 0
+ pathList: []string{""}, // empty string is mapped to 0
+ fake: fakeFileSet{
+ fset: fset,
+ files: make(map[string]*token.File),
+ },
+ }
+
+ // read version info
+ var versionstr string
+ if b := p.rawByte(); b == 'c' || b == 'd' {
+ // Go1.7 encoding; first byte encodes low-level
+ // encoding format (compact vs debug).
+ // For backward-compatibility only (avoid problems with
+ // old installed packages). Newly compiled packages use
+ // the extensible format string.
+ // TODO(gri) Remove this support eventually; after Go1.8.
+ if b == 'd' {
+ p.debugFormat = true
+ }
+ p.trackAllTypes = p.rawByte() == 'a'
+ p.posInfoFormat = p.int() != 0
+ versionstr = p.string()
+ if versionstr == "v1" {
+ version = 0
+ }
+ } else {
+ // Go1.8 extensible encoding
+ // read version string and extract version number (ignore anything after the version number)
+ versionstr = p.rawStringln(b)
+ if s := strings.SplitN(versionstr, " ", 3); len(s) >= 2 && s[0] == "version" {
+ if v, err := strconv.Atoi(s[1]); err == nil && v > 0 {
+ version = v
+ }
+ }
+ }
+ p.version = version
+
+ // read version specific flags - extend as necessary
+ switch p.version {
+ // case currentVersion:
+ // ...
+ // fallthrough
+ case currentVersion, 5, 4, 3, 2, 1:
+ p.debugFormat = p.rawStringln(p.rawByte()) == "debug"
+ p.trackAllTypes = p.int() != 0
+ p.posInfoFormat = p.int() != 0
+ case 0:
+ // Go1.7 encoding format - nothing to do here
+ default:
+ errorf("unknown bexport format version %d (%q)", p.version, versionstr)
+ }
+
+ // --- generic export data ---
+
+ // populate typList with predeclared "known" types
+ p.typList = append(p.typList, predeclared()...)
+
+ // read package data
+ pkg = p.pkg()
+
+ // read objects of phase 1 only (see cmd/compile/internal/gc/bexport.go)
+ objcount := 0
+ for {
+ tag := p.tagOrIndex()
+ if tag == endTag {
+ break
+ }
+ p.obj(tag)
+ objcount++
+ }
+
+ // self-verification
+ if count := p.int(); count != objcount {
+ errorf("got %d objects; want %d", objcount, count)
+ }
+
+ // ignore compiler-specific import data
+
+ // complete interfaces
+ // TODO(gri) re-investigate if we still need to do this in a delayed fashion
+ for _, typ := range p.interfaceList {
+ typ.Complete()
+ }
+
+ // record all referenced packages as imports
+ list := append(([]*types.Package)(nil), p.pkgList[1:]...)
+ sort.Sort(byPath(list))
+ pkg.SetImports(list)
+
+ // package was imported completely and without errors
+ pkg.MarkComplete()
+
+ return p.read, pkg, nil
+}
+
+func errorf(format string, args ...interface{}) {
+ panic(fmt.Sprintf(format, args...))
+}
+
+func (p *importer) pkg() *types.Package {
+ // if the package was seen before, i is its index (>= 0)
+ i := p.tagOrIndex()
+ if i >= 0 {
+ return p.pkgList[i]
+ }
+
+ // otherwise, i is the package tag (< 0)
+ if i != packageTag {
+ errorf("unexpected package tag %d version %d", i, p.version)
+ }
+
+ // read package data
+ name := p.string()
+ var path string
+ if p.version >= 5 {
+ path = p.path()
+ } else {
+ path = p.string()
+ }
+ if p.version >= 6 {
+ p.int() // package height; unused by go/types
+ }
+
+ // we should never see an empty package name
+ if name == "" {
+ errorf("empty package name in import")
+ }
+
+ // an empty path denotes the package we are currently importing;
+ // it must be the first package we see
+ if (path == "") != (len(p.pkgList) == 0) {
+ errorf("package path %q for pkg index %d", path, len(p.pkgList))
+ }
+
+ // if the package was imported before, use that one; otherwise create a new one
+ if path == "" {
+ path = p.importpath
+ }
+ pkg := p.imports[path]
+ if pkg == nil {
+ pkg = types.NewPackage(path, name)
+ p.imports[path] = pkg
+ } else if pkg.Name() != name {
+ errorf("conflicting names %s and %s for package %q", pkg.Name(), name, path)
+ }
+ p.pkgList = append(p.pkgList, pkg)
+
+ return pkg
+}
+
+// objTag returns the tag value for each object kind.
+func objTag(obj types.Object) int {
+ switch obj.(type) {
+ case *types.Const:
+ return constTag
+ case *types.TypeName:
+ return typeTag
+ case *types.Var:
+ return varTag
+ case *types.Func:
+ return funcTag
+ default:
+ errorf("unexpected object: %v (%T)", obj, obj) // panics
+ panic("unreachable")
+ }
+}
+
+func sameObj(a, b types.Object) bool {
+ // Because unnamed types are not canonicalized, we cannot simply compare types for
+ // (pointer) identity.
+ // Ideally we'd check equality of constant values as well, but this is good enough.
+ return objTag(a) == objTag(b) && types.Identical(a.Type(), b.Type())
+}
+
+func (p *importer) declare(obj types.Object) {
+ pkg := obj.Pkg()
+ if alt := pkg.Scope().Insert(obj); alt != nil {
+ // This can only trigger if we import a (non-type) object a second time.
+ // Excluding type aliases, this cannot happen because 1) we only import a package
+ // once; and b) we ignore compiler-specific export data which may contain
+ // functions whose inlined function bodies refer to other functions that
+ // were already imported.
+ // However, type aliases require reexporting the original type, so we need
+ // to allow it (see also the comment in cmd/compile/internal/gc/bimport.go,
+ // method importer.obj, switch case importing functions).
+ // TODO(gri) review/update this comment once the gc compiler handles type aliases.
+ if !sameObj(obj, alt) {
+ errorf("inconsistent import:\n\t%v\npreviously imported as:\n\t%v\n", obj, alt)
+ }
+ }
+}
+
+func (p *importer) obj(tag int) {
+ switch tag {
+ case constTag:
+ pos := p.pos()
+ pkg, name := p.qualifiedName()
+ typ := p.typ(nil, nil)
+ val := p.value()
+ p.declare(types.NewConst(pos, pkg, name, typ, val))
+
+ case aliasTag:
+ // TODO(gri) verify type alias hookup is correct
+ pos := p.pos()
+ pkg, name := p.qualifiedName()
+ typ := p.typ(nil, nil)
+ p.declare(types.NewTypeName(pos, pkg, name, typ))
+
+ case typeTag:
+ p.typ(nil, nil)
+
+ case varTag:
+ pos := p.pos()
+ pkg, name := p.qualifiedName()
+ typ := p.typ(nil, nil)
+ p.declare(types.NewVar(pos, pkg, name, typ))
+
+ case funcTag:
+ pos := p.pos()
+ pkg, name := p.qualifiedName()
+ params, isddd := p.paramList()
+ result, _ := p.paramList()
+ sig := types.NewSignature(nil, params, result, isddd)
+ p.declare(types.NewFunc(pos, pkg, name, sig))
+
+ default:
+ errorf("unexpected object tag %d", tag)
+ }
+}
+
+const deltaNewFile = -64 // see cmd/compile/internal/gc/bexport.go
+
+func (p *importer) pos() token.Pos {
+ if !p.posInfoFormat {
+ return token.NoPos
+ }
+
+ file := p.prevFile
+ line := p.prevLine
+ delta := p.int()
+ line += delta
+ if p.version >= 5 {
+ if delta == deltaNewFile {
+ if n := p.int(); n >= 0 {
+ // file changed
+ file = p.path()
+ line = n
+ }
+ }
+ } else {
+ if delta == 0 {
+ if n := p.int(); n >= 0 {
+ // file changed
+ file = p.prevFile[:n] + p.string()
+ line = p.int()
+ }
+ }
+ }
+ p.prevFile = file
+ p.prevLine = line
+
+ return p.fake.pos(file, line, 0)
+}
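+
+// Worked example (illustrative, not part of the upstream source): in the
+// version >= 5 format above, if prevLine is 10 and the next int read is 3,
+// the position stays in the same file at line 13. If the int read equals
+// deltaNewFile (-64) and the following int n is >= 0, the decoder instead
+// switches to a new file path and takes n as the absolute line number.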
+
+// Synthesize a token.Pos
+type fakeFileSet struct {
+ fset *token.FileSet
+ files map[string]*token.File
+}
+
+func (s *fakeFileSet) pos(file string, line, column int) token.Pos {
+ // TODO(mdempsky): Make use of column.
+
+ // Since we don't know the set of needed file positions, we
+ // reserve maxlines positions per file.
+ const maxlines = 64 * 1024
+ f := s.files[file]
+ if f == nil {
+ f = s.fset.AddFile(file, -1, maxlines)
+ s.files[file] = f
+ // Allocate the fake linebreak indices on first use.
+ // TODO(adonovan): opt: save ~512KB using a more complex scheme?
+ fakeLinesOnce.Do(func() {
+ fakeLines = make([]int, maxlines)
+ for i := range fakeLines {
+ fakeLines[i] = i
+ }
+ })
+ f.SetLines(fakeLines)
+ }
+
+ if line > maxlines {
+ line = 1
+ }
+
+ // Treat the file as if it contained only newlines
+ // and column=1: use the line number as the offset.
+ return f.Pos(line - 1)
+}
+
+var (
+ fakeLines []int
+ fakeLinesOnce sync.Once
+)
+
+func (p *importer) qualifiedName() (pkg *types.Package, name string) {
+ name = p.string()
+ pkg = p.pkg()
+ return
+}
+
+func (p *importer) record(t types.Type) {
+ p.typList = append(p.typList, t)
+}
+
+// A dddSlice is a types.Type representing ...T parameters.
+// It only appears for parameter types and does not escape
+// the importer.
+type dddSlice struct {
+ elem types.Type
+}
+
+func (t *dddSlice) Underlying() types.Type { return t }
+func (t *dddSlice) String() string { return "..." + t.elem.String() }
+
+// parent is the package which declared the type; parent == nil means
+// the package currently imported. The parent package is needed for
+// exported struct fields and interface methods which don't contain
+// explicit package information in the export data.
+//
+// A non-nil tname is used as the "owner" of the result type; i.e.,
+// the result type is the underlying type of tname. tname is used
+// to give interface methods a named receiver type where possible.
+func (p *importer) typ(parent *types.Package, tname *types.Named) types.Type {
+ // if the type was seen before, i is its index (>= 0)
+ i := p.tagOrIndex()
+ if i >= 0 {
+ return p.typList[i]
+ }
+
+ // otherwise, i is the type tag (< 0)
+ switch i {
+ case namedTag:
+ // read type object
+ pos := p.pos()
+ parent, name := p.qualifiedName()
+ scope := parent.Scope()
+ obj := scope.Lookup(name)
+
+ // if the object doesn't exist yet, create and insert it
+ if obj == nil {
+ obj = types.NewTypeName(pos, parent, name, nil)
+ scope.Insert(obj)
+ }
+
+ if _, ok := obj.(*types.TypeName); !ok {
+ errorf("pkg = %s, name = %s => %s", parent, name, obj)
+ }
+
+ // associate new named type with obj if it doesn't exist yet
+ t0 := types.NewNamed(obj.(*types.TypeName), nil, nil)
+
+ // but record the existing type, if any
+ tname := obj.Type().(*types.Named) // tname is either t0 or the existing type
+ p.record(tname)
+
+ // read underlying type
+ t0.SetUnderlying(p.typ(parent, t0))
+
+ // interfaces don't have associated methods
+ if types.IsInterface(t0) {
+ return tname
+ }
+
+ // read associated methods
+ for i := p.int(); i > 0; i-- {
+ // TODO(gri) replace this with something closer to fieldName
+ pos := p.pos()
+ name := p.string()
+ if !exported(name) {
+ p.pkg()
+ }
+
+ recv, _ := p.paramList() // TODO(gri) do we need a full param list for the receiver?
+ params, isddd := p.paramList()
+ result, _ := p.paramList()
+ p.int() // go:nointerface pragma - discarded
+
+ sig := types.NewSignature(recv.At(0), params, result, isddd)
+ t0.AddMethod(types.NewFunc(pos, parent, name, sig))
+ }
+
+ return tname
+
+ case arrayTag:
+ t := new(types.Array)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ n := p.int64()
+ *t = *types.NewArray(p.typ(parent, nil), n)
+ return t
+
+ case sliceTag:
+ t := new(types.Slice)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ *t = *types.NewSlice(p.typ(parent, nil))
+ return t
+
+ case dddTag:
+ t := new(dddSlice)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ t.elem = p.typ(parent, nil)
+ return t
+
+ case structTag:
+ t := new(types.Struct)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ *t = *types.NewStruct(p.fieldList(parent))
+ return t
+
+ case pointerTag:
+ t := new(types.Pointer)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ *t = *types.NewPointer(p.typ(parent, nil))
+ return t
+
+ case signatureTag:
+ t := new(types.Signature)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ params, isddd := p.paramList()
+ result, _ := p.paramList()
+ *t = *types.NewSignature(nil, params, result, isddd)
+ return t
+
+ case interfaceTag:
+ // Create a dummy entry in the type list. This is safe because we
+ // cannot expect the interface type to appear in a cycle, as any
+ // such cycle must contain a named type which would have been
+ // first defined earlier.
+ // TODO(gri) Is this still true now that we have type aliases?
+ // See issue #23225.
+ n := len(p.typList)
+ if p.trackAllTypes {
+ p.record(nil)
+ }
+
+ var embeddeds []types.Type
+ for n := p.int(); n > 0; n-- {
+ p.pos()
+ embeddeds = append(embeddeds, p.typ(parent, nil))
+ }
+
+ t := newInterface(p.methodList(parent, tname), embeddeds)
+ p.interfaceList = append(p.interfaceList, t)
+ if p.trackAllTypes {
+ p.typList[n] = t
+ }
+ return t
+
+ case mapTag:
+ t := new(types.Map)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ key := p.typ(parent, nil)
+ val := p.typ(parent, nil)
+ *t = *types.NewMap(key, val)
+ return t
+
+ case chanTag:
+ t := new(types.Chan)
+ if p.trackAllTypes {
+ p.record(t)
+ }
+
+ dir := chanDir(p.int())
+ val := p.typ(parent, nil)
+ *t = *types.NewChan(dir, val)
+ return t
+
+ default:
+ errorf("unexpected type tag %d", i) // panics
+ panic("unreachable")
+ }
+}
+
+func chanDir(d int) types.ChanDir {
+ // tag values must match the constants in cmd/compile/internal/gc/go.go
+ switch d {
+ case 1 /* Crecv */ :
+ return types.RecvOnly
+ case 2 /* Csend */ :
+ return types.SendOnly
+ case 3 /* Cboth */ :
+ return types.SendRecv
+ default:
+ errorf("unexpected channel dir %d", d)
+ return 0
+ }
+}
+
+func (p *importer) fieldList(parent *types.Package) (fields []*types.Var, tags []string) {
+ if n := p.int(); n > 0 {
+ fields = make([]*types.Var, n)
+ tags = make([]string, n)
+ for i := range fields {
+ fields[i], tags[i] = p.field(parent)
+ }
+ }
+ return
+}
+
+func (p *importer) field(parent *types.Package) (*types.Var, string) {
+ pos := p.pos()
+ pkg, name, alias := p.fieldName(parent)
+ typ := p.typ(parent, nil)
+ tag := p.string()
+
+ anonymous := false
+ if name == "" {
+ // anonymous field - typ must be T or *T and T must be a type name
+ switch typ := deref(typ).(type) {
+ case *types.Basic: // basic types are named types
+			pkg = nil // objects defined in Universe scope have no package
+ name = typ.Name()
+ case *types.Named:
+ name = typ.Obj().Name()
+ default:
+ errorf("named base type expected")
+ }
+ anonymous = true
+ } else if alias {
+ // anonymous field: we have an explicit name because it's an alias
+ anonymous = true
+ }
+
+ return types.NewField(pos, pkg, name, typ, anonymous), tag
+}
+
+func (p *importer) methodList(parent *types.Package, baseType *types.Named) (methods []*types.Func) {
+ if n := p.int(); n > 0 {
+ methods = make([]*types.Func, n)
+ for i := range methods {
+ methods[i] = p.method(parent, baseType)
+ }
+ }
+ return
+}
+
+func (p *importer) method(parent *types.Package, baseType *types.Named) *types.Func {
+ pos := p.pos()
+ pkg, name, _ := p.fieldName(parent)
+ // If we don't have a baseType, use a nil receiver.
+ // A receiver using the actual interface type (which
+ // we don't know yet) will be filled in when we call
+ // types.Interface.Complete.
+ var recv *types.Var
+ if baseType != nil {
+ recv = types.NewVar(token.NoPos, parent, "", baseType)
+ }
+ params, isddd := p.paramList()
+ result, _ := p.paramList()
+ sig := types.NewSignature(recv, params, result, isddd)
+ return types.NewFunc(pos, pkg, name, sig)
+}
+
+func (p *importer) fieldName(parent *types.Package) (pkg *types.Package, name string, alias bool) {
+ name = p.string()
+ pkg = parent
+ if pkg == nil {
+ // use the imported package instead
+ pkg = p.pkgList[0]
+ }
+ if p.version == 0 && name == "_" {
+ // version 0 didn't export a package for _ fields
+ return
+ }
+ switch name {
+ case "":
+ // 1) field name matches base type name and is exported: nothing to do
+ case "?":
+ // 2) field name matches base type name and is not exported: need package
+ name = ""
+ pkg = p.pkg()
+ case "@":
+ // 3) field name doesn't match type name (alias)
+ name = p.string()
+ alias = true
+ fallthrough
+ default:
+ if !exported(name) {
+ pkg = p.pkg()
+ }
+ }
+ return
+}
+
+func (p *importer) paramList() (*types.Tuple, bool) {
+ n := p.int()
+ if n == 0 {
+ return nil, false
+ }
+ // negative length indicates unnamed parameters
+ named := true
+ if n < 0 {
+ n = -n
+ named = false
+ }
+ // n > 0
+ params := make([]*types.Var, n)
+ isddd := false
+ for i := range params {
+ params[i], isddd = p.param(named)
+ }
+ return types.NewTuple(params...), isddd
+}
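+
+// Note (illustrative): the sign of the leading count doubles as a flag.
+// For example, a count of 2 decodes two named parameters, while -2
+// decodes two unnamed parameters; 0 means there is no parameter list at
+// all (a nil tuple).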
+
+func (p *importer) param(named bool) (*types.Var, bool) {
+ t := p.typ(nil, nil)
+ td, isddd := t.(*dddSlice)
+ if isddd {
+ t = types.NewSlice(td.elem)
+ }
+
+ var pkg *types.Package
+ var name string
+ if named {
+ name = p.string()
+ if name == "" {
+ errorf("expected named parameter")
+ }
+ if name != "_" {
+ pkg = p.pkg()
+ }
+ if i := strings.Index(name, "·"); i > 0 {
+ name = name[:i] // cut off gc-specific parameter numbering
+ }
+ }
+
+ // read and discard compiler-specific info
+ p.string()
+
+ return types.NewVar(token.NoPos, pkg, name, t), isddd
+}
+
+func exported(name string) bool {
+ ch, _ := utf8.DecodeRuneInString(name)
+ return unicode.IsUpper(ch)
+}
+
+func (p *importer) value() constant.Value {
+ switch tag := p.tagOrIndex(); tag {
+ case falseTag:
+ return constant.MakeBool(false)
+ case trueTag:
+ return constant.MakeBool(true)
+ case int64Tag:
+ return constant.MakeInt64(p.int64())
+ case floatTag:
+ return p.float()
+ case complexTag:
+ re := p.float()
+ im := p.float()
+ return constant.BinaryOp(re, token.ADD, constant.MakeImag(im))
+ case stringTag:
+ return constant.MakeString(p.string())
+ case unknownTag:
+ return constant.MakeUnknown()
+ default:
+ errorf("unexpected value tag %d", tag) // panics
+ panic("unreachable")
+ }
+}
+
+func (p *importer) float() constant.Value {
+ sign := p.int()
+ if sign == 0 {
+ return constant.MakeInt64(0)
+ }
+
+ exp := p.int()
+ mant := []byte(p.string()) // big endian
+
+ // remove leading 0's if any
+ for len(mant) > 0 && mant[0] == 0 {
+ mant = mant[1:]
+ }
+
+ // convert to little endian
+ // TODO(gri) go/constant should have a more direct conversion function
+ // (e.g., once it supports a big.Float based implementation)
+ for i, j := 0, len(mant)-1; i < j; i, j = i+1, j-1 {
+ mant[i], mant[j] = mant[j], mant[i]
+ }
+
+ // adjust exponent (constant.MakeFromBytes creates an integer value,
+ // but mant represents the mantissa bits such that 0.5 <= mant < 1.0)
+ exp -= len(mant) << 3
+ if len(mant) > 0 {
+ for msd := mant[len(mant)-1]; msd&0x80 == 0; msd <<= 1 {
+ exp++
+ }
+ }
+
+ x := constant.MakeFromBytes(mant)
+ switch {
+ case exp < 0:
+ d := constant.Shift(constant.MakeInt64(1), token.SHL, uint(-exp))
+ x = constant.BinaryOp(x, token.QUO, d)
+ case exp > 0:
+ x = constant.Shift(x, token.SHL, uint(exp))
+ }
+
+ if sign < 0 {
+ x = constant.UnaryOp(token.SUB, x, 0)
+ }
+ return x
+}
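+
+// Worked example (illustrative): a stored value of sign=1, exp=0 and a
+// single mantissa byte 0x80 decodes as follows: reversing the byte order
+// is a no-op, the exponent becomes 0 - 8 = -8 (no leading-bit shifts are
+// needed since 0x80 already has the high bit set), so the result is
+// 128 / 2^8 = 0.5 -- consistent with the invariant 0.5 <= mant < 1.0.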
+
+// ----------------------------------------------------------------------------
+// Low-level decoders
+
+func (p *importer) tagOrIndex() int {
+ if p.debugFormat {
+ p.marker('t')
+ }
+
+ return int(p.rawInt64())
+}
+
+func (p *importer) int() int {
+ x := p.int64()
+ if int64(int(x)) != x {
+ errorf("exported integer too large")
+ }
+ return int(x)
+}
+
+func (p *importer) int64() int64 {
+ if p.debugFormat {
+ p.marker('i')
+ }
+
+ return p.rawInt64()
+}
+
+func (p *importer) path() string {
+ if p.debugFormat {
+ p.marker('p')
+ }
+ // if the path was seen before, i is its index (>= 0)
+ // (the empty string is at index 0)
+ i := p.rawInt64()
+ if i >= 0 {
+ return p.pathList[i]
+ }
+ // otherwise, i is the negative path length (< 0)
+ a := make([]string, -i)
+ for n := range a {
+ a[n] = p.string()
+ }
+ s := strings.Join(a, "/")
+ p.pathList = append(p.pathList, s)
+ return s
+}
+
+func (p *importer) string() string {
+ if p.debugFormat {
+ p.marker('s')
+ }
+ // if the string was seen before, i is its index (>= 0)
+ // (the empty string is at index 0)
+ i := p.rawInt64()
+ if i >= 0 {
+ return p.strList[i]
+ }
+ // otherwise, i is the negative string length (< 0)
+ if n := int(-i); n <= cap(p.buf) {
+ p.buf = p.buf[:n]
+ } else {
+ p.buf = make([]byte, n)
+ }
+ for i := range p.buf {
+ p.buf[i] = p.rawByte()
+ }
+ s := string(p.buf)
+ p.strList = append(p.strList, s)
+ return s
+}
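+
+// Note (illustrative): strings are interned. The first occurrence of,
+// say, "Reader" is encoded as its negative length (-6) followed by the
+// raw bytes and appended to strList; every later occurrence is encoded
+// as that entry's non-negative index into strList.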
+
+func (p *importer) marker(want byte) {
+ if got := p.rawByte(); got != want {
+ errorf("incorrect marker: got %c; want %c (pos = %d)", got, want, p.read)
+ }
+
+ pos := p.read
+ if n := int(p.rawInt64()); n != pos {
+ errorf("incorrect position: got %d; want %d", n, pos)
+ }
+}
+
+// rawInt64 should only be used by low-level decoders.
+func (p *importer) rawInt64() int64 {
+ i, err := binary.ReadVarint(p)
+ if err != nil {
+ errorf("read error: %v", err)
+ }
+ return i
+}
+
+// rawStringln should only be used to read the initial version string.
+func (p *importer) rawStringln(b byte) string {
+ p.buf = p.buf[:0]
+ for b != '\n' {
+ p.buf = append(p.buf, b)
+ b = p.rawByte()
+ }
+ return string(p.buf)
+}
+
+// needed for binary.ReadVarint in rawInt64
+func (p *importer) ReadByte() (byte, error) {
+ return p.rawByte(), nil
+}
+
+// rawByte is the bottleneck interface for reading p.data.
+// It unescapes '|' 'S' to '$' and '|' '|' to '|'.
+// rawByte should only be used by low-level decoders.
+func (p *importer) rawByte() byte {
+ b := p.data[0]
+ r := 1
+ if b == '|' {
+ b = p.data[1]
+ r = 2
+ switch b {
+ case 'S':
+ b = '$'
+ case '|':
+ // nothing to do
+ default:
+ errorf("unexpected escape sequence in export data")
+ }
+ }
+ p.data = p.data[r:]
+ p.read += r
+	return b
+}
+
+// ----------------------------------------------------------------------------
+// Export format
+
+// Tags. Must be < 0.
+const (
+ // Objects
+ packageTag = -(iota + 1)
+ constTag
+ typeTag
+ varTag
+ funcTag
+ endTag
+
+ // Types
+ namedTag
+ arrayTag
+ sliceTag
+ dddTag
+ structTag
+ pointerTag
+ signatureTag
+ interfaceTag
+ mapTag
+ chanTag
+
+ // Values
+ falseTag
+ trueTag
+ int64Tag
+ floatTag
+ fractionTag // not used by gc
+ complexTag
+ stringTag
+ nilTag // only used by gc (appears in exported inlined function bodies)
+ unknownTag // not used by gc (only appears in packages with errors)
+
+ // Type aliases
+ aliasTag
+)
+
+var predeclOnce sync.Once
+var predecl []types.Type // initialized lazily
+
+func predeclared() []types.Type {
+ predeclOnce.Do(func() {
+ // initialize lazily to be sure that all
+ // elements have been initialized before
+ predecl = []types.Type{ // basic types
+ types.Typ[types.Bool],
+ types.Typ[types.Int],
+ types.Typ[types.Int8],
+ types.Typ[types.Int16],
+ types.Typ[types.Int32],
+ types.Typ[types.Int64],
+ types.Typ[types.Uint],
+ types.Typ[types.Uint8],
+ types.Typ[types.Uint16],
+ types.Typ[types.Uint32],
+ types.Typ[types.Uint64],
+ types.Typ[types.Uintptr],
+ types.Typ[types.Float32],
+ types.Typ[types.Float64],
+ types.Typ[types.Complex64],
+ types.Typ[types.Complex128],
+ types.Typ[types.String],
+
+ // basic type aliases
+ types.Universe.Lookup("byte").Type(),
+ types.Universe.Lookup("rune").Type(),
+
+ // error
+ types.Universe.Lookup("error").Type(),
+
+ // untyped types
+ types.Typ[types.UntypedBool],
+ types.Typ[types.UntypedInt],
+ types.Typ[types.UntypedRune],
+ types.Typ[types.UntypedFloat],
+ types.Typ[types.UntypedComplex],
+ types.Typ[types.UntypedString],
+ types.Typ[types.UntypedNil],
+
+ // package unsafe
+ types.Typ[types.UnsafePointer],
+
+ // invalid type
+ types.Typ[types.Invalid], // only appears in packages with errors
+
+ // used internally by gc; never used by this package or in .a files
+ anyType{},
+ }
+ })
+ return predecl
+}
+
+type anyType struct{}
+
+func (t anyType) Underlying() types.Type { return t }
+func (t anyType) String() string { return "any" }
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/exportdata.go b/vendor/golang.org/x/tools/go/internal/gcimporter/exportdata.go
new file mode 100644
index 0000000000000..f33dc5613e712
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/exportdata.go
@@ -0,0 +1,93 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file is a copy of $GOROOT/src/go/internal/gcimporter/exportdata.go.
+
+// This file implements FindExportData.
+
+package gcimporter
+
+import (
+ "bufio"
+ "fmt"
+ "io"
+ "strconv"
+ "strings"
+)
+
+func readGopackHeader(r *bufio.Reader) (name string, size int, err error) {
+ // See $GOROOT/include/ar.h.
+ hdr := make([]byte, 16+12+6+6+8+10+2)
+ _, err = io.ReadFull(r, hdr)
+ if err != nil {
+ return
+ }
+ // leave for debugging
+ if false {
+ fmt.Printf("header: %s", hdr)
+ }
+ s := strings.TrimSpace(string(hdr[16+12+6+6+8:][:10]))
+ size, err = strconv.Atoi(s)
+ if err != nil || hdr[len(hdr)-2] != '`' || hdr[len(hdr)-1] != '\n' {
+ err = fmt.Errorf("invalid archive header")
+ return
+ }
+ name = strings.TrimSpace(string(hdr[:16]))
+ return
+}
+
+// FindExportData positions the reader r at the beginning of the
+// export data section of an underlying GC-created object/archive
+// file by reading from it. The reader must be positioned at the
+// start of the file before calling this function. The hdr result
+// is the string before the export data, either "$$" or "$$B".
+//
+func FindExportData(r *bufio.Reader) (hdr string, err error) {
+ // Read first line to make sure this is an object file.
+ line, err := r.ReadSlice('\n')
+ if err != nil {
+ err = fmt.Errorf("can't find export data (%v)", err)
+ return
+ }
+
+ if string(line) == "!<arch>\n" {
+ // Archive file. Scan to __.PKGDEF.
+ var name string
+ if name, _, err = readGopackHeader(r); err != nil {
+ return
+ }
+
+ // First entry should be __.PKGDEF.
+ if name != "__.PKGDEF" {
+ err = fmt.Errorf("go archive is missing __.PKGDEF")
+ return
+ }
+
+ // Read first line of __.PKGDEF data, so that line
+ // is once again the first line of the input.
+ if line, err = r.ReadSlice('\n'); err != nil {
+ err = fmt.Errorf("can't find export data (%v)", err)
+ return
+ }
+ }
+
+ // Now at __.PKGDEF in archive or still at beginning of file.
+ // Either way, line should begin with "go object ".
+ if !strings.HasPrefix(string(line), "go object ") {
+ err = fmt.Errorf("not a Go object file")
+ return
+ }
+
+ // Skip over object header to export data.
+ // Begins after first line starting with $$.
+ for line[0] != '$' {
+ if line, err = r.ReadSlice('\n'); err != nil {
+ err = fmt.Errorf("can't find export data (%v)", err)
+ return
+ }
+ }
+ hdr = string(line)
+
+ return
+}
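+
+// Note (illustrative): a typical object file begins with a header line
+// such as "go object linux amd64 ..." followed by build details;
+// FindExportData skips these lines until it reaches the "$$" (textual)
+// or "$$B" (binary) marker that introduces the export data section.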
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/gcimporter.go b/vendor/golang.org/x/tools/go/internal/gcimporter/gcimporter.go
new file mode 100644
index 0000000000000..9cf186605f6eb
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/gcimporter.go
@@ -0,0 +1,1078 @@
+// Copyright 2011 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file is a modified copy of $GOROOT/src/go/internal/gcimporter/gcimporter.go,
+// but it also contains the original source-based importer code for Go1.6.
+// Once we stop supporting 1.6, we can remove that code.
+
+// Package gcimporter provides various functions for reading
+// gc-generated object files that can be used to implement the
+// Importer interface defined by the Go 1.5 standard library package.
+package gcimporter // import "golang.org/x/tools/go/internal/gcimporter"
+
+import (
+ "bufio"
+ "errors"
+ "fmt"
+ "go/build"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "sort"
+ "strconv"
+ "strings"
+ "text/scanner"
+)
+
+// debugging/development support
+const debug = false
+
+var pkgExts = [...]string{".a", ".o"}
+
+// FindPkg returns the filename and unique package id for an import
+// path based on package information provided by build.Import (using
+// the build.Default build.Context). A relative srcDir is interpreted
+// relative to the current working directory.
+// If no file was found, an empty filename is returned.
+//
+func FindPkg(path, srcDir string) (filename, id string) {
+ if path == "" {
+ return
+ }
+
+ var noext string
+ switch {
+ default:
+ // "x" -> "$GOPATH/pkg/$GOOS_$GOARCH/x.ext", "x"
+ // Don't require the source files to be present.
+ if abs, err := filepath.Abs(srcDir); err == nil { // see issue 14282
+ srcDir = abs
+ }
+ bp, _ := build.Import(path, srcDir, build.FindOnly|build.AllowBinary)
+ if bp.PkgObj == "" {
+ id = path // make sure we have an id to print in error message
+ return
+ }
+ noext = strings.TrimSuffix(bp.PkgObj, ".a")
+ id = bp.ImportPath
+
+ case build.IsLocalImport(path):
+ // "./x" -> "/this/directory/x.ext", "/this/directory/x"
+ noext = filepath.Join(srcDir, path)
+ id = noext
+
+ case filepath.IsAbs(path):
+ // for completeness only - go/build.Import
+ // does not support absolute imports
+ // "/x" -> "/x.ext", "/x"
+ noext = path
+ id = path
+ }
+
+ if false { // for debugging
+ if path != id {
+ fmt.Printf("%s -> %s\n", path, id)
+ }
+ }
+
+ // try extensions
+ for _, ext := range pkgExts {
+ filename = noext + ext
+ if f, err := os.Stat(filename); err == nil && !f.IsDir() {
+ return
+ }
+ }
+
+ filename = "" // not found
+ return
+}
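+
+// Usage sketch (illustrative): FindPkg("fmt", "/some/src/dir") would
+// typically resolve to something like
+// "$GOPATH/pkg/$GOOS_$GOARCH/fmt.a" with id "fmt", while a relative
+// import such as FindPkg("./util", "/proj") resolves against the source
+// directory, e.g. "/proj/util.a" with id "/proj/util".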
+
+// ImportData imports a package by reading the gc-generated export data,
+// adds the corresponding package object to the packages map indexed by id,
+// and returns the object.
+//
+// The packages map must contain all packages already imported. The data
+// reader position must be the beginning of the export data section. The
+// filename is only used in error messages.
+//
+// If packages[id] contains the completely imported package, that package
+// can be used directly, and there is no need to call this function (though
+// there is no harm in doing so, beyond the extra time used).
+//
+func ImportData(packages map[string]*types.Package, filename, id string, data io.Reader) (pkg *types.Package, err error) {
+		// Excluding type aliases, this cannot happen because a) we only import a package
+		// once; and b) we ignore compiler-specific export data which may contain
+ switch r := recover().(type) {
+ case nil:
+ // nothing to do
+ case importError:
+ err = r
+ default:
+ panic(r) // internal error
+ }
+ }()
+
+ var p parser
+ p.init(filename, id, data, packages)
+ pkg = p.parseExport()
+
+ return
+}
+
+// Import imports a gc-generated package given its import path and srcDir, adds
+// the corresponding package object to the packages map, and returns the object.
+// The packages map must contain all packages already imported.
+//
+func Import(packages map[string]*types.Package, path, srcDir string, lookup func(path string) (io.ReadCloser, error)) (pkg *types.Package, err error) {
+ var rc io.ReadCloser
+ var filename, id string
+ if lookup != nil {
+ // With custom lookup specified, assume that caller has
+ // converted path to a canonical import path for use in the map.
+ if path == "unsafe" {
+ return types.Unsafe, nil
+ }
+ id = path
+
+ // No need to re-import if the package was imported completely before.
+ if pkg = packages[id]; pkg != nil && pkg.Complete() {
+ return
+ }
+ f, err := lookup(path)
+ if err != nil {
+ return nil, err
+ }
+ rc = f
+ } else {
+ filename, id = FindPkg(path, srcDir)
+ if filename == "" {
+ if path == "unsafe" {
+ return types.Unsafe, nil
+ }
+ return nil, fmt.Errorf("can't find import: %q", id)
+ }
+
+ // no need to re-import if the package was imported completely before
+ if pkg = packages[id]; pkg != nil && pkg.Complete() {
+ return
+ }
+
+ // open file
+ f, err := os.Open(filename)
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ if err != nil {
+ // add file name to error
+ err = fmt.Errorf("%s: %v", filename, err)
+ }
+ }()
+ rc = f
+ }
+ defer rc.Close()
+
+ var hdr string
+ buf := bufio.NewReader(rc)
+ if hdr, err = FindExportData(buf); err != nil {
+ return
+ }
+
+ switch hdr {
+ case "$$\n":
+ // Work-around if we don't have a filename; happens only if lookup != nil.
+ // Either way, the filename is only needed for importer error messages, so
+ // this is fine.
+ if filename == "" {
+ filename = path
+ }
+ return ImportData(packages, filename, id, buf)
+
+ case "$$B\n":
+ var data []byte
+ data, err = ioutil.ReadAll(buf)
+ if err != nil {
+ break
+ }
+
+ // TODO(gri): allow clients of go/importer to provide a FileSet.
+ // Or, define a new standard go/types/gcexportdata package.
+ fset := token.NewFileSet()
+
+ // The indexed export format starts with an 'i'; the older
+ // binary export format starts with a 'c', 'd', or 'v'
+ // (from "version"). Select appropriate importer.
+ if len(data) > 0 && data[0] == 'i' {
+ _, pkg, err = IImportData(fset, packages, data[1:], id)
+ } else {
+ _, pkg, err = BImportData(fset, packages, data, id)
+ }
+
+ default:
+ err = fmt.Errorf("unknown export data header: %q", hdr)
+ }
+
+ return
+}
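+
+// Usage sketch (illustrative):
+//
+//	packages := make(map[string]*types.Package)
+//	pkg, err := Import(packages, "fmt", ".", nil)
+//
+// With a nil lookup function, the package is located on disk via FindPkg,
+// and the map caches completely-imported packages across calls.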
+
+// ----------------------------------------------------------------------------
+// Parser
+
+// TODO(gri) Imported objects don't have position information.
+// Ideally use the debug table line info; alternatively
+// create some fake position (or the position of the
+// import). That way error messages referring to imported
+// objects can print meaningful information.
+
+// parser parses the exports inside a gc compiler-produced
+// object/archive file and populates its scope with the results.
+type parser struct {
+ scanner scanner.Scanner
+ tok rune // current token
+ lit string // literal string; only valid for Ident, Int, String tokens
+ id string // package id of imported package
+ sharedPkgs map[string]*types.Package // package id -> package object (across importer)
+ localPkgs map[string]*types.Package // package id -> package object (just this package)
+}
+
+func (p *parser) init(filename, id string, src io.Reader, packages map[string]*types.Package) {
+ p.scanner.Init(src)
+ p.scanner.Error = func(_ *scanner.Scanner, msg string) { p.error(msg) }
+ p.scanner.Mode = scanner.ScanIdents | scanner.ScanInts | scanner.ScanChars | scanner.ScanStrings | scanner.ScanComments | scanner.SkipComments
+ p.scanner.Whitespace = 1<<'\t' | 1<<' '
+ p.scanner.Filename = filename // for good error messages
+ p.next()
+ p.id = id
+ p.sharedPkgs = packages
+ if debug {
+ // check consistency of packages map
+ for _, pkg := range packages {
+ if pkg.Name() == "" {
+ fmt.Printf("no package name for %s\n", pkg.Path())
+ }
+ }
+ }
+}
+
+func (p *parser) next() {
+ p.tok = p.scanner.Scan()
+ switch p.tok {
+ case scanner.Ident, scanner.Int, scanner.Char, scanner.String, '·':
+ p.lit = p.scanner.TokenText()
+ default:
+ p.lit = ""
+ }
+ if debug {
+ fmt.Printf("%s: %q -> %q\n", scanner.TokenString(p.tok), p.scanner.TokenText(), p.lit)
+ }
+}
+
+func declTypeName(pkg *types.Package, name string) *types.TypeName {
+ scope := pkg.Scope()
+ if obj := scope.Lookup(name); obj != nil {
+ return obj.(*types.TypeName)
+ }
+ obj := types.NewTypeName(token.NoPos, pkg, name, nil)
+ // a named type may be referred to before the underlying type
+ // is known - set it up
+ types.NewNamed(obj, nil, nil)
+ scope.Insert(obj)
+ return obj
+}
+
+// ----------------------------------------------------------------------------
+// Error handling
+
+// Internal errors are boxed as importErrors.
+type importError struct {
+ pos scanner.Position
+ err error
+}
+
+func (e importError) Error() string {
+ return fmt.Sprintf("import error %s (byte offset = %d): %s", e.pos, e.pos.Offset, e.err)
+}
+
+func (p *parser) error(err interface{}) {
+ if s, ok := err.(string); ok {
+ err = errors.New(s)
+ }
+ // panic with a runtime.Error if err is not an error
+ panic(importError{p.scanner.Pos(), err.(error)})
+}
+
+func (p *parser) errorf(format string, args ...interface{}) {
+ p.error(fmt.Sprintf(format, args...))
+}
+
+func (p *parser) expect(tok rune) string {
+ lit := p.lit
+ if p.tok != tok {
+ p.errorf("expected %s, got %s (%s)", scanner.TokenString(tok), scanner.TokenString(p.tok), lit)
+ }
+ p.next()
+ return lit
+}
+
+func (p *parser) expectSpecial(tok string) {
+ sep := 'x' // not white space
+ i := 0
+ for i < len(tok) && p.tok == rune(tok[i]) && sep > ' ' {
+ sep = p.scanner.Peek() // if sep <= ' ', there is white space before the next token
+ p.next()
+ i++
+ }
+ if i < len(tok) {
+ p.errorf("expected %q, got %q", tok, tok[0:i])
+ }
+}
+
+func (p *parser) expectKeyword(keyword string) {
+ lit := p.expect(scanner.Ident)
+ if lit != keyword {
+ p.errorf("expected keyword %s, got %q", keyword, lit)
+ }
+}
+
+// ----------------------------------------------------------------------------
+// Qualified and unqualified names
+
+// PackageId = string_lit .
+//
+func (p *parser) parsePackageId() string {
+ id, err := strconv.Unquote(p.expect(scanner.String))
+ if err != nil {
+ p.error(err)
+ }
+ // id == "" stands for the imported package id
+ // (only known at time of package installation)
+ if id == "" {
+ id = p.id
+ }
+ return id
+}
+
+// PackageName = ident .
+//
+func (p *parser) parsePackageName() string {
+ return p.expect(scanner.Ident)
+}
+
+// dotIdentifier = ( ident | '·' ) { ident | int | '·' } .
+func (p *parser) parseDotIdent() string {
+ ident := ""
+ if p.tok != scanner.Int {
+ sep := 'x' // not white space
+ for (p.tok == scanner.Ident || p.tok == scanner.Int || p.tok == '·') && sep > ' ' {
+ ident += p.lit
+ sep = p.scanner.Peek() // if sep <= ' ', there is white space before the next token
+ p.next()
+ }
+ }
+ if ident == "" {
+ p.expect(scanner.Ident) // use expect() for error handling
+ }
+ return ident
+}
+
+// QualifiedName = "@" PackageId "." ( "?" | dotIdentifier ) .
+//
+func (p *parser) parseQualifiedName() (id, name string) {
+ p.expect('@')
+ id = p.parsePackageId()
+ p.expect('.')
+ // Per rev f280b8a485fd (10/2/2013), qualified names may be used for anonymous fields.
+ if p.tok == '?' {
+ p.next()
+ } else {
+ name = p.parseDotIdent()
+ }
+ return
+}
+
+// getPkg returns the package for a given id. If the package is
+// not found, create the package and add it to the p.localPkgs
+// and p.sharedPkgs maps. name is the (expected) name of the
+// package. If name == "", the package name is expected to be
+// set later via an import clause in the export data.
+//
+// id identifies a package, usually by a canonical package path like
+// "encoding/json" but possibly by a non-canonical import path like
+// "./json".
+//
+func (p *parser) getPkg(id, name string) *types.Package {
+ // package unsafe is not in the packages maps - handle explicitly
+ if id == "unsafe" {
+ return types.Unsafe
+ }
+
+ pkg := p.localPkgs[id]
+ if pkg == nil {
+ // first import of id from this package
+ pkg = p.sharedPkgs[id]
+ if pkg == nil {
+ // first import of id by this importer;
+ // add (possibly unnamed) pkg to shared packages
+ pkg = types.NewPackage(id, name)
+ p.sharedPkgs[id] = pkg
+ }
+ // add (possibly unnamed) pkg to local packages
+ if p.localPkgs == nil {
+ p.localPkgs = make(map[string]*types.Package)
+ }
+ p.localPkgs[id] = pkg
+ } else if name != "" {
+ // package exists already and we have an expected package name;
+ // make sure names match or set package name if necessary
+ if pname := pkg.Name(); pname == "" {
+ pkg.SetName(name)
+ } else if pname != name {
+ p.errorf("%s package name mismatch: %s (given) vs %s (expected)", id, pname, name)
+ }
+ }
+ return pkg
+}
+
+// parseExportedName is like parseQualifiedName, but
+// the package id is resolved to an imported *types.Package.
+//
+func (p *parser) parseExportedName() (pkg *types.Package, name string) {
+ id, name := p.parseQualifiedName()
+ pkg = p.getPkg(id, "")
+ return
+}
+
+// ----------------------------------------------------------------------------
+// Types
+
+// BasicType = identifier .
+//
+func (p *parser) parseBasicType() types.Type {
+ id := p.expect(scanner.Ident)
+ obj := types.Universe.Lookup(id)
+ if obj, ok := obj.(*types.TypeName); ok {
+ return obj.Type()
+ }
+ p.errorf("not a basic type: %s", id)
+ return nil
+}
+
+// ArrayType = "[" int_lit "]" Type .
+//
+func (p *parser) parseArrayType(parent *types.Package) types.Type {
+ // "[" already consumed and lookahead known not to be "]"
+ lit := p.expect(scanner.Int)
+ p.expect(']')
+ elem := p.parseType(parent)
+ n, err := strconv.ParseInt(lit, 10, 64)
+ if err != nil {
+ p.error(err)
+ }
+ return types.NewArray(elem, n)
+}
+
+// MapType = "map" "[" Type "]" Type .
+//
+func (p *parser) parseMapType(parent *types.Package) types.Type {
+ p.expectKeyword("map")
+ p.expect('[')
+ key := p.parseType(parent)
+ p.expect(']')
+ elem := p.parseType(parent)
+ return types.NewMap(key, elem)
+}
+
+// Name = identifier | "?" | QualifiedName .
+//
+// For unqualified and anonymous names, the returned package is the parent
+// package unless parent == nil, in which case the returned package is the
+// package being imported. (The parent package is not nil if the name
+// is an unqualified struct field or interface method name belonging to a
+// type declared in another package.)
+//
+// For qualified names, the returned package is nil (and not created if
+// it doesn't exist yet) unless materializePkg is set (which creates an
+// unnamed package with valid package path). In the latter case, a
+// subsequent import clause is expected to provide a name for the package.
+//
+func (p *parser) parseName(parent *types.Package, materializePkg bool) (pkg *types.Package, name string) {
+ pkg = parent
+ if pkg == nil {
+ pkg = p.sharedPkgs[p.id]
+ }
+ switch p.tok {
+ case scanner.Ident:
+ name = p.lit
+ p.next()
+ case '?':
+ // anonymous
+ p.next()
+ case '@':
+ // exported name prefixed with package path
+ pkg = nil
+ var id string
+ id, name = p.parseQualifiedName()
+ if materializePkg {
+ pkg = p.getPkg(id, "")
+ }
+ default:
+ p.error("name expected")
+ }
+ return
+}
+
+func deref(typ types.Type) types.Type {
+ if p, _ := typ.(*types.Pointer); p != nil {
+ return p.Elem()
+ }
+ return typ
+}
+
+// Field = Name Type [ string_lit ] .
+//
+func (p *parser) parseField(parent *types.Package) (*types.Var, string) {
+ pkg, name := p.parseName(parent, true)
+
+ if name == "_" {
+ // Blank fields should be package-qualified because they
+ // are unexported identifiers, but gc does not qualify them.
+ // Assuming that the ident belongs to the current package
+ // causes types to change during re-exporting, leading
+ // to spurious "can't assign A to B" errors from go/types.
+ // As a workaround, pretend all blank fields belong
+ // to the same unique dummy package.
+ const blankpkg = "<_>"
+ pkg = p.getPkg(blankpkg, blankpkg)
+ }
+
+ typ := p.parseType(parent)
+ anonymous := false
+ if name == "" {
+ // anonymous field - typ must be T or *T and T must be a type name
+ switch typ := deref(typ).(type) {
+ case *types.Basic: // basic types are named types
+ pkg = nil // objects defined in Universe scope have no package
+ name = typ.Name()
+ case *types.Named:
+ name = typ.Obj().Name()
+ default:
+ p.errorf("anonymous field expected")
+ }
+ anonymous = true
+ }
+ tag := ""
+ if p.tok == scanner.String {
+ s := p.expect(scanner.String)
+ var err error
+ tag, err = strconv.Unquote(s)
+ if err != nil {
+ p.errorf("invalid struct tag %s: %s", s, err)
+ }
+ }
+ return types.NewField(token.NoPos, pkg, name, typ, anonymous), tag
+}
+
+// StructType = "struct" "{" [ FieldList ] "}" .
+// FieldList = Field { ";" Field } .
+//
+func (p *parser) parseStructType(parent *types.Package) types.Type {
+ var fields []*types.Var
+ var tags []string
+
+ p.expectKeyword("struct")
+ p.expect('{')
+ for i := 0; p.tok != '}' && p.tok != scanner.EOF; i++ {
+ if i > 0 {
+ p.expect(';')
+ }
+ fld, tag := p.parseField(parent)
+ if tag != "" && tags == nil {
+ tags = make([]string, i)
+ }
+ if tags != nil {
+ tags = append(tags, tag)
+ }
+ fields = append(fields, fld)
+ }
+ p.expect('}')
+
+ return types.NewStruct(fields, tags)
+}
+
+// Parameter = ( identifier | "?" ) [ "..." ] Type [ string_lit ] .
+//
+func (p *parser) parseParameter() (par *types.Var, isVariadic bool) {
+ _, name := p.parseName(nil, false)
+ // remove gc-specific parameter numbering
+ if i := strings.Index(name, "·"); i >= 0 {
+ name = name[:i]
+ }
+ if p.tok == '.' {
+ p.expectSpecial("...")
+ isVariadic = true
+ }
+ typ := p.parseType(nil)
+ if isVariadic {
+ typ = types.NewSlice(typ)
+ }
+ // ignore argument tag (e.g. "noescape")
+ if p.tok == scanner.String {
+ p.next()
+ }
+ // TODO(gri) should we provide a package?
+ par = types.NewVar(token.NoPos, nil, name, typ)
+ return
+}
+
+// Parameters = "(" [ ParameterList ] ")" .
+// ParameterList = { Parameter "," } Parameter .
+//
+func (p *parser) parseParameters() (list []*types.Var, isVariadic bool) {
+ p.expect('(')
+ for p.tok != ')' && p.tok != scanner.EOF {
+ if len(list) > 0 {
+ p.expect(',')
+ }
+ par, variadic := p.parseParameter()
+ list = append(list, par)
+ if variadic {
+ if isVariadic {
+ p.error("... not on final argument")
+ }
+ isVariadic = true
+ }
+ }
+ p.expect(')')
+
+ return
+}
+
+// Signature = Parameters [ Result ] .
+// Result = Type | Parameters .
+//
+func (p *parser) parseSignature(recv *types.Var) *types.Signature {
+ params, isVariadic := p.parseParameters()
+
+ // optional result type
+ var results []*types.Var
+ if p.tok == '(' {
+ var variadic bool
+ results, variadic = p.parseParameters()
+ if variadic {
+ p.error("... not permitted on result type")
+ }
+ }
+
+ return types.NewSignature(recv, types.NewTuple(params...), types.NewTuple(results...), isVariadic)
+}
+
+// InterfaceType = "interface" "{" [ MethodList ] "}" .
+// MethodList = Method { ";" Method } .
+// Method = Name Signature .
+//
+// The methods of embedded interfaces are always "inlined"
+// by the compiler and thus embedded interfaces are never
+// visible in the export data.
+//
+func (p *parser) parseInterfaceType(parent *types.Package) types.Type {
+ var methods []*types.Func
+
+ p.expectKeyword("interface")
+ p.expect('{')
+ for i := 0; p.tok != '}' && p.tok != scanner.EOF; i++ {
+ if i > 0 {
+ p.expect(';')
+ }
+ pkg, name := p.parseName(parent, true)
+ sig := p.parseSignature(nil)
+ methods = append(methods, types.NewFunc(token.NoPos, pkg, name, sig))
+ }
+ p.expect('}')
+
+ // Complete requires the type's embedded interfaces to be fully defined,
+ // but we do not define any
+ return types.NewInterface(methods, nil).Complete()
+}
+
+// ChanType = ( "chan" [ "<-" ] | "<-" "chan" ) Type .
+//
+func (p *parser) parseChanType(parent *types.Package) types.Type {
+ dir := types.SendRecv
+ if p.tok == scanner.Ident {
+ p.expectKeyword("chan")
+ if p.tok == '<' {
+ p.expectSpecial("<-")
+ dir = types.SendOnly
+ }
+ } else {
+ p.expectSpecial("<-")
+ p.expectKeyword("chan")
+ dir = types.RecvOnly
+ }
+ elem := p.parseType(parent)
+ return types.NewChan(dir, elem)
+}
+
+// Type =
+// BasicType | TypeName | ArrayType | SliceType | StructType |
+// PointerType | FuncType | InterfaceType | MapType | ChanType |
+// "(" Type ")" .
+//
+// BasicType = ident .
+// TypeName = ExportedName .
+// SliceType = "[" "]" Type .
+// PointerType = "*" Type .
+// FuncType = "func" Signature .
+//
+func (p *parser) parseType(parent *types.Package) types.Type {
+ switch p.tok {
+ case scanner.Ident:
+ switch p.lit {
+ default:
+ return p.parseBasicType()
+ case "struct":
+ return p.parseStructType(parent)
+ case "func":
+ // FuncType
+ p.next()
+ return p.parseSignature(nil)
+ case "interface":
+ return p.parseInterfaceType(parent)
+ case "map":
+ return p.parseMapType(parent)
+ case "chan":
+ return p.parseChanType(parent)
+ }
+ case '@':
+ // TypeName
+ pkg, name := p.parseExportedName()
+ return declTypeName(pkg, name).Type()
+ case '[':
+ p.next() // look ahead
+ if p.tok == ']' {
+ // SliceType
+ p.next()
+ return types.NewSlice(p.parseType(parent))
+ }
+ return p.parseArrayType(parent)
+ case '*':
+ // PointerType
+ p.next()
+ return types.NewPointer(p.parseType(parent))
+ case '<':
+ return p.parseChanType(parent)
+ case '(':
+ // "(" Type ")"
+ p.next()
+ typ := p.parseType(parent)
+ p.expect(')')
+ return typ
+ }
+ p.errorf("expected type, got %s (%q)", scanner.TokenString(p.tok), p.lit)
+ return nil
+}
+
+// ----------------------------------------------------------------------------
+// Declarations
+
+// ImportDecl = "import" PackageName PackageId .
+//
+func (p *parser) parseImportDecl() {
+ p.expectKeyword("import")
+ name := p.parsePackageName()
+ p.getPkg(p.parsePackageId(), name)
+}
+
+// int_lit = [ "+" | "-" ] { "0" ... "9" } .
+//
+func (p *parser) parseInt() string {
+ s := ""
+ switch p.tok {
+ case '-':
+ s = "-"
+ p.next()
+ case '+':
+ p.next()
+ }
+ return s + p.expect(scanner.Int)
+}
+
+// number = int_lit [ "p" int_lit ] .
+//
+func (p *parser) parseNumber() (typ *types.Basic, val constant.Value) {
+ // mantissa
+ mant := constant.MakeFromLiteral(p.parseInt(), token.INT, 0)
+ if mant == nil {
+ panic("invalid mantissa")
+ }
+
+ if p.lit == "p" {
+ // exponent (base 2)
+ p.next()
+ exp, err := strconv.ParseInt(p.parseInt(), 10, 0)
+ if err != nil {
+ p.error(err)
+ }
+ if exp < 0 {
+ denom := constant.MakeInt64(1)
+ denom = constant.Shift(denom, token.SHL, uint(-exp))
+ typ = types.Typ[types.UntypedFloat]
+ val = constant.BinaryOp(mant, token.QUO, denom)
+ return
+ }
+ if exp > 0 {
+ mant = constant.Shift(mant, token.SHL, uint(exp))
+ }
+ typ = types.Typ[types.UntypedFloat]
+ val = mant
+ return
+ }
+
+ typ = types.Typ[types.UntypedInt]
+ val = mant
+ return
+}
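+
+// Worked examples (illustrative): the literal "5p-3" parses as mantissa 5
+// with base-2 exponent -3, i.e. 5 / 2^3 = 0.625 (untyped float), "3p2"
+// as 3 << 2 = 12 (also untyped float), and "42" stays an untyped int.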
+
+// ConstDecl = "const" ExportedName [ Type ] "=" Literal .
+// Literal = bool_lit | int_lit | float_lit | complex_lit | rune_lit | string_lit .
+// bool_lit = "true" | "false" .
+// complex_lit = "(" float_lit "+" float_lit "i" ")" .
+// rune_lit = "(" int_lit "+" int_lit ")" .
+// string_lit = `"` { unicode_char } `"` .
+//
+func (p *parser) parseConstDecl() {
+ p.expectKeyword("const")
+ pkg, name := p.parseExportedName()
+
+ var typ0 types.Type
+ if p.tok != '=' {
+ // constant types are never structured - no need for parent type
+ typ0 = p.parseType(nil)
+ }
+
+ p.expect('=')
+ var typ types.Type
+ var val constant.Value
+ switch p.tok {
+ case scanner.Ident:
+ // bool_lit
+ if p.lit != "true" && p.lit != "false" {
+ p.error("expected true or false")
+ }
+ typ = types.Typ[types.UntypedBool]
+ val = constant.MakeBool(p.lit == "true")
+ p.next()
+
+ case '-', scanner.Int:
+ // int_lit
+ typ, val = p.parseNumber()
+
+ case '(':
+ // complex_lit or rune_lit
+ p.next()
+ if p.tok == scanner.Char {
+ p.next()
+ p.expect('+')
+ typ = types.Typ[types.UntypedRune]
+ _, val = p.parseNumber()
+ p.expect(')')
+ break
+ }
+ _, re := p.parseNumber()
+ p.expect('+')
+ _, im := p.parseNumber()
+ p.expectKeyword("i")
+ p.expect(')')
+ typ = types.Typ[types.UntypedComplex]
+ val = constant.BinaryOp(re, token.ADD, constant.MakeImag(im))
+
+ case scanner.Char:
+ // rune_lit
+ typ = types.Typ[types.UntypedRune]
+ val = constant.MakeFromLiteral(p.lit, token.CHAR, 0)
+ p.next()
+
+ case scanner.String:
+ // string_lit
+ typ = types.Typ[types.UntypedString]
+ val = constant.MakeFromLiteral(p.lit, token.STRING, 0)
+ p.next()
+
+ default:
+ p.errorf("expected literal got %s", scanner.TokenString(p.tok))
+ }
+
+ if typ0 == nil {
+ typ0 = typ
+ }
+
+ pkg.Scope().Insert(types.NewConst(token.NoPos, pkg, name, typ0, val))
+}
+
+// TypeDecl = "type" ExportedName Type .
+//
+func (p *parser) parseTypeDecl() {
+ p.expectKeyword("type")
+ pkg, name := p.parseExportedName()
+ obj := declTypeName(pkg, name)
+
+ // The type object may have been imported before and thus already
+ // have a type associated with it. We still need to parse the type
+ // structure, but throw it away if the object already has a type.
+ // This ensures that all imports refer to the same type object for
+ // a given type declaration.
+ typ := p.parseType(pkg)
+
+ if name := obj.Type().(*types.Named); name.Underlying() == nil {
+ name.SetUnderlying(typ)
+ }
+}
+
+// VarDecl = "var" ExportedName Type .
+//
+func (p *parser) parseVarDecl() {
+ p.expectKeyword("var")
+ pkg, name := p.parseExportedName()
+ typ := p.parseType(pkg)
+ pkg.Scope().Insert(types.NewVar(token.NoPos, pkg, name, typ))
+}
+
+// Func = Signature [ Body ] .
+// Body = "{" ... "}" .
+//
+func (p *parser) parseFunc(recv *types.Var) *types.Signature {
+ sig := p.parseSignature(recv)
+ if p.tok == '{' {
+ p.next()
+ for i := 1; i > 0; p.next() {
+ switch p.tok {
+ case '{':
+ i++
+ case '}':
+ i--
+ }
+ }
+ }
+ return sig
+}
+
+// MethodDecl = "func" Receiver Name Func .
+// Receiver = "(" ( identifier | "?" ) [ "*" ] ExportedName ")" .
+//
+func (p *parser) parseMethodDecl() {
+ // "func" already consumed
+ p.expect('(')
+ recv, _ := p.parseParameter() // receiver
+ p.expect(')')
+
+ // determine receiver base type object
+ base := deref(recv.Type()).(*types.Named)
+
+ // parse method name, signature, and possibly inlined body
+ _, name := p.parseName(nil, false)
+ sig := p.parseFunc(recv)
+
+ // methods always belong to the same package as the base type object
+ pkg := base.Obj().Pkg()
+
+ // add method to type unless type was imported before
+ // and method exists already
+ // TODO(gri) This leads to a quadratic algorithm - ok for now because method counts are small.
+ base.AddMethod(types.NewFunc(token.NoPos, pkg, name, sig))
+}
+
+// FuncDecl = "func" ExportedName Func .
+//
+func (p *parser) parseFuncDecl() {
+ // "func" already consumed
+ pkg, name := p.parseExportedName()
+ typ := p.parseFunc(nil)
+ pkg.Scope().Insert(types.NewFunc(token.NoPos, pkg, name, typ))
+}
+
+// Decl = [ ImportDecl | ConstDecl | TypeDecl | VarDecl | FuncDecl | MethodDecl ] "\n" .
+//
+func (p *parser) parseDecl() {
+ if p.tok == scanner.Ident {
+ switch p.lit {
+ case "import":
+ p.parseImportDecl()
+ case "const":
+ p.parseConstDecl()
+ case "type":
+ p.parseTypeDecl()
+ case "var":
+ p.parseVarDecl()
+ case "func":
+ p.next() // look ahead
+ if p.tok == '(' {
+ p.parseMethodDecl()
+ } else {
+ p.parseFuncDecl()
+ }
+ }
+ }
+ p.expect('\n')
+}
+
+// ----------------------------------------------------------------------------
+// Export
+
+// Export        = PackageClause { Decl } "$$" .
+// PackageClause = "package" PackageName [ "safe" ] "\n" .
+//
+func (p *parser) parseExport() *types.Package {
+ p.expectKeyword("package")
+ name := p.parsePackageName()
+ if p.tok == scanner.Ident && p.lit == "safe" {
+ // package was compiled with -u option - ignore
+ p.next()
+ }
+ p.expect('\n')
+
+ pkg := p.getPkg(p.id, name)
+
+ for p.tok != '$' && p.tok != scanner.EOF {
+ p.parseDecl()
+ }
+
+ if ch := p.scanner.Peek(); p.tok != '$' || ch != '$' {
+ // don't call next()/expect() since reading past the
+ // export data may cause scanner errors (e.g. NUL chars)
+ p.errorf("expected '$$', got %s %c", scanner.TokenString(p.tok), ch)
+ }
+
+ if n := p.scanner.ErrorCount; n != 0 {
+ p.errorf("expected no scanner errors, got %d", n)
+ }
+
+ // Record all locally referenced packages as imports.
+ var imports []*types.Package
+ for id, pkg2 := range p.localPkgs {
+ if pkg2.Name() == "" {
+ p.errorf("%s package has no name", id)
+ }
+ if id == p.id {
+ continue // avoid self-edge
+ }
+ imports = append(imports, pkg2)
+ }
+ sort.Sort(byPath(imports))
+ pkg.SetImports(imports)
+
+ // package was imported completely and without errors
+ pkg.MarkComplete()
+
+ return pkg
+}
+
+type byPath []*types.Package
+
+func (a byPath) Len() int { return len(a) }
+func (a byPath) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
+func (a byPath) Less(i, j int) bool { return a[i].Path() < a[j].Path() }
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/iexport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/iexport.go
new file mode 100644
index 0000000000000..4be32a2e55fe0
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/iexport.go
@@ -0,0 +1,739 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Indexed binary package export.
+// This file was derived from $GOROOT/src/cmd/compile/internal/gc/iexport.go;
+// see that file for specification of the format.
+
+package gcimporter
+
+import (
+ "bytes"
+ "encoding/binary"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "io"
+ "math/big"
+ "reflect"
+ "sort"
+)
+
+// Current indexed export format version. Increase with each format change.
+// 0: Go1.11 encoding
+const iexportVersion = 0
+
+// IExportData returns the binary export data for pkg.
+//
+// If no file set is provided, position info will be missing.
+// The package path of the top-level package will not be recorded,
+// so that calls to IImportData can override with a provided package path.
+func IExportData(fset *token.FileSet, pkg *types.Package) (b []byte, err error) {
+ defer func() {
+ if e := recover(); e != nil {
+ if ierr, ok := e.(internalError); ok {
+ err = ierr
+ return
+ }
+ // Not an internal error; panic again.
+ panic(e)
+ }
+ }()
+
+ p := iexporter{
+ out: bytes.NewBuffer(nil),
+ fset: fset,
+ allPkgs: map[*types.Package]bool{},
+ stringIndex: map[string]uint64{},
+ declIndex: map[types.Object]uint64{},
+ typIndex: map[types.Type]uint64{},
+ localpkg: pkg,
+ }
+
+ for i, pt := range predeclared() {
+ p.typIndex[pt] = uint64(i)
+ }
+ if len(p.typIndex) > predeclReserved {
+ panic(internalErrorf("too many predeclared types: %d > %d", len(p.typIndex), predeclReserved))
+ }
+
+ // Initialize work queue with exported declarations.
+ scope := pkg.Scope()
+ for _, name := range scope.Names() {
+ if ast.IsExported(name) {
+ p.pushDecl(scope.Lookup(name))
+ }
+ }
+
+ // Loop until no more work.
+ for !p.declTodo.empty() {
+ p.doDecl(p.declTodo.popHead())
+ }
+
+ // Append indices to data0 section.
+ dataLen := uint64(p.data0.Len())
+ w := p.newWriter()
+ w.writeIndex(p.declIndex)
+ w.flush()
+
+ // Assemble header.
+ var hdr intWriter
+ hdr.WriteByte('i')
+ hdr.uint64(iexportVersion)
+ hdr.uint64(uint64(p.strings.Len()))
+ hdr.uint64(dataLen)
+
+ // Flush output.
+ io.Copy(p.out, &hdr)
+ io.Copy(p.out, &p.strings)
+ io.Copy(p.out, &p.data0)
+
+ return p.out.Bytes(), nil
+}
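+
+// Note (illustrative): the assembled stream is
+//
+//	'i' | version | len(strings) | dataLen | strings | data
+//
+// where dataLen is recorded before the main index is appended, so a
+// reader can seek directly past the declarations to the index.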
+
+// writeIndex writes out the main object index, which is also read by
+// non-compiler tools and includes a complete package description
+// (i.e., name and height).
+func (w *exportWriter) writeIndex(index map[types.Object]uint64) {
+ // Build a map from packages to objects from that package.
+ pkgObjs := map[*types.Package][]types.Object{}
+
+ // For the main index, make sure to include every package that
+ // we reference, even if we're not exporting (or reexporting)
+ // any symbols from it.
+ pkgObjs[w.p.localpkg] = nil
+ for pkg := range w.p.allPkgs {
+ pkgObjs[pkg] = nil
+ }
+
+ for obj := range index {
+ pkgObjs[obj.Pkg()] = append(pkgObjs[obj.Pkg()], obj)
+ }
+
+ var pkgs []*types.Package
+ for pkg, objs := range pkgObjs {
+ pkgs = append(pkgs, pkg)
+
+ sort.Slice(objs, func(i, j int) bool {
+ return objs[i].Name() < objs[j].Name()
+ })
+ }
+
+ sort.Slice(pkgs, func(i, j int) bool {
+ return w.exportPath(pkgs[i]) < w.exportPath(pkgs[j])
+ })
+
+ w.uint64(uint64(len(pkgs)))
+ for _, pkg := range pkgs {
+ w.string(w.exportPath(pkg))
+ w.string(pkg.Name())
+ w.uint64(uint64(0)) // package height is not needed for go/types
+
+ objs := pkgObjs[pkg]
+ w.uint64(uint64(len(objs)))
+ for _, obj := range objs {
+ w.string(obj.Name())
+ w.uint64(index[obj])
+ }
+ }
+}
+
+type iexporter struct {
+ fset *token.FileSet
+ out *bytes.Buffer
+
+ localpkg *types.Package
+
+ // allPkgs tracks all packages that have been referenced by
+ // the export data, so we can ensure to include them in the
+ // main index.
+ allPkgs map[*types.Package]bool
+
+ declTodo objQueue
+
+ strings intWriter
+ stringIndex map[string]uint64
+
+ data0 intWriter
+ declIndex map[types.Object]uint64
+ typIndex map[types.Type]uint64
+}
+
+// stringOff returns the offset of s within the string section.
+// If not already present, it's added to the end.
+func (p *iexporter) stringOff(s string) uint64 {
+ off, ok := p.stringIndex[s]
+ if !ok {
+ off = uint64(p.strings.Len())
+ p.stringIndex[s] = off
+
+ p.strings.uint64(uint64(len(s)))
+ p.strings.WriteString(s)
+ }
+ return off
+}
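+
+// Note (illustrative): this interns strings on the encoding side, the
+// mirror of the reader's strList. Calling stringOff("Reader") twice
+// writes the length-prefixed bytes once and returns the same offset on
+// the second call.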
+
+// pushDecl adds obj to the declaration work queue, if not already present.
+func (p *iexporter) pushDecl(obj types.Object) {
+ // Package unsafe is known to the compiler and predeclared.
+ assert(obj.Pkg() != types.Unsafe)
+
+ if _, ok := p.declIndex[obj]; ok {
+ return
+ }
+
+	p.declIndex[obj] = ^uint64(0) // mark obj present in work queue
+ p.declTodo.pushTail(obj)
+}
+
+// exportWriter handles writing out individual data section chunks.
+type exportWriter struct {
+ p *iexporter
+
+ data intWriter
+ currPkg *types.Package
+ prevFile string
+ prevLine int64
+}
+
+func (w *exportWriter) exportPath(pkg *types.Package) string {
+ if pkg == w.p.localpkg {
+ return ""
+ }
+ return pkg.Path()
+}
+
+func (p *iexporter) doDecl(obj types.Object) {
+ w := p.newWriter()
+ w.setPkg(obj.Pkg(), false)
+
+ switch obj := obj.(type) {
+ case *types.Var:
+ w.tag('V')
+ w.pos(obj.Pos())
+ w.typ(obj.Type(), obj.Pkg())
+
+ case *types.Func:
+ sig, _ := obj.Type().(*types.Signature)
+ if sig.Recv() != nil {
+ panic(internalErrorf("unexpected method: %v", sig))
+ }
+ w.tag('F')
+ w.pos(obj.Pos())
+ w.signature(sig)
+
+ case *types.Const:
+ w.tag('C')
+ w.pos(obj.Pos())
+ w.value(obj.Type(), obj.Val())
+
+ case *types.TypeName:
+ if obj.IsAlias() {
+ w.tag('A')
+ w.pos(obj.Pos())
+ w.typ(obj.Type(), obj.Pkg())
+ break
+ }
+
+ // Defined type.
+ w.tag('T')
+ w.pos(obj.Pos())
+
+ underlying := obj.Type().Underlying()
+ w.typ(underlying, obj.Pkg())
+
+ t := obj.Type()
+ if types.IsInterface(t) {
+ break
+ }
+
+ named, ok := t.(*types.Named)
+ if !ok {
+ panic(internalErrorf("%s is not a defined type", t))
+ }
+
+ n := named.NumMethods()
+ w.uint64(uint64(n))
+ for i := 0; i < n; i++ {
+ m := named.Method(i)
+ w.pos(m.Pos())
+ w.string(m.Name())
+ sig, _ := m.Type().(*types.Signature)
+ w.param(sig.Recv())
+ w.signature(sig)
+ }
+
+ default:
+ panic(internalErrorf("unexpected object: %v", obj))
+ }
+
+ p.declIndex[obj] = w.flush()
+}
+
+func (w *exportWriter) tag(tag byte) {
+ w.data.WriteByte(tag)
+}
+
+func (w *exportWriter) pos(pos token.Pos) {
+ if w.p.fset == nil {
+ w.int64(0)
+ return
+ }
+
+ p := w.p.fset.Position(pos)
+ file := p.Filename
+ line := int64(p.Line)
+
+ // When file is the same as the last position (common case),
+ // we can save a few bytes by delta encoding just the line
+ // number.
+ //
+ // Note: Because data objects may be read out of order (or not
+ // at all), we can only apply delta encoding within a single
+ // object. This is handled implicitly by tracking prevFile and
+ // prevLine as fields of exportWriter.
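+	//
+	// For example (a sketch): with the file unchanged and prevLine 10,
+	// a position at line 12 is written as the single varint delta 2;
+	// a new file instead writes deltaNewFile, then the absolute line,
+	// then the file name.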
+
+ if file == w.prevFile {
+ delta := line - w.prevLine
+ w.int64(delta)
+ if delta == deltaNewFile {
+ w.int64(-1)
+ }
+ } else {
+ w.int64(deltaNewFile)
+ w.int64(line) // line >= 0
+ w.string(file)
+ w.prevFile = file
+ }
+ w.prevLine = line
+}
+
+func (w *exportWriter) pkg(pkg *types.Package) {
+ // Ensure any referenced packages are declared in the main index.
+ w.p.allPkgs[pkg] = true
+
+ w.string(w.exportPath(pkg))
+}
+
+func (w *exportWriter) qualifiedIdent(obj types.Object) {
+ // Ensure any referenced declarations are written out too.
+ w.p.pushDecl(obj)
+
+ w.string(obj.Name())
+ w.pkg(obj.Pkg())
+}
+
+func (w *exportWriter) typ(t types.Type, pkg *types.Package) {
+ w.data.uint64(w.p.typOff(t, pkg))
+}
+
+func (p *iexporter) newWriter() *exportWriter {
+ return &exportWriter{p: p}
+}
+
+func (w *exportWriter) flush() uint64 {
+ off := uint64(w.p.data0.Len())
+ io.Copy(&w.p.data0, &w.data)
+ return off
+}
+
+func (p *iexporter) typOff(t types.Type, pkg *types.Package) uint64 {
+ off, ok := p.typIndex[t]
+ if !ok {
+ w := p.newWriter()
+ w.doTyp(t, pkg)
+ off = predeclReserved + w.flush()
+ p.typIndex[t] = off
+ }
+ return off
+}
+
+func (w *exportWriter) startType(k itag) {
+ w.data.uint64(uint64(k))
+}
+
+func (w *exportWriter) doTyp(t types.Type, pkg *types.Package) {
+ switch t := t.(type) {
+ case *types.Named:
+ w.startType(definedType)
+ w.qualifiedIdent(t.Obj())
+
+ case *types.Pointer:
+ w.startType(pointerType)
+ w.typ(t.Elem(), pkg)
+
+ case *types.Slice:
+ w.startType(sliceType)
+ w.typ(t.Elem(), pkg)
+
+ case *types.Array:
+ w.startType(arrayType)
+ w.uint64(uint64(t.Len()))
+ w.typ(t.Elem(), pkg)
+
+ case *types.Chan:
+ w.startType(chanType)
+ // 1 RecvOnly; 2 SendOnly; 3 SendRecv
+ var dir uint64
+ switch t.Dir() {
+ case types.RecvOnly:
+ dir = 1
+ case types.SendOnly:
+ dir = 2
+ case types.SendRecv:
+ dir = 3
+ }
+ w.uint64(dir)
+ w.typ(t.Elem(), pkg)
+
+ case *types.Map:
+ w.startType(mapType)
+ w.typ(t.Key(), pkg)
+ w.typ(t.Elem(), pkg)
+
+ case *types.Signature:
+ w.startType(signatureType)
+ w.setPkg(pkg, true)
+ w.signature(t)
+
+ case *types.Struct:
+ w.startType(structType)
+ w.setPkg(pkg, true)
+
+ n := t.NumFields()
+ w.uint64(uint64(n))
+ for i := 0; i < n; i++ {
+ f := t.Field(i)
+ w.pos(f.Pos())
+ w.string(f.Name())
+ w.typ(f.Type(), pkg)
+ w.bool(f.Anonymous())
+ w.string(t.Tag(i)) // note (or tag)
+ }
+
+ case *types.Interface:
+ w.startType(interfaceType)
+ w.setPkg(pkg, true)
+
+ n := t.NumEmbeddeds()
+ w.uint64(uint64(n))
+ for i := 0; i < n; i++ {
+ f := t.Embedded(i)
+ w.pos(f.Obj().Pos())
+ w.typ(f.Obj().Type(), f.Obj().Pkg())
+ }
+
+ n = t.NumExplicitMethods()
+ w.uint64(uint64(n))
+ for i := 0; i < n; i++ {
+ m := t.ExplicitMethod(i)
+ w.pos(m.Pos())
+ w.string(m.Name())
+ sig, _ := m.Type().(*types.Signature)
+ w.signature(sig)
+ }
+
+ default:
+ panic(internalErrorf("unexpected type: %v, %v", t, reflect.TypeOf(t)))
+ }
+}
+
+func (w *exportWriter) setPkg(pkg *types.Package, write bool) {
+ if write {
+ w.pkg(pkg)
+ }
+
+ w.currPkg = pkg
+}
+
+func (w *exportWriter) signature(sig *types.Signature) {
+ w.paramList(sig.Params())
+ w.paramList(sig.Results())
+ if sig.Params().Len() > 0 {
+ w.bool(sig.Variadic())
+ }
+}
+
+func (w *exportWriter) paramList(tup *types.Tuple) {
+ n := tup.Len()
+ w.uint64(uint64(n))
+ for i := 0; i < n; i++ {
+ w.param(tup.At(i))
+ }
+}
+
+func (w *exportWriter) param(obj types.Object) {
+ w.pos(obj.Pos())
+ w.localIdent(obj)
+ w.typ(obj.Type(), obj.Pkg())
+}
+
+func (w *exportWriter) value(typ types.Type, v constant.Value) {
+ w.typ(typ, nil)
+
+ switch v.Kind() {
+ case constant.Bool:
+ w.bool(constant.BoolVal(v))
+ case constant.Int:
+ var i big.Int
+ if i64, exact := constant.Int64Val(v); exact {
+ i.SetInt64(i64)
+ } else if ui64, exact := constant.Uint64Val(v); exact {
+ i.SetUint64(ui64)
+ } else {
+ i.SetString(v.ExactString(), 10)
+ }
+ w.mpint(&i, typ)
+ case constant.Float:
+ f := constantToFloat(v)
+ w.mpfloat(f, typ)
+ case constant.Complex:
+ w.mpfloat(constantToFloat(constant.Real(v)), typ)
+ w.mpfloat(constantToFloat(constant.Imag(v)), typ)
+ case constant.String:
+ w.string(constant.StringVal(v))
+ case constant.Unknown:
+ // package contains type errors
+ default:
+ panic(internalErrorf("unexpected value %v (%T)", v, v))
+ }
+}
+
+// constantToFloat converts a constant.Value with kind constant.Float to a
+// big.Float.
+func constantToFloat(x constant.Value) *big.Float {
+ assert(x.Kind() == constant.Float)
+ // Use the same floating-point precision (512) as cmd/compile
+ // (see Mpprec in cmd/compile/internal/gc/mpfloat.go).
+ const mpprec = 512
+ var f big.Float
+ f.SetPrec(mpprec)
+ if v, exact := constant.Float64Val(x); exact {
+ // float64
+ f.SetFloat64(v)
+ } else if num, denom := constant.Num(x), constant.Denom(x); num.Kind() == constant.Int {
+ // TODO(gri): add big.Rat accessor to constant.Value.
+ n := valueToRat(num)
+ d := valueToRat(denom)
+ f.SetRat(n.Quo(n, d))
+ } else {
+ // Value too large to represent as a fraction => inaccessible.
+ // TODO(gri): add big.Float accessor to constant.Value.
+ _, ok := f.SetString(x.ExactString())
+ assert(ok)
+ }
+ return &f
+}
+
+// mpint exports a multi-precision integer.
+//
+// For unsigned types, small values are written out as a single
+// byte. Larger values are written out as a length-prefixed big-endian
+// byte string, where the length prefix is encoded as its complement.
+// For example, bytes 0, 1, and 2 directly represent the integer
+// values 0, 1, and 2; while bytes 255, 254, and 253 indicate that a
+// 1-, 2-, or 3-byte big-endian string follows.
+//
+// The encoding for signed types uses the same general approach as for
+// unsigned types, except that small values use zig-zag encoding and
+// the bottom bit of the length prefix byte for large values is
+// reserved as a sign bit.
+//
+// The exact boundary between small and large encodings varies
+// according to the maximum number of bytes needed to encode a value
+// of type typ. As a special case, 8-bit types are always encoded as a
+// single byte.
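+//
+// For illustration (a sketch of the scheme above): for uint32,
+// maxBytes is 4, so the values 0..251 use the single-byte small
+// encoding, while 300 ([]byte{0x01, 0x2C}) is written as the prefix
+// byte 254 (256-2) followed by 0x01 0x2C. For int32, small values are
+// zig-zagged (0, -1, 1, -2, 2 => bytes 0, 1, 2, 3, 4) and stay a
+// single byte while the zig-zagged value is below 248; -1000
+// ([]byte{0x03, 0xE8}) is written as the prefix byte 253 (256-2*2,
+// low sign bit set) followed by 0x03 0xE8.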
+//
+// TODO(mdempsky): Is this level of complexity really worthwhile?
+func (w *exportWriter) mpint(x *big.Int, typ types.Type) {
+ basic, ok := typ.Underlying().(*types.Basic)
+ if !ok {
+ panic(internalErrorf("unexpected type %v (%T)", typ.Underlying(), typ.Underlying()))
+ }
+
+ signed, maxBytes := intSize(basic)
+
+ negative := x.Sign() < 0
+ if !signed && negative {
+ panic(internalErrorf("negative unsigned integer; type %v, value %v", typ, x))
+ }
+
+ b := x.Bytes()
+ if len(b) > 0 && b[0] == 0 {
+ panic(internalErrorf("leading zeros"))
+ }
+ if uint(len(b)) > maxBytes {
+ panic(internalErrorf("bad mpint length: %d > %d (type %v, value %v)", len(b), maxBytes, typ, x))
+ }
+
+ maxSmall := 256 - maxBytes
+ if signed {
+ maxSmall = 256 - 2*maxBytes
+ }
+ if maxBytes == 1 {
+ maxSmall = 256
+ }
+
+ // Check if x can use small value encoding.
+ if len(b) <= 1 {
+ var ux uint
+ if len(b) == 1 {
+ ux = uint(b[0])
+ }
+ if signed {
+ ux <<= 1
+ if negative {
+ ux--
+ }
+ }
+ if ux < maxSmall {
+ w.data.WriteByte(byte(ux))
+ return
+ }
+ }
+
+ n := 256 - uint(len(b))
+ if signed {
+ n = 256 - 2*uint(len(b))
+ if negative {
+ n |= 1
+ }
+ }
+ if n < maxSmall || n >= 256 {
+ panic(internalErrorf("encoding mistake: %d, %v, %v => %d", len(b), signed, negative, n))
+ }
+
+ w.data.WriteByte(byte(n))
+ w.data.Write(b)
+}
+
+// mpfloat exports a multi-precision floating point number.
+//
+// The number's value is decomposed into mantissa × 2**exponent, where
+// mantissa is an integer. The value is written out as mantissa (as a
+// multi-precision integer) and then the exponent, except exponent is
+// omitted if mantissa is zero.
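+//
+// For example (a sketch): 6.0 = 0.75 x 2**3; the mantissa 0.75 needs
+// 2 bits of precision, so it is scaled to the integer 3 and the
+// exponent drops to 1, i.e. 6.0 is written as mpint(3) followed by
+// int64(1) (3 x 2**1).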
+func (w *exportWriter) mpfloat(f *big.Float, typ types.Type) {
+ if f.IsInf() {
+ panic("infinite constant")
+ }
+
+ // Break into f = mant × 2**exp, with 0.5 <= mant < 1.
+ var mant big.Float
+ exp := int64(f.MantExp(&mant))
+
+ // Scale so that mant is an integer.
+ prec := mant.MinPrec()
+ mant.SetMantExp(&mant, int(prec))
+ exp -= int64(prec)
+
+ manti, acc := mant.Int(nil)
+ if acc != big.Exact {
+ panic(internalErrorf("mantissa scaling failed for %f (%s)", f, acc))
+ }
+ w.mpint(manti, typ)
+ if manti.Sign() != 0 {
+ w.int64(exp)
+ }
+}
+
+func (w *exportWriter) bool(b bool) bool {
+ var x uint64
+ if b {
+ x = 1
+ }
+ w.uint64(x)
+ return b
+}
+
+func (w *exportWriter) int64(x int64) { w.data.int64(x) }
+func (w *exportWriter) uint64(x uint64) { w.data.uint64(x) }
+func (w *exportWriter) string(s string) { w.uint64(w.p.stringOff(s)) }
+
+func (w *exportWriter) localIdent(obj types.Object) {
+ // Anonymous parameters.
+ if obj == nil {
+ w.string("")
+ return
+ }
+
+ name := obj.Name()
+ if name == "_" {
+ w.string("_")
+ return
+ }
+
+ w.string(name)
+}
+
+type intWriter struct {
+ bytes.Buffer
+}
+
+func (w *intWriter) int64(x int64) {
+ var buf [binary.MaxVarintLen64]byte
+ n := binary.PutVarint(buf[:], x)
+ w.Write(buf[:n])
+}
+
+func (w *intWriter) uint64(x uint64) {
+ var buf [binary.MaxVarintLen64]byte
+ n := binary.PutUvarint(buf[:], x)
+ w.Write(buf[:n])
+}
+
+func assert(cond bool) {
+ if !cond {
+ panic("internal error: assertion failed")
+ }
+}
+
+// The below is copied from go/src/cmd/compile/internal/gc/syntax.go.
+
+// objQueue is a FIFO queue of types.Object. The zero value of objQueue is
+// a ready-to-use empty queue.
+type objQueue struct {
+ ring []types.Object
+ head, tail int
+}
+
+// empty returns true if q contains no Nodes.
+func (q *objQueue) empty() bool {
+ return q.head == q.tail
+}
+
+// pushTail appends n to the tail of the queue.
+func (q *objQueue) pushTail(obj types.Object) {
+ if len(q.ring) == 0 {
+ q.ring = make([]types.Object, 16)
+ } else if q.head+len(q.ring) == q.tail {
+ // Grow the ring.
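+		// The live elements occupy at most two contiguous segments of
+		// the old ring (they may wrap past the end of the slice): copy
+		// the segment starting at head first, then any wrapped prefix.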
+ nring := make([]types.Object, len(q.ring)*2)
+ // Copy the old elements.
+ part := q.ring[q.head%len(q.ring):]
+ if q.tail-q.head <= len(part) {
+ part = part[:q.tail-q.head]
+ copy(nring, part)
+ } else {
+ pos := copy(nring, part)
+ copy(nring[pos:], q.ring[:q.tail%len(q.ring)])
+ }
+ q.ring, q.head, q.tail = nring, 0, q.tail-q.head
+ }
+
+ q.ring[q.tail%len(q.ring)] = obj
+ q.tail++
+}
+
+// popHead pops a node from the head of the queue. It panics if q is empty.
+func (q *objQueue) popHead() types.Object {
+ if q.empty() {
+ panic("dequeue empty")
+ }
+ obj := q.ring[q.head%len(q.ring)]
+ q.head++
+ return obj
+}
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/iimport.go b/vendor/golang.org/x/tools/go/internal/gcimporter/iimport.go
new file mode 100644
index 0000000000000..a31a880263e12
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/iimport.go
@@ -0,0 +1,630 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Indexed package import.
+// See cmd/compile/internal/gc/iexport.go for the export data format.
+
+// This file is a copy of $GOROOT/src/go/internal/gcimporter/iimport.go.
+
+package gcimporter
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "io"
+ "sort"
+)
+
+type intReader struct {
+ *bytes.Reader
+ path string
+}
+
+func (r *intReader) int64() int64 {
+ i, err := binary.ReadVarint(r.Reader)
+ if err != nil {
+ errorf("import %q: read varint error: %v", r.path, err)
+ }
+ return i
+}
+
+func (r *intReader) uint64() uint64 {
+ i, err := binary.ReadUvarint(r.Reader)
+ if err != nil {
+		errorf("import %q: read uvarint error: %v", r.path, err)
+ }
+ return i
+}
+
+const predeclReserved = 32
+
+type itag uint64
+
+const (
+ // Types
+ definedType itag = iota
+ pointerType
+ sliceType
+ arrayType
+ chanType
+ mapType
+ signatureType
+ structType
+ interfaceType
+)
+
+// IImportData imports a package from the serialized package data
+// and returns the number of bytes consumed and a reference to the package.
+// If the export data version is not recognized or the format is otherwise
+// compromised, an error is returned.
+func IImportData(fset *token.FileSet, imports map[string]*types.Package, data []byte, path string) (_ int, pkg *types.Package, err error) {
+ const currentVersion = 1
+ version := int64(-1)
+ defer func() {
+ if e := recover(); e != nil {
+ if version > currentVersion {
+ err = fmt.Errorf("cannot import %q (%v), export data is newer version - update tool", path, e)
+ } else {
+ err = fmt.Errorf("cannot import %q (%v), possibly version skew - reinstall package", path, e)
+ }
+ }
+ }()
+
+ r := &intReader{bytes.NewReader(data), path}
+
+ version = int64(r.uint64())
+ switch version {
+ case currentVersion, 0:
+ default:
+ errorf("unknown iexport format version %d", version)
+ }
+
+ sLen := int64(r.uint64())
+ dLen := int64(r.uint64())
+
+ whence, _ := r.Seek(0, io.SeekCurrent)
+ stringData := data[whence : whence+sLen]
+ declData := data[whence+sLen : whence+sLen+dLen]
+ r.Seek(sLen+dLen, io.SeekCurrent)
+
+ p := iimporter{
+ ipath: path,
+ version: int(version),
+
+ stringData: stringData,
+ stringCache: make(map[uint64]string),
+ pkgCache: make(map[uint64]*types.Package),
+
+ declData: declData,
+ pkgIndex: make(map[*types.Package]map[string]uint64),
+ typCache: make(map[uint64]types.Type),
+
+ fake: fakeFileSet{
+ fset: fset,
+ files: make(map[string]*token.File),
+ },
+ }
+
+ for i, pt := range predeclared() {
+ p.typCache[uint64(i)] = pt
+ }
+
+ pkgList := make([]*types.Package, r.uint64())
+ for i := range pkgList {
+ pkgPathOff := r.uint64()
+ pkgPath := p.stringAt(pkgPathOff)
+ pkgName := p.stringAt(r.uint64())
+ _ = r.uint64() // package height; unused by go/types
+
+ if pkgPath == "" {
+ pkgPath = path
+ }
+ pkg := imports[pkgPath]
+ if pkg == nil {
+ pkg = types.NewPackage(pkgPath, pkgName)
+ imports[pkgPath] = pkg
+ } else if pkg.Name() != pkgName {
+ errorf("conflicting names %s and %s for package %q", pkg.Name(), pkgName, path)
+ }
+
+ p.pkgCache[pkgPathOff] = pkg
+
+ nameIndex := make(map[string]uint64)
+ for nSyms := r.uint64(); nSyms > 0; nSyms-- {
+ name := p.stringAt(r.uint64())
+ nameIndex[name] = r.uint64()
+ }
+
+ p.pkgIndex[pkg] = nameIndex
+ pkgList[i] = pkg
+ }
+ if len(pkgList) == 0 {
+ errorf("no packages found for %s", path)
+ panic("unreachable")
+ }
+ p.ipkg = pkgList[0]
+ names := make([]string, 0, len(p.pkgIndex[p.ipkg]))
+ for name := range p.pkgIndex[p.ipkg] {
+ names = append(names, name)
+ }
+ sort.Strings(names)
+ for _, name := range names {
+ p.doDecl(p.ipkg, name)
+ }
+
+ for _, typ := range p.interfaceList {
+ typ.Complete()
+ }
+
+ // record all referenced packages as imports
+ list := append(([]*types.Package)(nil), pkgList[1:]...)
+ sort.Sort(byPath(list))
+ p.ipkg.SetImports(list)
+
+ // package was imported completely and without errors
+ p.ipkg.MarkComplete()
+
+ consumed, _ := r.Seek(0, io.SeekCurrent)
+ return int(consumed), p.ipkg, nil
+}
+
+type iimporter struct {
+ ipath string
+ ipkg *types.Package
+ version int
+
+ stringData []byte
+ stringCache map[uint64]string
+ pkgCache map[uint64]*types.Package
+
+ declData []byte
+ pkgIndex map[*types.Package]map[string]uint64
+ typCache map[uint64]types.Type
+
+ fake fakeFileSet
+ interfaceList []*types.Interface
+}
+
+func (p *iimporter) doDecl(pkg *types.Package, name string) {
+ // See if we've already imported this declaration.
+ if obj := pkg.Scope().Lookup(name); obj != nil {
+ return
+ }
+
+ off, ok := p.pkgIndex[pkg][name]
+ if !ok {
+ errorf("%v.%v not in index", pkg, name)
+ }
+
+ r := &importReader{p: p, currPkg: pkg}
+ r.declReader.Reset(p.declData[off:])
+
+ r.obj(name)
+}
+
+func (p *iimporter) stringAt(off uint64) string {
+ if s, ok := p.stringCache[off]; ok {
+ return s
+ }
+
+ slen, n := binary.Uvarint(p.stringData[off:])
+ if n <= 0 {
+ errorf("varint failed")
+ }
+ spos := off + uint64(n)
+ s := string(p.stringData[spos : spos+slen])
+ p.stringCache[off] = s
+ return s
+}
+
+func (p *iimporter) pkgAt(off uint64) *types.Package {
+ if pkg, ok := p.pkgCache[off]; ok {
+ return pkg
+ }
+ path := p.stringAt(off)
+ if path == p.ipath {
+ return p.ipkg
+ }
+ errorf("missing package %q in %q", path, p.ipath)
+ return nil
+}
+
+func (p *iimporter) typAt(off uint64, base *types.Named) types.Type {
+ if t, ok := p.typCache[off]; ok && (base == nil || !isInterface(t)) {
+ return t
+ }
+
+ if off < predeclReserved {
+ errorf("predeclared type missing from cache: %v", off)
+ }
+
+ r := &importReader{p: p}
+ r.declReader.Reset(p.declData[off-predeclReserved:])
+ t := r.doType(base)
+
+ if base == nil || !isInterface(t) {
+ p.typCache[off] = t
+ }
+ return t
+}
+
+type importReader struct {
+ p *iimporter
+ declReader bytes.Reader
+ currPkg *types.Package
+ prevFile string
+ prevLine int64
+ prevColumn int64
+}
+
+func (r *importReader) obj(name string) {
+ tag := r.byte()
+ pos := r.pos()
+
+ switch tag {
+ case 'A':
+ typ := r.typ()
+
+ r.declare(types.NewTypeName(pos, r.currPkg, name, typ))
+
+ case 'C':
+ typ, val := r.value()
+
+ r.declare(types.NewConst(pos, r.currPkg, name, typ, val))
+
+ case 'F':
+ sig := r.signature(nil)
+
+ r.declare(types.NewFunc(pos, r.currPkg, name, sig))
+
+ case 'T':
+ // Types can be recursive. We need to setup a stub
+ // declaration before recursing.
+ obj := types.NewTypeName(pos, r.currPkg, name, nil)
+ named := types.NewNamed(obj, nil, nil)
+ r.declare(obj)
+
+ underlying := r.p.typAt(r.uint64(), named).Underlying()
+ named.SetUnderlying(underlying)
+
+ if !isInterface(underlying) {
+ for n := r.uint64(); n > 0; n-- {
+ mpos := r.pos()
+ mname := r.ident()
+ recv := r.param()
+ msig := r.signature(recv)
+
+ named.AddMethod(types.NewFunc(mpos, r.currPkg, mname, msig))
+ }
+ }
+
+ case 'V':
+ typ := r.typ()
+
+ r.declare(types.NewVar(pos, r.currPkg, name, typ))
+
+ default:
+ errorf("unexpected tag: %v", tag)
+ }
+}
+
+func (r *importReader) declare(obj types.Object) {
+ obj.Pkg().Scope().Insert(obj)
+}
+
+func (r *importReader) value() (typ types.Type, val constant.Value) {
+ typ = r.typ()
+
+ switch b := typ.Underlying().(*types.Basic); b.Info() & types.IsConstType {
+ case types.IsBoolean:
+ val = constant.MakeBool(r.bool())
+
+ case types.IsString:
+ val = constant.MakeString(r.string())
+
+ case types.IsInteger:
+ val = r.mpint(b)
+
+ case types.IsFloat:
+ val = r.mpfloat(b)
+
+ case types.IsComplex:
+ re := r.mpfloat(b)
+ im := r.mpfloat(b)
+ val = constant.BinaryOp(re, token.ADD, constant.MakeImag(im))
+
+ default:
+ if b.Kind() == types.Invalid {
+ val = constant.MakeUnknown()
+ return
+ }
+ errorf("unexpected type %v", typ) // panics
+ panic("unreachable")
+ }
+
+ return
+}
+
+func intSize(b *types.Basic) (signed bool, maxBytes uint) {
+ if (b.Info() & types.IsUntyped) != 0 {
+ return true, 64
+ }
+
+ switch b.Kind() {
+ case types.Float32, types.Complex64:
+ return true, 3
+ case types.Float64, types.Complex128:
+ return true, 7
+ }
+
+ signed = (b.Info() & types.IsUnsigned) == 0
+ switch b.Kind() {
+ case types.Int8, types.Uint8:
+ maxBytes = 1
+ case types.Int16, types.Uint16:
+ maxBytes = 2
+ case types.Int32, types.Uint32:
+ maxBytes = 4
+ default:
+ maxBytes = 8
+ }
+
+ return
+}
+
+func (r *importReader) mpint(b *types.Basic) constant.Value {
+ signed, maxBytes := intSize(b)
+
+ maxSmall := 256 - maxBytes
+ if signed {
+ maxSmall = 256 - 2*maxBytes
+ }
+ if maxBytes == 1 {
+ maxSmall = 256
+ }
+
+ n, _ := r.declReader.ReadByte()
+ if uint(n) < maxSmall {
+ v := int64(n)
+ if signed {
+ v >>= 1
+ if n&1 != 0 {
+ v = ^v
+ }
+ }
+ return constant.MakeInt64(v)
+ }
+
+ v := -n
+ if signed {
+ v = -(n &^ 1) >> 1
+ }
+ if v < 1 || uint(v) > maxBytes {
+ errorf("weird decoding: %v, %v => %v", n, signed, v)
+ }
+
+ buf := make([]byte, v)
+ io.ReadFull(&r.declReader, buf)
+
+ // convert to little endian
+ // TODO(gri) go/constant should have a more direct conversion function
+ // (e.g., once it supports a big.Float based implementation)
+ for i, j := 0, len(buf)-1; i < j; i, j = i+1, j-1 {
+ buf[i], buf[j] = buf[j], buf[i]
+ }
+
+ x := constant.MakeFromBytes(buf)
+ if signed && n&1 != 0 {
+ x = constant.UnaryOp(token.SUB, x, 0)
+ }
+ return x
+}
+
+func (r *importReader) mpfloat(b *types.Basic) constant.Value {
+ x := r.mpint(b)
+ if constant.Sign(x) == 0 {
+ return x
+ }
+
+ exp := r.int64()
+ switch {
+ case exp > 0:
+ x = constant.Shift(x, token.SHL, uint(exp))
+ case exp < 0:
+ d := constant.Shift(constant.MakeInt64(1), token.SHL, uint(-exp))
+ x = constant.BinaryOp(x, token.QUO, d)
+ }
+ return x
+}
+
+func (r *importReader) ident() string {
+ return r.string()
+}
+
+func (r *importReader) qualifiedIdent() (*types.Package, string) {
+ name := r.string()
+ pkg := r.pkg()
+ return pkg, name
+}
+
+func (r *importReader) pos() token.Pos {
+ if r.p.version >= 1 {
+ r.posv1()
+ } else {
+ r.posv0()
+ }
+
+ if r.prevFile == "" && r.prevLine == 0 && r.prevColumn == 0 {
+ return token.NoPos
+ }
+ return r.p.fake.pos(r.prevFile, int(r.prevLine), int(r.prevColumn))
+}
+
+func (r *importReader) posv0() {
+ delta := r.int64()
+ if delta != deltaNewFile {
+ r.prevLine += delta
+ } else if l := r.int64(); l == -1 {
+ r.prevLine += deltaNewFile
+ } else {
+ r.prevFile = r.string()
+ r.prevLine = l
+ }
+}
+
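+// posv1 decodes the version-1 position encoding: a varint column delta
+// whose low bit, when set, signals that a line delta (and possibly a
+// new file name) follows. For example (a sketch): with the file and
+// line unchanged, a column shift of +5 arrives as the single varint 10
+// (5<<1 with the low bit clear).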
+func (r *importReader) posv1() {
+ delta := r.int64()
+ r.prevColumn += delta >> 1
+ if delta&1 != 0 {
+ delta = r.int64()
+ r.prevLine += delta >> 1
+ if delta&1 != 0 {
+ r.prevFile = r.string()
+ }
+ }
+}
+
+func (r *importReader) typ() types.Type {
+ return r.p.typAt(r.uint64(), nil)
+}
+
+func isInterface(t types.Type) bool {
+ _, ok := t.(*types.Interface)
+ return ok
+}
+
+func (r *importReader) pkg() *types.Package { return r.p.pkgAt(r.uint64()) }
+func (r *importReader) string() string { return r.p.stringAt(r.uint64()) }
+
+func (r *importReader) doType(base *types.Named) types.Type {
+ switch k := r.kind(); k {
+ default:
+ errorf("unexpected kind tag in %q: %v", r.p.ipath, k)
+ return nil
+
+ case definedType:
+ pkg, name := r.qualifiedIdent()
+ r.p.doDecl(pkg, name)
+ return pkg.Scope().Lookup(name).(*types.TypeName).Type()
+ case pointerType:
+ return types.NewPointer(r.typ())
+ case sliceType:
+ return types.NewSlice(r.typ())
+ case arrayType:
+ n := r.uint64()
+ return types.NewArray(r.typ(), int64(n))
+ case chanType:
+ dir := chanDir(int(r.uint64()))
+ return types.NewChan(dir, r.typ())
+ case mapType:
+ return types.NewMap(r.typ(), r.typ())
+ case signatureType:
+ r.currPkg = r.pkg()
+ return r.signature(nil)
+
+ case structType:
+ r.currPkg = r.pkg()
+
+ fields := make([]*types.Var, r.uint64())
+ tags := make([]string, len(fields))
+ for i := range fields {
+ fpos := r.pos()
+ fname := r.ident()
+ ftyp := r.typ()
+ emb := r.bool()
+ tag := r.string()
+
+ fields[i] = types.NewField(fpos, r.currPkg, fname, ftyp, emb)
+ tags[i] = tag
+ }
+ return types.NewStruct(fields, tags)
+
+ case interfaceType:
+ r.currPkg = r.pkg()
+
+ embeddeds := make([]types.Type, r.uint64())
+ for i := range embeddeds {
+ _ = r.pos()
+ embeddeds[i] = r.typ()
+ }
+
+ methods := make([]*types.Func, r.uint64())
+ for i := range methods {
+ mpos := r.pos()
+ mname := r.ident()
+
+ // TODO(mdempsky): Matches bimport.go, but I
+ // don't agree with this.
+ var recv *types.Var
+ if base != nil {
+ recv = types.NewVar(token.NoPos, r.currPkg, "", base)
+ }
+
+ msig := r.signature(recv)
+ methods[i] = types.NewFunc(mpos, r.currPkg, mname, msig)
+ }
+
+ typ := newInterface(methods, embeddeds)
+ r.p.interfaceList = append(r.p.interfaceList, typ)
+ return typ
+ }
+}
+
+func (r *importReader) kind() itag {
+ return itag(r.uint64())
+}
+
+func (r *importReader) signature(recv *types.Var) *types.Signature {
+ params := r.paramList()
+ results := r.paramList()
+ variadic := params.Len() > 0 && r.bool()
+ return types.NewSignature(recv, params, results, variadic)
+}
+
+func (r *importReader) paramList() *types.Tuple {
+ xs := make([]*types.Var, r.uint64())
+ for i := range xs {
+ xs[i] = r.param()
+ }
+ return types.NewTuple(xs...)
+}
+
+func (r *importReader) param() *types.Var {
+ pos := r.pos()
+ name := r.ident()
+ typ := r.typ()
+ return types.NewParam(pos, r.currPkg, name, typ)
+}
+
+func (r *importReader) bool() bool {
+ return r.uint64() != 0
+}
+
+func (r *importReader) int64() int64 {
+ n, err := binary.ReadVarint(&r.declReader)
+ if err != nil {
+ errorf("readVarint: %v", err)
+ }
+ return n
+}
+
+func (r *importReader) uint64() uint64 {
+ n, err := binary.ReadUvarint(&r.declReader)
+ if err != nil {
+ errorf("readUvarint: %v", err)
+ }
+ return n
+}
+
+func (r *importReader) byte() byte {
+ x, err := r.declReader.ReadByte()
+ if err != nil {
+ errorf("declReader.ReadByte: %v", err)
+ }
+ return x
+}
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface10.go b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface10.go
new file mode 100644
index 0000000000000..463f252271463
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface10.go
@@ -0,0 +1,21 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !go1.11
+
+package gcimporter
+
+import "go/types"
+
+func newInterface(methods []*types.Func, embeddeds []types.Type) *types.Interface {
+ named := make([]*types.Named, len(embeddeds))
+ for i, e := range embeddeds {
+ var ok bool
+ named[i], ok = e.(*types.Named)
+ if !ok {
+ panic("embedding of non-defined interfaces in interfaces is not supported before Go 1.11")
+ }
+ }
+ return types.NewInterface(methods, named)
+}
diff --git a/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface11.go b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface11.go
new file mode 100644
index 0000000000000..ab28b95cbb84f
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/gcimporter/newInterface11.go
@@ -0,0 +1,13 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build go1.11
+
+package gcimporter
+
+import "go/types"
+
+func newInterface(methods []*types.Func, embeddeds []types.Type) *types.Interface {
+ return types.NewInterfaceType(methods, embeddeds)
+}
diff --git a/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go b/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go
new file mode 100644
index 0000000000000..db0c9a7ea610b
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/internal/packagesdriver/sizes.go
@@ -0,0 +1,174 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package packagesdriver fetches type sizes for go/packages and go/analysis.
+package packagesdriver
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "go/types"
+ "log"
+ "os"
+ "os/exec"
+ "strings"
+ "time"
+)
+
+var debug = false
+
+// GetSizes returns the sizes used by the underlying driver with the given parameters.
+func GetSizes(ctx context.Context, buildFlags, env []string, dir string, usesExportData bool) (types.Sizes, error) {
+ // TODO(matloob): Clean this up. This code is mostly a copy of packages.findExternalDriver.
+ const toolPrefix = "GOPACKAGESDRIVER="
+ tool := ""
+ for _, env := range env {
+ if val := strings.TrimPrefix(env, toolPrefix); val != env {
+ tool = val
+ }
+ }
+
+ if tool == "" {
+ var err error
+ tool, err = exec.LookPath("gopackagesdriver")
+ if err != nil {
+ // We did not find the driver, so use "go list".
+ tool = "off"
+ }
+ }
+
+ if tool == "off" {
+ return GetSizesGolist(ctx, buildFlags, env, dir, usesExportData)
+ }
+
+ req, err := json.Marshal(struct {
+ Command string `json:"command"`
+ Env []string `json:"env"`
+ BuildFlags []string `json:"build_flags"`
+ }{
+ Command: "sizes",
+ Env: env,
+ BuildFlags: buildFlags,
+ })
+ if err != nil {
+ return nil, fmt.Errorf("failed to encode message to driver tool: %v", err)
+ }
+
+ buf := new(bytes.Buffer)
+ cmd := exec.CommandContext(ctx, tool)
+ cmd.Dir = dir
+ cmd.Env = env
+ cmd.Stdin = bytes.NewReader(req)
+ cmd.Stdout = buf
+ cmd.Stderr = new(bytes.Buffer)
+ if err := cmd.Run(); err != nil {
+ return nil, fmt.Errorf("%v: %v: %s", tool, err, cmd.Stderr)
+ }
+ var response struct {
+ // Sizes, if not nil, is the types.Sizes to use when type checking.
+ Sizes *types.StdSizes
+ }
+ if err := json.Unmarshal(buf.Bytes(), &response); err != nil {
+ return nil, err
+ }
+ return response.Sizes, nil
+}
+
+func GetSizesGolist(ctx context.Context, buildFlags, env []string, dir string, usesExportData bool) (types.Sizes, error) {
+ args := []string{"list", "-f", "{{context.GOARCH}} {{context.Compiler}}"}
+ args = append(args, buildFlags...)
+ args = append(args, "--", "unsafe")
+ stdout, stderr, err := invokeGo(ctx, env, dir, usesExportData, args...)
+ var goarch, compiler string
+ if err != nil {
+ if strings.Contains(err.Error(), "cannot find main module") {
+ // User's running outside of a module. All bets are off. Get GOARCH and guess compiler is gc.
+ // TODO(matloob): Is this a problem in practice?
+ envout, _, enverr := invokeGo(ctx, env, dir, usesExportData, "env", "GOARCH")
+ if enverr != nil {
+ return nil, err
+ }
+ goarch = strings.TrimSpace(envout.String())
+ compiler = "gc"
+ } else {
+ return nil, err
+ }
+ } else {
+ fields := strings.Fields(stdout.String())
+ if len(fields) < 2 {
+ return nil, fmt.Errorf("could not parse GOARCH and Go compiler in format \"<GOARCH> <compiler>\" from stdout of go command:\n%s\ndir: %s\nstdout: <<%s>>\nstderr: <<%s>>",
+ cmdDebugStr(env, args...), dir, stdout.String(), stderr.String())
+ }
+ goarch = fields[0]
+ compiler = fields[1]
+ }
+ return types.SizesFor(compiler, goarch), nil
+}
+
+// invokeGo returns the stdout and stderr of a go command invocation.
+func invokeGo(ctx context.Context, env []string, dir string, usesExportData bool, args ...string) (*bytes.Buffer, *bytes.Buffer, error) {
+ if debug {
+ defer func(start time.Time) { log.Printf("%s for %v", time.Since(start), cmdDebugStr(env, args...)) }(time.Now())
+ }
+ stdout := new(bytes.Buffer)
+ stderr := new(bytes.Buffer)
+ cmd := exec.CommandContext(ctx, "go", args...)
+ // On darwin the cwd gets resolved to the real path, which breaks anything that
+ // expects the working directory to keep the original path, including the
+ // go command when dealing with modules.
+ // The Go stdlib has a special feature where if the cwd and the PWD are the
+ // same node then it trusts the PWD, so by setting it in the env for the child
+ // process we fix up all the paths returned by the go command.
+ cmd.Env = append(append([]string{}, env...), "PWD="+dir)
+ cmd.Dir = dir
+ cmd.Stdout = stdout
+ cmd.Stderr = stderr
+ if err := cmd.Run(); err != nil {
+ exitErr, ok := err.(*exec.ExitError)
+ if !ok {
+ // Catastrophic error:
+ // - executable not found
+ // - context cancellation
+ return nil, nil, fmt.Errorf("couldn't exec 'go %v': %s %T", args, err, err)
+ }
+
+ // Export mode entails a build.
+ // If that build fails, errors appear on stderr
+ // (despite the -e flag) and the Export field is blank.
+ // Do not fail in that case.
+ if !usesExportData {
+ return nil, nil, fmt.Errorf("go %v: %s: %s", args, exitErr, stderr)
+ }
+ }
+
+ // As of writing, go list -export prints some non-fatal compilation
+ // errors to stderr, even with -e set. We would prefer that it put
+ // them in the Package.Error JSON (see https://golang.org/issue/26319).
+ // In the meantime, there's nowhere good to put them, but they can
+ // be useful for debugging. Print them if $GOPACKAGESPRINTGOLISTERRORS
+ // is set.
+ if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTGOLISTERRORS") != "" {
+ fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(env, args...), stderr)
+ }
+
+ // debugging
+ if false {
+ fmt.Fprintf(os.Stderr, "%s stdout: <<%s>>\n", cmdDebugStr(env, args...), stdout)
+ }
+
+ return stdout, stderr, nil
+}
+
+func cmdDebugStr(envlist []string, args ...string) string {
+ env := make(map[string]string)
+ for _, kv := range envlist {
+ split := strings.Split(kv, "=")
+ k, v := split[0], split[1]
+ env[k] = v
+ }
+
+ return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v PWD=%v go %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["PWD"], args)
+}
diff --git a/vendor/golang.org/x/tools/go/packages/doc.go b/vendor/golang.org/x/tools/go/packages/doc.go
new file mode 100644
index 0000000000000..3799f8ed8be18
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/doc.go
@@ -0,0 +1,222 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+/*
+Package packages loads Go packages for inspection and analysis.
+
+The Load function takes as input a list of patterns and returns a list of Package
+structs describing individual packages matched by those patterns.
+The LoadMode controls the amount of detail in the loaded packages.
+
+Load passes most patterns directly to the underlying build tool,
+but all patterns with the prefix "query=", where query is a
+non-empty string of letters from [a-z], are reserved and may be
+interpreted as query operators.
+
+Two query operators are currently supported: "file" and "pattern".
+
+The query "file=path/to/file.go" matches the package or packages enclosing
+the Go source file path/to/file.go. For example, "file=~/go/src/fmt/print.go"
+might return the packages "fmt" and "fmt [fmt.test]".
+
+The query "pattern=string" causes "string" to be passed directly to
+the underlying build tool. In most cases this is unnecessary,
+but an application can use Load("pattern=" + x) as an escaping mechanism
+to ensure that x is not interpreted as a query operator if it contains '='.
+
+All other query operators are reserved for future use and currently
+cause Load to report an error.
+
+The Package struct provides basic information about the package, including
+
+ - ID, a unique identifier for the package in the returned set;
+ - GoFiles, the names of the package's Go source files;
+ - Imports, a map from source import strings to the Packages they name;
+ - Types, the type information for the package's exported symbols;
+ - Syntax, the parsed syntax trees for the package's source code; and
+ - TypeInfo, the result of a complete type-check of the package syntax trees.
+
+(See the documentation for type Package for the complete list of fields
+and more detailed descriptions.)
+
+For example,
+
+ Load(nil, "bytes", "unicode...")
+
+returns four Package structs describing the standard library packages
+bytes, unicode, unicode/utf16, and unicode/utf8. Note that one pattern
+can match multiple packages and that a package might be matched by
+multiple patterns: in general it is not possible to determine which
+packages correspond to which patterns.
+
+Note that the list returned by Load contains only the packages matched
+by the patterns. Their dependencies can be found by walking the import
+graph using the Imports fields.
+
+The Load function can be configured by passing a pointer to a Config as
+the first argument. A nil Config is equivalent to the zero Config, which
+causes Load to run in LoadFiles mode, collecting minimal information.
+See the documentation for type Config for details.
+
+As noted earlier, the Config.Mode controls the amount of detail
+reported about the loaded packages, with each mode returning all the data of the
+previous mode with some extra added. See the documentation for type LoadMode
+for details.
+
+Most tools should pass their command-line arguments (after any flags)
+uninterpreted to the loader, so that the loader can interpret them
+according to the conventions of the underlying build system.
+See the Example function for typical usage.
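+
+As a minimal sketch of that usage (error handling kept short; the Mode
+bits chosen here are only an example):
+
+	cfg := &packages.Config{Mode: packages.NeedName | packages.NeedFiles}
+	pkgs, err := packages.Load(cfg, "bytes")
+	if err != nil {
+		log.Fatal(err)
+	}
+	for _, pkg := range pkgs {
+		fmt.Println(pkg.ID, pkg.GoFiles)
+	}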
+
+*/
+package packages // import "golang.org/x/tools/go/packages"
+
+/*
+
+Motivation and design considerations
+
+The new package's design solves problems addressed by two existing
+packages: go/build, which locates and describes packages, and
+golang.org/x/tools/go/loader, which loads, parses and type-checks them.
+The go/build.Package structure encodes too much of the 'go build' way
+of organizing projects, leaving us in need of a data type that describes a
+package of Go source code independent of the underlying build system.
+We wanted something that works equally well with go build and vgo, and
+also other build systems such as Bazel and Blaze, making it possible to
+construct analysis tools that work in all these environments.
+Tools such as errcheck and staticcheck were essentially unavailable to
+the Go community at Google, and some of Google's internal tools for Go
+are unavailable externally.
+This new package provides a uniform way to obtain package metadata by
+querying each of these build systems, optionally supporting their
+preferred command-line notations for packages, so that tools integrate
+neatly with users' build environments. The Metadata query function
+executes an external query tool appropriate to the current workspace.
+
+Loading packages always returns the complete import graph "all the way down",
+even if all you want is information about a single package, because the query
+mechanisms of all the build systems we currently support ({go,vgo} list, and
+blaze/bazel aspect-based query) cannot provide detailed information
+about one package without visiting all its dependencies too, so there is
+no additional asymptotic cost to providing transitive information.
+(This property might not be true of a hypothetical 5th build system.)
+
+In calls to TypeCheck, all initial packages, and any package that
+transitively depends on one of them, must be loaded from source.
+Consider A->B->C->D->E: if A,C are initial, A,B,C must be loaded from
+source; D may be loaded from export data, and E may not be loaded at all
+(though it's possible that D's export data mentions it, so a
+types.Package may be created for it and exposed.)
+
+The old loader had a feature to suppress type-checking of function
+bodies on a per-package basis, primarily intended to reduce the work of
+obtaining type information for imported packages. Now that imports are
+satisfied by export data, the optimization no longer seems necessary.
+
+Despite some early attempts, the old loader did not exploit export data,
+instead always using the equivalent of WholeProgram mode. This was due
+to the complexity of mixing source and export data packages (now
+resolved by the upward traversal mentioned above), and because export data
+files were nearly always missing or stale. Now that 'go build' supports
+caching, all the underlying build systems can guarantee to produce
+export data in a reasonable (amortized) time.
+
+Test "main" packages synthesized by the build system are now reported as
+first-class packages, avoiding the need for clients (such as go/ssa) to
+reinvent this generation logic.
+
+One way in which go/packages is simpler than the old loader is in its
+treatment of in-package tests. In-package tests are packages that
+consist of all the files of the library under test, plus the test files.
+The old loader constructed in-package tests by a two-phase process of
+mutation called "augmentation": first it would construct and type check
+all the ordinary library packages and type-check the packages that
+depend on them; then it would add more (test) files to the package and
+type-check again. This two-phase approach had four major problems:
+1) in processing the tests, the loader modified the library package,
+ leaving no way for a client application to see both the test
+ package and the library package; one would mutate into the other.
+2) because test files can declare additional methods on types defined in
+ the library portion of the package, the dispatch of method calls in
+ the library portion was affected by the presence of the test files.
+ This should have been a clue that the packages were logically
+ different.
+3) this model of "augmentation" assumed at most one in-package test
+ per library package, which is true of projects using 'go build',
+ but not other build systems.
+4) because of the two-phase nature of test processing, all packages that
+ import the library package had to be processed before augmentation,
+   forcing a "one-shot" API and preventing the client from calling Load
+   several times in sequence, as is now possible in WholeProgram mode.
+ (TypeCheck mode has a similar one-shot restriction for a different reason.)
+
+Early drafts of this package supported "multi-shot" operation.
+Although it allowed clients to make a sequence of calls (or concurrent
+calls) to Load, building up the graph of Packages incrementally,
+it was of marginal value: it complicated the API
+(since it allowed some options to vary across calls but not others),
+it complicated the implementation,
+it cannot be made to work in Types mode, as explained above,
+and it was less efficient than making one combined call (when this is possible).
+Among the clients we have inspected, none made multiple calls to Load
+but could not be easily and satisfactorily modified to make only a single call.
+However, application changes may be required.
+For example, the ssadump command loads the user-specified packages
+and in addition the runtime package. It is tempting to simply append
+"runtime" to the user-provided list, but that does not work if the user
+specified an ad-hoc package such as [a.go b.go].
+Instead, ssadump no longer requests the runtime package,
+but seeks it among the dependencies of the user-specified packages,
+and emits an error if it is not found.
+
+Overlays: The Overlay field in the Config allows providing alternate contents
+for Go source files, by providing a mapping from file path to contents.
+go/packages will pull in new imports added in overlay files when go/packages
+is run in LoadImports mode or greater.
+Overlay support for the go list driver isn't complete yet: if the file doesn't
+exist on disk, it will only be recognized in an overlay if it is a non-test file
+and the package would be reported even without the overlay.
+
+Questions & Tasks
+
+- Add GOARCH/GOOS?
+ They are not portable concepts, but could be made portable.
+ Our goal has been to allow users to express themselves using the conventions
+ of the underlying build system: if the build system honors GOARCH
+ during a build and during a metadata query, then so should
+ applications built atop that query mechanism.
+ Conversely, if the target architecture of the build is determined by
+ command-line flags, the application can pass the relevant
+ flags through to the build system using a command such as:
+ myapp -query_flag="--cpu=amd64" -query_flag="--os=darwin"
+ However, this approach is low-level, unwieldy, and non-portable.
+ GOOS and GOARCH seem important enough to warrant a dedicated option.
+
+- How should we handle partial failures such as a mixture of good and
+ malformed patterns, existing and non-existent packages, successful and
+ failed builds, import failures, import cycles, and so on, in a call to
+ Load?
+
+- Support bazel, blaze, and go1.10 list, not just go1.11 list.
+
+- Handle (and test) various partial success cases, e.g.
+ a mixture of good packages and:
+ invalid patterns
+ nonexistent packages
+ empty packages
+ packages with malformed package or import declarations
+ unreadable files
+ import cycles
+ other parse errors
+ type errors
+ Make sure we record errors at the correct place in the graph.
+
+- Missing packages among initial arguments are not reported.
+ Return bogus packages for them, like golist does.
+
+- "undeclared name" errors (for example) are reported out of source file
+ order. I suspect this is due to the breadth-first resolution now used
+ by go/types. Is that a bug? Discuss with gri.
+
+*/
diff --git a/vendor/golang.org/x/tools/go/packages/external.go b/vendor/golang.org/x/tools/go/packages/external.go
new file mode 100644
index 0000000000000..6ac3e4f5b57d1
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/external.go
@@ -0,0 +1,100 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file enables an external tool to intercept package requests.
+// If the tool is present then its results are used in preference to
+// the go list command.
+
+package packages
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "os"
+ "os/exec"
+ "strings"
+)
+
+// The Driver Protocol
+//
+// The driver, given the inputs to a call to Load, returns metadata about the packages specified.
+// This allows for different build systems to support go/packages by telling go/packages how the
+// packages' source is organized.
+// The driver is a binary, either specified by the GOPACKAGESDRIVER environment variable or in
+// the path as gopackagesdriver. It's given the inputs to load in its argv. See the package
+// documentation in doc.go for the full description of the patterns that need to be supported.
+// A driver receives as a JSON-serialized driverRequest struct in standard input and will
+// produce a JSON-serialized driverResponse (see definition in packages.go) in its standard output.
+
+// driverRequest is used to provide the portion of Load's Config that is needed by a driver.
+type driverRequest struct {
+ Mode LoadMode `json:"mode"`
+ // Env specifies the environment the underlying build system should be run in.
+ Env []string `json:"env"`
+ // BuildFlags are flags that should be passed to the underlying build system.
+ BuildFlags []string `json:"build_flags"`
+ // Tests specifies whether the patterns should also return test packages.
+ Tests bool `json:"tests"`
+ // Overlay maps file paths (relative to the driver's working directory) to the byte contents
+ // of overlay files.
+ Overlay map[string][]byte `json:"overlay"`
+}
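+
+// For illustration, a serialized driverRequest might look like the
+// following (a sketch; the mode number and field values depend on the
+// caller's Config, and []byte overlay contents marshal as base64):
+//
+//	{"mode":3,"env":["GOPATH=/home/user/go"],"build_flags":[],"tests":false,"overlay":{}}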
+
+// findExternalDriver returns a driver function backed by an external
+// tool that supplies the build system package structure, or nil if no
+// such tool is found.
+// If GOPACKAGESDRIVER is set in the environment, findExternalDriver uses its
+// value; otherwise it searches for a binary named gopackagesdriver on the PATH.
+func findExternalDriver(cfg *Config) driver {
+ const toolPrefix = "GOPACKAGESDRIVER="
+ tool := ""
+ for _, env := range cfg.Env {
+ if val := strings.TrimPrefix(env, toolPrefix); val != env {
+ tool = val
+ }
+ }
+ if tool != "" && tool == "off" {
+ return nil
+ }
+ if tool == "" {
+ var err error
+ tool, err = exec.LookPath("gopackagesdriver")
+ if err != nil {
+ return nil
+ }
+ }
+ return func(cfg *Config, words ...string) (*driverResponse, error) {
+ req, err := json.Marshal(driverRequest{
+ Mode: cfg.Mode,
+ Env: cfg.Env,
+ BuildFlags: cfg.BuildFlags,
+ Tests: cfg.Tests,
+ Overlay: cfg.Overlay,
+ })
+ if err != nil {
+ return nil, fmt.Errorf("failed to encode message to driver tool: %v", err)
+ }
+
+ buf := new(bytes.Buffer)
+ stderr := new(bytes.Buffer)
+ cmd := exec.CommandContext(cfg.Context, tool, words...)
+ cmd.Dir = cfg.Dir
+ cmd.Env = cfg.Env
+ cmd.Stdin = bytes.NewReader(req)
+ cmd.Stdout = buf
+ cmd.Stderr = stderr
+ if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTDRIVERERRORS") != "" {
+ fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(cmd, words...), stderr)
+ }
+
+ if err := cmd.Run(); err != nil {
+ return nil, fmt.Errorf("%v: %v: %s", tool, err, cmd.Stderr)
+ }
+ var response driverResponse
+ if err := json.Unmarshal(buf.Bytes(), &response); err != nil {
+ return nil, err
+ }
+ return &response, nil
+ }
+}
diff --git a/vendor/golang.org/x/tools/go/packages/golist.go b/vendor/golang.org/x/tools/go/packages/golist.go
new file mode 100644
index 0000000000000..648e364313acc
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/golist.go
@@ -0,0 +1,1143 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package packages
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "go/types"
+ "io/ioutil"
+ "log"
+ "os"
+ "os/exec"
+ "path"
+ "path/filepath"
+ "reflect"
+ "regexp"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+ "unicode"
+
+ "golang.org/x/tools/go/internal/packagesdriver"
+ "golang.org/x/tools/internal/gopathwalk"
+ "golang.org/x/tools/internal/semver"
+ "golang.org/x/tools/internal/span"
+)
+
+// debug controls verbose logging.
+var debug, _ = strconv.ParseBool(os.Getenv("GOPACKAGESDEBUG"))
+
+// A goTooOldError reports that the go command
+// found by exec.LookPath is too old to use the new go list behavior.
+type goTooOldError struct {
+ error
+}
+
+// responseDeduper wraps a driverResponse, deduplicating its contents.
+type responseDeduper struct {
+ seenRoots map[string]bool
+ seenPackages map[string]*Package
+ dr *driverResponse
+}
+
+// init fills in r with a driverResponse.
+func (r *responseDeduper) init(dr *driverResponse) {
+ r.dr = dr
+ r.seenRoots = map[string]bool{}
+ r.seenPackages = map[string]*Package{}
+ for _, pkg := range dr.Packages {
+ r.seenPackages[pkg.ID] = pkg
+ }
+ for _, root := range dr.Roots {
+ r.seenRoots[root] = true
+ }
+}
+
+func (r *responseDeduper) addPackage(p *Package) {
+ if r.seenPackages[p.ID] != nil {
+ return
+ }
+ r.seenPackages[p.ID] = p
+ r.dr.Packages = append(r.dr.Packages, p)
+}
+
+func (r *responseDeduper) addRoot(id string) {
+ if r.seenRoots[id] {
+ return
+ }
+ r.seenRoots[id] = true
+ r.dr.Roots = append(r.dr.Roots, id)
+}
+
+// goInfo contains global information from the go tool.
+type goInfo struct {
+ rootDirs map[string]string
+ env goEnv
+}
+
+type goEnv struct {
+ modulesOn bool
+}
+
+func determineEnv(cfg *Config) goEnv {
+ buf, err := invokeGo(cfg, "env", "GOMOD")
+ if err != nil {
+ return goEnv{}
+ }
+ gomod := bytes.TrimSpace(buf.Bytes())
+
+ env := goEnv{}
+ env.modulesOn = len(gomod) > 0
+ return env
+}
+
+// goListDriver uses the go list command to interpret the patterns and produce
+// the build system package structure.
+// See driver for more details.
+func goListDriver(cfg *Config, patterns ...string) (*driverResponse, error) {
+ var sizes types.Sizes
+ var sizeserr error
+ var sizeswg sync.WaitGroup
+ if cfg.Mode&NeedTypesSizes != 0 || cfg.Mode&NeedTypes != 0 {
+ sizeswg.Add(1)
+ go func() {
+ sizes, sizeserr = getSizes(cfg)
+ sizeswg.Done()
+ }()
+ }
+ defer sizeswg.Wait()
+
+ // start fetching rootDirs
+ var info goInfo
+ var rootDirsReady, envReady = make(chan struct{}), make(chan struct{})
+ go func() {
+ info.rootDirs = determineRootDirs(cfg)
+ close(rootDirsReady)
+ }()
+ go func() {
+ info.env = determineEnv(cfg)
+ close(envReady)
+ }()
+ getGoInfo := func() *goInfo {
+ <-rootDirsReady
+ <-envReady
+ return &info
+ }
+
+ // Ensure that we don't leak goroutines: Load is synchronous, so callers will
+ // not expect it to access the fields of cfg after the call returns.
+ defer getGoInfo()
+
+ // always pass getGoInfo to golistDriver
+ golistDriver := func(cfg *Config, patterns ...string) (*driverResponse, error) {
+ return golistDriver(cfg, getGoInfo, patterns...)
+ }
+
+ // Determine files requested in contains patterns
+ var containFiles []string
+ var packagesNamed []string
+ restPatterns := make([]string, 0, len(patterns))
+ // Extract file= and other [querytype]= patterns. Report an error if querytype
+ // doesn't exist.
+extractQueries:
+ for _, pattern := range patterns {
+ eqidx := strings.Index(pattern, "=")
+ if eqidx < 0 {
+ restPatterns = append(restPatterns, pattern)
+ } else {
+ query, value := pattern[:eqidx], pattern[eqidx+len("="):]
+ switch query {
+ case "file":
+ containFiles = append(containFiles, value)
+ case "pattern":
+ restPatterns = append(restPatterns, value)
+ case "iamashamedtousethedisabledqueryname":
+ packagesNamed = append(packagesNamed, value)
+ case "": // not a reserved query
+ restPatterns = append(restPatterns, pattern)
+ default:
+ for _, rune := range query {
+ if rune < 'a' || rune > 'z' { // not a reserved query
+ restPatterns = append(restPatterns, pattern)
+ continue extractQueries
+ }
+ }
+ // Reject all other patterns containing "="
+ return nil, fmt.Errorf("invalid query type %q in query pattern %q", query, pattern)
+ }
+ }
+ }
+
+ response := &responseDeduper{}
+ var err error
+
+ // See if we have any patterns to pass through to go list. Zero initial
+ // patterns also requires a go list call, since it's the equivalent of
+ // ".".
+ if len(restPatterns) > 0 || len(patterns) == 0 {
+ dr, err := golistDriver(cfg, restPatterns...)
+ if err != nil {
+ return nil, err
+ }
+ response.init(dr)
+ } else {
+ response.init(&driverResponse{})
+ }
+
+ sizeswg.Wait()
+ if sizeserr != nil {
+ return nil, sizeserr
+ }
+ // types.SizesFor always returns nil or a *types.StdSizes
+ response.dr.Sizes, _ = sizes.(*types.StdSizes)
+
+ var containsCandidates []string
+
+ if len(containFiles) != 0 {
+ if err := runContainsQueries(cfg, golistDriver, response, containFiles, getGoInfo); err != nil {
+ return nil, err
+ }
+ }
+
+ if len(packagesNamed) != 0 {
+ if err := runNamedQueries(cfg, golistDriver, response, packagesNamed); err != nil {
+ return nil, err
+ }
+ }
+
+ modifiedPkgs, needPkgs, err := processGolistOverlay(cfg, response, getGoInfo)
+ if err != nil {
+ return nil, err
+ }
+ if len(containFiles) > 0 {
+ containsCandidates = append(containsCandidates, modifiedPkgs...)
+ containsCandidates = append(containsCandidates, needPkgs...)
+ }
+ if err := addNeededOverlayPackages(cfg, golistDriver, response, needPkgs, getGoInfo); err != nil {
+ return nil, err
+ }
+ // Check candidate packages for containFiles.
+ if len(containFiles) > 0 {
+ for _, id := range containsCandidates {
+ pkg, ok := response.seenPackages[id]
+ if !ok {
+ response.addPackage(&Package{
+ ID: id,
+ Errors: []Error{
+ {
+ Kind: ListError,
+ Msg: fmt.Sprintf("package %s expected but not seen", id),
+ },
+ },
+ })
+ continue
+ }
+ for _, f := range containFiles {
+ for _, g := range pkg.GoFiles {
+ if sameFile(f, g) {
+ response.addRoot(id)
+ }
+ }
+ }
+ }
+ }
+
+ return response.dr, nil
+}
+
+func addNeededOverlayPackages(cfg *Config, driver driver, response *responseDeduper, pkgs []string, getGoInfo func() *goInfo) error {
+ if len(pkgs) == 0 {
+ return nil
+ }
+ drivercfg := *cfg
+ if getGoInfo().env.modulesOn {
+ drivercfg.BuildFlags = append(drivercfg.BuildFlags, "-mod=readonly")
+ }
+ dr, err := driver(&drivercfg, pkgs...)
+
+ if err != nil {
+ return err
+ }
+ for _, pkg := range dr.Packages {
+ response.addPackage(pkg)
+ }
+ _, needPkgs, err := processGolistOverlay(cfg, response, getGoInfo)
+ if err != nil {
+ return err
+ }
+ return addNeededOverlayPackages(cfg, driver, response, needPkgs, getGoInfo)
+}
+
+func runContainsQueries(cfg *Config, driver driver, response *responseDeduper, queries []string, goInfo func() *goInfo) error {
+ for _, query := range queries {
+ // TODO(matloob): Do only one query per directory.
+ fdir := filepath.Dir(query)
+ // Pass absolute path of directory to go list so that it knows to treat it as a directory,
+ // not a package path.
+ pattern, err := filepath.Abs(fdir)
+ if err != nil {
+ return fmt.Errorf("could not determine absolute path of file= query path %q: %v", query, err)
+ }
+ dirResponse, err := driver(cfg, pattern)
+ if err != nil {
+ var queryErr error
+ if dirResponse, queryErr = adHocPackage(cfg, driver, pattern, query); queryErr != nil {
+ return err // return the original error
+ }
+ }
+ // `go list` can report errors for files that are not listed as part of a package's GoFiles.
+		// In the case of an invalid Go file, we should assume that it is part of the package if only
+ // one package is in the response. The file may have valid contents in an overlay.
+ if len(dirResponse.Packages) == 1 {
+ pkg := dirResponse.Packages[0]
+ for i, err := range pkg.Errors {
+ s := errorSpan(err)
+ if !s.IsValid() {
+ break
+ }
+ if len(pkg.CompiledGoFiles) == 0 {
+ break
+ }
+ dir := filepath.Dir(pkg.CompiledGoFiles[0])
+ filename := filepath.Join(dir, filepath.Base(s.URI().Filename()))
+ if info, err := os.Stat(filename); err != nil || info.IsDir() {
+ break
+ }
+ if !contains(pkg.CompiledGoFiles, filename) {
+ pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, filename)
+ pkg.GoFiles = append(pkg.GoFiles, filename)
+ pkg.Errors = append(pkg.Errors[:i], pkg.Errors[i+1:]...)
+ }
+ }
+ }
+ // A final attempt to construct an ad-hoc package.
+ if len(dirResponse.Packages) == 1 && len(dirResponse.Packages[0].Errors) == 1 {
+ var queryErr error
+ if dirResponse, queryErr = adHocPackage(cfg, driver, pattern, query); queryErr != nil {
+ return err // return the original error
+ }
+ }
+ isRoot := make(map[string]bool, len(dirResponse.Roots))
+ for _, root := range dirResponse.Roots {
+ isRoot[root] = true
+ }
+ for _, pkg := range dirResponse.Packages {
+ // Add any new packages to the main set
+ // We don't bother to filter packages that will be dropped by the changes of roots,
+ // that will happen anyway during graph construction outside this function.
+ // Over-reporting packages is not a problem.
+ response.addPackage(pkg)
+ // if the package was not a root one, it cannot have the file
+ if !isRoot[pkg.ID] {
+ continue
+ }
+ for _, pkgFile := range pkg.GoFiles {
+ if filepath.Base(query) == filepath.Base(pkgFile) {
+ response.addRoot(pkg.ID)
+ break
+ }
+ }
+ }
+ }
+ return nil
+}
+
+// adHocPackage attempts to construct an ad-hoc package given a query that failed.
+func adHocPackage(cfg *Config, driver driver, pattern, query string) (*driverResponse, error) {
+ // There was an error loading the package. Try to load the file as an ad-hoc package.
+ // Usually the error will appear in a returned package, but may not if we're in modules mode
+	// and the ad-hoc package is located outside a module.
+ dirResponse, err := driver(cfg, query)
+ if err != nil {
+ return nil, err
+ }
+ // If we get nothing back from `go list`, try to make this file into its own ad-hoc package.
+ if len(dirResponse.Packages) == 0 && err == nil {
+ dirResponse.Packages = append(dirResponse.Packages, &Package{
+ ID: "command-line-arguments",
+ PkgPath: query,
+ GoFiles: []string{query},
+ CompiledGoFiles: []string{query},
+ Imports: make(map[string]*Package),
+ })
+ dirResponse.Roots = append(dirResponse.Roots, "command-line-arguments")
+ }
+ // Special case to handle issue #33482:
+ // If this is a file= query for ad-hoc packages where the file only exists on an overlay,
+ // and exists outside of a module, add the file in for the package.
+ if len(dirResponse.Packages) == 1 && (dirResponse.Packages[0].ID == "command-line-arguments" ||
+ filepath.ToSlash(dirResponse.Packages[0].PkgPath) == filepath.ToSlash(query)) {
+ if len(dirResponse.Packages[0].GoFiles) == 0 {
+ filename := filepath.Join(pattern, filepath.Base(query)) // avoid recomputing abspath
+ // TODO(matloob): check if the file is outside of a root dir?
+ for path := range cfg.Overlay {
+ if path == filename {
+ dirResponse.Packages[0].Errors = nil
+ dirResponse.Packages[0].GoFiles = []string{path}
+ dirResponse.Packages[0].CompiledGoFiles = []string{path}
+ }
+ }
+ }
+ }
+ return dirResponse, nil
+}
+
+func contains(files []string, filename string) bool {
+ for _, f := range files {
+ if f == filename {
+ return true
+ }
+ }
+ return false
+}
+
+// errorSpan attempts to parse a standard `go list` error message
+// by stripping off the trailing error message.
+//
+// It works only on errors whose message is prefixed by colon,
+// followed by a space (": "). For example:
+//
+// attributes.go:13:1: expected 'package', found 'type'
+//
+func errorSpan(err Error) span.Span {
+ if err.Pos == "" {
+ input := strings.TrimSpace(err.Msg)
+ msgIndex := strings.Index(input, ": ")
+ if msgIndex < 0 {
+ return span.Parse(input)
+ }
+ return span.Parse(input[:msgIndex])
+ }
+ return span.Parse(err.Pos)
+}
+
+// modCacheRegexp splits a path in a module cache into module, module version, and package.
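+// For example, "golang.org/x/tools@v0.1.0/go/packages" is split into
+// "golang.org/x/tools", "v0.1.0", and "/go/packages".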
+var modCacheRegexp = regexp.MustCompile(`(.*)@([^/\\]*)(.*)`)
+
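+// runNamedQueries resolves name= queries by scanning GOROOT, GOPATH, the
+// current module, and the module cache for directories whose last path
+// components match the queried package names.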
+func runNamedQueries(cfg *Config, driver driver, response *responseDeduper, queries []string) error {
+ // calling `go env` isn't free; bail out if there's nothing to do.
+ if len(queries) == 0 {
+ return nil
+ }
+ // Determine which directories are relevant to scan.
+ roots, modRoot, err := roots(cfg)
+ if err != nil {
+ return err
+ }
+
+ // Scan the selected directories. Simple matches, from GOPATH/GOROOT
+ // or the local module, can simply be "go list"ed. Matches from the
+ // module cache need special treatment.
+ var matchesMu sync.Mutex
+ var simpleMatches, modCacheMatches []string
+ add := func(root gopathwalk.Root, dir string) {
+ // Walk calls this concurrently; protect the result slices.
+ matchesMu.Lock()
+ defer matchesMu.Unlock()
+
+ path := dir
+ if dir != root.Path {
+ path = dir[len(root.Path)+1:]
+ }
+ if pathMatchesQueries(path, queries) {
+ switch root.Type {
+ case gopathwalk.RootModuleCache:
+ modCacheMatches = append(modCacheMatches, path)
+ case gopathwalk.RootCurrentModule:
+ // We'd need to read go.mod to find the full
+ // import path. Relative's easier.
+ rel, err := filepath.Rel(cfg.Dir, dir)
+ if err != nil {
+ // This ought to be impossible, since
+ // we found dir in the current module.
+ panic(err)
+ }
+ simpleMatches = append(simpleMatches, "./"+rel)
+ case gopathwalk.RootGOPATH, gopathwalk.RootGOROOT:
+ simpleMatches = append(simpleMatches, path)
+ }
+ }
+ }
+
+ startWalk := time.Now()
+ gopathwalk.Walk(roots, add, gopathwalk.Options{ModulesEnabled: modRoot != "", Debug: debug})
+ cfg.Logf("%v for walk", time.Since(startWalk))
+
+ // Weird special case: the top-level package in a module will be in
+ // whatever directory the user checked the repository out into. It's
+ // more reasonable for that to not match the package name. So, if there
+ // are any Go files in the mod root, query it just to be safe.
+ if modRoot != "" {
+ rel, err := filepath.Rel(cfg.Dir, modRoot)
+ if err != nil {
+ panic(err) // See above.
+ }
+
+ files, err := ioutil.ReadDir(modRoot)
+ if err != nil {
+ panic(err) // See above.
+ }
+
+ for _, f := range files {
+ if strings.HasSuffix(f.Name(), ".go") {
+ simpleMatches = append(simpleMatches, rel)
+ break
+ }
+ }
+ }
+
+ addResponse := func(r *driverResponse) {
+ for _, pkg := range r.Packages {
+ response.addPackage(pkg)
+ for _, name := range queries {
+ if pkg.Name == name {
+ response.addRoot(pkg.ID)
+ break
+ }
+ }
+ }
+ }
+
+ if len(simpleMatches) != 0 {
+ resp, err := driver(cfg, simpleMatches...)
+ if err != nil {
+ return err
+ }
+ addResponse(resp)
+ }
+
+ // Module cache matches are tricky. We want to avoid downloading new
+ // versions of things, so we need to use the ones present in the cache.
+ // go list doesn't accept version specifiers, so we have to write out a
+ // temporary module, and do the list in that module.
+ if len(modCacheMatches) != 0 {
+ // Collect all the matches, deduplicating by major version
+ // and preferring the newest.
+ type modInfo struct {
+ mod string
+ major string
+ }
+ mods := make(map[modInfo]string)
+ var imports []string
+ for _, modPath := range modCacheMatches {
+ matches := modCacheRegexp.FindStringSubmatch(modPath)
+ mod, ver := filepath.ToSlash(matches[1]), matches[2]
+ importPath := filepath.ToSlash(filepath.Join(matches[1], matches[3]))
+
+ major := semver.Major(ver)
+ if prevVer, ok := mods[modInfo{mod, major}]; !ok || semver.Compare(ver, prevVer) > 0 {
+ mods[modInfo{mod, major}] = ver
+ }
+
+ imports = append(imports, importPath)
+ }
+
+ // Build the temporary module.
+ var gomod bytes.Buffer
+ gomod.WriteString("module modquery\nrequire (\n")
+ for mod, version := range mods {
+ gomod.WriteString("\t" + mod.mod + " " + version + "\n")
+ }
+ gomod.WriteString(")\n")
+
+ tmpCfg := *cfg
+
+ // We're only trying to look at stuff in the module cache, so
+ // disable the network. This should speed things up, and has
+ // prevented errors in at least one case, #28518.
+ tmpCfg.Env = append([]string{"GOPROXY=off"}, cfg.Env...)
+
+ var err error
+ tmpCfg.Dir, err = ioutil.TempDir("", "gopackages-modquery")
+ if err != nil {
+ return err
+ }
+ defer os.RemoveAll(tmpCfg.Dir)
+
+ if err := ioutil.WriteFile(filepath.Join(tmpCfg.Dir, "go.mod"), gomod.Bytes(), 0777); err != nil {
+ return fmt.Errorf("writing go.mod for module cache query: %v", err)
+ }
+
+ // Run the query, using the import paths calculated from the matches above.
+ resp, err := driver(&tmpCfg, imports...)
+ if err != nil {
+ return fmt.Errorf("querying module cache matches: %v", err)
+ }
+ addResponse(resp)
+ }
+
+ return nil
+}
+
+func getSizes(cfg *Config) (types.Sizes, error) {
+ return packagesdriver.GetSizesGolist(cfg.Context, cfg.BuildFlags, cfg.Env, cfg.Dir, usesExportData(cfg))
+}
+
+// roots selects the appropriate paths to walk based on the passed-in configuration,
+// particularly the environment and the presence of a go.mod in cfg.Dir's parents.
+func roots(cfg *Config) ([]gopathwalk.Root, string, error) {
+ stdout, err := invokeGo(cfg, "env", "GOROOT", "GOPATH", "GOMOD")
+ if err != nil {
+ return nil, "", err
+ }
+
+ fields := strings.Split(stdout.String(), "\n")
+ if len(fields) != 4 || len(fields[3]) != 0 {
+ return nil, "", fmt.Errorf("go env returned unexpected output: %q", stdout.String())
+ }
+ goroot, gopath, gomod := fields[0], filepath.SplitList(fields[1]), fields[2]
+ var modDir string
+ if gomod != "" {
+ modDir = filepath.Dir(gomod)
+ }
+
+ var roots []gopathwalk.Root
+ // Always add GOROOT.
+ roots = append(roots, gopathwalk.Root{
+ Path: filepath.Join(goroot, "/src"),
+ Type: gopathwalk.RootGOROOT,
+ })
+ // If modules are enabled, scan the module dir.
+ if modDir != "" {
+ roots = append(roots, gopathwalk.Root{
+ Path: modDir,
+ Type: gopathwalk.RootCurrentModule,
+ })
+ }
+ // Add either GOPATH/src or GOPATH/pkg/mod, depending on module mode.
+ for _, p := range gopath {
+ if modDir != "" {
+ roots = append(roots, gopathwalk.Root{
+ Path: filepath.Join(p, "/pkg/mod"),
+ Type: gopathwalk.RootModuleCache,
+ })
+ } else {
+ roots = append(roots, gopathwalk.Root{
+ Path: filepath.Join(p, "/src"),
+ Type: gopathwalk.RootGOPATH,
+ })
+ }
+ }
+
+ return roots, modDir, nil
+}
+
+// These functions were copied from goimports. See further documentation there.
+
+// pathMatchesQueries is adapted from pkgIsCandidate.
+// TODO: is it reasonable to do Contains here, rather than an exact match on a path component?
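+// For example, the path "foo/bar-baz" matches the query "barbaz" once
+// hyphens are removed.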
+func pathMatchesQueries(path string, queries []string) bool {
+ lastTwo := lastTwoComponents(path)
+ for _, query := range queries {
+ if strings.Contains(lastTwo, query) {
+ return true
+ }
+ if hasHyphenOrUpperASCII(lastTwo) && !hasHyphenOrUpperASCII(query) {
+ lastTwo = lowerASCIIAndRemoveHyphen(lastTwo)
+ if strings.Contains(lastTwo, query) {
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// lastTwoComponents returns at most the last two path components
+// of v, using either / or \ as the path separator.
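+// For example, lastTwoComponents("a/b/c/d") returns "/c/d".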
+func lastTwoComponents(v string) string {
+ nslash := 0
+ for i := len(v) - 1; i >= 0; i-- {
+ if v[i] == '/' || v[i] == '\\' {
+ nslash++
+ if nslash == 2 {
+ return v[i:]
+ }
+ }
+ }
+ return v
+}
+
+func hasHyphenOrUpperASCII(s string) bool {
+ for i := 0; i < len(s); i++ {
+ b := s[i]
+ if b == '-' || ('A' <= b && b <= 'Z') {
+ return true
+ }
+ }
+ return false
+}
+
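+// lowerASCIIAndRemoveHyphen lower-cases ASCII letters and drops hyphens;
+// for example, "Azure-SDK" becomes "azuresdk".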
+func lowerASCIIAndRemoveHyphen(s string) (ret string) {
+ buf := make([]byte, 0, len(s))
+ for i := 0; i < len(s); i++ {
+ b := s[i]
+ switch {
+ case b == '-':
+ continue
+ case 'A' <= b && b <= 'Z':
+ buf = append(buf, b+('a'-'A'))
+ default:
+ buf = append(buf, b)
+ }
+ }
+ return string(buf)
+}
+
+// Fields must match go list;
+// see $GOROOT/src/cmd/go/internal/load/pkg.go.
+type jsonPackage struct {
+ ImportPath string
+ Dir string
+ Name string
+ Export string
+ GoFiles []string
+ CompiledGoFiles []string
+ CFiles []string
+ CgoFiles []string
+ CXXFiles []string
+ MFiles []string
+ HFiles []string
+ FFiles []string
+ SFiles []string
+ SwigFiles []string
+ SwigCXXFiles []string
+ SysoFiles []string
+ Imports []string
+ ImportMap map[string]string
+ Deps []string
+ TestGoFiles []string
+ TestImports []string
+ XTestGoFiles []string
+ XTestImports []string
+ ForTest string // q in a "p [q.test]" package, else ""
+ DepOnly bool
+
+ Error *jsonPackageError
+}
+
+type jsonPackageError struct {
+ ImportStack []string
+ Pos string
+ Err string
+}
+
+func otherFiles(p *jsonPackage) [][]string {
+ return [][]string{p.CFiles, p.CXXFiles, p.MFiles, p.HFiles, p.FFiles, p.SFiles, p.SwigFiles, p.SwigCXXFiles, p.SysoFiles}
+}
+
+// golistDriver uses the "go list" command to expand the pattern
+// words and return metadata for the specified packages. dir may be
+// "" and env may be nil, as per os/exec.Command.
+func golistDriver(cfg *Config, rootsDirs func() *goInfo, words ...string) (*driverResponse, error) {
+ // go list uses the following identifiers in ImportPath and Imports:
+ //
+ // "p" -- importable package or main (command)
+ // "q.test" -- q's test executable
+ // "p [q.test]" -- variant of p as built for q's test executable
+ // "q_test [q.test]" -- q's external test package
+ //
+ // The packages p that are built differently for a test q.test
+ // are q itself, plus any helpers used by the external test q_test,
+ // typically including "testing" and all its dependencies.
+
+ // Run "go list" for complete
+ // information on the specified packages.
+ buf, err := invokeGo(cfg, golistargs(cfg, words)...)
+ if err != nil {
+ return nil, err
+ }
+ seen := make(map[string]*jsonPackage)
+ // Decode the JSON and convert it to Package form.
+ var response driverResponse
+ for dec := json.NewDecoder(buf); dec.More(); {
+ p := new(jsonPackage)
+ if err := dec.Decode(p); err != nil {
+ return nil, fmt.Errorf("JSON decoding failed: %v", err)
+ }
+
+ if p.ImportPath == "" {
+ // The documentation for go list says that “[e]rroneous packages will have
+ // a non-empty ImportPath”. If for some reason it comes back empty, we
+ // prefer to error out rather than silently discarding data or handing
+ // back a package without any way to refer to it.
+ if p.Error != nil {
+ return nil, Error{
+ Pos: p.Error.Pos,
+ Msg: p.Error.Err,
+ }
+ }
+ return nil, fmt.Errorf("package missing import path: %+v", p)
+ }
+
+ // Work around https://golang.org/issue/33157:
+ // go list -e, when given an absolute path, will find the package contained at
+ // that directory. But when no package exists there, it will return a fake package
+ // with an error and the ImportPath set to the absolute path provided to go list.
+ // Try to convert that absolute path to what its package path would be if it's
+ // contained in a known module or GOPATH entry. This will allow the package to be
+ // properly "reclaimed" when overlays are processed.
+ if filepath.IsAbs(p.ImportPath) && p.Error != nil {
+ pkgPath, ok := getPkgPath(cfg, p.ImportPath, rootsDirs)
+ if ok {
+ p.ImportPath = pkgPath
+ }
+ }
+
+ if old, found := seen[p.ImportPath]; found {
+ if !reflect.DeepEqual(p, old) {
+ return nil, fmt.Errorf("internal error: go list gives conflicting information for package %v", p.ImportPath)
+ }
+ // skip the duplicate
+ continue
+ }
+ seen[p.ImportPath] = p
+
+ pkg := &Package{
+ Name: p.Name,
+ ID: p.ImportPath,
+ GoFiles: absJoin(p.Dir, p.GoFiles, p.CgoFiles),
+ CompiledGoFiles: absJoin(p.Dir, p.CompiledGoFiles),
+ OtherFiles: absJoin(p.Dir, otherFiles(p)...),
+ }
+
+ // Work around https://golang.org/issue/28749:
+ // cmd/go puts assembly, C, and C++ files in CompiledGoFiles.
+ // Filter out any elements of CompiledGoFiles that are also in OtherFiles.
+ // We have to keep this workaround in place until go1.12 is a distant memory.
+ if len(pkg.OtherFiles) > 0 {
+ other := make(map[string]bool, len(pkg.OtherFiles))
+ for _, f := range pkg.OtherFiles {
+ other[f] = true
+ }
+
+ out := pkg.CompiledGoFiles[:0]
+ for _, f := range pkg.CompiledGoFiles {
+ if other[f] {
+ continue
+ }
+ out = append(out, f)
+ }
+ pkg.CompiledGoFiles = out
+ }
+
+ // Extract the PkgPath from the package's ID.
+ if i := strings.IndexByte(pkg.ID, ' '); i >= 0 {
+ pkg.PkgPath = pkg.ID[:i]
+ } else {
+ pkg.PkgPath = pkg.ID
+ }
+
+ if pkg.PkgPath == "unsafe" {
+ pkg.GoFiles = nil // ignore fake unsafe.go file
+ }
+
+ // Assume go list emits only absolute paths for Dir.
+ if p.Dir != "" && !filepath.IsAbs(p.Dir) {
+ log.Fatalf("internal error: go list returned non-absolute Package.Dir: %s", p.Dir)
+ }
+
+ if p.Export != "" && !filepath.IsAbs(p.Export) {
+ pkg.ExportFile = filepath.Join(p.Dir, p.Export)
+ } else {
+ pkg.ExportFile = p.Export
+ }
+
+ // imports
+ //
+ // Imports contains the IDs of all imported packages.
+ // ImportsMap records (path, ID) only where they differ.
+ ids := make(map[string]bool)
+ for _, id := range p.Imports {
+ ids[id] = true
+ }
+ pkg.Imports = make(map[string]*Package)
+ for path, id := range p.ImportMap {
+ pkg.Imports[path] = &Package{ID: id} // non-identity import
+ delete(ids, id)
+ }
+ for id := range ids {
+ if id == "C" {
+ continue
+ }
+
+ pkg.Imports[id] = &Package{ID: id} // identity import
+ }
+ if !p.DepOnly {
+ response.Roots = append(response.Roots, pkg.ID)
+ }
+
+		// Workaround for pre-go1.11 versions of go list.
+ // TODO(matloob): they should be handled by the fallback.
+ // Can we delete this?
+ if len(pkg.CompiledGoFiles) == 0 {
+ pkg.CompiledGoFiles = pkg.GoFiles
+ }
+
+ if p.Error != nil {
+ pkg.Errors = append(pkg.Errors, Error{
+ Pos: p.Error.Pos,
+ Msg: strings.TrimSpace(p.Error.Err), // Trim to work around golang.org/issue/32363.
+ })
+ }
+
+ response.Packages = append(response.Packages, pkg)
+ }
+
+ return &response, nil
+}
+
+// getPkgPath finds the package path of a directory if it's relative to a root directory.
+func getPkgPath(cfg *Config, dir string, goInfo func() *goInfo) (string, bool) {
+ absDir, err := filepath.Abs(dir)
+ if err != nil {
+ cfg.Logf("error getting absolute path of %s: %v", dir, err)
+ return "", false
+ }
+ for rdir, rpath := range goInfo().rootDirs {
+ absRdir, err := filepath.Abs(rdir)
+ if err != nil {
+ cfg.Logf("error getting absolute path of %s: %v", rdir, err)
+ continue
+ }
+ // Make sure that the directory is in the module,
+ // to avoid creating a path relative to another module.
+ if !strings.HasPrefix(absDir, absRdir) {
+ cfg.Logf("%s does not have prefix %s", absDir, absRdir)
+ continue
+ }
+ // TODO(matloob): This doesn't properly handle symlinks.
+ r, err := filepath.Rel(rdir, dir)
+ if err != nil {
+ continue
+ }
+ if rpath != "" {
+			// We choose only one root even though the directory can belong to multiple modules
+ // or GOPATH entries. This is okay because we only need to work with absolute dirs when a
+ // file is missing from disk, for instance when gopls calls go/packages in an overlay.
+ // Once the file is saved, gopls, or the next invocation of the tool will get the correct
+ // result straight from golist.
+ // TODO(matloob): Implement module tiebreaking?
+ return path.Join(rpath, filepath.ToSlash(r)), true
+ }
+ return filepath.ToSlash(r), true
+ }
+ return "", false
+}
+
+// absJoin absolutizes and flattens the lists of files.
+func absJoin(dir string, fileses ...[]string) (res []string) {
+ for _, files := range fileses {
+ for _, file := range files {
+ if !filepath.IsAbs(file) {
+ file = filepath.Join(dir, file)
+ }
+ res = append(res, file)
+ }
+ }
+ return res
+}
+
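+// golistargs builds the go list argument list, enabling -compiled, -test,
+// -export, -deps, and -find based on the configured mode and flags.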
+func golistargs(cfg *Config, words []string) []string {
+ const findFlags = NeedImports | NeedTypes | NeedSyntax | NeedTypesInfo
+ fullargs := []string{
+ "list", "-e", "-json",
+ fmt.Sprintf("-compiled=%t", cfg.Mode&(NeedCompiledGoFiles|NeedSyntax|NeedTypesInfo|NeedTypesSizes) != 0),
+ fmt.Sprintf("-test=%t", cfg.Tests),
+ fmt.Sprintf("-export=%t", usesExportData(cfg)),
+ fmt.Sprintf("-deps=%t", cfg.Mode&NeedImports != 0),
+ // go list doesn't let you pass -test and -find together,
+ // probably because you'd just get the TestMain.
+ fmt.Sprintf("-find=%t", !cfg.Tests && cfg.Mode&findFlags == 0),
+ }
+ fullargs = append(fullargs, cfg.BuildFlags...)
+ fullargs = append(fullargs, "--")
+ fullargs = append(fullargs, words...)
+ return fullargs
+}
+
+// invokeGo returns the stdout of a go command invocation.
+func invokeGo(cfg *Config, args ...string) (*bytes.Buffer, error) {
+ stdout := new(bytes.Buffer)
+ stderr := new(bytes.Buffer)
+ cmd := exec.CommandContext(cfg.Context, "go", args...)
+ // On darwin the cwd gets resolved to the real path, which breaks anything that
+ // expects the working directory to keep the original path, including the
+ // go command when dealing with modules.
+ // The Go stdlib has a special feature where if the cwd and the PWD are the
+ // same node then it trusts the PWD, so by setting it in the env for the child
+ // process we fix up all the paths returned by the go command.
+ cmd.Env = append(append([]string{}, cfg.Env...), "PWD="+cfg.Dir)
+ cmd.Dir = cfg.Dir
+ cmd.Stdout = stdout
+ cmd.Stderr = stderr
+ defer func(start time.Time) {
+ cfg.Logf("%s for %v, stderr: <<%s>> stdout: <<%s>>\n", time.Since(start), cmdDebugStr(cmd, args...), stderr, stdout)
+ }(time.Now())
+
+ if err := cmd.Run(); err != nil {
+ // Check for 'go' executable not being found.
+ if ee, ok := err.(*exec.Error); ok && ee.Err == exec.ErrNotFound {
+ return nil, fmt.Errorf("'go list' driver requires 'go', but %s", exec.ErrNotFound)
+ }
+
+ exitErr, ok := err.(*exec.ExitError)
+ if !ok {
+ // Catastrophic error:
+ // - context cancellation
+ return nil, fmt.Errorf("couldn't exec 'go %v': %s %T", args, err, err)
+ }
+
+ // Old go version?
+ if strings.Contains(stderr.String(), "flag provided but not defined") {
+ return nil, goTooOldError{fmt.Errorf("unsupported version of go: %s: %s", exitErr, stderr)}
+ }
+
+ // Related to #24854
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "unexpected directory layout") {
+ return nil, fmt.Errorf("%s", stderr.String())
+ }
+
+ // Is there an error running the C compiler in cgo? This will be reported in the "Error" field
+ // and should be suppressed by go list -e.
+ //
+		// This condition is not perfect yet because the error message can include error messages other than the runtime/cgo one.
+ isPkgPathRune := func(r rune) bool {
+ // From https://golang.org/ref/spec#Import_declarations:
+ // Implementation restriction: A compiler may restrict ImportPaths to non-empty strings
+ // using only characters belonging to Unicode's L, M, N, P, and S general categories
+ // (the Graphic characters without spaces) and may also exclude the
+ // characters !"#$%&'()*,:;<=>?[\]^`{|} and the Unicode replacement character U+FFFD.
+ return unicode.IsOneOf([]*unicode.RangeTable{unicode.L, unicode.M, unicode.N, unicode.P, unicode.S}, r) &&
+ !strings.ContainsRune("!\"#$%&'()*,:;<=>?[\\]^`{|}\uFFFD", r)
+ }
+ if len(stderr.String()) > 0 && strings.HasPrefix(stderr.String(), "# ") {
+ if strings.HasPrefix(strings.TrimLeftFunc(stderr.String()[len("# "):], isPkgPathRune), "\n") {
+ return stdout, nil
+ }
+ }
+
+ // This error only appears in stderr. See golang.org/cl/166398 for a fix in go list to show
+ // the error in the Err section of stdout in case -e option is provided.
+ // This fix is provided for backwards compatibility.
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must be .go files") {
+ output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ strings.Trim(stderr.String(), "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Similar to the previous error, but currently lacks a fix in Go.
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "named files must all be in one directory") {
+ output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ strings.Trim(stderr.String(), "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Backwards compatibility for Go 1.11 because 1.12 and 1.13 put the directory in the ImportPath.
+ // If the package doesn't exist, put the absolute path of the directory into the error message,
+ // as Go 1.13 list does.
+ const noSuchDirectory = "no such directory"
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), noSuchDirectory) {
+ errstr := stderr.String()
+ abspath := strings.TrimSpace(errstr[strings.Index(errstr, noSuchDirectory)+len(noSuchDirectory):])
+ output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ abspath, strings.Trim(stderr.String(), "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Workaround for #29280: go list -e has incorrect behavior when an ad-hoc package doesn't exist.
+	// Note that the error message we look for in this case is different from the one looked for above.
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no such file or directory") {
+ output := fmt.Sprintf(`{"ImportPath": "command-line-arguments","Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ strings.Trim(stderr.String(), "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Workaround for #34273. go list -e with GO111MODULE=on has incorrect behavior when listing a
+ // directory outside any module.
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "outside available modules") {
+ output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ // TODO(matloob): command-line-arguments isn't correct here.
+ "command-line-arguments", strings.Trim(stderr.String(), "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Another variation of the previous error
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "outside module root") {
+ output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ // TODO(matloob): command-line-arguments isn't correct here.
+ "command-line-arguments", strings.Trim(stderr.String(), "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Workaround for an instance of golang.org/issue/26755: go list -e will return a non-zero exit
+ // status if there's a dependency on a package that doesn't exist. But it should return
+ // a zero exit status and set an error on that package.
+ if len(stderr.String()) > 0 && strings.Contains(stderr.String(), "no Go files in") {
+ // Don't clobber stdout if `go list` actually returned something.
+ if len(stdout.String()) > 0 {
+ return stdout, nil
+ }
+ // try to extract package name from string
+ stderrStr := stderr.String()
+ var importPath string
+ colon := strings.Index(stderrStr, ":")
+ if colon > 0 && strings.HasPrefix(stderrStr, "go build ") {
+ importPath = stderrStr[len("go build "):colon]
+ }
+ output := fmt.Sprintf(`{"ImportPath": %q,"Incomplete": true,"Error": {"Pos": "","Err": %q}}`,
+ importPath, strings.Trim(stderrStr, "\n"))
+ return bytes.NewBufferString(output), nil
+ }
+
+ // Export mode entails a build.
+ // If that build fails, errors appear on stderr
+ // (despite the -e flag) and the Export field is blank.
+ // Do not fail in that case.
+ // The same is true if an ad-hoc package given to go list doesn't exist.
+ // TODO(matloob): Remove these once we can depend on go list to exit with a zero status with -e even when
+ // packages don't exist or a build fails.
+ if !usesExportData(cfg) && !containsGoFile(args) {
+ return nil, fmt.Errorf("go %v: %s: %s", args, exitErr, stderr)
+ }
+ }
+
+ // As of writing, go list -export prints some non-fatal compilation
+ // errors to stderr, even with -e set. We would prefer that it put
+ // them in the Package.Error JSON (see https://golang.org/issue/26319).
+ // In the meantime, there's nowhere good to put them, but they can
+ // be useful for debugging. Print them if $GOPACKAGESPRINTGOLISTERRORS
+ // is set.
+ if len(stderr.Bytes()) != 0 && os.Getenv("GOPACKAGESPRINTGOLISTERRORS") != "" {
+ fmt.Fprintf(os.Stderr, "%s stderr: <<%s>>\n", cmdDebugStr(cmd, args...), stderr)
+ }
+ return stdout, nil
+}
+
+func containsGoFile(s []string) bool {
+ for _, f := range s {
+ if strings.HasSuffix(f, ".go") {
+ return true
+ }
+ }
+ return false
+}
+
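+// cmdDebugStr renders the command and the relevant environment variables
+// in a form suitable for debug logging.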
+func cmdDebugStr(cmd *exec.Cmd, args ...string) string {
+ env := make(map[string]string)
+ for _, kv := range cmd.Env {
+		split := strings.SplitN(kv, "=", 2) // values may themselves contain '='
+		k, v := split[0], split[1]
+ env[k] = v
+ }
+ var quotedArgs []string
+ for _, arg := range args {
+ quotedArgs = append(quotedArgs, strconv.Quote(arg))
+ }
+
+ return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v PWD=%v go %s", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["PWD"], strings.Join(quotedArgs, " "))
+}
diff --git a/vendor/golang.org/x/tools/go/packages/golist_overlay.go b/vendor/golang.org/x/tools/go/packages/golist_overlay.go
new file mode 100644
index 0000000000000..a7de62299d6ea
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/golist_overlay.go
@@ -0,0 +1,293 @@
+package packages
+
+import (
+ "bytes"
+ "encoding/json"
+ "fmt"
+ "go/parser"
+ "go/token"
+ "path/filepath"
+ "strconv"
+ "strings"
+)
+
+// processGolistOverlay provides rudimentary support for adding
+// files that don't exist on disk to an overlay. The results can
+// sometimes be incorrect.
+// TODO(matloob): Handle unsupported cases, including the following:
+// - determining the correct package to add given a new import path
+func processGolistOverlay(cfg *Config, response *responseDeduper, rootDirs func() *goInfo) (modifiedPkgs, needPkgs []string, err error) {
+ havePkgs := make(map[string]string) // importPath -> non-test package ID
+ needPkgsSet := make(map[string]bool)
+ modifiedPkgsSet := make(map[string]bool)
+
+ for _, pkg := range response.dr.Packages {
+ // This is an approximation of import path to id. This can be
+ // wrong for tests, vendored packages, and a number of other cases.
+ havePkgs[pkg.PkgPath] = pkg.ID
+ }
+
+ // If no new imports are added, it is safe to avoid loading any needPkgs.
+ // Otherwise, it's hard to tell which package is actually being loaded
+ // (due to vendoring) and whether any modified package will show up
+ // in the transitive set of dependencies (because new imports are added,
+ // potentially modifying the transitive set of dependencies).
+ var overlayAddsImports bool
+
+ for opath, contents := range cfg.Overlay {
+ base := filepath.Base(opath)
+ dir := filepath.Dir(opath)
+ var pkg *Package // if opath belongs to both a package and its test variant, this will be the test variant
+ var testVariantOf *Package // if opath is a test file, this is the package it is testing
+ var fileExists bool
+ isTestFile := strings.HasSuffix(opath, "_test.go")
+ pkgName, ok := extractPackageName(opath, contents)
+ if !ok {
+ // Don't bother adding a file that doesn't even have a parsable package statement
+ // to the overlay.
+ continue
+ }
+ nextPackage:
+ for _, p := range response.dr.Packages {
+ if pkgName != p.Name && p.ID != "command-line-arguments" {
+ continue
+ }
+ for _, f := range p.GoFiles {
+ if !sameFile(filepath.Dir(f), dir) {
+ continue
+ }
+ // Make sure to capture information on the package's test variant, if needed.
+ if isTestFile && !hasTestFiles(p) {
+ // TODO(matloob): Are there packages other than the 'production' variant
+ // of a package that this can match? This shouldn't match the test main package
+ // because the file is generated in another directory.
+ testVariantOf = p
+ continue nextPackage
+ }
+ if pkg != nil && p != pkg && pkg.PkgPath == p.PkgPath {
+ // If we've already seen the test variant,
+ // make sure to label which package it is a test variant of.
+ if hasTestFiles(pkg) {
+ testVariantOf = p
+ continue nextPackage
+ }
+ // If we have already seen the package of which this is a test variant.
+ if hasTestFiles(p) {
+ testVariantOf = pkg
+ }
+ }
+ pkg = p
+ if filepath.Base(f) == base {
+ fileExists = true
+ }
+ }
+ }
+ // The overlay could have included an entirely new package.
+ if pkg == nil {
+ // Try to find the module or gopath dir the file is contained in.
+			// Then for modules, add the module path to the beginning.
+ pkgPath, ok := getPkgPath(cfg, dir, rootDirs)
+ if !ok {
+ break
+ }
+ isXTest := strings.HasSuffix(pkgName, "_test")
+ if isXTest {
+ pkgPath += "_test"
+ }
+ id := pkgPath
+ if isTestFile && !isXTest {
+ id = fmt.Sprintf("%s [%s.test]", pkgPath, pkgPath)
+ }
+ // Try to reclaim a package with the same id if it exists in the response.
+ for _, p := range response.dr.Packages {
+ if reclaimPackage(p, id, opath, contents) {
+ pkg = p
+ break
+ }
+ }
+ // Otherwise, create a new package
+ if pkg == nil {
+ pkg = &Package{PkgPath: pkgPath, ID: id, Name: pkgName, Imports: make(map[string]*Package)}
+ response.addPackage(pkg)
+ havePkgs[pkg.PkgPath] = id
+ // Add the production package's sources for a test variant.
+ if isTestFile && !isXTest && testVariantOf != nil {
+ pkg.GoFiles = append(pkg.GoFiles, testVariantOf.GoFiles...)
+ pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, testVariantOf.CompiledGoFiles...)
+ }
+ }
+ }
+ if !fileExists {
+ pkg.GoFiles = append(pkg.GoFiles, opath)
+ // TODO(matloob): Adding the file to CompiledGoFiles can exhibit the wrong behavior
+ // if the file will be ignored due to its build tags.
+ pkg.CompiledGoFiles = append(pkg.CompiledGoFiles, opath)
+ modifiedPkgsSet[pkg.ID] = true
+ }
+ imports, err := extractImports(opath, contents)
+ if err != nil {
+ // Let the parser or type checker report errors later.
+ continue
+ }
+ for _, imp := range imports {
+ _, found := pkg.Imports[imp]
+ if !found {
+ overlayAddsImports = true
+ // TODO(matloob): Handle cases when the following block isn't correct.
+ // These include imports of vendored packages, etc.
+ id, ok := havePkgs[imp]
+ if !ok {
+ id = imp
+ }
+ pkg.Imports[imp] = &Package{ID: id}
+				// Add dependencies to the non-test variant version of this package as well.
+ if testVariantOf != nil {
+ testVariantOf.Imports[imp] = &Package{ID: id}
+ }
+ }
+ }
+ continue
+ }
+
+ // toPkgPath tries to guess the package path given the id.
+ // This isn't always correct -- it's certainly wrong for
+ // vendored packages' paths.
+ toPkgPath := func(id string) string {
+ // TODO(matloob): Handle vendor paths.
+ i := strings.IndexByte(id, ' ')
+ if i >= 0 {
+ return id[:i]
+ }
+ return id
+ }
+
+ // Do another pass now that new packages have been created to determine the
+ // set of missing packages.
+ for _, pkg := range response.dr.Packages {
+ for _, imp := range pkg.Imports {
+ pkgPath := toPkgPath(imp.ID)
+ if _, ok := havePkgs[pkgPath]; !ok {
+ needPkgsSet[pkgPath] = true
+ }
+ }
+ }
+
+ if overlayAddsImports {
+ needPkgs = make([]string, 0, len(needPkgsSet))
+ for pkg := range needPkgsSet {
+ needPkgs = append(needPkgs, pkg)
+ }
+ }
+ modifiedPkgs = make([]string, 0, len(modifiedPkgsSet))
+ for pkg := range modifiedPkgsSet {
+ modifiedPkgs = append(modifiedPkgs, pkg)
+ }
+ return modifiedPkgs, needPkgs, err
+}
+
+func hasTestFiles(p *Package) bool {
+ for _, f := range p.GoFiles {
+ if strings.HasSuffix(f, "_test.go") {
+ return true
+ }
+ }
+ return false
+}
+
+// determineRootDirs returns a mapping from directories code can be contained in to the
+// corresponding import path prefixes of those directories.
+// Its result is used to try to determine the import path for a package containing
+// an overlay file.
+func determineRootDirs(cfg *Config) map[string]string {
+ // Assume modules first:
+ out, err := invokeGo(cfg, "list", "-m", "-json", "all")
+ if err != nil {
+ return determineRootDirsGOPATH(cfg)
+ }
+ m := map[string]string{}
+ type jsonMod struct{ Path, Dir string }
+ for dec := json.NewDecoder(out); dec.More(); {
+ mod := new(jsonMod)
+ if err := dec.Decode(mod); err != nil {
+ return m // Give up and return an empty map. Package won't be found for overlay.
+ }
+ if mod.Dir != "" && mod.Path != "" {
+ // This is a valid module; add it to the map.
+ m[mod.Dir] = mod.Path
+ }
+ }
+ return m
+}
+
+func determineRootDirsGOPATH(cfg *Config) map[string]string {
+ m := map[string]string{}
+ out, err := invokeGo(cfg, "env", "GOPATH")
+ if err != nil {
+ // Could not determine root dir mapping. Everything is best-effort, so just return an empty map.
+ // When we try to find the import path for a directory, there will be no root-dir match and
+ // we'll give up.
+ return m
+ }
+ for _, p := range filepath.SplitList(string(bytes.TrimSpace(out.Bytes()))) {
+ m[filepath.Join(p, "src")] = ""
+ }
+ return m
+}
+
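+// extractImports parses only the import declarations of the given file
+// contents and returns the unquoted import paths.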
+func extractImports(filename string, contents []byte) ([]string, error) {
+ f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.ImportsOnly) // TODO(matloob): reuse fileset?
+ if err != nil {
+ return nil, err
+ }
+ var res []string
+ for _, imp := range f.Imports {
+ quotedPath := imp.Path.Value
+ path, err := strconv.Unquote(quotedPath)
+ if err != nil {
+ return nil, err
+ }
+ res = append(res, path)
+ }
+ return res, nil
+}
+
+// reclaimPackage attempts to reuse a package that failed to load in an overlay.
+//
+// If the package has errors and has no Name, GoFiles, or Imports,
+// then it's possible that it doesn't yet exist on disk.
+func reclaimPackage(pkg *Package, id string, filename string, contents []byte) bool {
+ // TODO(rstambler): Check the message of the actual error?
+ // It differs between $GOPATH and module mode.
+ if pkg.ID != id {
+ return false
+ }
+ if len(pkg.Errors) != 1 {
+ return false
+ }
+ if pkg.Name != "" || pkg.ExportFile != "" {
+ return false
+ }
+ if len(pkg.GoFiles) > 0 || len(pkg.CompiledGoFiles) > 0 || len(pkg.OtherFiles) > 0 {
+ return false
+ }
+ if len(pkg.Imports) > 0 {
+ return false
+ }
+ pkgName, ok := extractPackageName(filename, contents)
+ if !ok {
+ return false
+ }
+ pkg.Name = pkgName
+ pkg.Errors = nil
+ return true
+}
+
+func extractPackageName(filename string, contents []byte) (string, bool) {
+ f, err := parser.ParseFile(token.NewFileSet(), filename, contents, parser.PackageClauseOnly) // TODO(matloob): reuse fileset?
+ if err != nil {
+ return "", false
+ }
+ return f.Name.Name, true
+}
diff --git a/vendor/golang.org/x/tools/go/packages/packages.go b/vendor/golang.org/x/tools/go/packages/packages.go
new file mode 100644
index 0000000000000..050cca43a2ba7
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/packages.go
@@ -0,0 +1,1116 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package packages
+
+// See doc.go for package documentation and implementation notes.
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "go/ast"
+ "go/parser"
+ "go/scanner"
+ "go/token"
+ "go/types"
+ "io/ioutil"
+ "log"
+ "os"
+ "path/filepath"
+ "strings"
+ "sync"
+
+ "golang.org/x/tools/go/gcexportdata"
+)
+
+// A LoadMode controls the amount of detail to return when loading.
+// The bits below can be combined to specify which fields should be
+// filled in the result packages.
+// The zero value is a special case, equivalent to combining
+// the NeedName, NeedFiles, and NeedCompiledGoFiles bits.
+// ID and Errors (if present) will always be filled.
+// Load may return more information than requested.
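+// For example, a mode of NeedName|NeedFiles fills in Name, PkgPath,
+// GoFiles, and OtherFiles, and leaves type information unset.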
+type LoadMode int
+
+const (
+ // NeedName adds Name and PkgPath.
+ NeedName LoadMode = 1 << iota
+
+ // NeedFiles adds GoFiles and OtherFiles.
+ NeedFiles
+
+ // NeedCompiledGoFiles adds CompiledGoFiles.
+ NeedCompiledGoFiles
+
+ // NeedImports adds Imports. If NeedDeps is not set, the Imports field will contain
+ // "placeholder" Packages with only the ID set.
+ NeedImports
+
+ // NeedDeps adds the fields requested by the LoadMode in the packages in Imports.
+ NeedDeps
+
+ // NeedExportsFile adds ExportsFile.
+ NeedExportsFile
+
+ // NeedTypes adds Types, Fset, and IllTyped.
+ NeedTypes
+
+ // NeedSyntax adds Syntax.
+ NeedSyntax
+
+ // NeedTypesInfo adds TypesInfo.
+ NeedTypesInfo
+
+ // NeedTypesSizes adds TypesSizes.
+ NeedTypesSizes
+)
+
+const (
+ // Deprecated: LoadFiles exists for historical compatibility
+ // and should not be used. Please directly specify the needed fields using the Need values.
+ LoadFiles = NeedName | NeedFiles | NeedCompiledGoFiles
+
+ // Deprecated: LoadImports exists for historical compatibility
+ // and should not be used. Please directly specify the needed fields using the Need values.
+ LoadImports = LoadFiles | NeedImports
+
+ // Deprecated: LoadTypes exists for historical compatibility
+ // and should not be used. Please directly specify the needed fields using the Need values.
+ LoadTypes = LoadImports | NeedTypes | NeedTypesSizes
+
+ // Deprecated: LoadSyntax exists for historical compatibility
+ // and should not be used. Please directly specify the needed fields using the Need values.
+ LoadSyntax = LoadTypes | NeedSyntax | NeedTypesInfo
+
+ // Deprecated: LoadAllSyntax exists for historical compatibility
+ // and should not be used. Please directly specify the needed fields using the Need values.
+ LoadAllSyntax = LoadSyntax | NeedDeps
+)
+
+// A Config specifies details about how packages should be loaded.
+// The zero value is a valid configuration.
+// Calls to Load do not modify this struct.
+type Config struct {
+ // Mode controls the level of information returned for each package.
+ Mode LoadMode
+
+ // Context specifies the context for the load operation.
+ // If the context is cancelled, the loader may stop early
+ // and return an ErrCancelled error.
+ // If Context is nil, the load cannot be cancelled.
+ Context context.Context
+
+ // Logf is the logger for the config.
+ // If the user provides a logger, debug logging is enabled.
+ // If the GOPACKAGESDEBUG environment variable is set to true,
+ // but the logger is nil, default to log.Printf.
+ Logf func(format string, args ...interface{})
+
+ // Dir is the directory in which to run the build system's query tool
+ // that provides information about the packages.
+ // If Dir is empty, the tool is run in the current directory.
+ Dir string
+
+ // Env is the environment to use when invoking the build system's query tool.
+ // If Env is nil, the current environment is used.
+ // As in os/exec's Cmd, only the last value in the slice for
+ // each environment key is used. To specify the setting of only
+ // a few variables, append to the current environment, as in:
+ //
+ // opt.Env = append(os.Environ(), "GOOS=plan9", "GOARCH=386")
+ //
+ Env []string
+
+ // BuildFlags is a list of command-line flags to be passed through to
+ // the build system's query tool.
+ BuildFlags []string
+
+ // Fset provides source position information for syntax trees and types.
+	// If Fset is nil, Load will use a new fileset, leaving the caller's Fset field unmodified.
+ Fset *token.FileSet
+
+ // ParseFile is called to read and parse each file
+ // when preparing a package's type-checked syntax tree.
+ // It must be safe to call ParseFile simultaneously from multiple goroutines.
+	// If ParseFile is nil, the loader will use parser.ParseFile.
+ //
+ // ParseFile should parse the source from src and use filename only for
+ // recording position information.
+ //
+ // An application may supply a custom implementation of ParseFile
+ // to change the effective file contents or the behavior of the parser,
+ // or to modify the syntax tree. For example, selectively eliminating
+ // unwanted function bodies can significantly accelerate type checking.
+ ParseFile func(fset *token.FileSet, filename string, src []byte) (*ast.File, error)
+
+ // If Tests is set, the loader includes not just the packages
+ // matching a particular pattern but also any related test packages,
+ // including test-only variants of the package and the test executable.
+ //
+ // For example, when using the go command, loading "fmt" with Tests=true
+ // returns four packages, with IDs "fmt" (the standard package),
+ // "fmt [fmt.test]" (the package as compiled for the test),
+ // "fmt_test" (the test functions from source files in package fmt_test),
+ // and "fmt.test" (the test binary).
+ //
+ // In build systems with explicit names for tests,
+ // setting Tests may have no effect.
+ Tests bool
+
+ // Overlay provides a mapping of absolute file paths to file contents.
+ // If the file with the given path already exists, the parser will use the
+ // alternative file contents provided by the map.
+ //
+	// Overlays provide incomplete support for cases where a given file doesn't
+	// already exist on disk. See the package doc above for more details.
+ Overlay map[string][]byte
+}
+
+// driver is the type for functions that query the build system for the
+// packages named by the patterns.
+type driver func(cfg *Config, patterns ...string) (*driverResponse, error)
+
+// driverResponse contains the results for a driver query.
+type driverResponse struct {
+ // Sizes, if not nil, is the types.Sizes to use when type checking.
+ Sizes *types.StdSizes
+
+ // Roots is the set of package IDs that make up the root packages.
+ // We have to encode this separately because when we encode a single package
+ // we cannot know if it is one of the roots as that requires knowledge of the
+ // graph it is part of.
+ Roots []string `json:",omitempty"`
+
+ // Packages is the full set of packages in the graph.
+ // The packages are not connected into a graph.
+ // The Imports if populated will be stubs that only have their ID set.
+ // Imports will be connected and then type and syntax information added in a
+ // later pass (see refine).
+ Packages []*Package
+}
+
+// Load loads and returns the Go packages named by the given patterns.
+//
+// Config specifies loading options;
+// nil behaves the same as an empty Config.
+//
+// Load returns an error if any of the patterns was invalid
+// as defined by the underlying build system.
+// It may return an empty list of packages without an error,
+// for instance for an empty expansion of a valid wildcard.
+// Errors associated with a particular package are recorded in the
+// corresponding Package's Errors list, and do not cause Load to
+// return an error. Clients may need to handle such errors before
+// proceeding with further analysis. The PrintErrors function is
+// provided for convenient display of all errors.
+func Load(cfg *Config, patterns ...string) ([]*Package, error) {
+ l := newLoader(cfg)
+ response, err := defaultDriver(&l.Config, patterns...)
+ if err != nil {
+ return nil, err
+ }
+ l.sizes = response.Sizes
+ return l.refine(response.Roots, response.Packages...)
+}
+
+// defaultDriver is a driver that looks for an external driver binary, and if
+// it does not find one, falls back to the built-in go list driver.
+func defaultDriver(cfg *Config, patterns ...string) (*driverResponse, error) {
+ driver := findExternalDriver(cfg)
+ if driver == nil {
+ driver = goListDriver
+ }
+ return driver(cfg, patterns...)
+}
+
+// A Package describes a loaded Go package.
+type Package struct {
+ // ID is a unique identifier for a package,
+ // in a syntax provided by the underlying build system.
+ //
+ // Because the syntax varies based on the build system,
+ // clients should treat IDs as opaque and not attempt to
+ // interpret them.
+ ID string
+
+ // Name is the package name as it appears in the package source code.
+ Name string
+
+ // PkgPath is the package path as used by the go/types package.
+ PkgPath string
+
+ // Errors contains any errors encountered querying the metadata
+ // of the package, or while parsing or type-checking its files.
+ Errors []Error
+
+ // GoFiles lists the absolute file paths of the package's Go source files.
+ GoFiles []string
+
+ // CompiledGoFiles lists the absolute file paths of the package's source
+ // files that were presented to the compiler.
+ // This may differ from GoFiles if files are processed before compilation.
+ CompiledGoFiles []string
+
+ // OtherFiles lists the absolute file paths of the package's non-Go source files,
+ // including assembly, C, C++, Fortran, Objective-C, SWIG, and so on.
+ OtherFiles []string
+
+ // ExportFile is the absolute path to a file containing type
+ // information for the package as provided by the build system.
+ ExportFile string
+
+ // Imports maps import paths appearing in the package's Go source files
+ // to corresponding loaded Packages.
+ Imports map[string]*Package
+
+ // Types provides type information for the package.
+ // The NeedTypes LoadMode bit sets this field for packages matching the
+ // patterns; type information for dependencies may be missing or incomplete,
+ // unless NeedDeps and NeedImports are also set.
+ Types *types.Package
+
+ // Fset provides position information for Types, TypesInfo, and Syntax.
+ // It is set only when Types is set.
+ Fset *token.FileSet
+
+ // IllTyped indicates whether the package or any dependency contains errors.
+ // It is set only when Types is set.
+ IllTyped bool
+
+ // Syntax is the package's syntax trees, for the files listed in CompiledGoFiles.
+ //
+ // The NeedSyntax LoadMode bit populates this field for packages matching the patterns.
+ // If NeedDeps and NeedImports are also set, this field will also be populated
+ // for dependencies.
+ Syntax []*ast.File
+
+ // TypesInfo provides type information about the package's syntax trees.
+ // It is set only when Syntax is set.
+ TypesInfo *types.Info
+
+ // TypesSizes provides the effective size function for types in TypesInfo.
+ TypesSizes types.Sizes
+}
+
+// An Error describes a problem with a package's metadata, syntax, or types.
+type Error struct {
+ Pos string // "file:line:col" or "file:line" or "" or "-"
+ Msg string
+ Kind ErrorKind
+}
+
+// ErrorKind describes the source of the error, allowing the user to
+// differentiate between errors generated by the driver, the parser, or the
+// type-checker.
+type ErrorKind int
+
+const (
+ UnknownError ErrorKind = iota
+ ListError
+ ParseError
+ TypeError
+)
+
+func (err Error) Error() string {
+ pos := err.Pos
+ if pos == "" {
+ pos = "-" // like token.Position{}.String()
+ }
+ return pos + ": " + err.Msg
+}
+
+// flatPackage is the JSON form of Package
+// It drops all the type and syntax fields, and transforms the Imports
+//
+// TODO(adonovan): identify this struct with Package, effectively
+// publishing the JSON protocol.
+type flatPackage struct {
+ ID string
+ Name string `json:",omitempty"`
+ PkgPath string `json:",omitempty"`
+ Errors []Error `json:",omitempty"`
+ GoFiles []string `json:",omitempty"`
+ CompiledGoFiles []string `json:",omitempty"`
+ OtherFiles []string `json:",omitempty"`
+ ExportFile string `json:",omitempty"`
+ Imports map[string]string `json:",omitempty"`
+}
+
+// MarshalJSON returns the Package in its JSON form.
+// For the most part, the structure fields are written out unmodified, and
+// the type and syntax fields are skipped.
+// The imports are written out as just a map of path to package id.
+// The errors are written using a custom type that tries to preserve the
+// structure of error types we know about.
+//
+// This method exists to enable support for additional build systems. It is
+// not intended for use by clients of the API and we may change the format.
+func (p *Package) MarshalJSON() ([]byte, error) {
+ flat := &flatPackage{
+ ID: p.ID,
+ Name: p.Name,
+ PkgPath: p.PkgPath,
+ Errors: p.Errors,
+ GoFiles: p.GoFiles,
+ CompiledGoFiles: p.CompiledGoFiles,
+ OtherFiles: p.OtherFiles,
+ ExportFile: p.ExportFile,
+ }
+ if len(p.Imports) > 0 {
+ flat.Imports = make(map[string]string, len(p.Imports))
+ for path, ipkg := range p.Imports {
+ flat.Imports[path] = ipkg.ID
+ }
+ }
+ return json.Marshal(flat)
+}
+
+// UnmarshalJSON reads in a Package from its JSON format.
+// See MarshalJSON for details about the format accepted.
+func (p *Package) UnmarshalJSON(b []byte) error {
+ flat := &flatPackage{}
+ if err := json.Unmarshal(b, &flat); err != nil {
+ return err
+ }
+ *p = Package{
+ ID: flat.ID,
+ Name: flat.Name,
+ PkgPath: flat.PkgPath,
+ Errors: flat.Errors,
+ GoFiles: flat.GoFiles,
+ CompiledGoFiles: flat.CompiledGoFiles,
+ OtherFiles: flat.OtherFiles,
+ ExportFile: flat.ExportFile,
+ }
+ if len(flat.Imports) > 0 {
+ p.Imports = make(map[string]*Package, len(flat.Imports))
+ for path, id := range flat.Imports {
+ p.Imports[path] = &Package{ID: id}
+ }
+ }
+ return nil
+}
+
+func (p *Package) String() string { return p.ID }
+
+// loaderPackage augments Package with state used during the loading phase
+type loaderPackage struct {
+ *Package
+ importErrors map[string]error // maps each bad import to its error
+ loadOnce sync.Once
+ color uint8 // for cycle detection
+ needsrc bool // load from source (Mode >= LoadTypes)
+ needtypes bool // type information is either requested or depended on
+ initial bool // package was matched by a pattern
+}
+
+// loader holds the working state of a single call to load.
+type loader struct {
+ pkgs map[string]*loaderPackage
+ Config
+ sizes types.Sizes
+ parseCache map[string]*parseValue
+ parseCacheMu sync.Mutex
+ exportMu sync.Mutex // enforces mutual exclusion of exportdata operations
+
+	// Config.Mode contains the implied mode (see impliedLoadMode),
+	// which includes all the fields we need data for.
+	// requestedMode holds the mode the caller actually asked for;
+	// fields not covered by it are zeroed out before packages are
+	// returned to the user. This makes it easier to get the
+	// conditions where we need certain modes right.
+ requestedMode LoadMode
+}
+
+type parseValue struct {
+ f *ast.File
+ err error
+ ready chan struct{}
+}
+
+func newLoader(cfg *Config) *loader {
+ ld := &loader{
+ parseCache: map[string]*parseValue{},
+ }
+ if cfg != nil {
+ ld.Config = *cfg
+ // If the user has provided a logger, use it.
+ ld.Config.Logf = cfg.Logf
+ }
+ if ld.Config.Logf == nil {
+ // If the GOPACKAGESDEBUG environment variable is set to true,
+ // but the user has not provided a logger, default to log.Printf.
+ if debug {
+ ld.Config.Logf = log.Printf
+ } else {
+ ld.Config.Logf = func(format string, args ...interface{}) {}
+ }
+ }
+ if ld.Config.Mode == 0 {
+ ld.Config.Mode = NeedName | NeedFiles | NeedCompiledGoFiles // Preserve zero behavior of Mode for backwards compatibility.
+ }
+ if ld.Config.Env == nil {
+ ld.Config.Env = os.Environ()
+ }
+ if ld.Context == nil {
+ ld.Context = context.Background()
+ }
+ if ld.Dir == "" {
+ if dir, err := os.Getwd(); err == nil {
+ ld.Dir = dir
+ }
+ }
+
+ // Save the actually requested fields. We'll zero them out before returning packages to the user.
+ ld.requestedMode = ld.Mode
+ ld.Mode = impliedLoadMode(ld.Mode)
+
+ if ld.Mode&NeedTypes != 0 || ld.Mode&NeedSyntax != 0 {
+ if ld.Fset == nil {
+ ld.Fset = token.NewFileSet()
+ }
+
+ // ParseFile is required even in LoadTypes mode
+ // because we load source if export data is missing.
+ if ld.ParseFile == nil {
+ ld.ParseFile = func(fset *token.FileSet, filename string, src []byte) (*ast.File, error) {
+ const mode = parser.AllErrors | parser.ParseComments
+ return parser.ParseFile(fset, filename, src, mode)
+ }
+ }
+ }
+
+ return ld
+}
+
+// refine connects the supplied packages into a graph and then adds
+// type and syntax information as requested by the LoadMode.
+func (ld *loader) refine(roots []string, list ...*Package) ([]*Package, error) {
+ rootMap := make(map[string]int, len(roots))
+ for i, root := range roots {
+ rootMap[root] = i
+ }
+ ld.pkgs = make(map[string]*loaderPackage)
+ // first pass, fixup and build the map and roots
+ var initial = make([]*loaderPackage, len(roots))
+ for _, pkg := range list {
+ rootIndex := -1
+ if i, found := rootMap[pkg.ID]; found {
+ rootIndex = i
+ }
+ lpkg := &loaderPackage{
+ Package: pkg,
+ needtypes: (ld.Mode&(NeedTypes|NeedTypesInfo) != 0 && ld.Mode&NeedDeps != 0 && rootIndex < 0) || rootIndex >= 0,
+ needsrc: (ld.Mode&(NeedSyntax|NeedTypesInfo) != 0 && ld.Mode&NeedDeps != 0 && rootIndex < 0) || rootIndex >= 0 ||
+ len(ld.Overlay) > 0 || // Overlays can invalidate export data. TODO(matloob): make this check fine-grained based on dependencies on overlaid files
+ pkg.ExportFile == "" && pkg.PkgPath != "unsafe",
+ }
+ ld.pkgs[lpkg.ID] = lpkg
+ if rootIndex >= 0 {
+ initial[rootIndex] = lpkg
+ lpkg.initial = true
+ }
+ }
+ for i, root := range roots {
+ if initial[i] == nil {
+ return nil, fmt.Errorf("root package %v is missing", root)
+ }
+ }
+
+ // Materialize the import graph.
+
+ const (
+ white = 0 // new
+ grey = 1 // in progress
+ black = 2 // complete
+ )
+
+ // visit traverses the import graph, depth-first,
+ // and materializes the graph as Packages.Imports.
+ //
+ // Valid imports are saved in the Packages.Import map.
+ // Invalid imports (cycles and missing nodes) are saved in the importErrors map.
+ // Thus, even in the presence of both kinds of errors, the Import graph remains a DAG.
+ //
+ // visit returns whether the package needs src or has a transitive
+ // dependency on a package that does. These are the only packages
+ // for which we load source code.
+ var stack []*loaderPackage
+ var visit func(lpkg *loaderPackage) bool
+ var srcPkgs []*loaderPackage
+ visit = func(lpkg *loaderPackage) bool {
+ switch lpkg.color {
+ case black:
+ return lpkg.needsrc
+ case grey:
+ panic("internal error: grey node")
+ }
+ lpkg.color = grey
+ stack = append(stack, lpkg) // push
+ stubs := lpkg.Imports // the structure form has only stubs with the ID in the Imports
+ // If NeedImports isn't set, the imports fields will all be zeroed out.
+ if ld.Mode&NeedImports != 0 {
+ lpkg.Imports = make(map[string]*Package, len(stubs))
+ for importPath, ipkg := range stubs {
+ var importErr error
+ imp := ld.pkgs[ipkg.ID]
+ if imp == nil {
+ // (includes package "C" when DisableCgo)
+ importErr = fmt.Errorf("missing package: %q", ipkg.ID)
+ } else if imp.color == grey {
+ importErr = fmt.Errorf("import cycle: %s", stack)
+ }
+ if importErr != nil {
+ if lpkg.importErrors == nil {
+ lpkg.importErrors = make(map[string]error)
+ }
+ lpkg.importErrors[importPath] = importErr
+ continue
+ }
+
+ if visit(imp) {
+ lpkg.needsrc = true
+ }
+ lpkg.Imports[importPath] = imp.Package
+ }
+ }
+ if lpkg.needsrc {
+ srcPkgs = append(srcPkgs, lpkg)
+ }
+ if ld.Mode&NeedTypesSizes != 0 {
+ lpkg.TypesSizes = ld.sizes
+ }
+ stack = stack[:len(stack)-1] // pop
+ lpkg.color = black
+
+ return lpkg.needsrc
+ }
+
+ if ld.Mode&NeedImports == 0 {
+ // We do this to drop the stub import packages that we are not even going to try to resolve.
+ for _, lpkg := range initial {
+ lpkg.Imports = nil
+ }
+ } else {
+ // For each initial package, create its import DAG.
+ for _, lpkg := range initial {
+ visit(lpkg)
+ }
+ }
+ if ld.Mode&NeedImports != 0 && ld.Mode&NeedTypes != 0 {
+ for _, lpkg := range srcPkgs {
+ // Complete type information is required for the
+ // immediate dependencies of each source package.
+ for _, ipkg := range lpkg.Imports {
+ imp := ld.pkgs[ipkg.ID]
+ imp.needtypes = true
+ }
+ }
+ }
+ // Load type data and syntax if needed, starting at
+ // the initial packages (roots of the import DAG).
+ if ld.Mode&NeedTypes != 0 || ld.Mode&NeedSyntax != 0 {
+ var wg sync.WaitGroup
+ for _, lpkg := range initial {
+ wg.Add(1)
+ go func(lpkg *loaderPackage) {
+ ld.loadRecursive(lpkg)
+ wg.Done()
+ }(lpkg)
+ }
+ wg.Wait()
+ }
+
+ result := make([]*Package, len(initial))
+ for i, lpkg := range initial {
+ result[i] = lpkg.Package
+ }
+ for i := range ld.pkgs {
+ // Clear all unrequested fields, for extra de-Hyrum-ization.
+ if ld.requestedMode&NeedName == 0 {
+ ld.pkgs[i].Name = ""
+ ld.pkgs[i].PkgPath = ""
+ }
+ if ld.requestedMode&NeedFiles == 0 {
+ ld.pkgs[i].GoFiles = nil
+ ld.pkgs[i].OtherFiles = nil
+ }
+ if ld.requestedMode&NeedCompiledGoFiles == 0 {
+ ld.pkgs[i].CompiledGoFiles = nil
+ }
+ if ld.requestedMode&NeedImports == 0 {
+ ld.pkgs[i].Imports = nil
+ }
+ if ld.requestedMode&NeedExportsFile == 0 {
+ ld.pkgs[i].ExportFile = ""
+ }
+ if ld.requestedMode&NeedTypes == 0 {
+ ld.pkgs[i].Types = nil
+ ld.pkgs[i].Fset = nil
+ ld.pkgs[i].IllTyped = false
+ }
+ if ld.requestedMode&NeedSyntax == 0 {
+ ld.pkgs[i].Syntax = nil
+ }
+ if ld.requestedMode&NeedTypesInfo == 0 {
+ ld.pkgs[i].TypesInfo = nil
+ }
+ if ld.requestedMode&NeedTypesSizes == 0 {
+ ld.pkgs[i].TypesSizes = nil
+ }
+ }
+
+ return result, nil
+}
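+// The white/grey/black colouring in refine is the standard iterative-DFS
+// idiom for detecting cycles while materializing a DAG. A minimal standalone
+// sketch of the same idea (the node type and names here are illustrative,
+// not part of this package):
+//
+//	const (
+//		white = 0 // unvisited
+//		grey  = 1 // on the current DFS stack
+//		black = 2 // fully processed
+//	)
+//
+//	// visit returns an error if a cycle passes through n.
+//	func visit(n *node, color map[*node]int) error {
+//		switch color[n] {
+//		case black:
+//			return nil // already complete
+//		case grey:
+//			return fmt.Errorf("cycle through %v", n)
+//		}
+//		color[n] = grey
+//		for _, dep := range n.deps {
+//			if err := visit(dep, color); err != nil {
+//				return err
+//			}
+//		}
+//		color[n] = black
+//		return nil
+//	}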
+
+// loadRecursive loads the specified package and its dependencies,
+// recursively, in parallel, in topological order.
+// It is atomic and idempotent.
+// Precondition: ld.Mode&NeedTypes.
+func (ld *loader) loadRecursive(lpkg *loaderPackage) {
+ lpkg.loadOnce.Do(func() {
+ // Load the direct dependencies, in parallel.
+ var wg sync.WaitGroup
+ for _, ipkg := range lpkg.Imports {
+ imp := ld.pkgs[ipkg.ID]
+ wg.Add(1)
+ go func(imp *loaderPackage) {
+ ld.loadRecursive(imp)
+ wg.Done()
+ }(imp)
+ }
+ wg.Wait()
+ ld.loadPackage(lpkg)
+ })
+}
+
+// loadPackage loads the specified package.
+// It must be called only once per Package,
+// after immediate dependencies are loaded.
+// Precondition: ld.Mode & NeedTypes.
+func (ld *loader) loadPackage(lpkg *loaderPackage) {
+ if lpkg.PkgPath == "unsafe" {
+ // Fill in the blanks to avoid surprises.
+ lpkg.Types = types.Unsafe
+ lpkg.Fset = ld.Fset
+ lpkg.Syntax = []*ast.File{}
+ lpkg.TypesInfo = new(types.Info)
+ lpkg.TypesSizes = ld.sizes
+ return
+ }
+
+ // Call NewPackage directly with explicit name.
+ // This avoids skew between golist and go/types when the files'
+ // package declarations are inconsistent.
+ lpkg.Types = types.NewPackage(lpkg.PkgPath, lpkg.Name)
+ lpkg.Fset = ld.Fset
+
+ // Subtle: we populate all Types fields with an empty Package
+ // before loading export data so that export data processing
+ // never has to create a types.Package for an indirect dependency,
+ // which would then require that such created packages be explicitly
+ // inserted back into the Import graph as a final step after export data loading.
+ // The Diamond test exercises this case.
+ if !lpkg.needtypes {
+ return
+ }
+ if !lpkg.needsrc {
+ ld.loadFromExportData(lpkg)
+ return // not a source package, don't get syntax trees
+ }
+
+ appendError := func(err error) {
+ // Convert various error types into the one true Error.
+ var errs []Error
+ switch err := err.(type) {
+ case Error:
+ // from driver
+ errs = append(errs, err)
+
+ case *os.PathError:
+ // from parser
+ errs = append(errs, Error{
+ Pos: err.Path + ":1",
+ Msg: err.Err.Error(),
+ Kind: ParseError,
+ })
+
+ case scanner.ErrorList:
+ // from parser
+ for _, err := range err {
+ errs = append(errs, Error{
+ Pos: err.Pos.String(),
+ Msg: err.Msg,
+ Kind: ParseError,
+ })
+ }
+
+ case types.Error:
+ // from type checker
+ errs = append(errs, Error{
+ Pos: err.Fset.Position(err.Pos).String(),
+ Msg: err.Msg,
+ Kind: TypeError,
+ })
+
+ default:
+ // unexpected impoverished error from parser?
+ errs = append(errs, Error{
+ Pos: "-",
+ Msg: err.Error(),
+ Kind: UnknownError,
+ })
+
+ // If you see this error message, please file a bug.
+ log.Printf("internal error: error %q (%T) without position", err, err)
+ }
+
+ lpkg.Errors = append(lpkg.Errors, errs...)
+ }
+
+ if ld.Config.Mode&NeedTypes != 0 && len(lpkg.CompiledGoFiles) == 0 && lpkg.ExportFile != "" {
+ // The config requested loading sources and types, but sources are missing.
+ // Add an error to the package and fall back to loading from export data.
+ appendError(Error{"-", fmt.Sprintf("sources missing for package %s", lpkg.ID), ParseError})
+ ld.loadFromExportData(lpkg)
+ return // can't get syntax trees for this package
+ }
+
+ files, errs := ld.parseFiles(lpkg.CompiledGoFiles)
+ for _, err := range errs {
+ appendError(err)
+ }
+
+ lpkg.Syntax = files
+ if ld.Config.Mode&NeedTypes == 0 {
+ return
+ }
+
+ lpkg.TypesInfo = &types.Info{
+ Types: make(map[ast.Expr]types.TypeAndValue),
+ Defs: make(map[*ast.Ident]types.Object),
+ Uses: make(map[*ast.Ident]types.Object),
+ Implicits: make(map[ast.Node]types.Object),
+ Scopes: make(map[ast.Node]*types.Scope),
+ Selections: make(map[*ast.SelectorExpr]*types.Selection),
+ }
+ lpkg.TypesSizes = ld.sizes
+
+ importer := importerFunc(func(path string) (*types.Package, error) {
+ if path == "unsafe" {
+ return types.Unsafe, nil
+ }
+
+ // The imports map is keyed by import path.
+ ipkg := lpkg.Imports[path]
+ if ipkg == nil {
+ if err := lpkg.importErrors[path]; err != nil {
+ return nil, err
+ }
+ // There was skew between the metadata and the
+ // import declarations, likely due to an edit
+ // race, or because the ParseFile feature was
+ // used to supply alternative file contents.
+ return nil, fmt.Errorf("no metadata for %s", path)
+ }
+
+ if ipkg.Types != nil && ipkg.Types.Complete() {
+ return ipkg.Types, nil
+ }
+ log.Fatalf("internal error: package %q without types was imported from %q", path, lpkg)
+ panic("unreachable")
+ })
+
+ // type-check
+ tc := &types.Config{
+ Importer: importer,
+
+ // Type-check bodies of functions only in non-initial packages.
+ // Example: for import graph A->B->C and initial packages {A,C},
+ // we can ignore function bodies in B.
+ IgnoreFuncBodies: ld.Mode&NeedDeps == 0 && !lpkg.initial,
+
+ Error: appendError,
+ Sizes: ld.sizes,
+ }
+ types.NewChecker(tc, ld.Fset, lpkg.Types, lpkg.TypesInfo).Files(lpkg.Syntax)
+
+ lpkg.importErrors = nil // no longer needed
+
+ // If !Cgo, the type-checker uses FakeImportC mode, so
+ // it doesn't invoke the importer for import "C",
+ // nor report an error for the import,
+ // or for any undefined C.f reference.
+ // We must detect this explicitly and correctly
+ // mark the package as IllTyped (by reporting an error).
+ // TODO(adonovan): if these errors are annoying,
+ // we could just set IllTyped quietly.
+ if tc.FakeImportC {
+ outer:
+ for _, f := range lpkg.Syntax {
+ for _, imp := range f.Imports {
+ if imp.Path.Value == `"C"` {
+ err := types.Error{Fset: ld.Fset, Pos: imp.Pos(), Msg: `import "C" ignored`}
+ appendError(err)
+ break outer
+ }
+ }
+ }
+ }
+
+ // Record accumulated errors.
+ illTyped := len(lpkg.Errors) > 0
+ if !illTyped {
+ for _, imp := range lpkg.Imports {
+ if imp.IllTyped {
+ illTyped = true
+ break
+ }
+ }
+ }
+ lpkg.IllTyped = illTyped
+}
+
+// An importFunc is an implementation of the single-method
+// types.Importer interface based on a function value.
+type importerFunc func(path string) (*types.Package, error)
+
+func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) }
+
+// We use a counting semaphore to limit
+// the number of parallel I/O calls per process.
+var ioLimit = make(chan bool, 20)
+
+func (ld *loader) parseFile(filename string) (*ast.File, error) {
+ ld.parseCacheMu.Lock()
+ v, ok := ld.parseCache[filename]
+ if ok {
+ // cache hit
+ ld.parseCacheMu.Unlock()
+ <-v.ready
+ } else {
+ // cache miss
+ v = &parseValue{ready: make(chan struct{})}
+ ld.parseCache[filename] = v
+ ld.parseCacheMu.Unlock()
+
+ var src []byte
+ for f, contents := range ld.Config.Overlay {
+ if sameFile(f, filename) {
+ src = contents
+ }
+ }
+ var err error
+ if src == nil {
+ ioLimit <- true // wait
+ src, err = ioutil.ReadFile(filename)
+ <-ioLimit // signal
+ }
+ if err != nil {
+ v.err = err
+ } else {
+ v.f, v.err = ld.ParseFile(ld.Fset, filename, src)
+ }
+
+ close(v.ready)
+ }
+ return v.f, v.err
+}
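+// The parseValue/ready-channel scheme above ensures each file is parsed at
+// most once, with later callers blocking until the first finishes. A minimal
+// standalone sketch of the idiom (the cache type and compute function are
+// illustrative, not part of this package):
+//
+//	type result struct {
+//		val   string
+//		ready chan struct{}
+//	}
+//
+//	type onceCache struct {
+//		mu sync.Mutex
+//		m  map[string]*result
+//	}
+//
+//	func (c *onceCache) get(key string) string {
+//		c.mu.Lock()
+//		if r, ok := c.m[key]; ok {
+//			c.mu.Unlock()
+//			<-r.ready // wait for the first caller to finish
+//			return r.val
+//		}
+//		r := &result{ready: make(chan struct{})}
+//		c.m[key] = r
+//		c.mu.Unlock()
+//		r.val = compute(key) // done exactly once per key
+//		close(r.ready)       // release all waiters
+//		return r.val
+//	}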
+
+// parseFiles reads and parses the Go source files and returns the ASTs
+// of the ones that could be at least partially parsed, along with a
+// list of I/O and parse errors encountered.
+//
+// Because files are scanned in parallel, the token.Pos
+// positions of the resulting ast.Files are not ordered.
+//
+func (ld *loader) parseFiles(filenames []string) ([]*ast.File, []error) {
+ var wg sync.WaitGroup
+ n := len(filenames)
+ parsed := make([]*ast.File, n)
+ errors := make([]error, n)
+ for i, file := range filenames {
+ if ld.Config.Context.Err() != nil {
+ parsed[i] = nil
+ errors[i] = ld.Config.Context.Err()
+ continue
+ }
+ wg.Add(1)
+ go func(i int, filename string) {
+ parsed[i], errors[i] = ld.parseFile(filename)
+ wg.Done()
+ }(i, file)
+ }
+ wg.Wait()
+
+ // Eliminate nils, preserving order.
+ var o int
+ for _, f := range parsed {
+ if f != nil {
+ parsed[o] = f
+ o++
+ }
+ }
+ parsed = parsed[:o]
+
+ o = 0
+ for _, err := range errors {
+ if err != nil {
+ errors[o] = err
+ o++
+ }
+ }
+ errors = errors[:o]
+
+ return parsed, errors
+}
+
+// sameFile returns true if x and y have the same basename and denote
+// the same file.
+//
+func sameFile(x, y string) bool {
+ if x == y {
+ // It could be the case that y doesn't exist.
+ // For instance, it may be an overlay file that
+ // hasn't been written to disk. To handle that case
+ // let x == y through. (We added the exact absolute path
+ // string to the CompiledGoFiles list, so the unwritten
+ // overlay case implies x==y.)
+ return true
+ }
+ if strings.EqualFold(filepath.Base(x), filepath.Base(y)) { // (optimisation)
+ if xi, err := os.Stat(x); err == nil {
+ if yi, err := os.Stat(y); err == nil {
+ return os.SameFile(xi, yi)
+ }
+ }
+ }
+ return false
+}
+
+// loadFromExportData returns type information for the specified
+// package, loading it from an export data file on the first request.
+func (ld *loader) loadFromExportData(lpkg *loaderPackage) (*types.Package, error) {
+ if lpkg.PkgPath == "" {
+ log.Fatalf("internal error: Package %s has no PkgPath", lpkg)
+ }
+
+ // Because gcexportdata.Read has the potential to create or
+ // modify the types.Package for each node in the transitive
+ // closure of dependencies of lpkg, all exportdata operations
+ // must be sequential. (Finer-grained locking would require
+ // changes to the gcexportdata API.)
+ //
+ // The exportMu lock guards the Package.Pkg field and the
+ // types.Package it points to, for each Package in the graph.
+ //
+ // Not all accesses to Package.Pkg need to be protected by exportMu:
+ // graph ordering ensures that direct dependencies of source
+ // packages are fully loaded before the importer reads their Pkg field.
+ ld.exportMu.Lock()
+ defer ld.exportMu.Unlock()
+
+ if tpkg := lpkg.Types; tpkg != nil && tpkg.Complete() {
+ return tpkg, nil // cache hit
+ }
+
+ lpkg.IllTyped = true // fail safe
+
+ if lpkg.ExportFile == "" {
+ // Errors while building export data will have been printed to stderr.
+ return nil, fmt.Errorf("no export data file")
+ }
+ f, err := os.Open(lpkg.ExportFile)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+
+ // Read gc export data.
+ //
+ // We don't currently support gccgo export data because all
+ // underlying workspaces use the gc toolchain. (Even build
+ // systems that support gccgo don't use it for workspace
+ // queries.)
+ r, err := gcexportdata.NewReader(f)
+ if err != nil {
+ return nil, fmt.Errorf("reading %s: %v", lpkg.ExportFile, err)
+ }
+
+ // Build the view.
+ //
+ // The gcexportdata machinery has no concept of package ID.
+ // It identifies packages by their PkgPath, which although not
+ // globally unique is unique within the scope of one invocation
+ // of the linker, type-checker, or gcexportdata.
+ //
+ // So, we must build a PkgPath-keyed view of the global
+ // (conceptually ID-keyed) cache of packages and pass it to
+ // gcexportdata. The view must contain every existing
+ // package that might possibly be mentioned by the
+ // current package---its transitive closure.
+ //
+ // In loadPackage, we unconditionally create a types.Package for
+ // each dependency so that export data loading does not
+ // create new ones.
+ //
+ // TODO(adonovan): it would be simpler and more efficient
+ // if the export data machinery invoked a callback to
+ // get-or-create a package instead of a map.
+ //
+ view := make(map[string]*types.Package) // view seen by gcexportdata
+ seen := make(map[*loaderPackage]bool) // all visited packages
+ var visit func(pkgs map[string]*Package)
+ visit = func(pkgs map[string]*Package) {
+ for _, p := range pkgs {
+ lpkg := ld.pkgs[p.ID]
+ if !seen[lpkg] {
+ seen[lpkg] = true
+ view[lpkg.PkgPath] = lpkg.Types
+ visit(lpkg.Imports)
+ }
+ }
+ }
+ visit(lpkg.Imports)
+
+ viewLen := len(view) + 1 // adding the self package
+ // Parse the export data.
+ // (May modify incomplete packages in view but not create new ones.)
+ tpkg, err := gcexportdata.Read(r, ld.Fset, view, lpkg.PkgPath)
+ if err != nil {
+ return nil, fmt.Errorf("reading %s: %v", lpkg.ExportFile, err)
+ }
+ if viewLen != len(view) {
+ log.Fatalf("Unexpected package creation during export data loading")
+ }
+
+ lpkg.Types = tpkg
+ lpkg.IllTyped = false
+
+ return tpkg, nil
+}
+
+// impliedLoadMode returns loadMode with its dependencies.
+func impliedLoadMode(loadMode LoadMode) LoadMode {
+ if loadMode&NeedTypesInfo != 0 && loadMode&NeedImports == 0 {
+ // If NeedTypesInfo, go/packages needs to do typechecking itself so it can
+ // associate type info with the AST. To do so, we need the export data
+ // for dependencies, which means we need to ask for the direct dependencies.
+ // NeedImports is used to ask for the direct dependencies.
+ loadMode |= NeedImports
+ }
+
+ if loadMode&NeedDeps != 0 && loadMode&NeedImports == 0 {
+ // With NeedDeps we need to load at least direct dependencies.
+ // NeedImports is used to ask for the direct dependencies.
+ loadMode |= NeedImports
+ }
+
+ return loadMode
+}
+
+func usesExportData(cfg *Config) bool {
+ return cfg.Mode&NeedExportsFile != 0 || cfg.Mode&NeedTypes != 0 && cfg.Mode&NeedDeps == 0
+}
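+// A typical client of this file's loader (a hedged usage sketch; the pattern
+// string and mode bits are illustrative, not prescribed here):
+//
+//	cfg := &packages.Config{Mode: packages.NeedName | packages.NeedSyntax | packages.NeedTypes}
+//	pkgs, err := packages.Load(cfg, "./...")
+//	if err != nil {
+//		log.Fatal(err) // driver failure, e.g. the underlying go tool could not run
+//	}
+//	if packages.PrintErrors(pkgs) > 0 {
+//		// per-package parse and type errors were printed to stderr
+//	}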
diff --git a/vendor/golang.org/x/tools/go/packages/visit.go b/vendor/golang.org/x/tools/go/packages/visit.go
new file mode 100644
index 0000000000000..b13cb081fcbe5
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/packages/visit.go
@@ -0,0 +1,55 @@
+package packages
+
+import (
+ "fmt"
+ "os"
+ "sort"
+)
+
+// Visit visits all the packages in the import graph whose roots are
+// pkgs, calling the optional pre function the first time each package
+// is encountered (preorder), and the optional post function after a
+// package's dependencies have been visited (postorder).
+// The boolean result of pre(pkg) determines whether
+// the imports of package pkg are visited.
+func Visit(pkgs []*Package, pre func(*Package) bool, post func(*Package)) {
+ seen := make(map[*Package]bool)
+ var visit func(*Package)
+ visit = func(pkg *Package) {
+ if !seen[pkg] {
+ seen[pkg] = true
+
+ if pre == nil || pre(pkg) {
+ paths := make([]string, 0, len(pkg.Imports))
+ for path := range pkg.Imports {
+ paths = append(paths, path)
+ }
+ sort.Strings(paths) // Imports is a map, this makes visit stable
+ for _, path := range paths {
+ visit(pkg.Imports[path])
+ }
+ }
+
+ if post != nil {
+ post(pkg)
+ }
+ }
+ }
+ for _, pkg := range pkgs {
+ visit(pkg)
+ }
+}
+
+// PrintErrors prints to os.Stderr the accumulated errors of all
+// packages in the import graph rooted at pkgs, dependencies first.
+// PrintErrors returns the number of errors printed.
+func PrintErrors(pkgs []*Package) int {
+ var n int
+ Visit(pkgs, nil, func(pkg *Package) {
+ for _, err := range pkg.Errors {
+ fmt.Fprintln(os.Stderr, err)
+ n++
+ }
+ })
+ return n
+}
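+// A small usage sketch (illustrative): collecting package IDs in
+// dependencies-first order via the post callback.
+//
+//	var order []string
+//	packages.Visit(pkgs, nil, func(p *packages.Package) {
+//		order = append(order, p.ID)
+//	})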
diff --git a/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go b/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go
new file mode 100644
index 0000000000000..882e3b3d8a960
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/types/objectpath/objectpath.go
@@ -0,0 +1,523 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package objectpath defines a naming scheme for types.Objects
+// (that is, named entities in Go programs) relative to their enclosing
+// package.
+//
+// Type-checker objects are canonical, so they are usually identified by
+// their address in memory (a pointer), but a pointer has meaning only
+// within one address space. By contrast, objectpath names allow the
+// identity of an object to be sent from one program to another,
+// establishing a correspondence between types.Object variables that are
+// distinct but logically equivalent.
+//
+// A single object may have multiple paths. In this example,
+// type A struct{ X int }
+// type B A
+// the field X has two paths due to its membership of both A and B.
+// The For(obj) function always returns one of these paths, arbitrarily
+// but consistently.
+package objectpath
+
+import (
+ "fmt"
+ "strconv"
+ "strings"
+
+ "go/types"
+)
+
+// A Path is an opaque name that identifies a types.Object
+// relative to its package. Conceptually, the name consists of a
+// sequence of destructuring operations applied to the package scope
+// to obtain the original object.
+// The name does not include the package itself.
+type Path string
+
+// Encoding
+//
+// An object path is a textual and (with training) human-readable encoding
+// of a sequence of destructuring operators, starting from a types.Package.
+// The sequences represent a path through the package/object/type graph.
+// We classify these operators by their type:
+//
+// PO package->object Package.Scope.Lookup
+// OT object->type Object.Type
+// TT type->type Type.{Elem,Key,Params,Results,Underlying} [EKPRU]
+// TO type->object Type.{At,Field,Method,Obj} [AFMO]
+//
+// All valid paths start with a package and end at an object
+// and thus may be defined by the regular language:
+//
+// objectpath = PO (OT TT* TO)*
+//
+// The concrete encoding follows directly:
+// - The only PO operator is Package.Scope.Lookup, which requires an identifier.
+// - The only OT operator is Object.Type,
+// which we encode as '.' because dot cannot appear in an identifier.
+// - The TT operators are encoded as [EKPRU].
+// - The TO operators are encoded as [AFMO];
+// three of these (At,Field,Method) require an integer operand,
+// which is encoded as a string of decimal digits.
+// These indices are stable across different representations
+// of the same package, even source and export data.
+//
+// In the example below,
+//
+// package p
+//
+// type T interface {
+// f() (a string, b struct{ X int })
+// }
+//
+// field X has the path "T.UM0.RA1.F0",
+// representing the following sequence of operations:
+//
+// p.Lookup("T") T
+// .Type().Underlying().Method(0). f
+// .Type().Results().At(1) b
+// .Type().Field(0) X
+//
+// The encoding is not maximally compact---every R or P is
+// followed by an A, for example---but this simplifies the
+// encoder and decoder.
+//
+const (
+ // object->type operators
+ opType = '.' // .Type() (Object)
+
+ // type->type operators
+ opElem = 'E' // .Elem() (Pointer, Slice, Array, Chan, Map)
+ opKey = 'K' // .Key() (Map)
+ opParams = 'P' // .Params() (Signature)
+ opResults = 'R' // .Results() (Signature)
+ opUnderlying = 'U' // .Underlying() (Named)
+
+ // type->object operators
+ opAt = 'A' // .At(i) (Tuple)
+ opField = 'F' // .Field(i) (Struct)
+ opMethod = 'M' // .Method(i) (Named or Interface; not Struct: "promoted" names are ignored)
+ opObj = 'O' // .Obj() (Named)
+)
+
+// The For function returns the path to an object relative to its package,
+// or an error if the object is not accessible from the package's Scope.
+//
+// The For function guarantees to return a path only for the following objects:
+// - package-level types
+// - exported package-level non-types
+// - methods
+// - parameter and result variables
+// - struct fields
+// These objects are sufficient to define the API of their package.
+// The objects described by a package's export data are drawn from this set.
+//
+// For does not return a path for predeclared names, imported package
+// names, local names, and unexported package-level names (except
+// types).
+//
+// Example: given this definition,
+//
+// package p
+//
+// type T interface {
+// f() (a string, b struct{ X int })
+// }
+//
+// For(X) would return a path that denotes the following sequence of operations:
+//
+// p.Scope().Lookup("T") (TypeName T)
+// .Type().Underlying().Method(0). (method Func f)
+// .Type().Results().At(1) (field Var b)
+// .Type().Field(0) (field Var X)
+//
+// where p is the package (*types.Package) to which X belongs.
+func For(obj types.Object) (Path, error) {
+ pkg := obj.Pkg()
+
+ // This table lists the cases of interest.
+ //
+ // Object Action
+ // ------ ------
+ // nil reject
+ // builtin reject
+ // pkgname reject
+ // label reject
+ // var
+ // package-level accept
+ // func param/result accept
+ // local reject
+ // struct field accept
+ // const
+ // package-level accept
+ // local reject
+ // func
+ // package-level accept
+ // init functions reject
+ // concrete method accept
+ // interface method accept
+ // type
+ // package-level accept
+ // local reject
+ //
+ // The only accessible package-level objects are members of pkg itself.
+ //
+ // The cases are handled in four steps:
+ //
+ // 1. reject nil and builtin
+ // 2. accept package-level objects
+ // 3. reject obviously invalid objects
+ // 4. search the API for the path to the param/result/field/method.
+
+ // 1. reference to nil or builtin?
+ if pkg == nil {
+ return "", fmt.Errorf("predeclared %s has no path", obj)
+ }
+ scope := pkg.Scope()
+
+ // 2. package-level object?
+ if scope.Lookup(obj.Name()) == obj {
+ // Only exported objects (and non-exported types) have a path.
+ // Non-exported types may be referenced by other objects.
+ if _, ok := obj.(*types.TypeName); !ok && !obj.Exported() {
+ return "", fmt.Errorf("no path for non-exported %v", obj)
+ }
+ return Path(obj.Name()), nil
+ }
+
+ // 3. Not a package-level object.
+ // Reject obviously non-viable cases.
+ switch obj := obj.(type) {
+ case *types.Const, // Only package-level constants have a path.
+ *types.TypeName, // Only package-level types have a path.
+ *types.Label, // Labels are function-local.
+ *types.PkgName: // PkgNames are file-local.
+ return "", fmt.Errorf("no path for %v", obj)
+
+ case *types.Var:
+ // Could be:
+ // - a field (obj.IsField())
+ // - a func parameter or result
+ // - a local var.
+ // Sadly there is no way to distinguish
+ // a param/result from a local
+ // so we must proceed to the find.
+
+ case *types.Func:
+ // A func, if not package-level, must be a method.
+ if recv := obj.Type().(*types.Signature).Recv(); recv == nil {
+ return "", fmt.Errorf("func is not a method: %v", obj)
+ }
+ // TODO(adonovan): opt: if the method is concrete,
+ // do a specialized version of the rest of this function so
+ // that it's O(1) not O(|scope|). Basically 'find' is needed
+ // only for struct fields and interface methods.
+
+ default:
+ panic(obj)
+ }
+
+ // 4. Search the API for the path to the var (field/param/result) or method.
+
+ // First inspect package-level named types.
+ // In the presence of path aliases, these give
+ // the best paths because non-types may
+ // refer to types, but not the reverse.
+ empty := make([]byte, 0, 48) // initial space
+ for _, name := range scope.Names() {
+ o := scope.Lookup(name)
+ tname, ok := o.(*types.TypeName)
+ if !ok {
+ continue // handle non-types in second pass
+ }
+
+ path := append(empty, name...)
+ path = append(path, opType)
+
+ T := o.Type()
+
+ if tname.IsAlias() {
+ // type alias
+ if r := find(obj, T, path); r != nil {
+ return Path(r), nil
+ }
+ } else {
+ // defined (named) type
+ if r := find(obj, T.Underlying(), append(path, opUnderlying)); r != nil {
+ return Path(r), nil
+ }
+ }
+ }
+
+ // Then inspect everything else:
+ // non-types, and declared methods of defined types.
+ for _, name := range scope.Names() {
+ o := scope.Lookup(name)
+ path := append(empty, name...)
+ if _, ok := o.(*types.TypeName); !ok {
+ if o.Exported() {
+ // exported non-type (const, var, func)
+ if r := find(obj, o.Type(), append(path, opType)); r != nil {
+ return Path(r), nil
+ }
+ }
+ continue
+ }
+
+ // Inspect declared methods of defined types.
+ if T, ok := o.Type().(*types.Named); ok {
+ path = append(path, opType)
+ for i := 0; i < T.NumMethods(); i++ {
+ m := T.Method(i)
+ path2 := appendOpArg(path, opMethod, i)
+ if m == obj {
+ return Path(path2), nil // found declared method
+ }
+ if r := find(obj, m.Type(), append(path2, opType)); r != nil {
+ return Path(r), nil
+ }
+ }
+ }
+ }
+
+ return "", fmt.Errorf("can't find path for %v in %s", obj, pkg.Path())
+}
+
+func appendOpArg(path []byte, op byte, arg int) []byte {
+ path = append(path, op)
+ path = strconv.AppendInt(path, int64(arg), 10)
+ return path
+}
+
+// find finds obj within type T, returning the path to it, or nil if not found.
+func find(obj types.Object, T types.Type, path []byte) []byte {
+ switch T := T.(type) {
+ case *types.Basic, *types.Named:
+ // Named types belonging to pkg were handled already,
+ // so T must belong to another package. No path.
+ return nil
+ case *types.Pointer:
+ return find(obj, T.Elem(), append(path, opElem))
+ case *types.Slice:
+ return find(obj, T.Elem(), append(path, opElem))
+ case *types.Array:
+ return find(obj, T.Elem(), append(path, opElem))
+ case *types.Chan:
+ return find(obj, T.Elem(), append(path, opElem))
+ case *types.Map:
+ if r := find(obj, T.Key(), append(path, opKey)); r != nil {
+ return r
+ }
+ return find(obj, T.Elem(), append(path, opElem))
+ case *types.Signature:
+ if r := find(obj, T.Params(), append(path, opParams)); r != nil {
+ return r
+ }
+ return find(obj, T.Results(), append(path, opResults))
+ case *types.Struct:
+ for i := 0; i < T.NumFields(); i++ {
+ f := T.Field(i)
+ path2 := appendOpArg(path, opField, i)
+ if f == obj {
+ return path2 // found field var
+ }
+ if r := find(obj, f.Type(), append(path2, opType)); r != nil {
+ return r
+ }
+ }
+ return nil
+ case *types.Tuple:
+ for i := 0; i < T.Len(); i++ {
+ v := T.At(i)
+ path2 := appendOpArg(path, opAt, i)
+ if v == obj {
+ return path2 // found param/result var
+ }
+ if r := find(obj, v.Type(), append(path2, opType)); r != nil {
+ return r
+ }
+ }
+ return nil
+ case *types.Interface:
+ for i := 0; i < T.NumMethods(); i++ {
+ m := T.Method(i)
+ path2 := appendOpArg(path, opMethod, i)
+ if m == obj {
+ return path2 // found interface method
+ }
+ if r := find(obj, m.Type(), append(path2, opType)); r != nil {
+ return r
+ }
+ }
+ return nil
+ }
+ panic(T)
+}
+
+// Object returns the object denoted by path p within the package pkg.
+func Object(pkg *types.Package, p Path) (types.Object, error) {
+ if p == "" {
+ return nil, fmt.Errorf("empty path")
+ }
+
+ pathstr := string(p)
+ var pkgobj, suffix string
+ if dot := strings.IndexByte(pathstr, opType); dot < 0 {
+ pkgobj = pathstr
+ } else {
+ pkgobj = pathstr[:dot]
+ suffix = pathstr[dot:] // suffix starts with "."
+ }
+
+ obj := pkg.Scope().Lookup(pkgobj)
+ if obj == nil {
+ return nil, fmt.Errorf("package %s does not contain %q", pkg.Path(), pkgobj)
+ }
+
+ // abstraction of *types.{Pointer,Slice,Array,Chan,Map}
+ type hasElem interface {
+ Elem() types.Type
+ }
+ // abstraction of *types.{Interface,Named}
+ type hasMethods interface {
+ Method(int) *types.Func
+ NumMethods() int
+ }
+
+ // The loop state is the pair (t, obj),
+ // exactly one of which is non-nil, initially obj.
+ // All suffixes start with '.' (the only object->type operation),
+ // followed by optional type->type operations,
+ // then a type->object operation.
+ // The cycle then repeats.
+ var t types.Type
+ for suffix != "" {
+ code := suffix[0]
+ suffix = suffix[1:]
+
+ // Codes [AFM] have an integer operand.
+ var index int
+ switch code {
+ case opAt, opField, opMethod:
+ rest := strings.TrimLeft(suffix, "0123456789")
+ numerals := suffix[:len(suffix)-len(rest)]
+ suffix = rest
+ i, err := strconv.Atoi(numerals)
+ if err != nil {
+ return nil, fmt.Errorf("invalid path: bad numeric operand %q for code %q", numerals, code)
+ }
+ index = int(i)
+ case opObj:
+ // no operand
+ default:
+ // The suffix must end with a type->object operation.
+ if suffix == "" {
+ return nil, fmt.Errorf("invalid path: ends with %q, want [AFMO]", code)
+ }
+ }
+
+ if code == opType {
+ if t != nil {
+ return nil, fmt.Errorf("invalid path: unexpected %q in type context", opType)
+ }
+ t = obj.Type()
+ obj = nil
+ continue
+ }
+
+ if t == nil {
+ return nil, fmt.Errorf("invalid path: code %q in object context", code)
+ }
+
+ // Inv: t != nil, obj == nil
+
+ switch code {
+ case opElem:
+ hasElem, ok := t.(hasElem) // Pointer, Slice, Array, Chan, Map
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %T, want pointer, slice, array, chan or map)", code, t, t)
+ }
+ t = hasElem.Elem()
+
+ case opKey:
+ mapType, ok := t.(*types.Map)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %T, want map)", code, t, t)
+ }
+ t = mapType.Key()
+
+ case opParams:
+ sig, ok := t.(*types.Signature)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %T, want signature)", code, t, t)
+ }
+ t = sig.Params()
+
+ case opResults:
+ sig, ok := t.(*types.Signature)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %T, want signature)", code, t, t)
+ }
+ t = sig.Results()
+
+ case opUnderlying:
+ named, ok := t.(*types.Named)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %s, want named)", code, t, t)
+ }
+ t = named.Underlying()
+
+ case opAt:
+ tuple, ok := t.(*types.Tuple)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %s, want tuple)", code, t, t)
+ }
+ if n := tuple.Len(); index >= n {
+ return nil, fmt.Errorf("tuple index %d out of range [0-%d)", index, n)
+ }
+ obj = tuple.At(index)
+ t = nil
+
+ case opField:
+ structType, ok := t.(*types.Struct)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %T, want struct)", code, t, t)
+ }
+ if n := structType.NumFields(); index >= n {
+ return nil, fmt.Errorf("field index %d out of range [0-%d)", index, n)
+ }
+ obj = structType.Field(index)
+ t = nil
+
+ case opMethod:
+ hasMethods, ok := t.(hasMethods) // Interface or Named
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %s, want interface or named)", code, t, t)
+ }
+ if n := hasMethods.NumMethods(); index >= n {
+ return nil, fmt.Errorf("method index %d out of range [0-%d)", index, n)
+ }
+ obj = hasMethods.Method(index)
+ t = nil
+
+ case opObj:
+ named, ok := t.(*types.Named)
+ if !ok {
+ return nil, fmt.Errorf("cannot apply %q to %s (got %s, want named)", code, t, t)
+ }
+ obj = named.Obj()
+ t = nil
+
+ default:
+ return nil, fmt.Errorf("invalid path: unknown code %q", code)
+ }
+ }
+
+ if obj.Pkg() != pkg {
+ return nil, fmt.Errorf("path denotes %s, which belongs to a different package", obj)
+ }
+
+ return obj, nil // success
+}
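+// A round-trip usage sketch (illustrative; obj and pkg are assumed to come
+// from type-checking the same package in two address spaces):
+//
+//	path, err := objectpath.For(obj)
+//	if err != nil {
+//		return err // e.g. a local variable, which has no path
+//	}
+//	// ...transmit string(path) to the other process...
+//	obj2, err := objectpath.Object(pkg, path)
+//	// obj2 now denotes the logically equivalent object.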
diff --git a/vendor/golang.org/x/tools/go/types/typeutil/callee.go b/vendor/golang.org/x/tools/go/types/typeutil/callee.go
new file mode 100644
index 0000000000000..38f596daf9e22
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/types/typeutil/callee.go
@@ -0,0 +1,46 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package typeutil
+
+import (
+ "go/ast"
+ "go/types"
+
+ "golang.org/x/tools/go/ast/astutil"
+)
+
+// Callee returns the named target of a function call, if any:
+// a function, method, builtin, or variable.
+func Callee(info *types.Info, call *ast.CallExpr) types.Object {
+ var obj types.Object
+ switch fun := astutil.Unparen(call.Fun).(type) {
+ case *ast.Ident:
+ obj = info.Uses[fun] // type, var, builtin, or declared func
+ case *ast.SelectorExpr:
+ if sel, ok := info.Selections[fun]; ok {
+ obj = sel.Obj() // method or field
+ } else {
+ obj = info.Uses[fun.Sel] // qualified identifier?
+ }
+ }
+ if _, ok := obj.(*types.TypeName); ok {
+ return nil // T(x) is a conversion, not a call
+ }
+ return obj
+}
+
+// StaticCallee returns the target (function or method) of a static
+// function call, if any. It returns nil for calls to builtins.
+func StaticCallee(info *types.Info, call *ast.CallExpr) *types.Func {
+ if f, ok := Callee(info, call).(*types.Func); ok && !interfaceMethod(f) {
+ return f
+ }
+ return nil
+}
+
+func interfaceMethod(f *types.Func) bool {
+ recv := f.Type().(*types.Signature).Recv()
+ return recv != nil && types.IsInterface(recv.Type())
+}
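+// A usage sketch (illustrative): finding static calls to a known function
+// while walking an AST. Here info and file are assumed to come from a
+// completed type-check.
+//
+//	ast.Inspect(file, func(n ast.Node) bool {
+//		if call, ok := n.(*ast.CallExpr); ok {
+//			if fn := typeutil.StaticCallee(info, call); fn != nil && fn.FullName() == "fmt.Println" {
+//				// found a static call to fmt.Println
+//			}
+//		}
+//		return true
+//	})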
diff --git a/vendor/golang.org/x/tools/go/types/typeutil/imports.go b/vendor/golang.org/x/tools/go/types/typeutil/imports.go
new file mode 100644
index 0000000000000..9c441dba9c06b
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/types/typeutil/imports.go
@@ -0,0 +1,31 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package typeutil
+
+import "go/types"
+
+// Dependencies returns all dependencies of the specified packages.
+//
+// Dependent packages appear in topological order: if package P imports
+// package Q, Q appears earlier than P in the result.
+// The algorithm follows import statements in the order they
+// appear in the source code, so the result is a total order.
+//
+func Dependencies(pkgs ...*types.Package) []*types.Package {
+ var result []*types.Package
+ seen := make(map[*types.Package]bool)
+ var visit func(pkgs []*types.Package)
+ visit = func(pkgs []*types.Package) {
+ for _, p := range pkgs {
+ if !seen[p] {
+ seen[p] = true
+ visit(p.Imports())
+ result = append(result, p)
+ }
+ }
+ }
+ visit(pkgs)
+ return result
+}
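+// A usage sketch (illustrative; pkg is an assumed *types.Package from a
+// completed type-check): printing dependencies in topological order.
+//
+//	for _, dep := range typeutil.Dependencies(pkg) {
+//		fmt.Println(dep.Path()) // each import printed before its importers
+//	}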
diff --git a/vendor/golang.org/x/tools/go/types/typeutil/map.go b/vendor/golang.org/x/tools/go/types/typeutil/map.go
new file mode 100644
index 0000000000000..c7f7545006409
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/types/typeutil/map.go
@@ -0,0 +1,313 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package typeutil defines various utilities for types, such as Map,
+// a mapping from types.Type to interface{} values.
+package typeutil // import "golang.org/x/tools/go/types/typeutil"
+
+import (
+ "bytes"
+ "fmt"
+ "go/types"
+ "reflect"
+)
+
+// Map is a hash-table-based mapping from types (types.Type) to
+// arbitrary interface{} values. The concrete types that implement
+// the Type interface are pointers. Since they are not canonicalized,
+// == cannot be used to check for equivalence, and thus we cannot
+// simply use a Go map.
+//
+// Just as with map[K]V, a nil *Map is a valid empty map.
+//
+// Not thread-safe.
+//
+type Map struct {
+ hasher Hasher // shared by many Maps
+ table map[uint32][]entry // maps hash to bucket; entry.key==nil means unused
+ length int // number of map entries
+}
+
+// entry is an entry (key/value association) in a hash bucket.
+type entry struct {
+ key types.Type
+ value interface{}
+}
+
+// SetHasher sets the hasher used by Map.
+//
+// All Hashers are functionally equivalent but contain internal state
+// used to cache the results of hashing previously seen types.
+//
+// A single Hasher created by MakeHasher() may be shared among many
+// Maps. This is recommended if the instances have many keys in
+// common, as it will amortize the cost of hash computation.
+//
+// A Hasher may grow without bound as new types are seen. Even when a
+// type is deleted from the map, the Hasher never shrinks, since other
+// types in the map may reference the deleted type indirectly.
+//
+// Hashers are not thread-safe, and read-only operations such as
+// Map.At require updates to the hasher, so a full Mutex lock (not a
+// read-lock) is required around all Map operations if a shared
+// hasher is accessed from multiple threads.
+//
+// If SetHasher is not called, the Map will create a private hasher at
+// the first call to Insert.
+//
+func (m *Map) SetHasher(hasher Hasher) {
+ m.hasher = hasher
+}
+
+// Delete removes the entry with the given key, if any.
+// It returns true if the entry was found.
+//
+func (m *Map) Delete(key types.Type) bool {
+ if m != nil && m.table != nil {
+ hash := m.hasher.Hash(key)
+ bucket := m.table[hash]
+ for i, e := range bucket {
+ if e.key != nil && types.Identical(key, e.key) {
+ // We can't compact the bucket as it
+ // would disturb iterators.
+ bucket[i] = entry{}
+ m.length--
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// At returns the map entry for the given key.
+// The result is nil if the entry is not present.
+//
+func (m *Map) At(key types.Type) interface{} {
+ if m != nil && m.table != nil {
+ for _, e := range m.table[m.hasher.Hash(key)] {
+ if e.key != nil && types.Identical(key, e.key) {
+ return e.value
+ }
+ }
+ }
+ return nil
+}
+
+// Set sets the map entry for key to val,
+// and returns the previous entry, if any.
+func (m *Map) Set(key types.Type, value interface{}) (prev interface{}) {
+ if m.table != nil {
+ hash := m.hasher.Hash(key)
+ bucket := m.table[hash]
+ var hole *entry
+ for i, e := range bucket {
+ if e.key == nil {
+ hole = &bucket[i]
+ } else if types.Identical(key, e.key) {
+ prev = e.value
+ bucket[i].value = value
+ return
+ }
+ }
+
+ if hole != nil {
+ *hole = entry{key, value} // overwrite deleted entry
+ } else {
+ m.table[hash] = append(bucket, entry{key, value})
+ }
+ } else {
+ if m.hasher.memo == nil {
+ m.hasher = MakeHasher()
+ }
+ hash := m.hasher.Hash(key)
+ m.table = map[uint32][]entry{hash: {entry{key, value}}}
+ }
+
+ m.length++
+ return
+}
+
+// Len returns the number of map entries.
+func (m *Map) Len() int {
+ if m != nil {
+ return m.length
+ }
+ return 0
+}
+
+// Iterate calls function f on each entry in the map in unspecified order.
+//
+// If f should mutate the map, Iterate provides the same guarantees as
+// Go maps: if f deletes a map entry that Iterate has not yet reached,
+// f will not be invoked for it, but if f inserts a map entry that
+// Iterate has not yet reached, whether or not f will be invoked for
+// it is unspecified.
+//
+func (m *Map) Iterate(f func(key types.Type, value interface{})) {
+ if m != nil {
+ for _, bucket := range m.table {
+ for _, e := range bucket {
+ if e.key != nil {
+ f(e.key, e.value)
+ }
+ }
+ }
+ }
+}
+
+// Keys returns a new slice containing the set of map keys.
+// The order is unspecified.
+func (m *Map) Keys() []types.Type {
+ keys := make([]types.Type, 0, m.Len())
+ m.Iterate(func(key types.Type, _ interface{}) {
+ keys = append(keys, key)
+ })
+ return keys
+}
+
+func (m *Map) toString(values bool) string {
+ if m == nil {
+ return "{}"
+ }
+ var buf bytes.Buffer
+ fmt.Fprint(&buf, "{")
+ sep := ""
+ m.Iterate(func(key types.Type, value interface{}) {
+ fmt.Fprint(&buf, sep)
+ sep = ", "
+ fmt.Fprint(&buf, key)
+ if values {
+ fmt.Fprintf(&buf, ": %q", value)
+ }
+ })
+ fmt.Fprint(&buf, "}")
+ return buf.String()
+}
+
+// String returns a string representation of the map's entries.
+// Values are printed using fmt.Sprintf("%v", v).
+// Order is unspecified.
+//
+func (m *Map) String() string {
+ return m.toString(true)
+}
+
+// KeysString returns a string representation of the map's key set.
+// Order is unspecified.
+//
+func (m *Map) KeysString() string {
+ return m.toString(false)
+}
+
+////////////////////////////////////////////////////////////////////////
+// Hasher
+
+// A Hasher maps each type to its hash value.
+// For efficiency, a hasher uses memoization; thus its memory
+// footprint grows monotonically over time.
+// Hashers are not thread-safe.
+// Hashers have reference semantics.
+// Call MakeHasher to create a Hasher.
+type Hasher struct {
+ memo map[types.Type]uint32
+}
+
+// MakeHasher returns a new Hasher instance.
+func MakeHasher() Hasher {
+ return Hasher{make(map[types.Type]uint32)}
+}
+
+// Hash computes a hash value for the given type t such that
+// Identical(t, t') => Hash(t) == Hash(t').
+func (h Hasher) Hash(t types.Type) uint32 {
+ hash, ok := h.memo[t]
+ if !ok {
+ hash = h.hashFor(t)
+ h.memo[t] = hash
+ }
+ return hash
+}
+
+// hashString computes the Fowler–Noll–Vo hash of s.
+func hashString(s string) uint32 {
+ var h uint32
+ for i := 0; i < len(s); i++ {
+ h ^= uint32(s[i])
+ h *= 16777619
+ }
+ return h
+}
+
+// hashFor computes the hash of t.
+func (h Hasher) hashFor(t types.Type) uint32 {
+ // See Identical for rationale.
+ switch t := t.(type) {
+ case *types.Basic:
+ return uint32(t.Kind())
+
+ case *types.Array:
+ return 9043 + 2*uint32(t.Len()) + 3*h.Hash(t.Elem())
+
+ case *types.Slice:
+ return 9049 + 2*h.Hash(t.Elem())
+
+ case *types.Struct:
+ var hash uint32 = 9059
+ for i, n := 0, t.NumFields(); i < n; i++ {
+ f := t.Field(i)
+ if f.Anonymous() {
+ hash += 8861
+ }
+ hash += hashString(t.Tag(i))
+ hash += hashString(f.Name()) // (ignore f.Pkg)
+ hash += h.Hash(f.Type())
+ }
+ return hash
+
+ case *types.Pointer:
+ return 9067 + 2*h.Hash(t.Elem())
+
+ case *types.Signature:
+ var hash uint32 = 9091
+ if t.Variadic() {
+ hash *= 8863
+ }
+ return hash + 3*h.hashTuple(t.Params()) + 5*h.hashTuple(t.Results())
+
+ case *types.Interface:
+ var hash uint32 = 9103
+ for i, n := 0, t.NumMethods(); i < n; i++ {
+ // See go/types.identicalMethods for rationale.
+ // Method order is not significant.
+ // Ignore m.Pkg().
+ m := t.Method(i)
+ hash += 3*hashString(m.Name()) + 5*h.Hash(m.Type())
+ }
+ return hash
+
+ case *types.Map:
+ return 9109 + 2*h.Hash(t.Key()) + 3*h.Hash(t.Elem())
+
+ case *types.Chan:
+ return 9127 + 2*uint32(t.Dir()) + 3*h.Hash(t.Elem())
+
+ case *types.Named:
+ // Not safe with a copying GC; objects may move.
+ return uint32(reflect.ValueOf(t.Obj()).Pointer())
+
+ case *types.Tuple:
+ return h.hashTuple(t)
+ }
+ panic(t)
+}
+
+func (h Hasher) hashTuple(tuple *types.Tuple) uint32 {
+ // See go/types.identicalTypes for rationale.
+ n := tuple.Len()
+ var hash uint32 = 9137 + 2*uint32(n)
+ for i := 0; i < n; i++ {
+ hash += 3 * h.Hash(tuple.At(i).Type())
+ }
+ return hash
+}
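+// A usage sketch (illustrative): Map associates values with types that are
+// structurally identical yet not pointer-equal, which a plain Go map cannot.
+//
+//	var m typeutil.Map // the zero value is ready to use
+//	m.Set(types.NewSlice(types.Typ[types.Int]), "a []int")
+//	// A second, distinct *types.Slice value still finds the entry:
+//	v := m.At(types.NewSlice(types.Typ[types.Int])) // == "a []int"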
diff --git a/vendor/golang.org/x/tools/go/types/typeutil/methodsetcache.go b/vendor/golang.org/x/tools/go/types/typeutil/methodsetcache.go
new file mode 100644
index 0000000000000..32084610f49a0
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/types/typeutil/methodsetcache.go
@@ -0,0 +1,72 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements a cache of method sets.
+
+package typeutil
+
+import (
+ "go/types"
+ "sync"
+)
+
+// A MethodSetCache records the method set of each type T for which
+// MethodSet(T) is called so that repeat queries are fast.
+// The zero value is a ready-to-use cache instance.
+type MethodSetCache struct {
+ mu sync.Mutex
+ named map[*types.Named]struct{ value, pointer *types.MethodSet } // method sets for named N and *N
+ others map[types.Type]*types.MethodSet // all other types
+}
+
+// MethodSet returns the method set of type T. It is thread-safe.
+//
+// If cache is nil, this function is equivalent to types.NewMethodSet(T).
+// Utility functions can thus expose an optional *MethodSetCache
+// parameter to clients that care about performance.
+//
+func (cache *MethodSetCache) MethodSet(T types.Type) *types.MethodSet {
+ if cache == nil {
+ return types.NewMethodSet(T)
+ }
+ cache.mu.Lock()
+ defer cache.mu.Unlock()
+
+ switch T := T.(type) {
+ case *types.Named:
+ return cache.lookupNamed(T).value
+
+ case *types.Pointer:
+ if N, ok := T.Elem().(*types.Named); ok {
+ return cache.lookupNamed(N).pointer
+ }
+ }
+
+ // all other types
+ // (The map uses pointer equivalence, not type identity.)
+ mset := cache.others[T]
+ if mset == nil {
+ mset = types.NewMethodSet(T)
+ if cache.others == nil {
+ cache.others = make(map[types.Type]*types.MethodSet)
+ }
+ cache.others[T] = mset
+ }
+ return mset
+}
+
+func (cache *MethodSetCache) lookupNamed(named *types.Named) struct{ value, pointer *types.MethodSet } {
+ if cache.named == nil {
+ cache.named = make(map[*types.Named]struct{ value, pointer *types.MethodSet })
+ }
+ // Avoid recomputing mset(*T) for each distinct Pointer
+ // instance whose underlying type is a named type.
+ msets, ok := cache.named[named]
+ if !ok {
+ msets.value = types.NewMethodSet(named)
+ msets.pointer = types.NewMethodSet(types.NewPointer(named))
+ cache.named[named] = msets
+ }
+ return msets
+}
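+// A usage sketch (illustrative; typesOfInterest is an assumed []types.Type):
+// one shared cache amortizes method-set computation across many queries.
+//
+//	var cache typeutil.MethodSetCache // the zero value is ready to use
+//	for _, T := range typesOfInterest {
+//		mset := cache.MethodSet(T) // computed once per type, then cached
+//		_ = mset.Len()
+//	}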
diff --git a/vendor/golang.org/x/tools/go/types/typeutil/ui.go b/vendor/golang.org/x/tools/go/types/typeutil/ui.go
new file mode 100644
index 0000000000000..9849c24cef3f8
--- /dev/null
+++ b/vendor/golang.org/x/tools/go/types/typeutil/ui.go
@@ -0,0 +1,52 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package typeutil
+
+// This file defines utilities for user interfaces that display types.
+
+import "go/types"
+
+// IntuitiveMethodSet returns the intuitive method set of a type T,
+// which is the set of methods you can call on an addressable value of
+// that type.
+//
+// The result always contains MethodSet(T), and is exactly MethodSet(T)
+// for interface types and for pointer-to-concrete types.
+// For all other concrete types T, the result additionally
+// contains each method belonging to *T if there is no identically
+// named method on T itself.
+//
+// This corresponds to user intuition about method sets;
+// this function is intended only for user interfaces.
+//
+// The order of the result is as for types.MethodSet(T).
+//
+func IntuitiveMethodSet(T types.Type, msets *MethodSetCache) []*types.Selection {
+ isPointerToConcrete := func(T types.Type) bool {
+ ptr, ok := T.(*types.Pointer)
+ return ok && !types.IsInterface(ptr.Elem())
+ }
+
+ var result []*types.Selection
+ mset := msets.MethodSet(T)
+ if types.IsInterface(T) || isPointerToConcrete(T) {
+ for i, n := 0, mset.Len(); i < n; i++ {
+ result = append(result, mset.At(i))
+ }
+ } else {
+ // T is some other concrete type.
+ // Report methods of T and *T, preferring those of T.
+ pmset := msets.MethodSet(types.NewPointer(T))
+ for i, n := 0, pmset.Len(); i < n; i++ {
+ meth := pmset.At(i)
+ if m := mset.Lookup(meth.Obj().Pkg(), meth.Obj().Name()); m != nil {
+ meth = m
+ }
+ result = append(result, meth)
+ }
+
+ }
+ return result
+}
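+// A usage sketch (illustrative; T is an assumed types.Type): a nil cache is
+// accepted, at the cost of recomputing method sets on every call.
+//
+//	for _, sel := range typeutil.IntuitiveMethodSet(T, nil) {
+//		fmt.Println(sel.Obj().Name()) // methods of T and, for concrete T, of *T
+//	}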
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go
new file mode 100644
index 0000000000000..7219c8e9ff1f1
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk.go
@@ -0,0 +1,196 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package fastwalk provides a faster version of filepath.Walk for file system
+// scanning tools.
+package fastwalk
+
+import (
+ "errors"
+ "os"
+ "path/filepath"
+ "runtime"
+ "sync"
+)
+
+// TraverseLink is used as a return value from WalkFuncs to indicate that the
+// symlink named in the call may be traversed.
+var TraverseLink = errors.New("fastwalk: traverse symlink, assuming target is a directory")
+
+// SkipFiles is used as a return value from WalkFuncs to indicate that the
+// callback should not be called for any other files in the current directory.
+// Child directories will still be traversed.
+var SkipFiles = errors.New("fastwalk: skip remaining files in directory")
+
+// Walk is a faster implementation of filepath.Walk.
+//
+// filepath.Walk's design necessarily calls os.Lstat on each file,
+// even if the caller needs less info.
+// Many tools need only the type of each file.
+// On some platforms, this information is provided directly by the readdir
+// system call, avoiding the need to stat each file individually.
+// fastwalk_unix.go contains a fork of the syscall routines.
+//
+// See golang.org/issue/16399
+//
+// Walk walks the file tree rooted at root, calling walkFn for
+// each file or directory in the tree, including root.
+//
+// If fastWalk returns filepath.SkipDir, the directory is skipped.
+//
+// Unlike filepath.Walk:
+// * file stat calls must be done by the user.
+// The only provided metadata is the file type, which does not include
+// any permission bits.
+// * multiple goroutines stat the filesystem concurrently. The provided
+// walkFn must be safe for concurrent use.
+// * fastWalk can follow symlinks if walkFn returns the TraverseLink
+// sentinel error. It is the walkFn's responsibility to prevent
+// fastWalk from going into symlink cycles.
+func Walk(root string, walkFn func(path string, typ os.FileMode) error) error {
+ // TODO(bradfitz): make numWorkers configurable? We used a
+ // minimum of 4 to give the kernel more info about multiple
+ // things we want, in hopes its I/O scheduling can take
+ // advantage of that. Hopefully most are in cache. Maybe 4 is
+ // even too low of a minimum. Profile more.
+ numWorkers := 4
+ if n := runtime.NumCPU(); n > numWorkers {
+ numWorkers = n
+ }
+
+ // Make sure to wait for all workers to finish, otherwise
+ // walkFn could still be called after returning. This Wait call
+ // runs after close(e.donec) below.
+ var wg sync.WaitGroup
+ defer wg.Wait()
+
+ w := &walker{
+ fn: walkFn,
+ enqueuec: make(chan walkItem, numWorkers), // buffered for performance
+ workc: make(chan walkItem, numWorkers), // buffered for performance
+ donec: make(chan struct{}),
+
+ // buffered for correctness & not leaking goroutines:
+ resc: make(chan error, numWorkers),
+ }
+ defer close(w.donec)
+
+ for i := 0; i < numWorkers; i++ {
+ wg.Add(1)
+ go w.doWork(&wg)
+ }
+ todo := []walkItem{{dir: root}}
+ out := 0
+ for {
+ workc := w.workc
+ var workItem walkItem
+ if len(todo) == 0 {
+ workc = nil
+ } else {
+ workItem = todo[len(todo)-1]
+ }
+ select {
+ case workc <- workItem:
+ todo = todo[:len(todo)-1]
+ out++
+ case it := <-w.enqueuec:
+ todo = append(todo, it)
+ case err := <-w.resc:
+ out--
+ if err != nil {
+ return err
+ }
+ if out == 0 && len(todo) == 0 {
+ // It's safe to quit here, as long as the buffered
+ // enqueue channel isn't also readable, which might
+ // happen if the worker sends both another unit of
+ // work and its result before the other select was
+ // scheduled and both w.resc and w.enqueuec were
+ // readable.
+ select {
+ case it := <-w.enqueuec:
+ todo = append(todo, it)
+ default:
+ return nil
+ }
+ }
+ }
+ }
+}
+
+// doWork reads directories as instructed (via workc) and runs the
+// user's callback function.
+func (w *walker) doWork(wg *sync.WaitGroup) {
+ defer wg.Done()
+ for {
+ select {
+ case <-w.donec:
+ return
+ case it := <-w.workc:
+ select {
+ case <-w.donec:
+ return
+ case w.resc <- w.walk(it.dir, !it.callbackDone):
+ }
+ }
+ }
+}
+
+type walker struct {
+ fn func(path string, typ os.FileMode) error
+
+ donec chan struct{} // closed on fastWalk's return
+ workc chan walkItem // to workers
+ enqueuec chan walkItem // from workers
+ resc chan error // from workers
+}
+
+type walkItem struct {
+ dir string
+ callbackDone bool // callback already called; don't do it again
+}
+
+func (w *walker) enqueue(it walkItem) {
+ select {
+ case w.enqueuec <- it:
+ case <-w.donec:
+ }
+}
+
+func (w *walker) onDirEnt(dirName, baseName string, typ os.FileMode) error {
+ joined := dirName + string(os.PathSeparator) + baseName
+ if typ == os.ModeDir {
+ w.enqueue(walkItem{dir: joined})
+ return nil
+ }
+
+ err := w.fn(joined, typ)
+ if typ == os.ModeSymlink {
+ if err == TraverseLink {
+ // Set callbackDone so we don't call it twice for both the
+ // symlink-as-symlink and the symlink-as-directory later:
+ w.enqueue(walkItem{dir: joined, callbackDone: true})
+ return nil
+ }
+ if err == filepath.SkipDir {
+ // Permit SkipDir on symlinks too.
+ return nil
+ }
+ }
+ return err
+}
+
+func (w *walker) walk(root string, runUserCallback bool) error {
+ if runUserCallback {
+ err := w.fn(root, os.ModeDir)
+ if err == filepath.SkipDir {
+ return nil
+ }
+ if err != nil {
+ return err
+ }
+ }
+
+ return readDir(root, w.onDirEnt)
+}
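+// A usage sketch (illustrative): counting regular .go files under a root.
+// Because workers run concurrently, the callback guards shared state.
+//
+//	var (
+//		mu sync.Mutex
+//		n  int
+//	)
+//	err := fastwalk.Walk(root, func(path string, typ os.FileMode) error {
+//		if typ.IsRegular() && strings.HasSuffix(path, ".go") {
+//			mu.Lock()
+//			n++
+//			mu.Unlock()
+//		}
+//		return nil
+//	})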
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go
new file mode 100644
index 0000000000000..ccffec5adc10b
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_fileno.go
@@ -0,0 +1,13 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build freebsd openbsd netbsd
+
+package fastwalk
+
+import "syscall"
+
+func direntInode(dirent *syscall.Dirent) uint64 {
+ return uint64(dirent.Fileno)
+}
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go
new file mode 100644
index 0000000000000..ab7fbc0a9a3c8
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_ino.go
@@ -0,0 +1,14 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build linux darwin
+// +build !appengine
+
+package fastwalk
+
+import "syscall"
+
+func direntInode(dirent *syscall.Dirent) uint64 {
+ return uint64(dirent.Ino)
+}
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go
new file mode 100644
index 0000000000000..a3b26a7bae0bf
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_bsd.go
@@ -0,0 +1,13 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build darwin freebsd openbsd netbsd
+
+package fastwalk
+
+import "syscall"
+
+func direntNamlen(dirent *syscall.Dirent) uint64 {
+ return uint64(dirent.Namlen)
+}
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go
new file mode 100644
index 0000000000000..e880d358b138f
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_dirent_namlen_linux.go
@@ -0,0 +1,29 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build linux
+// +build !appengine
+
+package fastwalk
+
+import (
+ "bytes"
+ "syscall"
+ "unsafe"
+)
+
+func direntNamlen(dirent *syscall.Dirent) uint64 {
+ const fixedHdr = uint16(unsafe.Offsetof(syscall.Dirent{}.Name))
+ nameBuf := (*[unsafe.Sizeof(dirent.Name)]byte)(unsafe.Pointer(&dirent.Name[0]))
+ const nameBufLen = uint16(len(nameBuf))
+ limit := dirent.Reclen - fixedHdr
+ if limit > nameBufLen {
+ limit = nameBufLen
+ }
+ nameLen := bytes.IndexByte(nameBuf[:limit], 0)
+ if nameLen < 0 {
+ panic("failed to find terminating 0 byte in dirent")
+ }
+ return uint64(nameLen)
+}
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go
new file mode 100644
index 0000000000000..a906b87595ba0
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_portable.go
@@ -0,0 +1,37 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build appengine !linux,!darwin,!freebsd,!openbsd,!netbsd
+
+package fastwalk
+
+import (
+ "io/ioutil"
+ "os"
+)
+
+// readDir calls fn for each directory entry in dirName.
+// It does not descend into directories or follow symlinks.
+// If fn returns a non-nil error, readDir returns with that error
+// immediately.
+func readDir(dirName string, fn func(dirName, entName string, typ os.FileMode) error) error {
+ fis, err := ioutil.ReadDir(dirName)
+ if err != nil {
+ return err
+ }
+ skipFiles := false
+ for _, fi := range fis {
+ if fi.Mode().IsRegular() && skipFiles {
+ continue
+ }
+ if err := fn(dirName, fi.Name(), fi.Mode()&os.ModeType); err != nil {
+ if err == SkipFiles {
+ skipFiles = true
+ continue
+ }
+ return err
+ }
+ }
+ return nil
+}
diff --git a/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go
new file mode 100644
index 0000000000000..3369b1a0b2de1
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/fastwalk/fastwalk_unix.go
@@ -0,0 +1,127 @@
+// Copyright 2016 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build linux darwin freebsd openbsd netbsd
+// +build !appengine
+
+package fastwalk
+
+import (
+ "fmt"
+ "os"
+ "syscall"
+ "unsafe"
+)
+
+const blockSize = 8 << 10
+
+// unknownFileMode is a sentinel (and bogus) os.FileMode
+// value used to represent a syscall.DT_UNKNOWN Dirent.Type.
+const unknownFileMode os.FileMode = os.ModeNamedPipe | os.ModeSocket | os.ModeDevice
+
+func readDir(dirName string, fn func(dirName, entName string, typ os.FileMode) error) error {
+ fd, err := syscall.Open(dirName, 0, 0)
+ if err != nil {
+ return &os.PathError{Op: "open", Path: dirName, Err: err}
+ }
+ defer syscall.Close(fd)
+
+ // The buffer must be at least a block long.
+ buf := make([]byte, blockSize) // stack-allocated; doesn't escape
+ bufp := 0 // starting read position in buf
+ nbuf := 0 // end valid data in buf
+ skipFiles := false
+ for {
+ if bufp >= nbuf {
+ bufp = 0
+ nbuf, err = syscall.ReadDirent(fd, buf)
+ if err != nil {
+ return os.NewSyscallError("readdirent", err)
+ }
+ if nbuf <= 0 {
+ return nil
+ }
+ }
+ consumed, name, typ := parseDirEnt(buf[bufp:nbuf])
+ bufp += consumed
+ if name == "" || name == "." || name == ".." {
+ continue
+ }
+ // Fallback for filesystems (like old XFS) that don't
+ // support Dirent.Type and have DT_UNKNOWN (0) there
+ // instead.
+ if typ == unknownFileMode {
+ fi, err := os.Lstat(dirName + "/" + name)
+ if err != nil {
+ // It got deleted in the meantime.
+ if os.IsNotExist(err) {
+ continue
+ }
+ return err
+ }
+ typ = fi.Mode() & os.ModeType
+ }
+ if skipFiles && typ.IsRegular() {
+ continue
+ }
+ if err := fn(dirName, name, typ); err != nil {
+ if err == SkipFiles {
+ skipFiles = true
+ continue
+ }
+ return err
+ }
+ }
+}
+
+func parseDirEnt(buf []byte) (consumed int, name string, typ os.FileMode) {
+ // golang.org/issue/15653
+ dirent := (*syscall.Dirent)(unsafe.Pointer(&buf[0]))
+ if v := unsafe.Offsetof(dirent.Reclen) + unsafe.Sizeof(dirent.Reclen); uintptr(len(buf)) < v {
+ panic(fmt.Sprintf("buf size of %d smaller than dirent header size %d", len(buf), v))
+ }
+ if len(buf) < int(dirent.Reclen) {
+ panic(fmt.Sprintf("buf size %d < record length %d", len(buf), dirent.Reclen))
+ }
+ consumed = int(dirent.Reclen)
+ if direntInode(dirent) == 0 { // File absent in directory.
+ return
+ }
+ switch dirent.Type {
+ case syscall.DT_REG:
+ typ = 0
+ case syscall.DT_DIR:
+ typ = os.ModeDir
+ case syscall.DT_LNK:
+ typ = os.ModeSymlink
+ case syscall.DT_BLK:
+ typ = os.ModeDevice
+ case syscall.DT_FIFO:
+ typ = os.ModeNamedPipe
+ case syscall.DT_SOCK:
+ typ = os.ModeSocket
+ case syscall.DT_UNKNOWN:
+ typ = unknownFileMode
+ default:
+ // Skip weird things.
+ // It's probably a DT_WHT (http://lwn.net/Articles/325369/)
+ // or something. Revisit if/when this package is moved outside
+ // of goimports. goimports only cares about regular files,
+ // symlinks, and directories.
+ return
+ }
+
+ nameBuf := (*[unsafe.Sizeof(dirent.Name)]byte)(unsafe.Pointer(&dirent.Name[0]))
+ nameLen := direntNamlen(dirent)
+
+ // Special cases for common things:
+ if nameLen == 1 && nameBuf[0] == '.' {
+ name = "."
+ } else if nameLen == 2 && nameBuf[0] == '.' && nameBuf[1] == '.' {
+ name = ".."
+ } else {
+ name = string(nameBuf[:nameLen])
+ }
+ return
+}
diff --git a/vendor/golang.org/x/tools/internal/gopathwalk/walk.go b/vendor/golang.org/x/tools/internal/gopathwalk/walk.go
new file mode 100644
index 0000000000000..9a61bdbf5ddca
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/gopathwalk/walk.go
@@ -0,0 +1,270 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package gopathwalk is like filepath.Walk but specialized for finding Go
+// packages, particularly in $GOPATH and $GOROOT.
+package gopathwalk
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/build"
+ "io/ioutil"
+ "log"
+ "os"
+ "path/filepath"
+ "strings"
+ "time"
+
+ "golang.org/x/tools/internal/fastwalk"
+)
+
+// Options controls the behavior of a Walk call.
+type Options struct {
+ Debug bool // Enable debug logging
+ ModulesEnabled bool // Search module caches. Also disables legacy goimports ignore rules.
+}
+
+// RootType indicates the type of a Root.
+type RootType int
+
+const (
+ RootUnknown RootType = iota
+ RootGOROOT
+ RootGOPATH
+ RootCurrentModule
+ RootModuleCache
+ RootOther
+)
+
+// A Root is a starting point for a Walk.
+type Root struct {
+ Path string
+ Type RootType
+}
+
+// SrcDirsRoots returns the roots from build.Default.SrcDirs(). Not modules-compatible.
+func SrcDirsRoots(ctx *build.Context) []Root {
+ var roots []Root
+ roots = append(roots, Root{filepath.Join(ctx.GOROOT, "src"), RootGOROOT})
+ for _, p := range filepath.SplitList(ctx.GOPATH) {
+ roots = append(roots, Root{filepath.Join(p, "src"), RootGOPATH})
+ }
+ return roots
+}
+
+// Walk walks Go source directories ($GOROOT, $GOPATH, etc) to find packages.
+// For each package found, add will be called with the absolute
+// paths of the containing source directory and the package directory.
+// Note that add may be called concurrently from multiple goroutines.
+func Walk(roots []Root, add func(root Root, dir string), opts Options) {
+ WalkSkip(roots, add, func(Root, string) bool { return false }, opts)
+}
+
+// WalkSkip walks Go source directories ($GOROOT, $GOPATH, etc) to find packages.
+// For each package found, add will be called with the absolute
+// paths of the containing source directory and the package directory.
+// For each directory that will be scanned, skip will be called
+// with the absolute paths of the containing source directory and the directory.
+// If skip returns false on a directory, it will be processed.
+// Note that both add and skip may be called concurrently from
+// multiple goroutines.
+func WalkSkip(roots []Root, add func(root Root, dir string), skip func(root Root, dir string) bool, opts Options) {
+ for _, root := range roots {
+ walkDir(root, add, skip, opts)
+ }
+}
+
+func walkDir(root Root, add func(Root, string), skip func(root Root, dir string) bool, opts Options) {
+ if _, err := os.Stat(root.Path); os.IsNotExist(err) {
+ if opts.Debug {
+ log.Printf("skipping nonexistent directory: %v", root.Path)
+ }
+ return
+ }
+ start := time.Now()
+ if opts.Debug {
+ log.Printf("gopathwalk: scanning %s", root.Path)
+ }
+ w := &walker{
+ root: root,
+ add: add,
+ skip: skip,
+ opts: opts,
+ }
+ w.init()
+ if err := fastwalk.Walk(root.Path, w.walk); err != nil {
+ log.Printf("gopathwalk: scanning directory %v: %v", root.Path, err)
+ }
+
+ if opts.Debug {
+ log.Printf("gopathwalk: scanned %s in %v", root.Path, time.Since(start))
+ }
+}
+
+// walker is the callback for fastwalk.Walk.
+type walker struct {
+ root Root // The source directory to scan.
+ add func(Root, string) // The callback that will be invoked for every possible Go package dir.
+ skip func(Root, string) bool // The callback that will be invoked for every dir. dir is skipped if it returns true.
+ opts Options // Options passed to Walk by the user.
+
+ ignoredDirs []os.FileInfo // The ignored directories, loaded from .goimportsignore files.
+}
+
+// init initializes the walker based on its Options.
+func (w *walker) init() {
+ var ignoredPaths []string
+ if w.root.Type == RootModuleCache {
+ ignoredPaths = []string{"cache"}
+ }
+ if !w.opts.ModulesEnabled && w.root.Type == RootGOPATH {
+ ignoredPaths = w.getIgnoredDirs(w.root.Path)
+ ignoredPaths = append(ignoredPaths, "v", "mod")
+ }
+
+ for _, p := range ignoredPaths {
+ full := filepath.Join(w.root.Path, p)
+ if fi, err := os.Stat(full); err == nil {
+ w.ignoredDirs = append(w.ignoredDirs, fi)
+ if w.opts.Debug {
+ log.Printf("Directory added to ignore list: %s", full)
+ }
+ } else if w.opts.Debug {
+ log.Printf("Error statting ignored directory: %v", err)
+ }
+ }
+}
+
+// getIgnoredDirs reads an optional config file at <path>/.goimportsignore
+// of relative directories to ignore when scanning for go files.
+// The provided path is one of the $GOPATH entries with "src" appended.
+func (w *walker) getIgnoredDirs(path string) []string {
+ file := filepath.Join(path, ".goimportsignore")
+ slurp, err := ioutil.ReadFile(file)
+ if w.opts.Debug {
+ if err != nil {
+ log.Print(err)
+ } else {
+ log.Printf("Read %s", file)
+ }
+ }
+ if err != nil {
+ return nil
+ }
+
+ var ignoredDirs []string
+ bs := bufio.NewScanner(bytes.NewReader(slurp))
+ for bs.Scan() {
+ line := strings.TrimSpace(bs.Text())
+ if line == "" || strings.HasPrefix(line, "#") {
+ continue
+ }
+ ignoredDirs = append(ignoredDirs, line)
+ }
+ return ignoredDirs
+}
+
+func (w *walker) shouldSkipDir(fi os.FileInfo, dir string) bool {
+ for _, ignoredDir := range w.ignoredDirs {
+ if os.SameFile(fi, ignoredDir) {
+ return true
+ }
+ }
+ if w.skip != nil {
+ // Check with the user specified callback.
+ return w.skip(w.root, dir)
+ }
+ return false
+}
+
+func (w *walker) walk(path string, typ os.FileMode) error {
+ dir := filepath.Dir(path)
+ if typ.IsRegular() {
+ if dir == w.root.Path && (w.root.Type == RootGOROOT || w.root.Type == RootGOPATH) {
+ // Doesn't make sense to have regular files
+ // directly in your $GOPATH/src or $GOROOT/src.
+ return fastwalk.SkipFiles
+ }
+ if !strings.HasSuffix(path, ".go") {
+ return nil
+ }
+
+ w.add(w.root, dir)
+ return fastwalk.SkipFiles
+ }
+ if typ == os.ModeDir {
+ base := filepath.Base(path)
+ if base == "" || base[0] == '.' || base[0] == '_' ||
+ base == "testdata" ||
+ (w.root.Type == RootGOROOT && w.opts.ModulesEnabled && base == "vendor") ||
+ (!w.opts.ModulesEnabled && base == "node_modules") {
+ return filepath.SkipDir
+ }
+ fi, err := os.Lstat(path)
+ if err == nil && w.shouldSkipDir(fi, path) {
+ return filepath.SkipDir
+ }
+ return nil
+ }
+ if typ == os.ModeSymlink {
+ base := filepath.Base(path)
+ if strings.HasPrefix(base, ".#") {
+ // Emacs noise.
+ return nil
+ }
+ fi, err := os.Lstat(path)
+ if err != nil {
+ // Just ignore it.
+ return nil
+ }
+ if w.shouldTraverse(dir, fi) {
+ return fastwalk.TraverseLink
+ }
+ }
+ return nil
+}
+
+// shouldTraverse reports whether the symlink fi, found in dir,
+// should be followed. It makes sure symlinks were never visited
+// before to avoid symlink loops.
+func (w *walker) shouldTraverse(dir string, fi os.FileInfo) bool {
+ path := filepath.Join(dir, fi.Name())
+ target, err := filepath.EvalSymlinks(path)
+ if err != nil {
+ return false
+ }
+ ts, err := os.Stat(target)
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err)
+ return false
+ }
+ if !ts.IsDir() {
+ return false
+ }
+ if w.shouldSkipDir(ts, dir) {
+ return false
+ }
+ // Check for symlink loops by statting each directory component
+ // and seeing if any are the same file as ts.
+ for {
+ parent := filepath.Dir(path)
+ if parent == path {
+ // Made it to the root without seeing a cycle.
+ // Use this symlink.
+ return true
+ }
+ parentInfo, err := os.Stat(parent)
+ if err != nil {
+ return false
+ }
+ if os.SameFile(ts, parentInfo) {
+ // Cycle. Don't traverse.
+ return false
+ }
+ path = parent
+ }
+
+}
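A sketch of driving gopathwalk through the exported API above, collecting every candidate package directory under the default build context (the add callback runs concurrently, hence the mutex):

```go
package main

import (
	"fmt"
	"go/build"
	"sync"

	"golang.org/x/tools/internal/gopathwalk"
)

func main() {
	var mu sync.Mutex
	roots := gopathwalk.SrcDirsRoots(&build.Default)
	gopathwalk.Walk(roots, func(root gopathwalk.Root, dir string) {
		mu.Lock()
		defer mu.Unlock()
		fmt.Println(dir) // a directory that contains at least one .go file
	}, gopathwalk.Options{ModulesEnabled: false})
}
```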
diff --git a/vendor/golang.org/x/tools/internal/imports/fix.go b/vendor/golang.org/x/tools/internal/imports/fix.go
new file mode 100644
index 0000000000000..cdaa57b9bde4a
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/imports/fix.go
@@ -0,0 +1,1581 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package imports
+
+import (
+ "bytes"
+ "context"
+ "fmt"
+ "go/ast"
+ "go/build"
+ "go/parser"
+ "go/token"
+ "io/ioutil"
+ "os"
+ "os/exec"
+ "path"
+ "path/filepath"
+ "reflect"
+ "sort"
+ "strconv"
+ "strings"
+ "sync"
+ "time"
+ "unicode"
+ "unicode/utf8"
+
+ "golang.org/x/tools/go/ast/astutil"
+ "golang.org/x/tools/go/packages"
+ "golang.org/x/tools/internal/gopathwalk"
+)
+
+// importToGroup is a list of functions which map from an import path to
+// a group number.
+var importToGroup = []func(env *ProcessEnv, importPath string) (num int, ok bool){
+ func(env *ProcessEnv, importPath string) (num int, ok bool) {
+ if env.LocalPrefix == "" {
+ return
+ }
+ for _, p := range strings.Split(env.LocalPrefix, ",") {
+ if strings.HasPrefix(importPath, p) || strings.TrimSuffix(p, "/") == importPath {
+ return 3, true
+ }
+ }
+ return
+ },
+ func(_ *ProcessEnv, importPath string) (num int, ok bool) {
+ if strings.HasPrefix(importPath, "appengine") {
+ return 2, true
+ }
+ return
+ },
+ func(_ *ProcessEnv, importPath string) (num int, ok bool) {
+ if strings.Contains(importPath, ".") {
+ return 1, true
+ }
+ return
+ },
+}
+
+func importGroup(env *ProcessEnv, importPath string) int {
+ for _, fn := range importToGroup {
+ if n, ok := fn(env, importPath); ok {
+ return n
+ }
+ }
+ return 0
+}
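The group number decides which blank-line-separated block an import lands in. Traced through the matchers above, assuming env.LocalPrefix is "example.com/myrepo" (a hypothetical prefix):

```go
// importGroup(env, "fmt")                     == 0 // no matcher fires: stdlib and everything else
// importGroup(env, "github.com/pkg/errors")   == 1 // path contains a dot
// importGroup(env, "appengine/datastore")     == 2 // "appengine" prefix
// importGroup(env, "example.com/myrepo/util") == 3 // matches LocalPrefix (checked first)
```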
+
+type ImportFixType int
+
+const (
+ AddImport ImportFixType = iota
+ DeleteImport
+ SetImportName
+)
+
+type ImportFix struct {
+ // StmtInfo represents the import statement this fix will add, remove, or change.
+ StmtInfo ImportInfo
+ // IdentName is the identifier that this fix will add or remove.
+ IdentName string
+ // FixType is the type of fix this is (AddImport, DeleteImport, SetImportName).
+ FixType ImportFixType
+}
+
+// An ImportInfo represents a single import statement.
+type ImportInfo struct {
+ ImportPath string // import path, e.g. "crypto/rand".
+ Name string // import name, e.g. "crand", or "" if none.
+}
+
+// A packageInfo represents what's known about a package.
+type packageInfo struct {
+ name string // real package name, if known.
+ exports map[string]bool // known exports.
+}
+
+// parseOtherFiles parses all the Go files in srcDir except filename, including
+// test files if filename looks like a test.
+func parseOtherFiles(fset *token.FileSet, srcDir, filename string) []*ast.File {
+ // This could use go/packages but it doesn't buy much, and it fails
+ // with https://golang.org/issue/26296 in LoadFiles mode in some cases.
+ considerTests := strings.HasSuffix(filename, "_test.go")
+
+ fileBase := filepath.Base(filename)
+ packageFileInfos, err := ioutil.ReadDir(srcDir)
+ if err != nil {
+ return nil
+ }
+
+ var files []*ast.File
+ for _, fi := range packageFileInfos {
+ if fi.Name() == fileBase || !strings.HasSuffix(fi.Name(), ".go") {
+ continue
+ }
+ if !considerTests && strings.HasSuffix(fi.Name(), "_test.go") {
+ continue
+ }
+
+ f, err := parser.ParseFile(fset, filepath.Join(srcDir, fi.Name()), nil, 0)
+ if err != nil {
+ continue
+ }
+
+ files = append(files, f)
+ }
+
+ return files
+}
+
+// addGlobals puts the names of package vars into the provided map.
+func addGlobals(f *ast.File, globals map[string]bool) {
+ for _, decl := range f.Decls {
+ genDecl, ok := decl.(*ast.GenDecl)
+ if !ok {
+ continue
+ }
+
+ for _, spec := range genDecl.Specs {
+ valueSpec, ok := spec.(*ast.ValueSpec)
+ if !ok {
+ continue
+ }
+ globals[valueSpec.Names[0].Name] = true
+ }
+ }
+}
+
+// collectReferences builds a map of selector expressions, from
+// left hand side (X) to a set of right hand sides (Sel).
+func collectReferences(f *ast.File) references {
+ refs := references{}
+
+ var visitor visitFn
+ visitor = func(node ast.Node) ast.Visitor {
+ if node == nil {
+ return visitor
+ }
+ switch v := node.(type) {
+ case *ast.SelectorExpr:
+ xident, ok := v.X.(*ast.Ident)
+ if !ok {
+ break
+ }
+ if xident.Obj != nil {
+ // If the parser can resolve it, it's not a package ref.
+ break
+ }
+ if !ast.IsExported(v.Sel.Name) {
+ // Whatever this is, it's not exported from a package.
+ break
+ }
+ pkgName := xident.Name
+ r := refs[pkgName]
+ if r == nil {
+ r = make(map[string]bool)
+ refs[pkgName] = r
+ }
+ r[v.Sel.Name] = true
+ }
+ return visitor
+ }
+ ast.Walk(visitor, f)
+ return refs
+}
+
+// collectImports returns all the imports in f.
+// Unnamed imports (., _) and "C" are ignored.
+func collectImports(f *ast.File) []*ImportInfo {
+ var imports []*ImportInfo
+ for _, imp := range f.Imports {
+ var name string
+ if imp.Name != nil {
+ name = imp.Name.Name
+ }
+ if imp.Path.Value == `"C"` || name == "_" || name == "." {
+ continue
+ }
+ path := strings.Trim(imp.Path.Value, `"`)
+ imports = append(imports, &ImportInfo{
+ Name: name,
+ ImportPath: path,
+ })
+ }
+ return imports
+}
+
+// findMissingImport searches pass's candidates for an import that provides
+// pkg, containing all of syms.
+func (p *pass) findMissingImport(pkg string, syms map[string]bool) *ImportInfo {
+ for _, candidate := range p.candidates {
+ pkgInfo, ok := p.knownPackages[candidate.ImportPath]
+ if !ok {
+ continue
+ }
+ if p.importIdentifier(candidate) != pkg {
+ continue
+ }
+
+ allFound := true
+ for right := range syms {
+ if !pkgInfo.exports[right] {
+ allFound = false
+ break
+ }
+ }
+
+ if allFound {
+ return candidate
+ }
+ }
+ return nil
+}
+
+// references is a set of references found in a Go file. The first map key is the
+// left hand side of a selector expression, the second key is the right hand
+// side, and the value should always be true.
+type references map[string]map[string]bool
+
+// A pass contains all the inputs and state necessary to fix a file's imports.
+// It can be modified in some ways during use; see comments below.
+type pass struct {
+ // Inputs. These must be set before a call to load, and not modified after.
+ fset *token.FileSet // fset used to parse f and its siblings.
+ f *ast.File // the file being fixed.
+ srcDir string // the directory containing f.
+ env *ProcessEnv // the environment to use for go commands, etc.
+ loadRealPackageNames bool // if true, load package names from disk rather than guessing them.
+ otherFiles []*ast.File // sibling files.
+
+ // Intermediate state, generated by load.
+ existingImports map[string]*ImportInfo
+ allRefs references
+ missingRefs references
+
+ // Inputs to fix. These can be augmented between successive fix calls.
+ lastTry bool // indicates that this is the last call and fix should clean up as best it can.
+ candidates []*ImportInfo // candidate imports in priority order.
+ knownPackages map[string]*packageInfo // information about all known packages.
+}
+
+// loadPackageNames saves the package names for everything referenced by imports.
+func (p *pass) loadPackageNames(imports []*ImportInfo) error {
+ if p.env.Debug {
+ p.env.Logf("loading package names for %v packages", len(imports))
+ defer func() {
+ p.env.Logf("done loading package names for %v packages", len(imports))
+ }()
+ }
+ var unknown []string
+ for _, imp := range imports {
+ if _, ok := p.knownPackages[imp.ImportPath]; ok {
+ continue
+ }
+ unknown = append(unknown, imp.ImportPath)
+ }
+
+ names, err := p.env.GetResolver().loadPackageNames(unknown, p.srcDir)
+ if err != nil {
+ return err
+ }
+
+ for path, name := range names {
+ p.knownPackages[path] = &packageInfo{
+ name: name,
+ exports: map[string]bool{},
+ }
+ }
+ return nil
+}
+
+// importIdentifier returns the identifier that imp will introduce. It will
+// guess if the package name has not been loaded, e.g. because the source
+// is not available.
+func (p *pass) importIdentifier(imp *ImportInfo) string {
+ if imp.Name != "" {
+ return imp.Name
+ }
+ known := p.knownPackages[imp.ImportPath]
+ if known != nil && known.name != "" {
+ return known.name
+ }
+ return importPathToAssumedName(imp.ImportPath)
+}
+
+// load reads in everything necessary to run a pass, and reports whether the
+// file already has all the imports it needs. It fills in p.missingRefs with the
+// file's missing symbols, if any, or removes unused imports if not.
+func (p *pass) load() ([]*ImportFix, bool) {
+ p.knownPackages = map[string]*packageInfo{}
+ p.missingRefs = references{}
+ p.existingImports = map[string]*ImportInfo{}
+
+ // Load basic information about the file in question.
+ p.allRefs = collectReferences(p.f)
+
+ // Load stuff from other files in the same package:
+ // global variables so we know they don't need resolving, and imports
+ // that we might want to mimic.
+ globals := map[string]bool{}
+ for _, otherFile := range p.otherFiles {
+ // Don't load globals from files that are in the same directory
+ // but a different package. Using them to suggest imports is OK.
+ if p.f.Name.Name == otherFile.Name.Name {
+ addGlobals(otherFile, globals)
+ }
+ p.candidates = append(p.candidates, collectImports(otherFile)...)
+ }
+
+ // Resolve all the import paths we've seen to package names, and store
+ // f's imports by the identifier they introduce.
+ imports := collectImports(p.f)
+ if p.loadRealPackageNames {
+ err := p.loadPackageNames(append(imports, p.candidates...))
+ if err != nil {
+ if p.env.Debug {
+ p.env.Logf("loading package names: %v", err)
+ }
+ return nil, false
+ }
+ }
+ for _, imp := range imports {
+ p.existingImports[p.importIdentifier(imp)] = imp
+ }
+
+ // Find missing references.
+ for left, rights := range p.allRefs {
+ if globals[left] {
+ continue
+ }
+ _, ok := p.existingImports[left]
+ if !ok {
+ p.missingRefs[left] = rights
+ continue
+ }
+ }
+ if len(p.missingRefs) != 0 {
+ return nil, false
+ }
+
+ return p.fix()
+}
+
+// fix attempts to satisfy missing imports using p.candidates. If it finds
+// everything, or if p.lastTry is true, it returns the fixes that add the imports
+// it found, delete anything unused, and update import names, along with true.
+func (p *pass) fix() ([]*ImportFix, bool) {
+ // Find missing imports.
+ var selected []*ImportInfo
+ for left, rights := range p.missingRefs {
+ if imp := p.findMissingImport(left, rights); imp != nil {
+ selected = append(selected, imp)
+ }
+ }
+
+ if !p.lastTry && len(selected) != len(p.missingRefs) {
+ return nil, false
+ }
+
+ // Found everything, or giving up. Add the new imports and remove any unused.
+ var fixes []*ImportFix
+ for _, imp := range p.existingImports {
+ // We deliberately ignore globals here, because we can't be sure
+ // they're in the same package. People do things like put multiple
+ // main packages in the same directory, and we don't want to
+ // remove imports if they happen to have the same name as a var in
+ // a different package.
+ if _, ok := p.allRefs[p.importIdentifier(imp)]; !ok {
+ fixes = append(fixes, &ImportFix{
+ StmtInfo: *imp,
+ IdentName: p.importIdentifier(imp),
+ FixType: DeleteImport,
+ })
+ continue
+ }
+
+ // An existing import may need to update its import name to be correct.
+ if name := p.importSpecName(imp); name != imp.Name {
+ fixes = append(fixes, &ImportFix{
+ StmtInfo: ImportInfo{
+ Name: name,
+ ImportPath: imp.ImportPath,
+ },
+ IdentName: p.importIdentifier(imp),
+ FixType: SetImportName,
+ })
+ }
+ }
+
+ for _, imp := range selected {
+ fixes = append(fixes, &ImportFix{
+ StmtInfo: ImportInfo{
+ Name: p.importSpecName(imp),
+ ImportPath: imp.ImportPath,
+ },
+ IdentName: p.importIdentifier(imp),
+ FixType: AddImport,
+ })
+ }
+
+ return fixes, true
+}
+
+// importSpecName gets the import name of imp in the import spec.
+//
+// When the import identifier matches the assumed import name, the import name does
+// not appear in the import spec.
+func (p *pass) importSpecName(imp *ImportInfo) string {
+ // If we did not load the real package names, or the name is already set,
+ // we just return the existing name.
+ if !p.loadRealPackageNames || imp.Name != "" {
+ return imp.Name
+ }
+
+ ident := p.importIdentifier(imp)
+ if ident == importPathToAssumedName(imp.ImportPath) {
+ return "" // ident not needed since the assumed and real names are the same.
+ }
+ return ident
+}
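For example, with a hypothetical module example.com/blob whose files declare package storage, the spec needs an explicit name, whereas crypto/rand does not:

```go
// Assuming p.loadRealPackageNames is true and
// p.knownPackages["example.com/blob"].name == "storage":
//
//	p.importSpecName(&ImportInfo{ImportPath: "example.com/blob"}) == "storage"
//
// The assumed name for "crypto/rand" already matches the real one, so no
// name is needed in the spec:
//
//	p.importSpecName(&ImportInfo{ImportPath: "crypto/rand"}) == ""
```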
+
+// apply will perform the fixes on f in order.
+func apply(fset *token.FileSet, f *ast.File, fixes []*ImportFix) {
+ for _, fix := range fixes {
+ switch fix.FixType {
+ case DeleteImport:
+ astutil.DeleteNamedImport(fset, f, fix.StmtInfo.Name, fix.StmtInfo.ImportPath)
+ case AddImport:
+ astutil.AddNamedImport(fset, f, fix.StmtInfo.Name, fix.StmtInfo.ImportPath)
+ case SetImportName:
+ // Find the matching import path and change the name.
+ for _, spec := range f.Imports {
+ path := strings.Trim(spec.Path.Value, `"`)
+ if path == fix.StmtInfo.ImportPath {
+ spec.Name = &ast.Ident{
+ Name: fix.StmtInfo.Name,
+ NamePos: spec.Pos(),
+ }
+ }
+ }
+ }
+ }
+}
+
+// assumeSiblingImportsValid assumes that siblings' use of packages is valid,
+// adding the exports they use.
+func (p *pass) assumeSiblingImportsValid() {
+ for _, f := range p.otherFiles {
+ refs := collectReferences(f)
+ imports := collectImports(f)
+ importsByName := map[string]*ImportInfo{}
+ for _, imp := range imports {
+ importsByName[p.importIdentifier(imp)] = imp
+ }
+ for left, rights := range refs {
+ if imp, ok := importsByName[left]; ok {
+ if m, ok := stdlib[imp.ImportPath]; ok {
+ // We have the stdlib in memory; no need to guess.
+ rights = copyExports(m)
+ }
+ p.addCandidate(imp, &packageInfo{
+ // no name; we already know it.
+ exports: rights,
+ })
+ }
+ }
+ }
+}
+
+// addCandidate adds a candidate import to p, and merges in the information
+// in pkg.
+func (p *pass) addCandidate(imp *ImportInfo, pkg *packageInfo) {
+ p.candidates = append(p.candidates, imp)
+ if existing, ok := p.knownPackages[imp.ImportPath]; ok {
+ if existing.name == "" {
+ existing.name = pkg.name
+ }
+ for export := range pkg.exports {
+ existing.exports[export] = true
+ }
+ } else {
+ p.knownPackages[imp.ImportPath] = pkg
+ }
+}
+
+// fixImports adds and removes imports from f so that all its references are
+// satisfied and there are no unused imports.
+//
+// This is declared as a variable rather than a function so goimports can
+// easily be extended by adding a file with an init function.
+var fixImports = fixImportsDefault
+
+func fixImportsDefault(fset *token.FileSet, f *ast.File, filename string, env *ProcessEnv) error {
+ fixes, err := getFixes(fset, f, filename, env)
+ if err != nil {
+ return err
+ }
+ apply(fset, f, fixes)
+ return err
+}
+
+// getFixes gets the import fixes that need to be made to f in order to fix the imports.
+// It does not modify the AST.
+func getFixes(fset *token.FileSet, f *ast.File, filename string, env *ProcessEnv) ([]*ImportFix, error) {
+ abs, err := filepath.Abs(filename)
+ if err != nil {
+ return nil, err
+ }
+ srcDir := filepath.Dir(abs)
+ if env.Debug {
+ env.Logf("fixImports(filename=%q), abs=%q, srcDir=%q ...", filename, abs, srcDir)
+ }
+
+ // First pass: looking only at f, and using the naive algorithm to
+ // derive package names from import paths, see if the file is already
+ // complete. We can't add any imports yet, because we don't know
+ // if missing references are actually package vars.
+ p := &pass{fset: fset, f: f, srcDir: srcDir}
+ if fixes, done := p.load(); done {
+ return fixes, nil
+ }
+
+ otherFiles := parseOtherFiles(fset, srcDir, filename)
+
+ // Second pass: add information from other files in the same package,
+ // like their package vars and imports.
+ p.otherFiles = otherFiles
+ if fixes, done := p.load(); done {
+ return fixes, nil
+ }
+
+ // Now we can try adding imports from the stdlib.
+ p.assumeSiblingImportsValid()
+ addStdlibCandidates(p, p.missingRefs)
+ if fixes, done := p.fix(); done {
+ return fixes, nil
+ }
+
+ // Third pass: get real package names where we had previously used
+ // the naive algorithm. This is the first step that will use the
+ // environment, so we provide it here for the first time.
+ p = &pass{fset: fset, f: f, srcDir: srcDir, env: env}
+ p.loadRealPackageNames = true
+ p.otherFiles = otherFiles
+ if fixes, done := p.load(); done {
+ return fixes, nil
+ }
+
+ addStdlibCandidates(p, p.missingRefs)
+ p.assumeSiblingImportsValid()
+ if fixes, done := p.fix(); done {
+ return fixes, nil
+ }
+
+ // Go look for candidates in $GOPATH, etc. We don't necessarily load
+ // the real exports of sibling imports, so keep assuming their contents.
+ if err := addExternalCandidates(p, p.missingRefs, filename); err != nil {
+ return nil, err
+ }
+
+ p.lastTry = true
+ fixes, _ := p.fix()
+ return fixes, nil
+}
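To make the pass pipeline concrete, this is the shape of output getFixes produces for a file that references fmt.Println without importing it (illustrative; the exact result depends on the environment):

```go
// Input:
//
//	package main
//
//	func main() { fmt.Println("hi") }
//
// Output (illustrative):
//
//	[]*ImportFix{{
//		StmtInfo:  ImportInfo{ImportPath: "fmt"},
//		IdentName: "fmt",
//		FixType:   AddImport,
//	}}
```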
+
+// getCandidatePkgs returns the list of pkgs that are accessible from filename,
+// optionally filtered to only packages named pkgName.
+func getCandidatePkgs(pkgName, filename string, env *ProcessEnv) ([]*pkg, error) {
+ // TODO(heschi): filter out current package. (Don't forget x_test can import x.)
+
+ var result []*pkg
+ // Start off with the standard library.
+ for importPath := range stdlib {
+ if pkgName != "" && path.Base(importPath) != pkgName {
+ continue
+ }
+ result = append(result, &pkg{
+ dir: filepath.Join(env.GOROOT, "src", importPath),
+ importPathShort: importPath,
+ packageName: path.Base(importPath),
+ relevance: 0,
+ })
+ }
+
+ // Exclude goroot results -- getting them is relatively expensive, not cached,
+ // and generally redundant with the in-memory version.
+ exclude := []gopathwalk.RootType{gopathwalk.RootGOROOT}
+ // Only the go/packages resolver uses the first argument, and nobody uses that resolver.
+ scannedPkgs, err := env.GetResolver().scan(nil, true, exclude)
+ if err != nil {
+ return nil, err
+ }
+
+ dupCheck := map[string]struct{}{}
+ for _, pkg := range scannedPkgs {
+ if pkgName != "" && pkg.packageName != pkgName {
+ continue
+ }
+ if !canUse(filename, pkg.dir) {
+ continue
+ }
+ if _, ok := dupCheck[pkg.importPathShort]; ok {
+ continue
+ }
+ dupCheck[pkg.importPathShort] = struct{}{}
+ result = append(result, pkg)
+ }
+
+ // Sort first by relevance, then by package name, with import path as a tiebreaker.
+ sort.Slice(result, func(i, j int) bool {
+ pi, pj := result[i], result[j]
+ if pi.relevance != pj.relevance {
+ return pi.relevance < pj.relevance
+ }
+ if pi.packageName != pj.packageName {
+ return pi.packageName < pj.packageName
+ }
+ return pi.importPathShort < pj.importPathShort
+ })
+
+ return result, nil
+}
+
+func candidateImportName(pkg *pkg) string {
+ if importPathToAssumedName(pkg.importPathShort) != pkg.packageName {
+ return pkg.packageName
+ }
+ return ""
+}
+
+// getAllCandidates gets all of the candidates to be imported, regardless of whether they are needed.
+func getAllCandidates(filename string, env *ProcessEnv) ([]ImportFix, error) {
+ pkgs, err := getCandidatePkgs("", filename, env)
+ if err != nil {
+ return nil, err
+ }
+ result := make([]ImportFix, 0, len(pkgs))
+ for _, pkg := range pkgs {
+ result = append(result, ImportFix{
+ StmtInfo: ImportInfo{
+ ImportPath: pkg.importPathShort,
+ Name: candidateImportName(pkg),
+ },
+ IdentName: pkg.packageName,
+ FixType: AddImport,
+ })
+ }
+ return result, nil
+}
+
+// A PackageExport is a package and its exports.
+type PackageExport struct {
+ Fix *ImportFix
+ Exports []string
+}
+
+func getPackageExports(completePackage, filename string, env *ProcessEnv) ([]PackageExport, error) {
+ pkgs, err := getCandidatePkgs(completePackage, filename, env)
+ if err != nil {
+ return nil, err
+ }
+
+ results := make([]PackageExport, 0, len(pkgs))
+ for _, pkg := range pkgs {
+ fix := &ImportFix{
+ StmtInfo: ImportInfo{
+ ImportPath: pkg.importPathShort,
+ Name: candidateImportName(pkg),
+ },
+ IdentName: pkg.packageName,
+ FixType: AddImport,
+ }
+ var exports []string
+ if e, ok := stdlib[pkg.importPathShort]; ok {
+ exports = e
+ } else {
+ exports, err = loadExportsForPackage(context.Background(), env, completePackage, pkg)
+ if err != nil {
+ if env.Debug {
+ env.Logf("while completing %q, error loading exports from %q: %v", completePackage, pkg.importPathShort, err)
+ }
+ continue
+ }
+ }
+ sort.Strings(exports)
+ results = append(results, PackageExport{
+ Fix: fix,
+ Exports: exports,
+ })
+ }
+
+ return results, nil
+}
+
+// ProcessEnv contains environment variables and settings that affect the use of
+// the go command, the go/build package, etc.
+type ProcessEnv struct {
+ LocalPrefix string
+ Debug bool
+
+ // If non-empty, these will be used instead of the
+ // process-wide values.
+ GOPATH, GOROOT, GO111MODULE, GOPROXY, GOFLAGS, GOSUMDB string
+ WorkingDir string
+
+ // If true, use go/packages regardless of the environment.
+ ForceGoPackages bool
+
+ // Logf is the default logger for the ProcessEnv.
+ Logf func(format string, args ...interface{})
+
+ resolver Resolver
+}
+
+func (e *ProcessEnv) env() []string {
+ env := os.Environ()
+ add := func(k, v string) {
+ if v != "" {
+ env = append(env, k+"="+v)
+ }
+ }
+ add("GOPATH", e.GOPATH)
+ add("GOROOT", e.GOROOT)
+ add("GO111MODULE", e.GO111MODULE)
+ add("GOPROXY", e.GOPROXY)
+ add("GOFLAGS", e.GOFLAGS)
+ add("GOSUMDB", e.GOSUMDB)
+ if e.WorkingDir != "" {
+ add("PWD", e.WorkingDir)
+ }
+ return env
+}
+
+func (e *ProcessEnv) GetResolver() Resolver {
+ if e.resolver != nil {
+ return e.resolver
+ }
+ if e.ForceGoPackages {
+ e.resolver = &goPackagesResolver{env: e}
+ return e.resolver
+ }
+
+ out, err := e.invokeGo("env", "GOMOD")
+ if err != nil || len(bytes.TrimSpace(out.Bytes())) == 0 {
+ e.resolver = &gopathResolver{env: e}
+ return e.resolver
+ }
+ e.resolver = &ModuleResolver{env: e}
+ return e.resolver
+}
+
+func (e *ProcessEnv) newPackagesConfig(mode packages.LoadMode) *packages.Config {
+ return &packages.Config{
+ Mode: mode,
+ Dir: e.WorkingDir,
+ Env: e.env(),
+ }
+}
+
+func (e *ProcessEnv) buildContext() *build.Context {
+ ctx := build.Default
+ ctx.GOROOT = e.GOROOT
+ ctx.GOPATH = e.GOPATH
+
+ // As of Go 1.14, build.Context has a WorkingDir field
+ // (see golang.org/issue/34860).
+ // Populate it only if present.
+ if wd := reflect.ValueOf(&ctx).Elem().FieldByName("WorkingDir"); wd.IsValid() && wd.Kind() == reflect.String {
+ wd.SetString(e.WorkingDir)
+ }
+ return &ctx
+}
+
+func (e *ProcessEnv) invokeGo(args ...string) (*bytes.Buffer, error) {
+ cmd := exec.Command("go", args...)
+ stdout := &bytes.Buffer{}
+ stderr := &bytes.Buffer{}
+ cmd.Stdout = stdout
+ cmd.Stderr = stderr
+ cmd.Env = e.env()
+ cmd.Dir = e.WorkingDir
+
+ if e.Debug {
+ defer func(start time.Time) { e.Logf("%s for %v", time.Since(start), cmdDebugStr(cmd)) }(time.Now())
+ }
+ if err := cmd.Run(); err != nil {
+ return nil, fmt.Errorf("running go: %v (stderr:\n%s)", err, stderr)
+ }
+ return stdout, nil
+}
+
+func cmdDebugStr(cmd *exec.Cmd) string {
+ env := make(map[string]string)
+ for _, kv := range cmd.Env {
+ split := strings.Split(kv, "=")
+ k, v := split[0], split[1]
+ env[k] = v
+ }
+
+ return fmt.Sprintf("GOROOT=%v GOPATH=%v GO111MODULE=%v GOPROXY=%v PWD=%v go %v", env["GOROOT"], env["GOPATH"], env["GO111MODULE"], env["GOPROXY"], env["PWD"], cmd.Args)
+}
+
+func addStdlibCandidates(pass *pass, refs references) {
+ add := func(pkg string) {
+ exports := copyExports(stdlib[pkg])
+ pass.addCandidate(
+ &ImportInfo{ImportPath: pkg},
+ &packageInfo{name: path.Base(pkg), exports: exports})
+ }
+ for left := range refs {
+ if left == "rand" {
+ // Make sure we try crypto/rand before math/rand.
+ add("crypto/rand")
+ add("math/rand")
+ continue
+ }
+ for importPath := range stdlib {
+ if path.Base(importPath) == left {
+ add(importPath)
+ }
+ }
+ }
+}
+
+// A Resolver does the build-system-specific parts of goimports.
+type Resolver interface {
+ // loadPackageNames loads the package names in importPaths.
+ loadPackageNames(importPaths []string, srcDir string) (map[string]string, error)
+ // scan finds (at least) the packages satisfying refs. If loadNames is true,
+ // package names will be set on the results, and dirs whose package name
+ // could not be determined will be excluded.
+ scan(refs references, loadNames bool, exclude []gopathwalk.RootType) ([]*pkg, error)
+ // loadExports returns the set of exported symbols in the package at dir.
+ // loadExports may be called concurrently.
+ loadExports(ctx context.Context, pkg *pkg) (string, []string, error)
+
+ ClearForNewScan()
+}
+
+// goPackagesResolver implements Resolver for GOPATH and module workspaces using go/packages.
+type goPackagesResolver struct {
+ env *ProcessEnv
+}
+
+func (r *goPackagesResolver) ClearForNewScan() {}
+
+func (r *goPackagesResolver) loadPackageNames(importPaths []string, srcDir string) (map[string]string, error) {
+ if len(importPaths) == 0 {
+ return nil, nil
+ }
+ cfg := r.env.newPackagesConfig(packages.LoadFiles)
+ pkgs, err := packages.Load(cfg, importPaths...)
+ if err != nil {
+ return nil, err
+ }
+ names := map[string]string{}
+ for _, pkg := range pkgs {
+ names[VendorlessPath(pkg.PkgPath)] = pkg.Name
+ }
+ // We may not have found all the packages. Guess the rest.
+ for _, path := range importPaths {
+ if _, ok := names[path]; ok {
+ continue
+ }
+ names[path] = importPathToAssumedName(path)
+ }
+ return names, nil
+
+}
+
+func (r *goPackagesResolver) scan(refs references, _ bool, _ []gopathwalk.RootType) ([]*pkg, error) {
+ var loadQueries []string
+ for pkgName := range refs {
+ loadQueries = append(loadQueries, "iamashamedtousethedisabledqueryname="+pkgName)
+ }
+ sort.Strings(loadQueries)
+ cfg := r.env.newPackagesConfig(packages.LoadFiles)
+ goPackages, err := packages.Load(cfg, loadQueries...)
+ if err != nil {
+ return nil, err
+ }
+
+ var scan []*pkg
+ for _, goPackage := range goPackages {
+ scan = append(scan, &pkg{
+ dir: filepath.Dir(goPackage.CompiledGoFiles[0]),
+ importPathShort: VendorlessPath(goPackage.PkgPath),
+ goPackage: goPackage,
+ packageName: goPackage.Name,
+ })
+ }
+ return scan, nil
+}
+
+func (r *goPackagesResolver) loadExports(ctx context.Context, pkg *pkg) (string, []string, error) {
+ if pkg.goPackage == nil {
+ return "", nil, fmt.Errorf("goPackage not set")
+ }
+ var exports []string
+ fset := token.NewFileSet()
+ for _, fname := range pkg.goPackage.CompiledGoFiles {
+ f, err := parser.ParseFile(fset, fname, nil, 0)
+ if err != nil {
+ return "", nil, fmt.Errorf("parsing %s: %v", fname, err)
+ }
+ for name := range f.Scope.Objects {
+ if ast.IsExported(name) {
+ exports = append(exports, name)
+ }
+ }
+ }
+ return pkg.goPackage.Name, exports, nil
+}
+
+func addExternalCandidates(pass *pass, refs references, filename string) error {
+ dirScan, err := pass.env.GetResolver().scan(refs, false, nil)
+ if err != nil {
+ return err
+ }
+
+ // Search for imports matching potential package references.
+ type result struct {
+ imp *ImportInfo
+ pkg *packageInfo
+ }
+ results := make(chan result, len(refs))
+
+ ctx, cancel := context.WithCancel(context.TODO())
+ var wg sync.WaitGroup
+ defer func() {
+ cancel()
+ wg.Wait()
+ }()
+ var (
+ firstErr error
+ firstErrOnce sync.Once
+ )
+ for pkgName, symbols := range refs {
+ wg.Add(1)
+ go func(pkgName string, symbols map[string]bool) {
+ defer wg.Done()
+
+ found, err := findImport(ctx, pass, dirScan, pkgName, symbols, filename)
+
+ if err != nil {
+ firstErrOnce.Do(func() {
+ firstErr = err
+ cancel()
+ })
+ return
+ }
+
+ if found == nil {
+ return // No matching package.
+ }
+
+ imp := &ImportInfo{
+ ImportPath: found.importPathShort,
+ }
+
+ pkg := &packageInfo{
+ name: pkgName,
+ exports: symbols,
+ }
+ results <- result{imp, pkg}
+ }(pkgName, symbols)
+ }
+ go func() {
+ wg.Wait()
+ close(results)
+ }()
+
+ for result := range results {
+ pass.addCandidate(result.imp, result.pkg)
+ }
+ return firstErr
+}
+
+// notIdentifier reports whether ch is an invalid identifier character.
+func notIdentifier(ch rune) bool {
+ return !('a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' ||
+ '0' <= ch && ch <= '9' ||
+ ch == '_' ||
+ ch >= utf8.RuneSelf && (unicode.IsLetter(ch) || unicode.IsDigit(ch)))
+}
+
+// importPathToAssumedName returns the assumed package name of an import path.
+// It does this using only string parsing of the import path.
+// It picks the last element of the path that does not look like a major
+// version, and then picks the valid identifier off the start of that element.
+// It is used to determine if a local rename should be added to an import for
+// clarity.
+// This function could be moved to a standard package and exported if we want
+// to use it in other tools.
+func importPathToAssumedName(importPath string) string {
+ base := path.Base(importPath)
+ if strings.HasPrefix(base, "v") {
+ if _, err := strconv.Atoi(base[1:]); err == nil {
+ dir := path.Dir(importPath)
+ if dir != "." {
+ base = path.Base(dir)
+ }
+ }
+ }
+ base = strings.TrimPrefix(base, "go-")
+ if i := strings.IndexFunc(base, notIdentifier); i >= 0 {
+ base = base[:i]
+ }
+ return base
+}
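The heuristic is easiest to see on concrete import paths, traced through the code above:

```go
// importPathToAssumedName("crypto/rand")                 == "rand"
// importPathToAssumedName("gopkg.in/yaml.v2")            == "yaml"    // cut at the first non-identifier rune
// importPathToAssumedName("github.com/go-chi/chi/v5")    == "chi"     // "v5" looks like a major version
// importPathToAssumedName("github.com/mattn/go-sqlite3") == "sqlite3" // "go-" prefix dropped
```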
+
+// gopathResolver implements resolver for GOPATH workspaces.
+type gopathResolver struct {
+ env *ProcessEnv
+ cache *dirInfoCache
+}
+
+func (r *gopathResolver) init() {
+ if r.cache == nil {
+ r.cache = &dirInfoCache{
+ dirs: map[string]*directoryPackageInfo{},
+ }
+ }
+}
+
+func (r *gopathResolver) ClearForNewScan() {
+ r.cache = nil
+}
+
+func (r *gopathResolver) loadPackageNames(importPaths []string, srcDir string) (map[string]string, error) {
+ r.init()
+ names := map[string]string{}
+ for _, path := range importPaths {
+ names[path] = importPathToName(r.env, path, srcDir)
+ }
+ return names, nil
+}
+
+// importPathToName finds out the actual package name, as declared in its .go files.
+// If there's a problem, it returns "".
+func importPathToName(env *ProcessEnv, importPath, srcDir string) (packageName string) {
+ // Fast path for standard library without going to disk.
+ if _, ok := stdlib[importPath]; ok {
+ return path.Base(importPath) // stdlib packages always match their paths.
+ }
+
+ buildPkg, err := env.buildContext().Import(importPath, srcDir, build.FindOnly)
+ if err != nil {
+ return ""
+ }
+ pkgName, err := packageDirToName(buildPkg.Dir)
+ if err != nil {
+ return ""
+ }
+ return pkgName
+}
+
+// packageDirToName is a faster version of build.Import if
+// the only thing desired is the package name. Given a directory,
+// packageDirToName parses only the package clauses of its files, stopping
+// at the first usable one, trusting that the files in the directory are consistent.
+func packageDirToName(dir string) (packageName string, err error) {
+ d, err := os.Open(dir)
+ if err != nil {
+ return "", err
+ }
+ names, err := d.Readdirnames(-1)
+ d.Close()
+ if err != nil {
+ return "", err
+ }
+ sort.Strings(names) // to have predictable behavior
+ var lastErr error
+ var nfile int
+ for _, name := range names {
+ if !strings.HasSuffix(name, ".go") {
+ continue
+ }
+ if strings.HasSuffix(name, "_test.go") {
+ continue
+ }
+ nfile++
+ fullFile := filepath.Join(dir, name)
+
+ fset := token.NewFileSet()
+ f, err := parser.ParseFile(fset, fullFile, nil, parser.PackageClauseOnly)
+ if err != nil {
+ lastErr = err
+ continue
+ }
+ pkgName := f.Name.Name
+ if pkgName == "documentation" {
+ // Special case from go/build.ImportDir, not
+ // handled by ctx.MatchFile.
+ continue
+ }
+ if pkgName == "main" {
+ // Also skip package main, assuming it's a +build ignore generator or example.
+ // Since you can't import a package main anyway, there's no harm here.
+ continue
+ }
+ return pkgName, nil
+ }
+ if lastErr != nil {
+ return "", lastErr
+ }
+ return "", fmt.Errorf("no importable package found in %d Go files", nfile)
+}
+
+type pkg struct {
+ goPackage *packages.Package
+ dir string // absolute file path to pkg directory ("/usr/lib/go/src/net/http")
+ importPathShort string // vendorless import path ("net/http", "a/b")
+ packageName string // package name loaded from source if requested
+ relevance int // a weakly-defined score of how relevant a package is. 0 is most relevant.
+}
+
+type pkgDistance struct {
+ pkg *pkg
+ distance int // relative distance to target
+}
+
+// byDistanceOrImportPathShortLength sorts by relative distance, breaking ties
+// on the short import path length and then the import string itself.
+type byDistanceOrImportPathShortLength []pkgDistance
+
+func (s byDistanceOrImportPathShortLength) Len() int { return len(s) }
+func (s byDistanceOrImportPathShortLength) Less(i, j int) bool {
+ di, dj := s[i].distance, s[j].distance
+ if di == -1 {
+ return false
+ }
+ if dj == -1 {
+ return true
+ }
+ if di != dj {
+ return di < dj
+ }
+
+ vi, vj := s[i].pkg.importPathShort, s[j].pkg.importPathShort
+ if len(vi) != len(vj) {
+ return len(vi) < len(vj)
+ }
+ return vi < vj
+}
+func (s byDistanceOrImportPathShortLength) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
+
+func distance(basepath, targetpath string) int {
+ p, err := filepath.Rel(basepath, targetpath)
+ if err != nil {
+ return -1
+ }
+ if p == "." {
+ return 0
+ }
+ return strings.Count(p, string(filepath.Separator)) + 1
+}
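Worked examples of the distance metric (a sketch):

```go
// distance("/a/b", "/a/b")   == 0 // same directory
// distance("/a/b", "/a/b/c") == 1 // rel "c"
// distance("/a/b", "/a/c/d") == 3 // rel "../c/d": two separators, plus one
```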
+
+func (r *gopathResolver) scan(_ references, loadNames bool, exclude []gopathwalk.RootType) ([]*pkg, error) {
+ r.init()
+ add := func(root gopathwalk.Root, dir string) {
+ // We assume cached directories have not changed. We can skip them and their
+ // children.
+ if _, ok := r.cache.Load(dir); ok {
+ return
+ }
+
+ importpath := filepath.ToSlash(dir[len(root.Path)+len("/"):])
+ info := directoryPackageInfo{
+ status: directoryScanned,
+ dir: dir,
+ rootType: root.Type,
+ nonCanonicalImportPath: VendorlessPath(importpath),
+ }
+ r.cache.Store(dir, info)
+ }
+ roots := filterRoots(gopathwalk.SrcDirsRoots(r.env.buildContext()), exclude)
+ gopathwalk.Walk(roots, add, gopathwalk.Options{Debug: r.env.Debug, ModulesEnabled: false})
+ var result []*pkg
+ for _, dir := range r.cache.Keys() {
+ info, ok := r.cache.Load(dir)
+ if !ok {
+ continue
+ }
+ if loadNames {
+ var err error
+ info, err = r.cache.CachePackageName(info)
+ if err != nil {
+ continue
+ }
+ }
+
+ p := &pkg{
+ importPathShort: info.nonCanonicalImportPath,
+ dir: dir,
+ relevance: 1,
+ packageName: info.packageName,
+ }
+ if info.rootType == gopathwalk.RootGOROOT {
+ p.relevance = 0
+ }
+ result = append(result, p)
+ }
+ return result, nil
+}
+
+func filterRoots(roots []gopathwalk.Root, exclude []gopathwalk.RootType) []gopathwalk.Root {
+ var result []gopathwalk.Root
+outer:
+ for _, root := range roots {
+ for _, i := range exclude {
+ if i == root.Type {
+ continue outer
+ }
+ }
+ result = append(result, root)
+ }
+ return result
+}
+
+func (r *gopathResolver) loadExports(ctx context.Context, pkg *pkg) (string, []string, error) {
+ r.init()
+ if info, ok := r.cache.Load(pkg.dir); ok {
+ return r.cache.CacheExports(ctx, r.env, info)
+ }
+ return loadExportsFromFiles(ctx, r.env, pkg.dir)
+}
+
+// VendorlessPath returns the devendorized version of the import path ipath.
+// For example, VendorlessPath("foo/bar/vendor/a/b") returns "a/b".
+func VendorlessPath(ipath string) string {
+ // Devendorize for use in import statement.
+ if i := strings.LastIndex(ipath, "/vendor/"); i >= 0 {
+ return ipath[i+len("/vendor/"):]
+ }
+ if strings.HasPrefix(ipath, "vendor/") {
+ return ipath[len("vendor/"):]
+ }
+ return ipath
+}
+
+func loadExportsFromFiles(ctx context.Context, env *ProcessEnv, dir string) (string, []string, error) {
+ var exports []string
+
+ // Look for non-test, buildable .go files which could provide exports.
+ all, err := ioutil.ReadDir(dir)
+ if err != nil {
+ return "", nil, err
+ }
+ var files []os.FileInfo
+ for _, fi := range all {
+ name := fi.Name()
+ if !strings.HasSuffix(name, ".go") || strings.HasSuffix(name, "_test.go") {
+ continue
+ }
+ match, err := env.buildContext().MatchFile(dir, fi.Name())
+ if err != nil || !match {
+ continue
+ }
+ files = append(files, fi)
+ }
+
+ if len(files) == 0 {
+ return "", nil, fmt.Errorf("dir %v contains no buildable, non-test .go files", dir)
+ }
+
+ var pkgName string
+ fset := token.NewFileSet()
+ for _, fi := range files {
+ select {
+ case <-ctx.Done():
+ return "", nil, ctx.Err()
+ default:
+ }
+
+ fullFile := filepath.Join(dir, fi.Name())
+ f, err := parser.ParseFile(fset, fullFile, nil, 0)
+ if err != nil {
+ return "", nil, fmt.Errorf("parsing %s: %v", fullFile, err)
+ }
+ if f.Name.Name == "documentation" {
+ // Special case from go/build.ImportDir, not
+ // handled by MatchFile above.
+ continue
+ }
+ pkgName = f.Name.Name
+ for name := range f.Scope.Objects {
+ if ast.IsExported(name) {
+ exports = append(exports, name)
+ }
+ }
+ }
+
+ if env.Debug {
+ sortedExports := append([]string(nil), exports...)
+ sort.Strings(sortedExports)
+ env.Logf("loaded exports in dir %v (package %v): %v", dir, pkgName, strings.Join(sortedExports, ", "))
+ }
+ return pkgName, exports, nil
+}
+
+// findImport searches for a package with the given symbols.
+// If no package is found, findImport returns (nil, nil).
+func findImport(ctx context.Context, pass *pass, dirScan []*pkg, pkgName string, symbols map[string]bool, filename string) (*pkg, error) {
+ pkgDir, err := filepath.Abs(filename)
+ if err != nil {
+ return nil, err
+ }
+ pkgDir = filepath.Dir(pkgDir)
+
+ // Find candidate packages, looking only at their directory names first.
+ var candidates []pkgDistance
+ for _, pkg := range dirScan {
+ if pkg.dir == pkgDir && pass.f.Name.Name == pkgName {
+ // The candidate is in the same directory and has the
+ // same package name. Don't try to import ourselves.
+ continue
+ }
+ if pkgIsCandidate(filename, pkgName, pkg) {
+ candidates = append(candidates, pkgDistance{
+ pkg: pkg,
+ distance: distance(pkgDir, pkg.dir),
+ })
+ }
+ }
+
+ // Sort the candidates by their import package length,
+ // assuming that shorter package names are better than long
+ // ones. Note that this sorts by the de-vendored name, so
+ // there's no "penalty" for vendoring.
+ sort.Sort(byDistanceOrImportPathShortLength(candidates))
+ if pass.env.Debug {
+ for i, c := range candidates {
+ pass.env.Logf("%s candidate %d/%d: %v in %v", pkgName, i+1, len(candidates), c.pkg.importPathShort, c.pkg.dir)
+ }
+ }
+
+ // Collect exports for packages with matching names.
+
+ rescv := make([]chan *pkg, len(candidates))
+ for i := range candidates {
+ rescv[i] = make(chan *pkg, 1)
+ }
+ const maxConcurrentPackageImport = 4
+ loadExportsSem := make(chan struct{}, maxConcurrentPackageImport)
+
+ ctx, cancel := context.WithCancel(ctx)
+ var wg sync.WaitGroup
+ defer func() {
+ cancel()
+ wg.Wait()
+ }()
+
+ wg.Add(1)
+ go func() {
+ defer wg.Done()
+ for i, c := range candidates {
+ select {
+ case loadExportsSem <- struct{}{}:
+ case <-ctx.Done():
+ return
+ }
+
+ wg.Add(1)
+ go func(c pkgDistance, resc chan<- *pkg) {
+ defer func() {
+ <-loadExportsSem
+ wg.Done()
+ }()
+
+ if pass.env.Debug {
+ pass.env.Logf("loading exports in dir %s (seeking package %s)", c.pkg.dir, pkgName)
+ }
+ exports, err := loadExportsForPackage(ctx, pass.env, pkgName, c.pkg)
+ if err != nil {
+ if pass.env.Debug {
+ pass.env.Logf("loading exports in dir %s (seeking package %s): %v", c.pkg.dir, pkgName, err)
+ }
+ resc <- nil
+ return
+ }
+
+ exportsMap := make(map[string]bool, len(exports))
+ for _, sym := range exports {
+ exportsMap[sym] = true
+ }
+
+ // If it doesn't have the right
+ // symbols, send nil to mean no match.
+ for symbol := range symbols {
+ if !exportsMap[symbol] {
+ resc <- nil
+ return
+ }
+ }
+ resc <- c.pkg
+ }(c, rescv[i])
+ }
+ }()
+
+ for _, resc := range rescv {
+ pkg := <-resc
+ if pkg == nil {
+ continue
+ }
+ return pkg, nil
+ }
+ return nil, nil
+}
+
+func loadExportsForPackage(ctx context.Context, env *ProcessEnv, expectPkg string, pkg *pkg) ([]string, error) {
+ pkgName, exports, err := env.GetResolver().loadExports(ctx, pkg)
+ if err != nil {
+ return nil, err
+ }
+ if expectPkg != pkgName {
+ return nil, fmt.Errorf("dir %v is package %v, wanted %v", pkg.dir, pkgName, expectPkg)
+ }
+ return exports, err
+}
+
+// pkgIsCandidate reports whether pkg is a candidate to satisfy a
+// reference to package pkgIdent made from the file named by
+// filename.
+//
+// This check is purely lexical and is meant to be as fast as possible
+// because it's run over all $GOPATH directories to filter out poor
+// candidates in order to limit the CPU and I/O later parsing the
+// exports in candidate packages.
+//
+// filename is the file being formatted.
+// pkgIdent is the package being searched for, like "client" (if
+// searching for "client.New")
+func pkgIsCandidate(filename, pkgIdent string, pkg *pkg) bool {
+ // Check "internal" and "vendor" visibility:
+ if !canUse(filename, pkg.dir) {
+ return false
+ }
+
+ // Speed optimization to minimize disk I/O:
+ // the last two components on disk must contain the
+ // package name somewhere.
+ //
+ // This permits mismatch naming like directory
+ // "go-foo" being package "foo", or "pkg.v3" being "pkg",
+ // or directory "google.golang.org/api/cloudbilling/v1"
+ // being package "cloudbilling", but doesn't
+ // permit a directory "foo" to be package
+ // "bar", which is strongly discouraged
+ // anyway. There's no reason goimports needs
+ // to be slow just to accommodate that.
+ lastTwo := lastTwoComponents(pkg.importPathShort)
+ if strings.Contains(lastTwo, pkgIdent) {
+ return true
+ }
+ if hasHyphenOrUpperASCII(lastTwo) && !hasHyphenOrUpperASCII(pkgIdent) {
+ lastTwo = lowerASCIIAndRemoveHyphen(lastTwo)
+ if strings.Contains(lastTwo, pkgIdent) {
+ return true
+ }
+ }
+
+ return false
+}
+
+func hasHyphenOrUpperASCII(s string) bool {
+ for i := 0; i < len(s); i++ {
+ b := s[i]
+ if b == '-' || ('A' <= b && b <= 'Z') {
+ return true
+ }
+ }
+ return false
+}
+
+func lowerASCIIAndRemoveHyphen(s string) (ret string) {
+ buf := make([]byte, 0, len(s))
+ for i := 0; i < len(s); i++ {
+ b := s[i]
+ switch {
+ case b == '-':
+ continue
+ case 'A' <= b && b <= 'Z':
+ buf = append(buf, b+('a'-'A'))
+ default:
+ buf = append(buf, b)
+ }
+ }
+ return string(buf)
+}
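These two helpers implement the naming-mismatch tolerance used by pkgIsCandidate below, e.g.:

```go
// hasHyphenOrUpperASCII("go-spew")     == true  // hyphen
// hasHyphenOrUpperASCII("gospew")      == false
// lowerASCIIAndRemoveHyphen("Go-Spew") == "gospew"
```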
+
+// canUse reports whether the package in dir is usable from filename,
+// respecting the Go "internal" and "vendor" visibility rules.
+func canUse(filename, dir string) bool {
+ // Fast path check, before any allocations. If it doesn't contain vendor
+ // or internal, it's not tricky:
+ // Note that this can false-negative on directories like "notinternal",
+ // but we check it correctly below. This is just a fast path.
+ if !strings.Contains(dir, "vendor") && !strings.Contains(dir, "internal") {
+ return true
+ }
+
+ dirSlash := filepath.ToSlash(dir)
+ if !strings.Contains(dirSlash, "/vendor/") && !strings.Contains(dirSlash, "/internal/") && !strings.HasSuffix(dirSlash, "/internal") {
+ return true
+ }
+ // Vendor or internal directory only visible from children of parent.
+ // That means the path from the current directory to the target directory
+ // can contain ../vendor or ../internal but not ../foo/vendor or ../foo/internal
+ // or bar/vendor or bar/internal.
+ // After stripping all the leading ../, the only okay place to see vendor or internal
+ // is at the very beginning of the path.
+ absfile, err := filepath.Abs(filename)
+ if err != nil {
+ return false
+ }
+ absdir, err := filepath.Abs(dir)
+ if err != nil {
+ return false
+ }
+ rel, err := filepath.Rel(absfile, absdir)
+ if err != nil {
+ return false
+ }
+ relSlash := filepath.ToSlash(rel)
+ if i := strings.LastIndex(relSlash, "../"); i >= 0 {
+ relSlash = relSlash[i+len("../"):]
+ }
+ return !strings.Contains(relSlash, "/vendor/") && !strings.Contains(relSlash, "/internal/") && !strings.HasSuffix(relSlash, "/internal")
+}
+
+// lastTwoComponents returns at most the last two path components
+// of v, using either / or \ as the path separator.
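+//
+// For example, lastTwoComponents("golang.org/x/tools") returns "/x/tools",
+// keeping the leading separator.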
+func lastTwoComponents(v string) string {
+ nslash := 0
+ for i := len(v) - 1; i >= 0; i-- {
+ if v[i] == '/' || v[i] == '\\' {
+ nslash++
+ if nslash == 2 {
+ return v[i:]
+ }
+ }
+ }
+ return v
+}
+
+type visitFn func(node ast.Node) ast.Visitor
+
+func (fn visitFn) Visit(node ast.Node) ast.Visitor {
+ return fn(node)
+}
+
+func copyExports(pkg []string) map[string]bool {
+ m := make(map[string]bool, len(pkg))
+ for _, v := range pkg {
+ m[v] = true
+ }
+ return m
+}
diff --git a/vendor/golang.org/x/tools/internal/imports/imports.go b/vendor/golang.org/x/tools/internal/imports/imports.go
new file mode 100644
index 0000000000000..ed3867bb59402
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/imports/imports.go
@@ -0,0 +1,397 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+//go:generate go run mkstdlib.go
+
+// Package imports implements a Go pretty-printer (like package "go/format")
+// that also adds or removes import statements as necessary.
+package imports
+
+import (
+ "bufio"
+ "bytes"
+ "fmt"
+ "go/ast"
+ "go/build"
+ "go/format"
+ "go/parser"
+ "go/printer"
+ "go/token"
+ "io"
+ "io/ioutil"
+ "log"
+ "regexp"
+ "strconv"
+ "strings"
+
+ "golang.org/x/tools/go/ast/astutil"
+)
+
+// Options is golang.org/x/tools/imports.Options with extra internal-only options.
+type Options struct {
+ Env *ProcessEnv // The environment to use. Note: this contains the cached module and filesystem state.
+
+ Fragment bool // Accept fragment of a source file (no package statement)
+ AllErrors bool // Report all errors (not just the first 10 on different lines)
+
+ Comments bool // Print comments (true if nil *Options provided)
+ TabIndent bool // Use tabs for indent (true if nil *Options provided)
+ TabWidth int // Tab width (8 if nil *Options provided)
+
+ FormatOnly bool // Disable the insertion and deletion of imports
+}
+
+// Process implements golang.org/x/tools/imports.Process with explicit context in env.
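+// A typical call is Process("foo.go", src, nil), which uses the default
+// options (print comments, tab indent, tab width 8) and both formats src
+// and fixes its imports.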
+func Process(filename string, src []byte, opt *Options) (formatted []byte, err error) {
+ src, opt, err = initialize(filename, src, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ fileSet := token.NewFileSet()
+ file, adjust, err := parse(fileSet, filename, src, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ if !opt.FormatOnly {
+ if err := fixImports(fileSet, file, filename, opt.Env); err != nil {
+ return nil, err
+ }
+ }
+ return formatFile(fileSet, file, src, adjust, opt)
+}
+
+// FixImports returns a list of fixes to the imports that, when applied,
+// will leave the imports in the same state as Process.
+//
+// Note that filename's directory influences which imports can be chosen,
+// so it is important that filename be accurate.
+func FixImports(filename string, src []byte, opt *Options) (fixes []*ImportFix, err error) {
+ src, opt, err = initialize(filename, src, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ fileSet := token.NewFileSet()
+ file, _, err := parse(fileSet, filename, src, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ return getFixes(fileSet, file, filename, opt.Env)
+}
+
+// ApplyFixes applies all of the fixes to the file and formats it.
+func ApplyFixes(fixes []*ImportFix, filename string, src []byte, opt *Options) (formatted []byte, err error) {
+ src, opt, err = initialize(filename, src, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ fileSet := token.NewFileSet()
+ file, adjust, err := parse(fileSet, filename, src, opt)
+ if err != nil {
+ return nil, err
+ }
+
+ // Apply the fixes to the file.
+ apply(fileSet, file, fixes)
+
+ return formatFile(fileSet, file, src, adjust, opt)
+}
+
+// GetAllCandidates gets all of the candidate packages to import, in
+// sorted order on import path.
+func GetAllCandidates(filename string, opt *Options) (pkgs []ImportFix, err error) {
+ _, opt, err = initialize(filename, nil, opt)
+ if err != nil {
+ return nil, err
+ }
+ return getAllCandidates(filename, opt.Env)
+}
+
+// GetPackageExports returns all known packages with name pkg and their exports.
+func GetPackageExports(pkg, filename string, opt *Options) (exports []PackageExport, err error) {
+ _, opt, err = initialize(filename, nil, opt)
+ if err != nil {
+ return nil, err
+ }
+ return getPackageExports(pkg, filename, opt.Env)
+}
+
+// initialize sets the values for opt and src.
+// If they are provided, they are not changed. Otherwise opt is set to the
+// default values and src is read from the file system.
+func initialize(filename string, src []byte, opt *Options) ([]byte, *Options, error) {
+ // Use defaults if opt is nil.
+ if opt == nil {
+ opt = &Options{Comments: true, TabIndent: true, TabWidth: 8}
+ }
+
+ // Set the env if the user has not provided it.
+ if opt.Env == nil {
+ opt.Env = &ProcessEnv{
+ GOPATH: build.Default.GOPATH,
+ GOROOT: build.Default.GOROOT,
+ }
+ }
+
+ // Set the logger if the user has not provided it.
+ if opt.Env.Logf == nil {
+ opt.Env.Logf = log.Printf
+ }
+
+ if src == nil {
+ b, err := ioutil.ReadFile(filename)
+ if err != nil {
+ return nil, nil, err
+ }
+ src = b
+ }
+
+ return src, opt, nil
+}
+
+func formatFile(fileSet *token.FileSet, file *ast.File, src []byte, adjust func(orig []byte, src []byte) []byte, opt *Options) ([]byte, error) {
+ mergeImports(opt.Env, fileSet, file)
+ sortImports(opt.Env, fileSet, file)
+ imps := astutil.Imports(fileSet, file)
+ var spacesBefore []string // import paths we need spaces before
+ for _, impSection := range imps {
+ // Within each block of contiguous imports, see if any
+ // import lines are in different group numbers. If so,
+ // we'll need to put a space between them so it's
+ // compatible with gofmt.
+ lastGroup := -1
+ for _, importSpec := range impSection {
+ importPath, _ := strconv.Unquote(importSpec.Path.Value)
+ groupNum := importGroup(opt.Env, importPath)
+ if groupNum != lastGroup && lastGroup != -1 {
+ spacesBefore = append(spacesBefore, importPath)
+ }
+ lastGroup = groupNum
+ }
+
+ }
+
+ printerMode := printer.UseSpaces
+ if opt.TabIndent {
+ printerMode |= printer.TabIndent
+ }
+ printConfig := &printer.Config{Mode: printerMode, Tabwidth: opt.TabWidth}
+
+ var buf bytes.Buffer
+ err := printConfig.Fprint(&buf, fileSet, file)
+ if err != nil {
+ return nil, err
+ }
+ out := buf.Bytes()
+ if adjust != nil {
+ out = adjust(src, out)
+ }
+ if len(spacesBefore) > 0 {
+ out, err = addImportSpaces(bytes.NewReader(out), spacesBefore)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ out, err = format.Source(out)
+ if err != nil {
+ return nil, err
+ }
+ return out, nil
+}
+
+// parse parses src, which was read from filename,
+// as a Go source file or statement list.
+func parse(fset *token.FileSet, filename string, src []byte, opt *Options) (*ast.File, func(orig, src []byte) []byte, error) {
+ parserMode := parser.Mode(0)
+ if opt.Comments {
+ parserMode |= parser.ParseComments
+ }
+ if opt.AllErrors {
+ parserMode |= parser.AllErrors
+ }
+
+ // Try as whole source file.
+ file, err := parser.ParseFile(fset, filename, src, parserMode)
+ if err == nil {
+ return file, nil, nil
+ }
+ // If the error is that the source file didn't begin with a
+ // package line and we accept fragmented input, fall through to
+ // try as a source fragment. Stop and return on any other error.
+ if !opt.Fragment || !strings.Contains(err.Error(), "expected 'package'") {
+ return nil, nil, err
+ }
+
+ // If this is a declaration list, make it a source file
+ // by inserting a package clause.
+ // Insert using a ;, not a newline, so that parse errors are on
+ // the correct line.
+ const prefix = "package main;"
+ psrc := append([]byte(prefix), src...)
+ file, err = parser.ParseFile(fset, filename, psrc, parserMode)
+ if err == nil {
+ // Gofmt will turn the ; into a \n.
+ // Do that ourselves now and update the file contents,
+ // so that positions and line numbers are correct going forward.
+ psrc[len(prefix)-1] = '\n'
+ fset.File(file.Package).SetLinesForContent(psrc)
+
+ // If a main function exists, we will assume this is a main
+ // package and leave the file.
+ if containsMainFunc(file) {
+ return file, nil, nil
+ }
+
+ adjust := func(orig, src []byte) []byte {
+ // Remove the package clause.
+ src = src[len(prefix):]
+ return matchSpace(orig, src)
+ }
+ return file, adjust, nil
+ }
+ // If the error is that the source file didn't begin with a
+ // declaration, fall through to try as a statement list.
+ // Stop and return on any other error.
+ if !strings.Contains(err.Error(), "expected declaration") {
+ return nil, nil, err
+ }
+
+ // If this is a statement list, make it a source file
+ // by inserting a package clause and turning the list
+ // into a function body. This handles expressions too.
+ // Insert using a ;, not a newline, so that the line numbers
+ // in fsrc match the ones in src.
+ fsrc := append(append([]byte("package p; func _() {"), src...), '}')
+ file, err = parser.ParseFile(fset, filename, fsrc, parserMode)
+ if err == nil {
+ adjust := func(orig, src []byte) []byte {
+ // Remove the wrapping.
+ // Gofmt has turned the ; into a \n\n.
+ src = src[len("package p\n\nfunc _() {"):]
+ src = src[:len(src)-len("}\n")]
+ // Gofmt has also indented the function body one level.
+ // Remove that indent.
+ src = bytes.Replace(src, []byte("\n\t"), []byte("\n"), -1)
+ return matchSpace(orig, src)
+ }
+ return file, adjust, nil
+ }
+
+ // Failed, and out of options.
+ return nil, nil, err
+}
+
+// containsMainFunc checks if a file contains a function declaration with the
+// function signature 'func main()'
+func containsMainFunc(file *ast.File) bool {
+ for _, decl := range file.Decls {
+ if f, ok := decl.(*ast.FuncDecl); ok {
+ if f.Name.Name != "main" {
+ continue
+ }
+
+ if len(f.Type.Params.List) != 0 {
+ continue
+ }
+
+ if f.Type.Results != nil && len(f.Type.Results.List) != 0 {
+ continue
+ }
+
+ return true
+ }
+ }
+
+ return false
+}
+
+func cutSpace(b []byte) (before, middle, after []byte) {
+ i := 0
+ for i < len(b) && (b[i] == ' ' || b[i] == '\t' || b[i] == '\n') {
+ i++
+ }
+ j := len(b)
+ for j > 0 && (b[j-1] == ' ' || b[j-1] == '\t' || b[j-1] == '\n') {
+ j--
+ }
+ if i <= j {
+ return b[:i], b[i:j], b[j:]
+ }
+ return nil, nil, b[j:]
+}
+
+// matchSpace reformats src to use the same space context as orig.
+// 1) If orig begins with blank lines, matchSpace inserts them at the beginning of src.
+// 2) matchSpace copies the indentation of the first non-blank line in orig
+// to every non-blank line in src.
+// 3) matchSpace copies the trailing space from orig and uses it in place
+// of src's trailing space.
+func matchSpace(orig []byte, src []byte) []byte {
+ before, _, after := cutSpace(orig)
+ i := bytes.LastIndex(before, []byte{'\n'})
+ before, indent := before[:i+1], before[i+1:]
+
+ _, src, _ = cutSpace(src)
+
+ var b bytes.Buffer
+ b.Write(before)
+ for len(src) > 0 {
+ line := src
+ if i := bytes.IndexByte(line, '\n'); i >= 0 {
+ line, src = line[:i+1], line[i+1:]
+ } else {
+ src = nil
+ }
+ if len(line) > 0 && line[0] != '\n' { // not blank
+ b.Write(indent)
+ }
+ b.Write(line)
+ }
+ b.Write(after)
+ return b.Bytes()
+}
+
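+// impLine matches an import spec line (leading whitespace, an optional
+// package name, then a quoted path) and captures the unquoted import path.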
+var impLine = regexp.MustCompile(`^\s+(?:[\w\.]+\s+)?"(.+)"`)
+
+func addImportSpaces(r io.Reader, breaks []string) ([]byte, error) {
+ var out bytes.Buffer
+ in := bufio.NewReader(r)
+ inImports := false
+ done := false
+ for {
+ s, err := in.ReadString('\n')
+ if err == io.EOF {
+ break
+ } else if err != nil {
+ return nil, err
+ }
+
+ if !inImports && !done && strings.HasPrefix(s, "import") {
+ inImports = true
+ }
+ if inImports && (strings.HasPrefix(s, "var") ||
+ strings.HasPrefix(s, "func") ||
+ strings.HasPrefix(s, "const") ||
+ strings.HasPrefix(s, "type")) {
+ done = true
+ inImports = false
+ }
+ if inImports && len(breaks) > 0 {
+ if m := impLine.FindStringSubmatch(s); m != nil {
+ if m[1] == breaks[0] {
+ out.WriteByte('\n')
+ breaks = breaks[1:]
+ }
+ }
+ }
+
+ fmt.Fprint(&out, s)
+ }
+ return out.Bytes(), nil
+}
diff --git a/vendor/golang.org/x/tools/internal/imports/mod.go b/vendor/golang.org/x/tools/internal/imports/mod.go
new file mode 100644
index 0000000000000..0f9b87eb7331e
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/imports/mod.go
@@ -0,0 +1,643 @@
+package imports
+
+import (
+ "bytes"
+ "context"
+ "encoding/json"
+ "fmt"
+ "io/ioutil"
+ "os"
+ "path"
+ "path/filepath"
+ "regexp"
+ "sort"
+ "strconv"
+ "strings"
+ "sync"
+
+ "golang.org/x/tools/internal/gopathwalk"
+ "golang.org/x/tools/internal/module"
+ "golang.org/x/tools/internal/semver"
+)
+
+// ModuleResolver implements the resolver interface for modules, using the
+// go command as little as feasible.
+type ModuleResolver struct {
+ env *ProcessEnv
+ moduleCacheDir string
+ dummyVendorMod *ModuleJSON // If vendoring is enabled, the pseudo-module that represents the /vendor directory.
+
+ Initialized bool
+ Main *ModuleJSON
+ ModsByModPath []*ModuleJSON // All modules, ordered by # of path components in module Path...
+ ModsByDir []*ModuleJSON // ...or Dir.
+
+ // moduleCacheCache stores information about the module cache.
+ moduleCacheCache *dirInfoCache
+ otherCache *dirInfoCache
+}
+
+type ModuleJSON struct {
+ Path string // module path
+ Replace *ModuleJSON // replaced by this module
+ Main bool // is this the main module?
+ Dir string // directory holding files for this module, if any
+ GoMod string // path to go.mod file for this module, if any
+ GoVersion string // go version used in module
+}
+
+func (r *ModuleResolver) init() error {
+ if r.Initialized {
+ return nil
+ }
+ mainMod, vendorEnabled, err := vendorEnabled(r.env)
+ if err != nil {
+ return err
+ }
+
+ if mainMod != nil && vendorEnabled {
+ // Vendor mode is on, so all the non-Main modules are irrelevant,
+ // and we need to search /vendor for everything.
+ r.Main = mainMod
+ r.dummyVendorMod = &ModuleJSON{
+ Path: "",
+ Dir: filepath.Join(mainMod.Dir, "vendor"),
+ }
+ r.ModsByModPath = []*ModuleJSON{mainMod, r.dummyVendorMod}
+ r.ModsByDir = []*ModuleJSON{mainMod, r.dummyVendorMod}
+ } else {
+ // Vendor mode is off, so run go list -m ... to find everything.
+ r.initAllMods()
+ }
+
+ r.moduleCacheDir = filepath.Join(filepath.SplitList(r.env.GOPATH)[0], "/pkg/mod")
+
+ sort.Slice(r.ModsByModPath, func(i, j int) bool {
+ count := func(x int) int {
+ return strings.Count(r.ModsByModPath[x].Path, "/")
+ }
+ return count(j) < count(i) // descending order
+ })
+ sort.Slice(r.ModsByDir, func(i, j int) bool {
+ count := func(x int) int {
+ return strings.Count(r.ModsByDir[x].Dir, "/")
+ }
+ return count(j) < count(i) // descending order
+ })
+
+ if r.moduleCacheCache == nil {
+ r.moduleCacheCache = &dirInfoCache{
+ dirs: map[string]*directoryPackageInfo{},
+ }
+ }
+ if r.otherCache == nil {
+ r.otherCache = &dirInfoCache{
+ dirs: map[string]*directoryPackageInfo{},
+ }
+ }
+ r.Initialized = true
+ return nil
+}
+
+func (r *ModuleResolver) initAllMods() error {
+ stdout, err := r.env.invokeGo("list", "-m", "-json", "...")
+ if err != nil {
+ return err
+ }
+ for dec := json.NewDecoder(stdout); dec.More(); {
+ mod := &ModuleJSON{}
+ if err := dec.Decode(mod); err != nil {
+ return err
+ }
+ if mod.Dir == "" {
+ if r.env.Debug {
+ r.env.Logf("module %v has not been downloaded and will be ignored", mod.Path)
+ }
+ // Can't do anything with a module that's not downloaded.
+ continue
+ }
+ r.ModsByModPath = append(r.ModsByModPath, mod)
+ r.ModsByDir = append(r.ModsByDir, mod)
+ if mod.Main {
+ r.Main = mod
+ }
+ }
+ return nil
+}
+
+func (r *ModuleResolver) ClearForNewScan() {
+ r.otherCache = &dirInfoCache{
+ dirs: map[string]*directoryPackageInfo{},
+ }
+}
+
+func (r *ModuleResolver) ClearForNewMod() {
+ env := r.env
+ *r = ModuleResolver{
+ env: env,
+ }
+ r.init()
+}
+
+// findPackage returns the module and directory that contains the package at
+// the given import path, or returns nil, "" if no module is in scope.
+func (r *ModuleResolver) findPackage(importPath string) (*ModuleJSON, string) {
+ // This can't find packages in the stdlib, but that's harmless for all
+ // the existing code paths.
+ for _, m := range r.ModsByModPath {
+ if !strings.HasPrefix(importPath, m.Path) {
+ continue
+ }
+ pathInModule := importPath[len(m.Path):]
+ pkgDir := filepath.Join(m.Dir, pathInModule)
+ if r.dirIsNestedModule(pkgDir, m) {
+ continue
+ }
+
+ if info, ok := r.cacheLoad(pkgDir); ok {
+ if loaded, err := info.reachedStatus(nameLoaded); loaded {
+ if err != nil {
+ continue // No package in this dir.
+ }
+ return m, pkgDir
+ }
+ if scanned, err := info.reachedStatus(directoryScanned); scanned && err != nil {
+ continue // Dir is unreadable, etc.
+ }
+ // This is slightly wrong: a directory doesn't have to have an
+ // importable package to count as a package for package-to-module
+ // resolution. package main or _test files should count but
+ // don't.
+ // TODO(heschi): fix this.
+ if _, err := r.cachePackageName(info); err == nil {
+ return m, pkgDir
+ }
+ }
+
+ // Not cached. Read the filesystem.
+ pkgFiles, err := ioutil.ReadDir(pkgDir)
+ if err != nil {
+ continue
+ }
+ // A module only contains a package if it has buildable go
+ // files in that directory. If not, it could be provided by an
+ // outer module. See #29736.
+ for _, fi := range pkgFiles {
+ if ok, _ := r.env.buildContext().MatchFile(pkgDir, fi.Name()); ok {
+ return m, pkgDir
+ }
+ }
+ }
+ return nil, ""
+}
+
+func (r *ModuleResolver) cacheLoad(dir string) (directoryPackageInfo, bool) {
+ if info, ok := r.moduleCacheCache.Load(dir); ok {
+ return info, ok
+ }
+ return r.otherCache.Load(dir)
+}
+
+func (r *ModuleResolver) cacheStore(info directoryPackageInfo) {
+ if info.rootType == gopathwalk.RootModuleCache {
+ r.moduleCacheCache.Store(info.dir, info)
+ } else {
+ r.otherCache.Store(info.dir, info)
+ }
+}
+
+func (r *ModuleResolver) cacheKeys() []string {
+ return append(r.moduleCacheCache.Keys(), r.otherCache.Keys()...)
+}
+
+// cachePackageName caches the package name for a dir already in the cache.
+func (r *ModuleResolver) cachePackageName(info directoryPackageInfo) (directoryPackageInfo, error) {
+ if info.rootType == gopathwalk.RootModuleCache {
+ return r.moduleCacheCache.CachePackageName(info)
+ }
+ return r.otherCache.CachePackageName(info)
+}
+
+func (r *ModuleResolver) cacheExports(ctx context.Context, env *ProcessEnv, info directoryPackageInfo) (string, []string, error) {
+ if info.rootType == gopathwalk.RootModuleCache {
+ return r.moduleCacheCache.CacheExports(ctx, env, info)
+ }
+ return r.otherCache.CacheExports(ctx, env, info)
+}
+
+// findModuleByDir returns the module that contains dir, or nil if no such
+// module is in scope.
+func (r *ModuleResolver) findModuleByDir(dir string) *ModuleJSON {
+ // This is quite tricky and may not be correct. dir could be:
+ // - a package in the main module.
+ // - a replace target underneath the main module's directory.
+ // - a nested module in the above.
+ // - a replace target somewhere totally random.
+ // - a nested module in the above.
+ // - in the mod cache.
+ // - in /vendor/ in -mod=vendor mode.
+ // - nested module? Dunno.
+ // Rumor has it that replace targets cannot contain other replace targets.
+ for _, m := range r.ModsByDir {
+ if !strings.HasPrefix(dir, m.Dir) {
+ continue
+ }
+
+ if r.dirIsNestedModule(dir, m) {
+ continue
+ }
+
+ return m
+ }
+ return nil
+}
+
+// dirIsNestedModule reports if dir is contained in a nested module underneath
+// mod, not actually in mod.
+func (r *ModuleResolver) dirIsNestedModule(dir string, mod *ModuleJSON) bool {
+ if !strings.HasPrefix(dir, mod.Dir) {
+ return false
+ }
+ if r.dirInModuleCache(dir) {
+ // Nested modules in the module cache are pruned,
+ // so it cannot be a nested module.
+ return false
+ }
+ if mod != nil && mod == r.dummyVendorMod {
+ // The /vendor pseudomodule is flattened and doesn't actually count.
+ return false
+ }
+ modDir, _ := r.modInfo(dir)
+ if modDir == "" {
+ return false
+ }
+ return modDir != mod.Dir
+}
+
+func (r *ModuleResolver) modInfo(dir string) (modDir string, modName string) {
+ readModName := func(modFile string) string {
+ modBytes, err := ioutil.ReadFile(modFile)
+ if err != nil {
+ return ""
+ }
+ return modulePath(modBytes)
+ }
+
+ if r.dirInModuleCache(dir) {
+ matches := modCacheRegexp.FindStringSubmatch(dir)
+ index := strings.Index(dir, matches[1]+"@"+matches[2])
+ modDir := filepath.Join(dir[:index], matches[1]+"@"+matches[2])
+ return modDir, readModName(filepath.Join(modDir, "go.mod"))
+ }
+ for {
+ if info, ok := r.cacheLoad(dir); ok {
+ return info.moduleDir, info.moduleName
+ }
+ f := filepath.Join(dir, "go.mod")
+ info, err := os.Stat(f)
+ if err == nil && !info.IsDir() {
+ return dir, readModName(f)
+ }
+
+ d := filepath.Dir(dir)
+ if len(d) >= len(dir) {
+ return "", "" // reached top of file system, no go.mod
+ }
+ dir = d
+ }
+}
+
+func (r *ModuleResolver) dirInModuleCache(dir string) bool {
+ if r.moduleCacheDir == "" {
+ return false
+ }
+ return strings.HasPrefix(dir, r.moduleCacheDir)
+}
+
+func (r *ModuleResolver) loadPackageNames(importPaths []string, srcDir string) (map[string]string, error) {
+ if err := r.init(); err != nil {
+ return nil, err
+ }
+ names := map[string]string{}
+ for _, path := range importPaths {
+ _, packageDir := r.findPackage(path)
+ if packageDir == "" {
+ continue
+ }
+ name, err := packageDirToName(packageDir)
+ if err != nil {
+ continue
+ }
+ names[path] = name
+ }
+ return names, nil
+}
+
+func (r *ModuleResolver) scan(_ references, loadNames bool, exclude []gopathwalk.RootType) ([]*pkg, error) {
+ if err := r.init(); err != nil {
+ return nil, err
+ }
+
+ // Walk GOROOT, GOPATH/pkg/mod, and the main module.
+ roots := []gopathwalk.Root{
+ {filepath.Join(r.env.GOROOT, "/src"), gopathwalk.RootGOROOT},
+ }
+ if r.Main != nil {
+ roots = append(roots, gopathwalk.Root{r.Main.Dir, gopathwalk.RootCurrentModule})
+ }
+ if r.dummyVendorMod != nil {
+ roots = append(roots, gopathwalk.Root{r.dummyVendorMod.Dir, gopathwalk.RootOther})
+ } else {
+ roots = append(roots, gopathwalk.Root{r.moduleCacheDir, gopathwalk.RootModuleCache})
+ // Walk replace targets, just in case they're not in any of the above.
+ for _, mod := range r.ModsByModPath {
+ if mod.Replace != nil {
+ roots = append(roots, gopathwalk.Root{mod.Dir, gopathwalk.RootOther})
+ }
+ }
+ }
+
+ roots = filterRoots(roots, exclude)
+
+ var result []*pkg
+ var mu sync.Mutex
+
+ // We assume cached directories have not changed. We can skip them and their
+ // children.
+ skip := func(root gopathwalk.Root, dir string) bool {
+ mu.Lock()
+ defer mu.Unlock()
+
+ info, ok := r.cacheLoad(dir)
+ if !ok {
+ return false
+ }
+ // This directory can be skipped as long as we have already scanned it.
+ // Packages with errors will continue to have errors, so there is no need
+ // to rescan them.
+ packageScanned, _ := info.reachedStatus(directoryScanned)
+ return packageScanned
+ }
+
+ // Add anything new to the cache. We'll process everything in it below.
+ add := func(root gopathwalk.Root, dir string) {
+ mu.Lock()
+ defer mu.Unlock()
+
+ r.cacheStore(r.scanDirForPackage(root, dir))
+ }
+
+ gopathwalk.WalkSkip(roots, add, skip, gopathwalk.Options{Debug: r.env.Debug, ModulesEnabled: true})
+
+ // Everything we already had, and everything new, is now in the cache.
+ for _, dir := range r.cacheKeys() {
+ info, ok := r.cacheLoad(dir)
+ if !ok {
+ continue
+ }
+
+ // Skip this directory if we were not able to get the package information successfully.
+ if scanned, err := info.reachedStatus(directoryScanned); !scanned || err != nil {
+ continue
+ }
+
+ // If we want package names, make sure the cache has them.
+ if loadNames {
+ var err error
+ if info, err = r.cachePackageName(info); err != nil {
+ continue
+ }
+ }
+
+ res, err := r.canonicalize(info)
+ if err != nil {
+ continue
+ }
+ result = append(result, res)
+ }
+
+ return result, nil
+}
+
+// canonicalize gets the result of canonicalizing the packages using the results
+// of initializing the resolver from 'go list -m'.
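+// The relevance values assigned below are 0 for GOROOT packages, 1 for
+// packages in a module that is in scope, and 2 for everything else.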
+func (r *ModuleResolver) canonicalize(info directoryPackageInfo) (*pkg, error) {
+ // Packages in GOROOT are already canonical, regardless of the std/cmd modules.
+ if info.rootType == gopathwalk.RootGOROOT {
+ return &pkg{
+ importPathShort: info.nonCanonicalImportPath,
+ dir: info.dir,
+ packageName: path.Base(info.nonCanonicalImportPath),
+ relevance: 0,
+ }, nil
+ }
+
+ importPath := info.nonCanonicalImportPath
+ relevance := 2
+ // Check if the directory is underneath a module that's in scope.
+ if mod := r.findModuleByDir(info.dir); mod != nil {
+ relevance = 1
+ // It is. If dir is the target of a replace directive,
+ // our guessed import path is wrong. Use the real one.
+ if mod.Dir == info.dir {
+ importPath = mod.Path
+ } else {
+ dirInMod := info.dir[len(mod.Dir)+len("/"):]
+ importPath = path.Join(mod.Path, filepath.ToSlash(dirInMod))
+ }
+ } else if info.needsReplace {
+ return nil, fmt.Errorf("package in %q is not valid without a replace statement", info.dir)
+ }
+
+ res := &pkg{
+ importPathShort: importPath,
+ dir: info.dir,
+ packageName: info.packageName, // may not be populated if the caller didn't ask for it
+ relevance: relevance,
+ }
+ // We may have discovered a package that has a different version
+ // in scope already. Canonicalize to that one if possible.
+ if _, canonicalDir := r.findPackage(importPath); canonicalDir != "" {
+ res.dir = canonicalDir
+ }
+ return res, nil
+}
+
+func (r *ModuleResolver) loadExports(ctx context.Context, pkg *pkg) (string, []string, error) {
+ if err := r.init(); err != nil {
+ return "", nil, err
+ }
+ if info, ok := r.cacheLoad(pkg.dir); ok {
+ return r.cacheExports(ctx, r.env, info)
+ }
+ return loadExportsFromFiles(ctx, r.env, pkg.dir)
+}
+
+func (r *ModuleResolver) scanDirForPackage(root gopathwalk.Root, dir string) directoryPackageInfo {
+ subdir := ""
+ if dir != root.Path {
+ subdir = dir[len(root.Path)+len("/"):]
+ }
+ importPath := filepath.ToSlash(subdir)
+ if strings.HasPrefix(importPath, "vendor/") {
+ // Only enter vendor directories if they're explicitly requested as a root.
+ return directoryPackageInfo{
+ status: directoryScanned,
+ err: fmt.Errorf("unwanted vendor directory"),
+ }
+ }
+ switch root.Type {
+ case gopathwalk.RootCurrentModule:
+ importPath = path.Join(r.Main.Path, filepath.ToSlash(subdir))
+ case gopathwalk.RootModuleCache:
+ matches := modCacheRegexp.FindStringSubmatch(subdir)
+ if len(matches) == 0 {
+ return directoryPackageInfo{
+ status: directoryScanned,
+ err: fmt.Errorf("invalid module cache path: %v", subdir),
+ }
+ }
+ modPath, err := module.DecodePath(filepath.ToSlash(matches[1]))
+ if err != nil {
+ if r.env.Debug {
+ r.env.Logf("decoding module cache path %q: %v", subdir, err)
+ }
+ return directoryPackageInfo{
+ status: directoryScanned,
+ err: fmt.Errorf("decoding module cache path %q: %v", subdir, err),
+ }
+ }
+ importPath = path.Join(modPath, filepath.ToSlash(matches[3]))
+ }
+
+ modDir, modName := r.modInfo(dir)
+ result := directoryPackageInfo{
+ status: directoryScanned,
+ dir: dir,
+ rootType: root.Type,
+ nonCanonicalImportPath: importPath,
+ needsReplace: false,
+ moduleDir: modDir,
+ moduleName: modName,
+ }
+ if root.Type == gopathwalk.RootGOROOT {
+ // stdlib packages are always in scope, despite the confusing go.mod
+ return result
+ }
+ // Check that this package is not obviously impossible to import.
+ if !strings.HasPrefix(importPath, modName) {
+ // The module's declared path does not match
+ // its expected path. It probably needs a
+ // replace directive we don't have.
+ result.needsReplace = true
+ }
+
+ return result
+}
+
+// modCacheRegexp splits a path in a module cache into module, module version, and package.
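+// For example, "github.com/u/m@v1.2.3/sub" splits into module
+// "github.com/u/m", version "v1.2.3", and package "/sub".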
+var modCacheRegexp = regexp.MustCompile(`(.*)@([^/\\]*)(.*)`)
+
+var (
+ slashSlash = []byte("//")
+ moduleStr = []byte("module")
+)
+
+// modulePath returns the module path from the gomod file text.
+// If it cannot find a module path, it returns an empty string.
+// It is tolerant of unrelated problems in the go.mod file.
+//
+// Copied from cmd/go/internal/modfile.
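+//
+// For example, both `module example.com/m` and `module "example.com/m"`
+// yield "example.com/m".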
+func modulePath(mod []byte) string {
+ for len(mod) > 0 {
+ line := mod
+ mod = nil
+ if i := bytes.IndexByte(line, '\n'); i >= 0 {
+ line, mod = line[:i], line[i+1:]
+ }
+ if i := bytes.Index(line, slashSlash); i >= 0 {
+ line = line[:i]
+ }
+ line = bytes.TrimSpace(line)
+ if !bytes.HasPrefix(line, moduleStr) {
+ continue
+ }
+ line = line[len(moduleStr):]
+ n := len(line)
+ line = bytes.TrimSpace(line)
+ if len(line) == n || len(line) == 0 {
+ continue
+ }
+
+ if line[0] == '"' || line[0] == '`' {
+ p, err := strconv.Unquote(string(line))
+ if err != nil {
+ return "" // malformed quoted string or multiline module path
+ }
+ return p
+ }
+
+ return string(line)
+ }
+ return "" // missing module path
+}
+
+var modFlagRegexp = regexp.MustCompile(`-mod[ =](\w+)`)
+
+// vendorEnabled reports whether vendoring is enabled, and returns the main module.
+// Inspired by setDefaultBuildMod in modload/init.go
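+// Precedence below: an explicit -mod flag in GOFLAGS wins; otherwise, with
+// a go command of 1.14 or newer, a vendor directory in the main module
+// enables vendoring automatically.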
+func vendorEnabled(env *ProcessEnv) (*ModuleJSON, bool, error) {
+ mainMod, go114, err := getMainModuleAnd114(env)
+ if err != nil {
+ return nil, false, err
+ }
+ matches := modFlagRegexp.FindStringSubmatch(env.GOFLAGS)
+ var modFlag string
+ if len(matches) != 0 {
+ modFlag = matches[1]
+ }
+ if modFlag != "" {
+ // Don't override an explicit '-mod=' argument.
+ return mainMod, modFlag == "vendor", nil
+ }
+ if mainMod == nil || !go114 {
+ return mainMod, false, nil
+ }
+ // Check 1.14's automatic vendor mode.
+ if fi, err := os.Stat(filepath.Join(mainMod.Dir, "vendor")); err == nil && fi.IsDir() {
+ if mainMod.GoVersion != "" && semver.Compare("v"+mainMod.GoVersion, "v1.14") >= 0 {
+ // The Go version is at least 1.14, and a vendor directory exists.
+ // Set -mod=vendor by default.
+ return mainMod, true, nil
+ }
+ }
+ return mainMod, false, nil
+}
+
+// getMainModuleAnd114 gets the main module's information and whether the
+// go command in use is 1.14+. This is the information needed to figure out
+// if vendoring should be enabled.
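+// The final line of the template below prints "go1.14" only when the go
+// command's release tags include it, which is what the check on lines[4]
+// relies on.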
+func getMainModuleAnd114(env *ProcessEnv) (*ModuleJSON, bool, error) {
+ const format = `{{.Path}}
+{{.Dir}}
+{{.GoMod}}
+{{.GoVersion}}
+{{range context.ReleaseTags}}{{if eq . "go1.14"}}{{.}}{{end}}{{end}}
+`
+ stdout, err := env.invokeGo("list", "-m", "-f", format)
+ if err != nil {
+ return nil, false, nil
+ }
+ lines := strings.Split(stdout.String(), "\n")
+ if len(lines) < 5 {
+ return nil, false, fmt.Errorf("unexpected stdout: %q", stdout)
+ }
+ mod := &ModuleJSON{
+ Path: lines[0],
+ Dir: lines[1],
+ GoMod: lines[2],
+ GoVersion: lines[3],
+ Main: true,
+ }
+ return mod, lines[4] == "go1.14", nil
+}
diff --git a/vendor/golang.org/x/tools/internal/imports/mod_cache.go b/vendor/golang.org/x/tools/internal/imports/mod_cache.go
new file mode 100644
index 0000000000000..f6b070a3f6eda
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/imports/mod_cache.go
@@ -0,0 +1,165 @@
+package imports
+
+import (
+ "context"
+ "fmt"
+ "sync"
+
+ "golang.org/x/tools/internal/gopathwalk"
+)
+
+// To find packages to import, the resolver needs to know about all of
+// the packages that could be imported. This includes packages in
+// (1) the current module, (2) replace targets, and (3) the module cache.
+// Packages in (1) and (2) may change over
+// time, as the client may edit the current module and locally replaced modules.
+// The module cache (which includes all of the packages in (3)) can only
+// ever be added to.
+//
+// The resolver can thus save state about packages in the module cache
+// and guarantee that this will not change over time. To obtain information
+// about new modules added to the module cache, the module cache should be
+// rescanned.
+//
+// It is OK to serve information about modules that have been deleted,
+// as they do still exist.
+// TODO(suzmue): can we share information with the caller about
+// what module needs to be downloaded to import this package?
+
+type directoryPackageStatus int
+
+const (
+ _ directoryPackageStatus = iota
+ directoryScanned
+ nameLoaded
+ exportsLoaded
+)
+
+type directoryPackageInfo struct {
+ // status indicates the extent to which this struct has been filled in.
+ status directoryPackageStatus
+ // err is non-nil when there was an error trying to reach status.
+ err error
+
+ // Set when status >= directoryScanned.
+
+ // dir is the absolute directory of this package.
+ dir string
+ rootType gopathwalk.RootType
+ // nonCanonicalImportPath is the package's expected import path. It may
+ // not actually be importable at that path.
+ nonCanonicalImportPath string
+ // needsReplace is true if the nonCanonicalImportPath does not match the
+ // module's declared path, making it impossible to import without a
+ // replace directive.
+ needsReplace bool
+
+ // Module-related information.
+ moduleDir string // The directory that is the module root of this dir.
+ moduleName string // The module name that contains this dir.
+
+ // Set when status >= nameLoaded.
+
+ packageName string // the package name, as declared in the source.
+
+ // Set when status >= exportsLoaded.
+
+ exports []string
+}
+
+// reachedStatus reports whether info has a status of at least target, along
+// with any error associated with an attempt to reach target.
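+//
+// For example, a directory scanned without error reports (false, nil) for
+// target nameLoaded; one whose name load failed reports (true, err) for
+// that same target.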
+func (info *directoryPackageInfo) reachedStatus(target directoryPackageStatus) (bool, error) {
+ if info.err == nil {
+ return info.status >= target, nil
+ }
+ if info.status == target {
+ return true, info.err
+ }
+ return true, nil
+}
+
+// dirInfoCache is a concurrency-safe map for storing information about
+// directories that may contain packages.
+//
+// The information in this cache is built incrementally. Entries are initialized in scan.
+// No new keys should be added in any other functions, as all directories containing
+// packages are identified in scan.
+//
+// Other functions, including loadExports and findPackage, may update entries in this cache
+// as they discover new things about the directory.
+//
+// The information in the cache is not expected to change for the cache's
+// lifetime, so there is no protection against competing writes. Users should
+// take care not to hold the cache across changes to the underlying files.
+//
+// TODO(suzmue): consider other concurrency strategies and data structures (RWLocks, sync.Map, etc)
+type dirInfoCache struct {
+ mu sync.Mutex
+ // dirs stores information about packages in directories, keyed by absolute path.
+ dirs map[string]*directoryPackageInfo
+}
+
+// Store stores the package info for dir.
+func (d *dirInfoCache) Store(dir string, info directoryPackageInfo) {
+ d.mu.Lock()
+ defer d.mu.Unlock()
+ stored := info // defensive copy
+ d.dirs[dir] = &stored
+}
+
+// Load returns a copy of the directoryPackageInfo for absolute directory dir.
+func (d *dirInfoCache) Load(dir string) (directoryPackageInfo, bool) {
+ d.mu.Lock()
+ defer d.mu.Unlock()
+ info, ok := d.dirs[dir]
+ if !ok {
+ return directoryPackageInfo{}, false
+ }
+ return *info, true
+}
+
+// Keys returns the keys currently present in d.
+func (d *dirInfoCache) Keys() (keys []string) {
+ d.mu.Lock()
+ defer d.mu.Unlock()
+ for key := range d.dirs {
+ keys = append(keys, key)
+ }
+ return keys
+}
+
+func (d *dirInfoCache) CachePackageName(info directoryPackageInfo) (directoryPackageInfo, error) {
+ if loaded, err := info.reachedStatus(nameLoaded); loaded {
+ return info, err
+ }
+ if scanned, err := info.reachedStatus(directoryScanned); !scanned || err != nil {
+ return info, fmt.Errorf("cannot read package name, scan error: %v", err)
+ }
+ info.packageName, info.err = packageDirToName(info.dir)
+ info.status = nameLoaded
+ d.Store(info.dir, info)
+ return info, info.err
+}
+
+func (d *dirInfoCache) CacheExports(ctx context.Context, env *ProcessEnv, info directoryPackageInfo) (string, []string, error) {
+ if reached, _ := info.reachedStatus(exportsLoaded); reached {
+ return info.packageName, info.exports, info.err
+ }
+ if reached, err := info.reachedStatus(nameLoaded); reached && err != nil {
+ return "", nil, err
+ }
+ info.packageName, info.exports, info.err = loadExportsFromFiles(ctx, env, info.dir)
+ if info.err == context.Canceled {
+ return info.packageName, info.exports, info.err
+ }
+ // The cache structure wants things to proceed linearly. We can skip a
+ // step here, but only if we succeed.
+ if info.status == nameLoaded || info.err == nil {
+ info.status = exportsLoaded
+ } else {
+ info.status = nameLoaded
+ }
+ d.Store(info.dir, info)
+ return info.packageName, info.exports, info.err
+}
diff --git a/vendor/golang.org/x/tools/internal/imports/sortimports.go b/vendor/golang.org/x/tools/internal/imports/sortimports.go
new file mode 100644
index 0000000000000..226279471d39a
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/imports/sortimports.go
@@ -0,0 +1,280 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Hacked up copy of go/ast/import.go
+
+package imports
+
+import (
+ "go/ast"
+ "go/token"
+ "sort"
+ "strconv"
+)
+
+// sortImports sorts runs of consecutive import lines in import blocks in f.
+// It also removes duplicate imports when it is possible to do so without data loss.
+func sortImports(env *ProcessEnv, fset *token.FileSet, f *ast.File) {
+ for i, d := range f.Decls {
+ d, ok := d.(*ast.GenDecl)
+ if !ok || d.Tok != token.IMPORT {
+ // Not an import declaration, so we're done.
+ // Imports are always first.
+ break
+ }
+
+ if len(d.Specs) == 0 {
+ // Empty import block, remove it.
+ f.Decls = append(f.Decls[:i], f.Decls[i+1:]...)
+ }
+
+ if !d.Lparen.IsValid() {
+ // Not a block: sorted by default.
+ continue
+ }
+
+ // Identify and sort runs of specs on successive lines.
+ i := 0
+ specs := d.Specs[:0]
+ for j, s := range d.Specs {
+ if j > i && fset.Position(s.Pos()).Line > 1+fset.Position(d.Specs[j-1].End()).Line {
+ // j begins a new run. End this one.
+ specs = append(specs, sortSpecs(env, fset, f, d.Specs[i:j])...)
+ i = j
+ }
+ }
+ specs = append(specs, sortSpecs(env, fset, f, d.Specs[i:])...)
+ d.Specs = specs
+
+ // Deduping can leave a blank line before the rparen; clean that up.
+ if len(d.Specs) > 0 {
+ lastSpec := d.Specs[len(d.Specs)-1]
+ lastLine := fset.Position(lastSpec.Pos()).Line
+ if rParenLine := fset.Position(d.Rparen).Line; rParenLine > lastLine+1 {
+ fset.File(d.Rparen).MergeLine(rParenLine - 1)
+ }
+ }
+ }
+}
+
+// mergeImports merges all the import declarations into the first one.
+// Taken from golang.org/x/tools/ast/astutil.
+func mergeImports(env *ProcessEnv, fset *token.FileSet, f *ast.File) {
+ if len(f.Decls) <= 1 {
+ return
+ }
+
+ // Merge all the import declarations into the first one.
+ var first *ast.GenDecl
+ for i := 0; i < len(f.Decls); i++ {
+ decl := f.Decls[i]
+ gen, ok := decl.(*ast.GenDecl)
+ if !ok || gen.Tok != token.IMPORT || declImports(gen, "C") {
+ continue
+ }
+ if first == nil {
+ first = gen
+ continue // Don't touch the first one.
+ }
+ // We now know there is more than one package in this import
+ // declaration. Ensure that it ends up parenthesized.
+ first.Lparen = first.Pos()
+ // Move the imports of the other import declaration to the first one.
+ for _, spec := range gen.Specs {
+ spec.(*ast.ImportSpec).Path.ValuePos = first.Pos()
+ first.Specs = append(first.Specs, spec)
+ }
+ f.Decls = append(f.Decls[:i], f.Decls[i+1:]...)
+ i--
+ }
+}
+
+// declImports reports whether gen contains an import of path.
+// Taken from golang.org/x/tools/ast/astutil.
+func declImports(gen *ast.GenDecl, path string) bool {
+ if gen.Tok != token.IMPORT {
+ return false
+ }
+ for _, spec := range gen.Specs {
+ impspec := spec.(*ast.ImportSpec)
+ if importPath(impspec) == path {
+ return true
+ }
+ }
+ return false
+}
+
+func importPath(s ast.Spec) string {
+ t, err := strconv.Unquote(s.(*ast.ImportSpec).Path.Value)
+ if err == nil {
+ return t
+ }
+ return ""
+}
+
+func importName(s ast.Spec) string {
+ n := s.(*ast.ImportSpec).Name
+ if n == nil {
+ return ""
+ }
+ return n.Name
+}
+
+func importComment(s ast.Spec) string {
+ c := s.(*ast.ImportSpec).Comment
+ if c == nil {
+ return ""
+ }
+ return c.Text()
+}
+
+// collapse indicates whether prev may be removed, leaving only next.
+func collapse(prev, next ast.Spec) bool {
+ if importPath(next) != importPath(prev) || importName(next) != importName(prev) {
+ return false
+ }
+ return prev.(*ast.ImportSpec).Comment == nil
+}
+
+type posSpan struct {
+ Start token.Pos
+ End token.Pos
+}
+
+func sortSpecs(env *ProcessEnv, fset *token.FileSet, f *ast.File, specs []ast.Spec) []ast.Spec {
+ // Can't short-circuit here even if specs are already sorted,
+ // since they might yet need deduplication.
+ // A lone import, however, may be safely ignored.
+ if len(specs) <= 1 {
+ return specs
+ }
+
+ // Record positions for specs.
+ pos := make([]posSpan, len(specs))
+ for i, s := range specs {
+ pos[i] = posSpan{s.Pos(), s.End()}
+ }
+
+ // Identify comments in this range.
+ // Any comment from pos[0].Start to the final line counts.
+ lastLine := fset.Position(pos[len(pos)-1].End).Line
+ cstart := len(f.Comments)
+ cend := len(f.Comments)
+ for i, g := range f.Comments {
+ if g.Pos() < pos[0].Start {
+ continue
+ }
+ if i < cstart {
+ cstart = i
+ }
+ if fset.Position(g.End()).Line > lastLine {
+ cend = i
+ break
+ }
+ }
+ comments := f.Comments[cstart:cend]
+
+ // Assign each comment to the import spec preceding it.
+ importComment := map[*ast.ImportSpec][]*ast.CommentGroup{}
+ specIndex := 0
+ for _, g := range comments {
+ for specIndex+1 < len(specs) && pos[specIndex+1].Start <= g.Pos() {
+ specIndex++
+ }
+ s := specs[specIndex].(*ast.ImportSpec)
+ importComment[s] = append(importComment[s], g)
+ }
+
+ // Sort the import specs by import path.
+ // Remove duplicates, when possible without data loss.
+ // Reassign the import paths to have the same position sequence.
+ // Reassign each comment to abut the end of its spec.
+ // Sort the comments by new position.
+ sort.Sort(byImportSpec{env, specs})
+
+ // Dedup. Thanks to our sorting, we can just consider
+ // adjacent pairs of imports.
+ deduped := specs[:0]
+ for i, s := range specs {
+ if i == len(specs)-1 || !collapse(s, specs[i+1]) {
+ deduped = append(deduped, s)
+ } else {
+ p := s.Pos()
+ fset.File(p).MergeLine(fset.Position(p).Line)
+ }
+ }
+ specs = deduped
+
+ // Fix up comment positions
+ for i, s := range specs {
+ s := s.(*ast.ImportSpec)
+ if s.Name != nil {
+ s.Name.NamePos = pos[i].Start
+ }
+ s.Path.ValuePos = pos[i].Start
+ s.EndPos = pos[i].End
+ nextSpecPos := pos[i].End
+
+ for _, g := range importComment[s] {
+ for _, c := range g.List {
+ c.Slash = pos[i].End
+ nextSpecPos = c.End()
+ }
+ }
+ if i < len(specs)-1 {
+ pos[i+1].Start = nextSpecPos
+ pos[i+1].End = nextSpecPos
+ }
+ }
+
+ sort.Sort(byCommentPos(comments))
+
+	// Fixing up comments can insert blank lines, because import specs are on different lines.
+	// We remove those blank lines here by merging the lines of later import specs up to the first import spec's line.
+ firstSpecLine := fset.Position(specs[0].Pos()).Line
+ for _, s := range specs[1:] {
+ p := s.Pos()
+ line := fset.File(p).Line(p)
+ for previousLine := line - 1; previousLine >= firstSpecLine; {
+ fset.File(p).MergeLine(previousLine)
+ previousLine--
+ }
+ }
+ return specs
+}
+
+type byImportSpec struct {
+ env *ProcessEnv
+ specs []ast.Spec // slice of *ast.ImportSpec
+}
+
+func (x byImportSpec) Len() int { return len(x.specs) }
+func (x byImportSpec) Swap(i, j int) { x.specs[i], x.specs[j] = x.specs[j], x.specs[i] }
+func (x byImportSpec) Less(i, j int) bool {
+ ipath := importPath(x.specs[i])
+ jpath := importPath(x.specs[j])
+
+ igroup := importGroup(x.env, ipath)
+ jgroup := importGroup(x.env, jpath)
+ if igroup != jgroup {
+ return igroup < jgroup
+ }
+
+ if ipath != jpath {
+ return ipath < jpath
+ }
+ iname := importName(x.specs[i])
+ jname := importName(x.specs[j])
+
+ if iname != jname {
+ return iname < jname
+ }
+ return importComment(x.specs[i]) < importComment(x.specs[j])
+}
+
+type byCommentPos []*ast.CommentGroup
+
+func (x byCommentPos) Len() int { return len(x) }
+func (x byCommentPos) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
+func (x byCommentPos) Less(i, j int) bool { return x[i].Pos() < x[j].Pos() }
diff --git a/vendor/golang.org/x/tools/internal/imports/zstdlib.go b/vendor/golang.org/x/tools/internal/imports/zstdlib.go
new file mode 100644
index 0000000000000..7e60eb04e525d
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/imports/zstdlib.go
@@ -0,0 +1,10377 @@
+// Code generated by mkstdlib.go. DO NOT EDIT.
+
+package imports
+
+var stdlib = map[string][]string{
+ "archive/tar": []string{
+ "ErrFieldTooLong",
+ "ErrHeader",
+ "ErrWriteAfterClose",
+ "ErrWriteTooLong",
+ "FileInfoHeader",
+ "Format",
+ "FormatGNU",
+ "FormatPAX",
+ "FormatUSTAR",
+ "FormatUnknown",
+ "Header",
+ "NewReader",
+ "NewWriter",
+ "Reader",
+ "TypeBlock",
+ "TypeChar",
+ "TypeCont",
+ "TypeDir",
+ "TypeFifo",
+ "TypeGNULongLink",
+ "TypeGNULongName",
+ "TypeGNUSparse",
+ "TypeLink",
+ "TypeReg",
+ "TypeRegA",
+ "TypeSymlink",
+ "TypeXGlobalHeader",
+ "TypeXHeader",
+ "Writer",
+ },
+ "archive/zip": []string{
+ "Compressor",
+ "Decompressor",
+ "Deflate",
+ "ErrAlgorithm",
+ "ErrChecksum",
+ "ErrFormat",
+ "File",
+ "FileHeader",
+ "FileInfoHeader",
+ "NewReader",
+ "NewWriter",
+ "OpenReader",
+ "ReadCloser",
+ "Reader",
+ "RegisterCompressor",
+ "RegisterDecompressor",
+ "Store",
+ "Writer",
+ },
+ "bufio": []string{
+ "ErrAdvanceTooFar",
+ "ErrBufferFull",
+ "ErrFinalToken",
+ "ErrInvalidUnreadByte",
+ "ErrInvalidUnreadRune",
+ "ErrNegativeAdvance",
+ "ErrNegativeCount",
+ "ErrTooLong",
+ "MaxScanTokenSize",
+ "NewReadWriter",
+ "NewReader",
+ "NewReaderSize",
+ "NewScanner",
+ "NewWriter",
+ "NewWriterSize",
+ "ReadWriter",
+ "Reader",
+ "ScanBytes",
+ "ScanLines",
+ "ScanRunes",
+ "ScanWords",
+ "Scanner",
+ "SplitFunc",
+ "Writer",
+ },
+ "bytes": []string{
+ "Buffer",
+ "Compare",
+ "Contains",
+ "ContainsAny",
+ "ContainsRune",
+ "Count",
+ "Equal",
+ "EqualFold",
+ "ErrTooLarge",
+ "Fields",
+ "FieldsFunc",
+ "HasPrefix",
+ "HasSuffix",
+ "Index",
+ "IndexAny",
+ "IndexByte",
+ "IndexFunc",
+ "IndexRune",
+ "Join",
+ "LastIndex",
+ "LastIndexAny",
+ "LastIndexByte",
+ "LastIndexFunc",
+ "Map",
+ "MinRead",
+ "NewBuffer",
+ "NewBufferString",
+ "NewReader",
+ "Reader",
+ "Repeat",
+ "Replace",
+ "ReplaceAll",
+ "Runes",
+ "Split",
+ "SplitAfter",
+ "SplitAfterN",
+ "SplitN",
+ "Title",
+ "ToLower",
+ "ToLowerSpecial",
+ "ToTitle",
+ "ToTitleSpecial",
+ "ToUpper",
+ "ToUpperSpecial",
+ "ToValidUTF8",
+ "Trim",
+ "TrimFunc",
+ "TrimLeft",
+ "TrimLeftFunc",
+ "TrimPrefix",
+ "TrimRight",
+ "TrimRightFunc",
+ "TrimSpace",
+ "TrimSuffix",
+ },
+ "compress/bzip2": []string{
+ "NewReader",
+ "StructuralError",
+ },
+ "compress/flate": []string{
+ "BestCompression",
+ "BestSpeed",
+ "CorruptInputError",
+ "DefaultCompression",
+ "HuffmanOnly",
+ "InternalError",
+ "NewReader",
+ "NewReaderDict",
+ "NewWriter",
+ "NewWriterDict",
+ "NoCompression",
+ "ReadError",
+ "Reader",
+ "Resetter",
+ "WriteError",
+ "Writer",
+ },
+ "compress/gzip": []string{
+ "BestCompression",
+ "BestSpeed",
+ "DefaultCompression",
+ "ErrChecksum",
+ "ErrHeader",
+ "Header",
+ "HuffmanOnly",
+ "NewReader",
+ "NewWriter",
+ "NewWriterLevel",
+ "NoCompression",
+ "Reader",
+ "Writer",
+ },
+ "compress/lzw": []string{
+ "LSB",
+ "MSB",
+ "NewReader",
+ "NewWriter",
+ "Order",
+ },
+ "compress/zlib": []string{
+ "BestCompression",
+ "BestSpeed",
+ "DefaultCompression",
+ "ErrChecksum",
+ "ErrDictionary",
+ "ErrHeader",
+ "HuffmanOnly",
+ "NewReader",
+ "NewReaderDict",
+ "NewWriter",
+ "NewWriterLevel",
+ "NewWriterLevelDict",
+ "NoCompression",
+ "Resetter",
+ "Writer",
+ },
+ "container/heap": []string{
+ "Fix",
+ "Init",
+ "Interface",
+ "Pop",
+ "Push",
+ "Remove",
+ },
+ "container/list": []string{
+ "Element",
+ "List",
+ "New",
+ },
+ "container/ring": []string{
+ "New",
+ "Ring",
+ },
+ "context": []string{
+ "Background",
+ "CancelFunc",
+ "Canceled",
+ "Context",
+ "DeadlineExceeded",
+ "TODO",
+ "WithCancel",
+ "WithDeadline",
+ "WithTimeout",
+ "WithValue",
+ },
+ "crypto": []string{
+ "BLAKE2b_256",
+ "BLAKE2b_384",
+ "BLAKE2b_512",
+ "BLAKE2s_256",
+ "Decrypter",
+ "DecrypterOpts",
+ "Hash",
+ "MD4",
+ "MD5",
+ "MD5SHA1",
+ "PrivateKey",
+ "PublicKey",
+ "RIPEMD160",
+ "RegisterHash",
+ "SHA1",
+ "SHA224",
+ "SHA256",
+ "SHA384",
+ "SHA3_224",
+ "SHA3_256",
+ "SHA3_384",
+ "SHA3_512",
+ "SHA512",
+ "SHA512_224",
+ "SHA512_256",
+ "Signer",
+ "SignerOpts",
+ },
+ "crypto/aes": []string{
+ "BlockSize",
+ "KeySizeError",
+ "NewCipher",
+ },
+ "crypto/cipher": []string{
+ "AEAD",
+ "Block",
+ "BlockMode",
+ "NewCBCDecrypter",
+ "NewCBCEncrypter",
+ "NewCFBDecrypter",
+ "NewCFBEncrypter",
+ "NewCTR",
+ "NewGCM",
+ "NewGCMWithNonceSize",
+ "NewGCMWithTagSize",
+ "NewOFB",
+ "Stream",
+ "StreamReader",
+ "StreamWriter",
+ },
+ "crypto/des": []string{
+ "BlockSize",
+ "KeySizeError",
+ "NewCipher",
+ "NewTripleDESCipher",
+ },
+ "crypto/dsa": []string{
+ "ErrInvalidPublicKey",
+ "GenerateKey",
+ "GenerateParameters",
+ "L1024N160",
+ "L2048N224",
+ "L2048N256",
+ "L3072N256",
+ "ParameterSizes",
+ "Parameters",
+ "PrivateKey",
+ "PublicKey",
+ "Sign",
+ "Verify",
+ },
+ "crypto/ecdsa": []string{
+ "GenerateKey",
+ "PrivateKey",
+ "PublicKey",
+ "Sign",
+ "Verify",
+ },
+ "crypto/ed25519": []string{
+ "GenerateKey",
+ "NewKeyFromSeed",
+ "PrivateKey",
+ "PrivateKeySize",
+ "PublicKey",
+ "PublicKeySize",
+ "SeedSize",
+ "Sign",
+ "SignatureSize",
+ "Verify",
+ },
+ "crypto/elliptic": []string{
+ "Curve",
+ "CurveParams",
+ "GenerateKey",
+ "Marshal",
+ "P224",
+ "P256",
+ "P384",
+ "P521",
+ "Unmarshal",
+ },
+ "crypto/hmac": []string{
+ "Equal",
+ "New",
+ },
+ "crypto/md5": []string{
+ "BlockSize",
+ "New",
+ "Size",
+ "Sum",
+ },
+ "crypto/rand": []string{
+ "Int",
+ "Prime",
+ "Read",
+ "Reader",
+ },
+ "crypto/rc4": []string{
+ "Cipher",
+ "KeySizeError",
+ "NewCipher",
+ },
+ "crypto/rsa": []string{
+ "CRTValue",
+ "DecryptOAEP",
+ "DecryptPKCS1v15",
+ "DecryptPKCS1v15SessionKey",
+ "EncryptOAEP",
+ "EncryptPKCS1v15",
+ "ErrDecryption",
+ "ErrMessageTooLong",
+ "ErrVerification",
+ "GenerateKey",
+ "GenerateMultiPrimeKey",
+ "OAEPOptions",
+ "PKCS1v15DecryptOptions",
+ "PSSOptions",
+ "PSSSaltLengthAuto",
+ "PSSSaltLengthEqualsHash",
+ "PrecomputedValues",
+ "PrivateKey",
+ "PublicKey",
+ "SignPKCS1v15",
+ "SignPSS",
+ "VerifyPKCS1v15",
+ "VerifyPSS",
+ },
+ "crypto/sha1": []string{
+ "BlockSize",
+ "New",
+ "Size",
+ "Sum",
+ },
+ "crypto/sha256": []string{
+ "BlockSize",
+ "New",
+ "New224",
+ "Size",
+ "Size224",
+ "Sum224",
+ "Sum256",
+ },
+ "crypto/sha512": []string{
+ "BlockSize",
+ "New",
+ "New384",
+ "New512_224",
+ "New512_256",
+ "Size",
+ "Size224",
+ "Size256",
+ "Size384",
+ "Sum384",
+ "Sum512",
+ "Sum512_224",
+ "Sum512_256",
+ },
+ "crypto/subtle": []string{
+ "ConstantTimeByteEq",
+ "ConstantTimeCompare",
+ "ConstantTimeCopy",
+ "ConstantTimeEq",
+ "ConstantTimeLessOrEq",
+ "ConstantTimeSelect",
+ },
+ "crypto/tls": []string{
+ "Certificate",
+ "CertificateRequestInfo",
+ "Client",
+ "ClientAuthType",
+ "ClientHelloInfo",
+ "ClientSessionCache",
+ "ClientSessionState",
+ "Config",
+ "Conn",
+ "ConnectionState",
+ "CurveID",
+ "CurveP256",
+ "CurveP384",
+ "CurveP521",
+ "Dial",
+ "DialWithDialer",
+ "ECDSAWithP256AndSHA256",
+ "ECDSAWithP384AndSHA384",
+ "ECDSAWithP521AndSHA512",
+ "ECDSAWithSHA1",
+ "Ed25519",
+ "Listen",
+ "LoadX509KeyPair",
+ "NewLRUClientSessionCache",
+ "NewListener",
+ "NoClientCert",
+ "PKCS1WithSHA1",
+ "PKCS1WithSHA256",
+ "PKCS1WithSHA384",
+ "PKCS1WithSHA512",
+ "PSSWithSHA256",
+ "PSSWithSHA384",
+ "PSSWithSHA512",
+ "RecordHeaderError",
+ "RenegotiateFreelyAsClient",
+ "RenegotiateNever",
+ "RenegotiateOnceAsClient",
+ "RenegotiationSupport",
+ "RequestClientCert",
+ "RequireAndVerifyClientCert",
+ "RequireAnyClientCert",
+ "Server",
+ "SignatureScheme",
+ "TLS_AES_128_GCM_SHA256",
+ "TLS_AES_256_GCM_SHA384",
+ "TLS_CHACHA20_POLY1305_SHA256",
+ "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
+ "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256",
+ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
+ "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
+ "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
+ "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305",
+ "TLS_ECDHE_ECDSA_WITH_RC4_128_SHA",
+ "TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA",
+ "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
+ "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256",
+ "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
+ "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
+ "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
+ "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305",
+ "TLS_ECDHE_RSA_WITH_RC4_128_SHA",
+ "TLS_FALLBACK_SCSV",
+ "TLS_RSA_WITH_3DES_EDE_CBC_SHA",
+ "TLS_RSA_WITH_AES_128_CBC_SHA",
+ "TLS_RSA_WITH_AES_128_CBC_SHA256",
+ "TLS_RSA_WITH_AES_128_GCM_SHA256",
+ "TLS_RSA_WITH_AES_256_CBC_SHA",
+ "TLS_RSA_WITH_AES_256_GCM_SHA384",
+ "TLS_RSA_WITH_RC4_128_SHA",
+ "VerifyClientCertIfGiven",
+ "VersionSSL30",
+ "VersionTLS10",
+ "VersionTLS11",
+ "VersionTLS12",
+ "VersionTLS13",
+ "X25519",
+ "X509KeyPair",
+ },
+ "crypto/x509": []string{
+ "CANotAuthorizedForExtKeyUsage",
+ "CANotAuthorizedForThisName",
+ "CertPool",
+ "Certificate",
+ "CertificateInvalidError",
+ "CertificateRequest",
+ "ConstraintViolationError",
+ "CreateCertificate",
+ "CreateCertificateRequest",
+ "DSA",
+ "DSAWithSHA1",
+ "DSAWithSHA256",
+ "DecryptPEMBlock",
+ "ECDSA",
+ "ECDSAWithSHA1",
+ "ECDSAWithSHA256",
+ "ECDSAWithSHA384",
+ "ECDSAWithSHA512",
+ "Ed25519",
+ "EncryptPEMBlock",
+ "ErrUnsupportedAlgorithm",
+ "Expired",
+ "ExtKeyUsage",
+ "ExtKeyUsageAny",
+ "ExtKeyUsageClientAuth",
+ "ExtKeyUsageCodeSigning",
+ "ExtKeyUsageEmailProtection",
+ "ExtKeyUsageIPSECEndSystem",
+ "ExtKeyUsageIPSECTunnel",
+ "ExtKeyUsageIPSECUser",
+ "ExtKeyUsageMicrosoftCommercialCodeSigning",
+ "ExtKeyUsageMicrosoftKernelCodeSigning",
+ "ExtKeyUsageMicrosoftServerGatedCrypto",
+ "ExtKeyUsageNetscapeServerGatedCrypto",
+ "ExtKeyUsageOCSPSigning",
+ "ExtKeyUsageServerAuth",
+ "ExtKeyUsageTimeStamping",
+ "HostnameError",
+ "IncompatibleUsage",
+ "IncorrectPasswordError",
+ "InsecureAlgorithmError",
+ "InvalidReason",
+ "IsEncryptedPEMBlock",
+ "KeyUsage",
+ "KeyUsageCRLSign",
+ "KeyUsageCertSign",
+ "KeyUsageContentCommitment",
+ "KeyUsageDataEncipherment",
+ "KeyUsageDecipherOnly",
+ "KeyUsageDigitalSignature",
+ "KeyUsageEncipherOnly",
+ "KeyUsageKeyAgreement",
+ "KeyUsageKeyEncipherment",
+ "MD2WithRSA",
+ "MD5WithRSA",
+ "MarshalECPrivateKey",
+ "MarshalPKCS1PrivateKey",
+ "MarshalPKCS1PublicKey",
+ "MarshalPKCS8PrivateKey",
+ "MarshalPKIXPublicKey",
+ "NameConstraintsWithoutSANs",
+ "NameMismatch",
+ "NewCertPool",
+ "NotAuthorizedToSign",
+ "PEMCipher",
+ "PEMCipher3DES",
+ "PEMCipherAES128",
+ "PEMCipherAES192",
+ "PEMCipherAES256",
+ "PEMCipherDES",
+ "ParseCRL",
+ "ParseCertificate",
+ "ParseCertificateRequest",
+ "ParseCertificates",
+ "ParseDERCRL",
+ "ParseECPrivateKey",
+ "ParsePKCS1PrivateKey",
+ "ParsePKCS1PublicKey",
+ "ParsePKCS8PrivateKey",
+ "ParsePKIXPublicKey",
+ "PublicKeyAlgorithm",
+ "PureEd25519",
+ "RSA",
+ "SHA1WithRSA",
+ "SHA256WithRSA",
+ "SHA256WithRSAPSS",
+ "SHA384WithRSA",
+ "SHA384WithRSAPSS",
+ "SHA512WithRSA",
+ "SHA512WithRSAPSS",
+ "SignatureAlgorithm",
+ "SystemCertPool",
+ "SystemRootsError",
+ "TooManyConstraints",
+ "TooManyIntermediates",
+ "UnconstrainedName",
+ "UnhandledCriticalExtension",
+ "UnknownAuthorityError",
+ "UnknownPublicKeyAlgorithm",
+ "UnknownSignatureAlgorithm",
+ "VerifyOptions",
+ },
+ "crypto/x509/pkix": []string{
+ "AlgorithmIdentifier",
+ "AttributeTypeAndValue",
+ "AttributeTypeAndValueSET",
+ "CertificateList",
+ "Extension",
+ "Name",
+ "RDNSequence",
+ "RelativeDistinguishedNameSET",
+ "RevokedCertificate",
+ "TBSCertificateList",
+ },
+ "database/sql": []string{
+ "ColumnType",
+ "Conn",
+ "DB",
+ "DBStats",
+ "Drivers",
+ "ErrConnDone",
+ "ErrNoRows",
+ "ErrTxDone",
+ "IsolationLevel",
+ "LevelDefault",
+ "LevelLinearizable",
+ "LevelReadCommitted",
+ "LevelReadUncommitted",
+ "LevelRepeatableRead",
+ "LevelSerializable",
+ "LevelSnapshot",
+ "LevelWriteCommitted",
+ "Named",
+ "NamedArg",
+ "NullBool",
+ "NullFloat64",
+ "NullInt32",
+ "NullInt64",
+ "NullString",
+ "NullTime",
+ "Open",
+ "OpenDB",
+ "Out",
+ "RawBytes",
+ "Register",
+ "Result",
+ "Row",
+ "Rows",
+ "Scanner",
+ "Stmt",
+ "Tx",
+ "TxOptions",
+ },
+ "database/sql/driver": []string{
+ "Bool",
+ "ColumnConverter",
+ "Conn",
+ "ConnBeginTx",
+ "ConnPrepareContext",
+ "Connector",
+ "DefaultParameterConverter",
+ "Driver",
+ "DriverContext",
+ "ErrBadConn",
+ "ErrRemoveArgument",
+ "ErrSkip",
+ "Execer",
+ "ExecerContext",
+ "Int32",
+ "IsScanValue",
+ "IsValue",
+ "IsolationLevel",
+ "NamedValue",
+ "NamedValueChecker",
+ "NotNull",
+ "Null",
+ "Pinger",
+ "Queryer",
+ "QueryerContext",
+ "Result",
+ "ResultNoRows",
+ "Rows",
+ "RowsAffected",
+ "RowsColumnTypeDatabaseTypeName",
+ "RowsColumnTypeLength",
+ "RowsColumnTypeNullable",
+ "RowsColumnTypePrecisionScale",
+ "RowsColumnTypeScanType",
+ "RowsNextResultSet",
+ "SessionResetter",
+ "Stmt",
+ "StmtExecContext",
+ "StmtQueryContext",
+ "String",
+ "Tx",
+ "TxOptions",
+ "Value",
+ "ValueConverter",
+ "Valuer",
+ },
+ "debug/dwarf": []string{
+ "AddrType",
+ "ArrayType",
+ "Attr",
+ "AttrAbstractOrigin",
+ "AttrAccessibility",
+ "AttrAddrClass",
+ "AttrAllocated",
+ "AttrArtificial",
+ "AttrAssociated",
+ "AttrBaseTypes",
+ "AttrBitOffset",
+ "AttrBitSize",
+ "AttrByteSize",
+ "AttrCallColumn",
+ "AttrCallFile",
+ "AttrCallLine",
+ "AttrCalling",
+ "AttrCommonRef",
+ "AttrCompDir",
+ "AttrConstValue",
+ "AttrContainingType",
+ "AttrCount",
+ "AttrDataLocation",
+ "AttrDataMemberLoc",
+ "AttrDeclColumn",
+ "AttrDeclFile",
+ "AttrDeclLine",
+ "AttrDeclaration",
+ "AttrDefaultValue",
+ "AttrDescription",
+ "AttrDiscr",
+ "AttrDiscrList",
+ "AttrDiscrValue",
+ "AttrEncoding",
+ "AttrEntrypc",
+ "AttrExtension",
+ "AttrExternal",
+ "AttrFrameBase",
+ "AttrFriend",
+ "AttrHighpc",
+ "AttrIdentifierCase",
+ "AttrImport",
+ "AttrInline",
+ "AttrIsOptional",
+ "AttrLanguage",
+ "AttrLocation",
+ "AttrLowerBound",
+ "AttrLowpc",
+ "AttrMacroInfo",
+ "AttrName",
+ "AttrNamelistItem",
+ "AttrOrdering",
+ "AttrPriority",
+ "AttrProducer",
+ "AttrPrototyped",
+ "AttrRanges",
+ "AttrReturnAddr",
+ "AttrSegment",
+ "AttrSibling",
+ "AttrSpecification",
+ "AttrStartScope",
+ "AttrStaticLink",
+ "AttrStmtList",
+ "AttrStride",
+ "AttrStrideSize",
+ "AttrStringLength",
+ "AttrTrampoline",
+ "AttrType",
+ "AttrUpperBound",
+ "AttrUseLocation",
+ "AttrUseUTF8",
+ "AttrVarParam",
+ "AttrVirtuality",
+ "AttrVisibility",
+ "AttrVtableElemLoc",
+ "BasicType",
+ "BoolType",
+ "CharType",
+ "Class",
+ "ClassAddress",
+ "ClassBlock",
+ "ClassConstant",
+ "ClassExprLoc",
+ "ClassFlag",
+ "ClassLinePtr",
+ "ClassLocListPtr",
+ "ClassMacPtr",
+ "ClassRangeListPtr",
+ "ClassReference",
+ "ClassReferenceAlt",
+ "ClassReferenceSig",
+ "ClassString",
+ "ClassStringAlt",
+ "ClassUnknown",
+ "CommonType",
+ "ComplexType",
+ "Data",
+ "DecodeError",
+ "DotDotDotType",
+ "Entry",
+ "EnumType",
+ "EnumValue",
+ "ErrUnknownPC",
+ "Field",
+ "FloatType",
+ "FuncType",
+ "IntType",
+ "LineEntry",
+ "LineFile",
+ "LineReader",
+ "LineReaderPos",
+ "New",
+ "Offset",
+ "PtrType",
+ "QualType",
+ "Reader",
+ "StructField",
+ "StructType",
+ "Tag",
+ "TagAccessDeclaration",
+ "TagArrayType",
+ "TagBaseType",
+ "TagCatchDwarfBlock",
+ "TagClassType",
+ "TagCommonDwarfBlock",
+ "TagCommonInclusion",
+ "TagCompileUnit",
+ "TagCondition",
+ "TagConstType",
+ "TagConstant",
+ "TagDwarfProcedure",
+ "TagEntryPoint",
+ "TagEnumerationType",
+ "TagEnumerator",
+ "TagFileType",
+ "TagFormalParameter",
+ "TagFriend",
+ "TagImportedDeclaration",
+ "TagImportedModule",
+ "TagImportedUnit",
+ "TagInheritance",
+ "TagInlinedSubroutine",
+ "TagInterfaceType",
+ "TagLabel",
+ "TagLexDwarfBlock",
+ "TagMember",
+ "TagModule",
+ "TagMutableType",
+ "TagNamelist",
+ "TagNamelistItem",
+ "TagNamespace",
+ "TagPackedType",
+ "TagPartialUnit",
+ "TagPointerType",
+ "TagPtrToMemberType",
+ "TagReferenceType",
+ "TagRestrictType",
+ "TagRvalueReferenceType",
+ "TagSetType",
+ "TagSharedType",
+ "TagStringType",
+ "TagStructType",
+ "TagSubprogram",
+ "TagSubrangeType",
+ "TagSubroutineType",
+ "TagTemplateAlias",
+ "TagTemplateTypeParameter",
+ "TagTemplateValueParameter",
+ "TagThrownType",
+ "TagTryDwarfBlock",
+ "TagTypeUnit",
+ "TagTypedef",
+ "TagUnionType",
+ "TagUnspecifiedParameters",
+ "TagUnspecifiedType",
+ "TagVariable",
+ "TagVariant",
+ "TagVariantPart",
+ "TagVolatileType",
+ "TagWithStmt",
+ "Type",
+ "TypedefType",
+ "UcharType",
+ "UintType",
+ "UnspecifiedType",
+ "UnsupportedType",
+ "VoidType",
+ },
+ "debug/elf": []string{
+ "ARM_MAGIC_TRAMP_NUMBER",
+ "COMPRESS_HIOS",
+ "COMPRESS_HIPROC",
+ "COMPRESS_LOOS",
+ "COMPRESS_LOPROC",
+ "COMPRESS_ZLIB",
+ "Chdr32",
+ "Chdr64",
+ "Class",
+ "CompressionType",
+ "DF_BIND_NOW",
+ "DF_ORIGIN",
+ "DF_STATIC_TLS",
+ "DF_SYMBOLIC",
+ "DF_TEXTREL",
+ "DT_BIND_NOW",
+ "DT_DEBUG",
+ "DT_ENCODING",
+ "DT_FINI",
+ "DT_FINI_ARRAY",
+ "DT_FINI_ARRAYSZ",
+ "DT_FLAGS",
+ "DT_HASH",
+ "DT_HIOS",
+ "DT_HIPROC",
+ "DT_INIT",
+ "DT_INIT_ARRAY",
+ "DT_INIT_ARRAYSZ",
+ "DT_JMPREL",
+ "DT_LOOS",
+ "DT_LOPROC",
+ "DT_NEEDED",
+ "DT_NULL",
+ "DT_PLTGOT",
+ "DT_PLTREL",
+ "DT_PLTRELSZ",
+ "DT_PREINIT_ARRAY",
+ "DT_PREINIT_ARRAYSZ",
+ "DT_REL",
+ "DT_RELA",
+ "DT_RELAENT",
+ "DT_RELASZ",
+ "DT_RELENT",
+ "DT_RELSZ",
+ "DT_RPATH",
+ "DT_RUNPATH",
+ "DT_SONAME",
+ "DT_STRSZ",
+ "DT_STRTAB",
+ "DT_SYMBOLIC",
+ "DT_SYMENT",
+ "DT_SYMTAB",
+ "DT_TEXTREL",
+ "DT_VERNEED",
+ "DT_VERNEEDNUM",
+ "DT_VERSYM",
+ "Data",
+ "Dyn32",
+ "Dyn64",
+ "DynFlag",
+ "DynTag",
+ "EI_ABIVERSION",
+ "EI_CLASS",
+ "EI_DATA",
+ "EI_NIDENT",
+ "EI_OSABI",
+ "EI_PAD",
+ "EI_VERSION",
+ "ELFCLASS32",
+ "ELFCLASS64",
+ "ELFCLASSNONE",
+ "ELFDATA2LSB",
+ "ELFDATA2MSB",
+ "ELFDATANONE",
+ "ELFMAG",
+ "ELFOSABI_86OPEN",
+ "ELFOSABI_AIX",
+ "ELFOSABI_ARM",
+ "ELFOSABI_AROS",
+ "ELFOSABI_CLOUDABI",
+ "ELFOSABI_FENIXOS",
+ "ELFOSABI_FREEBSD",
+ "ELFOSABI_HPUX",
+ "ELFOSABI_HURD",
+ "ELFOSABI_IRIX",
+ "ELFOSABI_LINUX",
+ "ELFOSABI_MODESTO",
+ "ELFOSABI_NETBSD",
+ "ELFOSABI_NONE",
+ "ELFOSABI_NSK",
+ "ELFOSABI_OPENBSD",
+ "ELFOSABI_OPENVMS",
+ "ELFOSABI_SOLARIS",
+ "ELFOSABI_STANDALONE",
+ "ELFOSABI_TRU64",
+ "EM_386",
+ "EM_486",
+ "EM_56800EX",
+ "EM_68HC05",
+ "EM_68HC08",
+ "EM_68HC11",
+ "EM_68HC12",
+ "EM_68HC16",
+ "EM_68K",
+ "EM_78KOR",
+ "EM_8051",
+ "EM_860",
+ "EM_88K",
+ "EM_960",
+ "EM_AARCH64",
+ "EM_ALPHA",
+ "EM_ALPHA_STD",
+ "EM_ALTERA_NIOS2",
+ "EM_AMDGPU",
+ "EM_ARC",
+ "EM_ARCA",
+ "EM_ARC_COMPACT",
+ "EM_ARC_COMPACT2",
+ "EM_ARM",
+ "EM_AVR",
+ "EM_AVR32",
+ "EM_BA1",
+ "EM_BA2",
+ "EM_BLACKFIN",
+ "EM_BPF",
+ "EM_C166",
+ "EM_CDP",
+ "EM_CE",
+ "EM_CLOUDSHIELD",
+ "EM_COGE",
+ "EM_COLDFIRE",
+ "EM_COOL",
+ "EM_COREA_1ST",
+ "EM_COREA_2ND",
+ "EM_CR",
+ "EM_CR16",
+ "EM_CRAYNV2",
+ "EM_CRIS",
+ "EM_CRX",
+ "EM_CSR_KALIMBA",
+ "EM_CUDA",
+ "EM_CYPRESS_M8C",
+ "EM_D10V",
+ "EM_D30V",
+ "EM_DSP24",
+ "EM_DSPIC30F",
+ "EM_DXP",
+ "EM_ECOG1",
+ "EM_ECOG16",
+ "EM_ECOG1X",
+ "EM_ECOG2",
+ "EM_ETPU",
+ "EM_EXCESS",
+ "EM_F2MC16",
+ "EM_FIREPATH",
+ "EM_FR20",
+ "EM_FR30",
+ "EM_FT32",
+ "EM_FX66",
+ "EM_H8S",
+ "EM_H8_300",
+ "EM_H8_300H",
+ "EM_H8_500",
+ "EM_HUANY",
+ "EM_IA_64",
+ "EM_INTEL205",
+ "EM_INTEL206",
+ "EM_INTEL207",
+ "EM_INTEL208",
+ "EM_INTEL209",
+ "EM_IP2K",
+ "EM_JAVELIN",
+ "EM_K10M",
+ "EM_KM32",
+ "EM_KMX16",
+ "EM_KMX32",
+ "EM_KMX8",
+ "EM_KVARC",
+ "EM_L10M",
+ "EM_LANAI",
+ "EM_LATTICEMICO32",
+ "EM_M16C",
+ "EM_M32",
+ "EM_M32C",
+ "EM_M32R",
+ "EM_MANIK",
+ "EM_MAX",
+ "EM_MAXQ30",
+ "EM_MCHP_PIC",
+ "EM_MCST_ELBRUS",
+ "EM_ME16",
+ "EM_METAG",
+ "EM_MICROBLAZE",
+ "EM_MIPS",
+ "EM_MIPS_RS3_LE",
+ "EM_MIPS_RS4_BE",
+ "EM_MIPS_X",
+ "EM_MMA",
+ "EM_MMDSP_PLUS",
+ "EM_MMIX",
+ "EM_MN10200",
+ "EM_MN10300",
+ "EM_MOXIE",
+ "EM_MSP430",
+ "EM_NCPU",
+ "EM_NDR1",
+ "EM_NDS32",
+ "EM_NONE",
+ "EM_NORC",
+ "EM_NS32K",
+ "EM_OPEN8",
+ "EM_OPENRISC",
+ "EM_PARISC",
+ "EM_PCP",
+ "EM_PDP10",
+ "EM_PDP11",
+ "EM_PDSP",
+ "EM_PJ",
+ "EM_PPC",
+ "EM_PPC64",
+ "EM_PRISM",
+ "EM_QDSP6",
+ "EM_R32C",
+ "EM_RCE",
+ "EM_RH32",
+ "EM_RISCV",
+ "EM_RL78",
+ "EM_RS08",
+ "EM_RX",
+ "EM_S370",
+ "EM_S390",
+ "EM_SCORE7",
+ "EM_SEP",
+ "EM_SE_C17",
+ "EM_SE_C33",
+ "EM_SH",
+ "EM_SHARC",
+ "EM_SLE9X",
+ "EM_SNP1K",
+ "EM_SPARC",
+ "EM_SPARC32PLUS",
+ "EM_SPARCV9",
+ "EM_ST100",
+ "EM_ST19",
+ "EM_ST200",
+ "EM_ST7",
+ "EM_ST9PLUS",
+ "EM_STARCORE",
+ "EM_STM8",
+ "EM_STXP7X",
+ "EM_SVX",
+ "EM_TILE64",
+ "EM_TILEGX",
+ "EM_TILEPRO",
+ "EM_TINYJ",
+ "EM_TI_ARP32",
+ "EM_TI_C2000",
+ "EM_TI_C5500",
+ "EM_TI_C6000",
+ "EM_TI_PRU",
+ "EM_TMM_GPP",
+ "EM_TPC",
+ "EM_TRICORE",
+ "EM_TRIMEDIA",
+ "EM_TSK3000",
+ "EM_UNICORE",
+ "EM_V800",
+ "EM_V850",
+ "EM_VAX",
+ "EM_VIDEOCORE",
+ "EM_VIDEOCORE3",
+ "EM_VIDEOCORE5",
+ "EM_VISIUM",
+ "EM_VPP500",
+ "EM_X86_64",
+ "EM_XCORE",
+ "EM_XGATE",
+ "EM_XIMO16",
+ "EM_XTENSA",
+ "EM_Z80",
+ "EM_ZSP",
+ "ET_CORE",
+ "ET_DYN",
+ "ET_EXEC",
+ "ET_HIOS",
+ "ET_HIPROC",
+ "ET_LOOS",
+ "ET_LOPROC",
+ "ET_NONE",
+ "ET_REL",
+ "EV_CURRENT",
+ "EV_NONE",
+ "ErrNoSymbols",
+ "File",
+ "FileHeader",
+ "FormatError",
+ "Header32",
+ "Header64",
+ "ImportedSymbol",
+ "Machine",
+ "NT_FPREGSET",
+ "NT_PRPSINFO",
+ "NT_PRSTATUS",
+ "NType",
+ "NewFile",
+ "OSABI",
+ "Open",
+ "PF_MASKOS",
+ "PF_MASKPROC",
+ "PF_R",
+ "PF_W",
+ "PF_X",
+ "PT_DYNAMIC",
+ "PT_HIOS",
+ "PT_HIPROC",
+ "PT_INTERP",
+ "PT_LOAD",
+ "PT_LOOS",
+ "PT_LOPROC",
+ "PT_NOTE",
+ "PT_NULL",
+ "PT_PHDR",
+ "PT_SHLIB",
+ "PT_TLS",
+ "Prog",
+ "Prog32",
+ "Prog64",
+ "ProgFlag",
+ "ProgHeader",
+ "ProgType",
+ "R_386",
+ "R_386_16",
+ "R_386_32",
+ "R_386_32PLT",
+ "R_386_8",
+ "R_386_COPY",
+ "R_386_GLOB_DAT",
+ "R_386_GOT32",
+ "R_386_GOT32X",
+ "R_386_GOTOFF",
+ "R_386_GOTPC",
+ "R_386_IRELATIVE",
+ "R_386_JMP_SLOT",
+ "R_386_NONE",
+ "R_386_PC16",
+ "R_386_PC32",
+ "R_386_PC8",
+ "R_386_PLT32",
+ "R_386_RELATIVE",
+ "R_386_SIZE32",
+ "R_386_TLS_DESC",
+ "R_386_TLS_DESC_CALL",
+ "R_386_TLS_DTPMOD32",
+ "R_386_TLS_DTPOFF32",
+ "R_386_TLS_GD",
+ "R_386_TLS_GD_32",
+ "R_386_TLS_GD_CALL",
+ "R_386_TLS_GD_POP",
+ "R_386_TLS_GD_PUSH",
+ "R_386_TLS_GOTDESC",
+ "R_386_TLS_GOTIE",
+ "R_386_TLS_IE",
+ "R_386_TLS_IE_32",
+ "R_386_TLS_LDM",
+ "R_386_TLS_LDM_32",
+ "R_386_TLS_LDM_CALL",
+ "R_386_TLS_LDM_POP",
+ "R_386_TLS_LDM_PUSH",
+ "R_386_TLS_LDO_32",
+ "R_386_TLS_LE",
+ "R_386_TLS_LE_32",
+ "R_386_TLS_TPOFF",
+ "R_386_TLS_TPOFF32",
+ "R_390",
+ "R_390_12",
+ "R_390_16",
+ "R_390_20",
+ "R_390_32",
+ "R_390_64",
+ "R_390_8",
+ "R_390_COPY",
+ "R_390_GLOB_DAT",
+ "R_390_GOT12",
+ "R_390_GOT16",
+ "R_390_GOT20",
+ "R_390_GOT32",
+ "R_390_GOT64",
+ "R_390_GOTENT",
+ "R_390_GOTOFF",
+ "R_390_GOTOFF16",
+ "R_390_GOTOFF64",
+ "R_390_GOTPC",
+ "R_390_GOTPCDBL",
+ "R_390_GOTPLT12",
+ "R_390_GOTPLT16",
+ "R_390_GOTPLT20",
+ "R_390_GOTPLT32",
+ "R_390_GOTPLT64",
+ "R_390_GOTPLTENT",
+ "R_390_GOTPLTOFF16",
+ "R_390_GOTPLTOFF32",
+ "R_390_GOTPLTOFF64",
+ "R_390_JMP_SLOT",
+ "R_390_NONE",
+ "R_390_PC16",
+ "R_390_PC16DBL",
+ "R_390_PC32",
+ "R_390_PC32DBL",
+ "R_390_PC64",
+ "R_390_PLT16DBL",
+ "R_390_PLT32",
+ "R_390_PLT32DBL",
+ "R_390_PLT64",
+ "R_390_RELATIVE",
+ "R_390_TLS_DTPMOD",
+ "R_390_TLS_DTPOFF",
+ "R_390_TLS_GD32",
+ "R_390_TLS_GD64",
+ "R_390_TLS_GDCALL",
+ "R_390_TLS_GOTIE12",
+ "R_390_TLS_GOTIE20",
+ "R_390_TLS_GOTIE32",
+ "R_390_TLS_GOTIE64",
+ "R_390_TLS_IE32",
+ "R_390_TLS_IE64",
+ "R_390_TLS_IEENT",
+ "R_390_TLS_LDCALL",
+ "R_390_TLS_LDM32",
+ "R_390_TLS_LDM64",
+ "R_390_TLS_LDO32",
+ "R_390_TLS_LDO64",
+ "R_390_TLS_LE32",
+ "R_390_TLS_LE64",
+ "R_390_TLS_LOAD",
+ "R_390_TLS_TPOFF",
+ "R_AARCH64",
+ "R_AARCH64_ABS16",
+ "R_AARCH64_ABS32",
+ "R_AARCH64_ABS64",
+ "R_AARCH64_ADD_ABS_LO12_NC",
+ "R_AARCH64_ADR_GOT_PAGE",
+ "R_AARCH64_ADR_PREL_LO21",
+ "R_AARCH64_ADR_PREL_PG_HI21",
+ "R_AARCH64_ADR_PREL_PG_HI21_NC",
+ "R_AARCH64_CALL26",
+ "R_AARCH64_CONDBR19",
+ "R_AARCH64_COPY",
+ "R_AARCH64_GLOB_DAT",
+ "R_AARCH64_GOT_LD_PREL19",
+ "R_AARCH64_IRELATIVE",
+ "R_AARCH64_JUMP26",
+ "R_AARCH64_JUMP_SLOT",
+ "R_AARCH64_LD64_GOTOFF_LO15",
+ "R_AARCH64_LD64_GOTPAGE_LO15",
+ "R_AARCH64_LD64_GOT_LO12_NC",
+ "R_AARCH64_LDST128_ABS_LO12_NC",
+ "R_AARCH64_LDST16_ABS_LO12_NC",
+ "R_AARCH64_LDST32_ABS_LO12_NC",
+ "R_AARCH64_LDST64_ABS_LO12_NC",
+ "R_AARCH64_LDST8_ABS_LO12_NC",
+ "R_AARCH64_LD_PREL_LO19",
+ "R_AARCH64_MOVW_SABS_G0",
+ "R_AARCH64_MOVW_SABS_G1",
+ "R_AARCH64_MOVW_SABS_G2",
+ "R_AARCH64_MOVW_UABS_G0",
+ "R_AARCH64_MOVW_UABS_G0_NC",
+ "R_AARCH64_MOVW_UABS_G1",
+ "R_AARCH64_MOVW_UABS_G1_NC",
+ "R_AARCH64_MOVW_UABS_G2",
+ "R_AARCH64_MOVW_UABS_G2_NC",
+ "R_AARCH64_MOVW_UABS_G3",
+ "R_AARCH64_NONE",
+ "R_AARCH64_NULL",
+ "R_AARCH64_P32_ABS16",
+ "R_AARCH64_P32_ABS32",
+ "R_AARCH64_P32_ADD_ABS_LO12_NC",
+ "R_AARCH64_P32_ADR_GOT_PAGE",
+ "R_AARCH64_P32_ADR_PREL_LO21",
+ "R_AARCH64_P32_ADR_PREL_PG_HI21",
+ "R_AARCH64_P32_CALL26",
+ "R_AARCH64_P32_CONDBR19",
+ "R_AARCH64_P32_COPY",
+ "R_AARCH64_P32_GLOB_DAT",
+ "R_AARCH64_P32_GOT_LD_PREL19",
+ "R_AARCH64_P32_IRELATIVE",
+ "R_AARCH64_P32_JUMP26",
+ "R_AARCH64_P32_JUMP_SLOT",
+ "R_AARCH64_P32_LD32_GOT_LO12_NC",
+ "R_AARCH64_P32_LDST128_ABS_LO12_NC",
+ "R_AARCH64_P32_LDST16_ABS_LO12_NC",
+ "R_AARCH64_P32_LDST32_ABS_LO12_NC",
+ "R_AARCH64_P32_LDST64_ABS_LO12_NC",
+ "R_AARCH64_P32_LDST8_ABS_LO12_NC",
+ "R_AARCH64_P32_LD_PREL_LO19",
+ "R_AARCH64_P32_MOVW_SABS_G0",
+ "R_AARCH64_P32_MOVW_UABS_G0",
+ "R_AARCH64_P32_MOVW_UABS_G0_NC",
+ "R_AARCH64_P32_MOVW_UABS_G1",
+ "R_AARCH64_P32_PREL16",
+ "R_AARCH64_P32_PREL32",
+ "R_AARCH64_P32_RELATIVE",
+ "R_AARCH64_P32_TLSDESC",
+ "R_AARCH64_P32_TLSDESC_ADD_LO12_NC",
+ "R_AARCH64_P32_TLSDESC_ADR_PAGE21",
+ "R_AARCH64_P32_TLSDESC_ADR_PREL21",
+ "R_AARCH64_P32_TLSDESC_CALL",
+ "R_AARCH64_P32_TLSDESC_LD32_LO12_NC",
+ "R_AARCH64_P32_TLSDESC_LD_PREL19",
+ "R_AARCH64_P32_TLSGD_ADD_LO12_NC",
+ "R_AARCH64_P32_TLSGD_ADR_PAGE21",
+ "R_AARCH64_P32_TLSIE_ADR_GOTTPREL_PAGE21",
+ "R_AARCH64_P32_TLSIE_LD32_GOTTPREL_LO12_NC",
+ "R_AARCH64_P32_TLSIE_LD_GOTTPREL_PREL19",
+ "R_AARCH64_P32_TLSLE_ADD_TPREL_HI12",
+ "R_AARCH64_P32_TLSLE_ADD_TPREL_LO12",
+ "R_AARCH64_P32_TLSLE_ADD_TPREL_LO12_NC",
+ "R_AARCH64_P32_TLSLE_MOVW_TPREL_G0",
+ "R_AARCH64_P32_TLSLE_MOVW_TPREL_G0_NC",
+ "R_AARCH64_P32_TLSLE_MOVW_TPREL_G1",
+ "R_AARCH64_P32_TLS_DTPMOD",
+ "R_AARCH64_P32_TLS_DTPREL",
+ "R_AARCH64_P32_TLS_TPREL",
+ "R_AARCH64_P32_TSTBR14",
+ "R_AARCH64_PREL16",
+ "R_AARCH64_PREL32",
+ "R_AARCH64_PREL64",
+ "R_AARCH64_RELATIVE",
+ "R_AARCH64_TLSDESC",
+ "R_AARCH64_TLSDESC_ADD",
+ "R_AARCH64_TLSDESC_ADD_LO12_NC",
+ "R_AARCH64_TLSDESC_ADR_PAGE21",
+ "R_AARCH64_TLSDESC_ADR_PREL21",
+ "R_AARCH64_TLSDESC_CALL",
+ "R_AARCH64_TLSDESC_LD64_LO12_NC",
+ "R_AARCH64_TLSDESC_LDR",
+ "R_AARCH64_TLSDESC_LD_PREL19",
+ "R_AARCH64_TLSDESC_OFF_G0_NC",
+ "R_AARCH64_TLSDESC_OFF_G1",
+ "R_AARCH64_TLSGD_ADD_LO12_NC",
+ "R_AARCH64_TLSGD_ADR_PAGE21",
+ "R_AARCH64_TLSGD_ADR_PREL21",
+ "R_AARCH64_TLSGD_MOVW_G0_NC",
+ "R_AARCH64_TLSGD_MOVW_G1",
+ "R_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21",
+ "R_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC",
+ "R_AARCH64_TLSIE_LD_GOTTPREL_PREL19",
+ "R_AARCH64_TLSIE_MOVW_GOTTPREL_G0_NC",
+ "R_AARCH64_TLSIE_MOVW_GOTTPREL_G1",
+ "R_AARCH64_TLSLD_ADR_PAGE21",
+ "R_AARCH64_TLSLD_ADR_PREL21",
+ "R_AARCH64_TLSLD_LDST128_DTPREL_LO12",
+ "R_AARCH64_TLSLD_LDST128_DTPREL_LO12_NC",
+ "R_AARCH64_TLSLE_ADD_TPREL_HI12",
+ "R_AARCH64_TLSLE_ADD_TPREL_LO12",
+ "R_AARCH64_TLSLE_ADD_TPREL_LO12_NC",
+ "R_AARCH64_TLSLE_LDST128_TPREL_LO12",
+ "R_AARCH64_TLSLE_LDST128_TPREL_LO12_NC",
+ "R_AARCH64_TLSLE_MOVW_TPREL_G0",
+ "R_AARCH64_TLSLE_MOVW_TPREL_G0_NC",
+ "R_AARCH64_TLSLE_MOVW_TPREL_G1",
+ "R_AARCH64_TLSLE_MOVW_TPREL_G1_NC",
+ "R_AARCH64_TLSLE_MOVW_TPREL_G2",
+ "R_AARCH64_TLS_DTPMOD64",
+ "R_AARCH64_TLS_DTPREL64",
+ "R_AARCH64_TLS_TPREL64",
+ "R_AARCH64_TSTBR14",
+ "R_ALPHA",
+ "R_ALPHA_BRADDR",
+ "R_ALPHA_COPY",
+ "R_ALPHA_GLOB_DAT",
+ "R_ALPHA_GPDISP",
+ "R_ALPHA_GPREL32",
+ "R_ALPHA_GPRELHIGH",
+ "R_ALPHA_GPRELLOW",
+ "R_ALPHA_GPVALUE",
+ "R_ALPHA_HINT",
+ "R_ALPHA_IMMED_BR_HI32",
+ "R_ALPHA_IMMED_GP_16",
+ "R_ALPHA_IMMED_GP_HI32",
+ "R_ALPHA_IMMED_LO32",
+ "R_ALPHA_IMMED_SCN_HI32",
+ "R_ALPHA_JMP_SLOT",
+ "R_ALPHA_LITERAL",
+ "R_ALPHA_LITUSE",
+ "R_ALPHA_NONE",
+ "R_ALPHA_OP_PRSHIFT",
+ "R_ALPHA_OP_PSUB",
+ "R_ALPHA_OP_PUSH",
+ "R_ALPHA_OP_STORE",
+ "R_ALPHA_REFLONG",
+ "R_ALPHA_REFQUAD",
+ "R_ALPHA_RELATIVE",
+ "R_ALPHA_SREL16",
+ "R_ALPHA_SREL32",
+ "R_ALPHA_SREL64",
+ "R_ARM",
+ "R_ARM_ABS12",
+ "R_ARM_ABS16",
+ "R_ARM_ABS32",
+ "R_ARM_ABS32_NOI",
+ "R_ARM_ABS8",
+ "R_ARM_ALU_PCREL_15_8",
+ "R_ARM_ALU_PCREL_23_15",
+ "R_ARM_ALU_PCREL_7_0",
+ "R_ARM_ALU_PC_G0",
+ "R_ARM_ALU_PC_G0_NC",
+ "R_ARM_ALU_PC_G1",
+ "R_ARM_ALU_PC_G1_NC",
+ "R_ARM_ALU_PC_G2",
+ "R_ARM_ALU_SBREL_19_12_NC",
+ "R_ARM_ALU_SBREL_27_20_CK",
+ "R_ARM_ALU_SB_G0",
+ "R_ARM_ALU_SB_G0_NC",
+ "R_ARM_ALU_SB_G1",
+ "R_ARM_ALU_SB_G1_NC",
+ "R_ARM_ALU_SB_G2",
+ "R_ARM_AMP_VCALL9",
+ "R_ARM_BASE_ABS",
+ "R_ARM_CALL",
+ "R_ARM_COPY",
+ "R_ARM_GLOB_DAT",
+ "R_ARM_GNU_VTENTRY",
+ "R_ARM_GNU_VTINHERIT",
+ "R_ARM_GOT32",
+ "R_ARM_GOTOFF",
+ "R_ARM_GOTOFF12",
+ "R_ARM_GOTPC",
+ "R_ARM_GOTRELAX",
+ "R_ARM_GOT_ABS",
+ "R_ARM_GOT_BREL12",
+ "R_ARM_GOT_PREL",
+ "R_ARM_IRELATIVE",
+ "R_ARM_JUMP24",
+ "R_ARM_JUMP_SLOT",
+ "R_ARM_LDC_PC_G0",
+ "R_ARM_LDC_PC_G1",
+ "R_ARM_LDC_PC_G2",
+ "R_ARM_LDC_SB_G0",
+ "R_ARM_LDC_SB_G1",
+ "R_ARM_LDC_SB_G2",
+ "R_ARM_LDRS_PC_G0",
+ "R_ARM_LDRS_PC_G1",
+ "R_ARM_LDRS_PC_G2",
+ "R_ARM_LDRS_SB_G0",
+ "R_ARM_LDRS_SB_G1",
+ "R_ARM_LDRS_SB_G2",
+ "R_ARM_LDR_PC_G1",
+ "R_ARM_LDR_PC_G2",
+ "R_ARM_LDR_SBREL_11_10_NC",
+ "R_ARM_LDR_SB_G0",
+ "R_ARM_LDR_SB_G1",
+ "R_ARM_LDR_SB_G2",
+ "R_ARM_ME_TOO",
+ "R_ARM_MOVT_ABS",
+ "R_ARM_MOVT_BREL",
+ "R_ARM_MOVT_PREL",
+ "R_ARM_MOVW_ABS_NC",
+ "R_ARM_MOVW_BREL",
+ "R_ARM_MOVW_BREL_NC",
+ "R_ARM_MOVW_PREL_NC",
+ "R_ARM_NONE",
+ "R_ARM_PC13",
+ "R_ARM_PC24",
+ "R_ARM_PLT32",
+ "R_ARM_PLT32_ABS",
+ "R_ARM_PREL31",
+ "R_ARM_PRIVATE_0",
+ "R_ARM_PRIVATE_1",
+ "R_ARM_PRIVATE_10",
+ "R_ARM_PRIVATE_11",
+ "R_ARM_PRIVATE_12",
+ "R_ARM_PRIVATE_13",
+ "R_ARM_PRIVATE_14",
+ "R_ARM_PRIVATE_15",
+ "R_ARM_PRIVATE_2",
+ "R_ARM_PRIVATE_3",
+ "R_ARM_PRIVATE_4",
+ "R_ARM_PRIVATE_5",
+ "R_ARM_PRIVATE_6",
+ "R_ARM_PRIVATE_7",
+ "R_ARM_PRIVATE_8",
+ "R_ARM_PRIVATE_9",
+ "R_ARM_RABS32",
+ "R_ARM_RBASE",
+ "R_ARM_REL32",
+ "R_ARM_REL32_NOI",
+ "R_ARM_RELATIVE",
+ "R_ARM_RPC24",
+ "R_ARM_RREL32",
+ "R_ARM_RSBREL32",
+ "R_ARM_RXPC25",
+ "R_ARM_SBREL31",
+ "R_ARM_SBREL32",
+ "R_ARM_SWI24",
+ "R_ARM_TARGET1",
+ "R_ARM_TARGET2",
+ "R_ARM_THM_ABS5",
+ "R_ARM_THM_ALU_ABS_G0_NC",
+ "R_ARM_THM_ALU_ABS_G1_NC",
+ "R_ARM_THM_ALU_ABS_G2_NC",
+ "R_ARM_THM_ALU_ABS_G3",
+ "R_ARM_THM_ALU_PREL_11_0",
+ "R_ARM_THM_GOT_BREL12",
+ "R_ARM_THM_JUMP11",
+ "R_ARM_THM_JUMP19",
+ "R_ARM_THM_JUMP24",
+ "R_ARM_THM_JUMP6",
+ "R_ARM_THM_JUMP8",
+ "R_ARM_THM_MOVT_ABS",
+ "R_ARM_THM_MOVT_BREL",
+ "R_ARM_THM_MOVT_PREL",
+ "R_ARM_THM_MOVW_ABS_NC",
+ "R_ARM_THM_MOVW_BREL",
+ "R_ARM_THM_MOVW_BREL_NC",
+ "R_ARM_THM_MOVW_PREL_NC",
+ "R_ARM_THM_PC12",
+ "R_ARM_THM_PC22",
+ "R_ARM_THM_PC8",
+ "R_ARM_THM_RPC22",
+ "R_ARM_THM_SWI8",
+ "R_ARM_THM_TLS_CALL",
+ "R_ARM_THM_TLS_DESCSEQ16",
+ "R_ARM_THM_TLS_DESCSEQ32",
+ "R_ARM_THM_XPC22",
+ "R_ARM_TLS_CALL",
+ "R_ARM_TLS_DESCSEQ",
+ "R_ARM_TLS_DTPMOD32",
+ "R_ARM_TLS_DTPOFF32",
+ "R_ARM_TLS_GD32",
+ "R_ARM_TLS_GOTDESC",
+ "R_ARM_TLS_IE12GP",
+ "R_ARM_TLS_IE32",
+ "R_ARM_TLS_LDM32",
+ "R_ARM_TLS_LDO12",
+ "R_ARM_TLS_LDO32",
+ "R_ARM_TLS_LE12",
+ "R_ARM_TLS_LE32",
+ "R_ARM_TLS_TPOFF32",
+ "R_ARM_V4BX",
+ "R_ARM_XPC25",
+ "R_INFO",
+ "R_INFO32",
+ "R_MIPS",
+ "R_MIPS_16",
+ "R_MIPS_26",
+ "R_MIPS_32",
+ "R_MIPS_64",
+ "R_MIPS_ADD_IMMEDIATE",
+ "R_MIPS_CALL16",
+ "R_MIPS_CALL_HI16",
+ "R_MIPS_CALL_LO16",
+ "R_MIPS_DELETE",
+ "R_MIPS_GOT16",
+ "R_MIPS_GOT_DISP",
+ "R_MIPS_GOT_HI16",
+ "R_MIPS_GOT_LO16",
+ "R_MIPS_GOT_OFST",
+ "R_MIPS_GOT_PAGE",
+ "R_MIPS_GPREL16",
+ "R_MIPS_GPREL32",
+ "R_MIPS_HI16",
+ "R_MIPS_HIGHER",
+ "R_MIPS_HIGHEST",
+ "R_MIPS_INSERT_A",
+ "R_MIPS_INSERT_B",
+ "R_MIPS_JALR",
+ "R_MIPS_LITERAL",
+ "R_MIPS_LO16",
+ "R_MIPS_NONE",
+ "R_MIPS_PC16",
+ "R_MIPS_PJUMP",
+ "R_MIPS_REL16",
+ "R_MIPS_REL32",
+ "R_MIPS_RELGOT",
+ "R_MIPS_SCN_DISP",
+ "R_MIPS_SHIFT5",
+ "R_MIPS_SHIFT6",
+ "R_MIPS_SUB",
+ "R_MIPS_TLS_DTPMOD32",
+ "R_MIPS_TLS_DTPMOD64",
+ "R_MIPS_TLS_DTPREL32",
+ "R_MIPS_TLS_DTPREL64",
+ "R_MIPS_TLS_DTPREL_HI16",
+ "R_MIPS_TLS_DTPREL_LO16",
+ "R_MIPS_TLS_GD",
+ "R_MIPS_TLS_GOTTPREL",
+ "R_MIPS_TLS_LDM",
+ "R_MIPS_TLS_TPREL32",
+ "R_MIPS_TLS_TPREL64",
+ "R_MIPS_TLS_TPREL_HI16",
+ "R_MIPS_TLS_TPREL_LO16",
+ "R_PPC",
+ "R_PPC64",
+ "R_PPC64_ADDR14",
+ "R_PPC64_ADDR14_BRNTAKEN",
+ "R_PPC64_ADDR14_BRTAKEN",
+ "R_PPC64_ADDR16",
+ "R_PPC64_ADDR16_DS",
+ "R_PPC64_ADDR16_HA",
+ "R_PPC64_ADDR16_HI",
+ "R_PPC64_ADDR16_HIGH",
+ "R_PPC64_ADDR16_HIGHA",
+ "R_PPC64_ADDR16_HIGHER",
+ "R_PPC64_ADDR16_HIGHERA",
+ "R_PPC64_ADDR16_HIGHEST",
+ "R_PPC64_ADDR16_HIGHESTA",
+ "R_PPC64_ADDR16_LO",
+ "R_PPC64_ADDR16_LO_DS",
+ "R_PPC64_ADDR24",
+ "R_PPC64_ADDR32",
+ "R_PPC64_ADDR64",
+ "R_PPC64_ADDR64_LOCAL",
+ "R_PPC64_DTPMOD64",
+ "R_PPC64_DTPREL16",
+ "R_PPC64_DTPREL16_DS",
+ "R_PPC64_DTPREL16_HA",
+ "R_PPC64_DTPREL16_HI",
+ "R_PPC64_DTPREL16_HIGH",
+ "R_PPC64_DTPREL16_HIGHA",
+ "R_PPC64_DTPREL16_HIGHER",
+ "R_PPC64_DTPREL16_HIGHERA",
+ "R_PPC64_DTPREL16_HIGHEST",
+ "R_PPC64_DTPREL16_HIGHESTA",
+ "R_PPC64_DTPREL16_LO",
+ "R_PPC64_DTPREL16_LO_DS",
+ "R_PPC64_DTPREL64",
+ "R_PPC64_ENTRY",
+ "R_PPC64_GOT16",
+ "R_PPC64_GOT16_DS",
+ "R_PPC64_GOT16_HA",
+ "R_PPC64_GOT16_HI",
+ "R_PPC64_GOT16_LO",
+ "R_PPC64_GOT16_LO_DS",
+ "R_PPC64_GOT_DTPREL16_DS",
+ "R_PPC64_GOT_DTPREL16_HA",
+ "R_PPC64_GOT_DTPREL16_HI",
+ "R_PPC64_GOT_DTPREL16_LO_DS",
+ "R_PPC64_GOT_TLSGD16",
+ "R_PPC64_GOT_TLSGD16_HA",
+ "R_PPC64_GOT_TLSGD16_HI",
+ "R_PPC64_GOT_TLSGD16_LO",
+ "R_PPC64_GOT_TLSLD16",
+ "R_PPC64_GOT_TLSLD16_HA",
+ "R_PPC64_GOT_TLSLD16_HI",
+ "R_PPC64_GOT_TLSLD16_LO",
+ "R_PPC64_GOT_TPREL16_DS",
+ "R_PPC64_GOT_TPREL16_HA",
+ "R_PPC64_GOT_TPREL16_HI",
+ "R_PPC64_GOT_TPREL16_LO_DS",
+ "R_PPC64_IRELATIVE",
+ "R_PPC64_JMP_IREL",
+ "R_PPC64_JMP_SLOT",
+ "R_PPC64_NONE",
+ "R_PPC64_PLT16_LO_DS",
+ "R_PPC64_PLTGOT16",
+ "R_PPC64_PLTGOT16_DS",
+ "R_PPC64_PLTGOT16_HA",
+ "R_PPC64_PLTGOT16_HI",
+ "R_PPC64_PLTGOT16_LO",
+ "R_PPC64_PLTGOT_LO_DS",
+ "R_PPC64_REL14",
+ "R_PPC64_REL14_BRNTAKEN",
+ "R_PPC64_REL14_BRTAKEN",
+ "R_PPC64_REL16",
+ "R_PPC64_REL16DX_HA",
+ "R_PPC64_REL16_HA",
+ "R_PPC64_REL16_HI",
+ "R_PPC64_REL16_LO",
+ "R_PPC64_REL24",
+ "R_PPC64_REL24_NOTOC",
+ "R_PPC64_REL32",
+ "R_PPC64_REL64",
+ "R_PPC64_SECTOFF_DS",
+ "R_PPC64_SECTOFF_LO_DS",
+ "R_PPC64_TLS",
+ "R_PPC64_TLSGD",
+ "R_PPC64_TLSLD",
+ "R_PPC64_TOC",
+ "R_PPC64_TOC16",
+ "R_PPC64_TOC16_DS",
+ "R_PPC64_TOC16_HA",
+ "R_PPC64_TOC16_HI",
+ "R_PPC64_TOC16_LO",
+ "R_PPC64_TOC16_LO_DS",
+ "R_PPC64_TOCSAVE",
+ "R_PPC64_TPREL16",
+ "R_PPC64_TPREL16_DS",
+ "R_PPC64_TPREL16_HA",
+ "R_PPC64_TPREL16_HI",
+ "R_PPC64_TPREL16_HIGH",
+ "R_PPC64_TPREL16_HIGHA",
+ "R_PPC64_TPREL16_HIGHER",
+ "R_PPC64_TPREL16_HIGHERA",
+ "R_PPC64_TPREL16_HIGHEST",
+ "R_PPC64_TPREL16_HIGHESTA",
+ "R_PPC64_TPREL16_LO",
+ "R_PPC64_TPREL16_LO_DS",
+ "R_PPC64_TPREL64",
+ "R_PPC_ADDR14",
+ "R_PPC_ADDR14_BRNTAKEN",
+ "R_PPC_ADDR14_BRTAKEN",
+ "R_PPC_ADDR16",
+ "R_PPC_ADDR16_HA",
+ "R_PPC_ADDR16_HI",
+ "R_PPC_ADDR16_LO",
+ "R_PPC_ADDR24",
+ "R_PPC_ADDR32",
+ "R_PPC_COPY",
+ "R_PPC_DTPMOD32",
+ "R_PPC_DTPREL16",
+ "R_PPC_DTPREL16_HA",
+ "R_PPC_DTPREL16_HI",
+ "R_PPC_DTPREL16_LO",
+ "R_PPC_DTPREL32",
+ "R_PPC_EMB_BIT_FLD",
+ "R_PPC_EMB_MRKREF",
+ "R_PPC_EMB_NADDR16",
+ "R_PPC_EMB_NADDR16_HA",
+ "R_PPC_EMB_NADDR16_HI",
+ "R_PPC_EMB_NADDR16_LO",
+ "R_PPC_EMB_NADDR32",
+ "R_PPC_EMB_RELSDA",
+ "R_PPC_EMB_RELSEC16",
+ "R_PPC_EMB_RELST_HA",
+ "R_PPC_EMB_RELST_HI",
+ "R_PPC_EMB_RELST_LO",
+ "R_PPC_EMB_SDA21",
+ "R_PPC_EMB_SDA2I16",
+ "R_PPC_EMB_SDA2REL",
+ "R_PPC_EMB_SDAI16",
+ "R_PPC_GLOB_DAT",
+ "R_PPC_GOT16",
+ "R_PPC_GOT16_HA",
+ "R_PPC_GOT16_HI",
+ "R_PPC_GOT16_LO",
+ "R_PPC_GOT_TLSGD16",
+ "R_PPC_GOT_TLSGD16_HA",
+ "R_PPC_GOT_TLSGD16_HI",
+ "R_PPC_GOT_TLSGD16_LO",
+ "R_PPC_GOT_TLSLD16",
+ "R_PPC_GOT_TLSLD16_HA",
+ "R_PPC_GOT_TLSLD16_HI",
+ "R_PPC_GOT_TLSLD16_LO",
+ "R_PPC_GOT_TPREL16",
+ "R_PPC_GOT_TPREL16_HA",
+ "R_PPC_GOT_TPREL16_HI",
+ "R_PPC_GOT_TPREL16_LO",
+ "R_PPC_JMP_SLOT",
+ "R_PPC_LOCAL24PC",
+ "R_PPC_NONE",
+ "R_PPC_PLT16_HA",
+ "R_PPC_PLT16_HI",
+ "R_PPC_PLT16_LO",
+ "R_PPC_PLT32",
+ "R_PPC_PLTREL24",
+ "R_PPC_PLTREL32",
+ "R_PPC_REL14",
+ "R_PPC_REL14_BRNTAKEN",
+ "R_PPC_REL14_BRTAKEN",
+ "R_PPC_REL24",
+ "R_PPC_REL32",
+ "R_PPC_RELATIVE",
+ "R_PPC_SDAREL16",
+ "R_PPC_SECTOFF",
+ "R_PPC_SECTOFF_HA",
+ "R_PPC_SECTOFF_HI",
+ "R_PPC_SECTOFF_LO",
+ "R_PPC_TLS",
+ "R_PPC_TPREL16",
+ "R_PPC_TPREL16_HA",
+ "R_PPC_TPREL16_HI",
+ "R_PPC_TPREL16_LO",
+ "R_PPC_TPREL32",
+ "R_PPC_UADDR16",
+ "R_PPC_UADDR32",
+ "R_RISCV",
+ "R_RISCV_32",
+ "R_RISCV_32_PCREL",
+ "R_RISCV_64",
+ "R_RISCV_ADD16",
+ "R_RISCV_ADD32",
+ "R_RISCV_ADD64",
+ "R_RISCV_ADD8",
+ "R_RISCV_ALIGN",
+ "R_RISCV_BRANCH",
+ "R_RISCV_CALL",
+ "R_RISCV_CALL_PLT",
+ "R_RISCV_COPY",
+ "R_RISCV_GNU_VTENTRY",
+ "R_RISCV_GNU_VTINHERIT",
+ "R_RISCV_GOT_HI20",
+ "R_RISCV_GPREL_I",
+ "R_RISCV_GPREL_S",
+ "R_RISCV_HI20",
+ "R_RISCV_JAL",
+ "R_RISCV_JUMP_SLOT",
+ "R_RISCV_LO12_I",
+ "R_RISCV_LO12_S",
+ "R_RISCV_NONE",
+ "R_RISCV_PCREL_HI20",
+ "R_RISCV_PCREL_LO12_I",
+ "R_RISCV_PCREL_LO12_S",
+ "R_RISCV_RELATIVE",
+ "R_RISCV_RELAX",
+ "R_RISCV_RVC_BRANCH",
+ "R_RISCV_RVC_JUMP",
+ "R_RISCV_RVC_LUI",
+ "R_RISCV_SET16",
+ "R_RISCV_SET32",
+ "R_RISCV_SET6",
+ "R_RISCV_SET8",
+ "R_RISCV_SUB16",
+ "R_RISCV_SUB32",
+ "R_RISCV_SUB6",
+ "R_RISCV_SUB64",
+ "R_RISCV_SUB8",
+ "R_RISCV_TLS_DTPMOD32",
+ "R_RISCV_TLS_DTPMOD64",
+ "R_RISCV_TLS_DTPREL32",
+ "R_RISCV_TLS_DTPREL64",
+ "R_RISCV_TLS_GD_HI20",
+ "R_RISCV_TLS_GOT_HI20",
+ "R_RISCV_TLS_TPREL32",
+ "R_RISCV_TLS_TPREL64",
+ "R_RISCV_TPREL_ADD",
+ "R_RISCV_TPREL_HI20",
+ "R_RISCV_TPREL_I",
+ "R_RISCV_TPREL_LO12_I",
+ "R_RISCV_TPREL_LO12_S",
+ "R_RISCV_TPREL_S",
+ "R_SPARC",
+ "R_SPARC_10",
+ "R_SPARC_11",
+ "R_SPARC_13",
+ "R_SPARC_16",
+ "R_SPARC_22",
+ "R_SPARC_32",
+ "R_SPARC_5",
+ "R_SPARC_6",
+ "R_SPARC_64",
+ "R_SPARC_7",
+ "R_SPARC_8",
+ "R_SPARC_COPY",
+ "R_SPARC_DISP16",
+ "R_SPARC_DISP32",
+ "R_SPARC_DISP64",
+ "R_SPARC_DISP8",
+ "R_SPARC_GLOB_DAT",
+ "R_SPARC_GLOB_JMP",
+ "R_SPARC_GOT10",
+ "R_SPARC_GOT13",
+ "R_SPARC_GOT22",
+ "R_SPARC_H44",
+ "R_SPARC_HH22",
+ "R_SPARC_HI22",
+ "R_SPARC_HIPLT22",
+ "R_SPARC_HIX22",
+ "R_SPARC_HM10",
+ "R_SPARC_JMP_SLOT",
+ "R_SPARC_L44",
+ "R_SPARC_LM22",
+ "R_SPARC_LO10",
+ "R_SPARC_LOPLT10",
+ "R_SPARC_LOX10",
+ "R_SPARC_M44",
+ "R_SPARC_NONE",
+ "R_SPARC_OLO10",
+ "R_SPARC_PC10",
+ "R_SPARC_PC22",
+ "R_SPARC_PCPLT10",
+ "R_SPARC_PCPLT22",
+ "R_SPARC_PCPLT32",
+ "R_SPARC_PC_HH22",
+ "R_SPARC_PC_HM10",
+ "R_SPARC_PC_LM22",
+ "R_SPARC_PLT32",
+ "R_SPARC_PLT64",
+ "R_SPARC_REGISTER",
+ "R_SPARC_RELATIVE",
+ "R_SPARC_UA16",
+ "R_SPARC_UA32",
+ "R_SPARC_UA64",
+ "R_SPARC_WDISP16",
+ "R_SPARC_WDISP19",
+ "R_SPARC_WDISP22",
+ "R_SPARC_WDISP30",
+ "R_SPARC_WPLT30",
+ "R_SYM32",
+ "R_SYM64",
+ "R_TYPE32",
+ "R_TYPE64",
+ "R_X86_64",
+ "R_X86_64_16",
+ "R_X86_64_32",
+ "R_X86_64_32S",
+ "R_X86_64_64",
+ "R_X86_64_8",
+ "R_X86_64_COPY",
+ "R_X86_64_DTPMOD64",
+ "R_X86_64_DTPOFF32",
+ "R_X86_64_DTPOFF64",
+ "R_X86_64_GLOB_DAT",
+ "R_X86_64_GOT32",
+ "R_X86_64_GOT64",
+ "R_X86_64_GOTOFF64",
+ "R_X86_64_GOTPC32",
+ "R_X86_64_GOTPC32_TLSDESC",
+ "R_X86_64_GOTPC64",
+ "R_X86_64_GOTPCREL",
+ "R_X86_64_GOTPCREL64",
+ "R_X86_64_GOTPCRELX",
+ "R_X86_64_GOTPLT64",
+ "R_X86_64_GOTTPOFF",
+ "R_X86_64_IRELATIVE",
+ "R_X86_64_JMP_SLOT",
+ "R_X86_64_NONE",
+ "R_X86_64_PC16",
+ "R_X86_64_PC32",
+ "R_X86_64_PC32_BND",
+ "R_X86_64_PC64",
+ "R_X86_64_PC8",
+ "R_X86_64_PLT32",
+ "R_X86_64_PLT32_BND",
+ "R_X86_64_PLTOFF64",
+ "R_X86_64_RELATIVE",
+ "R_X86_64_RELATIVE64",
+ "R_X86_64_REX_GOTPCRELX",
+ "R_X86_64_SIZE32",
+ "R_X86_64_SIZE64",
+ "R_X86_64_TLSDESC",
+ "R_X86_64_TLSDESC_CALL",
+ "R_X86_64_TLSGD",
+ "R_X86_64_TLSLD",
+ "R_X86_64_TPOFF32",
+ "R_X86_64_TPOFF64",
+ "Rel32",
+ "Rel64",
+ "Rela32",
+ "Rela64",
+ "SHF_ALLOC",
+ "SHF_COMPRESSED",
+ "SHF_EXECINSTR",
+ "SHF_GROUP",
+ "SHF_INFO_LINK",
+ "SHF_LINK_ORDER",
+ "SHF_MASKOS",
+ "SHF_MASKPROC",
+ "SHF_MERGE",
+ "SHF_OS_NONCONFORMING",
+ "SHF_STRINGS",
+ "SHF_TLS",
+ "SHF_WRITE",
+ "SHN_ABS",
+ "SHN_COMMON",
+ "SHN_HIOS",
+ "SHN_HIPROC",
+ "SHN_HIRESERVE",
+ "SHN_LOOS",
+ "SHN_LOPROC",
+ "SHN_LORESERVE",
+ "SHN_UNDEF",
+ "SHN_XINDEX",
+ "SHT_DYNAMIC",
+ "SHT_DYNSYM",
+ "SHT_FINI_ARRAY",
+ "SHT_GNU_ATTRIBUTES",
+ "SHT_GNU_HASH",
+ "SHT_GNU_LIBLIST",
+ "SHT_GNU_VERDEF",
+ "SHT_GNU_VERNEED",
+ "SHT_GNU_VERSYM",
+ "SHT_GROUP",
+ "SHT_HASH",
+ "SHT_HIOS",
+ "SHT_HIPROC",
+ "SHT_HIUSER",
+ "SHT_INIT_ARRAY",
+ "SHT_LOOS",
+ "SHT_LOPROC",
+ "SHT_LOUSER",
+ "SHT_NOBITS",
+ "SHT_NOTE",
+ "SHT_NULL",
+ "SHT_PREINIT_ARRAY",
+ "SHT_PROGBITS",
+ "SHT_REL",
+ "SHT_RELA",
+ "SHT_SHLIB",
+ "SHT_STRTAB",
+ "SHT_SYMTAB",
+ "SHT_SYMTAB_SHNDX",
+ "STB_GLOBAL",
+ "STB_HIOS",
+ "STB_HIPROC",
+ "STB_LOCAL",
+ "STB_LOOS",
+ "STB_LOPROC",
+ "STB_WEAK",
+ "STT_COMMON",
+ "STT_FILE",
+ "STT_FUNC",
+ "STT_HIOS",
+ "STT_HIPROC",
+ "STT_LOOS",
+ "STT_LOPROC",
+ "STT_NOTYPE",
+ "STT_OBJECT",
+ "STT_SECTION",
+ "STT_TLS",
+ "STV_DEFAULT",
+ "STV_HIDDEN",
+ "STV_INTERNAL",
+ "STV_PROTECTED",
+ "ST_BIND",
+ "ST_INFO",
+ "ST_TYPE",
+ "ST_VISIBILITY",
+ "Section",
+ "Section32",
+ "Section64",
+ "SectionFlag",
+ "SectionHeader",
+ "SectionIndex",
+ "SectionType",
+ "Sym32",
+ "Sym32Size",
+ "Sym64",
+ "Sym64Size",
+ "SymBind",
+ "SymType",
+ "SymVis",
+ "Symbol",
+ "Type",
+ "Version",
+ },
+ "debug/gosym": []string{
+ "DecodingError",
+ "Func",
+ "LineTable",
+ "NewLineTable",
+ "NewTable",
+ "Obj",
+ "Sym",
+ "Table",
+ "UnknownFileError",
+ "UnknownLineError",
+ },
+ "debug/macho": []string{
+ "ARM64_RELOC_ADDEND",
+ "ARM64_RELOC_BRANCH26",
+ "ARM64_RELOC_GOT_LOAD_PAGE21",
+ "ARM64_RELOC_GOT_LOAD_PAGEOFF12",
+ "ARM64_RELOC_PAGE21",
+ "ARM64_RELOC_PAGEOFF12",
+ "ARM64_RELOC_POINTER_TO_GOT",
+ "ARM64_RELOC_SUBTRACTOR",
+ "ARM64_RELOC_TLVP_LOAD_PAGE21",
+ "ARM64_RELOC_TLVP_LOAD_PAGEOFF12",
+ "ARM64_RELOC_UNSIGNED",
+ "ARM_RELOC_BR24",
+ "ARM_RELOC_HALF",
+ "ARM_RELOC_HALF_SECTDIFF",
+ "ARM_RELOC_LOCAL_SECTDIFF",
+ "ARM_RELOC_PAIR",
+ "ARM_RELOC_PB_LA_PTR",
+ "ARM_RELOC_SECTDIFF",
+ "ARM_RELOC_VANILLA",
+ "ARM_THUMB_32BIT_BRANCH",
+ "ARM_THUMB_RELOC_BR22",
+ "Cpu",
+ "Cpu386",
+ "CpuAmd64",
+ "CpuArm",
+ "CpuArm64",
+ "CpuPpc",
+ "CpuPpc64",
+ "Dylib",
+ "DylibCmd",
+ "Dysymtab",
+ "DysymtabCmd",
+ "ErrNotFat",
+ "FatArch",
+ "FatArchHeader",
+ "FatFile",
+ "File",
+ "FileHeader",
+ "FlagAllModsBound",
+ "FlagAllowStackExecution",
+ "FlagAppExtensionSafe",
+ "FlagBindAtLoad",
+ "FlagBindsToWeak",
+ "FlagCanonical",
+ "FlagDeadStrippableDylib",
+ "FlagDyldLink",
+ "FlagForceFlat",
+ "FlagHasTLVDescriptors",
+ "FlagIncrLink",
+ "FlagLazyInit",
+ "FlagNoFixPrebinding",
+ "FlagNoHeapExecution",
+ "FlagNoMultiDefs",
+ "FlagNoReexportedDylibs",
+ "FlagNoUndefs",
+ "FlagPIE",
+ "FlagPrebindable",
+ "FlagPrebound",
+ "FlagRootSafe",
+ "FlagSetuidSafe",
+ "FlagSplitSegs",
+ "FlagSubsectionsViaSymbols",
+ "FlagTwoLevel",
+ "FlagWeakDefines",
+ "FormatError",
+ "GENERIC_RELOC_LOCAL_SECTDIFF",
+ "GENERIC_RELOC_PAIR",
+ "GENERIC_RELOC_PB_LA_PTR",
+ "GENERIC_RELOC_SECTDIFF",
+ "GENERIC_RELOC_TLV",
+ "GENERIC_RELOC_VANILLA",
+ "Load",
+ "LoadBytes",
+ "LoadCmd",
+ "LoadCmdDylib",
+ "LoadCmdDylinker",
+ "LoadCmdDysymtab",
+ "LoadCmdRpath",
+ "LoadCmdSegment",
+ "LoadCmdSegment64",
+ "LoadCmdSymtab",
+ "LoadCmdThread",
+ "LoadCmdUnixThread",
+ "Magic32",
+ "Magic64",
+ "MagicFat",
+ "NewFatFile",
+ "NewFile",
+ "Nlist32",
+ "Nlist64",
+ "Open",
+ "OpenFat",
+ "Regs386",
+ "RegsAMD64",
+ "Reloc",
+ "RelocTypeARM",
+ "RelocTypeARM64",
+ "RelocTypeGeneric",
+ "RelocTypeX86_64",
+ "Rpath",
+ "RpathCmd",
+ "Section",
+ "Section32",
+ "Section64",
+ "SectionHeader",
+ "Segment",
+ "Segment32",
+ "Segment64",
+ "SegmentHeader",
+ "Symbol",
+ "Symtab",
+ "SymtabCmd",
+ "Thread",
+ "Type",
+ "TypeBundle",
+ "TypeDylib",
+ "TypeExec",
+ "TypeObj",
+ "X86_64_RELOC_BRANCH",
+ "X86_64_RELOC_GOT",
+ "X86_64_RELOC_GOT_LOAD",
+ "X86_64_RELOC_SIGNED",
+ "X86_64_RELOC_SIGNED_1",
+ "X86_64_RELOC_SIGNED_2",
+ "X86_64_RELOC_SIGNED_4",
+ "X86_64_RELOC_SUBTRACTOR",
+ "X86_64_RELOC_TLV",
+ "X86_64_RELOC_UNSIGNED",
+ },
+ "debug/pe": []string{
+ "COFFSymbol",
+ "COFFSymbolSize",
+ "DataDirectory",
+ "File",
+ "FileHeader",
+ "FormatError",
+ "IMAGE_DIRECTORY_ENTRY_ARCHITECTURE",
+ "IMAGE_DIRECTORY_ENTRY_BASERELOC",
+ "IMAGE_DIRECTORY_ENTRY_BOUND_IMPORT",
+ "IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR",
+ "IMAGE_DIRECTORY_ENTRY_DEBUG",
+ "IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT",
+ "IMAGE_DIRECTORY_ENTRY_EXCEPTION",
+ "IMAGE_DIRECTORY_ENTRY_EXPORT",
+ "IMAGE_DIRECTORY_ENTRY_GLOBALPTR",
+ "IMAGE_DIRECTORY_ENTRY_IAT",
+ "IMAGE_DIRECTORY_ENTRY_IMPORT",
+ "IMAGE_DIRECTORY_ENTRY_LOAD_CONFIG",
+ "IMAGE_DIRECTORY_ENTRY_RESOURCE",
+ "IMAGE_DIRECTORY_ENTRY_SECURITY",
+ "IMAGE_DIRECTORY_ENTRY_TLS",
+ "IMAGE_FILE_MACHINE_AM33",
+ "IMAGE_FILE_MACHINE_AMD64",
+ "IMAGE_FILE_MACHINE_ARM",
+ "IMAGE_FILE_MACHINE_ARM64",
+ "IMAGE_FILE_MACHINE_ARMNT",
+ "IMAGE_FILE_MACHINE_EBC",
+ "IMAGE_FILE_MACHINE_I386",
+ "IMAGE_FILE_MACHINE_IA64",
+ "IMAGE_FILE_MACHINE_M32R",
+ "IMAGE_FILE_MACHINE_MIPS16",
+ "IMAGE_FILE_MACHINE_MIPSFPU",
+ "IMAGE_FILE_MACHINE_MIPSFPU16",
+ "IMAGE_FILE_MACHINE_POWERPC",
+ "IMAGE_FILE_MACHINE_POWERPCFP",
+ "IMAGE_FILE_MACHINE_R4000",
+ "IMAGE_FILE_MACHINE_SH3",
+ "IMAGE_FILE_MACHINE_SH3DSP",
+ "IMAGE_FILE_MACHINE_SH4",
+ "IMAGE_FILE_MACHINE_SH5",
+ "IMAGE_FILE_MACHINE_THUMB",
+ "IMAGE_FILE_MACHINE_UNKNOWN",
+ "IMAGE_FILE_MACHINE_WCEMIPSV2",
+ "ImportDirectory",
+ "NewFile",
+ "Open",
+ "OptionalHeader32",
+ "OptionalHeader64",
+ "Reloc",
+ "Section",
+ "SectionHeader",
+ "SectionHeader32",
+ "StringTable",
+ "Symbol",
+ },
+ "debug/plan9obj": []string{
+ "File",
+ "FileHeader",
+ "Magic386",
+ "Magic64",
+ "MagicAMD64",
+ "MagicARM",
+ "NewFile",
+ "Open",
+ "Section",
+ "SectionHeader",
+ "Sym",
+ },
+ "encoding": []string{
+ "BinaryMarshaler",
+ "BinaryUnmarshaler",
+ "TextMarshaler",
+ "TextUnmarshaler",
+ },
+ "encoding/ascii85": []string{
+ "CorruptInputError",
+ "Decode",
+ "Encode",
+ "MaxEncodedLen",
+ "NewDecoder",
+ "NewEncoder",
+ },
+ "encoding/asn1": []string{
+ "BitString",
+ "ClassApplication",
+ "ClassContextSpecific",
+ "ClassPrivate",
+ "ClassUniversal",
+ "Enumerated",
+ "Flag",
+ "Marshal",
+ "MarshalWithParams",
+ "NullBytes",
+ "NullRawValue",
+ "ObjectIdentifier",
+ "RawContent",
+ "RawValue",
+ "StructuralError",
+ "SyntaxError",
+ "TagBitString",
+ "TagBoolean",
+ "TagEnum",
+ "TagGeneralString",
+ "TagGeneralizedTime",
+ "TagIA5String",
+ "TagInteger",
+ "TagNull",
+ "TagNumericString",
+ "TagOID",
+ "TagOctetString",
+ "TagPrintableString",
+ "TagSequence",
+ "TagSet",
+ "TagT61String",
+ "TagUTCTime",
+ "TagUTF8String",
+ "Unmarshal",
+ "UnmarshalWithParams",
+ },
+ "encoding/base32": []string{
+ "CorruptInputError",
+ "Encoding",
+ "HexEncoding",
+ "NewDecoder",
+ "NewEncoder",
+ "NewEncoding",
+ "NoPadding",
+ "StdEncoding",
+ "StdPadding",
+ },
+ "encoding/base64": []string{
+ "CorruptInputError",
+ "Encoding",
+ "NewDecoder",
+ "NewEncoder",
+ "NewEncoding",
+ "NoPadding",
+ "RawStdEncoding",
+ "RawURLEncoding",
+ "StdEncoding",
+ "StdPadding",
+ "URLEncoding",
+ },
+ "encoding/binary": []string{
+ "BigEndian",
+ "ByteOrder",
+ "LittleEndian",
+ "MaxVarintLen16",
+ "MaxVarintLen32",
+ "MaxVarintLen64",
+ "PutUvarint",
+ "PutVarint",
+ "Read",
+ "ReadUvarint",
+ "ReadVarint",
+ "Size",
+ "Uvarint",
+ "Varint",
+ "Write",
+ },
+ "encoding/csv": []string{
+ "ErrBareQuote",
+ "ErrFieldCount",
+ "ErrQuote",
+ "ErrTrailingComma",
+ "NewReader",
+ "NewWriter",
+ "ParseError",
+ "Reader",
+ "Writer",
+ },
+ "encoding/gob": []string{
+ "CommonType",
+ "Decoder",
+ "Encoder",
+ "GobDecoder",
+ "GobEncoder",
+ "NewDecoder",
+ "NewEncoder",
+ "Register",
+ "RegisterName",
+ },
+ "encoding/hex": []string{
+ "Decode",
+ "DecodeString",
+ "DecodedLen",
+ "Dump",
+ "Dumper",
+ "Encode",
+ "EncodeToString",
+ "EncodedLen",
+ "ErrLength",
+ "InvalidByteError",
+ "NewDecoder",
+ "NewEncoder",
+ },
+ "encoding/json": []string{
+ "Compact",
+ "Decoder",
+ "Delim",
+ "Encoder",
+ "HTMLEscape",
+ "Indent",
+ "InvalidUTF8Error",
+ "InvalidUnmarshalError",
+ "Marshal",
+ "MarshalIndent",
+ "Marshaler",
+ "MarshalerError",
+ "NewDecoder",
+ "NewEncoder",
+ "Number",
+ "RawMessage",
+ "SyntaxError",
+ "Token",
+ "Unmarshal",
+ "UnmarshalFieldError",
+ "UnmarshalTypeError",
+ "Unmarshaler",
+ "UnsupportedTypeError",
+ "UnsupportedValueError",
+ "Valid",
+ },
+ "encoding/pem": []string{
+ "Block",
+ "Decode",
+ "Encode",
+ "EncodeToMemory",
+ },
+ "encoding/xml": []string{
+ "Attr",
+ "CharData",
+ "Comment",
+ "CopyToken",
+ "Decoder",
+ "Directive",
+ "Encoder",
+ "EndElement",
+ "Escape",
+ "EscapeText",
+ "HTMLAutoClose",
+ "HTMLEntity",
+ "Header",
+ "Marshal",
+ "MarshalIndent",
+ "Marshaler",
+ "MarshalerAttr",
+ "Name",
+ "NewDecoder",
+ "NewEncoder",
+ "NewTokenDecoder",
+ "ProcInst",
+ "StartElement",
+ "SyntaxError",
+ "TagPathError",
+ "Token",
+ "TokenReader",
+ "Unmarshal",
+ "UnmarshalError",
+ "Unmarshaler",
+ "UnmarshalerAttr",
+ "UnsupportedTypeError",
+ },
+ "errors": []string{
+ "As",
+ "Is",
+ "New",
+ "Unwrap",
+ },
+ "expvar": []string{
+ "Do",
+ "Float",
+ "Func",
+ "Get",
+ "Handler",
+ "Int",
+ "KeyValue",
+ "Map",
+ "NewFloat",
+ "NewInt",
+ "NewMap",
+ "NewString",
+ "Publish",
+ "String",
+ "Var",
+ },
+ "flag": []string{
+ "Arg",
+ "Args",
+ "Bool",
+ "BoolVar",
+ "CommandLine",
+ "ContinueOnError",
+ "Duration",
+ "DurationVar",
+ "ErrHelp",
+ "ErrorHandling",
+ "ExitOnError",
+ "Flag",
+ "FlagSet",
+ "Float64",
+ "Float64Var",
+ "Getter",
+ "Int",
+ "Int64",
+ "Int64Var",
+ "IntVar",
+ "Lookup",
+ "NArg",
+ "NFlag",
+ "NewFlagSet",
+ "PanicOnError",
+ "Parse",
+ "Parsed",
+ "PrintDefaults",
+ "Set",
+ "String",
+ "StringVar",
+ "Uint",
+ "Uint64",
+ "Uint64Var",
+ "UintVar",
+ "UnquoteUsage",
+ "Usage",
+ "Value",
+ "Var",
+ "Visit",
+ "VisitAll",
+ },
+ "fmt": []string{
+ "Errorf",
+ "Formatter",
+ "Fprint",
+ "Fprintf",
+ "Fprintln",
+ "Fscan",
+ "Fscanf",
+ "Fscanln",
+ "GoStringer",
+ "Print",
+ "Printf",
+ "Println",
+ "Scan",
+ "ScanState",
+ "Scanf",
+ "Scanln",
+ "Scanner",
+ "Sprint",
+ "Sprintf",
+ "Sprintln",
+ "Sscan",
+ "Sscanf",
+ "Sscanln",
+ "State",
+ "Stringer",
+ },
+ "go/ast": []string{
+ "ArrayType",
+ "AssignStmt",
+ "Bad",
+ "BadDecl",
+ "BadExpr",
+ "BadStmt",
+ "BasicLit",
+ "BinaryExpr",
+ "BlockStmt",
+ "BranchStmt",
+ "CallExpr",
+ "CaseClause",
+ "ChanDir",
+ "ChanType",
+ "CommClause",
+ "Comment",
+ "CommentGroup",
+ "CommentMap",
+ "CompositeLit",
+ "Con",
+ "Decl",
+ "DeclStmt",
+ "DeferStmt",
+ "Ellipsis",
+ "EmptyStmt",
+ "Expr",
+ "ExprStmt",
+ "Field",
+ "FieldFilter",
+ "FieldList",
+ "File",
+ "FileExports",
+ "Filter",
+ "FilterDecl",
+ "FilterFile",
+ "FilterFuncDuplicates",
+ "FilterImportDuplicates",
+ "FilterPackage",
+ "FilterUnassociatedComments",
+ "ForStmt",
+ "Fprint",
+ "Fun",
+ "FuncDecl",
+ "FuncLit",
+ "FuncType",
+ "GenDecl",
+ "GoStmt",
+ "Ident",
+ "IfStmt",
+ "ImportSpec",
+ "Importer",
+ "IncDecStmt",
+ "IndexExpr",
+ "Inspect",
+ "InterfaceType",
+ "IsExported",
+ "KeyValueExpr",
+ "LabeledStmt",
+ "Lbl",
+ "MapType",
+ "MergeMode",
+ "MergePackageFiles",
+ "NewCommentMap",
+ "NewIdent",
+ "NewObj",
+ "NewPackage",
+ "NewScope",
+ "Node",
+ "NotNilFilter",
+ "ObjKind",
+ "Object",
+ "Package",
+ "PackageExports",
+ "ParenExpr",
+ "Pkg",
+ "Print",
+ "RECV",
+ "RangeStmt",
+ "ReturnStmt",
+ "SEND",
+ "Scope",
+ "SelectStmt",
+ "SelectorExpr",
+ "SendStmt",
+ "SliceExpr",
+ "SortImports",
+ "Spec",
+ "StarExpr",
+ "Stmt",
+ "StructType",
+ "SwitchStmt",
+ "Typ",
+ "TypeAssertExpr",
+ "TypeSpec",
+ "TypeSwitchStmt",
+ "UnaryExpr",
+ "ValueSpec",
+ "Var",
+ "Visitor",
+ "Walk",
+ },
+ "go/build": []string{
+ "AllowBinary",
+ "ArchChar",
+ "Context",
+ "Default",
+ "FindOnly",
+ "IgnoreVendor",
+ "Import",
+ "ImportComment",
+ "ImportDir",
+ "ImportMode",
+ "IsLocalImport",
+ "MultiplePackageError",
+ "NoGoError",
+ "Package",
+ "ToolDir",
+ },
+ "go/constant": []string{
+ "BinaryOp",
+ "BitLen",
+ "Bool",
+ "BoolVal",
+ "Bytes",
+ "Compare",
+ "Complex",
+ "Denom",
+ "Float",
+ "Float32Val",
+ "Float64Val",
+ "Imag",
+ "Int",
+ "Int64Val",
+ "Kind",
+ "Make",
+ "MakeBool",
+ "MakeFloat64",
+ "MakeFromBytes",
+ "MakeFromLiteral",
+ "MakeImag",
+ "MakeInt64",
+ "MakeString",
+ "MakeUint64",
+ "MakeUnknown",
+ "Num",
+ "Real",
+ "Shift",
+ "Sign",
+ "String",
+ "StringVal",
+ "ToComplex",
+ "ToFloat",
+ "ToInt",
+ "Uint64Val",
+ "UnaryOp",
+ "Unknown",
+ "Val",
+ "Value",
+ },
+ "go/doc": []string{
+ "AllDecls",
+ "AllMethods",
+ "Example",
+ "Examples",
+ "Filter",
+ "Func",
+ "IllegalPrefixes",
+ "IsPredeclared",
+ "Mode",
+ "New",
+ "Note",
+ "Package",
+ "PreserveAST",
+ "Synopsis",
+ "ToHTML",
+ "ToText",
+ "Type",
+ "Value",
+ },
+ "go/format": []string{
+ "Node",
+ "Source",
+ },
+ "go/importer": []string{
+ "Default",
+ "For",
+ "ForCompiler",
+ "Lookup",
+ },
+ "go/parser": []string{
+ "AllErrors",
+ "DeclarationErrors",
+ "ImportsOnly",
+ "Mode",
+ "PackageClauseOnly",
+ "ParseComments",
+ "ParseDir",
+ "ParseExpr",
+ "ParseExprFrom",
+ "ParseFile",
+ "SpuriousErrors",
+ "Trace",
+ },
+ "go/printer": []string{
+ "CommentedNode",
+ "Config",
+ "Fprint",
+ "Mode",
+ "RawFormat",
+ "SourcePos",
+ "TabIndent",
+ "UseSpaces",
+ },
+ "go/scanner": []string{
+ "Error",
+ "ErrorHandler",
+ "ErrorList",
+ "Mode",
+ "PrintError",
+ "ScanComments",
+ "Scanner",
+ },
+ "go/token": []string{
+ "ADD",
+ "ADD_ASSIGN",
+ "AND",
+ "AND_ASSIGN",
+ "AND_NOT",
+ "AND_NOT_ASSIGN",
+ "ARROW",
+ "ASSIGN",
+ "BREAK",
+ "CASE",
+ "CHAN",
+ "CHAR",
+ "COLON",
+ "COMMA",
+ "COMMENT",
+ "CONST",
+ "CONTINUE",
+ "DEC",
+ "DEFAULT",
+ "DEFER",
+ "DEFINE",
+ "ELLIPSIS",
+ "ELSE",
+ "EOF",
+ "EQL",
+ "FALLTHROUGH",
+ "FLOAT",
+ "FOR",
+ "FUNC",
+ "File",
+ "FileSet",
+ "GEQ",
+ "GO",
+ "GOTO",
+ "GTR",
+ "HighestPrec",
+ "IDENT",
+ "IF",
+ "ILLEGAL",
+ "IMAG",
+ "IMPORT",
+ "INC",
+ "INT",
+ "INTERFACE",
+ "IsExported",
+ "IsIdentifier",
+ "IsKeyword",
+ "LAND",
+ "LBRACE",
+ "LBRACK",
+ "LEQ",
+ "LOR",
+ "LPAREN",
+ "LSS",
+ "Lookup",
+ "LowestPrec",
+ "MAP",
+ "MUL",
+ "MUL_ASSIGN",
+ "NEQ",
+ "NOT",
+ "NewFileSet",
+ "NoPos",
+ "OR",
+ "OR_ASSIGN",
+ "PACKAGE",
+ "PERIOD",
+ "Pos",
+ "Position",
+ "QUO",
+ "QUO_ASSIGN",
+ "RANGE",
+ "RBRACE",
+ "RBRACK",
+ "REM",
+ "REM_ASSIGN",
+ "RETURN",
+ "RPAREN",
+ "SELECT",
+ "SEMICOLON",
+ "SHL",
+ "SHL_ASSIGN",
+ "SHR",
+ "SHR_ASSIGN",
+ "STRING",
+ "STRUCT",
+ "SUB",
+ "SUB_ASSIGN",
+ "SWITCH",
+ "TYPE",
+ "Token",
+ "UnaryPrec",
+ "VAR",
+ "XOR",
+ "XOR_ASSIGN",
+ },
+ "go/types": []string{
+ "Array",
+ "AssertableTo",
+ "AssignableTo",
+ "Basic",
+ "BasicInfo",
+ "BasicKind",
+ "Bool",
+ "Builtin",
+ "Byte",
+ "Chan",
+ "ChanDir",
+ "CheckExpr",
+ "Checker",
+ "Comparable",
+ "Complex128",
+ "Complex64",
+ "Config",
+ "Const",
+ "ConvertibleTo",
+ "DefPredeclaredTestFuncs",
+ "Default",
+ "Error",
+ "Eval",
+ "ExprString",
+ "FieldVal",
+ "Float32",
+ "Float64",
+ "Func",
+ "Id",
+ "Identical",
+ "IdenticalIgnoreTags",
+ "Implements",
+ "ImportMode",
+ "Importer",
+ "ImporterFrom",
+ "Info",
+ "Initializer",
+ "Int",
+ "Int16",
+ "Int32",
+ "Int64",
+ "Int8",
+ "Interface",
+ "Invalid",
+ "IsBoolean",
+ "IsComplex",
+ "IsConstType",
+ "IsFloat",
+ "IsInteger",
+ "IsInterface",
+ "IsNumeric",
+ "IsOrdered",
+ "IsString",
+ "IsUnsigned",
+ "IsUntyped",
+ "Label",
+ "LookupFieldOrMethod",
+ "Map",
+ "MethodExpr",
+ "MethodSet",
+ "MethodVal",
+ "MissingMethod",
+ "Named",
+ "NewArray",
+ "NewChan",
+ "NewChecker",
+ "NewConst",
+ "NewField",
+ "NewFunc",
+ "NewInterface",
+ "NewInterfaceType",
+ "NewLabel",
+ "NewMap",
+ "NewMethodSet",
+ "NewNamed",
+ "NewPackage",
+ "NewParam",
+ "NewPkgName",
+ "NewPointer",
+ "NewScope",
+ "NewSignature",
+ "NewSlice",
+ "NewStruct",
+ "NewTuple",
+ "NewTypeName",
+ "NewVar",
+ "Nil",
+ "Object",
+ "ObjectString",
+ "Package",
+ "PkgName",
+ "Pointer",
+ "Qualifier",
+ "RecvOnly",
+ "RelativeTo",
+ "Rune",
+ "Scope",
+ "Selection",
+ "SelectionKind",
+ "SelectionString",
+ "SendOnly",
+ "SendRecv",
+ "Signature",
+ "Sizes",
+ "SizesFor",
+ "Slice",
+ "StdSizes",
+ "String",
+ "Struct",
+ "Tuple",
+ "Typ",
+ "Type",
+ "TypeAndValue",
+ "TypeName",
+ "TypeString",
+ "Uint",
+ "Uint16",
+ "Uint32",
+ "Uint64",
+ "Uint8",
+ "Uintptr",
+ "Universe",
+ "Unsafe",
+ "UnsafePointer",
+ "UntypedBool",
+ "UntypedComplex",
+ "UntypedFloat",
+ "UntypedInt",
+ "UntypedNil",
+ "UntypedRune",
+ "UntypedString",
+ "Var",
+ "WriteExpr",
+ "WriteSignature",
+ "WriteType",
+ },
+ "hash": []string{
+ "Hash",
+ "Hash32",
+ "Hash64",
+ },
+ "hash/adler32": []string{
+ "Checksum",
+ "New",
+ "Size",
+ },
+ "hash/crc32": []string{
+ "Castagnoli",
+ "Checksum",
+ "ChecksumIEEE",
+ "IEEE",
+ "IEEETable",
+ "Koopman",
+ "MakeTable",
+ "New",
+ "NewIEEE",
+ "Size",
+ "Table",
+ "Update",
+ },
+ "hash/crc64": []string{
+ "Checksum",
+ "ECMA",
+ "ISO",
+ "MakeTable",
+ "New",
+ "Size",
+ "Table",
+ "Update",
+ },
+ "hash/fnv": []string{
+ "New128",
+ "New128a",
+ "New32",
+ "New32a",
+ "New64",
+ "New64a",
+ },
+ "html": []string{
+ "EscapeString",
+ "UnescapeString",
+ },
+ "html/template": []string{
+ "CSS",
+ "ErrAmbigContext",
+ "ErrBadHTML",
+ "ErrBranchEnd",
+ "ErrEndContext",
+ "ErrNoSuchTemplate",
+ "ErrOutputContext",
+ "ErrPartialCharset",
+ "ErrPartialEscape",
+ "ErrPredefinedEscaper",
+ "ErrRangeLoopReentry",
+ "ErrSlashAmbig",
+ "Error",
+ "ErrorCode",
+ "FuncMap",
+ "HTML",
+ "HTMLAttr",
+ "HTMLEscape",
+ "HTMLEscapeString",
+ "HTMLEscaper",
+ "IsTrue",
+ "JS",
+ "JSEscape",
+ "JSEscapeString",
+ "JSEscaper",
+ "JSStr",
+ "Must",
+ "New",
+ "OK",
+ "ParseFiles",
+ "ParseGlob",
+ "Srcset",
+ "Template",
+ "URL",
+ "URLQueryEscaper",
+ },
+ "image": []string{
+ "Alpha",
+ "Alpha16",
+ "Black",
+ "CMYK",
+ "Config",
+ "Decode",
+ "DecodeConfig",
+ "ErrFormat",
+ "Gray",
+ "Gray16",
+ "Image",
+ "NRGBA",
+ "NRGBA64",
+ "NYCbCrA",
+ "NewAlpha",
+ "NewAlpha16",
+ "NewCMYK",
+ "NewGray",
+ "NewGray16",
+ "NewNRGBA",
+ "NewNRGBA64",
+ "NewNYCbCrA",
+ "NewPaletted",
+ "NewRGBA",
+ "NewRGBA64",
+ "NewUniform",
+ "NewYCbCr",
+ "Opaque",
+ "Paletted",
+ "PalettedImage",
+ "Point",
+ "Pt",
+ "RGBA",
+ "RGBA64",
+ "Rect",
+ "Rectangle",
+ "RegisterFormat",
+ "Transparent",
+ "Uniform",
+ "White",
+ "YCbCr",
+ "YCbCrSubsampleRatio",
+ "YCbCrSubsampleRatio410",
+ "YCbCrSubsampleRatio411",
+ "YCbCrSubsampleRatio420",
+ "YCbCrSubsampleRatio422",
+ "YCbCrSubsampleRatio440",
+ "YCbCrSubsampleRatio444",
+ "ZP",
+ "ZR",
+ },
+ "image/color": []string{
+ "Alpha",
+ "Alpha16",
+ "Alpha16Model",
+ "AlphaModel",
+ "Black",
+ "CMYK",
+ "CMYKModel",
+ "CMYKToRGB",
+ "Color",
+ "Gray",
+ "Gray16",
+ "Gray16Model",
+ "GrayModel",
+ "Model",
+ "ModelFunc",
+ "NRGBA",
+ "NRGBA64",
+ "NRGBA64Model",
+ "NRGBAModel",
+ "NYCbCrA",
+ "NYCbCrAModel",
+ "Opaque",
+ "Palette",
+ "RGBA",
+ "RGBA64",
+ "RGBA64Model",
+ "RGBAModel",
+ "RGBToCMYK",
+ "RGBToYCbCr",
+ "Transparent",
+ "White",
+ "YCbCr",
+ "YCbCrModel",
+ "YCbCrToRGB",
+ },
+ "image/color/palette": []string{
+ "Plan9",
+ "WebSafe",
+ },
+ "image/draw": []string{
+ "Draw",
+ "DrawMask",
+ "Drawer",
+ "FloydSteinberg",
+ "Image",
+ "Op",
+ "Over",
+ "Quantizer",
+ "Src",
+ },
+ "image/gif": []string{
+ "Decode",
+ "DecodeAll",
+ "DecodeConfig",
+ "DisposalBackground",
+ "DisposalNone",
+ "DisposalPrevious",
+ "Encode",
+ "EncodeAll",
+ "GIF",
+ "Options",
+ },
+ "image/jpeg": []string{
+ "Decode",
+ "DecodeConfig",
+ "DefaultQuality",
+ "Encode",
+ "FormatError",
+ "Options",
+ "Reader",
+ "UnsupportedError",
+ },
+ "image/png": []string{
+ "BestCompression",
+ "BestSpeed",
+ "CompressionLevel",
+ "Decode",
+ "DecodeConfig",
+ "DefaultCompression",
+ "Encode",
+ "Encoder",
+ "EncoderBuffer",
+ "EncoderBufferPool",
+ "FormatError",
+ "NoCompression",
+ "UnsupportedError",
+ },
+ "index/suffixarray": []string{
+ "Index",
+ "New",
+ },
+ "io": []string{
+ "ByteReader",
+ "ByteScanner",
+ "ByteWriter",
+ "Closer",
+ "Copy",
+ "CopyBuffer",
+ "CopyN",
+ "EOF",
+ "ErrClosedPipe",
+ "ErrNoProgress",
+ "ErrShortBuffer",
+ "ErrShortWrite",
+ "ErrUnexpectedEOF",
+ "LimitReader",
+ "LimitedReader",
+ "MultiReader",
+ "MultiWriter",
+ "NewSectionReader",
+ "Pipe",
+ "PipeReader",
+ "PipeWriter",
+ "ReadAtLeast",
+ "ReadCloser",
+ "ReadFull",
+ "ReadSeeker",
+ "ReadWriteCloser",
+ "ReadWriteSeeker",
+ "ReadWriter",
+ "Reader",
+ "ReaderAt",
+ "ReaderFrom",
+ "RuneReader",
+ "RuneScanner",
+ "SectionReader",
+ "SeekCurrent",
+ "SeekEnd",
+ "SeekStart",
+ "Seeker",
+ "StringWriter",
+ "TeeReader",
+ "WriteCloser",
+ "WriteSeeker",
+ "WriteString",
+ "Writer",
+ "WriterAt",
+ "WriterTo",
+ },
+ "io/ioutil": []string{
+ "Discard",
+ "NopCloser",
+ "ReadAll",
+ "ReadDir",
+ "ReadFile",
+ "TempDir",
+ "TempFile",
+ "WriteFile",
+ },
+ "log": []string{
+ "Fatal",
+ "Fatalf",
+ "Fatalln",
+ "Flags",
+ "LUTC",
+ "Ldate",
+ "Llongfile",
+ "Lmicroseconds",
+ "Logger",
+ "Lshortfile",
+ "LstdFlags",
+ "Ltime",
+ "New",
+ "Output",
+ "Panic",
+ "Panicf",
+ "Panicln",
+ "Prefix",
+ "Print",
+ "Printf",
+ "Println",
+ "SetFlags",
+ "SetOutput",
+ "SetPrefix",
+ "Writer",
+ },
+ "log/syslog": []string{
+ "Dial",
+ "LOG_ALERT",
+ "LOG_AUTH",
+ "LOG_AUTHPRIV",
+ "LOG_CRIT",
+ "LOG_CRON",
+ "LOG_DAEMON",
+ "LOG_DEBUG",
+ "LOG_EMERG",
+ "LOG_ERR",
+ "LOG_FTP",
+ "LOG_INFO",
+ "LOG_KERN",
+ "LOG_LOCAL0",
+ "LOG_LOCAL1",
+ "LOG_LOCAL2",
+ "LOG_LOCAL3",
+ "LOG_LOCAL4",
+ "LOG_LOCAL5",
+ "LOG_LOCAL6",
+ "LOG_LOCAL7",
+ "LOG_LPR",
+ "LOG_MAIL",
+ "LOG_NEWS",
+ "LOG_NOTICE",
+ "LOG_SYSLOG",
+ "LOG_USER",
+ "LOG_UUCP",
+ "LOG_WARNING",
+ "New",
+ "NewLogger",
+ "Priority",
+ "Writer",
+ },
+ "math": []string{
+ "Abs",
+ "Acos",
+ "Acosh",
+ "Asin",
+ "Asinh",
+ "Atan",
+ "Atan2",
+ "Atanh",
+ "Cbrt",
+ "Ceil",
+ "Copysign",
+ "Cos",
+ "Cosh",
+ "Dim",
+ "E",
+ "Erf",
+ "Erfc",
+ "Erfcinv",
+ "Erfinv",
+ "Exp",
+ "Exp2",
+ "Expm1",
+ "Float32bits",
+ "Float32frombits",
+ "Float64bits",
+ "Float64frombits",
+ "Floor",
+ "Frexp",
+ "Gamma",
+ "Hypot",
+ "Ilogb",
+ "Inf",
+ "IsInf",
+ "IsNaN",
+ "J0",
+ "J1",
+ "Jn",
+ "Ldexp",
+ "Lgamma",
+ "Ln10",
+ "Ln2",
+ "Log",
+ "Log10",
+ "Log10E",
+ "Log1p",
+ "Log2",
+ "Log2E",
+ "Logb",
+ "Max",
+ "MaxFloat32",
+ "MaxFloat64",
+ "MaxInt16",
+ "MaxInt32",
+ "MaxInt64",
+ "MaxInt8",
+ "MaxUint16",
+ "MaxUint32",
+ "MaxUint64",
+ "MaxUint8",
+ "Min",
+ "MinInt16",
+ "MinInt32",
+ "MinInt64",
+ "MinInt8",
+ "Mod",
+ "Modf",
+ "NaN",
+ "Nextafter",
+ "Nextafter32",
+ "Phi",
+ "Pi",
+ "Pow",
+ "Pow10",
+ "Remainder",
+ "Round",
+ "RoundToEven",
+ "Signbit",
+ "Sin",
+ "Sincos",
+ "Sinh",
+ "SmallestNonzeroFloat32",
+ "SmallestNonzeroFloat64",
+ "Sqrt",
+ "Sqrt2",
+ "SqrtE",
+ "SqrtPhi",
+ "SqrtPi",
+ "Tan",
+ "Tanh",
+ "Trunc",
+ "Y0",
+ "Y1",
+ "Yn",
+ },
+ "math/big": []string{
+ "Above",
+ "Accuracy",
+ "AwayFromZero",
+ "Below",
+ "ErrNaN",
+ "Exact",
+ "Float",
+ "Int",
+ "Jacobi",
+ "MaxBase",
+ "MaxExp",
+ "MaxPrec",
+ "MinExp",
+ "NewFloat",
+ "NewInt",
+ "NewRat",
+ "ParseFloat",
+ "Rat",
+ "RoundingMode",
+ "ToNearestAway",
+ "ToNearestEven",
+ "ToNegativeInf",
+ "ToPositiveInf",
+ "ToZero",
+ "Word",
+ },
+ "math/bits": []string{
+ "Add",
+ "Add32",
+ "Add64",
+ "Div",
+ "Div32",
+ "Div64",
+ "LeadingZeros",
+ "LeadingZeros16",
+ "LeadingZeros32",
+ "LeadingZeros64",
+ "LeadingZeros8",
+ "Len",
+ "Len16",
+ "Len32",
+ "Len64",
+ "Len8",
+ "Mul",
+ "Mul32",
+ "Mul64",
+ "OnesCount",
+ "OnesCount16",
+ "OnesCount32",
+ "OnesCount64",
+ "OnesCount8",
+ "Reverse",
+ "Reverse16",
+ "Reverse32",
+ "Reverse64",
+ "Reverse8",
+ "ReverseBytes",
+ "ReverseBytes16",
+ "ReverseBytes32",
+ "ReverseBytes64",
+ "RotateLeft",
+ "RotateLeft16",
+ "RotateLeft32",
+ "RotateLeft64",
+ "RotateLeft8",
+ "Sub",
+ "Sub32",
+ "Sub64",
+ "TrailingZeros",
+ "TrailingZeros16",
+ "TrailingZeros32",
+ "TrailingZeros64",
+ "TrailingZeros8",
+ "UintSize",
+ },
+ "math/cmplx": []string{
+ "Abs",
+ "Acos",
+ "Acosh",
+ "Asin",
+ "Asinh",
+ "Atan",
+ "Atanh",
+ "Conj",
+ "Cos",
+ "Cosh",
+ "Cot",
+ "Exp",
+ "Inf",
+ "IsInf",
+ "IsNaN",
+ "Log",
+ "Log10",
+ "NaN",
+ "Phase",
+ "Polar",
+ "Pow",
+ "Rect",
+ "Sin",
+ "Sinh",
+ "Sqrt",
+ "Tan",
+ "Tanh",
+ },
+ "math/rand": []string{
+ "ExpFloat64",
+ "Float32",
+ "Float64",
+ "Int",
+ "Int31",
+ "Int31n",
+ "Int63",
+ "Int63n",
+ "Intn",
+ "New",
+ "NewSource",
+ "NewZipf",
+ "NormFloat64",
+ "Perm",
+ "Rand",
+ "Read",
+ "Seed",
+ "Shuffle",
+ "Source",
+ "Source64",
+ "Uint32",
+ "Uint64",
+ "Zipf",
+ },
+ "mime": []string{
+ "AddExtensionType",
+ "BEncoding",
+ "ErrInvalidMediaParameter",
+ "ExtensionsByType",
+ "FormatMediaType",
+ "ParseMediaType",
+ "QEncoding",
+ "TypeByExtension",
+ "WordDecoder",
+ "WordEncoder",
+ },
+ "mime/multipart": []string{
+ "ErrMessageTooLarge",
+ "File",
+ "FileHeader",
+ "Form",
+ "NewReader",
+ "NewWriter",
+ "Part",
+ "Reader",
+ "Writer",
+ },
+ "mime/quotedprintable": []string{
+ "NewReader",
+ "NewWriter",
+ "Reader",
+ "Writer",
+ },
+ "net": []string{
+ "Addr",
+ "AddrError",
+ "Buffers",
+ "CIDRMask",
+ "Conn",
+ "DNSConfigError",
+ "DNSError",
+ "DefaultResolver",
+ "Dial",
+ "DialIP",
+ "DialTCP",
+ "DialTimeout",
+ "DialUDP",
+ "DialUnix",
+ "Dialer",
+ "ErrWriteToConnected",
+ "Error",
+ "FileConn",
+ "FileListener",
+ "FilePacketConn",
+ "FlagBroadcast",
+ "FlagLoopback",
+ "FlagMulticast",
+ "FlagPointToPoint",
+ "FlagUp",
+ "Flags",
+ "HardwareAddr",
+ "IP",
+ "IPAddr",
+ "IPConn",
+ "IPMask",
+ "IPNet",
+ "IPv4",
+ "IPv4Mask",
+ "IPv4allrouter",
+ "IPv4allsys",
+ "IPv4bcast",
+ "IPv4len",
+ "IPv4zero",
+ "IPv6interfacelocalallnodes",
+ "IPv6len",
+ "IPv6linklocalallnodes",
+ "IPv6linklocalallrouters",
+ "IPv6loopback",
+ "IPv6unspecified",
+ "IPv6zero",
+ "Interface",
+ "InterfaceAddrs",
+ "InterfaceByIndex",
+ "InterfaceByName",
+ "Interfaces",
+ "InvalidAddrError",
+ "JoinHostPort",
+ "Listen",
+ "ListenConfig",
+ "ListenIP",
+ "ListenMulticastUDP",
+ "ListenPacket",
+ "ListenTCP",
+ "ListenUDP",
+ "ListenUnix",
+ "ListenUnixgram",
+ "Listener",
+ "LookupAddr",
+ "LookupCNAME",
+ "LookupHost",
+ "LookupIP",
+ "LookupMX",
+ "LookupNS",
+ "LookupPort",
+ "LookupSRV",
+ "LookupTXT",
+ "MX",
+ "NS",
+ "OpError",
+ "PacketConn",
+ "ParseCIDR",
+ "ParseError",
+ "ParseIP",
+ "ParseMAC",
+ "Pipe",
+ "ResolveIPAddr",
+ "ResolveTCPAddr",
+ "ResolveUDPAddr",
+ "ResolveUnixAddr",
+ "Resolver",
+ "SRV",
+ "SplitHostPort",
+ "TCPAddr",
+ "TCPConn",
+ "TCPListener",
+ "UDPAddr",
+ "UDPConn",
+ "UnixAddr",
+ "UnixConn",
+ "UnixListener",
+ "UnknownNetworkError",
+ },
+ "net/http": []string{
+ "CanonicalHeaderKey",
+ "Client",
+ "CloseNotifier",
+ "ConnState",
+ "Cookie",
+ "CookieJar",
+ "DefaultClient",
+ "DefaultMaxHeaderBytes",
+ "DefaultMaxIdleConnsPerHost",
+ "DefaultServeMux",
+ "DefaultTransport",
+ "DetectContentType",
+ "Dir",
+ "ErrAbortHandler",
+ "ErrBodyNotAllowed",
+ "ErrBodyReadAfterClose",
+ "ErrContentLength",
+ "ErrHandlerTimeout",
+ "ErrHeaderTooLong",
+ "ErrHijacked",
+ "ErrLineTooLong",
+ "ErrMissingBoundary",
+ "ErrMissingContentLength",
+ "ErrMissingFile",
+ "ErrNoCookie",
+ "ErrNoLocation",
+ "ErrNotMultipart",
+ "ErrNotSupported",
+ "ErrServerClosed",
+ "ErrShortBody",
+ "ErrSkipAltProtocol",
+ "ErrUnexpectedTrailer",
+ "ErrUseLastResponse",
+ "ErrWriteAfterFlush",
+ "Error",
+ "File",
+ "FileServer",
+ "FileSystem",
+ "Flusher",
+ "Get",
+ "Handle",
+ "HandleFunc",
+ "Handler",
+ "HandlerFunc",
+ "Head",
+ "Header",
+ "Hijacker",
+ "ListenAndServe",
+ "ListenAndServeTLS",
+ "LocalAddrContextKey",
+ "MaxBytesReader",
+ "MethodConnect",
+ "MethodDelete",
+ "MethodGet",
+ "MethodHead",
+ "MethodOptions",
+ "MethodPatch",
+ "MethodPost",
+ "MethodPut",
+ "MethodTrace",
+ "NewFileTransport",
+ "NewRequest",
+ "NewRequestWithContext",
+ "NewServeMux",
+ "NoBody",
+ "NotFound",
+ "NotFoundHandler",
+ "ParseHTTPVersion",
+ "ParseTime",
+ "Post",
+ "PostForm",
+ "ProtocolError",
+ "ProxyFromEnvironment",
+ "ProxyURL",
+ "PushOptions",
+ "Pusher",
+ "ReadRequest",
+ "ReadResponse",
+ "Redirect",
+ "RedirectHandler",
+ "Request",
+ "Response",
+ "ResponseWriter",
+ "RoundTripper",
+ "SameSite",
+ "SameSiteDefaultMode",
+ "SameSiteLaxMode",
+ "SameSiteNoneMode",
+ "SameSiteStrictMode",
+ "Serve",
+ "ServeContent",
+ "ServeFile",
+ "ServeMux",
+ "ServeTLS",
+ "Server",
+ "ServerContextKey",
+ "SetCookie",
+ "StateActive",
+ "StateClosed",
+ "StateHijacked",
+ "StateIdle",
+ "StateNew",
+ "StatusAccepted",
+ "StatusAlreadyReported",
+ "StatusBadGateway",
+ "StatusBadRequest",
+ "StatusConflict",
+ "StatusContinue",
+ "StatusCreated",
+ "StatusEarlyHints",
+ "StatusExpectationFailed",
+ "StatusFailedDependency",
+ "StatusForbidden",
+ "StatusFound",
+ "StatusGatewayTimeout",
+ "StatusGone",
+ "StatusHTTPVersionNotSupported",
+ "StatusIMUsed",
+ "StatusInsufficientStorage",
+ "StatusInternalServerError",
+ "StatusLengthRequired",
+ "StatusLocked",
+ "StatusLoopDetected",
+ "StatusMethodNotAllowed",
+ "StatusMisdirectedRequest",
+ "StatusMovedPermanently",
+ "StatusMultiStatus",
+ "StatusMultipleChoices",
+ "StatusNetworkAuthenticationRequired",
+ "StatusNoContent",
+ "StatusNonAuthoritativeInfo",
+ "StatusNotAcceptable",
+ "StatusNotExtended",
+ "StatusNotFound",
+ "StatusNotImplemented",
+ "StatusNotModified",
+ "StatusOK",
+ "StatusPartialContent",
+ "StatusPaymentRequired",
+ "StatusPermanentRedirect",
+ "StatusPreconditionFailed",
+ "StatusPreconditionRequired",
+ "StatusProcessing",
+ "StatusProxyAuthRequired",
+ "StatusRequestEntityTooLarge",
+ "StatusRequestHeaderFieldsTooLarge",
+ "StatusRequestTimeout",
+ "StatusRequestURITooLong",
+ "StatusRequestedRangeNotSatisfiable",
+ "StatusResetContent",
+ "StatusSeeOther",
+ "StatusServiceUnavailable",
+ "StatusSwitchingProtocols",
+ "StatusTeapot",
+ "StatusTemporaryRedirect",
+ "StatusText",
+ "StatusTooEarly",
+ "StatusTooManyRequests",
+ "StatusUnauthorized",
+ "StatusUnavailableForLegalReasons",
+ "StatusUnprocessableEntity",
+ "StatusUnsupportedMediaType",
+ "StatusUpgradeRequired",
+ "StatusUseProxy",
+ "StatusVariantAlsoNegotiates",
+ "StripPrefix",
+ "TimeFormat",
+ "TimeoutHandler",
+ "TrailerPrefix",
+ "Transport",
+ },
+ "net/http/cgi": []string{
+ "Handler",
+ "Request",
+ "RequestFromMap",
+ "Serve",
+ },
+ "net/http/cookiejar": []string{
+ "Jar",
+ "New",
+ "Options",
+ "PublicSuffixList",
+ },
+ "net/http/fcgi": []string{
+ "ErrConnClosed",
+ "ErrRequestAborted",
+ "ProcessEnv",
+ "Serve",
+ },
+ "net/http/httptest": []string{
+ "DefaultRemoteAddr",
+ "NewRecorder",
+ "NewRequest",
+ "NewServer",
+ "NewTLSServer",
+ "NewUnstartedServer",
+ "ResponseRecorder",
+ "Server",
+ },
+ "net/http/httptrace": []string{
+ "ClientTrace",
+ "ContextClientTrace",
+ "DNSDoneInfo",
+ "DNSStartInfo",
+ "GotConnInfo",
+ "WithClientTrace",
+ "WroteRequestInfo",
+ },
+ "net/http/httputil": []string{
+ "BufferPool",
+ "ClientConn",
+ "DumpRequest",
+ "DumpRequestOut",
+ "DumpResponse",
+ "ErrClosed",
+ "ErrLineTooLong",
+ "ErrPersistEOF",
+ "ErrPipeline",
+ "NewChunkedReader",
+ "NewChunkedWriter",
+ "NewClientConn",
+ "NewProxyClientConn",
+ "NewServerConn",
+ "NewSingleHostReverseProxy",
+ "ReverseProxy",
+ "ServerConn",
+ },
+ "net/http/pprof": []string{
+ "Cmdline",
+ "Handler",
+ "Index",
+ "Profile",
+ "Symbol",
+ "Trace",
+ },
+ "net/mail": []string{
+ "Address",
+ "AddressParser",
+ "ErrHeaderNotPresent",
+ "Header",
+ "Message",
+ "ParseAddress",
+ "ParseAddressList",
+ "ParseDate",
+ "ReadMessage",
+ },
+ "net/rpc": []string{
+ "Accept",
+ "Call",
+ "Client",
+ "ClientCodec",
+ "DefaultDebugPath",
+ "DefaultRPCPath",
+ "DefaultServer",
+ "Dial",
+ "DialHTTP",
+ "DialHTTPPath",
+ "ErrShutdown",
+ "HandleHTTP",
+ "NewClient",
+ "NewClientWithCodec",
+ "NewServer",
+ "Register",
+ "RegisterName",
+ "Request",
+ "Response",
+ "ServeCodec",
+ "ServeConn",
+ "ServeRequest",
+ "Server",
+ "ServerCodec",
+ "ServerError",
+ },
+ "net/rpc/jsonrpc": []string{
+ "Dial",
+ "NewClient",
+ "NewClientCodec",
+ "NewServerCodec",
+ "ServeConn",
+ },
+ "net/smtp": []string{
+ "Auth",
+ "CRAMMD5Auth",
+ "Client",
+ "Dial",
+ "NewClient",
+ "PlainAuth",
+ "SendMail",
+ "ServerInfo",
+ },
+ "net/textproto": []string{
+ "CanonicalMIMEHeaderKey",
+ "Conn",
+ "Dial",
+ "Error",
+ "MIMEHeader",
+ "NewConn",
+ "NewReader",
+ "NewWriter",
+ "Pipeline",
+ "ProtocolError",
+ "Reader",
+ "TrimBytes",
+ "TrimString",
+ "Writer",
+ },
+ "net/url": []string{
+ "Error",
+ "EscapeError",
+ "InvalidHostError",
+ "Parse",
+ "ParseQuery",
+ "ParseRequestURI",
+ "PathEscape",
+ "PathUnescape",
+ "QueryEscape",
+ "QueryUnescape",
+ "URL",
+ "User",
+ "UserPassword",
+ "Userinfo",
+ "Values",
+ },
+ "os": []string{
+ "Args",
+ "Chdir",
+ "Chmod",
+ "Chown",
+ "Chtimes",
+ "Clearenv",
+ "Create",
+ "DevNull",
+ "Environ",
+ "ErrClosed",
+ "ErrExist",
+ "ErrInvalid",
+ "ErrNoDeadline",
+ "ErrNotExist",
+ "ErrPermission",
+ "Executable",
+ "Exit",
+ "Expand",
+ "ExpandEnv",
+ "File",
+ "FileInfo",
+ "FileMode",
+ "FindProcess",
+ "Getegid",
+ "Getenv",
+ "Geteuid",
+ "Getgid",
+ "Getgroups",
+ "Getpagesize",
+ "Getpid",
+ "Getppid",
+ "Getuid",
+ "Getwd",
+ "Hostname",
+ "Interrupt",
+ "IsExist",
+ "IsNotExist",
+ "IsPathSeparator",
+ "IsPermission",
+ "IsTimeout",
+ "Kill",
+ "Lchown",
+ "Link",
+ "LinkError",
+ "LookupEnv",
+ "Lstat",
+ "Mkdir",
+ "MkdirAll",
+ "ModeAppend",
+ "ModeCharDevice",
+ "ModeDevice",
+ "ModeDir",
+ "ModeExclusive",
+ "ModeIrregular",
+ "ModeNamedPipe",
+ "ModePerm",
+ "ModeSetgid",
+ "ModeSetuid",
+ "ModeSocket",
+ "ModeSticky",
+ "ModeSymlink",
+ "ModeTemporary",
+ "ModeType",
+ "NewFile",
+ "NewSyscallError",
+ "O_APPEND",
+ "O_CREATE",
+ "O_EXCL",
+ "O_RDONLY",
+ "O_RDWR",
+ "O_SYNC",
+ "O_TRUNC",
+ "O_WRONLY",
+ "Open",
+ "OpenFile",
+ "PathError",
+ "PathListSeparator",
+ "PathSeparator",
+ "Pipe",
+ "ProcAttr",
+ "Process",
+ "ProcessState",
+ "Readlink",
+ "Remove",
+ "RemoveAll",
+ "Rename",
+ "SEEK_CUR",
+ "SEEK_END",
+ "SEEK_SET",
+ "SameFile",
+ "Setenv",
+ "Signal",
+ "StartProcess",
+ "Stat",
+ "Stderr",
+ "Stdin",
+ "Stdout",
+ "Symlink",
+ "SyscallError",
+ "TempDir",
+ "Truncate",
+ "Unsetenv",
+ "UserCacheDir",
+ "UserConfigDir",
+ "UserHomeDir",
+ },
+ "os/exec": []string{
+ "Cmd",
+ "Command",
+ "CommandContext",
+ "ErrNotFound",
+ "Error",
+ "ExitError",
+ "LookPath",
+ },
+ "os/signal": []string{
+ "Ignore",
+ "Ignored",
+ "Notify",
+ "Reset",
+ "Stop",
+ },
+ "os/user": []string{
+ "Current",
+ "Group",
+ "Lookup",
+ "LookupGroup",
+ "LookupGroupId",
+ "LookupId",
+ "UnknownGroupError",
+ "UnknownGroupIdError",
+ "UnknownUserError",
+ "UnknownUserIdError",
+ "User",
+ },
+ "path": []string{
+ "Base",
+ "Clean",
+ "Dir",
+ "ErrBadPattern",
+ "Ext",
+ "IsAbs",
+ "Join",
+ "Match",
+ "Split",
+ },
+ "path/filepath": []string{
+ "Abs",
+ "Base",
+ "Clean",
+ "Dir",
+ "ErrBadPattern",
+ "EvalSymlinks",
+ "Ext",
+ "FromSlash",
+ "Glob",
+ "HasPrefix",
+ "IsAbs",
+ "Join",
+ "ListSeparator",
+ "Match",
+ "Rel",
+ "Separator",
+ "SkipDir",
+ "Split",
+ "SplitList",
+ "ToSlash",
+ "VolumeName",
+ "Walk",
+ "WalkFunc",
+ },
+ "plugin": []string{
+ "Open",
+ "Plugin",
+ "Symbol",
+ },
+ "reflect": []string{
+ "Append",
+ "AppendSlice",
+ "Array",
+ "ArrayOf",
+ "Bool",
+ "BothDir",
+ "Chan",
+ "ChanDir",
+ "ChanOf",
+ "Complex128",
+ "Complex64",
+ "Copy",
+ "DeepEqual",
+ "Float32",
+ "Float64",
+ "Func",
+ "FuncOf",
+ "Indirect",
+ "Int",
+ "Int16",
+ "Int32",
+ "Int64",
+ "Int8",
+ "Interface",
+ "Invalid",
+ "Kind",
+ "MakeChan",
+ "MakeFunc",
+ "MakeMap",
+ "MakeMapWithSize",
+ "MakeSlice",
+ "Map",
+ "MapIter",
+ "MapOf",
+ "Method",
+ "New",
+ "NewAt",
+ "Ptr",
+ "PtrTo",
+ "RecvDir",
+ "Select",
+ "SelectCase",
+ "SelectDefault",
+ "SelectDir",
+ "SelectRecv",
+ "SelectSend",
+ "SendDir",
+ "Slice",
+ "SliceHeader",
+ "SliceOf",
+ "String",
+ "StringHeader",
+ "Struct",
+ "StructField",
+ "StructOf",
+ "StructTag",
+ "Swapper",
+ "Type",
+ "TypeOf",
+ "Uint",
+ "Uint16",
+ "Uint32",
+ "Uint64",
+ "Uint8",
+ "Uintptr",
+ "UnsafePointer",
+ "Value",
+ "ValueError",
+ "ValueOf",
+ "Zero",
+ },
+ "regexp": []string{
+ "Compile",
+ "CompilePOSIX",
+ "Match",
+ "MatchReader",
+ "MatchString",
+ "MustCompile",
+ "MustCompilePOSIX",
+ "QuoteMeta",
+ "Regexp",
+ },
+ "regexp/syntax": []string{
+ "ClassNL",
+ "Compile",
+ "DotNL",
+ "EmptyBeginLine",
+ "EmptyBeginText",
+ "EmptyEndLine",
+ "EmptyEndText",
+ "EmptyNoWordBoundary",
+ "EmptyOp",
+ "EmptyOpContext",
+ "EmptyWordBoundary",
+ "ErrInternalError",
+ "ErrInvalidCharClass",
+ "ErrInvalidCharRange",
+ "ErrInvalidEscape",
+ "ErrInvalidNamedCapture",
+ "ErrInvalidPerlOp",
+ "ErrInvalidRepeatOp",
+ "ErrInvalidRepeatSize",
+ "ErrInvalidUTF8",
+ "ErrMissingBracket",
+ "ErrMissingParen",
+ "ErrMissingRepeatArgument",
+ "ErrTrailingBackslash",
+ "ErrUnexpectedParen",
+ "Error",
+ "ErrorCode",
+ "Flags",
+ "FoldCase",
+ "Inst",
+ "InstAlt",
+ "InstAltMatch",
+ "InstCapture",
+ "InstEmptyWidth",
+ "InstFail",
+ "InstMatch",
+ "InstNop",
+ "InstOp",
+ "InstRune",
+ "InstRune1",
+ "InstRuneAny",
+ "InstRuneAnyNotNL",
+ "IsWordChar",
+ "Literal",
+ "MatchNL",
+ "NonGreedy",
+ "OneLine",
+ "Op",
+ "OpAlternate",
+ "OpAnyChar",
+ "OpAnyCharNotNL",
+ "OpBeginLine",
+ "OpBeginText",
+ "OpCapture",
+ "OpCharClass",
+ "OpConcat",
+ "OpEmptyMatch",
+ "OpEndLine",
+ "OpEndText",
+ "OpLiteral",
+ "OpNoMatch",
+ "OpNoWordBoundary",
+ "OpPlus",
+ "OpQuest",
+ "OpRepeat",
+ "OpStar",
+ "OpWordBoundary",
+ "POSIX",
+ "Parse",
+ "Perl",
+ "PerlX",
+ "Prog",
+ "Regexp",
+ "Simple",
+ "UnicodeGroups",
+ "WasDollar",
+ },
+ "runtime": []string{
+ "BlockProfile",
+ "BlockProfileRecord",
+ "Breakpoint",
+ "CPUProfile",
+ "Caller",
+ "Callers",
+ "CallersFrames",
+ "Compiler",
+ "Error",
+ "Frame",
+ "Frames",
+ "Func",
+ "FuncForPC",
+ "GC",
+ "GOARCH",
+ "GOMAXPROCS",
+ "GOOS",
+ "GOROOT",
+ "Goexit",
+ "GoroutineProfile",
+ "Gosched",
+ "KeepAlive",
+ "LockOSThread",
+ "MemProfile",
+ "MemProfileRate",
+ "MemProfileRecord",
+ "MemStats",
+ "MutexProfile",
+ "NumCPU",
+ "NumCgoCall",
+ "NumGoroutine",
+ "ReadMemStats",
+ "ReadTrace",
+ "SetBlockProfileRate",
+ "SetCPUProfileRate",
+ "SetCgoTraceback",
+ "SetFinalizer",
+ "SetMutexProfileFraction",
+ "Stack",
+ "StackRecord",
+ "StartTrace",
+ "StopTrace",
+ "ThreadCreateProfile",
+ "TypeAssertionError",
+ "UnlockOSThread",
+ "Version",
+ },
+ "runtime/debug": []string{
+ "BuildInfo",
+ "FreeOSMemory",
+ "GCStats",
+ "Module",
+ "PrintStack",
+ "ReadBuildInfo",
+ "ReadGCStats",
+ "SetGCPercent",
+ "SetMaxStack",
+ "SetMaxThreads",
+ "SetPanicOnFault",
+ "SetTraceback",
+ "Stack",
+ "WriteHeapDump",
+ },
+ "runtime/pprof": []string{
+ "Do",
+ "ForLabels",
+ "Label",
+ "LabelSet",
+ "Labels",
+ "Lookup",
+ "NewProfile",
+ "Profile",
+ "Profiles",
+ "SetGoroutineLabels",
+ "StartCPUProfile",
+ "StopCPUProfile",
+ "WithLabels",
+ "WriteHeapProfile",
+ },
+ "runtime/trace": []string{
+ "IsEnabled",
+ "Log",
+ "Logf",
+ "NewTask",
+ "Region",
+ "Start",
+ "StartRegion",
+ "Stop",
+ "Task",
+ "WithRegion",
+ },
+ "sort": []string{
+ "Float64Slice",
+ "Float64s",
+ "Float64sAreSorted",
+ "IntSlice",
+ "Interface",
+ "Ints",
+ "IntsAreSorted",
+ "IsSorted",
+ "Reverse",
+ "Search",
+ "SearchFloat64s",
+ "SearchInts",
+ "SearchStrings",
+ "Slice",
+ "SliceIsSorted",
+ "SliceStable",
+ "Sort",
+ "Stable",
+ "StringSlice",
+ "Strings",
+ "StringsAreSorted",
+ },
+ "strconv": []string{
+ "AppendBool",
+ "AppendFloat",
+ "AppendInt",
+ "AppendQuote",
+ "AppendQuoteRune",
+ "AppendQuoteRuneToASCII",
+ "AppendQuoteRuneToGraphic",
+ "AppendQuoteToASCII",
+ "AppendQuoteToGraphic",
+ "AppendUint",
+ "Atoi",
+ "CanBackquote",
+ "ErrRange",
+ "ErrSyntax",
+ "FormatBool",
+ "FormatFloat",
+ "FormatInt",
+ "FormatUint",
+ "IntSize",
+ "IsGraphic",
+ "IsPrint",
+ "Itoa",
+ "NumError",
+ "ParseBool",
+ "ParseFloat",
+ "ParseInt",
+ "ParseUint",
+ "Quote",
+ "QuoteRune",
+ "QuoteRuneToASCII",
+ "QuoteRuneToGraphic",
+ "QuoteToASCII",
+ "QuoteToGraphic",
+ "Unquote",
+ "UnquoteChar",
+ },
+ "strings": []string{
+ "Builder",
+ "Compare",
+ "Contains",
+ "ContainsAny",
+ "ContainsRune",
+ "Count",
+ "EqualFold",
+ "Fields",
+ "FieldsFunc",
+ "HasPrefix",
+ "HasSuffix",
+ "Index",
+ "IndexAny",
+ "IndexByte",
+ "IndexFunc",
+ "IndexRune",
+ "Join",
+ "LastIndex",
+ "LastIndexAny",
+ "LastIndexByte",
+ "LastIndexFunc",
+ "Map",
+ "NewReader",
+ "NewReplacer",
+ "Reader",
+ "Repeat",
+ "Replace",
+ "ReplaceAll",
+ "Replacer",
+ "Split",
+ "SplitAfter",
+ "SplitAfterN",
+ "SplitN",
+ "Title",
+ "ToLower",
+ "ToLowerSpecial",
+ "ToTitle",
+ "ToTitleSpecial",
+ "ToUpper",
+ "ToUpperSpecial",
+ "ToValidUTF8",
+ "Trim",
+ "TrimFunc",
+ "TrimLeft",
+ "TrimLeftFunc",
+ "TrimPrefix",
+ "TrimRight",
+ "TrimRightFunc",
+ "TrimSpace",
+ "TrimSuffix",
+ },
+ "sync": []string{
+ "Cond",
+ "Locker",
+ "Map",
+ "Mutex",
+ "NewCond",
+ "Once",
+ "Pool",
+ "RWMutex",
+ "WaitGroup",
+ },
+ "sync/atomic": []string{
+ "AddInt32",
+ "AddInt64",
+ "AddUint32",
+ "AddUint64",
+ "AddUintptr",
+ "CompareAndSwapInt32",
+ "CompareAndSwapInt64",
+ "CompareAndSwapPointer",
+ "CompareAndSwapUint32",
+ "CompareAndSwapUint64",
+ "CompareAndSwapUintptr",
+ "LoadInt32",
+ "LoadInt64",
+ "LoadPointer",
+ "LoadUint32",
+ "LoadUint64",
+ "LoadUintptr",
+ "StoreInt32",
+ "StoreInt64",
+ "StorePointer",
+ "StoreUint32",
+ "StoreUint64",
+ "StoreUintptr",
+ "SwapInt32",
+ "SwapInt64",
+ "SwapPointer",
+ "SwapUint32",
+ "SwapUint64",
+ "SwapUintptr",
+ "Value",
+ },
+ "syscall": []string{
+ "AF_ALG",
+ "AF_APPLETALK",
+ "AF_ARP",
+ "AF_ASH",
+ "AF_ATM",
+ "AF_ATMPVC",
+ "AF_ATMSVC",
+ "AF_AX25",
+ "AF_BLUETOOTH",
+ "AF_BRIDGE",
+ "AF_CAIF",
+ "AF_CAN",
+ "AF_CCITT",
+ "AF_CHAOS",
+ "AF_CNT",
+ "AF_COIP",
+ "AF_DATAKIT",
+ "AF_DECnet",
+ "AF_DLI",
+ "AF_E164",
+ "AF_ECMA",
+ "AF_ECONET",
+ "AF_ENCAP",
+ "AF_FILE",
+ "AF_HYLINK",
+ "AF_IEEE80211",
+ "AF_IEEE802154",
+ "AF_IMPLINK",
+ "AF_INET",
+ "AF_INET6",
+ "AF_INET6_SDP",
+ "AF_INET_SDP",
+ "AF_IPX",
+ "AF_IRDA",
+ "AF_ISDN",
+ "AF_ISO",
+ "AF_IUCV",
+ "AF_KEY",
+ "AF_LAT",
+ "AF_LINK",
+ "AF_LLC",
+ "AF_LOCAL",
+ "AF_MAX",
+ "AF_MPLS",
+ "AF_NATM",
+ "AF_NDRV",
+ "AF_NETBEUI",
+ "AF_NETBIOS",
+ "AF_NETGRAPH",
+ "AF_NETLINK",
+ "AF_NETROM",
+ "AF_NS",
+ "AF_OROUTE",
+ "AF_OSI",
+ "AF_PACKET",
+ "AF_PHONET",
+ "AF_PPP",
+ "AF_PPPOX",
+ "AF_PUP",
+ "AF_RDS",
+ "AF_RESERVED_36",
+ "AF_ROSE",
+ "AF_ROUTE",
+ "AF_RXRPC",
+ "AF_SCLUSTER",
+ "AF_SECURITY",
+ "AF_SIP",
+ "AF_SLOW",
+ "AF_SNA",
+ "AF_SYSTEM",
+ "AF_TIPC",
+ "AF_UNIX",
+ "AF_UNSPEC",
+ "AF_VENDOR00",
+ "AF_VENDOR01",
+ "AF_VENDOR02",
+ "AF_VENDOR03",
+ "AF_VENDOR04",
+ "AF_VENDOR05",
+ "AF_VENDOR06",
+ "AF_VENDOR07",
+ "AF_VENDOR08",
+ "AF_VENDOR09",
+ "AF_VENDOR10",
+ "AF_VENDOR11",
+ "AF_VENDOR12",
+ "AF_VENDOR13",
+ "AF_VENDOR14",
+ "AF_VENDOR15",
+ "AF_VENDOR16",
+ "AF_VENDOR17",
+ "AF_VENDOR18",
+ "AF_VENDOR19",
+ "AF_VENDOR20",
+ "AF_VENDOR21",
+ "AF_VENDOR22",
+ "AF_VENDOR23",
+ "AF_VENDOR24",
+ "AF_VENDOR25",
+ "AF_VENDOR26",
+ "AF_VENDOR27",
+ "AF_VENDOR28",
+ "AF_VENDOR29",
+ "AF_VENDOR30",
+ "AF_VENDOR31",
+ "AF_VENDOR32",
+ "AF_VENDOR33",
+ "AF_VENDOR34",
+ "AF_VENDOR35",
+ "AF_VENDOR36",
+ "AF_VENDOR37",
+ "AF_VENDOR38",
+ "AF_VENDOR39",
+ "AF_VENDOR40",
+ "AF_VENDOR41",
+ "AF_VENDOR42",
+ "AF_VENDOR43",
+ "AF_VENDOR44",
+ "AF_VENDOR45",
+ "AF_VENDOR46",
+ "AF_VENDOR47",
+ "AF_WANPIPE",
+ "AF_X25",
+ "AI_CANONNAME",
+ "AI_NUMERICHOST",
+ "AI_PASSIVE",
+ "APPLICATION_ERROR",
+ "ARPHRD_ADAPT",
+ "ARPHRD_APPLETLK",
+ "ARPHRD_ARCNET",
+ "ARPHRD_ASH",
+ "ARPHRD_ATM",
+ "ARPHRD_AX25",
+ "ARPHRD_BIF",
+ "ARPHRD_CHAOS",
+ "ARPHRD_CISCO",
+ "ARPHRD_CSLIP",
+ "ARPHRD_CSLIP6",
+ "ARPHRD_DDCMP",
+ "ARPHRD_DLCI",
+ "ARPHRD_ECONET",
+ "ARPHRD_EETHER",
+ "ARPHRD_ETHER",
+ "ARPHRD_EUI64",
+ "ARPHRD_FCAL",
+ "ARPHRD_FCFABRIC",
+ "ARPHRD_FCPL",
+ "ARPHRD_FCPP",
+ "ARPHRD_FDDI",
+ "ARPHRD_FRAD",
+ "ARPHRD_FRELAY",
+ "ARPHRD_HDLC",
+ "ARPHRD_HIPPI",
+ "ARPHRD_HWX25",
+ "ARPHRD_IEEE1394",
+ "ARPHRD_IEEE802",
+ "ARPHRD_IEEE80211",
+ "ARPHRD_IEEE80211_PRISM",
+ "ARPHRD_IEEE80211_RADIOTAP",
+ "ARPHRD_IEEE802154",
+ "ARPHRD_IEEE802154_PHY",
+ "ARPHRD_IEEE802_TR",
+ "ARPHRD_INFINIBAND",
+ "ARPHRD_IPDDP",
+ "ARPHRD_IPGRE",
+ "ARPHRD_IRDA",
+ "ARPHRD_LAPB",
+ "ARPHRD_LOCALTLK",
+ "ARPHRD_LOOPBACK",
+ "ARPHRD_METRICOM",
+ "ARPHRD_NETROM",
+ "ARPHRD_NONE",
+ "ARPHRD_PIMREG",
+ "ARPHRD_PPP",
+ "ARPHRD_PRONET",
+ "ARPHRD_RAWHDLC",
+ "ARPHRD_ROSE",
+ "ARPHRD_RSRVD",
+ "ARPHRD_SIT",
+ "ARPHRD_SKIP",
+ "ARPHRD_SLIP",
+ "ARPHRD_SLIP6",
+ "ARPHRD_STRIP",
+ "ARPHRD_TUNNEL",
+ "ARPHRD_TUNNEL6",
+ "ARPHRD_VOID",
+ "ARPHRD_X25",
+ "AUTHTYPE_CLIENT",
+ "AUTHTYPE_SERVER",
+ "Accept",
+ "Accept4",
+ "AcceptEx",
+ "Access",
+ "Acct",
+ "AddrinfoW",
+ "Adjtime",
+ "Adjtimex",
+ "AttachLsf",
+ "B0",
+ "B1000000",
+ "B110",
+ "B115200",
+ "B1152000",
+ "B1200",
+ "B134",
+ "B14400",
+ "B150",
+ "B1500000",
+ "B1800",
+ "B19200",
+ "B200",
+ "B2000000",
+ "B230400",
+ "B2400",
+ "B2500000",
+ "B28800",
+ "B300",
+ "B3000000",
+ "B3500000",
+ "B38400",
+ "B4000000",
+ "B460800",
+ "B4800",
+ "B50",
+ "B500000",
+ "B57600",
+ "B576000",
+ "B600",
+ "B7200",
+ "B75",
+ "B76800",
+ "B921600",
+ "B9600",
+ "BASE_PROTOCOL",
+ "BIOCFEEDBACK",
+ "BIOCFLUSH",
+ "BIOCGBLEN",
+ "BIOCGDIRECTION",
+ "BIOCGDIRFILT",
+ "BIOCGDLT",
+ "BIOCGDLTLIST",
+ "BIOCGETBUFMODE",
+ "BIOCGETIF",
+ "BIOCGETZMAX",
+ "BIOCGFEEDBACK",
+ "BIOCGFILDROP",
+ "BIOCGHDRCMPLT",
+ "BIOCGRSIG",
+ "BIOCGRTIMEOUT",
+ "BIOCGSEESENT",
+ "BIOCGSTATS",
+ "BIOCGSTATSOLD",
+ "BIOCGTSTAMP",
+ "BIOCIMMEDIATE",
+ "BIOCLOCK",
+ "BIOCPROMISC",
+ "BIOCROTZBUF",
+ "BIOCSBLEN",
+ "BIOCSDIRECTION",
+ "BIOCSDIRFILT",
+ "BIOCSDLT",
+ "BIOCSETBUFMODE",
+ "BIOCSETF",
+ "BIOCSETFNR",
+ "BIOCSETIF",
+ "BIOCSETWF",
+ "BIOCSETZBUF",
+ "BIOCSFEEDBACK",
+ "BIOCSFILDROP",
+ "BIOCSHDRCMPLT",
+ "BIOCSRSIG",
+ "BIOCSRTIMEOUT",
+ "BIOCSSEESENT",
+ "BIOCSTCPF",
+ "BIOCSTSTAMP",
+ "BIOCSUDPF",
+ "BIOCVERSION",
+ "BPF_A",
+ "BPF_ABS",
+ "BPF_ADD",
+ "BPF_ALIGNMENT",
+ "BPF_ALIGNMENT32",
+ "BPF_ALU",
+ "BPF_AND",
+ "BPF_B",
+ "BPF_BUFMODE_BUFFER",
+ "BPF_BUFMODE_ZBUF",
+ "BPF_DFLTBUFSIZE",
+ "BPF_DIRECTION_IN",
+ "BPF_DIRECTION_OUT",
+ "BPF_DIV",
+ "BPF_H",
+ "BPF_IMM",
+ "BPF_IND",
+ "BPF_JA",
+ "BPF_JEQ",
+ "BPF_JGE",
+ "BPF_JGT",
+ "BPF_JMP",
+ "BPF_JSET",
+ "BPF_K",
+ "BPF_LD",
+ "BPF_LDX",
+ "BPF_LEN",
+ "BPF_LSH",
+ "BPF_MAJOR_VERSION",
+ "BPF_MAXBUFSIZE",
+ "BPF_MAXINSNS",
+ "BPF_MEM",
+ "BPF_MEMWORDS",
+ "BPF_MINBUFSIZE",
+ "BPF_MINOR_VERSION",
+ "BPF_MISC",
+ "BPF_MSH",
+ "BPF_MUL",
+ "BPF_NEG",
+ "BPF_OR",
+ "BPF_RELEASE",
+ "BPF_RET",
+ "BPF_RSH",
+ "BPF_ST",
+ "BPF_STX",
+ "BPF_SUB",
+ "BPF_TAX",
+ "BPF_TXA",
+ "BPF_T_BINTIME",
+ "BPF_T_BINTIME_FAST",
+ "BPF_T_BINTIME_MONOTONIC",
+ "BPF_T_BINTIME_MONOTONIC_FAST",
+ "BPF_T_FAST",
+ "BPF_T_FLAG_MASK",
+ "BPF_T_FORMAT_MASK",
+ "BPF_T_MICROTIME",
+ "BPF_T_MICROTIME_FAST",
+ "BPF_T_MICROTIME_MONOTONIC",
+ "BPF_T_MICROTIME_MONOTONIC_FAST",
+ "BPF_T_MONOTONIC",
+ "BPF_T_MONOTONIC_FAST",
+ "BPF_T_NANOTIME",
+ "BPF_T_NANOTIME_FAST",
+ "BPF_T_NANOTIME_MONOTONIC",
+ "BPF_T_NANOTIME_MONOTONIC_FAST",
+ "BPF_T_NONE",
+ "BPF_T_NORMAL",
+ "BPF_W",
+ "BPF_X",
+ "BRKINT",
+ "Bind",
+ "BindToDevice",
+ "BpfBuflen",
+ "BpfDatalink",
+ "BpfHdr",
+ "BpfHeadercmpl",
+ "BpfInsn",
+ "BpfInterface",
+ "BpfJump",
+ "BpfProgram",
+ "BpfStat",
+ "BpfStats",
+ "BpfStmt",
+ "BpfTimeout",
+ "BpfTimeval",
+ "BpfVersion",
+ "BpfZbuf",
+ "BpfZbufHeader",
+ "ByHandleFileInformation",
+ "BytePtrFromString",
+ "ByteSliceFromString",
+ "CCR0_FLUSH",
+ "CERT_CHAIN_POLICY_AUTHENTICODE",
+ "CERT_CHAIN_POLICY_AUTHENTICODE_TS",
+ "CERT_CHAIN_POLICY_BASE",
+ "CERT_CHAIN_POLICY_BASIC_CONSTRAINTS",
+ "CERT_CHAIN_POLICY_EV",
+ "CERT_CHAIN_POLICY_MICROSOFT_ROOT",
+ "CERT_CHAIN_POLICY_NT_AUTH",
+ "CERT_CHAIN_POLICY_SSL",
+ "CERT_E_CN_NO_MATCH",
+ "CERT_E_EXPIRED",
+ "CERT_E_PURPOSE",
+ "CERT_E_ROLE",
+ "CERT_E_UNTRUSTEDROOT",
+ "CERT_STORE_ADD_ALWAYS",
+ "CERT_STORE_DEFER_CLOSE_UNTIL_LAST_FREE_FLAG",
+ "CERT_STORE_PROV_MEMORY",
+ "CERT_TRUST_HAS_EXCLUDED_NAME_CONSTRAINT",
+ "CERT_TRUST_HAS_NOT_DEFINED_NAME_CONSTRAINT",
+ "CERT_TRUST_HAS_NOT_PERMITTED_NAME_CONSTRAINT",
+ "CERT_TRUST_HAS_NOT_SUPPORTED_CRITICAL_EXT",
+ "CERT_TRUST_HAS_NOT_SUPPORTED_NAME_CONSTRAINT",
+ "CERT_TRUST_INVALID_BASIC_CONSTRAINTS",
+ "CERT_TRUST_INVALID_EXTENSION",
+ "CERT_TRUST_INVALID_NAME_CONSTRAINTS",
+ "CERT_TRUST_INVALID_POLICY_CONSTRAINTS",
+ "CERT_TRUST_IS_CYCLIC",
+ "CERT_TRUST_IS_EXPLICIT_DISTRUST",
+ "CERT_TRUST_IS_NOT_SIGNATURE_VALID",
+ "CERT_TRUST_IS_NOT_TIME_VALID",
+ "CERT_TRUST_IS_NOT_VALID_FOR_USAGE",
+ "CERT_TRUST_IS_OFFLINE_REVOCATION",
+ "CERT_TRUST_IS_REVOKED",
+ "CERT_TRUST_IS_UNTRUSTED_ROOT",
+ "CERT_TRUST_NO_ERROR",
+ "CERT_TRUST_NO_ISSUANCE_CHAIN_POLICY",
+ "CERT_TRUST_REVOCATION_STATUS_UNKNOWN",
+ "CFLUSH",
+ "CLOCAL",
+ "CLONE_CHILD_CLEARTID",
+ "CLONE_CHILD_SETTID",
+ "CLONE_CSIGNAL",
+ "CLONE_DETACHED",
+ "CLONE_FILES",
+ "CLONE_FS",
+ "CLONE_IO",
+ "CLONE_NEWIPC",
+ "CLONE_NEWNET",
+ "CLONE_NEWNS",
+ "CLONE_NEWPID",
+ "CLONE_NEWUSER",
+ "CLONE_NEWUTS",
+ "CLONE_PARENT",
+ "CLONE_PARENT_SETTID",
+ "CLONE_PID",
+ "CLONE_PTRACE",
+ "CLONE_SETTLS",
+ "CLONE_SIGHAND",
+ "CLONE_SYSVSEM",
+ "CLONE_THREAD",
+ "CLONE_UNTRACED",
+ "CLONE_VFORK",
+ "CLONE_VM",
+ "CPUID_CFLUSH",
+ "CREAD",
+ "CREATE_ALWAYS",
+ "CREATE_NEW",
+ "CREATE_NEW_PROCESS_GROUP",
+ "CREATE_UNICODE_ENVIRONMENT",
+ "CRYPT_DEFAULT_CONTAINER_OPTIONAL",
+ "CRYPT_DELETEKEYSET",
+ "CRYPT_MACHINE_KEYSET",
+ "CRYPT_NEWKEYSET",
+ "CRYPT_SILENT",
+ "CRYPT_VERIFYCONTEXT",
+ "CS5",
+ "CS6",
+ "CS7",
+ "CS8",
+ "CSIZE",
+ "CSTART",
+ "CSTATUS",
+ "CSTOP",
+ "CSTOPB",
+ "CSUSP",
+ "CTL_MAXNAME",
+ "CTL_NET",
+ "CTL_QUERY",
+ "CTRL_BREAK_EVENT",
+ "CTRL_C_EVENT",
+ "CancelIo",
+ "CancelIoEx",
+ "CertAddCertificateContextToStore",
+ "CertChainContext",
+ "CertChainElement",
+ "CertChainPara",
+ "CertChainPolicyPara",
+ "CertChainPolicyStatus",
+ "CertCloseStore",
+ "CertContext",
+ "CertCreateCertificateContext",
+ "CertEnhKeyUsage",
+ "CertEnumCertificatesInStore",
+ "CertFreeCertificateChain",
+ "CertFreeCertificateContext",
+ "CertGetCertificateChain",
+ "CertInfo",
+ "CertOpenStore",
+ "CertOpenSystemStore",
+ "CertRevocationCrlInfo",
+ "CertRevocationInfo",
+ "CertSimpleChain",
+ "CertTrustListInfo",
+ "CertTrustStatus",
+ "CertUsageMatch",
+ "CertVerifyCertificateChainPolicy",
+ "Chdir",
+ "CheckBpfVersion",
+ "Chflags",
+ "Chmod",
+ "Chown",
+ "Chroot",
+ "Clearenv",
+ "Close",
+ "CloseHandle",
+ "CloseOnExec",
+ "Closesocket",
+ "CmsgLen",
+ "CmsgSpace",
+ "Cmsghdr",
+ "CommandLineToArgv",
+ "ComputerName",
+ "Conn",
+ "Connect",
+ "ConnectEx",
+ "ConvertSidToStringSid",
+ "ConvertStringSidToSid",
+ "CopySid",
+ "Creat",
+ "CreateDirectory",
+ "CreateFile",
+ "CreateFileMapping",
+ "CreateHardLink",
+ "CreateIoCompletionPort",
+ "CreatePipe",
+ "CreateProcess",
+ "CreateProcessAsUser",
+ "CreateSymbolicLink",
+ "CreateToolhelp32Snapshot",
+ "Credential",
+ "CryptAcquireContext",
+ "CryptGenRandom",
+ "CryptReleaseContext",
+ "DIOCBSFLUSH",
+ "DIOCOSFPFLUSH",
+ "DLL",
+ "DLLError",
+ "DLT_A429",
+ "DLT_A653_ICM",
+ "DLT_AIRONET_HEADER",
+ "DLT_AOS",
+ "DLT_APPLE_IP_OVER_IEEE1394",
+ "DLT_ARCNET",
+ "DLT_ARCNET_LINUX",
+ "DLT_ATM_CLIP",
+ "DLT_ATM_RFC1483",
+ "DLT_AURORA",
+ "DLT_AX25",
+ "DLT_AX25_KISS",
+ "DLT_BACNET_MS_TP",
+ "DLT_BLUETOOTH_HCI_H4",
+ "DLT_BLUETOOTH_HCI_H4_WITH_PHDR",
+ "DLT_CAN20B",
+ "DLT_CAN_SOCKETCAN",
+ "DLT_CHAOS",
+ "DLT_CHDLC",
+ "DLT_CISCO_IOS",
+ "DLT_C_HDLC",
+ "DLT_C_HDLC_WITH_DIR",
+ "DLT_DBUS",
+ "DLT_DECT",
+ "DLT_DOCSIS",
+ "DLT_DVB_CI",
+ "DLT_ECONET",
+ "DLT_EN10MB",
+ "DLT_EN3MB",
+ "DLT_ENC",
+ "DLT_ERF",
+ "DLT_ERF_ETH",
+ "DLT_ERF_POS",
+ "DLT_FC_2",
+ "DLT_FC_2_WITH_FRAME_DELIMS",
+ "DLT_FDDI",
+ "DLT_FLEXRAY",
+ "DLT_FRELAY",
+ "DLT_FRELAY_WITH_DIR",
+ "DLT_GCOM_SERIAL",
+ "DLT_GCOM_T1E1",
+ "DLT_GPF_F",
+ "DLT_GPF_T",
+ "DLT_GPRS_LLC",
+ "DLT_GSMTAP_ABIS",
+ "DLT_GSMTAP_UM",
+ "DLT_HDLC",
+ "DLT_HHDLC",
+ "DLT_HIPPI",
+ "DLT_IBM_SN",
+ "DLT_IBM_SP",
+ "DLT_IEEE802",
+ "DLT_IEEE802_11",
+ "DLT_IEEE802_11_RADIO",
+ "DLT_IEEE802_11_RADIO_AVS",
+ "DLT_IEEE802_15_4",
+ "DLT_IEEE802_15_4_LINUX",
+ "DLT_IEEE802_15_4_NOFCS",
+ "DLT_IEEE802_15_4_NONASK_PHY",
+ "DLT_IEEE802_16_MAC_CPS",
+ "DLT_IEEE802_16_MAC_CPS_RADIO",
+ "DLT_IPFILTER",
+ "DLT_IPMB",
+ "DLT_IPMB_LINUX",
+ "DLT_IPNET",
+ "DLT_IPOIB",
+ "DLT_IPV4",
+ "DLT_IPV6",
+ "DLT_IP_OVER_FC",
+ "DLT_JUNIPER_ATM1",
+ "DLT_JUNIPER_ATM2",
+ "DLT_JUNIPER_ATM_CEMIC",
+ "DLT_JUNIPER_CHDLC",
+ "DLT_JUNIPER_ES",
+ "DLT_JUNIPER_ETHER",
+ "DLT_JUNIPER_FIBRECHANNEL",
+ "DLT_JUNIPER_FRELAY",
+ "DLT_JUNIPER_GGSN",
+ "DLT_JUNIPER_ISM",
+ "DLT_JUNIPER_MFR",
+ "DLT_JUNIPER_MLFR",
+ "DLT_JUNIPER_MLPPP",
+ "DLT_JUNIPER_MONITOR",
+ "DLT_JUNIPER_PIC_PEER",
+ "DLT_JUNIPER_PPP",
+ "DLT_JUNIPER_PPPOE",
+ "DLT_JUNIPER_PPPOE_ATM",
+ "DLT_JUNIPER_SERVICES",
+ "DLT_JUNIPER_SRX_E2E",
+ "DLT_JUNIPER_ST",
+ "DLT_JUNIPER_VP",
+ "DLT_JUNIPER_VS",
+ "DLT_LAPB_WITH_DIR",
+ "DLT_LAPD",
+ "DLT_LIN",
+ "DLT_LINUX_EVDEV",
+ "DLT_LINUX_IRDA",
+ "DLT_LINUX_LAPD",
+ "DLT_LINUX_PPP_WITHDIRECTION",
+ "DLT_LINUX_SLL",
+ "DLT_LOOP",
+ "DLT_LTALK",
+ "DLT_MATCHING_MAX",
+ "DLT_MATCHING_MIN",
+ "DLT_MFR",
+ "DLT_MOST",
+ "DLT_MPEG_2_TS",
+ "DLT_MPLS",
+ "DLT_MTP2",
+ "DLT_MTP2_WITH_PHDR",
+ "DLT_MTP3",
+ "DLT_MUX27010",
+ "DLT_NETANALYZER",
+ "DLT_NETANALYZER_TRANSPARENT",
+ "DLT_NFC_LLCP",
+ "DLT_NFLOG",
+ "DLT_NG40",
+ "DLT_NULL",
+ "DLT_PCI_EXP",
+ "DLT_PFLOG",
+ "DLT_PFSYNC",
+ "DLT_PPI",
+ "DLT_PPP",
+ "DLT_PPP_BSDOS",
+ "DLT_PPP_ETHER",
+ "DLT_PPP_PPPD",
+ "DLT_PPP_SERIAL",
+ "DLT_PPP_WITH_DIR",
+ "DLT_PPP_WITH_DIRECTION",
+ "DLT_PRISM_HEADER",
+ "DLT_PRONET",
+ "DLT_RAIF1",
+ "DLT_RAW",
+ "DLT_RAWAF_MASK",
+ "DLT_RIO",
+ "DLT_SCCP",
+ "DLT_SITA",
+ "DLT_SLIP",
+ "DLT_SLIP_BSDOS",
+ "DLT_STANAG_5066_D_PDU",
+ "DLT_SUNATM",
+ "DLT_SYMANTEC_FIREWALL",
+ "DLT_TZSP",
+ "DLT_USB",
+ "DLT_USB_LINUX",
+ "DLT_USB_LINUX_MMAPPED",
+ "DLT_USER0",
+ "DLT_USER1",
+ "DLT_USER10",
+ "DLT_USER11",
+ "DLT_USER12",
+ "DLT_USER13",
+ "DLT_USER14",
+ "DLT_USER15",
+ "DLT_USER2",
+ "DLT_USER3",
+ "DLT_USER4",
+ "DLT_USER5",
+ "DLT_USER6",
+ "DLT_USER7",
+ "DLT_USER8",
+ "DLT_USER9",
+ "DLT_WIHART",
+ "DLT_X2E_SERIAL",
+ "DLT_X2E_XORAYA",
+ "DNSMXData",
+ "DNSPTRData",
+ "DNSRecord",
+ "DNSSRVData",
+ "DNSTXTData",
+ "DNS_INFO_NO_RECORDS",
+ "DNS_TYPE_A",
+ "DNS_TYPE_A6",
+ "DNS_TYPE_AAAA",
+ "DNS_TYPE_ADDRS",
+ "DNS_TYPE_AFSDB",
+ "DNS_TYPE_ALL",
+ "DNS_TYPE_ANY",
+ "DNS_TYPE_ATMA",
+ "DNS_TYPE_AXFR",
+ "DNS_TYPE_CERT",
+ "DNS_TYPE_CNAME",
+ "DNS_TYPE_DHCID",
+ "DNS_TYPE_DNAME",
+ "DNS_TYPE_DNSKEY",
+ "DNS_TYPE_DS",
+ "DNS_TYPE_EID",
+ "DNS_TYPE_GID",
+ "DNS_TYPE_GPOS",
+ "DNS_TYPE_HINFO",
+ "DNS_TYPE_ISDN",
+ "DNS_TYPE_IXFR",
+ "DNS_TYPE_KEY",
+ "DNS_TYPE_KX",
+ "DNS_TYPE_LOC",
+ "DNS_TYPE_MAILA",
+ "DNS_TYPE_MAILB",
+ "DNS_TYPE_MB",
+ "DNS_TYPE_MD",
+ "DNS_TYPE_MF",
+ "DNS_TYPE_MG",
+ "DNS_TYPE_MINFO",
+ "DNS_TYPE_MR",
+ "DNS_TYPE_MX",
+ "DNS_TYPE_NAPTR",
+ "DNS_TYPE_NBSTAT",
+ "DNS_TYPE_NIMLOC",
+ "DNS_TYPE_NS",
+ "DNS_TYPE_NSAP",
+ "DNS_TYPE_NSAPPTR",
+ "DNS_TYPE_NSEC",
+ "DNS_TYPE_NULL",
+ "DNS_TYPE_NXT",
+ "DNS_TYPE_OPT",
+ "DNS_TYPE_PTR",
+ "DNS_TYPE_PX",
+ "DNS_TYPE_RP",
+ "DNS_TYPE_RRSIG",
+ "DNS_TYPE_RT",
+ "DNS_TYPE_SIG",
+ "DNS_TYPE_SINK",
+ "DNS_TYPE_SOA",
+ "DNS_TYPE_SRV",
+ "DNS_TYPE_TEXT",
+ "DNS_TYPE_TKEY",
+ "DNS_TYPE_TSIG",
+ "DNS_TYPE_UID",
+ "DNS_TYPE_UINFO",
+ "DNS_TYPE_UNSPEC",
+ "DNS_TYPE_WINS",
+ "DNS_TYPE_WINSR",
+ "DNS_TYPE_WKS",
+ "DNS_TYPE_X25",
+ "DT_BLK",
+ "DT_CHR",
+ "DT_DIR",
+ "DT_FIFO",
+ "DT_LNK",
+ "DT_REG",
+ "DT_SOCK",
+ "DT_UNKNOWN",
+ "DT_WHT",
+ "DUPLICATE_CLOSE_SOURCE",
+ "DUPLICATE_SAME_ACCESS",
+ "DeleteFile",
+ "DetachLsf",
+ "DeviceIoControl",
+ "Dirent",
+ "DnsNameCompare",
+ "DnsQuery",
+ "DnsRecordListFree",
+ "DnsSectionAdditional",
+ "DnsSectionAnswer",
+ "DnsSectionAuthority",
+ "DnsSectionQuestion",
+ "Dup",
+ "Dup2",
+ "Dup3",
+ "DuplicateHandle",
+ "E2BIG",
+ "EACCES",
+ "EADDRINUSE",
+ "EADDRNOTAVAIL",
+ "EADV",
+ "EAFNOSUPPORT",
+ "EAGAIN",
+ "EALREADY",
+ "EAUTH",
+ "EBADARCH",
+ "EBADE",
+ "EBADEXEC",
+ "EBADF",
+ "EBADFD",
+ "EBADMACHO",
+ "EBADMSG",
+ "EBADR",
+ "EBADRPC",
+ "EBADRQC",
+ "EBADSLT",
+ "EBFONT",
+ "EBUSY",
+ "ECANCELED",
+ "ECAPMODE",
+ "ECHILD",
+ "ECHO",
+ "ECHOCTL",
+ "ECHOE",
+ "ECHOK",
+ "ECHOKE",
+ "ECHONL",
+ "ECHOPRT",
+ "ECHRNG",
+ "ECOMM",
+ "ECONNABORTED",
+ "ECONNREFUSED",
+ "ECONNRESET",
+ "EDEADLK",
+ "EDEADLOCK",
+ "EDESTADDRREQ",
+ "EDEVERR",
+ "EDOM",
+ "EDOOFUS",
+ "EDOTDOT",
+ "EDQUOT",
+ "EEXIST",
+ "EFAULT",
+ "EFBIG",
+ "EFER_LMA",
+ "EFER_LME",
+ "EFER_NXE",
+ "EFER_SCE",
+ "EFTYPE",
+ "EHOSTDOWN",
+ "EHOSTUNREACH",
+ "EHWPOISON",
+ "EIDRM",
+ "EILSEQ",
+ "EINPROGRESS",
+ "EINTR",
+ "EINVAL",
+ "EIO",
+ "EIPSEC",
+ "EISCONN",
+ "EISDIR",
+ "EISNAM",
+ "EKEYEXPIRED",
+ "EKEYREJECTED",
+ "EKEYREVOKED",
+ "EL2HLT",
+ "EL2NSYNC",
+ "EL3HLT",
+ "EL3RST",
+ "ELAST",
+ "ELF_NGREG",
+ "ELF_PRARGSZ",
+ "ELIBACC",
+ "ELIBBAD",
+ "ELIBEXEC",
+ "ELIBMAX",
+ "ELIBSCN",
+ "ELNRNG",
+ "ELOOP",
+ "EMEDIUMTYPE",
+ "EMFILE",
+ "EMLINK",
+ "EMSGSIZE",
+ "EMT_TAGOVF",
+ "EMULTIHOP",
+ "EMUL_ENABLED",
+ "EMUL_LINUX",
+ "EMUL_LINUX32",
+ "EMUL_MAXID",
+ "EMUL_NATIVE",
+ "ENAMETOOLONG",
+ "ENAVAIL",
+ "ENDRUNDISC",
+ "ENEEDAUTH",
+ "ENETDOWN",
+ "ENETRESET",
+ "ENETUNREACH",
+ "ENFILE",
+ "ENOANO",
+ "ENOATTR",
+ "ENOBUFS",
+ "ENOCSI",
+ "ENODATA",
+ "ENODEV",
+ "ENOENT",
+ "ENOEXEC",
+ "ENOKEY",
+ "ENOLCK",
+ "ENOLINK",
+ "ENOMEDIUM",
+ "ENOMEM",
+ "ENOMSG",
+ "ENONET",
+ "ENOPKG",
+ "ENOPOLICY",
+ "ENOPROTOOPT",
+ "ENOSPC",
+ "ENOSR",
+ "ENOSTR",
+ "ENOSYS",
+ "ENOTBLK",
+ "ENOTCAPABLE",
+ "ENOTCONN",
+ "ENOTDIR",
+ "ENOTEMPTY",
+ "ENOTNAM",
+ "ENOTRECOVERABLE",
+ "ENOTSOCK",
+ "ENOTSUP",
+ "ENOTTY",
+ "ENOTUNIQ",
+ "ENXIO",
+ "EN_SW_CTL_INF",
+ "EN_SW_CTL_PREC",
+ "EN_SW_CTL_ROUND",
+ "EN_SW_DATACHAIN",
+ "EN_SW_DENORM",
+ "EN_SW_INVOP",
+ "EN_SW_OVERFLOW",
+ "EN_SW_PRECLOSS",
+ "EN_SW_UNDERFLOW",
+ "EN_SW_ZERODIV",
+ "EOPNOTSUPP",
+ "EOVERFLOW",
+ "EOWNERDEAD",
+ "EPERM",
+ "EPFNOSUPPORT",
+ "EPIPE",
+ "EPOLLERR",
+ "EPOLLET",
+ "EPOLLHUP",
+ "EPOLLIN",
+ "EPOLLMSG",
+ "EPOLLONESHOT",
+ "EPOLLOUT",
+ "EPOLLPRI",
+ "EPOLLRDBAND",
+ "EPOLLRDHUP",
+ "EPOLLRDNORM",
+ "EPOLLWRBAND",
+ "EPOLLWRNORM",
+ "EPOLL_CLOEXEC",
+ "EPOLL_CTL_ADD",
+ "EPOLL_CTL_DEL",
+ "EPOLL_CTL_MOD",
+ "EPOLL_NONBLOCK",
+ "EPROCLIM",
+ "EPROCUNAVAIL",
+ "EPROGMISMATCH",
+ "EPROGUNAVAIL",
+ "EPROTO",
+ "EPROTONOSUPPORT",
+ "EPROTOTYPE",
+ "EPWROFF",
+ "ERANGE",
+ "EREMCHG",
+ "EREMOTE",
+ "EREMOTEIO",
+ "ERESTART",
+ "ERFKILL",
+ "EROFS",
+ "ERPCMISMATCH",
+ "ERROR_ACCESS_DENIED",
+ "ERROR_ALREADY_EXISTS",
+ "ERROR_BROKEN_PIPE",
+ "ERROR_BUFFER_OVERFLOW",
+ "ERROR_DIR_NOT_EMPTY",
+ "ERROR_ENVVAR_NOT_FOUND",
+ "ERROR_FILE_EXISTS",
+ "ERROR_FILE_NOT_FOUND",
+ "ERROR_HANDLE_EOF",
+ "ERROR_INSUFFICIENT_BUFFER",
+ "ERROR_IO_PENDING",
+ "ERROR_MOD_NOT_FOUND",
+ "ERROR_MORE_DATA",
+ "ERROR_NETNAME_DELETED",
+ "ERROR_NOT_FOUND",
+ "ERROR_NO_MORE_FILES",
+ "ERROR_OPERATION_ABORTED",
+ "ERROR_PATH_NOT_FOUND",
+ "ERROR_PRIVILEGE_NOT_HELD",
+ "ERROR_PROC_NOT_FOUND",
+ "ESHLIBVERS",
+ "ESHUTDOWN",
+ "ESOCKTNOSUPPORT",
+ "ESPIPE",
+ "ESRCH",
+ "ESRMNT",
+ "ESTALE",
+ "ESTRPIPE",
+ "ETHERCAP_JUMBO_MTU",
+ "ETHERCAP_VLAN_HWTAGGING",
+ "ETHERCAP_VLAN_MTU",
+ "ETHERMIN",
+ "ETHERMTU",
+ "ETHERMTU_JUMBO",
+ "ETHERTYPE_8023",
+ "ETHERTYPE_AARP",
+ "ETHERTYPE_ACCTON",
+ "ETHERTYPE_AEONIC",
+ "ETHERTYPE_ALPHA",
+ "ETHERTYPE_AMBER",
+ "ETHERTYPE_AMOEBA",
+ "ETHERTYPE_AOE",
+ "ETHERTYPE_APOLLO",
+ "ETHERTYPE_APOLLODOMAIN",
+ "ETHERTYPE_APPLETALK",
+ "ETHERTYPE_APPLITEK",
+ "ETHERTYPE_ARGONAUT",
+ "ETHERTYPE_ARP",
+ "ETHERTYPE_AT",
+ "ETHERTYPE_ATALK",
+ "ETHERTYPE_ATOMIC",
+ "ETHERTYPE_ATT",
+ "ETHERTYPE_ATTSTANFORD",
+ "ETHERTYPE_AUTOPHON",
+ "ETHERTYPE_AXIS",
+ "ETHERTYPE_BCLOOP",
+ "ETHERTYPE_BOFL",
+ "ETHERTYPE_CABLETRON",
+ "ETHERTYPE_CHAOS",
+ "ETHERTYPE_COMDESIGN",
+ "ETHERTYPE_COMPUGRAPHIC",
+ "ETHERTYPE_COUNTERPOINT",
+ "ETHERTYPE_CRONUS",
+ "ETHERTYPE_CRONUSVLN",
+ "ETHERTYPE_DCA",
+ "ETHERTYPE_DDE",
+ "ETHERTYPE_DEBNI",
+ "ETHERTYPE_DECAM",
+ "ETHERTYPE_DECCUST",
+ "ETHERTYPE_DECDIAG",
+ "ETHERTYPE_DECDNS",
+ "ETHERTYPE_DECDTS",
+ "ETHERTYPE_DECEXPER",
+ "ETHERTYPE_DECLAST",
+ "ETHERTYPE_DECLTM",
+ "ETHERTYPE_DECMUMPS",
+ "ETHERTYPE_DECNETBIOS",
+ "ETHERTYPE_DELTACON",
+ "ETHERTYPE_DIDDLE",
+ "ETHERTYPE_DLOG1",
+ "ETHERTYPE_DLOG2",
+ "ETHERTYPE_DN",
+ "ETHERTYPE_DOGFIGHT",
+ "ETHERTYPE_DSMD",
+ "ETHERTYPE_ECMA",
+ "ETHERTYPE_ENCRYPT",
+ "ETHERTYPE_ES",
+ "ETHERTYPE_EXCELAN",
+ "ETHERTYPE_EXPERDATA",
+ "ETHERTYPE_FLIP",
+ "ETHERTYPE_FLOWCONTROL",
+ "ETHERTYPE_FRARP",
+ "ETHERTYPE_GENDYN",
+ "ETHERTYPE_HAYES",
+ "ETHERTYPE_HIPPI_FP",
+ "ETHERTYPE_HITACHI",
+ "ETHERTYPE_HP",
+ "ETHERTYPE_IEEEPUP",
+ "ETHERTYPE_IEEEPUPAT",
+ "ETHERTYPE_IMLBL",
+ "ETHERTYPE_IMLBLDIAG",
+ "ETHERTYPE_IP",
+ "ETHERTYPE_IPAS",
+ "ETHERTYPE_IPV6",
+ "ETHERTYPE_IPX",
+ "ETHERTYPE_IPXNEW",
+ "ETHERTYPE_KALPANA",
+ "ETHERTYPE_LANBRIDGE",
+ "ETHERTYPE_LANPROBE",
+ "ETHERTYPE_LAT",
+ "ETHERTYPE_LBACK",
+ "ETHERTYPE_LITTLE",
+ "ETHERTYPE_LLDP",
+ "ETHERTYPE_LOGICRAFT",
+ "ETHERTYPE_LOOPBACK",
+ "ETHERTYPE_MATRA",
+ "ETHERTYPE_MAX",
+ "ETHERTYPE_MERIT",
+ "ETHERTYPE_MICP",
+ "ETHERTYPE_MOPDL",
+ "ETHERTYPE_MOPRC",
+ "ETHERTYPE_MOTOROLA",
+ "ETHERTYPE_MPLS",
+ "ETHERTYPE_MPLS_MCAST",
+ "ETHERTYPE_MUMPS",
+ "ETHERTYPE_NBPCC",
+ "ETHERTYPE_NBPCLAIM",
+ "ETHERTYPE_NBPCLREQ",
+ "ETHERTYPE_NBPCLRSP",
+ "ETHERTYPE_NBPCREQ",
+ "ETHERTYPE_NBPCRSP",
+ "ETHERTYPE_NBPDG",
+ "ETHERTYPE_NBPDGB",
+ "ETHERTYPE_NBPDLTE",
+ "ETHERTYPE_NBPRAR",
+ "ETHERTYPE_NBPRAS",
+ "ETHERTYPE_NBPRST",
+ "ETHERTYPE_NBPSCD",
+ "ETHERTYPE_NBPVCD",
+ "ETHERTYPE_NBS",
+ "ETHERTYPE_NCD",
+ "ETHERTYPE_NESTAR",
+ "ETHERTYPE_NETBEUI",
+ "ETHERTYPE_NOVELL",
+ "ETHERTYPE_NS",
+ "ETHERTYPE_NSAT",
+ "ETHERTYPE_NSCOMPAT",
+ "ETHERTYPE_NTRAILER",
+ "ETHERTYPE_OS9",
+ "ETHERTYPE_OS9NET",
+ "ETHERTYPE_PACER",
+ "ETHERTYPE_PAE",
+ "ETHERTYPE_PCS",
+ "ETHERTYPE_PLANNING",
+ "ETHERTYPE_PPP",
+ "ETHERTYPE_PPPOE",
+ "ETHERTYPE_PPPOEDISC",
+ "ETHERTYPE_PRIMENTS",
+ "ETHERTYPE_PUP",
+ "ETHERTYPE_PUPAT",
+ "ETHERTYPE_QINQ",
+ "ETHERTYPE_RACAL",
+ "ETHERTYPE_RATIONAL",
+ "ETHERTYPE_RAWFR",
+ "ETHERTYPE_RCL",
+ "ETHERTYPE_RDP",
+ "ETHERTYPE_RETIX",
+ "ETHERTYPE_REVARP",
+ "ETHERTYPE_SCA",
+ "ETHERTYPE_SECTRA",
+ "ETHERTYPE_SECUREDATA",
+ "ETHERTYPE_SGITW",
+ "ETHERTYPE_SG_BOUNCE",
+ "ETHERTYPE_SG_DIAG",
+ "ETHERTYPE_SG_NETGAMES",
+ "ETHERTYPE_SG_RESV",
+ "ETHERTYPE_SIMNET",
+ "ETHERTYPE_SLOW",
+ "ETHERTYPE_SLOWPROTOCOLS",
+ "ETHERTYPE_SNA",
+ "ETHERTYPE_SNMP",
+ "ETHERTYPE_SONIX",
+ "ETHERTYPE_SPIDER",
+ "ETHERTYPE_SPRITE",
+ "ETHERTYPE_STP",
+ "ETHERTYPE_TALARIS",
+ "ETHERTYPE_TALARISMC",
+ "ETHERTYPE_TCPCOMP",
+ "ETHERTYPE_TCPSM",
+ "ETHERTYPE_TEC",
+ "ETHERTYPE_TIGAN",
+ "ETHERTYPE_TRAIL",
+ "ETHERTYPE_TRANSETHER",
+ "ETHERTYPE_TYMSHARE",
+ "ETHERTYPE_UBBST",
+ "ETHERTYPE_UBDEBUG",
+ "ETHERTYPE_UBDIAGLOOP",
+ "ETHERTYPE_UBDL",
+ "ETHERTYPE_UBNIU",
+ "ETHERTYPE_UBNMC",
+ "ETHERTYPE_VALID",
+ "ETHERTYPE_VARIAN",
+ "ETHERTYPE_VAXELN",
+ "ETHERTYPE_VEECO",
+ "ETHERTYPE_VEXP",
+ "ETHERTYPE_VGLAB",
+ "ETHERTYPE_VINES",
+ "ETHERTYPE_VINESECHO",
+ "ETHERTYPE_VINESLOOP",
+ "ETHERTYPE_VITAL",
+ "ETHERTYPE_VLAN",
+ "ETHERTYPE_VLTLMAN",
+ "ETHERTYPE_VPROD",
+ "ETHERTYPE_VURESERVED",
+ "ETHERTYPE_WATERLOO",
+ "ETHERTYPE_WELLFLEET",
+ "ETHERTYPE_X25",
+ "ETHERTYPE_X75",
+ "ETHERTYPE_XNSSM",
+ "ETHERTYPE_XTP",
+ "ETHER_ADDR_LEN",
+ "ETHER_ALIGN",
+ "ETHER_CRC_LEN",
+ "ETHER_CRC_POLY_BE",
+ "ETHER_CRC_POLY_LE",
+ "ETHER_HDR_LEN",
+ "ETHER_MAX_DIX_LEN",
+ "ETHER_MAX_LEN",
+ "ETHER_MAX_LEN_JUMBO",
+ "ETHER_MIN_LEN",
+ "ETHER_PPPOE_ENCAP_LEN",
+ "ETHER_TYPE_LEN",
+ "ETHER_VLAN_ENCAP_LEN",
+ "ETH_P_1588",
+ "ETH_P_8021Q",
+ "ETH_P_802_2",
+ "ETH_P_802_3",
+ "ETH_P_AARP",
+ "ETH_P_ALL",
+ "ETH_P_AOE",
+ "ETH_P_ARCNET",
+ "ETH_P_ARP",
+ "ETH_P_ATALK",
+ "ETH_P_ATMFATE",
+ "ETH_P_ATMMPOA",
+ "ETH_P_AX25",
+ "ETH_P_BPQ",
+ "ETH_P_CAIF",
+ "ETH_P_CAN",
+ "ETH_P_CONTROL",
+ "ETH_P_CUST",
+ "ETH_P_DDCMP",
+ "ETH_P_DEC",
+ "ETH_P_DIAG",
+ "ETH_P_DNA_DL",
+ "ETH_P_DNA_RC",
+ "ETH_P_DNA_RT",
+ "ETH_P_DSA",
+ "ETH_P_ECONET",
+ "ETH_P_EDSA",
+ "ETH_P_FCOE",
+ "ETH_P_FIP",
+ "ETH_P_HDLC",
+ "ETH_P_IEEE802154",
+ "ETH_P_IEEEPUP",
+ "ETH_P_IEEEPUPAT",
+ "ETH_P_IP",
+ "ETH_P_IPV6",
+ "ETH_P_IPX",
+ "ETH_P_IRDA",
+ "ETH_P_LAT",
+ "ETH_P_LINK_CTL",
+ "ETH_P_LOCALTALK",
+ "ETH_P_LOOP",
+ "ETH_P_MOBITEX",
+ "ETH_P_MPLS_MC",
+ "ETH_P_MPLS_UC",
+ "ETH_P_PAE",
+ "ETH_P_PAUSE",
+ "ETH_P_PHONET",
+ "ETH_P_PPPTALK",
+ "ETH_P_PPP_DISC",
+ "ETH_P_PPP_MP",
+ "ETH_P_PPP_SES",
+ "ETH_P_PUP",
+ "ETH_P_PUPAT",
+ "ETH_P_RARP",
+ "ETH_P_SCA",
+ "ETH_P_SLOW",
+ "ETH_P_SNAP",
+ "ETH_P_TEB",
+ "ETH_P_TIPC",
+ "ETH_P_TRAILER",
+ "ETH_P_TR_802_2",
+ "ETH_P_WAN_PPP",
+ "ETH_P_WCCP",
+ "ETH_P_X25",
+ "ETIME",
+ "ETIMEDOUT",
+ "ETOOMANYREFS",
+ "ETXTBSY",
+ "EUCLEAN",
+ "EUNATCH",
+ "EUSERS",
+ "EVFILT_AIO",
+ "EVFILT_FS",
+ "EVFILT_LIO",
+ "EVFILT_MACHPORT",
+ "EVFILT_PROC",
+ "EVFILT_READ",
+ "EVFILT_SIGNAL",
+ "EVFILT_SYSCOUNT",
+ "EVFILT_THREADMARKER",
+ "EVFILT_TIMER",
+ "EVFILT_USER",
+ "EVFILT_VM",
+ "EVFILT_VNODE",
+ "EVFILT_WRITE",
+ "EV_ADD",
+ "EV_CLEAR",
+ "EV_DELETE",
+ "EV_DISABLE",
+ "EV_DISPATCH",
+ "EV_DROP",
+ "EV_ENABLE",
+ "EV_EOF",
+ "EV_ERROR",
+ "EV_FLAG0",
+ "EV_FLAG1",
+ "EV_ONESHOT",
+ "EV_OOBAND",
+ "EV_POLL",
+ "EV_RECEIPT",
+ "EV_SYSFLAGS",
+ "EWINDOWS",
+ "EWOULDBLOCK",
+ "EXDEV",
+ "EXFULL",
+ "EXTA",
+ "EXTB",
+ "EXTPROC",
+ "Environ",
+ "EpollCreate",
+ "EpollCreate1",
+ "EpollCtl",
+ "EpollEvent",
+ "EpollWait",
+ "Errno",
+ "EscapeArg",
+ "Exchangedata",
+ "Exec",
+ "Exit",
+ "ExitProcess",
+ "FD_CLOEXEC",
+ "FD_SETSIZE",
+ "FILE_ACTION_ADDED",
+ "FILE_ACTION_MODIFIED",
+ "FILE_ACTION_REMOVED",
+ "FILE_ACTION_RENAMED_NEW_NAME",
+ "FILE_ACTION_RENAMED_OLD_NAME",
+ "FILE_APPEND_DATA",
+ "FILE_ATTRIBUTE_ARCHIVE",
+ "FILE_ATTRIBUTE_DIRECTORY",
+ "FILE_ATTRIBUTE_HIDDEN",
+ "FILE_ATTRIBUTE_NORMAL",
+ "FILE_ATTRIBUTE_READONLY",
+ "FILE_ATTRIBUTE_REPARSE_POINT",
+ "FILE_ATTRIBUTE_SYSTEM",
+ "FILE_BEGIN",
+ "FILE_CURRENT",
+ "FILE_END",
+ "FILE_FLAG_BACKUP_SEMANTICS",
+ "FILE_FLAG_OPEN_REPARSE_POINT",
+ "FILE_FLAG_OVERLAPPED",
+ "FILE_LIST_DIRECTORY",
+ "FILE_MAP_COPY",
+ "FILE_MAP_EXECUTE",
+ "FILE_MAP_READ",
+ "FILE_MAP_WRITE",
+ "FILE_NOTIFY_CHANGE_ATTRIBUTES",
+ "FILE_NOTIFY_CHANGE_CREATION",
+ "FILE_NOTIFY_CHANGE_DIR_NAME",
+ "FILE_NOTIFY_CHANGE_FILE_NAME",
+ "FILE_NOTIFY_CHANGE_LAST_ACCESS",
+ "FILE_NOTIFY_CHANGE_LAST_WRITE",
+ "FILE_NOTIFY_CHANGE_SIZE",
+ "FILE_SHARE_DELETE",
+ "FILE_SHARE_READ",
+ "FILE_SHARE_WRITE",
+ "FILE_SKIP_COMPLETION_PORT_ON_SUCCESS",
+ "FILE_SKIP_SET_EVENT_ON_HANDLE",
+ "FILE_TYPE_CHAR",
+ "FILE_TYPE_DISK",
+ "FILE_TYPE_PIPE",
+ "FILE_TYPE_REMOTE",
+ "FILE_TYPE_UNKNOWN",
+ "FILE_WRITE_ATTRIBUTES",
+ "FLUSHO",
+ "FORMAT_MESSAGE_ALLOCATE_BUFFER",
+ "FORMAT_MESSAGE_ARGUMENT_ARRAY",
+ "FORMAT_MESSAGE_FROM_HMODULE",
+ "FORMAT_MESSAGE_FROM_STRING",
+ "FORMAT_MESSAGE_FROM_SYSTEM",
+ "FORMAT_MESSAGE_IGNORE_INSERTS",
+ "FORMAT_MESSAGE_MAX_WIDTH_MASK",
+ "FSCTL_GET_REPARSE_POINT",
+ "F_ADDFILESIGS",
+ "F_ADDSIGS",
+ "F_ALLOCATEALL",
+ "F_ALLOCATECONTIG",
+ "F_CANCEL",
+ "F_CHKCLEAN",
+ "F_CLOSEM",
+ "F_DUP2FD",
+ "F_DUP2FD_CLOEXEC",
+ "F_DUPFD",
+ "F_DUPFD_CLOEXEC",
+ "F_EXLCK",
+ "F_FLUSH_DATA",
+ "F_FREEZE_FS",
+ "F_FSCTL",
+ "F_FSDIRMASK",
+ "F_FSIN",
+ "F_FSINOUT",
+ "F_FSOUT",
+ "F_FSPRIV",
+ "F_FSVOID",
+ "F_FULLFSYNC",
+ "F_GETFD",
+ "F_GETFL",
+ "F_GETLEASE",
+ "F_GETLK",
+ "F_GETLK64",
+ "F_GETLKPID",
+ "F_GETNOSIGPIPE",
+ "F_GETOWN",
+ "F_GETOWN_EX",
+ "F_GETPATH",
+ "F_GETPATH_MTMINFO",
+ "F_GETPIPE_SZ",
+ "F_GETPROTECTIONCLASS",
+ "F_GETSIG",
+ "F_GLOBAL_NOCACHE",
+ "F_LOCK",
+ "F_LOG2PHYS",
+ "F_LOG2PHYS_EXT",
+ "F_MARKDEPENDENCY",
+ "F_MAXFD",
+ "F_NOCACHE",
+ "F_NODIRECT",
+ "F_NOTIFY",
+ "F_OGETLK",
+ "F_OK",
+ "F_OSETLK",
+ "F_OSETLKW",
+ "F_PARAM_MASK",
+ "F_PARAM_MAX",
+ "F_PATHPKG_CHECK",
+ "F_PEOFPOSMODE",
+ "F_PREALLOCATE",
+ "F_RDADVISE",
+ "F_RDAHEAD",
+ "F_RDLCK",
+ "F_READAHEAD",
+ "F_READBOOTSTRAP",
+ "F_SETBACKINGSTORE",
+ "F_SETFD",
+ "F_SETFL",
+ "F_SETLEASE",
+ "F_SETLK",
+ "F_SETLK64",
+ "F_SETLKW",
+ "F_SETLKW64",
+ "F_SETLK_REMOTE",
+ "F_SETNOSIGPIPE",
+ "F_SETOWN",
+ "F_SETOWN_EX",
+ "F_SETPIPE_SZ",
+ "F_SETPROTECTIONCLASS",
+ "F_SETSIG",
+ "F_SETSIZE",
+ "F_SHLCK",
+ "F_TEST",
+ "F_THAW_FS",
+ "F_TLOCK",
+ "F_ULOCK",
+ "F_UNLCK",
+ "F_UNLCKSYS",
+ "F_VOLPOSMODE",
+ "F_WRITEBOOTSTRAP",
+ "F_WRLCK",
+ "Faccessat",
+ "Fallocate",
+ "Fbootstraptransfer_t",
+ "Fchdir",
+ "Fchflags",
+ "Fchmod",
+ "Fchmodat",
+ "Fchown",
+ "Fchownat",
+ "FcntlFlock",
+ "FdSet",
+ "Fdatasync",
+ "FileNotifyInformation",
+ "Filetime",
+ "FindClose",
+ "FindFirstFile",
+ "FindNextFile",
+ "Flock",
+ "Flock_t",
+ "FlushBpf",
+ "FlushFileBuffers",
+ "FlushViewOfFile",
+ "ForkExec",
+ "ForkLock",
+ "FormatMessage",
+ "Fpathconf",
+ "FreeAddrInfoW",
+ "FreeEnvironmentStrings",
+ "FreeLibrary",
+ "Fsid",
+ "Fstat",
+ "Fstatat",
+ "Fstatfs",
+ "Fstore_t",
+ "Fsync",
+ "Ftruncate",
+ "FullPath",
+ "Futimes",
+ "Futimesat",
+ "GENERIC_ALL",
+ "GENERIC_EXECUTE",
+ "GENERIC_READ",
+ "GENERIC_WRITE",
+ "GUID",
+ "GetAcceptExSockaddrs",
+ "GetAdaptersInfo",
+ "GetAddrInfoW",
+ "GetCommandLine",
+ "GetComputerName",
+ "GetConsoleMode",
+ "GetCurrentDirectory",
+ "GetCurrentProcess",
+ "GetEnvironmentStrings",
+ "GetEnvironmentVariable",
+ "GetExitCodeProcess",
+ "GetFileAttributes",
+ "GetFileAttributesEx",
+ "GetFileExInfoStandard",
+ "GetFileExMaxInfoLevel",
+ "GetFileInformationByHandle",
+ "GetFileType",
+ "GetFullPathName",
+ "GetHostByName",
+ "GetIfEntry",
+ "GetLastError",
+ "GetLengthSid",
+ "GetLongPathName",
+ "GetProcAddress",
+ "GetProcessTimes",
+ "GetProtoByName",
+ "GetQueuedCompletionStatus",
+ "GetServByName",
+ "GetShortPathName",
+ "GetStartupInfo",
+ "GetStdHandle",
+ "GetSystemTimeAsFileTime",
+ "GetTempPath",
+ "GetTimeZoneInformation",
+ "GetTokenInformation",
+ "GetUserNameEx",
+ "GetUserProfileDirectory",
+ "GetVersion",
+ "Getcwd",
+ "Getdents",
+ "Getdirentries",
+ "Getdtablesize",
+ "Getegid",
+ "Getenv",
+ "Geteuid",
+ "Getfsstat",
+ "Getgid",
+ "Getgroups",
+ "Getpagesize",
+ "Getpeername",
+ "Getpgid",
+ "Getpgrp",
+ "Getpid",
+ "Getppid",
+ "Getpriority",
+ "Getrlimit",
+ "Getrusage",
+ "Getsid",
+ "Getsockname",
+ "Getsockopt",
+ "GetsockoptByte",
+ "GetsockoptICMPv6Filter",
+ "GetsockoptIPMreq",
+ "GetsockoptIPMreqn",
+ "GetsockoptIPv6MTUInfo",
+ "GetsockoptIPv6Mreq",
+ "GetsockoptInet4Addr",
+ "GetsockoptInt",
+ "GetsockoptUcred",
+ "Gettid",
+ "Gettimeofday",
+ "Getuid",
+ "Getwd",
+ "Getxattr",
+ "HANDLE_FLAG_INHERIT",
+ "HKEY_CLASSES_ROOT",
+ "HKEY_CURRENT_CONFIG",
+ "HKEY_CURRENT_USER",
+ "HKEY_DYN_DATA",
+ "HKEY_LOCAL_MACHINE",
+ "HKEY_PERFORMANCE_DATA",
+ "HKEY_USERS",
+ "HUPCL",
+ "Handle",
+ "Hostent",
+ "ICANON",
+ "ICMP6_FILTER",
+ "ICMPV6_FILTER",
+ "ICMPv6Filter",
+ "ICRNL",
+ "IEXTEN",
+ "IFAN_ARRIVAL",
+ "IFAN_DEPARTURE",
+ "IFA_ADDRESS",
+ "IFA_ANYCAST",
+ "IFA_BROADCAST",
+ "IFA_CACHEINFO",
+ "IFA_F_DADFAILED",
+ "IFA_F_DEPRECATED",
+ "IFA_F_HOMEADDRESS",
+ "IFA_F_NODAD",
+ "IFA_F_OPTIMISTIC",
+ "IFA_F_PERMANENT",
+ "IFA_F_SECONDARY",
+ "IFA_F_TEMPORARY",
+ "IFA_F_TENTATIVE",
+ "IFA_LABEL",
+ "IFA_LOCAL",
+ "IFA_MAX",
+ "IFA_MULTICAST",
+ "IFA_ROUTE",
+ "IFA_UNSPEC",
+ "IFF_ALLMULTI",
+ "IFF_ALTPHYS",
+ "IFF_AUTOMEDIA",
+ "IFF_BROADCAST",
+ "IFF_CANTCHANGE",
+ "IFF_CANTCONFIG",
+ "IFF_DEBUG",
+ "IFF_DRV_OACTIVE",
+ "IFF_DRV_RUNNING",
+ "IFF_DYING",
+ "IFF_DYNAMIC",
+ "IFF_LINK0",
+ "IFF_LINK1",
+ "IFF_LINK2",
+ "IFF_LOOPBACK",
+ "IFF_MASTER",
+ "IFF_MONITOR",
+ "IFF_MULTICAST",
+ "IFF_NOARP",
+ "IFF_NOTRAILERS",
+ "IFF_NO_PI",
+ "IFF_OACTIVE",
+ "IFF_ONE_QUEUE",
+ "IFF_POINTOPOINT",
+ "IFF_POINTTOPOINT",
+ "IFF_PORTSEL",
+ "IFF_PPROMISC",
+ "IFF_PROMISC",
+ "IFF_RENAMING",
+ "IFF_RUNNING",
+ "IFF_SIMPLEX",
+ "IFF_SLAVE",
+ "IFF_SMART",
+ "IFF_STATICARP",
+ "IFF_TAP",
+ "IFF_TUN",
+ "IFF_TUN_EXCL",
+ "IFF_UP",
+ "IFF_VNET_HDR",
+ "IFLA_ADDRESS",
+ "IFLA_BROADCAST",
+ "IFLA_COST",
+ "IFLA_IFALIAS",
+ "IFLA_IFNAME",
+ "IFLA_LINK",
+ "IFLA_LINKINFO",
+ "IFLA_LINKMODE",
+ "IFLA_MAP",
+ "IFLA_MASTER",
+ "IFLA_MAX",
+ "IFLA_MTU",
+ "IFLA_NET_NS_PID",
+ "IFLA_OPERSTATE",
+ "IFLA_PRIORITY",
+ "IFLA_PROTINFO",
+ "IFLA_QDISC",
+ "IFLA_STATS",
+ "IFLA_TXQLEN",
+ "IFLA_UNSPEC",
+ "IFLA_WEIGHT",
+ "IFLA_WIRELESS",
+ "IFNAMSIZ",
+ "IFT_1822",
+ "IFT_A12MPPSWITCH",
+ "IFT_AAL2",
+ "IFT_AAL5",
+ "IFT_ADSL",
+ "IFT_AFLANE8023",
+ "IFT_AFLANE8025",
+ "IFT_ARAP",
+ "IFT_ARCNET",
+ "IFT_ARCNETPLUS",
+ "IFT_ASYNC",
+ "IFT_ATM",
+ "IFT_ATMDXI",
+ "IFT_ATMFUNI",
+ "IFT_ATMIMA",
+ "IFT_ATMLOGICAL",
+ "IFT_ATMRADIO",
+ "IFT_ATMSUBINTERFACE",
+ "IFT_ATMVCIENDPT",
+ "IFT_ATMVIRTUAL",
+ "IFT_BGPPOLICYACCOUNTING",
+ "IFT_BLUETOOTH",
+ "IFT_BRIDGE",
+ "IFT_BSC",
+ "IFT_CARP",
+ "IFT_CCTEMUL",
+ "IFT_CELLULAR",
+ "IFT_CEPT",
+ "IFT_CES",
+ "IFT_CHANNEL",
+ "IFT_CNR",
+ "IFT_COFFEE",
+ "IFT_COMPOSITELINK",
+ "IFT_DCN",
+ "IFT_DIGITALPOWERLINE",
+ "IFT_DIGITALWRAPPEROVERHEADCHANNEL",
+ "IFT_DLSW",
+ "IFT_DOCSCABLEDOWNSTREAM",
+ "IFT_DOCSCABLEMACLAYER",
+ "IFT_DOCSCABLEUPSTREAM",
+ "IFT_DOCSCABLEUPSTREAMCHANNEL",
+ "IFT_DS0",
+ "IFT_DS0BUNDLE",
+ "IFT_DS1FDL",
+ "IFT_DS3",
+ "IFT_DTM",
+ "IFT_DUMMY",
+ "IFT_DVBASILN",
+ "IFT_DVBASIOUT",
+ "IFT_DVBRCCDOWNSTREAM",
+ "IFT_DVBRCCMACLAYER",
+ "IFT_DVBRCCUPSTREAM",
+ "IFT_ECONET",
+ "IFT_ENC",
+ "IFT_EON",
+ "IFT_EPLRS",
+ "IFT_ESCON",
+ "IFT_ETHER",
+ "IFT_FAITH",
+ "IFT_FAST",
+ "IFT_FASTETHER",
+ "IFT_FASTETHERFX",
+ "IFT_FDDI",
+ "IFT_FIBRECHANNEL",
+ "IFT_FRAMERELAYINTERCONNECT",
+ "IFT_FRAMERELAYMPI",
+ "IFT_FRDLCIENDPT",
+ "IFT_FRELAY",
+ "IFT_FRELAYDCE",
+ "IFT_FRF16MFRBUNDLE",
+ "IFT_FRFORWARD",
+ "IFT_G703AT2MB",
+ "IFT_G703AT64K",
+ "IFT_GIF",
+ "IFT_GIGABITETHERNET",
+ "IFT_GR303IDT",
+ "IFT_GR303RDT",
+ "IFT_H323GATEKEEPER",
+ "IFT_H323PROXY",
+ "IFT_HDH1822",
+ "IFT_HDLC",
+ "IFT_HDSL2",
+ "IFT_HIPERLAN2",
+ "IFT_HIPPI",
+ "IFT_HIPPIINTERFACE",
+ "IFT_HOSTPAD",
+ "IFT_HSSI",
+ "IFT_HY",
+ "IFT_IBM370PARCHAN",
+ "IFT_IDSL",
+ "IFT_IEEE1394",
+ "IFT_IEEE80211",
+ "IFT_IEEE80212",
+ "IFT_IEEE8023ADLAG",
+ "IFT_IFGSN",
+ "IFT_IMT",
+ "IFT_INFINIBAND",
+ "IFT_INTERLEAVE",
+ "IFT_IP",
+ "IFT_IPFORWARD",
+ "IFT_IPOVERATM",
+ "IFT_IPOVERCDLC",
+ "IFT_IPOVERCLAW",
+ "IFT_IPSWITCH",
+ "IFT_IPXIP",
+ "IFT_ISDN",
+ "IFT_ISDNBASIC",
+ "IFT_ISDNPRIMARY",
+ "IFT_ISDNS",
+ "IFT_ISDNU",
+ "IFT_ISO88022LLC",
+ "IFT_ISO88023",
+ "IFT_ISO88024",
+ "IFT_ISO88025",
+ "IFT_ISO88025CRFPINT",
+ "IFT_ISO88025DTR",
+ "IFT_ISO88025FIBER",
+ "IFT_ISO88026",
+ "IFT_ISUP",
+ "IFT_L2VLAN",
+ "IFT_L3IPVLAN",
+ "IFT_L3IPXVLAN",
+ "IFT_LAPB",
+ "IFT_LAPD",
+ "IFT_LAPF",
+ "IFT_LINEGROUP",
+ "IFT_LOCALTALK",
+ "IFT_LOOP",
+ "IFT_MEDIAMAILOVERIP",
+ "IFT_MFSIGLINK",
+ "IFT_MIOX25",
+ "IFT_MODEM",
+ "IFT_MPC",
+ "IFT_MPLS",
+ "IFT_MPLSTUNNEL",
+ "IFT_MSDSL",
+ "IFT_MVL",
+ "IFT_MYRINET",
+ "IFT_NFAS",
+ "IFT_NSIP",
+ "IFT_OPTICALCHANNEL",
+ "IFT_OPTICALTRANSPORT",
+ "IFT_OTHER",
+ "IFT_P10",
+ "IFT_P80",
+ "IFT_PARA",
+ "IFT_PDP",
+ "IFT_PFLOG",
+ "IFT_PFLOW",
+ "IFT_PFSYNC",
+ "IFT_PLC",
+ "IFT_PON155",
+ "IFT_PON622",
+ "IFT_POS",
+ "IFT_PPP",
+ "IFT_PPPMULTILINKBUNDLE",
+ "IFT_PROPATM",
+ "IFT_PROPBWAP2MP",
+ "IFT_PROPCNLS",
+ "IFT_PROPDOCSWIRELESSDOWNSTREAM",
+ "IFT_PROPDOCSWIRELESSMACLAYER",
+ "IFT_PROPDOCSWIRELESSUPSTREAM",
+ "IFT_PROPMUX",
+ "IFT_PROPVIRTUAL",
+ "IFT_PROPWIRELESSP2P",
+ "IFT_PTPSERIAL",
+ "IFT_PVC",
+ "IFT_Q2931",
+ "IFT_QLLC",
+ "IFT_RADIOMAC",
+ "IFT_RADSL",
+ "IFT_REACHDSL",
+ "IFT_RFC1483",
+ "IFT_RS232",
+ "IFT_RSRB",
+ "IFT_SDLC",
+ "IFT_SDSL",
+ "IFT_SHDSL",
+ "IFT_SIP",
+ "IFT_SIPSIG",
+ "IFT_SIPTG",
+ "IFT_SLIP",
+ "IFT_SMDSDXI",
+ "IFT_SMDSICIP",
+ "IFT_SONET",
+ "IFT_SONETOVERHEADCHANNEL",
+ "IFT_SONETPATH",
+ "IFT_SONETVT",
+ "IFT_SRP",
+ "IFT_SS7SIGLINK",
+ "IFT_STACKTOSTACK",
+ "IFT_STARLAN",
+ "IFT_STF",
+ "IFT_T1",
+ "IFT_TDLC",
+ "IFT_TELINK",
+ "IFT_TERMPAD",
+ "IFT_TR008",
+ "IFT_TRANSPHDLC",
+ "IFT_TUNNEL",
+ "IFT_ULTRA",
+ "IFT_USB",
+ "IFT_V11",
+ "IFT_V35",
+ "IFT_V36",
+ "IFT_V37",
+ "IFT_VDSL",
+ "IFT_VIRTUALIPADDRESS",
+ "IFT_VIRTUALTG",
+ "IFT_VOICEDID",
+ "IFT_VOICEEM",
+ "IFT_VOICEEMFGD",
+ "IFT_VOICEENCAP",
+ "IFT_VOICEFGDEANA",
+ "IFT_VOICEFXO",
+ "IFT_VOICEFXS",
+ "IFT_VOICEOVERATM",
+ "IFT_VOICEOVERCABLE",
+ "IFT_VOICEOVERFRAMERELAY",
+ "IFT_VOICEOVERIP",
+ "IFT_X213",
+ "IFT_X25",
+ "IFT_X25DDN",
+ "IFT_X25HUNTGROUP",
+ "IFT_X25MLP",
+ "IFT_X25PLE",
+ "IFT_XETHER",
+ "IGNBRK",
+ "IGNCR",
+ "IGNORE",
+ "IGNPAR",
+ "IMAXBEL",
+ "INFINITE",
+ "INLCR",
+ "INPCK",
+ "INVALID_FILE_ATTRIBUTES",
+ "IN_ACCESS",
+ "IN_ALL_EVENTS",
+ "IN_ATTRIB",
+ "IN_CLASSA_HOST",
+ "IN_CLASSA_MAX",
+ "IN_CLASSA_NET",
+ "IN_CLASSA_NSHIFT",
+ "IN_CLASSB_HOST",
+ "IN_CLASSB_MAX",
+ "IN_CLASSB_NET",
+ "IN_CLASSB_NSHIFT",
+ "IN_CLASSC_HOST",
+ "IN_CLASSC_NET",
+ "IN_CLASSC_NSHIFT",
+ "IN_CLASSD_HOST",
+ "IN_CLASSD_NET",
+ "IN_CLASSD_NSHIFT",
+ "IN_CLOEXEC",
+ "IN_CLOSE",
+ "IN_CLOSE_NOWRITE",
+ "IN_CLOSE_WRITE",
+ "IN_CREATE",
+ "IN_DELETE",
+ "IN_DELETE_SELF",
+ "IN_DONT_FOLLOW",
+ "IN_EXCL_UNLINK",
+ "IN_IGNORED",
+ "IN_ISDIR",
+ "IN_LINKLOCALNETNUM",
+ "IN_LOOPBACKNET",
+ "IN_MASK_ADD",
+ "IN_MODIFY",
+ "IN_MOVE",
+ "IN_MOVED_FROM",
+ "IN_MOVED_TO",
+ "IN_MOVE_SELF",
+ "IN_NONBLOCK",
+ "IN_ONESHOT",
+ "IN_ONLYDIR",
+ "IN_OPEN",
+ "IN_Q_OVERFLOW",
+ "IN_RFC3021_HOST",
+ "IN_RFC3021_MASK",
+ "IN_RFC3021_NET",
+ "IN_RFC3021_NSHIFT",
+ "IN_UNMOUNT",
+ "IOC_IN",
+ "IOC_INOUT",
+ "IOC_OUT",
+ "IOC_VENDOR",
+ "IOC_WS2",
+ "IO_REPARSE_TAG_SYMLINK",
+ "IPMreq",
+ "IPMreqn",
+ "IPPROTO_3PC",
+ "IPPROTO_ADFS",
+ "IPPROTO_AH",
+ "IPPROTO_AHIP",
+ "IPPROTO_APES",
+ "IPPROTO_ARGUS",
+ "IPPROTO_AX25",
+ "IPPROTO_BHA",
+ "IPPROTO_BLT",
+ "IPPROTO_BRSATMON",
+ "IPPROTO_CARP",
+ "IPPROTO_CFTP",
+ "IPPROTO_CHAOS",
+ "IPPROTO_CMTP",
+ "IPPROTO_COMP",
+ "IPPROTO_CPHB",
+ "IPPROTO_CPNX",
+ "IPPROTO_DCCP",
+ "IPPROTO_DDP",
+ "IPPROTO_DGP",
+ "IPPROTO_DIVERT",
+ "IPPROTO_DIVERT_INIT",
+ "IPPROTO_DIVERT_RESP",
+ "IPPROTO_DONE",
+ "IPPROTO_DSTOPTS",
+ "IPPROTO_EGP",
+ "IPPROTO_EMCON",
+ "IPPROTO_ENCAP",
+ "IPPROTO_EON",
+ "IPPROTO_ESP",
+ "IPPROTO_ETHERIP",
+ "IPPROTO_FRAGMENT",
+ "IPPROTO_GGP",
+ "IPPROTO_GMTP",
+ "IPPROTO_GRE",
+ "IPPROTO_HELLO",
+ "IPPROTO_HMP",
+ "IPPROTO_HOPOPTS",
+ "IPPROTO_ICMP",
+ "IPPROTO_ICMPV6",
+ "IPPROTO_IDP",
+ "IPPROTO_IDPR",
+ "IPPROTO_IDRP",
+ "IPPROTO_IGMP",
+ "IPPROTO_IGP",
+ "IPPROTO_IGRP",
+ "IPPROTO_IL",
+ "IPPROTO_INLSP",
+ "IPPROTO_INP",
+ "IPPROTO_IP",
+ "IPPROTO_IPCOMP",
+ "IPPROTO_IPCV",
+ "IPPROTO_IPEIP",
+ "IPPROTO_IPIP",
+ "IPPROTO_IPPC",
+ "IPPROTO_IPV4",
+ "IPPROTO_IPV6",
+ "IPPROTO_IPV6_ICMP",
+ "IPPROTO_IRTP",
+ "IPPROTO_KRYPTOLAN",
+ "IPPROTO_LARP",
+ "IPPROTO_LEAF1",
+ "IPPROTO_LEAF2",
+ "IPPROTO_MAX",
+ "IPPROTO_MAXID",
+ "IPPROTO_MEAS",
+ "IPPROTO_MH",
+ "IPPROTO_MHRP",
+ "IPPROTO_MICP",
+ "IPPROTO_MOBILE",
+ "IPPROTO_MPLS",
+ "IPPROTO_MTP",
+ "IPPROTO_MUX",
+ "IPPROTO_ND",
+ "IPPROTO_NHRP",
+ "IPPROTO_NONE",
+ "IPPROTO_NSP",
+ "IPPROTO_NVPII",
+ "IPPROTO_OLD_DIVERT",
+ "IPPROTO_OSPFIGP",
+ "IPPROTO_PFSYNC",
+ "IPPROTO_PGM",
+ "IPPROTO_PIGP",
+ "IPPROTO_PIM",
+ "IPPROTO_PRM",
+ "IPPROTO_PUP",
+ "IPPROTO_PVP",
+ "IPPROTO_RAW",
+ "IPPROTO_RCCMON",
+ "IPPROTO_RDP",
+ "IPPROTO_ROUTING",
+ "IPPROTO_RSVP",
+ "IPPROTO_RVD",
+ "IPPROTO_SATEXPAK",
+ "IPPROTO_SATMON",
+ "IPPROTO_SCCSP",
+ "IPPROTO_SCTP",
+ "IPPROTO_SDRP",
+ "IPPROTO_SEND",
+ "IPPROTO_SEP",
+ "IPPROTO_SKIP",
+ "IPPROTO_SPACER",
+ "IPPROTO_SRPC",
+ "IPPROTO_ST",
+ "IPPROTO_SVMTP",
+ "IPPROTO_SWIPE",
+ "IPPROTO_TCF",
+ "IPPROTO_TCP",
+ "IPPROTO_TLSP",
+ "IPPROTO_TP",
+ "IPPROTO_TPXX",
+ "IPPROTO_TRUNK1",
+ "IPPROTO_TRUNK2",
+ "IPPROTO_TTP",
+ "IPPROTO_UDP",
+ "IPPROTO_UDPLITE",
+ "IPPROTO_VINES",
+ "IPPROTO_VISA",
+ "IPPROTO_VMTP",
+ "IPPROTO_VRRP",
+ "IPPROTO_WBEXPAK",
+ "IPPROTO_WBMON",
+ "IPPROTO_WSN",
+ "IPPROTO_XNET",
+ "IPPROTO_XTP",
+ "IPV6_2292DSTOPTS",
+ "IPV6_2292HOPLIMIT",
+ "IPV6_2292HOPOPTS",
+ "IPV6_2292NEXTHOP",
+ "IPV6_2292PKTINFO",
+ "IPV6_2292PKTOPTIONS",
+ "IPV6_2292RTHDR",
+ "IPV6_ADDRFORM",
+ "IPV6_ADD_MEMBERSHIP",
+ "IPV6_AUTHHDR",
+ "IPV6_AUTH_LEVEL",
+ "IPV6_AUTOFLOWLABEL",
+ "IPV6_BINDANY",
+ "IPV6_BINDV6ONLY",
+ "IPV6_BOUND_IF",
+ "IPV6_CHECKSUM",
+ "IPV6_DEFAULT_MULTICAST_HOPS",
+ "IPV6_DEFAULT_MULTICAST_LOOP",
+ "IPV6_DEFHLIM",
+ "IPV6_DONTFRAG",
+ "IPV6_DROP_MEMBERSHIP",
+ "IPV6_DSTOPTS",
+ "IPV6_ESP_NETWORK_LEVEL",
+ "IPV6_ESP_TRANS_LEVEL",
+ "IPV6_FAITH",
+ "IPV6_FLOWINFO_MASK",
+ "IPV6_FLOWLABEL_MASK",
+ "IPV6_FRAGTTL",
+ "IPV6_FW_ADD",
+ "IPV6_FW_DEL",
+ "IPV6_FW_FLUSH",
+ "IPV6_FW_GET",
+ "IPV6_FW_ZERO",
+ "IPV6_HLIMDEC",
+ "IPV6_HOPLIMIT",
+ "IPV6_HOPOPTS",
+ "IPV6_IPCOMP_LEVEL",
+ "IPV6_IPSEC_POLICY",
+ "IPV6_JOIN_ANYCAST",
+ "IPV6_JOIN_GROUP",
+ "IPV6_LEAVE_ANYCAST",
+ "IPV6_LEAVE_GROUP",
+ "IPV6_MAXHLIM",
+ "IPV6_MAXOPTHDR",
+ "IPV6_MAXPACKET",
+ "IPV6_MAX_GROUP_SRC_FILTER",
+ "IPV6_MAX_MEMBERSHIPS",
+ "IPV6_MAX_SOCK_SRC_FILTER",
+ "IPV6_MIN_MEMBERSHIPS",
+ "IPV6_MMTU",
+ "IPV6_MSFILTER",
+ "IPV6_MTU",
+ "IPV6_MTU_DISCOVER",
+ "IPV6_MULTICAST_HOPS",
+ "IPV6_MULTICAST_IF",
+ "IPV6_MULTICAST_LOOP",
+ "IPV6_NEXTHOP",
+ "IPV6_OPTIONS",
+ "IPV6_PATHMTU",
+ "IPV6_PIPEX",
+ "IPV6_PKTINFO",
+ "IPV6_PMTUDISC_DO",
+ "IPV6_PMTUDISC_DONT",
+ "IPV6_PMTUDISC_PROBE",
+ "IPV6_PMTUDISC_WANT",
+ "IPV6_PORTRANGE",
+ "IPV6_PORTRANGE_DEFAULT",
+ "IPV6_PORTRANGE_HIGH",
+ "IPV6_PORTRANGE_LOW",
+ "IPV6_PREFER_TEMPADDR",
+ "IPV6_RECVDSTOPTS",
+ "IPV6_RECVDSTPORT",
+ "IPV6_RECVERR",
+ "IPV6_RECVHOPLIMIT",
+ "IPV6_RECVHOPOPTS",
+ "IPV6_RECVPATHMTU",
+ "IPV6_RECVPKTINFO",
+ "IPV6_RECVRTHDR",
+ "IPV6_RECVTCLASS",
+ "IPV6_ROUTER_ALERT",
+ "IPV6_RTABLE",
+ "IPV6_RTHDR",
+ "IPV6_RTHDRDSTOPTS",
+ "IPV6_RTHDR_LOOSE",
+ "IPV6_RTHDR_STRICT",
+ "IPV6_RTHDR_TYPE_0",
+ "IPV6_RXDSTOPTS",
+ "IPV6_RXHOPOPTS",
+ "IPV6_SOCKOPT_RESERVED1",
+ "IPV6_TCLASS",
+ "IPV6_UNICAST_HOPS",
+ "IPV6_USE_MIN_MTU",
+ "IPV6_V6ONLY",
+ "IPV6_VERSION",
+ "IPV6_VERSION_MASK",
+ "IPV6_XFRM_POLICY",
+ "IP_ADD_MEMBERSHIP",
+ "IP_ADD_SOURCE_MEMBERSHIP",
+ "IP_AUTH_LEVEL",
+ "IP_BINDANY",
+ "IP_BLOCK_SOURCE",
+ "IP_BOUND_IF",
+ "IP_DEFAULT_MULTICAST_LOOP",
+ "IP_DEFAULT_MULTICAST_TTL",
+ "IP_DF",
+ "IP_DIVERTFL",
+ "IP_DONTFRAG",
+ "IP_DROP_MEMBERSHIP",
+ "IP_DROP_SOURCE_MEMBERSHIP",
+ "IP_DUMMYNET3",
+ "IP_DUMMYNET_CONFIGURE",
+ "IP_DUMMYNET_DEL",
+ "IP_DUMMYNET_FLUSH",
+ "IP_DUMMYNET_GET",
+ "IP_EF",
+ "IP_ERRORMTU",
+ "IP_ESP_NETWORK_LEVEL",
+ "IP_ESP_TRANS_LEVEL",
+ "IP_FAITH",
+ "IP_FREEBIND",
+ "IP_FW3",
+ "IP_FW_ADD",
+ "IP_FW_DEL",
+ "IP_FW_FLUSH",
+ "IP_FW_GET",
+ "IP_FW_NAT_CFG",
+ "IP_FW_NAT_DEL",
+ "IP_FW_NAT_GET_CONFIG",
+ "IP_FW_NAT_GET_LOG",
+ "IP_FW_RESETLOG",
+ "IP_FW_TABLE_ADD",
+ "IP_FW_TABLE_DEL",
+ "IP_FW_TABLE_FLUSH",
+ "IP_FW_TABLE_GETSIZE",
+ "IP_FW_TABLE_LIST",
+ "IP_FW_ZERO",
+ "IP_HDRINCL",
+ "IP_IPCOMP_LEVEL",
+ "IP_IPSECFLOWINFO",
+ "IP_IPSEC_LOCAL_AUTH",
+ "IP_IPSEC_LOCAL_CRED",
+ "IP_IPSEC_LOCAL_ID",
+ "IP_IPSEC_POLICY",
+ "IP_IPSEC_REMOTE_AUTH",
+ "IP_IPSEC_REMOTE_CRED",
+ "IP_IPSEC_REMOTE_ID",
+ "IP_MAXPACKET",
+ "IP_MAX_GROUP_SRC_FILTER",
+ "IP_MAX_MEMBERSHIPS",
+ "IP_MAX_SOCK_MUTE_FILTER",
+ "IP_MAX_SOCK_SRC_FILTER",
+ "IP_MAX_SOURCE_FILTER",
+ "IP_MF",
+ "IP_MINFRAGSIZE",
+ "IP_MINTTL",
+ "IP_MIN_MEMBERSHIPS",
+ "IP_MSFILTER",
+ "IP_MSS",
+ "IP_MTU",
+ "IP_MTU_DISCOVER",
+ "IP_MULTICAST_IF",
+ "IP_MULTICAST_IFINDEX",
+ "IP_MULTICAST_LOOP",
+ "IP_MULTICAST_TTL",
+ "IP_MULTICAST_VIF",
+ "IP_NAT__XXX",
+ "IP_OFFMASK",
+ "IP_OLD_FW_ADD",
+ "IP_OLD_FW_DEL",
+ "IP_OLD_FW_FLUSH",
+ "IP_OLD_FW_GET",
+ "IP_OLD_FW_RESETLOG",
+ "IP_OLD_FW_ZERO",
+ "IP_ONESBCAST",
+ "IP_OPTIONS",
+ "IP_ORIGDSTADDR",
+ "IP_PASSSEC",
+ "IP_PIPEX",
+ "IP_PKTINFO",
+ "IP_PKTOPTIONS",
+ "IP_PMTUDISC",
+ "IP_PMTUDISC_DO",
+ "IP_PMTUDISC_DONT",
+ "IP_PMTUDISC_PROBE",
+ "IP_PMTUDISC_WANT",
+ "IP_PORTRANGE",
+ "IP_PORTRANGE_DEFAULT",
+ "IP_PORTRANGE_HIGH",
+ "IP_PORTRANGE_LOW",
+ "IP_RECVDSTADDR",
+ "IP_RECVDSTPORT",
+ "IP_RECVERR",
+ "IP_RECVIF",
+ "IP_RECVOPTS",
+ "IP_RECVORIGDSTADDR",
+ "IP_RECVPKTINFO",
+ "IP_RECVRETOPTS",
+ "IP_RECVRTABLE",
+ "IP_RECVTOS",
+ "IP_RECVTTL",
+ "IP_RETOPTS",
+ "IP_RF",
+ "IP_ROUTER_ALERT",
+ "IP_RSVP_OFF",
+ "IP_RSVP_ON",
+ "IP_RSVP_VIF_OFF",
+ "IP_RSVP_VIF_ON",
+ "IP_RTABLE",
+ "IP_SENDSRCADDR",
+ "IP_STRIPHDR",
+ "IP_TOS",
+ "IP_TRAFFIC_MGT_BACKGROUND",
+ "IP_TRANSPARENT",
+ "IP_TTL",
+ "IP_UNBLOCK_SOURCE",
+ "IP_XFRM_POLICY",
+ "IPv6MTUInfo",
+ "IPv6Mreq",
+ "ISIG",
+ "ISTRIP",
+ "IUCLC",
+ "IUTF8",
+ "IXANY",
+ "IXOFF",
+ "IXON",
+ "IfAddrmsg",
+ "IfAnnounceMsghdr",
+ "IfData",
+ "IfInfomsg",
+ "IfMsghdr",
+ "IfaMsghdr",
+ "IfmaMsghdr",
+ "IfmaMsghdr2",
+ "ImplementsGetwd",
+ "Inet4Pktinfo",
+ "Inet6Pktinfo",
+ "InotifyAddWatch",
+ "InotifyEvent",
+ "InotifyInit",
+ "InotifyInit1",
+ "InotifyRmWatch",
+ "InterfaceAddrMessage",
+ "InterfaceAnnounceMessage",
+ "InterfaceInfo",
+ "InterfaceMessage",
+ "InterfaceMulticastAddrMessage",
+ "InvalidHandle",
+ "Ioperm",
+ "Iopl",
+ "Iovec",
+ "IpAdapterInfo",
+ "IpAddrString",
+ "IpAddressString",
+ "IpMaskString",
+ "Issetugid",
+ "KEY_ALL_ACCESS",
+ "KEY_CREATE_LINK",
+ "KEY_CREATE_SUB_KEY",
+ "KEY_ENUMERATE_SUB_KEYS",
+ "KEY_EXECUTE",
+ "KEY_NOTIFY",
+ "KEY_QUERY_VALUE",
+ "KEY_READ",
+ "KEY_SET_VALUE",
+ "KEY_WOW64_32KEY",
+ "KEY_WOW64_64KEY",
+ "KEY_WRITE",
+ "Kevent",
+ "Kevent_t",
+ "Kill",
+ "Klogctl",
+ "Kqueue",
+ "LANG_ENGLISH",
+ "LAYERED_PROTOCOL",
+ "LCNT_OVERLOAD_FLUSH",
+ "LINUX_REBOOT_CMD_CAD_OFF",
+ "LINUX_REBOOT_CMD_CAD_ON",
+ "LINUX_REBOOT_CMD_HALT",
+ "LINUX_REBOOT_CMD_KEXEC",
+ "LINUX_REBOOT_CMD_POWER_OFF",
+ "LINUX_REBOOT_CMD_RESTART",
+ "LINUX_REBOOT_CMD_RESTART2",
+ "LINUX_REBOOT_CMD_SW_SUSPEND",
+ "LINUX_REBOOT_MAGIC1",
+ "LINUX_REBOOT_MAGIC2",
+ "LOCK_EX",
+ "LOCK_NB",
+ "LOCK_SH",
+ "LOCK_UN",
+ "LazyDLL",
+ "LazyProc",
+ "Lchown",
+ "Linger",
+ "Link",
+ "Listen",
+ "Listxattr",
+ "LoadCancelIoEx",
+ "LoadConnectEx",
+ "LoadCreateSymbolicLink",
+ "LoadDLL",
+ "LoadGetAddrInfo",
+ "LoadLibrary",
+ "LoadSetFileCompletionNotificationModes",
+ "LocalFree",
+ "Log2phys_t",
+ "LookupAccountName",
+ "LookupAccountSid",
+ "LookupSID",
+ "LsfJump",
+ "LsfSocket",
+ "LsfStmt",
+ "Lstat",
+ "MADV_AUTOSYNC",
+ "MADV_CAN_REUSE",
+ "MADV_CORE",
+ "MADV_DOFORK",
+ "MADV_DONTFORK",
+ "MADV_DONTNEED",
+ "MADV_FREE",
+ "MADV_FREE_REUSABLE",
+ "MADV_FREE_REUSE",
+ "MADV_HUGEPAGE",
+ "MADV_HWPOISON",
+ "MADV_MERGEABLE",
+ "MADV_NOCORE",
+ "MADV_NOHUGEPAGE",
+ "MADV_NORMAL",
+ "MADV_NOSYNC",
+ "MADV_PROTECT",
+ "MADV_RANDOM",
+ "MADV_REMOVE",
+ "MADV_SEQUENTIAL",
+ "MADV_SPACEAVAIL",
+ "MADV_UNMERGEABLE",
+ "MADV_WILLNEED",
+ "MADV_ZERO_WIRED_PAGES",
+ "MAP_32BIT",
+ "MAP_ALIGNED_SUPER",
+ "MAP_ALIGNMENT_16MB",
+ "MAP_ALIGNMENT_1TB",
+ "MAP_ALIGNMENT_256TB",
+ "MAP_ALIGNMENT_4GB",
+ "MAP_ALIGNMENT_64KB",
+ "MAP_ALIGNMENT_64PB",
+ "MAP_ALIGNMENT_MASK",
+ "MAP_ALIGNMENT_SHIFT",
+ "MAP_ANON",
+ "MAP_ANONYMOUS",
+ "MAP_COPY",
+ "MAP_DENYWRITE",
+ "MAP_EXECUTABLE",
+ "MAP_FILE",
+ "MAP_FIXED",
+ "MAP_FLAGMASK",
+ "MAP_GROWSDOWN",
+ "MAP_HASSEMAPHORE",
+ "MAP_HUGETLB",
+ "MAP_INHERIT",
+ "MAP_INHERIT_COPY",
+ "MAP_INHERIT_DEFAULT",
+ "MAP_INHERIT_DONATE_COPY",
+ "MAP_INHERIT_NONE",
+ "MAP_INHERIT_SHARE",
+ "MAP_JIT",
+ "MAP_LOCKED",
+ "MAP_NOCACHE",
+ "MAP_NOCORE",
+ "MAP_NOEXTEND",
+ "MAP_NONBLOCK",
+ "MAP_NORESERVE",
+ "MAP_NOSYNC",
+ "MAP_POPULATE",
+ "MAP_PREFAULT_READ",
+ "MAP_PRIVATE",
+ "MAP_RENAME",
+ "MAP_RESERVED0080",
+ "MAP_RESERVED0100",
+ "MAP_SHARED",
+ "MAP_STACK",
+ "MAP_TRYFIXED",
+ "MAP_TYPE",
+ "MAP_WIRED",
+ "MAXIMUM_REPARSE_DATA_BUFFER_SIZE",
+ "MAXLEN_IFDESCR",
+ "MAXLEN_PHYSADDR",
+ "MAX_ADAPTER_ADDRESS_LENGTH",
+ "MAX_ADAPTER_DESCRIPTION_LENGTH",
+ "MAX_ADAPTER_NAME_LENGTH",
+ "MAX_COMPUTERNAME_LENGTH",
+ "MAX_INTERFACE_NAME_LEN",
+ "MAX_LONG_PATH",
+ "MAX_PATH",
+ "MAX_PROTOCOL_CHAIN",
+ "MCL_CURRENT",
+ "MCL_FUTURE",
+ "MNT_DETACH",
+ "MNT_EXPIRE",
+ "MNT_FORCE",
+ "MSG_BCAST",
+ "MSG_CMSG_CLOEXEC",
+ "MSG_COMPAT",
+ "MSG_CONFIRM",
+ "MSG_CONTROLMBUF",
+ "MSG_CTRUNC",
+ "MSG_DONTROUTE",
+ "MSG_DONTWAIT",
+ "MSG_EOF",
+ "MSG_EOR",
+ "MSG_ERRQUEUE",
+ "MSG_FASTOPEN",
+ "MSG_FIN",
+ "MSG_FLUSH",
+ "MSG_HAVEMORE",
+ "MSG_HOLD",
+ "MSG_IOVUSRSPACE",
+ "MSG_LENUSRSPACE",
+ "MSG_MCAST",
+ "MSG_MORE",
+ "MSG_NAMEMBUF",
+ "MSG_NBIO",
+ "MSG_NEEDSA",
+ "MSG_NOSIGNAL",
+ "MSG_NOTIFICATION",
+ "MSG_OOB",
+ "MSG_PEEK",
+ "MSG_PROXY",
+ "MSG_RCVMORE",
+ "MSG_RST",
+ "MSG_SEND",
+ "MSG_SYN",
+ "MSG_TRUNC",
+ "MSG_TRYHARD",
+ "MSG_USERFLAGS",
+ "MSG_WAITALL",
+ "MSG_WAITFORONE",
+ "MSG_WAITSTREAM",
+ "MS_ACTIVE",
+ "MS_ASYNC",
+ "MS_BIND",
+ "MS_DEACTIVATE",
+ "MS_DIRSYNC",
+ "MS_INVALIDATE",
+ "MS_I_VERSION",
+ "MS_KERNMOUNT",
+ "MS_KILLPAGES",
+ "MS_MANDLOCK",
+ "MS_MGC_MSK",
+ "MS_MGC_VAL",
+ "MS_MOVE",
+ "MS_NOATIME",
+ "MS_NODEV",
+ "MS_NODIRATIME",
+ "MS_NOEXEC",
+ "MS_NOSUID",
+ "MS_NOUSER",
+ "MS_POSIXACL",
+ "MS_PRIVATE",
+ "MS_RDONLY",
+ "MS_REC",
+ "MS_RELATIME",
+ "MS_REMOUNT",
+ "MS_RMT_MASK",
+ "MS_SHARED",
+ "MS_SILENT",
+ "MS_SLAVE",
+ "MS_STRICTATIME",
+ "MS_SYNC",
+ "MS_SYNCHRONOUS",
+ "MS_UNBINDABLE",
+ "Madvise",
+ "MapViewOfFile",
+ "MaxTokenInfoClass",
+ "Mclpool",
+ "MibIfRow",
+ "Mkdir",
+ "Mkdirat",
+ "Mkfifo",
+ "Mknod",
+ "Mknodat",
+ "Mlock",
+ "Mlockall",
+ "Mmap",
+ "Mount",
+ "MoveFile",
+ "Mprotect",
+ "Msghdr",
+ "Munlock",
+ "Munlockall",
+ "Munmap",
+ "MustLoadDLL",
+ "NAME_MAX",
+ "NETLINK_ADD_MEMBERSHIP",
+ "NETLINK_AUDIT",
+ "NETLINK_BROADCAST_ERROR",
+ "NETLINK_CONNECTOR",
+ "NETLINK_DNRTMSG",
+ "NETLINK_DROP_MEMBERSHIP",
+ "NETLINK_ECRYPTFS",
+ "NETLINK_FIB_LOOKUP",
+ "NETLINK_FIREWALL",
+ "NETLINK_GENERIC",
+ "NETLINK_INET_DIAG",
+ "NETLINK_IP6_FW",
+ "NETLINK_ISCSI",
+ "NETLINK_KOBJECT_UEVENT",
+ "NETLINK_NETFILTER",
+ "NETLINK_NFLOG",
+ "NETLINK_NO_ENOBUFS",
+ "NETLINK_PKTINFO",
+ "NETLINK_RDMA",
+ "NETLINK_ROUTE",
+ "NETLINK_SCSITRANSPORT",
+ "NETLINK_SELINUX",
+ "NETLINK_UNUSED",
+ "NETLINK_USERSOCK",
+ "NETLINK_XFRM",
+ "NET_RT_DUMP",
+ "NET_RT_DUMP2",
+ "NET_RT_FLAGS",
+ "NET_RT_IFLIST",
+ "NET_RT_IFLIST2",
+ "NET_RT_IFLISTL",
+ "NET_RT_IFMALIST",
+ "NET_RT_MAXID",
+ "NET_RT_OIFLIST",
+ "NET_RT_OOIFLIST",
+ "NET_RT_STAT",
+ "NET_RT_STATS",
+ "NET_RT_TABLE",
+ "NET_RT_TRASH",
+ "NLA_ALIGNTO",
+ "NLA_F_NESTED",
+ "NLA_F_NET_BYTEORDER",
+ "NLA_HDRLEN",
+ "NLMSG_ALIGNTO",
+ "NLMSG_DONE",
+ "NLMSG_ERROR",
+ "NLMSG_HDRLEN",
+ "NLMSG_MIN_TYPE",
+ "NLMSG_NOOP",
+ "NLMSG_OVERRUN",
+ "NLM_F_ACK",
+ "NLM_F_APPEND",
+ "NLM_F_ATOMIC",
+ "NLM_F_CREATE",
+ "NLM_F_DUMP",
+ "NLM_F_ECHO",
+ "NLM_F_EXCL",
+ "NLM_F_MATCH",
+ "NLM_F_MULTI",
+ "NLM_F_REPLACE",
+ "NLM_F_REQUEST",
+ "NLM_F_ROOT",
+ "NOFLSH",
+ "NOTE_ABSOLUTE",
+ "NOTE_ATTRIB",
+ "NOTE_CHILD",
+ "NOTE_DELETE",
+ "NOTE_EOF",
+ "NOTE_EXEC",
+ "NOTE_EXIT",
+ "NOTE_EXITSTATUS",
+ "NOTE_EXTEND",
+ "NOTE_FFAND",
+ "NOTE_FFCOPY",
+ "NOTE_FFCTRLMASK",
+ "NOTE_FFLAGSMASK",
+ "NOTE_FFNOP",
+ "NOTE_FFOR",
+ "NOTE_FORK",
+ "NOTE_LINK",
+ "NOTE_LOWAT",
+ "NOTE_NONE",
+ "NOTE_NSECONDS",
+ "NOTE_PCTRLMASK",
+ "NOTE_PDATAMASK",
+ "NOTE_REAP",
+ "NOTE_RENAME",
+ "NOTE_RESOURCEEND",
+ "NOTE_REVOKE",
+ "NOTE_SECONDS",
+ "NOTE_SIGNAL",
+ "NOTE_TRACK",
+ "NOTE_TRACKERR",
+ "NOTE_TRIGGER",
+ "NOTE_TRUNCATE",
+ "NOTE_USECONDS",
+ "NOTE_VM_ERROR",
+ "NOTE_VM_PRESSURE",
+ "NOTE_VM_PRESSURE_SUDDEN_TERMINATE",
+ "NOTE_VM_PRESSURE_TERMINATE",
+ "NOTE_WRITE",
+ "NameCanonical",
+ "NameCanonicalEx",
+ "NameDisplay",
+ "NameDnsDomain",
+ "NameFullyQualifiedDN",
+ "NameSamCompatible",
+ "NameServicePrincipal",
+ "NameUniqueId",
+ "NameUnknown",
+ "NameUserPrincipal",
+ "Nanosleep",
+ "NetApiBufferFree",
+ "NetGetJoinInformation",
+ "NetSetupDomainName",
+ "NetSetupUnjoined",
+ "NetSetupUnknownStatus",
+ "NetSetupWorkgroupName",
+ "NetUserGetInfo",
+ "NetlinkMessage",
+ "NetlinkRIB",
+ "NetlinkRouteAttr",
+ "NetlinkRouteRequest",
+ "NewCallback",
+ "NewCallbackCDecl",
+ "NewLazyDLL",
+ "NlAttr",
+ "NlMsgerr",
+ "NlMsghdr",
+ "NsecToFiletime",
+ "NsecToTimespec",
+ "NsecToTimeval",
+ "Ntohs",
+ "OCRNL",
+ "OFDEL",
+ "OFILL",
+ "OFIOGETBMAP",
+ "OID_PKIX_KP_SERVER_AUTH",
+ "OID_SERVER_GATED_CRYPTO",
+ "OID_SGC_NETSCAPE",
+ "OLCUC",
+ "ONLCR",
+ "ONLRET",
+ "ONOCR",
+ "ONOEOT",
+ "OPEN_ALWAYS",
+ "OPEN_EXISTING",
+ "OPOST",
+ "O_ACCMODE",
+ "O_ALERT",
+ "O_ALT_IO",
+ "O_APPEND",
+ "O_ASYNC",
+ "O_CLOEXEC",
+ "O_CREAT",
+ "O_DIRECT",
+ "O_DIRECTORY",
+ "O_DSYNC",
+ "O_EVTONLY",
+ "O_EXCL",
+ "O_EXEC",
+ "O_EXLOCK",
+ "O_FSYNC",
+ "O_LARGEFILE",
+ "O_NDELAY",
+ "O_NOATIME",
+ "O_NOCTTY",
+ "O_NOFOLLOW",
+ "O_NONBLOCK",
+ "O_NOSIGPIPE",
+ "O_POPUP",
+ "O_RDONLY",
+ "O_RDWR",
+ "O_RSYNC",
+ "O_SHLOCK",
+ "O_SYMLINK",
+ "O_SYNC",
+ "O_TRUNC",
+ "O_TTY_INIT",
+ "O_WRONLY",
+ "Open",
+ "OpenCurrentProcessToken",
+ "OpenProcess",
+ "OpenProcessToken",
+ "Openat",
+ "Overlapped",
+ "PACKET_ADD_MEMBERSHIP",
+ "PACKET_BROADCAST",
+ "PACKET_DROP_MEMBERSHIP",
+ "PACKET_FASTROUTE",
+ "PACKET_HOST",
+ "PACKET_LOOPBACK",
+ "PACKET_MR_ALLMULTI",
+ "PACKET_MR_MULTICAST",
+ "PACKET_MR_PROMISC",
+ "PACKET_MULTICAST",
+ "PACKET_OTHERHOST",
+ "PACKET_OUTGOING",
+ "PACKET_RECV_OUTPUT",
+ "PACKET_RX_RING",
+ "PACKET_STATISTICS",
+ "PAGE_EXECUTE_READ",
+ "PAGE_EXECUTE_READWRITE",
+ "PAGE_EXECUTE_WRITECOPY",
+ "PAGE_READONLY",
+ "PAGE_READWRITE",
+ "PAGE_WRITECOPY",
+ "PARENB",
+ "PARMRK",
+ "PARODD",
+ "PENDIN",
+ "PFL_HIDDEN",
+ "PFL_MATCHES_PROTOCOL_ZERO",
+ "PFL_MULTIPLE_PROTO_ENTRIES",
+ "PFL_NETWORKDIRECT_PROVIDER",
+ "PFL_RECOMMENDED_PROTO_ENTRY",
+ "PF_FLUSH",
+ "PKCS_7_ASN_ENCODING",
+ "PMC5_PIPELINE_FLUSH",
+ "PRIO_PGRP",
+ "PRIO_PROCESS",
+ "PRIO_USER",
+ "PRI_IOFLUSH",
+ "PROCESS_QUERY_INFORMATION",
+ "PROCESS_TERMINATE",
+ "PROT_EXEC",
+ "PROT_GROWSDOWN",
+ "PROT_GROWSUP",
+ "PROT_NONE",
+ "PROT_READ",
+ "PROT_WRITE",
+ "PROV_DH_SCHANNEL",
+ "PROV_DSS",
+ "PROV_DSS_DH",
+ "PROV_EC_ECDSA_FULL",
+ "PROV_EC_ECDSA_SIG",
+ "PROV_EC_ECNRA_FULL",
+ "PROV_EC_ECNRA_SIG",
+ "PROV_FORTEZZA",
+ "PROV_INTEL_SEC",
+ "PROV_MS_EXCHANGE",
+ "PROV_REPLACE_OWF",
+ "PROV_RNG",
+ "PROV_RSA_AES",
+ "PROV_RSA_FULL",
+ "PROV_RSA_SCHANNEL",
+ "PROV_RSA_SIG",
+ "PROV_SPYRUS_LYNKS",
+ "PROV_SSL",
+ "PR_CAPBSET_DROP",
+ "PR_CAPBSET_READ",
+ "PR_CLEAR_SECCOMP_FILTER",
+ "PR_ENDIAN_BIG",
+ "PR_ENDIAN_LITTLE",
+ "PR_ENDIAN_PPC_LITTLE",
+ "PR_FPEMU_NOPRINT",
+ "PR_FPEMU_SIGFPE",
+ "PR_FP_EXC_ASYNC",
+ "PR_FP_EXC_DISABLED",
+ "PR_FP_EXC_DIV",
+ "PR_FP_EXC_INV",
+ "PR_FP_EXC_NONRECOV",
+ "PR_FP_EXC_OVF",
+ "PR_FP_EXC_PRECISE",
+ "PR_FP_EXC_RES",
+ "PR_FP_EXC_SW_ENABLE",
+ "PR_FP_EXC_UND",
+ "PR_GET_DUMPABLE",
+ "PR_GET_ENDIAN",
+ "PR_GET_FPEMU",
+ "PR_GET_FPEXC",
+ "PR_GET_KEEPCAPS",
+ "PR_GET_NAME",
+ "PR_GET_PDEATHSIG",
+ "PR_GET_SECCOMP",
+ "PR_GET_SECCOMP_FILTER",
+ "PR_GET_SECUREBITS",
+ "PR_GET_TIMERSLACK",
+ "PR_GET_TIMING",
+ "PR_GET_TSC",
+ "PR_GET_UNALIGN",
+ "PR_MCE_KILL",
+ "PR_MCE_KILL_CLEAR",
+ "PR_MCE_KILL_DEFAULT",
+ "PR_MCE_KILL_EARLY",
+ "PR_MCE_KILL_GET",
+ "PR_MCE_KILL_LATE",
+ "PR_MCE_KILL_SET",
+ "PR_SECCOMP_FILTER_EVENT",
+ "PR_SECCOMP_FILTER_SYSCALL",
+ "PR_SET_DUMPABLE",
+ "PR_SET_ENDIAN",
+ "PR_SET_FPEMU",
+ "PR_SET_FPEXC",
+ "PR_SET_KEEPCAPS",
+ "PR_SET_NAME",
+ "PR_SET_PDEATHSIG",
+ "PR_SET_PTRACER",
+ "PR_SET_SECCOMP",
+ "PR_SET_SECCOMP_FILTER",
+ "PR_SET_SECUREBITS",
+ "PR_SET_TIMERSLACK",
+ "PR_SET_TIMING",
+ "PR_SET_TSC",
+ "PR_SET_UNALIGN",
+ "PR_TASK_PERF_EVENTS_DISABLE",
+ "PR_TASK_PERF_EVENTS_ENABLE",
+ "PR_TIMING_STATISTICAL",
+ "PR_TIMING_TIMESTAMP",
+ "PR_TSC_ENABLE",
+ "PR_TSC_SIGSEGV",
+ "PR_UNALIGN_NOPRINT",
+ "PR_UNALIGN_SIGBUS",
+ "PTRACE_ARCH_PRCTL",
+ "PTRACE_ATTACH",
+ "PTRACE_CONT",
+ "PTRACE_DETACH",
+ "PTRACE_EVENT_CLONE",
+ "PTRACE_EVENT_EXEC",
+ "PTRACE_EVENT_EXIT",
+ "PTRACE_EVENT_FORK",
+ "PTRACE_EVENT_VFORK",
+ "PTRACE_EVENT_VFORK_DONE",
+ "PTRACE_GETCRUNCHREGS",
+ "PTRACE_GETEVENTMSG",
+ "PTRACE_GETFPREGS",
+ "PTRACE_GETFPXREGS",
+ "PTRACE_GETHBPREGS",
+ "PTRACE_GETREGS",
+ "PTRACE_GETREGSET",
+ "PTRACE_GETSIGINFO",
+ "PTRACE_GETVFPREGS",
+ "PTRACE_GETWMMXREGS",
+ "PTRACE_GET_THREAD_AREA",
+ "PTRACE_KILL",
+ "PTRACE_OLDSETOPTIONS",
+ "PTRACE_O_MASK",
+ "PTRACE_O_TRACECLONE",
+ "PTRACE_O_TRACEEXEC",
+ "PTRACE_O_TRACEEXIT",
+ "PTRACE_O_TRACEFORK",
+ "PTRACE_O_TRACESYSGOOD",
+ "PTRACE_O_TRACEVFORK",
+ "PTRACE_O_TRACEVFORKDONE",
+ "PTRACE_PEEKDATA",
+ "PTRACE_PEEKTEXT",
+ "PTRACE_PEEKUSR",
+ "PTRACE_POKEDATA",
+ "PTRACE_POKETEXT",
+ "PTRACE_POKEUSR",
+ "PTRACE_SETCRUNCHREGS",
+ "PTRACE_SETFPREGS",
+ "PTRACE_SETFPXREGS",
+ "PTRACE_SETHBPREGS",
+ "PTRACE_SETOPTIONS",
+ "PTRACE_SETREGS",
+ "PTRACE_SETREGSET",
+ "PTRACE_SETSIGINFO",
+ "PTRACE_SETVFPREGS",
+ "PTRACE_SETWMMXREGS",
+ "PTRACE_SET_SYSCALL",
+ "PTRACE_SET_THREAD_AREA",
+ "PTRACE_SINGLEBLOCK",
+ "PTRACE_SINGLESTEP",
+ "PTRACE_SYSCALL",
+ "PTRACE_SYSEMU",
+ "PTRACE_SYSEMU_SINGLESTEP",
+ "PTRACE_TRACEME",
+ "PT_ATTACH",
+ "PT_ATTACHEXC",
+ "PT_CONTINUE",
+ "PT_DATA_ADDR",
+ "PT_DENY_ATTACH",
+ "PT_DETACH",
+ "PT_FIRSTMACH",
+ "PT_FORCEQUOTA",
+ "PT_KILL",
+ "PT_MASK",
+ "PT_READ_D",
+ "PT_READ_I",
+ "PT_READ_U",
+ "PT_SIGEXC",
+ "PT_STEP",
+ "PT_TEXT_ADDR",
+ "PT_TEXT_END_ADDR",
+ "PT_THUPDATE",
+ "PT_TRACE_ME",
+ "PT_WRITE_D",
+ "PT_WRITE_I",
+ "PT_WRITE_U",
+ "ParseDirent",
+ "ParseNetlinkMessage",
+ "ParseNetlinkRouteAttr",
+ "ParseRoutingMessage",
+ "ParseRoutingSockaddr",
+ "ParseSocketControlMessage",
+ "ParseUnixCredentials",
+ "ParseUnixRights",
+ "PathMax",
+ "Pathconf",
+ "Pause",
+ "Pipe",
+ "Pipe2",
+ "PivotRoot",
+ "Pointer",
+ "PostQueuedCompletionStatus",
+ "Pread",
+ "Proc",
+ "ProcAttr",
+ "Process32First",
+ "Process32Next",
+ "ProcessEntry32",
+ "ProcessInformation",
+ "Protoent",
+ "PtraceAttach",
+ "PtraceCont",
+ "PtraceDetach",
+ "PtraceGetEventMsg",
+ "PtraceGetRegs",
+ "PtracePeekData",
+ "PtracePeekText",
+ "PtracePokeData",
+ "PtracePokeText",
+ "PtraceRegs",
+ "PtraceSetOptions",
+ "PtraceSetRegs",
+ "PtraceSingleStep",
+ "PtraceSyscall",
+ "Pwrite",
+ "REG_BINARY",
+ "REG_DWORD",
+ "REG_DWORD_BIG_ENDIAN",
+ "REG_DWORD_LITTLE_ENDIAN",
+ "REG_EXPAND_SZ",
+ "REG_FULL_RESOURCE_DESCRIPTOR",
+ "REG_LINK",
+ "REG_MULTI_SZ",
+ "REG_NONE",
+ "REG_QWORD",
+ "REG_QWORD_LITTLE_ENDIAN",
+ "REG_RESOURCE_LIST",
+ "REG_RESOURCE_REQUIREMENTS_LIST",
+ "REG_SZ",
+ "RLIMIT_AS",
+ "RLIMIT_CORE",
+ "RLIMIT_CPU",
+ "RLIMIT_DATA",
+ "RLIMIT_FSIZE",
+ "RLIMIT_NOFILE",
+ "RLIMIT_STACK",
+ "RLIM_INFINITY",
+ "RTAX_ADVMSS",
+ "RTAX_AUTHOR",
+ "RTAX_BRD",
+ "RTAX_CWND",
+ "RTAX_DST",
+ "RTAX_FEATURES",
+ "RTAX_FEATURE_ALLFRAG",
+ "RTAX_FEATURE_ECN",
+ "RTAX_FEATURE_SACK",
+ "RTAX_FEATURE_TIMESTAMP",
+ "RTAX_GATEWAY",
+ "RTAX_GENMASK",
+ "RTAX_HOPLIMIT",
+ "RTAX_IFA",
+ "RTAX_IFP",
+ "RTAX_INITCWND",
+ "RTAX_INITRWND",
+ "RTAX_LABEL",
+ "RTAX_LOCK",
+ "RTAX_MAX",
+ "RTAX_MTU",
+ "RTAX_NETMASK",
+ "RTAX_REORDERING",
+ "RTAX_RTO_MIN",
+ "RTAX_RTT",
+ "RTAX_RTTVAR",
+ "RTAX_SRC",
+ "RTAX_SRCMASK",
+ "RTAX_SSTHRESH",
+ "RTAX_TAG",
+ "RTAX_UNSPEC",
+ "RTAX_WINDOW",
+ "RTA_ALIGNTO",
+ "RTA_AUTHOR",
+ "RTA_BRD",
+ "RTA_CACHEINFO",
+ "RTA_DST",
+ "RTA_FLOW",
+ "RTA_GATEWAY",
+ "RTA_GENMASK",
+ "RTA_IFA",
+ "RTA_IFP",
+ "RTA_IIF",
+ "RTA_LABEL",
+ "RTA_MAX",
+ "RTA_METRICS",
+ "RTA_MULTIPATH",
+ "RTA_NETMASK",
+ "RTA_OIF",
+ "RTA_PREFSRC",
+ "RTA_PRIORITY",
+ "RTA_SRC",
+ "RTA_SRCMASK",
+ "RTA_TABLE",
+ "RTA_TAG",
+ "RTA_UNSPEC",
+ "RTCF_DIRECTSRC",
+ "RTCF_DOREDIRECT",
+ "RTCF_LOG",
+ "RTCF_MASQ",
+ "RTCF_NAT",
+ "RTCF_VALVE",
+ "RTF_ADDRCLASSMASK",
+ "RTF_ADDRCONF",
+ "RTF_ALLONLINK",
+ "RTF_ANNOUNCE",
+ "RTF_BLACKHOLE",
+ "RTF_BROADCAST",
+ "RTF_CACHE",
+ "RTF_CLONED",
+ "RTF_CLONING",
+ "RTF_CONDEMNED",
+ "RTF_DEFAULT",
+ "RTF_DELCLONE",
+ "RTF_DONE",
+ "RTF_DYNAMIC",
+ "RTF_FLOW",
+ "RTF_FMASK",
+ "RTF_GATEWAY",
+ "RTF_GWFLAG_COMPAT",
+ "RTF_HOST",
+ "RTF_IFREF",
+ "RTF_IFSCOPE",
+ "RTF_INTERFACE",
+ "RTF_IRTT",
+ "RTF_LINKRT",
+ "RTF_LLDATA",
+ "RTF_LLINFO",
+ "RTF_LOCAL",
+ "RTF_MASK",
+ "RTF_MODIFIED",
+ "RTF_MPATH",
+ "RTF_MPLS",
+ "RTF_MSS",
+ "RTF_MTU",
+ "RTF_MULTICAST",
+ "RTF_NAT",
+ "RTF_NOFORWARD",
+ "RTF_NONEXTHOP",
+ "RTF_NOPMTUDISC",
+ "RTF_PERMANENT_ARP",
+ "RTF_PINNED",
+ "RTF_POLICY",
+ "RTF_PRCLONING",
+ "RTF_PROTO1",
+ "RTF_PROTO2",
+ "RTF_PROTO3",
+ "RTF_REINSTATE",
+ "RTF_REJECT",
+ "RTF_RNH_LOCKED",
+ "RTF_SOURCE",
+ "RTF_SRC",
+ "RTF_STATIC",
+ "RTF_STICKY",
+ "RTF_THROW",
+ "RTF_TUNNEL",
+ "RTF_UP",
+ "RTF_USETRAILERS",
+ "RTF_WASCLONED",
+ "RTF_WINDOW",
+ "RTF_XRESOLVE",
+ "RTM_ADD",
+ "RTM_BASE",
+ "RTM_CHANGE",
+ "RTM_CHGADDR",
+ "RTM_DELACTION",
+ "RTM_DELADDR",
+ "RTM_DELADDRLABEL",
+ "RTM_DELETE",
+ "RTM_DELLINK",
+ "RTM_DELMADDR",
+ "RTM_DELNEIGH",
+ "RTM_DELQDISC",
+ "RTM_DELROUTE",
+ "RTM_DELRULE",
+ "RTM_DELTCLASS",
+ "RTM_DELTFILTER",
+ "RTM_DESYNC",
+ "RTM_F_CLONED",
+ "RTM_F_EQUALIZE",
+ "RTM_F_NOTIFY",
+ "RTM_F_PREFIX",
+ "RTM_GET",
+ "RTM_GET2",
+ "RTM_GETACTION",
+ "RTM_GETADDR",
+ "RTM_GETADDRLABEL",
+ "RTM_GETANYCAST",
+ "RTM_GETDCB",
+ "RTM_GETLINK",
+ "RTM_GETMULTICAST",
+ "RTM_GETNEIGH",
+ "RTM_GETNEIGHTBL",
+ "RTM_GETQDISC",
+ "RTM_GETROUTE",
+ "RTM_GETRULE",
+ "RTM_GETTCLASS",
+ "RTM_GETTFILTER",
+ "RTM_IEEE80211",
+ "RTM_IFANNOUNCE",
+ "RTM_IFINFO",
+ "RTM_IFINFO2",
+ "RTM_LLINFO_UPD",
+ "RTM_LOCK",
+ "RTM_LOSING",
+ "RTM_MAX",
+ "RTM_MAXSIZE",
+ "RTM_MISS",
+ "RTM_NEWACTION",
+ "RTM_NEWADDR",
+ "RTM_NEWADDRLABEL",
+ "RTM_NEWLINK",
+ "RTM_NEWMADDR",
+ "RTM_NEWMADDR2",
+ "RTM_NEWNDUSEROPT",
+ "RTM_NEWNEIGH",
+ "RTM_NEWNEIGHTBL",
+ "RTM_NEWPREFIX",
+ "RTM_NEWQDISC",
+ "RTM_NEWROUTE",
+ "RTM_NEWRULE",
+ "RTM_NEWTCLASS",
+ "RTM_NEWTFILTER",
+ "RTM_NR_FAMILIES",
+ "RTM_NR_MSGTYPES",
+ "RTM_OIFINFO",
+ "RTM_OLDADD",
+ "RTM_OLDDEL",
+ "RTM_OOIFINFO",
+ "RTM_REDIRECT",
+ "RTM_RESOLVE",
+ "RTM_RTTUNIT",
+ "RTM_SETDCB",
+ "RTM_SETGATE",
+ "RTM_SETLINK",
+ "RTM_SETNEIGHTBL",
+ "RTM_VERSION",
+ "RTNH_ALIGNTO",
+ "RTNH_F_DEAD",
+ "RTNH_F_ONLINK",
+ "RTNH_F_PERVASIVE",
+ "RTNLGRP_IPV4_IFADDR",
+ "RTNLGRP_IPV4_MROUTE",
+ "RTNLGRP_IPV4_ROUTE",
+ "RTNLGRP_IPV4_RULE",
+ "RTNLGRP_IPV6_IFADDR",
+ "RTNLGRP_IPV6_IFINFO",
+ "RTNLGRP_IPV6_MROUTE",
+ "RTNLGRP_IPV6_PREFIX",
+ "RTNLGRP_IPV6_ROUTE",
+ "RTNLGRP_IPV6_RULE",
+ "RTNLGRP_LINK",
+ "RTNLGRP_ND_USEROPT",
+ "RTNLGRP_NEIGH",
+ "RTNLGRP_NONE",
+ "RTNLGRP_NOTIFY",
+ "RTNLGRP_TC",
+ "RTN_ANYCAST",
+ "RTN_BLACKHOLE",
+ "RTN_BROADCAST",
+ "RTN_LOCAL",
+ "RTN_MAX",
+ "RTN_MULTICAST",
+ "RTN_NAT",
+ "RTN_PROHIBIT",
+ "RTN_THROW",
+ "RTN_UNICAST",
+ "RTN_UNREACHABLE",
+ "RTN_UNSPEC",
+ "RTN_XRESOLVE",
+ "RTPROT_BIRD",
+ "RTPROT_BOOT",
+ "RTPROT_DHCP",
+ "RTPROT_DNROUTED",
+ "RTPROT_GATED",
+ "RTPROT_KERNEL",
+ "RTPROT_MRT",
+ "RTPROT_NTK",
+ "RTPROT_RA",
+ "RTPROT_REDIRECT",
+ "RTPROT_STATIC",
+ "RTPROT_UNSPEC",
+ "RTPROT_XORP",
+ "RTPROT_ZEBRA",
+ "RTV_EXPIRE",
+ "RTV_HOPCOUNT",
+ "RTV_MTU",
+ "RTV_RPIPE",
+ "RTV_RTT",
+ "RTV_RTTVAR",
+ "RTV_SPIPE",
+ "RTV_SSTHRESH",
+ "RTV_WEIGHT",
+ "RT_CACHING_CONTEXT",
+ "RT_CLASS_DEFAULT",
+ "RT_CLASS_LOCAL",
+ "RT_CLASS_MAIN",
+ "RT_CLASS_MAX",
+ "RT_CLASS_UNSPEC",
+ "RT_DEFAULT_FIB",
+ "RT_NORTREF",
+ "RT_SCOPE_HOST",
+ "RT_SCOPE_LINK",
+ "RT_SCOPE_NOWHERE",
+ "RT_SCOPE_SITE",
+ "RT_SCOPE_UNIVERSE",
+ "RT_TABLEID_MAX",
+ "RT_TABLE_COMPAT",
+ "RT_TABLE_DEFAULT",
+ "RT_TABLE_LOCAL",
+ "RT_TABLE_MAIN",
+ "RT_TABLE_MAX",
+ "RT_TABLE_UNSPEC",
+ "RUSAGE_CHILDREN",
+ "RUSAGE_SELF",
+ "RUSAGE_THREAD",
+ "Radvisory_t",
+ "RawConn",
+ "RawSockaddr",
+ "RawSockaddrAny",
+ "RawSockaddrDatalink",
+ "RawSockaddrInet4",
+ "RawSockaddrInet6",
+ "RawSockaddrLinklayer",
+ "RawSockaddrNetlink",
+ "RawSockaddrUnix",
+ "RawSyscall",
+ "RawSyscall6",
+ "Read",
+ "ReadConsole",
+ "ReadDirectoryChanges",
+ "ReadDirent",
+ "ReadFile",
+ "Readlink",
+ "Reboot",
+ "Recvfrom",
+ "Recvmsg",
+ "RegCloseKey",
+ "RegEnumKeyEx",
+ "RegOpenKeyEx",
+ "RegQueryInfoKey",
+ "RegQueryValueEx",
+ "RemoveDirectory",
+ "Removexattr",
+ "Rename",
+ "Renameat",
+ "Revoke",
+ "Rlimit",
+ "Rmdir",
+ "RouteMessage",
+ "RouteRIB",
+ "RoutingMessage",
+ "RtAttr",
+ "RtGenmsg",
+ "RtMetrics",
+ "RtMsg",
+ "RtMsghdr",
+ "RtNexthop",
+ "Rusage",
+ "SCM_BINTIME",
+ "SCM_CREDENTIALS",
+ "SCM_CREDS",
+ "SCM_RIGHTS",
+ "SCM_TIMESTAMP",
+ "SCM_TIMESTAMPING",
+ "SCM_TIMESTAMPNS",
+ "SCM_TIMESTAMP_MONOTONIC",
+ "SHUT_RD",
+ "SHUT_RDWR",
+ "SHUT_WR",
+ "SID",
+ "SIDAndAttributes",
+ "SIGABRT",
+ "SIGALRM",
+ "SIGBUS",
+ "SIGCHLD",
+ "SIGCLD",
+ "SIGCONT",
+ "SIGEMT",
+ "SIGFPE",
+ "SIGHUP",
+ "SIGILL",
+ "SIGINFO",
+ "SIGINT",
+ "SIGIO",
+ "SIGIOT",
+ "SIGKILL",
+ "SIGLIBRT",
+ "SIGLWP",
+ "SIGPIPE",
+ "SIGPOLL",
+ "SIGPROF",
+ "SIGPWR",
+ "SIGQUIT",
+ "SIGSEGV",
+ "SIGSTKFLT",
+ "SIGSTOP",
+ "SIGSYS",
+ "SIGTERM",
+ "SIGTHR",
+ "SIGTRAP",
+ "SIGTSTP",
+ "SIGTTIN",
+ "SIGTTOU",
+ "SIGUNUSED",
+ "SIGURG",
+ "SIGUSR1",
+ "SIGUSR2",
+ "SIGVTALRM",
+ "SIGWINCH",
+ "SIGXCPU",
+ "SIGXFSZ",
+ "SIOCADDDLCI",
+ "SIOCADDMULTI",
+ "SIOCADDRT",
+ "SIOCAIFADDR",
+ "SIOCAIFGROUP",
+ "SIOCALIFADDR",
+ "SIOCARPIPLL",
+ "SIOCATMARK",
+ "SIOCAUTOADDR",
+ "SIOCAUTONETMASK",
+ "SIOCBRDGADD",
+ "SIOCBRDGADDS",
+ "SIOCBRDGARL",
+ "SIOCBRDGDADDR",
+ "SIOCBRDGDEL",
+ "SIOCBRDGDELS",
+ "SIOCBRDGFLUSH",
+ "SIOCBRDGFRL",
+ "SIOCBRDGGCACHE",
+ "SIOCBRDGGFD",
+ "SIOCBRDGGHT",
+ "SIOCBRDGGIFFLGS",
+ "SIOCBRDGGMA",
+ "SIOCBRDGGPARAM",
+ "SIOCBRDGGPRI",
+ "SIOCBRDGGRL",
+ "SIOCBRDGGSIFS",
+ "SIOCBRDGGTO",
+ "SIOCBRDGIFS",
+ "SIOCBRDGRTS",
+ "SIOCBRDGSADDR",
+ "SIOCBRDGSCACHE",
+ "SIOCBRDGSFD",
+ "SIOCBRDGSHT",
+ "SIOCBRDGSIFCOST",
+ "SIOCBRDGSIFFLGS",
+ "SIOCBRDGSIFPRIO",
+ "SIOCBRDGSMA",
+ "SIOCBRDGSPRI",
+ "SIOCBRDGSPROTO",
+ "SIOCBRDGSTO",
+ "SIOCBRDGSTXHC",
+ "SIOCDARP",
+ "SIOCDELDLCI",
+ "SIOCDELMULTI",
+ "SIOCDELRT",
+ "SIOCDEVPRIVATE",
+ "SIOCDIFADDR",
+ "SIOCDIFGROUP",
+ "SIOCDIFPHYADDR",
+ "SIOCDLIFADDR",
+ "SIOCDRARP",
+ "SIOCGARP",
+ "SIOCGDRVSPEC",
+ "SIOCGETKALIVE",
+ "SIOCGETLABEL",
+ "SIOCGETPFLOW",
+ "SIOCGETPFSYNC",
+ "SIOCGETSGCNT",
+ "SIOCGETVIFCNT",
+ "SIOCGETVLAN",
+ "SIOCGHIWAT",
+ "SIOCGIFADDR",
+ "SIOCGIFADDRPREF",
+ "SIOCGIFALIAS",
+ "SIOCGIFALTMTU",
+ "SIOCGIFASYNCMAP",
+ "SIOCGIFBOND",
+ "SIOCGIFBR",
+ "SIOCGIFBRDADDR",
+ "SIOCGIFCAP",
+ "SIOCGIFCONF",
+ "SIOCGIFCOUNT",
+ "SIOCGIFDATA",
+ "SIOCGIFDESCR",
+ "SIOCGIFDEVMTU",
+ "SIOCGIFDLT",
+ "SIOCGIFDSTADDR",
+ "SIOCGIFENCAP",
+ "SIOCGIFFIB",
+ "SIOCGIFFLAGS",
+ "SIOCGIFGATTR",
+ "SIOCGIFGENERIC",
+ "SIOCGIFGMEMB",
+ "SIOCGIFGROUP",
+ "SIOCGIFHARDMTU",
+ "SIOCGIFHWADDR",
+ "SIOCGIFINDEX",
+ "SIOCGIFKPI",
+ "SIOCGIFMAC",
+ "SIOCGIFMAP",
+ "SIOCGIFMEDIA",
+ "SIOCGIFMEM",
+ "SIOCGIFMETRIC",
+ "SIOCGIFMTU",
+ "SIOCGIFNAME",
+ "SIOCGIFNETMASK",
+ "SIOCGIFPDSTADDR",
+ "SIOCGIFPFLAGS",
+ "SIOCGIFPHYS",
+ "SIOCGIFPRIORITY",
+ "SIOCGIFPSRCADDR",
+ "SIOCGIFRDOMAIN",
+ "SIOCGIFRTLABEL",
+ "SIOCGIFSLAVE",
+ "SIOCGIFSTATUS",
+ "SIOCGIFTIMESLOT",
+ "SIOCGIFTXQLEN",
+ "SIOCGIFVLAN",
+ "SIOCGIFWAKEFLAGS",
+ "SIOCGIFXFLAGS",
+ "SIOCGLIFADDR",
+ "SIOCGLIFPHYADDR",
+ "SIOCGLIFPHYRTABLE",
+ "SIOCGLIFPHYTTL",
+ "SIOCGLINKSTR",
+ "SIOCGLOWAT",
+ "SIOCGPGRP",
+ "SIOCGPRIVATE_0",
+ "SIOCGPRIVATE_1",
+ "SIOCGRARP",
+ "SIOCGSPPPPARAMS",
+ "SIOCGSTAMP",
+ "SIOCGSTAMPNS",
+ "SIOCGVH",
+ "SIOCGVNETID",
+ "SIOCIFCREATE",
+ "SIOCIFCREATE2",
+ "SIOCIFDESTROY",
+ "SIOCIFGCLONERS",
+ "SIOCINITIFADDR",
+ "SIOCPROTOPRIVATE",
+ "SIOCRSLVMULTI",
+ "SIOCRTMSG",
+ "SIOCSARP",
+ "SIOCSDRVSPEC",
+ "SIOCSETKALIVE",
+ "SIOCSETLABEL",
+ "SIOCSETPFLOW",
+ "SIOCSETPFSYNC",
+ "SIOCSETVLAN",
+ "SIOCSHIWAT",
+ "SIOCSIFADDR",
+ "SIOCSIFADDRPREF",
+ "SIOCSIFALTMTU",
+ "SIOCSIFASYNCMAP",
+ "SIOCSIFBOND",
+ "SIOCSIFBR",
+ "SIOCSIFBRDADDR",
+ "SIOCSIFCAP",
+ "SIOCSIFDESCR",
+ "SIOCSIFDSTADDR",
+ "SIOCSIFENCAP",
+ "SIOCSIFFIB",
+ "SIOCSIFFLAGS",
+ "SIOCSIFGATTR",
+ "SIOCSIFGENERIC",
+ "SIOCSIFHWADDR",
+ "SIOCSIFHWBROADCAST",
+ "SIOCSIFKPI",
+ "SIOCSIFLINK",
+ "SIOCSIFLLADDR",
+ "SIOCSIFMAC",
+ "SIOCSIFMAP",
+ "SIOCSIFMEDIA",
+ "SIOCSIFMEM",
+ "SIOCSIFMETRIC",
+ "SIOCSIFMTU",
+ "SIOCSIFNAME",
+ "SIOCSIFNETMASK",
+ "SIOCSIFPFLAGS",
+ "SIOCSIFPHYADDR",
+ "SIOCSIFPHYS",
+ "SIOCSIFPRIORITY",
+ "SIOCSIFRDOMAIN",
+ "SIOCSIFRTLABEL",
+ "SIOCSIFRVNET",
+ "SIOCSIFSLAVE",
+ "SIOCSIFTIMESLOT",
+ "SIOCSIFTXQLEN",
+ "SIOCSIFVLAN",
+ "SIOCSIFVNET",
+ "SIOCSIFXFLAGS",
+ "SIOCSLIFPHYADDR",
+ "SIOCSLIFPHYRTABLE",
+ "SIOCSLIFPHYTTL",
+ "SIOCSLINKSTR",
+ "SIOCSLOWAT",
+ "SIOCSPGRP",
+ "SIOCSRARP",
+ "SIOCSSPPPPARAMS",
+ "SIOCSVH",
+ "SIOCSVNETID",
+ "SIOCZIFDATA",
+ "SIO_GET_EXTENSION_FUNCTION_POINTER",
+ "SIO_GET_INTERFACE_LIST",
+ "SIO_KEEPALIVE_VALS",
+ "SIO_UDP_CONNRESET",
+ "SOCK_CLOEXEC",
+ "SOCK_DCCP",
+ "SOCK_DGRAM",
+ "SOCK_FLAGS_MASK",
+ "SOCK_MAXADDRLEN",
+ "SOCK_NONBLOCK",
+ "SOCK_NOSIGPIPE",
+ "SOCK_PACKET",
+ "SOCK_RAW",
+ "SOCK_RDM",
+ "SOCK_SEQPACKET",
+ "SOCK_STREAM",
+ "SOL_AAL",
+ "SOL_ATM",
+ "SOL_DECNET",
+ "SOL_ICMPV6",
+ "SOL_IP",
+ "SOL_IPV6",
+ "SOL_IRDA",
+ "SOL_PACKET",
+ "SOL_RAW",
+ "SOL_SOCKET",
+ "SOL_TCP",
+ "SOL_X25",
+ "SOMAXCONN",
+ "SO_ACCEPTCONN",
+ "SO_ACCEPTFILTER",
+ "SO_ATTACH_FILTER",
+ "SO_BINDANY",
+ "SO_BINDTODEVICE",
+ "SO_BINTIME",
+ "SO_BROADCAST",
+ "SO_BSDCOMPAT",
+ "SO_DEBUG",
+ "SO_DETACH_FILTER",
+ "SO_DOMAIN",
+ "SO_DONTROUTE",
+ "SO_DONTTRUNC",
+ "SO_ERROR",
+ "SO_KEEPALIVE",
+ "SO_LABEL",
+ "SO_LINGER",
+ "SO_LINGER_SEC",
+ "SO_LISTENINCQLEN",
+ "SO_LISTENQLEN",
+ "SO_LISTENQLIMIT",
+ "SO_MARK",
+ "SO_NETPROC",
+ "SO_NKE",
+ "SO_NOADDRERR",
+ "SO_NOHEADER",
+ "SO_NOSIGPIPE",
+ "SO_NOTIFYCONFLICT",
+ "SO_NO_CHECK",
+ "SO_NO_DDP",
+ "SO_NO_OFFLOAD",
+ "SO_NP_EXTENSIONS",
+ "SO_NREAD",
+ "SO_NWRITE",
+ "SO_OOBINLINE",
+ "SO_OVERFLOWED",
+ "SO_PASSCRED",
+ "SO_PASSSEC",
+ "SO_PEERCRED",
+ "SO_PEERLABEL",
+ "SO_PEERNAME",
+ "SO_PEERSEC",
+ "SO_PRIORITY",
+ "SO_PROTOCOL",
+ "SO_PROTOTYPE",
+ "SO_RANDOMPORT",
+ "SO_RCVBUF",
+ "SO_RCVBUFFORCE",
+ "SO_RCVLOWAT",
+ "SO_RCVTIMEO",
+ "SO_RESTRICTIONS",
+ "SO_RESTRICT_DENYIN",
+ "SO_RESTRICT_DENYOUT",
+ "SO_RESTRICT_DENYSET",
+ "SO_REUSEADDR",
+ "SO_REUSEPORT",
+ "SO_REUSESHAREUID",
+ "SO_RTABLE",
+ "SO_RXQ_OVFL",
+ "SO_SECURITY_AUTHENTICATION",
+ "SO_SECURITY_ENCRYPTION_NETWORK",
+ "SO_SECURITY_ENCRYPTION_TRANSPORT",
+ "SO_SETFIB",
+ "SO_SNDBUF",
+ "SO_SNDBUFFORCE",
+ "SO_SNDLOWAT",
+ "SO_SNDTIMEO",
+ "SO_SPLICE",
+ "SO_TIMESTAMP",
+ "SO_TIMESTAMPING",
+ "SO_TIMESTAMPNS",
+ "SO_TIMESTAMP_MONOTONIC",
+ "SO_TYPE",
+ "SO_UPCALLCLOSEWAIT",
+ "SO_UPDATE_ACCEPT_CONTEXT",
+ "SO_UPDATE_CONNECT_CONTEXT",
+ "SO_USELOOPBACK",
+ "SO_USER_COOKIE",
+ "SO_VENDOR",
+ "SO_WANTMORE",
+ "SO_WANTOOBFLAG",
+ "SSLExtraCertChainPolicyPara",
+ "STANDARD_RIGHTS_ALL",
+ "STANDARD_RIGHTS_EXECUTE",
+ "STANDARD_RIGHTS_READ",
+ "STANDARD_RIGHTS_REQUIRED",
+ "STANDARD_RIGHTS_WRITE",
+ "STARTF_USESHOWWINDOW",
+ "STARTF_USESTDHANDLES",
+ "STD_ERROR_HANDLE",
+ "STD_INPUT_HANDLE",
+ "STD_OUTPUT_HANDLE",
+ "SUBLANG_ENGLISH_US",
+ "SW_FORCEMINIMIZE",
+ "SW_HIDE",
+ "SW_MAXIMIZE",
+ "SW_MINIMIZE",
+ "SW_NORMAL",
+ "SW_RESTORE",
+ "SW_SHOW",
+ "SW_SHOWDEFAULT",
+ "SW_SHOWMAXIMIZED",
+ "SW_SHOWMINIMIZED",
+ "SW_SHOWMINNOACTIVE",
+ "SW_SHOWNA",
+ "SW_SHOWNOACTIVATE",
+ "SW_SHOWNORMAL",
+ "SYMBOLIC_LINK_FLAG_DIRECTORY",
+ "SYNCHRONIZE",
+ "SYSCTL_VERSION",
+ "SYSCTL_VERS_0",
+ "SYSCTL_VERS_1",
+ "SYSCTL_VERS_MASK",
+ "SYS_ABORT2",
+ "SYS_ACCEPT",
+ "SYS_ACCEPT4",
+ "SYS_ACCEPT_NOCANCEL",
+ "SYS_ACCESS",
+ "SYS_ACCESS_EXTENDED",
+ "SYS_ACCT",
+ "SYS_ADD_KEY",
+ "SYS_ADD_PROFIL",
+ "SYS_ADJFREQ",
+ "SYS_ADJTIME",
+ "SYS_ADJTIMEX",
+ "SYS_AFS_SYSCALL",
+ "SYS_AIO_CANCEL",
+ "SYS_AIO_ERROR",
+ "SYS_AIO_FSYNC",
+ "SYS_AIO_READ",
+ "SYS_AIO_RETURN",
+ "SYS_AIO_SUSPEND",
+ "SYS_AIO_SUSPEND_NOCANCEL",
+ "SYS_AIO_WRITE",
+ "SYS_ALARM",
+ "SYS_ARCH_PRCTL",
+ "SYS_ARM_FADVISE64_64",
+ "SYS_ARM_SYNC_FILE_RANGE",
+ "SYS_ATGETMSG",
+ "SYS_ATPGETREQ",
+ "SYS_ATPGETRSP",
+ "SYS_ATPSNDREQ",
+ "SYS_ATPSNDRSP",
+ "SYS_ATPUTMSG",
+ "SYS_ATSOCKET",
+ "SYS_AUDIT",
+ "SYS_AUDITCTL",
+ "SYS_AUDITON",
+ "SYS_AUDIT_SESSION_JOIN",
+ "SYS_AUDIT_SESSION_PORT",
+ "SYS_AUDIT_SESSION_SELF",
+ "SYS_BDFLUSH",
+ "SYS_BIND",
+ "SYS_BINDAT",
+ "SYS_BREAK",
+ "SYS_BRK",
+ "SYS_BSDTHREAD_CREATE",
+ "SYS_BSDTHREAD_REGISTER",
+ "SYS_BSDTHREAD_TERMINATE",
+ "SYS_CAPGET",
+ "SYS_CAPSET",
+ "SYS_CAP_ENTER",
+ "SYS_CAP_FCNTLS_GET",
+ "SYS_CAP_FCNTLS_LIMIT",
+ "SYS_CAP_GETMODE",
+ "SYS_CAP_GETRIGHTS",
+ "SYS_CAP_IOCTLS_GET",
+ "SYS_CAP_IOCTLS_LIMIT",
+ "SYS_CAP_NEW",
+ "SYS_CAP_RIGHTS_GET",
+ "SYS_CAP_RIGHTS_LIMIT",
+ "SYS_CHDIR",
+ "SYS_CHFLAGS",
+ "SYS_CHFLAGSAT",
+ "SYS_CHMOD",
+ "SYS_CHMOD_EXTENDED",
+ "SYS_CHOWN",
+ "SYS_CHOWN32",
+ "SYS_CHROOT",
+ "SYS_CHUD",
+ "SYS_CLOCK_ADJTIME",
+ "SYS_CLOCK_GETCPUCLOCKID2",
+ "SYS_CLOCK_GETRES",
+ "SYS_CLOCK_GETTIME",
+ "SYS_CLOCK_NANOSLEEP",
+ "SYS_CLOCK_SETTIME",
+ "SYS_CLONE",
+ "SYS_CLOSE",
+ "SYS_CLOSEFROM",
+ "SYS_CLOSE_NOCANCEL",
+ "SYS_CONNECT",
+ "SYS_CONNECTAT",
+ "SYS_CONNECT_NOCANCEL",
+ "SYS_COPYFILE",
+ "SYS_CPUSET",
+ "SYS_CPUSET_GETAFFINITY",
+ "SYS_CPUSET_GETID",
+ "SYS_CPUSET_SETAFFINITY",
+ "SYS_CPUSET_SETID",
+ "SYS_CREAT",
+ "SYS_CREATE_MODULE",
+ "SYS_CSOPS",
+ "SYS_DELETE",
+ "SYS_DELETE_MODULE",
+ "SYS_DUP",
+ "SYS_DUP2",
+ "SYS_DUP3",
+ "SYS_EACCESS",
+ "SYS_EPOLL_CREATE",
+ "SYS_EPOLL_CREATE1",
+ "SYS_EPOLL_CTL",
+ "SYS_EPOLL_CTL_OLD",
+ "SYS_EPOLL_PWAIT",
+ "SYS_EPOLL_WAIT",
+ "SYS_EPOLL_WAIT_OLD",
+ "SYS_EVENTFD",
+ "SYS_EVENTFD2",
+ "SYS_EXCHANGEDATA",
+ "SYS_EXECVE",
+ "SYS_EXIT",
+ "SYS_EXIT_GROUP",
+ "SYS_EXTATTRCTL",
+ "SYS_EXTATTR_DELETE_FD",
+ "SYS_EXTATTR_DELETE_FILE",
+ "SYS_EXTATTR_DELETE_LINK",
+ "SYS_EXTATTR_GET_FD",
+ "SYS_EXTATTR_GET_FILE",
+ "SYS_EXTATTR_GET_LINK",
+ "SYS_EXTATTR_LIST_FD",
+ "SYS_EXTATTR_LIST_FILE",
+ "SYS_EXTATTR_LIST_LINK",
+ "SYS_EXTATTR_SET_FD",
+ "SYS_EXTATTR_SET_FILE",
+ "SYS_EXTATTR_SET_LINK",
+ "SYS_FACCESSAT",
+ "SYS_FADVISE64",
+ "SYS_FADVISE64_64",
+ "SYS_FALLOCATE",
+ "SYS_FANOTIFY_INIT",
+ "SYS_FANOTIFY_MARK",
+ "SYS_FCHDIR",
+ "SYS_FCHFLAGS",
+ "SYS_FCHMOD",
+ "SYS_FCHMODAT",
+ "SYS_FCHMOD_EXTENDED",
+ "SYS_FCHOWN",
+ "SYS_FCHOWN32",
+ "SYS_FCHOWNAT",
+ "SYS_FCHROOT",
+ "SYS_FCNTL",
+ "SYS_FCNTL64",
+ "SYS_FCNTL_NOCANCEL",
+ "SYS_FDATASYNC",
+ "SYS_FEXECVE",
+ "SYS_FFCLOCK_GETCOUNTER",
+ "SYS_FFCLOCK_GETESTIMATE",
+ "SYS_FFCLOCK_SETESTIMATE",
+ "SYS_FFSCTL",
+ "SYS_FGETATTRLIST",
+ "SYS_FGETXATTR",
+ "SYS_FHOPEN",
+ "SYS_FHSTAT",
+ "SYS_FHSTATFS",
+ "SYS_FILEPORT_MAKEFD",
+ "SYS_FILEPORT_MAKEPORT",
+ "SYS_FKTRACE",
+ "SYS_FLISTXATTR",
+ "SYS_FLOCK",
+ "SYS_FORK",
+ "SYS_FPATHCONF",
+ "SYS_FREEBSD6_FTRUNCATE",
+ "SYS_FREEBSD6_LSEEK",
+ "SYS_FREEBSD6_MMAP",
+ "SYS_FREEBSD6_PREAD",
+ "SYS_FREEBSD6_PWRITE",
+ "SYS_FREEBSD6_TRUNCATE",
+ "SYS_FREMOVEXATTR",
+ "SYS_FSCTL",
+ "SYS_FSETATTRLIST",
+ "SYS_FSETXATTR",
+ "SYS_FSGETPATH",
+ "SYS_FSTAT",
+ "SYS_FSTAT64",
+ "SYS_FSTAT64_EXTENDED",
+ "SYS_FSTATAT",
+ "SYS_FSTATAT64",
+ "SYS_FSTATFS",
+ "SYS_FSTATFS64",
+ "SYS_FSTATV",
+ "SYS_FSTATVFS1",
+ "SYS_FSTAT_EXTENDED",
+ "SYS_FSYNC",
+ "SYS_FSYNC_NOCANCEL",
+ "SYS_FSYNC_RANGE",
+ "SYS_FTIME",
+ "SYS_FTRUNCATE",
+ "SYS_FTRUNCATE64",
+ "SYS_FUTEX",
+ "SYS_FUTIMENS",
+ "SYS_FUTIMES",
+ "SYS_FUTIMESAT",
+ "SYS_GETATTRLIST",
+ "SYS_GETAUDIT",
+ "SYS_GETAUDIT_ADDR",
+ "SYS_GETAUID",
+ "SYS_GETCONTEXT",
+ "SYS_GETCPU",
+ "SYS_GETCWD",
+ "SYS_GETDENTS",
+ "SYS_GETDENTS64",
+ "SYS_GETDIRENTRIES",
+ "SYS_GETDIRENTRIES64",
+ "SYS_GETDIRENTRIESATTR",
+ "SYS_GETDTABLECOUNT",
+ "SYS_GETDTABLESIZE",
+ "SYS_GETEGID",
+ "SYS_GETEGID32",
+ "SYS_GETEUID",
+ "SYS_GETEUID32",
+ "SYS_GETFH",
+ "SYS_GETFSSTAT",
+ "SYS_GETFSSTAT64",
+ "SYS_GETGID",
+ "SYS_GETGID32",
+ "SYS_GETGROUPS",
+ "SYS_GETGROUPS32",
+ "SYS_GETHOSTUUID",
+ "SYS_GETITIMER",
+ "SYS_GETLCID",
+ "SYS_GETLOGIN",
+ "SYS_GETLOGINCLASS",
+ "SYS_GETPEERNAME",
+ "SYS_GETPGID",
+ "SYS_GETPGRP",
+ "SYS_GETPID",
+ "SYS_GETPMSG",
+ "SYS_GETPPID",
+ "SYS_GETPRIORITY",
+ "SYS_GETRESGID",
+ "SYS_GETRESGID32",
+ "SYS_GETRESUID",
+ "SYS_GETRESUID32",
+ "SYS_GETRLIMIT",
+ "SYS_GETRTABLE",
+ "SYS_GETRUSAGE",
+ "SYS_GETSGROUPS",
+ "SYS_GETSID",
+ "SYS_GETSOCKNAME",
+ "SYS_GETSOCKOPT",
+ "SYS_GETTHRID",
+ "SYS_GETTID",
+ "SYS_GETTIMEOFDAY",
+ "SYS_GETUID",
+ "SYS_GETUID32",
+ "SYS_GETVFSSTAT",
+ "SYS_GETWGROUPS",
+ "SYS_GETXATTR",
+ "SYS_GET_KERNEL_SYMS",
+ "SYS_GET_MEMPOLICY",
+ "SYS_GET_ROBUST_LIST",
+ "SYS_GET_THREAD_AREA",
+ "SYS_GTTY",
+ "SYS_IDENTITYSVC",
+ "SYS_IDLE",
+ "SYS_INITGROUPS",
+ "SYS_INIT_MODULE",
+ "SYS_INOTIFY_ADD_WATCH",
+ "SYS_INOTIFY_INIT",
+ "SYS_INOTIFY_INIT1",
+ "SYS_INOTIFY_RM_WATCH",
+ "SYS_IOCTL",
+ "SYS_IOPERM",
+ "SYS_IOPL",
+ "SYS_IOPOLICYSYS",
+ "SYS_IOPRIO_GET",
+ "SYS_IOPRIO_SET",
+ "SYS_IO_CANCEL",
+ "SYS_IO_DESTROY",
+ "SYS_IO_GETEVENTS",
+ "SYS_IO_SETUP",
+ "SYS_IO_SUBMIT",
+ "SYS_IPC",
+ "SYS_ISSETUGID",
+ "SYS_JAIL",
+ "SYS_JAIL_ATTACH",
+ "SYS_JAIL_GET",
+ "SYS_JAIL_REMOVE",
+ "SYS_JAIL_SET",
+ "SYS_KDEBUG_TRACE",
+ "SYS_KENV",
+ "SYS_KEVENT",
+ "SYS_KEVENT64",
+ "SYS_KEXEC_LOAD",
+ "SYS_KEYCTL",
+ "SYS_KILL",
+ "SYS_KLDFIND",
+ "SYS_KLDFIRSTMOD",
+ "SYS_KLDLOAD",
+ "SYS_KLDNEXT",
+ "SYS_KLDSTAT",
+ "SYS_KLDSYM",
+ "SYS_KLDUNLOAD",
+ "SYS_KLDUNLOADF",
+ "SYS_KQUEUE",
+ "SYS_KQUEUE1",
+ "SYS_KTIMER_CREATE",
+ "SYS_KTIMER_DELETE",
+ "SYS_KTIMER_GETOVERRUN",
+ "SYS_KTIMER_GETTIME",
+ "SYS_KTIMER_SETTIME",
+ "SYS_KTRACE",
+ "SYS_LCHFLAGS",
+ "SYS_LCHMOD",
+ "SYS_LCHOWN",
+ "SYS_LCHOWN32",
+ "SYS_LGETFH",
+ "SYS_LGETXATTR",
+ "SYS_LINK",
+ "SYS_LINKAT",
+ "SYS_LIO_LISTIO",
+ "SYS_LISTEN",
+ "SYS_LISTXATTR",
+ "SYS_LLISTXATTR",
+ "SYS_LOCK",
+ "SYS_LOOKUP_DCOOKIE",
+ "SYS_LPATHCONF",
+ "SYS_LREMOVEXATTR",
+ "SYS_LSEEK",
+ "SYS_LSETXATTR",
+ "SYS_LSTAT",
+ "SYS_LSTAT64",
+ "SYS_LSTAT64_EXTENDED",
+ "SYS_LSTATV",
+ "SYS_LSTAT_EXTENDED",
+ "SYS_LUTIMES",
+ "SYS_MAC_SYSCALL",
+ "SYS_MADVISE",
+ "SYS_MADVISE1",
+ "SYS_MAXSYSCALL",
+ "SYS_MBIND",
+ "SYS_MIGRATE_PAGES",
+ "SYS_MINCORE",
+ "SYS_MINHERIT",
+ "SYS_MKCOMPLEX",
+ "SYS_MKDIR",
+ "SYS_MKDIRAT",
+ "SYS_MKDIR_EXTENDED",
+ "SYS_MKFIFO",
+ "SYS_MKFIFOAT",
+ "SYS_MKFIFO_EXTENDED",
+ "SYS_MKNOD",
+ "SYS_MKNODAT",
+ "SYS_MLOCK",
+ "SYS_MLOCKALL",
+ "SYS_MMAP",
+ "SYS_MMAP2",
+ "SYS_MODCTL",
+ "SYS_MODFIND",
+ "SYS_MODFNEXT",
+ "SYS_MODIFY_LDT",
+ "SYS_MODNEXT",
+ "SYS_MODSTAT",
+ "SYS_MODWATCH",
+ "SYS_MOUNT",
+ "SYS_MOVE_PAGES",
+ "SYS_MPROTECT",
+ "SYS_MPX",
+ "SYS_MQUERY",
+ "SYS_MQ_GETSETATTR",
+ "SYS_MQ_NOTIFY",
+ "SYS_MQ_OPEN",
+ "SYS_MQ_TIMEDRECEIVE",
+ "SYS_MQ_TIMEDSEND",
+ "SYS_MQ_UNLINK",
+ "SYS_MREMAP",
+ "SYS_MSGCTL",
+ "SYS_MSGGET",
+ "SYS_MSGRCV",
+ "SYS_MSGRCV_NOCANCEL",
+ "SYS_MSGSND",
+ "SYS_MSGSND_NOCANCEL",
+ "SYS_MSGSYS",
+ "SYS_MSYNC",
+ "SYS_MSYNC_NOCANCEL",
+ "SYS_MUNLOCK",
+ "SYS_MUNLOCKALL",
+ "SYS_MUNMAP",
+ "SYS_NAME_TO_HANDLE_AT",
+ "SYS_NANOSLEEP",
+ "SYS_NEWFSTATAT",
+ "SYS_NFSCLNT",
+ "SYS_NFSSERVCTL",
+ "SYS_NFSSVC",
+ "SYS_NFSTAT",
+ "SYS_NICE",
+ "SYS_NLSTAT",
+ "SYS_NMOUNT",
+ "SYS_NSTAT",
+ "SYS_NTP_ADJTIME",
+ "SYS_NTP_GETTIME",
+ "SYS_OABI_SYSCALL_BASE",
+ "SYS_OBREAK",
+ "SYS_OLDFSTAT",
+ "SYS_OLDLSTAT",
+ "SYS_OLDOLDUNAME",
+ "SYS_OLDSTAT",
+ "SYS_OLDUNAME",
+ "SYS_OPEN",
+ "SYS_OPENAT",
+ "SYS_OPENBSD_POLL",
+ "SYS_OPEN_BY_HANDLE_AT",
+ "SYS_OPEN_EXTENDED",
+ "SYS_OPEN_NOCANCEL",
+ "SYS_OVADVISE",
+ "SYS_PACCEPT",
+ "SYS_PATHCONF",
+ "SYS_PAUSE",
+ "SYS_PCICONFIG_IOBASE",
+ "SYS_PCICONFIG_READ",
+ "SYS_PCICONFIG_WRITE",
+ "SYS_PDFORK",
+ "SYS_PDGETPID",
+ "SYS_PDKILL",
+ "SYS_PERF_EVENT_OPEN",
+ "SYS_PERSONALITY",
+ "SYS_PID_HIBERNATE",
+ "SYS_PID_RESUME",
+ "SYS_PID_SHUTDOWN_SOCKETS",
+ "SYS_PID_SUSPEND",
+ "SYS_PIPE",
+ "SYS_PIPE2",
+ "SYS_PIVOT_ROOT",
+ "SYS_PMC_CONTROL",
+ "SYS_PMC_GET_INFO",
+ "SYS_POLL",
+ "SYS_POLLTS",
+ "SYS_POLL_NOCANCEL",
+ "SYS_POSIX_FADVISE",
+ "SYS_POSIX_FALLOCATE",
+ "SYS_POSIX_OPENPT",
+ "SYS_POSIX_SPAWN",
+ "SYS_PPOLL",
+ "SYS_PRCTL",
+ "SYS_PREAD",
+ "SYS_PREAD64",
+ "SYS_PREADV",
+ "SYS_PREAD_NOCANCEL",
+ "SYS_PRLIMIT64",
+ "SYS_PROCCTL",
+ "SYS_PROCESS_POLICY",
+ "SYS_PROCESS_VM_READV",
+ "SYS_PROCESS_VM_WRITEV",
+ "SYS_PROC_INFO",
+ "SYS_PROF",
+ "SYS_PROFIL",
+ "SYS_PSELECT",
+ "SYS_PSELECT6",
+ "SYS_PSET_ASSIGN",
+ "SYS_PSET_CREATE",
+ "SYS_PSET_DESTROY",
+ "SYS_PSYNCH_CVBROAD",
+ "SYS_PSYNCH_CVCLRPREPOST",
+ "SYS_PSYNCH_CVSIGNAL",
+ "SYS_PSYNCH_CVWAIT",
+ "SYS_PSYNCH_MUTEXDROP",
+ "SYS_PSYNCH_MUTEXWAIT",
+ "SYS_PSYNCH_RW_DOWNGRADE",
+ "SYS_PSYNCH_RW_LONGRDLOCK",
+ "SYS_PSYNCH_RW_RDLOCK",
+ "SYS_PSYNCH_RW_UNLOCK",
+ "SYS_PSYNCH_RW_UNLOCK2",
+ "SYS_PSYNCH_RW_UPGRADE",
+ "SYS_PSYNCH_RW_WRLOCK",
+ "SYS_PSYNCH_RW_YIELDWRLOCK",
+ "SYS_PTRACE",
+ "SYS_PUTPMSG",
+ "SYS_PWRITE",
+ "SYS_PWRITE64",
+ "SYS_PWRITEV",
+ "SYS_PWRITE_NOCANCEL",
+ "SYS_QUERY_MODULE",
+ "SYS_QUOTACTL",
+ "SYS_RASCTL",
+ "SYS_RCTL_ADD_RULE",
+ "SYS_RCTL_GET_LIMITS",
+ "SYS_RCTL_GET_RACCT",
+ "SYS_RCTL_GET_RULES",
+ "SYS_RCTL_REMOVE_RULE",
+ "SYS_READ",
+ "SYS_READAHEAD",
+ "SYS_READDIR",
+ "SYS_READLINK",
+ "SYS_READLINKAT",
+ "SYS_READV",
+ "SYS_READV_NOCANCEL",
+ "SYS_READ_NOCANCEL",
+ "SYS_REBOOT",
+ "SYS_RECV",
+ "SYS_RECVFROM",
+ "SYS_RECVFROM_NOCANCEL",
+ "SYS_RECVMMSG",
+ "SYS_RECVMSG",
+ "SYS_RECVMSG_NOCANCEL",
+ "SYS_REMAP_FILE_PAGES",
+ "SYS_REMOVEXATTR",
+ "SYS_RENAME",
+ "SYS_RENAMEAT",
+ "SYS_REQUEST_KEY",
+ "SYS_RESTART_SYSCALL",
+ "SYS_REVOKE",
+ "SYS_RFORK",
+ "SYS_RMDIR",
+ "SYS_RTPRIO",
+ "SYS_RTPRIO_THREAD",
+ "SYS_RT_SIGACTION",
+ "SYS_RT_SIGPENDING",
+ "SYS_RT_SIGPROCMASK",
+ "SYS_RT_SIGQUEUEINFO",
+ "SYS_RT_SIGRETURN",
+ "SYS_RT_SIGSUSPEND",
+ "SYS_RT_SIGTIMEDWAIT",
+ "SYS_RT_TGSIGQUEUEINFO",
+ "SYS_SBRK",
+ "SYS_SCHED_GETAFFINITY",
+ "SYS_SCHED_GETPARAM",
+ "SYS_SCHED_GETSCHEDULER",
+ "SYS_SCHED_GET_PRIORITY_MAX",
+ "SYS_SCHED_GET_PRIORITY_MIN",
+ "SYS_SCHED_RR_GET_INTERVAL",
+ "SYS_SCHED_SETAFFINITY",
+ "SYS_SCHED_SETPARAM",
+ "SYS_SCHED_SETSCHEDULER",
+ "SYS_SCHED_YIELD",
+ "SYS_SCTP_GENERIC_RECVMSG",
+ "SYS_SCTP_GENERIC_SENDMSG",
+ "SYS_SCTP_GENERIC_SENDMSG_IOV",
+ "SYS_SCTP_PEELOFF",
+ "SYS_SEARCHFS",
+ "SYS_SECURITY",
+ "SYS_SELECT",
+ "SYS_SELECT_NOCANCEL",
+ "SYS_SEMCONFIG",
+ "SYS_SEMCTL",
+ "SYS_SEMGET",
+ "SYS_SEMOP",
+ "SYS_SEMSYS",
+ "SYS_SEMTIMEDOP",
+ "SYS_SEM_CLOSE",
+ "SYS_SEM_DESTROY",
+ "SYS_SEM_GETVALUE",
+ "SYS_SEM_INIT",
+ "SYS_SEM_OPEN",
+ "SYS_SEM_POST",
+ "SYS_SEM_TRYWAIT",
+ "SYS_SEM_UNLINK",
+ "SYS_SEM_WAIT",
+ "SYS_SEM_WAIT_NOCANCEL",
+ "SYS_SEND",
+ "SYS_SENDFILE",
+ "SYS_SENDFILE64",
+ "SYS_SENDMMSG",
+ "SYS_SENDMSG",
+ "SYS_SENDMSG_NOCANCEL",
+ "SYS_SENDTO",
+ "SYS_SENDTO_NOCANCEL",
+ "SYS_SETATTRLIST",
+ "SYS_SETAUDIT",
+ "SYS_SETAUDIT_ADDR",
+ "SYS_SETAUID",
+ "SYS_SETCONTEXT",
+ "SYS_SETDOMAINNAME",
+ "SYS_SETEGID",
+ "SYS_SETEUID",
+ "SYS_SETFIB",
+ "SYS_SETFSGID",
+ "SYS_SETFSGID32",
+ "SYS_SETFSUID",
+ "SYS_SETFSUID32",
+ "SYS_SETGID",
+ "SYS_SETGID32",
+ "SYS_SETGROUPS",
+ "SYS_SETGROUPS32",
+ "SYS_SETHOSTNAME",
+ "SYS_SETITIMER",
+ "SYS_SETLCID",
+ "SYS_SETLOGIN",
+ "SYS_SETLOGINCLASS",
+ "SYS_SETNS",
+ "SYS_SETPGID",
+ "SYS_SETPRIORITY",
+ "SYS_SETPRIVEXEC",
+ "SYS_SETREGID",
+ "SYS_SETREGID32",
+ "SYS_SETRESGID",
+ "SYS_SETRESGID32",
+ "SYS_SETRESUID",
+ "SYS_SETRESUID32",
+ "SYS_SETREUID",
+ "SYS_SETREUID32",
+ "SYS_SETRLIMIT",
+ "SYS_SETRTABLE",
+ "SYS_SETSGROUPS",
+ "SYS_SETSID",
+ "SYS_SETSOCKOPT",
+ "SYS_SETTID",
+ "SYS_SETTID_WITH_PID",
+ "SYS_SETTIMEOFDAY",
+ "SYS_SETUID",
+ "SYS_SETUID32",
+ "SYS_SETWGROUPS",
+ "SYS_SETXATTR",
+ "SYS_SET_MEMPOLICY",
+ "SYS_SET_ROBUST_LIST",
+ "SYS_SET_THREAD_AREA",
+ "SYS_SET_TID_ADDRESS",
+ "SYS_SGETMASK",
+ "SYS_SHARED_REGION_CHECK_NP",
+ "SYS_SHARED_REGION_MAP_AND_SLIDE_NP",
+ "SYS_SHMAT",
+ "SYS_SHMCTL",
+ "SYS_SHMDT",
+ "SYS_SHMGET",
+ "SYS_SHMSYS",
+ "SYS_SHM_OPEN",
+ "SYS_SHM_UNLINK",
+ "SYS_SHUTDOWN",
+ "SYS_SIGACTION",
+ "SYS_SIGALTSTACK",
+ "SYS_SIGNAL",
+ "SYS_SIGNALFD",
+ "SYS_SIGNALFD4",
+ "SYS_SIGPENDING",
+ "SYS_SIGPROCMASK",
+ "SYS_SIGQUEUE",
+ "SYS_SIGQUEUEINFO",
+ "SYS_SIGRETURN",
+ "SYS_SIGSUSPEND",
+ "SYS_SIGSUSPEND_NOCANCEL",
+ "SYS_SIGTIMEDWAIT",
+ "SYS_SIGWAIT",
+ "SYS_SIGWAITINFO",
+ "SYS_SOCKET",
+ "SYS_SOCKETCALL",
+ "SYS_SOCKETPAIR",
+ "SYS_SPLICE",
+ "SYS_SSETMASK",
+ "SYS_SSTK",
+ "SYS_STACK_SNAPSHOT",
+ "SYS_STAT",
+ "SYS_STAT64",
+ "SYS_STAT64_EXTENDED",
+ "SYS_STATFS",
+ "SYS_STATFS64",
+ "SYS_STATV",
+ "SYS_STATVFS1",
+ "SYS_STAT_EXTENDED",
+ "SYS_STIME",
+ "SYS_STTY",
+ "SYS_SWAPCONTEXT",
+ "SYS_SWAPCTL",
+ "SYS_SWAPOFF",
+ "SYS_SWAPON",
+ "SYS_SYMLINK",
+ "SYS_SYMLINKAT",
+ "SYS_SYNC",
+ "SYS_SYNCFS",
+ "SYS_SYNC_FILE_RANGE",
+ "SYS_SYSARCH",
+ "SYS_SYSCALL",
+ "SYS_SYSCALL_BASE",
+ "SYS_SYSFS",
+ "SYS_SYSINFO",
+ "SYS_SYSLOG",
+ "SYS_TEE",
+ "SYS_TGKILL",
+ "SYS_THREAD_SELFID",
+ "SYS_THR_CREATE",
+ "SYS_THR_EXIT",
+ "SYS_THR_KILL",
+ "SYS_THR_KILL2",
+ "SYS_THR_NEW",
+ "SYS_THR_SELF",
+ "SYS_THR_SET_NAME",
+ "SYS_THR_SUSPEND",
+ "SYS_THR_WAKE",
+ "SYS_TIME",
+ "SYS_TIMERFD_CREATE",
+ "SYS_TIMERFD_GETTIME",
+ "SYS_TIMERFD_SETTIME",
+ "SYS_TIMER_CREATE",
+ "SYS_TIMER_DELETE",
+ "SYS_TIMER_GETOVERRUN",
+ "SYS_TIMER_GETTIME",
+ "SYS_TIMER_SETTIME",
+ "SYS_TIMES",
+ "SYS_TKILL",
+ "SYS_TRUNCATE",
+ "SYS_TRUNCATE64",
+ "SYS_TUXCALL",
+ "SYS_UGETRLIMIT",
+ "SYS_ULIMIT",
+ "SYS_UMASK",
+ "SYS_UMASK_EXTENDED",
+ "SYS_UMOUNT",
+ "SYS_UMOUNT2",
+ "SYS_UNAME",
+ "SYS_UNDELETE",
+ "SYS_UNLINK",
+ "SYS_UNLINKAT",
+ "SYS_UNMOUNT",
+ "SYS_UNSHARE",
+ "SYS_USELIB",
+ "SYS_USTAT",
+ "SYS_UTIME",
+ "SYS_UTIMENSAT",
+ "SYS_UTIMES",
+ "SYS_UTRACE",
+ "SYS_UUIDGEN",
+ "SYS_VADVISE",
+ "SYS_VFORK",
+ "SYS_VHANGUP",
+ "SYS_VM86",
+ "SYS_VM86OLD",
+ "SYS_VMSPLICE",
+ "SYS_VM_PRESSURE_MONITOR",
+ "SYS_VSERVER",
+ "SYS_WAIT4",
+ "SYS_WAIT4_NOCANCEL",
+ "SYS_WAIT6",
+ "SYS_WAITEVENT",
+ "SYS_WAITID",
+ "SYS_WAITID_NOCANCEL",
+ "SYS_WAITPID",
+ "SYS_WATCHEVENT",
+ "SYS_WORKQ_KERNRETURN",
+ "SYS_WORKQ_OPEN",
+ "SYS_WRITE",
+ "SYS_WRITEV",
+ "SYS_WRITEV_NOCANCEL",
+ "SYS_WRITE_NOCANCEL",
+ "SYS_YIELD",
+ "SYS__LLSEEK",
+ "SYS__LWP_CONTINUE",
+ "SYS__LWP_CREATE",
+ "SYS__LWP_CTL",
+ "SYS__LWP_DETACH",
+ "SYS__LWP_EXIT",
+ "SYS__LWP_GETNAME",
+ "SYS__LWP_GETPRIVATE",
+ "SYS__LWP_KILL",
+ "SYS__LWP_PARK",
+ "SYS__LWP_SELF",
+ "SYS__LWP_SETNAME",
+ "SYS__LWP_SETPRIVATE",
+ "SYS__LWP_SUSPEND",
+ "SYS__LWP_UNPARK",
+ "SYS__LWP_UNPARK_ALL",
+ "SYS__LWP_WAIT",
+ "SYS__LWP_WAKEUP",
+ "SYS__NEWSELECT",
+ "SYS__PSET_BIND",
+ "SYS__SCHED_GETAFFINITY",
+ "SYS__SCHED_GETPARAM",
+ "SYS__SCHED_SETAFFINITY",
+ "SYS__SCHED_SETPARAM",
+ "SYS__SYSCTL",
+ "SYS__UMTX_LOCK",
+ "SYS__UMTX_OP",
+ "SYS__UMTX_UNLOCK",
+ "SYS___ACL_ACLCHECK_FD",
+ "SYS___ACL_ACLCHECK_FILE",
+ "SYS___ACL_ACLCHECK_LINK",
+ "SYS___ACL_DELETE_FD",
+ "SYS___ACL_DELETE_FILE",
+ "SYS___ACL_DELETE_LINK",
+ "SYS___ACL_GET_FD",
+ "SYS___ACL_GET_FILE",
+ "SYS___ACL_GET_LINK",
+ "SYS___ACL_SET_FD",
+ "SYS___ACL_SET_FILE",
+ "SYS___ACL_SET_LINK",
+ "SYS___CLONE",
+ "SYS___DISABLE_THREADSIGNAL",
+ "SYS___GETCWD",
+ "SYS___GETLOGIN",
+ "SYS___GET_TCB",
+ "SYS___MAC_EXECVE",
+ "SYS___MAC_GETFSSTAT",
+ "SYS___MAC_GET_FD",
+ "SYS___MAC_GET_FILE",
+ "SYS___MAC_GET_LCID",
+ "SYS___MAC_GET_LCTX",
+ "SYS___MAC_GET_LINK",
+ "SYS___MAC_GET_MOUNT",
+ "SYS___MAC_GET_PID",
+ "SYS___MAC_GET_PROC",
+ "SYS___MAC_MOUNT",
+ "SYS___MAC_SET_FD",
+ "SYS___MAC_SET_FILE",
+ "SYS___MAC_SET_LCTX",
+ "SYS___MAC_SET_LINK",
+ "SYS___MAC_SET_PROC",
+ "SYS___MAC_SYSCALL",
+ "SYS___OLD_SEMWAIT_SIGNAL",
+ "SYS___OLD_SEMWAIT_SIGNAL_NOCANCEL",
+ "SYS___POSIX_CHOWN",
+ "SYS___POSIX_FCHOWN",
+ "SYS___POSIX_LCHOWN",
+ "SYS___POSIX_RENAME",
+ "SYS___PTHREAD_CANCELED",
+ "SYS___PTHREAD_CHDIR",
+ "SYS___PTHREAD_FCHDIR",
+ "SYS___PTHREAD_KILL",
+ "SYS___PTHREAD_MARKCANCEL",
+ "SYS___PTHREAD_SIGMASK",
+ "SYS___QUOTACTL",
+ "SYS___SEMCTL",
+ "SYS___SEMWAIT_SIGNAL",
+ "SYS___SEMWAIT_SIGNAL_NOCANCEL",
+ "SYS___SETLOGIN",
+ "SYS___SETUGID",
+ "SYS___SET_TCB",
+ "SYS___SIGACTION_SIGTRAMP",
+ "SYS___SIGTIMEDWAIT",
+ "SYS___SIGWAIT",
+ "SYS___SIGWAIT_NOCANCEL",
+ "SYS___SYSCTL",
+ "SYS___TFORK",
+ "SYS___THREXIT",
+ "SYS___THRSIGDIVERT",
+ "SYS___THRSLEEP",
+ "SYS___THRWAKEUP",
+ "S_ARCH1",
+ "S_ARCH2",
+ "S_BLKSIZE",
+ "S_IEXEC",
+ "S_IFBLK",
+ "S_IFCHR",
+ "S_IFDIR",
+ "S_IFIFO",
+ "S_IFLNK",
+ "S_IFMT",
+ "S_IFREG",
+ "S_IFSOCK",
+ "S_IFWHT",
+ "S_IREAD",
+ "S_IRGRP",
+ "S_IROTH",
+ "S_IRUSR",
+ "S_IRWXG",
+ "S_IRWXO",
+ "S_IRWXU",
+ "S_ISGID",
+ "S_ISTXT",
+ "S_ISUID",
+ "S_ISVTX",
+ "S_IWGRP",
+ "S_IWOTH",
+ "S_IWRITE",
+ "S_IWUSR",
+ "S_IXGRP",
+ "S_IXOTH",
+ "S_IXUSR",
+ "S_LOGIN_SET",
+ "SecurityAttributes",
+ "Seek",
+ "Select",
+ "Sendfile",
+ "Sendmsg",
+ "SendmsgN",
+ "Sendto",
+ "Servent",
+ "SetBpf",
+ "SetBpfBuflen",
+ "SetBpfDatalink",
+ "SetBpfHeadercmpl",
+ "SetBpfImmediate",
+ "SetBpfInterface",
+ "SetBpfPromisc",
+ "SetBpfTimeout",
+ "SetCurrentDirectory",
+ "SetEndOfFile",
+ "SetEnvironmentVariable",
+ "SetFileAttributes",
+ "SetFileCompletionNotificationModes",
+ "SetFilePointer",
+ "SetFileTime",
+ "SetHandleInformation",
+ "SetKevent",
+ "SetLsfPromisc",
+ "SetNonblock",
+ "Setdomainname",
+ "Setegid",
+ "Setenv",
+ "Seteuid",
+ "Setfsgid",
+ "Setfsuid",
+ "Setgid",
+ "Setgroups",
+ "Sethostname",
+ "Setlogin",
+ "Setpgid",
+ "Setpriority",
+ "Setprivexec",
+ "Setregid",
+ "Setresgid",
+ "Setresuid",
+ "Setreuid",
+ "Setrlimit",
+ "Setsid",
+ "Setsockopt",
+ "SetsockoptByte",
+ "SetsockoptICMPv6Filter",
+ "SetsockoptIPMreq",
+ "SetsockoptIPMreqn",
+ "SetsockoptIPv6Mreq",
+ "SetsockoptInet4Addr",
+ "SetsockoptInt",
+ "SetsockoptLinger",
+ "SetsockoptString",
+ "SetsockoptTimeval",
+ "Settimeofday",
+ "Setuid",
+ "Setxattr",
+ "Shutdown",
+ "SidTypeAlias",
+ "SidTypeComputer",
+ "SidTypeDeletedAccount",
+ "SidTypeDomain",
+ "SidTypeGroup",
+ "SidTypeInvalid",
+ "SidTypeLabel",
+ "SidTypeUnknown",
+ "SidTypeUser",
+ "SidTypeWellKnownGroup",
+ "Signal",
+ "SizeofBpfHdr",
+ "SizeofBpfInsn",
+ "SizeofBpfProgram",
+ "SizeofBpfStat",
+ "SizeofBpfVersion",
+ "SizeofBpfZbuf",
+ "SizeofBpfZbufHeader",
+ "SizeofCmsghdr",
+ "SizeofICMPv6Filter",
+ "SizeofIPMreq",
+ "SizeofIPMreqn",
+ "SizeofIPv6MTUInfo",
+ "SizeofIPv6Mreq",
+ "SizeofIfAddrmsg",
+ "SizeofIfAnnounceMsghdr",
+ "SizeofIfData",
+ "SizeofIfInfomsg",
+ "SizeofIfMsghdr",
+ "SizeofIfaMsghdr",
+ "SizeofIfmaMsghdr",
+ "SizeofIfmaMsghdr2",
+ "SizeofInet4Pktinfo",
+ "SizeofInet6Pktinfo",
+ "SizeofInotifyEvent",
+ "SizeofLinger",
+ "SizeofMsghdr",
+ "SizeofNlAttr",
+ "SizeofNlMsgerr",
+ "SizeofNlMsghdr",
+ "SizeofRtAttr",
+ "SizeofRtGenmsg",
+ "SizeofRtMetrics",
+ "SizeofRtMsg",
+ "SizeofRtMsghdr",
+ "SizeofRtNexthop",
+ "SizeofSockFilter",
+ "SizeofSockFprog",
+ "SizeofSockaddrAny",
+ "SizeofSockaddrDatalink",
+ "SizeofSockaddrInet4",
+ "SizeofSockaddrInet6",
+ "SizeofSockaddrLinklayer",
+ "SizeofSockaddrNetlink",
+ "SizeofSockaddrUnix",
+ "SizeofTCPInfo",
+ "SizeofUcred",
+ "SlicePtrFromStrings",
+ "SockFilter",
+ "SockFprog",
+ "Sockaddr",
+ "SockaddrDatalink",
+ "SockaddrGen",
+ "SockaddrInet4",
+ "SockaddrInet6",
+ "SockaddrLinklayer",
+ "SockaddrNetlink",
+ "SockaddrUnix",
+ "Socket",
+ "SocketControlMessage",
+ "SocketDisableIPv6",
+ "Socketpair",
+ "Splice",
+ "StartProcess",
+ "StartupInfo",
+ "Stat",
+ "Stat_t",
+ "Statfs",
+ "Statfs_t",
+ "Stderr",
+ "Stdin",
+ "Stdout",
+ "StringBytePtr",
+ "StringByteSlice",
+ "StringSlicePtr",
+ "StringToSid",
+ "StringToUTF16",
+ "StringToUTF16Ptr",
+ "Symlink",
+ "Sync",
+ "SyncFileRange",
+ "SysProcAttr",
+ "SysProcIDMap",
+ "Syscall",
+ "Syscall12",
+ "Syscall15",
+ "Syscall18",
+ "Syscall6",
+ "Syscall9",
+ "Sysctl",
+ "SysctlUint32",
+ "Sysctlnode",
+ "Sysinfo",
+ "Sysinfo_t",
+ "Systemtime",
+ "TCGETS",
+ "TCIFLUSH",
+ "TCIOFLUSH",
+ "TCOFLUSH",
+ "TCPInfo",
+ "TCPKeepalive",
+ "TCP_CA_NAME_MAX",
+ "TCP_CONGCTL",
+ "TCP_CONGESTION",
+ "TCP_CONNECTIONTIMEOUT",
+ "TCP_CORK",
+ "TCP_DEFER_ACCEPT",
+ "TCP_INFO",
+ "TCP_KEEPALIVE",
+ "TCP_KEEPCNT",
+ "TCP_KEEPIDLE",
+ "TCP_KEEPINIT",
+ "TCP_KEEPINTVL",
+ "TCP_LINGER2",
+ "TCP_MAXBURST",
+ "TCP_MAXHLEN",
+ "TCP_MAXOLEN",
+ "TCP_MAXSEG",
+ "TCP_MAXWIN",
+ "TCP_MAX_SACK",
+ "TCP_MAX_WINSHIFT",
+ "TCP_MD5SIG",
+ "TCP_MD5SIG_MAXKEYLEN",
+ "TCP_MINMSS",
+ "TCP_MINMSSOVERLOAD",
+ "TCP_MSS",
+ "TCP_NODELAY",
+ "TCP_NOOPT",
+ "TCP_NOPUSH",
+ "TCP_NSTATES",
+ "TCP_QUICKACK",
+ "TCP_RXT_CONNDROPTIME",
+ "TCP_RXT_FINDROP",
+ "TCP_SACK_ENABLE",
+ "TCP_SYNCNT",
+ "TCP_VENDOR",
+ "TCP_WINDOW_CLAMP",
+ "TCSAFLUSH",
+ "TCSETS",
+ "TF_DISCONNECT",
+ "TF_REUSE_SOCKET",
+ "TF_USE_DEFAULT_WORKER",
+ "TF_USE_KERNEL_APC",
+ "TF_USE_SYSTEM_THREAD",
+ "TF_WRITE_BEHIND",
+ "TH32CS_INHERIT",
+ "TH32CS_SNAPALL",
+ "TH32CS_SNAPHEAPLIST",
+ "TH32CS_SNAPMODULE",
+ "TH32CS_SNAPMODULE32",
+ "TH32CS_SNAPPROCESS",
+ "TH32CS_SNAPTHREAD",
+ "TIME_ZONE_ID_DAYLIGHT",
+ "TIME_ZONE_ID_STANDARD",
+ "TIME_ZONE_ID_UNKNOWN",
+ "TIOCCBRK",
+ "TIOCCDTR",
+ "TIOCCONS",
+ "TIOCDCDTIMESTAMP",
+ "TIOCDRAIN",
+ "TIOCDSIMICROCODE",
+ "TIOCEXCL",
+ "TIOCEXT",
+ "TIOCFLAG_CDTRCTS",
+ "TIOCFLAG_CLOCAL",
+ "TIOCFLAG_CRTSCTS",
+ "TIOCFLAG_MDMBUF",
+ "TIOCFLAG_PPS",
+ "TIOCFLAG_SOFTCAR",
+ "TIOCFLUSH",
+ "TIOCGDEV",
+ "TIOCGDRAINWAIT",
+ "TIOCGETA",
+ "TIOCGETD",
+ "TIOCGFLAGS",
+ "TIOCGICOUNT",
+ "TIOCGLCKTRMIOS",
+ "TIOCGLINED",
+ "TIOCGPGRP",
+ "TIOCGPTN",
+ "TIOCGQSIZE",
+ "TIOCGRANTPT",
+ "TIOCGRS485",
+ "TIOCGSERIAL",
+ "TIOCGSID",
+ "TIOCGSIZE",
+ "TIOCGSOFTCAR",
+ "TIOCGTSTAMP",
+ "TIOCGWINSZ",
+ "TIOCINQ",
+ "TIOCIXOFF",
+ "TIOCIXON",
+ "TIOCLINUX",
+ "TIOCMBIC",
+ "TIOCMBIS",
+ "TIOCMGDTRWAIT",
+ "TIOCMGET",
+ "TIOCMIWAIT",
+ "TIOCMODG",
+ "TIOCMODS",
+ "TIOCMSDTRWAIT",
+ "TIOCMSET",
+ "TIOCM_CAR",
+ "TIOCM_CD",
+ "TIOCM_CTS",
+ "TIOCM_DCD",
+ "TIOCM_DSR",
+ "TIOCM_DTR",
+ "TIOCM_LE",
+ "TIOCM_RI",
+ "TIOCM_RNG",
+ "TIOCM_RTS",
+ "TIOCM_SR",
+ "TIOCM_ST",
+ "TIOCNOTTY",
+ "TIOCNXCL",
+ "TIOCOUTQ",
+ "TIOCPKT",
+ "TIOCPKT_DATA",
+ "TIOCPKT_DOSTOP",
+ "TIOCPKT_FLUSHREAD",
+ "TIOCPKT_FLUSHWRITE",
+ "TIOCPKT_IOCTL",
+ "TIOCPKT_NOSTOP",
+ "TIOCPKT_START",
+ "TIOCPKT_STOP",
+ "TIOCPTMASTER",
+ "TIOCPTMGET",
+ "TIOCPTSNAME",
+ "TIOCPTYGNAME",
+ "TIOCPTYGRANT",
+ "TIOCPTYUNLK",
+ "TIOCRCVFRAME",
+ "TIOCREMOTE",
+ "TIOCSBRK",
+ "TIOCSCONS",
+ "TIOCSCTTY",
+ "TIOCSDRAINWAIT",
+ "TIOCSDTR",
+ "TIOCSERCONFIG",
+ "TIOCSERGETLSR",
+ "TIOCSERGETMULTI",
+ "TIOCSERGSTRUCT",
+ "TIOCSERGWILD",
+ "TIOCSERSETMULTI",
+ "TIOCSERSWILD",
+ "TIOCSER_TEMT",
+ "TIOCSETA",
+ "TIOCSETAF",
+ "TIOCSETAW",
+ "TIOCSETD",
+ "TIOCSFLAGS",
+ "TIOCSIG",
+ "TIOCSLCKTRMIOS",
+ "TIOCSLINED",
+ "TIOCSPGRP",
+ "TIOCSPTLCK",
+ "TIOCSQSIZE",
+ "TIOCSRS485",
+ "TIOCSSERIAL",
+ "TIOCSSIZE",
+ "TIOCSSOFTCAR",
+ "TIOCSTART",
+ "TIOCSTAT",
+ "TIOCSTI",
+ "TIOCSTOP",
+ "TIOCSTSTAMP",
+ "TIOCSWINSZ",
+ "TIOCTIMESTAMP",
+ "TIOCUCNTL",
+ "TIOCVHANGUP",
+ "TIOCXMTFRAME",
+ "TOKEN_ADJUST_DEFAULT",
+ "TOKEN_ADJUST_GROUPS",
+ "TOKEN_ADJUST_PRIVILEGES",
+ "TOKEN_ADJUST_SESSIONID",
+ "TOKEN_ALL_ACCESS",
+ "TOKEN_ASSIGN_PRIMARY",
+ "TOKEN_DUPLICATE",
+ "TOKEN_EXECUTE",
+ "TOKEN_IMPERSONATE",
+ "TOKEN_QUERY",
+ "TOKEN_QUERY_SOURCE",
+ "TOKEN_READ",
+ "TOKEN_WRITE",
+ "TOSTOP",
+ "TRUNCATE_EXISTING",
+ "TUNATTACHFILTER",
+ "TUNDETACHFILTER",
+ "TUNGETFEATURES",
+ "TUNGETIFF",
+ "TUNGETSNDBUF",
+ "TUNGETVNETHDRSZ",
+ "TUNSETDEBUG",
+ "TUNSETGROUP",
+ "TUNSETIFF",
+ "TUNSETLINK",
+ "TUNSETNOCSUM",
+ "TUNSETOFFLOAD",
+ "TUNSETOWNER",
+ "TUNSETPERSIST",
+ "TUNSETSNDBUF",
+ "TUNSETTXFILTER",
+ "TUNSETVNETHDRSZ",
+ "Tee",
+ "TerminateProcess",
+ "Termios",
+ "Tgkill",
+ "Time",
+ "Time_t",
+ "Times",
+ "Timespec",
+ "TimespecToNsec",
+ "Timeval",
+ "Timeval32",
+ "TimevalToNsec",
+ "Timex",
+ "Timezoneinformation",
+ "Tms",
+ "Token",
+ "TokenAccessInformation",
+ "TokenAuditPolicy",
+ "TokenDefaultDacl",
+ "TokenElevation",
+ "TokenElevationType",
+ "TokenGroups",
+ "TokenGroupsAndPrivileges",
+ "TokenHasRestrictions",
+ "TokenImpersonationLevel",
+ "TokenIntegrityLevel",
+ "TokenLinkedToken",
+ "TokenLogonSid",
+ "TokenMandatoryPolicy",
+ "TokenOrigin",
+ "TokenOwner",
+ "TokenPrimaryGroup",
+ "TokenPrivileges",
+ "TokenRestrictedSids",
+ "TokenSandBoxInert",
+ "TokenSessionId",
+ "TokenSessionReference",
+ "TokenSource",
+ "TokenStatistics",
+ "TokenType",
+ "TokenUIAccess",
+ "TokenUser",
+ "TokenVirtualizationAllowed",
+ "TokenVirtualizationEnabled",
+ "Tokenprimarygroup",
+ "Tokenuser",
+ "TranslateAccountName",
+ "TranslateName",
+ "TransmitFile",
+ "TransmitFileBuffers",
+ "Truncate",
+ "UNIX_PATH_MAX",
+ "USAGE_MATCH_TYPE_AND",
+ "USAGE_MATCH_TYPE_OR",
+ "UTF16FromString",
+ "UTF16PtrFromString",
+ "UTF16ToString",
+ "Ucred",
+ "Umask",
+ "Uname",
+ "Undelete",
+ "UnixCredentials",
+ "UnixRights",
+ "Unlink",
+ "Unlinkat",
+ "UnmapViewOfFile",
+ "Unmount",
+ "Unsetenv",
+ "Unshare",
+ "UserInfo10",
+ "Ustat",
+ "Ustat_t",
+ "Utimbuf",
+ "Utime",
+ "Utimes",
+ "UtimesNano",
+ "Utsname",
+ "VDISCARD",
+ "VDSUSP",
+ "VEOF",
+ "VEOL",
+ "VEOL2",
+ "VERASE",
+ "VERASE2",
+ "VINTR",
+ "VKILL",
+ "VLNEXT",
+ "VMIN",
+ "VQUIT",
+ "VREPRINT",
+ "VSTART",
+ "VSTATUS",
+ "VSTOP",
+ "VSUSP",
+ "VSWTC",
+ "VT0",
+ "VT1",
+ "VTDLY",
+ "VTIME",
+ "VWERASE",
+ "VirtualLock",
+ "VirtualUnlock",
+ "WAIT_ABANDONED",
+ "WAIT_FAILED",
+ "WAIT_OBJECT_0",
+ "WAIT_TIMEOUT",
+ "WALL",
+ "WALLSIG",
+ "WALTSIG",
+ "WCLONE",
+ "WCONTINUED",
+ "WCOREFLAG",
+ "WEXITED",
+ "WLINUXCLONE",
+ "WNOHANG",
+ "WNOTHREAD",
+ "WNOWAIT",
+ "WNOZOMBIE",
+ "WOPTSCHECKED",
+ "WORDSIZE",
+ "WSABuf",
+ "WSACleanup",
+ "WSADESCRIPTION_LEN",
+ "WSAData",
+ "WSAEACCES",
+ "WSAECONNABORTED",
+ "WSAECONNRESET",
+ "WSAEnumProtocols",
+ "WSAID_CONNECTEX",
+ "WSAIoctl",
+ "WSAPROTOCOL_LEN",
+ "WSAProtocolChain",
+ "WSAProtocolInfo",
+ "WSARecv",
+ "WSARecvFrom",
+ "WSASYS_STATUS_LEN",
+ "WSASend",
+ "WSASendTo",
+ "WSASendto",
+ "WSAStartup",
+ "WSTOPPED",
+ "WTRAPPED",
+ "WUNTRACED",
+ "Wait4",
+ "WaitForSingleObject",
+ "WaitStatus",
+ "Win32FileAttributeData",
+ "Win32finddata",
+ "Write",
+ "WriteConsole",
+ "WriteFile",
+ "X509_ASN_ENCODING",
+ "XCASE",
+ "XP1_CONNECTIONLESS",
+ "XP1_CONNECT_DATA",
+ "XP1_DISCONNECT_DATA",
+ "XP1_EXPEDITED_DATA",
+ "XP1_GRACEFUL_CLOSE",
+ "XP1_GUARANTEED_DELIVERY",
+ "XP1_GUARANTEED_ORDER",
+ "XP1_IFS_HANDLES",
+ "XP1_MESSAGE_ORIENTED",
+ "XP1_MULTIPOINT_CONTROL_PLANE",
+ "XP1_MULTIPOINT_DATA_PLANE",
+ "XP1_PARTIAL_MESSAGE",
+ "XP1_PSEUDO_STREAM",
+ "XP1_QOS_SUPPORTED",
+ "XP1_SAN_SUPPORT_SDP",
+ "XP1_SUPPORT_BROADCAST",
+ "XP1_SUPPORT_MULTIPOINT",
+ "XP1_UNI_RECV",
+ "XP1_UNI_SEND",
+ },
+ "syscall/js": []string{
+ "CopyBytesToGo",
+ "CopyBytesToJS",
+ "Error",
+ "Func",
+ "FuncOf",
+ "Global",
+ "Null",
+ "Type",
+ "TypeBoolean",
+ "TypeFunction",
+ "TypeNull",
+ "TypeNumber",
+ "TypeObject",
+ "TypeString",
+ "TypeSymbol",
+ "TypeUndefined",
+ "Undefined",
+ "Value",
+ "ValueError",
+ "ValueOf",
+ "Wrapper",
+ },
+ "testing": []string{
+ "AllocsPerRun",
+ "B",
+ "Benchmark",
+ "BenchmarkResult",
+ "Cover",
+ "CoverBlock",
+ "CoverMode",
+ "Coverage",
+ "Init",
+ "InternalBenchmark",
+ "InternalExample",
+ "InternalTest",
+ "M",
+ "Main",
+ "MainStart",
+ "PB",
+ "RegisterCover",
+ "RunBenchmarks",
+ "RunExamples",
+ "RunTests",
+ "Short",
+ "T",
+ "TB",
+ "Verbose",
+ },
+ "testing/iotest": []string{
+ "DataErrReader",
+ "ErrTimeout",
+ "HalfReader",
+ "NewReadLogger",
+ "NewWriteLogger",
+ "OneByteReader",
+ "TimeoutReader",
+ "TruncateWriter",
+ },
+ "testing/quick": []string{
+ "Check",
+ "CheckEqual",
+ "CheckEqualError",
+ "CheckError",
+ "Config",
+ "Generator",
+ "SetupError",
+ "Value",
+ },
+ "text/scanner": []string{
+ "Char",
+ "Comment",
+ "EOF",
+ "Float",
+ "GoTokens",
+ "GoWhitespace",
+ "Ident",
+ "Int",
+ "Position",
+ "RawString",
+ "ScanChars",
+ "ScanComments",
+ "ScanFloats",
+ "ScanIdents",
+ "ScanInts",
+ "ScanRawStrings",
+ "ScanStrings",
+ "Scanner",
+ "SkipComments",
+ "String",
+ "TokenString",
+ },
+ "text/tabwriter": []string{
+ "AlignRight",
+ "Debug",
+ "DiscardEmptyColumns",
+ "Escape",
+ "FilterHTML",
+ "NewWriter",
+ "StripEscape",
+ "TabIndent",
+ "Writer",
+ },
+ "text/template": []string{
+ "ExecError",
+ "FuncMap",
+ "HTMLEscape",
+ "HTMLEscapeString",
+ "HTMLEscaper",
+ "IsTrue",
+ "JSEscape",
+ "JSEscapeString",
+ "JSEscaper",
+ "Must",
+ "New",
+ "ParseFiles",
+ "ParseGlob",
+ "Template",
+ "URLQueryEscaper",
+ },
+ "text/template/parse": []string{
+ "ActionNode",
+ "BoolNode",
+ "BranchNode",
+ "ChainNode",
+ "CommandNode",
+ "DotNode",
+ "FieldNode",
+ "IdentifierNode",
+ "IfNode",
+ "IsEmptyTree",
+ "ListNode",
+ "New",
+ "NewIdentifier",
+ "NilNode",
+ "Node",
+ "NodeAction",
+ "NodeBool",
+ "NodeChain",
+ "NodeCommand",
+ "NodeDot",
+ "NodeField",
+ "NodeIdentifier",
+ "NodeIf",
+ "NodeList",
+ "NodeNil",
+ "NodeNumber",
+ "NodePipe",
+ "NodeRange",
+ "NodeString",
+ "NodeTemplate",
+ "NodeText",
+ "NodeType",
+ "NodeVariable",
+ "NodeWith",
+ "NumberNode",
+ "Parse",
+ "PipeNode",
+ "Pos",
+ "RangeNode",
+ "StringNode",
+ "TemplateNode",
+ "TextNode",
+ "Tree",
+ "VariableNode",
+ "WithNode",
+ },
+ "time": []string{
+ "ANSIC",
+ "After",
+ "AfterFunc",
+ "April",
+ "August",
+ "Date",
+ "December",
+ "Duration",
+ "February",
+ "FixedZone",
+ "Friday",
+ "Hour",
+ "January",
+ "July",
+ "June",
+ "Kitchen",
+ "LoadLocation",
+ "LoadLocationFromTZData",
+ "Local",
+ "Location",
+ "March",
+ "May",
+ "Microsecond",
+ "Millisecond",
+ "Minute",
+ "Monday",
+ "Month",
+ "Nanosecond",
+ "NewTicker",
+ "NewTimer",
+ "November",
+ "Now",
+ "October",
+ "Parse",
+ "ParseDuration",
+ "ParseError",
+ "ParseInLocation",
+ "RFC1123",
+ "RFC1123Z",
+ "RFC3339",
+ "RFC3339Nano",
+ "RFC822",
+ "RFC822Z",
+ "RFC850",
+ "RubyDate",
+ "Saturday",
+ "Second",
+ "September",
+ "Since",
+ "Sleep",
+ "Stamp",
+ "StampMicro",
+ "StampMilli",
+ "StampNano",
+ "Sunday",
+ "Thursday",
+ "Tick",
+ "Ticker",
+ "Time",
+ "Timer",
+ "Tuesday",
+ "UTC",
+ "Unix",
+ "UnixDate",
+ "Until",
+ "Wednesday",
+ "Weekday",
+ },
+ "unicode": []string{
+ "ASCII_Hex_Digit",
+ "Adlam",
+ "Ahom",
+ "Anatolian_Hieroglyphs",
+ "Arabic",
+ "Armenian",
+ "Avestan",
+ "AzeriCase",
+ "Balinese",
+ "Bamum",
+ "Bassa_Vah",
+ "Batak",
+ "Bengali",
+ "Bhaiksuki",
+ "Bidi_Control",
+ "Bopomofo",
+ "Brahmi",
+ "Braille",
+ "Buginese",
+ "Buhid",
+ "C",
+ "Canadian_Aboriginal",
+ "Carian",
+ "CaseRange",
+ "CaseRanges",
+ "Categories",
+ "Caucasian_Albanian",
+ "Cc",
+ "Cf",
+ "Chakma",
+ "Cham",
+ "Cherokee",
+ "Co",
+ "Common",
+ "Coptic",
+ "Cs",
+ "Cuneiform",
+ "Cypriot",
+ "Cyrillic",
+ "Dash",
+ "Deprecated",
+ "Deseret",
+ "Devanagari",
+ "Diacritic",
+ "Digit",
+ "Dogra",
+ "Duployan",
+ "Egyptian_Hieroglyphs",
+ "Elbasan",
+ "Ethiopic",
+ "Extender",
+ "FoldCategory",
+ "FoldScript",
+ "Georgian",
+ "Glagolitic",
+ "Gothic",
+ "Grantha",
+ "GraphicRanges",
+ "Greek",
+ "Gujarati",
+ "Gunjala_Gondi",
+ "Gurmukhi",
+ "Han",
+ "Hangul",
+ "Hanifi_Rohingya",
+ "Hanunoo",
+ "Hatran",
+ "Hebrew",
+ "Hex_Digit",
+ "Hiragana",
+ "Hyphen",
+ "IDS_Binary_Operator",
+ "IDS_Trinary_Operator",
+ "Ideographic",
+ "Imperial_Aramaic",
+ "In",
+ "Inherited",
+ "Inscriptional_Pahlavi",
+ "Inscriptional_Parthian",
+ "Is",
+ "IsControl",
+ "IsDigit",
+ "IsGraphic",
+ "IsLetter",
+ "IsLower",
+ "IsMark",
+ "IsNumber",
+ "IsOneOf",
+ "IsPrint",
+ "IsPunct",
+ "IsSpace",
+ "IsSymbol",
+ "IsTitle",
+ "IsUpper",
+ "Javanese",
+ "Join_Control",
+ "Kaithi",
+ "Kannada",
+ "Katakana",
+ "Kayah_Li",
+ "Kharoshthi",
+ "Khmer",
+ "Khojki",
+ "Khudawadi",
+ "L",
+ "Lao",
+ "Latin",
+ "Lepcha",
+ "Letter",
+ "Limbu",
+ "Linear_A",
+ "Linear_B",
+ "Lisu",
+ "Ll",
+ "Lm",
+ "Lo",
+ "Logical_Order_Exception",
+ "Lower",
+ "LowerCase",
+ "Lt",
+ "Lu",
+ "Lycian",
+ "Lydian",
+ "M",
+ "Mahajani",
+ "Makasar",
+ "Malayalam",
+ "Mandaic",
+ "Manichaean",
+ "Marchen",
+ "Mark",
+ "Masaram_Gondi",
+ "MaxASCII",
+ "MaxCase",
+ "MaxLatin1",
+ "MaxRune",
+ "Mc",
+ "Me",
+ "Medefaidrin",
+ "Meetei_Mayek",
+ "Mende_Kikakui",
+ "Meroitic_Cursive",
+ "Meroitic_Hieroglyphs",
+ "Miao",
+ "Mn",
+ "Modi",
+ "Mongolian",
+ "Mro",
+ "Multani",
+ "Myanmar",
+ "N",
+ "Nabataean",
+ "Nd",
+ "New_Tai_Lue",
+ "Newa",
+ "Nko",
+ "Nl",
+ "No",
+ "Noncharacter_Code_Point",
+ "Number",
+ "Nushu",
+ "Ogham",
+ "Ol_Chiki",
+ "Old_Hungarian",
+ "Old_Italic",
+ "Old_North_Arabian",
+ "Old_Permic",
+ "Old_Persian",
+ "Old_Sogdian",
+ "Old_South_Arabian",
+ "Old_Turkic",
+ "Oriya",
+ "Osage",
+ "Osmanya",
+ "Other",
+ "Other_Alphabetic",
+ "Other_Default_Ignorable_Code_Point",
+ "Other_Grapheme_Extend",
+ "Other_ID_Continue",
+ "Other_ID_Start",
+ "Other_Lowercase",
+ "Other_Math",
+ "Other_Uppercase",
+ "P",
+ "Pahawh_Hmong",
+ "Palmyrene",
+ "Pattern_Syntax",
+ "Pattern_White_Space",
+ "Pau_Cin_Hau",
+ "Pc",
+ "Pd",
+ "Pe",
+ "Pf",
+ "Phags_Pa",
+ "Phoenician",
+ "Pi",
+ "Po",
+ "Prepended_Concatenation_Mark",
+ "PrintRanges",
+ "Properties",
+ "Ps",
+ "Psalter_Pahlavi",
+ "Punct",
+ "Quotation_Mark",
+ "Radical",
+ "Range16",
+ "Range32",
+ "RangeTable",
+ "Regional_Indicator",
+ "Rejang",
+ "ReplacementChar",
+ "Runic",
+ "S",
+ "STerm",
+ "Samaritan",
+ "Saurashtra",
+ "Sc",
+ "Scripts",
+ "Sentence_Terminal",
+ "Sharada",
+ "Shavian",
+ "Siddham",
+ "SignWriting",
+ "SimpleFold",
+ "Sinhala",
+ "Sk",
+ "Sm",
+ "So",
+ "Soft_Dotted",
+ "Sogdian",
+ "Sora_Sompeng",
+ "Soyombo",
+ "Space",
+ "SpecialCase",
+ "Sundanese",
+ "Syloti_Nagri",
+ "Symbol",
+ "Syriac",
+ "Tagalog",
+ "Tagbanwa",
+ "Tai_Le",
+ "Tai_Tham",
+ "Tai_Viet",
+ "Takri",
+ "Tamil",
+ "Tangut",
+ "Telugu",
+ "Terminal_Punctuation",
+ "Thaana",
+ "Thai",
+ "Tibetan",
+ "Tifinagh",
+ "Tirhuta",
+ "Title",
+ "TitleCase",
+ "To",
+ "ToLower",
+ "ToTitle",
+ "ToUpper",
+ "TurkishCase",
+ "Ugaritic",
+ "Unified_Ideograph",
+ "Upper",
+ "UpperCase",
+ "UpperLower",
+ "Vai",
+ "Variation_Selector",
+ "Version",
+ "Warang_Citi",
+ "White_Space",
+ "Yi",
+ "Z",
+ "Zanabazar_Square",
+ "Zl",
+ "Zp",
+ "Zs",
+ },
+ "unicode/utf16": []string{
+ "Decode",
+ "DecodeRune",
+ "Encode",
+ "EncodeRune",
+ "IsSurrogate",
+ },
+ "unicode/utf8": []string{
+ "DecodeLastRune",
+ "DecodeLastRuneInString",
+ "DecodeRune",
+ "DecodeRuneInString",
+ "EncodeRune",
+ "FullRune",
+ "FullRuneInString",
+ "MaxRune",
+ "RuneCount",
+ "RuneCountInString",
+ "RuneError",
+ "RuneLen",
+ "RuneSelf",
+ "RuneStart",
+ "UTFMax",
+ "Valid",
+ "ValidRune",
+ "ValidString",
+ },
+ "unsafe": []string{
+ "Alignof",
+ "ArbitraryType",
+ "Offsetof",
+ "Pointer",
+ "Sizeof",
+ },
+}
diff --git a/vendor/golang.org/x/tools/internal/module/module.go b/vendor/golang.org/x/tools/internal/module/module.go
new file mode 100644
index 0000000000000..9a4edb9dec159
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/module/module.go
@@ -0,0 +1,540 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package module defines the module.Version type
+// along with support code.
+package module
+
+// IMPORTANT NOTE
+//
+// This file essentially defines the set of valid import paths for the go command.
+// There are many subtle considerations, including Unicode ambiguity,
+// security, network, and file system representations.
+//
+// This file also defines the set of valid module path and version combinations,
+// another topic with many subtle considerations.
+//
+// Changes to the semantics in this file require approval from rsc.
+
+import (
+ "fmt"
+ "sort"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+
+ "golang.org/x/tools/internal/semver"
+)
+
+// A Version is defined by a module path and version pair.
+type Version struct {
+ Path string
+
+ // Version is usually a semantic version in canonical form.
+ // There are two exceptions to this general rule.
+ // First, the top-level target of a build has no specific version
+ // and uses Version = "".
+ // Second, during MVS calculations the version "none" is used
+ // to represent the decision to take no version of a given module.
+ Version string `json:",omitempty"`
+}
+
+// Check checks that a given module path, version pair is valid.
+// In addition to the path being a valid module path
+// and the version being a valid semantic version,
+// the two must correspond.
+// For example, the path "yaml/v2" only corresponds to
+// semantic versions beginning with "v2.".
+func Check(path, version string) error {
+ if err := CheckPath(path); err != nil {
+ return err
+ }
+ if !semver.IsValid(version) {
+ return fmt.Errorf("malformed semantic version %v", version)
+ }
+ _, pathMajor, _ := SplitPathVersion(path)
+ if !MatchPathMajor(version, pathMajor) {
+ if pathMajor == "" {
+ pathMajor = "v0 or v1"
+ }
+ if pathMajor[0] == '.' { // .v1
+ pathMajor = pathMajor[1:]
+ }
+ return fmt.Errorf("mismatched module path %v and version %v (want %v)", path, version, pathMajor)
+ }
+ return nil
+}
+
+// firstPathOK reports whether r can appear in the first element of a module path.
+// The first element of the path must be an LDH domain name, at least for now.
+// To avoid case ambiguity, the domain name must be entirely lower case.
+func firstPathOK(r rune) bool {
+ return r == '-' || r == '.' ||
+ '0' <= r && r <= '9' ||
+ 'a' <= r && r <= 'z'
+}
+
+// pathOK reports whether r can appear in an import path element.
+// Paths can be ASCII letters, ASCII digits, and limited ASCII punctuation: + - . _ and ~.
+// This matches what "go get" has historically recognized in import paths.
+// TODO(rsc): We would like to allow Unicode letters, but that requires additional
+// care in the safe encoding (see note below).
+func pathOK(r rune) bool {
+ if r < utf8.RuneSelf {
+ return r == '+' || r == '-' || r == '.' || r == '_' || r == '~' ||
+ '0' <= r && r <= '9' ||
+ 'A' <= r && r <= 'Z' ||
+ 'a' <= r && r <= 'z'
+ }
+ return false
+}
+
+// fileNameOK reports whether r can appear in a file name.
+// For now we allow all Unicode letters but otherwise limit to pathOK plus a few more punctuation characters.
+// If we expand the set of allowed characters here, we have to
+// work harder at detecting potential case-folding and normalization collisions.
+// See note about "safe encoding" below.
+func fileNameOK(r rune) bool {
+ if r < utf8.RuneSelf {
+ // Entire set of ASCII punctuation, from which we remove characters:
+ // ! " # $ % & ' ( ) * + , - . / : ; < = > ? @ [ \ ] ^ _ ` { | } ~
+ // We disallow some shell special characters: " ' * < > ? ` |
+ // (Note that some of those are disallowed by the Windows file system as well.)
+ // We also disallow path separators / : and \ (fileNameOK is only called on path element characters).
+ // We allow spaces (U+0020) in file names.
+ const allowed = "!#$%&()+,-.=@[]^_{}~ "
+ if '0' <= r && r <= '9' || 'A' <= r && r <= 'Z' || 'a' <= r && r <= 'z' {
+ return true
+ }
+ for i := 0; i < len(allowed); i++ {
+ if rune(allowed[i]) == r {
+ return true
+ }
+ }
+ return false
+ }
+ // It may be OK to add more ASCII punctuation here, but only carefully.
+ // For example Windows disallows < > \, and macOS disallows :, so we must not allow those.
+ return unicode.IsLetter(r)
+}
+
+// CheckPath checks that a module path is valid.
+func CheckPath(path string) error {
+ if err := checkPath(path, false); err != nil {
+ return fmt.Errorf("malformed module path %q: %v", path, err)
+ }
+ i := strings.Index(path, "/")
+ if i < 0 {
+ i = len(path)
+ }
+ if i == 0 {
+ return fmt.Errorf("malformed module path %q: leading slash", path)
+ }
+ if !strings.Contains(path[:i], ".") {
+ return fmt.Errorf("malformed module path %q: missing dot in first path element", path)
+ }
+ if path[0] == '-' {
+ return fmt.Errorf("malformed module path %q: leading dash in first path element", path)
+ }
+ for _, r := range path[:i] {
+ if !firstPathOK(r) {
+ return fmt.Errorf("malformed module path %q: invalid char %q in first path element", path, r)
+ }
+ }
+ if _, _, ok := SplitPathVersion(path); !ok {
+ return fmt.Errorf("malformed module path %q: invalid version", path)
+ }
+ return nil
+}
+
+// CheckImportPath checks that an import path is valid.
+func CheckImportPath(path string) error {
+ if err := checkPath(path, false); err != nil {
+ return fmt.Errorf("malformed import path %q: %v", path, err)
+ }
+ return nil
+}
+
+// checkPath checks that a general path is valid.
+// It returns an error describing why but not mentioning path.
+// Because these checks apply to both module paths and import paths,
+// the caller is expected to add the "malformed ___ path %q: " prefix.
+// fileName indicates whether the final element of the path is a file name
+// (as opposed to a directory name).
+func checkPath(path string, fileName bool) error {
+ if !utf8.ValidString(path) {
+ return fmt.Errorf("invalid UTF-8")
+ }
+ if path == "" {
+ return fmt.Errorf("empty string")
+ }
+ if strings.Contains(path, "..") {
+ return fmt.Errorf("double dot")
+ }
+ if strings.Contains(path, "//") {
+ return fmt.Errorf("double slash")
+ }
+ if path[len(path)-1] == '/' {
+ return fmt.Errorf("trailing slash")
+ }
+ elemStart := 0
+ for i, r := range path {
+ if r == '/' {
+ if err := checkElem(path[elemStart:i], fileName); err != nil {
+ return err
+ }
+ elemStart = i + 1
+ }
+ }
+ if err := checkElem(path[elemStart:], fileName); err != nil {
+ return err
+ }
+ return nil
+}
+
+// checkElem checks whether an individual path element is valid.
+// fileName indicates whether the element is a file name (not a directory name).
+func checkElem(elem string, fileName bool) error {
+ if elem == "" {
+ return fmt.Errorf("empty path element")
+ }
+ if strings.Count(elem, ".") == len(elem) {
+ return fmt.Errorf("invalid path element %q", elem)
+ }
+ if elem[0] == '.' && !fileName {
+ return fmt.Errorf("leading dot in path element")
+ }
+ if elem[len(elem)-1] == '.' {
+ return fmt.Errorf("trailing dot in path element")
+ }
+ charOK := pathOK
+ if fileName {
+ charOK = fileNameOK
+ }
+ for _, r := range elem {
+ if !charOK(r) {
+ return fmt.Errorf("invalid char %q", r)
+ }
+ }
+
+ // Windows disallows a bunch of path elements, sadly.
+ // See https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file
+ short := elem
+ if i := strings.Index(short, "."); i >= 0 {
+ short = short[:i]
+ }
+ for _, bad := range badWindowsNames {
+ if strings.EqualFold(bad, short) {
+ return fmt.Errorf("disallowed path element %q", elem)
+ }
+ }
+ return nil
+}
+
+// CheckFilePath checks whether a slash-separated file path is valid.
+func CheckFilePath(path string) error {
+ if err := checkPath(path, true); err != nil {
+ return fmt.Errorf("malformed file path %q: %v", path, err)
+ }
+ return nil
+}
+
+// badWindowsNames are the reserved file path elements on Windows.
+// See https://docs.microsoft.com/en-us/windows/desktop/fileio/naming-a-file
+var badWindowsNames = []string{
+ "CON",
+ "PRN",
+ "AUX",
+ "NUL",
+ "COM1",
+ "COM2",
+ "COM3",
+ "COM4",
+ "COM5",
+ "COM6",
+ "COM7",
+ "COM8",
+ "COM9",
+ "LPT1",
+ "LPT2",
+ "LPT3",
+ "LPT4",
+ "LPT5",
+ "LPT6",
+ "LPT7",
+ "LPT8",
+ "LPT9",
+}
+
+// SplitPathVersion returns prefix and major version such that prefix+pathMajor == path
+// and version is either empty or "/vN" for N >= 2.
+// As a special case, gopkg.in paths are recognized directly;
+// they require ".vN" instead of "/vN", and for all N, not just N >= 2.
+func SplitPathVersion(path string) (prefix, pathMajor string, ok bool) {
+ if strings.HasPrefix(path, "gopkg.in/") {
+ return splitGopkgIn(path)
+ }
+
+ i := len(path)
+ dot := false
+ for i > 0 && ('0' <= path[i-1] && path[i-1] <= '9' || path[i-1] == '.') {
+ if path[i-1] == '.' {
+ dot = true
+ }
+ i--
+ }
+ if i <= 1 || i == len(path) || path[i-1] != 'v' || path[i-2] != '/' {
+ return path, "", true
+ }
+ prefix, pathMajor = path[:i-2], path[i-2:]
+ if dot || len(pathMajor) <= 2 || pathMajor[2] == '0' || pathMajor == "/v1" {
+ return path, "", false
+ }
+ return prefix, pathMajor, true
+}
+
+// splitGopkgIn is like SplitPathVersion but only for gopkg.in paths.
+func splitGopkgIn(path string) (prefix, pathMajor string, ok bool) {
+ if !strings.HasPrefix(path, "gopkg.in/") {
+ return path, "", false
+ }
+ i := len(path)
+ if strings.HasSuffix(path, "-unstable") {
+ i -= len("-unstable")
+ }
+ for i > 0 && ('0' <= path[i-1] && path[i-1] <= '9') {
+ i--
+ }
+ if i <= 1 || path[i-1] != 'v' || path[i-2] != '.' {
+ // All gopkg.in paths must end in vN for some N.
+ return path, "", false
+ }
+ prefix, pathMajor = path[:i-2], path[i-2:]
+ if len(pathMajor) <= 2 || pathMajor[2] == '0' && pathMajor != ".v0" {
+ return path, "", false
+ }
+ return prefix, pathMajor, true
+}
+
+// MatchPathMajor reports whether the semantic version v
+// matches the path major version pathMajor.
+func MatchPathMajor(v, pathMajor string) bool {
+ if strings.HasPrefix(pathMajor, ".v") && strings.HasSuffix(pathMajor, "-unstable") {
+ pathMajor = strings.TrimSuffix(pathMajor, "-unstable")
+ }
+ if strings.HasPrefix(v, "v0.0.0-") && pathMajor == ".v1" {
+ // Allow old bug in pseudo-versions that generated v0.0.0- pseudoversion for gopkg .v1.
+		// For example, gopkg.in/[email protected]'s go.mod requires gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405.
+ return true
+ }
+ m := semver.Major(v)
+ if pathMajor == "" {
+ return m == "v0" || m == "v1" || semver.Build(v) == "+incompatible"
+ }
+ return (pathMajor[0] == '/' || pathMajor[0] == '.') && m == pathMajor[1:]
+}
+
+// CanonicalVersion returns the canonical form of the version string v.
+// It is the same as semver.Canonical(v) except that it preserves the special build suffix "+incompatible".
+func CanonicalVersion(v string) string {
+ cv := semver.Canonical(v)
+ if semver.Build(v) == "+incompatible" {
+ cv += "+incompatible"
+ }
+ return cv
+}
+
+// Sort sorts the list by Path, breaking ties by comparing Versions.
+func Sort(list []Version) {
+ sort.Slice(list, func(i, j int) bool {
+ mi := list[i]
+ mj := list[j]
+ if mi.Path != mj.Path {
+ return mi.Path < mj.Path
+ }
+ // To help go.sum formatting, allow version/file.
+ // Compare semver prefix by semver rules,
+ // file by string order.
+ vi := mi.Version
+ vj := mj.Version
+ var fi, fj string
+ if k := strings.Index(vi, "/"); k >= 0 {
+ vi, fi = vi[:k], vi[k:]
+ }
+ if k := strings.Index(vj, "/"); k >= 0 {
+ vj, fj = vj[:k], vj[k:]
+ }
+ if vi != vj {
+ return semver.Compare(vi, vj) < 0
+ }
+ return fi < fj
+ })
+}
+
+// Safe encodings
+//
+// Module paths appear as substrings of file system paths
+// (in the download cache) and of web server URLs in the proxy protocol.
+// In general we cannot rely on file systems to be case-sensitive,
+// nor can we rely on web servers, since they read from file systems.
+// That is, we cannot rely on the file system to keep rsc.io/QUOTE
+// and rsc.io/quote separate. Windows and macOS don't.
+// Instead, we must never require two different casings of a file path.
+// Because we want the download cache to match the proxy protocol,
+// and because we want the proxy protocol to be possible to serve
+// from a tree of static files (which might be stored on a case-insensitive
+// file system), the proxy protocol must never require two different casings
+// of a URL path either.
+//
+// One possibility would be to make the safe encoding be the lowercase
+// hexadecimal encoding of the actual path bytes. This would avoid ever
+// needing different casings of a file path, but it would be fairly illegible
+// to most programmers when those paths appeared in the file system
+// (including in file paths in compiler errors and stack traces),
+// in web server logs, and so on. Instead, we want a safe encoding that
+// leaves most paths unaltered.
+//
+// The safe encoding is this:
+// replace every uppercase letter with an exclamation mark
+// followed by the letter's lowercase equivalent.
+//
+// For example,
+// github.com/Azure/azure-sdk-for-go -> github.com/!azure/azure-sdk-for-go.
+// github.com/GoogleCloudPlatform/cloudsql-proxy -> github.com/!google!cloud!platform/cloudsql-proxy
+// github.com/Sirupsen/logrus -> github.com/!sirupsen/logrus.
+//
+// Import paths that avoid upper-case letters are left unchanged.
+// Note that because import paths are ASCII-only and avoid various
+// problematic punctuation (like : < and >), the safe encoding is also ASCII-only
+// and avoids the same problematic punctuation.
+//
+// Import paths have never allowed exclamation marks, so there is no
+// need to define how to encode a literal !.
+//
+// Although paths are disallowed from using Unicode (see pathOK above),
+// the eventual plan is to allow Unicode letters as well, to assume that
+// file systems and URLs are Unicode-safe (storing UTF-8), and apply
+// the !-for-uppercase convention. Note however that not all runes that
+// are different but case-fold equivalent are an upper/lower pair.
+// For example, U+004B ('K'), U+006B ('k'), and U+212A ('K' for Kelvin)
+// are considered to case-fold to each other. When we do add Unicode
+// letters, we must not assume that upper/lower are the only case-equivalent pairs.
+// Perhaps the Kelvin symbol would be disallowed entirely, for example.
+// Or perhaps it would encode as "!!k", or perhaps as "(212A)".
+//
+// Also, it would be nice to allow Unicode marks as well as letters,
+// but marks include combining marks, and then we must deal not
+// only with case folding but also normalization: both U+00E9 ('é')
+// and U+0065 U+0301 ('e' followed by combining acute accent)
+// look the same on the page and are treated by some file systems
+// as the same path. If we do allow Unicode marks in paths, there
+// must be some kind of normalization to allow only one canonical
+// encoding of any character used in an import path.
+
+// EncodePath returns the safe encoding of the given module path.
+// It fails if the module path is invalid.
+func EncodePath(path string) (encoding string, err error) {
+ if err := CheckPath(path); err != nil {
+ return "", err
+ }
+
+ return encodeString(path)
+}
+
+// EncodeVersion returns the safe encoding of the given module version.
+// Versions are allowed to be in non-semver form but must be valid file names
+// and not contain exclamation marks.
+func EncodeVersion(v string) (encoding string, err error) {
+ if err := checkElem(v, true); err != nil || strings.Contains(v, "!") {
+ return "", fmt.Errorf("disallowed version string %q", v)
+ }
+ return encodeString(v)
+}
+
+func encodeString(s string) (encoding string, err error) {
+ haveUpper := false
+ for _, r := range s {
+ if r == '!' || r >= utf8.RuneSelf {
+ // This should be disallowed by CheckPath, but diagnose anyway.
+ // The correctness of the encoding loop below depends on it.
+ return "", fmt.Errorf("internal error: inconsistency in EncodePath")
+ }
+ if 'A' <= r && r <= 'Z' {
+ haveUpper = true
+ }
+ }
+
+ if !haveUpper {
+ return s, nil
+ }
+
+ var buf []byte
+ for _, r := range s {
+ if 'A' <= r && r <= 'Z' {
+ buf = append(buf, '!', byte(r+'a'-'A'))
+ } else {
+ buf = append(buf, byte(r))
+ }
+ }
+ return string(buf), nil
+}
+
+// DecodePath returns the module path of the given safe encoding.
+// It fails if the encoding is invalid or encodes an invalid path.
+func DecodePath(encoding string) (path string, err error) {
+ path, ok := decodeString(encoding)
+ if !ok {
+ return "", fmt.Errorf("invalid module path encoding %q", encoding)
+ }
+ if err := CheckPath(path); err != nil {
+ return "", fmt.Errorf("invalid module path encoding %q: %v", encoding, err)
+ }
+ return path, nil
+}
+
+// DecodeVersion returns the version string for the given safe encoding.
+// It fails if the encoding is invalid or encodes an invalid version.
+// Versions are allowed to be in non-semver form but must be valid file names
+// and not contain exclamation marks.
+func DecodeVersion(encoding string) (v string, err error) {
+ v, ok := decodeString(encoding)
+ if !ok {
+ return "", fmt.Errorf("invalid version encoding %q", encoding)
+ }
+ if err := checkElem(v, true); err != nil {
+ return "", fmt.Errorf("disallowed version string %q", v)
+ }
+ return v, nil
+}
+
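+// decodeString reverses encodeString: each "!x" pair becomes the uppercase
+// letter 'X', any other byte is copied through unchanged, and a bare
+// uppercase letter, a trailing '!', or a non-ASCII rune is rejected.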
+func decodeString(encoding string) (string, bool) {
+ var buf []byte
+
+ bang := false
+ for _, r := range encoding {
+ if r >= utf8.RuneSelf {
+ return "", false
+ }
+ if bang {
+ bang = false
+ if r < 'a' || 'z' < r {
+ return "", false
+ }
+ buf = append(buf, byte(r+'A'-'a'))
+ continue
+ }
+ if r == '!' {
+ bang = true
+ continue
+ }
+ if 'A' <= r && r <= 'Z' {
+ return "", false
+ }
+ buf = append(buf, byte(r))
+ }
+ if bang {
+ return "", false
+ }
+ return string(buf), true
+}
diff --git a/vendor/golang.org/x/tools/internal/semver/semver.go b/vendor/golang.org/x/tools/internal/semver/semver.go
new file mode 100644
index 0000000000000..4af7118e55d2e
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/semver/semver.go
@@ -0,0 +1,388 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package semver implements comparison of semantic version strings.
+// In this package, semantic version strings must begin with a leading "v",
+// as in "v1.0.0".
+//
+// The general form of a semantic version string accepted by this package is
+//
+// vMAJOR[.MINOR[.PATCH[-PRERELEASE][+BUILD]]]
+//
+// where square brackets indicate optional parts of the syntax;
+// MAJOR, MINOR, and PATCH are decimal integers without extra leading zeros;
+// PRERELEASE and BUILD are each a series of non-empty dot-separated identifiers
+// using only alphanumeric characters and hyphens; and
+// all-numeric PRERELEASE identifiers must not have leading zeros.
+//
+// This package follows Semantic Versioning 2.0.0 (see semver.org)
+// with two exceptions. First, it requires the "v" prefix. Second, it recognizes
+// vMAJOR and vMAJOR.MINOR (with no prerelease or build suffixes)
+// as shorthands for vMAJOR.0.0 and vMAJOR.MINOR.0.
+package semver
+
+// parsed returns the parsed form of a semantic version string.
+type parsed struct {
+ major string
+ minor string
+ patch string
+ short string
+ prerelease string
+ build string
+ err string
+}
+
+// IsValid reports whether v is a valid semantic version string.
+func IsValid(v string) bool {
+ _, ok := parse(v)
+ return ok
+}
+
+// Canonical returns the canonical formatting of the semantic version v.
+// It fills in any missing .MINOR or .PATCH and discards build metadata.
+// Two semantic versions compare equal only if their canonical formattings
+// are identical strings.
+// The canonical invalid semantic version is the empty string.
+func Canonical(v string) string {
+ p, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ if p.build != "" {
+ return v[:len(v)-len(p.build)]
+ }
+ if p.short != "" {
+ return v + p.short
+ }
+ return v
+}
+
+// Major returns the major version prefix of the semantic version v.
+// For example, Major("v2.1.0") == "v2".
+// If v is an invalid semantic version string, Major returns the empty string.
+func Major(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ return v[:1+len(pv.major)]
+}
+
+// MajorMinor returns the major.minor version prefix of the semantic version v.
+// For example, MajorMinor("v2.1.0") == "v2.1".
+// If v is an invalid semantic version string, MajorMinor returns the empty string.
+func MajorMinor(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ i := 1 + len(pv.major)
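+	// Return a prefix of the original string when it already spells out
+	// ".minor" verbatim; otherwise rebuild the result from the parsed fields.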
+ if j := i + 1 + len(pv.minor); j <= len(v) && v[i] == '.' && v[i+1:j] == pv.minor {
+ return v[:j]
+ }
+ return v[:i] + "." + pv.minor
+}
+
+// Prerelease returns the prerelease suffix of the semantic version v.
+// For example, Prerelease("v2.1.0-pre+meta") == "-pre".
+// If v is an invalid semantic version string, Prerelease returns the empty string.
+func Prerelease(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ return pv.prerelease
+}
+
+// Build returns the build suffix of the semantic version v.
+// For example, Build("v2.1.0+meta") == "+meta".
+// If v is an invalid semantic version string, Build returns the empty string.
+func Build(v string) string {
+ pv, ok := parse(v)
+ if !ok {
+ return ""
+ }
+ return pv.build
+}
+
+// Compare returns an integer comparing two versions according to
+// semantic version precedence.
+// The result will be 0 if v == w, -1 if v < w, or +1 if v > w.
+//
+// An invalid semantic version string is considered less than a valid one.
+// All invalid semantic version strings compare equal to each other.
+func Compare(v, w string) int {
+ pv, ok1 := parse(v)
+ pw, ok2 := parse(w)
+ if !ok1 && !ok2 {
+ return 0
+ }
+ if !ok1 {
+ return -1
+ }
+ if !ok2 {
+ return +1
+ }
+ if c := compareInt(pv.major, pw.major); c != 0 {
+ return c
+ }
+ if c := compareInt(pv.minor, pw.minor); c != 0 {
+ return c
+ }
+ if c := compareInt(pv.patch, pw.patch); c != 0 {
+ return c
+ }
+ return comparePrerelease(pv.prerelease, pw.prerelease)
+}
+
+// Max canonicalizes its arguments and then returns the version string
+// that compares greater.
+func Max(v, w string) string {
+ v = Canonical(v)
+ w = Canonical(w)
+ if Compare(v, w) > 0 {
+ return v
+ }
+ return w
+}
+
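+// parse breaks a semantic version string into its components,
+// recording in p.short any implied ".0" parts that Canonical must append.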
+func parse(v string) (p parsed, ok bool) {
+ if v == "" || v[0] != 'v' {
+ p.err = "missing v prefix"
+ return
+ }
+ p.major, v, ok = parseInt(v[1:])
+ if !ok {
+ p.err = "bad major version"
+ return
+ }
+ if v == "" {
+ p.minor = "0"
+ p.patch = "0"
+ p.short = ".0.0"
+ return
+ }
+ if v[0] != '.' {
+ p.err = "bad minor prefix"
+ ok = false
+ return
+ }
+ p.minor, v, ok = parseInt(v[1:])
+ if !ok {
+ p.err = "bad minor version"
+ return
+ }
+ if v == "" {
+ p.patch = "0"
+ p.short = ".0"
+ return
+ }
+ if v[0] != '.' {
+ p.err = "bad patch prefix"
+ ok = false
+ return
+ }
+ p.patch, v, ok = parseInt(v[1:])
+ if !ok {
+ p.err = "bad patch version"
+ return
+ }
+ if len(v) > 0 && v[0] == '-' {
+ p.prerelease, v, ok = parsePrerelease(v)
+ if !ok {
+ p.err = "bad prerelease"
+ return
+ }
+ }
+ if len(v) > 0 && v[0] == '+' {
+ p.build, v, ok = parseBuild(v)
+ if !ok {
+ p.err = "bad build"
+ return
+ }
+ }
+ if v != "" {
+ p.err = "junk on end"
+ ok = false
+ return
+ }
+ ok = true
+ return
+}
+
+func parseInt(v string) (t, rest string, ok bool) {
+ if v == "" {
+ return
+ }
+ if v[0] < '0' || '9' < v[0] {
+ return
+ }
+ i := 1
+ for i < len(v) && '0' <= v[i] && v[i] <= '9' {
+ i++
+ }
+ if v[0] == '0' && i != 1 {
+ return
+ }
+ return v[:i], v[i:], true
+}
+
+func parsePrerelease(v string) (t, rest string, ok bool) {
+ // "A pre-release version MAY be denoted by appending a hyphen and
+ // a series of dot separated identifiers immediately following the patch version.
+ // Identifiers MUST comprise only ASCII alphanumerics and hyphen [0-9A-Za-z-].
+ // Identifiers MUST NOT be empty. Numeric identifiers MUST NOT include leading zeroes."
+ if v == "" || v[0] != '-' {
+ return
+ }
+ i := 1
+ start := 1
+ for i < len(v) && v[i] != '+' {
+ if !isIdentChar(v[i]) && v[i] != '.' {
+ return
+ }
+ if v[i] == '.' {
+ if start == i || isBadNum(v[start:i]) {
+ return
+ }
+ start = i + 1
+ }
+ i++
+ }
+ if start == i || isBadNum(v[start:i]) {
+ return
+ }
+ return v[:i], v[i:], true
+}
+
+func parseBuild(v string) (t, rest string, ok bool) {
+ if v == "" || v[0] != '+' {
+ return
+ }
+ i := 1
+ start := 1
+ for i < len(v) {
+		if !isIdentChar(v[i]) && v[i] != '.' {
+ return
+ }
+ if v[i] == '.' {
+ if start == i {
+ return
+ }
+ start = i + 1
+ }
+ i++
+ }
+ if start == i {
+ return
+ }
+ return v[:i], v[i:], true
+}
+
+func isIdentChar(c byte) bool {
+ return 'A' <= c && c <= 'Z' || 'a' <= c && c <= 'z' || '0' <= c && c <= '9' || c == '-'
+}
+
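+// isBadNum reports whether v is an all-numeric identifier with a
+// disallowed extra leading zero.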
+func isBadNum(v string) bool {
+ i := 0
+ for i < len(v) && '0' <= v[i] && v[i] <= '9' {
+ i++
+ }
+ return i == len(v) && i > 1 && v[0] == '0'
+}
+
+func isNum(v string) bool {
+ i := 0
+ for i < len(v) && '0' <= v[i] && v[i] <= '9' {
+ i++
+ }
+ return i == len(v)
+}
+
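+// compareInt compares two decimal strings numerically. parseInt rejects
+// extra leading zeros, so a longer string always denotes a larger number
+// and equal-length strings compare correctly byte by byte.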
+func compareInt(x, y string) int {
+ if x == y {
+ return 0
+ }
+ if len(x) < len(y) {
+ return -1
+ }
+ if len(x) > len(y) {
+ return +1
+ }
+ if x < y {
+ return -1
+ } else {
+ return +1
+ }
+}
+
+func comparePrerelease(x, y string) int {
+ // "When major, minor, and patch are equal, a pre-release version has
+ // lower precedence than a normal version.
+ // Example: 1.0.0-alpha < 1.0.0.
+ // Precedence for two pre-release versions with the same major, minor,
+ // and patch version MUST be determined by comparing each dot separated
+ // identifier from left to right until a difference is found as follows:
+ // identifiers consisting of only digits are compared numerically and
+ // identifiers with letters or hyphens are compared lexically in ASCII
+ // sort order. Numeric identifiers always have lower precedence than
+ // non-numeric identifiers. A larger set of pre-release fields has a
+ // higher precedence than a smaller set, if all of the preceding
+ // identifiers are equal.
+ // Example: 1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-alpha.beta <
+ // 1.0.0-beta < 1.0.0-beta.2 < 1.0.0-beta.11 < 1.0.0-rc.1 < 1.0.0."
+ if x == y {
+ return 0
+ }
+ if x == "" {
+ return +1
+ }
+ if y == "" {
+ return -1
+ }
+ for x != "" && y != "" {
+ x = x[1:] // skip - or .
+ y = y[1:] // skip - or .
+ var dx, dy string
+ dx, x = nextIdent(x)
+ dy, y = nextIdent(y)
+ if dx != dy {
+ ix := isNum(dx)
+ iy := isNum(dy)
+ if ix != iy {
+ if ix {
+ return -1
+ } else {
+ return +1
+ }
+ }
+ if ix {
+ if len(dx) < len(dy) {
+ return -1
+ }
+ if len(dx) > len(dy) {
+ return +1
+ }
+ }
+ if dx < dy {
+ return -1
+ } else {
+ return +1
+ }
+ }
+ }
+ if x == "" {
+ return -1
+ } else {
+ return +1
+ }
+}
+
+func nextIdent(x string) (dx, rest string) {
+ i := 0
+ for i < len(x) && x[i] != '.' {
+ i++
+ }
+ return x[:i], x[i:]
+}
diff --git a/vendor/golang.org/x/tools/internal/span/parse.go b/vendor/golang.org/x/tools/internal/span/parse.go
new file mode 100644
index 0000000000000..b3f268a38aea7
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/parse.go
@@ -0,0 +1,100 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package span
+
+import (
+ "strconv"
+ "strings"
+ "unicode/utf8"
+)
+
+// Parse returns the location represented by the input.
+// All inputs are valid locations, as they can always be a pure filename.
+// The returned span will be normalized, and thus, if printed, may produce a
+// different string.
+func Parse(input string) Span {
+ // :0:0#0-0:0#0
+ valid := input
+ var hold, offset int
+ hadCol := false
+ suf := rstripSuffix(input)
+ if suf.sep == "#" {
+ offset = suf.num
+ suf = rstripSuffix(suf.remains)
+ }
+ if suf.sep == ":" {
+ valid = suf.remains
+ hold = suf.num
+ hadCol = true
+ suf = rstripSuffix(suf.remains)
+ }
+ switch {
+ case suf.sep == ":":
+ return New(NewURI(suf.remains), NewPoint(suf.num, hold, offset), Point{})
+ case suf.sep == "-":
+ // we have a span, fall out of the case to continue
+ default:
+ // separator not valid, rewind to either the : or the start
+ return New(NewURI(valid), NewPoint(hold, 0, offset), Point{})
+ }
+ // only the span form can get here
+ // at this point we still don't know what the numbers we have mean
+	// if we have not yet seen a : then we might have either a line or a column depending
+ // on whether start has a column or not
+ // we build an end point and will fix it later if needed
+ end := NewPoint(suf.num, hold, offset)
+ hold, offset = 0, 0
+ suf = rstripSuffix(suf.remains)
+ if suf.sep == "#" {
+ offset = suf.num
+ suf = rstripSuffix(suf.remains)
+ }
+ if suf.sep != ":" {
+ // turns out we don't have a span after all, rewind
+ return New(NewURI(valid), end, Point{})
+ }
+ valid = suf.remains
+ hold = suf.num
+ suf = rstripSuffix(suf.remains)
+ if suf.sep != ":" {
+ // line#offset only
+ return New(NewURI(valid), NewPoint(hold, 0, offset), end)
+ }
+ // we have a column, so if end only had one number, it is also the column
+ if !hadCol {
+ end = NewPoint(suf.num, end.v.Line, end.v.Offset)
+ }
+ return New(NewURI(suf.remains), NewPoint(suf.num, hold, offset), end)
+}
+
+type suffix struct {
+ remains string
+ sep string
+ num int
+}
+
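+// rstripSuffix strips one trailing decimal number and its ':' or '#'
+// separator from the input, reporting num as -1 when no number is present.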
+func rstripSuffix(input string) suffix {
+ if len(input) == 0 {
+ return suffix{"", "", -1}
+ }
+ remains := input
+ num := -1
+ // first see if we have a number at the end
+ last := strings.LastIndexFunc(remains, func(r rune) bool { return r < '0' || r > '9' })
+ if last >= 0 && last < len(remains)-1 {
+ number, err := strconv.ParseInt(remains[last+1:], 10, 64)
+ if err == nil {
+ num = int(number)
+ remains = remains[:last+1]
+ }
+ }
+ // now see if we have a trailing separator
+ r, w := utf8.DecodeLastRuneInString(remains)
+	if r != ':' && r != '#' {
+ return suffix{input, "", -1}
+ }
+ remains = remains[:len(remains)-w]
+ return suffix{remains, string(r), num}
+}
diff --git a/vendor/golang.org/x/tools/internal/span/span.go b/vendor/golang.org/x/tools/internal/span/span.go
new file mode 100644
index 0000000000000..4d2ad0986670b
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/span.go
@@ -0,0 +1,285 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package span contains support for representing positions and ranges in
+// text files.
+package span
+
+import (
+ "encoding/json"
+ "fmt"
+ "path"
+)
+
+// Span represents a source code range in standardized form.
+type Span struct {
+ v span
+}
+
+// Point represents a single point within a file.
+// In general this should only be used as part of a Span, as on its own it
+// does not carry enough information.
+type Point struct {
+ v point
+}
+
+type span struct {
+ URI URI `json:"uri"`
+ Start point `json:"start"`
+ End point `json:"end"`
+}
+
+type point struct {
+ Line int `json:"line"`
+ Column int `json:"column"`
+ Offset int `json:"offset"`
+}
+
+// Invalid is a span that reports false from IsValid
+var Invalid = Span{v: span{Start: invalidPoint.v, End: invalidPoint.v}}
+
+var invalidPoint = Point{v: point{Line: 0, Column: 0, Offset: -1}}
+
+// Converter is the interface to an object that can convert between line:column
+// and offset forms for a single file.
+type Converter interface {
+	// ToPosition converts from an offset to a line:column pair.
+	ToPosition(offset int) (int, int, error)
+	// ToOffset converts from a line:column pair to an offset.
+	ToOffset(line, col int) (int, error)
+}
+
+func New(uri URI, start Point, end Point) Span {
+ s := Span{v: span{URI: uri, Start: start.v, End: end.v}}
+ s.v.clean()
+ return s
+}
+
+func NewPoint(line, col, offset int) Point {
+ p := Point{v: point{Line: line, Column: col, Offset: offset}}
+ p.v.clean()
+ return p
+}
+
+func Compare(a, b Span) int {
+ if r := CompareURI(a.URI(), b.URI()); r != 0 {
+ return r
+ }
+ if r := comparePoint(a.v.Start, b.v.Start); r != 0 {
+ return r
+ }
+ return comparePoint(a.v.End, b.v.End)
+}
+
+func ComparePoint(a, b Point) int {
+ return comparePoint(a.v, b.v)
+}
+
+func comparePoint(a, b point) int {
+ if !a.hasPosition() {
+ if a.Offset < b.Offset {
+ return -1
+ }
+ if a.Offset > b.Offset {
+ return 1
+ }
+ return 0
+ }
+ if a.Line < b.Line {
+ return -1
+ }
+ if a.Line > b.Line {
+ return 1
+ }
+ if a.Column < b.Column {
+ return -1
+ }
+ if a.Column > b.Column {
+ return 1
+ }
+ return 0
+}
+
+func (s Span) HasPosition() bool { return s.v.Start.hasPosition() }
+func (s Span) HasOffset() bool { return s.v.Start.hasOffset() }
+func (s Span) IsValid() bool { return s.v.Start.isValid() }
+func (s Span) IsPoint() bool { return s.v.Start == s.v.End }
+func (s Span) URI() URI { return s.v.URI }
+func (s Span) Start() Point { return Point{s.v.Start} }
+func (s Span) End() Point { return Point{s.v.End} }
+func (s *Span) MarshalJSON() ([]byte, error) { return json.Marshal(&s.v) }
+func (s *Span) UnmarshalJSON(b []byte) error { return json.Unmarshal(b, &s.v) }
+
+func (p Point) HasPosition() bool { return p.v.hasPosition() }
+func (p Point) HasOffset() bool { return p.v.hasOffset() }
+func (p Point) IsValid() bool { return p.v.isValid() }
+func (p *Point) MarshalJSON() ([]byte, error) { return json.Marshal(&p.v) }
+func (p *Point) UnmarshalJSON(b []byte) error { return json.Unmarshal(b, &p.v) }
+func (p Point) Line() int {
+ if !p.v.hasPosition() {
+ panic(fmt.Errorf("position not set in %v", p.v))
+ }
+ return p.v.Line
+}
+func (p Point) Column() int {
+ if !p.v.hasPosition() {
+ panic(fmt.Errorf("position not set in %v", p.v))
+ }
+ return p.v.Column
+}
+func (p Point) Offset() int {
+ if !p.v.hasOffset() {
+ panic(fmt.Errorf("offset not set in %v", p.v))
+ }
+ return p.v.Offset
+}
+
+func (p point) hasPosition() bool { return p.Line > 0 }
+func (p point) hasOffset() bool { return p.Offset >= 0 }
+func (p point) isValid() bool { return p.hasPosition() || p.hasOffset() }
+func (p point) isZero() bool {
+ return (p.Line == 1 && p.Column == 1) || (!p.hasPosition() && p.Offset == 0)
+}
+
+func (s *span) clean() {
+	// This presumes the points are already clean.
+ if !s.End.isValid() || (s.End == point{}) {
+ s.End = s.Start
+ }
+}
+
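+// clean normalizes a point: negative lines become 0, the column defaults
+// to 1 on a known line (0 otherwise), and a zero offset is dropped when
+// the position cannot actually be the start of the file.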
+func (p *point) clean() {
+ if p.Line < 0 {
+ p.Line = 0
+ }
+ if p.Column <= 0 {
+ if p.Line > 0 {
+ p.Column = 1
+ } else {
+ p.Column = 0
+ }
+ }
+ if p.Offset == 0 && (p.Line > 1 || p.Column > 1) {
+ p.Offset = -1
+ }
+}
+
+// Format implements fmt.Formatter to print the Location in a standard form.
+// The format produced is one that can be read back in using Parse.
+func (s Span) Format(f fmt.State, c rune) {
+ fullForm := f.Flag('+')
+ preferOffset := f.Flag('#')
+	// We should always have a URI; simplify it if it is in file format.
+	// TODO: make sure the end of the uri is unambiguous
+ uri := string(s.v.URI)
+ if c == 'f' {
+ uri = path.Base(uri)
+ } else if !fullForm {
+ uri = s.v.URI.Filename()
+ }
+ fmt.Fprint(f, uri)
+ if !s.IsValid() || (!fullForm && s.v.Start.isZero() && s.v.End.isZero()) {
+ return
+ }
+ // see which bits of start to write
+ printOffset := s.HasOffset() && (fullForm || preferOffset || !s.HasPosition())
+ printLine := s.HasPosition() && (fullForm || !printOffset)
+ printColumn := printLine && (fullForm || (s.v.Start.Column > 1 || s.v.End.Column > 1))
+ fmt.Fprint(f, ":")
+ if printLine {
+ fmt.Fprintf(f, "%d", s.v.Start.Line)
+ }
+ if printColumn {
+ fmt.Fprintf(f, ":%d", s.v.Start.Column)
+ }
+ if printOffset {
+ fmt.Fprintf(f, "#%d", s.v.Start.Offset)
+ }
+ // start is written, do we need end?
+ if s.IsPoint() {
+ return
+ }
+ // we don't print the line if it did not change
+ printLine = fullForm || (printLine && s.v.End.Line > s.v.Start.Line)
+ fmt.Fprint(f, "-")
+ if printLine {
+ fmt.Fprintf(f, "%d", s.v.End.Line)
+ }
+ if printColumn {
+ if printLine {
+ fmt.Fprint(f, ":")
+ }
+ fmt.Fprintf(f, "%d", s.v.End.Column)
+ }
+ if printOffset {
+ fmt.Fprintf(f, "#%d", s.v.End.Offset)
+ }
+}
+
+func (s Span) WithPosition(c Converter) (Span, error) {
+ if err := s.update(c, true, false); err != nil {
+ return Span{}, err
+ }
+ return s, nil
+}
+
+func (s Span) WithOffset(c Converter) (Span, error) {
+ if err := s.update(c, false, true); err != nil {
+ return Span{}, err
+ }
+ return s, nil
+}
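+	// Scan backwards over trailing digits and dots looking for a "/vN"
+	// suffix. A dot in the suffix (as in "/v2.1"), a leading zero, or
+	// "/v1" makes the path invalid rather than merely versionless.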
+
+func (s Span) WithAll(c Converter) (Span, error) {
+ if err := s.update(c, true, true); err != nil {
+ return Span{}, err
+ }
+ return s, nil
+}
+
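+// update fills in missing position or offset information using the
+// Converter, copying the start point to the end when the two coincide to
+// avoid a second conversion.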
+func (s *Span) update(c Converter, withPos, withOffset bool) error {
+ if !s.IsValid() {
+ return fmt.Errorf("cannot add information to an invalid span")
+ }
+ if withPos && !s.HasPosition() {
+ if err := s.v.Start.updatePosition(c); err != nil {
+ return err
+ }
+ if s.v.End.Offset == s.v.Start.Offset {
+ s.v.End = s.v.Start
+ } else if err := s.v.End.updatePosition(c); err != nil {
+ return err
+ }
+ }
+ if withOffset && (!s.HasOffset() || (s.v.End.hasPosition() && !s.v.End.hasOffset())) {
+ if err := s.v.Start.updateOffset(c); err != nil {
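+	// gopkg.in permits an "-unstable" variant of the major version
+	// (for example ".v2-unstable"); skip it before scanning the digits.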
+ return err
+ }
+ if s.v.End.Line == s.v.Start.Line && s.v.End.Column == s.v.Start.Column {
+ s.v.End.Offset = s.v.Start.Offset
+ } else if err := s.v.End.updateOffset(c); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (p *point) updatePosition(c Converter) error {
+ line, col, err := c.ToPosition(p.Offset)
+ if err != nil {
+ return err
+ }
+ p.Line = line
+ p.Column = col
+ return nil
+}
+
+func (p *point) updateOffset(c Converter) error {
+ offset, err := c.ToOffset(p.Line, p.Column)
+ if err != nil {
+ return err
+ }
+ p.Offset = offset
+ return nil
+}
diff --git a/vendor/golang.org/x/tools/internal/span/token.go b/vendor/golang.org/x/tools/internal/span/token.go
new file mode 100644
index 0000000000000..ce44541b2fc49
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/token.go
@@ -0,0 +1,151 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package span
+
+import (
+ "fmt"
+ "go/token"
+)
+
+// Range represents a source code range in token.Pos form.
+// It also carries the FileSet that produced the positions, so that it is
+// self-contained.
+type Range struct {
+ FileSet *token.FileSet
+ Start token.Pos
+ End token.Pos
+}
+
+// TokenConverter is a Converter backed by a token file set and file.
+// It uses the file set methods to work out the conversions, which
+// makes it fast and does not require the file contents.
+type TokenConverter struct {
+ fset *token.FileSet
+ file *token.File
+}
+
+// NewRange creates a new Range from a FileSet and two positions.
+// To represent a point pass a 0 as the end pos.
+func NewRange(fset *token.FileSet, start, end token.Pos) Range {
+ return Range{
+ FileSet: fset,
+ Start: start,
+ End: end,
+ }
+}
+
+// NewTokenConverter returns an implementation of Converter backed by a
+// token.File.
+func NewTokenConverter(fset *token.FileSet, f *token.File) *TokenConverter {
+ return &TokenConverter{fset: fset, file: f}
+}
+
+// NewContentConverter returns an implementation of Converter for the
+// given file content.
+func NewContentConverter(filename string, content []byte) *TokenConverter {
+ fset := token.NewFileSet()
+ f := fset.AddFile(filename, -1, len(content))
+ f.SetLinesForContent(content)
+ return &TokenConverter{fset: fset, file: f}
+}
+
+// IsPoint returns true if the range represents a single point.
+func (r Range) IsPoint() bool {
+ return r.Start == r.End
+}
+
+// Span converts a Range to a Span that represents the Range.
+// It will fill in all the members of the Span, calculating the line and column
+// information.
+func (r Range) Span() (Span, error) {
+ f := r.FileSet.File(r.Start)
+ if f == nil {
+ return Span{}, fmt.Errorf("file not found in FileSet")
+ }
+ s := Span{v: span{URI: FileURI(f.Name())}}
+ var err error
+ s.v.Start.Offset, err = offset(f, r.Start)
+ if err != nil {
+ return Span{}, err
+ }
+ if r.End.IsValid() {
+ s.v.End.Offset, err = offset(f, r.End)
+ if err != nil {
+ return Span{}, err
+ }
+ }
+ s.v.Start.clean()
+ s.v.End.clean()
+ s.v.clean()
+ converter := NewTokenConverter(r.FileSet, f)
+ return s.WithPosition(converter)
+}
+
+// offset is a copy of the Offset function in go/token, but with the adjustment
+// that it does not panic on invalid positions.
+func offset(f *token.File, pos token.Pos) (int, error) {
+ if int(pos) < f.Base() || int(pos) > f.Base()+f.Size() {
+ return 0, fmt.Errorf("invalid pos")
+ }
+ return int(pos) - f.Base(), nil
+}
+
+// Range converts a Span to a Range that represents the Span for the supplied
+// File.
+func (s Span) Range(converter *TokenConverter) (Range, error) {
+ s, err := s.WithOffset(converter)
+ if err != nil {
+ return Range{}, err
+ }
+ // go/token will panic if the offset is larger than the file's size,
+ // so check here to avoid panicking.
+ if s.Start().Offset() > converter.file.Size() {
+ return Range{}, fmt.Errorf("start offset %v is past the end of the file %v", s.Start(), converter.file.Size())
+ }
+ if s.End().Offset() > converter.file.Size() {
+ return Range{}, fmt.Errorf("end offset %v is past the end of the file %v", s.End(), converter.file.Size())
+ }
+ return Range{
+ FileSet: converter.fset,
+ Start: converter.file.Pos(s.Start().Offset()),
+ End: converter.file.Pos(s.End().Offset()),
+ }, nil
+}
+
+func (l *TokenConverter) ToPosition(offset int) (int, int, error) {
+ if offset > l.file.Size() {
+ return 0, 0, fmt.Errorf("offset %v is past the end of the file %v", offset, l.file.Size())
+ }
+ pos := l.file.Pos(offset)
+ p := l.fset.Position(pos)
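+	// At the very end of the file, report the start of a phantom final
+	// line, mirroring ToOffset's acceptance of line == LineCount()+1.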
+ if offset == l.file.Size() {
+ return p.Line + 1, 1, nil
+ }
+ return p.Line, p.Column, nil
+}
+
+func (l *TokenConverter) ToOffset(line, col int) (int, error) {
+ if line < 0 {
+ return -1, fmt.Errorf("line is not valid")
+ }
+ lineMax := l.file.LineCount() + 1
+ if line > lineMax {
+ return -1, fmt.Errorf("line is beyond end of file %v", lineMax)
+ } else if line == lineMax {
+ if col > 1 {
+ return -1, fmt.Errorf("column is beyond end of file")
+ }
+ // at the end of the file, allowing for a trailing eol
+ return l.file.Size(), nil
+ }
+ pos := lineStart(l.file, line)
+ if !pos.IsValid() {
+ return -1, fmt.Errorf("line is not in file")
+ }
+ // we assume that column is in bytes here, and that the first byte of a
+ // line is at column 1
+ pos += token.Pos(col - 1)
+ return offset(l.file, pos)
+}
diff --git a/vendor/golang.org/x/tools/internal/span/token111.go b/vendor/golang.org/x/tools/internal/span/token111.go
new file mode 100644
index 0000000000000..bf7a5406b6e0e
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/token111.go
@@ -0,0 +1,39 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build !go1.12
+
+package span
+
+import (
+ "go/token"
+)
+
+// lineStart is the pre-Go 1.12 version of (*token.File).LineStart. For Go
+// versions <= 1.11, we borrow logic from the analysisutil package.
+// TODO(rstambler): Delete this file when we no longer support Go 1.11.
+func lineStart(f *token.File, line int) token.Pos {
+ // Use binary search to find the start offset of this line.
+
+ min := 0 // inclusive
+ max := f.Size() // exclusive
+ for {
+ offset := (min + max) / 2
+ pos := f.Pos(offset)
+ posn := f.Position(pos)
+ if posn.Line == line {
+ return pos - (token.Pos(posn.Column) - 1)
+ }
+
+ if min+1 >= max {
+ return token.NoPos
+ }
+
+ if posn.Line < line {
+ min = offset
+ } else {
+ max = offset
+ }
+ }
+}
diff --git a/vendor/golang.org/x/tools/internal/span/token112.go b/vendor/golang.org/x/tools/internal/span/token112.go
new file mode 100644
index 0000000000000..017aec9c13eec
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/token112.go
@@ -0,0 +1,16 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// +build go1.12
+
+package span
+
+import (
+ "go/token"
+)
+
+// TODO(rstambler): Delete this file when we no longer support Go 1.11.
+func lineStart(f *token.File, line int) token.Pos {
+ return f.LineStart(line)
+}
diff --git a/vendor/golang.org/x/tools/internal/span/uri.go b/vendor/golang.org/x/tools/internal/span/uri.go
new file mode 100644
index 0000000000000..e05a9e6ef5df8
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/uri.go
@@ -0,0 +1,152 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package span
+
+import (
+ "fmt"
+ "net/url"
+ "os"
+ "path"
+ "path/filepath"
+ "runtime"
+ "strings"
+ "unicode"
+)
+
+const fileScheme = "file"
+
+// URI represents the full URI for a file.
+type URI string
+
+// Filename returns the file path for the given URI.
+// It is an error to call this on a URI that is not a valid filename.
+func (uri URI) Filename() string {
+ filename, err := filename(uri)
+ if err != nil {
+ panic(err)
+ }
+ return filepath.FromSlash(filename)
+}
+
+func filename(uri URI) (string, error) {
+ if uri == "" {
+ return "", nil
+ }
+ u, err := url.ParseRequestURI(string(uri))
+ if err != nil {
+ return "", err
+ }
+ if u.Scheme != fileScheme {
+ return "", fmt.Errorf("only file URIs are supported, got %q from %q", u.Scheme, uri)
+ }
+ if isWindowsDriveURI(u.Path) {
+ u.Path = u.Path[1:]
+ }
+ return u.Path, nil
+}
+
+// NewURI returns a span URI for the string.
+// It will attempt to detect if the string is a file path or a URI.
+func NewURI(s string) URI {
+ if u, err := url.PathUnescape(s); err == nil {
+ s = u
+ }
+ if strings.HasPrefix(s, fileScheme+"://") {
+ return URI(s)
+ }
+ return FileURI(s)
+}
+
+func CompareURI(a, b URI) int {
+ if equalURI(a, b) {
+ return 0
+ }
+ if a < b {
+ return -1
+ }
+ return 1
+}
+
+func equalURI(a, b URI) bool {
+ if a == b {
+ return true
+ }
+ // If we have the same URI basename, we may still have the same file URIs.
+ if !strings.EqualFold(path.Base(string(a)), path.Base(string(b))) {
+ return false
+ }
+ fa, err := filename(a)
+ if err != nil {
+ return false
+ }
+ fb, err := filename(b)
+ if err != nil {
+ return false
+ }
+ // Stat the files to check if they are equal.
+ infoa, err := os.Stat(filepath.FromSlash(fa))
+ if err != nil {
+ return false
+ }
+ infob, err := os.Stat(filepath.FromSlash(fb))
+ if err != nil {
+ return false
+ }
+ return os.SameFile(infoa, infob)
+}
+
+// FileURI returns a span URI for the supplied file path.
+// It will always have the file scheme.
+func FileURI(path string) URI {
+ if path == "" {
+ return ""
+ }
+ // Handle standard library paths that contain the literal "$GOROOT".
+ // TODO(rstambler): The go/packages API should allow one to determine a user's $GOROOT.
+ const prefix = "$GOROOT"
+ if len(path) >= len(prefix) && strings.EqualFold(prefix, path[:len(prefix)]) {
+ suffix := path[len(prefix):]
+ path = runtime.GOROOT() + suffix
+ }
+ if !isWindowsDrivePath(path) {
+ if abs, err := filepath.Abs(path); err == nil {
+ path = abs
+ }
+ }
+ // Check the file path again, in case it became absolute.
+ if isWindowsDrivePath(path) {
+ path = "/" + path
+ }
+ path = filepath.ToSlash(path)
+ u := url.URL{
+ Scheme: fileScheme,
+ Path: path,
+ }
+ uri := u.String()
+ if unescaped, err := url.PathUnescape(uri); err == nil {
+ uri = unescaped
+ }
+ return URI(uri)
+}
+
+// isWindowsDrivePath returns true if the file path is of the form used by
+// Windows. We check if the path begins with a drive letter, followed by a ":".
+func isWindowsDrivePath(path string) bool {
+ if len(path) < 4 {
+ return false
+ }
+ return unicode.IsLetter(rune(path[0])) && path[1] == ':'
+}
+
+// isWindowsDriveURI returns true if the file URI is of the format used by
+// Windows URIs. The url.Parse package does not specially handle Windows paths
+// (see https://golang.org/issue/6027). We check if the URI path has
+// a drive prefix (e.g. "/C:"). If so, we trim the leading "/".
+func isWindowsDriveURI(uri string) bool {
+ if len(uri) < 4 {
+ return false
+ }
+ return uri[0] == '/' && unicode.IsLetter(rune(uri[1])) && uri[2] == ':'
+}
diff --git a/vendor/golang.org/x/tools/internal/span/utf16.go b/vendor/golang.org/x/tools/internal/span/utf16.go
new file mode 100644
index 0000000000000..561b3fa50a835
--- /dev/null
+++ b/vendor/golang.org/x/tools/internal/span/utf16.go
@@ -0,0 +1,94 @@
+// Copyright 2019 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package span
+
+import (
+ "fmt"
+ "unicode/utf16"
+ "unicode/utf8"
+)
+
+// ToUTF16Column calculates the utf16 column expressed by the point given the
+// supplied file contents.
+// This is used to convert from the native (always in bytes) column
+// representation to the utf16 counts used by some editors.
+func ToUTF16Column(p Point, content []byte) (int, error) {
+ if content == nil {
+ return -1, fmt.Errorf("ToUTF16Column: missing content")
+ }
+ if !p.HasPosition() {
+ return -1, fmt.Errorf("ToUTF16Column: point is missing position")
+ }
+ if !p.HasOffset() {
+ return -1, fmt.Errorf("ToUTF16Column: point is missing offset")
+ }
+ offset := p.Offset() // 0-based
+ colZero := p.Column() - 1 // 0-based
+ if colZero == 0 {
+ // 0-based column 0, so it must be chr 1
+ return 1, nil
+ } else if colZero < 0 {
+ return -1, fmt.Errorf("ToUTF16Column: column is invalid (%v)", colZero)
+ }
+ // work out the offset at the start of the line using the column
+ lineOffset := offset - colZero
+ if lineOffset < 0 || offset > len(content) {
+ return -1, fmt.Errorf("ToUTF16Column: offsets %v-%v outside file contents (%v)", lineOffset, offset, len(content))
+ }
+ // Use the offset to pick out the line start.
+	// This cannot panic: the check above guarantees 0 <= lineOffset <= offset <= len(content).
+ start := content[lineOffset:]
+
+ // Now, truncate down to the supplied column.
+ start = start[:colZero]
+
+ // and count the number of utf16 characters
+ // in theory we could do this by hand more efficiently...
+ return len(utf16.Encode([]rune(string(start)))) + 1, nil
+}
+
+// FromUTF16Column advances the point by the utf16 character offset given the
+// supplied line contents.
+// This is used to convert from the utf16 counts used by some editors to the
+// native (always in bytes) column representation.
+func FromUTF16Column(p Point, chr int, content []byte) (Point, error) {
+ if !p.HasOffset() {
+ return Point{}, fmt.Errorf("FromUTF16Column: point is missing offset")
+ }
+ // if chr is 1 then no adjustment needed
+ if chr <= 1 {
+ return p, nil
+ }
+ if p.Offset() >= len(content) {
+ return p, fmt.Errorf("FromUTF16Column: offset (%v) greater than length of content (%v)", p.Offset(), len(content))
+ }
+ remains := content[p.Offset():]
+ // scan forward the specified number of characters
+ for count := 1; count < chr; count++ {
+		if len(remains) == 0 {
+ return Point{}, fmt.Errorf("FromUTF16Column: chr goes beyond the content")
+ }
+ r, w := utf8.DecodeRune(remains)
+ if r == '\n' {
+ // Per the LSP spec:
+ //
+ // > If the character value is greater than the line length it
+ // > defaults back to the line length.
+ break
+ }
+ remains = remains[w:]
+ if r >= 0x10000 {
+ // a two point rune
+ count++
+ // if we finished in a two point rune, do not advance past the first
+ if count >= chr {
+ break
+ }
+ }
+ p.v.Column += w
+ p.v.Offset += w
+ }
+ return p, nil
+}
diff --git a/vendor/google.golang.org/api/AUTHORS b/vendor/google.golang.org/api/AUTHORS
index f73b7257457eb..f07029059d21b 100644
--- a/vendor/google.golang.org/api/AUTHORS
+++ b/vendor/google.golang.org/api/AUTHORS
@@ -8,3 +8,4 @@
# Please keep the list sorted.
Google Inc.
+LightStep Inc.
diff --git a/vendor/google.golang.org/api/CONTRIBUTORS b/vendor/google.golang.org/api/CONTRIBUTORS
index fe55ebff072d9..788677b8f0472 100644
--- a/vendor/google.golang.org/api/CONTRIBUTORS
+++ b/vendor/google.golang.org/api/CONTRIBUTORS
@@ -45,6 +45,7 @@ Jason Hall <[email protected]>
Johan Euphrosine <[email protected]>
Kostik Shtoyk <[email protected]>
Kunpei Sakai <[email protected]>
+Matthew Dolan <[email protected]>
Matthew Whisenhunt <[email protected]>
Michael McGreevy <[email protected]>
Nick Craig-Wood <[email protected]>
diff --git a/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-api.json b/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-api.json
index 3c90f817dc664..9057dfe73905d 100644
--- a/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-api.json
+++ b/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-api.json
@@ -1170,7 +1170,7 @@
}
}
},
- "revision": "20190927",
+ "revision": "20191018",
"rootUrl": "https://cloudresourcemanager.googleapis.com/",
"schemas": {
"Ancestor": {
@@ -1632,7 +1632,7 @@
"type": "object"
},
"ListPolicy": {
- "description": "Used in `policy_type` to specify how `list_policy` behaves at this\nresource.\n\n`ListPolicy` can define specific values and subtrees of Cloud Resource\nManager resource hierarchy (`Organizations`, `Folders`, `Projects`) that\nare allowed or denied by setting the `allowed_values` and `denied_values`\nfields. This is achieved by using the `under:` and optional `is:` prefixes.\nThe `under:` prefix is used to denote resource subtree values.\nThe `is:` prefix is used to denote specific values, and is required only\nif the value contains a \":\". Values prefixed with \"is:\" are treated the\nsame as values with no prefix.\nAncestry subtrees must be in one of the following formats:\n - “projects/\u003cproject-id\u003e”, e.g. “projects/tokyo-rain-123”\n - “folders/\u003cfolder-id\u003e”, e.g. “folders/1234”\n - “organizations/\u003corganization-id\u003e”, e.g. “organizations/1234”\nThe `supports_under` field of the associated `Constraint` defines whether\nancestry prefixes can be used. You can set `allowed_values` and\n`denied_values` in the same `Policy` if `all_values` is\n`ALL_VALUES_UNSPECIFIED`. `ALLOW` or `DENY` are used to allow or deny all\nvalues. If `all_values` is set to either `ALLOW` or `DENY`,\n`allowed_values` and `denied_values` must be unset.",
+ "description": "Used in `policy_type` to specify how `list_policy` behaves at this\nresource.\n\n`ListPolicy` can define specific values and subtrees of Cloud Resource\nManager resource hierarchy (`Organizations`, `Folders`, `Projects`) that\nare allowed or denied by setting the `allowed_values` and `denied_values`\nfields. This is achieved by using the `under:` and optional `is:` prefixes.\nThe `under:` prefix is used to denote resource subtree values.\nThe `is:` prefix is used to denote specific values, and is required only\nif the value contains a \":\". Values prefixed with \"is:\" are treated the\nsame as values with no prefix.\nAncestry subtrees must be in one of the following formats:\n - \"projects/\u003cproject-id\u003e\", e.g. \"projects/tokyo-rain-123\"\n - \"folders/\u003cfolder-id\u003e\", e.g. \"folders/1234\"\n - \"organizations/\u003corganization-id\u003e\", e.g. \"organizations/1234\"\nThe `supports_under` field of the associated `Constraint` defines whether\nancestry prefixes can be used. You can set `allowed_values` and\n`denied_values` in the same `Policy` if `all_values` is\n`ALL_VALUES_UNSPECIFIED`. `ALLOW` or `DENY` are used to allow or deny all\nvalues. If `all_values` is set to either `ALLOW` or `DENY`,\n`allowed_values` and `denied_values` must be unset.",
"id": "ListPolicy",
"properties": {
"allValues": {
@@ -1664,7 +1664,7 @@
"type": "array"
},
"inheritFromParent": {
- "description": "Determines the inheritance behavior for this `Policy`.\n\nBy default, a `ListPolicy` set at a resource supercedes any `Policy` set\nanywhere up the resource hierarchy. However, if `inherit_from_parent` is\nset to `true`, then the values from the effective `Policy` of the parent\nresource are inherited, meaning the values set in this `Policy` are\nadded to the values inherited up the hierarchy.\n\nSetting `Policy` hierarchies that inherit both allowed values and denied\nvalues isn't recommended in most circumstances to keep the configuration\nsimple and understandable. However, it is possible to set a `Policy` with\n`allowed_values` set that inherits a `Policy` with `denied_values` set.\nIn this case, the values that are allowed must be in `allowed_values` and\nnot present in `denied_values`.\n\nFor example, suppose you have a `Constraint`\n`constraints/serviceuser.services`, which has a `constraint_type` of\n`list_constraint`, and with `constraint_default` set to `ALLOW`.\nSuppose that at the Organization level, a `Policy` is applied that\nrestricts the allowed API activations to {`E1`, `E2`}. Then, if a\n`Policy` is applied to a project below the Organization that has\n`inherit_from_parent` set to `false` and field all_values set to DENY,\nthen an attempt to activate any API will be denied.\n\nThe following examples demonstrate different possible layerings for\n`projects/bar` parented by `organizations/foo`:\n\nExample 1 (no inherited values):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: “E1” allowed_values:”E2”}\n `projects/bar` has `inherit_from_parent` `false` and values:\n {allowed_values: \"E3\" allowed_values: \"E4\"}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe accepted values at `projects/bar` are `E3`, and `E4`.\n\nExample 2 (inherited values):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: “E1” allowed_values:”E2”}\n `projects/bar` has a `Policy` with values:\n {value: “E3” value: ”E4” inherit_from_parent: true}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe accepted values at `projects/bar` are `E1`, `E2`, `E3`, and `E4`.\n\nExample 3 (inheriting both allowed and denied values):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values: \"E2\"}\n `projects/bar` has a `Policy` with:\n {denied_values: \"E1\"}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe value accepted at `projects/bar` is `E2`.\n\nExample 4 (RestoreDefault):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: “E1” allowed_values:”E2”}\n `projects/bar` has a `Policy` with values:\n {RestoreDefault: {}}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe accepted values at `projects/bar` are either all or none depending on\nthe value of `constraint_default` (if `ALLOW`, all; if\n`DENY`, none).\n\nExample 5 (no policy inherits parent policy):\n `organizations/foo` has no `Policy` set.\n `projects/bar` has no `Policy` set.\nThe accepted values at both levels are either all or none depending on\nthe value of `constraint_default` (if `ALLOW`, all; if\n`DENY`, none).\n\nExample 6 (ListConstraint allowing all):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: “E1” allowed_values: ”E2”}\n `projects/bar` has a `Policy` with:\n {all: ALLOW}\nThe accepted values at `organizations/foo` are `E1`, E2`.\nAny value is accepted at `projects/bar`.\n\nExample 7 (ListConstraint allowing none):\n `organizations/foo` has a `Policy` 
with values:\n {allowed_values: “E1” allowed_values: ”E2”}\n `projects/bar` has a `Policy` with:\n {all: DENY}\nThe accepted values at `organizations/foo` are `E1`, E2`.\nNo value is accepted at `projects/bar`.\n\nExample 10 (allowed and denied subtrees of Resource Manager hierarchy):\nGiven the following resource hierarchy\n O1-\u003e{F1, F2}; F1-\u003e{P1}; F2-\u003e{P2, P3},\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"under:organizations/O1\"}\n `projects/bar` has a `Policy` with:\n {allowed_values: \"under:projects/P3\"}\n {denied_values: \"under:folders/F2\"}\nThe accepted values at `organizations/foo` are `organizations/O1`,\n `folders/F1`, `folders/F2`, `projects/P1`, `projects/P2`,\n `projects/P3`.\nThe accepted values at `projects/bar` are `organizations/O1`,\n `folders/F1`, `projects/P1`.",
+ "description": "Determines the inheritance behavior for this `Policy`.\n\nBy default, a `ListPolicy` set at a resource supercedes any `Policy` set\nanywhere up the resource hierarchy. However, if `inherit_from_parent` is\nset to `true`, then the values from the effective `Policy` of the parent\nresource are inherited, meaning the values set in this `Policy` are\nadded to the values inherited up the hierarchy.\n\nSetting `Policy` hierarchies that inherit both allowed values and denied\nvalues isn't recommended in most circumstances to keep the configuration\nsimple and understandable. However, it is possible to set a `Policy` with\n`allowed_values` set that inherits a `Policy` with `denied_values` set.\nIn this case, the values that are allowed must be in `allowed_values` and\nnot present in `denied_values`.\n\nFor example, suppose you have a `Constraint`\n`constraints/serviceuser.services`, which has a `constraint_type` of\n`list_constraint`, and with `constraint_default` set to `ALLOW`.\nSuppose that at the Organization level, a `Policy` is applied that\nrestricts the allowed API activations to {`E1`, `E2`}. Then, if a\n`Policy` is applied to a project below the Organization that has\n`inherit_from_parent` set to `false` and field all_values set to DENY,\nthen an attempt to activate any API will be denied.\n\nThe following examples demonstrate different possible layerings for\n`projects/bar` parented by `organizations/foo`:\n\nExample 1 (no inherited values):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values:\"E2\"}\n `projects/bar` has `inherit_from_parent` `false` and values:\n {allowed_values: \"E3\" allowed_values: \"E4\"}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe accepted values at `projects/bar` are `E3`, and `E4`.\n\nExample 2 (inherited values):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values:\"E2\"}\n `projects/bar` has a `Policy` with values:\n {value: \"E3\" value: \"E4\" inherit_from_parent: true}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe accepted values at `projects/bar` are `E1`, `E2`, `E3`, and `E4`.\n\nExample 3 (inheriting both allowed and denied values):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values: \"E2\"}\n `projects/bar` has a `Policy` with:\n {denied_values: \"E1\"}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe value accepted at `projects/bar` is `E2`.\n\nExample 4 (RestoreDefault):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values:\"E2\"}\n `projects/bar` has a `Policy` with values:\n {RestoreDefault: {}}\nThe accepted values at `organizations/foo` are `E1`, `E2`.\nThe accepted values at `projects/bar` are either all or none depending on\nthe value of `constraint_default` (if `ALLOW`, all; if\n`DENY`, none).\n\nExample 5 (no policy inherits parent policy):\n `organizations/foo` has no `Policy` set.\n `projects/bar` has no `Policy` set.\nThe accepted values at both levels are either all or none depending on\nthe value of `constraint_default` (if `ALLOW`, all; if\n`DENY`, none).\n\nExample 6 (ListConstraint allowing all):\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values: \"E2\"}\n `projects/bar` has a `Policy` with:\n {all: ALLOW}\nThe accepted values at `organizations/foo` are `E1`, E2`.\nAny value is accepted at `projects/bar`.\n\nExample 7 (ListConstraint allowing none):\n 
`organizations/foo` has a `Policy` with values:\n {allowed_values: \"E1\" allowed_values: \"E2\"}\n `projects/bar` has a `Policy` with:\n {all: DENY}\nThe accepted values at `organizations/foo` are `E1`, E2`.\nNo value is accepted at `projects/bar`.\n\nExample 10 (allowed and denied subtrees of Resource Manager hierarchy):\nGiven the following resource hierarchy\n O1-\u003e{F1, F2}; F1-\u003e{P1}; F2-\u003e{P2, P3},\n `organizations/foo` has a `Policy` with values:\n {allowed_values: \"under:organizations/O1\"}\n `projects/bar` has a `Policy` with:\n {allowed_values: \"under:projects/P3\"}\n {denied_values: \"under:folders/F2\"}\nThe accepted values at `organizations/foo` are `organizations/O1`,\n `folders/F1`, `folders/F2`, `projects/P1`, `projects/P2`,\n `projects/P3`.\nThe accepted values at `projects/bar` are `organizations/O1`,\n `folders/F1`, `projects/P1`.",
"type": "boolean"
},
"suggestedValue": {
diff --git a/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go b/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go
index d5037f0b69a38..e40e17589604d 100644
--- a/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go
+++ b/vendor/google.golang.org/api/cloudresourcemanager/v1/cloudresourcemanager-gen.go
@@ -1349,11 +1349,10 @@ func (s *ListOrgPoliciesResponse) MarshalJSON() ([]byte, error) {
// the
// same as values with no prefix.
// Ancestry subtrees must be in one of the following formats:
-// - “projects/<project-id>”, e.g.
-// “projects/tokyo-rain-123”
-// - “folders/<folder-id>”, e.g. “folders/1234”
-// - “organizations/<organization-id>”, e.g.
-// “organizations/1234”
+// - "projects/<project-id>", e.g. "projects/tokyo-rain-123"
+// - "folders/<folder-id>", e.g. "folders/1234"
+// - "organizations/<organization-id>", e.g.
+// "organizations/1234"
// The `supports_under` field of the associated `Constraint` defines
// whether
// ancestry prefixes can be used. You can set `allowed_values`
@@ -1432,7 +1431,7 @@ type ListPolicy struct {
//
// Example 1 (no inherited values):
// `organizations/foo` has a `Policy` with values:
- // {allowed_values: “E1” allowed_values:”E2”}
+ // {allowed_values: "E1" allowed_values:"E2"}
// `projects/bar` has `inherit_from_parent` `false` and values:
// {allowed_values: "E3" allowed_values: "E4"}
// The accepted values at `organizations/foo` are `E1`, `E2`.
@@ -1440,9 +1439,9 @@ type ListPolicy struct {
//
// Example 2 (inherited values):
// `organizations/foo` has a `Policy` with values:
- // {allowed_values: “E1” allowed_values:”E2”}
+ // {allowed_values: "E1" allowed_values:"E2"}
// `projects/bar` has a `Policy` with values:
- // {value: “E3” value: ”E4” inherit_from_parent: true}
+ // {value: "E3" value: "E4" inherit_from_parent: true}
// The accepted values at `organizations/foo` are `E1`, `E2`.
// The accepted values at `projects/bar` are `E1`, `E2`, `E3`, and
// `E4`.
@@ -1457,7 +1456,7 @@ type ListPolicy struct {
//
// Example 4 (RestoreDefault):
// `organizations/foo` has a `Policy` with values:
- // {allowed_values: “E1” allowed_values:”E2”}
+ // {allowed_values: "E1" allowed_values:"E2"}
// `projects/bar` has a `Policy` with values:
// {RestoreDefault: {}}
// The accepted values at `organizations/foo` are `E1`, `E2`.
@@ -1476,7 +1475,7 @@ type ListPolicy struct {
//
// Example 6 (ListConstraint allowing all):
// `organizations/foo` has a `Policy` with values:
- // {allowed_values: “E1” allowed_values: ”E2”}
+ // {allowed_values: "E1" allowed_values: "E2"}
// `projects/bar` has a `Policy` with:
// {all: ALLOW}
// The accepted values at `organizations/foo` are `E1`, E2`.
@@ -1484,7 +1483,7 @@ type ListPolicy struct {
//
// Example 7 (ListConstraint allowing none):
// `organizations/foo` has a `Policy` with values:
- // {allowed_values: “E1” allowed_values: ”E2”}
+ // {allowed_values: "E1" allowed_values: "E2"}
// `projects/bar` has a `Policy` with:
// {all: DENY}
// The accepted values at `organizations/foo` are `E1`, E2`.
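
The doc comments above spell out how allowed and denied values combine across the resource hierarchy. As a toy model only (the types and helpers below are invented for illustration and are not part of this client), Example 3's evaluation can be written out:

    package main

    import "fmt"

    type listPolicy struct {
        allowed, denied   []string
        inheritFromParent bool
    }

    func contains(xs []string, v string) bool {
        for _, x := range xs {
            if x == v {
                return true
            }
        }
        return false
    }

    // accepted mirrors Example 3: allowed values come from the child plus
    // the parent when inheriting; denied values always subtract.
    func accepted(parent, child listPolicy, v string) bool {
        allowed := append([]string{}, child.allowed...)
        denied := append([]string{}, child.denied...)
        if child.inheritFromParent {
            allowed = append(allowed, parent.allowed...)
            denied = append(denied, parent.denied...)
        }
        return contains(allowed, v) && !contains(denied, v)
    }

    func main() {
        org := listPolicy{allowed: []string{"E1", "E2"}}
        bar := listPolicy{denied: []string{"E1"}, inheritFromParent: true}
        fmt.Println(accepted(org, bar, "E1"), accepted(org, bar, "E2")) // false true
    }

The printed result, false true, matches the doc comment: only E2 is accepted at projects/bar.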
@@ -2589,7 +2588,7 @@ func (c *FoldersClearOrgPolicyCall) Header() http.Header {
func (c *FoldersClearOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
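
Every remaining hunk in this generated file is the same one-line change: a refresh of the version metadata sent in the x-goog-api-client header (reading the tokens as gl-go for the Go toolchain and gdcl for the discovery-client generator date is an assumption here, not something this diff states). A trivial sketch of the call each generated method makes before issuing its request:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        h := make(http.Header)
        // Same shape as the generated code above; Set canonicalizes the key.
        h.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
        fmt.Println(h.Get("X-Goog-Api-Client"))
        // Output: gl-go/1.13.4 gdcl/20191114
    }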
@@ -2737,7 +2736,7 @@ func (c *FoldersGetEffectiveOrgPolicyCall) Header() http.Header {
func (c *FoldersGetEffectiveOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -2886,7 +2885,7 @@ func (c *FoldersGetOrgPolicyCall) Header() http.Header {
func (c *FoldersGetOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3028,7 +3027,7 @@ func (c *FoldersListAvailableOrgPolicyConstraintsCall) Header() http.Header {
func (c *FoldersListAvailableOrgPolicyConstraintsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3193,7 +3192,7 @@ func (c *FoldersListOrgPoliciesCall) Header() http.Header {
func (c *FoldersListOrgPoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3361,7 +3360,7 @@ func (c *FoldersSetOrgPolicyCall) Header() http.Header {
func (c *FoldersSetOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3509,7 +3508,7 @@ func (c *LiensCreateCall) Header() http.Header {
func (c *LiensCreateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3641,7 +3640,7 @@ func (c *LiensDeleteCall) Header() http.Header {
func (c *LiensDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3791,7 +3790,7 @@ func (c *LiensGetCall) Header() http.Header {
func (c *LiensGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3963,7 +3962,7 @@ func (c *LiensListCall) Header() http.Header {
func (c *LiensListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4137,7 +4136,7 @@ func (c *OperationsGetCall) Header() http.Header {
func (c *OperationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4273,7 +4272,7 @@ func (c *OrganizationsClearOrgPolicyCall) Header() http.Header {
func (c *OrganizationsClearOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4423,7 +4422,7 @@ func (c *OrganizationsGetCall) Header() http.Header {
func (c *OrganizationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4567,7 +4566,7 @@ func (c *OrganizationsGetEffectiveOrgPolicyCall) Header() http.Header {
func (c *OrganizationsGetEffectiveOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4718,7 +4717,7 @@ func (c *OrganizationsGetIamPolicyCall) Header() http.Header {
func (c *OrganizationsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4867,7 +4866,7 @@ func (c *OrganizationsGetOrgPolicyCall) Header() http.Header {
func (c *OrganizationsGetOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5009,7 +5008,7 @@ func (c *OrganizationsListAvailableOrgPolicyConstraintsCall) Header() http.Heade
func (c *OrganizationsListAvailableOrgPolicyConstraintsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5174,7 +5173,7 @@ func (c *OrganizationsListOrgPoliciesCall) Header() http.Header {
func (c *OrganizationsListOrgPoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5344,7 +5343,7 @@ func (c *OrganizationsSearchCall) Header() http.Header {
func (c *OrganizationsSearchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5502,7 +5501,7 @@ func (c *OrganizationsSetIamPolicyCall) Header() http.Header {
func (c *OrganizationsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5648,7 +5647,7 @@ func (c *OrganizationsSetOrgPolicyCall) Header() http.Header {
func (c *OrganizationsSetOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5793,7 +5792,7 @@ func (c *OrganizationsTestIamPermissionsCall) Header() http.Header {
func (c *OrganizationsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5934,7 +5933,7 @@ func (c *ProjectsClearOrgPolicyCall) Header() http.Header {
func (c *ProjectsClearOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6096,7 +6095,7 @@ func (c *ProjectsCreateCall) Header() http.Header {
func (c *ProjectsCreateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6243,7 +6242,7 @@ func (c *ProjectsDeleteCall) Header() http.Header {
func (c *ProjectsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6386,7 +6385,7 @@ func (c *ProjectsGetCall) Header() http.Header {
func (c *ProjectsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6526,7 +6525,7 @@ func (c *ProjectsGetAncestryCall) Header() http.Header {
func (c *ProjectsGetAncestryCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6674,7 +6673,7 @@ func (c *ProjectsGetEffectiveOrgPolicyCall) Header() http.Header {
func (c *ProjectsGetEffectiveOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6826,7 +6825,7 @@ func (c *ProjectsGetIamPolicyCall) Header() http.Header {
func (c *ProjectsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6974,7 +6973,7 @@ func (c *ProjectsGetOrgPolicyCall) Header() http.Header {
func (c *ProjectsGetOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7216,7 +7215,7 @@ func (c *ProjectsListCall) Header() http.Header {
func (c *ProjectsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7378,7 +7377,7 @@ func (c *ProjectsListAvailableOrgPolicyConstraintsCall) Header() http.Header {
func (c *ProjectsListAvailableOrgPolicyConstraintsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7543,7 +7542,7 @@ func (c *ProjectsListOrgPoliciesCall) Header() http.Header {
func (c *ProjectsListOrgPoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7778,7 +7777,7 @@ func (c *ProjectsSetIamPolicyCall) Header() http.Header {
func (c *ProjectsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7923,7 +7922,7 @@ func (c *ProjectsSetOrgPolicyCall) Header() http.Header {
func (c *ProjectsSetOrgPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8066,7 +8065,7 @@ func (c *ProjectsTestIamPermissionsCall) Header() http.Header {
func (c *ProjectsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8214,7 +8213,7 @@ func (c *ProjectsUndeleteCall) Header() http.Header {
func (c *ProjectsUndeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8357,7 +8356,7 @@ func (c *ProjectsUpdateCall) Header() http.Header {
func (c *ProjectsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
diff --git a/vendor/google.golang.org/api/compute/v1/compute-api.json b/vendor/google.golang.org/api/compute/v1/compute-api.json
index d4732723314ac..a166b61865387 100644
--- a/vendor/google.golang.org/api/compute/v1/compute-api.json
+++ b/vendor/google.golang.org/api/compute/v1/compute-api.json
@@ -29,7 +29,7 @@
"description": "Creates and runs virtual machines on Google Cloud Platform.",
"discoveryVersion": "v1",
"documentationLink": "https://developers.google.com/compute/docs/reference/latest/",
- "etag": "\"LYADMvHWYH2ul9D6m9UT9gT77YM/vJ5sEvcswqgzJDezOEEfhJ7kuRc\"",
+ "etag": "\"F5McR9eEaw0XRpaO3M9gbIugkbs/SUAabl0tEHVm4xtF3n0zpfUm3IU\"",
"icons": {
"x16": "https://www.google.com/images/icons/product/compute_engine-16.png",
"x32": "https://www.google.com/images/icons/product/compute_engine-32.png"
@@ -89,7 +89,7 @@
"acceleratorTypes": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of accelerator types.",
+ "description": "Retrieves an aggregated list of accelerator types. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.acceleratorTypes.aggregatedList",
"parameterOrder": [
@@ -138,7 +138,7 @@
]
},
"get": {
- "description": "Returns the specified accelerator type.",
+ "description": "Returns the specified accelerator type. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.acceleratorTypes.get",
"parameterOrder": [
@@ -180,7 +180,7 @@
]
},
"list": {
- "description": "Retrieves a list of accelerator types available to the specified project.",
+ "description": "Retrieves a list of accelerator types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.acceleratorTypes.list",
"parameterOrder": [
@@ -241,7 +241,7 @@
"addresses": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of addresses.",
+ "description": "Retrieves an aggregated list of addresses. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.addresses.aggregatedList",
"parameterOrder": [
@@ -290,7 +290,7 @@
]
},
"delete": {
- "description": "Deletes the specified address resource.",
+ "description": "Deletes the specified address resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.addresses.delete",
"parameterOrder": [
@@ -336,7 +336,7 @@
]
},
"get": {
- "description": "Returns the specified address resource.",
+ "description": "Returns the specified address resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.addresses.get",
"parameterOrder": [
@@ -378,7 +378,7 @@
]
},
"insert": {
- "description": "Creates an address resource in the specified project by using the data included in the request.",
+ "description": "Creates an address resource in the specified project by using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.addresses.insert",
"parameterOrder": [
@@ -419,7 +419,7 @@
]
},
"list": {
- "description": "Retrieves a list of addresses contained within the specified region.",
+ "description": "Retrieves a list of addresses contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.addresses.list",
"parameterOrder": [
@@ -480,7 +480,7 @@
"autoscalers": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of autoscalers.",
+ "description": "Retrieves an aggregated list of autoscalers. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.autoscalers.aggregatedList",
"parameterOrder": [
@@ -529,7 +529,7 @@
]
},
"delete": {
- "description": "Deletes the specified autoscaler.",
+ "description": "Deletes the specified autoscaler. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.autoscalers.delete",
"parameterOrder": [
@@ -575,7 +575,7 @@
]
},
"get": {
- "description": "Returns the specified autoscaler resource. Gets a list of available autoscalers by making a list() request.",
+ "description": "Returns the specified autoscaler resource. Gets a list of available autoscalers by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.autoscalers.get",
"parameterOrder": [
@@ -617,7 +617,7 @@
]
},
"insert": {
- "description": "Creates an autoscaler in the specified project using the data included in the request.",
+ "description": "Creates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.autoscalers.insert",
"parameterOrder": [
@@ -658,7 +658,7 @@
]
},
"list": {
- "description": "Retrieves a list of autoscalers contained within the specified zone.",
+ "description": "Retrieves a list of autoscalers contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.autoscalers.list",
"parameterOrder": [
@@ -715,7 +715,7 @@
]
},
"patch": {
- "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.autoscalers.patch",
"parameterOrder": [
@@ -762,7 +762,7 @@
]
},
"update": {
- "description": "Updates an autoscaler in the specified project using the data included in the request.",
+ "description": "Updates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.autoscalers.update",
"parameterOrder": [
@@ -813,7 +813,7 @@
"backendBuckets": {
"methods": {
"addSignedUrlKey": {
- "description": "Adds a key for validating requests with signed URLs for this backend bucket.",
+ "description": "Adds a key for validating requests with signed URLs for this backend bucket. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendBuckets.addSignedUrlKey",
"parameterOrder": [
@@ -853,7 +853,7 @@
]
},
"delete": {
- "description": "Deletes the specified BackendBucket resource.",
+ "description": "Deletes the specified BackendBucket resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.backendBuckets.delete",
"parameterOrder": [
@@ -891,7 +891,7 @@
]
},
"deleteSignedUrlKey": {
- "description": "Deletes a key for validating requests with signed URLs for this backend bucket.",
+ "description": "Deletes a key for validating requests with signed URLs for this backend bucket. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendBuckets.deleteSignedUrlKey",
"parameterOrder": [
@@ -935,7 +935,7 @@
]
},
"get": {
- "description": "Returns the specified BackendBucket resource. Gets a list of available backend buckets by making a list() request.",
+ "description": "Returns the specified BackendBucket resource. Gets a list of available backend buckets by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.backendBuckets.get",
"parameterOrder": [
@@ -969,7 +969,7 @@
]
},
"insert": {
- "description": "Creates a BackendBucket resource in the specified project using the data included in the request.",
+ "description": "Creates a BackendBucket resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendBuckets.insert",
"parameterOrder": [
@@ -1002,7 +1002,7 @@
]
},
"list": {
- "description": "Retrieves the list of BackendBucket resources available to the specified project.",
+ "description": "Retrieves the list of BackendBucket resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.backendBuckets.list",
"parameterOrder": [
@@ -1051,7 +1051,7 @@
]
},
"patch": {
- "description": "Updates the specified BackendBucket resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the specified BackendBucket resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.backendBuckets.patch",
"parameterOrder": [
@@ -1092,7 +1092,7 @@
]
},
"update": {
- "description": "Updates the specified BackendBucket resource with the data included in the request.",
+ "description": "Updates the specified BackendBucket resource with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.backendBuckets.update",
"parameterOrder": [
@@ -1137,7 +1137,7 @@
"backendServices": {
"methods": {
"addSignedUrlKey": {
- "description": "Adds a key for validating requests with signed URLs for this backend service.",
+ "description": "Adds a key for validating requests with signed URLs for this backend service. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendServices.addSignedUrlKey",
"parameterOrder": [
@@ -1177,7 +1177,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves the list of all BackendService resources, regional and global, available to the specified project.",
+ "description": "Retrieves the list of all BackendService resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.backendServices.aggregatedList",
"parameterOrder": [
@@ -1226,7 +1226,7 @@
]
},
"delete": {
- "description": "Deletes the specified BackendService resource.",
+ "description": "Deletes the specified BackendService resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.backendServices.delete",
"parameterOrder": [
@@ -1264,7 +1264,7 @@
]
},
"deleteSignedUrlKey": {
- "description": "Deletes a key for validating requests with signed URLs for this backend service.",
+ "description": "Deletes a key for validating requests with signed URLs for this backend service. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendServices.deleteSignedUrlKey",
"parameterOrder": [
@@ -1308,7 +1308,7 @@
]
},
"get": {
- "description": "Returns the specified BackendService resource. Gets a list of available backend services.",
+ "description": "Returns the specified BackendService resource. Gets a list of available backend services. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.backendServices.get",
"parameterOrder": [
@@ -1342,7 +1342,7 @@
]
},
"getHealth": {
- "description": "Gets the most recent health check results for this BackendService.",
+ "description": "Gets the most recent health check results for this BackendService. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendServices.getHealth",
"parameterOrder": [
@@ -1378,7 +1378,7 @@
]
},
"insert": {
- "description": "Creates a BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a backend service. Read Restrictions and Guidelines for more information.",
+ "description": "Creates a BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendServices.insert",
"parameterOrder": [
@@ -1411,7 +1411,7 @@
]
},
"list": {
- "description": "Retrieves the list of BackendService resources available to the specified project.",
+ "description": "Retrieves the list of BackendService resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.backendServices.list",
"parameterOrder": [
@@ -1460,7 +1460,7 @@
]
},
"patch": {
- "description": "Patches the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Patches the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.backendServices.patch",
"parameterOrder": [
@@ -1501,7 +1501,7 @@
]
},
"setSecurityPolicy": {
- "description": "Sets the security policy for the specified backend service.",
+ "description": "Sets the security policy for the specified backend service. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.backendServices.setSecurityPolicy",
"parameterOrder": [
@@ -1541,7 +1541,7 @@
]
},
"update": {
- "description": "Updates the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information.",
+ "description": "Updates the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.backendServices.update",
"parameterOrder": [
@@ -1586,7 +1586,7 @@
"diskTypes": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of disk types.",
+ "description": "Retrieves an aggregated list of disk types. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.diskTypes.aggregatedList",
"parameterOrder": [
@@ -1635,7 +1635,7 @@
]
},
"get": {
- "description": "Returns the specified disk type. Gets a list of available disk types by making a list() request.",
+ "description": "Returns the specified disk type. Gets a list of available disk types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.diskTypes.get",
"parameterOrder": [
@@ -1677,7 +1677,7 @@
]
},
"list": {
- "description": "Retrieves a list of disk types available to the specified project.",
+ "description": "Retrieves a list of disk types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.diskTypes.list",
"parameterOrder": [
@@ -1738,7 +1738,7 @@
"disks": {
"methods": {
"addResourcePolicies": {
- "description": "Adds existing resource policies to a disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation.",
+ "description": "Adds existing resource policies to a disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.addResourcePolicies",
"parameterOrder": [
@@ -1787,7 +1787,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves an aggregated list of persistent disks.",
+ "description": "Retrieves an aggregated list of persistent disks. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.disks.aggregatedList",
"parameterOrder": [
@@ -1836,7 +1836,7 @@
]
},
"createSnapshot": {
- "description": "Creates a snapshot of a specified persistent disk.",
+ "description": "Creates a snapshot of a specified persistent disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.createSnapshot",
"parameterOrder": [
@@ -1890,7 +1890,7 @@
]
},
"delete": {
- "description": "Deletes the specified persistent disk. Deleting a disk removes its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots.",
+ "description": "Deletes the specified persistent disk. Deleting a disk removes its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.disks.delete",
"parameterOrder": [
@@ -1935,7 +1935,7 @@
]
},
"get": {
- "description": "Returns a specified persistent disk. Gets a list of available persistent disks by making a list() request.",
+ "description": "Returns a specified persistent disk. Gets a list of available persistent disks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.disks.get",
"parameterOrder": [
@@ -1977,7 +1977,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.disks.getIamPolicy",
"parameterOrder": [
@@ -2019,7 +2019,7 @@
]
},
"insert": {
- "description": "Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 500 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property.",
+ "description": "Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 500 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.insert",
"parameterOrder": [
@@ -2065,7 +2065,7 @@
]
},
"list": {
- "description": "Retrieves a list of persistent disks contained within the specified zone.",
+ "description": "Retrieves a list of persistent disks contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.disks.list",
"parameterOrder": [
@@ -2122,7 +2122,7 @@
]
},
"removeResourcePolicies": {
- "description": "Removes resource policies from a disk.",
+ "description": "Removes resource policies from a disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.removeResourcePolicies",
"parameterOrder": [
@@ -2171,7 +2171,7 @@
]
},
"resize": {
- "description": "Resizes the specified persistent disk. You can only increase the size of the disk.",
+ "description": "Resizes the specified persistent disk. You can only increase the size of the disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.resize",
"parameterOrder": [
@@ -2220,7 +2220,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.setIamPolicy",
"parameterOrder": [
@@ -2264,7 +2264,7 @@
]
},
"setLabels": {
- "description": "Sets the labels on a disk. To learn more about labels, read the Labeling Resources documentation.",
+ "description": "Sets the labels on a disk. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.setLabels",
"parameterOrder": [
@@ -2313,7 +2313,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.disks.testIamPermissions",
"parameterOrder": [
@@ -2362,7 +2362,7 @@
"externalVpnGateways": {
"methods": {
"delete": {
- "description": "Deletes the specified externalVpnGateway.",
+ "description": "Deletes the specified externalVpnGateway. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.externalVpnGateways.delete",
"parameterOrder": [
@@ -2400,7 +2400,7 @@
]
},
"get": {
- "description": "Returns the specified externalVpnGateway. Get a list of available externalVpnGateways by making a list() request.",
+ "description": "Returns the specified externalVpnGateway. Get a list of available externalVpnGateways by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.externalVpnGateways.get",
"parameterOrder": [
@@ -2434,7 +2434,7 @@
]
},
"insert": {
- "description": "Creates a ExternalVpnGateway in the specified project using the data included in the request.",
+ "description": "Creates a ExternalVpnGateway in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.externalVpnGateways.insert",
"parameterOrder": [
@@ -2467,7 +2467,7 @@
]
},
"list": {
- "description": "Retrieves the list of ExternalVpnGateway available to the specified project.",
+ "description": "Retrieves the list of ExternalVpnGateway available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.externalVpnGateways.list",
"parameterOrder": [
@@ -2516,7 +2516,7 @@
]
},
"setLabels": {
- "description": "Sets the labels on an ExternalVpnGateway. To learn more about labels, read the Labeling Resources documentation.",
+ "description": "Sets the labels on an ExternalVpnGateway. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.externalVpnGateways.setLabels",
"parameterOrder": [
@@ -2552,7 +2552,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.externalVpnGateways.testIamPermissions",
"parameterOrder": [
@@ -2593,7 +2593,7 @@
"firewalls": {
"methods": {
"delete": {
- "description": "Deletes the specified firewall.",
+ "description": "Deletes the specified firewall. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.firewalls.delete",
"parameterOrder": [
@@ -2631,7 +2631,7 @@
]
},
"get": {
- "description": "Returns the specified firewall.",
+ "description": "Returns the specified firewall. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.firewalls.get",
"parameterOrder": [
@@ -2665,7 +2665,7 @@
]
},
"insert": {
- "description": "Creates a firewall rule in the specified project using the data included in the request.",
+ "description": "Creates a firewall rule in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.firewalls.insert",
"parameterOrder": [
@@ -2698,7 +2698,7 @@
]
},
"list": {
- "description": "Retrieves the list of firewall rules available to the specified project.",
+ "description": "Retrieves the list of firewall rules available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.firewalls.list",
"parameterOrder": [
@@ -2747,7 +2747,7 @@
]
},
"patch": {
- "description": "Updates the specified firewall rule with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the specified firewall rule with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.firewalls.patch",
"parameterOrder": [
@@ -2788,7 +2788,7 @@
]
},
"update": {
- "description": "Updates the specified firewall rule with the data included in the request. The PUT method can only update the following fields of firewall rule: allowed, description, sourceRanges, sourceTags, targetTags.",
+ "description": "Updates the specified firewall rule with the data included in the request. Note that all fields will be updated if using PUT, even fields that are not specified. To update individual fields, please use PATCH instead. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.firewalls.update",
"parameterOrder": [
@@ -2833,7 +2833,7 @@
"forwardingRules": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of forwarding rules.",
+ "description": "Retrieves an aggregated list of forwarding rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.forwardingRules.aggregatedList",
"parameterOrder": [
@@ -2882,7 +2882,7 @@
]
},
"delete": {
- "description": "Deletes the specified ForwardingRule resource.",
+ "description": "Deletes the specified ForwardingRule resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.forwardingRules.delete",
"parameterOrder": [
@@ -2928,7 +2928,7 @@
]
},
"get": {
- "description": "Returns the specified ForwardingRule resource.",
+ "description": "Returns the specified ForwardingRule resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.forwardingRules.get",
"parameterOrder": [
@@ -2970,7 +2970,7 @@
]
},
"insert": {
- "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request.",
+ "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.forwardingRules.insert",
"parameterOrder": [
@@ -3011,7 +3011,7 @@
]
},
"list": {
- "description": "Retrieves a list of ForwardingRule resources available to the specified project and region.",
+ "description": "Retrieves a list of ForwardingRule resources available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.forwardingRules.list",
"parameterOrder": [
@@ -3068,7 +3068,7 @@
]
},
"setTarget": {
- "description": "Changes target URL for forwarding rule. The new target should be of the same type as the old target.",
+ "description": "Changes target URL for forwarding rule. The new target should be of the same type as the old target. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.forwardingRules.setTarget",
"parameterOrder": [
@@ -3121,7 +3121,7 @@
"globalAddresses": {
"methods": {
"delete": {
- "description": "Deletes the specified address resource.",
+ "description": "Deletes the specified address resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.globalAddresses.delete",
"parameterOrder": [
@@ -3159,7 +3159,7 @@
]
},
"get": {
- "description": "Returns the specified address resource. Gets a list of available addresses by making a list() request.",
+ "description": "Returns the specified address resource. Gets a list of available addresses by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalAddresses.get",
"parameterOrder": [
@@ -3193,7 +3193,7 @@
]
},
"insert": {
- "description": "Creates an address resource in the specified project by using the data included in the request.",
+ "description": "Creates an address resource in the specified project by using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.globalAddresses.insert",
"parameterOrder": [
@@ -3226,7 +3226,7 @@
]
},
"list": {
- "description": "Retrieves a list of global addresses.",
+ "description": "Retrieves a list of global addresses. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalAddresses.list",
"parameterOrder": [
@@ -3279,7 +3279,7 @@
"globalForwardingRules": {
"methods": {
"delete": {
- "description": "Deletes the specified GlobalForwardingRule resource.",
+ "description": "Deletes the specified GlobalForwardingRule resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.globalForwardingRules.delete",
"parameterOrder": [
@@ -3317,7 +3317,7 @@
]
},
"get": {
- "description": "Returns the specified GlobalForwardingRule resource. Gets a list of available forwarding rules by making a list() request.",
+ "description": "Returns the specified GlobalForwardingRule resource. Gets a list of available forwarding rules by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalForwardingRules.get",
"parameterOrder": [
@@ -3351,7 +3351,7 @@
]
},
"insert": {
- "description": "Creates a GlobalForwardingRule resource in the specified project using the data included in the request.",
+ "description": "Creates a GlobalForwardingRule resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.globalForwardingRules.insert",
"parameterOrder": [
@@ -3384,7 +3384,7 @@
]
},
"list": {
- "description": "Retrieves a list of GlobalForwardingRule resources available to the specified project.",
+ "description": "Retrieves a list of GlobalForwardingRule resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalForwardingRules.list",
"parameterOrder": [
@@ -3433,7 +3433,7 @@
]
},
"setTarget": {
- "description": "Changes target URL for the GlobalForwardingRule resource. The new target should be of the same type as the old target.",
+ "description": "Changes target URL for the GlobalForwardingRule resource. The new target should be of the same type as the old target. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.globalForwardingRules.setTarget",
"parameterOrder": [
@@ -3478,7 +3478,7 @@
"globalOperations": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of all operations.",
+ "description": "Retrieves an aggregated list of all operations. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalOperations.aggregatedList",
"parameterOrder": [
@@ -3527,7 +3527,7 @@
]
},
"delete": {
- "description": "Deletes the specified Operations resource.",
+ "description": "Deletes the specified Operations resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.globalOperations.delete",
"parameterOrder": [
@@ -3557,7 +3557,7 @@
]
},
"get": {
- "description": "Retrieves the specified Operations resource. Gets a list of operations by making a list() request.",
+ "description": "Retrieves the specified Operations resource. Gets a list of operations by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalOperations.get",
"parameterOrder": [
@@ -3591,7 +3591,7 @@
]
},
"list": {
- "description": "Retrieves a list of Operation resources contained within the specified project.",
+ "description": "Retrieves a list of Operation resources contained within the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.globalOperations.list",
"parameterOrder": [
@@ -3644,7 +3644,7 @@
"healthChecks": {
"methods": {
"aggregatedList": {
- "description": "Retrieves the list of all HealthCheck resources, regional and global, available to the specified project.",
+ "description": "Retrieves the list of all HealthCheck resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.healthChecks.aggregatedList",
"parameterOrder": [
@@ -3693,7 +3693,7 @@
]
},
"delete": {
- "description": "Deletes the specified HealthCheck resource.",
+ "description": "Deletes the specified HealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.healthChecks.delete",
"parameterOrder": [
@@ -3731,7 +3731,7 @@
]
},
"get": {
- "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request.",
+ "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.healthChecks.get",
"parameterOrder": [
@@ -3765,7 +3765,7 @@
]
},
"insert": {
- "description": "Creates a HealthCheck resource in the specified project using the data included in the request.",
+ "description": "Creates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.healthChecks.insert",
"parameterOrder": [
@@ -3798,7 +3798,7 @@
]
},
"list": {
- "description": "Retrieves the list of HealthCheck resources available to the specified project.",
+ "description": "Retrieves the list of HealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.healthChecks.list",
"parameterOrder": [
@@ -3847,7 +3847,7 @@
]
},
"patch": {
- "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.healthChecks.patch",
"parameterOrder": [
@@ -3888,7 +3888,7 @@
]
},
"update": {
- "description": "Updates a HealthCheck resource in the specified project using the data included in the request.",
+ "description": "Updates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.healthChecks.update",
"parameterOrder": [
@@ -3933,7 +3933,7 @@
"httpHealthChecks": {
"methods": {
"delete": {
- "description": "Deletes the specified HttpHealthCheck resource.",
+ "description": "Deletes the specified HttpHealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.httpHealthChecks.delete",
"parameterOrder": [
@@ -3971,7 +3971,7 @@
]
},
"get": {
- "description": "Returns the specified HttpHealthCheck resource. Gets a list of available HTTP health checks by making a list() request.",
+ "description": "Returns the specified HttpHealthCheck resource. Gets a list of available HTTP health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.httpHealthChecks.get",
"parameterOrder": [
@@ -4005,7 +4005,7 @@
]
},
"insert": {
- "description": "Creates a HttpHealthCheck resource in the specified project using the data included in the request.",
+ "description": "Creates a HttpHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.httpHealthChecks.insert",
"parameterOrder": [
@@ -4038,7 +4038,7 @@
]
},
"list": {
- "description": "Retrieves the list of HttpHealthCheck resources available to the specified project.",
+ "description": "Retrieves the list of HttpHealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.httpHealthChecks.list",
"parameterOrder": [
@@ -4087,7 +4087,7 @@
]
},
"patch": {
- "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.httpHealthChecks.patch",
"parameterOrder": [
@@ -4128,7 +4128,7 @@
]
},
"update": {
- "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request.",
+ "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.httpHealthChecks.update",
"parameterOrder": [
@@ -4173,7 +4173,7 @@
"httpsHealthChecks": {
"methods": {
"delete": {
- "description": "Deletes the specified HttpsHealthCheck resource.",
+ "description": "Deletes the specified HttpsHealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.httpsHealthChecks.delete",
"parameterOrder": [
@@ -4211,7 +4211,7 @@
]
},
"get": {
- "description": "Returns the specified HttpsHealthCheck resource. Gets a list of available HTTPS health checks by making a list() request.",
+ "description": "Returns the specified HttpsHealthCheck resource. Gets a list of available HTTPS health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.httpsHealthChecks.get",
"parameterOrder": [
@@ -4245,7 +4245,7 @@
]
},
"insert": {
- "description": "Creates a HttpsHealthCheck resource in the specified project using the data included in the request.",
+ "description": "Creates a HttpsHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.httpsHealthChecks.insert",
"parameterOrder": [
@@ -4278,7 +4278,7 @@
]
},
"list": {
- "description": "Retrieves the list of HttpsHealthCheck resources available to the specified project.",
+ "description": "Retrieves the list of HttpsHealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.httpsHealthChecks.list",
"parameterOrder": [
@@ -4327,7 +4327,7 @@
]
},
"patch": {
- "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.httpsHealthChecks.patch",
"parameterOrder": [
@@ -4368,7 +4368,7 @@
]
},
"update": {
- "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request.",
+ "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.httpsHealthChecks.update",
"parameterOrder": [
@@ -4413,7 +4413,7 @@
"images": {
"methods": {
"delete": {
- "description": "Deletes the specified image.",
+ "description": "Deletes the specified image. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.images.delete",
"parameterOrder": [
@@ -4451,7 +4451,7 @@
]
},
"deprecate": {
- "description": "Sets the deprecation status of an image.\n\nIf an empty request body is given, clears the deprecation status instead.",
+ "description": "Sets the deprecation status of an image.\n\nIf an empty request body is given, clears the deprecation status instead. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.images.deprecate",
"parameterOrder": [
@@ -4492,7 +4492,7 @@
]
},
"get": {
- "description": "Returns the specified image. Gets a list of available images by making a list() request.",
+ "description": "Returns the specified image. Gets a list of available images by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.images.get",
"parameterOrder": [
@@ -4526,7 +4526,7 @@
]
},
"getFromFamily": {
- "description": "Returns the latest image that is part of an image family and is not deprecated.",
+ "description": "Returns the latest image that is part of an image family and is not deprecated. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.images.getFromFamily",
"parameterOrder": [
@@ -4560,7 +4560,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.images.getIamPolicy",
"parameterOrder": [
@@ -4594,7 +4594,7 @@
]
},
"insert": {
- "description": "Creates an image in the specified project using the data included in the request.",
+ "description": "Creates an image in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.images.insert",
"parameterOrder": [
@@ -4635,7 +4635,7 @@
]
},
"list": {
- "description": "Retrieves the list of custom images available to the specified project. Custom images are images you create that belong to your project. This method does not get any images that belong to other projects, including publicly-available images, like Debian 8. If you want to get a list of publicly-available images, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud.",
+ "description": "Retrieves the list of custom images available to the specified project. Custom images are images you create that belong to your project. This method does not get any images that belong to other projects, including publicly-available images, like Debian 8. If you want to get a list of publicly-available images, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.images.list",
"parameterOrder": [
@@ -4684,7 +4684,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.images.setIamPolicy",
"parameterOrder": [
@@ -4720,7 +4720,7 @@
]
},
"setLabels": {
- "description": "Sets the labels on an image. To learn more about labels, read the Labeling Resources documentation.",
+ "description": "Sets the labels on an image. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.images.setLabels",
"parameterOrder": [
@@ -4756,7 +4756,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.images.testIamPermissions",
"parameterOrder": [
@@ -4797,7 +4797,7 @@
"instanceGroupManagers": {
"methods": {
"abandonInstances": {
- "description": "Flags the specified instances to be removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ "description": "Flags the specified instances to be removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.abandonInstances",
"parameterOrder": [
@@ -4844,7 +4844,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves the list of managed instance groups and groups them by zone.",
+ "description": "Retrieves the list of managed instance groups and groups them by zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceGroupManagers.aggregatedList",
"parameterOrder": [
@@ -4893,7 +4893,7 @@
]
},
"delete": {
- "description": "Deletes the specified managed instance group and all of the instances in that group. Note that the instance group must not belong to a backend service. Read Deleting an instance group for more information.",
+ "description": "Deletes the specified managed instance group and all of the instances in that group. Note that the instance group must not belong to a backend service. Read Deleting an instance group for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.instanceGroupManagers.delete",
"parameterOrder": [
@@ -4937,7 +4937,7 @@
]
},
"deleteInstances": {
- "description": "Flags the specified instances in the managed instance group for immediate deletion. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. This operation is marked as DONE when the action is scheduled even if the instances are still being deleted. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ "description": "Flags the specified instances in the managed instance group for immediate deletion. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. This operation is marked as DONE when the action is scheduled even if the instances are still being deleted. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.deleteInstances",
"parameterOrder": [
@@ -4984,7 +4984,7 @@
]
},
"get": {
- "description": "Returns all of the details about the specified managed instance group. Gets a list of available managed instance groups by making a list() request.",
+ "description": "Returns all of the details about the specified managed instance group. Gets a list of available managed instance groups by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceGroupManagers.get",
"parameterOrder": [
@@ -5024,7 +5024,7 @@
]
},
"insert": {
- "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA managed instance group can have up to 1000 VM instances per group. Please contact Cloud Support if you need an increase in this limit.",
+ "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA managed instance group can have up to 1000 VM instances per group. Please contact Cloud Support if you need an increase in this limit. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.insert",
"parameterOrder": [
@@ -5064,7 +5064,7 @@
]
},
"list": {
- "description": "Retrieves a list of managed instance groups that are contained within the specified project and zone.",
+ "description": "Retrieves a list of managed instance groups that are contained within the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceGroupManagers.list",
"parameterOrder": [
@@ -5120,7 +5120,7 @@
]
},
"listManagedInstances": {
- "description": "Lists all of the instances in the managed instance group. Each instance in the list has a currentAction, which indicates the action that the managed instance group is performing on the instance. For example, if the group is still creating an instance, the currentAction is CREATING. If a previous action failed, the list displays the errors for that failed action.",
+ "description": "Lists all of the instances in the managed instance group. Each instance in the list has a currentAction, which indicates the action that the managed instance group is performing on the instance. For example, if the group is still creating an instance, the currentAction is CREATING. If a previous action failed, the list displays the errors for that failed action. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.listManagedInstances",
"parameterOrder": [
@@ -5183,7 +5183,7 @@
]
},
"patch": {
- "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listManagedInstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listManagedInstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.instanceGroupManagers.patch",
"parameterOrder": [
@@ -5230,7 +5230,7 @@
]
},
"recreateInstances": {
- "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.recreateInstances",
"parameterOrder": [
@@ -5277,7 +5277,7 @@
]
},
"resize": {
- "description": "Resizes the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes instances. The resize operation is marked DONE when the resize actions are scheduled even if the group has not yet added or deleted any instances. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nWhen resizing down, the instance group arbitrarily chooses the order in which VMs are deleted. The group takes into account some VM attributes when making the selection including:\n\n+ The status of the VM instance. + The health of the VM instance. + The instance template version the VM is based on. + For regional managed instance groups, the location of the VM instance.\n\nThis list is subject to change.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.",
+ "description": "Resizes the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes instances. The resize operation is marked DONE when the resize actions are scheduled even if the group has not yet added or deleted any instances. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nWhen resizing down, the instance group arbitrarily chooses the order in which VMs are deleted. The group takes into account some VM attributes when making the selection including:\n\n+ The status of the VM instance. + The health of the VM instance. + The instance template version the VM is based on. + For regional managed instance groups, the location of the VM instance.\n\nThis list is subject to change.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.resize",
"parameterOrder": [
@@ -5329,7 +5329,7 @@
]
},
"setInstanceTemplate": {
- "description": "Specifies the instance template to use when creating new instances in this group. The templates for existing instances in the group do not change unless you recreate them.",
+ "description": "Specifies the instance template to use when creating new instances in this group. The templates for existing instances in the group do not change unless you recreate them. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.setInstanceTemplate",
"parameterOrder": [
@@ -5376,7 +5376,7 @@
]
},
"setTargetPools": {
- "description": "Modifies the target pools to which all instances in this managed instance group are assigned. The target pools automatically apply to all of the instances in the managed instance group. This operation is marked DONE when you make the request even if the instances have not yet been added to their target pools. The change might take some time to apply to all of the instances in the group depending on the size of the group.",
+ "description": "Modifies the target pools to which all instances in this managed instance group are assigned. The target pools automatically apply to all of the instances in the managed instance group. This operation is marked DONE when you make the request even if the instances have not yet been added to their target pools. The change might take some time to apply to all of the instances in the group depending on the size of the group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroupManagers.setTargetPools",
"parameterOrder": [
@@ -5427,7 +5427,7 @@
"instanceGroups": {
"methods": {
"addInstances": {
- "description": "Adds a list of instances to the specified instance group. All of the instances in the instance group must be in the same network/subnetwork. Read Adding instances for more information.",
+ "description": "Adds a list of instances to the specified instance group. All of the instances in the instance group must be in the same network/subnetwork. Read Adding instances for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroups.addInstances",
"parameterOrder": [
@@ -5474,7 +5474,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves the list of instance groups and sorts them by zone.",
+ "description": "Retrieves the list of instance groups and sorts them by zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceGroups.aggregatedList",
"parameterOrder": [
@@ -5523,7 +5523,7 @@
]
},
"delete": {
- "description": "Deletes the specified instance group. The instances in the group are not deleted. Note that instance group must not belong to a backend service. Read Deleting an instance group for more information.",
+ "description": "Deletes the specified instance group. The instances in the group are not deleted. Note that instance group must not belong to a backend service. Read Deleting an instance group for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.instanceGroups.delete",
"parameterOrder": [
@@ -5567,7 +5567,7 @@
]
},
"get": {
- "description": "Returns the specified instance group. Gets a list of available instance groups by making a list() request.",
+ "description": "Returns the specified instance group. Gets a list of available instance groups by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceGroups.get",
"parameterOrder": [
@@ -5607,7 +5607,7 @@
]
},
"insert": {
- "description": "Creates an instance group in the specified project using the parameters that are included in the request.",
+ "description": "Creates an instance group in the specified project using the parameters that are included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroups.insert",
"parameterOrder": [
@@ -5647,7 +5647,7 @@
]
},
"list": {
- "description": "Retrieves the list of instance groups that are located in the specified project and zone.",
+ "description": "Retrieves the list of instance groups that are located in the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceGroups.list",
"parameterOrder": [
@@ -5703,7 +5703,7 @@
]
},
"listInstances": {
- "description": "Lists the instances in the specified instance group.",
+ "description": "Lists the instances in the specified instance group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroups.listInstances",
"parameterOrder": [
@@ -5769,7 +5769,7 @@
]
},
"removeInstances": {
- "description": "Removes one or more instances from the specified instance group, but does not delete those instances.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration before the VM instance is removed or deleted.",
+ "description": "Removes one or more instances from the specified instance group, but does not delete those instances.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration before the VM instance is removed or deleted. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroups.removeInstances",
"parameterOrder": [
@@ -5816,7 +5816,7 @@
]
},
"setNamedPorts": {
- "description": "Sets the named ports for the specified instance group.",
+ "description": "Sets the named ports for the specified instance group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceGroups.setNamedPorts",
"parameterOrder": [
@@ -5867,7 +5867,7 @@
"instanceTemplates": {
"methods": {
"delete": {
- "description": "Deletes the specified instance template. Deleting an instance template is permanent and cannot be undone. It is not possible to delete templates that are already in use by a managed instance group.",
+ "description": "Deletes the specified instance template. Deleting an instance template is permanent and cannot be undone. It is not possible to delete templates that are already in use by a managed instance group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.instanceTemplates.delete",
"parameterOrder": [
@@ -5905,7 +5905,7 @@
]
},
"get": {
- "description": "Returns the specified instance template. Gets a list of available instance templates by making a list() request.",
+ "description": "Returns the specified instance template. Gets a list of available instance templates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceTemplates.get",
"parameterOrder": [
@@ -5939,7 +5939,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceTemplates.getIamPolicy",
"parameterOrder": [
@@ -5973,7 +5973,7 @@
]
},
"insert": {
- "description": "Creates an instance template in the specified project using the data that is included in the request. If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template.",
+ "description": "Creates an instance template in the specified project using the data that is included in the request. If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceTemplates.insert",
"parameterOrder": [
@@ -6006,7 +6006,7 @@
]
},
"list": {
- "description": "Retrieves a list of instance templates that are contained within the specified project.",
+ "description": "Retrieves a list of instance templates that are contained within the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instanceTemplates.list",
"parameterOrder": [
@@ -6055,7 +6055,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceTemplates.setIamPolicy",
"parameterOrder": [
@@ -6091,7 +6091,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instanceTemplates.testIamPermissions",
"parameterOrder": [
@@ -6132,7 +6132,7 @@
"instances": {
"methods": {
"addAccessConfig": {
- "description": "Adds an access config to an instance's network interface.",
+ "description": "Adds an access config to an instance's network interface. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.addAccessConfig",
"parameterOrder": [
@@ -6188,7 +6188,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves aggregated list of all of the instances in your project across all regions and zones.",
+ "description": "Retrieves aggregated list of all of the instances in your project across all regions and zones. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.aggregatedList",
"parameterOrder": [
@@ -6237,7 +6237,7 @@
]
},
"attachDisk": {
- "description": "Attaches an existing Disk resource to an instance. You must first create the disk before you can attach it. It is not possible to create and attach a disk at the same time. For more information, read Adding a persistent disk to your instance.",
+ "description": "Attaches an existing Disk resource to an instance. You must first create the disk before you can attach it. It is not possible to create and attach a disk at the same time. For more information, read Adding a persistent disk to your instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.attachDisk",
"parameterOrder": [
@@ -6291,7 +6291,7 @@
]
},
"delete": {
- "description": "Deletes the specified Instance resource. For more information, see Stopping or Deleting an Instance.",
+ "description": "Deletes the specified Instance resource. For more information, see Stopping or Deleting an Instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.instances.delete",
"parameterOrder": [
@@ -6337,7 +6337,7 @@
]
},
"deleteAccessConfig": {
- "description": "Deletes an access config from an instance's network interface.",
+ "description": "Deletes an access config from an instance's network interface. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.deleteAccessConfig",
"parameterOrder": [
@@ -6397,7 +6397,7 @@
]
},
"detachDisk": {
- "description": "Detaches a disk from an instance.",
+ "description": "Detaches a disk from an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.detachDisk",
"parameterOrder": [
@@ -6450,7 +6450,7 @@
]
},
"get": {
- "description": "Returns the specified Instance resource. Gets a list of available instances by making a list() request.",
+ "description": "Returns the specified Instance resource. Gets a list of available instances by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.get",
"parameterOrder": [
@@ -6492,7 +6492,7 @@
]
},
"getGuestAttributes": {
- "description": "Returns the specified guest attributes entry.",
+ "description": "Returns the specified guest attributes entry. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.getGuestAttributes",
"parameterOrder": [
@@ -6544,7 +6544,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.getIamPolicy",
"parameterOrder": [
@@ -6586,7 +6586,7 @@
]
},
"getSerialPortOutput": {
- "description": "Returns the last 1 MB of serial port output from the specified instance.",
+ "description": "Returns the last 1 MB of serial port output from the specified instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.getSerialPortOutput",
"parameterOrder": [
@@ -6643,7 +6643,7 @@
]
},
"getShieldedInstanceIdentity": {
- "description": "Returns the Shielded Instance Identity of an instance",
+ "description": "Returns the Shielded Instance Identity of an instance (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.getShieldedInstanceIdentity",
"parameterOrder": [
@@ -6685,7 +6685,7 @@
]
},
"insert": {
- "description": "Creates an instance resource in the specified project using the data included in the request.",
+ "description": "Creates an instance resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.insert",
"parameterOrder": [
@@ -6731,7 +6731,7 @@
]
},
"list": {
- "description": "Retrieves the list of instances contained within the specified zone.",
+ "description": "Retrieves the list of instances contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.list",
"parameterOrder": [
@@ -6788,7 +6788,7 @@
]
},
"listReferrers": {
- "description": "Retrieves the list of referrers to instances contained within the specified zone. For more information, read Viewing Referrers to VM Instances.",
+ "description": "Retrieves the list of referrers to instances contained within the specified zone. For more information, read Viewing Referrers to VM Instances. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.instances.listReferrers",
"parameterOrder": [
@@ -6853,7 +6853,7 @@
]
},
"reset": {
- "description": "Performs a reset on the instance. This is a hard reset the VM does not do a graceful shutdown. For more information, see Resetting an instance.",
+ "description": "Performs a reset on the instance. This is a hard reset the VM does not do a graceful shutdown. For more information, see Resetting an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.reset",
"parameterOrder": [
@@ -6899,7 +6899,7 @@
]
},
"setDeletionProtection": {
- "description": "Sets deletion protection on the instance.",
+ "description": "Sets deletion protection on the instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setDeletionProtection",
"parameterOrder": [
@@ -6951,7 +6951,7 @@
]
},
"setDiskAutoDelete": {
- "description": "Sets the auto-delete flag for a disk attached to an instance.",
+ "description": "Sets the auto-delete flag for a disk attached to an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setDiskAutoDelete",
"parameterOrder": [
@@ -7012,7 +7012,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setIamPolicy",
"parameterOrder": [
@@ -7056,7 +7056,7 @@
]
},
"setLabels": {
- "description": "Sets labels on an instance. To learn more about labels, read the Labeling Resources documentation.",
+ "description": "Sets labels on an instance. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setLabels",
"parameterOrder": [
@@ -7105,7 +7105,7 @@
]
},
"setMachineResources": {
- "description": "Changes the number and/or type of accelerator for a stopped instance to the values specified in the request.",
+ "description": "Changes the number and/or type of accelerator for a stopped instance to the values specified in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setMachineResources",
"parameterOrder": [
@@ -7154,7 +7154,7 @@
]
},
"setMachineType": {
- "description": "Changes the machine type for a stopped instance to the machine type specified in the request.",
+ "description": "Changes the machine type for a stopped instance to the machine type specified in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setMachineType",
"parameterOrder": [
@@ -7203,7 +7203,7 @@
]
},
"setMetadata": {
- "description": "Sets metadata for the specified instance to the data included in the request.",
+ "description": "Sets metadata for the specified instance to the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setMetadata",
"parameterOrder": [
@@ -7252,7 +7252,7 @@
]
},
"setMinCpuPlatform": {
- "description": "Changes the minimum CPU platform that this instance should use. This method can only be called on a stopped instance. For more information, read Specifying a Minimum CPU Platform.",
+ "description": "Changes the minimum CPU platform that this instance should use. This method can only be called on a stopped instance. For more information, read Specifying a Minimum CPU Platform. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setMinCpuPlatform",
"parameterOrder": [
@@ -7301,7 +7301,7 @@
]
},
"setScheduling": {
- "description": "Sets an instance's scheduling options.",
+ "description": "Sets an instance's scheduling options. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setScheduling",
"parameterOrder": [
@@ -7350,7 +7350,7 @@
]
},
"setServiceAccount": {
- "description": "Sets the service account on the instance. For more information, read Changing the service account and access scopes for an instance.",
+ "description": "Sets the service account on the instance. For more information, read Changing the service account and access scopes for an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setServiceAccount",
"parameterOrder": [
@@ -7399,7 +7399,7 @@
]
},
"setShieldedInstanceIntegrityPolicy": {
- "description": "Sets the Shielded Instance integrity policy for an instance. You can only use this method on a running instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Sets the Shielded Instance integrity policy for an instance. You can only use this method on a running instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.instances.setShieldedInstanceIntegrityPolicy",
"parameterOrder": [
@@ -7448,7 +7448,7 @@
]
},
"setTags": {
- "description": "Sets network tags for the specified instance to the data included in the request.",
+ "description": "Sets network tags for the specified instance to the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.setTags",
"parameterOrder": [
@@ -7497,7 +7497,7 @@
]
},
"simulateMaintenanceEvent": {
- "description": "Simulates a maintenance event on the instance.",
+ "description": "Simulates a maintenance event on the instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.simulateMaintenanceEvent",
"parameterOrder": [
@@ -7538,7 +7538,7 @@
]
},
"start": {
- "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance.",
+ "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.start",
"parameterOrder": [
@@ -7584,7 +7584,7 @@
]
},
"startWithEncryptionKey": {
- "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance.",
+ "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.startWithEncryptionKey",
"parameterOrder": [
@@ -7633,7 +7633,7 @@
]
},
"stop": {
- "description": "Stops a running instance, shutting it down cleanly, and allows you to restart the instance at a later time. Stopped instances do not incur VM usage charges while they are stopped. However, resources that the VM is using, such as persistent disks and static IP addresses, will continue to be charged until they are deleted. For more information, see Stopping an instance.",
+ "description": "Stops a running instance, shutting it down cleanly, and allows you to restart the instance at a later time. Stopped instances do not incur VM usage charges while they are stopped. However, resources that the VM is using, such as persistent disks and static IP addresses, will continue to be charged until they are deleted. For more information, see Stopping an instance. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.stop",
"parameterOrder": [
@@ -7679,7 +7679,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.testIamPermissions",
"parameterOrder": [
@@ -7724,7 +7724,7 @@
]
},
"updateAccessConfig": {
- "description": "Updates the specified access config from an instance's network interface with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the specified access config from an instance's network interface with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.instances.updateAccessConfig",
"parameterOrder": [
@@ -7780,7 +7780,7 @@
]
},
"updateDisplayDevice": {
- "description": "Updates the Display config for a VM instance. You can only use this method on a stopped VM instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the Display config for a VM instance. You can only use this method on a stopped VM instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.instances.updateDisplayDevice",
"parameterOrder": [
@@ -7829,7 +7829,7 @@
]
},
"updateNetworkInterface": {
- "description": "Updates an instance's network interface. This method follows PATCH semantics.",
+ "description": "Updates an instance's network interface. This method follows PATCH semantics. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.instances.updateNetworkInterface",
"parameterOrder": [
@@ -7885,7 +7885,7 @@
]
},
"updateShieldedInstanceConfig": {
- "description": "Updates the Shielded Instance config for an instance. You can only use this method on a stopped instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the Shielded Instance config for an instance. You can only use this method on a stopped instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.instances.updateShieldedInstanceConfig",
"parameterOrder": [
@@ -7938,7 +7938,7 @@
"interconnectAttachments": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of interconnect attachments.",
+ "description": "Retrieves an aggregated list of interconnect attachments. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnectAttachments.aggregatedList",
"parameterOrder": [
@@ -7987,7 +7987,7 @@
]
},
"delete": {
- "description": "Deletes the specified interconnect attachment.",
+ "description": "Deletes the specified interconnect attachment. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.interconnectAttachments.delete",
"parameterOrder": [
@@ -8033,7 +8033,7 @@
]
},
"get": {
- "description": "Returns the specified interconnect attachment.",
+ "description": "Returns the specified interconnect attachment. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnectAttachments.get",
"parameterOrder": [
@@ -8075,7 +8075,7 @@
]
},
"insert": {
- "description": "Creates an InterconnectAttachment in the specified project using the data included in the request.",
+ "description": "Creates an InterconnectAttachment in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.interconnectAttachments.insert",
"parameterOrder": [
@@ -8116,7 +8116,7 @@
]
},
"list": {
- "description": "Retrieves the list of interconnect attachments contained within the specified region.",
+ "description": "Retrieves the list of interconnect attachments contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnectAttachments.list",
"parameterOrder": [
@@ -8173,7 +8173,7 @@
]
},
"patch": {
- "description": "Updates the specified interconnect attachment with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the specified interconnect attachment with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.interconnectAttachments.patch",
"parameterOrder": [
@@ -8226,7 +8226,7 @@
"interconnectLocations": {
"methods": {
"get": {
- "description": "Returns the details for the specified interconnect location. Gets a list of available interconnect locations by making a list() request.",
+ "description": "Returns the details for the specified interconnect location. Gets a list of available interconnect locations by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnectLocations.get",
"parameterOrder": [
@@ -8260,7 +8260,7 @@
]
},
"list": {
- "description": "Retrieves the list of interconnect locations available to the specified project.",
+ "description": "Retrieves the list of interconnect locations available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnectLocations.list",
"parameterOrder": [
@@ -8313,7 +8313,7 @@
"interconnects": {
"methods": {
"delete": {
- "description": "Deletes the specified interconnect.",
+ "description": "Deletes the specified interconnect. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.interconnects.delete",
"parameterOrder": [
@@ -8351,7 +8351,7 @@
]
},
"get": {
- "description": "Returns the specified interconnect. Get a list of available interconnects by making a list() request.",
+ "description": "Returns the specified interconnect. Get a list of available interconnects by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnects.get",
"parameterOrder": [
@@ -8385,7 +8385,7 @@
]
},
"getDiagnostics": {
- "description": "Returns the interconnectDiagnostics for the specified interconnect.",
+ "description": "Returns the interconnectDiagnostics for the specified interconnect. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnects.getDiagnostics",
"parameterOrder": [
@@ -8419,7 +8419,7 @@
]
},
"insert": {
- "description": "Creates a Interconnect in the specified project using the data included in the request.",
+ "description": "Creates a Interconnect in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.interconnects.insert",
"parameterOrder": [
@@ -8452,7 +8452,7 @@
]
},
"list": {
- "description": "Retrieves the list of interconnect available to the specified project.",
+ "description": "Retrieves the list of interconnect available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.interconnects.list",
"parameterOrder": [
@@ -8501,7 +8501,7 @@
]
},
"patch": {
- "description": "Updates the specified interconnect with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the specified interconnect with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.interconnects.patch",
"parameterOrder": [
@@ -8546,7 +8546,7 @@
"licenseCodes": {
"methods": {
"get": {
- "description": "Return a specified license code. License codes are mirrored across all projects that have permissions to read the License Code.",
+ "description": "Return a specified license code. License codes are mirrored across all projects that have permissions to read the License Code. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.licenseCodes.get",
"parameterOrder": [
@@ -8580,7 +8580,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.licenseCodes.testIamPermissions",
"parameterOrder": [
@@ -8621,7 +8621,7 @@
"licenses": {
"methods": {
"delete": {
- "description": "Deletes the specified license.",
+ "description": "Deletes the specified license. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.licenses.delete",
"parameterOrder": [
@@ -8659,7 +8659,7 @@
]
},
"get": {
- "description": "Returns the specified License resource.",
+ "description": "Returns the specified License resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.licenses.get",
"parameterOrder": [
@@ -8693,7 +8693,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.licenses.getIamPolicy",
"parameterOrder": [
@@ -8727,7 +8727,7 @@
]
},
"insert": {
- "description": "Create a License resource in the specified project.",
+ "description": "Create a License resource in the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.licenses.insert",
"parameterOrder": [
@@ -8763,7 +8763,7 @@
]
},
"list": {
- "description": "Retrieves the list of licenses available in the specified project. This method does not get any licenses that belong to other projects, including licenses attached to publicly-available images, like Debian 9. If you want to get a list of publicly-available licenses, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud.",
+ "description": "Retrieves the list of licenses available in the specified project. This method does not get any licenses that belong to other projects, including licenses attached to publicly-available images, like Debian 9. If you want to get a list of publicly-available licenses, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.licenses.list",
"parameterOrder": [
@@ -8812,7 +8812,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.licenses.setIamPolicy",
"parameterOrder": [
@@ -8848,7 +8848,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.licenses.testIamPermissions",
"parameterOrder": [
@@ -8889,7 +8889,7 @@
"machineTypes": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of machine types.",
+ "description": "Retrieves an aggregated list of machine types. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.machineTypes.aggregatedList",
"parameterOrder": [
@@ -8938,7 +8938,7 @@
]
},
"get": {
- "description": "Returns the specified machine type. Gets a list of available machine types by making a list() request.",
+ "description": "Returns the specified machine type. Gets a list of available machine types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.machineTypes.get",
"parameterOrder": [
@@ -8980,7 +8980,7 @@
]
},
"list": {
- "description": "Retrieves a list of machine types available to the specified project.",
+ "description": "Retrieves a list of machine types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.machineTypes.list",
"parameterOrder": [
@@ -9041,7 +9041,7 @@
"networkEndpointGroups": {
"methods": {
"aggregatedList": {
- "description": "Retrieves the list of network endpoint groups and sorts them by zone.",
+ "description": "Retrieves the list of network endpoint groups and sorts them by zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.networkEndpointGroups.aggregatedList",
"parameterOrder": [
@@ -9090,7 +9090,7 @@
]
},
"attachNetworkEndpoints": {
- "description": "Attach a list of network endpoints to the specified network endpoint group.",
+ "description": "Attach a list of network endpoints to the specified network endpoint group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networkEndpointGroups.attachNetworkEndpoints",
"parameterOrder": [
@@ -9137,7 +9137,7 @@
]
},
"delete": {
- "description": "Deletes the specified network endpoint group. The network endpoints in the NEG and the VM instances they belong to are not terminated when the NEG is deleted. Note that the NEG cannot be deleted if there are backend services referencing it.",
+ "description": "Deletes the specified network endpoint group. The network endpoints in the NEG and the VM instances they belong to are not terminated when the NEG is deleted. Note that the NEG cannot be deleted if there are backend services referencing it. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.networkEndpointGroups.delete",
"parameterOrder": [
@@ -9181,7 +9181,7 @@
]
},
"detachNetworkEndpoints": {
- "description": "Detach a list of network endpoints from the specified network endpoint group.",
+ "description": "Detach a list of network endpoints from the specified network endpoint group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networkEndpointGroups.detachNetworkEndpoints",
"parameterOrder": [
@@ -9228,7 +9228,7 @@
]
},
"get": {
- "description": "Returns the specified network endpoint group. Gets a list of available network endpoint groups by making a list() request.",
+ "description": "Returns the specified network endpoint group. Gets a list of available network endpoint groups by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.networkEndpointGroups.get",
"parameterOrder": [
@@ -9268,7 +9268,7 @@
]
},
"insert": {
- "description": "Creates a network endpoint group in the specified project using the parameters that are included in the request.",
+ "description": "Creates a network endpoint group in the specified project using the parameters that are included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networkEndpointGroups.insert",
"parameterOrder": [
@@ -9308,7 +9308,7 @@
]
},
"list": {
- "description": "Retrieves the list of network endpoint groups that are located in the specified project and zone.",
+ "description": "Retrieves the list of network endpoint groups that are located in the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.networkEndpointGroups.list",
"parameterOrder": [
@@ -9364,7 +9364,7 @@
]
},
"listNetworkEndpoints": {
- "description": "Lists the network endpoints in the specified network endpoint group.",
+ "description": "Lists the network endpoints in the specified network endpoint group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networkEndpointGroups.listNetworkEndpoints",
"parameterOrder": [
@@ -9430,7 +9430,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networkEndpointGroups.testIamPermissions",
"parameterOrder": [
@@ -9479,7 +9479,7 @@
"networks": {
"methods": {
"addPeering": {
- "description": "Adds a peering to the specified network.",
+ "description": "Adds a peering to the specified network. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networks.addPeering",
"parameterOrder": [
@@ -9520,7 +9520,7 @@
]
},
"delete": {
- "description": "Deletes the specified network.",
+ "description": "Deletes the specified network. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.networks.delete",
"parameterOrder": [
@@ -9558,7 +9558,7 @@
]
},
"get": {
- "description": "Returns the specified network. Gets a list of available networks by making a list() request.",
+ "description": "Returns the specified network. Gets a list of available networks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.networks.get",
"parameterOrder": [
@@ -9592,7 +9592,7 @@
]
},
"insert": {
- "description": "Creates a network in the specified project using the data included in the request.",
+ "description": "Creates a network in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networks.insert",
"parameterOrder": [
@@ -9625,7 +9625,7 @@
]
},
"list": {
- "description": "Retrieves the list of networks available to the specified project.",
+ "description": "Retrieves the list of networks available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.networks.list",
"parameterOrder": [
@@ -9674,7 +9674,7 @@
]
},
"patch": {
- "description": "Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.",
+ "description": "Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.networks.patch",
"parameterOrder": [
@@ -9715,7 +9715,7 @@
]
},
"removePeering": {
- "description": "Removes a peering from the specified network.",
+ "description": "Removes a peering from the specified network. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networks.removePeering",
"parameterOrder": [
@@ -9756,7 +9756,7 @@
]
},
"switchToCustomMode": {
- "description": "Switches the network mode from auto subnet mode to custom subnet mode.",
+ "description": "Switches the network mode from auto subnet mode to custom subnet mode. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.networks.switchToCustomMode",
"parameterOrder": [
@@ -9794,7 +9794,7 @@
]
},
"updatePeering": {
- "description": "Updates the specified network peering with the data included in the request Only the following fields can be modified: NetworkPeering.export_custom_routes, and NetworkPeering.import_custom_routes",
+ "description": "Updates the specified network peering with the data included in the request Only the following fields can be modified: NetworkPeering.export_custom_routes, and NetworkPeering.import_custom_routes (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.networks.updatePeering",
"parameterOrder": [
@@ -9839,7 +9839,7 @@
"nodeGroups": {
"methods": {
"addNodes": {
- "description": "Adds specified number of nodes to the node group.",
+ "description": "Adds specified number of nodes to the node group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.addNodes",
"parameterOrder": [
@@ -9888,7 +9888,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves an aggregated list of node groups. Note: use nodeGroups.listNodes for more details about each group.",
+ "description": "Retrieves an aggregated list of node groups. Note: use nodeGroups.listNodes for more details about each group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeGroups.aggregatedList",
"parameterOrder": [
@@ -9937,7 +9937,7 @@
]
},
"delete": {
- "description": "Deletes the specified NodeGroup resource.",
+ "description": "Deletes the specified NodeGroup resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.nodeGroups.delete",
"parameterOrder": [
@@ -9983,7 +9983,7 @@
]
},
"deleteNodes": {
- "description": "Deletes specified nodes from the node group.",
+ "description": "Deletes specified nodes from the node group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.deleteNodes",
"parameterOrder": [
@@ -10032,7 +10032,7 @@
]
},
"get": {
- "description": "Returns the specified NodeGroup. Get a list of available NodeGroups by making a list() request. Note: the \"nodes\" field should not be used. Use nodeGroups.listNodes instead.",
+ "description": "Returns the specified NodeGroup. Get a list of available NodeGroups by making a list() request. Note: the \"nodes\" field should not be used. Use nodeGroups.listNodes instead. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeGroups.get",
"parameterOrder": [
@@ -10074,7 +10074,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeGroups.getIamPolicy",
"parameterOrder": [
@@ -10116,7 +10116,7 @@
]
},
"insert": {
- "description": "Creates a NodeGroup resource in the specified project using the data included in the request.",
+ "description": "Creates a NodeGroup resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.insert",
"parameterOrder": [
@@ -10165,7 +10165,7 @@
]
},
"list": {
- "description": "Retrieves a list of node groups available to the specified project. Note: use nodeGroups.listNodes for more details about each group.",
+ "description": "Retrieves a list of node groups available to the specified project. Note: use nodeGroups.listNodes for more details about each group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeGroups.list",
"parameterOrder": [
@@ -10222,7 +10222,7 @@
]
},
"listNodes": {
- "description": "Lists nodes in the node group.",
+ "description": "Lists nodes in the node group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.listNodes",
"parameterOrder": [
@@ -10287,7 +10287,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.setIamPolicy",
"parameterOrder": [
@@ -10331,7 +10331,7 @@
]
},
"setNodeTemplate": {
- "description": "Updates the node template of the node group.",
+ "description": "Updates the node template of the node group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.setNodeTemplate",
"parameterOrder": [
@@ -10380,7 +10380,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeGroups.testIamPermissions",
"parameterOrder": [
@@ -10429,7 +10429,7 @@
"nodeTemplates": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of node templates.",
+ "description": "Retrieves an aggregated list of node templates. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTemplates.aggregatedList",
"parameterOrder": [
@@ -10478,7 +10478,7 @@
]
},
"delete": {
- "description": "Deletes the specified NodeTemplate resource.",
+ "description": "Deletes the specified NodeTemplate resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.nodeTemplates.delete",
"parameterOrder": [
@@ -10524,7 +10524,7 @@
]
},
"get": {
- "description": "Returns the specified node template. Gets a list of available node templates by making a list() request.",
+ "description": "Returns the specified node template. Gets a list of available node templates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTemplates.get",
"parameterOrder": [
@@ -10566,7 +10566,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTemplates.getIamPolicy",
"parameterOrder": [
@@ -10608,7 +10608,7 @@
]
},
"insert": {
- "description": "Creates a NodeTemplate resource in the specified project using the data included in the request.",
+ "description": "Creates a NodeTemplate resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeTemplates.insert",
"parameterOrder": [
@@ -10649,7 +10649,7 @@
]
},
"list": {
- "description": "Retrieves a list of node templates available to the specified project.",
+ "description": "Retrieves a list of node templates available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTemplates.list",
"parameterOrder": [
@@ -10706,7 +10706,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeTemplates.setIamPolicy",
"parameterOrder": [
@@ -10750,7 +10750,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.nodeTemplates.testIamPermissions",
"parameterOrder": [
@@ -10799,7 +10799,7 @@
"nodeTypes": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of node types.",
+ "description": "Retrieves an aggregated list of node types. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTypes.aggregatedList",
"parameterOrder": [
@@ -10848,7 +10848,7 @@
]
},
"get": {
- "description": "Returns the specified node type. Gets a list of available node types by making a list() request.",
+ "description": "Returns the specified node type. Gets a list of available node types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTypes.get",
"parameterOrder": [
@@ -10890,7 +10890,7 @@
]
},
"list": {
- "description": "Retrieves a list of node types available to the specified project.",
+ "description": "Retrieves a list of node types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.nodeTypes.list",
"parameterOrder": [
@@ -10951,7 +10951,7 @@
"projects": {
"methods": {
"disableXpnHost": {
- "description": "Disable this project as a shared VPC host project.",
+ "description": "Disable this project as a shared VPC host project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.disableXpnHost",
"parameterOrder": [
@@ -10981,7 +10981,7 @@
]
},
"disableXpnResource": {
- "description": "Disable a service resource (also known as service project) associated with this host project.",
+ "description": "Disable a service resource (also known as service project) associated with this host project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.disableXpnResource",
"parameterOrder": [
@@ -11014,7 +11014,7 @@
]
},
"enableXpnHost": {
- "description": "Enable this project as a shared VPC host project.",
+ "description": "Enable this project as a shared VPC host project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.enableXpnHost",
"parameterOrder": [
@@ -11044,7 +11044,7 @@
]
},
"enableXpnResource": {
- "description": "Enable service resource (a.k.a service project) for a host project, so that subnets in the host project can be used by instances in the service project.",
+ "description": "Enable service resource (a.k.a service project) for a host project, so that subnets in the host project can be used by instances in the service project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.enableXpnResource",
"parameterOrder": [
@@ -11077,7 +11077,7 @@
]
},
"get": {
- "description": "Returns the specified Project resource.",
+ "description": "Returns the specified Project resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.projects.get",
"parameterOrder": [
@@ -11103,7 +11103,7 @@
]
},
"getXpnHost": {
- "description": "Gets the shared VPC host project that this project links to. May be empty if no link exists.",
+ "description": "Gets the shared VPC host project that this project links to. May be empty if no link exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.projects.getXpnHost",
"parameterOrder": [
@@ -11128,7 +11128,7 @@
]
},
"getXpnResources": {
- "description": "Gets service resources (a.k.a service project) associated with this host project.",
+ "description": "Gets service resources (a.k.a service project) associated with this host project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.projects.getXpnResources",
"parameterOrder": [
@@ -11176,7 +11176,7 @@
]
},
"listXpnHosts": {
- "description": "Lists all shared VPC host projects visible to the user in an organization.",
+ "description": "Lists all shared VPC host projects visible to the user in an organization. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.listXpnHosts",
"parameterOrder": [
@@ -11227,7 +11227,7 @@
]
},
"moveDisk": {
- "description": "Moves a persistent disk from one zone to another.",
+ "description": "Moves a persistent disk from one zone to another. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.moveDisk",
"parameterOrder": [
@@ -11260,7 +11260,7 @@
]
},
"moveInstance": {
- "description": "Moves an instance and its attached persistent disks from one zone to another.",
+ "description": "Moves an instance and its attached persistent disks from one zone to another. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.moveInstance",
"parameterOrder": [
@@ -11293,7 +11293,7 @@
]
},
"setCommonInstanceMetadata": {
- "description": "Sets metadata common to all instances within the specified project using the data included in the request.",
+ "description": "Sets metadata common to all instances within the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.setCommonInstanceMetadata",
"parameterOrder": [
@@ -11326,7 +11326,7 @@
]
},
"setDefaultNetworkTier": {
- "description": "Sets the default network tier of the project. The default network tier is used when an address/forwardingRule/instance is created without specifying the network tier field.",
+ "description": "Sets the default network tier of the project. The default network tier is used when an address/forwardingRule/instance is created without specifying the network tier field. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.setDefaultNetworkTier",
"parameterOrder": [
@@ -11359,7 +11359,7 @@
]
},
"setUsageExportBucket": {
- "description": "Enables the usage export feature and sets the usage export bucket where reports are stored. If you provide an empty request body using this method, the usage export feature will be disabled.",
+ "description": "Enables the usage export feature and sets the usage export bucket where reports are stored. If you provide an empty request body using this method, the usage export feature will be disabled. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.projects.setUsageExportBucket",
"parameterOrder": [
@@ -11399,7 +11399,7 @@
"regionAutoscalers": {
"methods": {
"delete": {
- "description": "Deletes the specified autoscaler.",
+ "description": "Deletes the specified autoscaler. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionAutoscalers.delete",
"parameterOrder": [
@@ -11445,7 +11445,7 @@
]
},
"get": {
- "description": "Returns the specified autoscaler.",
+ "description": "Returns the specified autoscaler. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionAutoscalers.get",
"parameterOrder": [
@@ -11487,7 +11487,7 @@
]
},
"insert": {
- "description": "Creates an autoscaler in the specified project using the data included in the request.",
+ "description": "Creates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionAutoscalers.insert",
"parameterOrder": [
@@ -11528,7 +11528,7 @@
]
},
"list": {
- "description": "Retrieves a list of autoscalers contained within the specified region.",
+ "description": "Retrieves a list of autoscalers contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionAutoscalers.list",
"parameterOrder": [
@@ -11585,7 +11585,7 @@
]
},
"patch": {
- "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.regionAutoscalers.patch",
"parameterOrder": [
@@ -11632,7 +11632,7 @@
]
},
"update": {
- "description": "Updates an autoscaler in the specified project using the data included in the request.",
+ "description": "Updates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.regionAutoscalers.update",
"parameterOrder": [
@@ -11683,7 +11683,7 @@
"regionBackendServices": {
"methods": {
"delete": {
- "description": "Deletes the specified regional BackendService resource.",
+ "description": "Deletes the specified regional BackendService resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionBackendServices.delete",
"parameterOrder": [
@@ -11729,7 +11729,7 @@
]
},
"get": {
- "description": "Returns the specified regional BackendService resource.",
+ "description": "Returns the specified regional BackendService resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionBackendServices.get",
"parameterOrder": [
@@ -11771,7 +11771,7 @@
]
},
"getHealth": {
- "description": "Gets the most recent health check results for this regional BackendService.",
+ "description": "Gets the most recent health check results for this regional BackendService. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionBackendServices.getHealth",
"parameterOrder": [
@@ -11815,7 +11815,7 @@
]
},
"insert": {
- "description": "Creates a regional BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a regional backend service. Read Restrictions and Guidelines for more information.",
+ "description": "Creates a regional BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a regional backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionBackendServices.insert",
"parameterOrder": [
@@ -11856,7 +11856,7 @@
]
},
"list": {
- "description": "Retrieves the list of regional BackendService resources available to the specified project in the given region.",
+ "description": "Retrieves the list of regional BackendService resources available to the specified project in the given region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionBackendServices.list",
"parameterOrder": [
@@ -11913,7 +11913,7 @@
]
},
"patch": {
- "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.regionBackendServices.patch",
"parameterOrder": [
@@ -11962,7 +11962,7 @@
]
},
"update": {
- "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information.",
+ "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.regionBackendServices.update",
"parameterOrder": [
@@ -12015,7 +12015,7 @@
"regionCommitments": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of commitments.",
+ "description": "Retrieves an aggregated list of commitments. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionCommitments.aggregatedList",
"parameterOrder": [
@@ -12064,7 +12064,7 @@
]
},
"get": {
- "description": "Returns the specified commitment resource. Gets a list of available commitments by making a list() request.",
+ "description": "Returns the specified commitment resource. Gets a list of available commitments by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionCommitments.get",
"parameterOrder": [
@@ -12106,7 +12106,7 @@
]
},
"insert": {
- "description": "Creates a commitment in the specified project using the data included in the request.",
+ "description": "Creates a commitment in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionCommitments.insert",
"parameterOrder": [
@@ -12147,7 +12147,7 @@
]
},
"list": {
- "description": "Retrieves a list of commitments contained within the specified region.",
+ "description": "Retrieves a list of commitments contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionCommitments.list",
"parameterOrder": [
@@ -12208,7 +12208,7 @@
"regionDiskTypes": {
"methods": {
"get": {
- "description": "Returns the specified regional disk type. Gets a list of available disk types by making a list() request.",
+ "description": "Returns the specified regional disk type. Gets a list of available disk types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionDiskTypes.get",
"parameterOrder": [
@@ -12250,7 +12250,7 @@
]
},
"list": {
- "description": "Retrieves a list of regional disk types available to the specified project.",
+ "description": "Retrieves a list of regional disk types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionDiskTypes.list",
"parameterOrder": [
@@ -12311,7 +12311,7 @@
"regionDisks": {
"methods": {
"addResourcePolicies": {
- "description": "Adds existing resource policies to a regional disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation.",
+ "description": "Adds existing resource policies to a regional disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.addResourcePolicies",
"parameterOrder": [
@@ -12360,7 +12360,7 @@
]
},
"createSnapshot": {
- "description": "Creates a snapshot of this regional disk.",
+ "description": "Creates a snapshot of this regional disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.createSnapshot",
"parameterOrder": [
@@ -12409,7 +12409,7 @@
]
},
"delete": {
- "description": "Deletes the specified regional persistent disk. Deleting a regional disk removes all the replicas of its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots.",
+ "description": "Deletes the specified regional persistent disk. Deleting a regional disk removes all the replicas of its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionDisks.delete",
"parameterOrder": [
@@ -12454,7 +12454,7 @@
]
},
"get": {
- "description": "Returns a specified regional persistent disk.",
+ "description": "Returns a specified regional persistent disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionDisks.get",
"parameterOrder": [
@@ -12496,7 +12496,7 @@
]
},
"insert": {
- "description": "Creates a persistent regional disk in the specified project using the data included in the request.",
+ "description": "Creates a persistent regional disk in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.insert",
"parameterOrder": [
@@ -12542,7 +12542,7 @@
]
},
"list": {
- "description": "Retrieves the list of persistent disks contained within the specified region.",
+ "description": "Retrieves the list of persistent disks contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionDisks.list",
"parameterOrder": [
@@ -12599,7 +12599,7 @@
]
},
"removeResourcePolicies": {
- "description": "Removes resource policies from a regional disk.",
+ "description": "Removes resource policies from a regional disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.removeResourcePolicies",
"parameterOrder": [
@@ -12648,7 +12648,7 @@
]
},
"resize": {
- "description": "Resizes the specified regional persistent disk.",
+ "description": "Resizes the specified regional persistent disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.resize",
"parameterOrder": [
@@ -12697,7 +12697,7 @@
]
},
"setLabels": {
- "description": "Sets the labels on the target regional disk.",
+ "description": "Sets the labels on the target regional disk. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.setLabels",
"parameterOrder": [
@@ -12746,7 +12746,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionDisks.testIamPermissions",
"parameterOrder": [
@@ -12795,7 +12795,7 @@
"regionHealthChecks": {
"methods": {
"delete": {
- "description": "Deletes the specified HealthCheck resource.",
+ "description": "Deletes the specified HealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionHealthChecks.delete",
"parameterOrder": [
@@ -12841,7 +12841,7 @@
]
},
"get": {
- "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request.",
+ "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionHealthChecks.get",
"parameterOrder": [
@@ -12883,7 +12883,7 @@
]
},
"insert": {
- "description": "Creates a HealthCheck resource in the specified project using the data included in the request.",
+ "description": "Creates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionHealthChecks.insert",
"parameterOrder": [
@@ -12924,7 +12924,7 @@
]
},
"list": {
- "description": "Retrieves the list of HealthCheck resources available to the specified project.",
+ "description": "Retrieves the list of HealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionHealthChecks.list",
"parameterOrder": [
@@ -12981,7 +12981,7 @@
]
},
"patch": {
- "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.regionHealthChecks.patch",
"parameterOrder": [
@@ -13030,7 +13030,7 @@
]
},
"update": {
- "description": "Updates a HealthCheck resource in the specified project using the data included in the request.",
+ "description": "Updates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.regionHealthChecks.update",
"parameterOrder": [
@@ -13083,7 +13083,7 @@
"regionInstanceGroupManagers": {
"methods": {
"abandonInstances": {
- "description": "Flags the specified instances to be immediately removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ "description": "Flags the specified instances to be immediately removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.abandonInstances",
"parameterOrder": [
@@ -13130,7 +13130,7 @@
]
},
"delete": {
- "description": "Deletes the specified managed instance group and all of the instances in that group.",
+ "description": "Deletes the specified managed instance group and all of the instances in that group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionInstanceGroupManagers.delete",
"parameterOrder": [
@@ -13174,7 +13174,7 @@
]
},
"deleteInstances": {
- "description": "Flags the specified instances in the managed instance group to be immediately deleted. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. The deleteInstances operation is marked DONE if the deleteInstances request is successful. The underlying actions take additional time. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ "description": "Flags the specified instances in the managed instance group to be immediately deleted. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. The deleteInstances operation is marked DONE if the deleteInstances request is successful. The underlying actions take additional time. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.deleteInstances",
"parameterOrder": [
@@ -13221,7 +13221,7 @@
]
},
"get": {
- "description": "Returns all of the details about the specified managed instance group.",
+ "description": "Returns all of the details about the specified managed instance group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionInstanceGroupManagers.get",
"parameterOrder": [
@@ -13261,7 +13261,7 @@
]
},
"insert": {
- "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA regional managed instance group can contain up to 2000 instances.",
+ "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA regional managed instance group can contain up to 2000 instances. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.insert",
"parameterOrder": [
@@ -13301,7 +13301,7 @@
]
},
"list": {
- "description": "Retrieves the list of managed instance groups that are contained within the specified region.",
+ "description": "Retrieves the list of managed instance groups that are contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionInstanceGroupManagers.list",
"parameterOrder": [
@@ -13357,7 +13357,7 @@
]
},
"listManagedInstances": {
- "description": "Lists the instances in the managed instance group and instances that are scheduled to be created. The list includes any current actions that the group has scheduled for its instances.",
+ "description": "Lists the instances in the managed instance group and instances that are scheduled to be created. The list includes any current actions that the group has scheduled for its instances. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.listManagedInstances",
"parameterOrder": [
@@ -13420,7 +13420,7 @@
]
},
"patch": {
- "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listmanagedinstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listmanagedinstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.regionInstanceGroupManagers.patch",
"parameterOrder": [
@@ -13467,7 +13467,7 @@
]
},
"recreateInstances": {
- "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.recreateInstances",
"parameterOrder": [
@@ -13514,7 +13514,7 @@
]
},
"resize": {
- "description": "Changes the intended size of the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes one or more instances.\n\nThe resize operation is marked DONE if the resize request is successful. The underlying actions take additional time. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.",
+ "description": "Changes the intended size of the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes one or more instances.\n\nThe resize operation is marked DONE if the resize request is successful. The underlying actions take additional time. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.resize",
"parameterOrder": [
@@ -13567,7 +13567,7 @@
]
},
"setInstanceTemplate": {
- "description": "Sets the instance template to use when creating new instances or recreating instances in this group. Existing instances are not affected.",
+ "description": "Sets the instance template to use when creating new instances or recreating instances in this group. Existing instances are not affected. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.setInstanceTemplate",
"parameterOrder": [
@@ -13614,7 +13614,7 @@
]
},
"setTargetPools": {
- "description": "Modifies the target pools to which all new instances in this group are assigned. Existing instances in the group are not affected.",
+ "description": "Modifies the target pools to which all new instances in this group are assigned. Existing instances in the group are not affected. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroupManagers.setTargetPools",
"parameterOrder": [
@@ -13665,7 +13665,7 @@
"regionInstanceGroups": {
"methods": {
"get": {
- "description": "Returns the specified instance group resource.",
+ "description": "Returns the specified instance group resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionInstanceGroups.get",
"parameterOrder": [
@@ -13705,7 +13705,7 @@
]
},
"list": {
- "description": "Retrieves the list of instance group resources contained within the specified region.",
+ "description": "Retrieves the list of instance group resources contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionInstanceGroups.list",
"parameterOrder": [
@@ -13761,7 +13761,7 @@
]
},
"listInstances": {
- "description": "Lists the instances in the specified instance group and displays information about the named ports. Depending on the specified options, this method can list all instances or only the instances that are running.",
+ "description": "Lists the instances in the specified instance group and displays information about the named ports. Depending on the specified options, this method can list all instances or only the instances that are running. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroups.listInstances",
"parameterOrder": [
@@ -13827,7 +13827,7 @@
]
},
"setNamedPorts": {
- "description": "Sets the named ports for the specified regional instance group.",
+ "description": "Sets the named ports for the specified regional instance group. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionInstanceGroups.setNamedPorts",
"parameterOrder": [
@@ -13878,7 +13878,7 @@
"regionOperations": {
"methods": {
"delete": {
- "description": "Deletes the specified region-specific Operations resource.",
+ "description": "Deletes the specified region-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionOperations.delete",
"parameterOrder": [
@@ -13916,7 +13916,7 @@
]
},
"get": {
- "description": "Retrieves the specified region-specific Operations resource.",
+ "description": "Retrieves the specified region-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionOperations.get",
"parameterOrder": [
@@ -13958,7 +13958,7 @@
]
},
"list": {
- "description": "Retrieves a list of Operation resources contained within the specified region.",
+ "description": "Retrieves a list of Operation resources contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionOperations.list",
"parameterOrder": [
@@ -14019,7 +14019,7 @@
"regionSslCertificates": {
"methods": {
"delete": {
- "description": "Deletes the specified SslCertificate resource in the region.",
+ "description": "Deletes the specified SslCertificate resource in the region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionSslCertificates.delete",
"parameterOrder": [
@@ -14065,7 +14065,7 @@
]
},
"get": {
- "description": "Returns the specified SslCertificate resource in the specified region. Get a list of available SSL certificates by making a list() request.",
+ "description": "Returns the specified SslCertificate resource in the specified region. Get a list of available SSL certificates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionSslCertificates.get",
"parameterOrder": [
@@ -14107,7 +14107,7 @@
]
},
"insert": {
- "description": "Creates a SslCertificate resource in the specified project and region using the data included in the request",
+ "description": "Creates a SslCertificate resource in the specified project and region using the data included in the request (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionSslCertificates.insert",
"parameterOrder": [
@@ -14148,7 +14148,7 @@
]
},
"list": {
- "description": "Retrieves the list of SslCertificate resources available to the specified project in the specified region.",
+ "description": "Retrieves the list of SslCertificate resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionSslCertificates.list",
"parameterOrder": [
@@ -14209,7 +14209,7 @@
"regionTargetHttpProxies": {
"methods": {
"delete": {
- "description": "Deletes the specified TargetHttpProxy resource.",
+ "description": "Deletes the specified TargetHttpProxy resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionTargetHttpProxies.delete",
"parameterOrder": [
@@ -14255,7 +14255,7 @@
]
},
"get": {
- "description": "Returns the specified TargetHttpProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request.",
+ "description": "Returns the specified TargetHttpProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionTargetHttpProxies.get",
"parameterOrder": [
@@ -14297,7 +14297,7 @@
]
},
"insert": {
- "description": "Creates a TargetHttpProxy resource in the specified project and region using the data included in the request.",
+ "description": "Creates a TargetHttpProxy resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionTargetHttpProxies.insert",
"parameterOrder": [
@@ -14338,7 +14338,7 @@
]
},
"list": {
- "description": "Retrieves the list of TargetHttpProxy resources available to the specified project in the specified region.",
+ "description": "Retrieves the list of TargetHttpProxy resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionTargetHttpProxies.list",
"parameterOrder": [
@@ -14395,7 +14395,7 @@
]
},
"setUrlMap": {
- "description": "Changes the URL map for TargetHttpProxy.",
+ "description": "Changes the URL map for TargetHttpProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionTargetHttpProxies.setUrlMap",
"parameterOrder": [
@@ -14448,7 +14448,7 @@
"regionTargetHttpsProxies": {
"methods": {
"delete": {
- "description": "Deletes the specified TargetHttpsProxy resource.",
+ "description": "Deletes the specified TargetHttpsProxy resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionTargetHttpsProxies.delete",
"parameterOrder": [
@@ -14494,7 +14494,7 @@
]
},
"get": {
- "description": "Returns the specified TargetHttpsProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request.",
+ "description": "Returns the specified TargetHttpsProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionTargetHttpsProxies.get",
"parameterOrder": [
@@ -14536,7 +14536,7 @@
]
},
"insert": {
- "description": "Creates a TargetHttpsProxy resource in the specified project and region using the data included in the request.",
+ "description": "Creates a TargetHttpsProxy resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionTargetHttpsProxies.insert",
"parameterOrder": [
@@ -14577,7 +14577,7 @@
]
},
"list": {
- "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project in the specified region.",
+ "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionTargetHttpsProxies.list",
"parameterOrder": [
@@ -14634,7 +14634,7 @@
]
},
"setSslCertificates": {
- "description": "Replaces SslCertificates for TargetHttpsProxy.",
+ "description": "Replaces SslCertificates for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionTargetHttpsProxies.setSslCertificates",
"parameterOrder": [
@@ -14683,7 +14683,7 @@
]
},
"setUrlMap": {
- "description": "Changes the URL map for TargetHttpsProxy.",
+ "description": "Changes the URL map for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionTargetHttpsProxies.setUrlMap",
"parameterOrder": [
@@ -14736,7 +14736,7 @@
"regionUrlMaps": {
"methods": {
"delete": {
- "description": "Deletes the specified UrlMap resource.",
+ "description": "Deletes the specified UrlMap resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.regionUrlMaps.delete",
"parameterOrder": [
@@ -14782,7 +14782,7 @@
]
},
"get": {
- "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request.",
+ "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionUrlMaps.get",
"parameterOrder": [
@@ -14824,7 +14824,7 @@
]
},
"insert": {
- "description": "Creates a UrlMap resource in the specified project using the data included in the request.",
+ "description": "Creates a UrlMap resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionUrlMaps.insert",
"parameterOrder": [
@@ -14865,7 +14865,7 @@
]
},
"list": {
- "description": "Retrieves the list of UrlMap resources available to the specified project in the specified region.",
+ "description": "Retrieves the list of UrlMap resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regionUrlMaps.list",
"parameterOrder": [
@@ -14922,7 +14922,7 @@
]
},
"patch": {
- "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules.",
+ "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.regionUrlMaps.patch",
"parameterOrder": [
@@ -14971,7 +14971,7 @@
]
},
"update": {
- "description": "Updates the specified UrlMap resource with the data included in the request.",
+ "description": "Updates the specified UrlMap resource with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.regionUrlMaps.update",
"parameterOrder": [
@@ -15020,7 +15020,7 @@
]
},
"validate": {
- "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap.",
+ "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.regionUrlMaps.validate",
"parameterOrder": [
@@ -15068,7 +15068,7 @@
"regions": {
"methods": {
"get": {
- "description": "Returns the specified Region resource. Gets a list of available regions by making a list() request.",
+ "description": "Returns the specified Region resource. Gets a list of available regions by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regions.get",
"parameterOrder": [
@@ -15102,7 +15102,7 @@
]
},
"list": {
- "description": "Retrieves the list of region resources available to the specified project.",
+ "description": "Retrieves the list of region resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.regions.list",
"parameterOrder": [
@@ -15155,7 +15155,7 @@
"reservations": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of reservations.",
+ "description": "Retrieves an aggregated list of reservations. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.reservations.aggregatedList",
"parameterOrder": [
@@ -15204,7 +15204,7 @@
]
},
"delete": {
- "description": "Deletes the specified reservation.",
+ "description": "Deletes the specified reservation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.reservations.delete",
"parameterOrder": [
@@ -15250,7 +15250,7 @@
]
},
"get": {
- "description": "Retrieves information about the specified reservation.",
+ "description": "Retrieves information about the specified reservation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.reservations.get",
"parameterOrder": [
@@ -15292,7 +15292,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.reservations.getIamPolicy",
"parameterOrder": [
@@ -15334,7 +15334,7 @@
]
},
"insert": {
- "description": "Creates a new reservation. For more information, read Reserving zonal resources.",
+ "description": "Creates a new reservation. For more information, read Reserving zonal resources. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.reservations.insert",
"parameterOrder": [
@@ -15375,7 +15375,7 @@
]
},
"list": {
- "description": "A list of all the reservations that have been configured for the specified project in specified zone.",
+ "description": "A list of all the reservations that have been configured for the specified project in specified zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.reservations.list",
"parameterOrder": [
@@ -15432,7 +15432,7 @@
]
},
"resize": {
- "description": "Resizes the reservation (applicable to standalone reservations only). For more information, read Modifying reservations.",
+ "description": "Resizes the reservation (applicable to standalone reservations only). For more information, read Modifying reservations. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.reservations.resize",
"parameterOrder": [
@@ -15481,7 +15481,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.reservations.setIamPolicy",
"parameterOrder": [
@@ -15525,7 +15525,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.reservations.testIamPermissions",
"parameterOrder": [
@@ -15574,7 +15574,7 @@
"resourcePolicies": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of resource policies.",
+ "description": "Retrieves an aggregated list of resource policies. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.resourcePolicies.aggregatedList",
"parameterOrder": [
@@ -15623,7 +15623,7 @@
]
},
"delete": {
- "description": "Deletes the specified resource policy.",
+ "description": "Deletes the specified resource policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.resourcePolicies.delete",
"parameterOrder": [
@@ -15669,7 +15669,7 @@
]
},
"get": {
- "description": "Retrieves all information of the specified resource policy.",
+ "description": "Retrieves all information of the specified resource policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.resourcePolicies.get",
"parameterOrder": [
@@ -15711,7 +15711,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.resourcePolicies.getIamPolicy",
"parameterOrder": [
@@ -15753,7 +15753,7 @@
]
},
"insert": {
- "description": "Creates a new resource policy.",
+ "description": "Creates a new resource policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.resourcePolicies.insert",
"parameterOrder": [
@@ -15794,7 +15794,7 @@
]
},
"list": {
- "description": "A list all the resource policies that have been configured for the specified project in specified region.",
+ "description": "A list all the resource policies that have been configured for the specified project in specified region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.resourcePolicies.list",
"parameterOrder": [
@@ -15851,7 +15851,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.resourcePolicies.setIamPolicy",
"parameterOrder": [
@@ -15895,7 +15895,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.resourcePolicies.testIamPermissions",
"parameterOrder": [
@@ -15944,7 +15944,7 @@
"routers": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of routers.",
+ "description": "Retrieves an aggregated list of routers. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routers.aggregatedList",
"parameterOrder": [
@@ -15993,7 +15993,7 @@
]
},
"delete": {
- "description": "Deletes the specified Router resource.",
+ "description": "Deletes the specified Router resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.routers.delete",
"parameterOrder": [
@@ -16039,7 +16039,7 @@
]
},
"get": {
- "description": "Returns the specified Router resource. Gets a list of available routers by making a list() request.",
+ "description": "Returns the specified Router resource. Gets a list of available routers by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routers.get",
"parameterOrder": [
@@ -16081,7 +16081,7 @@
]
},
"getNatMappingInfo": {
- "description": "Retrieves runtime Nat mapping information of VM endpoints.",
+ "description": "Retrieves runtime Nat mapping information of VM endpoints. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routers.getNatMappingInfo",
"parameterOrder": [
@@ -16146,7 +16146,7 @@
]
},
"getRouterStatus": {
- "description": "Retrieves runtime information of the specified router.",
+ "description": "Retrieves runtime information of the specified router. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routers.getRouterStatus",
"parameterOrder": [
@@ -16188,7 +16188,7 @@
]
},
"insert": {
- "description": "Creates a Router resource in the specified project and region using the data included in the request.",
+ "description": "Creates a Router resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.routers.insert",
"parameterOrder": [
@@ -16229,7 +16229,7 @@
]
},
"list": {
- "description": "Retrieves a list of Router resources available to the specified project.",
+ "description": "Retrieves a list of Router resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routers.list",
"parameterOrder": [
@@ -16286,7 +16286,7 @@
]
},
"patch": {
- "description": "Patches the specified Router resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules.",
+ "description": "Patches the specified Router resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.routers.patch",
"parameterOrder": [
@@ -16335,7 +16335,7 @@
]
},
"preview": {
- "description": "Preview fields auto-generated during router create and update operations. Calling this method does NOT create or update the router.",
+ "description": "Preview fields auto-generated during router create and update operations. Calling this method does NOT create or update the router. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.routers.preview",
"parameterOrder": [
@@ -16380,7 +16380,7 @@
]
},
"update": {
- "description": "Updates the specified Router resource with the data included in the request. This method conforms to PUT semantics, which requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.",
+ "description": "Updates the specified Router resource with the data included in the request. This method conforms to PUT semantics, which requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.routers.update",
"parameterOrder": [
@@ -16433,7 +16433,7 @@
"routes": {
"methods": {
"delete": {
- "description": "Deletes the specified Route resource.",
+ "description": "Deletes the specified Route resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.routes.delete",
"parameterOrder": [
@@ -16471,7 +16471,7 @@
]
},
"get": {
- "description": "Returns the specified Route resource. Gets a list of available routes by making a list() request.",
+ "description": "Returns the specified Route resource. Gets a list of available routes by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routes.get",
"parameterOrder": [
@@ -16505,7 +16505,7 @@
]
},
"insert": {
- "description": "Creates a Route resource in the specified project using the data included in the request.",
+ "description": "Creates a Route resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.routes.insert",
"parameterOrder": [
@@ -16538,7 +16538,7 @@
]
},
"list": {
- "description": "Retrieves the list of Route resources available to the specified project.",
+ "description": "Retrieves the list of Route resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.routes.list",
"parameterOrder": [
@@ -16591,7 +16591,7 @@
"securityPolicies": {
"methods": {
"addRule": {
- "description": "Inserts a rule into a security policy.",
+ "description": "Inserts a rule into a security policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.securityPolicies.addRule",
"parameterOrder": [
@@ -16627,7 +16627,7 @@
]
},
"delete": {
- "description": "Deletes the specified policy.",
+ "description": "Deletes the specified policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.securityPolicies.delete",
"parameterOrder": [
@@ -16665,7 +16665,7 @@
]
},
"get": {
- "description": "List all of the ordered rules present in a single specified policy.",
+ "description": "List all of the ordered rules present in a single specified policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.securityPolicies.get",
"parameterOrder": [
@@ -16699,7 +16699,7 @@
]
},
"getRule": {
- "description": "Gets a rule at the specified priority.",
+ "description": "Gets a rule at the specified priority. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.securityPolicies.getRule",
"parameterOrder": [
@@ -16739,7 +16739,7 @@
]
},
"insert": {
- "description": "Creates a new policy in the specified project using the data included in the request.",
+ "description": "Creates a new policy in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.securityPolicies.insert",
"parameterOrder": [
@@ -16772,7 +16772,7 @@
]
},
"list": {
- "description": "List all the policies that have been configured for the specified project.",
+ "description": "List all the policies that have been configured for the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.securityPolicies.list",
"parameterOrder": [
@@ -16821,7 +16821,7 @@
]
},
"patch": {
- "description": "Patches the specified policy with the data included in the request.",
+ "description": "Patches the specified policy with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.securityPolicies.patch",
"parameterOrder": [
@@ -16862,7 +16862,7 @@
]
},
"patchRule": {
- "description": "Patches a rule at the specified priority.",
+ "description": "Patches a rule at the specified priority. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.securityPolicies.patchRule",
"parameterOrder": [
@@ -16904,7 +16904,7 @@
]
},
"removeRule": {
- "description": "Deletes a rule at the specified priority.",
+ "description": "Deletes a rule at the specified priority. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.securityPolicies.removeRule",
"parameterOrder": [
@@ -16947,7 +16947,7 @@
"snapshots": {
"methods": {
"delete": {
- "description": "Deletes the specified Snapshot resource. Keep in mind that deleting a single snapshot might not necessarily delete all the data on that snapshot. If any data on the snapshot that is marked for deletion is needed for subsequent snapshots, the data will be moved to the next corresponding snapshot.\n\nFor more information, see Deleting snapshots.",
+ "description": "Deletes the specified Snapshot resource. Keep in mind that deleting a single snapshot might not necessarily delete all the data on that snapshot. If any data on the snapshot that is marked for deletion is needed for subsequent snapshots, the data will be moved to the next corresponding snapshot.\n\nFor more information, see Deleting snapshots. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.snapshots.delete",
"parameterOrder": [
@@ -16985,7 +16985,7 @@
]
},
"get": {
- "description": "Returns the specified Snapshot resource. Gets a list of available snapshots by making a list() request.",
+ "description": "Returns the specified Snapshot resource. Gets a list of available snapshots by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.snapshots.get",
"parameterOrder": [
@@ -17019,7 +17019,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.snapshots.getIamPolicy",
"parameterOrder": [
@@ -17053,7 +17053,7 @@
]
},
"list": {
- "description": "Retrieves the list of Snapshot resources contained within the specified project.",
+ "description": "Retrieves the list of Snapshot resources contained within the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.snapshots.list",
"parameterOrder": [
@@ -17102,7 +17102,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.snapshots.setIamPolicy",
"parameterOrder": [
@@ -17138,7 +17138,7 @@
]
},
"setLabels": {
- "description": "Sets the labels on a snapshot. To learn more about labels, read the Labeling Resources documentation.",
+ "description": "Sets the labels on a snapshot. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.snapshots.setLabels",
"parameterOrder": [
@@ -17174,7 +17174,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.snapshots.testIamPermissions",
"parameterOrder": [
@@ -17215,7 +17215,7 @@
"sslCertificates": {
"methods": {
"aggregatedList": {
- "description": "Retrieves the list of all SslCertificate resources, regional and global, available to the specified project.",
+ "description": "Retrieves the list of all SslCertificate resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.sslCertificates.aggregatedList",
"parameterOrder": [
@@ -17264,7 +17264,7 @@
]
},
"delete": {
- "description": "Deletes the specified SslCertificate resource.",
+ "description": "Deletes the specified SslCertificate resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.sslCertificates.delete",
"parameterOrder": [
@@ -17302,7 +17302,7 @@
]
},
"get": {
- "description": "Returns the specified SslCertificate resource. Gets a list of available SSL certificates by making a list() request.",
+ "description": "Returns the specified SslCertificate resource. Gets a list of available SSL certificates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.sslCertificates.get",
"parameterOrder": [
@@ -17336,7 +17336,7 @@
]
},
"insert": {
- "description": "Creates a SslCertificate resource in the specified project using the data included in the request.",
+ "description": "Creates a SslCertificate resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.sslCertificates.insert",
"parameterOrder": [
@@ -17369,7 +17369,7 @@
]
},
"list": {
- "description": "Retrieves the list of SslCertificate resources available to the specified project.",
+ "description": "Retrieves the list of SslCertificate resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.sslCertificates.list",
"parameterOrder": [
@@ -17422,7 +17422,7 @@
"sslPolicies": {
"methods": {
"delete": {
- "description": "Deletes the specified SSL policy. The SSL policy resource can be deleted only if it is not in use by any TargetHttpsProxy or TargetSslProxy resources.",
+ "description": "Deletes the specified SSL policy. The SSL policy resource can be deleted only if it is not in use by any TargetHttpsProxy or TargetSslProxy resources. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.sslPolicies.delete",
"parameterOrder": [
@@ -17459,7 +17459,7 @@
]
},
"get": {
- "description": "Lists all of the ordered rules present in a single specified policy.",
+ "description": "Lists all of the ordered rules present in a single specified policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.sslPolicies.get",
"parameterOrder": [
@@ -17492,7 +17492,7 @@
]
},
"insert": {
- "description": "Returns the specified SSL policy resource. Gets a list of available SSL policies by making a list() request.",
+ "description": "Returns the specified SSL policy resource. Gets a list of available SSL policies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.sslPolicies.insert",
"parameterOrder": [
@@ -17525,7 +17525,7 @@
]
},
"list": {
- "description": "Lists all the SSL policies that have been configured for the specified project.",
+ "description": "Lists all the SSL policies that have been configured for the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.sslPolicies.list",
"parameterOrder": [
@@ -17574,7 +17574,7 @@
]
},
"listAvailableFeatures": {
- "description": "Lists all features that can be specified in the SSL policy when using custom profile.",
+ "description": "Lists all features that can be specified in the SSL policy when using custom profile. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.sslPolicies.listAvailableFeatures",
"parameterOrder": [
@@ -17623,7 +17623,7 @@
]
},
"patch": {
- "description": "Patches the specified SSL policy with the data included in the request.",
+ "description": "Patches the specified SSL policy with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.sslPolicies.patch",
"parameterOrder": [
@@ -17667,7 +17667,7 @@
"subnetworks": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of subnetworks.",
+ "description": "Retrieves an aggregated list of subnetworks. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.subnetworks.aggregatedList",
"parameterOrder": [
@@ -17716,7 +17716,7 @@
]
},
"delete": {
- "description": "Deletes the specified subnetwork.",
+ "description": "Deletes the specified subnetwork. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.subnetworks.delete",
"parameterOrder": [
@@ -17762,7 +17762,7 @@
]
},
"expandIpCidrRange": {
- "description": "Expands the IP CIDR range of the subnetwork to a specified value.",
+ "description": "Expands the IP CIDR range of the subnetwork to a specified value. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.subnetworks.expandIpCidrRange",
"parameterOrder": [
@@ -17811,7 +17811,7 @@
]
},
"get": {
- "description": "Returns the specified subnetwork. Gets a list of available subnetworks list() request.",
+ "description": "Returns the specified subnetwork. Gets a list of available subnetworks list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.subnetworks.get",
"parameterOrder": [
@@ -17853,7 +17853,7 @@
]
},
"getIamPolicy": {
- "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.subnetworks.getIamPolicy",
"parameterOrder": [
@@ -17895,7 +17895,7 @@
]
},
"insert": {
- "description": "Creates a subnetwork in the specified project using the data included in the request.",
+ "description": "Creates a subnetwork in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.subnetworks.insert",
"parameterOrder": [
@@ -17936,7 +17936,7 @@
]
},
"list": {
- "description": "Retrieves a list of subnetworks available to the specified project.",
+ "description": "Retrieves a list of subnetworks available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.subnetworks.list",
"parameterOrder": [
@@ -17993,7 +17993,7 @@
]
},
"listUsable": {
- "description": "Retrieves an aggregated list of all usable subnetworks in the project. The list contains all of the subnetworks in the project and the subnetworks that were shared by a Shared VPC host project.",
+ "description": "Retrieves an aggregated list of all usable subnetworks in the project. The list contains all of the subnetworks in the project and the subnetworks that were shared by a Shared VPC host project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.subnetworks.listUsable",
"parameterOrder": [
@@ -18042,7 +18042,7 @@
]
},
"patch": {
- "description": "Patches the specified subnetwork with the data included in the request. Only certain fields can up updated with a patch request as indicated in the field descriptions. You must specify the current fingeprint of the subnetwork resource being patched.",
+ "description": "Patches the specified subnetwork with the data included in the request. Only certain fields can up updated with a patch request as indicated in the field descriptions. You must specify the current fingeprint of the subnetwork resource being patched. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.subnetworks.patch",
"parameterOrder": [
@@ -18051,6 +18051,12 @@
"subnetwork"
],
"parameters": {
+ "drainTimeoutSeconds": {
+ "description": "The drain timeout specifies the upper bound in seconds on the amount of time allowed to drain connections from the current ACTIVE subnetwork to the current BACKUP subnetwork. The drain timeout is only applicable when the following conditions are true: - the subnetwork being patched has purpose = INTERNAL_HTTPS_LOAD_BALANCER - the subnetwork being patched has role = BACKUP - the patch request is setting the role to ACTIVE. Note that after this patch operation the roles of the ACTIVE and BACKUP subnetworks will be swapped.",
+ "format": "int32",
+ "location": "query",
+ "type": "integer"
+ },
"project": {
"description": "Project ID for this request.",
"location": "path",
@@ -18091,7 +18097,7 @@
]
},
"setIamPolicy": {
- "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.subnetworks.setIamPolicy",
"parameterOrder": [
@@ -18135,7 +18141,7 @@
]
},
"setPrivateIpGoogleAccess": {
- "description": "Set whether VMs in this subnet can access Google services without assigning external IP addresses through Private Google Access.",
+ "description": "Set whether VMs in this subnet can access Google services without assigning external IP addresses through Private Google Access. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.subnetworks.setPrivateIpGoogleAccess",
"parameterOrder": [
@@ -18184,7 +18190,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.subnetworks.testIamPermissions",
"parameterOrder": [
@@ -18233,7 +18239,7 @@
"targetHttpProxies": {
"methods": {
"aggregatedList": {
- "description": "Retrieves the list of all TargetHttpProxy resources, regional and global, available to the specified project.",
+ "description": "Retrieves the list of all TargetHttpProxy resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetHttpProxies.aggregatedList",
"parameterOrder": [
@@ -18282,7 +18288,7 @@
]
},
"delete": {
- "description": "Deletes the specified TargetHttpProxy resource.",
+ "description": "Deletes the specified TargetHttpProxy resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetHttpProxies.delete",
"parameterOrder": [
@@ -18320,7 +18326,7 @@
]
},
"get": {
- "description": "Returns the specified TargetHttpProxy resource. Gets a list of available target HTTP proxies by making a list() request.",
+ "description": "Returns the specified TargetHttpProxy resource. Gets a list of available target HTTP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetHttpProxies.get",
"parameterOrder": [
@@ -18354,7 +18360,7 @@
]
},
"insert": {
- "description": "Creates a TargetHttpProxy resource in the specified project using the data included in the request.",
+ "description": "Creates a TargetHttpProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpProxies.insert",
"parameterOrder": [
@@ -18387,7 +18393,7 @@
]
},
"list": {
- "description": "Retrieves the list of TargetHttpProxy resources available to the specified project.",
+ "description": "Retrieves the list of TargetHttpProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetHttpProxies.list",
"parameterOrder": [
@@ -18436,7 +18442,7 @@
]
},
"setUrlMap": {
- "description": "Changes the URL map for TargetHttpProxy.",
+ "description": "Changes the URL map for TargetHttpProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpProxies.setUrlMap",
"parameterOrder": [
@@ -18481,7 +18487,7 @@
"targetHttpsProxies": {
"methods": {
"aggregatedList": {
- "description": "Retrieves the list of all TargetHttpsProxy resources, regional and global, available to the specified project.",
+ "description": "Retrieves the list of all TargetHttpsProxy resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetHttpsProxies.aggregatedList",
"parameterOrder": [
@@ -18530,7 +18536,7 @@
]
},
"delete": {
- "description": "Deletes the specified TargetHttpsProxy resource.",
+ "description": "Deletes the specified TargetHttpsProxy resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetHttpsProxies.delete",
"parameterOrder": [
@@ -18568,7 +18574,7 @@
]
},
"get": {
- "description": "Returns the specified TargetHttpsProxy resource. Gets a list of available target HTTPS proxies by making a list() request.",
+ "description": "Returns the specified TargetHttpsProxy resource. Gets a list of available target HTTPS proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetHttpsProxies.get",
"parameterOrder": [
@@ -18602,7 +18608,7 @@
]
},
"insert": {
- "description": "Creates a TargetHttpsProxy resource in the specified project using the data included in the request.",
+ "description": "Creates a TargetHttpsProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpsProxies.insert",
"parameterOrder": [
@@ -18635,7 +18641,7 @@
]
},
"list": {
- "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project.",
+ "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetHttpsProxies.list",
"parameterOrder": [
@@ -18684,7 +18690,7 @@
]
},
"setQuicOverride": {
- "description": "Sets the QUIC override policy for TargetHttpsProxy.",
+ "description": "Sets the QUIC override policy for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpsProxies.setQuicOverride",
"parameterOrder": [
@@ -18724,7 +18730,7 @@
]
},
"setSslCertificates": {
- "description": "Replaces SslCertificates for TargetHttpsProxy.",
+ "description": "Replaces SslCertificates for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpsProxies.setSslCertificates",
"parameterOrder": [
@@ -18765,7 +18771,7 @@
]
},
"setSslPolicy": {
- "description": "Sets the SSL policy for TargetHttpsProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the HTTPS proxy load balancer. They do not affect the connection between the load balancer and the backends.",
+ "description": "Sets the SSL policy for TargetHttpsProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the HTTPS proxy load balancer. They do not affect the connection between the load balancer and the backends. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpsProxies.setSslPolicy",
"parameterOrder": [
@@ -18805,7 +18811,7 @@
]
},
"setUrlMap": {
- "description": "Changes the URL map for TargetHttpsProxy.",
+ "description": "Changes the URL map for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetHttpsProxies.setUrlMap",
"parameterOrder": [
@@ -18850,7 +18856,7 @@
"targetInstances": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of target instances.",
+ "description": "Retrieves an aggregated list of target instances. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetInstances.aggregatedList",
"parameterOrder": [
@@ -18899,7 +18905,7 @@
]
},
"delete": {
- "description": "Deletes the specified TargetInstance resource.",
+ "description": "Deletes the specified TargetInstance resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetInstances.delete",
"parameterOrder": [
@@ -18945,7 +18951,7 @@
]
},
"get": {
- "description": "Returns the specified TargetInstance resource. Gets a list of available target instances by making a list() request.",
+ "description": "Returns the specified TargetInstance resource. Gets a list of available target instances by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetInstances.get",
"parameterOrder": [
@@ -18987,7 +18993,7 @@
]
},
"insert": {
- "description": "Creates a TargetInstance resource in the specified project and zone using the data included in the request.",
+ "description": "Creates a TargetInstance resource in the specified project and zone using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetInstances.insert",
"parameterOrder": [
@@ -19028,7 +19034,7 @@
]
},
"list": {
- "description": "Retrieves a list of TargetInstance resources available to the specified project and zone.",
+ "description": "Retrieves a list of TargetInstance resources available to the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetInstances.list",
"parameterOrder": [
@@ -19089,7 +19095,7 @@
"targetPools": {
"methods": {
"addHealthCheck": {
- "description": "Adds health check URLs to a target pool.",
+ "description": "Adds health check URLs to a target pool. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.addHealthCheck",
"parameterOrder": [
@@ -19138,7 +19144,7 @@
]
},
"addInstance": {
- "description": "Adds an instance to a target pool.",
+ "description": "Adds an instance to a target pool. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.addInstance",
"parameterOrder": [
@@ -19187,7 +19193,7 @@
]
},
"aggregatedList": {
- "description": "Retrieves an aggregated list of target pools.",
+ "description": "Retrieves an aggregated list of target pools. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetPools.aggregatedList",
"parameterOrder": [
@@ -19236,7 +19242,7 @@
]
},
"delete": {
- "description": "Deletes the specified target pool.",
+ "description": "Deletes the specified target pool. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetPools.delete",
"parameterOrder": [
@@ -19282,7 +19288,7 @@
]
},
"get": {
- "description": "Returns the specified target pool. Gets a list of available target pools by making a list() request.",
+ "description": "Returns the specified target pool. Gets a list of available target pools by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetPools.get",
"parameterOrder": [
@@ -19324,7 +19330,7 @@
]
},
"getHealth": {
- "description": "Gets the most recent health check results for each IP for the instance that is referenced by the given target pool.",
+ "description": "Gets the most recent health check results for each IP for the instance that is referenced by the given target pool. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.getHealth",
"parameterOrder": [
@@ -19369,7 +19375,7 @@
]
},
"insert": {
- "description": "Creates a target pool in the specified project and region using the data included in the request.",
+ "description": "Creates a target pool in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.insert",
"parameterOrder": [
@@ -19410,7 +19416,7 @@
]
},
"list": {
- "description": "Retrieves a list of target pools available to the specified project and region.",
+ "description": "Retrieves a list of target pools available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetPools.list",
"parameterOrder": [
@@ -19467,7 +19473,7 @@
]
},
"removeHealthCheck": {
- "description": "Removes health check URL from a target pool.",
+ "description": "Removes health check URL from a target pool. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.removeHealthCheck",
"parameterOrder": [
@@ -19516,7 +19522,7 @@
]
},
"removeInstance": {
- "description": "Removes instance URL from a target pool.",
+ "description": "Removes instance URL from a target pool. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.removeInstance",
"parameterOrder": [
@@ -19565,7 +19571,7 @@
]
},
"setBackup": {
- "description": "Changes a backup target pool's configurations.",
+ "description": "Changes a backup target pool's configurations. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetPools.setBackup",
"parameterOrder": [
@@ -19624,7 +19630,7 @@
"targetSslProxies": {
"methods": {
"delete": {
- "description": "Deletes the specified TargetSslProxy resource.",
+ "description": "Deletes the specified TargetSslProxy resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetSslProxies.delete",
"parameterOrder": [
@@ -19662,7 +19668,7 @@
]
},
"get": {
- "description": "Returns the specified TargetSslProxy resource. Gets a list of available target SSL proxies by making a list() request.",
+ "description": "Returns the specified TargetSslProxy resource. Gets a list of available target SSL proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetSslProxies.get",
"parameterOrder": [
@@ -19696,7 +19702,7 @@
]
},
"insert": {
- "description": "Creates a TargetSslProxy resource in the specified project using the data included in the request.",
+ "description": "Creates a TargetSslProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetSslProxies.insert",
"parameterOrder": [
@@ -19729,7 +19735,7 @@
]
},
"list": {
- "description": "Retrieves the list of TargetSslProxy resources available to the specified project.",
+ "description": "Retrieves the list of TargetSslProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetSslProxies.list",
"parameterOrder": [
@@ -19778,7 +19784,7 @@
]
},
"setBackendService": {
- "description": "Changes the BackendService for TargetSslProxy.",
+ "description": "Changes the BackendService for TargetSslProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetSslProxies.setBackendService",
"parameterOrder": [
@@ -19819,7 +19825,7 @@
]
},
"setProxyHeader": {
- "description": "Changes the ProxyHeaderType for TargetSslProxy.",
+ "description": "Changes the ProxyHeaderType for TargetSslProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetSslProxies.setProxyHeader",
"parameterOrder": [
@@ -19860,7 +19866,7 @@
]
},
"setSslCertificates": {
- "description": "Changes SslCertificates for TargetSslProxy.",
+ "description": "Changes SslCertificates for TargetSslProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetSslProxies.setSslCertificates",
"parameterOrder": [
@@ -19901,7 +19907,7 @@
]
},
"setSslPolicy": {
- "description": "Sets the SSL policy for TargetSslProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the SSL proxy load balancer. They do not affect the connection between the load balancer and the backends.",
+ "description": "Sets the SSL policy for TargetSslProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the SSL proxy load balancer. They do not affect the connection between the load balancer and the backends. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetSslProxies.setSslPolicy",
"parameterOrder": [
@@ -19945,7 +19951,7 @@
"targetTcpProxies": {
"methods": {
"delete": {
- "description": "Deletes the specified TargetTcpProxy resource.",
+ "description": "Deletes the specified TargetTcpProxy resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetTcpProxies.delete",
"parameterOrder": [
@@ -19983,7 +19989,7 @@
]
},
"get": {
- "description": "Returns the specified TargetTcpProxy resource. Gets a list of available target TCP proxies by making a list() request.",
+ "description": "Returns the specified TargetTcpProxy resource. Gets a list of available target TCP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetTcpProxies.get",
"parameterOrder": [
@@ -20017,7 +20023,7 @@
]
},
"insert": {
- "description": "Creates a TargetTcpProxy resource in the specified project using the data included in the request.",
+ "description": "Creates a TargetTcpProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetTcpProxies.insert",
"parameterOrder": [
@@ -20050,7 +20056,7 @@
]
},
"list": {
- "description": "Retrieves the list of TargetTcpProxy resources available to the specified project.",
+ "description": "Retrieves the list of TargetTcpProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetTcpProxies.list",
"parameterOrder": [
@@ -20099,7 +20105,7 @@
]
},
"setBackendService": {
- "description": "Changes the BackendService for TargetTcpProxy.",
+ "description": "Changes the BackendService for TargetTcpProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetTcpProxies.setBackendService",
"parameterOrder": [
@@ -20140,7 +20146,7 @@
]
},
"setProxyHeader": {
- "description": "Changes the ProxyHeaderType for TargetTcpProxy.",
+ "description": "Changes the ProxyHeaderType for TargetTcpProxy. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetTcpProxies.setProxyHeader",
"parameterOrder": [
@@ -20185,7 +20191,7 @@
"targetVpnGateways": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of target VPN gateways.",
+ "description": "Retrieves an aggregated list of target VPN gateways. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetVpnGateways.aggregatedList",
"parameterOrder": [
@@ -20234,7 +20240,7 @@
]
},
"delete": {
- "description": "Deletes the specified target VPN gateway.",
+ "description": "Deletes the specified target VPN gateway. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.targetVpnGateways.delete",
"parameterOrder": [
@@ -20280,7 +20286,7 @@
]
},
"get": {
- "description": "Returns the specified target VPN gateway. Gets a list of available target VPN gateways by making a list() request.",
+ "description": "Returns the specified target VPN gateway. Gets a list of available target VPN gateways by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetVpnGateways.get",
"parameterOrder": [
@@ -20322,7 +20328,7 @@
]
},
"insert": {
- "description": "Creates a target VPN gateway in the specified project and region using the data included in the request.",
+ "description": "Creates a target VPN gateway in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.targetVpnGateways.insert",
"parameterOrder": [
@@ -20363,7 +20369,7 @@
]
},
"list": {
- "description": "Retrieves a list of target VPN gateways available to the specified project and region.",
+ "description": "Retrieves a list of target VPN gateways available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.targetVpnGateways.list",
"parameterOrder": [
@@ -20424,7 +20430,7 @@
"urlMaps": {
"methods": {
"aggregatedList": {
- "description": "Retrieves the list of all UrlMap resources, regional and global, available to the specified project.",
+ "description": "Retrieves the list of all UrlMap resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.urlMaps.aggregatedList",
"parameterOrder": [
@@ -20473,7 +20479,7 @@
]
},
"delete": {
- "description": "Deletes the specified UrlMap resource.",
+ "description": "Deletes the specified UrlMap resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.urlMaps.delete",
"parameterOrder": [
@@ -20511,7 +20517,7 @@
]
},
"get": {
- "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request.",
+ "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.urlMaps.get",
"parameterOrder": [
@@ -20545,7 +20551,7 @@
]
},
"insert": {
- "description": "Creates a UrlMap resource in the specified project using the data included in the request.",
+ "description": "Creates a UrlMap resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.urlMaps.insert",
"parameterOrder": [
@@ -20578,7 +20584,7 @@
]
},
"invalidateCache": {
- "description": "Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap.",
+ "description": "Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.urlMaps.invalidateCache",
"parameterOrder": [
@@ -20619,7 +20625,7 @@
]
},
"list": {
- "description": "Retrieves the list of UrlMap resources available to the specified project.",
+ "description": "Retrieves the list of UrlMap resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.urlMaps.list",
"parameterOrder": [
@@ -20668,7 +20674,7 @@
]
},
"patch": {
- "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PATCH",
"id": "compute.urlMaps.patch",
"parameterOrder": [
@@ -20709,7 +20715,7 @@
]
},
"update": {
- "description": "Updates the specified UrlMap resource with the data included in the request.",
+ "description": "Updates the specified UrlMap resource with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "PUT",
"id": "compute.urlMaps.update",
"parameterOrder": [
@@ -20750,7 +20756,7 @@
]
},
"validate": {
- "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap.",
+ "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.urlMaps.validate",
"parameterOrder": [
@@ -20790,7 +20796,7 @@
"vpnGateways": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of VPN gateways.",
+ "description": "Retrieves an aggregated list of VPN gateways. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnGateways.aggregatedList",
"parameterOrder": [
@@ -20839,7 +20845,7 @@
]
},
"delete": {
- "description": "Deletes the specified VPN gateway.",
+ "description": "Deletes the specified VPN gateway. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.vpnGateways.delete",
"parameterOrder": [
@@ -20885,7 +20891,7 @@
]
},
"get": {
- "description": "Returns the specified VPN gateway. Gets a list of available VPN gateways by making a list() request.",
+ "description": "Returns the specified VPN gateway. Gets a list of available VPN gateways by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnGateways.get",
"parameterOrder": [
@@ -20927,7 +20933,7 @@
]
},
"getStatus": {
- "description": "Returns the status for the specified VPN gateway.",
+ "description": "Returns the status for the specified VPN gateway. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnGateways.getStatus",
"parameterOrder": [
@@ -20969,7 +20975,7 @@
]
},
"insert": {
- "description": "Creates a VPN gateway in the specified project and region using the data included in the request.",
+ "description": "Creates a VPN gateway in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.vpnGateways.insert",
"parameterOrder": [
@@ -21010,7 +21016,7 @@
]
},
"list": {
- "description": "Retrieves a list of VPN gateways available to the specified project and region.",
+ "description": "Retrieves a list of VPN gateways available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnGateways.list",
"parameterOrder": [
@@ -21067,7 +21073,7 @@
]
},
"setLabels": {
- "description": "Sets the labels on a VpnGateway. To learn more about labels, read the Labeling Resources documentation.",
+ "description": "Sets the labels on a VpnGateway. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.vpnGateways.setLabels",
"parameterOrder": [
@@ -21116,7 +21122,7 @@
]
},
"testIamPermissions": {
- "description": "Returns permissions that a caller has on the specified resource.",
+ "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.vpnGateways.testIamPermissions",
"parameterOrder": [
@@ -21165,7 +21171,7 @@
"vpnTunnels": {
"methods": {
"aggregatedList": {
- "description": "Retrieves an aggregated list of VPN tunnels.",
+ "description": "Retrieves an aggregated list of VPN tunnels. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnTunnels.aggregatedList",
"parameterOrder": [
@@ -21214,7 +21220,7 @@
]
},
"delete": {
- "description": "Deletes the specified VpnTunnel resource.",
+ "description": "Deletes the specified VpnTunnel resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.vpnTunnels.delete",
"parameterOrder": [
@@ -21260,7 +21266,7 @@
]
},
"get": {
- "description": "Returns the specified VpnTunnel resource. Gets a list of available VPN tunnels by making a list() request.",
+ "description": "Returns the specified VpnTunnel resource. Gets a list of available VPN tunnels by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnTunnels.get",
"parameterOrder": [
@@ -21302,7 +21308,7 @@
]
},
"insert": {
- "description": "Creates a VpnTunnel resource in the specified project and region using the data included in the request.",
+ "description": "Creates a VpnTunnel resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "POST",
"id": "compute.vpnTunnels.insert",
"parameterOrder": [
@@ -21343,7 +21349,7 @@
]
},
"list": {
- "description": "Retrieves a list of VpnTunnel resources contained in the specified project and region.",
+ "description": "Retrieves a list of VpnTunnel resources contained in the specified project and region. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.vpnTunnels.list",
"parameterOrder": [
@@ -21404,7 +21410,7 @@
"zoneOperations": {
"methods": {
"delete": {
- "description": "Deletes the specified zone-specific Operations resource.",
+ "description": "Deletes the specified zone-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "DELETE",
"id": "compute.zoneOperations.delete",
"parameterOrder": [
@@ -21442,7 +21448,7 @@
]
},
"get": {
- "description": "Retrieves the specified zone-specific Operations resource.",
+ "description": "Retrieves the specified zone-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.zoneOperations.get",
"parameterOrder": [
@@ -21484,7 +21490,7 @@
]
},
"list": {
- "description": "Retrieves a list of Operation resources contained within the specified zone.",
+ "description": "Retrieves a list of Operation resources contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.zoneOperations.list",
"parameterOrder": [
@@ -21545,7 +21551,7 @@
"zones": {
"methods": {
"get": {
- "description": "Returns the specified Zone resource. Gets a list of available zones by making a list() request.",
+ "description": "Returns the specified Zone resource. Gets a list of available zones by making a list() request. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.zones.get",
"parameterOrder": [
@@ -21579,7 +21585,7 @@
]
},
"list": {
- "description": "Retrieves the list of Zone resources available to the specified project.",
+ "description": "Retrieves the list of Zone resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
"httpMethod": "GET",
"id": "compute.zones.list",
"parameterOrder": [
@@ -21630,7 +21636,7 @@
}
}
},
- "revision": "20190905",
+ "revision": "20191014",
"rootUrl": "https://compute.googleapis.com/",
"schemas": {
"AcceleratorConfig": {
@@ -22597,7 +22603,7 @@
"type": "string"
},
"inUseCount": {
- "description": "[OutputOnly] Indicates how many instances are in use.",
+ "description": "[Output Only] Indicates how many instances are in use.",
"format": "int64",
"type": "string"
},
@@ -22882,7 +22888,7 @@
"type": "string"
},
"status": {
- "description": "[Output Only] The status of the autoscaler configuration.",
+ "description": "[Output Only] The status of the autoscaler configuration. Current set of possible values: PENDING: Autoscaler backend hasn't read new/updated configuration DELETING: Configuration is being deleted ACTIVE: Configuration is acknowledged to be effective. Some warnings might or might not be present in the status_details field. ERROR: Configuration has errors. Actionable for users. Details are present in the status_details field. New values might be added in the future.",
"enum": [
"ACTIVE",
"DELETING",
@@ -23147,7 +23153,7 @@
"type": "string"
},
"type": {
- "description": "The type of error returned.",
+ "description": "The type of error, warning or notice returned. Current set of possible values: ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends size bigger than maxNumReplicas. CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. MODE_ONLY_UP (WARNING): Autoscaling is in the \"Autoscale only up\" mode. Instances in the group will be only added. MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): Exceeded quota for necessary resources, such as CPU, number of instances and so on. REGION_RESOURCE_STOCKOUT (ERROR): Showed only for regional autoscalers: there is a resource stockout in the chosen region. SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.",
"enum": [
"ALL_INSTANCES_UNHEALTHY",
"BACKEND_SERVICE_DOES_NOT_EXIST",
@@ -23157,6 +23163,7 @@
"MIN_EQUALS_MAX",
"MISSING_CUSTOM_METRIC_DATA_POINTS",
"MISSING_LOAD_BALANCING_DATA_POINTS",
+ "MODE_OFF",
"MORE_THAN_ONE_BACKEND_SERVICE",
"NOT_ENOUGH_QUOTA_AVAILABLE",
"REGION_RESOURCE_STOCKOUT",
@@ -23180,6 +23187,7 @@
"",
"",
"",
+ "",
""
],
"type": "string"
@@ -23619,7 +23627,7 @@
"type": "object"
},
"BackendService": {
- "description": "Represents a Backend Service resource.\n\n\n\nBackend services must have an associated health check. Backend services also store information about session affinity. For more information, read Backend Services.\n\nA backendServices resource represents a global backend service. Global backend services are used for HTTP(S), SSL Proxy, TCP Proxy load balancing and Traffic Director.\n\nA regionBackendServices resource represents a regional backend service. Regional backend services are used for internal TCP/UDP load balancing. For more information, read Internal TCP/UDP Load balancing. (== resource_for v1.backendService ==) (== resource_for beta.backendService ==)",
+ "description": "Represents a Backend Service resource.\n\nA backend service contains configuration values for Google Cloud Platform load balancing services.\n\nFor more information, read Backend Services.\n\n(== resource_for v1.backendService ==) (== resource_for beta.backendService ==)",
"id": "BackendService",
"properties": {
"affinityCookieTtlSec": {
@@ -23640,7 +23648,7 @@
},
"circuitBreakers": {
"$ref": "CircuitBreakers",
- "description": "Settings controlling the volume of connections to a backend service.\n\nThis field is applicable to either: \n- A regional backend service with the service_protocol set to HTTP, HTTPS, or HTTP2, and load_balancing_scheme set to INTERNAL_MANAGED. \n- A global backend service with the load_balancing_scheme set to INTERNAL_SELF_MANAGED."
+ "description": "Settings controlling the volume of connections to a backend service. If not set, this feature is considered disabled.\n\nThis field is applicable to either: \n- A regional backend service with the service_protocol set to HTTP, HTTPS, or HTTP2, and load_balancing_scheme set to INTERNAL_MANAGED. \n- A global backend service with the load_balancing_scheme set to INTERNAL_SELF_MANAGED."
},
"connectionDraining": {
"$ref": "ConnectionDraining"
@@ -23694,7 +23702,7 @@
"type": "string"
},
"loadBalancingScheme": {
- "description": "Indicates whether the backend service will be used with internal or external load balancing. A backend service created for one type of load balancing cannot be used with the other. Possible values are INTERNAL and EXTERNAL.",
+ "description": "Specifies the load balancer type. Choose EXTERNAL for load balancers that receive traffic from external clients. Choose INTERNAL for Internal TCP/UDP Load Balancing. Choose INTERNAL_MANAGED for Internal HTTP(S) Load Balancing. Choose INTERNAL_SELF_MANAGED for Traffic Director. A backend service created for one type of load balancing cannot be used with another. For more information, refer to Choosing a load balancer.",
"enum": [
"EXTERNAL",
"INTERNAL",
@@ -23740,7 +23748,7 @@
},
"outlierDetection": {
"$ref": "OutlierDetection",
- "description": "Settings controlling eviction of unhealthy hosts from the load balancing pool. This field is applicable to either: \n- A regional backend service with the service_protocol set to HTTP, HTTPS, or HTTP2, and load_balancing_scheme set to INTERNAL_MANAGED. \n- A global backend service with the load_balancing_scheme set to INTERNAL_SELF_MANAGED."
+ "description": "Settings controlling the eviction of unhealthy hosts from the load balancing pool for the backend service. If not set, this feature is considered disabled.\n\nThis field is applicable to either: \n- A regional backend service with the service_protocol set to HTTP, HTTPS, or HTTP2, and load_balancing_scheme set to INTERNAL_MANAGED. \n- A global backend service with the load_balancing_scheme set to INTERNAL_SELF_MANAGED."
},
"port": {
"description": "Deprecated in favor of portName. The TCP port to connect on the backend. The default value is 80.\n\nThis cannot be used if the loadBalancingScheme is INTERNAL (Internal TCP/UDP Load Balancing).",
@@ -23752,7 +23760,7 @@
"type": "string"
},
"protocol": {
- "description": "The protocol this BackendService uses to communicate with backends.\n\nPossible values are HTTP, HTTPS, TCP, SSL, or UDP, depending on the chosen load balancer or Traffic Director configuration. Refer to the documentation for the load balancer or for Traffic director for more information.",
+ "description": "The protocol this BackendService uses to communicate with backends.\n\nPossible values are HTTP, HTTPS, HTTP2, TCP, SSL, or UDP, depending on the chosen load balancer or Traffic Director configuration. Refer to the documentation for the load balancer or for Traffic Director for more information.",
"enum": [
"HTTP",
"HTTP2",
@@ -24275,27 +24283,27 @@
"id": "CircuitBreakers",
"properties": {
"maxConnections": {
- "description": "The maximum number of connections to the backend cluster. If not specified, the default is 1024.",
+ "description": "The maximum number of connections to the backend service. If not specified, there is no limit.",
"format": "int32",
"type": "integer"
},
"maxPendingRequests": {
- "description": "The maximum number of pending requests allowed to the backend cluster. If not specified, the default is 1024.",
+ "description": "The maximum number of pending requests allowed to the backend service. If not specified, there is no limit.",
"format": "int32",
"type": "integer"
},
"maxRequests": {
- "description": "The maximum number of parallel requests that allowed to the backend cluster. If not specified, the default is 1024.",
+ "description": "The maximum number of parallel requests that allowed to the backend service. If not specified, there is no limit.",
"format": "int32",
"type": "integer"
},
"maxRequestsPerConnection": {
- "description": "Maximum requests for a single backend connection. This parameter is respected by both the HTTP/1.1 and HTTP/2 implementations. If not specified, there is no limit. Setting this parameter to 1 will effectively disable keep alive.",
+ "description": "Maximum requests for a single connection to the backend service. This parameter is respected by both the HTTP/1.1 and HTTP/2 implementations. If not specified, there is no limit. Setting this parameter to 1 will effectively disable keep alive.",
"format": "int32",
"type": "integer"
},
"maxRetries": {
- "description": "The maximum number of parallel retries allowed to the backend cluster. If not specified, the default is 3.",
+ "description": "The maximum number of parallel retries allowed to the backend cluster. If not specified, the default is 1.",
"format": "int32",
"type": "integer"
}
@@ -25800,7 +25808,7 @@
"id": "DisksAddResourcePoliciesRequest",
"properties": {
"resourcePolicies": {
- "description": "Resource policies to be added to this disk.",
+ "description": "Resource policies to be added to this disk. Currently you can only specify one policy here.",
"items": {
"type": "string"
},
@@ -26506,7 +26514,7 @@
"type": "object"
},
"ForwardingRule": {
- "description": "Represents a Forwarding Rule resource.\n\n\n\nA forwardingRules resource represents a regional forwarding rule.\n\nRegional external forwarding rules can reference any of the following resources:\n \n- A target instance \n- A Cloud VPN Classic gateway (targetVpnGateway), \n- A target pool for a Network Load Balancer \n- A global target HTTP(S) proxy for an HTTP(S) load balancer using Standard Tier \n- A target SSL proxy for a SSL Proxy load balancer using Standard Tier \n- A target TCP proxy for a TCP Proxy load balancer using Standard Tier. \n\nRegional internal forwarding rules can reference the backend service of an internal TCP/UDP load balancer.\n\nFor regional internal forwarding rules, the following applies: \n- If the loadBalancingScheme for the load balancer is INTERNAL, then the forwarding rule references a regional internal backend service. \n- If the loadBalancingScheme for the load balancer is INTERNAL_MANAGED, then the forwarding rule must reference a regional target HTTP(S) proxy. \n\nFor more information, read Using Forwarding rules.\n\nA globalForwardingRules resource represents a global forwarding rule.\n\nGlobal forwarding rules are only used by load balancers that use Premium Tier. (== resource_for beta.forwardingRules ==) (== resource_for v1.forwardingRules ==) (== resource_for beta.globalForwardingRules ==) (== resource_for v1.globalForwardingRules ==) (== resource_for beta.regionForwardingRules ==) (== resource_for v1.regionForwardingRules ==)",
+ "description": "Represents a Forwarding Rule resource.\n\nA forwarding rule and its corresponding IP address represent the frontend configuration of a Google Cloud Platform load balancer. Forwarding rules can also reference target instances and Cloud VPN Classic gateways (targetVpnGateway).\n\nFor more information, read Forwarding rule concepts and Using protocol forwarding.\n\n(== resource_for beta.forwardingRules ==) (== resource_for v1.forwardingRules ==) (== resource_for beta.globalForwardingRules ==) (== resource_for v1.globalForwardingRules ==) (== resource_for beta.regionForwardingRules ==) (== resource_for v1.regionForwardingRules ==)",
"id": "ForwardingRule",
"properties": {
"IPAddress": {
@@ -26514,7 +26522,7 @@
"type": "string"
},
"IPProtocol": {
- "description": "The IP protocol to which this rule applies. Valid options are TCP, UDP, ESP, AH, SCTP or ICMP.\n\nWhen the load balancing scheme is INTERNAL, only TCP and UDP are valid. When the load balancing scheme is INTERNAL_SELF_MANAGED, only TCPis valid.",
+ "description": "The IP protocol to which this rule applies. Valid options are TCP, UDP, ESP, AH, SCTP or ICMP.\n\nFor Internal TCP/UDP Load Balancing, the load balancing scheme is INTERNAL, and one of TCP or UDP are valid. For Traffic Director, the load balancing scheme is INTERNAL_SELF_MANAGED, and only TCPis valid. For Internal HTTP(S) Load Balancing, the load balancing scheme is INTERNAL_MANAGED, and only TCP is valid. For HTTP(S), SSL Proxy, and TCP Proxy Load Balancing, the load balancing scheme is EXTERNAL and only TCP is valid. For Network TCP/UDP Load Balancing, the load balancing scheme is EXTERNAL, and one of TCP or UDP is valid.",
"enum": [
"AH",
"ESP",
@@ -26574,7 +26582,7 @@
"type": "string"
},
"loadBalancingScheme": {
- "description": "This signifies what the ForwardingRule will be used for and can only take the following values: INTERNAL, INTERNAL_SELF_MANAGED, EXTERNAL. The value of INTERNAL means that this will be used for Internal Network Load Balancing (TCP, UDP). The value of INTERNAL_SELF_MANAGED means that this will be used for Internal Global HTTP(S) LB. The value of EXTERNAL means that this will be used for External Load Balancing (HTTP(S) LB, External TCP/UDP LB, SSL Proxy)",
+ "description": "Specifies the forwarding rule type. EXTERNAL is used for: - Classic Cloud VPN gateways - Protocol forwarding to VMs from an external IP address - The following load balancers: HTTP(S), SSL Proxy, TCP Proxy, and Network TCP/UDP.\n\nINTERNAL is used for: - Protocol forwarding to VMs from an internal IP address - Internal TCP/UDP load balancers\n\nINTERNAL_MANAGED is used for: - Internal HTTP(S) load balancers\n\nINTERNAL_SELF_MANAGED is used for: - Traffic Director\n\nFor more information about forwarding rules, refer to Forwarding rule concepts.",
"enum": [
"EXTERNAL",
"INTERNAL",
@@ -28176,13 +28184,13 @@
"id": "HttpRetryPolicy",
"properties": {
"numRetries": {
- "description": "Specifies the allowed number retries. This number must be \u003e 0.",
+ "description": "Specifies the allowed number retries. This number must be \u003e 0. If not specified, defaults to 1.",
"format": "uint32",
"type": "integer"
},
"perTryTimeout": {
"$ref": "Duration",
- "description": "Specifies a non-zero timeout per retry attempt."
+ "description": "Specifies a non-zero timeout per retry attempt.\nIf not specified, will use the timeout set in HttpRouteAction. If timeout in HttpRouteAction is not set, will use the largest timeout among all backend services associated with the route."
},
"retryConditions": {
"description": "Specfies one or more conditions when this retry rule applies. Valid values are: \n- 5xx: Loadbalancer will attempt a retry if the backend service responds with any 5xx response code, or if the backend service does not respond at all, example: disconnects, reset, read timeout, connection failure, and refused streams. \n- gateway-error: Similar to 5xx, but only applies to response codes 502, 503 or 504.\n- \n- connect-failure: Loadbalancer will retry on failures connecting to backend services, for example due to connection timeouts. \n- retriable-4xx: Loadbalancer will retry for retriable 4xx response codes. Currently the only retriable error supported is 409. \n- refused-stream:Loadbalancer will retry if the backend service resets the stream with a REFUSED_STREAM error code. This reset type indicates that it is safe to retry. \n- cancelledLoadbalancer will retry if the gRPC status code in the response header is set to cancelled \n- deadline-exceeded: Loadbalancer will retry if the gRPC status code in the response header is set to deadline-exceeded \n- resource-exhausted: Loadbalancer will retry if the gRPC status code in the response header is set to resource-exhausted \n- unavailable: Loadbalancer will retry if the gRPC status code in the response header is set to unavailable",
@@ -28215,7 +28223,7 @@
},
"timeout": {
"$ref": "Duration",
- "description": "Specifies the timeout for the selected route. Timeout is computed from the time the request is has been fully processed (i.e. end-of-stream) up until the response has been completely processed. Timeout includes all retries.\nIf not specified, the default value is 15 seconds."
+ "description": "Specifies the timeout for the selected route. Timeout is computed from the time the request has been fully processed (i.e. end-of-stream) up until the response has been completely processed. Timeout includes all retries.\nIf not specified, will use the largest timeout among all backend services associated with the route."
},
"urlRewrite": {
"$ref": "UrlRewrite",
@@ -28235,6 +28243,10 @@
"description": "An HttpRouteRule specifies how to match an HTTP request and the corresponding routing action that load balancing proxies will perform.",
"id": "HttpRouteRule",
"properties": {
+ "description": {
+ "description": "The short description conveying the intent of this routeRule.\nThe description can have a maximum length of 1024 characters.",
+ "type": "string"
+ },
"headerAction": {
"$ref": "HttpHeaderAction",
"description": "Specifies changes to request and response headers that need to take effect for the selected backendService.\nThe headerAction specified here are applied before the matching pathMatchers[].headerAction and after pathMatchers[].routeRules[].routeAction.weightedBackendService.backendServiceWeightAction[].headerAction"
@@ -28245,6 +28257,11 @@
},
"type": "array"
},
+ "priority": {
+ "description": "For routeRules within a given pathMatcher, priority determines the order in which load balancer will interpret routeRules. RouteRules are evaluated in order of priority, from the lowest to highest number. The priority of a rule decreases as its number increases (1, 2, 3, N+1). The first rule that matches the request is applied.\nYou cannot configure two or more routeRules with the same priority. Priority for each rule must be set to a number between 0 and 2147483647 inclusive.\nPriority numbers can have gaps, which enable you to add or remove rules in the future without affecting the rest of the rules. For example, 1, 2, 3, 4, 5, 9, 12, 16 is a valid series of priority numbers to which you could add rules numbered from 6 to 8, 10 to 11, and 13 to 15 in the future without any impact on existing rules.",
+ "format": "int32",
+ "type": "integer"
+ },
"routeAction": {
"$ref": "HttpRouteAction",
"description": "In response to a matching matchRule, the load balancer performs advanced routing actions like URL rewrites, header transformations, etc. prior to forwarding the request to the selected backend. If routeAction specifies any weightedBackendServices, service must not be set. Conversely if service is set, routeAction cannot contain any weightedBackendServices.\nOnly one of routeAction or urlRedirect must be set."
@@ -32593,9 +32610,16 @@
"type": "object"
},
"LogConfigCounterOptions": {
- "description": "Increment a streamz counter with the specified metric and field names.\n\nMetric names should start with a '/', generally be lowercase-only, and end in \"_count\". Field names should not contain an initial slash. The actual exported metric names will have \"/iam/policy\" prepended.\n\nField names correspond to IAM request parameters and field values are their respective values.\n\nSupported field names: - \"authority\", which is \"[token]\" if IAMContext.token is present, otherwise the value of IAMContext.authority_selector if present, and otherwise a representation of IAMContext.principal; or - \"iam_principal\", a representation of IAMContext.principal even if a token or authority selector is present; or - \"\" (empty string), resulting in a counter with no fields.\n\nExamples: counter { metric: \"/debug_access_count\" field: \"iam_principal\" } ==\u003e increment counter /iam/policy/backend_debug_access_count {iam_principal=[value of IAMContext.principal]}\n\nAt this time we do not support multiple field names (though this may be supported in the future).",
+ "description": "Increment a streamz counter with the specified metric and field names.\n\nMetric names should start with a '/', generally be lowercase-only, and end in \"_count\". Field names should not contain an initial slash. The actual exported metric names will have \"/iam/policy\" prepended.\n\nField names correspond to IAM request parameters and field values are their respective values.\n\nSupported field names: - \"authority\", which is \"[token]\" if IAMContext.token is present, otherwise the value of IAMContext.authority_selector if present, and otherwise a representation of IAMContext.principal; or - \"iam_principal\", a representation of IAMContext.principal even if a token or authority selector is present; or - \"\" (empty string), resulting in a counter with no fields.\n\nExamples: counter { metric: \"/debug_access_count\" field: \"iam_principal\" } ==\u003e increment counter /iam/policy/debug_access_count {iam_principal=[value of IAMContext.principal]}\n\nTODO(b/141846426): Consider supporting \"authority\" and \"iam_principal\" fields in the same counter.",
"id": "LogConfigCounterOptions",
"properties": {
+ "customFields": {
+ "description": "Custom fields.",
+ "items": {
+ "$ref": "LogConfigCounterOptionsCustomField"
+ },
+ "type": "array"
+ },
"field": {
"description": "The field value to attribute.",
"type": "string"
@@ -32607,6 +32631,21 @@
},
"type": "object"
},
+ "LogConfigCounterOptionsCustomField": {
+ "description": "Custom fields. These can be used to create a counter with arbitrary field/value pairs. See: go/rpcsp-custom-fields.",
+ "id": "LogConfigCounterOptionsCustomField",
+ "properties": {
+ "name": {
+ "description": "Name is the field name.",
+ "type": "string"
+ },
+ "value": {
+ "description": "Value is the field value. It is important that in contrast to the CounterOptions.field, the value here is a constant that is not derived from the IAMContext.",
+ "type": "string"
+ }
+ },
+ "type": "object"
+ },
"LogConfigDataAccessOptions": {
"description": "Write a Data Access (Gin) log",
"id": "LogConfigDataAccessOptions",
@@ -33350,7 +33389,7 @@
"type": "object"
},
"NetworkEndpointGroup": {
- "description": "Represents a collection of network endpoints.",
+ "description": "Represents a collection of network endpoints.\n\nFor more information read Setting up network endpoint groups in load balancing. (== resource_for v1.networkEndpointGroups ==) (== resource_for beta.networkEndpointGroups ==)",
"id": "NetworkEndpointGroup",
"properties": {
"creationTimestamp": {
@@ -34171,7 +34210,7 @@
"type": "object"
},
"NodeGroup": {
- "description": "Represent a sole-tenant Node Group resource.\n\nA sole-tenant node is a physical server that is dedicated to hosting VM instances only for your specific project. Use sole-tenant nodes to keep your instances physically separated from instances in other projects, or to group your instances together on the same host hardware. For more information, read Sole-tenant nodes. (== resource_for beta.nodeGroups ==) (== resource_for v1.nodeGroups ==) NextID: 15",
+ "description": "Represent a sole-tenant Node Group resource.\n\nA sole-tenant node is a physical server that is dedicated to hosting VM instances only for your specific project. Use sole-tenant nodes to keep your instances physically separated from instances in other projects, or to group your instances together on the same host hardware. For more information, read Sole-tenant nodes. (== resource_for beta.nodeGroups ==) (== resource_for v1.nodeGroups ==) NextID: 16",
"id": "NodeGroup",
"properties": {
"creationTimestamp": {
@@ -36060,12 +36099,12 @@
"type": "object"
},
"OutlierDetection": {
- "description": "Settings controlling eviction of unhealthy hosts from the load balancing pool.",
+ "description": "Settings controlling the eviction of unhealthy hosts from the load balancing pool for the backend service.",
"id": "OutlierDetection",
"properties": {
"baseEjectionTime": {
"$ref": "Duration",
- "description": "The base time that a host is ejected for. The real time is equal to the base time multiplied by the number of times the host has been ejected. Defaults to 30000ms or 30s."
+ "description": "The base time that a host is ejected for. The real ejection time is equal to the base ejection time multiplied by the number of times the host has been ejected. Defaults to 30000ms or 30s."
},
"consecutiveErrors": {
"description": "Number of errors before a host is ejected from the connection pool. When the backend host is accessed over HTTP, a 5xx return code qualifies as an error. Defaults to 5.",
@@ -36073,17 +36112,17 @@
"type": "integer"
},
"consecutiveGatewayFailure": {
- "description": "The number of consecutive gateway failures (502, 503, 504 status or connection errors that are mapped to one of those status codes) before a consecutive gateway failure ejection occurs. Defaults to 5.",
+ "description": "The number of consecutive gateway failures (502, 503, 504 status or connection errors that are mapped to one of those status codes) before a consecutive gateway failure ejection occurs. Defaults to 3.",
"format": "int32",
"type": "integer"
},
"enforcingConsecutiveErrors": {
- "description": "The percentage chance that a host will be actually ejected when an outlier status is detected through consecutive 5xx. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 100.",
+ "description": "The percentage chance that a host will be actually ejected when an outlier status is detected through consecutive 5xx. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 0.",
"format": "int32",
"type": "integer"
},
"enforcingConsecutiveGatewayFailure": {
- "description": "The percentage chance that a host will be actually ejected when an outlier status is detected through consecutive gateway failures. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 0.",
+ "description": "The percentage chance that a host will be actually ejected when an outlier status is detected through consecutive gateway failures. This setting can be used to disable ejection or to ramp it up slowly. Defaults to 100.",
"format": "int32",
"type": "integer"
},
@@ -36094,10 +36133,10 @@
},
"interval": {
"$ref": "Duration",
- "description": "Time interval between ejection sweep analysis. This can result in both new ejections as well as hosts being returned to service. Defaults to 10 seconds."
+ "description": "Time interval between ejection sweep analysis. This can result in both new ejections as well as hosts being returned to service. Defaults to 1 seconds."
},
"maxEjectionPercent": {
- "description": "Maximum percentage of hosts in the load balancing pool for the backend service that can be ejected. Defaults to 10%.",
+ "description": "Maximum percentage of hosts in the load balancing pool for the backend service that can be ejected. Defaults to 50%.",
"format": "int32",
"type": "integer"
},
@@ -36191,7 +36230,7 @@
"type": "object"
},
"Policy": {
- "description": "Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources.\n\n\n\nA `Policy` consists of a list of `bindings`. A `binding` binds a list of `members` to a `role`, where the members can be user accounts, Google groups, Google domains, and service accounts. A `role` is a named list of permissions defined by IAM.\n\n**JSON Example**\n\n{ \"bindings\": [ { \"role\": \"roles/owner\", \"members\": [ \"user:[email protected]\", \"group:[email protected]\", \"domain:google.com\", \"serviceAccount:[email protected]\" ] }, { \"role\": \"roles/viewer\", \"members\": [\"user:[email protected]\"] } ] }\n\n**YAML Example**\n\nbindings: - members: - user:[email protected] - group:[email protected] - domain:google.com - serviceAccount:[email protected] role: roles/owner - members: - user:[email protected] role: roles/viewer\n\n\n\nFor a description of IAM and its features, see the [IAM developer's guide](https://cloud.google.com/iam/docs).",
+ "description": "Defines an Identity and Access Management (IAM) policy. It is used to specify access control policies for Cloud Platform resources.\n\n\n\nA `Policy` is a collection of `bindings`. A `binding` binds one or more `members` to a single `role`. Members can be user accounts, service accounts, Google groups, and domains (such as G Suite). A `role` is a named list of permissions (defined by IAM or configured by users). A `binding` can optionally specify a `condition`, which is a logic expression that further constrains the role binding based on attributes about the request and/or target resource.\n\n**JSON Example**\n\n{ \"bindings\": [ { \"role\": \"roles/resourcemanager.organizationAdmin\", \"members\": [ \"user:[email protected]\", \"group:[email protected]\", \"domain:google.com\", \"serviceAccount:[email protected]\" ] }, { \"role\": \"roles/resourcemanager.organizationViewer\", \"members\": [\"user:[email protected]\"], \"condition\": { \"title\": \"expirable access\", \"description\": \"Does not grant access after Sep 2020\", \"expression\": \"request.time \u003c timestamp('2020-10-01T00:00:00.000Z')\", } } ] }\n\n**YAML Example**\n\nbindings: - members: - user:[email protected] - group:[email protected] - domain:google.com - serviceAccount:[email protected] role: roles/resourcemanager.organizationAdmin - members: - user:[email protected] role: roles/resourcemanager.organizationViewer condition: title: expirable access description: Does not grant access after Sep 2020 expression: request.time \u003c timestamp('2020-10-01T00:00:00.000Z')\n\nFor a description of IAM and its features, see the [IAM developer's guide](https://cloud.google.com/iam/docs).",
"id": "Policy",
"properties": {
"auditConfigs": {
@@ -36202,14 +36241,14 @@
"type": "array"
},
"bindings": {
- "description": "Associates a list of `members` to a `role`. `bindings` with no members will result in an error.",
+ "description": "Associates a list of `members` to a `role`. Optionally may specify a `condition` that determines when binding is in effect. `bindings` with no members will result in an error.",
"items": {
"$ref": "Binding"
},
"type": "array"
},
"etag": {
- "description": "`etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the `etag` in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An `etag` is returned in the response to `getIamPolicy`, and systems are expected to put that etag in the request to `setIamPolicy` to ensure that their change will be applied to the same version of the policy.\n\nIf no `etag` is provided in the call to `setIamPolicy`, then the existing policy is overwritten.",
+ "description": "`etag` is used for optimistic concurrency control as a way to help prevent simultaneous updates of a policy from overwriting each other. It is strongly suggested that systems make use of the `etag` in the read-modify-write cycle to perform policy updates in order to avoid race conditions: An `etag` is returned in the response to `getIamPolicy`, and systems are expected to put that etag in the request to `setIamPolicy` to ensure that their change will be applied to the same version of the policy.\n\nIf no `etag` is provided in the call to `setIamPolicy`, then the existing policy is overwritten. Due to blind-set semantics of an etag-less policy, 'setIamPolicy' will not fail even if either of incoming or stored policy does not meet the version requirements.",
"format": "byte",
"type": "string"
},
@@ -36225,7 +36264,7 @@
"type": "array"
},
"version": {
- "description": "Specifies the format of the policy.\n\nValid values are 0, 1, and 3. Requests specifying an invalid value will be rejected.\n\nPolicies with any conditional bindings must specify version 3. Policies without any conditional bindings may specify any valid value or leave the field unset.",
+ "description": "Specifies the format of the policy.\n\nValid values are 0, 1, and 3. Requests specifying an invalid value will be rejected.\n\nOperations affecting conditional bindings must specify version 3. This can be either setting a conditional policy, modifying a conditional binding, or removing a conditional binding from the stored conditional policy. Operations on non-conditional policies may specify any valid value or leave the field unset.\n\nIf no etag is provided in the call to `setIamPolicy`, any version compliance checks on the incoming and/or stored policy is skipped.",
"format": "int32",
"type": "integer"
}
@@ -36428,6 +36467,7 @@
"INTERCONNECTS",
"INTERCONNECT_ATTACHMENTS_PER_REGION",
"INTERCONNECT_ATTACHMENTS_TOTAL_MBPS",
+ "INTERCONNECT_TOTAL_GBPS",
"INTERNAL_ADDRESSES",
"IN_USE_ADDRESSES",
"IN_USE_BACKUP_SCHEDULES",
@@ -36558,6 +36598,7 @@
"",
"",
"",
+ "",
""
],
"type": "string"
@@ -37568,7 +37609,7 @@
"id": "Reservation",
"properties": {
"commitment": {
- "description": "[OutputOnly] Full or partial URL to a parent commitment. This field displays for reservations that are tied to a commitment.",
+ "description": "[Output Only] Full or partial URL to a parent commitment. This field displays for reservations that are tied to a commitment.",
"type": "string"
},
"creationTimestamp": {
@@ -38659,6 +38700,10 @@
"description": "The URL to a gateway that should handle matching packets. You can only specify the internet gateway using a full or partial valid URL: projects/project/global/gateways/default-internet-gateway",
"type": "string"
},
+ "nextHopIlb": {
+ "description": "The URL to a forwarding rule of type loadBalancingScheme=INTERNAL that should handle matching packets. You can only specify the forwarding rule as a partial or full URL. For example, the following are all valid URLs: \n- https://www.googleapis.com/compute/v1/projects/project/regions/region/forwardingRules/forwardingRule \n- regions/region/forwardingRules/forwardingRule",
+ "type": "string"
+ },
"nextHopInstance": {
"description": "The URL to an instance that should handle matching packets. You can specify this as a full or partial URL. For example:\nhttps://www.googleapis.com/compute/v1/projects/project/zones/zone/instances/",
"type": "string"
@@ -39395,6 +39440,13 @@
"description": "Represents a Nat resource. It enables the VMs within the specified subnetworks to access Internet without external IP addresses. It specifies a list of subnetworks (and the ranges within) that want to use NAT. Customers can also provide the external IPs that would be used for NAT. GCP would auto-allocate ephemeral IPs if no external IPs are provided.",
"id": "RouterNat",
"properties": {
+ "drainNatIps": {
+ "description": "A list of URLs of the IP resources to be drained. These IPs must be valid static external IPs that have been assigned to the NAT. These IPs should be used for updating/patching a NAT only.",
+ "items": {
+ "type": "string"
+ },
+ "type": "array"
+ },
"icmpIdleTimeoutSec": {
"description": "Timeout (in seconds) for ICMP connections. Defaults to 30s if not set.",
"format": "int32",
@@ -39481,7 +39533,7 @@
"type": "boolean"
},
"filter": {
- "description": "Specifies the desired filtering of logs on this NAT. If unspecified, logs are exported for all connections handled by this NAT.",
+ "description": "Specify the desired filtering of logs on this NAT. If unspecified, logs are exported for all connections handled by this NAT. This option can take one of the following values: \n- ERRORS_ONLY: Export logs only for connection failures. \n- TRANSLATIONS_ONLY: Export logs only for successful connections. \n- ALL: Export logs for all connections, successful and unsuccessful.",
"enum": [
"ALL",
"ERRORS_ONLY",
@@ -39639,6 +39691,20 @@
},
"type": "array"
},
+ "drainAutoAllocatedNatIps": {
+ "description": "A list of IPs auto-allocated for NAT that are in drain mode. Example: [\"1.1.1.1\", \"179.12.26.133\"].",
+ "items": {
+ "type": "string"
+ },
+ "type": "array"
+ },
+ "drainUserAllocatedNatIps": {
+ "description": "A list of IPs user-allocated for NAT that are in drain mode. Example: [\"1.1.1.1\", \"179.12.26.133\"].",
+ "items": {
+ "type": "string"
+ },
+ "type": "array"
+ },
"minExtraNatIpsNeeded": {
"description": "The number of extra IPs to allocate. This will be greater than 0 only if user-specified IPs are NOT enough to allow all configured VMs to use NAT. This value is meaningful only when auto-allocation of NAT IPs is *not* used.",
"format": "int32",
@@ -40290,7 +40356,7 @@
"properties": {
"encryptionKey": {
"$ref": "ShieldedInstanceIdentityEntry",
- "description": "An Endorsement Key (EK) issued to the Shielded Instance's vTPM."
+ "description": "An Endorsement Key (EK) made by the RSA 2048 algorithm issued to the Shielded Instance's vTPM."
},
"kind": {
"default": "compute#shieldedInstanceIdentity",
@@ -40299,7 +40365,7 @@
},
"signingKey": {
"$ref": "ShieldedInstanceIdentityEntry",
- "description": "An Attestation Key (AK) issued to the Shielded Instance's vTPM."
+ "description": "An Attestation Key (AK) made by the RSA 2048 algorithm issued to the Shielded Instance's vTPM."
}
},
"type": "object"
@@ -40363,7 +40429,7 @@
"type": "string"
},
"diskSizeGb": {
- "description": "[Output Only] Size of the snapshot, specified in GB.",
+ "description": "[Output Only] Size of the source disk, specified in GB.",
"format": "int64",
"type": "string"
},
@@ -42278,7 +42344,7 @@
"type": "string"
},
"quicOverride": {
- "description": "Specifies the QUIC override policy for this TargetHttpsProxy resource. This determines whether the load balancer will attempt to negotiate QUIC with clients or not. Can specify one of NONE, ENABLE, or DISABLE. Specify ENABLE to always enable QUIC, Enables QUIC when set to ENABLE, and disables QUIC when set to DISABLE. If NONE is specified, uses the QUIC policy with no user overrides, which is equivalent to DISABLE. Not specifying this field is equivalent to specifying NONE.",
+ "description": "Specifies the QUIC override policy for this TargetHttpsProxy resource. This setting determines whether the load balancer attempts to negotiate QUIC with clients. You can specify NONE, ENABLE, or DISABLE. \n- When quic-override is set to NONE, Google manages whether QUIC is used. \n- When quic-override is set to ENABLE, the load balancer uses QUIC when possible. \n- When quic-override is set to DISABLE, the load balancer doesn't use QUIC. \n- If the quic-override flag is not specified, NONE is implied.\n-",
"enum": [
"DISABLE",
"ENABLE",
@@ -42307,7 +42373,7 @@
"type": "array"
},
"sslPolicy": {
- "description": "URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource will not have any SSL policy configured.",
+ "description": "URL of SslPolicy resource that will be associated with the TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource has no SSL policy configured.",
"type": "string"
},
"urlMap": {
@@ -44917,6 +44983,13 @@
"description": "Contain information of Nat mapping for an interface of this endpoint.",
"id": "VmEndpointNatMappingsInterfaceNatMappings",
"properties": {
+ "drainNatIpPortRanges": {
+ "description": "List of all drain IP:port-range mappings assigned to this interface. These ranges are inclusive, that is, both the first and the last ports can be used for NAT. Example: [\"2.2.2.2:12345-12355\", \"1.1.1.1:2234-2234\"].",
+ "items": {
+ "type": "string"
+ },
+ "type": "array"
+ },
"natIpPortRanges": {
"description": "A list of all IP:port-range mappings assigned to this interface. These ranges are inclusive, that is, both the first and the last ports can be used for NAT. Example: [\"2.2.2.2:12345-12355\", \"1.1.1.1:2234-2234\"].",
"items": {
@@ -44924,6 +44997,11 @@
},
"type": "array"
},
+ "numTotalDrainNatPorts": {
+ "description": "Total number of drain ports across all NAT IPs allocated to this interface. It equals to the aggregated port number in the field drain_nat_ip_port_ranges.",
+ "format": "int32",
+ "type": "integer"
+ },
"numTotalNatPorts": {
"description": "Total number of ports across all NAT IPs allocated to this interface. It equals to the aggregated port number in the field nat_ip_port_ranges.",
"format": "int32",
@@ -45648,7 +45726,7 @@
"type": "string"
},
"status": {
- "description": "[Output Only] The status of the VPN tunnel, which can be one of the following: \n- PROVISIONING: Resource is being allocated for the VPN tunnel. \n- WAITING_FOR_FULL_CONFIG: Waiting to receive all VPN-related configs from the user. Network, TargetVpnGateway, VpnTunnel, ForwardingRule, and Route resources are needed to setup the VPN tunnel. \n- FIRST_HANDSHAKE: Successful first handshake with the peer VPN. \n- ESTABLISHED: Secure session is successfully established with the peer VPN. \n- NETWORK_ERROR: Deprecated, replaced by NO_INCOMING_PACKETS \n- AUTHORIZATION_ERROR: Auth error (for example, bad shared secret). \n- NEGOTIATION_FAILURE: Handshake failed. \n- DEPROVISIONING: Resources are being deallocated for the VPN tunnel. \n- FAILED: Tunnel creation has failed and the tunnel is not ready to be used.",
+ "description": "[Output Only] The status of the VPN tunnel, which can be one of the following: \n- PROVISIONING: Resource is being allocated for the VPN tunnel. \n- WAITING_FOR_FULL_CONFIG: Waiting to receive all VPN-related configs from the user. Network, TargetVpnGateway, VpnTunnel, ForwardingRule, and Route resources are needed to setup the VPN tunnel. \n- FIRST_HANDSHAKE: Successful first handshake with the peer VPN. \n- ESTABLISHED: Secure session is successfully established with the peer VPN. \n- NETWORK_ERROR: Deprecated, replaced by NO_INCOMING_PACKETS \n- AUTHORIZATION_ERROR: Auth error (for example, bad shared secret). \n- NEGOTIATION_FAILURE: Handshake failed. \n- DEPROVISIONING: Resources are being deallocated for the VPN tunnel. \n- FAILED: Tunnel creation has failed and the tunnel is not ready to be used. \n- NO_INCOMING_PACKETS: No incoming packets from peer. \n- REJECTED: Tunnel configuration was rejected, can be result of being blacklisted. \n- ALLOCATING_RESOURCES: Cloud VPN is in the process of allocating all required resources. \n- STOPPED: Tunnel is stopped due to its Forwarding Rules being deleted for Classic VPN tunnels or the project is in frozen state. \n- PEER_IDENTITY_MISMATCH: Peer identity does not match peer IP, probably behind NAT. \n- TS_NARROWING_NOT_ALLOWED: Traffic selector narrowing not allowed for an HA-VPN tunnel.",
"enum": [
"ALLOCATING_RESOURCES",
"AUTHORIZATION_ERROR",
diff --git a/vendor/google.golang.org/api/compute/v1/compute-gen.go b/vendor/google.golang.org/api/compute/v1/compute-gen.go
index 3ea5e698c2551..06da7dba93ad2 100644
--- a/vendor/google.golang.org/api/compute/v1/compute-gen.go
+++ b/vendor/google.golang.org/api/compute/v1/compute-gen.go
@@ -2333,7 +2333,7 @@ type AllocationSpecificSKUReservation struct {
// Count: Specifies the number of resources that are allocated.
Count int64 `json:"count,omitempty,string"`
- // InUseCount: [OutputOnly] Indicates how many instances are in use.
+ // InUseCount: [Output Only] Indicates how many instances are in use.
InUseCount int64 `json:"inUseCount,omitempty,string"`
// InstanceProperties: The instance properties for the reservation.
@@ -2840,6 +2840,13 @@ type Autoscaler struct {
SelfLink string `json:"selfLink,omitempty"`
// Status: [Output Only] The status of the autoscaler configuration.
+ // Current set of possible values: PENDING: Autoscaler backend hasn't
+ // read new/updated configuration DELETING: Configuration is being
+ // deleted ACTIVE: Configuration is acknowledged to be effective. Some
+ // warnings might or might not be present in the status_details field.
+ // ERROR: Configuration has errors. Actionable for users. Details are
+ // present in the status_details field. New values might be added in the
+ // future.
//
// Possible values:
// "ACTIVE"
@@ -3203,7 +3210,42 @@ type AutoscalerStatusDetails struct {
// Message: The status message.
Message string `json:"message,omitempty"`
- // Type: The type of error returned.
+ // Type: The type of error, warning or notice returned. Current set of
+ // possible values: ALL_INSTANCES_UNHEALTHY (WARNING): All instances in
+ // the instance group are unhealthy (not in RUNNING state).
+ // BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service
+ // attached to the instance group. CAPPED_AT_MAX_NUM_REPLICAS (WARNING):
+ // Autoscaler recommends size bigger than maxNumReplicas.
+ // CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric
+ // samples are not exported often enough to be a credible base for
+ // autoscaling. CUSTOM_METRIC_INVALID (ERROR): The custom metric that
+ // was specified does not exist or does not have the necessary labels.
+ // MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to
+ // maxNumReplicas. This means the autoscaler cannot add or remove
+ // instances from the instance group. MISSING_CUSTOM_METRIC_DATA_POINTS
+ // (WARNING): The autoscaler did not receive any data from the custom
+ // metric configured for autoscaling. MISSING_LOAD_BALANCING_DATA_POINTS
+ // (WARNING): The autoscaler is configured to scale based on a load
+ // balancing signal but the instance group has not received any requests
+ // from the load balancer. MODE_OFF (WARNING): Autoscaling is turned
+ // off. The number of instances in the group won't change automatically.
+ // The autoscaling configuration is preserved. MODE_ONLY_UP (WARNING):
+ // Autoscaling is in the "Autoscale only up" mode. Instances in the
+ // group will be only added. MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The
+ // instance group cannot be autoscaled because it has more than one
+ // backend service attached to it. NOT_ENOUGH_QUOTA_AVAILABLE (ERROR):
+ // Exceeded quota for necessary resources, such as CPU, number of
+ // instances and so on. REGION_RESOURCE_STOCKOUT (ERROR): Showed only
+ // for regional autoscalers: there is a resource stockout in the chosen
+ // region. SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be
+ // scaled does not exist.
+ // UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR):
+ // Autoscaling does not work with an HTTP/S load balancer that has been
+ // configured for maxRate. ZONE_RESOURCE_STOCKOUT (ERROR): For zonal
+ // autoscalers: there is a resource stockout in the chosen zone. For
+ // regional autoscalers: in at least one of the zones you're using there
+ // is a resource stockout. New values might be added in the future. Some
+ // of the values might not be available in all API versions.
//
// Possible values:
// "ALL_INSTANCES_UNHEALTHY"
@@ -3214,6 +3256,7 @@ type AutoscalerStatusDetails struct {
// "MIN_EQUALS_MAX"
// "MISSING_CUSTOM_METRIC_DATA_POINTS"
// "MISSING_LOAD_BALANCING_DATA_POINTS"
+ // "MODE_OFF"
// "MORE_THAN_ONE_BACKEND_SERVICE"
// "NOT_ENOUGH_QUOTA_AVAILABLE"
// "REGION_RESOURCE_STOCKOUT"
@@ -4088,20 +4131,12 @@ func (s *BackendBucketListWarningData) MarshalJSON() ([]byte, error) {
// BackendService: Represents a Backend Service resource.
//
+// A backend service contains configuration values for Google Cloud
+// Platform load balancing services.
//
+// For more information, read Backend Services.
//
-// Backend services must have an associated health check. Backend
-// services also store information about session affinity. For more
-// information, read Backend Services.
-//
-// A backendServices resource represents a global backend service.
-// Global backend services are used for HTTP(S), SSL Proxy, TCP Proxy
-// load balancing and Traffic Director.
-//
-// A regionBackendServices resource represents a regional backend
-// service. Regional backend services are used for internal TCP/UDP load
-// balancing. For more information, read Internal TCP/UDP Load
-// balancing. (== resource_for v1.backendService ==) (== resource_for
+// (== resource_for v1.backendService ==) (== resource_for
// beta.backendService ==)
type BackendService struct {
// AffinityCookieTtlSec: If set to 0, the cookie is non-persistent and
@@ -4116,7 +4151,8 @@ type BackendService struct {
CdnPolicy *BackendServiceCdnPolicy `json:"cdnPolicy,omitempty"`
// CircuitBreakers: Settings controlling the volume of connections to a
- // backend service.
+ // backend service. If not set, this feature is considered
+ // disabled.
//
// This field is applicable to either:
// - A regional backend service with the service_protocol set to HTTP,
@@ -4193,10 +4229,13 @@ type BackendService struct {
// for backend services.
Kind string `json:"kind,omitempty"`
- // LoadBalancingScheme: Indicates whether the backend service will be
- // used with internal or external load balancing. A backend service
- // created for one type of load balancing cannot be used with the other.
- // Possible values are INTERNAL and EXTERNAL.
+ // LoadBalancingScheme: Specifies the load balancer type. Choose
+ // EXTERNAL for load balancers that receive traffic from external
+ // clients. Choose INTERNAL for Internal TCP/UDP Load Balancing. Choose
+ // INTERNAL_MANAGED for Internal HTTP(S) Load Balancing. Choose
+ // INTERNAL_SELF_MANAGED for Traffic Director. A backend service created
+ // for one type of load balancing cannot be used with another. For more
+ // information, refer to Choosing a load balancer.
//
// Possible values:
// "EXTERNAL"
@@ -4253,8 +4292,11 @@ type BackendService struct {
// last character, which cannot be a dash.
Name string `json:"name,omitempty"`
- // OutlierDetection: Settings controlling eviction of unhealthy hosts
- // from the load balancing pool. This field is applicable to either:
+ // OutlierDetection: Settings controlling the eviction of unhealthy
+ // hosts from the load balancing pool for the backend service. If not
+ // set, this feature is considered disabled.
+ //
+ // This field is applicable to either:
// - A regional backend service with the service_protocol set to HTTP,
// HTTPS, or HTTP2, and load_balancing_scheme set to INTERNAL_MANAGED.
//
@@ -4285,10 +4327,10 @@ type BackendService struct {
// Protocol: The protocol this BackendService uses to communicate with
// backends.
//
- // Possible values are HTTP, HTTPS, TCP, SSL, or UDP, depending on the
- // chosen load balancer or Traffic Director configuration. Refer to the
- // documentation for the load balancer or for Traffic director for more
- // information.
+ // Possible values are HTTP, HTTPS, HTTP2, TCP, SSL, or UDP, depending
+ // on the chosen load balancer or Traffic Director configuration. Refer
+ // to the documentation for the load balancer or for Traffic Director
+ // for more information.
//
// Possible values:
// "HTTP"
@@ -5110,25 +5152,25 @@ func (s *CacheKeyPolicy) MarshalJSON() ([]byte, error) {
// backend service.
type CircuitBreakers struct {
// MaxConnections: The maximum number of connections to the backend
- // cluster. If not specified, the default is 1024.
+ // service. If not specified, there is no limit.
MaxConnections int64 `json:"maxConnections,omitempty"`
// MaxPendingRequests: The maximum number of pending requests allowed to
- // the backend cluster. If not specified, the default is 1024.
+ // the backend service. If not specified, there is no limit.
MaxPendingRequests int64 `json:"maxPendingRequests,omitempty"`
// MaxRequests: The maximum number of parallel requests that allowed to
- // the backend cluster. If not specified, the default is 1024.
+ // the backend service. If not specified, there is no limit.
MaxRequests int64 `json:"maxRequests,omitempty"`
- // MaxRequestsPerConnection: Maximum requests for a single backend
- // connection. This parameter is respected by both the HTTP/1.1 and
- // HTTP/2 implementations. If not specified, there is no limit. Setting
- // this parameter to 1 will effectively disable keep alive.
+ // MaxRequestsPerConnection: Maximum requests for a single connection to
+ // the backend service. This parameter is respected by both the HTTP/1.1
+ // and HTTP/2 implementations. If not specified, there is no limit.
+ // Setting this parameter to 1 will effectively disable keep alive.
MaxRequestsPerConnection int64 `json:"maxRequestsPerConnection,omitempty"`
// MaxRetries: The maximum number of parallel retries allowed to the
- // backend cluster. If not specified, the default is 3.
+ // backend cluster. If not specified, the default is 1.
MaxRetries int64 `json:"maxRetries,omitempty"`
// ForceSendFields is a list of field names (e.g. "MaxConnections") to
@@ -7290,6 +7332,7 @@ func (s *DiskTypesScopedListWarningData) MarshalJSON() ([]byte, error) {
type DisksAddResourcePoliciesRequest struct {
// ResourcePolicies: Resource policies to be added to this disk.
+ // Currently you can only specify one policy here.
ResourcePolicies []string `json:"resourcePolicies,omitempty"`
// ForceSendFields is a list of field names (e.g. "ResourcePolicies") to
@@ -8451,45 +8494,17 @@ func (s *FixedOrPercent) MarshalJSON() ([]byte, error) {
// ForwardingRule: Represents a Forwarding Rule resource.
//
+// A forwarding rule and its corresponding IP address represent the
+// frontend configuration of a Google Cloud Platform load balancer.
+// Forwarding rules can also reference target instances and Cloud VPN
+// Classic gateways (targetVpnGateway).
//
+// For more information, read Forwarding rule concepts and Using
+// protocol forwarding.
//
-// A forwardingRules resource represents a regional forwarding
-// rule.
-//
-// Regional external forwarding rules can reference any of the following
-// resources:
-//
-// - A target instance
-// - A Cloud VPN Classic gateway (targetVpnGateway),
-// - A target pool for a Network Load Balancer
-// - A global target HTTP(S) proxy for an HTTP(S) load balancer using
-// Standard Tier
-// - A target SSL proxy for a SSL Proxy load balancer using Standard
-// Tier
-// - A target TCP proxy for a TCP Proxy load balancer using Standard
-// Tier.
-//
-// Regional internal forwarding rules can reference the backend service
-// of an internal TCP/UDP load balancer.
-//
-// For regional internal forwarding rules, the following applies:
-// - If the loadBalancingScheme for the load balancer is INTERNAL, then
-// the forwarding rule references a regional internal backend service.
-//
-// - If the loadBalancingScheme for the load balancer is
-// INTERNAL_MANAGED, then the forwarding rule must reference a regional
-// target HTTP(S) proxy.
-//
-// For more information, read Using Forwarding rules.
-//
-// A globalForwardingRules resource represents a global forwarding
-// rule.
-//
-// Global forwarding rules are only used by load balancers that use
-// Premium Tier. (== resource_for beta.forwardingRules ==) (==
-// resource_for v1.forwardingRules ==) (== resource_for
-// beta.globalForwardingRules ==) (== resource_for
-// v1.globalForwardingRules ==) (== resource_for
+// (== resource_for beta.forwardingRules ==) (== resource_for
+// v1.forwardingRules ==) (== resource_for beta.globalForwardingRules
+// ==) (== resource_for v1.globalForwardingRules ==) (== resource_for
// beta.regionForwardingRules ==) (== resource_for
// v1.regionForwardingRules ==)
type ForwardingRule struct {
@@ -8513,9 +8528,14 @@ type ForwardingRule struct {
// IPProtocol: The IP protocol to which this rule applies. Valid options
// are TCP, UDP, ESP, AH, SCTP or ICMP.
//
- // When the load balancing scheme is INTERNAL, only TCP and UDP are
- // valid. When the load balancing scheme is INTERNAL_SELF_MANAGED, only
- // TCPis valid.
+ // For Internal TCP/UDP Load Balancing, the load balancing scheme is
+ // INTERNAL, and one of TCP or UDP is valid. For Traffic Director, the
+ // load balancing scheme is INTERNAL_SELF_MANAGED, and only TCP is valid.
+ // For Internal HTTP(S) Load Balancing, the load balancing scheme is
+ // INTERNAL_MANAGED, and only TCP is valid. For HTTP(S), SSL Proxy, and
+ // TCP Proxy Load Balancing, the load balancing scheme is EXTERNAL and
+ // only TCP is valid. For Network TCP/UDP Load Balancing, the load
+ // balancing scheme is EXTERNAL, and one of TCP or UDP is valid.
//
// Possible values:
// "AH"
@@ -8569,14 +8589,21 @@ type ForwardingRule struct {
// compute#forwardingRule for Forwarding Rule resources.
Kind string `json:"kind,omitempty"`
- // LoadBalancingScheme: This signifies what the ForwardingRule will be
- // used for and can only take the following values: INTERNAL,
- // INTERNAL_SELF_MANAGED, EXTERNAL. The value of INTERNAL means that
- // this will be used for Internal Network Load Balancing (TCP, UDP). The
- // value of INTERNAL_SELF_MANAGED means that this will be used for
- // Internal Global HTTP(S) LB. The value of EXTERNAL means that this
- // will be used for External Load Balancing (HTTP(S) LB, External
- // TCP/UDP LB, SSL Proxy)
+ // LoadBalancingScheme: Specifies the forwarding rule type. EXTERNAL is
+ // used for: - Classic Cloud VPN gateways - Protocol forwarding to VMs
+ // from an external IP address - The following load balancers: HTTP(S),
+ // SSL Proxy, TCP Proxy, and Network TCP/UDP.
+ //
+ // INTERNAL is used for: - Protocol forwarding to VMs from an internal
+ // IP address - Internal TCP/UDP load balancers
+ //
+ // INTERNAL_MANAGED is used for: - Internal HTTP(S) load
+ // balancers
+ //
+ // INTERNAL_SELF_MANAGED is used for: - Traffic Director
+ //
+ // For more information about forwarding rules, refer to Forwarding rule
+ // concepts.
//
// Possible values:
// "EXTERNAL"
@@ -11093,10 +11120,13 @@ func (s *HttpRedirectAction) MarshalJSON() ([]byte, error) {
// HttpRetryPolicy: The retry policy associates with HttpRouteRule
type HttpRetryPolicy struct {
// NumRetries: Specifies the allowed number retries. This number must be
- // > 0.
+ // > 0. If not specified, defaults to 1.
NumRetries int64 `json:"numRetries,omitempty"`
// PerTryTimeout: Specifies a non-zero timeout per retry attempt.
+ // If not specified, will use the timeout set in HttpRouteAction. If
+ // timeout in HttpRouteAction is not set, will use the largest timeout
+ // among all backend services associated with the route.
PerTryTimeout *Duration `json:"perTryTimeout,omitempty"`
	// RetryConditions: Specifies one or more conditions when this retry rule
@@ -11176,10 +11206,11 @@ type HttpRouteAction struct {
RetryPolicy *HttpRetryPolicy `json:"retryPolicy,omitempty"`
// Timeout: Specifies the timeout for the selected route. Timeout is
- // computed from the time the request is has been fully processed (i.e.
+ // computed from the time the request has been fully processed (i.e.
// end-of-stream) up until the response has been completely processed.
// Timeout includes all retries.
- // If not specified, the default value is 15 seconds.
+ // If not specified, will use the largest timeout among all backend
+ // services associated with the route.
Timeout *Duration `json:"timeout,omitempty"`
// UrlRewrite: The spec to modify the URL of the request, prior to
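
The fallback chain described above (perTryTimeout, then the route timeout, then the largest backend-service timeout) is easier to see in a sketch; the retry-condition strings are assumptions, not taken from this diff:

```
package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ra := &compute.HttpRouteAction{
		// Route-wide deadline; includes all retries. If unset, the largest
		// backend-service timeout on the route applies.
		Timeout: &compute.Duration{Seconds: 30},
		RetryPolicy: &compute.HttpRetryPolicy{
			NumRetries:    3,                             // must be > 0; defaults to 1 when unset
			PerTryTimeout: &compute.Duration{Seconds: 5}, // falls back to Timeout when unset
			// Assumed condition names; check the UrlMap reference before use.
			RetryConditions: []string{"5xx", "connect-failure"},
		},
	}
	fmt.Printf("deadline %ds, %d retries\n", ra.Timeout.Seconds, ra.RetryPolicy.NumRetries)
}
```
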
@@ -11225,6 +11256,11 @@ func (s *HttpRouteAction) MarshalJSON() ([]byte, error) {
// request and the corresponding routing action that load balancing
// proxies will perform.
type HttpRouteRule struct {
+ // Description: The short description conveying the intent of this
+ // routeRule.
+ // The description can have a maximum length of 1024 characters.
+ Description string `json:"description,omitempty"`
+
// HeaderAction: Specifies changes to request and response headers that
// need to take effect for the selected backendService.
// The headerAction specified here are applied before the matching
@@ -11235,6 +11271,22 @@ type HttpRouteRule struct {
MatchRules []*HttpRouteRuleMatch `json:"matchRules,omitempty"`
+ // Priority: For routeRules within a given pathMatcher, priority
+ // determines the order in which load balancer will interpret
+ // routeRules. RouteRules are evaluated in order of priority, from the
+ // lowest to highest number. The priority of a rule decreases as its
+ // number increases (1, 2, 3, N+1). The first rule that matches the
+ // request is applied.
+ // You cannot configure two or more routeRules with the same priority.
+ // Priority for each rule must be set to a number between 0 and
+ // 2147483647 inclusive.
+ // Priority numbers can have gaps, which enable you to add or remove
+ // rules in the future without affecting the rest of the rules. For
+ // example, 1, 2, 3, 4, 5, 9, 12, 16 is a valid series of priority
+ // numbers to which you could add rules numbered from 6 to 8, 10 to 11,
+ // and 13 to 15 in the future without any impact on existing rules.
+ Priority int64 `json:"priority,omitempty"`
+
// RouteAction: In response to a matching matchRule, the load balancer
// performs advanced routing actions like URL rewrites, header
// transformations, etc. prior to forwarding the request to the selected
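
The gapped-priority advice above maps directly onto the new fields; a sketch with hypothetical backend services:

```
package main

import (
	compute "google.golang.org/api/compute/v1"
)

func main() {
	_ = []*compute.HttpRouteRule{
		{
			Priority:    1, // lowest number is evaluated first
			Description: "health endpoint",
			MatchRules:  []*compute.HttpRouteRuleMatch{{FullPathMatch: "/api/v1/health"}},
			Service:     "global/backendServices/health-bes", // hypothetical
		},
		{
			Priority:    10, // gap left so rules 2-9 can be inserted later
			Description: "catch-all",
			MatchRules:  []*compute.HttpRouteRuleMatch{{PrefixMatch: "/"}},
			Service:     "global/backendServices/default-bes", // hypothetical
		},
	}
}
```
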
@@ -11260,7 +11312,7 @@ type HttpRouteRule struct {
// If urlRedirect is specified, service or routeAction must not be set.
UrlRedirect *HttpRedirectAction `json:"urlRedirect,omitempty"`
- // ForceSendFields is a list of field names (e.g. "HeaderAction") to
+ // ForceSendFields is a list of field names (e.g. "Description") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -11268,7 +11320,7 @@ type HttpRouteRule struct {
// used to include empty fields in Patch requests.
ForceSendFields []string `json:"-"`
- // NullFields is a list of field names (e.g. "HeaderAction") to include
+ // NullFields is a list of field names (e.g. "Description") to include
// in API requests with the JSON null value. By default, fields with
// empty values are omitted from API requests. However, any field with
// an empty value appearing in NullFields will be sent to the server as
@@ -17627,19 +17679,22 @@ func (s *LogConfigCloudAuditOptions) MarshalJSON() ([]byte, error) {
//
// Examples: counter { metric: "/debug_access_count" field:
// "iam_principal" } ==> increment counter
-// /iam/policy/backend_debug_access_count {iam_principal=[value of
+// /iam/policy/debug_access_count {iam_principal=[value of
// IAMContext.principal]}
//
-// At this time we do not support multiple field names (though this may
-// be supported in the future).
+// TODO(b/141846426): Consider supporting "authority" and
+// "iam_principal" fields in the same counter.
type LogConfigCounterOptions struct {
+ // CustomFields: Custom fields.
+ CustomFields []*LogConfigCounterOptionsCustomField `json:"customFields,omitempty"`
+
// Field: The field value to attribute.
Field string `json:"field,omitempty"`
// Metric: The metric to update.
Metric string `json:"metric,omitempty"`
- // ForceSendFields is a list of field names (e.g. "Field") to
+ // ForceSendFields is a list of field names (e.g. "CustomFields") to
// unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
@@ -17647,7 +17702,42 @@ type LogConfigCounterOptions struct {
// used to include empty fields in Patch requests.
ForceSendFields []string `json:"-"`
- // NullFields is a list of field names (e.g. "Field") to include in API
+ // NullFields is a list of field names (e.g. "CustomFields") to include
+ // in API requests with the JSON null value. By default, fields with
+ // empty values are omitted from API requests. However, any field with
+ // an empty value appearing in NullFields will be sent to the server as
+ // null. It is an error if a field in this list has a non-empty value.
+ // This may be used to include null fields in Patch requests.
+ NullFields []string `json:"-"`
+}
+
+func (s *LogConfigCounterOptions) MarshalJSON() ([]byte, error) {
+ type NoMethod LogConfigCounterOptions
+ raw := NoMethod(*s)
+ return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
+}
+
+// LogConfigCounterOptionsCustomField: Custom fields. These can be used
+// to create a counter with arbitrary field/value pairs. See:
+// go/rpcsp-custom-fields.
+type LogConfigCounterOptionsCustomField struct {
+ // Name: Name is the field name.
+ Name string `json:"name,omitempty"`
+
+ // Value: Value is the field value. It is important that in contrast to
+ // the CounterOptions.field, the value here is a constant that is not
+ // derived from the IAMContext.
+ Value string `json:"value,omitempty"`
+
+ // ForceSendFields is a list of field names (e.g. "Name") to
+ // unconditionally include in API requests. By default, fields with
+ // empty values are omitted from API requests. However, any non-pointer,
+ // non-interface field appearing in ForceSendFields will be sent to the
+ // server regardless of whether the field is empty or not. This may be
+ // used to include empty fields in Patch requests.
+ ForceSendFields []string `json:"-"`
+
+ // NullFields is a list of field names (e.g. "Name") to include in API
// requests with the JSON null value. By default, fields with empty
// values are omitted from API requests. However, any field with an
// empty value appearing in NullFields will be sent to the server as
@@ -17656,8 +17746,8 @@ type LogConfigCounterOptions struct {
NullFields []string `json:"-"`
}
-func (s *LogConfigCounterOptions) MarshalJSON() ([]byte, error) {
- type NoMethod LogConfigCounterOptions
+func (s *LogConfigCounterOptionsCustomField) MarshalJSON() ([]byte, error) {
+ type NoMethod LogConfigCounterOptionsCustomField
raw := NoMethod(*s)
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
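
A sketch of the new custom-field counter shape introduced above; the metric and field echo the example in the comment, while the custom pair is hypothetical:

```
package main

import (
	compute "google.golang.org/api/compute/v1"
)

func main() {
	_ = &compute.LogConfigCounterOptions{
		Metric: "/iam/policy/debug_access_count",
		Field:  "iam_principal",
		CustomFields: []*compute.LogConfigCounterOptionsCustomField{
			// Value is a constant, not derived from the IAMContext.
			{Name: "reason", Value: "audit"}, // hypothetical pair
		},
	}
}
```
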
@@ -18828,7 +18918,12 @@ func (s *NetworkEndpoint) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
-// NetworkEndpointGroup: Represents a collection of network endpoints.
+// NetworkEndpointGroup: Represents a collection of network
+// endpoints.
+//
+// For more information read Setting up network endpoint groups in load
+// balancing. (== resource_for v1.networkEndpointGroups ==) (==
+// resource_for beta.networkEndpointGroups ==)
type NetworkEndpointGroup struct {
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
// format.
@@ -20103,7 +20198,7 @@ func (s *NetworksUpdatePeeringRequest) MarshalJSON() ([]byte, error) {
// projects, or to group your instances together on the same host
// hardware. For more information, read Sole-tenant nodes. (==
// resource_for beta.nodeGroups ==) (== resource_for v1.nodeGroups ==)
-// NextID: 15
+// NextID: 16
type NodeGroup struct {
// CreationTimestamp: [Output Only] Creation timestamp in RFC3339 text
// format.
@@ -22768,12 +22863,13 @@ func (s *OperationsScopedListWarningData) MarshalJSON() ([]byte, error) {
return gensupport.MarshalJSON(raw, s.ForceSendFields, s.NullFields)
}
-// OutlierDetection: Settings controlling eviction of unhealthy hosts
-// from the load balancing pool.
+// OutlierDetection: Settings controlling the eviction of unhealthy
+// hosts from the load balancing pool for the backend service.
type OutlierDetection struct {
// BaseEjectionTime: The base time that a host is ejected for. The real
- // time is equal to the base time multiplied by the number of times the
- // host has been ejected. Defaults to 30000ms or 30s.
+ // ejection time is equal to the base ejection time multiplied by the
+ // number of times the host has been ejected. Defaults to 30000ms or
+ // 30s.
BaseEjectionTime *Duration `json:"baseEjectionTime,omitempty"`
// ConsecutiveErrors: Number of errors before a host is ejected from the
@@ -22784,19 +22880,19 @@ type OutlierDetection struct {
// ConsecutiveGatewayFailure: The number of consecutive gateway failures
// (502, 503, 504 status or connection errors that are mapped to one of
// those status codes) before a consecutive gateway failure ejection
- // occurs. Defaults to 5.
+ // occurs. Defaults to 3.
ConsecutiveGatewayFailure int64 `json:"consecutiveGatewayFailure,omitempty"`
// EnforcingConsecutiveErrors: The percentage chance that a host will be
// actually ejected when an outlier status is detected through
// consecutive 5xx. This setting can be used to disable ejection or to
- // ramp it up slowly. Defaults to 100.
+ // ramp it up slowly. Defaults to 0.
EnforcingConsecutiveErrors int64 `json:"enforcingConsecutiveErrors,omitempty"`
// EnforcingConsecutiveGatewayFailure: The percentage chance that a host
// will be actually ejected when an outlier status is detected through
// consecutive gateway failures. This setting can be used to disable
- // ejection or to ramp it up slowly. Defaults to 0.
+ // ejection or to ramp it up slowly. Defaults to 100.
EnforcingConsecutiveGatewayFailure int64 `json:"enforcingConsecutiveGatewayFailure,omitempty"`
// EnforcingSuccessRate: The percentage chance that a host will be
@@ -22807,11 +22903,11 @@ type OutlierDetection struct {
// Interval: Time interval between ejection sweep analysis. This can
// result in both new ejections as well as hosts being returned to
- // service. Defaults to 10 seconds.
+ // service. Defaults to 1 second.
Interval *Duration `json:"interval,omitempty"`
// MaxEjectionPercent: Maximum percentage of hosts in the load balancing
- // pool for the backend service that can be ejected. Defaults to 10%.
+ // pool for the backend service that can be ejected. Defaults to 50%.
MaxEjectionPercent int64 `json:"maxEjectionPercent,omitempty"`
// SuccessRateMinimumHosts: The number of hosts in a cluster that must
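
Several defaults above flipped (the two enforcement percentages swapped, the sweep interval went from 10s to 1s, and max ejection from 10% to 50%), so setting them explicitly is the safe path. A sketch using only fields from this struct:

```
package main

import (
	"fmt"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	od := &compute.OutlierDetection{
		BaseEjectionTime:           &compute.Duration{Seconds: 30},
		ConsecutiveGatewayFailure:  3,   // new default, down from 5
		EnforcingConsecutiveErrors: 100, // old default; the new default is 0 (disabled)
		Interval:                   &compute.Duration{Seconds: 10}, // new default is 1s
		MaxEjectionPercent:         10,  // new default is 50%
	}
	fmt.Printf("sweep every %ds\n", od.Interval.Seconds)
}
```
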
@@ -23028,29 +23124,36 @@ func (s *PathRule) MarshalJSON() ([]byte, error) {
//
//
//
-// A `Policy` consists of a list of `bindings`. A `binding` binds a list
-// of `members` to a `role`, where the members can be user accounts,
-// Google groups, Google domains, and service accounts. A `role` is a
-// named list of permissions defined by IAM.
+// A `Policy` is a collection of `bindings`. A `binding` binds one or
+// more `members` to a single `role`. Members can be user accounts,
+// service accounts, Google groups, and domains (such as G Suite). A
+// `role` is a named list of permissions (defined by IAM or configured
+// by users). A `binding` can optionally specify a `condition`, which is
+// a logic expression that further constrains the role binding based on
+// attributes about the request and/or target resource.
//
// **JSON Example**
//
-// { "bindings": [ { "role": "roles/owner", "members": [
-// "user:[email protected]", "group:[email protected]",
+// { "bindings": [ { "role": "roles/resourcemanager.organizationAdmin",
+// "members": [ "user:[email protected]", "group:[email protected]",
// "domain:google.com",
-// "serviceAccount:[email protected]" ] }, {
-// "role": "roles/viewer", "members": ["user:[email protected]"] } ]
+// "serviceAccount:[email protected]" ] }, {
+// "role": "roles/resourcemanager.organizationViewer", "members":
+// ["user:[email protected]"], "condition": { "title": "expirable access",
+// "description": "Does not grant access after Sep 2020", "expression":
+// "request.time < timestamp('2020-10-01T00:00:00.000Z')", } } ]
// }
//
// **YAML Example**
//
// bindings: - members: - user:[email protected] -
// group:[email protected] - domain:google.com -
-// serviceAccount:[email protected] role:
-// roles/owner - members: - user:[email protected] role:
-// roles/viewer
-//
-//
+// serviceAccount:[email protected] role:
+// roles/resourcemanager.organizationAdmin - members: -
+// user:[email protected] role: roles/resourcemanager.organizationViewer
+// condition: title: expirable access description: Does not grant access
+// after Sep 2020 expression: request.time <
+// timestamp('2020-10-01T00:00:00.000Z')
//
// For a description of IAM and its features, see the [IAM developer's
// guide](https://cloud.google.com/iam/docs).
@@ -23059,8 +23162,9 @@ type Policy struct {
// policy.
AuditConfigs []*AuditConfig `json:"auditConfigs,omitempty"`
- // Bindings: Associates a list of `members` to a `role`. `bindings` with
- // no members will result in an error.
+ // Bindings: Associates a list of `members` to a `role`. Optionally may
+ // specify a `condition` that determines when binding is in effect.
+ // `bindings` with no members will result in an error.
Bindings []*Binding `json:"bindings,omitempty"`
// Etag: `etag` is used for optimistic concurrency control as a way to
@@ -23073,7 +23177,9 @@ type Policy struct {
// to the same version of the policy.
//
// If no `etag` is provided in the call to `setIamPolicy`, then the
- // existing policy is overwritten.
+ // existing policy is overwritten. Due to blind-set semantics of an
+ // etag-less policy, 'setIamPolicy' will not fail even if either of
+ // incoming or stored policy does not meet the version requirements.
Etag string `json:"etag,omitempty"`
IamOwned bool `json:"iamOwned,omitempty"`
@@ -23093,9 +23199,14 @@ type Policy struct {
// Valid values are 0, 1, and 3. Requests specifying an invalid value
// will be rejected.
//
- // Policies with any conditional bindings must specify version 3.
- // Policies without any conditional bindings may specify any valid value
- // or leave the field unset.
+ // Operations affecting conditional bindings must specify version 3.
+ // This can be either setting a conditional policy, modifying a
+ // conditional binding, or removing a conditional binding from the
+ // stored conditional policy. Operations on non-conditional policies may
+ // specify any valid value or leave the field unset.
+ //
+ // If no etag is provided in the call to `setIamPolicy`, any version
+ // compliance checks on the incoming and/or stored policy is skipped.
Version int64 `json:"version,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -23419,6 +23530,7 @@ type Quota struct {
// "INTERCONNECTS"
// "INTERCONNECT_ATTACHMENTS_PER_REGION"
// "INTERCONNECT_ATTACHMENTS_TOTAL_MBPS"
+ // "INTERCONNECT_TOTAL_GBPS"
// "INTERNAL_ADDRESSES"
// "IN_USE_ADDRESSES"
// "IN_USE_BACKUP_SCHEDULES"
@@ -25071,7 +25183,7 @@ func (s *RequestMirrorPolicy) MarshalJSON() ([]byte, error) {
// (== resource_for beta.reservations ==) (== resource_for
// v1.reservations ==)
type Reservation struct {
- // Commitment: [OutputOnly] Full or partial URL to a parent commitment.
+ // Commitment: [Output Only] Full or partial URL to a parent commitment.
// This field displays for reservations that are tied to a commitment.
Commitment string `json:"commitment,omitempty"`
@@ -26618,6 +26730,15 @@ type Route struct {
// projects/project/global/gateways/default-internet-gateway
NextHopGateway string `json:"nextHopGateway,omitempty"`
+ // NextHopIlb: The URL to a forwarding rule of type
+ // loadBalancingScheme=INTERNAL that should handle matching packets. You
+ // can only specify the forwarding rule as a partial or full URL. For
+ // example, the following are all valid URLs:
+ // -
+ // https://www.googleapis.com/compute/v1/projects/project/regions/region/forwardingRules/forwardingRule
+ // - regions/region/forwardingRules/forwardingRule
+ NextHopIlb string `json:"nextHopIlb,omitempty"`
+
// NextHopInstance: The URL to an instance that should handle matching
// packets. You can specify this as a full or partial URL. For
// example:
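
A sketch of the new NextHopIlb field, using the partial-URL form the comment documents; the names and ranges are hypothetical:

```
package main

import (
	compute "google.golang.org/api/compute/v1"
)

func main() {
	_ = &compute.Route{
		Name:      "route-to-ilb",
		Network:   "global/networks/default",
		DestRange: "10.100.0.0/16",
		Priority:  800,
		// Partial URL to an INTERNAL forwarding rule, as documented above.
		NextHopIlb: "regions/us-central1/forwardingRules/ilb-rule",
	}
}
```
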
@@ -27598,6 +27719,11 @@ func (s *RouterListWarningData) MarshalJSON() ([]byte, error) {
// that would be used for NAT. GCP would auto-allocate ephemeral IPs if
// no external IPs are provided.
type RouterNat struct {
+ // DrainNatIps: A list of URLs of the IP resources to be drained. These
+ // IPs must be valid static external IPs that have been assigned to the
+ // NAT. These IPs should be used for updating/patching a NAT only.
+ DrainNatIps []string `json:"drainNatIps,omitempty"`
+
// IcmpIdleTimeoutSec: Timeout (in seconds) for ICMP connections.
// Defaults to 30s if not set.
IcmpIdleTimeoutSec int64 `json:"icmpIdleTimeoutSec,omitempty"`
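
DrainNatIps is documented as update/patch-only, so the natural flow is to move an address from NatIps to DrainNatIps on an existing NAT. A sketch with hypothetical address URLs:

```
package main

import (
	compute "google.golang.org/api/compute/v1"
)

func main() {
	_ = &compute.RouterNat{
		Name: "nat-1",
		// Keep serving new connections from the remaining static IP...
		NatIps: []string{"projects/p/regions/us-central1/addresses/nat-ip-1"},
		// ...while established connections on this one drain out.
		DrainNatIps: []string{"projects/p/regions/us-central1/addresses/nat-ip-2"},
	}
}
```
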
@@ -27671,21 +27797,20 @@ type RouterNat struct {
// to 30s if not set.
UdpIdleTimeoutSec int64 `json:"udpIdleTimeoutSec,omitempty"`
- // ForceSendFields is a list of field names (e.g. "IcmpIdleTimeoutSec")
- // to unconditionally include in API requests. By default, fields with
+ // ForceSendFields is a list of field names (e.g. "DrainNatIps") to
+ // unconditionally include in API requests. By default, fields with
// empty values are omitted from API requests. However, any non-pointer,
// non-interface field appearing in ForceSendFields will be sent to the
// server regardless of whether the field is empty or not. This may be
// used to include empty fields in Patch requests.
ForceSendFields []string `json:"-"`
- // NullFields is a list of field names (e.g. "IcmpIdleTimeoutSec") to
- // include in API requests with the JSON null value. By default, fields
- // with empty values are omitted from API requests. However, any field
- // with an empty value appearing in NullFields will be sent to the
- // server as null. It is an error if a field in this list has a
- // non-empty value. This may be used to include null fields in Patch
- // requests.
+ // NullFields is a list of field names (e.g. "DrainNatIps") to include
+ // in API requests with the JSON null value. By default, fields with
+ // empty values are omitted from API requests. However, any field with
+ // an empty value appearing in NullFields will be sent to the server as
+ // null. It is an error if a field in this list has a non-empty value.
+ // This may be used to include null fields in Patch requests.
NullFields []string `json:"-"`
}
@@ -27701,9 +27826,12 @@ type RouterNatLogConfig struct {
// default.
Enable bool `json:"enable,omitempty"`
- // Filter: Specifies the desired filtering of logs on this NAT. If
+ // Filter: Specify the desired filtering of logs on this NAT. If
// unspecified, logs are exported for all connections handled by this
- // NAT.
+ // NAT. This option can take one of the following values:
+ // - ERRORS_ONLY: Export logs only for connection failures.
+ // - TRANSLATIONS_ONLY: Export logs only for successful connections.
+ // - ALL: Export logs for all connections, successful and unsuccessful.
//
// Possible values:
// "ALL"
@@ -27886,6 +28014,14 @@ type RouterStatusNatStatus struct {
// ["1.1.1.1", "129.2.16.89"]
AutoAllocatedNatIps []string `json:"autoAllocatedNatIps,omitempty"`
+ // DrainAutoAllocatedNatIps: A list of IPs auto-allocated for NAT that
+ // are in drain mode. Example: ["1.1.1.1", "179.12.26.133"].
+ DrainAutoAllocatedNatIps []string `json:"drainAutoAllocatedNatIps,omitempty"`
+
+ // DrainUserAllocatedNatIps: A list of IPs user-allocated for NAT that
+ // are in drain mode. Example: ["1.1.1.1", "179.12.26.133"].
+ DrainUserAllocatedNatIps []string `json:"drainUserAllocatedNatIps,omitempty"`
+
// MinExtraNatIpsNeeded: The number of extra IPs to allocate. This will
// be greater than 0 only if user-specified IPs are NOT enough to allow
// all configured VMs to use NAT. This value is meaningful only when
@@ -28904,8 +29040,8 @@ func (s *ShieldedInstanceConfig) MarshalJSON() ([]byte, error) {
// ShieldedInstanceIdentity: A shielded Instance identity entry.
type ShieldedInstanceIdentity struct {
- // EncryptionKey: An Endorsement Key (EK) issued to the Shielded
- // Instance's vTPM.
+ // EncryptionKey: An Endorsement Key (EK) made by the RSA 2048 algorithm
+ // issued to the Shielded Instance's vTPM.
EncryptionKey *ShieldedInstanceIdentityEntry `json:"encryptionKey,omitempty"`
// Kind: [Output Only] Type of the resource. Always
@@ -28913,8 +29049,8 @@ type ShieldedInstanceIdentity struct {
// entry.
Kind string `json:"kind,omitempty"`
- // SigningKey: An Attestation Key (AK) issued to the Shielded Instance's
- // vTPM.
+ // SigningKey: An Attestation Key (AK) made by the RSA 2048 algorithm
+ // issued to the Shielded Instance's vTPM.
SigningKey *ShieldedInstanceIdentityEntry `json:"signingKey,omitempty"`
// ServerResponse contains the HTTP response code and headers from the
@@ -29063,7 +29199,7 @@ type Snapshot struct {
// property when you create the resource.
Description string `json:"description,omitempty"`
- // DiskSizeGb: [Output Only] Size of the snapshot, specified in GB.
+ // DiskSizeGb: [Output Only] Size of the source disk, specified in GB.
DiskSizeGb int64 `json:"diskSizeGb,omitempty,string"`
// Id: [Output Only] The unique identifier for the resource. This
@@ -31871,13 +32007,17 @@ type TargetHttpsProxy struct {
Name string `json:"name,omitempty"`
// QuicOverride: Specifies the QUIC override policy for this
- // TargetHttpsProxy resource. This determines whether the load balancer
- // will attempt to negotiate QUIC with clients or not. Can specify one
- // of NONE, ENABLE, or DISABLE. Specify ENABLE to always enable QUIC,
- // Enables QUIC when set to ENABLE, and disables QUIC when set to
- // DISABLE. If NONE is specified, uses the QUIC policy with no user
- // overrides, which is equivalent to DISABLE. Not specifying this field
- // is equivalent to specifying NONE.
+ // TargetHttpsProxy resource. This setting determines whether the load
+ // balancer attempts to negotiate QUIC with clients. You can specify
+ // NONE, ENABLE, or DISABLE.
+ // - When quic-override is set to NONE, Google manages whether QUIC is
+ // used.
+ // - When quic-override is set to ENABLE, the load balancer uses QUIC
+ // when possible.
+ // - When quic-override is set to DISABLE, the load balancer doesn't use
+ // QUIC.
+ // - If the quic-override flag is not specified, NONE is implied.
+ // -
//
// Possible values:
// "DISABLE"
@@ -31901,7 +32041,7 @@ type TargetHttpsProxy struct {
// SslPolicy: URL of SslPolicy resource that will be associated with the
// TargetHttpsProxy resource. If not set, the TargetHttpsProxy resource
- // will not have any SSL policy configured.
+ // has no SSL policy configured.
SslPolicy string `json:"sslPolicy,omitempty"`
// UrlMap: A fully-qualified or valid partial URL to the UrlMap resource
@@ -35883,12 +36023,23 @@ func (s *VmEndpointNatMappings) MarshalJSON() ([]byte, error) {
// VmEndpointNatMappingsInterfaceNatMappings: Contain information of Nat
// mapping for an interface of this endpoint.
type VmEndpointNatMappingsInterfaceNatMappings struct {
+ // DrainNatIpPortRanges: List of all drain IP:port-range mappings
+ // assigned to this interface. These ranges are inclusive, that is, both
+ // the first and the last ports can be used for NAT. Example:
+ // ["2.2.2.2:12345-12355", "1.1.1.1:2234-2234"].
+ DrainNatIpPortRanges []string `json:"drainNatIpPortRanges,omitempty"`
+
// NatIpPortRanges: A list of all IP:port-range mappings assigned to
// this interface. These ranges are inclusive, that is, both the first
// and the last ports can be used for NAT. Example:
// ["2.2.2.2:12345-12355", "1.1.1.1:2234-2234"].
NatIpPortRanges []string `json:"natIpPortRanges,omitempty"`
+ // NumTotalDrainNatPorts: Total number of drain ports across all NAT IPs
+ // allocated to this interface. It equals to the aggregated port number
+ // in the field drain_nat_ip_port_ranges.
+ NumTotalDrainNatPorts int64 `json:"numTotalDrainNatPorts,omitempty"`
+
// NumTotalNatPorts: Total number of ports across all NAT IPs allocated
// to this interface. It equals to the aggregated port number in the
// field nat_ip_port_ranges.
@@ -35902,15 +36053,16 @@ type VmEndpointNatMappingsInterfaceNatMappings struct {
// SourceVirtualIp: Primary IP of the VM for this NIC.
SourceVirtualIp string `json:"sourceVirtualIp,omitempty"`
- // ForceSendFields is a list of field names (e.g. "NatIpPortRanges") to
- // unconditionally include in API requests. By default, fields with
- // empty values are omitted from API requests. However, any non-pointer,
- // non-interface field appearing in ForceSendFields will be sent to the
- // server regardless of whether the field is empty or not. This may be
- // used to include empty fields in Patch requests.
+ // ForceSendFields is a list of field names (e.g.
+ // "DrainNatIpPortRanges") to unconditionally include in API requests.
+ // By default, fields with empty values are omitted from API requests.
+ // However, any non-pointer, non-interface field appearing in
+ // ForceSendFields will be sent to the server regardless of whether the
+ // field is empty or not. This may be used to include empty fields in
+ // Patch requests.
ForceSendFields []string `json:"-"`
- // NullFields is a list of field names (e.g. "NatIpPortRanges") to
+ // NullFields is a list of field names (e.g. "DrainNatIpPortRanges") to
// include in API requests with the JSON null value. By default, fields
// with empty values are omitted from API requests. However, any field
// with an empty value appearing in NullFields will be sent to the
@@ -36941,6 +37093,17 @@ type VpnTunnel struct {
//
// - FAILED: Tunnel creation has failed and the tunnel is not ready to
// be used.
+ // - NO_INCOMING_PACKETS: No incoming packets from peer.
+ // - REJECTED: Tunnel configuration was rejected, can be result of being
+ // blacklisted.
+ // - ALLOCATING_RESOURCES: Cloud VPN is in the process of allocating all
+ // required resources.
+ // - STOPPED: Tunnel is stopped due to its Forwarding Rules being
+ // deleted for Classic VPN tunnels or the project is in frozen state.
+ // - PEER_IDENTITY_MISMATCH: Peer identity does not match peer IP,
+ // probably behind NAT.
+ // - TS_NARROWING_NOT_ALLOWED: Traffic selector narrowing not allowed
+ // for an HA-VPN tunnel.
//
// Possible values:
// "ALLOCATING_RESOURCES"
@@ -38003,6 +38166,7 @@ type AcceleratorTypesAggregatedListCall struct {
}
// AggregatedList: Retrieves an aggregated list of accelerator types.
+// (== suppress_warning http-rest-shadowed ==)
func (r *AcceleratorTypesService) AggregatedList(project string) *AcceleratorTypesAggregatedListCall {
c := &AcceleratorTypesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -38109,7 +38273,7 @@ func (c *AcceleratorTypesAggregatedListCall) Header() http.Header {
func (c *AcceleratorTypesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -38171,7 +38335,7 @@ func (c *AcceleratorTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of accelerator types.",
+ // "description": "Retrieves an aggregated list of accelerator types. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.acceleratorTypes.aggregatedList",
// "parameterOrder": [
@@ -38256,7 +38420,8 @@ type AcceleratorTypesGetCall struct {
header_ http.Header
}
-// Get: Returns the specified accelerator type.
+// Get: Returns the specified accelerator type. (== suppress_warning
+// http-rest-shadowed ==)
func (r *AcceleratorTypesService) Get(project string, zone string, acceleratorType string) *AcceleratorTypesGetCall {
c := &AcceleratorTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -38302,7 +38467,7 @@ func (c *AcceleratorTypesGetCall) Header() http.Header {
func (c *AcceleratorTypesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -38366,7 +38531,7 @@ func (c *AcceleratorTypesGetCall) Do(opts ...googleapi.CallOption) (*Accelerator
}
return ret, nil
// {
- // "description": "Returns the specified accelerator type.",
+ // "description": "Returns the specified accelerator type. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.acceleratorTypes.get",
// "parameterOrder": [
@@ -38423,7 +38588,7 @@ type AcceleratorTypesListCall struct {
}
// List: Retrieves a list of accelerator types available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *AcceleratorTypesService) List(project string, zone string) *AcceleratorTypesListCall {
c := &AcceleratorTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -38531,7 +38696,7 @@ func (c *AcceleratorTypesListCall) Header() http.Header {
func (c *AcceleratorTypesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -38594,7 +38759,7 @@ func (c *AcceleratorTypesListCall) Do(opts ...googleapi.CallOption) (*Accelerato
}
return ret, nil
// {
- // "description": "Retrieves a list of accelerator types available to the specified project.",
+ // "description": "Retrieves a list of accelerator types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.acceleratorTypes.list",
// "parameterOrder": [
@@ -38685,7 +38850,8 @@ type AddressesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of addresses.
+// AggregatedList: Retrieves an aggregated list of addresses. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/addresses/aggregatedList
func (r *AddressesService) AggregatedList(project string) *AddressesAggregatedListCall {
c := &AddressesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -38793,7 +38959,7 @@ func (c *AddressesAggregatedListCall) Header() http.Header {
func (c *AddressesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -38855,7 +39021,7 @@ func (c *AddressesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Address
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of addresses.",
+ // "description": "Retrieves an aggregated list of addresses. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.addresses.aggregatedList",
// "parameterOrder": [
@@ -38939,7 +39105,8 @@ type AddressesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified address resource.
+// Delete: Deletes the specified address resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/addresses/delete
func (r *AddressesService) Delete(project string, region string, address string) *AddressesDeleteCall {
c := &AddressesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -38995,7 +39162,7 @@ func (c *AddressesDeleteCall) Header() http.Header {
func (c *AddressesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -39056,7 +39223,7 @@ func (c *AddressesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Deletes the specified address resource.",
+ // "description": "Deletes the specified address resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.addresses.delete",
// "parameterOrder": [
@@ -39117,7 +39284,8 @@ type AddressesGetCall struct {
header_ http.Header
}
-// Get: Returns the specified address resource.
+// Get: Returns the specified address resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/addresses/get
func (r *AddressesService) Get(project string, region string, address string) *AddressesGetCall {
c := &AddressesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -39164,7 +39332,7 @@ func (c *AddressesGetCall) Header() http.Header {
func (c *AddressesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -39228,7 +39396,7 @@ func (c *AddressesGetCall) Do(opts ...googleapi.CallOption) (*Address, error) {
}
return ret, nil
// {
- // "description": "Returns the specified address resource.",
+ // "description": "Returns the specified address resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.addresses.get",
// "parameterOrder": [
@@ -39285,7 +39453,8 @@ type AddressesInsertCall struct {
}
// Insert: Creates an address resource in the specified project by using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/addresses/insert
func (r *AddressesService) Insert(project string, region string, address *Address) *AddressesInsertCall {
c := &AddressesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -39341,7 +39510,7 @@ func (c *AddressesInsertCall) Header() http.Header {
func (c *AddressesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -39406,7 +39575,7 @@ func (c *AddressesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Creates an address resource in the specified project by using the data included in the request.",
+ // "description": "Creates an address resource in the specified project by using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.addresses.insert",
// "parameterOrder": [
@@ -39462,7 +39631,7 @@ type AddressesListCall struct {
}
// List: Retrieves a list of addresses contained within the specified
-// region.
+// region. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/addresses/list
func (r *AddressesService) List(project string, region string) *AddressesListCall {
c := &AddressesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -39571,7 +39740,7 @@ func (c *AddressesListCall) Header() http.Header {
func (c *AddressesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -39634,7 +39803,7 @@ func (c *AddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList, erro
}
return ret, nil
// {
- // "description": "Retrieves a list of addresses contained within the specified region.",
+ // "description": "Retrieves a list of addresses contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.addresses.list",
// "parameterOrder": [
@@ -39725,7 +39894,8 @@ type AutoscalersAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of autoscalers.
+// AggregatedList: Retrieves an aggregated list of autoscalers. (==
+// suppress_warning http-rest-shadowed ==)
func (r *AutoscalersService) AggregatedList(project string) *AutoscalersAggregatedListCall {
c := &AutoscalersAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -39832,7 +40002,7 @@ func (c *AutoscalersAggregatedListCall) Header() http.Header {
func (c *AutoscalersAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -39894,7 +40064,7 @@ func (c *AutoscalersAggregatedListCall) Do(opts ...googleapi.CallOption) (*Autos
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of autoscalers.",
+ // "description": "Retrieves an aggregated list of autoscalers. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.autoscalers.aggregatedList",
// "parameterOrder": [
@@ -39978,7 +40148,8 @@ type AutoscalersDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified autoscaler.
+// Delete: Deletes the specified autoscaler. (== suppress_warning
+// http-rest-shadowed ==)
func (r *AutoscalersService) Delete(project string, zone string, autoscaler string) *AutoscalersDeleteCall {
c := &AutoscalersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -40033,7 +40204,7 @@ func (c *AutoscalersDeleteCall) Header() http.Header {
func (c *AutoscalersDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -40094,7 +40265,7 @@ func (c *AutoscalersDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified autoscaler.",
+ // "description": "Deletes the specified autoscaler. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.autoscalers.delete",
// "parameterOrder": [
@@ -40156,7 +40327,8 @@ type AutoscalersGetCall struct {
}
// Get: Returns the specified autoscaler resource. Gets a list of
-// available autoscalers by making a list() request.
+// available autoscalers by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *AutoscalersService) Get(project string, zone string, autoscaler string) *AutoscalersGetCall {
c := &AutoscalersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -40202,7 +40374,7 @@ func (c *AutoscalersGetCall) Header() http.Header {
func (c *AutoscalersGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -40266,7 +40438,7 @@ func (c *AutoscalersGetCall) Do(opts ...googleapi.CallOption) (*Autoscaler, erro
}
return ret, nil
// {
- // "description": "Returns the specified autoscaler resource. Gets a list of available autoscalers by making a list() request.",
+ // "description": "Returns the specified autoscaler resource. Gets a list of available autoscalers by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.autoscalers.get",
// "parameterOrder": [
@@ -40323,7 +40495,7 @@ type AutoscalersInsertCall struct {
}
// Insert: Creates an autoscaler in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *AutoscalersService) Insert(project string, zone string, autoscaler *Autoscaler) *AutoscalersInsertCall {
c := &AutoscalersInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -40378,7 +40550,7 @@ func (c *AutoscalersInsertCall) Header() http.Header {
func (c *AutoscalersInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -40443,7 +40615,7 @@ func (c *AutoscalersInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates an autoscaler in the specified project using the data included in the request.",
+ // "description": "Creates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.autoscalers.insert",
// "parameterOrder": [
@@ -40499,7 +40671,7 @@ type AutoscalersListCall struct {
}
// List: Retrieves a list of autoscalers contained within the specified
-// zone.
+// zone. (== suppress_warning http-rest-shadowed ==)
func (r *AutoscalersService) List(project string, zone string) *AutoscalersListCall {
c := &AutoscalersListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -40607,7 +40779,7 @@ func (c *AutoscalersListCall) Header() http.Header {
func (c *AutoscalersListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -40670,7 +40842,7 @@ func (c *AutoscalersListCall) Do(opts ...googleapi.CallOption) (*AutoscalerList,
}
return ret, nil
// {
- // "description": "Retrieves a list of autoscalers contained within the specified zone.",
+ // "description": "Retrieves a list of autoscalers contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.autoscalers.list",
// "parameterOrder": [
@@ -40764,7 +40936,8 @@ type AutoscalersPatchCall struct {
// Patch: Updates an autoscaler in the specified project using the data
// included in the request. This method supports PATCH semantics and
-// uses the JSON merge patch format and processing rules.
+// uses the JSON merge patch format and processing rules. (==
+// suppress_warning http-rest-shadowed ==)
func (r *AutoscalersService) Patch(project string, zone string, autoscaler *Autoscaler) *AutoscalersPatchCall {
c := &AutoscalersPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -40826,7 +40999,7 @@ func (c *AutoscalersPatchCall) Header() http.Header {
func (c *AutoscalersPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -40891,7 +41064,7 @@ func (c *AutoscalersPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.autoscalers.patch",
// "parameterOrder": [
@@ -40953,7 +41126,7 @@ type AutoscalersUpdateCall struct {
}
// Update: Updates an autoscaler in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *AutoscalersService) Update(project string, zone string, autoscaler *Autoscaler) *AutoscalersUpdateCall {
c := &AutoscalersUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -41015,7 +41188,7 @@ func (c *AutoscalersUpdateCall) Header() http.Header {
func (c *AutoscalersUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -41080,7 +41253,7 @@ func (c *AutoscalersUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Updates an autoscaler in the specified project using the data included in the request.",
+ // "description": "Updates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.autoscalers.update",
// "parameterOrder": [
@@ -41142,7 +41315,7 @@ type BackendBucketsAddSignedUrlKeyCall struct {
}
// AddSignedUrlKey: Adds a key for validating requests with signed URLs
-// for this backend bucket.
+// for this backend bucket. (== suppress_warning http-rest-shadowed ==)
func (r *BackendBucketsService) AddSignedUrlKey(project string, backendBucket string, signedurlkey *SignedUrlKey) *BackendBucketsAddSignedUrlKeyCall {
c := &BackendBucketsAddSignedUrlKeyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -41197,7 +41370,7 @@ func (c *BackendBucketsAddSignedUrlKeyCall) Header() http.Header {
func (c *BackendBucketsAddSignedUrlKeyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -41262,7 +41435,7 @@ func (c *BackendBucketsAddSignedUrlKeyCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Adds a key for validating requests with signed URLs for this backend bucket.",
+ // "description": "Adds a key for validating requests with signed URLs for this backend bucket. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendBuckets.addSignedUrlKey",
// "parameterOrder": [
@@ -41315,7 +41488,8 @@ type BackendBucketsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified BackendBucket resource.
+// Delete: Deletes the specified BackendBucket resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *BackendBucketsService) Delete(project string, backendBucket string) *BackendBucketsDeleteCall {
c := &BackendBucketsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -41369,7 +41543,7 @@ func (c *BackendBucketsDeleteCall) Header() http.Header {
func (c *BackendBucketsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -41429,7 +41603,7 @@ func (c *BackendBucketsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Deletes the specified BackendBucket resource.",
+ // "description": "Deletes the specified BackendBucket resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.backendBuckets.delete",
// "parameterOrder": [
@@ -41481,7 +41655,8 @@ type BackendBucketsDeleteSignedUrlKeyCall struct {
}
// DeleteSignedUrlKey: Deletes a key for validating requests with signed
-// URLs for this backend bucket.
+// URLs for this backend bucket. (== suppress_warning http-rest-shadowed
+// ==)
func (r *BackendBucketsService) DeleteSignedUrlKey(project string, backendBucket string, keyName string) *BackendBucketsDeleteSignedUrlKeyCall {
c := &BackendBucketsDeleteSignedUrlKeyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -41536,7 +41711,7 @@ func (c *BackendBucketsDeleteSignedUrlKeyCall) Header() http.Header {
func (c *BackendBucketsDeleteSignedUrlKeyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -41596,7 +41771,7 @@ func (c *BackendBucketsDeleteSignedUrlKeyCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Deletes a key for validating requests with signed URLs for this backend bucket.",
+ // "description": "Deletes a key for validating requests with signed URLs for this backend bucket. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendBuckets.deleteSignedUrlKey",
// "parameterOrder": [
@@ -41655,7 +41830,8 @@ type BackendBucketsGetCall struct {
}
// Get: Returns the specified BackendBucket resource. Gets a list of
-// available backend buckets by making a list() request.
+// available backend buckets by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *BackendBucketsService) Get(project string, backendBucket string) *BackendBucketsGetCall {
c := &BackendBucketsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -41700,7 +41876,7 @@ func (c *BackendBucketsGetCall) Header() http.Header {
func (c *BackendBucketsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -41763,7 +41939,7 @@ func (c *BackendBucketsGetCall) Do(opts ...googleapi.CallOption) (*BackendBucket
}
return ret, nil
// {
- // "description": "Returns the specified BackendBucket resource. Gets a list of available backend buckets by making a list() request.",
+ // "description": "Returns the specified BackendBucket resource. Gets a list of available backend buckets by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.backendBuckets.get",
// "parameterOrder": [
@@ -41811,7 +41987,8 @@ type BackendBucketsInsertCall struct {
}
// Insert: Creates a BackendBucket resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *BackendBucketsService) Insert(project string, backendbucket *BackendBucket) *BackendBucketsInsertCall {
c := &BackendBucketsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -41865,7 +42042,7 @@ func (c *BackendBucketsInsertCall) Header() http.Header {
func (c *BackendBucketsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -41929,7 +42106,7 @@ func (c *BackendBucketsInsertCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Creates a BackendBucket resource in the specified project using the data included in the request.",
+ // "description": "Creates a BackendBucket resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendBuckets.insert",
// "parameterOrder": [
@@ -41976,7 +42153,7 @@ type BackendBucketsListCall struct {
}
// List: Retrieves the list of BackendBucket resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *BackendBucketsService) List(project string) *BackendBucketsListCall {
c := &BackendBucketsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -42083,7 +42260,7 @@ func (c *BackendBucketsListCall) Header() http.Header {
func (c *BackendBucketsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -42145,7 +42322,7 @@ func (c *BackendBucketsListCall) Do(opts ...googleapi.CallOption) (*BackendBucke
}
return ret, nil
// {
- // "description": "Retrieves the list of BackendBucket resources available to the specified project.",
+ // "description": "Retrieves the list of BackendBucket resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.backendBuckets.list",
// "parameterOrder": [
@@ -42231,7 +42408,8 @@ type BackendBucketsPatchCall struct {
// Patch: Updates the specified BackendBucket resource with the data
// included in the request. This method supports PATCH semantics and
-// uses the JSON merge patch format and processing rules.
+// uses the JSON merge patch format and processing rules. (==
+// suppress_warning http-rest-shadowed ==)
func (r *BackendBucketsService) Patch(project string, backendBucket string, backendbucket *BackendBucket) *BackendBucketsPatchCall {
c := &BackendBucketsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -42286,7 +42464,7 @@ func (c *BackendBucketsPatchCall) Header() http.Header {
func (c *BackendBucketsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -42351,7 +42529,7 @@ func (c *BackendBucketsPatchCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Updates the specified BackendBucket resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the specified BackendBucket resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.backendBuckets.patch",
// "parameterOrder": [
@@ -42407,7 +42585,7 @@ type BackendBucketsUpdateCall struct {
}
// Update: Updates the specified BackendBucket resource with the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *BackendBucketsService) Update(project string, backendBucket string, backendbucket *BackendBucket) *BackendBucketsUpdateCall {
c := &BackendBucketsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -42462,7 +42640,7 @@ func (c *BackendBucketsUpdateCall) Header() http.Header {
func (c *BackendBucketsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -42527,7 +42705,7 @@ func (c *BackendBucketsUpdateCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Updates the specified BackendBucket resource with the data included in the request.",
+ // "description": "Updates the specified BackendBucket resource with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.backendBuckets.update",
// "parameterOrder": [
@@ -42583,7 +42761,7 @@ type BackendServicesAddSignedUrlKeyCall struct {
}
// AddSignedUrlKey: Adds a key for validating requests with signed URLs
-// for this backend service.
+// for this backend service. (== suppress_warning http-rest-shadowed ==)
func (r *BackendServicesService) AddSignedUrlKey(project string, backendService string, signedurlkey *SignedUrlKey) *BackendServicesAddSignedUrlKeyCall {
c := &BackendServicesAddSignedUrlKeyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -42638,7 +42816,7 @@ func (c *BackendServicesAddSignedUrlKeyCall) Header() http.Header {
func (c *BackendServicesAddSignedUrlKeyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -42703,7 +42881,7 @@ func (c *BackendServicesAddSignedUrlKeyCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Adds a key for validating requests with signed URLs for this backend service.",
+ // "description": "Adds a key for validating requests with signed URLs for this backend service. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendServices.addSignedUrlKey",
// "parameterOrder": [
@@ -42757,7 +42935,8 @@ type BackendServicesAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of all BackendService resources,
-// regional and global, available to the specified project.
+// regional and global, available to the specified project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *BackendServicesService) AggregatedList(project string) *BackendServicesAggregatedListCall {
c := &BackendServicesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -42864,7 +43043,7 @@ func (c *BackendServicesAggregatedListCall) Header() http.Header {
func (c *BackendServicesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -42926,7 +43105,7 @@ func (c *BackendServicesAggregatedListCall) Do(opts ...googleapi.CallOption) (*B
}
return ret, nil
// {
- // "description": "Retrieves the list of all BackendService resources, regional and global, available to the specified project.",
+ // "description": "Retrieves the list of all BackendService resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.backendServices.aggregatedList",
// "parameterOrder": [
@@ -43009,7 +43188,8 @@ type BackendServicesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified BackendService resource.
+// Delete: Deletes the specified BackendService resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/delete
func (r *BackendServicesService) Delete(project string, backendService string) *BackendServicesDeleteCall {
c := &BackendServicesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -43064,7 +43244,7 @@ func (c *BackendServicesDeleteCall) Header() http.Header {
func (c *BackendServicesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -43124,7 +43304,7 @@ func (c *BackendServicesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Deletes the specified BackendService resource.",
+ // "description": "Deletes the specified BackendService resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.backendServices.delete",
// "parameterOrder": [
@@ -43176,7 +43356,8 @@ type BackendServicesDeleteSignedUrlKeyCall struct {
}
// DeleteSignedUrlKey: Deletes a key for validating requests with signed
-// URLs for this backend service.
+// URLs for this backend service. (== suppress_warning
+// http-rest-shadowed ==)
func (r *BackendServicesService) DeleteSignedUrlKey(project string, backendService string, keyName string) *BackendServicesDeleteSignedUrlKeyCall {
c := &BackendServicesDeleteSignedUrlKeyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -43231,7 +43412,7 @@ func (c *BackendServicesDeleteSignedUrlKeyCall) Header() http.Header {
func (c *BackendServicesDeleteSignedUrlKeyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -43291,7 +43472,7 @@ func (c *BackendServicesDeleteSignedUrlKeyCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Deletes a key for validating requests with signed URLs for this backend service.",
+ // "description": "Deletes a key for validating requests with signed URLs for this backend service. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendServices.deleteSignedUrlKey",
// "parameterOrder": [
@@ -43350,7 +43531,8 @@ type BackendServicesGetCall struct {
}
// Get: Returns the specified BackendService resource. Gets a list of
-// available backend services.
+// available backend services. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/get
func (r *BackendServicesService) Get(project string, backendService string) *BackendServicesGetCall {
c := &BackendServicesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -43396,7 +43578,7 @@ func (c *BackendServicesGetCall) Header() http.Header {
func (c *BackendServicesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -43459,7 +43641,7 @@ func (c *BackendServicesGetCall) Do(opts ...googleapi.CallOption) (*BackendServi
}
return ret, nil
// {
- // "description": "Returns the specified BackendService resource. Gets a list of available backend services.",
+ // "description": "Returns the specified BackendService resource. Gets a list of available backend services. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.backendServices.get",
// "parameterOrder": [
@@ -43508,7 +43690,7 @@ type BackendServicesGetHealthCall struct {
}
// GetHealth: Gets the most recent health check results for this
-// BackendService.
+// BackendService. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/getHealth
func (r *BackendServicesService) GetHealth(project string, backendService string, resourcegroupreference *ResourceGroupReference) *BackendServicesGetHealthCall {
c := &BackendServicesGetHealthCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -43545,7 +43727,7 @@ func (c *BackendServicesGetHealthCall) Header() http.Header {
func (c *BackendServicesGetHealthCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -43610,7 +43792,7 @@ func (c *BackendServicesGetHealthCall) Do(opts ...googleapi.CallOption) (*Backen
}
return ret, nil
// {
- // "description": "Gets the most recent health check results for this BackendService.",
+ // "description": "Gets the most recent health check results for this BackendService. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendServices.getHealth",
// "parameterOrder": [
@@ -43662,7 +43844,8 @@ type BackendServicesInsertCall struct {
// Insert: Creates a BackendService resource in the specified project
// using the data included in the request. There are several
// restrictions and guidelines to keep in mind when creating a backend
-// service. Read Restrictions and Guidelines for more information.
+// service. Read Restrictions and Guidelines for more information. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/insert
func (r *BackendServicesService) Insert(project string, backendservice *BackendService) *BackendServicesInsertCall {
c := &BackendServicesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -43717,7 +43900,7 @@ func (c *BackendServicesInsertCall) Header() http.Header {
func (c *BackendServicesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -43781,7 +43964,7 @@ func (c *BackendServicesInsertCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Creates a BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a backend service. Read Restrictions and Guidelines for more information.",
+ // "description": "Creates a BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendServices.insert",
// "parameterOrder": [
@@ -43828,7 +44011,7 @@ type BackendServicesListCall struct {
}
// List: Retrieves the list of BackendService resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/list
func (r *BackendServicesService) List(project string) *BackendServicesListCall {
c := &BackendServicesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -43936,7 +44119,7 @@ func (c *BackendServicesListCall) Header() http.Header {
func (c *BackendServicesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -43998,7 +44181,7 @@ func (c *BackendServicesListCall) Do(opts ...googleapi.CallOption) (*BackendServ
}
return ret, nil
// {
- // "description": "Retrieves the list of BackendService resources available to the specified project.",
+ // "description": "Retrieves the list of BackendService resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.backendServices.list",
// "parameterOrder": [
@@ -44087,7 +44270,7 @@ type BackendServicesPatchCall struct {
// guidelines to keep in mind when updating a backend service. Read
// Restrictions and Guidelines for more information. This method
// supports PATCH semantics and uses the JSON merge patch format and
-// processing rules.
+// processing rules. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/patch
func (r *BackendServicesService) Patch(project string, backendService string, backendservice *BackendService) *BackendServicesPatchCall {
c := &BackendServicesPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -44143,7 +44326,7 @@ func (c *BackendServicesPatchCall) Header() http.Header {
func (c *BackendServicesPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -44208,7 +44391,7 @@ func (c *BackendServicesPatchCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Patches the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Patches the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.backendServices.patch",
// "parameterOrder": [
@@ -44264,7 +44447,7 @@ type BackendServicesSetSecurityPolicyCall struct {
}
// SetSecurityPolicy: Sets the security policy for the specified backend
-// service.
+// service. (== suppress_warning http-rest-shadowed ==)
func (r *BackendServicesService) SetSecurityPolicy(project string, backendService string, securitypolicyreference *SecurityPolicyReference) *BackendServicesSetSecurityPolicyCall {
c := &BackendServicesSetSecurityPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -44319,7 +44502,7 @@ func (c *BackendServicesSetSecurityPolicyCall) Header() http.Header {
func (c *BackendServicesSetSecurityPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -44384,7 +44567,7 @@ func (c *BackendServicesSetSecurityPolicyCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Sets the security policy for the specified backend service.",
+ // "description": "Sets the security policy for the specified backend service. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.backendServices.setSecurityPolicy",
// "parameterOrder": [
@@ -44441,7 +44624,8 @@ type BackendServicesUpdateCall struct {
// Update: Updates the specified BackendService resource with the data
// included in the request. There are several restrictions and
// guidelines to keep in mind when updating a backend service. Read
-// Restrictions and Guidelines for more information.
+// Restrictions and Guidelines for more information. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/backendServices/update
func (r *BackendServicesService) Update(project string, backendService string, backendservice *BackendService) *BackendServicesUpdateCall {
c := &BackendServicesUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -44497,7 +44681,7 @@ func (c *BackendServicesUpdateCall) Header() http.Header {
func (c *BackendServicesUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -44562,7 +44746,7 @@ func (c *BackendServicesUpdateCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Updates the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information.",
+ // "description": "Updates the specified BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.backendServices.update",
// "parameterOrder": [
@@ -44616,7 +44800,8 @@ type DiskTypesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of disk types.
+// AggregatedList: Retrieves an aggregated list of disk types. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/diskTypes/aggregatedList
func (r *DiskTypesService) AggregatedList(project string) *DiskTypesAggregatedListCall {
c := &DiskTypesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -44724,7 +44909,7 @@ func (c *DiskTypesAggregatedListCall) Header() http.Header {
func (c *DiskTypesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -44786,7 +44971,7 @@ func (c *DiskTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*DiskTyp
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of disk types.",
+ // "description": "Retrieves an aggregated list of disk types. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.diskTypes.aggregatedList",
// "parameterOrder": [
@@ -44872,7 +45057,8 @@ type DiskTypesGetCall struct {
}
// Get: Returns the specified disk type. Gets a list of available disk
-// types by making a list() request.
+// types by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/diskTypes/get
func (r *DiskTypesService) Get(project string, zone string, diskType string) *DiskTypesGetCall {
c := &DiskTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -44919,7 +45105,7 @@ func (c *DiskTypesGetCall) Header() http.Header {
func (c *DiskTypesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -44983,7 +45169,7 @@ func (c *DiskTypesGetCall) Do(opts ...googleapi.CallOption) (*DiskType, error) {
}
return ret, nil
// {
- // "description": "Returns the specified disk type. Gets a list of available disk types by making a list() request.",
+ // "description": "Returns the specified disk type. Gets a list of available disk types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.diskTypes.get",
// "parameterOrder": [
@@ -45040,7 +45226,7 @@ type DiskTypesListCall struct {
}
// List: Retrieves a list of disk types available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/diskTypes/list
func (r *DiskTypesService) List(project string, zone string) *DiskTypesListCall {
c := &DiskTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -45149,7 +45335,7 @@ func (c *DiskTypesListCall) Header() http.Header {
func (c *DiskTypesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -45212,7 +45398,7 @@ func (c *DiskTypesListCall) Do(opts ...googleapi.CallOption) (*DiskTypeList, err
}
return ret, nil
// {
- // "description": "Retrieves a list of disk types available to the specified project.",
+ // "description": "Retrieves a list of disk types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.diskTypes.list",
// "parameterOrder": [
@@ -45307,7 +45493,8 @@ type DisksAddResourcePoliciesCall struct {
// AddResourcePolicies: Adds existing resource policies to a disk. You
// can only add one policy which will be applied to this disk for
-// scheduling snapshot creation.
+// scheduling snapshot creation. (== suppress_warning http-rest-shadowed
+// ==)
func (r *DisksService) AddResourcePolicies(project string, zone string, disk string, disksaddresourcepoliciesrequest *DisksAddResourcePoliciesRequest) *DisksAddResourcePoliciesCall {
c := &DisksAddResourcePoliciesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -45363,7 +45550,7 @@ func (c *DisksAddResourcePoliciesCall) Header() http.Header {
func (c *DisksAddResourcePoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -45429,7 +45616,7 @@ func (c *DisksAddResourcePoliciesCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Adds existing resource policies to a disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation.",
+ // "description": "Adds existing resource policies to a disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.addResourcePolicies",
// "parameterOrder": [
@@ -45491,7 +45678,8 @@ type DisksAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of persistent disks.
+// AggregatedList: Retrieves an aggregated list of persistent disks. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/aggregatedList
func (r *DisksService) AggregatedList(project string) *DisksAggregatedListCall {
c := &DisksAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -45599,7 +45787,7 @@ func (c *DisksAggregatedListCall) Header() http.Header {
func (c *DisksAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -45661,7 +45849,7 @@ func (c *DisksAggregatedListCall) Do(opts ...googleapi.CallOption) (*DiskAggrega
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of persistent disks.",
+ // "description": "Retrieves an aggregated list of persistent disks. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.disks.aggregatedList",
// "parameterOrder": [
@@ -45747,6 +45935,7 @@ type DisksCreateSnapshotCall struct {
}
// CreateSnapshot: Creates a snapshot of a specified persistent disk.
+// (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/createSnapshot
func (r *DisksService) CreateSnapshot(project string, zone string, disk string, snapshot *Snapshot) *DisksCreateSnapshotCall {
c := &DisksCreateSnapshotCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -45812,7 +46001,7 @@ func (c *DisksCreateSnapshotCall) Header() http.Header {
func (c *DisksCreateSnapshotCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -45878,7 +46067,7 @@ func (c *DisksCreateSnapshotCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Creates a snapshot of a specified persistent disk.",
+ // "description": "Creates a snapshot of a specified persistent disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.createSnapshot",
// "parameterOrder": [
@@ -45949,7 +46138,8 @@ type DisksDeleteCall struct {
// Delete: Deletes the specified persistent disk. Deleting a disk
// removes its data permanently and is irreversible. However, deleting a
// disk does not delete any snapshots previously made from the disk. You
-// must separately delete snapshots.
+// must separately delete snapshots. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/delete
func (r *DisksService) Delete(project string, zone string, disk string) *DisksDeleteCall {
c := &DisksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -46005,7 +46195,7 @@ func (c *DisksDeleteCall) Header() http.Header {
func (c *DisksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -46066,7 +46256,7 @@ func (c *DisksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error) {
}
return ret, nil
// {
- // "description": "Deletes the specified persistent disk. Deleting a disk removes its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots.",
+ // "description": "Deletes the specified persistent disk. Deleting a disk removes its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.disks.delete",
// "parameterOrder": [
@@ -46127,7 +46317,8 @@ type DisksGetCall struct {
}
// Get: Returns a specified persistent disk. Gets a list of available
-// persistent disks by making a list() request.
+// persistent disks by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/get
func (r *DisksService) Get(project string, zone string, disk string) *DisksGetCall {
c := &DisksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -46174,7 +46365,7 @@ func (c *DisksGetCall) Header() http.Header {
func (c *DisksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -46238,7 +46429,7 @@ func (c *DisksGetCall) Do(opts ...googleapi.CallOption) (*Disk, error) {
}
return ret, nil
// {
- // "description": "Returns a specified persistent disk. Gets a list of available persistent disks by making a list() request.",
+ // "description": "Returns a specified persistent disk. Gets a list of available persistent disks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.disks.get",
// "parameterOrder": [
@@ -46296,7 +46487,8 @@ type DisksGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *DisksService) GetIamPolicy(project string, zone string, resource string) *DisksGetIamPolicyCall {
c := &DisksGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -46342,7 +46534,7 @@ func (c *DisksGetIamPolicyCall) Header() http.Header {
func (c *DisksGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -46406,7 +46598,7 @@ func (c *DisksGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, error
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.disks.getIamPolicy",
// "parameterOrder": [
@@ -46466,7 +46658,8 @@ type DisksInsertCall struct {
// data in the request. You can create a disk with a sourceImage, a
// sourceSnapshot, or create an empty 500 GB data disk by omitting all
// properties. You can also create a disk that is larger than the
-// default size by specifying the sizeGb property.
+// default size by specifying the sizeGb property. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/insert
func (r *DisksService) Insert(project string, zone string, disk *Disk) *DisksInsertCall {
c := &DisksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -46529,7 +46722,7 @@ func (c *DisksInsertCall) Header() http.Header {
func (c *DisksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -46594,7 +46787,7 @@ func (c *DisksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error) {
}
return ret, nil
// {
- // "description": "Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 500 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property.",
+ // "description": "Creates a persistent disk in the specified project using the data in the request. You can create a disk with a sourceImage, a sourceSnapshot, or create an empty 500 GB data disk by omitting all properties. You can also create a disk that is larger than the default size by specifying the sizeGb property. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.insert",
// "parameterOrder": [
@@ -46655,7 +46848,7 @@ type DisksListCall struct {
}
// List: Retrieves a list of persistent disks contained within the
-// specified zone.
+// specified zone. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/disks/list
func (r *DisksService) List(project string, zone string) *DisksListCall {
c := &DisksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -46764,7 +46957,7 @@ func (c *DisksListCall) Header() http.Header {
func (c *DisksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -46827,7 +47020,7 @@ func (c *DisksListCall) Do(opts ...googleapi.CallOption) (*DiskList, error) {
}
return ret, nil
// {
- // "description": "Retrieves a list of persistent disks contained within the specified zone.",
+ // "description": "Retrieves a list of persistent disks contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.disks.list",
// "parameterOrder": [
@@ -46920,7 +47113,8 @@ type DisksRemoveResourcePoliciesCall struct {
header_ http.Header
}
-// RemoveResourcePolicies: Removes resource policies from a disk.
+// RemoveResourcePolicies: Removes resource policies from a disk. (==
+// suppress_warning http-rest-shadowed ==)
func (r *DisksService) RemoveResourcePolicies(project string, zone string, disk string, disksremoveresourcepoliciesrequest *DisksRemoveResourcePoliciesRequest) *DisksRemoveResourcePoliciesCall {
c := &DisksRemoveResourcePoliciesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -46976,7 +47170,7 @@ func (c *DisksRemoveResourcePoliciesCall) Header() http.Header {
func (c *DisksRemoveResourcePoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -47042,7 +47236,7 @@ func (c *DisksRemoveResourcePoliciesCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Removes resource policies from a disk.",
+ // "description": "Removes resource policies from a disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.removeResourcePolicies",
// "parameterOrder": [
@@ -47107,7 +47301,7 @@ type DisksResizeCall struct {
}
// Resize: Resizes the specified persistent disk. You can only increase
-// the size of the disk.
+// the size of the disk. (== suppress_warning http-rest-shadowed ==)
func (r *DisksService) Resize(project string, zone string, disk string, disksresizerequest *DisksResizeRequest) *DisksResizeCall {
c := &DisksResizeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -47163,7 +47357,7 @@ func (c *DisksResizeCall) Header() http.Header {
func (c *DisksResizeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -47229,7 +47423,7 @@ func (c *DisksResizeCall) Do(opts ...googleapi.CallOption) (*Operation, error) {
}
return ret, nil
// {
- // "description": "Resizes the specified persistent disk. You can only increase the size of the disk.",
+ // "description": "Resizes the specified persistent disk. You can only increase the size of the disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.resize",
// "parameterOrder": [
@@ -47294,7 +47488,8 @@ type DisksSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *DisksService) SetIamPolicy(project string, zone string, resource string, zonesetpolicyrequest *ZoneSetPolicyRequest) *DisksSetIamPolicyCall {
c := &DisksSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -47331,7 +47526,7 @@ func (c *DisksSetIamPolicyCall) Header() http.Header {
func (c *DisksSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -47397,7 +47592,7 @@ func (c *DisksSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, error
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.setIamPolicy",
// "parameterOrder": [
@@ -47457,7 +47652,8 @@ type DisksSetLabelsCall struct {
}
// SetLabels: Sets the labels on a disk. To learn more about labels,
-// read the Labeling Resources documentation.
+// read the Labeling Resources documentation. (== suppress_warning
+// http-rest-shadowed ==)
func (r *DisksService) SetLabels(project string, zone string, resource string, zonesetlabelsrequest *ZoneSetLabelsRequest) *DisksSetLabelsCall {
c := &DisksSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -47513,7 +47709,7 @@ func (c *DisksSetLabelsCall) Header() http.Header {
func (c *DisksSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -47579,7 +47775,7 @@ func (c *DisksSetLabelsCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Sets the labels on a disk. To learn more about labels, read the Labeling Resources documentation.",
+ // "description": "Sets the labels on a disk. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.setLabels",
// "parameterOrder": [
@@ -47644,7 +47840,7 @@ type DisksTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *DisksService) TestIamPermissions(project string, zone string, resource string, testpermissionsrequest *TestPermissionsRequest) *DisksTestIamPermissionsCall {
c := &DisksTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -47681,7 +47877,7 @@ func (c *DisksTestIamPermissionsCall) Header() http.Header {
func (c *DisksTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -47747,7 +47943,7 @@ func (c *DisksTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*TestPer
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.disks.testIamPermissions",
// "parameterOrder": [
@@ -47805,7 +48001,8 @@ type ExternalVpnGatewaysDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified externalVpnGateway.
+// Delete: Deletes the specified externalVpnGateway. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ExternalVpnGatewaysService) Delete(project string, externalVpnGateway string) *ExternalVpnGatewaysDeleteCall {
c := &ExternalVpnGatewaysDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -47859,7 +48056,7 @@ func (c *ExternalVpnGatewaysDeleteCall) Header() http.Header {
func (c *ExternalVpnGatewaysDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -47919,7 +48116,7 @@ func (c *ExternalVpnGatewaysDeleteCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Deletes the specified externalVpnGateway.",
+ // "description": "Deletes the specified externalVpnGateway. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.externalVpnGateways.delete",
// "parameterOrder": [
@@ -47972,7 +48169,8 @@ type ExternalVpnGatewaysGetCall struct {
}
// Get: Returns the specified externalVpnGateway. Get a list of
-// available externalVpnGateways by making a list() request.
+// available externalVpnGateways by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ExternalVpnGatewaysService) Get(project string, externalVpnGateway string) *ExternalVpnGatewaysGetCall {
c := &ExternalVpnGatewaysGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -48017,7 +48215,7 @@ func (c *ExternalVpnGatewaysGetCall) Header() http.Header {
func (c *ExternalVpnGatewaysGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -48080,7 +48278,7 @@ func (c *ExternalVpnGatewaysGetCall) Do(opts ...googleapi.CallOption) (*External
}
return ret, nil
// {
- // "description": "Returns the specified externalVpnGateway. Get a list of available externalVpnGateways by making a list() request.",
+ // "description": "Returns the specified externalVpnGateway. Get a list of available externalVpnGateways by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.externalVpnGateways.get",
// "parameterOrder": [
@@ -48128,7 +48326,8 @@ type ExternalVpnGatewaysInsertCall struct {
}
// Insert: Creates a ExternalVpnGateway in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ExternalVpnGatewaysService) Insert(project string, externalvpngateway *ExternalVpnGateway) *ExternalVpnGatewaysInsertCall {
c := &ExternalVpnGatewaysInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -48182,7 +48381,7 @@ func (c *ExternalVpnGatewaysInsertCall) Header() http.Header {
func (c *ExternalVpnGatewaysInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -48246,7 +48445,7 @@ func (c *ExternalVpnGatewaysInsertCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Creates a ExternalVpnGateway in the specified project using the data included in the request.",
+ // "description": "Creates a ExternalVpnGateway in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.externalVpnGateways.insert",
// "parameterOrder": [
@@ -48293,7 +48492,7 @@ type ExternalVpnGatewaysListCall struct {
}
// List: Retrieves the list of ExternalVpnGateway available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *ExternalVpnGatewaysService) List(project string) *ExternalVpnGatewaysListCall {
c := &ExternalVpnGatewaysListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -48400,7 +48599,7 @@ func (c *ExternalVpnGatewaysListCall) Header() http.Header {
func (c *ExternalVpnGatewaysListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -48462,7 +48661,7 @@ func (c *ExternalVpnGatewaysListCall) Do(opts ...googleapi.CallOption) (*Externa
}
return ret, nil
// {
- // "description": "Retrieves the list of ExternalVpnGateway available to the specified project.",
+ // "description": "Retrieves the list of ExternalVpnGateway available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.externalVpnGateways.list",
// "parameterOrder": [
@@ -48547,7 +48746,8 @@ type ExternalVpnGatewaysSetLabelsCall struct {
}
// SetLabels: Sets the labels on an ExternalVpnGateway. To learn more
-// about labels, read the Labeling Resources documentation.
+// about labels, read the Labeling Resources documentation. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ExternalVpnGatewaysService) SetLabels(project string, resource string, globalsetlabelsrequest *GlobalSetLabelsRequest) *ExternalVpnGatewaysSetLabelsCall {
c := &ExternalVpnGatewaysSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
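
As a usage sketch for the SetLabels call above: the request type pairs the new label map with the fingerprint from the last read, the usual optimistic-concurrency guard on these APIs. Everything beyond the SetLabels and Get signatures shown in this diff is an assumption — the v1 import path, the compute.NewService constructor, and the GlobalSetLabelsRequest/ExternalVpnGateway field names:

```
package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1" // assumed import path
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx) // assumed constructor; uses Application Default Credentials
	if err != nil {
		log.Fatal(err)
	}

	project, name := "my-project", "my-gateway" // hypothetical names

	// Read first to obtain the current label fingerprint.
	gw, err := svc.ExternalVpnGateways.Get(project, name).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}

	op, err := svc.ExternalVpnGateways.SetLabels(project, name, &compute.GlobalSetLabelsRequest{
		LabelFingerprint: gw.LabelFingerprint, // assumed field names
		Labels:           map[string]string{"env": "prod"},
	}).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("operation %s started", op.Name)
}
```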
@@ -48583,7 +48783,7 @@ func (c *ExternalVpnGatewaysSetLabelsCall) Header() http.Header {
func (c *ExternalVpnGatewaysSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -48648,7 +48848,7 @@ func (c *ExternalVpnGatewaysSetLabelsCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Sets the labels on an ExternalVpnGateway. To learn more about labels, read the Labeling Resources documentation.",
+ // "description": "Sets the labels on an ExternalVpnGateway. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.externalVpnGateways.setLabels",
// "parameterOrder": [
@@ -48699,7 +48899,7 @@ type ExternalVpnGatewaysTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *ExternalVpnGatewaysService) TestIamPermissions(project string, resource string, testpermissionsrequest *TestPermissionsRequest) *ExternalVpnGatewaysTestIamPermissionsCall {
c := &ExternalVpnGatewaysTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -48735,7 +48935,7 @@ func (c *ExternalVpnGatewaysTestIamPermissionsCall) Header() http.Header {
func (c *ExternalVpnGatewaysTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -48800,7 +49000,7 @@ func (c *ExternalVpnGatewaysTestIamPermissionsCall) Do(opts ...googleapi.CallOpt
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.externalVpnGateways.testIamPermissions",
// "parameterOrder": [
@@ -48850,7 +49050,8 @@ type FirewallsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified firewall.
+// Delete: Deletes the specified firewall. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/delete
func (r *FirewallsService) Delete(project string, firewall string) *FirewallsDeleteCall {
c := &FirewallsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -48905,7 +49106,7 @@ func (c *FirewallsDeleteCall) Header() http.Header {
func (c *FirewallsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -48965,7 +49166,7 @@ func (c *FirewallsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Deletes the specified firewall.",
+ // "description": "Deletes the specified firewall. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.firewalls.delete",
// "parameterOrder": [
@@ -49017,7 +49218,8 @@ type FirewallsGetCall struct {
header_ http.Header
}
-// Get: Returns the specified firewall.
+// Get: Returns the specified firewall. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/get
func (r *FirewallsService) Get(project string, firewall string) *FirewallsGetCall {
c := &FirewallsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -49063,7 +49265,7 @@ func (c *FirewallsGetCall) Header() http.Header {
func (c *FirewallsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -49126,7 +49328,7 @@ func (c *FirewallsGetCall) Do(opts ...googleapi.CallOption) (*Firewall, error) {
}
return ret, nil
// {
- // "description": "Returns the specified firewall.",
+ // "description": "Returns the specified firewall. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.firewalls.get",
// "parameterOrder": [
@@ -49174,7 +49376,8 @@ type FirewallsInsertCall struct {
}
// Insert: Creates a firewall rule in the specified project using the
-// data included in the request.
+// data included in the request. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/insert
func (r *FirewallsService) Insert(project string, firewall *Firewall) *FirewallsInsertCall {
c := &FirewallsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -49229,7 +49432,7 @@ func (c *FirewallsInsertCall) Header() http.Header {
func (c *FirewallsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -49293,7 +49496,7 @@ func (c *FirewallsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Creates a firewall rule in the specified project using the data included in the request.",
+ // "description": "Creates a firewall rule in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.firewalls.insert",
// "parameterOrder": [
@@ -49340,7 +49543,7 @@ type FirewallsListCall struct {
}
// List: Retrieves the list of firewall rules available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/list
func (r *FirewallsService) List(project string) *FirewallsListCall {
c := &FirewallsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -49448,7 +49651,7 @@ func (c *FirewallsListCall) Header() http.Header {
func (c *FirewallsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -49510,7 +49713,7 @@ func (c *FirewallsListCall) Do(opts ...googleapi.CallOption) (*FirewallList, err
}
return ret, nil
// {
- // "description": "Retrieves the list of firewall rules available to the specified project.",
+ // "description": "Retrieves the list of firewall rules available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.firewalls.list",
// "parameterOrder": [
@@ -49596,7 +49799,8 @@ type FirewallsPatchCall struct {
// Patch: Updates the specified firewall rule with the data included in
// the request. This method supports PATCH semantics and uses the JSON
-// merge patch format and processing rules.
+// merge patch format and processing rules. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/patch
func (r *FirewallsService) Patch(project string, firewall string, firewall2 *Firewall) *FirewallsPatchCall {
c := &FirewallsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -49652,7 +49856,7 @@ func (c *FirewallsPatchCall) Header() http.Header {
func (c *FirewallsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -49717,7 +49921,7 @@ func (c *FirewallsPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Updates the specified firewall rule with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the specified firewall rule with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.firewalls.patch",
// "parameterOrder": [
@@ -49773,9 +49977,9 @@ type FirewallsUpdateCall struct {
}
// Update: Updates the specified firewall rule with the data included in
-// the request. The PUT method can only update the following fields of
-// firewall rule: allowed, description, sourceRanges, sourceTags,
-// targetTags.
+// the request. Note that all fields will be updated if using PUT, even
+// fields that are not specified. To update individual fields, please
+// use PATCH instead. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/firewalls/update
func (r *FirewallsService) Update(project string, firewall string, firewall2 *Firewall) *FirewallsUpdateCall {
c := &FirewallsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -49831,7 +50035,7 @@ func (c *FirewallsUpdateCall) Header() http.Header {
func (c *FirewallsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -49896,7 +50100,7 @@ func (c *FirewallsUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Updates the specified firewall rule with the data included in the request. The PUT method can only update the following fields of firewall rule: allowed, description, sourceRanges, sourceTags, targetTags.",
+ // "description": "Updates the specified firewall rule with the data included in the request. Note that all fields will be updated if using PUT, even fields that are not specified. To update individual fields, please use PATCH instead. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.firewalls.update",
// "parameterOrder": [
@@ -49950,7 +50154,8 @@ type ForwardingRulesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of forwarding rules.
+// AggregatedList: Retrieves an aggregated list of forwarding rules. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/forwardingRules/aggregatedList
func (r *ForwardingRulesService) AggregatedList(project string) *ForwardingRulesAggregatedListCall {
c := &ForwardingRulesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -50058,7 +50263,7 @@ func (c *ForwardingRulesAggregatedListCall) Header() http.Header {
func (c *ForwardingRulesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -50120,7 +50325,7 @@ func (c *ForwardingRulesAggregatedListCall) Do(opts ...googleapi.CallOption) (*F
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of forwarding rules.",
+ // "description": "Retrieves an aggregated list of forwarding rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.forwardingRules.aggregatedList",
// "parameterOrder": [
@@ -50204,7 +50409,8 @@ type ForwardingRulesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified ForwardingRule resource.
+// Delete: Deletes the specified ForwardingRule resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/forwardingRules/delete
func (r *ForwardingRulesService) Delete(project string, region string, forwardingRule string) *ForwardingRulesDeleteCall {
c := &ForwardingRulesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -50260,7 +50466,7 @@ func (c *ForwardingRulesDeleteCall) Header() http.Header {
func (c *ForwardingRulesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -50321,7 +50527,7 @@ func (c *ForwardingRulesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Deletes the specified ForwardingRule resource.",
+ // "description": "Deletes the specified ForwardingRule resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.forwardingRules.delete",
// "parameterOrder": [
@@ -50382,7 +50588,8 @@ type ForwardingRulesGetCall struct {
header_ http.Header
}
-// Get: Returns the specified ForwardingRule resource.
+// Get: Returns the specified ForwardingRule resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/forwardingRules/get
func (r *ForwardingRulesService) Get(project string, region string, forwardingRule string) *ForwardingRulesGetCall {
c := &ForwardingRulesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -50429,7 +50636,7 @@ func (c *ForwardingRulesGetCall) Header() http.Header {
func (c *ForwardingRulesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -50493,7 +50700,7 @@ func (c *ForwardingRulesGetCall) Do(opts ...googleapi.CallOption) (*ForwardingRu
}
return ret, nil
// {
- // "description": "Returns the specified ForwardingRule resource.",
+ // "description": "Returns the specified ForwardingRule resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.forwardingRules.get",
// "parameterOrder": [
@@ -50550,7 +50757,8 @@ type ForwardingRulesInsertCall struct {
}
// Insert: Creates a ForwardingRule resource in the specified project
-// and region using the data included in the request.
+// and region using the data included in the request. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/forwardingRules/insert
func (r *ForwardingRulesService) Insert(project string, region string, forwardingrule *ForwardingRule) *ForwardingRulesInsertCall {
c := &ForwardingRulesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -50606,7 +50814,7 @@ func (c *ForwardingRulesInsertCall) Header() http.Header {
func (c *ForwardingRulesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -50671,7 +50879,7 @@ func (c *ForwardingRulesInsertCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a ForwardingRule resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.forwardingRules.insert",
// "parameterOrder": [
@@ -50727,7 +50935,8 @@ type ForwardingRulesListCall struct {
}
// List: Retrieves a list of ForwardingRule resources available to the
-// specified project and region.
+// specified project and region. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/forwardingRules/list
func (r *ForwardingRulesService) List(project string, region string) *ForwardingRulesListCall {
c := &ForwardingRulesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -50836,7 +51045,7 @@ func (c *ForwardingRulesListCall) Header() http.Header {
func (c *ForwardingRulesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -50899,7 +51108,7 @@ func (c *ForwardingRulesListCall) Do(opts ...googleapi.CallOption) (*ForwardingR
}
return ret, nil
// {
- // "description": "Retrieves a list of ForwardingRule resources available to the specified project and region.",
+ // "description": "Retrieves a list of ForwardingRule resources available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.forwardingRules.list",
// "parameterOrder": [
@@ -50993,7 +51202,8 @@ type ForwardingRulesSetTargetCall struct {
}
// SetTarget: Changes target URL for forwarding rule. The new target
-// should be of the same type as the old target.
+// should be of the same type as the old target. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/forwardingRules/setTarget
func (r *ForwardingRulesService) SetTarget(project string, region string, forwardingRule string, targetreference *TargetReference) *ForwardingRulesSetTargetCall {
c := &ForwardingRulesSetTargetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -51050,7 +51260,7 @@ func (c *ForwardingRulesSetTargetCall) Header() http.Header {
func (c *ForwardingRulesSetTargetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -51116,7 +51326,7 @@ func (c *ForwardingRulesSetTargetCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Changes target URL for forwarding rule. The new target should be of the same type as the old target.",
+ // "description": "Changes target URL for forwarding rule. The new target should be of the same type as the old target. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.forwardingRules.setTarget",
// "parameterOrder": [
@@ -51178,7 +51388,8 @@ type GlobalAddressesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified address resource.
+// Delete: Deletes the specified address resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalAddresses/delete
func (r *GlobalAddressesService) Delete(project string, address string) *GlobalAddressesDeleteCall {
c := &GlobalAddressesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -51233,7 +51444,7 @@ func (c *GlobalAddressesDeleteCall) Header() http.Header {
func (c *GlobalAddressesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -51293,7 +51504,7 @@ func (c *GlobalAddressesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Deletes the specified address resource.",
+ // "description": "Deletes the specified address resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.globalAddresses.delete",
// "parameterOrder": [
@@ -51346,7 +51557,8 @@ type GlobalAddressesGetCall struct {
}
// Get: Returns the specified address resource. Gets a list of available
-// addresses by making a list() request.
+// addresses by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalAddresses/get
func (r *GlobalAddressesService) Get(project string, address string) *GlobalAddressesGetCall {
c := &GlobalAddressesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -51392,7 +51604,7 @@ func (c *GlobalAddressesGetCall) Header() http.Header {
func (c *GlobalAddressesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -51455,7 +51667,7 @@ func (c *GlobalAddressesGetCall) Do(opts ...googleapi.CallOption) (*Address, err
}
return ret, nil
// {
- // "description": "Returns the specified address resource. Gets a list of available addresses by making a list() request.",
+ // "description": "Returns the specified address resource. Gets a list of available addresses by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalAddresses.get",
// "parameterOrder": [
@@ -51503,7 +51715,8 @@ type GlobalAddressesInsertCall struct {
}
// Insert: Creates an address resource in the specified project by using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalAddresses/insert
func (r *GlobalAddressesService) Insert(project string, address *Address) *GlobalAddressesInsertCall {
c := &GlobalAddressesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -51558,7 +51771,7 @@ func (c *GlobalAddressesInsertCall) Header() http.Header {
func (c *GlobalAddressesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -51622,7 +51835,7 @@ func (c *GlobalAddressesInsertCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Creates an address resource in the specified project by using the data included in the request.",
+ // "description": "Creates an address resource in the specified project by using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.globalAddresses.insert",
// "parameterOrder": [
@@ -51668,7 +51881,8 @@ type GlobalAddressesListCall struct {
header_ http.Header
}
-// List: Retrieves a list of global addresses.
+// List: Retrieves a list of global addresses. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalAddresses/list
func (r *GlobalAddressesService) List(project string) *GlobalAddressesListCall {
c := &GlobalAddressesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -51776,7 +51990,7 @@ func (c *GlobalAddressesListCall) Header() http.Header {
func (c *GlobalAddressesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -51838,7 +52052,7 @@ func (c *GlobalAddressesListCall) Do(opts ...googleapi.CallOption) (*AddressList
}
return ret, nil
// {
- // "description": "Retrieves a list of global addresses.",
+ // "description": "Retrieves a list of global addresses. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalAddresses.list",
// "parameterOrder": [
@@ -51921,7 +52135,8 @@ type GlobalForwardingRulesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified GlobalForwardingRule resource.
+// Delete: Deletes the specified GlobalForwardingRule resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalForwardingRules/delete
func (r *GlobalForwardingRulesService) Delete(project string, forwardingRule string) *GlobalForwardingRulesDeleteCall {
c := &GlobalForwardingRulesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -51976,7 +52191,7 @@ func (c *GlobalForwardingRulesDeleteCall) Header() http.Header {
func (c *GlobalForwardingRulesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -52036,7 +52251,7 @@ func (c *GlobalForwardingRulesDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes the specified GlobalForwardingRule resource.",
+ // "description": "Deletes the specified GlobalForwardingRule resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.globalForwardingRules.delete",
// "parameterOrder": [
@@ -52089,7 +52304,8 @@ type GlobalForwardingRulesGetCall struct {
}
// Get: Returns the specified GlobalForwardingRule resource. Gets a list
-// of available forwarding rules by making a list() request.
+// of available forwarding rules by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalForwardingRules/get
func (r *GlobalForwardingRulesService) Get(project string, forwardingRule string) *GlobalForwardingRulesGetCall {
c := &GlobalForwardingRulesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -52135,7 +52351,7 @@ func (c *GlobalForwardingRulesGetCall) Header() http.Header {
func (c *GlobalForwardingRulesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -52198,7 +52414,7 @@ func (c *GlobalForwardingRulesGetCall) Do(opts ...googleapi.CallOption) (*Forwar
}
return ret, nil
// {
- // "description": "Returns the specified GlobalForwardingRule resource. Gets a list of available forwarding rules by making a list() request.",
+ // "description": "Returns the specified GlobalForwardingRule resource. Gets a list of available forwarding rules by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalForwardingRules.get",
// "parameterOrder": [
@@ -52246,7 +52462,8 @@ type GlobalForwardingRulesInsertCall struct {
}
// Insert: Creates a GlobalForwardingRule resource in the specified
-// project using the data included in the request.
+// project using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalForwardingRules/insert
func (r *GlobalForwardingRulesService) Insert(project string, forwardingrule *ForwardingRule) *GlobalForwardingRulesInsertCall {
c := &GlobalForwardingRulesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -52301,7 +52518,7 @@ func (c *GlobalForwardingRulesInsertCall) Header() http.Header {
func (c *GlobalForwardingRulesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -52365,7 +52582,7 @@ func (c *GlobalForwardingRulesInsertCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Creates a GlobalForwardingRule resource in the specified project using the data included in the request.",
+ // "description": "Creates a GlobalForwardingRule resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.globalForwardingRules.insert",
// "parameterOrder": [
@@ -52412,7 +52629,7 @@ type GlobalForwardingRulesListCall struct {
}
// List: Retrieves a list of GlobalForwardingRule resources available to
-// the specified project.
+// the specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalForwardingRules/list
func (r *GlobalForwardingRulesService) List(project string) *GlobalForwardingRulesListCall {
c := &GlobalForwardingRulesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -52520,7 +52737,7 @@ func (c *GlobalForwardingRulesListCall) Header() http.Header {
func (c *GlobalForwardingRulesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -52582,7 +52799,7 @@ func (c *GlobalForwardingRulesListCall) Do(opts ...googleapi.CallOption) (*Forwa
}
return ret, nil
// {
- // "description": "Retrieves a list of GlobalForwardingRule resources available to the specified project.",
+ // "description": "Retrieves a list of GlobalForwardingRule resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalForwardingRules.list",
// "parameterOrder": [
@@ -52667,7 +52884,8 @@ type GlobalForwardingRulesSetTargetCall struct {
}
// SetTarget: Changes target URL for the GlobalForwardingRule resource.
-// The new target should be of the same type as the old target.
+// The new target should be of the same type as the old target. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalForwardingRules/setTarget
func (r *GlobalForwardingRulesService) SetTarget(project string, forwardingRule string, targetreference *TargetReference) *GlobalForwardingRulesSetTargetCall {
c := &GlobalForwardingRulesSetTargetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -52723,7 +52941,7 @@ func (c *GlobalForwardingRulesSetTargetCall) Header() http.Header {
func (c *GlobalForwardingRulesSetTargetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -52788,7 +53006,7 @@ func (c *GlobalForwardingRulesSetTargetCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Changes target URL for the GlobalForwardingRule resource. The new target should be of the same type as the old target.",
+ // "description": "Changes target URL for the GlobalForwardingRule resource. The new target should be of the same type as the old target. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.globalForwardingRules.setTarget",
// "parameterOrder": [
@@ -52842,7 +53060,8 @@ type GlobalOperationsAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of all operations.
+// AggregatedList: Retrieves an aggregated list of all operations. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalOperations/aggregatedList
func (r *GlobalOperationsService) AggregatedList(project string) *GlobalOperationsAggregatedListCall {
c := &GlobalOperationsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -52950,7 +53169,7 @@ func (c *GlobalOperationsAggregatedListCall) Header() http.Header {
func (c *GlobalOperationsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -53012,7 +53231,7 @@ func (c *GlobalOperationsAggregatedListCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of all operations.",
+ // "description": "Retrieves an aggregated list of all operations. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalOperations.aggregatedList",
// "parameterOrder": [
@@ -53095,7 +53314,8 @@ type GlobalOperationsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified Operations resource.
+// Delete: Deletes the specified Operations resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalOperations/delete
func (r *GlobalOperationsService) Delete(project string, operation string) *GlobalOperationsDeleteCall {
c := &GlobalOperationsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -53131,7 +53351,7 @@ func (c *GlobalOperationsDeleteCall) Header() http.Header {
func (c *GlobalOperationsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -53166,7 +53386,7 @@ func (c *GlobalOperationsDeleteCall) Do(opts ...googleapi.CallOption) error {
}
return nil
// {
- // "description": "Deletes the specified Operations resource.",
+ // "description": "Deletes the specified Operations resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.globalOperations.delete",
// "parameterOrder": [
@@ -53211,7 +53431,8 @@ type GlobalOperationsGetCall struct {
}
// Get: Retrieves the specified Operations resource. Gets a list of
-// operations by making a list() request.
+// operations by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalOperations/get
func (r *GlobalOperationsService) Get(project string, operation string) *GlobalOperationsGetCall {
c := &GlobalOperationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -53257,7 +53478,7 @@ func (c *GlobalOperationsGetCall) Header() http.Header {
func (c *GlobalOperationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -53320,7 +53541,7 @@ func (c *GlobalOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Retrieves the specified Operations resource. Gets a list of operations by making a list() request.",
+ // "description": "Retrieves the specified Operations resource. Gets a list of operations by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalOperations.get",
// "parameterOrder": [
@@ -53368,7 +53589,7 @@ type GlobalOperationsListCall struct {
}
// List: Retrieves a list of Operation resources contained within the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/globalOperations/list
func (r *GlobalOperationsService) List(project string) *GlobalOperationsListCall {
c := &GlobalOperationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -53476,7 +53697,7 @@ func (c *GlobalOperationsListCall) Header() http.Header {
func (c *GlobalOperationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -53538,7 +53759,7 @@ func (c *GlobalOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationL
}
return ret, nil
// {
- // "description": "Retrieves a list of Operation resources contained within the specified project.",
+ // "description": "Retrieves a list of Operation resources contained within the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.globalOperations.list",
// "parameterOrder": [
@@ -53622,7 +53843,8 @@ type HealthChecksAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of all HealthCheck resources,
-// regional and global, available to the specified project.
+// regional and global, available to the specified project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *HealthChecksService) AggregatedList(project string) *HealthChecksAggregatedListCall {
c := &HealthChecksAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
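
AggregatedList above is the cross-scope variant: one call returns health checks grouped by scope (global plus every region) instead of requiring a per-region fan-out. A sketch of walking the first page of the result, with the Items map shape and the HealthChecksScopedList field names assumed:

```
// Assumes svc (*compute.Service) and imports from the earlier sketch.
// Handles only the first page; AggregatedList paginates like List.
func dumpHealthChecks(ctx context.Context, svc *compute.Service, project string) error {
	agg, err := svc.HealthChecks.AggregatedList(project).Context(ctx).Do()
	if err != nil {
		return err
	}
	for scope, scoped := range agg.Items { // assumed: map keyed by scope
		for _, hc := range scoped.HealthChecks { // assumed field
			log.Printf("%s: %s (%s)", scope, hc.Name, hc.Type)
		}
	}
	return nil
}
```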
@@ -53729,7 +53951,7 @@ func (c *HealthChecksAggregatedListCall) Header() http.Header {
func (c *HealthChecksAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -53791,7 +54013,7 @@ func (c *HealthChecksAggregatedListCall) Do(opts ...googleapi.CallOption) (*Heal
}
return ret, nil
// {
- // "description": "Retrieves the list of all HealthCheck resources, regional and global, available to the specified project.",
+ // "description": "Retrieves the list of all HealthCheck resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.healthChecks.aggregatedList",
// "parameterOrder": [
@@ -53874,7 +54096,8 @@ type HealthChecksDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified HealthCheck resource.
+// Delete: Deletes the specified HealthCheck resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *HealthChecksService) Delete(project string, healthCheck string) *HealthChecksDeleteCall {
c := &HealthChecksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -53928,7 +54151,7 @@ func (c *HealthChecksDeleteCall) Header() http.Header {
func (c *HealthChecksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -53988,7 +54211,7 @@ func (c *HealthChecksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Deletes the specified HealthCheck resource.",
+ // "description": "Deletes the specified HealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.healthChecks.delete",
// "parameterOrder": [
@@ -54041,7 +54264,8 @@ type HealthChecksGetCall struct {
}
// Get: Returns the specified HealthCheck resource. Gets a list of
-// available health checks by making a list() request.
+// available health checks by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *HealthChecksService) Get(project string, healthCheck string) *HealthChecksGetCall {
c := &HealthChecksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -54086,7 +54310,7 @@ func (c *HealthChecksGetCall) Header() http.Header {
func (c *HealthChecksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -54149,7 +54373,7 @@ func (c *HealthChecksGetCall) Do(opts ...googleapi.CallOption) (*HealthCheck, er
}
return ret, nil
// {
- // "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request.",
+ // "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.healthChecks.get",
// "parameterOrder": [
@@ -54197,7 +54421,8 @@ type HealthChecksInsertCall struct {
}
// Insert: Creates a HealthCheck resource in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *HealthChecksService) Insert(project string, healthcheck *HealthCheck) *HealthChecksInsertCall {
c := &HealthChecksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -54251,7 +54476,7 @@ func (c *HealthChecksInsertCall) Header() http.Header {
func (c *HealthChecksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -54315,7 +54540,7 @@ func (c *HealthChecksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Creates a HealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Creates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.healthChecks.insert",
// "parameterOrder": [
@@ -54362,7 +54587,7 @@ type HealthChecksListCall struct {
}
// List: Retrieves the list of HealthCheck resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *HealthChecksService) List(project string) *HealthChecksListCall {
c := &HealthChecksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -54469,7 +54694,7 @@ func (c *HealthChecksListCall) Header() http.Header {
func (c *HealthChecksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -54531,7 +54756,7 @@ func (c *HealthChecksListCall) Do(opts ...googleapi.CallOption) (*HealthCheckLis
}
return ret, nil
// {
- // "description": "Retrieves the list of HealthCheck resources available to the specified project.",
+ // "description": "Retrieves the list of HealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.healthChecks.list",
// "parameterOrder": [
@@ -54618,6 +54843,7 @@ type HealthChecksPatchCall struct {
// Patch: Updates a HealthCheck resource in the specified project using
// the data included in the request. This method supports PATCH
// semantics and uses the JSON merge patch format and processing rules.
+// (== suppress_warning http-rest-shadowed ==)
func (r *HealthChecksService) Patch(project string, healthCheck string, healthcheck *HealthCheck) *HealthChecksPatchCall {
c := &HealthChecksPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -54672,7 +54898,7 @@ func (c *HealthChecksPatchCall) Header() http.Header {
func (c *HealthChecksPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -54737,7 +54963,7 @@ func (c *HealthChecksPatchCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.healthChecks.patch",
// "parameterOrder": [
@@ -54793,7 +55019,8 @@ type HealthChecksUpdateCall struct {
}
// Update: Updates a HealthCheck resource in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *HealthChecksService) Update(project string, healthCheck string, healthcheck *HealthCheck) *HealthChecksUpdateCall {
c := &HealthChecksUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -54848,7 +55075,7 @@ func (c *HealthChecksUpdateCall) Header() http.Header {
func (c *HealthChecksUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -54913,7 +55140,7 @@ func (c *HealthChecksUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Updates a HealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Updates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.healthChecks.update",
// "parameterOrder": [
@@ -54967,7 +55194,8 @@ type HttpHealthChecksDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified HttpHealthCheck resource.
+// Delete: Deletes the specified HttpHealthCheck resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/delete
func (r *HttpHealthChecksService) Delete(project string, httpHealthCheck string) *HttpHealthChecksDeleteCall {
c := &HttpHealthChecksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -55022,7 +55250,7 @@ func (c *HttpHealthChecksDeleteCall) Header() http.Header {
func (c *HttpHealthChecksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -55082,7 +55310,7 @@ func (c *HttpHealthChecksDeleteCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Deletes the specified HttpHealthCheck resource.",
+ // "description": "Deletes the specified HttpHealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.httpHealthChecks.delete",
// "parameterOrder": [
@@ -55135,7 +55363,8 @@ type HttpHealthChecksGetCall struct {
}
// Get: Returns the specified HttpHealthCheck resource. Gets a list of
-// available HTTP health checks by making a list() request.
+// available HTTP health checks by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/get
func (r *HttpHealthChecksService) Get(project string, httpHealthCheck string) *HttpHealthChecksGetCall {
c := &HttpHealthChecksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -55181,7 +55410,7 @@ func (c *HttpHealthChecksGetCall) Header() http.Header {
func (c *HttpHealthChecksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -55244,7 +55473,7 @@ func (c *HttpHealthChecksGetCall) Do(opts ...googleapi.CallOption) (*HttpHealthC
}
return ret, nil
// {
- // "description": "Returns the specified HttpHealthCheck resource. Gets a list of available HTTP health checks by making a list() request.",
+ // "description": "Returns the specified HttpHealthCheck resource. Gets a list of available HTTP health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.httpHealthChecks.get",
// "parameterOrder": [
@@ -55292,7 +55521,8 @@ type HttpHealthChecksInsertCall struct {
}
// Insert: Creates a HttpHealthCheck resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/insert
func (r *HttpHealthChecksService) Insert(project string, httphealthcheck *HttpHealthCheck) *HttpHealthChecksInsertCall {
c := &HttpHealthChecksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -55347,7 +55577,7 @@ func (c *HttpHealthChecksInsertCall) Header() http.Header {
func (c *HttpHealthChecksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -55411,7 +55641,7 @@ func (c *HttpHealthChecksInsertCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Creates a HttpHealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Creates a HttpHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.httpHealthChecks.insert",
// "parameterOrder": [
@@ -55458,7 +55688,7 @@ type HttpHealthChecksListCall struct {
}
// List: Retrieves the list of HttpHealthCheck resources available to
-// the specified project.
+// the specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/list
func (r *HttpHealthChecksService) List(project string) *HttpHealthChecksListCall {
c := &HttpHealthChecksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -55566,7 +55796,7 @@ func (c *HttpHealthChecksListCall) Header() http.Header {
func (c *HttpHealthChecksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -55628,7 +55858,7 @@ func (c *HttpHealthChecksListCall) Do(opts ...googleapi.CallOption) (*HttpHealth
}
return ret, nil
// {
- // "description": "Retrieves the list of HttpHealthCheck resources available to the specified project.",
+ // "description": "Retrieves the list of HttpHealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.httpHealthChecks.list",
// "parameterOrder": [
@@ -55715,6 +55945,7 @@ type HttpHealthChecksPatchCall struct {
// Patch: Updates a HttpHealthCheck resource in the specified project
// using the data included in the request. This method supports PATCH
// semantics and uses the JSON merge patch format and processing rules.
+// (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/patch
func (r *HttpHealthChecksService) Patch(project string, httpHealthCheck string, httphealthcheck *HttpHealthCheck) *HttpHealthChecksPatchCall {
c := &HttpHealthChecksPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -55770,7 +56001,7 @@ func (c *HttpHealthChecksPatchCall) Header() http.Header {
func (c *HttpHealthChecksPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -55835,7 +56066,7 @@ func (c *HttpHealthChecksPatchCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.httpHealthChecks.patch",
// "parameterOrder": [
@@ -55891,7 +56122,8 @@ type HttpHealthChecksUpdateCall struct {
}
// Update: Updates a HttpHealthCheck resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/httpHealthChecks/update
func (r *HttpHealthChecksService) Update(project string, httpHealthCheck string, httphealthcheck *HttpHealthCheck) *HttpHealthChecksUpdateCall {
c := &HttpHealthChecksUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -55947,7 +56179,7 @@ func (c *HttpHealthChecksUpdateCall) Header() http.Header {
func (c *HttpHealthChecksUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -56012,7 +56244,7 @@ func (c *HttpHealthChecksUpdateCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Updates a HttpHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.httpHealthChecks.update",
// "parameterOrder": [
@@ -56066,7 +56298,8 @@ type HttpsHealthChecksDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified HttpsHealthCheck resource.
+// Delete: Deletes the specified HttpsHealthCheck resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *HttpsHealthChecksService) Delete(project string, httpsHealthCheck string) *HttpsHealthChecksDeleteCall {
c := &HttpsHealthChecksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -56120,7 +56353,7 @@ func (c *HttpsHealthChecksDeleteCall) Header() http.Header {
func (c *HttpsHealthChecksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -56180,7 +56413,7 @@ func (c *HttpsHealthChecksDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified HttpsHealthCheck resource.",
+ // "description": "Deletes the specified HttpsHealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.httpsHealthChecks.delete",
// "parameterOrder": [
@@ -56233,7 +56466,8 @@ type HttpsHealthChecksGetCall struct {
}
// Get: Returns the specified HttpsHealthCheck resource. Gets a list of
-// available HTTPS health checks by making a list() request.
+// available HTTPS health checks by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *HttpsHealthChecksService) Get(project string, httpsHealthCheck string) *HttpsHealthChecksGetCall {
c := &HttpsHealthChecksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -56278,7 +56512,7 @@ func (c *HttpsHealthChecksGetCall) Header() http.Header {
func (c *HttpsHealthChecksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
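
Every regenerated call type keeps the `Header()` hook visible in the hunk context above, so individual requests can be decorated without touching the client. A sketch under those assumptions (resource names are hypothetical, and `svc` is a `*compute.Service` built as in the earlier sketch):

```go
package example

import (
	"log"

	compute "google.golang.org/api/compute/v1"
)

// getCheckTrimmed fetches one HTTPS health check, adding a custom header via
// the generated Header() hook and trimming the response with Fields.
// Project and check names are placeholders.
func getCheckTrimmed(svc *compute.Service) {
	call := svc.HttpsHealthChecks.Get("my-project", "demo-https-check")
	call.Header().Set("X-Request-Reason", "audit")
	hc, err := call.Fields("name", "port", "requestPath").Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Println(hc.Name, hc.Port)
}
```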
@@ -56341,7 +56575,7 @@ func (c *HttpsHealthChecksGetCall) Do(opts ...googleapi.CallOption) (*HttpsHealt
}
return ret, nil
// {
- // "description": "Returns the specified HttpsHealthCheck resource. Gets a list of available HTTPS health checks by making a list() request.",
+ // "description": "Returns the specified HttpsHealthCheck resource. Gets a list of available HTTPS health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.httpsHealthChecks.get",
// "parameterOrder": [
@@ -56389,7 +56623,8 @@ type HttpsHealthChecksInsertCall struct {
}
// Insert: Creates a HttpsHealthCheck resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *HttpsHealthChecksService) Insert(project string, httpshealthcheck *HttpsHealthCheck) *HttpsHealthChecksInsertCall {
c := &HttpsHealthChecksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -56443,7 +56678,7 @@ func (c *HttpsHealthChecksInsertCall) Header() http.Header {
func (c *HttpsHealthChecksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -56507,7 +56742,7 @@ func (c *HttpsHealthChecksInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates a HttpsHealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Creates a HttpsHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.httpsHealthChecks.insert",
// "parameterOrder": [
@@ -56554,7 +56789,7 @@ type HttpsHealthChecksListCall struct {
}
// List: Retrieves the list of HttpsHealthCheck resources available to
-// the specified project.
+// the specified project. (== suppress_warning http-rest-shadowed ==)
func (r *HttpsHealthChecksService) List(project string) *HttpsHealthChecksListCall {
c := &HttpsHealthChecksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -56661,7 +56896,7 @@ func (c *HttpsHealthChecksListCall) Header() http.Header {
func (c *HttpsHealthChecksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -56723,7 +56958,7 @@ func (c *HttpsHealthChecksListCall) Do(opts ...googleapi.CallOption) (*HttpsHeal
}
return ret, nil
// {
- // "description": "Retrieves the list of HttpsHealthCheck resources available to the specified project.",
+ // "description": "Retrieves the list of HttpsHealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.httpsHealthChecks.list",
// "parameterOrder": [
@@ -56810,6 +57045,7 @@ type HttpsHealthChecksPatchCall struct {
// Patch: Updates a HttpsHealthCheck resource in the specified project
// using the data included in the request. This method supports PATCH
// semantics and uses the JSON merge patch format and processing rules.
+// (== suppress_warning http-rest-shadowed ==)
func (r *HttpsHealthChecksService) Patch(project string, httpsHealthCheck string, httpshealthcheck *HttpsHealthCheck) *HttpsHealthChecksPatchCall {
c := &HttpsHealthChecksPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -56864,7 +57100,7 @@ func (c *HttpsHealthChecksPatchCall) Header() http.Header {
func (c *HttpsHealthChecksPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -56929,7 +57165,7 @@ func (c *HttpsHealthChecksPatchCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.httpsHealthChecks.patch",
// "parameterOrder": [
@@ -56985,7 +57221,8 @@ type HttpsHealthChecksUpdateCall struct {
}
// Update: Updates a HttpsHealthCheck resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *HttpsHealthChecksService) Update(project string, httpsHealthCheck string, httpshealthcheck *HttpsHealthCheck) *HttpsHealthChecksUpdateCall {
c := &HttpsHealthChecksUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -57040,7 +57277,7 @@ func (c *HttpsHealthChecksUpdateCall) Header() http.Header {
func (c *HttpsHealthChecksUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -57105,7 +57342,7 @@ func (c *HttpsHealthChecksUpdateCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Updates a HttpsHealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.httpsHealthChecks.update",
// "parameterOrder": [
@@ -57159,7 +57396,8 @@ type ImagesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified image.
+// Delete: Deletes the specified image. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/delete
func (r *ImagesService) Delete(project string, image string) *ImagesDeleteCall {
c := &ImagesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -57214,7 +57452,7 @@ func (c *ImagesDeleteCall) Header() http.Header {
func (c *ImagesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
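
Generated calls such as `ImagesDeleteCall` also expose a `Context()` option (not shown in these hunks, but part of the same generated surface), which binds the underlying HTTP request to a cancellable context. A hedged sketch with placeholder names:

```go
package example

import (
	"context"
	"log"
	"time"

	compute "google.golang.org/api/compute/v1"
)

// deleteImage issues the image delete described above, tying the HTTP
// request to a context with a deadline. Project and image names are
// placeholders.
func deleteImage(svc *compute.Service) {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	op, err := svc.Images.Delete("my-project", "old-image").Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("delete queued:", op.Name)
}
```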
@@ -57274,7 +57512,7 @@ func (c *ImagesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Deletes the specified image.",
+ // "description": "Deletes the specified image. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.images.delete",
// "parameterOrder": [
@@ -57329,7 +57567,7 @@ type ImagesDeprecateCall struct {
// Deprecate: Sets the deprecation status of an image.
//
// If an empty request body is given, clears the deprecation status
-// instead.
+// instead. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/deprecate
func (r *ImagesService) Deprecate(project string, image string, deprecationstatus *DeprecationStatus) *ImagesDeprecateCall {
c := &ImagesDeprecateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -57385,7 +57623,7 @@ func (c *ImagesDeprecateCall) Header() http.Header {
func (c *ImagesDeprecateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -57450,7 +57688,7 @@ func (c *ImagesDeprecateCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Sets the deprecation status of an image.\n\nIf an empty request body is given, clears the deprecation status instead.",
+ // "description": "Sets the deprecation status of an image.\n\nIf an empty request body is given, clears the deprecation status instead. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.images.deprecate",
// "parameterOrder": [
@@ -57506,7 +57744,7 @@ type ImagesGetCall struct {
}
// Get: Returns the specified image. Gets a list of available images by
-// making a list() request.
+// making a list() request. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/get
func (r *ImagesService) Get(project string, image string) *ImagesGetCall {
c := &ImagesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -57552,7 +57790,7 @@ func (c *ImagesGetCall) Header() http.Header {
func (c *ImagesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -57615,7 +57853,7 @@ func (c *ImagesGetCall) Do(opts ...googleapi.CallOption) (*Image, error) {
}
return ret, nil
// {
- // "description": "Returns the specified image. Gets a list of available images by making a list() request.",
+ // "description": "Returns the specified image. Gets a list of available images by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.images.get",
// "parameterOrder": [
@@ -57664,7 +57902,8 @@ type ImagesGetFromFamilyCall struct {
}
// GetFromFamily: Returns the latest image that is part of an image
-// family and is not deprecated.
+// family and is not deprecated. (== suppress_warning http-rest-shadowed
+// ==)
func (r *ImagesService) GetFromFamily(project string, family string) *ImagesGetFromFamilyCall {
c := &ImagesGetFromFamilyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -57709,7 +57948,7 @@ func (c *ImagesGetFromFamilyCall) Header() http.Header {
func (c *ImagesGetFromFamilyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -57772,7 +58011,7 @@ func (c *ImagesGetFromFamilyCall) Do(opts ...googleapi.CallOption) (*Image, erro
}
return ret, nil
// {
- // "description": "Returns the latest image that is part of an image family and is not deprecated.",
+ // "description": "Returns the latest image that is part of an image family and is not deprecated. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.images.getFromFamily",
// "parameterOrder": [
@@ -57821,7 +58060,8 @@ type ImagesGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ImagesService) GetIamPolicy(project string, resource string) *ImagesGetIamPolicyCall {
c := &ImagesGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -57866,7 +58106,7 @@ func (c *ImagesGetIamPolicyCall) Header() http.Header {
func (c *ImagesGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -57929,7 +58169,7 @@ func (c *ImagesGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, erro
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.images.getIamPolicy",
// "parameterOrder": [
@@ -57977,7 +58217,7 @@ type ImagesInsertCall struct {
}
// Insert: Creates an image in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/insert
func (r *ImagesService) Insert(project string, image *Image) *ImagesInsertCall {
c := &ImagesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -58039,7 +58279,7 @@ func (c *ImagesInsertCall) Header() http.Header {
func (c *ImagesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -58103,7 +58343,7 @@ func (c *ImagesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Creates an image in the specified project using the data included in the request.",
+ // "description": "Creates an image in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.images.insert",
// "parameterOrder": [
@@ -58163,7 +58403,7 @@ type ImagesListCall struct {
// projects, including publicly-available images, like Debian 8. If you
// want to get a list of publicly-available images, use this method to
// make a request to the respective image project, such as debian-cloud
-// or windows-cloud.
+// or windows-cloud. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/images/list
func (r *ImagesService) List(project string) *ImagesListCall {
c := &ImagesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -58271,7 +58511,7 @@ func (c *ImagesListCall) Header() http.Header {
func (c *ImagesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
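
The `Images.List` method documented above returns pages driven by `maxResults`/`pageToken`; the generated `Pages` helper follows `nextPageToken` automatically so callers rarely loop by hand. A minimal sketch (project ID is a placeholder):

```go
package example

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

// listCustomImages pages through the project's custom images using the
// generated Pages helper, which fetches successive pages until
// nextPageToken is empty. The project ID is a placeholder.
func listCustomImages(ctx context.Context, svc *compute.Service) {
	err := svc.Images.List("my-project").Pages(ctx, func(page *compute.ImageList) error {
		for _, img := range page.Items {
			fmt.Println(img.Name, img.Family)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```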
@@ -58333,7 +58573,7 @@ func (c *ImagesListCall) Do(opts ...googleapi.CallOption) (*ImageList, error) {
}
return ret, nil
// {
- // "description": "Retrieves the list of custom images available to the specified project. Custom images are images you create that belong to your project. This method does not get any images that belong to other projects, including publicly-available images, like Debian 8. If you want to get a list of publicly-available images, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud.",
+ // "description": "Retrieves the list of custom images available to the specified project. Custom images are images you create that belong to your project. This method does not get any images that belong to other projects, including publicly-available images, like Debian 8. If you want to get a list of publicly-available images, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.images.list",
// "parameterOrder": [
@@ -58418,7 +58658,8 @@ type ImagesSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ImagesService) SetIamPolicy(project string, resource string, globalsetpolicyrequest *GlobalSetPolicyRequest) *ImagesSetIamPolicyCall {
c := &ImagesSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -58454,7 +58695,7 @@ func (c *ImagesSetIamPolicyCall) Header() http.Header {
func (c *ImagesSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -58519,7 +58760,7 @@ func (c *ImagesSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, erro
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.images.setIamPolicy",
// "parameterOrder": [
@@ -58570,7 +58811,8 @@ type ImagesSetLabelsCall struct {
}
// SetLabels: Sets the labels on an image. To learn more about labels,
-// read the Labeling Resources documentation.
+// read the Labeling Resources documentation. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ImagesService) SetLabels(project string, resource string, globalsetlabelsrequest *GlobalSetLabelsRequest) *ImagesSetLabelsCall {
c := &ImagesSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -58606,7 +58848,7 @@ func (c *ImagesSetLabelsCall) Header() http.Header {
func (c *ImagesSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -58671,7 +58913,7 @@ func (c *ImagesSetLabelsCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Sets the labels on an image. To learn more about labels, read the Labeling Resources documentation.",
+ // "description": "Sets the labels on an image. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.images.setLabels",
// "parameterOrder": [
@@ -58722,7 +58964,7 @@ type ImagesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *ImagesService) TestIamPermissions(project string, resource string, testpermissionsrequest *TestPermissionsRequest) *ImagesTestIamPermissionsCall {
c := &ImagesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -58758,7 +59000,7 @@ func (c *ImagesTestIamPermissionsCall) Header() http.Header {
func (c *ImagesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
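
As a usage sketch for the `TestIamPermissions` method regenerated above (project, image, and permission strings are placeholders), the call reports which of the supplied permissions the caller actually holds:

```go
package example

import (
	"log"

	compute "google.golang.org/api/compute/v1"
)

// checkImagePermissions asks which of the listed permissions the caller
// holds on an image. Project, image name, and permissions are placeholders.
func checkImagePermissions(svc *compute.Service) {
	resp, err := svc.Images.TestIamPermissions("my-project", "demo-image",
		&compute.TestPermissionsRequest{
			Permissions: []string{"compute.images.get", "compute.images.useReadOnly"},
		}).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Println("granted:", resp.Permissions)
}
```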
@@ -58823,7 +59065,7 @@ func (c *ImagesTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*TestPe
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.images.testIamPermissions",
// "parameterOrder": [
@@ -58891,7 +59133,7 @@ type InstanceGroupManagersAbandonInstancesCall struct {
// deleted.
//
// You can specify a maximum of 1000 instances with this method per
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) AbandonInstances(project string, zone string, instanceGroupManager string, instancegroupmanagersabandoninstancesrequest *InstanceGroupManagersAbandonInstancesRequest) *InstanceGroupManagersAbandonInstancesCall {
c := &InstanceGroupManagersAbandonInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -58947,7 +59189,7 @@ func (c *InstanceGroupManagersAbandonInstancesCall) Header() http.Header {
func (c *InstanceGroupManagersAbandonInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -59013,7 +59255,7 @@ func (c *InstanceGroupManagersAbandonInstancesCall) Do(opts ...googleapi.CallOpt
}
return ret, nil
// {
- // "description": "Flags the specified instances to be removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ // "description": "Flags the specified instances to be removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.abandonInstances",
// "parameterOrder": [
@@ -59074,7 +59316,7 @@ type InstanceGroupManagersAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of managed instance groups and
-// groups them by zone.
+// groups them by zone. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) AggregatedList(project string) *InstanceGroupManagersAggregatedListCall {
c := &InstanceGroupManagersAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -59181,7 +59423,7 @@ func (c *InstanceGroupManagersAggregatedListCall) Header() http.Header {
func (c *InstanceGroupManagersAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -59244,7 +59486,7 @@ func (c *InstanceGroupManagersAggregatedListCall) Do(opts ...googleapi.CallOptio
}
return ret, nil
// {
- // "description": "Retrieves the list of managed instance groups and groups them by zone.",
+ // "description": "Retrieves the list of managed instance groups and groups them by zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceGroupManagers.aggregatedList",
// "parameterOrder": [
@@ -59331,7 +59573,7 @@ type InstanceGroupManagersDeleteCall struct {
// Delete: Deletes the specified managed instance group and all of the
// instances in that group. Note that the instance group must not belong
// to a backend service. Read Deleting an instance group for more
-// information.
+// information. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) Delete(project string, zone string, instanceGroupManager string) *InstanceGroupManagersDeleteCall {
c := &InstanceGroupManagersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -59386,7 +59628,7 @@ func (c *InstanceGroupManagersDeleteCall) Header() http.Header {
func (c *InstanceGroupManagersDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -59447,7 +59689,7 @@ func (c *InstanceGroupManagersDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes the specified managed instance group and all of the instances in that group. Note that the instance group must not belong to a backend service. Read Deleting an instance group for more information.",
+ // "description": "Deletes the specified managed instance group and all of the instances in that group. Note that the instance group must not belong to a backend service. Read Deleting an instance group for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.instanceGroupManagers.delete",
// "parameterOrder": [
@@ -59521,7 +59763,7 @@ type InstanceGroupManagersDeleteInstancesCall struct {
// deleted.
//
// You can specify a maximum of 1000 instances with this method per
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) DeleteInstances(project string, zone string, instanceGroupManager string, instancegroupmanagersdeleteinstancesrequest *InstanceGroupManagersDeleteInstancesRequest) *InstanceGroupManagersDeleteInstancesCall {
c := &InstanceGroupManagersDeleteInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -59577,7 +59819,7 @@ func (c *InstanceGroupManagersDeleteInstancesCall) Header() http.Header {
func (c *InstanceGroupManagersDeleteInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -59643,7 +59885,7 @@ func (c *InstanceGroupManagersDeleteInstancesCall) Do(opts ...googleapi.CallOpti
}
return ret, nil
// {
- // "description": "Flags the specified instances in the managed instance group for immediate deletion. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. This operation is marked as DONE when the action is scheduled even if the instances are still being deleted. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ // "description": "Flags the specified instances in the managed instance group for immediate deletion. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. This operation is marked as DONE when the action is scheduled even if the instances are still being deleted. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.deleteInstances",
// "parameterOrder": [
@@ -59707,7 +59949,7 @@ type InstanceGroupManagersGetCall struct {
// Get: Returns all of the details about the specified managed instance
// group. Gets a list of available managed instance groups by making a
-// list() request.
+// list() request. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) Get(project string, zone string, instanceGroupManager string) *InstanceGroupManagersGetCall {
c := &InstanceGroupManagersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -59753,7 +59995,7 @@ func (c *InstanceGroupManagersGetCall) Header() http.Header {
func (c *InstanceGroupManagersGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -59817,7 +60059,7 @@ func (c *InstanceGroupManagersGetCall) Do(opts ...googleapi.CallOption) (*Instan
}
return ret, nil
// {
- // "description": "Returns all of the details about the specified managed instance group. Gets a list of available managed instance groups by making a list() request.",
+ // "description": "Returns all of the details about the specified managed instance group. Gets a list of available managed instance groups by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceGroupManagers.get",
// "parameterOrder": [
@@ -59881,6 +60123,7 @@ type InstanceGroupManagersInsertCall struct {
//
// A managed instance group can have up to 1000 VM instances per group.
// Please contact Cloud Support if you need an increase in this limit.
+// (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) Insert(project string, zone string, instancegroupmanager *InstanceGroupManager) *InstanceGroupManagersInsertCall {
c := &InstanceGroupManagersInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -59935,7 +60178,7 @@ func (c *InstanceGroupManagersInsertCall) Header() http.Header {
func (c *InstanceGroupManagersInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -60000,7 +60243,7 @@ func (c *InstanceGroupManagersInsertCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA managed instance group can have up to 1000 VM instances per group. Please contact Cloud Support if you need an increase in this limit.",
+ // "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA managed instance group can have up to 1000 VM instances per group. Please contact Cloud Support if you need an increase in this limit. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.insert",
// "parameterOrder": [
@@ -60055,7 +60298,8 @@ type InstanceGroupManagersListCall struct {
}
// List: Retrieves a list of managed instance groups that are contained
-// within the specified project and zone.
+// within the specified project and zone. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstanceGroupManagersService) List(project string, zone string) *InstanceGroupManagersListCall {
c := &InstanceGroupManagersListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -60163,7 +60407,7 @@ func (c *InstanceGroupManagersListCall) Header() http.Header {
func (c *InstanceGroupManagersListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -60226,7 +60470,7 @@ func (c *InstanceGroupManagersListCall) Do(opts ...googleapi.CallOption) (*Insta
}
return ret, nil
// {
- // "description": "Retrieves a list of managed instance groups that are contained within the specified project and zone.",
+ // "description": "Retrieves a list of managed instance groups that are contained within the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceGroupManagers.list",
// "parameterOrder": [
@@ -60322,7 +60566,8 @@ type InstanceGroupManagersListManagedInstancesCall struct {
// indicates the action that the managed instance group is performing on
// the instance. For example, if the group is still creating an
// instance, the currentAction is CREATING. If a previous action failed,
-// the list displays the errors for that failed action.
+// the list displays the errors for that failed action. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) ListManagedInstances(project string, zone string, instanceGroupManager string) *InstanceGroupManagersListManagedInstancesCall {
c := &InstanceGroupManagersListManagedInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -60421,7 +60666,7 @@ func (c *InstanceGroupManagersListManagedInstancesCall) Header() http.Header {
func (c *InstanceGroupManagersListManagedInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
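
The `listManagedInstances` doc comment above notes that each returned instance carries a `currentAction` (for example CREATING while the group is still provisioning it). A short sketch of reading that field (group, zone, and project names are placeholders):

```go
package example

import (
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

// printManagedInstances lists each instance in a managed instance group
// together with the currentAction described in the doc comment above.
// Names are placeholders.
func printManagedInstances(svc *compute.Service) {
	resp, err := svc.InstanceGroupManagers.
		ListManagedInstances("my-project", "us-central1-a", "demo-mig").Do()
	if err != nil {
		log.Fatal(err)
	}
	for _, mi := range resp.ManagedInstances {
		fmt.Println(mi.Instance, mi.CurrentAction)
	}
}
```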
@@ -60484,7 +60729,7 @@ func (c *InstanceGroupManagersListManagedInstancesCall) Do(opts ...googleapi.Cal
}
return ret, nil
// {
- // "description": "Lists all of the instances in the managed instance group. Each instance in the list has a currentAction, which indicates the action that the managed instance group is performing on the instance. For example, if the group is still creating an instance, the currentAction is CREATING. If a previous action failed, the list displays the errors for that failed action.",
+ // "description": "Lists all of the instances in the managed instance group. Each instance in the list has a currentAction, which indicates the action that the managed instance group is performing on the instance. For example, if the group is still creating an instance, the currentAction is CREATING. If a previous action failed, the list displays the errors for that failed action. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.listManagedInstances",
// "parameterOrder": [
@@ -60568,7 +60813,7 @@ type InstanceGroupManagersPatchCall struct {
// process of being patched. You must separately verify the status of
// the individual instances with the listManagedInstances method. This
// method supports PATCH semantics and uses the JSON merge patch format
-// and processing rules.
+// and processing rules. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) Patch(project string, zone string, instanceGroupManager string, instancegroupmanager *InstanceGroupManager) *InstanceGroupManagersPatchCall {
c := &InstanceGroupManagersPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -60624,7 +60869,7 @@ func (c *InstanceGroupManagersPatchCall) Header() http.Header {
func (c *InstanceGroupManagersPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -60690,7 +60935,7 @@ func (c *InstanceGroupManagersPatchCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listManagedInstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listManagedInstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.instanceGroupManagers.patch",
// "parameterOrder": [
@@ -60766,7 +61011,7 @@ type InstanceGroupManagersRecreateInstancesCall struct {
// deleted.
//
// You can specify a maximum of 1000 instances with this method per
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) RecreateInstances(project string, zone string, instanceGroupManager string, instancegroupmanagersrecreateinstancesrequest *InstanceGroupManagersRecreateInstancesRequest) *InstanceGroupManagersRecreateInstancesCall {
c := &InstanceGroupManagersRecreateInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -60822,7 +61067,7 @@ func (c *InstanceGroupManagersRecreateInstancesCall) Header() http.Header {
func (c *InstanceGroupManagersRecreateInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -60888,7 +61133,7 @@ func (c *InstanceGroupManagersRecreateInstancesCall) Do(opts ...googleapi.CallOp
}
return ret, nil
// {
- // "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ // "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.recreateInstances",
// "parameterOrder": [
@@ -60970,6 +61215,7 @@ type InstanceGroupManagersResizeCall struct {
// If the group is part of a backend service that has enabled connection
// draining, it can take up to 60 seconds after the connection draining
// duration has elapsed before the VM instance is removed or deleted.
+// (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) Resize(project string, zone string, instanceGroupManager string, size int64) *InstanceGroupManagersResizeCall {
c := &InstanceGroupManagersResizeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -61025,7 +61271,7 @@ func (c *InstanceGroupManagersResizeCall) Header() http.Header {
func (c *InstanceGroupManagersResizeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
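
Because `Resize` is marked DONE only at the operation level (the doc comment above stresses that the resize is scheduled, not complete, when the call returns), callers usually poll the returned zonal operation. A simplified poll loop, with placeholder project, zone, group name, and target size:

```go
package example

import (
	"log"
	"time"

	compute "google.golang.org/api/compute/v1"
)

// resizeAndWait resizes a managed instance group and then polls the returned
// zonal operation until it reports DONE. This is a simplified sketch without
// backoff or timeout handling; names and the target size are placeholders.
func resizeAndWait(svc *compute.Service) error {
	op, err := svc.InstanceGroupManagers.
		Resize("my-project", "us-central1-a", "demo-mig", 3).Do()
	if err != nil {
		return err
	}
	for op.Status != "DONE" {
		time.Sleep(2 * time.Second)
		op, err = svc.ZoneOperations.Get("my-project", "us-central1-a", op.Name).Do()
		if err != nil {
			return err
		}
	}
	log.Println("resize complete")
	return nil
}
```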
@@ -61086,7 +61332,7 @@ func (c *InstanceGroupManagersResizeCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Resizes the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes instances. The resize operation is marked DONE when the resize actions are scheduled even if the group has not yet added or deleted any instances. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nWhen resizing down, the instance group arbitrarily chooses the order in which VMs are deleted. The group takes into account some VM attributes when making the selection including:\n\n+ The status of the VM instance. + The health of the VM instance. + The instance template version the VM is based on. + For regional managed instance groups, the location of the VM instance.\n\nThis list is subject to change.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.",
+ // "description": "Resizes the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes instances. The resize operation is marked DONE when the resize actions are scheduled even if the group has not yet added or deleted any instances. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nWhen resizing down, the instance group arbitrarily chooses the order in which VMs are deleted. The group takes into account some VM attributes when making the selection including:\n\n+ The status of the VM instance. + The health of the VM instance. + The instance template version the VM is based on. + For regional managed instance groups, the location of the VM instance.\n\nThis list is subject to change.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.resize",
// "parameterOrder": [
@@ -61155,7 +61401,8 @@ type InstanceGroupManagersSetInstanceTemplateCall struct {
// SetInstanceTemplate: Specifies the instance template to use when
// creating new instances in this group. The templates for existing
-// instances in the group do not change unless you recreate them.
+// instances in the group do not change unless you recreate them. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupManagersService) SetInstanceTemplate(project string, zone string, instanceGroupManager string, instancegroupmanagerssetinstancetemplaterequest *InstanceGroupManagersSetInstanceTemplateRequest) *InstanceGroupManagersSetInstanceTemplateCall {
c := &InstanceGroupManagersSetInstanceTemplateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -61211,7 +61458,7 @@ func (c *InstanceGroupManagersSetInstanceTemplateCall) Header() http.Header {
func (c *InstanceGroupManagersSetInstanceTemplateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -61277,7 +61524,7 @@ func (c *InstanceGroupManagersSetInstanceTemplateCall) Do(opts ...googleapi.Call
}
return ret, nil
// {
- // "description": "Specifies the instance template to use when creating new instances in this group. The templates for existing instances in the group do not change unless you recreate them.",
+ // "description": "Specifies the instance template to use when creating new instances in this group. The templates for existing instances in the group do not change unless you recreate them. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.setInstanceTemplate",
// "parameterOrder": [
@@ -61345,7 +61592,8 @@ type InstanceGroupManagersSetTargetPoolsCall struct {
// group. This operation is marked DONE when you make the request even
// if the instances have not yet been added to their target pools. The
// change might take some time to apply to all of the instances in the
-// group depending on the size of the group.
+// group depending on the size of the group. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstanceGroupManagersService) SetTargetPools(project string, zone string, instanceGroupManager string, instancegroupmanagerssettargetpoolsrequest *InstanceGroupManagersSetTargetPoolsRequest) *InstanceGroupManagersSetTargetPoolsCall {
c := &InstanceGroupManagersSetTargetPoolsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -61401,7 +61649,7 @@ func (c *InstanceGroupManagersSetTargetPoolsCall) Header() http.Header {
func (c *InstanceGroupManagersSetTargetPoolsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -61467,7 +61715,7 @@ func (c *InstanceGroupManagersSetTargetPoolsCall) Do(opts ...googleapi.CallOptio
}
return ret, nil
// {
- // "description": "Modifies the target pools to which all instances in this managed instance group are assigned. The target pools automatically apply to all of the instances in the managed instance group. This operation is marked DONE when you make the request even if the instances have not yet been added to their target pools. The change might take some time to apply to all of the instances in the group depending on the size of the group.",
+ // "description": "Modifies the target pools to which all instances in this managed instance group are assigned. The target pools automatically apply to all of the instances in the managed instance group. This operation is marked DONE when you make the request even if the instances have not yet been added to their target pools. The change might take some time to apply to all of the instances in the group depending on the size of the group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroupManagers.setTargetPools",
// "parameterOrder": [
@@ -61531,7 +61779,8 @@ type InstanceGroupsAddInstancesCall struct {
// AddInstances: Adds a list of instances to the specified instance
// group. All of the instances in the instance group must be in the same
-// network/subnetwork. Read Adding instances for more information.
+// network/subnetwork. Read Adding instances for more information. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupsService) AddInstances(project string, zone string, instanceGroup string, instancegroupsaddinstancesrequest *InstanceGroupsAddInstancesRequest) *InstanceGroupsAddInstancesCall {
c := &InstanceGroupsAddInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -61587,7 +61836,7 @@ func (c *InstanceGroupsAddInstancesCall) Header() http.Header {
func (c *InstanceGroupsAddInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -61653,7 +61902,7 @@ func (c *InstanceGroupsAddInstancesCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Adds a list of instances to the specified instance group. All of the instances in the instance group must be in the same network/subnetwork. Read Adding instances for more information.",
+ // "description": "Adds a list of instances to the specified instance group. All of the instances in the instance group must be in the same network/subnetwork. Read Adding instances for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroups.addInstances",
// "parameterOrder": [
@@ -61714,7 +61963,7 @@ type InstanceGroupsAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of instance groups and sorts them
-// by zone.
+// by zone. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupsService) AggregatedList(project string) *InstanceGroupsAggregatedListCall {
c := &InstanceGroupsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -61821,7 +62070,7 @@ func (c *InstanceGroupsAggregatedListCall) Header() http.Header {
func (c *InstanceGroupsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -61883,7 +62132,7 @@ func (c *InstanceGroupsAggregatedListCall) Do(opts ...googleapi.CallOption) (*In
}
return ret, nil
// {
- // "description": "Retrieves the list of instance groups and sorts them by zone.",
+ // "description": "Retrieves the list of instance groups and sorts them by zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceGroups.aggregatedList",
// "parameterOrder": [
@@ -61970,7 +62219,7 @@ type InstanceGroupsDeleteCall struct {
// Delete: Deletes the specified instance group. The instances in the
// group are not deleted. Note that instance group must not belong to a
// backend service. Read Deleting an instance group for more
-// information.
+// information. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupsService) Delete(project string, zone string, instanceGroup string) *InstanceGroupsDeleteCall {
c := &InstanceGroupsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -62025,7 +62274,7 @@ func (c *InstanceGroupsDeleteCall) Header() http.Header {
func (c *InstanceGroupsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -62086,7 +62335,7 @@ func (c *InstanceGroupsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Deletes the specified instance group. The instances in the group are not deleted. Note that instance group must not belong to a backend service. Read Deleting an instance group for more information.",
+ // "description": "Deletes the specified instance group. The instances in the group are not deleted. Note that instance group must not belong to a backend service. Read Deleting an instance group for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.instanceGroups.delete",
// "parameterOrder": [
@@ -62146,7 +62395,8 @@ type InstanceGroupsGetCall struct {
}
// Get: Returns the specified instance group. Gets a list of available
-// instance groups by making a list() request.
+// instance groups by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstanceGroupsService) Get(project string, zone string, instanceGroup string) *InstanceGroupsGetCall {
c := &InstanceGroupsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -62192,7 +62442,7 @@ func (c *InstanceGroupsGetCall) Header() http.Header {
func (c *InstanceGroupsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -62256,7 +62506,7 @@ func (c *InstanceGroupsGetCall) Do(opts ...googleapi.CallOption) (*InstanceGroup
}
return ret, nil
// {
- // "description": "Returns the specified instance group. Gets a list of available instance groups by making a list() request.",
+ // "description": "Returns the specified instance group. Gets a list of available instance groups by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceGroups.get",
// "parameterOrder": [
@@ -62311,7 +62561,8 @@ type InstanceGroupsInsertCall struct {
}
// Insert: Creates an instance group in the specified project using the
-// parameters that are included in the request.
+// parameters that are included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstanceGroupsService) Insert(project string, zone string, instancegroup *InstanceGroup) *InstanceGroupsInsertCall {
c := &InstanceGroupsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -62366,7 +62617,7 @@ func (c *InstanceGroupsInsertCall) Header() http.Header {
func (c *InstanceGroupsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -62431,7 +62682,7 @@ func (c *InstanceGroupsInsertCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Creates an instance group in the specified project using the parameters that are included in the request.",
+ // "description": "Creates an instance group in the specified project using the parameters that are included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroups.insert",
// "parameterOrder": [
@@ -62486,7 +62737,8 @@ type InstanceGroupsListCall struct {
}
// List: Retrieves the list of instance groups that are located in the
-// specified project and zone.
+// specified project and zone. (== suppress_warning http-rest-shadowed
+// ==)
func (r *InstanceGroupsService) List(project string, zone string) *InstanceGroupsListCall {
c := &InstanceGroupsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -62594,7 +62846,7 @@ func (c *InstanceGroupsListCall) Header() http.Header {
func (c *InstanceGroupsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -62657,7 +62909,7 @@ func (c *InstanceGroupsListCall) Do(opts ...googleapi.CallOption) (*InstanceGrou
}
return ret, nil
// {
- // "description": "Retrieves the list of instance groups that are located in the specified project and zone.",
+ // "description": "Retrieves the list of instance groups that are located in the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceGroups.list",
// "parameterOrder": [
@@ -62750,6 +63002,7 @@ type InstanceGroupsListInstancesCall struct {
}
// ListInstances: Lists the instances in the specified instance group.
+// (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupsService) ListInstances(project string, zone string, instanceGroup string, instancegroupslistinstancesrequest *InstanceGroupsListInstancesRequest) *InstanceGroupsListInstancesCall {
c := &InstanceGroupsListInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -62849,7 +63102,7 @@ func (c *InstanceGroupsListInstancesCall) Header() http.Header {
func (c *InstanceGroupsListInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -62915,7 +63168,7 @@ func (c *InstanceGroupsListInstancesCall) Do(opts ...googleapi.CallOption) (*Ins
}
return ret, nil
// {
- // "description": "Lists the instances in the specified instance group.",
+ // "description": "Lists the instances in the specified instance group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroups.listInstances",
// "parameterOrder": [
@@ -63022,7 +63275,8 @@ type InstanceGroupsRemoveInstancesCall struct {
//
// If the group is part of a backend service that has enabled connection
// draining, it can take up to 60 seconds after the connection draining
-// duration before the VM instance is removed or deleted.
+// duration before the VM instance is removed or deleted. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupsService) RemoveInstances(project string, zone string, instanceGroup string, instancegroupsremoveinstancesrequest *InstanceGroupsRemoveInstancesRequest) *InstanceGroupsRemoveInstancesCall {
c := &InstanceGroupsRemoveInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -63078,7 +63332,7 @@ func (c *InstanceGroupsRemoveInstancesCall) Header() http.Header {
func (c *InstanceGroupsRemoveInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -63144,7 +63398,7 @@ func (c *InstanceGroupsRemoveInstancesCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Removes one or more instances from the specified instance group, but does not delete those instances.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration before the VM instance is removed or deleted.",
+ // "description": "Removes one or more instances from the specified instance group, but does not delete those instances.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration before the VM instance is removed or deleted. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroups.removeInstances",
// "parameterOrder": [
@@ -63207,6 +63461,7 @@ type InstanceGroupsSetNamedPortsCall struct {
}
// SetNamedPorts: Sets the named ports for the specified instance group.
+// (== suppress_warning http-rest-shadowed ==)
func (r *InstanceGroupsService) SetNamedPorts(project string, zone string, instanceGroup string, instancegroupssetnamedportsrequest *InstanceGroupsSetNamedPortsRequest) *InstanceGroupsSetNamedPortsCall {
c := &InstanceGroupsSetNamedPortsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -63262,7 +63517,7 @@ func (c *InstanceGroupsSetNamedPortsCall) Header() http.Header {
func (c *InstanceGroupsSetNamedPortsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -63328,7 +63583,7 @@ func (c *InstanceGroupsSetNamedPortsCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Sets the named ports for the specified instance group.",
+ // "description": "Sets the named ports for the specified instance group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceGroups.setNamedPorts",
// "parameterOrder": [
@@ -63391,6 +63646,7 @@ type InstanceTemplatesDeleteCall struct {
// Delete: Deletes the specified instance template. Deleting an instance
// template is permanent and cannot be undone. It is not possible to
// delete templates that are already in use by a managed instance group.
+// (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instanceTemplates/delete
func (r *InstanceTemplatesService) Delete(project string, instanceTemplate string) *InstanceTemplatesDeleteCall {
c := &InstanceTemplatesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -63445,7 +63701,7 @@ func (c *InstanceTemplatesDeleteCall) Header() http.Header {
func (c *InstanceTemplatesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -63505,7 +63761,7 @@ func (c *InstanceTemplatesDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified instance template. Deleting an instance template is permanent and cannot be undone. It is not possible to delete templates that are already in use by a managed instance group.",
+ // "description": "Deletes the specified instance template. Deleting an instance template is permanent and cannot be undone. It is not possible to delete templates that are already in use by a managed instance group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.instanceTemplates.delete",
// "parameterOrder": [
@@ -63558,7 +63814,8 @@ type InstanceTemplatesGetCall struct {
}
// Get: Returns the specified instance template. Gets a list of
-// available instance templates by making a list() request.
+// available instance templates by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instanceTemplates/get
func (r *InstanceTemplatesService) Get(project string, instanceTemplate string) *InstanceTemplatesGetCall {
c := &InstanceTemplatesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -63604,7 +63861,7 @@ func (c *InstanceTemplatesGetCall) Header() http.Header {
func (c *InstanceTemplatesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -63667,7 +63924,7 @@ func (c *InstanceTemplatesGetCall) Do(opts ...googleapi.CallOption) (*InstanceTe
}
return ret, nil
// {
- // "description": "Returns the specified instance template. Gets a list of available instance templates by making a list() request.",
+ // "description": "Returns the specified instance template. Gets a list of available instance templates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceTemplates.get",
// "parameterOrder": [
@@ -63716,7 +63973,8 @@ type InstanceTemplatesGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstanceTemplatesService) GetIamPolicy(project string, resource string) *InstanceTemplatesGetIamPolicyCall {
c := &InstanceTemplatesGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -63761,7 +64019,7 @@ func (c *InstanceTemplatesGetIamPolicyCall) Header() http.Header {
func (c *InstanceTemplatesGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -63824,7 +64082,7 @@ func (c *InstanceTemplatesGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*P
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceTemplates.getIamPolicy",
// "parameterOrder": [
@@ -63875,7 +64133,8 @@ type InstanceTemplatesInsertCall struct {
// the data that is included in the request. If you are creating a new
// template to update an existing instance group, your new instance
// template must use the same network or, if applicable, the same
-// subnetwork as the original template.
+// subnetwork as the original template. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instanceTemplates/insert
func (r *InstanceTemplatesService) Insert(project string, instancetemplate *InstanceTemplate) *InstanceTemplatesInsertCall {
c := &InstanceTemplatesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -63930,7 +64189,7 @@ func (c *InstanceTemplatesInsertCall) Header() http.Header {
func (c *InstanceTemplatesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -63994,7 +64253,7 @@ func (c *InstanceTemplatesInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates an instance template in the specified project using the data that is included in the request. If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template.",
+ // "description": "Creates an instance template in the specified project using the data that is included in the request. If you are creating a new template to update an existing instance group, your new instance template must use the same network or, if applicable, the same subnetwork as the original template. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceTemplates.insert",
// "parameterOrder": [
@@ -64041,7 +64300,8 @@ type InstanceTemplatesListCall struct {
}
// List: Retrieves a list of instance templates that are contained
-// within the specified project.
+// within the specified project. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instanceTemplates/list
func (r *InstanceTemplatesService) List(project string) *InstanceTemplatesListCall {
c := &InstanceTemplatesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -64149,7 +64409,7 @@ func (c *InstanceTemplatesListCall) Header() http.Header {
func (c *InstanceTemplatesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -64211,7 +64471,7 @@ func (c *InstanceTemplatesListCall) Do(opts ...googleapi.CallOption) (*InstanceT
}
return ret, nil
// {
- // "description": "Retrieves a list of instance templates that are contained within the specified project.",
+ // "description": "Retrieves a list of instance templates that are contained within the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instanceTemplates.list",
// "parameterOrder": [
@@ -64296,7 +64556,8 @@ type InstanceTemplatesSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstanceTemplatesService) SetIamPolicy(project string, resource string, globalsetpolicyrequest *GlobalSetPolicyRequest) *InstanceTemplatesSetIamPolicyCall {
c := &InstanceTemplatesSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -64332,7 +64593,7 @@ func (c *InstanceTemplatesSetIamPolicyCall) Header() http.Header {
func (c *InstanceTemplatesSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -64397,7 +64658,7 @@ func (c *InstanceTemplatesSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*P
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceTemplates.setIamPolicy",
// "parameterOrder": [
@@ -64448,7 +64709,7 @@ type InstanceTemplatesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *InstanceTemplatesService) TestIamPermissions(project string, resource string, testpermissionsrequest *TestPermissionsRequest) *InstanceTemplatesTestIamPermissionsCall {
c := &InstanceTemplatesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -64484,7 +64745,7 @@ func (c *InstanceTemplatesTestIamPermissionsCall) Header() http.Header {
func (c *InstanceTemplatesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -64549,7 +64810,7 @@ func (c *InstanceTemplatesTestIamPermissionsCall) Do(opts ...googleapi.CallOptio
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instanceTemplates.testIamPermissions",
// "parameterOrder": [
@@ -64602,7 +64863,7 @@ type InstancesAddAccessConfigCall struct {
}
// AddAccessConfig: Adds an access config to an instance's network
-// interface.
+// interface. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/addAccessConfig
func (r *InstancesService) AddAccessConfig(project string, zone string, instance string, networkInterface string, accessconfig *AccessConfig) *InstancesAddAccessConfigCall {
c := &InstancesAddAccessConfigCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -64660,7 +64921,7 @@ func (c *InstancesAddAccessConfigCall) Header() http.Header {
func (c *InstancesAddAccessConfigCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -64726,7 +64987,7 @@ func (c *InstancesAddAccessConfigCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Adds an access config to an instance's network interface.",
+ // "description": "Adds an access config to an instance's network interface. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.addAccessConfig",
// "parameterOrder": [
@@ -64796,7 +65057,8 @@ type InstancesAggregatedListCall struct {
}
// AggregatedList: Retrieves aggregated list of all of the instances in
-// your project across all regions and zones.
+// your project across all regions and zones. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/aggregatedList
func (r *InstancesService) AggregatedList(project string) *InstancesAggregatedListCall {
c := &InstancesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -64904,7 +65166,7 @@ func (c *InstancesAggregatedListCall) Header() http.Header {
func (c *InstancesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -64966,7 +65228,7 @@ func (c *InstancesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Instanc
}
return ret, nil
// {
- // "description": "Retrieves aggregated list of all of the instances in your project across all regions and zones.",
+ // "description": "Retrieves aggregated list of all of the instances in your project across all regions and zones. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.aggregatedList",
// "parameterOrder": [
@@ -65054,7 +65316,8 @@ type InstancesAttachDiskCall struct {
// AttachDisk: Attaches an existing Disk resource to an instance. You
// must first create the disk before you can attach it. It is not
// possible to create and attach a disk at the same time. For more
-// information, read Adding a persistent disk to your instance.
+// information, read Adding a persistent disk to your instance. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/attachDisk
func (r *InstancesService) AttachDisk(project string, zone string, instance string, attacheddisk *AttachedDisk) *InstancesAttachDiskCall {
c := &InstancesAttachDiskCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -65119,7 +65382,7 @@ func (c *InstancesAttachDiskCall) Header() http.Header {
func (c *InstancesAttachDiskCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -65185,7 +65448,7 @@ func (c *InstancesAttachDiskCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Attaches an existing Disk resource to an instance. You must first create the disk before you can attach it. It is not possible to create and attach a disk at the same time. For more information, read Adding a persistent disk to your instance.",
+ // "description": "Attaches an existing Disk resource to an instance. You must first create the disk before you can attach it. It is not possible to create and attach a disk at the same time. For more information, read Adding a persistent disk to your instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.attachDisk",
// "parameterOrder": [
@@ -65254,7 +65517,8 @@ type InstancesDeleteCall struct {
}
// Delete: Deletes the specified Instance resource. For more
-// information, see Stopping or Deleting an Instance.
+// information, see Stopping or Deleting an Instance. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/delete
func (r *InstancesService) Delete(project string, zone string, instance string) *InstancesDeleteCall {
c := &InstancesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -65310,7 +65574,7 @@ func (c *InstancesDeleteCall) Header() http.Header {
func (c *InstancesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -65371,7 +65635,7 @@ func (c *InstancesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Deletes the specified Instance resource. For more information, see Stopping or Deleting an Instance.",
+ // "description": "Deletes the specified Instance resource. For more information, see Stopping or Deleting an Instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.instances.delete",
// "parameterOrder": [
@@ -65432,7 +65696,7 @@ type InstancesDeleteAccessConfigCall struct {
}
// DeleteAccessConfig: Deletes an access config from an instance's
-// network interface.
+// network interface. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/deleteAccessConfig
func (r *InstancesService) DeleteAccessConfig(project string, zone string, instance string, accessConfig string, networkInterface string) *InstancesDeleteAccessConfigCall {
c := &InstancesDeleteAccessConfigCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -65490,7 +65754,7 @@ func (c *InstancesDeleteAccessConfigCall) Header() http.Header {
func (c *InstancesDeleteAccessConfigCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -65551,7 +65815,7 @@ func (c *InstancesDeleteAccessConfigCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes an access config from an instance's network interface.",
+ // "description": "Deletes an access config from an instance's network interface. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.deleteAccessConfig",
// "parameterOrder": [
@@ -65625,7 +65889,8 @@ type InstancesDetachDiskCall struct {
header_ http.Header
}
-// DetachDisk: Detaches a disk from an instance.
+// DetachDisk: Detaches a disk from an instance. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/detachDisk
func (r *InstancesService) DetachDisk(project string, zone string, instance string, deviceName string) *InstancesDetachDiskCall {
c := &InstancesDetachDiskCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -65682,7 +65947,7 @@ func (c *InstancesDetachDiskCall) Header() http.Header {
func (c *InstancesDetachDiskCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -65743,7 +66008,7 @@ func (c *InstancesDetachDiskCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Detaches a disk from an instance.",
+ // "description": "Detaches a disk from an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.detachDisk",
// "parameterOrder": [
@@ -65812,7 +66077,8 @@ type InstancesGetCall struct {
}
// Get: Returns the specified Instance resource. Gets a list of
-// available instances by making a list() request.
+// available instances by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/get
func (r *InstancesService) Get(project string, zone string, instance string) *InstancesGetCall {
c := &InstancesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -65859,7 +66125,7 @@ func (c *InstancesGetCall) Header() http.Header {
func (c *InstancesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -65923,7 +66189,7 @@ func (c *InstancesGetCall) Do(opts ...googleapi.CallOption) (*Instance, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Instance resource. Gets a list of available instances by making a list() request.",
+ // "description": "Returns the specified Instance resource. Gets a list of available instances by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.get",
// "parameterOrder": [
@@ -65980,7 +66246,8 @@ type InstancesGetGuestAttributesCall struct {
header_ http.Header
}
-// GetGuestAttributes: Returns the specified guest attributes entry.
+// GetGuestAttributes: Returns the specified guest attributes entry. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstancesService) GetGuestAttributes(project string, zone string, instance string) *InstancesGetGuestAttributesCall {
c := &InstancesGetGuestAttributesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -66040,7 +66307,7 @@ func (c *InstancesGetGuestAttributesCall) Header() http.Header {
func (c *InstancesGetGuestAttributesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -66104,7 +66371,7 @@ func (c *InstancesGetGuestAttributesCall) Do(opts ...googleapi.CallOption) (*Gue
}
return ret, nil
// {
- // "description": "Returns the specified guest attributes entry.",
+ // "description": "Returns the specified guest attributes entry. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.getGuestAttributes",
// "parameterOrder": [
@@ -66172,7 +66439,8 @@ type InstancesGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstancesService) GetIamPolicy(project string, zone string, resource string) *InstancesGetIamPolicyCall {
c := &InstancesGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -66218,7 +66486,7 @@ func (c *InstancesGetIamPolicyCall) Header() http.Header {
func (c *InstancesGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -66282,7 +66550,7 @@ func (c *InstancesGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, e
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.getIamPolicy",
// "parameterOrder": [
@@ -66340,7 +66608,7 @@ type InstancesGetSerialPortOutputCall struct {
}
// GetSerialPortOutput: Returns the last 1 MB of serial port output from
-// the specified instance.
+// the specified instance. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/getSerialPortOutput
func (r *InstancesService) GetSerialPortOutput(project string, zone string, instance string) *InstancesGetSerialPortOutputCall {
c := &InstancesGetSerialPortOutputCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -66405,7 +66673,7 @@ func (c *InstancesGetSerialPortOutputCall) Header() http.Header {
func (c *InstancesGetSerialPortOutputCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -66469,7 +66737,7 @@ func (c *InstancesGetSerialPortOutputCall) Do(opts ...googleapi.CallOption) (*Se
}
return ret, nil
// {
- // "description": "Returns the last 1 MB of serial port output from the specified instance.",
+ // "description": "Returns the last 1 MB of serial port output from the specified instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.getSerialPortOutput",
// "parameterOrder": [
@@ -66542,7 +66810,7 @@ type InstancesGetShieldedInstanceIdentityCall struct {
}
// GetShieldedInstanceIdentity: Returns the Shielded Instance Identity
-// of an instance
+// of an instance (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) GetShieldedInstanceIdentity(project string, zone string, instance string) *InstancesGetShieldedInstanceIdentityCall {
c := &InstancesGetShieldedInstanceIdentityCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -66588,7 +66856,7 @@ func (c *InstancesGetShieldedInstanceIdentityCall) Header() http.Header {
func (c *InstancesGetShieldedInstanceIdentityCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -66652,7 +66920,7 @@ func (c *InstancesGetShieldedInstanceIdentityCall) Do(opts ...googleapi.CallOpti
}
return ret, nil
// {
- // "description": "Returns the Shielded Instance Identity of an instance",
+ // "description": "Returns the Shielded Instance Identity of an instance (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.getShieldedInstanceIdentity",
// "parameterOrder": [
@@ -66709,7 +66977,8 @@ type InstancesInsertCall struct {
}
// Insert: Creates an instance resource in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/insert
func (r *InstancesService) Insert(project string, zone string, instance *Instance) *InstancesInsertCall {
c := &InstancesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -66780,7 +67049,7 @@ func (c *InstancesInsertCall) Header() http.Header {
func (c *InstancesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -66845,7 +67114,7 @@ func (c *InstancesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Creates an instance resource in the specified project using the data included in the request.",
+ // "description": "Creates an instance resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.insert",
// "parameterOrder": [
@@ -66906,7 +67175,7 @@ type InstancesListCall struct {
}
// List: Retrieves the list of instances contained within the specified
-// zone.
+// zone. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/list
func (r *InstancesService) List(project string, zone string) *InstancesListCall {
c := &InstancesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -67015,7 +67284,7 @@ func (c *InstancesListCall) Header() http.Header {
func (c *InstancesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -67078,7 +67347,7 @@ func (c *InstancesListCall) Do(opts ...googleapi.CallOption) (*InstanceList, err
}
return ret, nil
// {
- // "description": "Retrieves the list of instances contained within the specified zone.",
+ // "description": "Retrieves the list of instances contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.list",
// "parameterOrder": [
@@ -67173,7 +67442,8 @@ type InstancesListReferrersCall struct {
// ListReferrers: Retrieves the list of referrers to instances contained
// within the specified zone. For more information, read Viewing
-// Referrers to VM Instances.
+// Referrers to VM Instances. (== suppress_warning http-rest-shadowed
+// ==)
func (r *InstancesService) ListReferrers(project string, zone string, instance string) *InstancesListReferrersCall {
c := &InstancesListReferrersCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -67282,7 +67552,7 @@ func (c *InstancesListReferrersCall) Header() http.Header {
func (c *InstancesListReferrersCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -67346,7 +67616,7 @@ func (c *InstancesListReferrersCall) Do(opts ...googleapi.CallOption) (*Instance
}
return ret, nil
// {
- // "description": "Retrieves the list of referrers to instances contained within the specified zone. For more information, read Viewing Referrers to VM Instances.",
+ // "description": "Retrieves the list of referrers to instances contained within the specified zone. For more information, read Viewing Referrers to VM Instances. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.instances.listReferrers",
// "parameterOrder": [
@@ -67448,7 +67718,7 @@ type InstancesResetCall struct {
// Reset: Performs a reset on the instance. This is a hard reset the VM
// does not do a graceful shutdown. For more information, see Resetting
-// an instance.
+// an instance. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/reset
func (r *InstancesService) Reset(project string, zone string, instance string) *InstancesResetCall {
c := &InstancesResetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -67504,7 +67774,7 @@ func (c *InstancesResetCall) Header() http.Header {
func (c *InstancesResetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -67565,7 +67835,7 @@ func (c *InstancesResetCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Performs a reset on the instance. This is a hard reset the VM does not do a graceful shutdown. For more information, see Resetting an instance.",
+ // "description": "Performs a reset on the instance. This is a hard reset the VM does not do a graceful shutdown. For more information, see Resetting an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.reset",
// "parameterOrder": [
@@ -67625,7 +67895,8 @@ type InstancesSetDeletionProtectionCall struct {
header_ http.Header
}
-// SetDeletionProtection: Sets deletion protection on the instance.
+// SetDeletionProtection: Sets deletion protection on the instance. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstancesService) SetDeletionProtection(project string, zone string, resource string) *InstancesSetDeletionProtectionCall {
c := &InstancesSetDeletionProtectionCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -67687,7 +67958,7 @@ func (c *InstancesSetDeletionProtectionCall) Header() http.Header {
func (c *InstancesSetDeletionProtectionCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -67748,7 +68019,7 @@ func (c *InstancesSetDeletionProtectionCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Sets deletion protection on the instance.",
+ // "description": "Sets deletion protection on the instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setDeletionProtection",
// "parameterOrder": [
@@ -67815,7 +68086,7 @@ type InstancesSetDiskAutoDeleteCall struct {
}
// SetDiskAutoDelete: Sets the auto-delete flag for a disk attached to
-// an instance.
+// an instance. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/setDiskAutoDelete
func (r *InstancesService) SetDiskAutoDelete(project string, zone string, instance string, autoDelete bool, deviceName string) *InstancesSetDiskAutoDeleteCall {
c := &InstancesSetDiskAutoDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -67873,7 +68144,7 @@ func (c *InstancesSetDiskAutoDeleteCall) Header() http.Header {
func (c *InstancesSetDiskAutoDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -67934,7 +68205,7 @@ func (c *InstancesSetDiskAutoDeleteCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Sets the auto-delete flag for a disk attached to an instance.",
+ // "description": "Sets the auto-delete flag for a disk attached to an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setDiskAutoDelete",
// "parameterOrder": [
@@ -68011,7 +68282,8 @@ type InstancesSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstancesService) SetIamPolicy(project string, zone string, resource string, zonesetpolicyrequest *ZoneSetPolicyRequest) *InstancesSetIamPolicyCall {
c := &InstancesSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -68048,7 +68320,7 @@ func (c *InstancesSetIamPolicyCall) Header() http.Header {
func (c *InstancesSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -68114,7 +68386,7 @@ func (c *InstancesSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, e
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setIamPolicy",
// "parameterOrder": [
@@ -68174,7 +68446,8 @@ type InstancesSetLabelsCall struct {
}
// SetLabels: Sets labels on an instance. To learn more about labels,
-// read the Labeling Resources documentation.
+// read the Labeling Resources documentation. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstancesService) SetLabels(project string, zone string, instance string, instancessetlabelsrequest *InstancesSetLabelsRequest) *InstancesSetLabelsCall {
c := &InstancesSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -68230,7 +68503,7 @@ func (c *InstancesSetLabelsCall) Header() http.Header {
func (c *InstancesSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -68296,7 +68569,7 @@ func (c *InstancesSetLabelsCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Sets labels on an instance. To learn more about labels, read the Labeling Resources documentation.",
+ // "description": "Sets labels on an instance. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setLabels",
// "parameterOrder": [
@@ -68361,7 +68634,8 @@ type InstancesSetMachineResourcesCall struct {
}
// SetMachineResources: Changes the number and/or type of accelerator
-// for a stopped instance to the values specified in the request.
+// for a stopped instance to the values specified in the request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InstancesService) SetMachineResources(project string, zone string, instance string, instancessetmachineresourcesrequest *InstancesSetMachineResourcesRequest) *InstancesSetMachineResourcesCall {
c := &InstancesSetMachineResourcesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -68417,7 +68691,7 @@ func (c *InstancesSetMachineResourcesCall) Header() http.Header {
func (c *InstancesSetMachineResourcesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -68483,7 +68757,7 @@ func (c *InstancesSetMachineResourcesCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Changes the number and/or type of accelerator for a stopped instance to the values specified in the request.",
+ // "description": "Changes the number and/or type of accelerator for a stopped instance to the values specified in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setMachineResources",
// "parameterOrder": [
@@ -68548,7 +68822,8 @@ type InstancesSetMachineTypeCall struct {
}
// SetMachineType: Changes the machine type for a stopped instance to
-// the machine type specified in the request.
+// the machine type specified in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstancesService) SetMachineType(project string, zone string, instance string, instancessetmachinetyperequest *InstancesSetMachineTypeRequest) *InstancesSetMachineTypeCall {
c := &InstancesSetMachineTypeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -68604,7 +68879,7 @@ func (c *InstancesSetMachineTypeCall) Header() http.Header {
func (c *InstancesSetMachineTypeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -68670,7 +68945,7 @@ func (c *InstancesSetMachineTypeCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Changes the machine type for a stopped instance to the machine type specified in the request.",
+ // "description": "Changes the machine type for a stopped instance to the machine type specified in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setMachineType",
// "parameterOrder": [
@@ -68735,7 +69010,7 @@ type InstancesSetMetadataCall struct {
}
// SetMetadata: Sets metadata for the specified instance to the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/setMetadata
func (r *InstancesService) SetMetadata(project string, zone string, instance string, metadata *Metadata) *InstancesSetMetadataCall {
c := &InstancesSetMetadataCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -68792,7 +69067,7 @@ func (c *InstancesSetMetadataCall) Header() http.Header {
func (c *InstancesSetMetadataCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -68858,7 +69133,7 @@ func (c *InstancesSetMetadataCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Sets metadata for the specified instance to the data included in the request.",
+ // "description": "Sets metadata for the specified instance to the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setMetadata",
// "parameterOrder": [
@@ -68925,7 +69200,7 @@ type InstancesSetMinCpuPlatformCall struct {
// SetMinCpuPlatform: Changes the minimum CPU platform that this
// instance should use. This method can only be called on a stopped
// instance. For more information, read Specifying a Minimum CPU
-// Platform.
+// Platform. (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) SetMinCpuPlatform(project string, zone string, instance string, instancessetmincpuplatformrequest *InstancesSetMinCpuPlatformRequest) *InstancesSetMinCpuPlatformCall {
c := &InstancesSetMinCpuPlatformCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -68981,7 +69256,7 @@ func (c *InstancesSetMinCpuPlatformCall) Header() http.Header {
func (c *InstancesSetMinCpuPlatformCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -69047,7 +69322,7 @@ func (c *InstancesSetMinCpuPlatformCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Changes the minimum CPU platform that this instance should use. This method can only be called on a stopped instance. For more information, read Specifying a Minimum CPU Platform.",
+ // "description": "Changes the minimum CPU platform that this instance should use. This method can only be called on a stopped instance. For more information, read Specifying a Minimum CPU Platform. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setMinCpuPlatform",
// "parameterOrder": [
@@ -69111,7 +69386,8 @@ type InstancesSetSchedulingCall struct {
header_ http.Header
}
-// SetScheduling: Sets an instance's scheduling options.
+// SetScheduling: Sets an instance's scheduling options. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/setScheduling
func (r *InstancesService) SetScheduling(project string, zone string, instance string, scheduling *Scheduling) *InstancesSetSchedulingCall {
c := &InstancesSetSchedulingCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -69168,7 +69444,7 @@ func (c *InstancesSetSchedulingCall) Header() http.Header {
func (c *InstancesSetSchedulingCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -69234,7 +69510,7 @@ func (c *InstancesSetSchedulingCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Sets an instance's scheduling options.",
+ // "description": "Sets an instance's scheduling options. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setScheduling",
// "parameterOrder": [
@@ -69300,7 +69576,7 @@ type InstancesSetServiceAccountCall struct {
// SetServiceAccount: Sets the service account on the instance. For more
// information, read Changing the service account and access scopes for
-// an instance.
+// an instance. (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) SetServiceAccount(project string, zone string, instance string, instancessetserviceaccountrequest *InstancesSetServiceAccountRequest) *InstancesSetServiceAccountCall {
c := &InstancesSetServiceAccountCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -69356,7 +69632,7 @@ func (c *InstancesSetServiceAccountCall) Header() http.Header {
func (c *InstancesSetServiceAccountCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -69422,7 +69698,7 @@ func (c *InstancesSetServiceAccountCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Sets the service account on the instance. For more information, read Changing the service account and access scopes for an instance.",
+ // "description": "Sets the service account on the instance. For more information, read Changing the service account and access scopes for an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setServiceAccount",
// "parameterOrder": [
@@ -69489,7 +69765,8 @@ type InstancesSetShieldedInstanceIntegrityPolicyCall struct {
// SetShieldedInstanceIntegrityPolicy: Sets the Shielded Instance
// integrity policy for an instance. You can only use this method on a
// running instance. This method supports PATCH semantics and uses the
-// JSON merge patch format and processing rules.
+// JSON merge patch format and processing rules. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstancesService) SetShieldedInstanceIntegrityPolicy(project string, zone string, instance string, shieldedinstanceintegritypolicy *ShieldedInstanceIntegrityPolicy) *InstancesSetShieldedInstanceIntegrityPolicyCall {
c := &InstancesSetShieldedInstanceIntegrityPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -69545,7 +69822,7 @@ func (c *InstancesSetShieldedInstanceIntegrityPolicyCall) Header() http.Header {
func (c *InstancesSetShieldedInstanceIntegrityPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -69611,7 +69888,7 @@ func (c *InstancesSetShieldedInstanceIntegrityPolicyCall) Do(opts ...googleapi.C
}
return ret, nil
// {
- // "description": "Sets the Shielded Instance integrity policy for an instance. You can only use this method on a running instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Sets the Shielded Instance integrity policy for an instance. You can only use this method on a running instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.instances.setShieldedInstanceIntegrityPolicy",
// "parameterOrder": [
@@ -69676,7 +69953,7 @@ type InstancesSetTagsCall struct {
}
// SetTags: Sets network tags for the specified instance to the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/setTags
func (r *InstancesService) SetTags(project string, zone string, instance string, tags *Tags) *InstancesSetTagsCall {
c := &InstancesSetTagsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -69733,7 +70010,7 @@ func (c *InstancesSetTagsCall) Header() http.Header {
func (c *InstancesSetTagsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -69799,7 +70076,7 @@ func (c *InstancesSetTagsCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Sets network tags for the specified instance to the data included in the request.",
+ // "description": "Sets network tags for the specified instance to the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.setTags",
// "parameterOrder": [
@@ -69863,7 +70140,7 @@ type InstancesSimulateMaintenanceEventCall struct {
}
// SimulateMaintenanceEvent: Simulates a maintenance event on the
-// instance.
+// instance. (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) SimulateMaintenanceEvent(project string, zone string, instance string) *InstancesSimulateMaintenanceEventCall {
c := &InstancesSimulateMaintenanceEventCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -69899,7 +70176,7 @@ func (c *InstancesSimulateMaintenanceEventCall) Header() http.Header {
func (c *InstancesSimulateMaintenanceEventCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -69960,7 +70237,7 @@ func (c *InstancesSimulateMaintenanceEventCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Simulates a maintenance event on the instance.",
+ // "description": "Simulates a maintenance event on the instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.simulateMaintenanceEvent",
// "parameterOrder": [
@@ -70016,7 +70293,8 @@ type InstancesStartCall struct {
}
// Start: Starts an instance that was stopped using the instances().stop
-// method. For more information, see Restart an instance.
+// method. For more information, see Restart an instance. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/start
func (r *InstancesService) Start(project string, zone string, instance string) *InstancesStartCall {
c := &InstancesStartCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -70072,7 +70350,7 @@ func (c *InstancesStartCall) Header() http.Header {
func (c *InstancesStartCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -70133,7 +70411,7 @@ func (c *InstancesStartCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance.",
+ // "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.start",
// "parameterOrder": [
@@ -70196,7 +70474,7 @@ type InstancesStartWithEncryptionKeyCall struct {
// StartWithEncryptionKey: Starts an instance that was stopped using the
// instances().stop method. For more information, see Restart an
-// instance.
+// instance. (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) StartWithEncryptionKey(project string, zone string, instance string, instancesstartwithencryptionkeyrequest *InstancesStartWithEncryptionKeyRequest) *InstancesStartWithEncryptionKeyCall {
c := &InstancesStartWithEncryptionKeyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -70252,7 +70530,7 @@ func (c *InstancesStartWithEncryptionKeyCall) Header() http.Header {
func (c *InstancesStartWithEncryptionKeyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -70318,7 +70596,7 @@ func (c *InstancesStartWithEncryptionKeyCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance.",
+ // "description": "Starts an instance that was stopped using the instances().stop method. For more information, see Restart an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.startWithEncryptionKey",
// "parameterOrder": [
@@ -70386,7 +70664,8 @@ type InstancesStopCall struct {
// incur VM usage charges while they are stopped. However, resources
// that the VM is using, such as persistent disks and static IP
// addresses, will continue to be charged until they are deleted. For
-// more information, see Stopping an instance.
+// more information, see Stopping an instance. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/instances/stop
func (r *InstancesService) Stop(project string, zone string, instance string) *InstancesStopCall {
c := &InstancesStopCall{s: r.s, urlParams_: make(gensupport.URLParams)}
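
Stop (like Start above) returns a long-running zonal Operation rather than completing synchronously. A polling sketch, assuming ZoneOperations.Get as the poll primitive and placeholder resource names:

package main

import (
	"context"
	"fmt"
	"time"

	compute "google.golang.org/api/compute/v1"
)

// waitZoneOp polls a zonal Operation until it reports DONE. A sketch only:
// real code wants backoff, a deadline, and richer error handling.
func waitZoneOp(ctx context.Context, svc *compute.Service, project, zone, name string) error {
	for {
		op, err := svc.ZoneOperations.Get(project, zone, name).Context(ctx).Do()
		if err != nil {
			return err
		}
		if op.Status == "DONE" {
			if op.Error != nil {
				return fmt.Errorf("operation failed: %+v", op.Error.Errors)
			}
			return nil
		}
		time.Sleep(2 * time.Second)
	}
}

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		panic(err)
	}
	op, err := svc.Instances.Stop("my-project", "us-central1-a", "my-instance").Context(ctx).Do()
	if err != nil {
		panic(err)
	}
	if err := waitZoneOp(ctx, svc, "my-project", "us-central1-a", op.Name); err != nil {
		panic(err)
	}
}
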
@@ -70442,7 +70721,7 @@ func (c *InstancesStopCall) Header() http.Header {
func (c *InstancesStopCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -70503,7 +70782,7 @@ func (c *InstancesStopCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Stops a running instance, shutting it down cleanly, and allows you to restart the instance at a later time. Stopped instances do not incur VM usage charges while they are stopped. However, resources that the VM is using, such as persistent disks and static IP addresses, will continue to be charged until they are deleted. For more information, see Stopping an instance.",
+ // "description": "Stops a running instance, shutting it down cleanly, and allows you to restart the instance at a later time. Stopped instances do not incur VM usage charges while they are stopped. However, resources that the VM is using, such as persistent disks and static IP addresses, will continue to be charged until they are deleted. For more information, see Stopping an instance. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.stop",
// "parameterOrder": [
@@ -70565,7 +70844,7 @@ type InstancesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) TestIamPermissions(project string, zone string, resource string, testpermissionsrequest *TestPermissionsRequest) *InstancesTestIamPermissionsCall {
c := &InstancesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
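
TestIamPermissions, which recurs below for licenses and license codes as well, takes candidate permission strings and returns the subset the caller actually holds. A hedged sketch against the instance-level variant documented here:

package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}
	req := &compute.TestPermissionsRequest{
		Permissions: []string{"compute.instances.start", "compute.instances.stop"},
	}
	resp, err := svc.Instances.TestIamPermissions("my-project", "us-central1-a", "my-instance", req).
		Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("granted:", resp.Permissions) // subset of the requested strings
}
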
@@ -70602,7 +70881,7 @@ func (c *InstancesTestIamPermissionsCall) Header() http.Header {
func (c *InstancesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -70668,7 +70947,7 @@ func (c *InstancesTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*Tes
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.testIamPermissions",
// "parameterOrder": [
@@ -70731,7 +71010,8 @@ type InstancesUpdateAccessConfigCall struct {
// UpdateAccessConfig: Updates the specified access config from an
// instance's network interface with the data included in the request.
// This method supports PATCH semantics and uses the JSON merge patch
-// format and processing rules.
+// format and processing rules. (== suppress_warning http-rest-shadowed
+// ==)
func (r *InstancesService) UpdateAccessConfig(project string, zone string, instance string, networkInterface string, accessconfig *AccessConfig) *InstancesUpdateAccessConfigCall {
c := &InstancesUpdateAccessConfigCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -70788,7 +71068,7 @@ func (c *InstancesUpdateAccessConfigCall) Header() http.Header {
func (c *InstancesUpdateAccessConfigCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -70854,7 +71134,7 @@ func (c *InstancesUpdateAccessConfigCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Updates the specified access config from an instance's network interface with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the specified access config from an instance's network interface with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.instances.updateAccessConfig",
// "parameterOrder": [
@@ -70928,7 +71208,7 @@ type InstancesUpdateDisplayDeviceCall struct {
// UpdateDisplayDevice: Updates the Display config for a VM instance.
// You can only use this method on a stopped VM instance. This method
// supports PATCH semantics and uses the JSON merge patch format and
-// processing rules.
+// processing rules. (== suppress_warning http-rest-shadowed ==)
func (r *InstancesService) UpdateDisplayDevice(project string, zone string, instance string, displaydevice *DisplayDevice) *InstancesUpdateDisplayDeviceCall {
c := &InstancesUpdateDisplayDeviceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -70984,7 +71264,7 @@ func (c *InstancesUpdateDisplayDeviceCall) Header() http.Header {
func (c *InstancesUpdateDisplayDeviceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -71050,7 +71330,7 @@ func (c *InstancesUpdateDisplayDeviceCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Updates the Display config for a VM instance. You can only use this method on a stopped VM instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the Display config for a VM instance. You can only use this method on a stopped VM instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.instances.updateDisplayDevice",
// "parameterOrder": [
@@ -71115,7 +71395,8 @@ type InstancesUpdateNetworkInterfaceCall struct {
}
// UpdateNetworkInterface: Updates an instance's network interface. This
-// method follows PATCH semantics.
+// method follows PATCH semantics. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InstancesService) UpdateNetworkInterface(project string, zone string, instance string, networkInterface string, networkinterface *NetworkInterface) *InstancesUpdateNetworkInterfaceCall {
c := &InstancesUpdateNetworkInterfaceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
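
Because UpdateNetworkInterface follows PATCH semantics, the usual pattern is read-modify-write with the interface fingerprint carried along as an optimistic-concurrency guard. A sketch under that assumption, with placeholder names and an illustrative alias range:

package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}
	inst, err := svc.Instances.Get("my-project", "us-central1-a", "my-instance").Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	nic := inst.NetworkInterfaces[0]
	patch := &compute.NetworkInterface{
		AliasIpRanges: []*compute.AliasIpRange{{IpCidrRange: "10.1.2.0/24"}},
		Fingerprint:   nic.Fingerprint, // rejects the patch if the NIC changed underneath us
	}
	op, err := svc.Instances.UpdateNetworkInterface(
		"my-project", "us-central1-a", "my-instance", nic.Name, patch).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("patch operation: %s", op.Name)
}
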
@@ -71172,7 +71453,7 @@ func (c *InstancesUpdateNetworkInterfaceCall) Header() http.Header {
func (c *InstancesUpdateNetworkInterfaceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -71238,7 +71519,7 @@ func (c *InstancesUpdateNetworkInterfaceCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Updates an instance's network interface. This method follows PATCH semantics.",
+ // "description": "Updates an instance's network interface. This method follows PATCH semantics. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.instances.updateNetworkInterface",
// "parameterOrder": [
@@ -71312,7 +71593,8 @@ type InstancesUpdateShieldedInstanceConfigCall struct {
// UpdateShieldedInstanceConfig: Updates the Shielded Instance config
// for an instance. You can only use this method on a stopped instance.
// This method supports PATCH semantics and uses the JSON merge patch
-// format and processing rules.
+// format and processing rules. (== suppress_warning http-rest-shadowed
+// ==)
func (r *InstancesService) UpdateShieldedInstanceConfig(project string, zone string, instance string, shieldedinstanceconfig *ShieldedInstanceConfig) *InstancesUpdateShieldedInstanceConfigCall {
c := &InstancesUpdateShieldedInstanceConfigCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -71368,7 +71650,7 @@ func (c *InstancesUpdateShieldedInstanceConfigCall) Header() http.Header {
func (c *InstancesUpdateShieldedInstanceConfigCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -71434,7 +71716,7 @@ func (c *InstancesUpdateShieldedInstanceConfigCall) Do(opts ...googleapi.CallOpt
}
return ret, nil
// {
- // "description": "Updates the Shielded Instance config for an instance. You can only use this method on a stopped instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the Shielded Instance config for an instance. You can only use this method on a stopped instance. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.instances.updateShieldedInstanceConfig",
// "parameterOrder": [
@@ -71497,7 +71779,7 @@ type InterconnectAttachmentsAggregatedListCall struct {
}
// AggregatedList: Retrieves an aggregated list of interconnect
-// attachments.
+// attachments. (== suppress_warning http-rest-shadowed ==)
func (r *InterconnectAttachmentsService) AggregatedList(project string) *InterconnectAttachmentsAggregatedListCall {
c := &InterconnectAttachmentsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -71604,7 +71886,7 @@ func (c *InterconnectAttachmentsAggregatedListCall) Header() http.Header {
func (c *InterconnectAttachmentsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -71667,7 +71949,7 @@ func (c *InterconnectAttachmentsAggregatedListCall) Do(opts ...googleapi.CallOpt
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of interconnect attachments.",
+ // "description": "Retrieves an aggregated list of interconnect attachments. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnectAttachments.aggregatedList",
// "parameterOrder": [
@@ -71751,7 +72033,8 @@ type InterconnectAttachmentsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified interconnect attachment.
+// Delete: Deletes the specified interconnect attachment. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InterconnectAttachmentsService) Delete(project string, region string, interconnectAttachment string) *InterconnectAttachmentsDeleteCall {
c := &InterconnectAttachmentsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -71806,7 +72089,7 @@ func (c *InterconnectAttachmentsDeleteCall) Header() http.Header {
func (c *InterconnectAttachmentsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -71867,7 +72150,7 @@ func (c *InterconnectAttachmentsDeleteCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Deletes the specified interconnect attachment.",
+ // "description": "Deletes the specified interconnect attachment. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.interconnectAttachments.delete",
// "parameterOrder": [
@@ -71928,7 +72211,8 @@ type InterconnectAttachmentsGetCall struct {
header_ http.Header
}
-// Get: Returns the specified interconnect attachment.
+// Get: Returns the specified interconnect attachment. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InterconnectAttachmentsService) Get(project string, region string, interconnectAttachment string) *InterconnectAttachmentsGetCall {
c := &InterconnectAttachmentsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -71974,7 +72258,7 @@ func (c *InterconnectAttachmentsGetCall) Header() http.Header {
func (c *InterconnectAttachmentsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -72038,7 +72322,7 @@ func (c *InterconnectAttachmentsGetCall) Do(opts ...googleapi.CallOption) (*Inte
}
return ret, nil
// {
- // "description": "Returns the specified interconnect attachment.",
+ // "description": "Returns the specified interconnect attachment. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnectAttachments.get",
// "parameterOrder": [
@@ -72095,7 +72379,8 @@ type InterconnectAttachmentsInsertCall struct {
}
// Insert: Creates an InterconnectAttachment in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InterconnectAttachmentsService) Insert(project string, region string, interconnectattachment *InterconnectAttachment) *InterconnectAttachmentsInsertCall {
c := &InterconnectAttachmentsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
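
A usage sketch for the Insert call documented in this hunk — the InterconnectAttachment fields shown (Type, Router self-link, EdgeAvailabilityDomain) are my reading of the v1 resource, and every name is a placeholder:

package main

import (
	"context"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}
	att := &compute.InterconnectAttachment{
		Name:                   "my-attachment",
		Type:                   "PARTNER", // or "DEDICATED"
		Router:                 "https://www.googleapis.com/compute/v1/projects/my-project/regions/us-central1/routers/my-router",
		EdgeAvailabilityDomain: "AVAILABILITY_DOMAIN_1",
	}
	op, err := svc.InterconnectAttachments.Insert("my-project", "us-central1", att).Context(ctx).Do()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("insert operation: %s", op.Name)
}
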
@@ -72150,7 +72435,7 @@ func (c *InterconnectAttachmentsInsertCall) Header() http.Header {
func (c *InterconnectAttachmentsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -72215,7 +72500,7 @@ func (c *InterconnectAttachmentsInsertCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Creates an InterconnectAttachment in the specified project using the data included in the request.",
+ // "description": "Creates an InterconnectAttachment in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.interconnectAttachments.insert",
// "parameterOrder": [
@@ -72271,7 +72556,7 @@ type InterconnectAttachmentsListCall struct {
}
// List: Retrieves the list of interconnect attachments contained within
-// the specified region.
+// the specified region. (== suppress_warning http-rest-shadowed ==)
func (r *InterconnectAttachmentsService) List(project string, region string) *InterconnectAttachmentsListCall {
c := &InterconnectAttachmentsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -72379,7 +72664,7 @@ func (c *InterconnectAttachmentsListCall) Header() http.Header {
func (c *InterconnectAttachmentsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -72442,7 +72727,7 @@ func (c *InterconnectAttachmentsListCall) Do(opts ...googleapi.CallOption) (*Int
}
return ret, nil
// {
- // "description": "Retrieves the list of interconnect attachments contained within the specified region.",
+ // "description": "Retrieves the list of interconnect attachments contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnectAttachments.list",
// "parameterOrder": [
@@ -72537,7 +72822,8 @@ type InterconnectAttachmentsPatchCall struct {
// Patch: Updates the specified interconnect attachment with the data
// included in the request. This method supports PATCH semantics and
-// uses the JSON merge patch format and processing rules.
+// uses the JSON merge patch format and processing rules. (==
+// suppress_warning http-rest-shadowed ==)
func (r *InterconnectAttachmentsService) Patch(project string, region string, interconnectAttachment string, interconnectattachment *InterconnectAttachment) *InterconnectAttachmentsPatchCall {
c := &InterconnectAttachmentsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -72593,7 +72879,7 @@ func (c *InterconnectAttachmentsPatchCall) Header() http.Header {
func (c *InterconnectAttachmentsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -72659,7 +72945,7 @@ func (c *InterconnectAttachmentsPatchCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Updates the specified interconnect attachment with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the specified interconnect attachment with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.interconnectAttachments.patch",
// "parameterOrder": [
@@ -72724,7 +73010,7 @@ type InterconnectLocationsGetCall struct {
// Get: Returns the details for the specified interconnect location.
// Gets a list of available interconnect locations by making a list()
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *InterconnectLocationsService) Get(project string, interconnectLocation string) *InterconnectLocationsGetCall {
c := &InterconnectLocationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -72769,7 +73055,7 @@ func (c *InterconnectLocationsGetCall) Header() http.Header {
func (c *InterconnectLocationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -72832,7 +73118,7 @@ func (c *InterconnectLocationsGetCall) Do(opts ...googleapi.CallOption) (*Interc
}
return ret, nil
// {
- // "description": "Returns the details for the specified interconnect location. Gets a list of available interconnect locations by making a list() request.",
+ // "description": "Returns the details for the specified interconnect location. Gets a list of available interconnect locations by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnectLocations.get",
// "parameterOrder": [
@@ -72880,7 +73166,7 @@ type InterconnectLocationsListCall struct {
}
// List: Retrieves the list of interconnect locations available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *InterconnectLocationsService) List(project string) *InterconnectLocationsListCall {
c := &InterconnectLocationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -72987,7 +73273,7 @@ func (c *InterconnectLocationsListCall) Header() http.Header {
func (c *InterconnectLocationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -73049,7 +73335,7 @@ func (c *InterconnectLocationsListCall) Do(opts ...googleapi.CallOption) (*Inter
}
return ret, nil
// {
- // "description": "Retrieves the list of interconnect locations available to the specified project.",
+ // "description": "Retrieves the list of interconnect locations available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnectLocations.list",
// "parameterOrder": [
@@ -73132,7 +73418,8 @@ type InterconnectsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified interconnect.
+// Delete: Deletes the specified interconnect. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InterconnectsService) Delete(project string, interconnect string) *InterconnectsDeleteCall {
c := &InterconnectsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -73186,7 +73473,7 @@ func (c *InterconnectsDeleteCall) Header() http.Header {
func (c *InterconnectsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -73246,7 +73533,7 @@ func (c *InterconnectsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Deletes the specified interconnect.",
+ // "description": "Deletes the specified interconnect. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.interconnects.delete",
// "parameterOrder": [
@@ -73299,7 +73586,8 @@ type InterconnectsGetCall struct {
}
// Get: Returns the specified interconnect. Get a list of available
-// interconnects by making a list() request.
+// interconnects by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InterconnectsService) Get(project string, interconnect string) *InterconnectsGetCall {
c := &InterconnectsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -73344,7 +73632,7 @@ func (c *InterconnectsGetCall) Header() http.Header {
func (c *InterconnectsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -73407,7 +73695,7 @@ func (c *InterconnectsGetCall) Do(opts ...googleapi.CallOption) (*Interconnect,
}
return ret, nil
// {
- // "description": "Returns the specified interconnect. Get a list of available interconnects by making a list() request.",
+ // "description": "Returns the specified interconnect. Get a list of available interconnects by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnects.get",
// "parameterOrder": [
@@ -73456,7 +73744,7 @@ type InterconnectsGetDiagnosticsCall struct {
}
// GetDiagnostics: Returns the interconnectDiagnostics for the specified
-// interconnect.
+// interconnect. (== suppress_warning http-rest-shadowed ==)
func (r *InterconnectsService) GetDiagnostics(project string, interconnect string) *InterconnectsGetDiagnosticsCall {
c := &InterconnectsGetDiagnosticsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -73501,7 +73789,7 @@ func (c *InterconnectsGetDiagnosticsCall) Header() http.Header {
func (c *InterconnectsGetDiagnosticsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -73565,7 +73853,7 @@ func (c *InterconnectsGetDiagnosticsCall) Do(opts ...googleapi.CallOption) (*Int
}
return ret, nil
// {
- // "description": "Returns the interconnectDiagnostics for the specified interconnect.",
+ // "description": "Returns the interconnectDiagnostics for the specified interconnect. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnects.getDiagnostics",
// "parameterOrder": [
@@ -73613,7 +73901,8 @@ type InterconnectsInsertCall struct {
}
// Insert: Creates a Interconnect in the specified project using the
-// data included in the request.
+// data included in the request. (== suppress_warning http-rest-shadowed
+// ==)
func (r *InterconnectsService) Insert(project string, interconnect *Interconnect) *InterconnectsInsertCall {
c := &InterconnectsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -73667,7 +73956,7 @@ func (c *InterconnectsInsertCall) Header() http.Header {
func (c *InterconnectsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -73731,7 +74020,7 @@ func (c *InterconnectsInsertCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Creates a Interconnect in the specified project using the data included in the request.",
+ // "description": "Creates a Interconnect in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.interconnects.insert",
// "parameterOrder": [
@@ -73778,7 +74067,7 @@ type InterconnectsListCall struct {
}
// List: Retrieves the list of interconnect available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
func (r *InterconnectsService) List(project string) *InterconnectsListCall {
c := &InterconnectsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -73885,7 +74174,7 @@ func (c *InterconnectsListCall) Header() http.Header {
func (c *InterconnectsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -73947,7 +74236,7 @@ func (c *InterconnectsListCall) Do(opts ...googleapi.CallOption) (*InterconnectL
}
return ret, nil
// {
- // "description": "Retrieves the list of interconnect available to the specified project.",
+ // "description": "Retrieves the list of interconnect available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.interconnects.list",
// "parameterOrder": [
@@ -74033,7 +74322,8 @@ type InterconnectsPatchCall struct {
// Patch: Updates the specified interconnect with the data included in
// the request. This method supports PATCH semantics and uses the JSON
-// merge patch format and processing rules.
+// merge patch format and processing rules. (== suppress_warning
+// http-rest-shadowed ==)
func (r *InterconnectsService) Patch(project string, interconnect string, interconnect2 *Interconnect) *InterconnectsPatchCall {
c := &InterconnectsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -74088,7 +74378,7 @@ func (c *InterconnectsPatchCall) Header() http.Header {
func (c *InterconnectsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -74153,7 +74443,7 @@ func (c *InterconnectsPatchCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Updates the specified interconnect with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the specified interconnect with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.interconnects.patch",
// "parameterOrder": [
@@ -74210,6 +74500,7 @@ type LicenseCodesGetCall struct {
// Get: Return a specified license code. License codes are mirrored
// across all projects that have permissions to read the License Code.
+// (== suppress_warning http-rest-shadowed ==)
func (r *LicenseCodesService) Get(project string, licenseCode string) *LicenseCodesGetCall {
c := &LicenseCodesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -74254,7 +74545,7 @@ func (c *LicenseCodesGetCall) Header() http.Header {
func (c *LicenseCodesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -74317,7 +74608,7 @@ func (c *LicenseCodesGetCall) Do(opts ...googleapi.CallOption) (*LicenseCode, er
}
return ret, nil
// {
- // "description": "Return a specified license code. License codes are mirrored across all projects that have permissions to read the License Code.",
+ // "description": "Return a specified license code. License codes are mirrored across all projects that have permissions to read the License Code. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.licenseCodes.get",
// "parameterOrder": [
@@ -74366,7 +74657,7 @@ type LicenseCodesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *LicenseCodesService) TestIamPermissions(project string, resource string, testpermissionsrequest *TestPermissionsRequest) *LicenseCodesTestIamPermissionsCall {
c := &LicenseCodesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -74402,7 +74693,7 @@ func (c *LicenseCodesTestIamPermissionsCall) Header() http.Header {
func (c *LicenseCodesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -74467,7 +74758,7 @@ func (c *LicenseCodesTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.licenseCodes.testIamPermissions",
// "parameterOrder": [
@@ -74517,7 +74808,8 @@ type LicensesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified license.
+// Delete: Deletes the specified license. (== suppress_warning
+// http-rest-shadowed ==)
func (r *LicensesService) Delete(project string, license string) *LicensesDeleteCall {
c := &LicensesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -74571,7 +74863,7 @@ func (c *LicensesDeleteCall) Header() http.Header {
func (c *LicensesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -74631,7 +74923,7 @@ func (c *LicensesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Deletes the specified license.",
+ // "description": "Deletes the specified license. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.licenses.delete",
// "parameterOrder": [
@@ -74683,7 +74975,8 @@ type LicensesGetCall struct {
header_ http.Header
}
-// Get: Returns the specified License resource.
+// Get: Returns the specified License resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/licenses/get
func (r *LicensesService) Get(project string, license string) *LicensesGetCall {
c := &LicensesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -74729,7 +75022,7 @@ func (c *LicensesGetCall) Header() http.Header {
func (c *LicensesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -74792,7 +75085,7 @@ func (c *LicensesGetCall) Do(opts ...googleapi.CallOption) (*License, error) {
}
return ret, nil
// {
- // "description": "Returns the specified License resource.",
+ // "description": "Returns the specified License resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.licenses.get",
// "parameterOrder": [
@@ -74841,7 +75134,8 @@ type LicensesGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *LicensesService) GetIamPolicy(project string, resource string) *LicensesGetIamPolicyCall {
c := &LicensesGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -74886,7 +75180,7 @@ func (c *LicensesGetIamPolicyCall) Header() http.Header {
func (c *LicensesGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -74949,7 +75243,7 @@ func (c *LicensesGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, er
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.licenses.getIamPolicy",
// "parameterOrder": [
@@ -74996,7 +75290,8 @@ type LicensesInsertCall struct {
header_ http.Header
}
-// Insert: Create a License resource in the specified project.
+// Insert: Create a License resource in the specified project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *LicensesService) Insert(project string, license *License) *LicensesInsertCall {
c := &LicensesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -75050,7 +75345,7 @@ func (c *LicensesInsertCall) Header() http.Header {
func (c *LicensesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -75114,7 +75409,7 @@ func (c *LicensesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Create a License resource in the specified project.",
+ // "description": "Create a License resource in the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.licenses.insert",
// "parameterOrder": [
@@ -75168,7 +75463,8 @@ type LicensesListCall struct {
// projects, including licenses attached to publicly-available images,
// like Debian 9. If you want to get a list of publicly-available
// licenses, use this method to make a request to the respective image
-// project, such as debian-cloud or windows-cloud.
+// project, such as debian-cloud or windows-cloud. (== suppress_warning
+// http-rest-shadowed ==)
func (r *LicensesService) List(project string) *LicensesListCall {
c := &LicensesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
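
Generated list calls such as Licenses.List paginate with a page token; the generated Pages helper threads the token automatically. A sketch with a placeholder project:

package main

import (
	"context"
	"fmt"
	"log"

	compute "google.golang.org/api/compute/v1"
)

func main() {
	ctx := context.Background()
	svc, err := compute.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}
	// Pages invokes the callback once per page until the token runs out.
	err = svc.Licenses.List("my-project").Pages(ctx, func(page *compute.LicensesListResponse) error {
		for _, lic := range page.Items {
			fmt.Println(lic.Name)
		}
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
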
@@ -75275,7 +75571,7 @@ func (c *LicensesListCall) Header() http.Header {
func (c *LicensesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -75337,7 +75633,7 @@ func (c *LicensesListCall) Do(opts ...googleapi.CallOption) (*LicensesListRespon
}
return ret, nil
// {
- // "description": "Retrieves the list of licenses available in the specified project. This method does not get any licenses that belong to other projects, including licenses attached to publicly-available images, like Debian 9. If you want to get a list of publicly-available licenses, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud.",
+ // "description": "Retrieves the list of licenses available in the specified project. This method does not get any licenses that belong to other projects, including licenses attached to publicly-available images, like Debian 9. If you want to get a list of publicly-available licenses, use this method to make a request to the respective image project, such as debian-cloud or windows-cloud. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.licenses.list",
// "parameterOrder": [
@@ -75422,7 +75718,8 @@ type LicensesSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *LicensesService) SetIamPolicy(project string, resource string, globalsetpolicyrequest *GlobalSetPolicyRequest) *LicensesSetIamPolicyCall {
c := &LicensesSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -75458,7 +75755,7 @@ func (c *LicensesSetIamPolicyCall) Header() http.Header {
func (c *LicensesSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -75523,7 +75820,7 @@ func (c *LicensesSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, er
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.licenses.setIamPolicy",
// "parameterOrder": [
@@ -75574,7 +75871,7 @@ type LicensesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *LicensesService) TestIamPermissions(project string, resource string, testpermissionsrequest *TestPermissionsRequest) *LicensesTestIamPermissionsCall {
c := &LicensesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -75610,7 +75907,7 @@ func (c *LicensesTestIamPermissionsCall) Header() http.Header {
func (c *LicensesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -75675,7 +75972,7 @@ func (c *LicensesTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*Test
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.licenses.testIamPermissions",
// "parameterOrder": [
@@ -75725,7 +76022,8 @@ type MachineTypesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of machine types.
+// AggregatedList: Retrieves an aggregated list of machine types. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/machineTypes/aggregatedList
func (r *MachineTypesService) AggregatedList(project string) *MachineTypesAggregatedListCall {
c := &MachineTypesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -75833,7 +76131,7 @@ func (c *MachineTypesAggregatedListCall) Header() http.Header {
func (c *MachineTypesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -75895,7 +76193,7 @@ func (c *MachineTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Mach
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of machine types.",
+ // "description": "Retrieves an aggregated list of machine types. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.machineTypes.aggregatedList",
// "parameterOrder": [
@@ -75981,7 +76279,8 @@ type MachineTypesGetCall struct {
}
// Get: Returns the specified machine type. Gets a list of available
-// machine types by making a list() request.
+// machine types by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/machineTypes/get
func (r *MachineTypesService) Get(project string, zone string, machineType string) *MachineTypesGetCall {
c := &MachineTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -76028,7 +76327,7 @@ func (c *MachineTypesGetCall) Header() http.Header {
func (c *MachineTypesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -76092,7 +76391,7 @@ func (c *MachineTypesGetCall) Do(opts ...googleapi.CallOption) (*MachineType, er
}
return ret, nil
// {
- // "description": "Returns the specified machine type. Gets a list of available machine types by making a list() request.",
+ // "description": "Returns the specified machine type. Gets a list of available machine types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.machineTypes.get",
// "parameterOrder": [
@@ -76149,7 +76448,7 @@ type MachineTypesListCall struct {
}
// List: Retrieves a list of machine types available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/machineTypes/list
func (r *MachineTypesService) List(project string, zone string) *MachineTypesListCall {
c := &MachineTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -76258,7 +76557,7 @@ func (c *MachineTypesListCall) Header() http.Header {
func (c *MachineTypesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -76321,7 +76620,7 @@ func (c *MachineTypesListCall) Do(opts ...googleapi.CallOption) (*MachineTypeLis
}
return ret, nil
// {
- // "description": "Retrieves a list of machine types available to the specified project.",
+ // "description": "Retrieves a list of machine types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.machineTypes.list",
// "parameterOrder": [
@@ -76413,7 +76712,7 @@ type NetworkEndpointGroupsAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of network endpoint groups and
-// sorts them by zone.
+// sorts them by zone. (== suppress_warning http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) AggregatedList(project string) *NetworkEndpointGroupsAggregatedListCall {
c := &NetworkEndpointGroupsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -76520,7 +76819,7 @@ func (c *NetworkEndpointGroupsAggregatedListCall) Header() http.Header {
func (c *NetworkEndpointGroupsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -76583,7 +76882,7 @@ func (c *NetworkEndpointGroupsAggregatedListCall) Do(opts ...googleapi.CallOptio
}
return ret, nil
// {
- // "description": "Retrieves the list of network endpoint groups and sorts them by zone.",
+ // "description": "Retrieves the list of network endpoint groups and sorts them by zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.networkEndpointGroups.aggregatedList",
// "parameterOrder": [
@@ -76669,7 +76968,8 @@ type NetworkEndpointGroupsAttachNetworkEndpointsCall struct {
}
// AttachNetworkEndpoints: Attach a list of network endpoints to the
-// specified network endpoint group.
+// specified network endpoint group. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) AttachNetworkEndpoints(project string, zone string, networkEndpointGroup string, networkendpointgroupsattachendpointsrequest *NetworkEndpointGroupsAttachEndpointsRequest) *NetworkEndpointGroupsAttachNetworkEndpointsCall {
c := &NetworkEndpointGroupsAttachNetworkEndpointsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -76725,7 +77025,7 @@ func (c *NetworkEndpointGroupsAttachNetworkEndpointsCall) Header() http.Header {
func (c *NetworkEndpointGroupsAttachNetworkEndpointsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -76791,7 +77091,7 @@ func (c *NetworkEndpointGroupsAttachNetworkEndpointsCall) Do(opts ...googleapi.C
}
return ret, nil
// {
- // "description": "Attach a list of network endpoints to the specified network endpoint group.",
+ // "description": "Attach a list of network endpoints to the specified network endpoint group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networkEndpointGroups.attachNetworkEndpoints",
// "parameterOrder": [
@@ -76855,7 +77155,8 @@ type NetworkEndpointGroupsDeleteCall struct {
// Delete: Deletes the specified network endpoint group. The network
// endpoints in the NEG and the VM instances they belong to are not
// terminated when the NEG is deleted. Note that the NEG cannot be
-// deleted if there are backend services referencing it.
+// deleted if there are backend services referencing it. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) Delete(project string, zone string, networkEndpointGroup string) *NetworkEndpointGroupsDeleteCall {
c := &NetworkEndpointGroupsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -76910,7 +77211,7 @@ func (c *NetworkEndpointGroupsDeleteCall) Header() http.Header {
func (c *NetworkEndpointGroupsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -76971,7 +77272,7 @@ func (c *NetworkEndpointGroupsDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes the specified network endpoint group. The network endpoints in the NEG and the VM instances they belong to are not terminated when the NEG is deleted. Note that the NEG cannot be deleted if there are backend services referencing it.",
+ // "description": "Deletes the specified network endpoint group. The network endpoints in the NEG and the VM instances they belong to are not terminated when the NEG is deleted. Note that the NEG cannot be deleted if there are backend services referencing it. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.networkEndpointGroups.delete",
// "parameterOrder": [
@@ -77031,7 +77332,8 @@ type NetworkEndpointGroupsDetachNetworkEndpointsCall struct {
}
// DetachNetworkEndpoints: Detach a list of network endpoints from the
-// specified network endpoint group.
+// specified network endpoint group. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) DetachNetworkEndpoints(project string, zone string, networkEndpointGroup string, networkendpointgroupsdetachendpointsrequest *NetworkEndpointGroupsDetachEndpointsRequest) *NetworkEndpointGroupsDetachNetworkEndpointsCall {
c := &NetworkEndpointGroupsDetachNetworkEndpointsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -77087,7 +77389,7 @@ func (c *NetworkEndpointGroupsDetachNetworkEndpointsCall) Header() http.Header {
func (c *NetworkEndpointGroupsDetachNetworkEndpointsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -77153,7 +77455,7 @@ func (c *NetworkEndpointGroupsDetachNetworkEndpointsCall) Do(opts ...googleapi.C
}
return ret, nil
// {
- // "description": "Detach a list of network endpoints from the specified network endpoint group.",
+ // "description": "Detach a list of network endpoints from the specified network endpoint group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networkEndpointGroups.detachNetworkEndpoints",
// "parameterOrder": [
@@ -77216,7 +77518,8 @@ type NetworkEndpointGroupsGetCall struct {
}
// Get: Returns the specified network endpoint group. Gets a list of
-// available network endpoint groups by making a list() request.
+// available network endpoint groups by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) Get(project string, zone string, networkEndpointGroup string) *NetworkEndpointGroupsGetCall {
c := &NetworkEndpointGroupsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -77262,7 +77565,7 @@ func (c *NetworkEndpointGroupsGetCall) Header() http.Header {
func (c *NetworkEndpointGroupsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -77326,7 +77629,7 @@ func (c *NetworkEndpointGroupsGetCall) Do(opts ...googleapi.CallOption) (*Networ
}
return ret, nil
// {
- // "description": "Returns the specified network endpoint group. Gets a list of available network endpoint groups by making a list() request.",
+ // "description": "Returns the specified network endpoint group. Gets a list of available network endpoint groups by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.networkEndpointGroups.get",
// "parameterOrder": [
@@ -77381,7 +77684,8 @@ type NetworkEndpointGroupsInsertCall struct {
}
// Insert: Creates a network endpoint group in the specified project
-// using the parameters that are included in the request.
+// using the parameters that are included in the request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) Insert(project string, zone string, networkendpointgroup *NetworkEndpointGroup) *NetworkEndpointGroupsInsertCall {
c := &NetworkEndpointGroupsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -77436,7 +77740,7 @@ func (c *NetworkEndpointGroupsInsertCall) Header() http.Header {
func (c *NetworkEndpointGroupsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -77501,7 +77805,7 @@ func (c *NetworkEndpointGroupsInsertCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Creates a network endpoint group in the specified project using the parameters that are included in the request.",
+ // "description": "Creates a network endpoint group in the specified project using the parameters that are included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networkEndpointGroups.insert",
// "parameterOrder": [
@@ -77556,7 +77860,8 @@ type NetworkEndpointGroupsListCall struct {
}
// List: Retrieves the list of network endpoint groups that are located
-// in the specified project and zone.
+// in the specified project and zone. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) List(project string, zone string) *NetworkEndpointGroupsListCall {
c := &NetworkEndpointGroupsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -77664,7 +77969,7 @@ func (c *NetworkEndpointGroupsListCall) Header() http.Header {
func (c *NetworkEndpointGroupsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -77727,7 +78032,7 @@ func (c *NetworkEndpointGroupsListCall) Do(opts ...googleapi.CallOption) (*Netwo
}
return ret, nil
// {
- // "description": "Retrieves the list of network endpoint groups that are located in the specified project and zone.",
+ // "description": "Retrieves the list of network endpoint groups that are located in the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.networkEndpointGroups.list",
// "parameterOrder": [
@@ -77820,7 +78125,7 @@ type NetworkEndpointGroupsListNetworkEndpointsCall struct {
}
// ListNetworkEndpoints: Lists the network endpoints in the specified
-// network endpoint group.
+// network endpoint group. (== suppress_warning http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) ListNetworkEndpoints(project string, zone string, networkEndpointGroup string, networkendpointgroupslistendpointsrequest *NetworkEndpointGroupsListEndpointsRequest) *NetworkEndpointGroupsListNetworkEndpointsCall {
c := &NetworkEndpointGroupsListNetworkEndpointsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -77920,7 +78225,7 @@ func (c *NetworkEndpointGroupsListNetworkEndpointsCall) Header() http.Header {
func (c *NetworkEndpointGroupsListNetworkEndpointsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -77988,7 +78293,7 @@ func (c *NetworkEndpointGroupsListNetworkEndpointsCall) Do(opts ...googleapi.Cal
}
return ret, nil
// {
- // "description": "Lists the network endpoints in the specified network endpoint group.",
+ // "description": "Lists the network endpoints in the specified network endpoint group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networkEndpointGroups.listNetworkEndpoints",
// "parameterOrder": [
@@ -78091,7 +78396,7 @@ type NetworkEndpointGroupsTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *NetworkEndpointGroupsService) TestIamPermissions(project string, zone string, resource string, testpermissionsrequest *TestPermissionsRequest) *NetworkEndpointGroupsTestIamPermissionsCall {
c := &NetworkEndpointGroupsTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -78128,7 +78433,7 @@ func (c *NetworkEndpointGroupsTestIamPermissionsCall) Header() http.Header {
func (c *NetworkEndpointGroupsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -78194,7 +78499,7 @@ func (c *NetworkEndpointGroupsTestIamPermissionsCall) Do(opts ...googleapi.CallO
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networkEndpointGroups.testIamPermissions",
// "parameterOrder": [
@@ -78253,7 +78558,8 @@ type NetworksAddPeeringCall struct {
header_ http.Header
}
-// AddPeering: Adds a peering to the specified network.
+// AddPeering: Adds a peering to the specified network. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NetworksService) AddPeering(project string, network string, networksaddpeeringrequest *NetworksAddPeeringRequest) *NetworksAddPeeringCall {
c := &NetworksAddPeeringCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -78308,7 +78614,7 @@ func (c *NetworksAddPeeringCall) Header() http.Header {
func (c *NetworksAddPeeringCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -78373,7 +78679,7 @@ func (c *NetworksAddPeeringCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Adds a peering to the specified network.",
+ // "description": "Adds a peering to the specified network. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networks.addPeering",
// "parameterOrder": [
@@ -78427,7 +78733,8 @@ type NetworksDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified network.
+// Delete: Deletes the specified network. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/delete
func (r *NetworksService) Delete(project string, network string) *NetworksDeleteCall {
c := &NetworksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -78482,7 +78789,7 @@ func (c *NetworksDeleteCall) Header() http.Header {
func (c *NetworksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -78542,7 +78849,7 @@ func (c *NetworksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Deletes the specified network.",
+ // "description": "Deletes the specified network. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.networks.delete",
// "parameterOrder": [
@@ -78595,7 +78902,8 @@ type NetworksGetCall struct {
}
// Get: Returns the specified network. Gets a list of available networks
-// by making a list() request.
+// by making a list() request. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/get
func (r *NetworksService) Get(project string, network string) *NetworksGetCall {
c := &NetworksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -78641,7 +78949,7 @@ func (c *NetworksGetCall) Header() http.Header {
func (c *NetworksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -78704,7 +79012,7 @@ func (c *NetworksGetCall) Do(opts ...googleapi.CallOption) (*Network, error) {
}
return ret, nil
// {
- // "description": "Returns the specified network. Gets a list of available networks by making a list() request.",
+ // "description": "Returns the specified network. Gets a list of available networks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.networks.get",
// "parameterOrder": [
@@ -78752,7 +79060,7 @@ type NetworksInsertCall struct {
}
// Insert: Creates a network in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/insert
func (r *NetworksService) Insert(project string, network *Network) *NetworksInsertCall {
c := &NetworksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -78807,7 +79115,7 @@ func (c *NetworksInsertCall) Header() http.Header {
func (c *NetworksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -78871,7 +79179,7 @@ func (c *NetworksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error
}
return ret, nil
// {
- // "description": "Creates a network in the specified project using the data included in the request.",
+ // "description": "Creates a network in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networks.insert",
// "parameterOrder": [
@@ -78918,7 +79226,7 @@ type NetworksListCall struct {
}
// List: Retrieves the list of networks available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/networks/list
func (r *NetworksService) List(project string) *NetworksListCall {
c := &NetworksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -79026,7 +79334,7 @@ func (c *NetworksListCall) Header() http.Header {
func (c *NetworksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -79088,7 +79396,7 @@ func (c *NetworksListCall) Do(opts ...googleapi.CallOption) (*NetworkList, error
}
return ret, nil
// {
- // "description": "Retrieves the list of networks available to the specified project.",
+ // "description": "Retrieves the list of networks available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.networks.list",
// "parameterOrder": [
@@ -79174,7 +79482,8 @@ type NetworksPatchCall struct {
// Patch: Patches the specified network with the data included in the
// request. Only the following fields can be modified:
-// routingConfig.routingMode.
+// routingConfig.routingMode. (== suppress_warning http-rest-shadowed
+// ==)
func (r *NetworksService) Patch(project string, network string, network2 *Network) *NetworksPatchCall {
c := &NetworksPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -79229,7 +79538,7 @@ func (c *NetworksPatchCall) Header() http.Header {
func (c *NetworksPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -79294,7 +79603,7 @@ func (c *NetworksPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode.",
+ // "description": "Patches the specified network with the data included in the request. Only the following fields can be modified: routingConfig.routingMode. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.networks.patch",
// "parameterOrder": [
@@ -79349,7 +79658,8 @@ type NetworksRemovePeeringCall struct {
header_ http.Header
}
-// RemovePeering: Removes a peering from the specified network.
+// RemovePeering: Removes a peering from the specified network. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NetworksService) RemovePeering(project string, network string, networksremovepeeringrequest *NetworksRemovePeeringRequest) *NetworksRemovePeeringCall {
c := &NetworksRemovePeeringCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -79404,7 +79714,7 @@ func (c *NetworksRemovePeeringCall) Header() http.Header {
func (c *NetworksRemovePeeringCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -79469,7 +79779,7 @@ func (c *NetworksRemovePeeringCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Removes a peering from the specified network.",
+ // "description": "Removes a peering from the specified network. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networks.removePeering",
// "parameterOrder": [
@@ -79524,7 +79834,7 @@ type NetworksSwitchToCustomModeCall struct {
}
// SwitchToCustomMode: Switches the network mode from auto subnet mode
-// to custom subnet mode.
+// to custom subnet mode. (== suppress_warning http-rest-shadowed ==)
func (r *NetworksService) SwitchToCustomMode(project string, network string) *NetworksSwitchToCustomModeCall {
c := &NetworksSwitchToCustomModeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -79578,7 +79888,7 @@ func (c *NetworksSwitchToCustomModeCall) Header() http.Header {
func (c *NetworksSwitchToCustomModeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -79638,7 +79948,7 @@ func (c *NetworksSwitchToCustomModeCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Switches the network mode from auto subnet mode to custom subnet mode.",
+ // "description": "Switches the network mode from auto subnet mode to custom subnet mode. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.networks.switchToCustomMode",
// "parameterOrder": [
@@ -79693,7 +80003,8 @@ type NetworksUpdatePeeringCall struct {
// UpdatePeering: Updates the specified network peering with the data
// included in the request Only the following fields can be modified:
// NetworkPeering.export_custom_routes, and
-// NetworkPeering.import_custom_routes
+// NetworkPeering.import_custom_routes (== suppress_warning
+// http-rest-shadowed ==)
func (r *NetworksService) UpdatePeering(project string, network string, networksupdatepeeringrequest *NetworksUpdatePeeringRequest) *NetworksUpdatePeeringCall {
c := &NetworksUpdatePeeringCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -79748,7 +80059,7 @@ func (c *NetworksUpdatePeeringCall) Header() http.Header {
func (c *NetworksUpdatePeeringCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -79813,7 +80124,7 @@ func (c *NetworksUpdatePeeringCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Updates the specified network peering with the data included in the request Only the following fields can be modified: NetworkPeering.export_custom_routes, and NetworkPeering.import_custom_routes",
+ // "description": "Updates the specified network peering with the data included in the request Only the following fields can be modified: NetworkPeering.export_custom_routes, and NetworkPeering.import_custom_routes (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.networks.updatePeering",
// "parameterOrder": [
@@ -79869,7 +80180,8 @@ type NodeGroupsAddNodesCall struct {
header_ http.Header
}
-// AddNodes: Adds specified number of nodes to the node group.
+// AddNodes: Adds specified number of nodes to the node group. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) AddNodes(project string, zone string, nodeGroup string, nodegroupsaddnodesrequest *NodeGroupsAddNodesRequest) *NodeGroupsAddNodesCall {
c := &NodeGroupsAddNodesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -79925,7 +80237,7 @@ func (c *NodeGroupsAddNodesCall) Header() http.Header {
func (c *NodeGroupsAddNodesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -79991,7 +80303,7 @@ func (c *NodeGroupsAddNodesCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Adds specified number of nodes to the node group.",
+ // "description": "Adds specified number of nodes to the node group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.addNodes",
// "parameterOrder": [
@@ -80054,7 +80366,8 @@ type NodeGroupsAggregatedListCall struct {
}
// AggregatedList: Retrieves an aggregated list of node groups. Note:
-// use nodeGroups.listNodes for more details about each group.
+// use nodeGroups.listNodes for more details about each group. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) AggregatedList(project string) *NodeGroupsAggregatedListCall {
c := &NodeGroupsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -80161,7 +80474,7 @@ func (c *NodeGroupsAggregatedListCall) Header() http.Header {
func (c *NodeGroupsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -80223,7 +80536,7 @@ func (c *NodeGroupsAggregatedListCall) Do(opts ...googleapi.CallOption) (*NodeGr
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of node groups. Note: use nodeGroups.listNodes for more details about each group.",
+ // "description": "Retrieves an aggregated list of node groups. Note: use nodeGroups.listNodes for more details about each group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeGroups.aggregatedList",
// "parameterOrder": [
@@ -80307,7 +80620,8 @@ type NodeGroupsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified NodeGroup resource.
+// Delete: Deletes the specified NodeGroup resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) Delete(project string, zone string, nodeGroup string) *NodeGroupsDeleteCall {
c := &NodeGroupsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -80362,7 +80676,7 @@ func (c *NodeGroupsDeleteCall) Header() http.Header {
func (c *NodeGroupsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -80423,7 +80737,7 @@ func (c *NodeGroupsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Deletes the specified NodeGroup resource.",
+ // "description": "Deletes the specified NodeGroup resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.nodeGroups.delete",
// "parameterOrder": [
@@ -80484,7 +80798,8 @@ type NodeGroupsDeleteNodesCall struct {
header_ http.Header
}
-// DeleteNodes: Deletes specified nodes from the node group.
+// DeleteNodes: Deletes specified nodes from the node group. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) DeleteNodes(project string, zone string, nodeGroup string, nodegroupsdeletenodesrequest *NodeGroupsDeleteNodesRequest) *NodeGroupsDeleteNodesCall {
c := &NodeGroupsDeleteNodesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -80540,7 +80855,7 @@ func (c *NodeGroupsDeleteNodesCall) Header() http.Header {
func (c *NodeGroupsDeleteNodesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -80606,7 +80921,7 @@ func (c *NodeGroupsDeleteNodesCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Deletes specified nodes from the node group.",
+ // "description": "Deletes specified nodes from the node group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.deleteNodes",
// "parameterOrder": [
@@ -80672,7 +80987,8 @@ type NodeGroupsGetCall struct {
// Get: Returns the specified NodeGroup. Get a list of available
// NodeGroups by making a list() request. Note: the "nodes" field should
-// not be used. Use nodeGroups.listNodes instead.
+// not be used. Use nodeGroups.listNodes instead. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeGroupsService) Get(project string, zone string, nodeGroup string) *NodeGroupsGetCall {
c := &NodeGroupsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -80718,7 +81034,7 @@ func (c *NodeGroupsGetCall) Header() http.Header {
func (c *NodeGroupsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -80782,7 +81098,7 @@ func (c *NodeGroupsGetCall) Do(opts ...googleapi.CallOption) (*NodeGroup, error)
}
return ret, nil
// {
- // "description": "Returns the specified NodeGroup. Get a list of available NodeGroups by making a list() request. Note: the \"nodes\" field should not be used. Use nodeGroups.listNodes instead.",
+ // "description": "Returns the specified NodeGroup. Get a list of available NodeGroups by making a list() request. Note: the \"nodes\" field should not be used. Use nodeGroups.listNodes instead. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeGroups.get",
// "parameterOrder": [
@@ -80840,7 +81156,8 @@ type NodeGroupsGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeGroupsService) GetIamPolicy(project string, zone string, resource string) *NodeGroupsGetIamPolicyCall {
c := &NodeGroupsGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -80886,7 +81203,7 @@ func (c *NodeGroupsGetIamPolicyCall) Header() http.Header {
func (c *NodeGroupsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -80950,7 +81267,7 @@ func (c *NodeGroupsGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy,
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeGroups.getIamPolicy",
// "parameterOrder": [
@@ -81007,7 +81324,8 @@ type NodeGroupsInsertCall struct {
}
// Insert: Creates a NodeGroup resource in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeGroupsService) Insert(project string, zone string, initialNodeCount int64, nodegroup *NodeGroup) *NodeGroupsInsertCall {
c := &NodeGroupsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -81063,7 +81381,7 @@ func (c *NodeGroupsInsertCall) Header() http.Header {
func (c *NodeGroupsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -81128,7 +81446,7 @@ func (c *NodeGroupsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Creates a NodeGroup resource in the specified project using the data included in the request.",
+ // "description": "Creates a NodeGroup resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.insert",
// "parameterOrder": [
@@ -81193,7 +81511,7 @@ type NodeGroupsListCall struct {
// List: Retrieves a list of node groups available to the specified
// project. Note: use nodeGroups.listNodes for more details about each
-// group.
+// group. (== suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) List(project string, zone string) *NodeGroupsListCall {
c := &NodeGroupsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -81301,7 +81619,7 @@ func (c *NodeGroupsListCall) Header() http.Header {
func (c *NodeGroupsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -81364,7 +81682,7 @@ func (c *NodeGroupsListCall) Do(opts ...googleapi.CallOption) (*NodeGroupList, e
}
return ret, nil
// {
- // "description": "Retrieves a list of node groups available to the specified project. Note: use nodeGroups.listNodes for more details about each group.",
+ // "description": "Retrieves a list of node groups available to the specified project. Note: use nodeGroups.listNodes for more details about each group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeGroups.list",
// "parameterOrder": [
@@ -81456,7 +81774,8 @@ type NodeGroupsListNodesCall struct {
header_ http.Header
}
-// ListNodes: Lists nodes in the node group.
+// ListNodes: Lists nodes in the node group. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeGroupsService) ListNodes(project string, zone string, nodeGroup string) *NodeGroupsListNodesCall {
c := &NodeGroupsListNodesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -81555,7 +81874,7 @@ func (c *NodeGroupsListNodesCall) Header() http.Header {
func (c *NodeGroupsListNodesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -81616,7 +81935,7 @@ func (c *NodeGroupsListNodesCall) Do(opts ...googleapi.CallOption) (*NodeGroupsL
}
return ret, nil
// {
- // "description": "Lists nodes in the node group.",
+ // "description": "Lists nodes in the node group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.listNodes",
// "parameterOrder": [
@@ -81718,7 +82037,8 @@ type NodeGroupsSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeGroupsService) SetIamPolicy(project string, zone string, resource string, zonesetpolicyrequest *ZoneSetPolicyRequest) *NodeGroupsSetIamPolicyCall {
c := &NodeGroupsSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -81755,7 +82075,7 @@ func (c *NodeGroupsSetIamPolicyCall) Header() http.Header {
func (c *NodeGroupsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -81821,7 +82141,7 @@ func (c *NodeGroupsSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy,
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.setIamPolicy",
// "parameterOrder": [
@@ -81880,7 +82200,8 @@ type NodeGroupsSetNodeTemplateCall struct {
header_ http.Header
}
-// SetNodeTemplate: Updates the node template of the node group.
+// SetNodeTemplate: Updates the node template of the node group. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) SetNodeTemplate(project string, zone string, nodeGroup string, nodegroupssetnodetemplaterequest *NodeGroupsSetNodeTemplateRequest) *NodeGroupsSetNodeTemplateCall {
c := &NodeGroupsSetNodeTemplateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -81936,7 +82257,7 @@ func (c *NodeGroupsSetNodeTemplateCall) Header() http.Header {
func (c *NodeGroupsSetNodeTemplateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -82002,7 +82323,7 @@ func (c *NodeGroupsSetNodeTemplateCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Updates the node template of the node group.",
+ // "description": "Updates the node template of the node group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.setNodeTemplate",
// "parameterOrder": [
@@ -82067,7 +82388,7 @@ type NodeGroupsTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *NodeGroupsService) TestIamPermissions(project string, zone string, resource string, testpermissionsrequest *TestPermissionsRequest) *NodeGroupsTestIamPermissionsCall {
c := &NodeGroupsTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -82104,7 +82425,7 @@ func (c *NodeGroupsTestIamPermissionsCall) Header() http.Header {
func (c *NodeGroupsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -82170,7 +82491,7 @@ func (c *NodeGroupsTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*Te
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeGroups.testIamPermissions",
// "parameterOrder": [
@@ -82228,7 +82549,8 @@ type NodeTemplatesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of node templates.
+// AggregatedList: Retrieves an aggregated list of node templates. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeTemplatesService) AggregatedList(project string) *NodeTemplatesAggregatedListCall {
c := &NodeTemplatesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -82335,7 +82657,7 @@ func (c *NodeTemplatesAggregatedListCall) Header() http.Header {
func (c *NodeTemplatesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -82397,7 +82719,7 @@ func (c *NodeTemplatesAggregatedListCall) Do(opts ...googleapi.CallOption) (*Nod
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of node templates.",
+ // "description": "Retrieves an aggregated list of node templates. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTemplates.aggregatedList",
// "parameterOrder": [
@@ -82481,7 +82803,8 @@ type NodeTemplatesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified NodeTemplate resource.
+// Delete: Deletes the specified NodeTemplate resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeTemplatesService) Delete(project string, region string, nodeTemplate string) *NodeTemplatesDeleteCall {
c := &NodeTemplatesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -82536,7 +82859,7 @@ func (c *NodeTemplatesDeleteCall) Header() http.Header {
func (c *NodeTemplatesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -82597,7 +82920,7 @@ func (c *NodeTemplatesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Deletes the specified NodeTemplate resource.",
+ // "description": "Deletes the specified NodeTemplate resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.nodeTemplates.delete",
// "parameterOrder": [
@@ -82659,7 +82982,8 @@ type NodeTemplatesGetCall struct {
}
// Get: Returns the specified node template. Gets a list of available
-// node templates by making a list() request.
+// node templates by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeTemplatesService) Get(project string, region string, nodeTemplate string) *NodeTemplatesGetCall {
c := &NodeTemplatesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -82705,7 +83029,7 @@ func (c *NodeTemplatesGetCall) Header() http.Header {
func (c *NodeTemplatesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -82769,7 +83093,7 @@ func (c *NodeTemplatesGetCall) Do(opts ...googleapi.CallOption) (*NodeTemplate,
}
return ret, nil
// {
- // "description": "Returns the specified node template. Gets a list of available node templates by making a list() request.",
+ // "description": "Returns the specified node template. Gets a list of available node templates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTemplates.get",
// "parameterOrder": [
@@ -82827,7 +83151,8 @@ type NodeTemplatesGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeTemplatesService) GetIamPolicy(project string, region string, resource string) *NodeTemplatesGetIamPolicyCall {
c := &NodeTemplatesGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -82873,7 +83198,7 @@ func (c *NodeTemplatesGetIamPolicyCall) Header() http.Header {
func (c *NodeTemplatesGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -82937,7 +83262,7 @@ func (c *NodeTemplatesGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Polic
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTemplates.getIamPolicy",
// "parameterOrder": [
@@ -82994,7 +83319,8 @@ type NodeTemplatesInsertCall struct {
}
// Insert: Creates a NodeTemplate resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeTemplatesService) Insert(project string, region string, nodetemplate *NodeTemplate) *NodeTemplatesInsertCall {
c := &NodeTemplatesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -83049,7 +83375,7 @@ func (c *NodeTemplatesInsertCall) Header() http.Header {
func (c *NodeTemplatesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -83114,7 +83440,7 @@ func (c *NodeTemplatesInsertCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Creates a NodeTemplate resource in the specified project using the data included in the request.",
+ // "description": "Creates a NodeTemplate resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeTemplates.insert",
// "parameterOrder": [
@@ -83170,7 +83496,7 @@ type NodeTemplatesListCall struct {
}
// List: Retrieves a list of node templates available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
func (r *NodeTemplatesService) List(project string, region string) *NodeTemplatesListCall {
c := &NodeTemplatesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -83278,7 +83604,7 @@ func (c *NodeTemplatesListCall) Header() http.Header {
func (c *NodeTemplatesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -83341,7 +83667,7 @@ func (c *NodeTemplatesListCall) Do(opts ...googleapi.CallOption) (*NodeTemplateL
}
return ret, nil
// {
- // "description": "Retrieves a list of node templates available to the specified project.",
+ // "description": "Retrieves a list of node templates available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTemplates.list",
// "parameterOrder": [
@@ -83435,7 +83761,8 @@ type NodeTemplatesSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeTemplatesService) SetIamPolicy(project string, region string, resource string, regionsetpolicyrequest *RegionSetPolicyRequest) *NodeTemplatesSetIamPolicyCall {
c := &NodeTemplatesSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -83472,7 +83799,7 @@ func (c *NodeTemplatesSetIamPolicyCall) Header() http.Header {
func (c *NodeTemplatesSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -83538,7 +83865,7 @@ func (c *NodeTemplatesSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Polic
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeTemplates.setIamPolicy",
// "parameterOrder": [
@@ -83598,7 +83925,7 @@ type NodeTemplatesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *NodeTemplatesService) TestIamPermissions(project string, region string, resource string, testpermissionsrequest *TestPermissionsRequest) *NodeTemplatesTestIamPermissionsCall {
c := &NodeTemplatesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -83635,7 +83962,7 @@ func (c *NodeTemplatesTestIamPermissionsCall) Header() http.Header {
func (c *NodeTemplatesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -83701,7 +84028,7 @@ func (c *NodeTemplatesTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.nodeTemplates.testIamPermissions",
// "parameterOrder": [
@@ -83759,7 +84086,8 @@ type NodeTypesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of node types.
+// AggregatedList: Retrieves an aggregated list of node types. (==
+// suppress_warning http-rest-shadowed ==)
func (r *NodeTypesService) AggregatedList(project string) *NodeTypesAggregatedListCall {
c := &NodeTypesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -83866,7 +84194,7 @@ func (c *NodeTypesAggregatedListCall) Header() http.Header {
func (c *NodeTypesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -83928,7 +84256,7 @@ func (c *NodeTypesAggregatedListCall) Do(opts ...googleapi.CallOption) (*NodeTyp
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of node types.",
+ // "description": "Retrieves an aggregated list of node types. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTypes.aggregatedList",
// "parameterOrder": [
@@ -84014,7 +84342,8 @@ type NodeTypesGetCall struct {
}
// Get: Returns the specified node type. Gets a list of available node
-// types by making a list() request.
+// types by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *NodeTypesService) Get(project string, zone string, nodeType string) *NodeTypesGetCall {
c := &NodeTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -84060,7 +84389,7 @@ func (c *NodeTypesGetCall) Header() http.Header {
func (c *NodeTypesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -84124,7 +84453,7 @@ func (c *NodeTypesGetCall) Do(opts ...googleapi.CallOption) (*NodeType, error) {
}
return ret, nil
// {
- // "description": "Returns the specified node type. Gets a list of available node types by making a list() request.",
+ // "description": "Returns the specified node type. Gets a list of available node types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTypes.get",
// "parameterOrder": [
@@ -84181,7 +84510,7 @@ type NodeTypesListCall struct {
}
// List: Retrieves a list of node types available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
func (r *NodeTypesService) List(project string, zone string) *NodeTypesListCall {
c := &NodeTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -84289,7 +84618,7 @@ func (c *NodeTypesListCall) Header() http.Header {
func (c *NodeTypesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -84352,7 +84681,7 @@ func (c *NodeTypesListCall) Do(opts ...googleapi.CallOption) (*NodeTypeList, err
}
return ret, nil
// {
- // "description": "Retrieves a list of node types available to the specified project.",
+ // "description": "Retrieves a list of node types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.nodeTypes.list",
// "parameterOrder": [
@@ -84443,6 +84772,7 @@ type ProjectsDisableXpnHostCall struct {
}
// DisableXpnHost: Disable this project as a shared VPC host project.
+// (== suppress_warning http-rest-shadowed ==)
func (r *ProjectsService) DisableXpnHost(project string) *ProjectsDisableXpnHostCall {
c := &ProjectsDisableXpnHostCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -84495,7 +84825,7 @@ func (c *ProjectsDisableXpnHostCall) Header() http.Header {
func (c *ProjectsDisableXpnHostCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -84554,7 +84884,7 @@ func (c *ProjectsDisableXpnHostCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Disable this project as a shared VPC host project.",
+ // "description": "Disable this project as a shared VPC host project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.disableXpnHost",
// "parameterOrder": [
@@ -84598,7 +84928,8 @@ type ProjectsDisableXpnResourceCall struct {
}
// DisableXpnResource: Disable a service resource (also known as service
-// project) associated with this host project.
+// project) associated with this host project. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ProjectsService) DisableXpnResource(project string, projectsdisablexpnresourcerequest *ProjectsDisableXpnResourceRequest) *ProjectsDisableXpnResourceCall {
c := &ProjectsDisableXpnResourceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -84652,7 +84983,7 @@ func (c *ProjectsDisableXpnResourceCall) Header() http.Header {
func (c *ProjectsDisableXpnResourceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -84716,7 +85047,7 @@ func (c *ProjectsDisableXpnResourceCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Disable a service resource (also known as service project) associated with this host project.",
+ // "description": "Disable a service resource (also known as service project) associated with this host project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.disableXpnResource",
// "parameterOrder": [
@@ -84761,7 +85092,8 @@ type ProjectsEnableXpnHostCall struct {
header_ http.Header
}
-// EnableXpnHost: Enable this project as a shared VPC host project.
+// EnableXpnHost: Enable this project as a shared VPC host project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ProjectsService) EnableXpnHost(project string) *ProjectsEnableXpnHostCall {
c := &ProjectsEnableXpnHostCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -84814,7 +85146,7 @@ func (c *ProjectsEnableXpnHostCall) Header() http.Header {
func (c *ProjectsEnableXpnHostCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -84873,7 +85205,7 @@ func (c *ProjectsEnableXpnHostCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Enable this project as a shared VPC host project.",
+ // "description": "Enable this project as a shared VPC host project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.enableXpnHost",
// "parameterOrder": [
@@ -84918,7 +85250,8 @@ type ProjectsEnableXpnResourceCall struct {
// EnableXpnResource: Enable service resource (a.k.a service project)
// for a host project, so that subnets in the host project can be used
-// by instances in the service project.
+// by instances in the service project. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ProjectsService) EnableXpnResource(project string, projectsenablexpnresourcerequest *ProjectsEnableXpnResourceRequest) *ProjectsEnableXpnResourceCall {
c := &ProjectsEnableXpnResourceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -84972,7 +85305,7 @@ func (c *ProjectsEnableXpnResourceCall) Header() http.Header {
func (c *ProjectsEnableXpnResourceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -85036,7 +85369,7 @@ func (c *ProjectsEnableXpnResourceCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Enable service resource (a.k.a service project) for a host project, so that subnets in the host project can be used by instances in the service project.",
+ // "description": "Enable service resource (a.k.a service project) for a host project, so that subnets in the host project can be used by instances in the service project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.enableXpnResource",
// "parameterOrder": [
@@ -85082,7 +85415,8 @@ type ProjectsGetCall struct {
header_ http.Header
}
-// Get: Returns the specified Project resource.
+// Get: Returns the specified Project resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/projects/get
func (r *ProjectsService) Get(project string) *ProjectsGetCall {
c := &ProjectsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -85127,7 +85461,7 @@ func (c *ProjectsGetCall) Header() http.Header {
func (c *ProjectsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -85189,7 +85523,7 @@ func (c *ProjectsGetCall) Do(opts ...googleapi.CallOption) (*Project, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Project resource.",
+ // "description": "Returns the specified Project resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.projects.get",
// "parameterOrder": [
@@ -85229,7 +85563,8 @@ type ProjectsGetXpnHostCall struct {
}
// GetXpnHost: Gets the shared VPC host project that this project links
-// to. May be empty if no link exists.
+// to. May be empty if no link exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ProjectsService) GetXpnHost(project string) *ProjectsGetXpnHostCall {
c := &ProjectsGetXpnHostCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -85273,7 +85608,7 @@ func (c *ProjectsGetXpnHostCall) Header() http.Header {
func (c *ProjectsGetXpnHostCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -85335,7 +85670,7 @@ func (c *ProjectsGetXpnHostCall) Do(opts ...googleapi.CallOption) (*Project, err
}
return ret, nil
// {
- // "description": "Gets the shared VPC host project that this project links to. May be empty if no link exists.",
+ // "description": "Gets the shared VPC host project that this project links to. May be empty if no link exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.projects.getXpnHost",
// "parameterOrder": [
@@ -85374,7 +85709,8 @@ type ProjectsGetXpnResourcesCall struct {
}
// GetXpnResources: Gets service resources (a.k.a service project)
-// associated with this host project.
+// associated with this host project. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ProjectsService) GetXpnResources(project string) *ProjectsGetXpnResourcesCall {
c := &ProjectsGetXpnResourcesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -85481,7 +85817,7 @@ func (c *ProjectsGetXpnResourcesCall) Header() http.Header {
func (c *ProjectsGetXpnResourcesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -85543,7 +85879,7 @@ func (c *ProjectsGetXpnResourcesCall) Do(opts ...googleapi.CallOption) (*Project
}
return ret, nil
// {
- // "description": "Gets service resources (a.k.a service project) associated with this host project.",
+ // "description": "Gets service resources (a.k.a service project) associated with this host project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.projects.getXpnResources",
// "parameterOrder": [
@@ -85626,7 +85962,7 @@ type ProjectsListXpnHostsCall struct {
}
// ListXpnHosts: Lists all shared VPC host projects visible to the user
-// in an organization.
+// in an organization. (== suppress_warning http-rest-shadowed ==)
func (r *ProjectsService) ListXpnHosts(project string, projectslistxpnhostsrequest *ProjectsListXpnHostsRequest) *ProjectsListXpnHostsCall {
c := &ProjectsListXpnHostsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -85724,7 +86060,7 @@ func (c *ProjectsListXpnHostsCall) Header() http.Header {
func (c *ProjectsListXpnHostsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -85788,7 +86124,7 @@ func (c *ProjectsListXpnHostsCall) Do(opts ...googleapi.CallOption) (*XpnHostLis
}
return ret, nil
// {
- // "description": "Lists all shared VPC host projects visible to the user in an organization.",
+ // "description": "Lists all shared VPC host projects visible to the user in an organization. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.listXpnHosts",
// "parameterOrder": [
@@ -85873,7 +86209,8 @@ type ProjectsMoveDiskCall struct {
header_ http.Header
}
-// MoveDisk: Moves a persistent disk from one zone to another.
+// MoveDisk: Moves a persistent disk from one zone to another. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ProjectsService) MoveDisk(project string, diskmoverequest *DiskMoveRequest) *ProjectsMoveDiskCall {
c := &ProjectsMoveDiskCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -85927,7 +86264,7 @@ func (c *ProjectsMoveDiskCall) Header() http.Header {
func (c *ProjectsMoveDiskCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -85991,7 +86328,7 @@ func (c *ProjectsMoveDiskCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Moves a persistent disk from one zone to another.",
+ // "description": "Moves a persistent disk from one zone to another. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.moveDisk",
// "parameterOrder": [
@@ -86038,7 +86375,7 @@ type ProjectsMoveInstanceCall struct {
}
// MoveInstance: Moves an instance and its attached persistent disks
-// from one zone to another.
+// from one zone to another. (== suppress_warning http-rest-shadowed ==)
func (r *ProjectsService) MoveInstance(project string, instancemoverequest *InstanceMoveRequest) *ProjectsMoveInstanceCall {
c := &ProjectsMoveInstanceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -86092,7 +86429,7 @@ func (c *ProjectsMoveInstanceCall) Header() http.Header {
func (c *ProjectsMoveInstanceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -86156,7 +86493,7 @@ func (c *ProjectsMoveInstanceCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Moves an instance and its attached persistent disks from one zone to another.",
+ // "description": "Moves an instance and its attached persistent disks from one zone to another. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.moveInstance",
// "parameterOrder": [
@@ -86204,6 +86541,7 @@ type ProjectsSetCommonInstanceMetadataCall struct {
// SetCommonInstanceMetadata: Sets metadata common to all instances
// within the specified project using the data included in the request.
+// (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/projects/setCommonInstanceMetadata
func (r *ProjectsService) SetCommonInstanceMetadata(project string, metadata *Metadata) *ProjectsSetCommonInstanceMetadataCall {
c := &ProjectsSetCommonInstanceMetadataCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -86258,7 +86596,7 @@ func (c *ProjectsSetCommonInstanceMetadataCall) Header() http.Header {
func (c *ProjectsSetCommonInstanceMetadataCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -86322,7 +86660,7 @@ func (c *ProjectsSetCommonInstanceMetadataCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Sets metadata common to all instances within the specified project using the data included in the request.",
+ // "description": "Sets metadata common to all instances within the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.setCommonInstanceMetadata",
// "parameterOrder": [
@@ -86371,7 +86709,7 @@ type ProjectsSetDefaultNetworkTierCall struct {
// SetDefaultNetworkTier: Sets the default network tier of the project.
// The default network tier is used when an
// address/forwardingRule/instance is created without specifying the
-// network tier field.
+// network tier field. (== suppress_warning http-rest-shadowed ==)
func (r *ProjectsService) SetDefaultNetworkTier(project string, projectssetdefaultnetworktierrequest *ProjectsSetDefaultNetworkTierRequest) *ProjectsSetDefaultNetworkTierCall {
c := &ProjectsSetDefaultNetworkTierCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -86425,7 +86763,7 @@ func (c *ProjectsSetDefaultNetworkTierCall) Header() http.Header {
func (c *ProjectsSetDefaultNetworkTierCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -86489,7 +86827,7 @@ func (c *ProjectsSetDefaultNetworkTierCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Sets the default network tier of the project. The default network tier is used when an address/forwardingRule/instance is created without specifying the network tier field.",
+ // "description": "Sets the default network tier of the project. The default network tier is used when an address/forwardingRule/instance is created without specifying the network tier field. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.setDefaultNetworkTier",
// "parameterOrder": [
@@ -86538,7 +86876,7 @@ type ProjectsSetUsageExportBucketCall struct {
// SetUsageExportBucket: Enables the usage export feature and sets the
// usage export bucket where reports are stored. If you provide an empty
// request body using this method, the usage export feature will be
-// disabled.
+// disabled. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/projects/setUsageExportBucket
func (r *ProjectsService) SetUsageExportBucket(project string, usageexportlocation *UsageExportLocation) *ProjectsSetUsageExportBucketCall {
c := &ProjectsSetUsageExportBucketCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -86593,7 +86931,7 @@ func (c *ProjectsSetUsageExportBucketCall) Header() http.Header {
func (c *ProjectsSetUsageExportBucketCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -86657,7 +86995,7 @@ func (c *ProjectsSetUsageExportBucketCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Enables the usage export feature and sets the usage export bucket where reports are stored. If you provide an empty request body using this method, the usage export feature will be disabled.",
+ // "description": "Enables the usage export feature and sets the usage export bucket where reports are stored. If you provide an empty request body using this method, the usage export feature will be disabled. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.projects.setUsageExportBucket",
// "parameterOrder": [
@@ -86707,7 +87045,8 @@ type RegionAutoscalersDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified autoscaler.
+// Delete: Deletes the specified autoscaler. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionAutoscalersService) Delete(project string, region string, autoscaler string) *RegionAutoscalersDeleteCall {
c := &RegionAutoscalersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -86762,7 +87101,7 @@ func (c *RegionAutoscalersDeleteCall) Header() http.Header {
func (c *RegionAutoscalersDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -86823,7 +87162,7 @@ func (c *RegionAutoscalersDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified autoscaler.",
+ // "description": "Deletes the specified autoscaler. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionAutoscalers.delete",
// "parameterOrder": [
@@ -86884,7 +87223,8 @@ type RegionAutoscalersGetCall struct {
header_ http.Header
}
-// Get: Returns the specified autoscaler.
+// Get: Returns the specified autoscaler. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionAutoscalersService) Get(project string, region string, autoscaler string) *RegionAutoscalersGetCall {
c := &RegionAutoscalersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -86930,7 +87270,7 @@ func (c *RegionAutoscalersGetCall) Header() http.Header {
func (c *RegionAutoscalersGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -86994,7 +87334,7 @@ func (c *RegionAutoscalersGetCall) Do(opts ...googleapi.CallOption) (*Autoscaler
}
return ret, nil
// {
- // "description": "Returns the specified autoscaler.",
+ // "description": "Returns the specified autoscaler. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionAutoscalers.get",
// "parameterOrder": [
@@ -87051,7 +87391,7 @@ type RegionAutoscalersInsertCall struct {
}
// Insert: Creates an autoscaler in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionAutoscalersService) Insert(project string, region string, autoscaler *Autoscaler) *RegionAutoscalersInsertCall {
c := &RegionAutoscalersInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -87106,7 +87446,7 @@ func (c *RegionAutoscalersInsertCall) Header() http.Header {
func (c *RegionAutoscalersInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -87171,7 +87511,7 @@ func (c *RegionAutoscalersInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates an autoscaler in the specified project using the data included in the request.",
+ // "description": "Creates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionAutoscalers.insert",
// "parameterOrder": [
@@ -87227,7 +87567,7 @@ type RegionAutoscalersListCall struct {
}
// List: Retrieves a list of autoscalers contained within the specified
-// region.
+// region. (== suppress_warning http-rest-shadowed ==)
func (r *RegionAutoscalersService) List(project string, region string) *RegionAutoscalersListCall {
c := &RegionAutoscalersListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -87335,7 +87675,7 @@ func (c *RegionAutoscalersListCall) Header() http.Header {
func (c *RegionAutoscalersListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -87398,7 +87738,7 @@ func (c *RegionAutoscalersListCall) Do(opts ...googleapi.CallOption) (*RegionAut
}
return ret, nil
// {
- // "description": "Retrieves a list of autoscalers contained within the specified region.",
+ // "description": "Retrieves a list of autoscalers contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionAutoscalers.list",
// "parameterOrder": [
@@ -87492,7 +87832,8 @@ type RegionAutoscalersPatchCall struct {
// Patch: Updates an autoscaler in the specified project using the data
// included in the request. This method supports PATCH semantics and
-// uses the JSON merge patch format and processing rules.
+// uses the JSON merge patch format and processing rules. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionAutoscalersService) Patch(project string, region string, autoscaler *Autoscaler) *RegionAutoscalersPatchCall {
c := &RegionAutoscalersPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -87554,7 +87895,7 @@ func (c *RegionAutoscalersPatchCall) Header() http.Header {
func (c *RegionAutoscalersPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -87619,7 +87960,7 @@ func (c *RegionAutoscalersPatchCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates an autoscaler in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.regionAutoscalers.patch",
// "parameterOrder": [
@@ -87681,7 +88022,7 @@ type RegionAutoscalersUpdateCall struct {
}
// Update: Updates an autoscaler in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionAutoscalersService) Update(project string, region string, autoscaler *Autoscaler) *RegionAutoscalersUpdateCall {
c := &RegionAutoscalersUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -87743,7 +88084,7 @@ func (c *RegionAutoscalersUpdateCall) Header() http.Header {
func (c *RegionAutoscalersUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -87808,7 +88149,7 @@ func (c *RegionAutoscalersUpdateCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Updates an autoscaler in the specified project using the data included in the request.",
+ // "description": "Updates an autoscaler in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.regionAutoscalers.update",
// "parameterOrder": [
@@ -87869,7 +88210,8 @@ type RegionBackendServicesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified regional BackendService resource.
+// Delete: Deletes the specified regional BackendService resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) Delete(project string, region string, backendService string) *RegionBackendServicesDeleteCall {
c := &RegionBackendServicesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -87924,7 +88266,7 @@ func (c *RegionBackendServicesDeleteCall) Header() http.Header {
func (c *RegionBackendServicesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -87985,7 +88327,7 @@ func (c *RegionBackendServicesDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes the specified regional BackendService resource.",
+ // "description": "Deletes the specified regional BackendService resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionBackendServices.delete",
// "parameterOrder": [
@@ -88046,7 +88388,8 @@ type RegionBackendServicesGetCall struct {
header_ http.Header
}
-// Get: Returns the specified regional BackendService resource.
+// Get: Returns the specified regional BackendService resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) Get(project string, region string, backendService string) *RegionBackendServicesGetCall {
c := &RegionBackendServicesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -88092,7 +88435,7 @@ func (c *RegionBackendServicesGetCall) Header() http.Header {
func (c *RegionBackendServicesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -88156,7 +88499,7 @@ func (c *RegionBackendServicesGetCall) Do(opts ...googleapi.CallOption) (*Backen
}
return ret, nil
// {
- // "description": "Returns the specified regional BackendService resource.",
+ // "description": "Returns the specified regional BackendService resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionBackendServices.get",
// "parameterOrder": [
@@ -88214,7 +88557,7 @@ type RegionBackendServicesGetHealthCall struct {
}
// GetHealth: Gets the most recent health check results for this
-// regional BackendService.
+// regional BackendService. (== suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) GetHealth(project string, region string, backendService string, resourcegroupreference *ResourceGroupReference) *RegionBackendServicesGetHealthCall {
c := &RegionBackendServicesGetHealthCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -88251,7 +88594,7 @@ func (c *RegionBackendServicesGetHealthCall) Header() http.Header {
func (c *RegionBackendServicesGetHealthCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -88317,7 +88660,7 @@ func (c *RegionBackendServicesGetHealthCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Gets the most recent health check results for this regional BackendService.",
+ // "description": "Gets the most recent health check results for this regional BackendService. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionBackendServices.getHealth",
// "parameterOrder": [
@@ -88379,7 +88722,7 @@ type RegionBackendServicesInsertCall struct {
// project using the data included in the request. There are several
// restrictions and guidelines to keep in mind when creating a regional
// backend service. Read Restrictions and Guidelines for more
-// information.
+// information. (== suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) Insert(project string, region string, backendservice *BackendService) *RegionBackendServicesInsertCall {
c := &RegionBackendServicesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -88434,7 +88777,7 @@ func (c *RegionBackendServicesInsertCall) Header() http.Header {
func (c *RegionBackendServicesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -88499,7 +88842,7 @@ func (c *RegionBackendServicesInsertCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Creates a regional BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a regional backend service. Read Restrictions and Guidelines for more information.",
+ // "description": "Creates a regional BackendService resource in the specified project using the data included in the request. There are several restrictions and guidelines to keep in mind when creating a regional backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionBackendServices.insert",
// "parameterOrder": [
@@ -88555,7 +88898,8 @@ type RegionBackendServicesListCall struct {
}
// List: Retrieves the list of regional BackendService resources
-// available to the specified project in the given region.
+// available to the specified project in the given region. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) List(project string, region string) *RegionBackendServicesListCall {
c := &RegionBackendServicesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -88663,7 +89007,7 @@ func (c *RegionBackendServicesListCall) Header() http.Header {
func (c *RegionBackendServicesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -88726,7 +89070,7 @@ func (c *RegionBackendServicesListCall) Do(opts ...googleapi.CallOption) (*Backe
}
return ret, nil
// {
- // "description": "Retrieves the list of regional BackendService resources available to the specified project in the given region.",
+ // "description": "Retrieves the list of regional BackendService resources available to the specified project in the given region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionBackendServices.list",
// "parameterOrder": [
@@ -88824,7 +89168,7 @@ type RegionBackendServicesPatchCall struct {
// guidelines to keep in mind when updating a backend service. Read
// Restrictions and Guidelines for more information. This method
// supports PATCH semantics and uses the JSON merge patch format and
-// processing rules.
+// processing rules. (== suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) Patch(project string, region string, backendService string, backendservice *BackendService) *RegionBackendServicesPatchCall {
c := &RegionBackendServicesPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -88880,7 +89224,7 @@ func (c *RegionBackendServicesPatchCall) Header() http.Header {
func (c *RegionBackendServicesPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -88946,7 +89290,7 @@ func (c *RegionBackendServicesPatchCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.regionBackendServices.patch",
// "parameterOrder": [
@@ -89013,7 +89357,8 @@ type RegionBackendServicesUpdateCall struct {
// Update: Updates the specified regional BackendService resource with
// the data included in the request. There are several restrictions and
// guidelines to keep in mind when updating a backend service. Read
-// Restrictions and Guidelines for more information.
+// Restrictions and Guidelines for more information. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionBackendServicesService) Update(project string, region string, backendService string, backendservice *BackendService) *RegionBackendServicesUpdateCall {
c := &RegionBackendServicesUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -89069,7 +89414,7 @@ func (c *RegionBackendServicesUpdateCall) Header() http.Header {
func (c *RegionBackendServicesUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -89135,7 +89480,7 @@ func (c *RegionBackendServicesUpdateCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information.",
+ // "description": "Updates the specified regional BackendService resource with the data included in the request. There are several restrictions and guidelines to keep in mind when updating a backend service. Read Restrictions and Guidelines for more information. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.regionBackendServices.update",
// "parameterOrder": [
@@ -89197,7 +89542,8 @@ type RegionCommitmentsAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of commitments.
+// AggregatedList: Retrieves an aggregated list of commitments. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionCommitmentsService) AggregatedList(project string) *RegionCommitmentsAggregatedListCall {
c := &RegionCommitmentsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -89304,7 +89650,7 @@ func (c *RegionCommitmentsAggregatedListCall) Header() http.Header {
func (c *RegionCommitmentsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -89366,7 +89712,7 @@ func (c *RegionCommitmentsAggregatedListCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of commitments.",
+ // "description": "Retrieves an aggregated list of commitments. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionCommitments.aggregatedList",
// "parameterOrder": [
@@ -89452,7 +89798,8 @@ type RegionCommitmentsGetCall struct {
}
// Get: Returns the specified commitment resource. Gets a list of
-// available commitments by making a list() request.
+// available commitments by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionCommitmentsService) Get(project string, region string, commitment string) *RegionCommitmentsGetCall {
c := &RegionCommitmentsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -89498,7 +89845,7 @@ func (c *RegionCommitmentsGetCall) Header() http.Header {
func (c *RegionCommitmentsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -89562,7 +89909,7 @@ func (c *RegionCommitmentsGetCall) Do(opts ...googleapi.CallOption) (*Commitment
}
return ret, nil
// {
- // "description": "Returns the specified commitment resource. Gets a list of available commitments by making a list() request.",
+ // "description": "Returns the specified commitment resource. Gets a list of available commitments by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionCommitments.get",
// "parameterOrder": [
@@ -89619,7 +89966,7 @@ type RegionCommitmentsInsertCall struct {
}
// Insert: Creates a commitment in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionCommitmentsService) Insert(project string, region string, commitment *Commitment) *RegionCommitmentsInsertCall {
c := &RegionCommitmentsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -89674,7 +90021,7 @@ func (c *RegionCommitmentsInsertCall) Header() http.Header {
func (c *RegionCommitmentsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -89739,7 +90086,7 @@ func (c *RegionCommitmentsInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates a commitment in the specified project using the data included in the request.",
+ // "description": "Creates a commitment in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionCommitments.insert",
// "parameterOrder": [
@@ -89795,7 +90142,7 @@ type RegionCommitmentsListCall struct {
}
// List: Retrieves a list of commitments contained within the specified
-// region.
+// region. (== suppress_warning http-rest-shadowed ==)
func (r *RegionCommitmentsService) List(project string, region string) *RegionCommitmentsListCall {
c := &RegionCommitmentsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -89903,7 +90250,7 @@ func (c *RegionCommitmentsListCall) Header() http.Header {
func (c *RegionCommitmentsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -89966,7 +90313,7 @@ func (c *RegionCommitmentsListCall) Do(opts ...googleapi.CallOption) (*Commitmen
}
return ret, nil
// {
- // "description": "Retrieves a list of commitments contained within the specified region.",
+ // "description": "Retrieves a list of commitments contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionCommitments.list",
// "parameterOrder": [
@@ -90060,7 +90407,8 @@ type RegionDiskTypesGetCall struct {
}
// Get: Returns the specified regional disk type. Gets a list of
-// available disk types by making a list() request.
+// available disk types by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionDiskTypesService) Get(project string, region string, diskType string) *RegionDiskTypesGetCall {
c := &RegionDiskTypesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -90106,7 +90454,7 @@ func (c *RegionDiskTypesGetCall) Header() http.Header {
func (c *RegionDiskTypesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -90170,7 +90518,7 @@ func (c *RegionDiskTypesGetCall) Do(opts ...googleapi.CallOption) (*DiskType, er
}
return ret, nil
// {
- // "description": "Returns the specified regional disk type. Gets a list of available disk types by making a list() request.",
+ // "description": "Returns the specified regional disk type. Gets a list of available disk types by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionDiskTypes.get",
// "parameterOrder": [
@@ -90227,7 +90575,7 @@ type RegionDiskTypesListCall struct {
}
// List: Retrieves a list of regional disk types available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *RegionDiskTypesService) List(project string, region string) *RegionDiskTypesListCall {
c := &RegionDiskTypesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -90335,7 +90683,7 @@ func (c *RegionDiskTypesListCall) Header() http.Header {
func (c *RegionDiskTypesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -90398,7 +90746,7 @@ func (c *RegionDiskTypesListCall) Do(opts ...googleapi.CallOption) (*RegionDiskT
}
return ret, nil
// {
- // "description": "Retrieves a list of regional disk types available to the specified project.",
+ // "description": "Retrieves a list of regional disk types available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionDiskTypes.list",
// "parameterOrder": [
@@ -90493,7 +90841,8 @@ type RegionDisksAddResourcePoliciesCall struct {
// AddResourcePolicies: Adds existing resource policies to a regional
// disk. You can only add one policy which will be applied to this disk
-// for scheduling snapshot creation.
+// for scheduling snapshot creation. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionDisksService) AddResourcePolicies(project string, region string, disk string, regiondisksaddresourcepoliciesrequest *RegionDisksAddResourcePoliciesRequest) *RegionDisksAddResourcePoliciesCall {
c := &RegionDisksAddResourcePoliciesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -90549,7 +90898,7 @@ func (c *RegionDisksAddResourcePoliciesCall) Header() http.Header {
func (c *RegionDisksAddResourcePoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -90615,7 +90964,7 @@ func (c *RegionDisksAddResourcePoliciesCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Adds existing resource policies to a regional disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation.",
+ // "description": "Adds existing resource policies to a regional disk. You can only add one policy which will be applied to this disk for scheduling snapshot creation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.addResourcePolicies",
// "parameterOrder": [
@@ -90679,7 +91028,8 @@ type RegionDisksCreateSnapshotCall struct {
header_ http.Header
}
-// CreateSnapshot: Creates a snapshot of this regional disk.
+// CreateSnapshot: Creates a snapshot of this regional disk. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) CreateSnapshot(project string, region string, disk string, snapshot *Snapshot) *RegionDisksCreateSnapshotCall {
c := &RegionDisksCreateSnapshotCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -90735,7 +91085,7 @@ func (c *RegionDisksCreateSnapshotCall) Header() http.Header {
func (c *RegionDisksCreateSnapshotCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -90801,7 +91151,7 @@ func (c *RegionDisksCreateSnapshotCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Creates a snapshot of this regional disk.",
+ // "description": "Creates a snapshot of this regional disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.createSnapshot",
// "parameterOrder": [
@@ -90868,6 +91218,7 @@ type RegionDisksDeleteCall struct {
// regional disk removes all the replicas of its data permanently and is
// irreversible. However, deleting a disk does not delete any snapshots
// previously made from the disk. You must separately delete snapshots.
+// (== suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) Delete(project string, region string, disk string) *RegionDisksDeleteCall {
c := &RegionDisksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -90922,7 +91273,7 @@ func (c *RegionDisksDeleteCall) Header() http.Header {
func (c *RegionDisksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -90983,7 +91334,7 @@ func (c *RegionDisksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified regional persistent disk. Deleting a regional disk removes all the replicas of its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots.",
+ // "description": "Deletes the specified regional persistent disk. Deleting a regional disk removes all the replicas of its data permanently and is irreversible. However, deleting a disk does not delete any snapshots previously made from the disk. You must separately delete snapshots. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionDisks.delete",
// "parameterOrder": [
@@ -91043,7 +91394,8 @@ type RegionDisksGetCall struct {
header_ http.Header
}
-// Get: Returns a specified regional persistent disk.
+// Get: Returns a specified regional persistent disk. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) Get(project string, region string, disk string) *RegionDisksGetCall {
c := &RegionDisksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -91089,7 +91441,7 @@ func (c *RegionDisksGetCall) Header() http.Header {
func (c *RegionDisksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -91153,7 +91505,7 @@ func (c *RegionDisksGetCall) Do(opts ...googleapi.CallOption) (*Disk, error) {
}
return ret, nil
// {
- // "description": "Returns a specified regional persistent disk.",
+ // "description": "Returns a specified regional persistent disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionDisks.get",
// "parameterOrder": [
@@ -91210,7 +91562,8 @@ type RegionDisksInsertCall struct {
}
// Insert: Creates a persistent regional disk in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionDisksService) Insert(project string, region string, disk *Disk) *RegionDisksInsertCall {
c := &RegionDisksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -91272,7 +91625,7 @@ func (c *RegionDisksInsertCall) Header() http.Header {
func (c *RegionDisksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -91337,7 +91690,7 @@ func (c *RegionDisksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates a persistent regional disk in the specified project using the data included in the request.",
+ // "description": "Creates a persistent regional disk in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.insert",
// "parameterOrder": [
@@ -91398,7 +91751,7 @@ type RegionDisksListCall struct {
}
// List: Retrieves the list of persistent disks contained within the
-// specified region.
+// specified region. (== suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) List(project string, region string) *RegionDisksListCall {
c := &RegionDisksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -91506,7 +91859,7 @@ func (c *RegionDisksListCall) Header() http.Header {
func (c *RegionDisksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -91569,7 +91922,7 @@ func (c *RegionDisksListCall) Do(opts ...googleapi.CallOption) (*DiskList, error
}
return ret, nil
// {
- // "description": "Retrieves the list of persistent disks contained within the specified region.",
+ // "description": "Retrieves the list of persistent disks contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionDisks.list",
// "parameterOrder": [
@@ -91663,7 +92016,7 @@ type RegionDisksRemoveResourcePoliciesCall struct {
}
// RemoveResourcePolicies: Removes resource policies from a regional
-// disk.
+// disk. (== suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) RemoveResourcePolicies(project string, region string, disk string, regiondisksremoveresourcepoliciesrequest *RegionDisksRemoveResourcePoliciesRequest) *RegionDisksRemoveResourcePoliciesCall {
c := &RegionDisksRemoveResourcePoliciesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -91719,7 +92072,7 @@ func (c *RegionDisksRemoveResourcePoliciesCall) Header() http.Header {
func (c *RegionDisksRemoveResourcePoliciesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -91785,7 +92138,7 @@ func (c *RegionDisksRemoveResourcePoliciesCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Removes resource policies from a regional disk.",
+ // "description": "Removes resource policies from a regional disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.removeResourcePolicies",
// "parameterOrder": [
@@ -91849,7 +92202,8 @@ type RegionDisksResizeCall struct {
header_ http.Header
}
-// Resize: Resizes the specified regional persistent disk.
+// Resize: Resizes the specified regional persistent disk. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) Resize(project string, region string, disk string, regiondisksresizerequest *RegionDisksResizeRequest) *RegionDisksResizeCall {
c := &RegionDisksResizeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -91905,7 +92259,7 @@ func (c *RegionDisksResizeCall) Header() http.Header {
func (c *RegionDisksResizeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -91971,7 +92325,7 @@ func (c *RegionDisksResizeCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Resizes the specified regional persistent disk.",
+ // "description": "Resizes the specified regional persistent disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.resize",
// "parameterOrder": [
@@ -92035,7 +92389,8 @@ type RegionDisksSetLabelsCall struct {
header_ http.Header
}
-// SetLabels: Sets the labels on the target regional disk.
+// SetLabels: Sets the labels on the target regional disk. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) SetLabels(project string, region string, resource string, regionsetlabelsrequest *RegionSetLabelsRequest) *RegionDisksSetLabelsCall {
c := &RegionDisksSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -92091,7 +92446,7 @@ func (c *RegionDisksSetLabelsCall) Header() http.Header {
func (c *RegionDisksSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -92157,7 +92512,7 @@ func (c *RegionDisksSetLabelsCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Sets the labels on the target regional disk.",
+ // "description": "Sets the labels on the target regional disk. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.setLabels",
// "parameterOrder": [
@@ -92222,7 +92577,7 @@ type RegionDisksTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *RegionDisksService) TestIamPermissions(project string, region string, resource string, testpermissionsrequest *TestPermissionsRequest) *RegionDisksTestIamPermissionsCall {
c := &RegionDisksTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -92259,7 +92614,7 @@ func (c *RegionDisksTestIamPermissionsCall) Header() http.Header {
func (c *RegionDisksTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -92325,7 +92680,7 @@ func (c *RegionDisksTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*T
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionDisks.testIamPermissions",
// "parameterOrder": [
@@ -92384,7 +92739,8 @@ type RegionHealthChecksDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified HealthCheck resource.
+// Delete: Deletes the specified HealthCheck resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionHealthChecksService) Delete(project string, region string, healthCheck string) *RegionHealthChecksDeleteCall {
c := &RegionHealthChecksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -92439,7 +92795,7 @@ func (c *RegionHealthChecksDeleteCall) Header() http.Header {
func (c *RegionHealthChecksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -92500,7 +92856,7 @@ func (c *RegionHealthChecksDeleteCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Deletes the specified HealthCheck resource.",
+ // "description": "Deletes the specified HealthCheck resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionHealthChecks.delete",
// "parameterOrder": [
@@ -92562,7 +92918,8 @@ type RegionHealthChecksGetCall struct {
}
// Get: Returns the specified HealthCheck resource. Gets a list of
-// available health checks by making a list() request.
+// available health checks by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionHealthChecksService) Get(project string, region string, healthCheck string) *RegionHealthChecksGetCall {
c := &RegionHealthChecksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -92608,7 +92965,7 @@ func (c *RegionHealthChecksGetCall) Header() http.Header {
func (c *RegionHealthChecksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -92672,7 +93029,7 @@ func (c *RegionHealthChecksGetCall) Do(opts ...googleapi.CallOption) (*HealthChe
}
return ret, nil
// {
- // "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request.",
+ // "description": "Returns the specified HealthCheck resource. Gets a list of available health checks by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionHealthChecks.get",
// "parameterOrder": [
@@ -92729,7 +93086,8 @@ type RegionHealthChecksInsertCall struct {
}
// Insert: Creates a HealthCheck resource in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionHealthChecksService) Insert(project string, region string, healthcheck *HealthCheck) *RegionHealthChecksInsertCall {
c := &RegionHealthChecksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -92784,7 +93142,7 @@ func (c *RegionHealthChecksInsertCall) Header() http.Header {
func (c *RegionHealthChecksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -92849,7 +93207,7 @@ func (c *RegionHealthChecksInsertCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Creates a HealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Creates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionHealthChecks.insert",
// "parameterOrder": [
@@ -92905,7 +93263,7 @@ type RegionHealthChecksListCall struct {
}
// List: Retrieves the list of HealthCheck resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *RegionHealthChecksService) List(project string, region string) *RegionHealthChecksListCall {
c := &RegionHealthChecksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -93013,7 +93371,7 @@ func (c *RegionHealthChecksListCall) Header() http.Header {
func (c *RegionHealthChecksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -93076,7 +93434,7 @@ func (c *RegionHealthChecksListCall) Do(opts ...googleapi.CallOption) (*HealthCh
}
return ret, nil
// {
- // "description": "Retrieves the list of HealthCheck resources available to the specified project.",
+ // "description": "Retrieves the list of HealthCheck resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionHealthChecks.list",
// "parameterOrder": [
@@ -93172,6 +93530,7 @@ type RegionHealthChecksPatchCall struct {
// Patch: Updates a HealthCheck resource in the specified project using
// the data included in the request. This method supports PATCH
// semantics and uses the JSON merge patch format and processing rules.
+// (== suppress_warning http-rest-shadowed ==)
func (r *RegionHealthChecksService) Patch(project string, region string, healthCheck string, healthcheck *HealthCheck) *RegionHealthChecksPatchCall {
c := &RegionHealthChecksPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -93227,7 +93586,7 @@ func (c *RegionHealthChecksPatchCall) Header() http.Header {
func (c *RegionHealthChecksPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -93293,7 +93652,7 @@ func (c *RegionHealthChecksPatchCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates a HealthCheck resource in the specified project using the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.regionHealthChecks.patch",
// "parameterOrder": [
@@ -93358,7 +93717,8 @@ type RegionHealthChecksUpdateCall struct {
}
// Update: Updates a HealthCheck resource in the specified project using
-// the data included in the request.
+// the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionHealthChecksService) Update(project string, region string, healthCheck string, healthcheck *HealthCheck) *RegionHealthChecksUpdateCall {
c := &RegionHealthChecksUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -93414,7 +93774,7 @@ func (c *RegionHealthChecksUpdateCall) Header() http.Header {
func (c *RegionHealthChecksUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -93480,7 +93840,7 @@ func (c *RegionHealthChecksUpdateCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Updates a HealthCheck resource in the specified project using the data included in the request.",
+ // "description": "Updates a HealthCheck resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.regionHealthChecks.update",
// "parameterOrder": [
@@ -93560,7 +93920,7 @@ type RegionInstanceGroupManagersAbandonInstancesCall struct {
// deleted.
//
// You can specify a maximum of 1000 instances with this method per
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) AbandonInstances(project string, region string, instanceGroupManager string, regioninstancegroupmanagersabandoninstancesrequest *RegionInstanceGroupManagersAbandonInstancesRequest) *RegionInstanceGroupManagersAbandonInstancesCall {
c := &RegionInstanceGroupManagersAbandonInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -93616,7 +93976,7 @@ func (c *RegionInstanceGroupManagersAbandonInstancesCall) Header() http.Header {
func (c *RegionInstanceGroupManagersAbandonInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -93682,7 +94042,7 @@ func (c *RegionInstanceGroupManagersAbandonInstancesCall) Do(opts ...googleapi.C
}
return ret, nil
// {
- // "description": "Flags the specified instances to be immediately removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ // "description": "Flags the specified instances to be immediately removed from the managed instance group. Abandoning an instance does not delete the instance, but it does remove the instance from any target pools that are applied by the managed instance group. This method reduces the targetSize of the managed instance group by the number of instances that you abandon. This operation is marked as DONE when the action is scheduled even if the instances have not yet been removed from the group. You must separately verify the status of the abandoning action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.abandonInstances",
// "parameterOrder": [
@@ -93744,7 +94104,7 @@ type RegionInstanceGroupManagersDeleteCall struct {
}
// Delete: Deletes the specified managed instance group and all of the
-// instances in that group.
+// instances in that group. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) Delete(project string, region string, instanceGroupManager string) *RegionInstanceGroupManagersDeleteCall {
c := &RegionInstanceGroupManagersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -93799,7 +94159,7 @@ func (c *RegionInstanceGroupManagersDeleteCall) Header() http.Header {
func (c *RegionInstanceGroupManagersDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -93860,7 +94220,7 @@ func (c *RegionInstanceGroupManagersDeleteCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Deletes the specified managed instance group and all of the instances in that group.",
+ // "description": "Deletes the specified managed instance group and all of the instances in that group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionInstanceGroupManagers.delete",
// "parameterOrder": [
@@ -93935,7 +94295,7 @@ type RegionInstanceGroupManagersDeleteInstancesCall struct {
// deleted.
//
// You can specify a maximum of 1000 instances with this method per
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) DeleteInstances(project string, region string, instanceGroupManager string, regioninstancegroupmanagersdeleteinstancesrequest *RegionInstanceGroupManagersDeleteInstancesRequest) *RegionInstanceGroupManagersDeleteInstancesCall {
c := &RegionInstanceGroupManagersDeleteInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -93991,7 +94351,7 @@ func (c *RegionInstanceGroupManagersDeleteInstancesCall) Header() http.Header {
func (c *RegionInstanceGroupManagersDeleteInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -94057,7 +94417,7 @@ func (c *RegionInstanceGroupManagersDeleteInstancesCall) Do(opts ...googleapi.Ca
}
return ret, nil
// {
- // "description": "Flags the specified instances in the managed instance group to be immediately deleted. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. The deleteInstances operation is marked DONE if the deleteInstances request is successful. The underlying actions take additional time. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ // "description": "Flags the specified instances in the managed instance group to be immediately deleted. The instances are also removed from any target pools of which they were a member. This method reduces the targetSize of the managed instance group by the number of instances that you delete. The deleteInstances operation is marked DONE if the deleteInstances request is successful. The underlying actions take additional time. You must separately verify the status of the deleting action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.deleteInstances",
// "parameterOrder": [
@@ -94120,7 +94480,7 @@ type RegionInstanceGroupManagersGetCall struct {
}
// Get: Returns all of the details about the specified managed instance
-// group.
+// group. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) Get(project string, region string, instanceGroupManager string) *RegionInstanceGroupManagersGetCall {
c := &RegionInstanceGroupManagersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -94166,7 +94526,7 @@ func (c *RegionInstanceGroupManagersGetCall) Header() http.Header {
func (c *RegionInstanceGroupManagersGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -94230,7 +94590,7 @@ func (c *RegionInstanceGroupManagersGetCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Returns all of the details about the specified managed instance group.",
+ // "description": "Returns all of the details about the specified managed instance group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionInstanceGroupManagers.get",
// "parameterOrder": [
@@ -94293,6 +94653,7 @@ type RegionInstanceGroupManagersInsertCall struct {
// listmanagedinstances method.
//
// A regional managed instance group can contain up to 2000 instances.
+// (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) Insert(project string, region string, instancegroupmanager *InstanceGroupManager) *RegionInstanceGroupManagersInsertCall {
c := &RegionInstanceGroupManagersInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -94347,7 +94708,7 @@ func (c *RegionInstanceGroupManagersInsertCall) Header() http.Header {
func (c *RegionInstanceGroupManagersInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -94412,7 +94773,7 @@ func (c *RegionInstanceGroupManagersInsertCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA regional managed instance group can contain up to 2000 instances.",
+ // "description": "Creates a managed instance group using the information that you specify in the request. After the group is created, instances in the group are created using the specified instance template. This operation is marked as DONE when the group is created even if the instances in the group have not yet been created. You must separately verify the status of the individual instances with the listmanagedinstances method.\n\nA regional managed instance group can contain up to 2000 instances. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.insert",
// "parameterOrder": [
@@ -94467,7 +94828,8 @@ type RegionInstanceGroupManagersListCall struct {
}
// List: Retrieves the list of managed instance groups that are
-// contained within the specified region.
+// contained within the specified region. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) List(project string, region string) *RegionInstanceGroupManagersListCall {
c := &RegionInstanceGroupManagersListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -94575,7 +94937,7 @@ func (c *RegionInstanceGroupManagersListCall) Header() http.Header {
func (c *RegionInstanceGroupManagersListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -94638,7 +95000,7 @@ func (c *RegionInstanceGroupManagersListCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Retrieves the list of managed instance groups that are contained within the specified region.",
+ // "description": "Retrieves the list of managed instance groups that are contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionInstanceGroupManagers.list",
// "parameterOrder": [
@@ -94732,7 +95094,7 @@ type RegionInstanceGroupManagersListManagedInstancesCall struct {
// ListManagedInstances: Lists the instances in the managed instance
// group and instances that are scheduled to be created. The list
// includes any current actions that the group has scheduled for its
-// instances.
+// instances. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) ListManagedInstances(project string, region string, instanceGroupManager string) *RegionInstanceGroupManagersListManagedInstancesCall {
c := &RegionInstanceGroupManagersListManagedInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -94831,7 +95193,7 @@ func (c *RegionInstanceGroupManagersListManagedInstancesCall) Header() http.Head
func (c *RegionInstanceGroupManagersListManagedInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -94894,7 +95256,7 @@ func (c *RegionInstanceGroupManagersListManagedInstancesCall) Do(opts ...googlea
}
return ret, nil
// {
- // "description": "Lists the instances in the managed instance group and instances that are scheduled to be created. The list includes any current actions that the group has scheduled for its instances.",
+ // "description": "Lists the instances in the managed instance group and instances that are scheduled to be created. The list includes any current actions that the group has scheduled for its instances. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.listManagedInstances",
// "parameterOrder": [
@@ -94978,7 +95340,7 @@ type RegionInstanceGroupManagersPatchCall struct {
// process of being patched. You must separately verify the status of
// the individual instances with the listmanagedinstances method. This
// method supports PATCH semantics and uses the JSON merge patch format
-// and processing rules.
+// and processing rules. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) Patch(project string, region string, instanceGroupManager string, instancegroupmanager *InstanceGroupManager) *RegionInstanceGroupManagersPatchCall {
c := &RegionInstanceGroupManagersPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -95034,7 +95396,7 @@ func (c *RegionInstanceGroupManagersPatchCall) Header() http.Header {
func (c *RegionInstanceGroupManagersPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -95100,7 +95462,7 @@ func (c *RegionInstanceGroupManagersPatchCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listmanagedinstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Updates a managed instance group using the information that you specify in the request. This operation is marked as DONE when the group is patched even if the instances in the group are still in the process of being patched. You must separately verify the status of the individual instances with the listmanagedinstances method. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.regionInstanceGroupManagers.patch",
// "parameterOrder": [
@@ -95176,7 +95538,7 @@ type RegionInstanceGroupManagersRecreateInstancesCall struct {
// deleted.
//
// You can specify a maximum of 1000 instances with this method per
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) RecreateInstances(project string, region string, instanceGroupManager string, regioninstancegroupmanagersrecreaterequest *RegionInstanceGroupManagersRecreateRequest) *RegionInstanceGroupManagersRecreateInstancesCall {
c := &RegionInstanceGroupManagersRecreateInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -95232,7 +95594,7 @@ func (c *RegionInstanceGroupManagersRecreateInstancesCall) Header() http.Header
func (c *RegionInstanceGroupManagersRecreateInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -95298,7 +95660,7 @@ func (c *RegionInstanceGroupManagersRecreateInstancesCall) Do(opts ...googleapi.
}
return ret, nil
// {
- // "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request.",
+ // "description": "Flags the specified instances in the managed instance group to be immediately recreated. The instances are deleted and recreated using the current instance template for the managed instance group. This operation is marked as DONE when the flag is set even if the instances have not yet been recreated. You must separately verify the status of the recreating action with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.\n\nYou can specify a maximum of 1000 instances with this method per request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.recreateInstances",
// "parameterOrder": [
@@ -95372,6 +95734,7 @@ type RegionInstanceGroupManagersResizeCall struct {
// If the group is part of a backend service that has enabled connection
// draining, it can take up to 60 seconds after the connection draining
// duration has elapsed before the VM instance is removed or deleted.
+// (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) Resize(project string, region string, instanceGroupManager string, size int64) *RegionInstanceGroupManagersResizeCall {
c := &RegionInstanceGroupManagersResizeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -95427,7 +95790,7 @@ func (c *RegionInstanceGroupManagersResizeCall) Header() http.Header {
func (c *RegionInstanceGroupManagersResizeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -95488,7 +95851,7 @@ func (c *RegionInstanceGroupManagersResizeCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Changes the intended size of the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes one or more instances.\n\nThe resize operation is marked DONE if the resize request is successful. The underlying actions take additional time. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted.",
+ // "description": "Changes the intended size of the managed instance group. If you increase the size, the group creates new instances using the current instance template. If you decrease the size, the group deletes one or more instances.\n\nThe resize operation is marked DONE if the resize request is successful. The underlying actions take additional time. You must separately verify the status of the creating or deleting actions with the listmanagedinstances method.\n\nIf the group is part of a backend service that has enabled connection draining, it can take up to 60 seconds after the connection draining duration has elapsed before the VM instance is removed or deleted. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.resize",
// "parameterOrder": [
@@ -95558,7 +95921,8 @@ type RegionInstanceGroupManagersSetInstanceTemplateCall struct {
// SetInstanceTemplate: Sets the instance template to use when creating
// new instances or recreating instances in this group. Existing
-// instances are not affected.
+// instances are not affected. (== suppress_warning http-rest-shadowed
+// ==)
func (r *RegionInstanceGroupManagersService) SetInstanceTemplate(project string, region string, instanceGroupManager string, regioninstancegroupmanagerssettemplaterequest *RegionInstanceGroupManagersSetTemplateRequest) *RegionInstanceGroupManagersSetInstanceTemplateCall {
c := &RegionInstanceGroupManagersSetInstanceTemplateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -95614,7 +95978,7 @@ func (c *RegionInstanceGroupManagersSetInstanceTemplateCall) Header() http.Heade
func (c *RegionInstanceGroupManagersSetInstanceTemplateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -95680,7 +96044,7 @@ func (c *RegionInstanceGroupManagersSetInstanceTemplateCall) Do(opts ...googleap
}
return ret, nil
// {
- // "description": "Sets the instance template to use when creating new instances or recreating instances in this group. Existing instances are not affected.",
+ // "description": "Sets the instance template to use when creating new instances or recreating instances in this group. Existing instances are not affected. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.setInstanceTemplate",
// "parameterOrder": [
@@ -95744,7 +96108,7 @@ type RegionInstanceGroupManagersSetTargetPoolsCall struct {
// SetTargetPools: Modifies the target pools to which all new instances
// in this group are assigned. Existing instances in the group are not
-// affected.
+// affected. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupManagersService) SetTargetPools(project string, region string, instanceGroupManager string, regioninstancegroupmanagerssettargetpoolsrequest *RegionInstanceGroupManagersSetTargetPoolsRequest) *RegionInstanceGroupManagersSetTargetPoolsCall {
c := &RegionInstanceGroupManagersSetTargetPoolsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -95800,7 +96164,7 @@ func (c *RegionInstanceGroupManagersSetTargetPoolsCall) Header() http.Header {
func (c *RegionInstanceGroupManagersSetTargetPoolsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -95866,7 +96230,7 @@ func (c *RegionInstanceGroupManagersSetTargetPoolsCall) Do(opts ...googleapi.Cal
}
return ret, nil
// {
- // "description": "Modifies the target pools to which all new instances in this group are assigned. Existing instances in the group are not affected.",
+ // "description": "Modifies the target pools to which all new instances in this group are assigned. Existing instances in the group are not affected. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroupManagers.setTargetPools",
// "parameterOrder": [
@@ -95928,7 +96292,8 @@ type RegionInstanceGroupsGetCall struct {
header_ http.Header
}
-// Get: Returns the specified instance group resource.
+// Get: Returns the specified instance group resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupsService) Get(project string, region string, instanceGroup string) *RegionInstanceGroupsGetCall {
c := &RegionInstanceGroupsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -95974,7 +96339,7 @@ func (c *RegionInstanceGroupsGetCall) Header() http.Header {
func (c *RegionInstanceGroupsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -96038,7 +96403,7 @@ func (c *RegionInstanceGroupsGetCall) Do(opts ...googleapi.CallOption) (*Instanc
}
return ret, nil
// {
- // "description": "Returns the specified instance group resource.",
+ // "description": "Returns the specified instance group resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionInstanceGroups.get",
// "parameterOrder": [
@@ -96093,7 +96458,7 @@ type RegionInstanceGroupsListCall struct {
}
// List: Retrieves the list of instance group resources contained within
-// the specified region.
+// the specified region. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupsService) List(project string, region string) *RegionInstanceGroupsListCall {
c := &RegionInstanceGroupsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -96201,7 +96566,7 @@ func (c *RegionInstanceGroupsListCall) Header() http.Header {
func (c *RegionInstanceGroupsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -96264,7 +96629,7 @@ func (c *RegionInstanceGroupsListCall) Do(opts ...googleapi.CallOption) (*Region
}
return ret, nil
// {
- // "description": "Retrieves the list of instance group resources contained within the specified region.",
+ // "description": "Retrieves the list of instance group resources contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionInstanceGroups.list",
// "parameterOrder": [
@@ -96359,7 +96724,8 @@ type RegionInstanceGroupsListInstancesCall struct {
// ListInstances: Lists the instances in the specified instance group
// and displays information about the named ports. Depending on the
// specified options, this method can list all instances or only the
-// instances that are running.
+// instances that are running. (== suppress_warning http-rest-shadowed
+// ==)
func (r *RegionInstanceGroupsService) ListInstances(project string, region string, instanceGroup string, regioninstancegroupslistinstancesrequest *RegionInstanceGroupsListInstancesRequest) *RegionInstanceGroupsListInstancesCall {
c := &RegionInstanceGroupsListInstancesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -96459,7 +96825,7 @@ func (c *RegionInstanceGroupsListInstancesCall) Header() http.Header {
func (c *RegionInstanceGroupsListInstancesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -96526,7 +96892,7 @@ func (c *RegionInstanceGroupsListInstancesCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Lists the instances in the specified instance group and displays information about the named ports. Depending on the specified options, this method can list all instances or only the instances that are running.",
+ // "description": "Lists the instances in the specified instance group and displays information about the named ports. Depending on the specified options, this method can list all instances or only the instances that are running. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroups.listInstances",
// "parameterOrder": [
@@ -96629,7 +96995,7 @@ type RegionInstanceGroupsSetNamedPortsCall struct {
}
// SetNamedPorts: Sets the named ports for the specified regional
-// instance group.
+// instance group. (== suppress_warning http-rest-shadowed ==)
func (r *RegionInstanceGroupsService) SetNamedPorts(project string, region string, instanceGroup string, regioninstancegroupssetnamedportsrequest *RegionInstanceGroupsSetNamedPortsRequest) *RegionInstanceGroupsSetNamedPortsCall {
c := &RegionInstanceGroupsSetNamedPortsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -96685,7 +97051,7 @@ func (c *RegionInstanceGroupsSetNamedPortsCall) Header() http.Header {
func (c *RegionInstanceGroupsSetNamedPortsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -96751,7 +97117,7 @@ func (c *RegionInstanceGroupsSetNamedPortsCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Sets the named ports for the specified regional instance group.",
+ // "description": "Sets the named ports for the specified regional instance group. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionInstanceGroups.setNamedPorts",
// "parameterOrder": [
@@ -96813,6 +97179,7 @@ type RegionOperationsDeleteCall struct {
}
// Delete: Deletes the specified region-specific Operations resource.
+// (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/regionOperations/delete
func (r *RegionOperationsService) Delete(project string, region string, operation string) *RegionOperationsDeleteCall {
c := &RegionOperationsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -96849,7 +97216,7 @@ func (c *RegionOperationsDeleteCall) Header() http.Header {
func (c *RegionOperationsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -96885,7 +97252,7 @@ func (c *RegionOperationsDeleteCall) Do(opts ...googleapi.CallOption) error {
}
return nil
// {
- // "description": "Deletes the specified region-specific Operations resource.",
+ // "description": "Deletes the specified region-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionOperations.delete",
// "parameterOrder": [
@@ -96938,7 +97305,8 @@ type RegionOperationsGetCall struct {
header_ http.Header
}
-// Get: Retrieves the specified region-specific Operations resource.
+// Get: Retrieves the specified region-specific Operations resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/regionOperations/get
func (r *RegionOperationsService) Get(project string, region string, operation string) *RegionOperationsGetCall {
c := &RegionOperationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -96985,7 +97353,7 @@ func (c *RegionOperationsGetCall) Header() http.Header {
func (c *RegionOperationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -97049,7 +97417,7 @@ func (c *RegionOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Retrieves the specified region-specific Operations resource.",
+ // "description": "Retrieves the specified region-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionOperations.get",
// "parameterOrder": [
@@ -97106,7 +97474,7 @@ type RegionOperationsListCall struct {
}
// List: Retrieves a list of Operation resources contained within the
-// specified region.
+// specified region. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/regionOperations/list
func (r *RegionOperationsService) List(project string, region string) *RegionOperationsListCall {
c := &RegionOperationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -97215,7 +97583,7 @@ func (c *RegionOperationsListCall) Header() http.Header {
func (c *RegionOperationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -97278,7 +97646,7 @@ func (c *RegionOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationL
}
return ret, nil
// {
- // "description": "Retrieves a list of Operation resources contained within the specified region.",
+ // "description": "Retrieves a list of Operation resources contained within the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionOperations.list",
// "parameterOrder": [
@@ -97371,6 +97739,7 @@ type RegionSslCertificatesDeleteCall struct {
}
// Delete: Deletes the specified SslCertificate resource in the region.
+// (== suppress_warning http-rest-shadowed ==)
func (r *RegionSslCertificatesService) Delete(project string, region string, sslCertificate string) *RegionSslCertificatesDeleteCall {
c := &RegionSslCertificatesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -97425,7 +97794,7 @@ func (c *RegionSslCertificatesDeleteCall) Header() http.Header {
func (c *RegionSslCertificatesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -97486,7 +97855,7 @@ func (c *RegionSslCertificatesDeleteCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Deletes the specified SslCertificate resource in the region.",
+ // "description": "Deletes the specified SslCertificate resource in the region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionSslCertificates.delete",
// "parameterOrder": [
@@ -97549,7 +97918,7 @@ type RegionSslCertificatesGetCall struct {
// Get: Returns the specified SslCertificate resource in the specified
// region. Get a list of available SSL certificates by making a list()
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionSslCertificatesService) Get(project string, region string, sslCertificate string) *RegionSslCertificatesGetCall {
c := &RegionSslCertificatesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -97595,7 +97964,7 @@ func (c *RegionSslCertificatesGetCall) Header() http.Header {
func (c *RegionSslCertificatesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -97659,7 +98028,7 @@ func (c *RegionSslCertificatesGetCall) Do(opts ...googleapi.CallOption) (*SslCer
}
return ret, nil
// {
- // "description": "Returns the specified SslCertificate resource in the specified region. Get a list of available SSL certificates by making a list() request.",
+ // "description": "Returns the specified SslCertificate resource in the specified region. Get a list of available SSL certificates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionSslCertificates.get",
// "parameterOrder": [
@@ -97716,7 +98085,8 @@ type RegionSslCertificatesInsertCall struct {
}
// Insert: Creates a SslCertificate resource in the specified project
-// and region using the data included in the request
+// and region using the data included in the request (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionSslCertificatesService) Insert(project string, region string, sslcertificate *SslCertificate) *RegionSslCertificatesInsertCall {
c := &RegionSslCertificatesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -97771,7 +98141,7 @@ func (c *RegionSslCertificatesInsertCall) Header() http.Header {
func (c *RegionSslCertificatesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -97836,7 +98206,7 @@ func (c *RegionSslCertificatesInsertCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Creates a SslCertificate resource in the specified project and region using the data included in the request",
+ // "description": "Creates a SslCertificate resource in the specified project and region using the data included in the request (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionSslCertificates.insert",
// "parameterOrder": [
@@ -97892,7 +98262,8 @@ type RegionSslCertificatesListCall struct {
}
// List: Retrieves the list of SslCertificate resources available to the
-// specified project in the specified region.
+// specified project in the specified region. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionSslCertificatesService) List(project string, region string) *RegionSslCertificatesListCall {
c := &RegionSslCertificatesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -98000,7 +98371,7 @@ func (c *RegionSslCertificatesListCall) Header() http.Header {
func (c *RegionSslCertificatesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -98063,7 +98434,7 @@ func (c *RegionSslCertificatesListCall) Do(opts ...googleapi.CallOption) (*SslCe
}
return ret, nil
// {
- // "description": "Retrieves the list of SslCertificate resources available to the specified project in the specified region.",
+ // "description": "Retrieves the list of SslCertificate resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionSslCertificates.list",
// "parameterOrder": [
@@ -98155,7 +98526,8 @@ type RegionTargetHttpProxiesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetHttpProxy resource.
+// Delete: Deletes the specified TargetHttpProxy resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpProxiesService) Delete(project string, region string, targetHttpProxy string) *RegionTargetHttpProxiesDeleteCall {
c := &RegionTargetHttpProxiesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -98210,7 +98582,7 @@ func (c *RegionTargetHttpProxiesDeleteCall) Header() http.Header {
func (c *RegionTargetHttpProxiesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -98271,7 +98643,7 @@ func (c *RegionTargetHttpProxiesDeleteCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Deletes the specified TargetHttpProxy resource.",
+ // "description": "Deletes the specified TargetHttpProxy resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionTargetHttpProxies.delete",
// "parameterOrder": [
@@ -98334,7 +98706,7 @@ type RegionTargetHttpProxiesGetCall struct {
// Get: Returns the specified TargetHttpProxy resource in the specified
// region. Gets a list of available target HTTP proxies by making a
-// list() request.
+// list() request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpProxiesService) Get(project string, region string, targetHttpProxy string) *RegionTargetHttpProxiesGetCall {
c := &RegionTargetHttpProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -98380,7 +98752,7 @@ func (c *RegionTargetHttpProxiesGetCall) Header() http.Header {
func (c *RegionTargetHttpProxiesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -98444,7 +98816,7 @@ func (c *RegionTargetHttpProxiesGetCall) Do(opts ...googleapi.CallOption) (*Targ
}
return ret, nil
// {
- // "description": "Returns the specified TargetHttpProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request.",
+ // "description": "Returns the specified TargetHttpProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionTargetHttpProxies.get",
// "parameterOrder": [
@@ -98501,7 +98873,8 @@ type RegionTargetHttpProxiesInsertCall struct {
}
// Insert: Creates a TargetHttpProxy resource in the specified project
-// and region using the data included in the request.
+// and region using the data included in the request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpProxiesService) Insert(project string, region string, targethttpproxy *TargetHttpProxy) *RegionTargetHttpProxiesInsertCall {
c := &RegionTargetHttpProxiesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -98556,7 +98929,7 @@ func (c *RegionTargetHttpProxiesInsertCall) Header() http.Header {
func (c *RegionTargetHttpProxiesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -98621,7 +98994,7 @@ func (c *RegionTargetHttpProxiesInsertCall) Do(opts ...googleapi.CallOption) (*O
}
return ret, nil
// {
- // "description": "Creates a TargetHttpProxy resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a TargetHttpProxy resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionTargetHttpProxies.insert",
// "parameterOrder": [
@@ -98677,7 +99050,8 @@ type RegionTargetHttpProxiesListCall struct {
}
// List: Retrieves the list of TargetHttpProxy resources available to
-// the specified project in the specified region.
+// the specified project in the specified region. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionTargetHttpProxiesService) List(project string, region string) *RegionTargetHttpProxiesListCall {
c := &RegionTargetHttpProxiesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -98785,7 +99159,7 @@ func (c *RegionTargetHttpProxiesListCall) Header() http.Header {
func (c *RegionTargetHttpProxiesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -98848,7 +99222,7 @@ func (c *RegionTargetHttpProxiesListCall) Do(opts ...googleapi.CallOption) (*Tar
}
return ret, nil
// {
- // "description": "Retrieves the list of TargetHttpProxy resources available to the specified project in the specified region.",
+ // "description": "Retrieves the list of TargetHttpProxy resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionTargetHttpProxies.list",
// "parameterOrder": [
@@ -98941,7 +99315,8 @@ type RegionTargetHttpProxiesSetUrlMapCall struct {
header_ http.Header
}
-// SetUrlMap: Changes the URL map for TargetHttpProxy.
+// SetUrlMap: Changes the URL map for TargetHttpProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpProxiesService) SetUrlMap(project string, region string, targetHttpProxy string, urlmapreference *UrlMapReference) *RegionTargetHttpProxiesSetUrlMapCall {
c := &RegionTargetHttpProxiesSetUrlMapCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -98997,7 +99372,7 @@ func (c *RegionTargetHttpProxiesSetUrlMapCall) Header() http.Header {
func (c *RegionTargetHttpProxiesSetUrlMapCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -99063,7 +99438,7 @@ func (c *RegionTargetHttpProxiesSetUrlMapCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Changes the URL map for TargetHttpProxy.",
+ // "description": "Changes the URL map for TargetHttpProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionTargetHttpProxies.setUrlMap",
// "parameterOrder": [
@@ -99126,7 +99501,8 @@ type RegionTargetHttpsProxiesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetHttpsProxy resource.
+// Delete: Deletes the specified TargetHttpsProxy resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpsProxiesService) Delete(project string, region string, targetHttpsProxy string) *RegionTargetHttpsProxiesDeleteCall {
c := &RegionTargetHttpsProxiesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -99181,7 +99557,7 @@ func (c *RegionTargetHttpsProxiesDeleteCall) Header() http.Header {
func (c *RegionTargetHttpsProxiesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -99242,7 +99618,7 @@ func (c *RegionTargetHttpsProxiesDeleteCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Deletes the specified TargetHttpsProxy resource.",
+ // "description": "Deletes the specified TargetHttpsProxy resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionTargetHttpsProxies.delete",
// "parameterOrder": [
@@ -99305,7 +99681,7 @@ type RegionTargetHttpsProxiesGetCall struct {
// Get: Returns the specified TargetHttpsProxy resource in the specified
// region. Gets a list of available target HTTP proxies by making a
-// list() request.
+// list() request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpsProxiesService) Get(project string, region string, targetHttpsProxy string) *RegionTargetHttpsProxiesGetCall {
c := &RegionTargetHttpsProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -99351,7 +99727,7 @@ func (c *RegionTargetHttpsProxiesGetCall) Header() http.Header {
func (c *RegionTargetHttpsProxiesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -99415,7 +99791,7 @@ func (c *RegionTargetHttpsProxiesGetCall) Do(opts ...googleapi.CallOption) (*Tar
}
return ret, nil
// {
- // "description": "Returns the specified TargetHttpsProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request.",
+ // "description": "Returns the specified TargetHttpsProxy resource in the specified region. Gets a list of available target HTTP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionTargetHttpsProxies.get",
// "parameterOrder": [
@@ -99472,7 +99848,8 @@ type RegionTargetHttpsProxiesInsertCall struct {
}
// Insert: Creates a TargetHttpsProxy resource in the specified project
-// and region using the data included in the request.
+// and region using the data included in the request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpsProxiesService) Insert(project string, region string, targethttpsproxy *TargetHttpsProxy) *RegionTargetHttpsProxiesInsertCall {
c := &RegionTargetHttpsProxiesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -99527,7 +99904,7 @@ func (c *RegionTargetHttpsProxiesInsertCall) Header() http.Header {
func (c *RegionTargetHttpsProxiesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -99592,7 +99969,7 @@ func (c *RegionTargetHttpsProxiesInsertCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Creates a TargetHttpsProxy resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a TargetHttpsProxy resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionTargetHttpsProxies.insert",
// "parameterOrder": [
@@ -99648,7 +100025,8 @@ type RegionTargetHttpsProxiesListCall struct {
}
// List: Retrieves the list of TargetHttpsProxy resources available to
-// the specified project in the specified region.
+// the specified project in the specified region. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionTargetHttpsProxiesService) List(project string, region string) *RegionTargetHttpsProxiesListCall {
c := &RegionTargetHttpsProxiesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -99756,7 +100134,7 @@ func (c *RegionTargetHttpsProxiesListCall) Header() http.Header {
func (c *RegionTargetHttpsProxiesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -99819,7 +100197,7 @@ func (c *RegionTargetHttpsProxiesListCall) Do(opts ...googleapi.CallOption) (*Ta
}
return ret, nil
// {
- // "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project in the specified region.",
+ // "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionTargetHttpsProxies.list",
// "parameterOrder": [
@@ -99913,6 +100291,7 @@ type RegionTargetHttpsProxiesSetSslCertificatesCall struct {
}
// SetSslCertificates: Replaces SslCertificates for TargetHttpsProxy.
+// (== suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpsProxiesService) SetSslCertificates(project string, region string, targetHttpsProxy string, regiontargethttpsproxiessetsslcertificatesrequest *RegionTargetHttpsProxiesSetSslCertificatesRequest) *RegionTargetHttpsProxiesSetSslCertificatesCall {
c := &RegionTargetHttpsProxiesSetSslCertificatesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -99968,7 +100347,7 @@ func (c *RegionTargetHttpsProxiesSetSslCertificatesCall) Header() http.Header {
func (c *RegionTargetHttpsProxiesSetSslCertificatesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -100034,7 +100413,7 @@ func (c *RegionTargetHttpsProxiesSetSslCertificatesCall) Do(opts ...googleapi.Ca
}
return ret, nil
// {
- // "description": "Replaces SslCertificates for TargetHttpsProxy.",
+ // "description": "Replaces SslCertificates for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionTargetHttpsProxies.setSslCertificates",
// "parameterOrder": [
@@ -100098,7 +100477,8 @@ type RegionTargetHttpsProxiesSetUrlMapCall struct {
header_ http.Header
}
-// SetUrlMap: Changes the URL map for TargetHttpsProxy.
+// SetUrlMap: Changes the URL map for TargetHttpsProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RegionTargetHttpsProxiesService) SetUrlMap(project string, region string, targetHttpsProxy string, urlmapreference *UrlMapReference) *RegionTargetHttpsProxiesSetUrlMapCall {
c := &RegionTargetHttpsProxiesSetUrlMapCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -100154,7 +100534,7 @@ func (c *RegionTargetHttpsProxiesSetUrlMapCall) Header() http.Header {
func (c *RegionTargetHttpsProxiesSetUrlMapCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -100220,7 +100600,7 @@ func (c *RegionTargetHttpsProxiesSetUrlMapCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Changes the URL map for TargetHttpsProxy.",
+ // "description": "Changes the URL map for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionTargetHttpsProxies.setUrlMap",
// "parameterOrder": [
@@ -100283,7 +100663,8 @@ type RegionUrlMapsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified UrlMap resource.
+// Delete: Deletes the specified UrlMap resource. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionUrlMapsService) Delete(project string, region string, urlMap string) *RegionUrlMapsDeleteCall {
c := &RegionUrlMapsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -100326,7 +100707,7 @@ func (c *RegionUrlMapsDeleteCall) Header() http.Header {
func (c *RegionUrlMapsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -100387,7 +100768,7 @@ func (c *RegionUrlMapsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Deletes the specified UrlMap resource.",
+ // "description": "Deletes the specified UrlMap resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.regionUrlMaps.delete",
// "parameterOrder": [
@@ -100449,7 +100830,8 @@ type RegionUrlMapsGetCall struct {
}
// Get: Returns the specified UrlMap resource. Gets a list of available
-// URL maps by making a list() request.
+// URL maps by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionUrlMapsService) Get(project string, region string, urlMap string) *RegionUrlMapsGetCall {
c := &RegionUrlMapsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -100495,7 +100877,7 @@ func (c *RegionUrlMapsGetCall) Header() http.Header {
func (c *RegionUrlMapsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -100559,7 +100941,7 @@ func (c *RegionUrlMapsGetCall) Do(opts ...googleapi.CallOption) (*UrlMap, error)
}
return ret, nil
// {
- // "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request.",
+ // "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionUrlMaps.get",
// "parameterOrder": [
@@ -100616,7 +100998,8 @@ type RegionUrlMapsInsertCall struct {
}
// Insert: Creates a UrlMap resource in the specified project using the
-// data included in the request.
+// data included in the request. (== suppress_warning http-rest-shadowed
+// ==)
func (r *RegionUrlMapsService) Insert(project string, region string, urlmap *UrlMap) *RegionUrlMapsInsertCall {
c := &RegionUrlMapsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -100659,7 +101042,7 @@ func (c *RegionUrlMapsInsertCall) Header() http.Header {
func (c *RegionUrlMapsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -100724,7 +101107,7 @@ func (c *RegionUrlMapsInsertCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Creates a UrlMap resource in the specified project using the data included in the request.",
+ // "description": "Creates a UrlMap resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionUrlMaps.insert",
// "parameterOrder": [
@@ -100780,7 +101163,8 @@ type RegionUrlMapsListCall struct {
}
// List: Retrieves the list of UrlMap resources available to the
-// specified project in the specified region.
+// specified project in the specified region. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionUrlMapsService) List(project string, region string) *RegionUrlMapsListCall {
c := &RegionUrlMapsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -100888,7 +101272,7 @@ func (c *RegionUrlMapsListCall) Header() http.Header {
func (c *RegionUrlMapsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -100951,7 +101335,7 @@ func (c *RegionUrlMapsListCall) Do(opts ...googleapi.CallOption) (*UrlMapList, e
}
return ret, nil
// {
- // "description": "Retrieves the list of UrlMap resources available to the specified project in the specified region.",
+ // "description": "Retrieves the list of UrlMap resources available to the specified project in the specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regionUrlMaps.list",
// "parameterOrder": [
@@ -101046,7 +101430,8 @@ type RegionUrlMapsPatchCall struct {
// Patch: Patches the specified UrlMap resource with the data included
// in the request. This method supports PATCH semantics and uses JSON
-// merge patch format and processing rules.
+// merge patch format and processing rules. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RegionUrlMapsService) Patch(project string, region string, urlMap string, urlmap *UrlMap) *RegionUrlMapsPatchCall {
c := &RegionUrlMapsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -101090,7 +101475,7 @@ func (c *RegionUrlMapsPatchCall) Header() http.Header {
func (c *RegionUrlMapsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -101156,7 +101541,7 @@ func (c *RegionUrlMapsPatchCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules.",
+ // "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.regionUrlMaps.patch",
// "parameterOrder": [
@@ -101221,7 +101606,7 @@ type RegionUrlMapsUpdateCall struct {
}
// Update: Updates the specified UrlMap resource with the data included
-// in the request.
+// in the request. (== suppress_warning http-rest-shadowed ==)
func (r *RegionUrlMapsService) Update(project string, region string, urlMap string, urlmap *UrlMap) *RegionUrlMapsUpdateCall {
c := &RegionUrlMapsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -101265,7 +101650,7 @@ func (c *RegionUrlMapsUpdateCall) Header() http.Header {
func (c *RegionUrlMapsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -101331,7 +101716,7 @@ func (c *RegionUrlMapsUpdateCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Updates the specified UrlMap resource with the data included in the request.",
+ // "description": "Updates the specified UrlMap resource with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.regionUrlMaps.update",
// "parameterOrder": [
@@ -101397,7 +101782,7 @@ type RegionUrlMapsValidateCall struct {
// Validate: Runs static validation for the UrlMap. In particular, the
// tests of the provided UrlMap will be run. Calling this method does
-// NOT create the UrlMap.
+// NOT create the UrlMap. (== suppress_warning http-rest-shadowed ==)
func (r *RegionUrlMapsService) Validate(project string, region string, urlMap string, regionurlmapsvalidaterequest *RegionUrlMapsValidateRequest) *RegionUrlMapsValidateCall {
c := &RegionUrlMapsValidateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -101434,7 +101819,7 @@ func (c *RegionUrlMapsValidateCall) Header() http.Header {
func (c *RegionUrlMapsValidateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -101500,7 +101885,7 @@ func (c *RegionUrlMapsValidateCall) Do(opts ...googleapi.CallOption) (*UrlMapsVa
}
return ret, nil
// {
- // "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap.",
+ // "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.regionUrlMaps.validate",
// "parameterOrder": [
@@ -101559,7 +101944,8 @@ type RegionsGetCall struct {
}
// Get: Returns the specified Region resource. Gets a list of available
-// regions by making a list() request.
+// regions by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/regions/get
func (r *RegionsService) Get(project string, region string) *RegionsGetCall {
c := &RegionsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -101605,7 +101991,7 @@ func (c *RegionsGetCall) Header() http.Header {
func (c *RegionsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -101668,7 +102054,7 @@ func (c *RegionsGetCall) Do(opts ...googleapi.CallOption) (*Region, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Region resource. Gets a list of available regions by making a list() request.",
+ // "description": "Returns the specified Region resource. Gets a list of available regions by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regions.get",
// "parameterOrder": [
@@ -101716,7 +102102,7 @@ type RegionsListCall struct {
}
// List: Retrieves the list of region resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/regions/list
func (r *RegionsService) List(project string) *RegionsListCall {
c := &RegionsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -101824,7 +102210,7 @@ func (c *RegionsListCall) Header() http.Header {
func (c *RegionsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -101886,7 +102272,7 @@ func (c *RegionsListCall) Do(opts ...googleapi.CallOption) (*RegionList, error)
}
return ret, nil
// {
- // "description": "Retrieves the list of region resources available to the specified project.",
+ // "description": "Retrieves the list of region resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.regions.list",
// "parameterOrder": [
@@ -101969,7 +102355,8 @@ type ReservationsAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of reservations.
+// AggregatedList: Retrieves an aggregated list of reservations. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ReservationsService) AggregatedList(project string) *ReservationsAggregatedListCall {
c := &ReservationsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -102076,7 +102463,7 @@ func (c *ReservationsAggregatedListCall) Header() http.Header {
func (c *ReservationsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -102138,7 +102525,7 @@ func (c *ReservationsAggregatedListCall) Do(opts ...googleapi.CallOption) (*Rese
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of reservations.",
+ // "description": "Retrieves an aggregated list of reservations. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.reservations.aggregatedList",
// "parameterOrder": [
@@ -102222,7 +102609,8 @@ type ReservationsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified reservation.
+// Delete: Deletes the specified reservation. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ReservationsService) Delete(project string, zone string, reservation string) *ReservationsDeleteCall {
c := &ReservationsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -102277,7 +102665,7 @@ func (c *ReservationsDeleteCall) Header() http.Header {
func (c *ReservationsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -102338,7 +102726,7 @@ func (c *ReservationsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Deletes the specified reservation.",
+ // "description": "Deletes the specified reservation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.reservations.delete",
// "parameterOrder": [
@@ -102399,7 +102787,8 @@ type ReservationsGetCall struct {
header_ http.Header
}
-// Get: Retrieves information about the specified reservation.
+// Get: Retrieves information about the specified reservation. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ReservationsService) Get(project string, zone string, reservation string) *ReservationsGetCall {
c := &ReservationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -102445,7 +102834,7 @@ func (c *ReservationsGetCall) Header() http.Header {
func (c *ReservationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -102509,7 +102898,7 @@ func (c *ReservationsGetCall) Do(opts ...googleapi.CallOption) (*Reservation, er
}
return ret, nil
// {
- // "description": "Retrieves information about the specified reservation.",
+ // "description": "Retrieves information about the specified reservation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.reservations.get",
// "parameterOrder": [
@@ -102567,7 +102956,8 @@ type ReservationsGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ReservationsService) GetIamPolicy(project string, zone string, resource string) *ReservationsGetIamPolicyCall {
c := &ReservationsGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -102613,7 +103003,7 @@ func (c *ReservationsGetIamPolicyCall) Header() http.Header {
func (c *ReservationsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -102677,7 +103067,7 @@ func (c *ReservationsGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.reservations.getIamPolicy",
// "parameterOrder": [
@@ -102734,7 +103124,8 @@ type ReservationsInsertCall struct {
}
// Insert: Creates a new reservation. For more information, read
-// Reserving zonal resources.
+// Reserving zonal resources. (== suppress_warning http-rest-shadowed
+// ==)
func (r *ReservationsService) Insert(project string, zone string, reservation *Reservation) *ReservationsInsertCall {
c := &ReservationsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -102789,7 +103180,7 @@ func (c *ReservationsInsertCall) Header() http.Header {
func (c *ReservationsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -102854,7 +103245,7 @@ func (c *ReservationsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Creates a new reservation. For more information, read Reserving zonal resources.",
+ // "description": "Creates a new reservation. For more information, read Reserving zonal resources. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.reservations.insert",
// "parameterOrder": [
@@ -102910,7 +103301,8 @@ type ReservationsListCall struct {
}
// List: A list of all the reservations that have been configured for
-// the specified project in specified zone.
+// the specified project in specified zone. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ReservationsService) List(project string, zone string) *ReservationsListCall {
c := &ReservationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -103018,7 +103410,7 @@ func (c *ReservationsListCall) Header() http.Header {
func (c *ReservationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -103081,7 +103473,7 @@ func (c *ReservationsListCall) Do(opts ...googleapi.CallOption) (*ReservationLis
}
return ret, nil
// {
- // "description": "A list of all the reservations that have been configured for the specified project in specified zone.",
+ // "description": "A list of all the reservations that have been configured for the specified project in specified zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.reservations.list",
// "parameterOrder": [
@@ -103176,7 +103568,7 @@ type ReservationsResizeCall struct {
// Resize: Resizes the reservation (applicable to standalone
// reservations only). For more information, read Modifying
-// reservations.
+// reservations. (== suppress_warning http-rest-shadowed ==)
func (r *ReservationsService) Resize(project string, zone string, reservation string, reservationsresizerequest *ReservationsResizeRequest) *ReservationsResizeCall {
c := &ReservationsResizeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -103232,7 +103624,7 @@ func (c *ReservationsResizeCall) Header() http.Header {
func (c *ReservationsResizeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -103298,7 +103690,7 @@ func (c *ReservationsResizeCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Resizes the reservation (applicable to standalone reservations only). For more information, read Modifying reservations.",
+ // "description": "Resizes the reservation (applicable to standalone reservations only). For more information, read Modifying reservations. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.reservations.resize",
// "parameterOrder": [
@@ -103363,7 +103755,8 @@ type ReservationsSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ReservationsService) SetIamPolicy(project string, zone string, resource string, zonesetpolicyrequest *ZoneSetPolicyRequest) *ReservationsSetIamPolicyCall {
c := &ReservationsSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -103400,7 +103793,7 @@ func (c *ReservationsSetIamPolicyCall) Header() http.Header {
func (c *ReservationsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -103466,7 +103859,7 @@ func (c *ReservationsSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.reservations.setIamPolicy",
// "parameterOrder": [
@@ -103526,7 +103919,7 @@ type ReservationsTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *ReservationsService) TestIamPermissions(project string, zone string, resource string, testpermissionsrequest *TestPermissionsRequest) *ReservationsTestIamPermissionsCall {
c := &ReservationsTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -103563,7 +103956,7 @@ func (c *ReservationsTestIamPermissionsCall) Header() http.Header {
func (c *ReservationsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -103629,7 +104022,7 @@ func (c *ReservationsTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.reservations.testIamPermissions",
// "parameterOrder": [
@@ -103688,6 +104081,7 @@ type ResourcePoliciesAggregatedListCall struct {
}
// AggregatedList: Retrieves an aggregated list of resource policies.
+// (== suppress_warning http-rest-shadowed ==)
func (r *ResourcePoliciesService) AggregatedList(project string) *ResourcePoliciesAggregatedListCall {
c := &ResourcePoliciesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -103794,7 +104188,7 @@ func (c *ResourcePoliciesAggregatedListCall) Header() http.Header {
func (c *ResourcePoliciesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -103856,7 +104250,7 @@ func (c *ResourcePoliciesAggregatedListCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of resource policies.",
+ // "description": "Retrieves an aggregated list of resource policies. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.resourcePolicies.aggregatedList",
// "parameterOrder": [
@@ -103940,7 +104334,8 @@ type ResourcePoliciesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified resource policy.
+// Delete: Deletes the specified resource policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ResourcePoliciesService) Delete(project string, region string, resourcePolicy string) *ResourcePoliciesDeleteCall {
c := &ResourcePoliciesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -103995,7 +104390,7 @@ func (c *ResourcePoliciesDeleteCall) Header() http.Header {
func (c *ResourcePoliciesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -104056,7 +104451,7 @@ func (c *ResourcePoliciesDeleteCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Deletes the specified resource policy.",
+ // "description": "Deletes the specified resource policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.resourcePolicies.delete",
// "parameterOrder": [
@@ -104117,7 +104512,8 @@ type ResourcePoliciesGetCall struct {
header_ http.Header
}
-// Get: Retrieves all information of the specified resource policy.
+// Get: Retrieves all information of the specified resource policy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *ResourcePoliciesService) Get(project string, region string, resourcePolicy string) *ResourcePoliciesGetCall {
c := &ResourcePoliciesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -104163,7 +104559,7 @@ func (c *ResourcePoliciesGetCall) Header() http.Header {
func (c *ResourcePoliciesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -104227,7 +104623,7 @@ func (c *ResourcePoliciesGetCall) Do(opts ...googleapi.CallOption) (*ResourcePol
}
return ret, nil
// {
- // "description": "Retrieves all information of the specified resource policy.",
+ // "description": "Retrieves all information of the specified resource policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.resourcePolicies.get",
// "parameterOrder": [
@@ -104285,7 +104681,8 @@ type ResourcePoliciesGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ResourcePoliciesService) GetIamPolicy(project string, region string, resource string) *ResourcePoliciesGetIamPolicyCall {
c := &ResourcePoliciesGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -104331,7 +104728,7 @@ func (c *ResourcePoliciesGetIamPolicyCall) Header() http.Header {
func (c *ResourcePoliciesGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -104395,7 +104792,7 @@ func (c *ResourcePoliciesGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Po
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.resourcePolicies.getIamPolicy",
// "parameterOrder": [
@@ -104451,7 +104848,8 @@ type ResourcePoliciesInsertCall struct {
header_ http.Header
}
-// Insert: Creates a new resource policy.
+// Insert: Creates a new resource policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ResourcePoliciesService) Insert(project string, region string, resourcepolicy *ResourcePolicy) *ResourcePoliciesInsertCall {
c := &ResourcePoliciesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -104506,7 +104904,7 @@ func (c *ResourcePoliciesInsertCall) Header() http.Header {
func (c *ResourcePoliciesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -104571,7 +104969,7 @@ func (c *ResourcePoliciesInsertCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Creates a new resource policy.",
+ // "description": "Creates a new resource policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.resourcePolicies.insert",
// "parameterOrder": [
@@ -104627,7 +105025,8 @@ type ResourcePoliciesListCall struct {
}
// List: A list all the resource policies that have been configured for
-// the specified project in specified region.
+// the specified project in specified region. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ResourcePoliciesService) List(project string, region string) *ResourcePoliciesListCall {
c := &ResourcePoliciesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -104735,7 +105134,7 @@ func (c *ResourcePoliciesListCall) Header() http.Header {
func (c *ResourcePoliciesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -104798,7 +105197,7 @@ func (c *ResourcePoliciesListCall) Do(opts ...googleapi.CallOption) (*ResourcePo
}
return ret, nil
// {
- // "description": "A list all the resource policies that have been configured for the specified project in specified region.",
+ // "description": "A list all the resource policies that have been configured for the specified project in specified region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.resourcePolicies.list",
// "parameterOrder": [
@@ -104892,7 +105291,8 @@ type ResourcePoliciesSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *ResourcePoliciesService) SetIamPolicy(project string, region string, resource string, regionsetpolicyrequest *RegionSetPolicyRequest) *ResourcePoliciesSetIamPolicyCall {
c := &ResourcePoliciesSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -104929,7 +105329,7 @@ func (c *ResourcePoliciesSetIamPolicyCall) Header() http.Header {
func (c *ResourcePoliciesSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -104995,7 +105395,7 @@ func (c *ResourcePoliciesSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Po
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.resourcePolicies.setIamPolicy",
// "parameterOrder": [
@@ -105055,7 +105455,7 @@ type ResourcePoliciesTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *ResourcePoliciesService) TestIamPermissions(project string, region string, resource string, testpermissionsrequest *TestPermissionsRequest) *ResourcePoliciesTestIamPermissionsCall {
c := &ResourcePoliciesTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -105092,7 +105492,7 @@ func (c *ResourcePoliciesTestIamPermissionsCall) Header() http.Header {
func (c *ResourcePoliciesTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -105158,7 +105558,7 @@ func (c *ResourcePoliciesTestIamPermissionsCall) Do(opts ...googleapi.CallOption
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.resourcePolicies.testIamPermissions",
// "parameterOrder": [
@@ -105216,7 +105616,8 @@ type RoutersAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of routers.
+// AggregatedList: Retrieves an aggregated list of routers. (==
+// suppress_warning http-rest-shadowed ==)
func (r *RoutersService) AggregatedList(project string) *RoutersAggregatedListCall {
c := &RoutersAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -105323,7 +105724,7 @@ func (c *RoutersAggregatedListCall) Header() http.Header {
func (c *RoutersAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -105385,7 +105786,7 @@ func (c *RoutersAggregatedListCall) Do(opts ...googleapi.CallOption) (*RouterAgg
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of routers.",
+ // "description": "Retrieves an aggregated list of routers. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routers.aggregatedList",
// "parameterOrder": [
@@ -105469,7 +105870,8 @@ type RoutersDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified Router resource.
+// Delete: Deletes the specified Router resource. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RoutersService) Delete(project string, region string, router string) *RoutersDeleteCall {
c := &RoutersDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -105524,7 +105926,7 @@ func (c *RoutersDeleteCall) Header() http.Header {
func (c *RoutersDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -105585,7 +105987,7 @@ func (c *RoutersDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Deletes the specified Router resource.",
+ // "description": "Deletes the specified Router resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.routers.delete",
// "parameterOrder": [
@@ -105647,7 +106049,8 @@ type RoutersGetCall struct {
}
// Get: Returns the specified Router resource. Gets a list of available
-// routers by making a list() request.
+// routers by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RoutersService) Get(project string, region string, router string) *RoutersGetCall {
c := &RoutersGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -105693,7 +106096,7 @@ func (c *RoutersGetCall) Header() http.Header {
func (c *RoutersGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -105757,7 +106160,7 @@ func (c *RoutersGetCall) Do(opts ...googleapi.CallOption) (*Router, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Router resource. Gets a list of available routers by making a list() request.",
+ // "description": "Returns the specified Router resource. Gets a list of available routers by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routers.get",
// "parameterOrder": [
@@ -105815,7 +106218,7 @@ type RoutersGetNatMappingInfoCall struct {
}
// GetNatMappingInfo: Retrieves runtime Nat mapping information of VM
-// endpoints.
+// endpoints. (== suppress_warning http-rest-shadowed ==)
func (r *RoutersService) GetNatMappingInfo(project string, region string, router string) *RoutersGetNatMappingInfoCall {
c := &RoutersGetNatMappingInfoCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -105924,7 +106327,7 @@ func (c *RoutersGetNatMappingInfoCall) Header() http.Header {
func (c *RoutersGetNatMappingInfoCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -105988,7 +106391,7 @@ func (c *RoutersGetNatMappingInfoCall) Do(opts ...googleapi.CallOption) (*VmEndp
}
return ret, nil
// {
- // "description": "Retrieves runtime Nat mapping information of VM endpoints.",
+ // "description": "Retrieves runtime Nat mapping information of VM endpoints. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routers.getNatMappingInfo",
// "parameterOrder": [
@@ -106090,7 +106493,7 @@ type RoutersGetRouterStatusCall struct {
}
// GetRouterStatus: Retrieves runtime information of the specified
-// router.
+// router. (== suppress_warning http-rest-shadowed ==)
func (r *RoutersService) GetRouterStatus(project string, region string, router string) *RoutersGetRouterStatusCall {
c := &RoutersGetRouterStatusCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -106136,7 +106539,7 @@ func (c *RoutersGetRouterStatusCall) Header() http.Header {
func (c *RoutersGetRouterStatusCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -106200,7 +106603,7 @@ func (c *RoutersGetRouterStatusCall) Do(opts ...googleapi.CallOption) (*RouterSt
}
return ret, nil
// {
- // "description": "Retrieves runtime information of the specified router.",
+ // "description": "Retrieves runtime information of the specified router. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routers.getRouterStatus",
// "parameterOrder": [
@@ -106257,7 +106660,8 @@ type RoutersInsertCall struct {
}
// Insert: Creates a Router resource in the specified project and region
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RoutersService) Insert(project string, region string, router *Router) *RoutersInsertCall {
c := &RoutersInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -106312,7 +106716,7 @@ func (c *RoutersInsertCall) Header() http.Header {
func (c *RoutersInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -106377,7 +106781,7 @@ func (c *RoutersInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Creates a Router resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a Router resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.routers.insert",
// "parameterOrder": [
@@ -106433,7 +106837,7 @@ type RoutersListCall struct {
}
// List: Retrieves a list of Router resources available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
func (r *RoutersService) List(project string, region string) *RoutersListCall {
c := &RoutersListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -106541,7 +106945,7 @@ func (c *RoutersListCall) Header() http.Header {
func (c *RoutersListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -106604,7 +107008,7 @@ func (c *RoutersListCall) Do(opts ...googleapi.CallOption) (*RouterList, error)
}
return ret, nil
// {
- // "description": "Retrieves a list of Router resources available to the specified project.",
+ // "description": "Retrieves a list of Router resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routers.list",
// "parameterOrder": [
@@ -106699,7 +107103,8 @@ type RoutersPatchCall struct {
// Patch: Patches the specified Router resource with the data included
// in the request. This method supports PATCH semantics and uses JSON
-// merge patch format and processing rules.
+// merge patch format and processing rules. (== suppress_warning
+// http-rest-shadowed ==)
func (r *RoutersService) Patch(project string, region string, router string, router2 *Router) *RoutersPatchCall {
c := &RoutersPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -106755,7 +107160,7 @@ func (c *RoutersPatchCall) Header() http.Header {
func (c *RoutersPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -106821,7 +107226,7 @@ func (c *RoutersPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Patches the specified Router resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules.",
+ // "description": "Patches the specified Router resource with the data included in the request. This method supports PATCH semantics and uses JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.routers.patch",
// "parameterOrder": [
@@ -106887,7 +107292,7 @@ type RoutersPreviewCall struct {
// Preview: Preview fields auto-generated during router create and
// update operations. Calling this method does NOT create or update the
-// router.
+// router. (== suppress_warning http-rest-shadowed ==)
func (r *RoutersService) Preview(project string, region string, router string, router2 *Router) *RoutersPreviewCall {
c := &RoutersPreviewCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -106924,7 +107329,7 @@ func (c *RoutersPreviewCall) Header() http.Header {
func (c *RoutersPreviewCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -106990,7 +107395,7 @@ func (c *RoutersPreviewCall) Do(opts ...googleapi.CallOption) (*RoutersPreviewRe
}
return ret, nil
// {
- // "description": "Preview fields auto-generated during router create and update operations. Calling this method does NOT create or update the router.",
+ // "description": "Preview fields auto-generated during router create and update operations. Calling this method does NOT create or update the router. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.routers.preview",
// "parameterOrder": [
@@ -107054,7 +107459,7 @@ type RoutersUpdateCall struct {
// in the request. This method conforms to PUT semantics, which requests
// that the state of the target resource be created or replaced with the
// state defined by the representation enclosed in the request message
-// payload.
+// payload. (== suppress_warning http-rest-shadowed ==)
func (r *RoutersService) Update(project string, region string, router string, router2 *Router) *RoutersUpdateCall {
c := &RoutersUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -107110,7 +107515,7 @@ func (c *RoutersUpdateCall) Header() http.Header {
func (c *RoutersUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -107176,7 +107581,7 @@ func (c *RoutersUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Updates the specified Router resource with the data included in the request. This method conforms to PUT semantics, which requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload.",
+ // "description": "Updates the specified Router resource with the data included in the request. This method conforms to PUT semantics, which requests that the state of the target resource be created or replaced with the state defined by the representation enclosed in the request message payload. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.routers.update",
// "parameterOrder": [
@@ -107238,7 +107643,8 @@ type RoutesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified Route resource.
+// Delete: Deletes the specified Route resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/delete
func (r *RoutesService) Delete(project string, route string) *RoutesDeleteCall {
c := &RoutesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -107293,7 +107699,7 @@ func (c *RoutesDeleteCall) Header() http.Header {
func (c *RoutesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -107353,7 +107759,7 @@ func (c *RoutesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Deletes the specified Route resource.",
+ // "description": "Deletes the specified Route resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.routes.delete",
// "parameterOrder": [
@@ -107406,7 +107812,8 @@ type RoutesGetCall struct {
}
// Get: Returns the specified Route resource. Gets a list of available
-// routes by making a list() request.
+// routes by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/get
func (r *RoutesService) Get(project string, route string) *RoutesGetCall {
c := &RoutesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -107452,7 +107859,7 @@ func (c *RoutesGetCall) Header() http.Header {
func (c *RoutesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -107515,7 +107922,7 @@ func (c *RoutesGetCall) Do(opts ...googleapi.CallOption) (*Route, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Route resource. Gets a list of available routes by making a list() request.",
+ // "description": "Returns the specified Route resource. Gets a list of available routes by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routes.get",
// "parameterOrder": [
@@ -107563,7 +107970,8 @@ type RoutesInsertCall struct {
}
// Insert: Creates a Route resource in the specified project using the
-// data included in the request.
+// data included in the request. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/insert
func (r *RoutesService) Insert(project string, route *Route) *RoutesInsertCall {
c := &RoutesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -107618,7 +108026,7 @@ func (c *RoutesInsertCall) Header() http.Header {
func (c *RoutesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -107682,7 +108090,7 @@ func (c *RoutesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Creates a Route resource in the specified project using the data included in the request.",
+ // "description": "Creates a Route resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.routes.insert",
// "parameterOrder": [
@@ -107729,7 +108137,7 @@ type RoutesListCall struct {
}
// List: Retrieves the list of Route resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/routes/list
func (r *RoutesService) List(project string) *RoutesListCall {
c := &RoutesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -107837,7 +108245,7 @@ func (c *RoutesListCall) Header() http.Header {
func (c *RoutesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -107899,7 +108307,7 @@ func (c *RoutesListCall) Do(opts ...googleapi.CallOption) (*RouteList, error) {
}
return ret, nil
// {
- // "description": "Retrieves the list of Route resources available to the specified project.",
+ // "description": "Retrieves the list of Route resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.routes.list",
// "parameterOrder": [
@@ -107983,7 +108391,8 @@ type SecurityPoliciesAddRuleCall struct {
header_ http.Header
}
-// AddRule: Inserts a rule into a security policy.
+// AddRule: Inserts a rule into a security policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SecurityPoliciesService) AddRule(project string, securityPolicy string, securitypolicyrule *SecurityPolicyRule) *SecurityPoliciesAddRuleCall {
c := &SecurityPoliciesAddRuleCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -108019,7 +108428,7 @@ func (c *SecurityPoliciesAddRuleCall) Header() http.Header {
func (c *SecurityPoliciesAddRuleCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -108084,7 +108493,7 @@ func (c *SecurityPoliciesAddRuleCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Inserts a rule into a security policy.",
+ // "description": "Inserts a rule into a security policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.securityPolicies.addRule",
// "parameterOrder": [
@@ -108133,7 +108542,8 @@ type SecurityPoliciesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified policy.
+// Delete: Deletes the specified policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SecurityPoliciesService) Delete(project string, securityPolicy string) *SecurityPoliciesDeleteCall {
c := &SecurityPoliciesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -108187,7 +108597,7 @@ func (c *SecurityPoliciesDeleteCall) Header() http.Header {
func (c *SecurityPoliciesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -108247,7 +108657,7 @@ func (c *SecurityPoliciesDeleteCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Deletes the specified policy.",
+ // "description": "Deletes the specified policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.securityPolicies.delete",
// "parameterOrder": [
@@ -108300,7 +108710,7 @@ type SecurityPoliciesGetCall struct {
}
// Get: List all of the ordered rules present in a single specified
-// policy.
+// policy. (== suppress_warning http-rest-shadowed ==)
func (r *SecurityPoliciesService) Get(project string, securityPolicy string) *SecurityPoliciesGetCall {
c := &SecurityPoliciesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -108345,7 +108755,7 @@ func (c *SecurityPoliciesGetCall) Header() http.Header {
func (c *SecurityPoliciesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -108408,7 +108818,7 @@ func (c *SecurityPoliciesGetCall) Do(opts ...googleapi.CallOption) (*SecurityPol
}
return ret, nil
// {
- // "description": "List all of the ordered rules present in a single specified policy.",
+ // "description": "List all of the ordered rules present in a single specified policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.securityPolicies.get",
// "parameterOrder": [
@@ -108456,7 +108866,8 @@ type SecurityPoliciesGetRuleCall struct {
header_ http.Header
}
-// GetRule: Gets a rule at the specified priority.
+// GetRule: Gets a rule at the specified priority. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SecurityPoliciesService) GetRule(project string, securityPolicy string) *SecurityPoliciesGetRuleCall {
c := &SecurityPoliciesGetRuleCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -108508,7 +108919,7 @@ func (c *SecurityPoliciesGetRuleCall) Header() http.Header {
func (c *SecurityPoliciesGetRuleCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -108571,7 +108982,7 @@ func (c *SecurityPoliciesGetRuleCall) Do(opts ...googleapi.CallOption) (*Securit
}
return ret, nil
// {
- // "description": "Gets a rule at the specified priority.",
+ // "description": "Gets a rule at the specified priority. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.securityPolicies.getRule",
// "parameterOrder": [
@@ -108625,7 +109036,7 @@ type SecurityPoliciesInsertCall struct {
}
// Insert: Creates a new policy in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *SecurityPoliciesService) Insert(project string, securitypolicy *SecurityPolicy) *SecurityPoliciesInsertCall {
c := &SecurityPoliciesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -108679,7 +109090,7 @@ func (c *SecurityPoliciesInsertCall) Header() http.Header {
func (c *SecurityPoliciesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -108743,7 +109154,7 @@ func (c *SecurityPoliciesInsertCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Creates a new policy in the specified project using the data included in the request.",
+ // "description": "Creates a new policy in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.securityPolicies.insert",
// "parameterOrder": [
@@ -108790,7 +109201,7 @@ type SecurityPoliciesListCall struct {
}
// List: List all the policies that have been configured for the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *SecurityPoliciesService) List(project string) *SecurityPoliciesListCall {
c := &SecurityPoliciesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -108897,7 +109308,7 @@ func (c *SecurityPoliciesListCall) Header() http.Header {
func (c *SecurityPoliciesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -108959,7 +109370,7 @@ func (c *SecurityPoliciesListCall) Do(opts ...googleapi.CallOption) (*SecurityPo
}
return ret, nil
// {
- // "description": "List all the policies that have been configured for the specified project.",
+ // "description": "List all the policies that have been configured for the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.securityPolicies.list",
// "parameterOrder": [
@@ -109044,7 +109455,7 @@ type SecurityPoliciesPatchCall struct {
}
// Patch: Patches the specified policy with the data included in the
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *SecurityPoliciesService) Patch(project string, securityPolicy string, securitypolicy *SecurityPolicy) *SecurityPoliciesPatchCall {
c := &SecurityPoliciesPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -109099,7 +109510,7 @@ func (c *SecurityPoliciesPatchCall) Header() http.Header {
func (c *SecurityPoliciesPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -109164,7 +109575,7 @@ func (c *SecurityPoliciesPatchCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Patches the specified policy with the data included in the request.",
+ // "description": "Patches the specified policy with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.securityPolicies.patch",
// "parameterOrder": [
@@ -109219,7 +109630,8 @@ type SecurityPoliciesPatchRuleCall struct {
header_ http.Header
}
-// PatchRule: Patches a rule at the specified priority.
+// PatchRule: Patches a rule at the specified priority. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SecurityPoliciesService) PatchRule(project string, securityPolicy string, securitypolicyrule *SecurityPolicyRule) *SecurityPoliciesPatchRuleCall {
c := &SecurityPoliciesPatchRuleCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -109262,7 +109674,7 @@ func (c *SecurityPoliciesPatchRuleCall) Header() http.Header {
func (c *SecurityPoliciesPatchRuleCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -109327,7 +109739,7 @@ func (c *SecurityPoliciesPatchRuleCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Patches a rule at the specified priority.",
+ // "description": "Patches a rule at the specified priority. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.securityPolicies.patchRule",
// "parameterOrder": [
@@ -109382,7 +109794,8 @@ type SecurityPoliciesRemoveRuleCall struct {
header_ http.Header
}
-// RemoveRule: Deletes a rule at the specified priority.
+// RemoveRule: Deletes a rule at the specified priority. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SecurityPoliciesService) RemoveRule(project string, securityPolicy string) *SecurityPoliciesRemoveRuleCall {
c := &SecurityPoliciesRemoveRuleCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -109424,7 +109837,7 @@ func (c *SecurityPoliciesRemoveRuleCall) Header() http.Header {
func (c *SecurityPoliciesRemoveRuleCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -109484,7 +109897,7 @@ func (c *SecurityPoliciesRemoveRuleCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Deletes a rule at the specified priority.",
+ // "description": "Deletes a rule at the specified priority. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.securityPolicies.removeRule",
// "parameterOrder": [
@@ -109542,7 +109955,8 @@ type SnapshotsDeleteCall struct {
// deletion is needed for subsequent snapshots, the data will be moved
// to the next corresponding snapshot.
//
-// For more information, see Deleting snapshots.
+// For more information, see Deleting snapshots. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/snapshots/delete
func (r *SnapshotsService) Delete(project string, snapshot string) *SnapshotsDeleteCall {
c := &SnapshotsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -109597,7 +110011,7 @@ func (c *SnapshotsDeleteCall) Header() http.Header {
func (c *SnapshotsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -109657,7 +110071,7 @@ func (c *SnapshotsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, erro
}
return ret, nil
// {
- // "description": "Deletes the specified Snapshot resource. Keep in mind that deleting a single snapshot might not necessarily delete all the data on that snapshot. If any data on the snapshot that is marked for deletion is needed for subsequent snapshots, the data will be moved to the next corresponding snapshot.\n\nFor more information, see Deleting snapshots.",
+ // "description": "Deletes the specified Snapshot resource. Keep in mind that deleting a single snapshot might not necessarily delete all the data on that snapshot. If any data on the snapshot that is marked for deletion is needed for subsequent snapshots, the data will be moved to the next corresponding snapshot.\n\nFor more information, see Deleting snapshots. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.snapshots.delete",
// "parameterOrder": [
@@ -109710,7 +110124,8 @@ type SnapshotsGetCall struct {
}
// Get: Returns the specified Snapshot resource. Gets a list of
-// available snapshots by making a list() request.
+// available snapshots by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/snapshots/get
func (r *SnapshotsService) Get(project string, snapshot string) *SnapshotsGetCall {
c := &SnapshotsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -109756,7 +110171,7 @@ func (c *SnapshotsGetCall) Header() http.Header {
func (c *SnapshotsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -109819,7 +110234,7 @@ func (c *SnapshotsGetCall) Do(opts ...googleapi.CallOption) (*Snapshot, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Snapshot resource. Gets a list of available snapshots by making a list() request.",
+ // "description": "Returns the specified Snapshot resource. Gets a list of available snapshots by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.snapshots.get",
// "parameterOrder": [
@@ -109868,7 +110283,8 @@ type SnapshotsGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SnapshotsService) GetIamPolicy(project string, resource string) *SnapshotsGetIamPolicyCall {
c := &SnapshotsGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -109913,7 +110329,7 @@ func (c *SnapshotsGetIamPolicyCall) Header() http.Header {
func (c *SnapshotsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -109976,7 +110392,7 @@ func (c *SnapshotsGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, e
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.snapshots.getIamPolicy",
// "parameterOrder": [
@@ -110024,7 +110440,7 @@ type SnapshotsListCall struct {
}
// List: Retrieves the list of Snapshot resources contained within the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/snapshots/list
func (r *SnapshotsService) List(project string) *SnapshotsListCall {
c := &SnapshotsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -110132,7 +110548,7 @@ func (c *SnapshotsListCall) Header() http.Header {
func (c *SnapshotsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -110194,7 +110610,7 @@ func (c *SnapshotsListCall) Do(opts ...googleapi.CallOption) (*SnapshotList, err
}
return ret, nil
// {
- // "description": "Retrieves the list of Snapshot resources contained within the specified project.",
+ // "description": "Retrieves the list of Snapshot resources contained within the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.snapshots.list",
// "parameterOrder": [
@@ -110279,7 +110695,8 @@ type SnapshotsSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SnapshotsService) SetIamPolicy(project string, resource string, globalsetpolicyrequest *GlobalSetPolicyRequest) *SnapshotsSetIamPolicyCall {
c := &SnapshotsSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -110315,7 +110732,7 @@ func (c *SnapshotsSetIamPolicyCall) Header() http.Header {
func (c *SnapshotsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -110380,7 +110797,7 @@ func (c *SnapshotsSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy, e
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.snapshots.setIamPolicy",
// "parameterOrder": [
@@ -110431,7 +110848,8 @@ type SnapshotsSetLabelsCall struct {
}
// SetLabels: Sets the labels on a snapshot. To learn more about labels,
-// read the Labeling Resources documentation.
+// read the Labeling Resources documentation. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SnapshotsService) SetLabels(project string, resource string, globalsetlabelsrequest *GlobalSetLabelsRequest) *SnapshotsSetLabelsCall {
c := &SnapshotsSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -110467,7 +110885,7 @@ func (c *SnapshotsSetLabelsCall) Header() http.Header {
func (c *SnapshotsSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -110532,7 +110950,7 @@ func (c *SnapshotsSetLabelsCall) Do(opts ...googleapi.CallOption) (*Operation, e
}
return ret, nil
// {
- // "description": "Sets the labels on a snapshot. To learn more about labels, read the Labeling Resources documentation.",
+ // "description": "Sets the labels on a snapshot. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.snapshots.setLabels",
// "parameterOrder": [
@@ -110583,7 +111001,7 @@ type SnapshotsTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *SnapshotsService) TestIamPermissions(project string, resource string, testpermissionsrequest *TestPermissionsRequest) *SnapshotsTestIamPermissionsCall {
c := &SnapshotsTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -110619,7 +111037,7 @@ func (c *SnapshotsTestIamPermissionsCall) Header() http.Header {
func (c *SnapshotsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -110684,7 +111102,7 @@ func (c *SnapshotsTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*Tes
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.snapshots.testIamPermissions",
// "parameterOrder": [
@@ -110735,7 +111153,8 @@ type SslCertificatesAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of all SslCertificate resources,
-// regional and global, available to the specified project.
+// regional and global, available to the specified project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SslCertificatesService) AggregatedList(project string) *SslCertificatesAggregatedListCall {
c := &SslCertificatesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -110842,7 +111261,7 @@ func (c *SslCertificatesAggregatedListCall) Header() http.Header {
func (c *SslCertificatesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -110904,7 +111323,7 @@ func (c *SslCertificatesAggregatedListCall) Do(opts ...googleapi.CallOption) (*S
}
return ret, nil
// {
- // "description": "Retrieves the list of all SslCertificate resources, regional and global, available to the specified project.",
+ // "description": "Retrieves the list of all SslCertificate resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.sslCertificates.aggregatedList",
// "parameterOrder": [
@@ -110987,7 +111406,8 @@ type SslCertificatesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified SslCertificate resource.
+// Delete: Deletes the specified SslCertificate resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SslCertificatesService) Delete(project string, sslCertificate string) *SslCertificatesDeleteCall {
c := &SslCertificatesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -111041,7 +111461,7 @@ func (c *SslCertificatesDeleteCall) Header() http.Header {
func (c *SslCertificatesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -111101,7 +111521,7 @@ func (c *SslCertificatesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Deletes the specified SslCertificate resource.",
+ // "description": "Deletes the specified SslCertificate resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.sslCertificates.delete",
// "parameterOrder": [
@@ -111154,7 +111574,8 @@ type SslCertificatesGetCall struct {
}
// Get: Returns the specified SslCertificate resource. Gets a list of
-// available SSL certificates by making a list() request.
+// available SSL certificates by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SslCertificatesService) Get(project string, sslCertificate string) *SslCertificatesGetCall {
c := &SslCertificatesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -111199,7 +111620,7 @@ func (c *SslCertificatesGetCall) Header() http.Header {
func (c *SslCertificatesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -111262,7 +111683,7 @@ func (c *SslCertificatesGetCall) Do(opts ...googleapi.CallOption) (*SslCertifica
}
return ret, nil
// {
- // "description": "Returns the specified SslCertificate resource. Gets a list of available SSL certificates by making a list() request.",
+ // "description": "Returns the specified SslCertificate resource. Gets a list of available SSL certificates by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.sslCertificates.get",
// "parameterOrder": [
@@ -111310,7 +111731,8 @@ type SslCertificatesInsertCall struct {
}
// Insert: Creates a SslCertificate resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SslCertificatesService) Insert(project string, sslcertificate *SslCertificate) *SslCertificatesInsertCall {
c := &SslCertificatesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -111364,7 +111786,7 @@ func (c *SslCertificatesInsertCall) Header() http.Header {
func (c *SslCertificatesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -111428,7 +111850,7 @@ func (c *SslCertificatesInsertCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Creates a SslCertificate resource in the specified project using the data included in the request.",
+ // "description": "Creates a SslCertificate resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.sslCertificates.insert",
// "parameterOrder": [
@@ -111475,7 +111897,7 @@ type SslCertificatesListCall struct {
}
// List: Retrieves the list of SslCertificate resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *SslCertificatesService) List(project string) *SslCertificatesListCall {
c := &SslCertificatesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -111582,7 +112004,7 @@ func (c *SslCertificatesListCall) Header() http.Header {
func (c *SslCertificatesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -111644,7 +112066,7 @@ func (c *SslCertificatesListCall) Do(opts ...googleapi.CallOption) (*SslCertific
}
return ret, nil
// {
- // "description": "Retrieves the list of SslCertificate resources available to the specified project.",
+ // "description": "Retrieves the list of SslCertificate resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.sslCertificates.list",
// "parameterOrder": [
@@ -111729,7 +112151,7 @@ type SslPoliciesDeleteCall struct {
// Delete: Deletes the specified SSL policy. The SSL policy resource can
// be deleted only if it is not in use by any TargetHttpsProxy or
-// TargetSslProxy resources.
+// TargetSslProxy resources. (== suppress_warning http-rest-shadowed ==)
func (r *SslPoliciesService) Delete(project string, sslPolicy string) *SslPoliciesDeleteCall {
c := &SslPoliciesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -111783,7 +112205,7 @@ func (c *SslPoliciesDeleteCall) Header() http.Header {
func (c *SslPoliciesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -111843,7 +112265,7 @@ func (c *SslPoliciesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified SSL policy. The SSL policy resource can be deleted only if it is not in use by any TargetHttpsProxy or TargetSslProxy resources.",
+ // "description": "Deletes the specified SSL policy. The SSL policy resource can be deleted only if it is not in use by any TargetHttpsProxy or TargetSslProxy resources. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.sslPolicies.delete",
// "parameterOrder": [
@@ -111895,7 +112317,7 @@ type SslPoliciesGetCall struct {
}
// Get: Lists all of the ordered rules present in a single specified
-// policy.
+// policy. (== suppress_warning http-rest-shadowed ==)
func (r *SslPoliciesService) Get(project string, sslPolicy string) *SslPoliciesGetCall {
c := &SslPoliciesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -111940,7 +112362,7 @@ func (c *SslPoliciesGetCall) Header() http.Header {
func (c *SslPoliciesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -112003,7 +112425,7 @@ func (c *SslPoliciesGetCall) Do(opts ...googleapi.CallOption) (*SslPolicy, error
}
return ret, nil
// {
- // "description": "Lists all of the ordered rules present in a single specified policy.",
+ // "description": "Lists all of the ordered rules present in a single specified policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.sslPolicies.get",
// "parameterOrder": [
@@ -112050,7 +112472,8 @@ type SslPoliciesInsertCall struct {
}
// Insert: Returns the specified SSL policy resource. Gets a list of
-// available SSL policies by making a list() request.
+// available SSL policies by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SslPoliciesService) Insert(project string, sslpolicy *SslPolicy) *SslPoliciesInsertCall {
c := &SslPoliciesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -112104,7 +112527,7 @@ func (c *SslPoliciesInsertCall) Header() http.Header {
func (c *SslPoliciesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -112168,7 +112591,7 @@ func (c *SslPoliciesInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Returns the specified SSL policy resource. Gets a list of available SSL policies by making a list() request.",
+ // "description": "Returns the specified SSL policy resource. Gets a list of available SSL policies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.sslPolicies.insert",
// "parameterOrder": [
@@ -112215,7 +112638,7 @@ type SslPoliciesListCall struct {
}
// List: Lists all the SSL policies that have been configured for the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *SslPoliciesService) List(project string) *SslPoliciesListCall {
c := &SslPoliciesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -112322,7 +112745,7 @@ func (c *SslPoliciesListCall) Header() http.Header {
func (c *SslPoliciesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -112384,7 +112807,7 @@ func (c *SslPoliciesListCall) Do(opts ...googleapi.CallOption) (*SslPoliciesList
}
return ret, nil
// {
- // "description": "Lists all the SSL policies that have been configured for the specified project.",
+ // "description": "Lists all the SSL policies that have been configured for the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.sslPolicies.list",
// "parameterOrder": [
@@ -112468,7 +112891,8 @@ type SslPoliciesListAvailableFeaturesCall struct {
}
// ListAvailableFeatures: Lists all features that can be specified in
-// the SSL policy when using custom profile.
+// the SSL policy when using custom profile. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SslPoliciesService) ListAvailableFeatures(project string) *SslPoliciesListAvailableFeaturesCall {
c := &SslPoliciesListAvailableFeaturesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -112575,7 +112999,7 @@ func (c *SslPoliciesListAvailableFeaturesCall) Header() http.Header {
func (c *SslPoliciesListAvailableFeaturesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -112639,7 +113063,7 @@ func (c *SslPoliciesListAvailableFeaturesCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Lists all features that can be specified in the SSL policy when using custom profile.",
+ // "description": "Lists all features that can be specified in the SSL policy when using custom profile. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.sslPolicies.listAvailableFeatures",
// "parameterOrder": [
@@ -112703,7 +113127,7 @@ type SslPoliciesPatchCall struct {
}
// Patch: Patches the specified SSL policy with the data included in the
-// request.
+// request. (== suppress_warning http-rest-shadowed ==)
func (r *SslPoliciesService) Patch(project string, sslPolicy string, sslpolicy *SslPolicy) *SslPoliciesPatchCall {
c := &SslPoliciesPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -112758,7 +113182,7 @@ func (c *SslPoliciesPatchCall) Header() http.Header {
func (c *SslPoliciesPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -112823,7 +113247,7 @@ func (c *SslPoliciesPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Patches the specified SSL policy with the data included in the request.",
+ // "description": "Patches the specified SSL policy with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.sslPolicies.patch",
// "parameterOrder": [
@@ -112876,7 +113300,8 @@ type SubnetworksAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of subnetworks.
+// AggregatedList: Retrieves an aggregated list of subnetworks. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) AggregatedList(project string) *SubnetworksAggregatedListCall {
c := &SubnetworksAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -112983,7 +113408,7 @@ func (c *SubnetworksAggregatedListCall) Header() http.Header {
func (c *SubnetworksAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -113045,7 +113470,7 @@ func (c *SubnetworksAggregatedListCall) Do(opts ...googleapi.CallOption) (*Subne
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of subnetworks.",
+ // "description": "Retrieves an aggregated list of subnetworks. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.subnetworks.aggregatedList",
// "parameterOrder": [
@@ -113129,7 +113554,8 @@ type SubnetworksDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified subnetwork.
+// Delete: Deletes the specified subnetwork. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SubnetworksService) Delete(project string, region string, subnetwork string) *SubnetworksDeleteCall {
c := &SubnetworksDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -113184,7 +113610,7 @@ func (c *SubnetworksDeleteCall) Header() http.Header {
func (c *SubnetworksDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -113245,7 +113671,7 @@ func (c *SubnetworksDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified subnetwork.",
+ // "description": "Deletes the specified subnetwork. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.subnetworks.delete",
// "parameterOrder": [
@@ -113307,7 +113733,7 @@ type SubnetworksExpandIpCidrRangeCall struct {
}
// ExpandIpCidrRange: Expands the IP CIDR range of the subnetwork to a
-// specified value.
+// specified value. (== suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) ExpandIpCidrRange(project string, region string, subnetwork string, subnetworksexpandipcidrrangerequest *SubnetworksExpandIpCidrRangeRequest) *SubnetworksExpandIpCidrRangeCall {
c := &SubnetworksExpandIpCidrRangeCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -113363,7 +113789,7 @@ func (c *SubnetworksExpandIpCidrRangeCall) Header() http.Header {
func (c *SubnetworksExpandIpCidrRangeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -113429,7 +113855,7 @@ func (c *SubnetworksExpandIpCidrRangeCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Expands the IP CIDR range of the subnetwork to a specified value.",
+ // "description": "Expands the IP CIDR range of the subnetwork to a specified value. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.subnetworks.expandIpCidrRange",
// "parameterOrder": [
@@ -113494,7 +113920,8 @@ type SubnetworksGetCall struct {
}
// Get: Returns the specified subnetwork. Gets a list of available
-// subnetworks by making a list() request.
+// subnetworks by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SubnetworksService) Get(project string, region string, subnetwork string) *SubnetworksGetCall {
c := &SubnetworksGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -113540,7 +113967,7 @@ func (c *SubnetworksGetCall) Header() http.Header {
func (c *SubnetworksGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -113604,7 +114031,7 @@ func (c *SubnetworksGetCall) Do(opts ...googleapi.CallOption) (*Subnetwork, erro
}
return ret, nil
// {
- // "description": "Returns the specified subnetwork. Gets a list of available subnetworks list() request.",
+ // "description": "Returns the specified subnetwork. Gets a list of available subnetworks list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.subnetworks.get",
// "parameterOrder": [
@@ -113662,7 +114089,8 @@ type SubnetworksGetIamPolicyCall struct {
}
// GetIamPolicy: Gets the access control policy for a resource. May be
-// empty if no such policy or resource exists.
+// empty if no such policy or resource exists. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SubnetworksService) GetIamPolicy(project string, region string, resource string) *SubnetworksGetIamPolicyCall {
c := &SubnetworksGetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -113708,7 +114136,7 @@ func (c *SubnetworksGetIamPolicyCall) Header() http.Header {
func (c *SubnetworksGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -113772,7 +114200,7 @@ func (c *SubnetworksGetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy,
}
return ret, nil
// {
- // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists.",
+ // "description": "Gets the access control policy for a resource. May be empty if no such policy or resource exists. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.subnetworks.getIamPolicy",
// "parameterOrder": [
@@ -113829,7 +114257,7 @@ type SubnetworksInsertCall struct {
}
// Insert: Creates a subnetwork in the specified project using the data
-// included in the request.
+// included in the request. (== suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) Insert(project string, region string, subnetwork *Subnetwork) *SubnetworksInsertCall {
c := &SubnetworksInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -113884,7 +114312,7 @@ func (c *SubnetworksInsertCall) Header() http.Header {
func (c *SubnetworksInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -113949,7 +114377,7 @@ func (c *SubnetworksInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates a subnetwork in the specified project using the data included in the request.",
+ // "description": "Creates a subnetwork in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.subnetworks.insert",
// "parameterOrder": [
@@ -114005,7 +114433,7 @@ type SubnetworksListCall struct {
}
// List: Retrieves a list of subnetworks available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) List(project string, region string) *SubnetworksListCall {
c := &SubnetworksListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -114113,7 +114541,7 @@ func (c *SubnetworksListCall) Header() http.Header {
func (c *SubnetworksListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -114176,7 +114604,7 @@ func (c *SubnetworksListCall) Do(opts ...googleapi.CallOption) (*SubnetworkList,
}
return ret, nil
// {
- // "description": "Retrieves a list of subnetworks available to the specified project.",
+ // "description": "Retrieves a list of subnetworks available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.subnetworks.list",
// "parameterOrder": [
@@ -114270,6 +114698,7 @@ type SubnetworksListUsableCall struct {
// ListUsable: Retrieves an aggregated list of all usable subnetworks in
// the project. The list contains all of the subnetworks in the project
// and the subnetworks that were shared by a Shared VPC host project.
+// (== suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) ListUsable(project string) *SubnetworksListUsableCall {
c := &SubnetworksListUsableCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -114376,7 +114805,7 @@ func (c *SubnetworksListUsableCall) Header() http.Header {
func (c *SubnetworksListUsableCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -114438,7 +114867,7 @@ func (c *SubnetworksListUsableCall) Do(opts ...googleapi.CallOption) (*UsableSub
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of all usable subnetworks in the project. The list contains all of the subnetworks in the project and the subnetworks that were shared by a Shared VPC host project.",
+ // "description": "Retrieves an aggregated list of all usable subnetworks in the project. The list contains all of the subnetworks in the project and the subnetworks that were shared by a Shared VPC host project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.subnetworks.listUsable",
// "parameterOrder": [
@@ -114526,7 +114955,8 @@ type SubnetworksPatchCall struct {
// Patch: Patches the specified subnetwork with the data included in the
// request. Only certain fields can be updated with a patch request as
// indicated in the field descriptions. You must specify the current
-// fingerprint of the subnetwork resource being patched.
+// fingerprint of the subnetwork resource being patched. (==
+// suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) Patch(project string, region string, subnetwork string, subnetwork2 *Subnetwork) *SubnetworksPatchCall {
c := &SubnetworksPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -114536,6 +114966,21 @@ func (r *SubnetworksService) Patch(project string, region string, subnetwork str
return c
}
+// DrainTimeoutSeconds sets the optional parameter
+// "drainTimeoutSeconds": The drain timeout specifies the upper bound in
+// seconds on the amount of time allowed to drain connections from the
+// current ACTIVE subnetwork to the current BACKUP subnetwork. The drain
+// timeout is only applicable when the following conditions are true: -
+// the subnetwork being patched has purpose =
+// INTERNAL_HTTPS_LOAD_BALANCER - the subnetwork being patched has role
+// = BACKUP - the patch request is setting the role to ACTIVE. Note that
+// after this patch operation the roles of the ACTIVE and BACKUP
+// subnetworks will be swapped.
+func (c *SubnetworksPatchCall) DrainTimeoutSeconds(drainTimeoutSeconds int64) *SubnetworksPatchCall {
+ c.urlParams_.Set("drainTimeoutSeconds", fmt.Sprint(drainTimeoutSeconds))
+ return c
+}
+
// RequestId sets the optional parameter "requestId": An optional
// request ID to identify requests. Specify a unique request ID so that
// if you must retry your request, the server will know to ignore the
@@ -114582,7 +115027,7 @@ func (c *SubnetworksPatchCall) Header() http.Header {
func (c *SubnetworksPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -114648,7 +115093,7 @@ func (c *SubnetworksPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Patches the specified subnetwork with the data included in the request. Only certain fields can up updated with a patch request as indicated in the field descriptions. You must specify the current fingeprint of the subnetwork resource being patched.",
+ // "description": "Patches the specified subnetwork with the data included in the request. Only certain fields can up updated with a patch request as indicated in the field descriptions. You must specify the current fingeprint of the subnetwork resource being patched. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.subnetworks.patch",
// "parameterOrder": [
@@ -114657,6 +115102,12 @@ func (c *SubnetworksPatchCall) Do(opts ...googleapi.CallOption) (*Operation, err
// "subnetwork"
// ],
// "parameters": {
+ // "drainTimeoutSeconds": {
+ // "description": "The drain timeout specifies the upper bound in seconds on the amount of time allowed to drain connections from the current ACTIVE subnetwork to the current BACKUP subnetwork. The drain timeout is only applicable when the following conditions are true: - the subnetwork being patched has purpose = INTERNAL_HTTPS_LOAD_BALANCER - the subnetwork being patched has role = BACKUP - the patch request is setting the role to ACTIVE. Note that after this patch operation the roles of the ACTIVE and BACKUP subnetworks will be swapped.",
+ // "format": "int32",
+ // "location": "query",
+ // "type": "integer"
+ // },
// "project": {
// "description": "Project ID for this request.",
// "location": "path",
@@ -114713,7 +115164,8 @@ type SubnetworksSetIamPolicyCall struct {
}
// SetIamPolicy: Sets the access control policy on the specified
-// resource. Replaces any existing policy.
+// resource. Replaces any existing policy. (== suppress_warning
+// http-rest-shadowed ==)
func (r *SubnetworksService) SetIamPolicy(project string, region string, resource string, regionsetpolicyrequest *RegionSetPolicyRequest) *SubnetworksSetIamPolicyCall {
c := &SubnetworksSetIamPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -114750,7 +115202,7 @@ func (c *SubnetworksSetIamPolicyCall) Header() http.Header {
func (c *SubnetworksSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -114816,7 +115268,7 @@ func (c *SubnetworksSetIamPolicyCall) Do(opts ...googleapi.CallOption) (*Policy,
}
return ret, nil
// {
- // "description": "Sets the access control policy on the specified resource. Replaces any existing policy.",
+ // "description": "Sets the access control policy on the specified resource. Replaces any existing policy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.subnetworks.setIamPolicy",
// "parameterOrder": [
@@ -114877,7 +115329,7 @@ type SubnetworksSetPrivateIpGoogleAccessCall struct {
// SetPrivateIpGoogleAccess: Set whether VMs in this subnet can access
// Google services without assigning external IP addresses through
-// Private Google Access.
+// Private Google Access. (== suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) SetPrivateIpGoogleAccess(project string, region string, subnetwork string, subnetworkssetprivateipgoogleaccessrequest *SubnetworksSetPrivateIpGoogleAccessRequest) *SubnetworksSetPrivateIpGoogleAccessCall {
c := &SubnetworksSetPrivateIpGoogleAccessCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -114933,7 +115385,7 @@ func (c *SubnetworksSetPrivateIpGoogleAccessCall) Header() http.Header {
func (c *SubnetworksSetPrivateIpGoogleAccessCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -114999,7 +115451,7 @@ func (c *SubnetworksSetPrivateIpGoogleAccessCall) Do(opts ...googleapi.CallOptio
}
return ret, nil
// {
- // "description": "Set whether VMs in this subnet can access Google services without assigning external IP addresses through Private Google Access.",
+ // "description": "Set whether VMs in this subnet can access Google services without assigning external IP addresses through Private Google Access. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.subnetworks.setPrivateIpGoogleAccess",
// "parameterOrder": [
@@ -115064,7 +115516,7 @@ type SubnetworksTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *SubnetworksService) TestIamPermissions(project string, region string, resource string, testpermissionsrequest *TestPermissionsRequest) *SubnetworksTestIamPermissionsCall {
c := &SubnetworksTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -115101,7 +115553,7 @@ func (c *SubnetworksTestIamPermissionsCall) Header() http.Header {
func (c *SubnetworksTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -115167,7 +115619,7 @@ func (c *SubnetworksTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*T
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.subnetworks.testIamPermissions",
// "parameterOrder": [
@@ -115226,7 +115678,8 @@ type TargetHttpProxiesAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of all TargetHttpProxy resources,
-// regional and global, available to the specified project.
+// regional and global, available to the specified project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetHttpProxiesService) AggregatedList(project string) *TargetHttpProxiesAggregatedListCall {
c := &TargetHttpProxiesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -115333,7 +115786,7 @@ func (c *TargetHttpProxiesAggregatedListCall) Header() http.Header {
func (c *TargetHttpProxiesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -115395,7 +115848,7 @@ func (c *TargetHttpProxiesAggregatedListCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Retrieves the list of all TargetHttpProxy resources, regional and global, available to the specified project.",
+ // "description": "Retrieves the list of all TargetHttpProxy resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetHttpProxies.aggregatedList",
// "parameterOrder": [
@@ -115478,7 +115931,8 @@ type TargetHttpProxiesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetHttpProxy resource.
+// Delete: Deletes the specified TargetHttpProxy resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies/delete
func (r *TargetHttpProxiesService) Delete(project string, targetHttpProxy string) *TargetHttpProxiesDeleteCall {
c := &TargetHttpProxiesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -115533,7 +115987,7 @@ func (c *TargetHttpProxiesDeleteCall) Header() http.Header {
func (c *TargetHttpProxiesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -115593,7 +116047,7 @@ func (c *TargetHttpProxiesDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified TargetHttpProxy resource.",
+ // "description": "Deletes the specified TargetHttpProxy resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetHttpProxies.delete",
// "parameterOrder": [
@@ -115646,7 +116100,8 @@ type TargetHttpProxiesGetCall struct {
}
// Get: Returns the specified TargetHttpProxy resource. Gets a list of
-// available target HTTP proxies by making a list() request.
+// available target HTTP proxies by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies/get
func (r *TargetHttpProxiesService) Get(project string, targetHttpProxy string) *TargetHttpProxiesGetCall {
c := &TargetHttpProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -115692,7 +116147,7 @@ func (c *TargetHttpProxiesGetCall) Header() http.Header {
func (c *TargetHttpProxiesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -115755,7 +116210,7 @@ func (c *TargetHttpProxiesGetCall) Do(opts ...googleapi.CallOption) (*TargetHttp
}
return ret, nil
// {
- // "description": "Returns the specified TargetHttpProxy resource. Gets a list of available target HTTP proxies by making a list() request.",
+ // "description": "Returns the specified TargetHttpProxy resource. Gets a list of available target HTTP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetHttpProxies.get",
// "parameterOrder": [
@@ -115803,7 +116258,8 @@ type TargetHttpProxiesInsertCall struct {
}
// Insert: Creates a TargetHttpProxy resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies/insert
func (r *TargetHttpProxiesService) Insert(project string, targethttpproxy *TargetHttpProxy) *TargetHttpProxiesInsertCall {
c := &TargetHttpProxiesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -115858,7 +116314,7 @@ func (c *TargetHttpProxiesInsertCall) Header() http.Header {
func (c *TargetHttpProxiesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -115922,7 +116378,7 @@ func (c *TargetHttpProxiesInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates a TargetHttpProxy resource in the specified project using the data included in the request.",
+ // "description": "Creates a TargetHttpProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpProxies.insert",
// "parameterOrder": [
@@ -115969,7 +116425,7 @@ type TargetHttpProxiesListCall struct {
}
// List: Retrieves the list of TargetHttpProxy resources available to
-// the specified project.
+// the specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies/list
func (r *TargetHttpProxiesService) List(project string) *TargetHttpProxiesListCall {
c := &TargetHttpProxiesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -116077,7 +116533,7 @@ func (c *TargetHttpProxiesListCall) Header() http.Header {
func (c *TargetHttpProxiesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -116139,7 +116595,7 @@ func (c *TargetHttpProxiesListCall) Do(opts ...googleapi.CallOption) (*TargetHtt
}
return ret, nil
// {
- // "description": "Retrieves the list of TargetHttpProxy resources available to the specified project.",
+ // "description": "Retrieves the list of TargetHttpProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetHttpProxies.list",
// "parameterOrder": [
@@ -116223,7 +116679,8 @@ type TargetHttpProxiesSetUrlMapCall struct {
header_ http.Header
}
-// SetUrlMap: Changes the URL map for TargetHttpProxy.
+// SetUrlMap: Changes the URL map for TargetHttpProxy. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetHttpProxies/setUrlMap
func (r *TargetHttpProxiesService) SetUrlMap(project string, targetHttpProxy string, urlmapreference *UrlMapReference) *TargetHttpProxiesSetUrlMapCall {
c := &TargetHttpProxiesSetUrlMapCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -116279,7 +116736,7 @@ func (c *TargetHttpProxiesSetUrlMapCall) Header() http.Header {
func (c *TargetHttpProxiesSetUrlMapCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -116344,7 +116801,7 @@ func (c *TargetHttpProxiesSetUrlMapCall) Do(opts ...googleapi.CallOption) (*Oper
}
return ret, nil
// {
- // "description": "Changes the URL map for TargetHttpProxy.",
+ // "description": "Changes the URL map for TargetHttpProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpProxies.setUrlMap",
// "parameterOrder": [
@@ -116399,7 +116856,8 @@ type TargetHttpsProxiesAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of all TargetHttpsProxy resources,
-// regional and global, available to the specified project.
+// regional and global, available to the specified project. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) AggregatedList(project string) *TargetHttpsProxiesAggregatedListCall {
c := &TargetHttpsProxiesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -116506,7 +116964,7 @@ func (c *TargetHttpsProxiesAggregatedListCall) Header() http.Header {
func (c *TargetHttpsProxiesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -116568,7 +117026,7 @@ func (c *TargetHttpsProxiesAggregatedListCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Retrieves the list of all TargetHttpsProxy resources, regional and global, available to the specified project.",
+ // "description": "Retrieves the list of all TargetHttpsProxy resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetHttpsProxies.aggregatedList",
// "parameterOrder": [
@@ -116651,7 +117109,8 @@ type TargetHttpsProxiesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetHttpsProxy resource.
+// Delete: Deletes the specified TargetHttpsProxy resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) Delete(project string, targetHttpsProxy string) *TargetHttpsProxiesDeleteCall {
c := &TargetHttpsProxiesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -116705,7 +117164,7 @@ func (c *TargetHttpsProxiesDeleteCall) Header() http.Header {
func (c *TargetHttpsProxiesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -116765,7 +117224,7 @@ func (c *TargetHttpsProxiesDeleteCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Deletes the specified TargetHttpsProxy resource.",
+ // "description": "Deletes the specified TargetHttpsProxy resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetHttpsProxies.delete",
// "parameterOrder": [
@@ -116818,7 +117277,8 @@ type TargetHttpsProxiesGetCall struct {
}
// Get: Returns the specified TargetHttpsProxy resource. Gets a list of
-// available target HTTPS proxies by making a list() request.
+// available target HTTPS proxies by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) Get(project string, targetHttpsProxy string) *TargetHttpsProxiesGetCall {
c := &TargetHttpsProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -116863,7 +117323,7 @@ func (c *TargetHttpsProxiesGetCall) Header() http.Header {
func (c *TargetHttpsProxiesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -116926,7 +117386,7 @@ func (c *TargetHttpsProxiesGetCall) Do(opts ...googleapi.CallOption) (*TargetHtt
}
return ret, nil
// {
- // "description": "Returns the specified TargetHttpsProxy resource. Gets a list of available target HTTPS proxies by making a list() request.",
+ // "description": "Returns the specified TargetHttpsProxy resource. Gets a list of available target HTTPS proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetHttpsProxies.get",
// "parameterOrder": [
@@ -116974,7 +117434,8 @@ type TargetHttpsProxiesInsertCall struct {
}
// Insert: Creates a TargetHttpsProxy resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) Insert(project string, targethttpsproxy *TargetHttpsProxy) *TargetHttpsProxiesInsertCall {
c := &TargetHttpsProxiesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -117028,7 +117489,7 @@ func (c *TargetHttpsProxiesInsertCall) Header() http.Header {
func (c *TargetHttpsProxiesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -117092,7 +117553,7 @@ func (c *TargetHttpsProxiesInsertCall) Do(opts ...googleapi.CallOption) (*Operat
}
return ret, nil
// {
- // "description": "Creates a TargetHttpsProxy resource in the specified project using the data included in the request.",
+ // "description": "Creates a TargetHttpsProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpsProxies.insert",
// "parameterOrder": [
@@ -117139,7 +117600,7 @@ type TargetHttpsProxiesListCall struct {
}
// List: Retrieves the list of TargetHttpsProxy resources available to
-// the specified project.
+// the specified project. (== suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) List(project string) *TargetHttpsProxiesListCall {
c := &TargetHttpsProxiesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -117246,7 +117707,7 @@ func (c *TargetHttpsProxiesListCall) Header() http.Header {
func (c *TargetHttpsProxiesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -117308,7 +117769,7 @@ func (c *TargetHttpsProxiesListCall) Do(opts ...googleapi.CallOption) (*TargetHt
}
return ret, nil
// {
- // "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project.",
+ // "description": "Retrieves the list of TargetHttpsProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetHttpsProxies.list",
// "parameterOrder": [
@@ -117393,6 +117854,7 @@ type TargetHttpsProxiesSetQuicOverrideCall struct {
}
// SetQuicOverride: Sets the QUIC override policy for TargetHttpsProxy.
+// (== suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) SetQuicOverride(project string, targetHttpsProxy string, targethttpsproxiessetquicoverriderequest *TargetHttpsProxiesSetQuicOverrideRequest) *TargetHttpsProxiesSetQuicOverrideCall {
c := &TargetHttpsProxiesSetQuicOverrideCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -117447,7 +117909,7 @@ func (c *TargetHttpsProxiesSetQuicOverrideCall) Header() http.Header {
func (c *TargetHttpsProxiesSetQuicOverrideCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -117512,7 +117974,7 @@ func (c *TargetHttpsProxiesSetQuicOverrideCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Sets the QUIC override policy for TargetHttpsProxy.",
+ // "description": "Sets the QUIC override policy for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpsProxies.setQuicOverride",
// "parameterOrder": [
@@ -117567,6 +118029,7 @@ type TargetHttpsProxiesSetSslCertificatesCall struct {
}
// SetSslCertificates: Replaces SslCertificates for TargetHttpsProxy.
+// (== suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) SetSslCertificates(project string, targetHttpsProxy string, targethttpsproxiessetsslcertificatesrequest *TargetHttpsProxiesSetSslCertificatesRequest) *TargetHttpsProxiesSetSslCertificatesCall {
c := &TargetHttpsProxiesSetSslCertificatesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -117621,7 +118084,7 @@ func (c *TargetHttpsProxiesSetSslCertificatesCall) Header() http.Header {
func (c *TargetHttpsProxiesSetSslCertificatesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -117686,7 +118149,7 @@ func (c *TargetHttpsProxiesSetSslCertificatesCall) Do(opts ...googleapi.CallOpti
}
return ret, nil
// {
- // "description": "Replaces SslCertificates for TargetHttpsProxy.",
+ // "description": "Replaces SslCertificates for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpsProxies.setSslCertificates",
// "parameterOrder": [
@@ -117745,7 +118208,7 @@ type TargetHttpsProxiesSetSslPolicyCall struct {
// policy specifies the server-side support for SSL features. This
// affects connections between clients and the HTTPS proxy load
// balancer. They do not affect the connection between the load balancer
-// and the backends.
+// and the backends. (== suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) SetSslPolicy(project string, targetHttpsProxy string, sslpolicyreference *SslPolicyReference) *TargetHttpsProxiesSetSslPolicyCall {
c := &TargetHttpsProxiesSetSslPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -117800,7 +118263,7 @@ func (c *TargetHttpsProxiesSetSslPolicyCall) Header() http.Header {
func (c *TargetHttpsProxiesSetSslPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -117865,7 +118328,7 @@ func (c *TargetHttpsProxiesSetSslPolicyCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Sets the SSL policy for TargetHttpsProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the HTTPS proxy load balancer. They do not affect the connection between the load balancer and the backends.",
+ // "description": "Sets the SSL policy for TargetHttpsProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the HTTPS proxy load balancer. They do not affect the connection between the load balancer and the backends. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpsProxies.setSslPolicy",
// "parameterOrder": [
@@ -117919,7 +118382,8 @@ type TargetHttpsProxiesSetUrlMapCall struct {
header_ http.Header
}
-// SetUrlMap: Changes the URL map for TargetHttpsProxy.
+// SetUrlMap: Changes the URL map for TargetHttpsProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetHttpsProxiesService) SetUrlMap(project string, targetHttpsProxy string, urlmapreference *UrlMapReference) *TargetHttpsProxiesSetUrlMapCall {
c := &TargetHttpsProxiesSetUrlMapCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -117974,7 +118438,7 @@ func (c *TargetHttpsProxiesSetUrlMapCall) Header() http.Header {
func (c *TargetHttpsProxiesSetUrlMapCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -118039,7 +118503,7 @@ func (c *TargetHttpsProxiesSetUrlMapCall) Do(opts ...googleapi.CallOption) (*Ope
}
return ret, nil
// {
- // "description": "Changes the URL map for TargetHttpsProxy.",
+ // "description": "Changes the URL map for TargetHttpsProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetHttpsProxies.setUrlMap",
// "parameterOrder": [
@@ -118093,7 +118557,8 @@ type TargetInstancesAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of target instances.
+// AggregatedList: Retrieves an aggregated list of target instances. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetInstances/aggregatedList
func (r *TargetInstancesService) AggregatedList(project string) *TargetInstancesAggregatedListCall {
c := &TargetInstancesAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -118201,7 +118666,7 @@ func (c *TargetInstancesAggregatedListCall) Header() http.Header {
func (c *TargetInstancesAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -118263,7 +118728,7 @@ func (c *TargetInstancesAggregatedListCall) Do(opts ...googleapi.CallOption) (*T
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of target instances.",
+ // "description": "Retrieves an aggregated list of target instances. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetInstances.aggregatedList",
// "parameterOrder": [
@@ -118347,7 +118812,8 @@ type TargetInstancesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetInstance resource.
+// Delete: Deletes the specified TargetInstance resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetInstances/delete
func (r *TargetInstancesService) Delete(project string, zone string, targetInstance string) *TargetInstancesDeleteCall {
c := &TargetInstancesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -118403,7 +118869,7 @@ func (c *TargetInstancesDeleteCall) Header() http.Header {
func (c *TargetInstancesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -118464,7 +118930,7 @@ func (c *TargetInstancesDeleteCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Deletes the specified TargetInstance resource.",
+ // "description": "Deletes the specified TargetInstance resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetInstances.delete",
// "parameterOrder": [
@@ -118526,7 +118992,8 @@ type TargetInstancesGetCall struct {
}
// Get: Returns the specified TargetInstance resource. Gets a list of
-// available target instances by making a list() request.
+// available target instances by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetInstances/get
func (r *TargetInstancesService) Get(project string, zone string, targetInstance string) *TargetInstancesGetCall {
c := &TargetInstancesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -118573,7 +119040,7 @@ func (c *TargetInstancesGetCall) Header() http.Header {
func (c *TargetInstancesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -118637,7 +119104,7 @@ func (c *TargetInstancesGetCall) Do(opts ...googleapi.CallOption) (*TargetInstan
}
return ret, nil
// {
- // "description": "Returns the specified TargetInstance resource. Gets a list of available target instances by making a list() request.",
+ // "description": "Returns the specified TargetInstance resource. Gets a list of available target instances by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetInstances.get",
// "parameterOrder": [
@@ -118694,7 +119161,8 @@ type TargetInstancesInsertCall struct {
}
// Insert: Creates a TargetInstance resource in the specified project
-// and zone using the data included in the request.
+// and zone using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetInstances/insert
func (r *TargetInstancesService) Insert(project string, zone string, targetinstance *TargetInstance) *TargetInstancesInsertCall {
c := &TargetInstancesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -118750,7 +119218,7 @@ func (c *TargetInstancesInsertCall) Header() http.Header {
func (c *TargetInstancesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -118815,7 +119283,7 @@ func (c *TargetInstancesInsertCall) Do(opts ...googleapi.CallOption) (*Operation
}
return ret, nil
// {
- // "description": "Creates a TargetInstance resource in the specified project and zone using the data included in the request.",
+ // "description": "Creates a TargetInstance resource in the specified project and zone using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetInstances.insert",
// "parameterOrder": [
@@ -118871,7 +119339,8 @@ type TargetInstancesListCall struct {
}
// List: Retrieves a list of TargetInstance resources available to the
-// specified project and zone.
+// specified project and zone. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetInstances/list
func (r *TargetInstancesService) List(project string, zone string) *TargetInstancesListCall {
c := &TargetInstancesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -118980,7 +119449,7 @@ func (c *TargetInstancesListCall) Header() http.Header {
func (c *TargetInstancesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -119043,7 +119512,7 @@ func (c *TargetInstancesListCall) Do(opts ...googleapi.CallOption) (*TargetInsta
}
return ret, nil
// {
- // "description": "Retrieves a list of TargetInstance resources available to the specified project and zone.",
+ // "description": "Retrieves a list of TargetInstance resources available to the specified project and zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetInstances.list",
// "parameterOrder": [
@@ -119136,7 +119605,8 @@ type TargetPoolsAddHealthCheckCall struct {
header_ http.Header
}
-// AddHealthCheck: Adds health check URLs to a target pool.
+// AddHealthCheck: Adds health check URLs to a target pool. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/addHealthCheck
func (r *TargetPoolsService) AddHealthCheck(project string, region string, targetPool string, targetpoolsaddhealthcheckrequest *TargetPoolsAddHealthCheckRequest) *TargetPoolsAddHealthCheckCall {
c := &TargetPoolsAddHealthCheckCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -119193,7 +119663,7 @@ func (c *TargetPoolsAddHealthCheckCall) Header() http.Header {
func (c *TargetPoolsAddHealthCheckCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -119259,7 +119729,7 @@ func (c *TargetPoolsAddHealthCheckCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Adds health check URLs to a target pool.",
+ // "description": "Adds health check URLs to a target pool. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.addHealthCheck",
// "parameterOrder": [
@@ -119323,7 +119793,8 @@ type TargetPoolsAddInstanceCall struct {
header_ http.Header
}
-// AddInstance: Adds an instance to a target pool.
+// AddInstance: Adds an instance to a target pool. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/addInstance
func (r *TargetPoolsService) AddInstance(project string, region string, targetPool string, targetpoolsaddinstancerequest *TargetPoolsAddInstanceRequest) *TargetPoolsAddInstanceCall {
c := &TargetPoolsAddInstanceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -119380,7 +119851,7 @@ func (c *TargetPoolsAddInstanceCall) Header() http.Header {
func (c *TargetPoolsAddInstanceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -119446,7 +119917,7 @@ func (c *TargetPoolsAddInstanceCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Adds an instance to a target pool.",
+ // "description": "Adds an instance to a target pool. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.addInstance",
// "parameterOrder": [
@@ -119508,7 +119979,8 @@ type TargetPoolsAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of target pools.
+// AggregatedList: Retrieves an aggregated list of target pools. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/aggregatedList
func (r *TargetPoolsService) AggregatedList(project string) *TargetPoolsAggregatedListCall {
c := &TargetPoolsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -119616,7 +120088,7 @@ func (c *TargetPoolsAggregatedListCall) Header() http.Header {
func (c *TargetPoolsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -119678,7 +120150,7 @@ func (c *TargetPoolsAggregatedListCall) Do(opts ...googleapi.CallOption) (*Targe
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of target pools.",
+ // "description": "Retrieves an aggregated list of target pools. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetPools.aggregatedList",
// "parameterOrder": [
@@ -119762,7 +120234,8 @@ type TargetPoolsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified target pool.
+// Delete: Deletes the specified target pool. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/delete
func (r *TargetPoolsService) Delete(project string, region string, targetPool string) *TargetPoolsDeleteCall {
c := &TargetPoolsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -119818,7 +120291,7 @@ func (c *TargetPoolsDeleteCall) Header() http.Header {
func (c *TargetPoolsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -119879,7 +120352,7 @@ func (c *TargetPoolsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified target pool.",
+ // "description": "Deletes the specified target pool. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetPools.delete",
// "parameterOrder": [
@@ -119941,7 +120414,8 @@ type TargetPoolsGetCall struct {
}
// Get: Returns the specified target pool. Gets a list of available
-// target pools by making a list() request.
+// target pools by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/get
func (r *TargetPoolsService) Get(project string, region string, targetPool string) *TargetPoolsGetCall {
c := &TargetPoolsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -119988,7 +120462,7 @@ func (c *TargetPoolsGetCall) Header() http.Header {
func (c *TargetPoolsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -120052,7 +120526,7 @@ func (c *TargetPoolsGetCall) Do(opts ...googleapi.CallOption) (*TargetPool, erro
}
return ret, nil
// {
- // "description": "Returns the specified target pool. Gets a list of available target pools by making a list() request.",
+ // "description": "Returns the specified target pool. Gets a list of available target pools by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetPools.get",
// "parameterOrder": [
@@ -120110,7 +120584,8 @@ type TargetPoolsGetHealthCall struct {
}
// GetHealth: Gets the most recent health check results for each IP for
-// the instance that is referenced by the given target pool.
+// the instance that is referenced by the given target pool. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/getHealth
func (r *TargetPoolsService) GetHealth(project string, region string, targetPool string, instancereference *InstanceReference) *TargetPoolsGetHealthCall {
c := &TargetPoolsGetHealthCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -120148,7 +120623,7 @@ func (c *TargetPoolsGetHealthCall) Header() http.Header {
func (c *TargetPoolsGetHealthCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -120214,7 +120689,7 @@ func (c *TargetPoolsGetHealthCall) Do(opts ...googleapi.CallOption) (*TargetPool
}
return ret, nil
// {
- // "description": "Gets the most recent health check results for each IP for the instance that is referenced by the given target pool.",
+ // "description": "Gets the most recent health check results for each IP for the instance that is referenced by the given target pool. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.getHealth",
// "parameterOrder": [
@@ -120274,7 +120749,8 @@ type TargetPoolsInsertCall struct {
}
// Insert: Creates a target pool in the specified project and region
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/insert
func (r *TargetPoolsService) Insert(project string, region string, targetpool *TargetPool) *TargetPoolsInsertCall {
c := &TargetPoolsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -120330,7 +120806,7 @@ func (c *TargetPoolsInsertCall) Header() http.Header {
func (c *TargetPoolsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -120395,7 +120871,7 @@ func (c *TargetPoolsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates a target pool in the specified project and region using the data included in the request.",
+ // "description": "Creates a target pool in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.insert",
// "parameterOrder": [
@@ -120451,7 +120927,7 @@ type TargetPoolsListCall struct {
}
// List: Retrieves a list of target pools available to the specified
-// project and region.
+// project and region. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/list
func (r *TargetPoolsService) List(project string, region string) *TargetPoolsListCall {
c := &TargetPoolsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -120560,7 +121036,7 @@ func (c *TargetPoolsListCall) Header() http.Header {
func (c *TargetPoolsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -120623,7 +121099,7 @@ func (c *TargetPoolsListCall) Do(opts ...googleapi.CallOption) (*TargetPoolList,
}
return ret, nil
// {
- // "description": "Retrieves a list of target pools available to the specified project and region.",
+ // "description": "Retrieves a list of target pools available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetPools.list",
// "parameterOrder": [
@@ -120716,7 +121192,8 @@ type TargetPoolsRemoveHealthCheckCall struct {
header_ http.Header
}
-// RemoveHealthCheck: Removes health check URL from a target pool.
+// RemoveHealthCheck: Removes health check URL from a target pool. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/removeHealthCheck
func (r *TargetPoolsService) RemoveHealthCheck(project string, region string, targetPool string, targetpoolsremovehealthcheckrequest *TargetPoolsRemoveHealthCheckRequest) *TargetPoolsRemoveHealthCheckCall {
c := &TargetPoolsRemoveHealthCheckCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -120773,7 +121250,7 @@ func (c *TargetPoolsRemoveHealthCheckCall) Header() http.Header {
func (c *TargetPoolsRemoveHealthCheckCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -120839,7 +121316,7 @@ func (c *TargetPoolsRemoveHealthCheckCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Removes health check URL from a target pool.",
+ // "description": "Removes health check URL from a target pool. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.removeHealthCheck",
// "parameterOrder": [
@@ -120903,7 +121380,8 @@ type TargetPoolsRemoveInstanceCall struct {
header_ http.Header
}
-// RemoveInstance: Removes instance URL from a target pool.
+// RemoveInstance: Removes instance URL from a target pool. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/removeInstance
func (r *TargetPoolsService) RemoveInstance(project string, region string, targetPool string, targetpoolsremoveinstancerequest *TargetPoolsRemoveInstanceRequest) *TargetPoolsRemoveInstanceCall {
c := &TargetPoolsRemoveInstanceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -120960,7 +121438,7 @@ func (c *TargetPoolsRemoveInstanceCall) Header() http.Header {
func (c *TargetPoolsRemoveInstanceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -121026,7 +121504,7 @@ func (c *TargetPoolsRemoveInstanceCall) Do(opts ...googleapi.CallOption) (*Opera
}
return ret, nil
// {
- // "description": "Removes instance URL from a target pool.",
+ // "description": "Removes instance URL from a target pool. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.removeInstance",
// "parameterOrder": [
@@ -121090,7 +121568,8 @@ type TargetPoolsSetBackupCall struct {
header_ http.Header
}
-// SetBackup: Changes a backup target pool's configurations.
+// SetBackup: Changes a backup target pool's configurations. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/targetPools/setBackup
func (r *TargetPoolsService) SetBackup(project string, region string, targetPool string, targetreference *TargetReference) *TargetPoolsSetBackupCall {
c := &TargetPoolsSetBackupCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -121154,7 +121633,7 @@ func (c *TargetPoolsSetBackupCall) Header() http.Header {
func (c *TargetPoolsSetBackupCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -121220,7 +121699,7 @@ func (c *TargetPoolsSetBackupCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Changes a backup target pool's configurations.",
+ // "description": "Changes a backup target pool's configurations. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetPools.setBackup",
// "parameterOrder": [
@@ -121288,7 +121767,8 @@ type TargetSslProxiesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetSslProxy resource.
+// Delete: Deletes the specified TargetSslProxy resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) Delete(project string, targetSslProxy string) *TargetSslProxiesDeleteCall {
c := &TargetSslProxiesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -121342,7 +121822,7 @@ func (c *TargetSslProxiesDeleteCall) Header() http.Header {
func (c *TargetSslProxiesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -121402,7 +121882,7 @@ func (c *TargetSslProxiesDeleteCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Deletes the specified TargetSslProxy resource.",
+ // "description": "Deletes the specified TargetSslProxy resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetSslProxies.delete",
// "parameterOrder": [
@@ -121455,7 +121935,8 @@ type TargetSslProxiesGetCall struct {
}
// Get: Returns the specified TargetSslProxy resource. Gets a list of
-// available target SSL proxies by making a list() request.
+// available target SSL proxies by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) Get(project string, targetSslProxy string) *TargetSslProxiesGetCall {
c := &TargetSslProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -121500,7 +121981,7 @@ func (c *TargetSslProxiesGetCall) Header() http.Header {
func (c *TargetSslProxiesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -121563,7 +122044,7 @@ func (c *TargetSslProxiesGetCall) Do(opts ...googleapi.CallOption) (*TargetSslPr
}
return ret, nil
// {
- // "description": "Returns the specified TargetSslProxy resource. Gets a list of available target SSL proxies by making a list() request.",
+ // "description": "Returns the specified TargetSslProxy resource. Gets a list of available target SSL proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetSslProxies.get",
// "parameterOrder": [
@@ -121611,7 +122092,8 @@ type TargetSslProxiesInsertCall struct {
}
// Insert: Creates a TargetSslProxy resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *TargetSslProxiesService) Insert(project string, targetsslproxy *TargetSslProxy) *TargetSslProxiesInsertCall {
c := &TargetSslProxiesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -121665,7 +122147,7 @@ func (c *TargetSslProxiesInsertCall) Header() http.Header {
func (c *TargetSslProxiesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -121729,7 +122211,7 @@ func (c *TargetSslProxiesInsertCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Creates a TargetSslProxy resource in the specified project using the data included in the request.",
+ // "description": "Creates a TargetSslProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetSslProxies.insert",
// "parameterOrder": [
@@ -121776,7 +122258,7 @@ type TargetSslProxiesListCall struct {
}
// List: Retrieves the list of TargetSslProxy resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) List(project string) *TargetSslProxiesListCall {
c := &TargetSslProxiesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -121883,7 +122365,7 @@ func (c *TargetSslProxiesListCall) Header() http.Header {
func (c *TargetSslProxiesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -121945,7 +122427,7 @@ func (c *TargetSslProxiesListCall) Do(opts ...googleapi.CallOption) (*TargetSslP
}
return ret, nil
// {
- // "description": "Retrieves the list of TargetSslProxy resources available to the specified project.",
+ // "description": "Retrieves the list of TargetSslProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetSslProxies.list",
// "parameterOrder": [
@@ -122029,7 +122511,8 @@ type TargetSslProxiesSetBackendServiceCall struct {
header_ http.Header
}
-// SetBackendService: Changes the BackendService for TargetSslProxy.
+// SetBackendService: Changes the BackendService for TargetSslProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) SetBackendService(project string, targetSslProxy string, targetsslproxiessetbackendservicerequest *TargetSslProxiesSetBackendServiceRequest) *TargetSslProxiesSetBackendServiceCall {
c := &TargetSslProxiesSetBackendServiceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -122084,7 +122567,7 @@ func (c *TargetSslProxiesSetBackendServiceCall) Header() http.Header {
func (c *TargetSslProxiesSetBackendServiceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -122149,7 +122632,7 @@ func (c *TargetSslProxiesSetBackendServiceCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Changes the BackendService for TargetSslProxy.",
+ // "description": "Changes the BackendService for TargetSslProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetSslProxies.setBackendService",
// "parameterOrder": [
@@ -122204,7 +122687,8 @@ type TargetSslProxiesSetProxyHeaderCall struct {
header_ http.Header
}
-// SetProxyHeader: Changes the ProxyHeaderType for TargetSslProxy.
+// SetProxyHeader: Changes the ProxyHeaderType for TargetSslProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) SetProxyHeader(project string, targetSslProxy string, targetsslproxiessetproxyheaderrequest *TargetSslProxiesSetProxyHeaderRequest) *TargetSslProxiesSetProxyHeaderCall {
c := &TargetSslProxiesSetProxyHeaderCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -122259,7 +122743,7 @@ func (c *TargetSslProxiesSetProxyHeaderCall) Header() http.Header {
func (c *TargetSslProxiesSetProxyHeaderCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -122324,7 +122808,7 @@ func (c *TargetSslProxiesSetProxyHeaderCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Changes the ProxyHeaderType for TargetSslProxy.",
+ // "description": "Changes the ProxyHeaderType for TargetSslProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetSslProxies.setProxyHeader",
// "parameterOrder": [
@@ -122379,7 +122863,8 @@ type TargetSslProxiesSetSslCertificatesCall struct {
header_ http.Header
}
-// SetSslCertificates: Changes SslCertificates for TargetSslProxy.
+// SetSslCertificates: Changes SslCertificates for TargetSslProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) SetSslCertificates(project string, targetSslProxy string, targetsslproxiessetsslcertificatesrequest *TargetSslProxiesSetSslCertificatesRequest) *TargetSslProxiesSetSslCertificatesCall {
c := &TargetSslProxiesSetSslCertificatesCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -122434,7 +122919,7 @@ func (c *TargetSslProxiesSetSslCertificatesCall) Header() http.Header {
func (c *TargetSslProxiesSetSslCertificatesCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -122499,7 +122984,7 @@ func (c *TargetSslProxiesSetSslCertificatesCall) Do(opts ...googleapi.CallOption
}
return ret, nil
// {
- // "description": "Changes SslCertificates for TargetSslProxy.",
+ // "description": "Changes SslCertificates for TargetSslProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetSslProxies.setSslCertificates",
// "parameterOrder": [
@@ -122558,6 +123043,7 @@ type TargetSslProxiesSetSslPolicyCall struct {
// specifies the server-side support for SSL features. This affects
// connections between clients and the SSL proxy load balancer. They do
// not affect the connection between the load balancer and the backends.
+// (== suppress_warning http-rest-shadowed ==)
func (r *TargetSslProxiesService) SetSslPolicy(project string, targetSslProxy string, sslpolicyreference *SslPolicyReference) *TargetSslProxiesSetSslPolicyCall {
c := &TargetSslProxiesSetSslPolicyCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -122612,7 +123098,7 @@ func (c *TargetSslProxiesSetSslPolicyCall) Header() http.Header {
func (c *TargetSslProxiesSetSslPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -122677,7 +123163,7 @@ func (c *TargetSslProxiesSetSslPolicyCall) Do(opts ...googleapi.CallOption) (*Op
}
return ret, nil
// {
- // "description": "Sets the SSL policy for TargetSslProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the SSL proxy load balancer. They do not affect the connection between the load balancer and the backends.",
+ // "description": "Sets the SSL policy for TargetSslProxy. The SSL policy specifies the server-side support for SSL features. This affects connections between clients and the SSL proxy load balancer. They do not affect the connection between the load balancer and the backends. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetSslProxies.setSslPolicy",
// "parameterOrder": [
@@ -122730,7 +123216,8 @@ type TargetTcpProxiesDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified TargetTcpProxy resource.
+// Delete: Deletes the specified TargetTcpProxy resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetTcpProxiesService) Delete(project string, targetTcpProxy string) *TargetTcpProxiesDeleteCall {
c := &TargetTcpProxiesDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -122784,7 +123271,7 @@ func (c *TargetTcpProxiesDeleteCall) Header() http.Header {
func (c *TargetTcpProxiesDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -122844,7 +123331,7 @@ func (c *TargetTcpProxiesDeleteCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Deletes the specified TargetTcpProxy resource.",
+ // "description": "Deletes the specified TargetTcpProxy resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetTcpProxies.delete",
// "parameterOrder": [
@@ -122897,7 +123384,8 @@ type TargetTcpProxiesGetCall struct {
}
// Get: Returns the specified TargetTcpProxy resource. Gets a list of
-// available target TCP proxies by making a list() request.
+// available target TCP proxies by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetTcpProxiesService) Get(project string, targetTcpProxy string) *TargetTcpProxiesGetCall {
c := &TargetTcpProxiesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -122942,7 +123430,7 @@ func (c *TargetTcpProxiesGetCall) Header() http.Header {
func (c *TargetTcpProxiesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -123005,7 +123493,7 @@ func (c *TargetTcpProxiesGetCall) Do(opts ...googleapi.CallOption) (*TargetTcpPr
}
return ret, nil
// {
- // "description": "Returns the specified TargetTcpProxy resource. Gets a list of available target TCP proxies by making a list() request.",
+ // "description": "Returns the specified TargetTcpProxy resource. Gets a list of available target TCP proxies by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetTcpProxies.get",
// "parameterOrder": [
@@ -123053,7 +123541,8 @@ type TargetTcpProxiesInsertCall struct {
}
// Insert: Creates a TargetTcpProxy resource in the specified project
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *TargetTcpProxiesService) Insert(project string, targettcpproxy *TargetTcpProxy) *TargetTcpProxiesInsertCall {
c := &TargetTcpProxiesInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -123107,7 +123596,7 @@ func (c *TargetTcpProxiesInsertCall) Header() http.Header {
func (c *TargetTcpProxiesInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -123171,7 +123660,7 @@ func (c *TargetTcpProxiesInsertCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Creates a TargetTcpProxy resource in the specified project using the data included in the request.",
+ // "description": "Creates a TargetTcpProxy resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetTcpProxies.insert",
// "parameterOrder": [
@@ -123218,7 +123707,7 @@ type TargetTcpProxiesListCall struct {
}
// List: Retrieves the list of TargetTcpProxy resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
func (r *TargetTcpProxiesService) List(project string) *TargetTcpProxiesListCall {
c := &TargetTcpProxiesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -123325,7 +123814,7 @@ func (c *TargetTcpProxiesListCall) Header() http.Header {
func (c *TargetTcpProxiesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -123387,7 +123876,7 @@ func (c *TargetTcpProxiesListCall) Do(opts ...googleapi.CallOption) (*TargetTcpP
}
return ret, nil
// {
- // "description": "Retrieves the list of TargetTcpProxy resources available to the specified project.",
+ // "description": "Retrieves the list of TargetTcpProxy resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetTcpProxies.list",
// "parameterOrder": [
@@ -123471,7 +123960,8 @@ type TargetTcpProxiesSetBackendServiceCall struct {
header_ http.Header
}
-// SetBackendService: Changes the BackendService for TargetTcpProxy.
+// SetBackendService: Changes the BackendService for TargetTcpProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetTcpProxiesService) SetBackendService(project string, targetTcpProxy string, targettcpproxiessetbackendservicerequest *TargetTcpProxiesSetBackendServiceRequest) *TargetTcpProxiesSetBackendServiceCall {
c := &TargetTcpProxiesSetBackendServiceCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -123526,7 +124016,7 @@ func (c *TargetTcpProxiesSetBackendServiceCall) Header() http.Header {
func (c *TargetTcpProxiesSetBackendServiceCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -123591,7 +124081,7 @@ func (c *TargetTcpProxiesSetBackendServiceCall) Do(opts ...googleapi.CallOption)
}
return ret, nil
// {
- // "description": "Changes the BackendService for TargetTcpProxy.",
+ // "description": "Changes the BackendService for TargetTcpProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetTcpProxies.setBackendService",
// "parameterOrder": [
@@ -123646,7 +124136,8 @@ type TargetTcpProxiesSetProxyHeaderCall struct {
header_ http.Header
}
-// SetProxyHeader: Changes the ProxyHeaderType for TargetTcpProxy.
+// SetProxyHeader: Changes the ProxyHeaderType for TargetTcpProxy. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetTcpProxiesService) SetProxyHeader(project string, targetTcpProxy string, targettcpproxiessetproxyheaderrequest *TargetTcpProxiesSetProxyHeaderRequest) *TargetTcpProxiesSetProxyHeaderCall {
c := &TargetTcpProxiesSetProxyHeaderCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -123701,7 +124192,7 @@ func (c *TargetTcpProxiesSetProxyHeaderCall) Header() http.Header {
func (c *TargetTcpProxiesSetProxyHeaderCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -123766,7 +124257,7 @@ func (c *TargetTcpProxiesSetProxyHeaderCall) Do(opts ...googleapi.CallOption) (*
}
return ret, nil
// {
- // "description": "Changes the ProxyHeaderType for TargetTcpProxy.",
+ // "description": "Changes the ProxyHeaderType for TargetTcpProxy. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetTcpProxies.setProxyHeader",
// "parameterOrder": [
@@ -123821,6 +124312,7 @@ type TargetVpnGatewaysAggregatedListCall struct {
}
// AggregatedList: Retrieves an aggregated list of target VPN gateways.
+// (== suppress_warning http-rest-shadowed ==)
func (r *TargetVpnGatewaysService) AggregatedList(project string) *TargetVpnGatewaysAggregatedListCall {
c := &TargetVpnGatewaysAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -123927,7 +124419,7 @@ func (c *TargetVpnGatewaysAggregatedListCall) Header() http.Header {
func (c *TargetVpnGatewaysAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -123989,7 +124481,7 @@ func (c *TargetVpnGatewaysAggregatedListCall) Do(opts ...googleapi.CallOption) (
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of target VPN gateways.",
+ // "description": "Retrieves an aggregated list of target VPN gateways. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetVpnGateways.aggregatedList",
// "parameterOrder": [
@@ -124073,7 +124565,8 @@ type TargetVpnGatewaysDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified target VPN gateway.
+// Delete: Deletes the specified target VPN gateway. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetVpnGatewaysService) Delete(project string, region string, targetVpnGateway string) *TargetVpnGatewaysDeleteCall {
c := &TargetVpnGatewaysDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -124128,7 +124621,7 @@ func (c *TargetVpnGatewaysDeleteCall) Header() http.Header {
func (c *TargetVpnGatewaysDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -124189,7 +124682,7 @@ func (c *TargetVpnGatewaysDeleteCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Deletes the specified target VPN gateway.",
+ // "description": "Deletes the specified target VPN gateway. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.targetVpnGateways.delete",
// "parameterOrder": [
@@ -124251,7 +124744,8 @@ type TargetVpnGatewaysGetCall struct {
}
// Get: Returns the specified target VPN gateway. Gets a list of
-// available target VPN gateways by making a list() request.
+// available target VPN gateways by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *TargetVpnGatewaysService) Get(project string, region string, targetVpnGateway string) *TargetVpnGatewaysGetCall {
c := &TargetVpnGatewaysGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -124297,7 +124791,7 @@ func (c *TargetVpnGatewaysGetCall) Header() http.Header {
func (c *TargetVpnGatewaysGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -124361,7 +124855,7 @@ func (c *TargetVpnGatewaysGetCall) Do(opts ...googleapi.CallOption) (*TargetVpnG
}
return ret, nil
// {
- // "description": "Returns the specified target VPN gateway. Gets a list of available target VPN gateways by making a list() request.",
+ // "description": "Returns the specified target VPN gateway. Gets a list of available target VPN gateways by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetVpnGateways.get",
// "parameterOrder": [
@@ -124418,7 +124912,8 @@ type TargetVpnGatewaysInsertCall struct {
}
// Insert: Creates a target VPN gateway in the specified project and
-// region using the data included in the request.
+// region using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *TargetVpnGatewaysService) Insert(project string, region string, targetvpngateway *TargetVpnGateway) *TargetVpnGatewaysInsertCall {
c := &TargetVpnGatewaysInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -124473,7 +124968,7 @@ func (c *TargetVpnGatewaysInsertCall) Header() http.Header {
func (c *TargetVpnGatewaysInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -124538,7 +125033,7 @@ func (c *TargetVpnGatewaysInsertCall) Do(opts ...googleapi.CallOption) (*Operati
}
return ret, nil
// {
- // "description": "Creates a target VPN gateway in the specified project and region using the data included in the request.",
+ // "description": "Creates a target VPN gateway in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.targetVpnGateways.insert",
// "parameterOrder": [
@@ -124594,7 +125089,8 @@ type TargetVpnGatewaysListCall struct {
}
// List: Retrieves a list of target VPN gateways available to the
-// specified project and region.
+// specified project and region. (== suppress_warning http-rest-shadowed
+// ==)
func (r *TargetVpnGatewaysService) List(project string, region string) *TargetVpnGatewaysListCall {
c := &TargetVpnGatewaysListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -124702,7 +125198,7 @@ func (c *TargetVpnGatewaysListCall) Header() http.Header {
func (c *TargetVpnGatewaysListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -124765,7 +125261,7 @@ func (c *TargetVpnGatewaysListCall) Do(opts ...googleapi.CallOption) (*TargetVpn
}
return ret, nil
// {
- // "description": "Retrieves a list of target VPN gateways available to the specified project and region.",
+ // "description": "Retrieves a list of target VPN gateways available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.targetVpnGateways.list",
// "parameterOrder": [
@@ -124857,7 +125353,8 @@ type UrlMapsAggregatedListCall struct {
}
// AggregatedList: Retrieves the list of all UrlMap resources, regional
-// and global, available to the specified project.
+// and global, available to the specified project. (== suppress_warning
+// http-rest-shadowed ==)
func (r *UrlMapsService) AggregatedList(project string) *UrlMapsAggregatedListCall {
c := &UrlMapsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -124964,7 +125461,7 @@ func (c *UrlMapsAggregatedListCall) Header() http.Header {
func (c *UrlMapsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -125026,7 +125523,7 @@ func (c *UrlMapsAggregatedListCall) Do(opts ...googleapi.CallOption) (*UrlMapsAg
}
return ret, nil
// {
- // "description": "Retrieves the list of all UrlMap resources, regional and global, available to the specified project.",
+ // "description": "Retrieves the list of all UrlMap resources, regional and global, available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.urlMaps.aggregatedList",
// "parameterOrder": [
@@ -125109,7 +125606,8 @@ type UrlMapsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified UrlMap resource.
+// Delete: Deletes the specified UrlMap resource. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/delete
func (r *UrlMapsService) Delete(project string, urlMap string) *UrlMapsDeleteCall {
c := &UrlMapsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -125164,7 +125662,7 @@ func (c *UrlMapsDeleteCall) Header() http.Header {
func (c *UrlMapsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -125224,7 +125722,7 @@ func (c *UrlMapsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Deletes the specified UrlMap resource.",
+ // "description": "Deletes the specified UrlMap resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.urlMaps.delete",
// "parameterOrder": [
@@ -125277,7 +125775,8 @@ type UrlMapsGetCall struct {
}
// Get: Returns the specified UrlMap resource. Gets a list of available
-// URL maps by making a list() request.
+// URL maps by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/get
func (r *UrlMapsService) Get(project string, urlMap string) *UrlMapsGetCall {
c := &UrlMapsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -125323,7 +125822,7 @@ func (c *UrlMapsGetCall) Header() http.Header {
func (c *UrlMapsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -125386,7 +125885,7 @@ func (c *UrlMapsGetCall) Do(opts ...googleapi.CallOption) (*UrlMap, error) {
}
return ret, nil
// {
- // "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request.",
+ // "description": "Returns the specified UrlMap resource. Gets a list of available URL maps by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.urlMaps.get",
// "parameterOrder": [
@@ -125434,7 +125933,8 @@ type UrlMapsInsertCall struct {
}
// Insert: Creates a UrlMap resource in the specified project using the
-// data included in the request.
+// data included in the request. (== suppress_warning http-rest-shadowed
+// ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/insert
func (r *UrlMapsService) Insert(project string, urlmap *UrlMap) *UrlMapsInsertCall {
c := &UrlMapsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -125489,7 +125989,7 @@ func (c *UrlMapsInsertCall) Header() http.Header {
func (c *UrlMapsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -125553,7 +126053,7 @@ func (c *UrlMapsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Creates a UrlMap resource in the specified project using the data included in the request.",
+ // "description": "Creates a UrlMap resource in the specified project using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.urlMaps.insert",
// "parameterOrder": [
@@ -125601,7 +126101,8 @@ type UrlMapsInvalidateCacheCall struct {
}
// InvalidateCache: Initiates a cache invalidation operation,
-// invalidating the specified path, scoped to the specified UrlMap.
+// invalidating the specified path, scoped to the specified UrlMap. (==
+// suppress_warning http-rest-shadowed ==)
func (r *UrlMapsService) InvalidateCache(project string, urlMap string, cacheinvalidationrule *CacheInvalidationRule) *UrlMapsInvalidateCacheCall {
c := &UrlMapsInvalidateCacheCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -125656,7 +126157,7 @@ func (c *UrlMapsInvalidateCacheCall) Header() http.Header {
func (c *UrlMapsInvalidateCacheCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -125721,7 +126222,7 @@ func (c *UrlMapsInvalidateCacheCall) Do(opts ...googleapi.CallOption) (*Operatio
}
return ret, nil
// {
- // "description": "Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap.",
+ // "description": "Initiates a cache invalidation operation, invalidating the specified path, scoped to the specified UrlMap. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.urlMaps.invalidateCache",
// "parameterOrder": [
@@ -125776,7 +126277,7 @@ type UrlMapsListCall struct {
}
// List: Retrieves the list of UrlMap resources available to the
-// specified project.
+// specified project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/list
func (r *UrlMapsService) List(project string) *UrlMapsListCall {
c := &UrlMapsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -125884,7 +126385,7 @@ func (c *UrlMapsListCall) Header() http.Header {
func (c *UrlMapsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -125946,7 +126447,7 @@ func (c *UrlMapsListCall) Do(opts ...googleapi.CallOption) (*UrlMapList, error)
}
return ret, nil
// {
- // "description": "Retrieves the list of UrlMap resources available to the specified project.",
+ // "description": "Retrieves the list of UrlMap resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.urlMaps.list",
// "parameterOrder": [
@@ -126032,7 +126533,8 @@ type UrlMapsPatchCall struct {
// Patch: Patches the specified UrlMap resource with the data included
// in the request. This method supports PATCH semantics and uses the
-// JSON merge patch format and processing rules.
+// JSON merge patch format and processing rules. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/patch
func (r *UrlMapsService) Patch(project string, urlMap string, urlmap *UrlMap) *UrlMapsPatchCall {
c := &UrlMapsPatchCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -126088,7 +126590,7 @@ func (c *UrlMapsPatchCall) Header() http.Header {
func (c *UrlMapsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -126153,7 +126655,7 @@ func (c *UrlMapsPatchCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules.",
+ // "description": "Patches the specified UrlMap resource with the data included in the request. This method supports PATCH semantics and uses the JSON merge patch format and processing rules. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PATCH",
// "id": "compute.urlMaps.patch",
// "parameterOrder": [
@@ -126209,7 +126711,7 @@ type UrlMapsUpdateCall struct {
}
// Update: Updates the specified UrlMap resource with the data included
-// in the request.
+// in the request. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/update
func (r *UrlMapsService) Update(project string, urlMap string, urlmap *UrlMap) *UrlMapsUpdateCall {
c := &UrlMapsUpdateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -126265,7 +126767,7 @@ func (c *UrlMapsUpdateCall) Header() http.Header {
func (c *UrlMapsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -126330,7 +126832,7 @@ func (c *UrlMapsUpdateCall) Do(opts ...googleapi.CallOption) (*Operation, error)
}
return ret, nil
// {
- // "description": "Updates the specified UrlMap resource with the data included in the request.",
+ // "description": "Updates the specified UrlMap resource with the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "PUT",
// "id": "compute.urlMaps.update",
// "parameterOrder": [
@@ -126387,7 +126889,7 @@ type UrlMapsValidateCall struct {
// Validate: Runs static validation for the UrlMap. In particular, the
// tests of the provided UrlMap will be run. Calling this method does
-// NOT create the UrlMap.
+// NOT create the UrlMap. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/urlMaps/validate
func (r *UrlMapsService) Validate(project string, urlMap string, urlmapsvalidaterequest *UrlMapsValidateRequest) *UrlMapsValidateCall {
c := &UrlMapsValidateCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -126424,7 +126926,7 @@ func (c *UrlMapsValidateCall) Header() http.Header {
func (c *UrlMapsValidateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -126489,7 +126991,7 @@ func (c *UrlMapsValidateCall) Do(opts ...googleapi.CallOption) (*UrlMapsValidate
}
return ret, nil
// {
- // "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap.",
+ // "description": "Runs static validation for the UrlMap. In particular, the tests of the provided UrlMap will be run. Calling this method does NOT create the UrlMap. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.urlMaps.validate",
// "parameterOrder": [
@@ -126538,7 +127040,8 @@ type VpnGatewaysAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of VPN gateways.
+// AggregatedList: Retrieves an aggregated list of VPN gateways. (==
+// suppress_warning http-rest-shadowed ==)
func (r *VpnGatewaysService) AggregatedList(project string) *VpnGatewaysAggregatedListCall {
c := &VpnGatewaysAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -126645,7 +127148,7 @@ func (c *VpnGatewaysAggregatedListCall) Header() http.Header {
func (c *VpnGatewaysAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -126707,7 +127210,7 @@ func (c *VpnGatewaysAggregatedListCall) Do(opts ...googleapi.CallOption) (*VpnGa
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of VPN gateways.",
+ // "description": "Retrieves an aggregated list of VPN gateways. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnGateways.aggregatedList",
// "parameterOrder": [
@@ -126791,7 +127294,8 @@ type VpnGatewaysDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified VPN gateway.
+// Delete: Deletes the specified VPN gateway. (== suppress_warning
+// http-rest-shadowed ==)
func (r *VpnGatewaysService) Delete(project string, region string, vpnGateway string) *VpnGatewaysDeleteCall {
c := &VpnGatewaysDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -126846,7 +127350,7 @@ func (c *VpnGatewaysDeleteCall) Header() http.Header {
func (c *VpnGatewaysDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -126907,7 +127411,7 @@ func (c *VpnGatewaysDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Deletes the specified VPN gateway.",
+ // "description": "Deletes the specified VPN gateway. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.vpnGateways.delete",
// "parameterOrder": [
@@ -126969,7 +127473,8 @@ type VpnGatewaysGetCall struct {
}
// Get: Returns the specified VPN gateway. Gets a list of available VPN
-// gateways by making a list() request.
+// gateways by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *VpnGatewaysService) Get(project string, region string, vpnGateway string) *VpnGatewaysGetCall {
c := &VpnGatewaysGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -127015,7 +127520,7 @@ func (c *VpnGatewaysGetCall) Header() http.Header {
func (c *VpnGatewaysGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -127079,7 +127584,7 @@ func (c *VpnGatewaysGetCall) Do(opts ...googleapi.CallOption) (*VpnGateway, erro
}
return ret, nil
// {
- // "description": "Returns the specified VPN gateway. Gets a list of available VPN gateways by making a list() request.",
+ // "description": "Returns the specified VPN gateway. Gets a list of available VPN gateways by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnGateways.get",
// "parameterOrder": [
@@ -127136,7 +127641,8 @@ type VpnGatewaysGetStatusCall struct {
header_ http.Header
}
-// GetStatus: Returns the status for the specified VPN gateway.
+// GetStatus: Returns the status for the specified VPN gateway. (==
+// suppress_warning http-rest-shadowed ==)
func (r *VpnGatewaysService) GetStatus(project string, region string, vpnGateway string) *VpnGatewaysGetStatusCall {
c := &VpnGatewaysGetStatusCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -127182,7 +127688,7 @@ func (c *VpnGatewaysGetStatusCall) Header() http.Header {
func (c *VpnGatewaysGetStatusCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -127246,7 +127752,7 @@ func (c *VpnGatewaysGetStatusCall) Do(opts ...googleapi.CallOption) (*VpnGateway
}
return ret, nil
// {
- // "description": "Returns the status for the specified VPN gateway.",
+ // "description": "Returns the status for the specified VPN gateway. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnGateways.getStatus",
// "parameterOrder": [
@@ -127303,7 +127809,8 @@ type VpnGatewaysInsertCall struct {
}
// Insert: Creates a VPN gateway in the specified project and region
-// using the data included in the request.
+// using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *VpnGatewaysService) Insert(project string, region string, vpngateway *VpnGateway) *VpnGatewaysInsertCall {
c := &VpnGatewaysInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -127358,7 +127865,7 @@ func (c *VpnGatewaysInsertCall) Header() http.Header {
func (c *VpnGatewaysInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -127423,7 +127930,7 @@ func (c *VpnGatewaysInsertCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Creates a VPN gateway in the specified project and region using the data included in the request.",
+ // "description": "Creates a VPN gateway in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.vpnGateways.insert",
// "parameterOrder": [
@@ -127479,7 +127986,7 @@ type VpnGatewaysListCall struct {
}
// List: Retrieves a list of VPN gateways available to the specified
-// project and region.
+// project and region. (== suppress_warning http-rest-shadowed ==)
func (r *VpnGatewaysService) List(project string, region string) *VpnGatewaysListCall {
c := &VpnGatewaysListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -127587,7 +128094,7 @@ func (c *VpnGatewaysListCall) Header() http.Header {
func (c *VpnGatewaysListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -127650,7 +128157,7 @@ func (c *VpnGatewaysListCall) Do(opts ...googleapi.CallOption) (*VpnGatewayList,
}
return ret, nil
// {
- // "description": "Retrieves a list of VPN gateways available to the specified project and region.",
+ // "description": "Retrieves a list of VPN gateways available to the specified project and region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnGateways.list",
// "parameterOrder": [
@@ -127744,7 +128251,8 @@ type VpnGatewaysSetLabelsCall struct {
}
// SetLabels: Sets the labels on a VpnGateway. To learn more about
-// labels, read the Labeling Resources documentation.
+// labels, read the Labeling Resources documentation. (==
+// suppress_warning http-rest-shadowed ==)
func (r *VpnGatewaysService) SetLabels(project string, region string, resource string, regionsetlabelsrequest *RegionSetLabelsRequest) *VpnGatewaysSetLabelsCall {
c := &VpnGatewaysSetLabelsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -127800,7 +128308,7 @@ func (c *VpnGatewaysSetLabelsCall) Header() http.Header {
func (c *VpnGatewaysSetLabelsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -127866,7 +128374,7 @@ func (c *VpnGatewaysSetLabelsCall) Do(opts ...googleapi.CallOption) (*Operation,
}
return ret, nil
// {
- // "description": "Sets the labels on a VpnGateway. To learn more about labels, read the Labeling Resources documentation.",
+ // "description": "Sets the labels on a VpnGateway. To learn more about labels, read the Labeling Resources documentation. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.vpnGateways.setLabels",
// "parameterOrder": [
@@ -127931,7 +128439,7 @@ type VpnGatewaysTestIamPermissionsCall struct {
}
// TestIamPermissions: Returns permissions that a caller has on the
-// specified resource.
+// specified resource. (== suppress_warning http-rest-shadowed ==)
func (r *VpnGatewaysService) TestIamPermissions(project string, region string, resource string, testpermissionsrequest *TestPermissionsRequest) *VpnGatewaysTestIamPermissionsCall {
c := &VpnGatewaysTestIamPermissionsCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -127968,7 +128476,7 @@ func (c *VpnGatewaysTestIamPermissionsCall) Header() http.Header {
func (c *VpnGatewaysTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -128034,7 +128542,7 @@ func (c *VpnGatewaysTestIamPermissionsCall) Do(opts ...googleapi.CallOption) (*T
}
return ret, nil
// {
- // "description": "Returns permissions that a caller has on the specified resource.",
+ // "description": "Returns permissions that a caller has on the specified resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.vpnGateways.testIamPermissions",
// "parameterOrder": [
@@ -128092,7 +128600,8 @@ type VpnTunnelsAggregatedListCall struct {
header_ http.Header
}
-// AggregatedList: Retrieves an aggregated list of VPN tunnels.
+// AggregatedList: Retrieves an aggregated list of VPN tunnels. (==
+// suppress_warning http-rest-shadowed ==)
func (r *VpnTunnelsService) AggregatedList(project string) *VpnTunnelsAggregatedListCall {
c := &VpnTunnelsAggregatedListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -128199,7 +128708,7 @@ func (c *VpnTunnelsAggregatedListCall) Header() http.Header {
func (c *VpnTunnelsAggregatedListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -128261,7 +128770,7 @@ func (c *VpnTunnelsAggregatedListCall) Do(opts ...googleapi.CallOption) (*VpnTun
}
return ret, nil
// {
- // "description": "Retrieves an aggregated list of VPN tunnels.",
+ // "description": "Retrieves an aggregated list of VPN tunnels. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnTunnels.aggregatedList",
// "parameterOrder": [
@@ -128345,7 +128854,8 @@ type VpnTunnelsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified VpnTunnel resource.
+// Delete: Deletes the specified VpnTunnel resource. (==
+// suppress_warning http-rest-shadowed ==)
func (r *VpnTunnelsService) Delete(project string, region string, vpnTunnel string) *VpnTunnelsDeleteCall {
c := &VpnTunnelsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -128400,7 +128910,7 @@ func (c *VpnTunnelsDeleteCall) Header() http.Header {
func (c *VpnTunnelsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -128461,7 +128971,7 @@ func (c *VpnTunnelsDeleteCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Deletes the specified VpnTunnel resource.",
+ // "description": "Deletes the specified VpnTunnel resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.vpnTunnels.delete",
// "parameterOrder": [
@@ -128523,7 +129033,8 @@ type VpnTunnelsGetCall struct {
}
// Get: Returns the specified VpnTunnel resource. Gets a list of
-// available VPN tunnels by making a list() request.
+// available VPN tunnels by making a list() request. (==
+// suppress_warning http-rest-shadowed ==)
func (r *VpnTunnelsService) Get(project string, region string, vpnTunnel string) *VpnTunnelsGetCall {
c := &VpnTunnelsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -128569,7 +129080,7 @@ func (c *VpnTunnelsGetCall) Header() http.Header {
func (c *VpnTunnelsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -128633,7 +129144,7 @@ func (c *VpnTunnelsGetCall) Do(opts ...googleapi.CallOption) (*VpnTunnel, error)
}
return ret, nil
// {
- // "description": "Returns the specified VpnTunnel resource. Gets a list of available VPN tunnels by making a list() request.",
+ // "description": "Returns the specified VpnTunnel resource. Gets a list of available VPN tunnels by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnTunnels.get",
// "parameterOrder": [
@@ -128690,7 +129201,8 @@ type VpnTunnelsInsertCall struct {
}
// Insert: Creates a VpnTunnel resource in the specified project and
-// region using the data included in the request.
+// region using the data included in the request. (== suppress_warning
+// http-rest-shadowed ==)
func (r *VpnTunnelsService) Insert(project string, region string, vpntunnel *VpnTunnel) *VpnTunnelsInsertCall {
c := &VpnTunnelsInsertCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -128745,7 +129257,7 @@ func (c *VpnTunnelsInsertCall) Header() http.Header {
func (c *VpnTunnelsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -128810,7 +129322,7 @@ func (c *VpnTunnelsInsertCall) Do(opts ...googleapi.CallOption) (*Operation, err
}
return ret, nil
// {
- // "description": "Creates a VpnTunnel resource in the specified project and region using the data included in the request.",
+ // "description": "Creates a VpnTunnel resource in the specified project and region using the data included in the request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "POST",
// "id": "compute.vpnTunnels.insert",
// "parameterOrder": [
@@ -128866,7 +129378,8 @@ type VpnTunnelsListCall struct {
}
// List: Retrieves a list of VpnTunnel resources contained in the
-// specified project and region.
+// specified project and region. (== suppress_warning http-rest-shadowed
+// ==)
func (r *VpnTunnelsService) List(project string, region string) *VpnTunnelsListCall {
c := &VpnTunnelsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
c.project = project
@@ -128974,7 +129487,7 @@ func (c *VpnTunnelsListCall) Header() http.Header {
func (c *VpnTunnelsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -129037,7 +129550,7 @@ func (c *VpnTunnelsListCall) Do(opts ...googleapi.CallOption) (*VpnTunnelList, e
}
return ret, nil
// {
- // "description": "Retrieves a list of VpnTunnel resources contained in the specified project and region.",
+ // "description": "Retrieves a list of VpnTunnel resources contained in the specified project and region. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.vpnTunnels.list",
// "parameterOrder": [
@@ -129129,7 +129642,8 @@ type ZoneOperationsDeleteCall struct {
header_ http.Header
}
-// Delete: Deletes the specified zone-specific Operations resource.
+// Delete: Deletes the specified zone-specific Operations resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/zoneOperations/delete
func (r *ZoneOperationsService) Delete(project string, zone string, operation string) *ZoneOperationsDeleteCall {
c := &ZoneOperationsDeleteCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -129166,7 +129680,7 @@ func (c *ZoneOperationsDeleteCall) Header() http.Header {
func (c *ZoneOperationsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -129202,7 +129716,7 @@ func (c *ZoneOperationsDeleteCall) Do(opts ...googleapi.CallOption) error {
}
return nil
// {
- // "description": "Deletes the specified zone-specific Operations resource.",
+ // "description": "Deletes the specified zone-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "DELETE",
// "id": "compute.zoneOperations.delete",
// "parameterOrder": [
@@ -129255,7 +129769,8 @@ type ZoneOperationsGetCall struct {
header_ http.Header
}
-// Get: Retrieves the specified zone-specific Operations resource.
+// Get: Retrieves the specified zone-specific Operations resource. (==
+// suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/zoneOperations/get
func (r *ZoneOperationsService) Get(project string, zone string, operation string) *ZoneOperationsGetCall {
c := &ZoneOperationsGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -129302,7 +129817,7 @@ func (c *ZoneOperationsGetCall) Header() http.Header {
func (c *ZoneOperationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -129366,7 +129881,7 @@ func (c *ZoneOperationsGetCall) Do(opts ...googleapi.CallOption) (*Operation, er
}
return ret, nil
// {
- // "description": "Retrieves the specified zone-specific Operations resource.",
+ // "description": "Retrieves the specified zone-specific Operations resource. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.zoneOperations.get",
// "parameterOrder": [
@@ -129423,7 +129938,7 @@ type ZoneOperationsListCall struct {
}
// List: Retrieves a list of Operation resources contained within the
-// specified zone.
+// specified zone. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/zoneOperations/list
func (r *ZoneOperationsService) List(project string, zone string) *ZoneOperationsListCall {
c := &ZoneOperationsListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -129532,7 +130047,7 @@ func (c *ZoneOperationsListCall) Header() http.Header {
func (c *ZoneOperationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -129595,7 +130110,7 @@ func (c *ZoneOperationsListCall) Do(opts ...googleapi.CallOption) (*OperationLis
}
return ret, nil
// {
- // "description": "Retrieves a list of Operation resources contained within the specified zone.",
+ // "description": "Retrieves a list of Operation resources contained within the specified zone. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.zoneOperations.list",
// "parameterOrder": [
@@ -129688,7 +130203,8 @@ type ZonesGetCall struct {
}
// Get: Returns the specified Zone resource. Gets a list of available
-// zones by making a list() request.
+// zones by making a list() request. (== suppress_warning
+// http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/zones/get
func (r *ZonesService) Get(project string, zone string) *ZonesGetCall {
c := &ZonesGetCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -129734,7 +130250,7 @@ func (c *ZonesGetCall) Header() http.Header {
func (c *ZonesGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -129797,7 +130313,7 @@ func (c *ZonesGetCall) Do(opts ...googleapi.CallOption) (*Zone, error) {
}
return ret, nil
// {
- // "description": "Returns the specified Zone resource. Gets a list of available zones by making a list() request.",
+ // "description": "Returns the specified Zone resource. Gets a list of available zones by making a list() request. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.zones.get",
// "parameterOrder": [
@@ -129845,7 +130361,7 @@ type ZonesListCall struct {
}
// List: Retrieves the list of Zone resources available to the specified
-// project.
+// project. (== suppress_warning http-rest-shadowed ==)
// For details, see https://cloud.google.com/compute/docs/reference/latest/zones/list
func (r *ZonesService) List(project string) *ZonesListCall {
c := &ZonesListCall{s: r.s, urlParams_: make(gensupport.URLParams)}
@@ -129953,7 +130469,7 @@ func (c *ZonesListCall) Header() http.Header {
func (c *ZonesListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -130015,7 +130531,7 @@ func (c *ZonesListCall) Do(opts ...googleapi.CallOption) (*ZoneList, error) {
}
return ret, nil
// {
- // "description": "Retrieves the list of Zone resources available to the specified project.",
+ // "description": "Retrieves the list of Zone resources available to the specified project. (== suppress_warning http-rest-shadowed ==)",
// "httpMethod": "GET",
// "id": "compute.zones.list",
// "parameterOrder": [
diff --git a/vendor/google.golang.org/api/googleapi/googleapi.go b/vendor/google.golang.org/api/googleapi/googleapi.go
index ab53767624a3a..4431716d3b969 100644
--- a/vendor/google.golang.org/api/googleapi/googleapi.go
+++ b/vendor/google.golang.org/api/googleapi/googleapi.go
@@ -1,4 +1,4 @@
-// Copyright 2011 Google Inc. All rights reserved.
+// Copyright 2011 Google LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
@@ -16,7 +16,7 @@ import (
"net/url"
"strings"
- "google.golang.org/api/googleapi/internal/uritemplates"
+ "google.golang.org/api/internal/third_party/uritemplates"
)
// ContentTyper is an interface for Readers which know (or would like
@@ -256,14 +256,22 @@ func ProcessMediaOptions(opts []MediaOption) *MediaOptions {
// "http://www.golang.org/topics/myproject/mytopic". It strips all parent
// references (e.g. ../..) as well as anything after the host
// (e.g. /bar/gaz gets stripped out of foo.com/bar/gaz).
+//
+// ResolveRelative panics if either basestr or relstr is not able to be parsed.
func ResolveRelative(basestr, relstr string) string {
- u, _ := url.Parse(basestr)
+ u, err := url.Parse(basestr)
+ if err != nil {
+ panic(fmt.Sprintf("failed to parse %q", basestr))
+ }
afterColonPath := ""
if i := strings.IndexRune(relstr, ':'); i > 0 {
afterColonPath = relstr[i+1:]
relstr = relstr[:i]
}
- rel, _ := url.Parse(relstr)
+ rel, err := url.Parse(relstr)
+ if err != nil {
+ panic(fmt.Sprintf("failed to parse %q", relstr))
+ }
u = u.ResolveReference(rel)
us := u.String()
if afterColonPath != "" {
@@ -331,7 +339,7 @@ func ConvertVariant(v map[string]interface{}, dst interface{}) bool {
}
// A Field names a field to be retrieved with a partial response.
-// See https://developers.google.com/gdata/docs/2.0/basics#PartialResponse
+// https://cloud.google.com/storage/docs/json_api/v1/how-tos/performance
//
// Partial responses can dramatically reduce the amount of data that must be sent to your application.
// In order to request partial responses, you can specify the full list of fields
@@ -348,9 +356,6 @@ func ConvertVariant(v map[string]interface{}, dst interface{}) bool {
//
// svc.Events.List().Fields("nextPageToken", "items(id,updated)").Do()
//
-// More information about field formatting can be found here:
-// https://developers.google.com/+/api/#fields-syntax
-//
// Another way to find field names is through the Google API explorer:
// https://developers.google.com/apis-explorer/#p/
type Field string
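
The googleapi.go hunk above changes ResolveRelative from silently discarding url.Parse errors to panicking on them, so a malformed base or relative string now fails loudly instead of producing a corrupt URL. A minimal sketch of the documented happy path, reusing the input pair from the function's own doc comment:

```go
package main

import (
	"fmt"

	"google.golang.org/api/googleapi"
)

func main() {
	// Resolves a relative path against a base URL; this input pair is
	// taken from ResolveRelative's doc comment.
	u := googleapi.ResolveRelative("http://www.golang.org/", "topics/myproject/mytopic")
	fmt.Println(u) // http://www.golang.org/topics/myproject/mytopic

	// After this change, an input that url.Parse rejects (for example,
	// a URL containing an ASCII control character) panics instead of
	// yielding a silently wrong result.
}
```

Panicking is defensible here because the base URLs passed in by generated clients are compile-time constants, so a parse failure indicates a programming error rather than bad user input.
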
diff --git a/vendor/google.golang.org/api/googleapi/internal/uritemplates/LICENSE b/vendor/google.golang.org/api/googleapi/internal/uritemplates/LICENSE
deleted file mode 100644
index de9c88cb65cb3..0000000000000
--- a/vendor/google.golang.org/api/googleapi/internal/uritemplates/LICENSE
+++ /dev/null
@@ -1,18 +0,0 @@
-Copyright (c) 2013 Joshua Tacoma
-
-Permission is hereby granted, free of charge, to any person obtaining a copy of
-this software and associated documentation files (the "Software"), to deal in
-the Software without restriction, including without limitation the rights to
-use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
-the Software, and to permit persons to whom the Software is furnished to do so,
-subject to the following conditions:
-
-The above copyright notice and this permission notice shall be included in all
-copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
-FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
-COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
-IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
-CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/vendor/google.golang.org/api/googleapi/transport/apikey.go b/vendor/google.golang.org/api/googleapi/transport/apikey.go
index eca1ea2507711..4b6c0d527e3a9 100644
--- a/vendor/google.golang.org/api/googleapi/transport/apikey.go
+++ b/vendor/google.golang.org/api/googleapi/transport/apikey.go
@@ -1,4 +1,4 @@
-// Copyright 2012 Google Inc. All rights reserved.
+// Copyright 2012 Google LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/googleapi/types.go b/vendor/google.golang.org/api/googleapi/types.go
index a280e3021a15a..fabf74d50d0e3 100644
--- a/vendor/google.golang.org/api/googleapi/types.go
+++ b/vendor/google.golang.org/api/googleapi/types.go
@@ -1,4 +1,4 @@
-// Copyright 2013 Google Inc. All rights reserved.
+// Copyright 2013 Google LLC. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
diff --git a/vendor/google.golang.org/api/internal/creds.go b/vendor/google.golang.org/api/internal/creds.go
index 69b8659fddbd1..a6f9a2dea127e 100644
--- a/vendor/google.golang.org/api/internal/creds.go
+++ b/vendor/google.golang.org/api/internal/creds.go
@@ -1,16 +1,6 @@
-// Copyright 2017 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2017 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
package internal
diff --git a/vendor/google.golang.org/api/internal/gensupport/jsonfloat.go b/vendor/google.golang.org/api/internal/gensupport/jsonfloat.go
index 8377850811f03..13c2f930207b4 100644
--- a/vendor/google.golang.org/api/internal/gensupport/jsonfloat.go
+++ b/vendor/google.golang.org/api/internal/gensupport/jsonfloat.go
@@ -1,16 +1,6 @@
-// Copyright 2016 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2016 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
package gensupport
diff --git a/vendor/google.golang.org/api/internal/pool.go b/vendor/google.golang.org/api/internal/pool.go
index a4426dcb700fd..0680dd99015d1 100644
--- a/vendor/google.golang.org/api/internal/pool.go
+++ b/vendor/google.golang.org/api/internal/pool.go
@@ -1,16 +1,6 @@
-// Copyright 2016 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2016 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
package internal
diff --git a/vendor/google.golang.org/api/internal/settings.go b/vendor/google.golang.org/api/internal/settings.go
index 062301c65fd5c..544d715c87d0f 100644
--- a/vendor/google.golang.org/api/internal/settings.go
+++ b/vendor/google.golang.org/api/internal/settings.go
@@ -1,16 +1,6 @@
-// Copyright 2017 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2017 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// Package internal supports the options and transport packages.
package internal
@@ -27,19 +17,20 @@ import (
// DialSettings holds information needed to establish a connection with a
// Google API service.
type DialSettings struct {
- Endpoint string
- Scopes []string
- TokenSource oauth2.TokenSource
- Credentials *google.Credentials
- CredentialsFile string // if set, Token Source is ignored.
- CredentialsJSON []byte
- UserAgent string
- APIKey string
- Audiences []string
- HTTPClient *http.Client
- GRPCDialOpts []grpc.DialOption
- GRPCConn *grpc.ClientConn
- NoAuth bool
+ Endpoint string
+ Scopes []string
+ TokenSource oauth2.TokenSource
+ Credentials *google.Credentials
+ CredentialsFile string // if set, Token Source is ignored.
+ CredentialsJSON []byte
+ UserAgent string
+ APIKey string
+ Audiences []string
+ HTTPClient *http.Client
+ GRPCDialOpts []grpc.DialOption
+ GRPCConn *grpc.ClientConn
+ NoAuth bool
+ TelemetryDisabled bool
// Google API system parameters. For more information please read:
// https://cloud.google.com/apis/docs/system-parameters
diff --git a/vendor/google.golang.org/api/internal/third_party/uritemplates/LICENSE b/vendor/google.golang.org/api/internal/third_party/uritemplates/LICENSE
new file mode 100644
index 0000000000000..7109c6ef93239
--- /dev/null
+++ b/vendor/google.golang.org/api/internal/third_party/uritemplates/LICENSE
@@ -0,0 +1,27 @@
+Copyright (c) 2013 Joshua Tacoma. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/google.golang.org/api/internal/third_party/uritemplates/METADATA b/vendor/google.golang.org/api/internal/third_party/uritemplates/METADATA
new file mode 100644
index 0000000000000..c7f86fcd5fd8d
--- /dev/null
+++ b/vendor/google.golang.org/api/internal/third_party/uritemplates/METADATA
@@ -0,0 +1,14 @@
+name: "uritemplates"
+description:
+ "Package uritemplates is a level 4 implementation of RFC 6570 (URI "
+ "Template, http://tools.ietf.org/html/rfc6570)."
+
+third_party {
+ url {
+ type: GIT
+ value: "https://github.com/jtacoma/uritemplates"
+ }
+ version: "0.1"
+ last_upgrade_date { year: 2014 month: 8 day: 18 }
+ license_type: NOTICE
+}
diff --git a/vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go b/vendor/google.golang.org/api/internal/third_party/uritemplates/uritemplates.go
similarity index 98%
rename from vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go
rename to vendor/google.golang.org/api/internal/third_party/uritemplates/uritemplates.go
index 63bf0538301a1..8c27d19d752e3 100644
--- a/vendor/google.golang.org/api/googleapi/internal/uritemplates/uritemplates.go
+++ b/vendor/google.golang.org/api/internal/third_party/uritemplates/uritemplates.go
@@ -191,7 +191,7 @@ func parseTerm(term string) (result templateTerm, err error) {
err = errors.New("not a valid name: " + result.name)
}
if result.explode && result.truncate > 0 {
- err = errors.New("both explode and prefix modifers on same term")
+ err = errors.New("both explode and prefix modifiers on same term")
}
return result, err
}
diff --git a/vendor/google.golang.org/api/googleapi/internal/uritemplates/utils.go b/vendor/google.golang.org/api/internal/third_party/uritemplates/utils.go
similarity index 100%
rename from vendor/google.golang.org/api/googleapi/internal/uritemplates/utils.go
rename to vendor/google.golang.org/api/internal/third_party/uritemplates/utils.go
diff --git a/vendor/google.golang.org/api/iterator/iterator.go b/vendor/google.golang.org/api/iterator/iterator.go
index 3c8ea7732af28..1799b5d9af59a 100644
--- a/vendor/google.golang.org/api/iterator/iterator.go
+++ b/vendor/google.golang.org/api/iterator/iterator.go
@@ -1,16 +1,6 @@
-// Copyright 2016 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2016 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// Package iterator provides support for standard Google API iterators.
// See https://github.com/GoogleCloudPlatform/gcloud-golang/wiki/Iterator-Guidelines.
@@ -82,17 +72,23 @@ type PageInfo struct {
// It is not a stable interface.
var NewPageInfo = newPageInfo
-// If an iterator can support paging, its iterator-creating method should call
-// this (via the NewPageInfo variable above).
+// newPageInfo creates and returns a PageInfo and a next func. If an iterator can
+// support paging, its iterator-creating method should call this. Each time the
+// iterator's Next is called, it should call the returned next fn to determine
+// whether a next item exists, and if so it should pop an item from the buffer.
//
-// The fetch, bufLen and takeBuf arguments provide access to the
-// iterator's internal slice of buffered items. They behave as described in
-// PageInfo, above.
+// The fetch, bufLen and takeBuf arguments provide access to the iterator's
+// internal slice of buffered items. They behave as described in PageInfo, above.
//
// The return value is the PageInfo.next method bound to the returned PageInfo value.
// (Returning it avoids exporting PageInfo.next.)
-func newPageInfo(fetch func(int, string) (string, error), bufLen func() int, takeBuf func() interface{}) (*PageInfo, func() error) {
- pi := &PageInfo{
+//
+// Note: the returned PageInfo and next fn do not remove items from the buffer.
+// It is up to the iterator using these to remove items from the buffer:
+// typically by performing a pop in its Next. If items are not removed from the
+// buffer, memory may grow unbounded.
+func newPageInfo(fetch func(int, string) (string, error), bufLen func() int, takeBuf func() interface{}) (pi *PageInfo, next func() error) {
+ pi = &PageInfo{
fetch: fetch,
bufLen: bufLen,
takeBuf: takeBuf,
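
The clarified contract is worth a sketch. Below is a hypothetical iterator (not part of this package) wired up through the exported NewPageInfo variable; the key line is the pop in Next, which the reworded comment makes clear is the caller's responsibility:

```go
package sketch

import "google.golang.org/api/iterator"

type stringIterator struct {
	pageInfo *iterator.PageInfo
	nextFunc func() error
	items    []string // the internal buffer that bufLen/takeBuf inspect
}

func newStringIterator(fetch func(pageSize int, pageToken string) (string, error)) *stringIterator {
	it := &stringIterator{}
	it.pageInfo, it.nextFunc = iterator.NewPageInfo(
		fetch,                                // fetches one page and fills it.items
		func() int { return len(it.items) }, // bufLen
		func() interface{} { b := it.items; it.items = nil; return b }, // takeBuf
	)
	return it
}

func (it *stringIterator) PageInfo() *iterator.PageInfo { return it.pageInfo }

func (it *stringIterator) Next() (string, error) {
	if err := it.nextFunc(); err != nil {
		return "", err // iterator.Done once all pages are exhausted
	}
	item := it.items[0]
	it.items = it.items[1:] // the pop; without it the buffer grows unbounded
	return item, nil
}
```
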
diff --git a/vendor/google.golang.org/api/option/credentials_go19.go b/vendor/google.golang.org/api/option/credentials_go19.go
index 0636a8294547a..d06f918b0e6f5 100644
--- a/vendor/google.golang.org/api/option/credentials_go19.go
+++ b/vendor/google.golang.org/api/option/credentials_go19.go
@@ -1,16 +1,6 @@
-// Copyright 2018 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2018 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build go1.9
diff --git a/vendor/google.golang.org/api/option/credentials_notgo19.go b/vendor/google.golang.org/api/option/credentials_notgo19.go
index 74d3a4b5b918c..0ce107a624ac7 100644
--- a/vendor/google.golang.org/api/option/credentials_notgo19.go
+++ b/vendor/google.golang.org/api/option/credentials_notgo19.go
@@ -1,16 +1,6 @@
-// Copyright 2018 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2018 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build !go1.9
diff --git a/vendor/google.golang.org/api/option/option.go b/vendor/google.golang.org/api/option/option.go
index 0a1c2dba9e382..8a4cd166cac16 100644
--- a/vendor/google.golang.org/api/option/option.go
+++ b/vendor/google.golang.org/api/option/option.go
@@ -1,16 +1,6 @@
-// Copyright 2017 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2017 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// Package option contains options for Google API clients.
package option
@@ -233,3 +223,16 @@ type withRequestReason string
func (w withRequestReason) Apply(o *internal.DialSettings) {
o.RequestReason = string(w)
}
+
+// WithTelemetryDisabled returns a ClientOption that disables default telemetry (OpenCensus)
+// settings on gRPC and HTTP clients.
+// An example reason would be to bind custom telemetry that overrides the defaults.
+func WithTelemetryDisabled() ClientOption {
+ return withTelemetryDisabledOption{}
+}
+
+type withTelemetryDisabledOption struct{}
+
+func (w withTelemetryDisabledOption) Apply(o *internal.DialSettings) {
+ o.TelemetryDisabled = true
+}
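
A usage sketch for the new option; the storage client is just an assumed example consumer, not the only one:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// Opt out of the default OpenCensus stats/trace instrumentation,
	// for example because custom telemetry is bound separately.
	client, err := storage.NewClient(ctx, option.WithTelemetryDisabled())
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
```
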
diff --git a/vendor/google.golang.org/api/storage/v1/storage-api.json b/vendor/google.golang.org/api/storage/v1/storage-api.json
index e2b4980b0d3c3..6dcb56dcf97fc 100644
--- a/vendor/google.golang.org/api/storage/v1/storage-api.json
+++ b/vendor/google.golang.org/api/storage/v1/storage-api.json
@@ -26,7 +26,7 @@
"description": "Stores and retrieves potentially large, immutable data objects.",
"discoveryVersion": "v1",
"documentationLink": "https://developers.google.com/storage/docs/json_api/",
- "etag": "\"LYADMvHWYH2ul9D6m9UT9gT77YM/siVp-wXOlRuFwENWg2J2I4L8CMg\"",
+ "etag": "\"F5McR9eEaw0XRpaO3M9gbIugkbs/bQWWH-5yykbmINHZHPMOypW2I3M\"",
"icons": {
"x16": "https://www.google.com/images/icons/product/cloud_storage-16.png",
"x32": "https://www.google.com/images/icons/product/cloud_storage-32.png"
@@ -3061,6 +3061,7 @@
"scopes": [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/cloud-platform.read-only",
+ "https://www.googleapis.com/auth/devstorage.full_control",
"https://www.googleapis.com/auth/devstorage.read_only"
]
},
@@ -3203,7 +3204,7 @@
}
}
},
- "revision": "20190910",
+ "revision": "20191011",
"rootUrl": "https://www.googleapis.com/",
"schemas": {
"Bucket": {
diff --git a/vendor/google.golang.org/api/storage/v1/storage-gen.go b/vendor/google.golang.org/api/storage/v1/storage-gen.go
index 9a0360563abe2..fd7d9488cdf6b 100644
--- a/vendor/google.golang.org/api/storage/v1/storage-gen.go
+++ b/vendor/google.golang.org/api/storage/v1/storage-gen.go
@@ -2392,7 +2392,7 @@ func (c *BucketAccessControlsDeleteCall) Header() http.Header {
func (c *BucketAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -2540,7 +2540,7 @@ func (c *BucketAccessControlsGetCall) Header() http.Header {
func (c *BucketAccessControlsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -2707,7 +2707,7 @@ func (c *BucketAccessControlsInsertCall) Header() http.Header {
func (c *BucketAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -2880,7 +2880,7 @@ func (c *BucketAccessControlsListCall) Header() http.Header {
func (c *BucketAccessControlsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3041,7 +3041,7 @@ func (c *BucketAccessControlsPatchCall) Header() http.Header {
func (c *BucketAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3215,7 +3215,7 @@ func (c *BucketAccessControlsUpdateCall) Header() http.Header {
func (c *BucketAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3401,7 +3401,7 @@ func (c *BucketsDeleteCall) Header() http.Header {
func (c *BucketsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3580,7 +3580,7 @@ func (c *BucketsGetCall) Header() http.Header {
func (c *BucketsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -3786,7 +3786,7 @@ func (c *BucketsGetIamPolicyCall) Header() http.Header {
func (c *BucketsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4003,7 +4003,7 @@ func (c *BucketsInsertCall) Header() http.Header {
func (c *BucketsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4260,7 +4260,7 @@ func (c *BucketsListCall) Header() http.Header {
func (c *BucketsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4470,7 +4470,7 @@ func (c *BucketsLockRetentionPolicyCall) Header() http.Header {
func (c *BucketsLockRetentionPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4705,7 +4705,7 @@ func (c *BucketsPatchCall) Header() http.Header {
func (c *BucketsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -4934,7 +4934,7 @@ func (c *BucketsSetIamPolicyCall) Header() http.Header {
func (c *BucketsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5109,7 +5109,7 @@ func (c *BucketsTestIamPermissionsCall) Header() http.Header {
func (c *BucketsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5349,7 +5349,7 @@ func (c *BucketsUpdateCall) Header() http.Header {
func (c *BucketsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5561,7 +5561,7 @@ func (c *ChannelsStopCall) Header() http.Header {
func (c *ChannelsStopCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5678,7 +5678,7 @@ func (c *DefaultObjectAccessControlsDeleteCall) Header() http.Header {
func (c *DefaultObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5826,7 +5826,7 @@ func (c *DefaultObjectAccessControlsGetCall) Header() http.Header {
func (c *DefaultObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -5994,7 +5994,7 @@ func (c *DefaultObjectAccessControlsInsertCall) Header() http.Header {
func (c *DefaultObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6184,7 +6184,7 @@ func (c *DefaultObjectAccessControlsListCall) Header() http.Header {
func (c *DefaultObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6357,7 +6357,7 @@ func (c *DefaultObjectAccessControlsPatchCall) Header() http.Header {
func (c *DefaultObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6531,7 +6531,7 @@ func (c *DefaultObjectAccessControlsUpdateCall) Header() http.Header {
func (c *DefaultObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6703,7 +6703,7 @@ func (c *NotificationsDeleteCall) Header() http.Header {
func (c *NotificationsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -6851,7 +6851,7 @@ func (c *NotificationsGetCall) Header() http.Header {
func (c *NotificationsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7021,7 +7021,7 @@ func (c *NotificationsInsertCall) Header() http.Header {
func (c *NotificationsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7196,7 +7196,7 @@ func (c *NotificationsListCall) Header() http.Header {
func (c *NotificationsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7369,7 +7369,7 @@ func (c *ObjectAccessControlsDeleteCall) Header() http.Header {
func (c *ObjectAccessControlsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7541,7 +7541,7 @@ func (c *ObjectAccessControlsGetCall) Header() http.Header {
func (c *ObjectAccessControlsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7732,7 +7732,7 @@ func (c *ObjectAccessControlsInsertCall) Header() http.Header {
func (c *ObjectAccessControlsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -7929,7 +7929,7 @@ func (c *ObjectAccessControlsListCall) Header() http.Header {
func (c *ObjectAccessControlsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8114,7 +8114,7 @@ func (c *ObjectAccessControlsPatchCall) Header() http.Header {
func (c *ObjectAccessControlsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8312,7 +8312,7 @@ func (c *ObjectAccessControlsUpdateCall) Header() http.Header {
func (c *ObjectAccessControlsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8549,7 +8549,7 @@ func (c *ObjectsComposeCall) Header() http.Header {
func (c *ObjectsComposeCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -8881,7 +8881,7 @@ func (c *ObjectsCopyCall) Header() http.Header {
func (c *ObjectsCopyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -9204,7 +9204,7 @@ func (c *ObjectsDeleteCall) Header() http.Header {
func (c *ObjectsDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -9437,7 +9437,7 @@ func (c *ObjectsGetCall) Header() http.Header {
func (c *ObjectsGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -9687,7 +9687,7 @@ func (c *ObjectsGetIamPolicyCall) Header() http.Header {
func (c *ObjectsGetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -10004,7 +10004,7 @@ func (c *ObjectsInsertCall) Header() http.Header {
func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -10019,7 +10019,7 @@ func (c *ObjectsInsertCall) doRequest(alt string) (*http.Response, error) {
c.urlParams_.Set("prettyPrint", "false")
urls := googleapi.ResolveRelative(c.s.BasePath, "b/{bucket}/o")
if c.mediaInfo_ != nil {
- urls = strings.Replace(urls, "https://www.googleapis.com/", "https://www.googleapis.com/upload/", 1)
+ urls = googleapi.ResolveRelative(c.s.BasePath, "/upload/storage/v1/b/{bucket}/o")
c.urlParams_.Set("uploadType", c.mediaInfo_.UploadType())
}
if body == nil {
@@ -10359,7 +10359,7 @@ func (c *ObjectsListCall) Header() http.Header {
func (c *ObjectsListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -10666,7 +10666,7 @@ func (c *ObjectsPatchCall) Header() http.Header {
func (c *ObjectsPatchCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -11058,7 +11058,7 @@ func (c *ObjectsRewriteCall) Header() http.Header {
func (c *ObjectsRewriteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -11361,7 +11361,7 @@ func (c *ObjectsSetIamPolicyCall) Header() http.Header {
func (c *ObjectsSetIamPolicyCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -11561,7 +11561,7 @@ func (c *ObjectsTestIamPermissionsCall) Header() http.Header {
func (c *ObjectsTestIamPermissionsCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -11822,7 +11822,7 @@ func (c *ObjectsUpdateCall) Header() http.Header {
func (c *ObjectsUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -12122,7 +12122,7 @@ func (c *ObjectsWatchAllCall) Header() http.Header {
func (c *ObjectsWatchAllCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -12328,7 +12328,7 @@ func (c *ProjectsHmacKeysCreateCall) Header() http.Header {
func (c *ProjectsHmacKeysCreateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -12478,7 +12478,7 @@ func (c *ProjectsHmacKeysDeleteCall) Header() http.Header {
func (c *ProjectsHmacKeysDeleteCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -12613,7 +12613,7 @@ func (c *ProjectsHmacKeysGetCall) Header() http.Header {
func (c *ProjectsHmacKeysGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -12709,6 +12709,7 @@ func (c *ProjectsHmacKeysGetCall) Do(opts ...googleapi.CallOption) (*HmacKeyMeta
// "scopes": [
// "https://www.googleapis.com/auth/cloud-platform",
// "https://www.googleapis.com/auth/cloud-platform.read-only",
+ // "https://www.googleapis.com/auth/devstorage.full_control",
// "https://www.googleapis.com/auth/devstorage.read_only"
// ]
// }
@@ -12812,7 +12813,7 @@ func (c *ProjectsHmacKeysListCall) Header() http.Header {
func (c *ProjectsHmacKeysListCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -13007,7 +13008,7 @@ func (c *ProjectsHmacKeysUpdateCall) Header() http.Header {
func (c *ProjectsHmacKeysUpdateCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
@@ -13184,7 +13185,7 @@ func (c *ProjectsServiceAccountGetCall) Header() http.Header {
func (c *ProjectsServiceAccountGetCall) doRequest(alt string) (*http.Response, error) {
reqHeaders := make(http.Header)
- reqHeaders.Set("x-goog-api-client", "gl-go/1.11.0 gdcl/20190926")
+ reqHeaders.Set("x-goog-api-client", "gl-go/1.13.4 gdcl/20191114")
for k, v := range c.header_ {
reqHeaders[k] = v
}
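
The one behavioral change buried in the regenerated code above is in ObjectsInsertCall.doRequest: the media-upload URL is now derived from BasePath via ResolveRelative rather than a literal strings.Replace on "https://www.googleapis.com/". A sketch of the difference (the non-default endpoint is an assumption for illustration):

```go
package main

import (
	"fmt"

	"google.golang.org/api/googleapi"
)

func main() {
	// With the default base path, the result is the same as before.
	fmt.Println(googleapi.ResolveRelative("https://www.googleapis.com/storage/v1/",
		"/upload/storage/v1/b/{bucket}/o"))
	// With an overridden endpoint, the upload URL now follows it; the old
	// strings.Replace only matched the googleapis.com host and would have
	// left a custom endpoint pointing at the non-upload path.
	fmt.Println(googleapi.ResolveRelative("https://example.test/storage/v1/",
		"/upload/storage/v1/b/{bucket}/o"))
}
```
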
diff --git a/vendor/google.golang.org/api/transport/dial.go b/vendor/google.golang.org/api/transport/dial.go
index 1fb7cf905d921..2c495ad53895a 100644
--- a/vendor/google.golang.org/api/transport/dial.go
+++ b/vendor/google.golang.org/api/transport/dial.go
@@ -1,16 +1,6 @@
-// Copyright 2015 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2015 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
package transport
diff --git a/vendor/google.golang.org/api/transport/doc.go b/vendor/google.golang.org/api/transport/doc.go
index 4915036c35982..7143abee4581b 100644
--- a/vendor/google.golang.org/api/transport/doc.go
+++ b/vendor/google.golang.org/api/transport/doc.go
@@ -1,16 +1,6 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2019 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// Package transport provides utility methods for creating authenticated
// transports to Google's HTTP and gRPC APIs. It is intended to be used in
diff --git a/vendor/google.golang.org/api/transport/go19.go b/vendor/google.golang.org/api/transport/go19.go
index 3e89f932871b1..abaa633f4e0f2 100644
--- a/vendor/google.golang.org/api/transport/go19.go
+++ b/vendor/google.golang.org/api/transport/go19.go
@@ -1,16 +1,6 @@
-// Copyright 2018 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2018 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build go1.9
diff --git a/vendor/google.golang.org/api/transport/grpc/dial.go b/vendor/google.golang.org/api/transport/grpc/dial.go
index b850246ce2f94..7526e6820ce45 100644
--- a/vendor/google.golang.org/api/transport/grpc/dial.go
+++ b/vendor/google.golang.org/api/transport/grpc/dial.go
@@ -1,16 +1,6 @@
-// Copyright 2015 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2015 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// Package grpc supports network connections to GRPC servers.
// This package is not intended for use by end developers. Use the
@@ -118,7 +108,7 @@ func dial(ctx context.Context, insecure bool, opts []option.ClientOption) (*grpc
// Add tracing, but before the other options, so that clients can override the
// gRPC stats handler.
// This assumes that gRPC options are processed in order, left to right.
- grpcOpts = addOCStatsHandler(grpcOpts)
+ grpcOpts = addOCStatsHandler(grpcOpts, o)
grpcOpts = append(grpcOpts, o.GRPCDialOpts...)
if o.UserAgent != "" {
grpcOpts = append(grpcOpts, grpc.WithUserAgent(o.UserAgent))
@@ -135,7 +125,10 @@ func dial(ctx context.Context, insecure bool, opts []option.ClientOption) (*grpc
return grpc.DialContext(ctx, o.Endpoint, grpcOpts...)
}
-func addOCStatsHandler(opts []grpc.DialOption) []grpc.DialOption {
+func addOCStatsHandler(opts []grpc.DialOption, settings internal.DialSettings) []grpc.DialOption {
+ if settings.TelemetryDisabled {
+ return opts
+ }
return append(opts, grpc.WithStatsHandler(&ocgrpc.ClientHandler{}))
}
diff --git a/vendor/google.golang.org/api/transport/grpc/dial_appengine.go b/vendor/google.golang.org/api/transport/grpc/dial_appengine.go
index 87819d4e10f1b..2c6aef2264a09 100644
--- a/vendor/google.golang.org/api/transport/grpc/dial_appengine.go
+++ b/vendor/google.golang.org/api/transport/grpc/dial_appengine.go
@@ -1,16 +1,6 @@
-// Copyright 2016 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2016 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build appengine
diff --git a/vendor/google.golang.org/api/transport/grpc/dial_socketopt.go b/vendor/google.golang.org/api/transport/grpc/dial_socketopt.go
index 2b1d9e99b1b5d..0e4f388968975 100644
--- a/vendor/google.golang.org/api/transport/grpc/dial_socketopt.go
+++ b/vendor/google.golang.org/api/transport/grpc/dial_socketopt.go
@@ -1,16 +1,6 @@
-// Copyright 2019 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2019 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build go1.11,linux
diff --git a/vendor/google.golang.org/api/transport/http/dial.go b/vendor/google.golang.org/api/transport/http/dial.go
index c0d8bf20b0223..1ef67cefb7384 100644
--- a/vendor/google.golang.org/api/transport/http/dial.go
+++ b/vendor/google.golang.org/api/transport/http/dial.go
@@ -1,16 +1,6 @@
-// Copyright 2015 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2015 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// Package http supports network connections to HTTP servers.
// This package is not intended for use by end developers. Use the
@@ -70,7 +60,7 @@ func newTransport(ctx context.Context, base http.RoundTripper, settings *interna
quotaProject: settings.QuotaProject,
requestReason: settings.RequestReason,
}
- trans = addOCTransport(trans)
+ trans = addOCTransport(trans, settings)
switch {
case settings.NoAuth:
// Do nothing.
@@ -119,16 +109,15 @@ func (t parameterTransport) RoundTrip(req *http.Request) (*http.Response, error)
if rt == nil {
return nil, errors.New("transport: no Transport specified")
}
- if t.userAgent == "" {
- return rt.RoundTrip(req)
- }
newReq := *req
newReq.Header = make(http.Header)
for k, vv := range req.Header {
newReq.Header[k] = vv
}
- // TODO(cbro): append to existing User-Agent header?
- newReq.Header.Set("User-Agent", t.userAgent)
+ if t.userAgent != "" {
+ // TODO(cbro): append to existing User-Agent header?
+ newReq.Header.Set("User-Agent", t.userAgent)
+ }
// Attach system parameters into the header
if t.quotaProject != "" {
@@ -153,7 +142,10 @@ func defaultBaseTransport(ctx context.Context) http.RoundTripper {
return http.DefaultTransport
}
-func addOCTransport(trans http.RoundTripper) http.RoundTripper {
+func addOCTransport(trans http.RoundTripper, settings *internal.DialSettings) http.RoundTripper {
+ if settings.TelemetryDisabled {
+ return trans
+ }
return &ochttp.Transport{
Base: trans,
Propagation: &propagation.HTTPFormat{},
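
The same opt-out guard appears in both the gRPC and HTTP transports; a minimal standalone sketch of the HTTP side (the settings struct here is simplified, not the package's internal DialSettings):

```go
package sketch

import (
	"net/http"

	"go.opencensus.io/plugin/ochttp"
)

type dialSettings struct{ TelemetryDisabled bool }

// addTracing mirrors the guard added to addOCTransport above: when the
// caller opted out via option.WithTelemetryDisabled, the base transport is
// returned unwrapped; otherwise it is instrumented with OpenCensus.
func addTracing(base http.RoundTripper, s dialSettings) http.RoundTripper {
	if s.TelemetryDisabled {
		return base
	}
	return &ochttp.Transport{Base: base}
}
```
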
diff --git a/vendor/google.golang.org/api/transport/http/dial_appengine.go b/vendor/google.golang.org/api/transport/http/dial_appengine.go
index 04c81413c5198..baee9f27afc6b 100644
--- a/vendor/google.golang.org/api/transport/http/dial_appengine.go
+++ b/vendor/google.golang.org/api/transport/http/dial_appengine.go
@@ -1,16 +1,6 @@
-// Copyright 2016 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2016 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build appengine
diff --git a/vendor/google.golang.org/api/transport/http/internal/propagation/http.go b/vendor/google.golang.org/api/transport/http/internal/propagation/http.go
index 24b4f0d291561..fb951bb1624e8 100644
--- a/vendor/google.golang.org/api/transport/http/internal/propagation/http.go
+++ b/vendor/google.golang.org/api/transport/http/internal/propagation/http.go
@@ -1,16 +1,6 @@
-// Copyright 2018 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2018 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build go1.8
diff --git a/vendor/google.golang.org/api/transport/not_go19.go b/vendor/google.golang.org/api/transport/not_go19.go
index 0cb6275944216..657bb6b2e936f 100644
--- a/vendor/google.golang.org/api/transport/not_go19.go
+++ b/vendor/google.golang.org/api/transport/not_go19.go
@@ -1,16 +1,6 @@
-// Copyright 2018 Google LLC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
+// Copyright 2018 Google LLC.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
// +build !go1.9
diff --git a/vendor/google.golang.org/appengine/go.mod b/vendor/google.golang.org/appengine/go.mod
index 45159279854be..635c34f5a1129 100644
--- a/vendor/google.golang.org/appengine/go.mod
+++ b/vendor/google.golang.org/appengine/go.mod
@@ -1,10 +1,9 @@
module google.golang.org/appengine
+go 1.11
+
require (
github.com/golang/protobuf v1.3.1
- golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5 // indirect
golang.org/x/net v0.0.0-20190603091049-60506f45cf65
- golang.org/x/sys v0.0.0-20190606165138-5da285871e9c // indirect
golang.org/x/text v0.3.2
- golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b // indirect
)
diff --git a/vendor/google.golang.org/appengine/go.sum b/vendor/google.golang.org/appengine/go.sum
index cb3232556bf0d..ce22f6856c9e9 100644
--- a/vendor/google.golang.org/appengine/go.sum
+++ b/vendor/google.golang.org/appengine/go.sum
@@ -1,22 +1,11 @@
-github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
-github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1 h1:YF8+flBXS5eO826T4nzqPrxfhQThhXl0YzfuUPu4SBg=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
-golang.org/x/net v0.0.0-20180724234803-3673e40ba225 h1:kNX+jCowfMYzvlSvJu5pQWEmyWFrBXJ3PBy10xKMXK8=
-golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
-golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65 h1:+rhAzEzT3f4JtomfC371qB+0Ola2caSKcY69NUBZrRQ=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
-golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
diff --git a/vendor/google.golang.org/genproto/googleapis/api/annotations/resource.pb.go b/vendor/google.golang.org/genproto/googleapis/api/annotations/resource.pb.go
index af057b90be5e5..6aea4d701fb43 100644
--- a/vendor/google.golang.org/genproto/googleapis/api/annotations/resource.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/api/annotations/resource.pb.go
@@ -66,30 +66,106 @@ func (ResourceDescriptor_History) EnumDescriptor() ([]byte, []int) {
//
// Example:
//
-// message Topic {
-// // Indicates this message defines a resource schema.
-// // Declares the resource type in the format of {service}/{kind}.
-// // For Kubernetes resources, the format is {api group}/{kind}.
-// option (google.api.resource) = {
-// type: "pubsub.googleapis.com/Topic"
-// pattern: "projects/{project}/topics/{topic}"
-// };
-// }
+// message Topic {
+// // Indicates this message defines a resource schema.
+// // Declares the resource type in the format of {service}/{kind}.
+// // For Kubernetes resources, the format is {api group}/{kind}.
+// option (google.api.resource) = {
+// type: "pubsub.googleapis.com/Topic"
+// name_descriptor: {
+// pattern: "projects/{project}/topics/{topic}"
+// parent_type: "cloudresourcemanager.googleapis.com/Project"
+// parent_name_extractor: "projects/{project}"
+// }
+// };
+// }
+//
+// The ResourceDescriptor Yaml config will look like:
+//
+// resources:
+// - type: "pubsub.googleapis.com/Topic"
+// name_descriptor:
+// - pattern: "projects/{project}/topics/{topic}"
+// parent_type: "cloudresourcemanager.googleapis.com/Project"
+// parent_name_extractor: "projects/{project}"
//
// Sometimes, resources have multiple patterns, typically because they can
// live under multiple parents.
//
// Example:
//
-// message LogEntry {
-// option (google.api.resource) = {
-// type: "logging.googleapis.com/LogEntry"
-// pattern: "projects/{project}/logs/{log}"
-// pattern: "organizations/{organization}/logs/{log}"
-// pattern: "folders/{folder}/logs/{log}"
-// pattern: "billingAccounts/{billing_account}/logs/{log}"
-// };
-// }
+// message LogEntry {
+// option (google.api.resource) = {
+// type: "logging.googleapis.com/LogEntry"
+// name_descriptor: {
+// pattern: "projects/{project}/logs/{log}"
+// parent_type: "cloudresourcemanager.googleapis.com/Project"
+// parent_name_extractor: "projects/{project}"
+// }
+// name_descriptor: {
+// pattern: "folders/{folder}/logs/{log}"
+// parent_type: "cloudresourcemanager.googleapis.com/Folder"
+// parent_name_extractor: "folders/{folder}"
+// }
+// name_descriptor: {
+// pattern: "organizations/{organization}/logs/{log}"
+// parent_type: "cloudresourcemanager.googleapis.com/Organization"
+// parent_name_extractor: "organizations/{organization}"
+// }
+// name_descriptor: {
+// pattern: "billingAccounts/{billing_account}/logs/{log}"
+// parent_type: "billing.googleapis.com/BillingAccount"
+// parent_name_extractor: "billingAccounts/{billing_account}"
+// }
+// };
+// }
+//
+// The ResourceDescriptor Yaml config will look like:
+//
+// resources:
+// - type: 'logging.googleapis.com/LogEntry'
+// name_descriptor:
+// - pattern: "projects/{project}/logs/{log}"
+// parent_type: "cloudresourcemanager.googleapis.com/Project"
+// parent_name_extractor: "projects/{project}"
+// - pattern: "folders/{folder}/logs/{log}"
+// parent_type: "cloudresourcemanager.googleapis.com/Folder"
+// parent_name_extractor: "folders/{folder}"
+// - pattern: "organizations/{organization}/logs/{log}"
+// parent_type: "cloudresourcemanager.googleapis.com/Organization"
+// parent_name_extractor: "organizations/{organization}"
+// - pattern: "billingAccounts/{billing_account}/logs/{log}"
+// parent_type: "billing.googleapis.com/BillingAccount"
+// parent_name_extractor: "billingAccounts/{billing_account}"
+//
+// For flexible resources, the resource name doesn't contain parent names, but
+// the resource itself has parents for policy evaluation.
+//
+// Example:
+//
+// message Shelf {
+// option (google.api.resource) = {
+// type: "library.googleapis.com/Shelf"
+// name_descriptor: {
+// pattern: "shelves/{shelf}"
+// parent_type: "cloudresourcemanager.googleapis.com/Project"
+// }
+// name_descriptor: {
+// pattern: "shelves/{shelf}"
+// parent_type: "cloudresourcemanager.googleapis.com/Folder"
+// }
+// };
+// }
+//
+// The ResourceDescriptor Yaml config will look like:
+//
+// resources:
+// - type: 'library.googleapis.com/Shelf'
+// name_descriptor:
+// - pattern: "shelves/{shelf}"
+// parent_type: "cloudresourcemanager.googleapis.com/Project"
+// - pattern: "shelves/{shelf}"
+// parent_type: "cloudresourcemanager.googleapis.com/Folder"
type ResourceDescriptor struct {
// The resource type. It must be in the format of
// {service_name}/{resource_type_kind}. The `resource_type_kind` must be
@@ -102,11 +178,20 @@ type ResourceDescriptor struct {
// should use PascalCase (UpperCamelCase). The maximum number of
// characters allowed for the `resource_type_kind` is 100.
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
- // Optional. The valid resource name pattern(s) for this resource type.
+ // Optional. The relative resource name pattern associated with this resource
+ // type. The DNS prefix of the full resource name shouldn't be specified here.
+ //
+ // The path pattern must follow the syntax, which aligns with HTTP binding
+ // syntax:
+ //
+ // Template = Segment { "/" Segment } ;
+ // Segment = LITERAL | Variable ;
+ // Variable = "{" LITERAL "}" ;
//
// Examples:
- // - "projects/{project}/topics/{topic}"
- // - "projects/{project}/knowledgeBases/{knowledge_base}"
+ //
+ // - "projects/{project}/topics/{topic}"
+ // - "projects/{project}/knowledgeBases/{knowledge_base}"
//
// The components in braces correspond to the IDs for each resource in the
// hierarchy. It is expected that, if multiple patterns are provided,
@@ -119,21 +204,31 @@ type ResourceDescriptor struct {
// Optional. The historical or future-looking state of the resource pattern.
//
// Example:
- // // The InspectTemplate message originally only supported resource
- // // names with organization, and project was added later.
- // message InspectTemplate {
- // option (google.api.resource) = {
- // type: "dlp.googleapis.com/InspectTemplate"
- // pattern:
- // "organizations/{organization}/inspectTemplates/{inspect_template}"
- // pattern: "projects/{project}/inspectTemplates/{inspect_template}"
- // history: ORIGINALLY_SINGLE_PATTERN
- // };
- // }
- History ResourceDescriptor_History `protobuf:"varint,4,opt,name=history,proto3,enum=google.api.ResourceDescriptor_History" json:"history,omitempty"`
- XXX_NoUnkeyedLiteral struct{} `json:"-"`
- XXX_unrecognized []byte `json:"-"`
- XXX_sizecache int32 `json:"-"`
+ //
+ // // The InspectTemplate message originally only supported resource
+ // // names with organization, and project was added later.
+ // message InspectTemplate {
+ // option (google.api.resource) = {
+ // type: "dlp.googleapis.com/InspectTemplate"
+ // pattern:
+ // "organizations/{organization}/inspectTemplates/{inspect_template}"
+ // pattern: "projects/{project}/inspectTemplates/{inspect_template}"
+ // history: ORIGINALLY_SINGLE_PATTERN
+ // };
+ // }
+ History ResourceDescriptor_History `protobuf:"varint,4,opt,name=history,proto3,enum=google.api.ResourceDescriptor_History" json:"history,omitempty"`
+ // The plural name used in the resource name, such as 'projects' for
+ // the name of 'projects/{project}'. It is the same concept of the `plural`
+ // field in k8s CRD spec
+ // https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
+ Plural string `protobuf:"bytes,5,opt,name=plural,proto3" json:"plural,omitempty"`
+ // The same concept of the `singular` field in k8s CRD spec
+ // https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
+ // Such as "project" for the `resourcemanager.googleapis.com/Project` type.
+ Singular string `protobuf:"bytes,6,opt,name=singular,proto3" json:"singular,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
}
func (m *ResourceDescriptor) Reset() { *m = ResourceDescriptor{} }
@@ -189,21 +284,36 @@ func (m *ResourceDescriptor) GetHistory() ResourceDescriptor_History {
return ResourceDescriptor_HISTORY_UNSPECIFIED
}
-// Defines a proto annotation that describes a field that refers to a resource.
+func (m *ResourceDescriptor) GetPlural() string {
+ if m != nil {
+ return m.Plural
+ }
+ return ""
+}
+
+func (m *ResourceDescriptor) GetSingular() string {
+ if m != nil {
+ return m.Singular
+ }
+ return ""
+}
+
+// Defines a proto annotation that describes a string field that refers to
+// an API resource.
type ResourceReference struct {
// The resource type that the annotated field references.
//
// Example:
//
- // message Subscription {
- // string topic = 2 [(google.api.resource_reference) = {
- // type = "pubsub.googleapis.com/Topic"
- // }];
- // }
+ // message Subscription {
+ // string topic = 2 [(google.api.resource_reference) = {
+ // type: "pubsub.googleapis.com/Topic"
+ // }];
+ // }
Type string `protobuf:"bytes,1,opt,name=type,proto3" json:"type,omitempty"`
// The resource type of a child collection that the annotated field
- // references. This is useful for `parent` fields where a resource has more
- // than one possible type of parent.
+ // references. This is useful for annotating the `parent` field that
+ // doesn't have a fixed resource type.
//
// Example:
//
@@ -266,6 +376,15 @@ var E_ResourceReference = &proto.ExtensionDesc{
Filename: "google/api/resource.proto",
}
+var E_ResourceDefinition = &proto.ExtensionDesc{
+ ExtendedType: (*descriptor.FileOptions)(nil),
+ ExtensionType: ([]*ResourceDescriptor)(nil),
+ Field: 1053,
+ Name: "google.api.resource_definition",
+ Tag: "bytes,1053,rep,name=resource_definition",
+ Filename: "google/api/resource.proto",
+}
+
var E_Resource = &proto.ExtensionDesc{
ExtendedType: (*descriptor.MessageOptions)(nil),
ExtensionType: (*ResourceDescriptor)(nil),
@@ -280,38 +399,43 @@ func init() {
proto.RegisterType((*ResourceDescriptor)(nil), "google.api.ResourceDescriptor")
proto.RegisterType((*ResourceReference)(nil), "google.api.ResourceReference")
proto.RegisterExtension(E_ResourceReference)
+ proto.RegisterExtension(E_ResourceDefinition)
proto.RegisterExtension(E_Resource)
}
func init() { proto.RegisterFile("google/api/resource.proto", fileDescriptor_465e9122405d1bb5) }
var fileDescriptor_465e9122405d1bb5 = []byte{
- // 430 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x7c, 0x52, 0x41, 0x6f, 0xd3, 0x30,
- 0x18, 0x25, 0x59, 0x45, 0xd7, 0x0f, 0x31, 0x6d, 0x06, 0x89, 0x0c, 0x29, 0x10, 0xf5, 0x80, 0x7a,
- 0x4a, 0xa4, 0x71, 0x1b, 0x17, 0x3a, 0x96, 0x76, 0x91, 0xba, 0x36, 0x72, 0xd3, 0xc3, 0x00, 0x29,
- 0xf2, 0xd2, 0xaf, 0x59, 0xa4, 0xcc, 0xb6, 0x9c, 0xec, 0xd0, 0x1b, 0x7f, 0x04, 0x21, 0xf1, 0x2b,
- 0x39, 0xa2, 0x3a, 0x71, 0x98, 0xd8, 0xb4, 0x9b, 0xf3, 0xde, 0xfb, 0xbe, 0xf7, 0xfc, 0x1c, 0x38,
- 0xce, 0x85, 0xc8, 0x4b, 0x0c, 0x98, 0x2c, 0x02, 0x85, 0x95, 0xb8, 0x53, 0x19, 0xfa, 0x52, 0x89,
- 0x5a, 0x10, 0x68, 0x28, 0x9f, 0xc9, 0xe2, 0xad, 0xd7, 0xca, 0x34, 0x73, 0x7d, 0xb7, 0x09, 0xd6,
- 0x58, 0x65, 0xaa, 0x90, 0xb5, 0x50, 0x8d, 0x7a, 0xf8, 0xc3, 0x06, 0x42, 0xdb, 0x05, 0xe7, 0x1d,
- 0x49, 0x08, 0xf4, 0xea, 0xad, 0x44, 0xc7, 0xf2, 0xac, 0xd1, 0x80, 0xea, 0x33, 0x71, 0xa0, 0x2f,
- 0x59, 0x5d, 0xa3, 0xe2, 0x8e, 0xed, 0xed, 0x8d, 0x06, 0xd4, 0x7c, 0x12, 0x17, 0x80, 0xb3, 0x5b,
- 0x4c, 0x37, 0x05, 0x96, 0x6b, 0x67, 0x4f, 0xcf, 0x0c, 0x76, 0xc8, 0x64, 0x07, 0x90, 0xcf, 0xd0,
- 0xbf, 0x29, 0xaa, 0x5a, 0xa8, 0xad, 0xd3, 0xf3, 0xac, 0xd1, 0xc1, 0xc9, 0x07, 0xff, 0x5f, 0x46,
- 0xff, 0xa1, 0xbb, 0x7f, 0xd1, 0xa8, 0xa9, 0x19, 0x1b, 0x7e, 0x83, 0x7e, 0x8b, 0x91, 0x37, 0xf0,
- 0xea, 0x22, 0x5a, 0x26, 0x0b, 0x7a, 0x95, 0xae, 0xe6, 0xcb, 0x38, 0xfc, 0x12, 0x4d, 0xa2, 0xf0,
- 0xfc, 0xf0, 0x19, 0x71, 0xe1, 0x78, 0x41, 0xa3, 0x69, 0x34, 0x1f, 0xcf, 0x66, 0x57, 0xe9, 0x32,
- 0x9a, 0x4f, 0x67, 0x61, 0x1a, 0x8f, 0x93, 0x24, 0xa4, 0xf3, 0x43, 0x8b, 0x38, 0xf0, 0x7a, 0xb2,
- 0x4a, 0x56, 0x34, 0x4c, 0x2f, 0x57, 0xb3, 0x24, 0xea, 0x18, 0x7b, 0x38, 0x81, 0x23, 0x93, 0x81,
- 0xe2, 0x06, 0x15, 0xf2, 0x0c, 0x1f, 0x2d, 0xc0, 0x05, 0xc8, 0x6e, 0x8a, 0x72, 0x9d, 0x6a, 0xc6,
- 0x6e, 0xae, 0xa9, 0x91, 0x64, 0x2b, 0xf1, 0xb4, 0x04, 0x62, 0x9e, 0x22, 0x55, 0xdd, 0x22, 0xd7,
- 0xdc, 0xd5, 0xbc, 0x81, 0xaf, 0x4b, 0x59, 0xc8, 0xba, 0x10, 0xbc, 0x72, 0x7e, 0xed, 0x7b, 0xd6,
- 0xe8, 0xc5, 0x89, 0xfb, 0x58, 0x23, 0x5d, 0x1a, 0x7a, 0xa4, 0xfe, 0x87, 0x4e, 0xbf, 0xc3, 0xbe,
- 0x01, 0xc9, 0xfb, 0x07, 0x1e, 0x97, 0x58, 0x55, 0x2c, 0x47, 0xe3, 0xf2, 0xb3, 0x71, 0x79, 0xf7,
- 0x74, 0xef, 0xb4, 0xdb, 0x78, 0xc6, 0xe1, 0x20, 0x13, 0xb7, 0xf7, 0xe4, 0x67, 0x2f, 0x8d, 0x3e,
- 0xde, 0x79, 0xc4, 0xd6, 0xd7, 0x71, 0x4b, 0xe6, 0xa2, 0x64, 0x3c, 0xf7, 0x85, 0xca, 0x83, 0x1c,
- 0xb9, 0x4e, 0x10, 0x34, 0x14, 0x93, 0x45, 0xa5, 0xff, 0x50, 0xc6, 0xb9, 0xa8, 0x99, 0x8e, 0xf2,
- 0xe9, 0xde, 0xf9, 0x8f, 0x65, 0xfd, 0xb6, 0x7b, 0xd3, 0x71, 0x1c, 0x5d, 0x3f, 0xd7, 0x73, 0x1f,
- 0xff, 0x06, 0x00, 0x00, 0xff, 0xff, 0xb5, 0x1e, 0x07, 0x80, 0xd8, 0x02, 0x00, 0x00,
+ // 490 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x53, 0xcd, 0x6e, 0xd3, 0x4c,
+ 0x14, 0xfd, 0x9c, 0xe4, 0xcb, 0xcf, 0xad, 0xa8, 0xda, 0x29, 0x02, 0xb7, 0x22, 0x60, 0x65, 0x81,
+ 0xb2, 0xb2, 0xa5, 0xb0, 0x0b, 0x1b, 0x52, 0xe2, 0xa4, 0x96, 0xd2, 0xc4, 0x9a, 0x38, 0x8b, 0x02,
+ 0x92, 0x35, 0x75, 0x26, 0xee, 0x48, 0xee, 0xcc, 0x68, 0xec, 0x2c, 0xf2, 0x30, 0x08, 0x89, 0x67,
+ 0xe0, 0xe1, 0x58, 0xa2, 0x8c, 0x7f, 0x88, 0x68, 0x84, 0xd8, 0xcd, 0xbd, 0xe7, 0xde, 0x73, 0x8e,
+ 0xcf, 0x95, 0xe1, 0x32, 0x16, 0x22, 0x4e, 0xa8, 0x43, 0x24, 0x73, 0x14, 0x4d, 0xc5, 0x56, 0x45,
+ 0xd4, 0x96, 0x4a, 0x64, 0x02, 0x41, 0x0e, 0xd9, 0x44, 0xb2, 0x2b, 0xab, 0x18, 0xd3, 0xc8, 0xfd,
+ 0x76, 0xe3, 0xac, 0x69, 0x1a, 0x29, 0x26, 0x33, 0xa1, 0xf2, 0xe9, 0xde, 0x8f, 0x1a, 0x20, 0x5c,
+ 0x10, 0x8c, 0x2b, 0x10, 0x21, 0x68, 0x64, 0x3b, 0x49, 0x4d, 0xc3, 0x32, 0xfa, 0x1d, 0xac, 0xdf,
+ 0xc8, 0x84, 0x96, 0x24, 0x59, 0x46, 0x15, 0x37, 0x6b, 0x56, 0xbd, 0xdf, 0xc1, 0x65, 0x89, 0xba,
+ 0x00, 0x9c, 0x3c, 0xd2, 0x70, 0xc3, 0x68, 0xb2, 0x36, 0xeb, 0x7a, 0xa7, 0xb3, 0xef, 0x4c, 0xf6,
+ 0x0d, 0xf4, 0x01, 0x5a, 0x0f, 0x2c, 0xcd, 0x84, 0xda, 0x99, 0x0d, 0xcb, 0xe8, 0x9f, 0x0e, 0xde,
+ 0xda, 0xbf, 0x3d, 0xda, 0x4f, 0xd5, 0xed, 0x9b, 0x7c, 0x1a, 0x97, 0x6b, 0xe8, 0x05, 0x34, 0x65,
+ 0xb2, 0x55, 0x24, 0x31, 0xff, 0xd7, 0xe4, 0x45, 0x85, 0xae, 0xa0, 0x9d, 0x32, 0x1e, 0x6f, 0x13,
+ 0xa2, 0xcc, 0xa6, 0x46, 0xaa, 0xba, 0xf7, 0x19, 0x5a, 0x05, 0x0f, 0x7a, 0x09, 0x17, 0x37, 0xde,
+ 0x32, 0x58, 0xe0, 0xbb, 0x70, 0x35, 0x5f, 0xfa, 0xee, 0x47, 0x6f, 0xe2, 0xb9, 0xe3, 0xb3, 0xff,
+ 0x50, 0x17, 0x2e, 0x17, 0xd8, 0x9b, 0x7a, 0xf3, 0xd1, 0x6c, 0x76, 0x17, 0x2e, 0xbd, 0xf9, 0x74,
+ 0xe6, 0x86, 0xfe, 0x28, 0x08, 0x5c, 0x3c, 0x3f, 0x33, 0x90, 0x09, 0xcf, 0x27, 0xab, 0x60, 0x85,
+ 0xdd, 0xf0, 0x76, 0x35, 0x0b, 0xbc, 0x0a, 0xa9, 0xf5, 0x26, 0x70, 0x5e, 0xfa, 0xc6, 0x74, 0x43,
+ 0x15, 0xe5, 0x11, 0x3d, 0x1a, 0x5a, 0x17, 0x20, 0x7a, 0x60, 0xc9, 0x3a, 0xd4, 0x48, 0x2d, 0x8f,
+ 0x46, 0x77, 0x82, 0x9d, 0xa4, 0xc3, 0x04, 0x50, 0x79, 0xbe, 0x50, 0x55, 0x44, 0xdd, 0x32, 0x9f,
+ 0xf2, 0x6e, 0xb6, 0x0e, 0x72, 0x21, 0x33, 0x26, 0x78, 0x6a, 0x7e, 0x6b, 0x5b, 0x46, 0xff, 0x64,
+ 0xd0, 0x3d, 0x96, 0x62, 0xe5, 0x06, 0x9f, 0xab, 0x3f, 0x5b, 0x43, 0x0e, 0x17, 0x95, 0xda, 0x9a,
+ 0x6e, 0x18, 0x67, 0x7b, 0x42, 0xf4, 0xea, 0x88, 0x5c, 0x42, 0x4b, 0xb5, 0xaf, 0x6d, 0xab, 0xde,
+ 0x3f, 0x19, 0xbc, 0xfe, 0xfb, 0xcd, 0x70, 0xf5, 0x1d, 0xe3, 0x8a, 0x78, 0xf8, 0x05, 0xda, 0x65,
+ 0x17, 0xbd, 0x79, 0x22, 0x72, 0x4b, 0xd3, 0x94, 0xc4, 0x87, 0x3a, 0xc6, 0x3f, 0xe8, 0x54, 0x8c,
+ 0xd7, 0x1c, 0x4e, 0x23, 0xf1, 0x78, 0x30, 0x7e, 0xfd, 0xac, 0x9c, 0xf7, 0xf7, 0x1a, 0xbe, 0xf1,
+ 0x69, 0x54, 0x80, 0xb1, 0x48, 0x08, 0x8f, 0x6d, 0xa1, 0x62, 0x27, 0xa6, 0x5c, 0x3b, 0x70, 0x72,
+ 0x88, 0x48, 0x96, 0xea, 0xbf, 0x88, 0x70, 0x2e, 0x32, 0xa2, 0xad, 0xbc, 0x3f, 0x78, 0xff, 0x34,
+ 0x8c, 0xef, 0xb5, 0xc6, 0x74, 0xe4, 0x7b, 0xf7, 0x4d, 0xbd, 0xf7, 0xee, 0x57, 0x00, 0x00, 0x00,
+ 0xff, 0xff, 0x75, 0x12, 0x53, 0xef, 0x7c, 0x03, 0x00, 0x00,
}
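
The resource.pb.go changes above add `Plural`/`Singular` getters and a new file-level `resource_definition` extension alongside the existing message-level `E_Resource`. A minimal sketch of reading that message-level option with the golang/protobuf v1 extension API; the `inspectResource` helper and the idea that `opts` comes from a parsed descriptor are illustrative assumptions, only `E_Resource` and the getters come from the generated file:

```go
package resourcedemo

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	"github.com/golang/protobuf/protoc-gen-go/descriptor"
	"google.golang.org/genproto/googleapis/api/annotations"
)

// inspectResource prints the google.api.resource option attached to a
// message, if any. opts would normally come from a parsed FileDescriptorProto.
func inspectResource(opts *descriptor.MessageOptions) {
	ext, err := proto.GetExtension(opts, annotations.E_Resource)
	if err != nil {
		return // option not set on this message
	}
	rd := ext.(*annotations.ResourceDescriptor)
	fmt.Println(rd.GetType(), rd.GetPattern(), rd.GetPlural(), rd.GetSingular())
}
```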
diff --git a/vendor/google.golang.org/genproto/googleapis/bigtable/v2/bigtable.pb.go b/vendor/google.golang.org/genproto/googleapis/bigtable/v2/bigtable.pb.go
index c770e8207f94a..2a37f124b5913 100644
--- a/vendor/google.golang.org/genproto/googleapis/bigtable/v2/bigtable.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/bigtable/v2/bigtable.pb.go
@@ -30,7 +30,7 @@ const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
// Request message for Bigtable.ReadRows.
type ReadRowsRequest struct {
- // The unique name of the table from which to read.
+ // Required. The unique name of the table from which to read.
// Values are of the form
// `projects/<project>/instances/<instance>/tables/<table>`.
TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName,proto3" json:"table_name,omitempty"`
@@ -112,6 +112,7 @@ func (m *ReadRowsRequest) GetRowsLimit() int64 {
// Response message for Bigtable.ReadRows.
type ReadRowsResponse struct {
+ // A collection of a row's contents as part of the read request.
Chunks []*ReadRowsResponse_CellChunk `protobuf:"bytes,1,rep,name=chunks,proto3" json:"chunks,omitempty"`
// Optionally the server might return the row key of the last row it
// has scanned. The client can use this to construct a more
@@ -172,6 +173,9 @@ type ReadRowsResponse_CellChunk struct {
// this CellChunk is a continuation of the same row as the previous
// CellChunk in the response stream, even if that CellChunk was in a
// previous ReadRowsResponse message.
+ //
+ // Classified as IDENTIFYING_ID to provide context around data accesses for
+ // auditing systems.
RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
// The column family name for this chunk of data. If this message
// is not present this CellChunk is a continuation of the same column
@@ -210,6 +214,8 @@ type ReadRowsResponse_CellChunk struct {
// total length of the cell value. The client can use this size
// to pre-allocate memory to hold the full cell value.
ValueSize int32 `protobuf:"varint,7,opt,name=value_size,json=valueSize,proto3" json:"value_size,omitempty"`
+ // Signals to the client concerning previous CellChunks received.
+ //
// Types that are valid to be assigned to RowStatus:
// *ReadRowsResponse_CellChunk_ResetRow
// *ReadRowsResponse_CellChunk_CommitRow
@@ -340,7 +346,7 @@ func (*ReadRowsResponse_CellChunk) XXX_OneofWrappers() []interface{} {
// Request message for Bigtable.SampleRowKeys.
type SampleRowKeysRequest struct {
- // The unique name of the table from which to sample row keys.
+ // Required. The unique name of the table from which to sample row keys.
// Values are of the form
// `projects/<project>/instances/<instance>/tables/<table>`.
TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName,proto3" json:"table_name,omitempty"`
@@ -400,6 +406,9 @@ type SampleRowKeysResponse struct {
// Note that row keys in this list may not have ever been written to or read
// from, and users should therefore not make any assumptions about the row key
// structure that are specific to their use case.
+ //
+ // Classified as IDENTIFYING_ID to provide context around data accesses for
+ // auditing systems.
RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
// Approximate total storage space used by all rows in the table which precede
// `row_key`. Buffering the contents of all rows between two subsequent
@@ -452,16 +461,19 @@ func (m *SampleRowKeysResponse) GetOffsetBytes() int64 {
// Request message for Bigtable.MutateRow.
type MutateRowRequest struct {
- // The unique name of the table to which the mutation should be applied.
+ // Required. The unique name of the table to which the mutation should be applied.
// Values are of the form
// `projects/<project>/instances/<instance>/tables/<table>`.
TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName,proto3" json:"table_name,omitempty"`
// This value specifies routing for replication. If not specified, the
// "default" application profile will be used.
AppProfileId string `protobuf:"bytes,4,opt,name=app_profile_id,json=appProfileId,proto3" json:"app_profile_id,omitempty"`
- // The key of the row to which the mutation should be applied.
+ // Required. The key of the row to which the mutation should be applied.
+ //
+ // Classified as IDENTIFYING_ID to provide context around data accesses for
+ // auditing systems.
RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
- // Changes to be atomically applied to the specified row. Entries are applied
+ // Required. Changes to be atomically applied to the specified row. Entries are applied
// in order, meaning that earlier mutations can be masked by later ones.
// Must contain at least one entry and at most 100000.
Mutations []*Mutation `protobuf:"bytes,3,rep,name=mutations,proto3" json:"mutations,omitempty"`
@@ -557,12 +569,12 @@ var xxx_messageInfo_MutateRowResponse proto.InternalMessageInfo
// Request message for BigtableService.MutateRows.
type MutateRowsRequest struct {
- // The unique name of the table to which the mutations should be applied.
+ // Required. The unique name of the table to which the mutations should be applied.
TableName string `protobuf:"bytes,1,opt,name=table_name,json=tableName,proto3" json:"table_name,omitempty"`
// This value specifies routing for replication. If not specified, the
// "default" application profile will be used.
AppProfileId string `protobuf:"bytes,3,opt,name=app_profile_id,json=appProfileId,proto3" json:"app_profile_id,omitempty"`
- // The row keys and corresponding mutations to be applied in bulk.
+ // Required. The row keys and corresponding mutations to be applied in bulk.
// Each entry is applied as an atomic mutation, but the entries may be
// applied in arbitrary order (even between entries for the same row).
// At least one entry must be specified, and in total the entries can
@@ -619,10 +631,14 @@ func (m *MutateRowsRequest) GetEntries() []*MutateRowsRequest_Entry {
return nil
}
+// A mutation for a given row.
type MutateRowsRequest_Entry struct {
// The key of the row to which the `mutations` should be applied.
+ //
+ // Classified as IDENTIFYING_ID to provide context around data accesses for
+ // auditing systems.
RowKey []byte `protobuf:"bytes,1,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
- // Changes to be atomically applied to the specified row. Mutations are
+ // Required. Changes to be atomically applied to the specified row. Mutations are
// applied in order, meaning that earlier mutations can be masked by
// later ones.
// You must specify at least one mutation.
@@ -712,6 +728,7 @@ func (m *MutateRowsResponse) GetEntries() []*MutateRowsResponse_Entry {
return nil
}
+// The result of applying a passed mutation in the original request.
type MutateRowsResponse_Entry struct {
// The index into the original request's `entries` list of the Entry
// for which a result is being reported.
@@ -767,7 +784,7 @@ func (m *MutateRowsResponse_Entry) GetStatus() *status.Status {
// Request message for Bigtable.CheckAndMutateRow.
type CheckAndMutateRowRequest struct {
- // The unique name of the table to which the conditional mutation should be
+ // Required. The unique name of the table to which the conditional mutation should be
// applied.
// Values are of the form
// `projects/<project>/instances/<instance>/tables/<table>`.
@@ -775,7 +792,10 @@ type CheckAndMutateRowRequest struct {
// This value specifies routing for replication. If not specified, the
// "default" application profile will be used.
AppProfileId string `protobuf:"bytes,7,opt,name=app_profile_id,json=appProfileId,proto3" json:"app_profile_id,omitempty"`
- // The key of the row to which the conditional mutation should be applied.
+ // Required. The key of the row to which the conditional mutation should be applied.
+ //
+ // Classified as IDENTIFYING_ID to provide context around data accesses for
+ // auditing systems.
RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
// The filter to be applied to the contents of the specified row. Depending
// on whether or not any results are yielded, either `true_mutations` or
@@ -910,7 +930,7 @@ func (m *CheckAndMutateRowResponse) GetPredicateMatched() bool {
// Request message for Bigtable.ReadModifyWriteRow.
type ReadModifyWriteRowRequest struct {
- // The unique name of the table to which the read/modify/write rules should be
+ // Required. The unique name of the table to which the read/modify/write rules should be
// applied.
// Values are of the form
// `projects/<project>/instances/<instance>/tables/<table>`.
@@ -918,9 +938,12 @@ type ReadModifyWriteRowRequest struct {
// This value specifies routing for replication. If not specified, the
// "default" application profile will be used.
AppProfileId string `protobuf:"bytes,4,opt,name=app_profile_id,json=appProfileId,proto3" json:"app_profile_id,omitempty"`
- // The key of the row to which the read/modify/write rules should be applied.
+ // Required. The key of the row to which the read/modify/write rules should be applied.
+ //
+ // Classified as IDENTIFYING_ID to provide context around data accesses for
+ // auditing systems.
RowKey []byte `protobuf:"bytes,2,opt,name=row_key,json=rowKey,proto3" json:"row_key,omitempty"`
- // Rules specifying how the specified row's contents are to be transformed
+ // Required. Rules specifying how the specified row's contents are to be transformed
// into writes. Entries are applied in order, meaning that earlier rules will
// affect the results of later ones.
Rules []*ReadModifyWriteRule `protobuf:"bytes,3,rep,name=rules,proto3" json:"rules,omitempty"`
@@ -1044,83 +1067,99 @@ func init() {
func init() { proto.RegisterFile("google/bigtable/v2/bigtable.proto", fileDescriptor_7e9247725ec9a6cf) }
var fileDescriptor_7e9247725ec9a6cf = []byte{
- // 1210 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x57, 0x41, 0x6f, 0x1b, 0x45,
- 0x14, 0x66, 0xec, 0xd8, 0xf1, 0xbe, 0xa4, 0x4d, 0x32, 0x84, 0x76, 0x6b, 0x5a, 0x70, 0x97, 0x16,
- 0xdc, 0x94, 0xae, 0x2b, 0x23, 0x0e, 0x75, 0xd5, 0x02, 0x09, 0x69, 0x53, 0x41, 0xaa, 0x6a, 0x2c,
- 0x15, 0x09, 0x22, 0xad, 0xc6, 0xeb, 0xb1, 0x3b, 0x74, 0x77, 0x67, 0xbb, 0x3b, 0x5b, 0xe3, 0x22,
- 0x2e, 0xfc, 0x05, 0x8e, 0x08, 0x71, 0x42, 0x48, 0x08, 0x38, 0x73, 0xe3, 0xc0, 0x8d, 0x03, 0x17,
- 0xae, 0x1c, 0xfb, 0x0b, 0xb8, 0x23, 0xa1, 0x9d, 0x9d, 0xb5, 0x9d, 0xc4, 0x6e, 0x9d, 0x20, 0x71,
- 0xdb, 0x7d, 0xef, 0x7d, 0x6f, 0xbf, 0xf7, 0xbd, 0x37, 0x6f, 0x6c, 0x38, 0xdf, 0x17, 0xa2, 0xef,
- 0xb1, 0x46, 0x87, 0xf7, 0x25, 0xed, 0x78, 0xac, 0xf1, 0xb8, 0x39, 0x7a, 0xb6, 0xc3, 0x48, 0x48,
- 0x81, 0x71, 0x16, 0x62, 0x8f, 0xcc, 0x8f, 0x9b, 0xd5, 0xb3, 0x1a, 0x46, 0x43, 0xde, 0xa0, 0x41,
- 0x20, 0x24, 0x95, 0x5c, 0x04, 0x71, 0x86, 0xa8, 0x9e, 0x9b, 0x92, 0xb4, 0x4b, 0x25, 0xd5, 0xee,
- 0x57, 0xb4, 0x5b, 0xbd, 0x75, 0x92, 0x5e, 0x63, 0x10, 0xd1, 0x30, 0x64, 0x51, 0x0e, 0x3f, 0xad,
- 0xfd, 0x51, 0xe8, 0x36, 0x62, 0x49, 0x65, 0xa2, 0x1d, 0xd6, 0x5f, 0x08, 0x56, 0x08, 0xa3, 0x5d,
- 0x22, 0x06, 0x31, 0x61, 0x8f, 0x12, 0x16, 0x4b, 0x7c, 0x0e, 0x40, 0x7d, 0xc3, 0x09, 0xa8, 0xcf,
- 0x4c, 0x54, 0x43, 0x75, 0x83, 0x18, 0xca, 0x72, 0x97, 0xfa, 0x0c, 0x5f, 0x80, 0x93, 0x34, 0x0c,
- 0x9d, 0x30, 0x12, 0x3d, 0xee, 0x31, 0x87, 0x77, 0xcd, 0x92, 0x0a, 0x59, 0xa6, 0x61, 0x78, 0x2f,
- 0x33, 0xde, 0xe9, 0x62, 0x1b, 0x16, 0x22, 0x31, 0x88, 0xcd, 0x42, 0x0d, 0xd5, 0x97, 0x9a, 0x55,
- 0xfb, 0x70, 0xc5, 0x36, 0x11, 0x83, 0x36, 0x93, 0x44, 0xc5, 0xe1, 0xb7, 0xa1, 0xdc, 0xe3, 0x9e,
- 0x64, 0x91, 0x59, 0x54, 0x88, 0x73, 0x33, 0x10, 0xb7, 0x54, 0x10, 0xd1, 0xc1, 0x29, 0xd7, 0x14,
- 0xee, 0x78, 0xdc, 0xe7, 0xd2, 0x5c, 0xa8, 0xa1, 0x7a, 0x91, 0x18, 0xa9, 0xe5, 0xc3, 0xd4, 0x60,
- 0xfd, 0x5d, 0x84, 0xd5, 0x71, 0x79, 0x71, 0x28, 0x82, 0x98, 0xe1, 0x5b, 0x50, 0x76, 0x1f, 0x24,
- 0xc1, 0xc3, 0xd8, 0x44, 0xb5, 0x62, 0x7d, 0xa9, 0x69, 0x4f, 0xfd, 0xd4, 0x01, 0x94, 0xbd, 0xc5,
- 0x3c, 0x6f, 0x2b, 0x85, 0x11, 0x8d, 0xc6, 0x0d, 0x58, 0xf7, 0x68, 0x2c, 0x9d, 0xd8, 0xa5, 0x41,
- 0xc0, 0xba, 0x4e, 0x24, 0x06, 0xce, 0x43, 0x36, 0x54, 0x25, 0x2f, 0x93, 0xb5, 0xd4, 0xd7, 0xce,
- 0x5c, 0x44, 0x0c, 0x3e, 0x60, 0xc3, 0xea, 0xd3, 0x02, 0x18, 0xa3, 0x34, 0xf8, 0x34, 0x2c, 0xe6,
- 0x08, 0xa4, 0x10, 0xe5, 0x48, 0x85, 0xe1, 0x1b, 0xb0, 0xd4, 0xa3, 0x3e, 0xf7, 0x86, 0x59, 0x03,
- 0x32, 0x05, 0xcf, 0xe6, 0x24, 0xf3, 0x16, 0xdb, 0x6d, 0x19, 0xf1, 0xa0, 0x7f, 0x9f, 0x7a, 0x09,
- 0x23, 0x90, 0x01, 0x54, 0x7f, 0xae, 0x81, 0xf1, 0x28, 0xa1, 0x1e, 0xef, 0xf1, 0x91, 0x98, 0x2f,
- 0x1f, 0x02, 0x6f, 0x0e, 0x25, 0x8b, 0x33, 0xec, 0x38, 0x1a, 0x5f, 0x82, 0x55, 0xc9, 0x7d, 0x16,
- 0x4b, 0xea, 0x87, 0x8e, 0xcf, 0xdd, 0x48, 0xc4, 0x5a, 0xd3, 0x95, 0x91, 0x7d, 0x57, 0x99, 0xf1,
- 0x29, 0x28, 0x7b, 0xb4, 0xc3, 0xbc, 0xd8, 0x2c, 0xd5, 0x8a, 0x75, 0x83, 0xe8, 0x37, 0xbc, 0x0e,
- 0xa5, 0xc7, 0x69, 0x5a, 0xb3, 0xac, 0x6a, 0xca, 0x5e, 0xd2, 0x36, 0xa9, 0x07, 0x27, 0xe6, 0x4f,
- 0x98, 0xb9, 0x58, 0x43, 0xf5, 0x12, 0x31, 0x94, 0xa5, 0xcd, 0x9f, 0xa4, 0x6e, 0x23, 0x62, 0x31,
- 0x93, 0xa9, 0x84, 0x66, 0xa5, 0x86, 0xea, 0x95, 0x9d, 0x17, 0x48, 0x45, 0x99, 0x88, 0x18, 0xe0,
- 0x57, 0x01, 0x5c, 0xe1, 0xfb, 0x3c, 0xf3, 0x1b, 0xda, 0x6f, 0x64, 0x36, 0x22, 0x06, 0x9b, 0xcb,
- 0x6a, 0x0a, 0x9c, 0x6c, 0xb2, 0xad, 0x4f, 0x60, 0xbd, 0x4d, 0xfd, 0xd0, 0x63, 0x99, 0xec, 0xc7,
- 0x9f, 0xeb, 0xc2, 0xe1, 0xb9, 0xb6, 0xda, 0xf0, 0xd2, 0x81, 0xe4, 0x7a, 0xaa, 0x66, 0xb6, 0xf3,
- 0x3c, 0x2c, 0x8b, 0x5e, 0x2f, 0xad, 0xae, 0x93, 0x8a, 0xae, 0xb2, 0x16, 0xc9, 0x52, 0x66, 0x53,
- 0x7d, 0xb0, 0x7e, 0x44, 0xb0, 0xba, 0x9b, 0x48, 0x2a, 0xd3, 0xac, 0xc7, 0xa6, 0xbb, 0x30, 0xe5,
- 0x18, 0x4e, 0xb0, 0x2a, 0xec, 0x63, 0xd5, 0x02, 0xc3, 0x4f, 0xf4, 0x8e, 0x31, 0x8b, 0xea, 0x1c,
- 0x9c, 0x9d, 0x76, 0x0e, 0x76, 0x75, 0x10, 0x19, 0x87, 0x5b, 0x2f, 0xc2, 0xda, 0x04, 0xdb, 0xac,
- 0x7e, 0xeb, 0x1f, 0x34, 0x61, 0x3d, 0xbe, 0xe6, 0xc5, 0x29, 0x45, 0x6c, 0xc3, 0x22, 0x0b, 0x64,
- 0xc4, 0x95, 0x78, 0x29, 0xd3, 0xcb, 0x33, 0x99, 0x4e, 0x7e, 0xdc, 0xde, 0x0e, 0x64, 0x34, 0x24,
- 0x39, 0xb6, 0xba, 0x07, 0x25, 0x65, 0x99, 0xdd, 0xaa, 0x7d, 0xa2, 0x14, 0x8e, 0x26, 0xca, 0xf7,
- 0x08, 0xf0, 0x24, 0x85, 0xd1, 0xb2, 0x19, 0x71, 0xcf, 0xb6, 0xcd, 0x9b, 0xcf, 0xe3, 0xae, 0xf7,
- 0xcd, 0x01, 0xf2, 0x77, 0x72, 0xf2, 0xeb, 0x50, 0xe2, 0x41, 0x97, 0x7d, 0xa6, 0xa8, 0x17, 0x49,
- 0xf6, 0x82, 0x37, 0xa0, 0x9c, 0x4d, 0xbf, 0x5e, 0x17, 0x38, 0xff, 0x4a, 0x14, 0xba, 0x76, 0x5b,
- 0x79, 0x88, 0x8e, 0xb0, 0xfe, 0x28, 0x80, 0xb9, 0xf5, 0x80, 0xb9, 0x0f, 0xdf, 0x0b, 0xba, 0xff,
- 0x7d, 0xea, 0x16, 0x8f, 0x32, 0x75, 0x3b, 0xb0, 0x1a, 0x46, 0xac, 0xcb, 0x5d, 0x2a, 0x99, 0xa3,
- 0xf7, 0x7d, 0x79, 0x9e, 0x7d, 0xbf, 0x32, 0x82, 0x65, 0x06, 0xbc, 0x05, 0x27, 0x65, 0x94, 0x30,
- 0x67, 0xdc, 0xaf, 0x85, 0x39, 0xfa, 0x75, 0x22, 0xc5, 0xe4, 0x6f, 0x31, 0xde, 0x86, 0x95, 0x1e,
- 0xf5, 0xe2, 0xc9, 0x2c, 0xa5, 0x39, 0xb2, 0x9c, 0x54, 0xa0, 0x51, 0x1a, 0x6b, 0x07, 0xce, 0x4c,
- 0xd1, 0x53, 0x0f, 0xc0, 0x65, 0x58, 0x1b, 0x97, 0xec, 0x53, 0xe9, 0x3e, 0x60, 0x5d, 0xa5, 0x6b,
- 0x85, 0x8c, 0xb5, 0xd8, 0xcd, 0xec, 0xd6, 0x2f, 0x08, 0xce, 0xa4, 0x37, 0xcf, 0xae, 0xe8, 0xf2,
- 0xde, 0xf0, 0xa3, 0x88, 0xff, 0x8f, 0x1b, 0xe1, 0x06, 0x94, 0xa2, 0xc4, 0x63, 0xf9, 0x36, 0x78,
- 0x63, 0xd6, 0xad, 0x38, 0xc9, 0x2d, 0xf1, 0x18, 0xc9, 0x50, 0xd6, 0x6d, 0xa8, 0x4e, 0x63, 0xae,
- 0x55, 0xb8, 0x04, 0xc5, 0x74, 0x77, 0x23, 0xd5, 0xeb, 0xd3, 0x33, 0x7a, 0x4d, 0xd2, 0x98, 0xe6,
- 0x4f, 0x15, 0xa8, 0x6c, 0x6a, 0x07, 0xfe, 0x06, 0x41, 0x25, 0xbf, 0x8a, 0xf1, 0x6b, 0xcf, 0xbe,
- 0xa8, 0x95, 0x48, 0xd5, 0x0b, 0xf3, 0xdc, 0xe6, 0xd6, 0xfb, 0x5f, 0xfe, 0xf9, 0xf4, 0xab, 0xc2,
- 0x4d, 0xeb, 0x5a, 0xfa, 0x43, 0xea, 0xf3, 0xb1, 0xaa, 0x37, 0xc2, 0x48, 0x7c, 0xca, 0x5c, 0x19,
- 0x37, 0x36, 0x1a, 0x3c, 0x88, 0x25, 0x0d, 0x5c, 0x96, 0x3e, 0xab, 0x88, 0xb8, 0xb1, 0xf1, 0x45,
- 0x2b, 0xd2, 0xa9, 0x5a, 0x68, 0xe3, 0x2a, 0xc2, 0x3f, 0x23, 0x38, 0xb1, 0xef, 0x3e, 0xc0, 0xf5,
- 0x69, 0xdf, 0x9f, 0x76, 0x1f, 0x55, 0x2f, 0xcd, 0x11, 0xa9, 0xe9, 0xde, 0x52, 0x74, 0xdf, 0xc5,
- 0x37, 0x8f, 0x4c, 0x37, 0x9e, 0xcc, 0x77, 0x15, 0xe1, 0x6f, 0x11, 0x18, 0xa3, 0x21, 0xc5, 0x17,
- 0x9e, 0xb9, 0x8c, 0x72, 0xa2, 0x17, 0x9f, 0x13, 0xa5, 0x49, 0x6e, 0x2b, 0x92, 0xef, 0x58, 0xad,
- 0x23, 0x93, 0xf4, 0xf3, 0x5c, 0x2d, 0xb4, 0x81, 0xbf, 0x43, 0x00, 0xe3, 0x7d, 0x88, 0x2f, 0xce,
- 0xb5, 0xeb, 0xab, 0xaf, 0xcf, 0xb7, 0x56, 0x73, 0x25, 0xad, 0xeb, 0xc7, 0x27, 0xa9, 0x5b, 0xff,
- 0x2b, 0x82, 0xb5, 0x43, 0xc7, 0x1e, 0x4f, 0x5d, 0xef, 0xb3, 0xb6, 0x6d, 0xf5, 0xca, 0x9c, 0xd1,
- 0x9a, 0xfc, 0xae, 0x22, 0x7f, 0xdb, 0xda, 0x3c, 0x32, 0x79, 0xf7, 0x60, 0xce, 0x54, 0xe9, 0xdf,
- 0x10, 0xe0, 0xc3, 0x67, 0x16, 0x5f, 0x99, 0xe7, 0xe4, 0x8f, 0x6b, 0xb0, 0xe7, 0x0d, 0xd7, 0x45,
- 0xdc, 0x55, 0x45, 0xec, 0x58, 0x5b, 0xc7, 0x3a, 0x7a, 0xfb, 0x93, 0xb6, 0xd0, 0xc6, 0xe6, 0xd7,
- 0x08, 0x4e, 0xb9, 0xc2, 0x9f, 0xc2, 0x62, 0xf3, 0x44, 0xbe, 0x47, 0xee, 0xa5, 0xbf, 0x7b, 0xef,
- 0xa1, 0x8f, 0x5b, 0x3a, 0xa8, 0x2f, 0x3c, 0x1a, 0xf4, 0x6d, 0x11, 0xf5, 0x1b, 0x7d, 0x16, 0xa8,
- 0x5f, 0xc5, 0x8d, 0xcc, 0x45, 0x43, 0x1e, 0x4f, 0xfe, 0xcb, 0xba, 0x9e, 0x3f, 0xff, 0x50, 0x30,
- 0x6f, 0x67, 0xe0, 0x2d, 0x4f, 0x24, 0x5d, 0x3b, 0x4f, 0x6d, 0xdf, 0x6f, 0xfe, 0x9e, 0xbb, 0xf6,
- 0x94, 0x6b, 0x2f, 0x77, 0xed, 0xdd, 0x6f, 0x76, 0xca, 0x2a, 0xf9, 0x5b, 0xff, 0x06, 0x00, 0x00,
- 0xff, 0xff, 0xd6, 0x35, 0xfc, 0x0e, 0x16, 0x0e, 0x00, 0x00,
+ // 1469 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x58, 0xcd, 0x6f, 0xdc, 0x44,
+ 0x14, 0xc7, 0xde, 0xec, 0x26, 0x3b, 0x49, 0xf3, 0x31, 0x84, 0xc6, 0x59, 0x12, 0x9a, 0x9a, 0x96,
+ 0x6e, 0xd3, 0xc4, 0xae, 0x82, 0x40, 0x6a, 0xaa, 0xb6, 0x78, 0x43, 0xdb, 0x14, 0x08, 0x2a, 0x0e,
+ 0x6a, 0x25, 0x54, 0x69, 0x35, 0xf1, 0xce, 0x6e, 0x4c, 0x6d, 0x8f, 0x6b, 0x8f, 0xb3, 0xa4, 0x55,
+ 0x84, 0x54, 0x4e, 0x5c, 0xe9, 0x1f, 0x81, 0xc4, 0x81, 0xff, 0x80, 0x1b, 0xe2, 0xd0, 0x23, 0x1c,
+ 0x10, 0x8b, 0x40, 0x3d, 0x54, 0x48, 0x20, 0x71, 0x41, 0xe2, 0xd2, 0x13, 0xf2, 0x78, 0xfc, 0x91,
+ 0x8d, 0xb7, 0xd9, 0x2d, 0x2a, 0xe2, 0x66, 0xbf, 0xaf, 0x79, 0xbf, 0xdf, 0x7b, 0xf3, 0x9e, 0x77,
+ 0xc1, 0xf1, 0x16, 0x21, 0x2d, 0x0b, 0xab, 0x5b, 0x66, 0x8b, 0xa2, 0x2d, 0x0b, 0xab, 0x3b, 0x2b,
+ 0xc9, 0xb3, 0xe2, 0x7a, 0x84, 0x12, 0x08, 0x23, 0x13, 0x25, 0x11, 0xef, 0xac, 0x54, 0xe6, 0xb8,
+ 0x1b, 0x72, 0x4d, 0x15, 0x39, 0x0e, 0xa1, 0x88, 0x9a, 0xc4, 0xf1, 0x23, 0x8f, 0xca, 0x4c, 0x46,
+ 0x6b, 0x58, 0x26, 0x76, 0x28, 0x57, 0x1c, 0xcb, 0x28, 0x9a, 0x26, 0xb6, 0x1a, 0xf5, 0x2d, 0xbc,
+ 0x8d, 0x76, 0x4c, 0xe2, 0x71, 0x83, 0xd9, 0x8c, 0x81, 0x87, 0x7d, 0x12, 0x78, 0x06, 0x4f, 0xa3,
+ 0x32, 0x9f, 0x93, 0x69, 0x03, 0x51, 0xc4, 0xd5, 0xaf, 0x70, 0x35, 0x7b, 0xdb, 0x0a, 0x9a, 0x6a,
+ 0xdb, 0x43, 0xae, 0x8b, 0xbd, 0xee, 0x9c, 0x3c, 0xd7, 0x50, 0x7d, 0x8a, 0x68, 0xc0, 0x15, 0xf2,
+ 0x67, 0x22, 0x98, 0xd0, 0x31, 0x6a, 0xe8, 0xa4, 0xed, 0xeb, 0xf8, 0x4e, 0x80, 0x7d, 0x0a, 0xdf,
+ 0x06, 0x80, 0x9d, 0x51, 0x77, 0x90, 0x8d, 0x25, 0x61, 0x41, 0xa8, 0x96, 0x6b, 0x27, 0x1f, 0x69,
+ 0xe2, 0x13, 0xed, 0x18, 0x98, 0x4f, 0x78, 0x88, 0x22, 0x22, 0xd7, 0xf4, 0x15, 0x83, 0xd8, 0xea,
+ 0x87, 0xa1, 0x50, 0x2f, 0x33, 0xdd, 0xfb, 0xc8, 0xc6, 0xf0, 0x04, 0x18, 0x47, 0xae, 0x5b, 0x77,
+ 0x3d, 0xd2, 0x34, 0x2d, 0x5c, 0x37, 0x1b, 0x52, 0x31, 0x8c, 0xa4, 0x8f, 0x21, 0xd7, 0xbd, 0x1e,
+ 0x09, 0xaf, 0x35, 0xa0, 0x02, 0x86, 0x3c, 0xd2, 0xf6, 0x25, 0x71, 0x41, 0xa8, 0x8e, 0xae, 0x54,
+ 0x94, 0x83, 0x6c, 0x2b, 0x3a, 0x69, 0x6f, 0x62, 0xaa, 0x33, 0x3b, 0xf8, 0x06, 0x28, 0x35, 0x4d,
+ 0x8b, 0x62, 0x4f, 0x2a, 0x30, 0x8f, 0xf9, 0x1e, 0x1e, 0x57, 0x98, 0x91, 0xce, 0x8d, 0xe1, 0x3c,
+ 0x00, 0xa1, 0x7b, 0xdd, 0x32, 0x6d, 0x93, 0x4a, 0x43, 0x0b, 0x42, 0xb5, 0xa0, 0x97, 0x43, 0xc9,
+ 0x7b, 0xa1, 0x40, 0xfe, 0xab, 0x00, 0x26, 0x53, 0x16, 0x7c, 0x97, 0x38, 0x3e, 0x86, 0x57, 0x40,
+ 0xc9, 0xd8, 0x0e, 0x9c, 0xdb, 0xbe, 0x24, 0x2c, 0x14, 0xaa, 0xa3, 0x2b, 0x4a, 0xee, 0x51, 0x5d,
+ 0x5e, 0xca, 0x1a, 0xb6, 0xac, 0xb5, 0xd0, 0x4d, 0xe7, 0xde, 0x50, 0x05, 0xd3, 0x16, 0xf2, 0x69,
+ 0xdd, 0x37, 0x90, 0xe3, 0xe0, 0x46, 0xdd, 0x23, 0xed, 0xfa, 0x6d, 0xbc, 0xcb, 0x20, 0x8f, 0xe9,
+ 0x53, 0xa1, 0x6e, 0x33, 0x52, 0xe9, 0xa4, 0xfd, 0x2e, 0xde, 0xad, 0x3c, 0x16, 0x41, 0x39, 0x09,
+ 0x03, 0x67, 0xc0, 0x70, 0xec, 0x21, 0x30, 0x8f, 0x92, 0xc7, 0xcc, 0xe0, 0x05, 0x30, 0xda, 0x44,
+ 0xb6, 0x69, 0xed, 0x46, 0x75, 0x8a, 0x18, 0x9c, 0x8b, 0x93, 0x8c, 0x3b, 0x41, 0xd9, 0xa4, 0x9e,
+ 0xe9, 0xb4, 0x6e, 0x20, 0x2b, 0xc0, 0x3a, 0x88, 0x1c, 0x58, 0x7d, 0xce, 0x81, 0xf2, 0x9d, 0x00,
+ 0x59, 0x66, 0xd3, 0x4c, 0xc8, 0x7c, 0xf9, 0x80, 0x73, 0x6d, 0x97, 0x62, 0x3f, 0xf2, 0x4d, 0xad,
+ 0xe1, 0x69, 0x30, 0x49, 0x4d, 0x1b, 0xfb, 0x14, 0xd9, 0x6e, 0xdd, 0x36, 0x0d, 0x8f, 0xf8, 0x9c,
+ 0xd3, 0x89, 0x44, 0xbe, 0xc1, 0xc4, 0xf0, 0x28, 0x28, 0x59, 0x68, 0x0b, 0x5b, 0xbe, 0x54, 0x5c,
+ 0x28, 0x54, 0xcb, 0x3a, 0x7f, 0x83, 0xd3, 0xa0, 0xb8, 0x13, 0x86, 0x95, 0x4a, 0x0c, 0x53, 0xf4,
+ 0x12, 0x96, 0x89, 0x3d, 0xd4, 0x7d, 0xf3, 0x2e, 0x96, 0x86, 0x17, 0x84, 0x6a, 0x51, 0x2f, 0x33,
+ 0xc9, 0xa6, 0x79, 0x37, 0x54, 0x97, 0x3d, 0xec, 0x63, 0x1a, 0x52, 0x28, 0x8d, 0x2c, 0x08, 0xd5,
+ 0x91, 0xf5, 0x17, 0xf4, 0x11, 0x26, 0xd2, 0x49, 0x1b, 0x1e, 0x03, 0xc0, 0x20, 0xb6, 0x6d, 0x46,
+ 0xfa, 0x32, 0xd7, 0x97, 0x23, 0x99, 0x4e, 0xda, 0xb5, 0x31, 0xd6, 0x05, 0xf5, 0xe8, 0x02, 0xc8,
+ 0xf7, 0x05, 0x30, 0xbd, 0x89, 0x6c, 0xd7, 0xc2, 0x11, 0xef, 0xcf, 0xbd, 0xff, 0xc5, 0x83, 0xfd,
+ 0x2f, 0x6f, 0x82, 0x97, 0xba, 0x72, 0xe0, 0xdd, 0xd7, 0xb3, 0xec, 0xc7, 0xc1, 0x18, 0x69, 0x36,
+ 0x43, 0x16, 0xb6, 0xc2, 0xe2, 0xb0, 0xa8, 0x05, 0x7d, 0x34, 0x92, 0xb1, 0x7a, 0xc9, 0xbf, 0x08,
+ 0x60, 0x72, 0x23, 0xa0, 0x88, 0x86, 0x51, 0x9f, 0x37, 0xaa, 0xa1, 0x9c, 0x5b, 0x3d, 0x97, 0x26,
+ 0xcf, 0xba, 0xbc, 0x56, 0x78, 0xa4, 0x89, 0x09, 0x82, 0x4b, 0xa0, 0x6c, 0x07, 0x7c, 0x66, 0x4a,
+ 0x05, 0x76, 0xb7, 0xe6, 0xf2, 0xee, 0xd6, 0x06, 0x37, 0x8a, 0xbc, 0x53, 0x1f, 0xf9, 0x45, 0x30,
+ 0x95, 0x81, 0x17, 0x11, 0x26, 0x7f, 0x2d, 0x66, 0xa4, 0xcf, 0xbd, 0x96, 0x85, 0x1c, 0xd4, 0xef,
+ 0x80, 0x61, 0xec, 0x50, 0xcf, 0x64, 0x45, 0x09, 0x51, 0x9d, 0xe9, 0x89, 0x2a, 0x9b, 0xa3, 0x72,
+ 0xd9, 0xa1, 0xde, 0x6e, 0x04, 0x32, 0x0e, 0x50, 0x41, 0xa0, 0xc8, 0xc4, 0xbd, 0xfb, 0x60, 0x1f,
+ 0x8b, 0xe2, 0x33, 0xb0, 0xf8, 0xa5, 0x00, 0x60, 0x36, 0x99, 0x64, 0xec, 0x25, 0x28, 0xa2, 0xb9,
+ 0xb7, 0x74, 0x18, 0x0a, 0x3e, 0xf9, 0x58, 0xbe, 0x29, 0x82, 0x6b, 0x31, 0x82, 0x69, 0x50, 0x34,
+ 0x9d, 0x06, 0xfe, 0x84, 0xe5, 0x5f, 0xd0, 0xa3, 0x17, 0xb8, 0x08, 0x4a, 0xd1, 0x3d, 0xe4, 0x83,
+ 0x0b, 0xc6, 0xa7, 0x78, 0xae, 0xa1, 0x6c, 0x32, 0x8d, 0xce, 0x2d, 0xe4, 0x27, 0x22, 0x90, 0xd6,
+ 0xb6, 0xb1, 0x71, 0x5b, 0x73, 0x1a, 0xff, 0x59, 0x5f, 0x0f, 0x0f, 0xdc, 0xd7, 0xeb, 0x60, 0xd2,
+ 0xf5, 0x70, 0xc3, 0x34, 0x10, 0xc5, 0x75, 0xbe, 0xa5, 0x4a, 0xfd, 0x6c, 0xa9, 0x89, 0xc4, 0x2d,
+ 0x12, 0xc0, 0x35, 0x30, 0x4e, 0xbd, 0x00, 0xd7, 0xd3, 0x02, 0x0f, 0x1d, 0x5e, 0x60, 0xfd, 0x48,
+ 0xe8, 0x13, 0xbf, 0xf9, 0xf0, 0x32, 0x98, 0x68, 0x22, 0xcb, 0xcf, 0x46, 0x29, 0xf6, 0x11, 0x65,
+ 0x9c, 0x39, 0x25, 0x61, 0xe4, 0x75, 0x30, 0x9b, 0xc3, 0x3d, 0x6f, 0x96, 0x33, 0x60, 0x2a, 0x85,
+ 0x6c, 0x23, 0x6a, 0x6c, 0xe3, 0x06, 0xab, 0xc1, 0x88, 0x9e, 0x72, 0xb1, 0x11, 0xc9, 0xe5, 0xdf,
+ 0x05, 0x30, 0x1b, 0xee, 0xcb, 0x0d, 0xd2, 0x30, 0x9b, 0xbb, 0x37, 0x3d, 0xf3, 0x7f, 0x3a, 0x9f,
+ 0x6a, 0xa0, 0xe8, 0x05, 0x16, 0x8e, 0x67, 0xd3, 0xa9, 0x5e, 0x7b, 0x3f, 0x8b, 0x23, 0xb0, 0x70,
+ 0x14, 0x24, 0x72, 0x95, 0xaf, 0x82, 0x4a, 0x1e, 0x54, 0x4e, 0xdb, 0x69, 0x50, 0x08, 0x57, 0x94,
+ 0xc0, 0x9a, 0x63, 0xa6, 0x47, 0x73, 0xe8, 0xa1, 0xcd, 0xca, 0xb7, 0xe3, 0x60, 0xa4, 0xc6, 0x15,
+ 0xf0, 0x3b, 0x01, 0x8c, 0xc4, 0x5f, 0x1c, 0xf0, 0xd5, 0xa7, 0x7f, 0x8f, 0x30, 0x56, 0x2b, 0x27,
+ 0xfa, 0xf9, 0x68, 0x91, 0xed, 0x8e, 0x96, 0x21, 0xbf, 0xa3, 0xcd, 0xa6, 0x2f, 0x4b, 0xfb, 0xe9,
+ 0xbc, 0xff, 0xc3, 0xe3, 0x07, 0xe2, 0x45, 0xf9, 0x5c, 0xf8, 0xfd, 0x79, 0x2f, 0xb5, 0xba, 0xe0,
+ 0x7a, 0xe4, 0x63, 0x6c, 0x50, 0x5f, 0x5d, 0x54, 0x4d, 0xc7, 0xa7, 0xc8, 0x31, 0x70, 0xf8, 0xcc,
+ 0x2c, 0x7c, 0x75, 0x71, 0x6f, 0xd5, 0xe3, 0x67, 0xae, 0x0a, 0x8b, 0x67, 0x05, 0xf8, 0xa3, 0x00,
+ 0x8e, 0xec, 0xdb, 0x7b, 0xb0, 0x9a, 0x97, 0x68, 0xde, 0x7a, 0xae, 0x9c, 0xee, 0xc3, 0x92, 0xe3,
+ 0x22, 0x83, 0xe1, 0x7a, 0x0b, 0x5e, 0x1c, 0x18, 0x97, 0x9f, 0x3d, 0xf8, 0xac, 0x00, 0xff, 0x14,
+ 0x40, 0x39, 0xb9, 0x27, 0xf0, 0xc4, 0x53, 0x67, 0x67, 0x8c, 0xe8, 0xe4, 0x21, 0x56, 0x1c, 0xcd,
+ 0x17, 0x42, 0x47, 0x9b, 0xcb, 0x20, 0xe0, 0x2d, 0xbc, 0x94, 0xdc, 0xef, 0x8e, 0x76, 0xe6, 0x69,
+ 0xea, 0x3c, 0xc8, 0x97, 0xe4, 0xd5, 0x81, 0x21, 0xdb, 0x71, 0x66, 0xab, 0xc2, 0x22, 0xfc, 0x55,
+ 0x00, 0x20, 0x5d, 0x06, 0xf0, 0x64, 0x5f, 0x2b, 0xaf, 0xf2, 0x5a, 0x7f, 0x3b, 0x45, 0xfe, 0xb4,
+ 0xa3, 0xc1, 0x0c, 0x24, 0xbe, 0x5b, 0x3a, 0xda, 0xf1, 0x83, 0xc2, 0xdc, 0x82, 0xca, 0xe7, 0x9f,
+ 0x1d, 0x1d, 0x6f, 0xd5, 0x9f, 0x44, 0x30, 0x75, 0x60, 0x00, 0xc2, 0xdc, 0xa5, 0xd8, 0x6b, 0x47,
+ 0x55, 0x96, 0xfb, 0xb4, 0xe6, 0xa8, 0x7f, 0x13, 0x3a, 0x5a, 0x2d, 0xa7, 0x92, 0xdd, 0xdb, 0x65,
+ 0x69, 0xff, 0x92, 0x58, 0xea, 0x1a, 0xf7, 0x1d, 0xed, 0x83, 0x7f, 0x1f, 0x24, 0x8f, 0xd6, 0xab,
+ 0x72, 0x6d, 0x60, 0x5a, 0x8d, 0x6e, 0xb4, 0x61, 0xf3, 0x7c, 0x2e, 0x02, 0x78, 0x70, 0x4c, 0xc2,
+ 0xe5, 0x7e, 0x26, 0x6e, 0xca, 0xae, 0xd2, 0xaf, 0x39, 0xa7, 0xf7, 0x81, 0xd0, 0xd1, 0xa4, 0x1c,
+ 0x66, 0xd8, 0xe4, 0xee, 0x68, 0xa7, 0x7a, 0xa9, 0xf2, 0xa8, 0x58, 0x97, 0xd7, 0x9e, 0x69, 0x14,
+ 0xee, 0x4f, 0x6d, 0x55, 0x58, 0xac, 0xfc, 0x2c, 0x3e, 0xd4, 0x66, 0x7a, 0xec, 0xb9, 0xef, 0xb5,
+ 0x6f, 0xc4, 0x6d, 0x4a, 0x5d, 0x7f, 0x55, 0x55, 0xdb, 0xed, 0x76, 0xf7, 0x16, 0x44, 0x01, 0xdd,
+ 0x4e, 0xff, 0xb7, 0x68, 0x20, 0x8a, 0x96, 0x06, 0xb2, 0x56, 0xc2, 0x8c, 0x88, 0x63, 0xed, 0x1e,
+ 0xea, 0x66, 0x58, 0x24, 0x68, 0x2c, 0x0f, 0x76, 0x54, 0x8e, 0xcf, 0xa0, 0x07, 0xba, 0x16, 0xa2,
+ 0x4d, 0xe2, 0xd9, 0x03, 0x9a, 0xb3, 0x63, 0x96, 0xc3, 0x73, 0x6a, 0x7f, 0x0b, 0x7f, 0x68, 0x37,
+ 0x0f, 0xf9, 0x8e, 0x80, 0x6f, 0x26, 0xa5, 0xbb, 0xc7, 0x9f, 0xf6, 0x32, 0x25, 0xbc, 0x17, 0x3f,
+ 0xee, 0xc5, 0xb5, 0x8c, 0xaa, 0xbe, 0x07, 0x8e, 0x1a, 0xc4, 0xce, 0xe9, 0xc1, 0xda, 0x91, 0x78,
+ 0x71, 0x5f, 0x0f, 0x7f, 0x4f, 0x5f, 0x17, 0x3e, 0x5a, 0xe5, 0x46, 0x2d, 0x62, 0x21, 0xa7, 0xa5,
+ 0x10, 0xaf, 0xa5, 0xb6, 0xb0, 0xc3, 0x7e, 0x6d, 0xab, 0x69, 0x46, 0xd9, 0x3f, 0x79, 0xce, 0xc7,
+ 0xcf, 0x5f, 0x89, 0xd2, 0xd5, 0xc8, 0x79, 0x2d, 0xc4, 0xa8, 0xc4, 0xa1, 0x95, 0x1b, 0x2b, 0x0f,
+ 0x63, 0xd5, 0x2d, 0xa6, 0xba, 0x15, 0xab, 0x6e, 0xdd, 0x58, 0xd9, 0x2a, 0xb1, 0xe0, 0xaf, 0xff,
+ 0x13, 0x00, 0x00, 0xff, 0xff, 0xb9, 0x39, 0x3a, 0x5b, 0xea, 0x12, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
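
Most of the bigtable.pb.go churn above is doc-comment churn: `TableName`, `RowKey`, and the mutation lists are now marked Required, and row keys are flagged as IDENTIFYING_ID for audit logging. A hedged sketch of a request that fills every field the new comments call Required; the resource name and cell coordinates are made-up values, and the `Mutation_SetCell` types live in the sibling data.pb.go file rather than in this diff:

```go
package bigtabledemo

import (
	btpb "google.golang.org/genproto/googleapis/bigtable/v2"
)

// newSetCellRequest populates the three fields MutateRowRequest now documents
// as Required: table name, row key, and at least one mutation.
func newSetCellRequest() *btpb.MutateRowRequest {
	return &btpb.MutateRowRequest{
		TableName: "projects/my-project/instances/my-instance/tables/my-table",
		RowKey:    []byte("row-0001"),
		Mutations: []*btpb.Mutation{{
			Mutation: &btpb.Mutation_SetCell_{SetCell: &btpb.Mutation_SetCell{
				FamilyName:      "cf",
				ColumnQualifier: []byte("col"),
				TimestampMicros: -1, // -1 asks the server to assign the timestamp
				Value:           []byte("value"),
			}},
		}},
	}
}
```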
diff --git a/vendor/google.golang.org/genproto/googleapis/iam/v1/iam_policy.pb.go b/vendor/google.golang.org/genproto/googleapis/iam/v1/iam_policy.pb.go
index 631518c41440a..6e411ffd39a12 100644
--- a/vendor/google.golang.org/genproto/googleapis/iam/v1/iam_policy.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/iam/v1/iam_policy.pb.go
@@ -238,37 +238,40 @@ func init() {
func init() { proto.RegisterFile("google/iam/v1/iam_policy.proto", fileDescriptor_d2728eb97d748a32) }
var fileDescriptor_d2728eb97d748a32 = []byte{
- // 465 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x53, 0xcd, 0x8a, 0x13, 0x31,
- 0x1c, 0x27, 0x5d, 0x58, 0x6d, 0x56, 0x05, 0xa7, 0x88, 0x35, 0x2b, 0xb5, 0x44, 0x0f, 0x6d, 0xa1,
- 0x19, 0xbb, 0x9e, 0xac, 0x28, 0xec, 0x7a, 0x18, 0xe6, 0x20, 0x96, 0x51, 0x16, 0x94, 0x82, 0xc6,
- 0x31, 0x0c, 0x81, 0xc9, 0x24, 0x4e, 0xd2, 0x05, 0x11, 0x2f, 0x1e, 0x7c, 0x01, 0x6f, 0x3e, 0x82,
- 0x67, 0x9f, 0x62, 0xaf, 0xbe, 0x82, 0x0f, 0xe1, 0x51, 0x66, 0x92, 0xee, 0xce, 0x47, 0x95, 0x0a,
- 0x9e, 0x4a, 0xf3, 0xfb, 0xfa, 0x7f, 0xcc, 0x1f, 0x0e, 0x12, 0x29, 0x93, 0x94, 0xf9, 0x9c, 0x0a,
- 0xff, 0x64, 0x56, 0xfc, 0xbc, 0x52, 0x32, 0xe5, 0xf1, 0x7b, 0xa2, 0x72, 0x69, 0xa4, 0x77, 0xd9,
- 0xe2, 0x84, 0x53, 0x41, 0x4e, 0x66, 0x68, 0xbf, 0x4e, 0x97, 0xca, 0x70, 0x99, 0x69, 0xcb, 0x45,
- 0xa8, 0x0e, 0x56, 0x7d, 0xd0, 0x4d, 0x87, 0x51, 0xc5, 0x7d, 0x9a, 0x65, 0xd2, 0xd0, 0xaa, 0xf2,
- 0x7a, 0x05, 0x8d, 0x53, 0xce, 0x32, 0x63, 0x01, 0xfc, 0x1a, 0xf6, 0x9e, 0x31, 0x13, 0x52, 0xb1,
- 0x28, 0xcd, 0x22, 0xf6, 0x6e, 0xc5, 0xb4, 0xf1, 0x10, 0xbc, 0x98, 0x33, 0x2d, 0x57, 0x79, 0xcc,
- 0xfa, 0x60, 0x08, 0x46, 0xdd, 0xe8, 0xec, 0xbf, 0x37, 0x85, 0xbb, 0x36, 0xb9, 0xdf, 0x19, 0x82,
- 0xd1, 0xde, 0xc1, 0x35, 0x52, 0x6b, 0x81, 0x38, 0x27, 0x47, 0xc2, 0x29, 0xec, 0x05, 0xff, 0x98,
- 0x70, 0x1f, 0x5e, 0x70, 0x8d, 0xbb, 0x88, 0x5b, 0x8d, 0x88, 0x80, 0x19, 0xeb, 0xf6, 0xd4, 0xd2,
- 0xa2, 0x35, 0x1f, 0xbf, 0x80, 0x37, 0x9e, 0x33, 0x5d, 0xc6, 0xb1, 0x5c, 0x70, 0xad, 0x4b, 0x78,
- 0x8b, 0xcc, 0x21, 0xdc, 0x53, 0xe7, 0x8a, 0x7e, 0x67, 0xb8, 0x33, 0xea, 0x46, 0xd5, 0x27, 0xfc,
- 0x08, 0xa2, 0x4d, 0xd6, 0x5a, 0xc9, 0x4c, 0xb7, 0xf4, 0xa0, 0xa5, 0x3f, 0xf8, 0xbe, 0x03, 0xbb,
- 0xe1, 0xe1, 0x13, 0x5b, 0xb8, 0x67, 0xe0, 0xa5, 0xea, 0xe0, 0x3d, 0xdc, 0x68, 0x71, 0xc3, 0x56,
- 0xd0, 0xe6, 0x49, 0xe3, 0xf1, 0xa7, 0x1f, 0x3f, 0xbf, 0x74, 0x6e, 0xe3, 0x41, 0xf1, 0x51, 0x7c,
- 0x58, 0x77, 0xf4, 0x70, 0x32, 0xf9, 0x38, 0xd7, 0x15, 0x97, 0x39, 0x98, 0x14, 0xa9, 0xc1, 0xdf,
- 0x52, 0x83, 0xff, 0x92, 0x9a, 0x34, 0x52, 0xbf, 0x02, 0xe8, 0xb5, 0x47, 0xe7, 0x8d, 0x1a, 0xc6,
- 0x7f, 0x5c, 0x1c, 0x1a, 0x6f, 0xc1, 0xb4, 0x7b, 0xc0, 0x7e, 0x59, 0xd6, 0x18, 0xdf, 0x69, 0x97,
- 0x65, 0x5a, 0xaa, 0x39, 0x98, 0xa0, 0xc1, 0xe9, 0xe1, 0x3e, 0xa7, 0x62, 0x2a, 0x98, 0xa1, 0x53,
- 0xaa, 0xb8, 0x8b, 0xa2, 0x8a, 0x6b, 0x12, 0x4b, 0x71, 0xf4, 0x19, 0xc0, 0xab, 0xb1, 0x14, 0xf5,
- 0x0a, 0x8e, 0xae, 0x9c, 0x35, 0xb8, 0x28, 0xee, 0x68, 0x01, 0x5e, 0xde, 0x75, 0x84, 0x44, 0xa6,
- 0x34, 0x4b, 0x88, 0xcc, 0x13, 0x3f, 0x61, 0x59, 0x79, 0x65, 0xfe, 0xb9, 0xa5, 0xbb, 0xdd, 0x07,
- 0x9c, 0x8a, 0x5f, 0x00, 0x7c, 0xeb, 0xf4, 0x02, 0xab, 0x7a, 0x9c, 0xca, 0xd5, 0x5b, 0x12, 0x52,
- 0x41, 0x8e, 0x67, 0xa7, 0xeb, 0xd7, 0x65, 0xf9, 0xba, 0x0c, 0xa9, 0x58, 0x1e, 0xcf, 0xde, 0xec,
- 0x96, 0x5e, 0xf7, 0x7e, 0x07, 0x00, 0x00, 0xff, 0xff, 0xa3, 0x57, 0xb0, 0xe9, 0x52, 0x04, 0x00,
- 0x00,
+ // 514 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x54, 0xc1, 0x8a, 0xd3, 0x40,
+ 0x18, 0x66, 0x52, 0x58, 0xed, 0xac, 0x0a, 0xa6, 0x88, 0xdd, 0xac, 0x74, 0x4b, 0x74, 0xa1, 0x0d,
+ 0xec, 0xc4, 0xd6, 0x93, 0x15, 0x85, 0xd4, 0x43, 0xe8, 0x41, 0x2c, 0x55, 0xf6, 0x20, 0x85, 0x65,
+ 0x36, 0x3b, 0xc6, 0x81, 0x4c, 0x66, 0xcc, 0x4c, 0x2b, 0x22, 0x5e, 0x3c, 0xf8, 0x02, 0xde, 0x7c,
+ 0x04, 0xcf, 0x3e, 0xc5, 0x5e, 0x7d, 0x81, 0x3d, 0xf8, 0x10, 0xe2, 0x49, 0x92, 0x99, 0x6e, 0x93,
+ 0xb6, 0x8a, 0xca, 0x9e, 0x0a, 0xff, 0xf7, 0xfd, 0xdf, 0xf7, 0x7f, 0xff, 0xdf, 0x09, 0x6c, 0xc5,
+ 0x9c, 0xc7, 0x09, 0xf1, 0x29, 0x66, 0xfe, 0xbc, 0x97, 0xff, 0x1c, 0x09, 0x9e, 0xd0, 0xe8, 0x2d,
+ 0x12, 0x19, 0x57, 0xdc, 0xbe, 0xaa, 0x71, 0x44, 0x31, 0x43, 0xf3, 0x9e, 0xb3, 0x5b, 0xa5, 0x73,
+ 0xa1, 0x28, 0x4f, 0xa5, 0xe6, 0x3a, 0x4e, 0x15, 0x2c, 0xeb, 0x38, 0xb7, 0x0c, 0x86, 0x05, 0xf5,
+ 0x71, 0x9a, 0x72, 0x85, 0xcb, 0x9d, 0x37, 0x4b, 0x68, 0x94, 0x50, 0x92, 0x2a, 0x03, 0xec, 0x95,
+ 0x80, 0x97, 0x94, 0x24, 0x27, 0x47, 0xc7, 0xe4, 0x15, 0x9e, 0x53, 0x9e, 0x19, 0xc2, 0x4e, 0x89,
+ 0x90, 0x11, 0xc9, 0x67, 0x59, 0x44, 0x34, 0xe4, 0x0a, 0xd8, 0x78, 0x46, 0xd4, 0x08, 0xb3, 0x71,
+ 0x31, 0xc8, 0x84, 0xbc, 0x9e, 0x11, 0xa9, 0xec, 0x7d, 0x78, 0x79, 0x41, 0x6c, 0x82, 0x36, 0xe8,
+ 0xd4, 0x87, 0xf5, 0xb3, 0xc0, 0xfa, 0x19, 0xd4, 0x20, 0xf0, 0x26, 0xe7, 0x90, 0xdd, 0x87, 0x5b,
+ 0x3a, 0x40, 0xd3, 0x6a, 0x83, 0xce, 0x76, 0xff, 0x06, 0xaa, 0x6c, 0x02, 0x69, 0xd1, 0x61, 0xed,
+ 0x2c, 0xb0, 0x26, 0x86, 0xe9, 0xbe, 0x81, 0x8d, 0xf0, 0xff, 0x1d, 0xef, 0xc3, 0x4b, 0x66, 0x9f,
+ 0xc6, 0x72, 0x6f, 0xc5, 0x32, 0x24, 0x4a, 0x0b, 0x3f, 0xd5, 0xb4, 0xc9, 0x82, 0xef, 0x52, 0xb8,
+ 0xf3, 0x9c, 0xc8, 0xc2, 0x99, 0x64, 0x8c, 0x4a, 0x59, 0xc0, 0xff, 0x66, 0xbf, 0x0f, 0xb7, 0xc5,
+ 0xb2, 0xb9, 0x69, 0xb5, 0x6b, 0x9d, 0xba, 0x8e, 0x57, 0xae, 0xbb, 0x8f, 0xa0, 0xb3, 0xc9, 0x4a,
+ 0x0a, 0x9e, 0x4a, 0x62, 0xb7, 0xab, 0x22, 0x20, 0x17, 0xa9, 0xf4, 0xf7, 0xbf, 0xd6, 0x60, 0x7d,
+ 0x14, 0x3c, 0xd1, 0x41, 0x6c, 0x05, 0xaf, 0x94, 0x6f, 0x64, 0xbb, 0x2b, 0x91, 0x37, 0x1c, 0xd0,
+ 0xd9, 0x7c, 0x09, 0xb7, 0xfb, 0xe1, 0xdb, 0xf7, 0x4f, 0xd6, 0x6d, 0xb7, 0x95, 0xff, 0xf7, 0xde,
+ 0x2d, 0x62, 0x3d, 0xf4, 0xbc, 0xf7, 0x03, 0x59, 0x52, 0x19, 0x00, 0x2f, 0x77, 0x0d, 0xff, 0xe4,
+ 0x1a, 0x5e, 0x88, 0x6b, 0xbc, 0xe2, 0xfa, 0x19, 0x40, 0x7b, 0x7d, 0x75, 0x76, 0x67, 0x45, 0xf8,
+ 0xb7, 0x87, 0x74, 0xba, 0x7f, 0xc1, 0xd4, 0x77, 0x70, 0xfd, 0x62, 0xac, 0xae, 0x7b, 0x67, 0x7d,
+ 0x2c, 0xb5, 0xd6, 0x35, 0x00, 0x9e, 0xd3, 0x3a, 0x0d, 0x76, 0x29, 0x66, 0x07, 0x8c, 0x28, 0x7c,
+ 0x80, 0x05, 0x35, 0x56, 0x58, 0x50, 0x89, 0x22, 0xce, 0x86, 0x1f, 0x01, 0xbc, 0x1e, 0x71, 0x56,
+ 0x9d, 0x60, 0x78, 0xed, 0x3c, 0xe0, 0x38, 0x7f, 0x72, 0x63, 0xf0, 0xe2, 0xae, 0x21, 0xc4, 0x3c,
+ 0xc1, 0x69, 0x8c, 0x78, 0x16, 0xfb, 0x31, 0x49, 0x8b, 0x07, 0xe9, 0x2f, 0x25, 0xcd, 0x27, 0xe2,
+ 0x01, 0xc5, 0xec, 0x07, 0x00, 0x5f, 0xac, 0x46, 0xa8, 0xbb, 0x1e, 0x27, 0x7c, 0x76, 0x82, 0x46,
+ 0x98, 0xa1, 0xc3, 0xde, 0xe9, 0xa2, 0x3a, 0x2d, 0xaa, 0xd3, 0x11, 0x66, 0xd3, 0xc3, 0xde, 0xf1,
+ 0x56, 0xa1, 0x75, 0xef, 0x57, 0x00, 0x00, 0x00, 0xff, 0xff, 0x32, 0x24, 0xb5, 0x51, 0xb9, 0x04,
+ 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
diff --git a/vendor/google.golang.org/genproto/googleapis/iam/v1/policy.pb.go b/vendor/google.golang.org/genproto/googleapis/iam/v1/policy.pb.go
index 8548931e68bb4..b2402e1825ca3 100644
--- a/vendor/google.golang.org/genproto/googleapis/iam/v1/policy.pb.go
+++ b/vendor/google.golang.org/genproto/googleapis/iam/v1/policy.pb.go
@@ -91,27 +91,36 @@ func (AuditConfigDelta_Action) EnumDescriptor() ([]byte, []int) {
// specify access control policies for Cloud Platform resources.
//
//
-// A `Policy` consists of a list of `bindings`. A `binding` binds a list of
-// `members` to a `role`, where the members can be user accounts, Google groups,
-// Google domains, and service accounts. A `role` is a named list of permissions
-// defined by IAM.
+// A `Policy` is a collection of `bindings`. A `binding` binds one or more
+// `members` to a single `role`. Members can be user accounts, service accounts,
+// Google groups, and domains (such as G Suite). A `role` is a named list of
+// permissions (defined by IAM or configured by users). A `binding` can
+// optionally specify a `condition`, which is a logic expression that further
+// constrains the role binding based on attributes about the request and/or
+// target resource.
//
// **JSON Example**
//
// {
// "bindings": [
// {
-// "role": "roles/owner",
+// "role": "roles/resourcemanager.organizationAdmin",
// "members": [
// "user:[email protected]",
// "group:[email protected]",
// "domain:google.com",
-// "serviceAccount:[email protected]"
+// "serviceAccount:[email protected]"
// ]
// },
// {
-// "role": "roles/viewer",
-// "members": ["user:[email protected]"]
+// "role": "roles/resourcemanager.organizationViewer",
+// "members": ["user:[email protected]"],
+// "condition": {
+// "title": "expirable access",
+// "description": "Does not grant access after Sep 2020",
+// "expression": "request.time <
+// timestamp('2020-10-01T00:00:00.000Z')",
+// }
// }
// ]
// }
@@ -123,12 +132,15 @@ func (AuditConfigDelta_Action) EnumDescriptor() ([]byte, []int) {
// - user:[email protected]
// - group:[email protected]
// - domain:google.com
-// - serviceAccount:[email protected]
-// role: roles/owner
+// - serviceAccount:[email protected]
+// role: roles/resourcemanager.organizationAdmin
// - members:
-// - user:[email protected]
-// role: roles/viewer
-//
+// - user:[email protected]
+// role: roles/resourcemanager.organizationViewer
+// condition:
+// title: expirable access
+// description: Does not grant access after Sep 2020
+// expression: request.time < timestamp('2020-10-01T00:00:00.000Z')
//
// For a description of IAM and its features, see the
// [IAM developer's guide](https://cloud.google.com/iam/docs).
@@ -138,11 +150,18 @@ type Policy struct {
// Valid values are 0, 1, and 3. Requests specifying an invalid value will be
// rejected.
//
- // Policies with any conditional bindings must specify version 3. Policies
- // without any conditional bindings may specify any valid value or leave the
- // field unset.
+ // Operations affecting conditional bindings must specify version 3. This can
+ // be either setting a conditional policy, modifying a conditional binding,
+ // or removing a binding (conditional or unconditional) from the stored
+ // conditional policy.
+ // Operations on non-conditional policies may specify any valid value or
+ // leave the field unset.
+ //
+ // If no etag is provided in the call to `setIamPolicy`, version compliance
+ // checks against the stored policy is skipped.
Version int32 `protobuf:"varint,1,opt,name=version,proto3" json:"version,omitempty"`
- // Associates a list of `members` to a `role`.
+ // Associates a list of `members` to a `role`. Optionally may specify a
+ // `condition` that determines when binding is in effect.
// `bindings` with no members will result in an error.
Bindings []*Binding `protobuf:"bytes,4,rep,name=bindings,proto3" json:"bindings,omitempty"`
// `etag` is used for optimistic concurrency control as a way to help
@@ -154,7 +173,9 @@ type Policy struct {
// ensure that their change will be applied to the same version of the policy.
//
// If no `etag` is provided in the call to `setIamPolicy`, then the existing
- // policy is overwritten.
+ // policy is overwritten. Due to blind-set semantics of an etag-less policy,
+ // 'setIamPolicy' will not fail even if the incoming policy version does not
+ // meet the requirements for modifying the stored policy.
Etag []byte `protobuf:"bytes,3,opt,name=etag,proto3" json:"etag,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
@@ -357,8 +378,7 @@ type BindingDelta struct {
// Follows the same format of Binding.members.
// Required
Member string `protobuf:"bytes,3,opt,name=member,proto3" json:"member,omitempty"`
- // The condition that is associated with this binding. This field is logged
- // only for Cloud Audit Logging.
+ // The condition that is associated with this binding.
Condition *expr.Expr `protobuf:"bytes,4,opt,name=condition,proto3" json:"condition,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
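
The rewritten Policy comments above describe conditional bindings and the version-3 requirement. Here is the same "expirable access" example the comments give in JSON and YAML, expressed as a Go literal; a sketch only, using the `expr` package that the diff's `Condition *expr.Expr` field already references:

```go
package iamdemo

import (
	iampb "google.golang.org/genproto/googleapis/iam/v1"
	expr "google.golang.org/genproto/googleapis/type/expr"
)

// expirablePolicy mirrors the example from the comments; per the new docs,
// a policy carrying a conditional binding must specify Version 3.
func expirablePolicy() *iampb.Policy {
	return &iampb.Policy{
		Version: 3,
		Bindings: []*iampb.Binding{{
			Role:    "roles/resourcemanager.organizationViewer",
			Members: []string{"user:[email protected]"},
			Condition: &expr.Expr{
				Title:       "expirable access",
				Description: "Does not grant access after Sep 2020",
				Expression:  "request.time < timestamp('2020-10-01T00:00:00.000Z')",
			},
		}},
	}
}
```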
diff --git a/vendor/google.golang.org/genproto/protobuf/field_mask/field_mask.pb.go b/vendor/google.golang.org/genproto/protobuf/field_mask/field_mask.pb.go
index a0889f0c7a63e..42552c3076b39 100644
--- a/vendor/google.golang.org/genproto/protobuf/field_mask/field_mask.pb.go
+++ b/vendor/google.golang.org/genproto/protobuf/field_mask/field_mask.pb.go
@@ -1,5 +1,5 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
-// source: google/protobuf/field_mask.proto
+// source: src/google/protobuf/field_mask.proto
package field_mask
@@ -219,7 +219,7 @@ const _ = proto.ProtoPackageIsVersion3 // please upgrade the proto package
//
// The implementation of any API method which has a FieldMask type field in the
// request should verify the included field paths, and return an
-// `INVALID_ARGUMENT` error if any path is duplicated or unmappable.
+// `INVALID_ARGUMENT` error if any path is unmappable.
type FieldMask struct {
// The set of field mask paths.
Paths []string `protobuf:"bytes,1,rep,name=paths,proto3" json:"paths,omitempty"`
@@ -232,7 +232,7 @@ func (m *FieldMask) Reset() { *m = FieldMask{} }
func (m *FieldMask) String() string { return proto.CompactTextString(m) }
func (*FieldMask) ProtoMessage() {}
func (*FieldMask) Descriptor() ([]byte, []int) {
- return fileDescriptor_5158202634f0da48, []int{0}
+ return fileDescriptor_2b90b55a883453ae, []int{0}
}
func (m *FieldMask) XXX_Unmarshal(b []byte) error {
@@ -264,19 +264,22 @@ func init() {
proto.RegisterType((*FieldMask)(nil), "google.protobuf.FieldMask")
}
-func init() { proto.RegisterFile("google/protobuf/field_mask.proto", fileDescriptor_5158202634f0da48) }
+func init() {
+ proto.RegisterFile("src/google/protobuf/field_mask.proto", fileDescriptor_2b90b55a883453ae)
+}
-var fileDescriptor_5158202634f0da48 = []byte{
- // 175 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x52, 0x48, 0xcf, 0xcf, 0x4f,
- 0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3, 0x4f, 0xcb, 0x4c, 0xcd,
- 0x49, 0x89, 0xcf, 0x4d, 0x2c, 0xce, 0xd6, 0x03, 0x8b, 0x09, 0xf1, 0x43, 0x54, 0xe8, 0xc1, 0x54,
- 0x28, 0x29, 0x72, 0x71, 0xba, 0x81, 0x14, 0xf9, 0x26, 0x16, 0x67, 0x0b, 0x89, 0x70, 0xb1, 0x16,
- 0x24, 0x96, 0x64, 0x14, 0x4b, 0x30, 0x2a, 0x30, 0x6b, 0x70, 0x06, 0x41, 0x38, 0x4e, 0x3d, 0x8c,
- 0x5c, 0xc2, 0xc9, 0xf9, 0xb9, 0x7a, 0x68, 0x5a, 0x9d, 0xf8, 0xe0, 0x1a, 0x03, 0x40, 0x42, 0x01,
- 0x8c, 0x51, 0x96, 0x50, 0x25, 0xe9, 0xf9, 0x39, 0x89, 0x79, 0xe9, 0x7a, 0xf9, 0x45, 0xe9, 0xfa,
- 0xe9, 0xa9, 0x79, 0x60, 0x0d, 0xd8, 0xdc, 0x64, 0x8d, 0x60, 0xfe, 0x60, 0x64, 0x5c, 0xc4, 0xc4,
- 0xec, 0x1e, 0xe0, 0xb4, 0x8a, 0x49, 0xce, 0x1d, 0x62, 0x48, 0x00, 0x54, 0x83, 0x5e, 0x78, 0x6a,
- 0x4e, 0x8e, 0x77, 0x5e, 0x7e, 0x79, 0x5e, 0x48, 0x65, 0x41, 0x6a, 0x71, 0x12, 0x1b, 0xd8, 0x24,
- 0x63, 0x40, 0x00, 0x00, 0x00, 0xff, 0xff, 0xfd, 0xda, 0xb7, 0xa8, 0xed, 0x00, 0x00, 0x00,
+var fileDescriptor_2b90b55a883453ae = []byte{
+ // 179 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x52, 0x29, 0x2e, 0x4a, 0xd6,
+ 0x4f, 0xcf, 0xcf, 0x4f, 0xcf, 0x49, 0xd5, 0x2f, 0x28, 0xca, 0x2f, 0xc9, 0x4f, 0x2a, 0x4d, 0xd3,
+ 0x4f, 0xcb, 0x4c, 0xcd, 0x49, 0x89, 0xcf, 0x4d, 0x2c, 0xce, 0xd6, 0x03, 0x8b, 0x09, 0xf1, 0x43,
+ 0x54, 0xe8, 0xc1, 0x54, 0x28, 0x29, 0x72, 0x71, 0xba, 0x81, 0x14, 0xf9, 0x26, 0x16, 0x67, 0x0b,
+ 0x89, 0x70, 0xb1, 0x16, 0x24, 0x96, 0x64, 0x14, 0x4b, 0x30, 0x2a, 0x30, 0x6b, 0x70, 0x06, 0x41,
+ 0x38, 0x4e, 0x3d, 0x8c, 0x5c, 0xc2, 0xc9, 0xf9, 0xb9, 0x7a, 0x68, 0x5a, 0x9d, 0xf8, 0xe0, 0x1a,
+ 0x03, 0x40, 0x42, 0x01, 0x8c, 0x51, 0x96, 0x50, 0x25, 0xe9, 0xf9, 0x39, 0x89, 0x79, 0xe9, 0x7a,
+ 0xf9, 0x45, 0xe9, 0xfa, 0xe9, 0xa9, 0x79, 0x60, 0x0d, 0xd8, 0xdc, 0x64, 0x8d, 0x60, 0xfe, 0x60,
+ 0x64, 0x5c, 0xc4, 0xc4, 0xec, 0x1e, 0xe0, 0xb4, 0x8a, 0x49, 0xce, 0x1d, 0x62, 0x48, 0x00, 0x54,
+ 0x83, 0x5e, 0x78, 0x6a, 0x4e, 0x8e, 0x77, 0x5e, 0x7e, 0x79, 0x5e, 0x48, 0x65, 0x41, 0x6a, 0x71,
+ 0x12, 0x1b, 0xd8, 0x24, 0x63, 0x40, 0x00, 0x00, 0x00, 0xff, 0xff, 0xb7, 0x6d, 0x19, 0x3c, 0xf1,
+ 0x00, 0x00, 0x00,
}
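
The regenerated field_mask.pb.go softens the documented contract: servers must still reject unmappable paths with `INVALID_ARGUMENT`, but duplicated paths are no longer singled out as errors. Constructing a mask is unchanged; a small sketch with hypothetical field names:

```go
package maskdemo

import (
	field_mask "google.golang.org/genproto/protobuf/field_mask"
)

// updateMask lists the fields a partial-update RPC should touch; any path the
// server cannot map onto the target message must be rejected.
var updateMask = &field_mask.FieldMask{
	Paths: []string{"display_name", "photo"},
}
```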
diff --git a/vendor/gopkg.in/yaml.v2/decode.go b/vendor/gopkg.in/yaml.v2/decode.go
index e4e56e28e0e80..129bc2a97d317 100644
--- a/vendor/gopkg.in/yaml.v2/decode.go
+++ b/vendor/gopkg.in/yaml.v2/decode.go
@@ -229,6 +229,10 @@ type decoder struct {
mapType reflect.Type
terrors []string
strict bool
+
+ decodeCount int
+ aliasCount int
+ aliasDepth int
}
var (
@@ -314,7 +318,43 @@ func (d *decoder) prepare(n *node, out reflect.Value) (newout reflect.Value, unm
return out, false, false
}
+const (
+ // 400,000 decode operations is ~500kb of dense object declarations, or
+ // ~5kb of dense object declarations with 10000% alias expansion
+ alias_ratio_range_low = 400000
+
+ // 4,000,000 decode operations is ~5MB of dense object declarations, or
+ // ~4.5MB of dense object declarations with 10% alias expansion
+ alias_ratio_range_high = 4000000
+
+ // alias_ratio_range is the range over which we scale allowed alias ratios
+ alias_ratio_range = float64(alias_ratio_range_high - alias_ratio_range_low)
+)
+
+func allowedAliasRatio(decodeCount int) float64 {
+ switch {
+ case decodeCount <= alias_ratio_range_low:
+ // allow 99% to come from alias expansion for small-to-medium documents
+ return 0.99
+ case decodeCount >= alias_ratio_range_high:
+ // allow 10% to come from alias expansion for very large documents
+ return 0.10
+ default:
+ // scale smoothly from 99% down to 10% over the range.
+ // this maps to 396,000 - 400,000 allowed alias-driven decodes over the range.
+ // 400,000 decode operations is ~100MB of allocations in worst-case scenarios (single-item maps).
+ return 0.99 - 0.89*(float64(decodeCount-alias_ratio_range_low)/alias_ratio_range)
+ }
+}
+
func (d *decoder) unmarshal(n *node, out reflect.Value) (good bool) {
+ d.decodeCount++
+ if d.aliasDepth > 0 {
+ d.aliasCount++
+ }
+ if d.aliasCount > 100 && d.decodeCount > 1000 && float64(d.aliasCount)/float64(d.decodeCount) > allowedAliasRatio(d.decodeCount) {
+ failf("document contains excessive aliasing")
+ }
switch n.kind {
case documentNode:
return d.document(n, out)
@@ -353,7 +393,9 @@ func (d *decoder) alias(n *node, out reflect.Value) (good bool) {
failf("anchor '%s' value contains itself", n.value)
}
d.aliases[n] = true
+ d.aliasDepth++
good = d.unmarshal(n.alias, out)
+ d.aliasDepth--
delete(d.aliases, n)
return good
}
@@ -746,8 +788,7 @@ func (d *decoder) merge(n *node, out reflect.Value) {
case mappingNode:
d.unmarshal(n, out)
case aliasNode:
- an, ok := d.doc.anchors[n.value]
- if ok && an.kind != mappingNode {
+ if n.alias != nil && n.alias.kind != mappingNode {
failWantMap()
}
d.unmarshal(n, out)
@@ -756,8 +797,7 @@ func (d *decoder) merge(n *node, out reflect.Value) {
for i := len(n.children) - 1; i >= 0; i-- {
ni := n.children[i]
if ni.kind == aliasNode {
- an, ok := d.doc.anchors[ni.value]
- if ok && an.kind != mappingNode {
+ if ni.alias != nil && ni.alias.kind != mappingNode {
failWantMap()
}
} else if ni.kind != mappingNode {
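For context on the guard above: the allowed share of alias-driven decode operations scales linearly between the two constants. A minimal standalone sketch (the function is re-derived here for illustration; only the constants come from the patch):

package main

import "fmt"

const (
	aliasRatioRangeLow  = 400000  // below this, allow 99% alias-driven decodes
	aliasRatioRangeHigh = 4000000 // above this, allow only 10%
)

// allowedAliasRatio mirrors the scaling logic added to decode.go above.
func allowedAliasRatio(decodeCount int) float64 {
	switch {
	case decodeCount <= aliasRatioRangeLow:
		return 0.99
	case decodeCount >= aliasRatioRangeHigh:
		return 0.10
	default:
		r := float64(decodeCount-aliasRatioRangeLow) / float64(aliasRatioRangeHigh-aliasRatioRangeLow)
		return 0.99 - 0.89*r
	}
}

func main() {
	// At the midpoint (2,200,000 decodes) the allowed ratio is 0.545.
	for _, n := range []int{1000, 400000, 2200000, 4000000} {
		fmt.Printf("decodeCount=%7d allowed=%.3f\n", n, allowedAliasRatio(n))
	}
}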
diff --git a/vendor/gopkg.in/yaml.v2/resolve.go b/vendor/gopkg.in/yaml.v2/resolve.go
index 6c151db6fbd58..4120e0c9160a1 100644
--- a/vendor/gopkg.in/yaml.v2/resolve.go
+++ b/vendor/gopkg.in/yaml.v2/resolve.go
@@ -81,7 +81,7 @@ func resolvableTag(tag string) bool {
return false
}
-var yamlStyleFloat = regexp.MustCompile(`^[-+]?[0-9]*\.?[0-9]+([eE][-+][0-9]+)?$`)
+var yamlStyleFloat = regexp.MustCompile(`^[-+]?(\.[0-9]+|[0-9]+(\.[0-9]*)?)([eE][-+]?[0-9]+)?$`)
func resolve(tag string, in string) (rtag string, out interface{}) {
if !resolvableTag(tag) {
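The widened yamlStyleFloat pattern accepts two shapes the old one rejected: an unsigned exponent ("1e3") and a trailing decimal point ("685."). A quick comparison, with both patterns copied from the hunk above:

package main

import (
	"fmt"
	"regexp"
)

var (
	oldFloat = regexp.MustCompile(`^[-+]?[0-9]*\.?[0-9]+([eE][-+][0-9]+)?$`)
	newFloat = regexp.MustCompile(`^[-+]?(\.[0-9]+|[0-9]+(\.[0-9]*)?)([eE][-+]?[0-9]+)?$`)
)

func main() {
	for _, s := range []string{"685230.15", ".5", "-1E+2", "1e3", "685."} {
		fmt.Printf("%-12q old=%-5v new=%v\n", s, oldFloat.MatchString(s), newFloat.MatchString(s))
	}
}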
diff --git a/vendor/gopkg.in/yaml.v2/scannerc.go b/vendor/gopkg.in/yaml.v2/scannerc.go
index 077fd1dd2d446..570b8ecd10fd5 100644
--- a/vendor/gopkg.in/yaml.v2/scannerc.go
+++ b/vendor/gopkg.in/yaml.v2/scannerc.go
@@ -906,6 +906,9 @@ func yaml_parser_remove_simple_key(parser *yaml_parser_t) bool {
return true
}
+// max_flow_level limits the flow_level
+const max_flow_level = 10000
+
// Increase the flow level and resize the simple key list if needed.
func yaml_parser_increase_flow_level(parser *yaml_parser_t) bool {
// Reset the simple key on the next level.
@@ -913,6 +916,11 @@ func yaml_parser_increase_flow_level(parser *yaml_parser_t) bool {
// Increase the flow level.
parser.flow_level++
+ if parser.flow_level > max_flow_level {
+ return yaml_parser_set_scanner_error(parser,
+ "while increasing flow level", parser.simple_keys[len(parser.simple_keys)-1].mark,
+ fmt.Sprintf("exceeded max depth of %d", max_flow_level))
+ }
return true
}
@@ -925,6 +933,9 @@ func yaml_parser_decrease_flow_level(parser *yaml_parser_t) bool {
return true
}
+// max_indents limits the indents stack size
+const max_indents = 10000
+
// Push the current indentation level to the stack and set the new level if
// the current column is greater than the indentation level. In this case,
// append or insert the specified token into the token queue.
@@ -939,6 +950,11 @@ func yaml_parser_roll_indent(parser *yaml_parser_t, column, number int, typ yaml
// indentation level.
parser.indents = append(parser.indents, parser.indent)
parser.indent = column
+ if len(parser.indents) > max_indents {
+ return yaml_parser_set_scanner_error(parser,
+ "while increasing indent level", parser.simple_keys[len(parser.simple_keys)-1].mark,
+ fmt.Sprintf("exceeded max depth of %d", max_indents))
+ }
// Create a token and insert it into the queue.
token := yaml_token_t{
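Both limits cap attacker-controlled stack growth in the scanner. A hedged sketch of the kind of document they reject, assuming this patched yaml.v2 is on the module path:

package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v2"
)

func main() {
	// 20,001 nested flow sequences "[[[..." push flow_level past 10000,
	// so scanning should fail with an "exceeded max depth" error rather
	// than growing the parser stacks without bound.
	doc := strings.Repeat("[", 20001)
	var v interface{}
	fmt.Println(yaml.Unmarshal([]byte(doc), &v))
}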
diff --git a/vendor/honnef.co/go/tools/LICENSE b/vendor/honnef.co/go/tools/LICENSE
new file mode 100644
index 0000000000000..dfd031454603a
--- /dev/null
+++ b/vendor/honnef.co/go/tools/LICENSE
@@ -0,0 +1,20 @@
+Copyright (c) 2016 Dominik Honnef
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
diff --git a/vendor/honnef.co/go/tools/LICENSE-THIRD-PARTY b/vendor/honnef.co/go/tools/LICENSE-THIRD-PARTY
new file mode 100644
index 0000000000000..7c241b71aef3a
--- /dev/null
+++ b/vendor/honnef.co/go/tools/LICENSE-THIRD-PARTY
@@ -0,0 +1,226 @@
+Staticcheck and its related tools make use of third party projects,
+either by reusing their code, or by statically linking them into
+resulting binaries. These projects are:
+
+* The Go Programming Language - https://golang.org/
+
+ Copyright (c) 2009 The Go Authors. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Google Inc. nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+* github.com/BurntSushi/toml - https://github.com/BurntSushi/toml
+
+ The MIT License (MIT)
+
+ Copyright (c) 2013 TOML authors
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
+
+
+* github.com/google/renameio - https://github.com/google/renameio
+
+ Copyright 2018 Google Inc.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+* github.com/kisielk/gotool – https://github.com/kisielk/gotool
+
+ Copyright (c) 2013 Kamil Kisiel <[email protected]>
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ "Software"), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be
+ included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+ LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+ OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+ WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+ All the files in this distribution are covered under either the MIT
+ license (see the file LICENSE) except some files mentioned below.
+
+ match.go, match_test.go:
+
+ Copyright (c) 2009 The Go Authors. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Google Inc. nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+* github.com/rogpeppe/go-internal - https://github.com/rogpeppe/go-internal
+
+ Copyright (c) 2018 The Go Authors. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Google Inc. nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+* golang.org/x/mod/module - https://github.com/golang/mod
+
+ Copyright (c) 2009 The Go Authors. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Google Inc. nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+* golang.org/x/tools/go/analysis - https://github.com/golang/tools
+
+ Copyright (c) 2009 The Go Authors. All rights reserved.
+
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions are
+ met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+ copyright notice, this list of conditions and the following disclaimer
+ in the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Google Inc. nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
diff --git a/vendor/honnef.co/go/tools/arg/arg.go b/vendor/honnef.co/go/tools/arg/arg.go
new file mode 100644
index 0000000000000..1e7f30db42d4a
--- /dev/null
+++ b/vendor/honnef.co/go/tools/arg/arg.go
@@ -0,0 +1,48 @@
+package arg
+
+var args = map[string]int{
+ "(*encoding/json.Decoder).Decode.v": 0,
+ "(*encoding/json.Encoder).Encode.v": 0,
+ "(*encoding/xml.Decoder).Decode.v": 0,
+ "(*encoding/xml.Encoder).Encode.v": 0,
+ "(*sync.Pool).Put.x": 0,
+ "(*text/template.Template).Parse.text": 0,
+ "(io.Seeker).Seek.offset": 0,
+ "(time.Time).Sub.u": 0,
+ "append.elems": 1,
+ "append.slice": 0,
+ "bytes.Equal.a": 0,
+ "bytes.Equal.b": 1,
+ "encoding/binary.Write.data": 2,
+ "errors.New.text": 0,
+ "fmt.Fprintf.format": 1,
+ "fmt.Printf.format": 0,
+ "fmt.Sprintf.a[0]": 1,
+ "fmt.Sprintf.format": 0,
+ "json.Marshal.v": 0,
+ "json.Unmarshal.v": 1,
+ "len.v": 0,
+ "make.size[0]": 1,
+ "make.size[1]": 2,
+ "make.t": 0,
+ "net/url.Parse.rawurl": 0,
+ "os.OpenFile.flag": 1,
+ "os/exec.Command.name": 0,
+ "os/signal.Notify.c": 0,
+ "regexp.Compile.expr": 0,
+ "runtime.SetFinalizer.finalizer": 1,
+ "runtime.SetFinalizer.obj": 0,
+ "sort.Sort.data": 0,
+ "time.Parse.layout": 0,
+ "time.Sleep.d": 0,
+ "xml.Marshal.v": 0,
+ "xml.Unmarshal.v": 1,
+}
+
+func Arg(name string) int {
+ n, ok := args[name]
+ if !ok {
+ panic("unknown argument " + name)
+ }
+ return n
+}
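For orientation, Arg is a lookup the checks use to find a named parameter's position at a call site; unknown names panic by design. A small usage sketch (not part of the vendored change):

package main

import (
	"fmt"

	"honnef.co/go/tools/arg"
)

func main() {
	fmt.Println(arg.Arg("fmt.Printf.format"))  // 0: the format string is the first argument
	fmt.Println(arg.Arg("fmt.Fprintf.format")) // 1: the io.Writer comes first
}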
diff --git a/vendor/honnef.co/go/tools/cmd/staticcheck/README.md b/vendor/honnef.co/go/tools/cmd/staticcheck/README.md
new file mode 100644
index 0000000000000..4d14577fdf767
--- /dev/null
+++ b/vendor/honnef.co/go/tools/cmd/staticcheck/README.md
@@ -0,0 +1,15 @@
+# staticcheck
+
+_staticcheck_ offers extensive analysis of Go code, covering a myriad
+of categories. It will detect bugs, suggest code simplifications,
+point out dead code, and more.
+
+## Installation
+
+See [the main README](https://github.com/dominikh/go-tools#installation) for installation instructions.
+
+## Documentation
+
+Detailed documentation can be found on
+[staticcheck.io](https://staticcheck.io/docs/).
+
diff --git a/vendor/honnef.co/go/tools/cmd/staticcheck/staticcheck.go b/vendor/honnef.co/go/tools/cmd/staticcheck/staticcheck.go
new file mode 100644
index 0000000000000..4f504dc39db9a
--- /dev/null
+++ b/vendor/honnef.co/go/tools/cmd/staticcheck/staticcheck.go
@@ -0,0 +1,44 @@
+// staticcheck analyses Go code and makes it better.
+package main // import "honnef.co/go/tools/cmd/staticcheck"
+
+import (
+ "log"
+ "os"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/lint"
+ "honnef.co/go/tools/lint/lintutil"
+ "honnef.co/go/tools/simple"
+ "honnef.co/go/tools/staticcheck"
+ "honnef.co/go/tools/stylecheck"
+ "honnef.co/go/tools/unused"
+)
+
+func main() {
+ fs := lintutil.FlagSet("staticcheck")
+ wholeProgram := fs.Bool("unused.whole-program", false, "Run unused in whole program mode")
+ debug := fs.String("debug.unused-graph", "", "Write unused's object graph to `file`")
+ fs.Parse(os.Args[1:])
+
+ var cs []*analysis.Analyzer
+ for _, v := range simple.Analyzers {
+ cs = append(cs, v)
+ }
+ for _, v := range staticcheck.Analyzers {
+ cs = append(cs, v)
+ }
+ for _, v := range stylecheck.Analyzers {
+ cs = append(cs, v)
+ }
+
+ u := unused.NewChecker(*wholeProgram)
+ if *debug != "" {
+ f, err := os.OpenFile(*debug, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0666)
+ if err != nil {
+ log.Fatal(err)
+ }
+ u.Debug = f
+ }
+ cums := []lint.CumulativeChecker{u}
+ lintutil.ProcessFlagSet(cs, cums, fs)
+}
diff --git a/vendor/honnef.co/go/tools/config/config.go b/vendor/honnef.co/go/tools/config/config.go
new file mode 100644
index 0000000000000..c22093a6d9823
--- /dev/null
+++ b/vendor/honnef.co/go/tools/config/config.go
@@ -0,0 +1,224 @@
+package config
+
+import (
+ "bytes"
+ "fmt"
+ "os"
+ "path/filepath"
+ "reflect"
+ "strings"
+
+ "github.com/BurntSushi/toml"
+ "golang.org/x/tools/go/analysis"
+)
+
+var Analyzer = &analysis.Analyzer{
+ Name: "config",
+ Doc: "loads configuration for the current package tree",
+ Run: func(pass *analysis.Pass) (interface{}, error) {
+ if len(pass.Files) == 0 {
+ cfg := DefaultConfig
+ return &cfg, nil
+ }
+ cache, err := os.UserCacheDir()
+ if err != nil {
+ cache = ""
+ }
+ var path string
+ for _, f := range pass.Files {
+ p := pass.Fset.PositionFor(f.Pos(), true).Filename
+ // FIXME(dh): using strings.HasPrefix isn't technically
+ // correct, but it should be good enough for now.
+ if cache != "" && strings.HasPrefix(p, cache) {
+ // File in the build cache of the standard Go build system
+ continue
+ }
+ path = p
+ break
+ }
+
+ if path == "" {
+ // The package only consists of generated files.
+ cfg := DefaultConfig
+ return &cfg, nil
+ }
+
+ dir := filepath.Dir(path)
+ cfg, err := Load(dir)
+ if err != nil {
+ return nil, fmt.Errorf("error loading staticcheck.conf: %s", err)
+ }
+ return &cfg, nil
+ },
+ RunDespiteErrors: true,
+ ResultType: reflect.TypeOf((*Config)(nil)),
+}
+
+func For(pass *analysis.Pass) *Config {
+ return pass.ResultOf[Analyzer].(*Config)
+}
+
+func mergeLists(a, b []string) []string {
+ out := make([]string, 0, len(a)+len(b))
+ for _, el := range b {
+ if el == "inherit" {
+ out = append(out, a...)
+ } else {
+ out = append(out, el)
+ }
+ }
+
+ return out
+}
+
+func normalizeList(list []string) []string {
+ if len(list) > 1 {
+ nlist := make([]string, 0, len(list))
+ nlist = append(nlist, list[0])
+ for i, el := range list[1:] {
+ if el != list[i] {
+ nlist = append(nlist, el)
+ }
+ }
+ list = nlist
+ }
+
+ for _, el := range list {
+ if el == "inherit" {
+ // This should never happen, because the default config
+ // should not use "inherit"
+ panic(`unresolved "inherit"`)
+ }
+ }
+
+ return list
+}
+
+func (cfg Config) Merge(ocfg Config) Config {
+ if ocfg.Checks != nil {
+ cfg.Checks = mergeLists(cfg.Checks, ocfg.Checks)
+ }
+ if ocfg.Initialisms != nil {
+ cfg.Initialisms = mergeLists(cfg.Initialisms, ocfg.Initialisms)
+ }
+ if ocfg.DotImportWhitelist != nil {
+ cfg.DotImportWhitelist = mergeLists(cfg.DotImportWhitelist, ocfg.DotImportWhitelist)
+ }
+ if ocfg.HTTPStatusCodeWhitelist != nil {
+ cfg.HTTPStatusCodeWhitelist = mergeLists(cfg.HTTPStatusCodeWhitelist, ocfg.HTTPStatusCodeWhitelist)
+ }
+ return cfg
+}
+
+type Config struct {
+ // TODO(dh): this implementation makes it impossible for external
+ // clients to add their own checkers with configuration. At the
+ // moment, we don't really care about that; we don't encourage
+ // that people use this package. In the future, we may. The
+ // obvious solution would be using map[string]interface{}, but
+ // that's obviously subpar.
+
+ Checks []string `toml:"checks"`
+ Initialisms []string `toml:"initialisms"`
+ DotImportWhitelist []string `toml:"dot_import_whitelist"`
+ HTTPStatusCodeWhitelist []string `toml:"http_status_code_whitelist"`
+}
+
+func (c Config) String() string {
+ buf := &bytes.Buffer{}
+
+ fmt.Fprintf(buf, "Checks: %#v\n", c.Checks)
+ fmt.Fprintf(buf, "Initialisms: %#v\n", c.Initialisms)
+ fmt.Fprintf(buf, "DotImportWhitelist: %#v\n", c.DotImportWhitelist)
+ fmt.Fprintf(buf, "HTTPStatusCodeWhitelist: %#v", c.HTTPStatusCodeWhitelist)
+
+ return buf.String()
+}
+
+var DefaultConfig = Config{
+ Checks: []string{"all", "-ST1000", "-ST1003", "-ST1016"},
+ Initialisms: []string{
+ "ACL", "API", "ASCII", "CPU", "CSS", "DNS",
+ "EOF", "GUID", "HTML", "HTTP", "HTTPS", "ID",
+ "IP", "JSON", "QPS", "RAM", "RPC", "SLA",
+ "SMTP", "SQL", "SSH", "TCP", "TLS", "TTL",
+ "UDP", "UI", "GID", "UID", "UUID", "URI",
+ "URL", "UTF8", "VM", "XML", "XMPP", "XSRF",
+ "XSS", "SIP", "RTP",
+ },
+ DotImportWhitelist: []string{},
+ HTTPStatusCodeWhitelist: []string{"200", "400", "404", "500"},
+}
+
+const configName = "staticcheck.conf"
+
+func parseConfigs(dir string) ([]Config, error) {
+ var out []Config
+
+ // TODO(dh): consider stopping at the GOPATH/module boundary
+ for dir != "" {
+ f, err := os.Open(filepath.Join(dir, configName))
+ if os.IsNotExist(err) {
+ ndir := filepath.Dir(dir)
+ if ndir == dir {
+ break
+ }
+ dir = ndir
+ continue
+ }
+ if err != nil {
+ return nil, err
+ }
+ var cfg Config
+ _, err = toml.DecodeReader(f, &cfg)
+ f.Close()
+ if err != nil {
+ return nil, err
+ }
+ out = append(out, cfg)
+ ndir := filepath.Dir(dir)
+ if ndir == dir {
+ break
+ }
+ dir = ndir
+ }
+ out = append(out, DefaultConfig)
+ if len(out) < 2 {
+ return out, nil
+ }
+ for i := 0; i < len(out)/2; i++ {
+ out[i], out[len(out)-1-i] = out[len(out)-1-i], out[i]
+ }
+ return out, nil
+}
+
+func mergeConfigs(confs []Config) Config {
+ if len(confs) == 0 {
+ // This shouldn't happen because we always have at least a
+ // default config.
+ panic("trying to merge zero configs")
+ }
+ if len(confs) == 1 {
+ return confs[0]
+ }
+ conf := confs[0]
+ for _, oconf := range confs[1:] {
+ conf = conf.Merge(oconf)
+ }
+ return conf
+}
+
+func Load(dir string) (Config, error) {
+ confs, err := parseConfigs(dir)
+ if err != nil {
+ return Config{}, err
+ }
+ conf := mergeConfigs(confs)
+
+ conf.Checks = normalizeList(conf.Checks)
+ conf.Initialisms = normalizeList(conf.Initialisms)
+ conf.DotImportWhitelist = normalizeList(conf.DotImportWhitelist)
+ conf.HTTPStatusCodeWhitelist = normalizeList(conf.HTTPStatusCodeWhitelist)
+
+ return conf, nil
+}
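The "inherit" element lets a nested staticcheck.conf splice the parent directory's list into its own. An illustration using the exported Merge (not part of the vendored change):

package main

import (
	"fmt"

	"honnef.co/go/tools/config"
)

func main() {
	parent := config.Config{Checks: []string{"all", "-ST1000"}}
	child := config.Config{Checks: []string{"inherit", "-ST1003"}}

	// "inherit" expands to the parent's entries in place.
	merged := parent.Merge(child)
	fmt.Println(merged.Checks) // [all -ST1000 -ST1003]
}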
diff --git a/vendor/honnef.co/go/tools/config/example.conf b/vendor/honnef.co/go/tools/config/example.conf
new file mode 100644
index 0000000000000..a715a24d4fcfe
--- /dev/null
+++ b/vendor/honnef.co/go/tools/config/example.conf
@@ -0,0 +1,10 @@
+checks = ["all", "-ST1003", "-ST1014"]
+initialisms = ["ACL", "API", "ASCII", "CPU", "CSS", "DNS",
+ "EOF", "GUID", "HTML", "HTTP", "HTTPS", "ID",
+ "IP", "JSON", "QPS", "RAM", "RPC", "SLA",
+ "SMTP", "SQL", "SSH", "TCP", "TLS", "TTL",
+ "UDP", "UI", "GID", "UID", "UUID", "URI",
+ "URL", "UTF8", "VM", "XML", "XMPP", "XSRF",
+ "XSS", "SIP", "RTP"]
+dot_import_whitelist = []
+http_status_code_whitelist = ["200", "400", "404", "500"]
diff --git a/vendor/honnef.co/go/tools/deprecated/stdlib.go b/vendor/honnef.co/go/tools/deprecated/stdlib.go
new file mode 100644
index 0000000000000..5d8ce186b160c
--- /dev/null
+++ b/vendor/honnef.co/go/tools/deprecated/stdlib.go
@@ -0,0 +1,112 @@
+package deprecated
+
+type Deprecation struct {
+ DeprecatedSince int
+ AlternativeAvailableSince int
+}
+
+var Stdlib = map[string]Deprecation{
+ "image/jpeg.Reader": {4, 0},
+ // FIXME(dh): AllowBinary isn't being detected as deprecated
+ // because the comment has a newline right after "Deprecated:"
+ "go/build.AllowBinary": {7, 7},
+ "(archive/zip.FileHeader).CompressedSize": {1, 1},
+ "(archive/zip.FileHeader).UncompressedSize": {1, 1},
+ "(archive/zip.FileHeader).ModifiedTime": {10, 10},
+ "(archive/zip.FileHeader).ModifiedDate": {10, 10},
+ "(*archive/zip.FileHeader).ModTime": {10, 10},
+ "(*archive/zip.FileHeader).SetModTime": {10, 10},
+ "(go/doc.Package).Bugs": {1, 1},
+ "os.SEEK_SET": {7, 7},
+ "os.SEEK_CUR": {7, 7},
+ "os.SEEK_END": {7, 7},
+ "(net.Dialer).Cancel": {7, 7},
+ "runtime.CPUProfile": {9, 0},
+ "compress/flate.ReadError": {6, 6},
+ "compress/flate.WriteError": {6, 6},
+ "path/filepath.HasPrefix": {0, 0},
+ "(net/http.Transport).Dial": {7, 7},
+ "(*net/http.Transport).CancelRequest": {6, 5},
+ "net/http.ErrWriteAfterFlush": {7, 0},
+ "net/http.ErrHeaderTooLong": {8, 0},
+ "net/http.ErrShortBody": {8, 0},
+ "net/http.ErrMissingContentLength": {8, 0},
+ "net/http/httputil.ErrPersistEOF": {0, 0},
+ "net/http/httputil.ErrClosed": {0, 0},
+ "net/http/httputil.ErrPipeline": {0, 0},
+ "net/http/httputil.ServerConn": {0, 0},
+ "net/http/httputil.NewServerConn": {0, 0},
+ "net/http/httputil.ClientConn": {0, 0},
+ "net/http/httputil.NewClientConn": {0, 0},
+ "net/http/httputil.NewProxyClientConn": {0, 0},
+ "(net/http.Request).Cancel": {7, 7},
+ "(text/template/parse.PipeNode).Line": {1, 1},
+ "(text/template/parse.ActionNode).Line": {1, 1},
+ "(text/template/parse.BranchNode).Line": {1, 1},
+ "(text/template/parse.TemplateNode).Line": {1, 1},
+ "database/sql/driver.ColumnConverter": {9, 9},
+ "database/sql/driver.Execer": {8, 8},
+ "database/sql/driver.Queryer": {8, 8},
+ "(database/sql/driver.Conn).Begin": {8, 8},
+ "(database/sql/driver.Stmt).Exec": {8, 8},
+ "(database/sql/driver.Stmt).Query": {8, 8},
+ "syscall.StringByteSlice": {1, 1},
+ "syscall.StringBytePtr": {1, 1},
+ "syscall.StringSlicePtr": {1, 1},
+ "syscall.StringToUTF16": {1, 1},
+ "syscall.StringToUTF16Ptr": {1, 1},
+ "(*regexp.Regexp).Copy": {12, 12},
+ "(archive/tar.Header).Xattrs": {10, 10},
+ "archive/tar.TypeRegA": {11, 1},
+ "go/types.NewInterface": {11, 11},
+ "(*go/types.Interface).Embedded": {11, 11},
+ "go/importer.For": {12, 12},
+ "encoding/json.InvalidUTF8Error": {2, 2},
+ "encoding/json.UnmarshalFieldError": {2, 2},
+ "encoding/csv.ErrTrailingComma": {2, 2},
+ "(encoding/csv.Reader).TrailingComma": {2, 2},
+ "(net.Dialer).DualStack": {12, 12},
+ "net/http.ErrUnexpectedTrailer": {12, 12},
+ "net/http.CloseNotifier": {11, 7},
+ "net/http.ProtocolError": {8, 8},
+ "(crypto/x509.CertificateRequest).Attributes": {5, 3},
+ // This function has no alternative, but also no purpose.
+ "(*crypto/rc4.Cipher).Reset": {12, 0},
+ "(net/http/httptest.ResponseRecorder).HeaderMap": {11, 7},
+
+ // All of these have been deprecated in favour of external libraries
+ "syscall.AttachLsf": {7, 0},
+ "syscall.DetachLsf": {7, 0},
+ "syscall.LsfSocket": {7, 0},
+ "syscall.SetLsfPromisc": {7, 0},
+ "syscall.LsfJump": {7, 0},
+ "syscall.LsfStmt": {7, 0},
+ "syscall.BpfStmt": {7, 0},
+ "syscall.BpfJump": {7, 0},
+ "syscall.BpfBuflen": {7, 0},
+ "syscall.SetBpfBuflen": {7, 0},
+ "syscall.BpfDatalink": {7, 0},
+ "syscall.SetBpfDatalink": {7, 0},
+ "syscall.SetBpfPromisc": {7, 0},
+ "syscall.FlushBpf": {7, 0},
+ "syscall.BpfInterface": {7, 0},
+ "syscall.SetBpfInterface": {7, 0},
+ "syscall.BpfTimeout": {7, 0},
+ "syscall.SetBpfTimeout": {7, 0},
+ "syscall.BpfStats": {7, 0},
+ "syscall.SetBpfImmediate": {7, 0},
+ "syscall.SetBpf": {7, 0},
+ "syscall.CheckBpfVersion": {7, 0},
+ "syscall.BpfHeadercmpl": {7, 0},
+ "syscall.SetBpfHeadercmpl": {7, 0},
+ "syscall.RouteRIB": {8, 0},
+ "syscall.RoutingMessage": {8, 0},
+ "syscall.RouteMessage": {8, 0},
+ "syscall.InterfaceMessage": {8, 0},
+ "syscall.InterfaceAddrMessage": {8, 0},
+ "syscall.ParseRoutingMessage": {8, 0},
+ "syscall.ParseRoutingSockaddr": {8, 0},
+	"syscall.InterfaceAnnounceMessage":      {7, 0},
+	"syscall.InterfaceMulticastAddrMessage": {7, 0},

+ "syscall.FormatMessage": {5, 0},
+}
diff --git a/vendor/honnef.co/go/tools/facts/deprecated.go b/vendor/honnef.co/go/tools/facts/deprecated.go
new file mode 100644
index 0000000000000..8587b0e0eaeba
--- /dev/null
+++ b/vendor/honnef.co/go/tools/facts/deprecated.go
@@ -0,0 +1,144 @@
+package facts
+
+import (
+ "go/ast"
+ "go/token"
+ "go/types"
+ "reflect"
+ "strings"
+
+ "golang.org/x/tools/go/analysis"
+)
+
+type IsDeprecated struct{ Msg string }
+
+func (*IsDeprecated) AFact() {}
+func (d *IsDeprecated) String() string { return "Deprecated: " + d.Msg }
+
+type DeprecatedResult struct {
+ Objects map[types.Object]*IsDeprecated
+ Packages map[*types.Package]*IsDeprecated
+}
+
+var Deprecated = &analysis.Analyzer{
+ Name: "fact_deprecated",
+ Doc: "Mark deprecated objects",
+ Run: deprecated,
+ FactTypes: []analysis.Fact{(*IsDeprecated)(nil)},
+ ResultType: reflect.TypeOf(DeprecatedResult{}),
+}
+
+func deprecated(pass *analysis.Pass) (interface{}, error) {
+ var names []*ast.Ident
+
+ extractDeprecatedMessage := func(docs []*ast.CommentGroup) string {
+ for _, doc := range docs {
+ if doc == nil {
+ continue
+ }
+ parts := strings.Split(doc.Text(), "\n\n")
+ last := parts[len(parts)-1]
+ if !strings.HasPrefix(last, "Deprecated: ") {
+ continue
+ }
+ alt := last[len("Deprecated: "):]
+ alt = strings.Replace(alt, "\n", " ", -1)
+ return alt
+ }
+ return ""
+ }
+ doDocs := func(names []*ast.Ident, docs []*ast.CommentGroup) {
+ alt := extractDeprecatedMessage(docs)
+ if alt == "" {
+ return
+ }
+
+ for _, name := range names {
+ obj := pass.TypesInfo.ObjectOf(name)
+ pass.ExportObjectFact(obj, &IsDeprecated{alt})
+ }
+ }
+
+ var docs []*ast.CommentGroup
+ for _, f := range pass.Files {
+ docs = append(docs, f.Doc)
+ }
+ if alt := extractDeprecatedMessage(docs); alt != "" {
+ // Don't mark package syscall as deprecated, even though
+ // it is. A lot of people still use it for simple
+ // constants like SIGKILL, and I am not comfortable
+ // telling them to use x/sys for that.
+ if pass.Pkg.Path() != "syscall" {
+ pass.ExportPackageFact(&IsDeprecated{alt})
+ }
+ }
+
+ docs = docs[:0]
+ for _, f := range pass.Files {
+ fn := func(node ast.Node) bool {
+ if node == nil {
+ return true
+ }
+ var ret bool
+ switch node := node.(type) {
+ case *ast.GenDecl:
+ switch node.Tok {
+ case token.TYPE, token.CONST, token.VAR:
+ docs = append(docs, node.Doc)
+ return true
+ default:
+ return false
+ }
+ case *ast.FuncDecl:
+ docs = append(docs, node.Doc)
+ names = []*ast.Ident{node.Name}
+ ret = false
+ case *ast.TypeSpec:
+ docs = append(docs, node.Doc)
+ names = []*ast.Ident{node.Name}
+ ret = true
+ case *ast.ValueSpec:
+ docs = append(docs, node.Doc)
+ names = node.Names
+ ret = false
+ case *ast.File:
+ return true
+ case *ast.StructType:
+ for _, field := range node.Fields.List {
+ doDocs(field.Names, []*ast.CommentGroup{field.Doc})
+ }
+ return false
+ case *ast.InterfaceType:
+ for _, field := range node.Methods.List {
+ doDocs(field.Names, []*ast.CommentGroup{field.Doc})
+ }
+ return false
+ default:
+ return false
+ }
+ if len(names) == 0 || len(docs) == 0 {
+ return ret
+ }
+ doDocs(names, docs)
+
+ docs = docs[:0]
+ names = nil
+ return ret
+ }
+ ast.Inspect(f, fn)
+ }
+
+ out := DeprecatedResult{
+ Objects: map[types.Object]*IsDeprecated{},
+ Packages: map[*types.Package]*IsDeprecated{},
+ }
+
+ for _, fact := range pass.AllObjectFacts() {
+ out.Objects[fact.Object] = fact.Fact.(*IsDeprecated)
+ }
+ for _, fact := range pass.AllPackageFacts() {
+ out.Packages[fact.Package] = fact.Fact.(*IsDeprecated)
+ }
+
+ return out, nil
+}
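For reference, the analyzer above only recognizes a marker that begins the final paragraph of a doc comment, which is also why the FIXME in deprecated/stdlib.go calls out a newline directly after "Deprecated:". A doc comment shaped like this would produce an IsDeprecated fact (illustration only):

package example

// Frob does the thing.
//
// Deprecated: Use FrobContext instead; it honours cancellation.
func Frob() {}

// FrobContext is the replacement and carries no deprecation fact.
func FrobContext() {}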
diff --git a/vendor/honnef.co/go/tools/facts/generated.go b/vendor/honnef.co/go/tools/facts/generated.go
new file mode 100644
index 0000000000000..1ed9563a30bed
--- /dev/null
+++ b/vendor/honnef.co/go/tools/facts/generated.go
@@ -0,0 +1,86 @@
+package facts
+
+import (
+ "bufio"
+ "bytes"
+ "io"
+ "os"
+ "reflect"
+ "strings"
+
+ "golang.org/x/tools/go/analysis"
+)
+
+type Generator int
+
+// A list of known generators we can detect
+const (
+ Unknown Generator = iota
+ Goyacc
+ Cgo
+ Stringer
+)
+
+var (
+ // used by cgo before Go 1.11
+ oldCgo = []byte("// Created by cgo - DO NOT EDIT")
+ prefix = []byte("// Code generated ")
+ suffix = []byte(" DO NOT EDIT.")
+ nl = []byte("\n")
+ crnl = []byte("\r\n")
+)
+
+func isGenerated(path string) (Generator, bool) {
+ f, err := os.Open(path)
+ if err != nil {
+ return 0, false
+ }
+ defer f.Close()
+ br := bufio.NewReader(f)
+ for {
+ s, err := br.ReadBytes('\n')
+ if err != nil && err != io.EOF {
+ return 0, false
+ }
+ s = bytes.TrimSuffix(s, crnl)
+ s = bytes.TrimSuffix(s, nl)
+ if bytes.HasPrefix(s, prefix) && bytes.HasSuffix(s, suffix) {
+ text := string(s[len(prefix) : len(s)-len(suffix)])
+ switch text {
+ case "by goyacc.":
+ return Goyacc, true
+ case "by cmd/cgo;":
+ return Cgo, true
+ }
+ if strings.HasPrefix(text, `by "stringer `) {
+ return Stringer, true
+ }
+ return Unknown, true
+ }
+ if bytes.Equal(s, oldCgo) {
+ return Cgo, true
+ }
+ if err == io.EOF {
+ break
+ }
+ }
+ return 0, false
+}
+
+var Generated = &analysis.Analyzer{
+ Name: "isgenerated",
+ Doc: "annotate file names that have been code generated",
+ Run: func(pass *analysis.Pass) (interface{}, error) {
+ m := map[string]Generator{}
+ for _, f := range pass.Files {
+ path := pass.Fset.PositionFor(f.Pos(), false).Filename
+ g, ok := isGenerated(path)
+ if ok {
+ m[path] = g
+ }
+ }
+ return m, nil
+ },
+ RunDespiteErrors: true,
+ ResultType: reflect.TypeOf(map[string]Generator{}),
+}
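isGenerated is unexported, so as a standalone sketch, the per-line header test it applies reduces to the following (markers copied from above; the rest is re-derived for illustration):

package main

import (
	"bytes"
	"fmt"
)

var (
	oldCgo = []byte("// Created by cgo - DO NOT EDIT") // pre-Go-1.11 cgo marker
	prefix = []byte("// Code generated ")
	suffix = []byte(" DO NOT EDIT.")
)

// looksGenerated reports whether a single line marks a file as generated.
func looksGenerated(line []byte) bool {
	line = bytes.TrimRight(line, "\r\n")
	return bytes.Equal(line, oldCgo) ||
		(bytes.HasPrefix(line, prefix) && bytes.HasSuffix(line, suffix))
}

func main() {
	fmt.Println(looksGenerated([]byte("// Code generated by protoc-gen-gogo. DO NOT EDIT.\n"))) // true
	fmt.Println(looksGenerated([]byte("// handwritten code\n")))                                 // false
}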
diff --git a/vendor/honnef.co/go/tools/facts/purity.go b/vendor/honnef.co/go/tools/facts/purity.go
new file mode 100644
index 0000000000000..861ca41104ac5
--- /dev/null
+++ b/vendor/honnef.co/go/tools/facts/purity.go
@@ -0,0 +1,175 @@
+package facts
+
+import (
+ "go/token"
+ "go/types"
+ "reflect"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/functions"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/ssa"
+)
+
+type IsPure struct{}
+
+func (*IsPure) AFact() {}
+func (d *IsPure) String() string { return "is pure" }
+
+type PurityResult map[*types.Func]*IsPure
+
+var Purity = &analysis.Analyzer{
+ Name: "fact_purity",
+ Doc: "Mark pure functions",
+ Run: purity,
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ FactTypes: []analysis.Fact{(*IsPure)(nil)},
+ ResultType: reflect.TypeOf(PurityResult{}),
+}
+
+var pureStdlib = map[string]struct{}{
+ "errors.New": {},
+ "fmt.Errorf": {},
+ "fmt.Sprintf": {},
+ "fmt.Sprint": {},
+ "sort.Reverse": {},
+ "strings.Map": {},
+ "strings.Repeat": {},
+ "strings.Replace": {},
+ "strings.Title": {},
+ "strings.ToLower": {},
+ "strings.ToLowerSpecial": {},
+ "strings.ToTitle": {},
+ "strings.ToTitleSpecial": {},
+ "strings.ToUpper": {},
+ "strings.ToUpperSpecial": {},
+ "strings.Trim": {},
+ "strings.TrimFunc": {},
+ "strings.TrimLeft": {},
+ "strings.TrimLeftFunc": {},
+ "strings.TrimPrefix": {},
+ "strings.TrimRight": {},
+ "strings.TrimRightFunc": {},
+ "strings.TrimSpace": {},
+ "strings.TrimSuffix": {},
+ "(*net/http.Request).WithContext": {},
+}
+
+func purity(pass *analysis.Pass) (interface{}, error) {
+ seen := map[*ssa.Function]struct{}{}
+ ssapkg := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).Pkg
+ var check func(ssafn *ssa.Function) (ret bool)
+ check = func(ssafn *ssa.Function) (ret bool) {
+ if ssafn.Object() == nil {
+ // TODO(dh): support closures
+ return false
+ }
+ if pass.ImportObjectFact(ssafn.Object(), new(IsPure)) {
+ return true
+ }
+ if ssafn.Pkg != ssapkg {
+ // Function is in another package but wasn't marked as
+ // pure, ergo it isn't pure
+ return false
+ }
+ // Break recursion
+ if _, ok := seen[ssafn]; ok {
+ return false
+ }
+
+ seen[ssafn] = struct{}{}
+ defer func() {
+ if ret {
+ pass.ExportObjectFact(ssafn.Object(), &IsPure{})
+ }
+ }()
+
+ if functions.IsStub(ssafn) {
+ return false
+ }
+
+ if _, ok := pureStdlib[ssafn.Object().(*types.Func).FullName()]; ok {
+ return true
+ }
+
+ if ssafn.Signature.Results().Len() == 0 {
+ // A function with no return values is empty or is doing some
+ // work we cannot see (for example because of build tags);
+ // don't consider it pure.
+ return false
+ }
+
+ for _, param := range ssafn.Params {
+ if _, ok := param.Type().Underlying().(*types.Basic); !ok {
+ return false
+ }
+ }
+
+ if ssafn.Blocks == nil {
+ return false
+ }
+ checkCall := func(common *ssa.CallCommon) bool {
+ if common.IsInvoke() {
+ return false
+ }
+ builtin, ok := common.Value.(*ssa.Builtin)
+ if !ok {
+ if common.StaticCallee() != ssafn {
+ if common.StaticCallee() == nil {
+ return false
+ }
+ if !check(common.StaticCallee()) {
+ return false
+ }
+ }
+ } else {
+ switch builtin.Name() {
+ case "len", "cap", "make", "new":
+ default:
+ return false
+ }
+ }
+ return true
+ }
+ for _, b := range ssafn.Blocks {
+ for _, ins := range b.Instrs {
+ switch ins := ins.(type) {
+ case *ssa.Call:
+ if !checkCall(ins.Common()) {
+ return false
+ }
+ case *ssa.Defer:
+ if !checkCall(&ins.Call) {
+ return false
+ }
+ case *ssa.Select:
+ return false
+ case *ssa.Send:
+ return false
+ case *ssa.Go:
+ return false
+ case *ssa.Panic:
+ return false
+ case *ssa.Store:
+ return false
+ case *ssa.FieldAddr:
+ return false
+ case *ssa.UnOp:
+ if ins.Op == token.MUL || ins.Op == token.AND {
+ return false
+ }
+ }
+ }
+ }
+ return true
+ }
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ check(ssafn)
+ }
+
+ out := PurityResult{}
+ for _, fact := range pass.AllObjectFacts() {
+ out[fact.Object.(*types.Func)] = fact.Fact.(*IsPure)
+ }
+ return out, nil
+}
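To make the acceptance criteria concrete: a candidate must take only basic-typed parameters, return at least one value, and contain no stores, sends, selects, panics, pointer loads, or calls to functions not themselves known pure. Function shapes, for illustration (whether each is marked depends on the SSA actually built):

package example

// add should qualify: basic parameters, a result, no side effects.
func add(a, b int) int { return a + b }

var counter int

// bump should not: storing to a package-level variable is a side effect.
func bump(a int) int {
	counter += a
	return counter
}

// grow should not: its parameter is not of basic type.
func grow(xs []int) int { return len(xs) + 1 }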
diff --git a/vendor/honnef.co/go/tools/facts/token.go b/vendor/honnef.co/go/tools/facts/token.go
new file mode 100644
index 0000000000000..26e76ff73d501
--- /dev/null
+++ b/vendor/honnef.co/go/tools/facts/token.go
@@ -0,0 +1,24 @@
+package facts
+
+import (
+ "go/ast"
+ "go/token"
+ "reflect"
+
+ "golang.org/x/tools/go/analysis"
+)
+
+var TokenFile = &analysis.Analyzer{
+ Name: "tokenfileanalyzer",
+ Doc: "creates a mapping of *token.File to *ast.File",
+ Run: func(pass *analysis.Pass) (interface{}, error) {
+ m := map[*token.File]*ast.File{}
+ for _, af := range pass.Files {
+ tf := pass.Fset.File(af.Pos())
+ m[tf] = af
+ }
+ return m, nil
+ },
+ RunDespiteErrors: true,
+ ResultType: reflect.TypeOf(map[*token.File]*ast.File{}),
+}
diff --git a/vendor/honnef.co/go/tools/functions/loops.go b/vendor/honnef.co/go/tools/functions/loops.go
new file mode 100644
index 0000000000000..15877a2f96b83
--- /dev/null
+++ b/vendor/honnef.co/go/tools/functions/loops.go
@@ -0,0 +1,54 @@
+package functions
+
+import "honnef.co/go/tools/ssa"
+
+type Loop struct{ ssa.BlockSet }
+
+func FindLoops(fn *ssa.Function) []Loop {
+ if fn.Blocks == nil {
+ return nil
+ }
+ tree := fn.DomPreorder()
+ var sets []Loop
+ for _, h := range tree {
+ for _, n := range h.Preds {
+ if !h.Dominates(n) {
+ continue
+ }
+ // n is a back-edge to h
+ // h is the loop header
+ if n == h {
+ set := Loop{}
+ set.Add(n)
+ sets = append(sets, set)
+ continue
+ }
+ set := Loop{}
+ set.Add(h)
+ set.Add(n)
+ for _, b := range allPredsBut(n, h, nil) {
+ set.Add(b)
+ }
+ sets = append(sets, set)
+ }
+ }
+ return sets
+}
+
+func allPredsBut(b, but *ssa.BasicBlock, list []*ssa.BasicBlock) []*ssa.BasicBlock {
+outer:
+ for _, pred := range b.Preds {
+ if pred == but {
+ continue
+ }
+ for _, p := range list {
+ // TODO improve big-o complexity of this function
+ if pred == p {
+ continue outer
+ }
+ }
+ list = append(list, pred)
+ list = allPredsBut(pred, but, list)
+ }
+ return list
+}
diff --git a/vendor/honnef.co/go/tools/functions/pure.go b/vendor/honnef.co/go/tools/functions/pure.go
new file mode 100644
index 0000000000000..8bc5587713a8e
--- /dev/null
+++ b/vendor/honnef.co/go/tools/functions/pure.go
@@ -0,0 +1,46 @@
+package functions
+
+import (
+ "honnef.co/go/tools/ssa"
+)
+
+func filterDebug(instr []ssa.Instruction) []ssa.Instruction {
+ var out []ssa.Instruction
+ for _, ins := range instr {
+ if _, ok := ins.(*ssa.DebugRef); !ok {
+ out = append(out, ins)
+ }
+ }
+ return out
+}
+
+// IsStub reports whether a function is a stub. A function is
+// considered a stub if it has no instructions, or exactly one
+// instruction, which must either return only constant values or
+// panic.
+func IsStub(fn *ssa.Function) bool {
+ if len(fn.Blocks) == 0 {
+ return true
+ }
+ if len(fn.Blocks) > 1 {
+ return false
+ }
+ instrs := filterDebug(fn.Blocks[0].Instrs)
+ if len(instrs) != 1 {
+ return false
+ }
+
+ switch instrs[0].(type) {
+ case *ssa.Return:
+ // Since this is the only instruction, the return value must
+ // be a constant. We consider all constants as stubs, not just
+ // the zero value. This does not, unfortunately, cover zero
+ // initialised structs, as these cause additional
+ // instructions.
+ return true
+ case *ssa.Panic:
+ return true
+ default:
+ return false
+ }
+}
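Concretely, under the definition above (illustration only):

package example

// todo is a stub: its only instruction is a panic.
func todo() int { panic("not implemented") }

// zero is a stub: its only instruction returns a constant.
func zero() int { return 0 }

// double is not a stub: the multiplication is a second instruction
// ahead of the return.
func double(x int) int { return 2 * x }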
diff --git a/vendor/honnef.co/go/tools/functions/terminates.go b/vendor/honnef.co/go/tools/functions/terminates.go
new file mode 100644
index 0000000000000..3e9c3a23f3787
--- /dev/null
+++ b/vendor/honnef.co/go/tools/functions/terminates.go
@@ -0,0 +1,24 @@
+package functions
+
+import "honnef.co/go/tools/ssa"
+
+// Terminates reports whether fn is supposed to return, that is, if it
+// has at least one theoretical path that returns from the function.
+// Explicit panics do not count as terminating.
+func Terminates(fn *ssa.Function) bool {
+ if fn.Blocks == nil {
+ // assuming that a function terminates is the conservative
+ // choice
+ return true
+ }
+
+ for _, block := range fn.Blocks {
+ if len(block.Instrs) == 0 {
+ continue
+ }
+ if _, ok := block.Instrs[len(block.Instrs)-1].(*ssa.Return); ok {
+ return true
+ }
+ }
+ return false
+}
diff --git a/vendor/honnef.co/go/tools/go/types/typeutil/callee.go b/vendor/honnef.co/go/tools/go/types/typeutil/callee.go
new file mode 100644
index 0000000000000..38f596daf9e22
--- /dev/null
+++ b/vendor/honnef.co/go/tools/go/types/typeutil/callee.go
@@ -0,0 +1,46 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package typeutil
+
+import (
+ "go/ast"
+ "go/types"
+
+ "golang.org/x/tools/go/ast/astutil"
+)
+
+// Callee returns the named target of a function call, if any:
+// a function, method, builtin, or variable.
+func Callee(info *types.Info, call *ast.CallExpr) types.Object {
+ var obj types.Object
+ switch fun := astutil.Unparen(call.Fun).(type) {
+ case *ast.Ident:
+ obj = info.Uses[fun] // type, var, builtin, or declared func
+ case *ast.SelectorExpr:
+ if sel, ok := info.Selections[fun]; ok {
+ obj = sel.Obj() // method or field
+ } else {
+ obj = info.Uses[fun.Sel] // qualified identifier?
+ }
+ }
+ if _, ok := obj.(*types.TypeName); ok {
+ return nil // T(x) is a conversion, not a call
+ }
+ return obj
+}
+
+// StaticCallee returns the target (function or method) of a static
+// function call, if any. It returns nil for calls to builtins.
+func StaticCallee(info *types.Info, call *ast.CallExpr) *types.Func {
+ if f, ok := Callee(info, call).(*types.Func); ok && !interfaceMethod(f) {
+ return f
+ }
+ return nil
+}
+
+func interfaceMethod(f *types.Func) bool {
+ recv := f.Type().(*types.Signature).Recv()
+ return recv != nil && types.IsInterface(recv.Type())
+}
diff --git a/vendor/honnef.co/go/tools/go/types/typeutil/identical.go b/vendor/honnef.co/go/tools/go/types/typeutil/identical.go
new file mode 100644
index 0000000000000..c0ca441c32775
--- /dev/null
+++ b/vendor/honnef.co/go/tools/go/types/typeutil/identical.go
@@ -0,0 +1,75 @@
+package typeutil
+
+import (
+ "go/types"
+)
+
+// Identical reports whether x and y are identical types.
+// Unlike types.Identical, receivers of Signature types are not ignored.
+// Unlike types.Identical, interfaces are compared via pointer equality (except for the empty interface, which gets deduplicated).
+// Unlike types.Identical, structs are compared via pointer equality.
+func Identical(x, y types.Type) (ret bool) {
+ if !types.Identical(x, y) {
+ return false
+ }
+
+ switch x := x.(type) {
+ case *types.Struct:
+ y, ok := y.(*types.Struct)
+ if !ok {
+ // should be impossible
+ return true
+ }
+ return x == y
+ case *types.Interface:
+ // The issue with interfaces, typeutil.Map and types.Identical
+ //
+ // types.Identical, when comparing two interfaces, only looks at the set
+ // of all methods, not differentiating between implicit (embedded) and
+ // explicit methods.
+ //
+ // When we see the following two types, in source order
+ //
+ // type I1 interface { foo() }
+ // type I2 interface { I1 }
+ //
+ // then we will first correctly process I1 and its underlying type. When
+ // we get to I2, we will see that its underlying type is identical to
+ // that of I1 and not process it again. This, however, means that we will
+ // not record the fact that I2 embeds I1. If only I2 is reachable via the
+ // graph root, then I1 will not be considered used.
+ //
+ // We choose to be lazy and compare interfaces by their
+ // pointers. This will obviously miss identical interfaces,
+ // but this only has a runtime cost, it doesn't affect
+ // correctness.
+ y, ok := y.(*types.Interface)
+ if !ok {
+ // should be impossible
+ return true
+ }
+ if x.NumEmbeddeds() == 0 &&
+ y.NumEmbeddeds() == 0 &&
+ x.NumMethods() == 0 &&
+ y.NumMethods() == 0 {
+ // all truly empty interfaces are the same
+ return true
+ }
+ return x == y
+ case *types.Signature:
+ y, ok := y.(*types.Signature)
+ if !ok {
+ // should be impossible
+ return true
+ }
+ if x.Recv() == y.Recv() {
+ return true
+ }
+ if x.Recv() == nil || y.Recv() == nil {
+ return false
+ }
+ return Identical(x.Recv().Type(), y.Recv().Type())
+ default:
+ return true
+ }
+}
diff --git a/vendor/honnef.co/go/tools/go/types/typeutil/imports.go b/vendor/honnef.co/go/tools/go/types/typeutil/imports.go
new file mode 100644
index 0000000000000..9c441dba9c06b
--- /dev/null
+++ b/vendor/honnef.co/go/tools/go/types/typeutil/imports.go
@@ -0,0 +1,31 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package typeutil
+
+import "go/types"
+
+// Dependencies returns all dependencies of the specified packages.
+//
+// Dependent packages appear in topological order: if package P imports
+// package Q, Q appears earlier than P in the result.
+// The algorithm follows import statements in the order they
+// appear in the source code, so the result is a total order.
+//
+func Dependencies(pkgs ...*types.Package) []*types.Package {
+ var result []*types.Package
+ seen := make(map[*types.Package]bool)
+ var visit func(pkgs []*types.Package)
+ visit = func(pkgs []*types.Package) {
+ for _, p := range pkgs {
+ if !seen[p] {
+ seen[p] = true
+ visit(p.Imports())
+ result = append(result, p)
+ }
+ }
+ }
+ visit(pkgs)
+ return result
+}
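A small usage sketch of the topological ordering (packages are synthesized here for illustration):

package main

import (
	"fmt"
	"go/types"

	"honnef.co/go/tools/go/types/typeutil"
)

func main() {
	q := types.NewPackage("example.com/q", "q")
	p := types.NewPackage("example.com/p", "p")
	p.SetImports([]*types.Package{q})

	// q prints before p because p imports q.
	for _, pkg := range typeutil.Dependencies(p) {
		fmt.Println(pkg.Path())
	}
}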
diff --git a/vendor/honnef.co/go/tools/go/types/typeutil/map.go b/vendor/honnef.co/go/tools/go/types/typeutil/map.go
new file mode 100644
index 0000000000000..f929353ccbd13
--- /dev/null
+++ b/vendor/honnef.co/go/tools/go/types/typeutil/map.go
@@ -0,0 +1,319 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package typeutil defines various utilities for types, such as Map,
+// a mapping from types.Type to interface{} values.
+package typeutil
+
+import (
+ "bytes"
+ "fmt"
+ "go/types"
+ "reflect"
+)
+
+// Map is a hash-table-based mapping from types (types.Type) to
+// arbitrary interface{} values. The concrete types that implement
+// the Type interface are pointers. Since they are not canonicalized,
+// == cannot be used to check for equivalence, and thus we cannot
+// simply use a Go map.
+//
+// Just as with map[K]V, a nil *Map is a valid empty map.
+//
+// Not thread-safe.
+//
+// This fork handles Signatures correctly, respecting method
+// receivers. Furthermore, it doesn't deduplicate interfaces or
+// structs. Interfaces aren't deduplicated so as not to conflate implicit
+// and explicit methods. Structs aren't deduplicated because we track
+// fields of each type separately.
+//
+type Map struct {
+ hasher Hasher // shared by many Maps
+ table map[uint32][]entry // maps hash to bucket; entry.key==nil means unused
+ length int // number of map entries
+}
+
+// entry is an entry (key/value association) in a hash bucket.
+type entry struct {
+ key types.Type
+ value interface{}
+}
+
+// SetHasher sets the hasher used by Map.
+//
+// All Hashers are functionally equivalent but contain internal state
+// used to cache the results of hashing previously seen types.
+//
+// A single Hasher created by MakeHasher() may be shared among many
+// Maps. This is recommended if the instances have many keys in
+// common, as it will amortize the cost of hash computation.
+//
+// A Hasher may grow without bound as new types are seen. Even when a
+// type is deleted from the map, the Hasher never shrinks, since other
+// types in the map may reference the deleted type indirectly.
+//
+// Hashers are not thread-safe, and read-only operations such as
+// Map.Lookup require updates to the hasher, so a full Mutex lock (not a
+// read-lock) is require around all Map operations if a shared
+// hasher is accessed from multiple threads.
+//
+// If SetHasher is not called, the Map will create a private hasher at
+// the first call to Insert.
+//
+func (m *Map) SetHasher(hasher Hasher) {
+ m.hasher = hasher
+}
+
+// Delete removes the entry with the given key, if any.
+// It returns true if the entry was found.
+//
+func (m *Map) Delete(key types.Type) bool {
+ if m != nil && m.table != nil {
+ hash := m.hasher.Hash(key)
+ bucket := m.table[hash]
+ for i, e := range bucket {
+ if e.key != nil && Identical(key, e.key) {
+ // We can't compact the bucket as it
+ // would disturb iterators.
+ bucket[i] = entry{}
+ m.length--
+ return true
+ }
+ }
+ }
+ return false
+}
+
+// At returns the map entry for the given key.
+// The result is nil if the entry is not present.
+//
+func (m *Map) At(key types.Type) interface{} {
+ if m != nil && m.table != nil {
+ for _, e := range m.table[m.hasher.Hash(key)] {
+ if e.key != nil && Identical(key, e.key) {
+ return e.value
+ }
+ }
+ }
+ return nil
+}
+
+// Set sets the map entry for key to val,
+// and returns the previous entry, if any.
+func (m *Map) Set(key types.Type, value interface{}) (prev interface{}) {
+ if m.table != nil {
+ hash := m.hasher.Hash(key)
+ bucket := m.table[hash]
+ var hole *entry
+ for i, e := range bucket {
+ if e.key == nil {
+ hole = &bucket[i]
+ } else if Identical(key, e.key) {
+ prev = e.value
+ bucket[i].value = value
+ return
+ }
+ }
+
+ if hole != nil {
+ *hole = entry{key, value} // overwrite deleted entry
+ } else {
+ m.table[hash] = append(bucket, entry{key, value})
+ }
+ } else {
+ if m.hasher.memo == nil {
+ m.hasher = MakeHasher()
+ }
+ hash := m.hasher.Hash(key)
+ m.table = map[uint32][]entry{hash: {entry{key, value}}}
+ }
+
+ m.length++
+ return
+}
+
+// Len returns the number of map entries.
+func (m *Map) Len() int {
+ if m != nil {
+ return m.length
+ }
+ return 0
+}
+
+// Iterate calls function f on each entry in the map in unspecified order.
+//
+// If f should mutate the map, Iterate provides the same guarantees as
+// Go maps: if f deletes a map entry that Iterate has not yet reached,
+// f will not be invoked for it, but if f inserts a map entry that
+// Iterate has not yet reached, whether or not f will be invoked for
+// it is unspecified.
+//
+func (m *Map) Iterate(f func(key types.Type, value interface{})) {
+ if m != nil {
+ for _, bucket := range m.table {
+ for _, e := range bucket {
+ if e.key != nil {
+ f(e.key, e.value)
+ }
+ }
+ }
+ }
+}
+
+// Keys returns a new slice containing the set of map keys.
+// The order is unspecified.
+func (m *Map) Keys() []types.Type {
+ keys := make([]types.Type, 0, m.Len())
+ m.Iterate(func(key types.Type, _ interface{}) {
+ keys = append(keys, key)
+ })
+ return keys
+}
+
+func (m *Map) toString(values bool) string {
+ if m == nil {
+ return "{}"
+ }
+ var buf bytes.Buffer
+ fmt.Fprint(&buf, "{")
+ sep := ""
+ m.Iterate(func(key types.Type, value interface{}) {
+ fmt.Fprint(&buf, sep)
+ sep = ", "
+ fmt.Fprint(&buf, key)
+ if values {
+ fmt.Fprintf(&buf, ": %q", value)
+ }
+ })
+ fmt.Fprint(&buf, "}")
+ return buf.String()
+}
+
+// String returns a string representation of the map's entries.
+// Values are printed using fmt.Sprintf("%v", v).
+// Order is unspecified.
+//
+func (m *Map) String() string {
+ return m.toString(true)
+}
+
+// KeysString returns a string representation of the map's key set.
+// Order is unspecified.
+//
+func (m *Map) KeysString() string {
+ return m.toString(false)
+}
+
+////////////////////////////////////////////////////////////////////////
+// Hasher
+
+// A Hasher maps each type to its hash value.
+// For efficiency, a hasher uses memoization; thus its memory
+// footprint grows monotonically over time.
+// Hashers are not thread-safe.
+// Hashers have reference semantics.
+// Call MakeHasher to create a Hasher.
+type Hasher struct {
+ memo map[types.Type]uint32
+}
+
+// MakeHasher returns a new Hasher instance.
+func MakeHasher() Hasher {
+ return Hasher{make(map[types.Type]uint32)}
+}
+
+// Hash computes a hash value for the given type t such that
+// Identical(t, t') => Hash(t) == Hash(t').
+func (h Hasher) Hash(t types.Type) uint32 {
+ hash, ok := h.memo[t]
+ if !ok {
+ hash = h.hashFor(t)
+ h.memo[t] = hash
+ }
+ return hash
+}
+
+// hashString computes the Fowler–Noll–Vo hash of s.
+func hashString(s string) uint32 {
+ var h uint32
+ for i := 0; i < len(s); i++ {
+ h ^= uint32(s[i])
+ h *= 16777619
+ }
+ return h
+}
+
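+// For example (illustrative): hashString("ab") computes
+// ((0 ^ 'a') * 16777619 ^ 'b') * 16777619. Note that this variant starts
+// from zero rather than the canonical FNV offset basis, so its values
+// differ from standard FNV-1a.
+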
+// hashFor computes the hash of t.
+func (h Hasher) hashFor(t types.Type) uint32 {
+ // See Identical for rationale.
+ switch t := t.(type) {
+ case *types.Basic:
+ return uint32(t.Kind())
+
+ case *types.Array:
+ return 9043 + 2*uint32(t.Len()) + 3*h.Hash(t.Elem())
+
+ case *types.Slice:
+ return 9049 + 2*h.Hash(t.Elem())
+
+ case *types.Struct:
+ var hash uint32 = 9059
+ for i, n := 0, t.NumFields(); i < n; i++ {
+ f := t.Field(i)
+ if f.Anonymous() {
+ hash += 8861
+ }
+ hash += hashString(t.Tag(i))
+ hash += hashString(f.Name()) // (ignore f.Pkg)
+ hash += h.Hash(f.Type())
+ }
+ return hash
+
+ case *types.Pointer:
+ return 9067 + 2*h.Hash(t.Elem())
+
+ case *types.Signature:
+ var hash uint32 = 9091
+ if t.Variadic() {
+ hash *= 8863
+ }
+ return hash + 3*h.hashTuple(t.Params()) + 5*h.hashTuple(t.Results())
+
+ case *types.Interface:
+ var hash uint32 = 9103
+ for i, n := 0, t.NumMethods(); i < n; i++ {
+ // See go/types.identicalMethods for rationale.
+ // Method order is not significant.
+ // Ignore m.Pkg().
+ m := t.Method(i)
+ hash += 3*hashString(m.Name()) + 5*h.Hash(m.Type())
+ }
+ return hash
+
+ case *types.Map:
+ return 9109 + 2*h.Hash(t.Key()) + 3*h.Hash(t.Elem())
+
+ case *types.Chan:
+ return 9127 + 2*uint32(t.Dir()) + 3*h.Hash(t.Elem())
+
+ case *types.Named:
+ // Not safe with a copying GC; objects may move.
+ return uint32(reflect.ValueOf(t.Obj()).Pointer())
+
+ case *types.Tuple:
+ return h.hashTuple(t)
+ }
+ panic(t)
+}
+
+func (h Hasher) hashTuple(tuple *types.Tuple) uint32 {
+ // See go/types.identicalTypes for rationale.
+ n := tuple.Len()
+ var hash uint32 = 9137 + 2*uint32(n)
+ for i := 0; i < n; i++ {
+ hash += 3 * h.Hash(tuple.At(i).Type())
+ }
+ return hash
+}
diff --git a/vendor/honnef.co/go/tools/go/types/typeutil/methodsetcache.go b/vendor/honnef.co/go/tools/go/types/typeutil/methodsetcache.go
new file mode 100644
index 0000000000000..32084610f49a0
--- /dev/null
+++ b/vendor/honnef.co/go/tools/go/types/typeutil/methodsetcache.go
@@ -0,0 +1,72 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// This file implements a cache of method sets.
+
+package typeutil
+
+import (
+ "go/types"
+ "sync"
+)
+
+// A MethodSetCache records the method set of each type T for which
+// MethodSet(T) is called so that repeat queries are fast.
+// The zero value is a ready-to-use cache instance.
+type MethodSetCache struct {
+ mu sync.Mutex
+ named map[*types.Named]struct{ value, pointer *types.MethodSet } // method sets for named N and *N
+ others map[types.Type]*types.MethodSet // all other types
+}
+
+// MethodSet returns the method set of type T. It is thread-safe.
+//
+// If cache is nil, this function is equivalent to types.NewMethodSet(T).
+// Utility functions can thus expose an optional *MethodSetCache
+// parameter to clients that care about performance.
+//
+func (cache *MethodSetCache) MethodSet(T types.Type) *types.MethodSet {
+ if cache == nil {
+ return types.NewMethodSet(T)
+ }
+ cache.mu.Lock()
+ defer cache.mu.Unlock()
+
+ switch T := T.(type) {
+ case *types.Named:
+ return cache.lookupNamed(T).value
+
+ case *types.Pointer:
+ if N, ok := T.Elem().(*types.Named); ok {
+ return cache.lookupNamed(N).pointer
+ }
+ }
+
+ // all other types
+ // (The map uses pointer equivalence, not type identity.)
+ mset := cache.others[T]
+ if mset == nil {
+ mset = types.NewMethodSet(T)
+ if cache.others == nil {
+ cache.others = make(map[types.Type]*types.MethodSet)
+ }
+ cache.others[T] = mset
+ }
+ return mset
+}
+
+func (cache *MethodSetCache) lookupNamed(named *types.Named) struct{ value, pointer *types.MethodSet } {
+ if cache.named == nil {
+ cache.named = make(map[*types.Named]struct{ value, pointer *types.MethodSet })
+ }
+ // Avoid recomputing mset(*T) for each distinct Pointer
+ // instance whose underlying type is a named type.
+ msets, ok := cache.named[named]
+ if !ok {
+ msets.value = types.NewMethodSet(named)
+ msets.pointer = types.NewMethodSet(types.NewPointer(named))
+ cache.named[named] = msets
+ }
+ return msets
+}
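+
+// Usage sketch (illustrative, not part of the upstream file):
+//
+//	var cache typeutil.MethodSetCache // zero value is ready to use
+//	mset := cache.MethodSet(T)        // cached equivalent of types.NewMethodSet(T)
+//	_ = mset.Len()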
diff --git a/vendor/honnef.co/go/tools/go/types/typeutil/ui.go b/vendor/honnef.co/go/tools/go/types/typeutil/ui.go
new file mode 100644
index 0000000000000..9849c24cef3f8
--- /dev/null
+++ b/vendor/honnef.co/go/tools/go/types/typeutil/ui.go
@@ -0,0 +1,52 @@
+// Copyright 2014 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package typeutil
+
+// This file defines utilities for user interfaces that display types.
+
+import "go/types"
+
+// IntuitiveMethodSet returns the intuitive method set of a type T,
+// which is the set of methods you can call on an addressable value of
+// that type.
+//
+// The result always contains MethodSet(T), and is exactly MethodSet(T)
+// for interface types and for pointer-to-concrete types.
+// For all other concrete types T, the result additionally
+// contains each method belonging to *T if there is no identically
+// named method on T itself.
+//
+// This corresponds to user intuition about method sets;
+// this function is intended only for user interfaces.
+//
+// The order of the result is as for types.MethodSet(T).
+//
+func IntuitiveMethodSet(T types.Type, msets *MethodSetCache) []*types.Selection {
+ isPointerToConcrete := func(T types.Type) bool {
+ ptr, ok := T.(*types.Pointer)
+ return ok && !types.IsInterface(ptr.Elem())
+ }
+
+ var result []*types.Selection
+ mset := msets.MethodSet(T)
+ if types.IsInterface(T) || isPointerToConcrete(T) {
+ for i, n := 0, mset.Len(); i < n; i++ {
+ result = append(result, mset.At(i))
+ }
+ } else {
+ // T is some other concrete type.
+ // Report methods of T and *T, preferring those of T.
+ pmset := msets.MethodSet(types.NewPointer(T))
+ for i, n := 0, pmset.Len(); i < n; i++ {
+ meth := pmset.At(i)
+ if m := mset.Lookup(meth.Obj().Pkg(), meth.Obj().Name()); m != nil {
+ meth = m
+ }
+ result = append(result, meth)
+ }
+
+ }
+ return result
+}
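+
+// For example (illustrative): given
+//
+//	type T struct{}
+//	func (T) Value()    {}
+//	func (*T) Pointer() {}
+//
+// IntuitiveMethodSet(T, ...) reports both Value and Pointer, whereas
+// types.NewMethodSet(T) reports only Value.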
diff --git a/vendor/honnef.co/go/tools/internal/cache/cache.go b/vendor/honnef.co/go/tools/internal/cache/cache.go
new file mode 100644
index 0000000000000..2b33ca10671be
--- /dev/null
+++ b/vendor/honnef.co/go/tools/internal/cache/cache.go
@@ -0,0 +1,474 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package cache implements a build artifact cache.
+//
+// This package is a slightly modified fork of Go's
+// cmd/go/internal/cache package.
+package cache
+
+import (
+ "bytes"
+ "crypto/sha256"
+ "encoding/hex"
+ "errors"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "time"
+
+ "honnef.co/go/tools/internal/renameio"
+)
+
+// An ActionID is a cache action key, the hash of a complete description of a
+// repeatable computation (command line, environment variables,
+// input file contents, executable contents).
+type ActionID [HashSize]byte
+
+// An OutputID is a cache output key, the hash of an output of a computation.
+type OutputID [HashSize]byte
+
+// A Cache is a package cache, backed by a file system directory tree.
+type Cache struct {
+ dir string
+ now func() time.Time
+}
+
+// Open opens and returns the cache in the given directory.
+//
+// It is safe for multiple processes on a single machine to use the
+// same cache directory in a local file system simultaneously.
+// They will coordinate using operating system file locks and may
+// duplicate effort but will not corrupt the cache.
+//
+// However, it is NOT safe for multiple processes on different machines
+// to share a cache directory (for example, if the directory were stored
+// in a network file system). File locking is notoriously unreliable in
+// network file systems and may not suffice to protect the cache.
+//
+func Open(dir string) (*Cache, error) {
+ info, err := os.Stat(dir)
+ if err != nil {
+ return nil, err
+ }
+ if !info.IsDir() {
+ return nil, &os.PathError{Op: "open", Path: dir, Err: fmt.Errorf("not a directory")}
+ }
+ for i := 0; i < 256; i++ {
+ name := filepath.Join(dir, fmt.Sprintf("%02x", i))
+ if err := os.MkdirAll(name, 0777); err != nil {
+ return nil, err
+ }
+ }
+ c := &Cache{
+ dir: dir,
+ now: time.Now,
+ }
+ return c, nil
+}
+
+// fileName returns the name of the file corresponding to the given id.
+func (c *Cache) fileName(id [HashSize]byte, key string) string {
+ return filepath.Join(c.dir, fmt.Sprintf("%02x", id[0]), fmt.Sprintf("%x", id)+"-"+key)
+}
+
+var errMissing = errors.New("cache entry not found")
+
+const (
+ // action entry file is "v1 <hex id> <hex out> <decimal size space-padded to 20 bytes> <unixnano space-padded to 20 bytes>\n"
+ hexSize = HashSize * 2
+ entrySize = 2 + 1 + hexSize + 1 + hexSize + 1 + 20 + 1 + 20 + 1
+)
+
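+// For example (illustrative), with HashSize = 32 an action entry is a
+// fixed 175-byte line (2+1+64+1+64+1+20+1+20+1):
+//
+//	v1 <64 hex chars> <64 hex chars> <size padded to 20> <unixnano padded to 20>\n
+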
+// verify controls whether to run the cache in verify mode.
+// In verify mode, the cache always returns errMissing from Get
+// but then double-checks in Put that the data being written
+// exactly matches any existing entry. This provides an easy
+// way to detect program behavior that would have been different
+// had the cache entry been returned from Get.
+//
+// verify is enabled by setting the environment variable
+// GODEBUG=gocacheverify=1.
+var verify = false
+
+// DebugTest is set when GODEBUG=gocachetest=1 is in the environment.
+var DebugTest = false
+
+func init() { initEnv() }
+
+func initEnv() {
+ verify = false
+ debugHash = false
+ debug := strings.Split(os.Getenv("GODEBUG"), ",")
+ for _, f := range debug {
+ if f == "gocacheverify=1" {
+ verify = true
+ }
+ if f == "gocachehash=1" {
+ debugHash = true
+ }
+ if f == "gocachetest=1" {
+ DebugTest = true
+ }
+ }
+}
+
+// Get looks up the action ID in the cache,
+// returning the corresponding output ID and file size, if any.
+// Note that finding an output ID does not guarantee that the
+// saved file for that output ID is still available.
+func (c *Cache) Get(id ActionID) (Entry, error) {
+ if verify {
+ return Entry{}, errMissing
+ }
+ return c.get(id)
+}
+
+type Entry struct {
+ OutputID OutputID
+ Size int64
+ Time time.Time
+}
+
+// get is Get but does not respect verify mode, so that Put can use it.
+func (c *Cache) get(id ActionID) (Entry, error) {
+ missing := func() (Entry, error) {
+ return Entry{}, errMissing
+ }
+ f, err := os.Open(c.fileName(id, "a"))
+ if err != nil {
+ return missing()
+ }
+ defer f.Close()
+ entry := make([]byte, entrySize+1) // +1 to detect whether f is too long
+ if n, err := io.ReadFull(f, entry); n != entrySize || err != io.ErrUnexpectedEOF {
+ return missing()
+ }
+ if entry[0] != 'v' || entry[1] != '1' || entry[2] != ' ' || entry[3+hexSize] != ' ' || entry[3+hexSize+1+hexSize] != ' ' || entry[3+hexSize+1+hexSize+1+20] != ' ' || entry[entrySize-1] != '\n' {
+ return missing()
+ }
+ eid, entry := entry[3:3+hexSize], entry[3+hexSize:]
+ eout, entry := entry[1:1+hexSize], entry[1+hexSize:]
+ esize, entry := entry[1:1+20], entry[1+20:]
+ //lint:ignore SA4006 See https://github.com/dominikh/go-tools/issues/465
+ etime, entry := entry[1:1+20], entry[1+20:]
+ var buf [HashSize]byte
+ if _, err := hex.Decode(buf[:], eid); err != nil || buf != id {
+ return missing()
+ }
+ if _, err := hex.Decode(buf[:], eout); err != nil {
+ return missing()
+ }
+ i := 0
+ for i < len(esize) && esize[i] == ' ' {
+ i++
+ }
+ size, err := strconv.ParseInt(string(esize[i:]), 10, 64)
+ if err != nil || size < 0 {
+ return missing()
+ }
+ i = 0
+ for i < len(etime) && etime[i] == ' ' {
+ i++
+ }
+ tm, err := strconv.ParseInt(string(etime[i:]), 10, 64)
+ if err != nil || tm < 0 {
+ return missing()
+ }
+
+ c.used(c.fileName(id, "a"))
+
+ return Entry{buf, size, time.Unix(0, tm)}, nil
+}
+
+// GetFile looks up the action ID in the cache and returns
+// the name of the corresponding data file.
+func (c *Cache) GetFile(id ActionID) (file string, entry Entry, err error) {
+ entry, err = c.Get(id)
+ if err != nil {
+ return "", Entry{}, err
+ }
+ file = c.OutputFile(entry.OutputID)
+ info, err := os.Stat(file)
+ if err != nil || info.Size() != entry.Size {
+ return "", Entry{}, errMissing
+ }
+ return file, entry, nil
+}
+
+// GetBytes looks up the action ID in the cache and returns
+// the corresponding output bytes.
+// GetBytes should only be used for data that can be expected to fit in memory.
+func (c *Cache) GetBytes(id ActionID) ([]byte, Entry, error) {
+ entry, err := c.Get(id)
+ if err != nil {
+ return nil, entry, err
+ }
+ data, _ := ioutil.ReadFile(c.OutputFile(entry.OutputID))
+ if sha256.Sum256(data) != entry.OutputID {
+ return nil, entry, errMissing
+ }
+ return data, entry, nil
+}
+
+// OutputFile returns the name of the cache file storing output with the given OutputID.
+func (c *Cache) OutputFile(out OutputID) string {
+ file := c.fileName(out, "d")
+ c.used(file)
+ return file
+}
+
+// Time constants for cache expiration.
+//
+// We set the mtime on a cache file on each use, but at most one per mtimeInterval (1 hour),
+// to avoid causing many unnecessary inode updates. The mtimes therefore
+// roughly reflect "time of last use" but may in fact be older by at most an hour.
+//
+// We scan the cache for entries to delete at most once per trimInterval (1 day).
+//
+// When we do scan the cache, we delete entries that have not been used for
+// at least trimLimit (5 days). Statistics gathered from a month of usage by
+// Go developers found that essentially all reuse of cached entries happened
+// within 5 days of the previous reuse. See golang.org/issue/22990.
+const (
+ mtimeInterval = 1 * time.Hour
+ trimInterval = 24 * time.Hour
+ trimLimit = 5 * 24 * time.Hour
+)
+
+// used makes a best-effort attempt to update mtime on file,
+// so that mtime reflects cache access time.
+//
+// Because the reflection only needs to be approximate,
+// and to reduce the amount of disk activity caused by using
+// cache entries, used only updates the mtime if the current
+// mtime is more than an hour old. This heuristic eliminates
+// nearly all of the mtime updates that would otherwise happen,
+// while still keeping the mtimes useful for cache trimming.
+func (c *Cache) used(file string) {
+ info, err := os.Stat(file)
+ if err == nil && c.now().Sub(info.ModTime()) < mtimeInterval {
+ return
+ }
+ os.Chtimes(file, c.now(), c.now())
+}
+
+// Trim removes old cache entries that are likely not to be reused.
+func (c *Cache) Trim() {
+ now := c.now()
+
+ // We maintain in dir/trim.txt the time of the last completed cache trim.
+ // If the cache has been trimmed recently enough, do nothing.
+ // This is the common case.
+ data, _ := ioutil.ReadFile(filepath.Join(c.dir, "trim.txt"))
+ t, err := strconv.ParseInt(strings.TrimSpace(string(data)), 10, 64)
+ if err == nil && now.Sub(time.Unix(t, 0)) < trimInterval {
+ return
+ }
+
+ // Trim each of the 256 subdirectories.
+ // We subtract an additional mtimeInterval
+ // to account for the imprecision of our "last used" mtimes.
+ cutoff := now.Add(-trimLimit - mtimeInterval)
+ for i := 0; i < 256; i++ {
+ subdir := filepath.Join(c.dir, fmt.Sprintf("%02x", i))
+ c.trimSubdir(subdir, cutoff)
+ }
+
+ // Ignore errors from here: if we don't write the complete timestamp, the
+ // cache will appear older than it is, and we'll trim it again next time.
+ renameio.WriteFile(filepath.Join(c.dir, "trim.txt"), []byte(fmt.Sprintf("%d", now.Unix())))
+}
+
+// trimSubdir trims a single cache subdirectory.
+func (c *Cache) trimSubdir(subdir string, cutoff time.Time) {
+ // Read all directory entries from subdir before removing
+ // any files, in case removing files invalidates the file offset
+ // in the directory scan. Also, ignore error from f.Readdirnames,
+ // because we don't care about reporting the error and we still
+ // want to process any entries found before the error.
+ f, err := os.Open(subdir)
+ if err != nil {
+ return
+ }
+ names, _ := f.Readdirnames(-1)
+ f.Close()
+
+ for _, name := range names {
+ // Remove only cache entries (xxxx-a and xxxx-d).
+ if !strings.HasSuffix(name, "-a") && !strings.HasSuffix(name, "-d") {
+ continue
+ }
+ entry := filepath.Join(subdir, name)
+ info, err := os.Stat(entry)
+ if err == nil && info.ModTime().Before(cutoff) {
+ os.Remove(entry)
+ }
+ }
+}
+
+// putIndexEntry adds an entry to the cache recording that executing the action
+// with the given id produces an output with the given output id (hash) and size.
+func (c *Cache) putIndexEntry(id ActionID, out OutputID, size int64, allowVerify bool) error {
+ // Note: We expect that for one reason or another it may happen
+ // that repeating an action produces a different output hash
+ // (for example, if the output contains a time stamp or temp dir name).
+ // While not ideal, this is also not a correctness problem, so we
+ // don't make a big deal about it. In particular, we leave the action
+ // cache entries writable specifically so that they can be overwritten.
+ //
+ // Setting GODEBUG=gocacheverify=1 does make a big deal:
+ // in verify mode we are double-checking that the cache entries
+ // are entirely reproducible. As just noted, this may be unrealistic
+ // in some cases but the check is also useful for shaking out real bugs.
+ entry := []byte(fmt.Sprintf("v1 %x %x %20d %20d\n", id, out, size, time.Now().UnixNano()))
+ if verify && allowVerify {
+ old, err := c.get(id)
+ if err == nil && (old.OutputID != out || old.Size != size) {
+ // panic to show stack trace, so we can see what code is generating this cache entry.
+ msg := fmt.Sprintf("go: internal cache error: cache verify failed: id=%x changed:<<<\n%s\n>>>\nold: %x %d\nnew: %x %d", id, reverseHash(id), out, size, old.OutputID, old.Size)
+ panic(msg)
+ }
+ }
+ file := c.fileName(id, "a")
+ if err := ioutil.WriteFile(file, entry, 0666); err != nil {
+ // TODO(bcmills): This Remove potentially races with another go command writing to file.
+ // Can we eliminate it?
+ os.Remove(file)
+ return err
+ }
+ os.Chtimes(file, c.now(), c.now()) // mainly for tests
+
+ return nil
+}
+
+// Put stores the given output in the cache as the output for the action ID.
+// It may read file twice. The content of file must not change between the two passes.
+func (c *Cache) Put(id ActionID, file io.ReadSeeker) (OutputID, int64, error) {
+ return c.put(id, file, true)
+}
+
+// PutNoVerify is like Put but disables the verify check
+// when GODEBUG=goverifycache=1 is set.
+// It is meant for data that is OK to cache but that we expect to vary slightly from run to run,
+// like test output containing times and the like.
+func (c *Cache) PutNoVerify(id ActionID, file io.ReadSeeker) (OutputID, int64, error) {
+ return c.put(id, file, false)
+}
+
+func (c *Cache) put(id ActionID, file io.ReadSeeker, allowVerify bool) (OutputID, int64, error) {
+ // Compute output ID.
+ h := sha256.New()
+ if _, err := file.Seek(0, 0); err != nil {
+ return OutputID{}, 0, err
+ }
+ size, err := io.Copy(h, file)
+ if err != nil {
+ return OutputID{}, 0, err
+ }
+ var out OutputID
+ h.Sum(out[:0])
+
+ // Copy to cached output file (if not already present).
+ if err := c.copyFile(file, out, size); err != nil {
+ return out, size, err
+ }
+
+ // Add to cache index.
+ return out, size, c.putIndexEntry(id, out, size, allowVerify)
+}
+
+// PutBytes stores the given bytes in the cache as the output for the action ID.
+func (c *Cache) PutBytes(id ActionID, data []byte) error {
+ _, _, err := c.Put(id, bytes.NewReader(data))
+ return err
+}
+
+// copyFile copies file into the cache, expecting it to have the given
+// output ID and size, if that file is not present already.
+func (c *Cache) copyFile(file io.ReadSeeker, out OutputID, size int64) error {
+ name := c.fileName(out, "d")
+ info, err := os.Stat(name)
+ if err == nil && info.Size() == size {
+ // Check hash.
+ if f, err := os.Open(name); err == nil {
+ h := sha256.New()
+ io.Copy(h, f)
+ f.Close()
+ var out2 OutputID
+ h.Sum(out2[:0])
+ if out == out2 {
+ return nil
+ }
+ }
+ // Hash did not match. Fall through and rewrite file.
+ }
+
+ // Copy file to cache directory.
+ mode := os.O_RDWR | os.O_CREATE
+ if err == nil && info.Size() > size { // shouldn't happen, but truncate just in case
+ mode |= os.O_TRUNC
+ }
+ f, err := os.OpenFile(name, mode, 0666)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+ if size == 0 {
+ // File now exists with correct size.
+ // Only one possible zero-length file, so contents are OK too.
+ // Early return here makes sure there's a "last byte" for code below.
+ return nil
+ }
+
+ // From here on, if any of the I/O writing the file fails,
+ // we make a best-effort attempt to truncate the file f
+ // before returning, to avoid leaving bad bytes in the file.
+
+ // Copy file to f, but also into h to double-check hash.
+ if _, err := file.Seek(0, 0); err != nil {
+ f.Truncate(0)
+ return err
+ }
+ h := sha256.New()
+ w := io.MultiWriter(f, h)
+ if _, err := io.CopyN(w, file, size-1); err != nil {
+ f.Truncate(0)
+ return err
+ }
+ // Check last byte before writing it; writing it will make the size match
+ // what other processes expect to find and might cause them to start
+ // using the file.
+ buf := make([]byte, 1)
+ if _, err := file.Read(buf); err != nil {
+ f.Truncate(0)
+ return err
+ }
+ h.Write(buf)
+ sum := h.Sum(nil)
+ if !bytes.Equal(sum, out[:]) {
+ f.Truncate(0)
+ return fmt.Errorf("file content changed underfoot")
+ }
+
+ // Commit cache file entry.
+ if _, err := f.Write(buf); err != nil {
+ f.Truncate(0)
+ return err
+ }
+ if err := f.Close(); err != nil {
+ // Data might not have been written,
+ // but file may look like it is the right size.
+ // To be extra careful, remove cached file.
+ os.Remove(name)
+ return err
+ }
+ os.Chtimes(name, c.now(), c.now()) // mainly for tests
+
+ return nil
+}
diff --git a/vendor/honnef.co/go/tools/internal/cache/default.go b/vendor/honnef.co/go/tools/internal/cache/default.go
new file mode 100644
index 0000000000000..3034f76a538f3
--- /dev/null
+++ b/vendor/honnef.co/go/tools/internal/cache/default.go
@@ -0,0 +1,85 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package cache
+
+import (
+ "fmt"
+ "io/ioutil"
+ "log"
+ "os"
+ "path/filepath"
+ "sync"
+)
+
+// Default returns the default cache to use.
+func Default() (*Cache, error) {
+ defaultOnce.Do(initDefaultCache)
+ return defaultCache, defaultDirErr
+}
+
+var (
+ defaultOnce sync.Once
+ defaultCache *Cache
+)
+
+// cacheREADME is a message stored in a README in the cache directory.
+// Because the cache lives outside the normal Go trees, we leave the
+// README as a courtesy to explain where it came from.
+const cacheREADME = `This directory holds cached build artifacts from staticcheck.
+`
+
+// initDefaultCache does the work of finding the default cache
+// the first time Default is called.
+func initDefaultCache() {
+ dir := DefaultDir()
+ if err := os.MkdirAll(dir, 0777); err != nil {
+ log.Fatalf("failed to initialize build cache at %s: %s\n", dir, err)
+ }
+ if _, err := os.Stat(filepath.Join(dir, "README")); err != nil {
+ // Best effort.
+ ioutil.WriteFile(filepath.Join(dir, "README"), []byte(cacheREADME), 0666)
+ }
+
+ c, err := Open(dir)
+ if err != nil {
+ log.Fatalf("failed to initialize build cache at %s: %s\n", dir, err)
+ }
+ defaultCache = c
+}
+
+var (
+ defaultDirOnce sync.Once
+ defaultDir string
+ defaultDirErr error
+)
+
+// DefaultDir returns the effective STATICCHECK_CACHE setting.
+func DefaultDir() string {
+ // Save the result of the first call to DefaultDir for later use in
+ // initDefaultCache. cmd/go/main.go explicitly sets GOCACHE so that
+ // subprocesses will inherit it, but that means initDefaultCache can't
+ // otherwise distinguish between an explicit "off" and a UserCacheDir error.
+
+ defaultDirOnce.Do(func() {
+ defaultDir = os.Getenv("STATICCHECK_CACHE")
+ if filepath.IsAbs(defaultDir) {
+ return
+ }
+ if defaultDir != "" {
+ defaultDirErr = fmt.Errorf("STATICCHECK_CACHE is not an absolute path")
+ return
+ }
+
+ // Compute default location.
+ dir, err := os.UserCacheDir()
+ if err != nil {
+ defaultDirErr = fmt.Errorf("STATICCHECK_CACHE is not defined and %v", err)
+ return
+ }
+ defaultDir = filepath.Join(dir, "staticcheck")
+ })
+
+ return defaultDir
+}
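+
+// For example (illustrative): STATICCHECK_CACHE=/var/cache/staticcheck
+// selects that directory, a relative value is reported as an error, and
+// an unset variable falls back to os.UserCacheDir()/"staticcheck".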
diff --git a/vendor/honnef.co/go/tools/internal/cache/hash.go b/vendor/honnef.co/go/tools/internal/cache/hash.go
new file mode 100644
index 0000000000000..a53543ec50182
--- /dev/null
+++ b/vendor/honnef.co/go/tools/internal/cache/hash.go
@@ -0,0 +1,176 @@
+// Copyright 2017 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package cache
+
+import (
+ "bytes"
+ "crypto/sha256"
+ "fmt"
+ "hash"
+ "io"
+ "os"
+ "sync"
+)
+
+var debugHash = false // set when GODEBUG=gocachehash=1
+
+// HashSize is the number of bytes in a hash.
+const HashSize = 32
+
+// A Hash provides access to the canonical hash function used to index the cache.
+// The current implementation uses salted SHA256, but clients must not assume this.
+type Hash struct {
+ h hash.Hash
+ name string // for debugging
+ buf *bytes.Buffer // for verify
+}
+
+// hashSalt is a salt string added to the beginning of every hash
+// created by NewHash. Using the Staticcheck version makes sure that different
+// versions of the command do not address the same cache
+// entries, so that a bug in one version does not affect the execution
+// of other versions. This salt will result in additional ActionID files
+// in the cache, but not additional copies of the large output files,
+// which are still addressed by unsalted SHA256.
+var hashSalt []byte
+
+func SetSalt(b []byte) {
+ hashSalt = b
+}
+
+// Subkey returns an action ID corresponding to mixing a parent
+// action ID with a string description of the subkey.
+func Subkey(parent ActionID, desc string) ActionID {
+ h := sha256.New()
+ h.Write([]byte("subkey:"))
+ h.Write(parent[:])
+ h.Write([]byte(desc))
+ var out ActionID
+ h.Sum(out[:0])
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH subkey %x %q = %x\n", parent, desc, out)
+ }
+ if verify {
+ hashDebug.Lock()
+ hashDebug.m[out] = fmt.Sprintf("subkey %x %q", parent, desc)
+ hashDebug.Unlock()
+ }
+ return out
+}
+
+// NewHash returns a new Hash.
+// The caller is expected to Write data to it and then call Sum.
+func NewHash(name string) *Hash {
+ h := &Hash{h: sha256.New(), name: name}
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH[%s]\n", h.name)
+ }
+ h.Write(hashSalt)
+ if verify {
+ h.buf = new(bytes.Buffer)
+ }
+ return h
+}
+
+// Write writes data to the running hash.
+func (h *Hash) Write(b []byte) (int, error) {
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH[%s]: %q\n", h.name, b)
+ }
+ if h.buf != nil {
+ h.buf.Write(b)
+ }
+ return h.h.Write(b)
+}
+
+// Sum returns the hash of the data written previously.
+func (h *Hash) Sum() [HashSize]byte {
+ var out [HashSize]byte
+ h.h.Sum(out[:0])
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH[%s]: %x\n", h.name, out)
+ }
+ if h.buf != nil {
+ hashDebug.Lock()
+ if hashDebug.m == nil {
+ hashDebug.m = make(map[[HashSize]byte]string)
+ }
+ hashDebug.m[out] = h.buf.String()
+ hashDebug.Unlock()
+ }
+ return out
+}
+
+// In GODEBUG=gocacheverify=1 mode,
+// hashDebug holds the input to every computed hash ID,
+// so that we can work backward from the ID involved in a
+// cache entry mismatch to a description of what should be there.
+var hashDebug struct {
+ sync.Mutex
+ m map[[HashSize]byte]string
+}
+
+// reverseHash returns the input used to compute the hash id.
+func reverseHash(id [HashSize]byte) string {
+ hashDebug.Lock()
+ s := hashDebug.m[id]
+ hashDebug.Unlock()
+ return s
+}
+
+var hashFileCache struct {
+ sync.Mutex
+ m map[string][HashSize]byte
+}
+
+// FileHash returns the hash of the named file.
+// It caches repeated lookups for a given file,
+// and the cache entry for a file can be initialized
+// using SetFileHash.
+// The hash used by FileHash is not the same as
+// the hash used by NewHash.
+func FileHash(file string) ([HashSize]byte, error) {
+ hashFileCache.Lock()
+ out, ok := hashFileCache.m[file]
+ hashFileCache.Unlock()
+
+ if ok {
+ return out, nil
+ }
+
+ h := sha256.New()
+ f, err := os.Open(file)
+ if err != nil {
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH %s: %v\n", file, err)
+ }
+ return [HashSize]byte{}, err
+ }
+ _, err = io.Copy(h, f)
+ f.Close()
+ if err != nil {
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH %s: %v\n", file, err)
+ }
+ return [HashSize]byte{}, err
+ }
+ h.Sum(out[:0])
+ if debugHash {
+ fmt.Fprintf(os.Stderr, "HASH %s: %x\n", file, out)
+ }
+
+ SetFileHash(file, out)
+ return out, nil
+}
+
+// SetFileHash sets the hash returned by FileHash for file.
+func SetFileHash(file string, sum [HashSize]byte) {
+ hashFileCache.Lock()
+ if hashFileCache.m == nil {
+ hashFileCache.m = make(map[string][HashSize]byte)
+ }
+ hashFileCache.m[file] = sum
+ hashFileCache.Unlock()
+}
diff --git a/vendor/honnef.co/go/tools/internal/passes/buildssa/buildssa.go b/vendor/honnef.co/go/tools/internal/passes/buildssa/buildssa.go
new file mode 100644
index 0000000000000..fde918d121362
--- /dev/null
+++ b/vendor/honnef.co/go/tools/internal/passes/buildssa/buildssa.go
@@ -0,0 +1,116 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package buildssa defines an Analyzer that constructs the SSA
+// representation of an error-free package and returns the set of all
+// functions within it. It does not report any diagnostics itself but
+// may be used as an input to other analyzers.
+//
+// THIS INTERFACE IS EXPERIMENTAL AND MAY BE SUBJECT TO INCOMPATIBLE CHANGE.
+package buildssa
+
+import (
+ "go/ast"
+ "go/types"
+ "reflect"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/ssa"
+)
+
+var Analyzer = &analysis.Analyzer{
+ Name: "buildssa",
+ Doc: "build SSA-form IR for later passes",
+ Run: run,
+ ResultType: reflect.TypeOf(new(SSA)),
+}
+
+// SSA provides SSA-form intermediate representation for all the
+// non-blank source functions in the current package.
+type SSA struct {
+ Pkg *ssa.Package
+ SrcFuncs []*ssa.Function
+}
+
+func run(pass *analysis.Pass) (interface{}, error) {
+ // Plundered from ssautil.BuildPackage.
+
+ // We must create a new Program for each Package because the
+ // analysis API provides no place to hang a Program shared by
+ // all Packages. Consequently, SSA Packages and Functions do not
+ // have a canonical representation across an analysis session of
+ // multiple packages. This is unlikely to be a problem in
+ // practice because the analysis API essentially forces all
+ // packages to be analysed independently, so any given call to
+ // Analysis.Run on a package will see only SSA objects belonging
+ // to a single Program.
+
+ mode := ssa.GlobalDebug
+
+ prog := ssa.NewProgram(pass.Fset, mode)
+
+ // Create SSA packages for all imports.
+ // Order is not significant.
+ created := make(map[*types.Package]bool)
+ var createAll func(pkgs []*types.Package)
+ createAll = func(pkgs []*types.Package) {
+ for _, p := range pkgs {
+ if !created[p] {
+ created[p] = true
+ prog.CreatePackage(p, nil, nil, true)
+ createAll(p.Imports())
+ }
+ }
+ }
+ createAll(pass.Pkg.Imports())
+
+ // Create and build the primary package.
+ ssapkg := prog.CreatePackage(pass.Pkg, pass.Files, pass.TypesInfo, false)
+ ssapkg.Build()
+
+ // Compute list of source functions, including literals,
+ // in source order.
+ var funcs []*ssa.Function
+ var addAnons func(f *ssa.Function)
+ addAnons = func(f *ssa.Function) {
+ funcs = append(funcs, f)
+ for _, anon := range f.AnonFuncs {
+ addAnons(anon)
+ }
+ }
+ addAnons(ssapkg.Members["init"].(*ssa.Function))
+ for _, f := range pass.Files {
+ for _, decl := range f.Decls {
+ if fdecl, ok := decl.(*ast.FuncDecl); ok {
+
+ // SSA will not build a Function
+ // for a FuncDecl named blank.
+ // That's arguably too strict but
+ // relaxing it would break uniqueness of
+ // names of package members.
+ if fdecl.Name.Name == "_" {
+ continue
+ }
+
+ // (init functions have distinct Func
+ // objects named "init" and distinct
+ // ssa.Functions named "init#1", ...)
+
+ fn := pass.TypesInfo.Defs[fdecl.Name].(*types.Func)
+ if fn == nil {
+ panic(fn)
+ }
+
+ f := ssapkg.Prog.FuncValue(fn)
+ if f == nil {
+ panic(fn)
+ }
+
+ addAnons(f)
+ }
+ }
+ }
+
+ return &SSA{Pkg: ssapkg, SrcFuncs: funcs}, nil
+}
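+
+// Usage sketch (illustrative): a dependent analyzer lists buildssa.Analyzer
+// in its Requires slice and fetches the result in its Run function:
+//
+//	ssainfo := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA)
+//	for _, fn := range ssainfo.SrcFuncs {
+//		_ = fn // inspect each source function
+//	}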
diff --git a/vendor/honnef.co/go/tools/internal/renameio/renameio.go b/vendor/honnef.co/go/tools/internal/renameio/renameio.go
new file mode 100644
index 0000000000000..3f3f1708fa48b
--- /dev/null
+++ b/vendor/honnef.co/go/tools/internal/renameio/renameio.go
@@ -0,0 +1,83 @@
+// Copyright 2018 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package renameio writes files atomically by renaming temporary files.
+package renameio
+
+import (
+ "bytes"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "runtime"
+ "strings"
+ "time"
+)
+
+const patternSuffix = "*.tmp"
+
+// Pattern returns a glob pattern that matches the unrenamed temporary files
+// created when writing to filename.
+func Pattern(filename string) string {
+ return filepath.Join(filepath.Dir(filename), filepath.Base(filename)+patternSuffix)
+}
+
+// WriteFile is like ioutil.WriteFile, but first writes data to an arbitrary
+// file in the same directory as filename, then renames it atomically to the
+// final name.
+//
+// That ensures that the final location, if it exists, is always a complete file.
+func WriteFile(filename string, data []byte) (err error) {
+ return WriteToFile(filename, bytes.NewReader(data))
+}
+
+// WriteToFile is a variant of WriteFile that accepts the data as an io.Reader
+// instead of a slice.
+func WriteToFile(filename string, data io.Reader) (err error) {
+ f, err := ioutil.TempFile(filepath.Dir(filename), filepath.Base(filename)+patternSuffix)
+ if err != nil {
+ return err
+ }
+ defer func() {
+ // Only call os.Remove on f.Name() if we failed to rename it: otherwise,
+ // some other process may have created a new file with the same name after
+ // that.
+ if err != nil {
+ f.Close()
+ os.Remove(f.Name())
+ }
+ }()
+
+ if _, err := io.Copy(f, data); err != nil {
+ return err
+ }
+ // Sync the file before renaming it: otherwise, after a crash the reader may
+ // observe a 0-length file instead of the actual contents.
+ // See https://golang.org/issue/22397#issuecomment-380831736.
+ if err := f.Sync(); err != nil {
+ return err
+ }
+ if err := f.Close(); err != nil {
+ return err
+ }
+
+ var start time.Time
+ for {
+ err := os.Rename(f.Name(), filename)
+ if err == nil || runtime.GOOS != "windows" || !strings.HasSuffix(err.Error(), "Access is denied.") {
+ return err
+ }
+
+ // Windows seems to occasionally trigger spurious "Access is denied" errors
+ // here (see golang.org/issue/31247). We're not sure why. It's probably
+ // worth a little extra latency to avoid propagating the spurious errors.
+ if start.IsZero() {
+ start = time.Now()
+ } else if time.Since(start) >= 500*time.Millisecond {
+ return err
+ }
+ time.Sleep(5 * time.Millisecond)
+ }
+}
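+
+// Usage sketch (illustrative, file name hypothetical):
+//
+//	if err := renameio.WriteFile("state.json", data); err != nil {
+//		log.Fatal(err)
+//	}
+//
+// Concurrent readers of "state.json" never observe a partially written file.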
diff --git a/vendor/honnef.co/go/tools/internal/sharedcheck/lint.go b/vendor/honnef.co/go/tools/internal/sharedcheck/lint.go
new file mode 100644
index 0000000000000..affee66072611
--- /dev/null
+++ b/vendor/honnef.co/go/tools/internal/sharedcheck/lint.go
@@ -0,0 +1,70 @@
+package sharedcheck
+
+import (
+ "go/ast"
+ "go/types"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ . "honnef.co/go/tools/lint/lintdsl"
+ "honnef.co/go/tools/ssa"
+)
+
+func CheckRangeStringRunes(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ fn := func(node ast.Node) bool {
+ rng, ok := node.(*ast.RangeStmt)
+ if !ok || !IsBlank(rng.Key) {
+ return true
+ }
+
+ v, _ := ssafn.ValueForExpr(rng.X)
+
+ // Check that we're converting from string to []rune
+ val, _ := v.(*ssa.Convert)
+ if val == nil {
+ return true
+ }
+ Tsrc, ok := val.X.Type().(*types.Basic)
+ if !ok || Tsrc.Kind() != types.String {
+ return true
+ }
+ Tdst, ok := val.Type().(*types.Slice)
+ if !ok {
+ return true
+ }
+ TdstElem, ok := Tdst.Elem().(*types.Basic)
+ if !ok || TdstElem.Kind() != types.Int32 {
+ return true
+ }
+
+ // Check that the result of the conversion is only used
+ // for ranging over its elements.
+ refs := val.Referrers()
+ if refs == nil {
+ return true
+ }
+
+ // Expect two refs: one for obtaining the length of the slice,
+ // one for accessing the elements
+ if len(FilterDebug(*refs)) != 2 {
+ // TODO(dh): right now, we check that only one place
+ // refers to our slice. This will miss cases such as
+ // ranging over the slice twice. Ideally, we'd ensure that
+ // the slice is only used for ranging over (without
+ // accessing the key), but that is harder to do because in
+ // SSA form, ranging over a slice looks like an ordinary
+ // loop with index increments and slice accesses. We'd
+ // have to look at the associated AST node to check that
+ // it's a range statement.
+ return true
+ }
+
+ pass.Reportf(rng.Pos(), "should range over string, not []rune(string)")
+
+ return true
+ }
+ Inspect(ssafn.Syntax(), fn)
+ }
+ return nil, nil
+}
diff --git a/vendor/honnef.co/go/tools/lint/LICENSE b/vendor/honnef.co/go/tools/lint/LICENSE
new file mode 100644
index 0000000000000..796130a123a14
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/LICENSE
@@ -0,0 +1,28 @@
+Copyright (c) 2013 The Go Authors. All rights reserved.
+Copyright (c) 2016 Dominik Honnef. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/honnef.co/go/tools/lint/lint.go b/vendor/honnef.co/go/tools/lint/lint.go
new file mode 100644
index 0000000000000..de5a8f1288d5b
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lint.go
@@ -0,0 +1,491 @@
+// Package lint provides the foundation for tools like staticcheck.
+package lint // import "honnef.co/go/tools/lint"
+
+import (
+ "bytes"
+ "fmt"
+ "go/scanner"
+ "go/token"
+ "go/types"
+ "path/filepath"
+ "sort"
+ "strings"
+ "sync"
+ "sync/atomic"
+ "unicode"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/packages"
+ "honnef.co/go/tools/config"
+)
+
+type Documentation struct {
+ Title string
+ Text string
+ Since string
+ NonDefault bool
+ Options []string
+}
+
+func (doc *Documentation) String() string {
+ b := &strings.Builder{}
+ fmt.Fprintf(b, "%s\n\n", doc.Title)
+ if doc.Text != "" {
+ fmt.Fprintf(b, "%s\n\n", doc.Text)
+ }
+ fmt.Fprint(b, "Available since\n ")
+ if doc.Since == "" {
+ fmt.Fprint(b, "unreleased")
+ } else {
+ fmt.Fprintf(b, "%s", doc.Since)
+ }
+ if doc.NonDefault {
+ fmt.Fprint(b, ", non-default")
+ }
+ fmt.Fprint(b, "\n")
+ if len(doc.Options) > 0 {
+ fmt.Fprintf(b, "\nOptions\n")
+ for _, opt := range doc.Options {
+ fmt.Fprintf(b, " %s", opt)
+ }
+ fmt.Fprint(b, "\n")
+ }
+ return b.String()
+}
+
+type Ignore interface {
+ Match(p Problem) bool
+}
+
+type LineIgnore struct {
+ File string
+ Line int
+ Checks []string
+ Matched bool
+ Pos token.Pos
+}
+
+func (li *LineIgnore) Match(p Problem) bool {
+ pos := p.Pos
+ if pos.Filename != li.File || pos.Line != li.Line {
+ return false
+ }
+ for _, c := range li.Checks {
+ if m, _ := filepath.Match(c, p.Check); m {
+ li.Matched = true
+ return true
+ }
+ }
+ return false
+}
+
+func (li *LineIgnore) String() string {
+ matched := "not matched"
+ if li.Matched {
+ matched = "matched"
+ }
+ return fmt.Sprintf("%s:%d %s (%s)", li.File, li.Line, strings.Join(li.Checks, ", "), matched)
+}
+
+type FileIgnore struct {
+ File string
+ Checks []string
+}
+
+func (fi *FileIgnore) Match(p Problem) bool {
+ if p.Pos.Filename != fi.File {
+ return false
+ }
+ for _, c := range fi.Checks {
+ if m, _ := filepath.Match(c, p.Check); m {
+ return true
+ }
+ }
+ return false
+}
+
+type Severity uint8
+
+const (
+ Error Severity = iota
+ Warning
+ Ignored
+)
+
+// Problem represents a problem in some source code.
+type Problem struct {
+ Pos token.Position
+ End token.Position
+ Message string
+ Check string
+ Severity Severity
+}
+
+func (p *Problem) String() string {
+ return fmt.Sprintf("%s (%s)", p.Message, p.Check)
+}
+
+// A Linter lints Go source code.
+type Linter struct {
+ Checkers []*analysis.Analyzer
+ CumulativeCheckers []CumulativeChecker
+ GoVersion int
+ Config config.Config
+ Stats Stats
+}
+
+type CumulativeChecker interface {
+ Analyzer() *analysis.Analyzer
+ Result() []types.Object
+ ProblemObject(*token.FileSet, types.Object) Problem
+}
+
+func (l *Linter) Lint(cfg *packages.Config, patterns []string) ([]Problem, error) {
+ var allAnalyzers []*analysis.Analyzer
+ allAnalyzers = append(allAnalyzers, l.Checkers...)
+ for _, cum := range l.CumulativeCheckers {
+ allAnalyzers = append(allAnalyzers, cum.Analyzer())
+ }
+
+ // The -checks command line flag overrules all configuration
+ // files, which means that for `-checks="foo"`, no check other
+ // than foo can ever be reported to the user. Make use of this
+ // fact to cull the list of analyses we need to run.
+
+ // Replace "inherit" with "all", as we don't want to base the
+ // list of all checks on the default configuration, which
+ // disables certain checks.
+ checks := make([]string, len(l.Config.Checks))
+ copy(checks, l.Config.Checks)
+ for i, c := range checks {
+ if c == "inherit" {
+ checks[i] = "all"
+ }
+ }
+
+ allowed := FilterChecks(allAnalyzers, checks)
+ var allowedAnalyzers []*analysis.Analyzer
+ for _, c := range l.Checkers {
+ if allowed[c.Name] {
+ allowedAnalyzers = append(allowedAnalyzers, c)
+ }
+ }
+ hasCumulative := false
+ for _, cum := range l.CumulativeCheckers {
+ a := cum.Analyzer()
+ if allowed[a.Name] {
+ hasCumulative = true
+ allowedAnalyzers = append(allowedAnalyzers, a)
+ }
+ }
+
+ r, err := NewRunner(&l.Stats)
+ if err != nil {
+ return nil, err
+ }
+ r.goVersion = l.GoVersion
+
+ pkgs, err := r.Run(cfg, patterns, allowedAnalyzers, hasCumulative)
+ if err != nil {
+ return nil, err
+ }
+
+ tpkgToPkg := map[*types.Package]*Package{}
+ for _, pkg := range pkgs {
+ tpkgToPkg[pkg.Types] = pkg
+
+ for _, e := range pkg.errs {
+ switch e := e.(type) {
+ case types.Error:
+ p := Problem{
+ Pos: e.Fset.PositionFor(e.Pos, false),
+ Message: e.Msg,
+ Severity: Error,
+ Check: "compile",
+ }
+ pkg.problems = append(pkg.problems, p)
+ case packages.Error:
+ msg := e.Msg
+ if len(msg) != 0 && msg[0] == '\n' {
+ // TODO(dh): See https://github.com/golang/go/issues/32363
+ msg = msg[1:]
+ }
+
+ var pos token.Position
+ if e.Pos == "" {
+ // Under certain conditions (malformed package
+ // declarations, multiple packages in the same
+ // directory), go list emits an error on stderr
+ // instead of JSON. Those errors do not have
+ // associated position information in
+ // go/packages.Error, even though the output on
+ // stderr may contain it.
+ if p, n, err := parsePos(msg); err == nil {
+ if abs, err := filepath.Abs(p.Filename); err == nil {
+ p.Filename = abs
+ }
+ pos = p
+ msg = msg[n+2:]
+ }
+ } else {
+ var err error
+ pos, _, err = parsePos(e.Pos)
+ if err != nil {
+ panic(fmt.Sprintf("internal error: %s", e))
+ }
+ }
+ p := Problem{
+ Pos: pos,
+ Message: msg,
+ Severity: Error,
+ Check: "compile",
+ }
+ pkg.problems = append(pkg.problems, p)
+ case scanner.ErrorList:
+ for _, e := range e {
+ p := Problem{
+ Pos: e.Pos,
+ Message: e.Msg,
+ Severity: Error,
+ Check: "compile",
+ }
+ pkg.problems = append(pkg.problems, p)
+ }
+ case error:
+ p := Problem{
+ Pos: token.Position{},
+ Message: e.Error(),
+ Severity: Error,
+ Check: "compile",
+ }
+ pkg.problems = append(pkg.problems, p)
+ }
+ }
+ }
+
+ atomic.StoreUint32(&r.stats.State, StateCumulative)
+ var problems []Problem
+ for _, cum := range l.CumulativeCheckers {
+ for _, res := range cum.Result() {
+ pkg := tpkgToPkg[res.Pkg()]
+ allowedChecks := FilterChecks(allowedAnalyzers, pkg.cfg.Merge(l.Config).Checks)
+ if allowedChecks[cum.Analyzer().Name] {
+ pos := DisplayPosition(pkg.Fset, res.Pos())
+ // FIXME(dh): why are we ignoring generated files
+ // here? Surely this is specific to 'unused', not all
+ // cumulative checkers
+ if _, ok := pkg.gen[pos.Filename]; ok {
+ continue
+ }
+ p := cum.ProblemObject(pkg.Fset, res)
+ problems = append(problems, p)
+ }
+ }
+ }
+
+ for _, pkg := range pkgs {
+ for _, ig := range pkg.ignores {
+ for i := range pkg.problems {
+ p := &pkg.problems[i]
+ if ig.Match(*p) {
+ p.Severity = Ignored
+ }
+ }
+ for i := range problems {
+ p := &problems[i]
+ if ig.Match(*p) {
+ p.Severity = Ignored
+ }
+ }
+ }
+
+ if pkg.cfg == nil {
+ // The package failed to load, otherwise we would have a
+ // valid config. Pass through all errors.
+ problems = append(problems, pkg.problems...)
+ } else {
+ for _, p := range pkg.problems {
+ allowedChecks := FilterChecks(allowedAnalyzers, pkg.cfg.Merge(l.Config).Checks)
+ allowedChecks["compile"] = true
+ if allowedChecks[p.Check] {
+ problems = append(problems, p)
+ }
+ }
+ }
+
+ for _, ig := range pkg.ignores {
+ ig, ok := ig.(*LineIgnore)
+ if !ok {
+ continue
+ }
+ if ig.Matched {
+ continue
+ }
+
+ couldveMatched := false
+ allowedChecks := FilterChecks(allowedAnalyzers, pkg.cfg.Merge(l.Config).Checks)
+ for _, c := range ig.Checks {
+ if !allowedChecks[c] {
+ continue
+ }
+ couldveMatched = true
+ break
+ }
+
+ if !couldveMatched {
+ // The ignored checks were disabled for the containing package.
+ // Don't flag the ignore for not having matched.
+ continue
+ }
+ p := Problem{
+ Pos: DisplayPosition(pkg.Fset, ig.Pos),
+ Message: "this linter directive didn't match anything; should it be removed?",
+ Check: "",
+ }
+ problems = append(problems, p)
+ }
+ }
+
+ if len(problems) == 0 {
+ return nil, nil
+ }
+
+ sort.Slice(problems, func(i, j int) bool {
+ pi := problems[i].Pos
+ pj := problems[j].Pos
+
+ if pi.Filename != pj.Filename {
+ return pi.Filename < pj.Filename
+ }
+ if pi.Line != pj.Line {
+ return pi.Line < pj.Line
+ }
+ if pi.Column != pj.Column {
+ return pi.Column < pj.Column
+ }
+
+ return problems[i].Message < problems[j].Message
+ })
+
+ var out []Problem
+ out = append(out, problems[0])
+ for i, p := range problems[1:] {
+ // We may encounter duplicate problems because one file
+ // can be part of many packages.
+ if problems[i] != p {
+ out = append(out, p)
+ }
+ }
+ return out, nil
+}
+
+func FilterChecks(allChecks []*analysis.Analyzer, checks []string) map[string]bool {
+ // OPT(dh): this entire computation could be cached per package
+ allowedChecks := map[string]bool{}
+
+ for _, check := range checks {
+ b := true
+ if len(check) > 1 && check[0] == '-' {
+ b = false
+ check = check[1:]
+ }
+ if check == "*" || check == "all" {
+ // Match all
+ for _, c := range allChecks {
+ allowedChecks[c.Name] = b
+ }
+ } else if strings.HasSuffix(check, "*") {
+ // Glob
+ prefix := check[:len(check)-1]
+ isCat := strings.IndexFunc(prefix, func(r rune) bool { return unicode.IsNumber(r) }) == -1
+
+ for _, c := range allChecks {
+ idx := strings.IndexFunc(c.Name, func(r rune) bool { return unicode.IsNumber(r) })
+ if isCat {
+ // Glob is S*, which should match S1000 but not SA1000
+ cat := c.Name[:idx]
+ if prefix == cat {
+ allowedChecks[c.Name] = b
+ }
+ } else {
+ // Glob is S1*
+ if strings.HasPrefix(c.Name, prefix) {
+ allowedChecks[c.Name] = b
+ }
+ }
+ }
+ } else {
+ // Literal check name
+ allowedChecks[check] = b
+ }
+ }
+ return allowedChecks
+}
+
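+// For example (illustrative): FilterChecks(all, []string{"S*", "-S1000"})
+// first enables every check in category S (S1000, S1021, ...) via the
+// category glob, then disables S1000; later entries override earlier ones.
+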
+type Positioner interface {
+ Pos() token.Pos
+}
+
+func DisplayPosition(fset *token.FileSet, p token.Pos) token.Position {
+ if p == token.NoPos {
+ return token.Position{}
+ }
+
+ // Only use the adjusted position if it points to another Go file.
+ // This means we'll point to the original file for cgo files, but
+ // we won't point to a YACC grammar file.
+ pos := fset.PositionFor(p, false)
+ adjPos := fset.PositionFor(p, true)
+
+ if filepath.Ext(adjPos.Filename) == ".go" {
+ return adjPos
+ }
+ return pos
+}
+
+var bufferPool = &sync.Pool{
+ New: func() interface{} {
+ buf := bytes.NewBuffer(nil)
+ buf.Grow(64)
+ return buf
+ },
+}
+
+func FuncName(f *types.Func) string {
+ buf := bufferPool.Get().(*bytes.Buffer)
+ buf.Reset()
+ if f.Type() != nil {
+ sig := f.Type().(*types.Signature)
+ if recv := sig.Recv(); recv != nil {
+ buf.WriteByte('(')
+ if _, ok := recv.Type().(*types.Interface); ok {
+ // gcimporter creates abstract methods of
+ // named interfaces using the interface type
+ // (not the named type) as the receiver.
+ // Don't print it in full.
+ buf.WriteString("interface")
+ } else {
+ types.WriteType(buf, recv.Type(), nil)
+ }
+ buf.WriteByte(')')
+ buf.WriteByte('.')
+ } else if f.Pkg() != nil {
+ writePackage(buf, f.Pkg())
+ }
+ }
+ buf.WriteString(f.Name())
+ s := buf.String()
+ bufferPool.Put(buf)
+ return s
+}
+
+func writePackage(buf *bytes.Buffer, pkg *types.Package) {
+ if pkg == nil {
+ return
+ }
+ s := pkg.Path()
+ if s != "" {
+ buf.WriteString(s)
+ buf.WriteByte('.')
+ }
+}
diff --git a/vendor/honnef.co/go/tools/lint/lintdsl/lintdsl.go b/vendor/honnef.co/go/tools/lint/lintdsl/lintdsl.go
new file mode 100644
index 0000000000000..3b939e95f2ffb
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lintdsl/lintdsl.go
@@ -0,0 +1,400 @@
+// Package lintdsl provides helpers for implementing static analysis
+// checks. Dot-importing this package is encouraged.
+package lintdsl
+
+import (
+ "bytes"
+ "flag"
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/printer"
+ "go/token"
+ "go/types"
+ "strings"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/facts"
+ "honnef.co/go/tools/lint"
+ "honnef.co/go/tools/ssa"
+)
+
+type packager interface {
+ Package() *ssa.Package
+}
+
+func CallName(call *ssa.CallCommon) string {
+ if call.IsInvoke() {
+ return ""
+ }
+ switch v := call.Value.(type) {
+ case *ssa.Function:
+ fn, ok := v.Object().(*types.Func)
+ if !ok {
+ return ""
+ }
+ return lint.FuncName(fn)
+ case *ssa.Builtin:
+ return v.Name()
+ }
+ return ""
+}
+
+func IsCallTo(call *ssa.CallCommon, name string) bool { return CallName(call) == name }
+func IsType(T types.Type, name string) bool { return types.TypeString(T, nil) == name }
+
+func FilterDebug(instr []ssa.Instruction) []ssa.Instruction {
+ var out []ssa.Instruction
+ for _, ins := range instr {
+ if _, ok := ins.(*ssa.DebugRef); !ok {
+ out = append(out, ins)
+ }
+ }
+ return out
+}
+
+func IsExample(fn *ssa.Function) bool {
+ if !strings.HasPrefix(fn.Name(), "Example") {
+ return false
+ }
+ f := fn.Prog.Fset.File(fn.Pos())
+ if f == nil {
+ return false
+ }
+ return strings.HasSuffix(f.Name(), "_test.go")
+}
+
+func IsPointerLike(T types.Type) bool {
+ switch T := T.Underlying().(type) {
+ case *types.Interface, *types.Chan, *types.Map, *types.Signature, *types.Pointer:
+ return true
+ case *types.Basic:
+ return T.Kind() == types.UnsafePointer
+ }
+ return false
+}
+
+func IsIdent(expr ast.Expr, ident string) bool {
+ id, ok := expr.(*ast.Ident)
+ return ok && id.Name == ident
+}
+
+// IsBlank reports whether id is the blank identifier "_".
+// If id == nil, the answer is false.
+func IsBlank(id ast.Expr) bool {
+ ident, _ := id.(*ast.Ident)
+ return ident != nil && ident.Name == "_"
+}
+
+func IsIntLiteral(expr ast.Expr, literal string) bool {
+ lit, ok := expr.(*ast.BasicLit)
+ return ok && lit.Kind == token.INT && lit.Value == literal
+}
+
+// Deprecated: use IsIntLiteral instead
+func IsZero(expr ast.Expr) bool {
+ return IsIntLiteral(expr, "0")
+}
+
+func IsOfType(pass *analysis.Pass, expr ast.Expr, name string) bool {
+ return IsType(pass.TypesInfo.TypeOf(expr), name)
+}
+
+func IsInTest(pass *analysis.Pass, node lint.Positioner) bool {
+ // FIXME(dh): this doesn't work for global variables with
+ // initializers
+ f := pass.Fset.File(node.Pos())
+ return f != nil && strings.HasSuffix(f.Name(), "_test.go")
+}
+
+func IsInMain(pass *analysis.Pass, node lint.Positioner) bool {
+ if node, ok := node.(packager); ok {
+ return node.Package().Pkg.Name() == "main"
+ }
+ return pass.Pkg.Name() == "main"
+}
+
+func SelectorName(pass *analysis.Pass, expr *ast.SelectorExpr) string {
+ info := pass.TypesInfo
+ sel := info.Selections[expr]
+ if sel == nil {
+ if x, ok := expr.X.(*ast.Ident); ok {
+ pkg, ok := info.ObjectOf(x).(*types.PkgName)
+ if !ok {
+ // This shouldn't happen
+ return fmt.Sprintf("%s.%s", x.Name, expr.Sel.Name)
+ }
+ return fmt.Sprintf("%s.%s", pkg.Imported().Path(), expr.Sel.Name)
+ }
+ panic(fmt.Sprintf("unsupported selector: %v", expr))
+ }
+ return fmt.Sprintf("(%s).%s", sel.Recv(), sel.Obj().Name())
+}
+
+func IsNil(pass *analysis.Pass, expr ast.Expr) bool {
+ return pass.TypesInfo.Types[expr].IsNil()
+}
+
+func BoolConst(pass *analysis.Pass, expr ast.Expr) bool {
+ val := pass.TypesInfo.ObjectOf(expr.(*ast.Ident)).(*types.Const).Val()
+ return constant.BoolVal(val)
+}
+
+func IsBoolConst(pass *analysis.Pass, expr ast.Expr) bool {
+ // We explicitly don't support typed bools because more often than
+ // not, custom bool types are used as binary enums and the
+ // explicit comparison is desired.
+
+ ident, ok := expr.(*ast.Ident)
+ if !ok {
+ return false
+ }
+ obj := pass.TypesInfo.ObjectOf(ident)
+ c, ok := obj.(*types.Const)
+ if !ok {
+ return false
+ }
+ basic, ok := c.Type().(*types.Basic)
+ if !ok {
+ return false
+ }
+ if basic.Kind() != types.UntypedBool && basic.Kind() != types.Bool {
+ return false
+ }
+ return true
+}
+
+func ExprToInt(pass *analysis.Pass, expr ast.Expr) (int64, bool) {
+ tv := pass.TypesInfo.Types[expr]
+ if tv.Value == nil {
+ return 0, false
+ }
+ if tv.Value.Kind() != constant.Int {
+ return 0, false
+ }
+ return constant.Int64Val(tv.Value)
+}
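+// Usage sketch (illustrative, not part of the upstream file): the zero
+// value of Map is ready to use, e.g. to memoize a value per type:
+//
+//	var m typeutil.Map
+//	m.Set(types.Typ[types.Int32], "four bytes")
+//	if v := m.At(types.Typ[types.Int32]); v != nil {
+//		fmt.Println(v.(string)) // "four bytes"
+//	}
+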
+
+func ExprToString(pass *analysis.Pass, expr ast.Expr) (string, bool) {
+ val := pass.TypesInfo.Types[expr].Value
+ if val == nil {
+ return "", false
+ }
+ if val.Kind() != constant.String {
+ return "", false
+ }
+ return constant.StringVal(val), true
+}
+
+// Dereference returns a pointer's element type; otherwise it returns
+// T.
+func Dereference(T types.Type) types.Type {
+ if p, ok := T.Underlying().(*types.Pointer); ok {
+ return p.Elem()
+ }
+ return T
+}
+
+// DereferenceR returns a pointer's element type; otherwise it returns
+// T. If the element type is itself a pointer, DereferenceR will be
+// applied recursively.
+func DereferenceR(T types.Type) types.Type {
+ if p, ok := T.Underlying().(*types.Pointer); ok {
+ return DereferenceR(p.Elem())
+ }
+ return T
+}
+
+func IsGoVersion(pass *analysis.Pass, minor int) bool {
+ version := pass.Analyzer.Flags.Lookup("go").Value.(flag.Getter).Get().(int)
+ return version >= minor
+}
+
+func CallNameAST(pass *analysis.Pass, call *ast.CallExpr) string {
+ switch fun := call.Fun.(type) {
+ case *ast.SelectorExpr:
+ fn, ok := pass.TypesInfo.ObjectOf(fun.Sel).(*types.Func)
+ if !ok {
+ return ""
+ }
+ return lint.FuncName(fn)
+ case *ast.Ident:
+ obj := pass.TypesInfo.ObjectOf(fun)
+ switch obj := obj.(type) {
+ case *types.Func:
+ return lint.FuncName(obj)
+ case *types.Builtin:
+ return obj.Name()
+ default:
+ return ""
+ }
+ default:
+ return ""
+ }
+}
+
+func IsCallToAST(pass *analysis.Pass, node ast.Node, name string) bool {
+ call, ok := node.(*ast.CallExpr)
+ if !ok {
+ return false
+ }
+ return CallNameAST(pass, call) == name
+}
+
+func IsCallToAnyAST(pass *analysis.Pass, node ast.Node, names ...string) bool {
+ for _, name := range names {
+ if IsCallToAST(pass, node, name) {
+ return true
+ }
+ }
+ return false
+}
+
+func Render(pass *analysis.Pass, x interface{}) string {
+ var buf bytes.Buffer
+ if err := printer.Fprint(&buf, pass.Fset, x); err != nil {
+ panic(err)
+ }
+ return buf.String()
+}
+
+func RenderArgs(pass *analysis.Pass, args []ast.Expr) string {
+ var ss []string
+ for _, arg := range args {
+ ss = append(ss, Render(pass, arg))
+ }
+ return strings.Join(ss, ", ")
+}
+
+func Preamble(f *ast.File) string {
+ cutoff := f.Package
+ if f.Doc != nil {
+ cutoff = f.Doc.Pos()
+ }
+ var out []string
+ for _, cmt := range f.Comments {
+ if cmt.Pos() >= cutoff {
+ break
+ }
+ out = append(out, cmt.Text())
+ }
+ return strings.Join(out, "\n")
+}
+
+func Inspect(node ast.Node, fn func(node ast.Node) bool) {
+ if node == nil {
+ return
+ }
+ ast.Inspect(node, fn)
+}
+
+func GroupSpecs(fset *token.FileSet, specs []ast.Spec) [][]ast.Spec {
+ if len(specs) == 0 {
+ return nil
+ }
+ groups := make([][]ast.Spec, 1)
+ groups[0] = append(groups[0], specs[0])
+
+ for _, spec := range specs[1:] {
+ g := groups[len(groups)-1]
+ if fset.PositionFor(spec.Pos(), false).Line-1 !=
+ fset.PositionFor(g[len(g)-1].End(), false).Line {
+
+ groups = append(groups, nil)
+ }
+
+ groups[len(groups)-1] = append(groups[len(groups)-1], spec)
+ }
+
+ return groups
+}
+
+func IsObject(obj types.Object, name string) bool {
+ var path string
+ if pkg := obj.Pkg(); pkg != nil {
+ path = pkg.Path() + "."
+ }
+ return path+obj.Name() == name
+}
+
+type Field struct {
+ Var *types.Var
+ Tag string
+ Path []int
+}
+
+// FlattenFields recursively flattens T and embedded structs,
+// returning a list of fields. If multiple fields with the same name
+// exist, all will be returned.
+func FlattenFields(T *types.Struct) []Field {
+ return flattenFields(T, nil, nil)
+}
+
+func flattenFields(T *types.Struct, path []int, seen map[types.Type]bool) []Field {
+ if seen == nil {
+ seen = map[types.Type]bool{}
+ }
+ if seen[T] {
+ return nil
+ }
+ seen[T] = true
+ var out []Field
+ for i := 0; i < T.NumFields(); i++ {
+ field := T.Field(i)
+ tag := T.Tag(i)
+ np := append(path[:len(path):len(path)], i)
+ if field.Anonymous() {
+ if s, ok := Dereference(field.Type()).Underlying().(*types.Struct); ok {
+ out = append(out, flattenFields(s, np, seen)...)
+ }
+ } else {
+ out = append(out, Field{field, tag, np})
+ }
+ }
+ return out
+}
+
+func File(pass *analysis.Pass, node lint.Positioner) *ast.File {
+ pass.Fset.PositionFor(node.Pos(), true)
+ m := pass.ResultOf[facts.TokenFile].(map[*token.File]*ast.File)
+ return m[pass.Fset.File(node.Pos())]
+}
+
+// IsGenerated reports whether pos is in a generated file. It ignores
+// //line directives.
+func IsGenerated(pass *analysis.Pass, pos token.Pos) bool {
+ _, ok := Generator(pass, pos)
+ return ok
+}
+
+// Generator returns the generator that generated the file containing
+// pos. It ignores //line directives.
+func Generator(pass *analysis.Pass, pos token.Pos) (facts.Generator, bool) {
+ file := pass.Fset.PositionFor(pos, false).Filename
+ m := pass.ResultOf[facts.Generated].(map[string]facts.Generator)
+ g, ok := m[file]
+ return g, ok
+}
+
+func ReportfFG(pass *analysis.Pass, pos token.Pos, f string, args ...interface{}) {
+ file := lint.DisplayPosition(pass.Fset, pos).Filename
+ m := pass.ResultOf[facts.Generated].(map[string]facts.Generator)
+ if _, ok := m[file]; ok {
+ return
+ }
+ pass.Reportf(pos, f, args...)
+}
+
+func ReportNodef(pass *analysis.Pass, node ast.Node, format string, args ...interface{}) {
+ msg := fmt.Sprintf(format, args...)
+ pass.Report(analysis.Diagnostic{Pos: node.Pos(), End: node.End(), Message: msg})
+}
+
+func ReportNodefFG(pass *analysis.Pass, node ast.Node, format string, args ...interface{}) {
+ file := lint.DisplayPosition(pass.Fset, node.Pos()).Filename
+ m := pass.ResultOf[facts.Generated].(map[string]facts.Generator)
+ if _, ok := m[file]; ok {
+ return
+ }
+ ReportNodef(pass, node, format, args...)
+}
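
Several of the helpers above are pure functions over `go/types` values and can be exercised outside an analysis pass. Below is a minimal standalone sketch of the `Dereference` behaviour; the helper is reimplemented locally rather than imported, since this excerpt doesn't show the package's import path:

```go
package main

import (
	"fmt"
	"go/types"
)

// dereference mirrors the vendored Dereference helper: it unwraps one
// level of pointer and returns every other type unchanged.
func dereference(T types.Type) types.Type {
	if p, ok := T.Underlying().(*types.Pointer); ok {
		return p.Elem()
	}
	return T
}

func main() {
	intT := types.Typ[types.Int]
	ptrT := types.NewPointer(intT)

	fmt.Println(dereference(ptrT)) // int
	fmt.Println(dereference(intT)) // int (non-pointer types pass through)
}
```
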
diff --git a/vendor/honnef.co/go/tools/lint/lintutil/format/format.go b/vendor/honnef.co/go/tools/lint/lintutil/format/format.go
new file mode 100644
index 0000000000000..9385431f88b8f
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lintutil/format/format.go
@@ -0,0 +1,135 @@
+// Package format provides formatters for linter problems.
+package format
+
+import (
+ "encoding/json"
+ "fmt"
+ "go/token"
+ "io"
+ "os"
+ "path/filepath"
+ "text/tabwriter"
+
+ "honnef.co/go/tools/lint"
+)
+
+func shortPath(path string) string {
+ cwd, err := os.Getwd()
+ if err != nil {
+ return path
+ }
+ if rel, err := filepath.Rel(cwd, path); err == nil && len(rel) < len(path) {
+ return rel
+ }
+ return path
+}
+
+func relativePositionString(pos token.Position) string {
+ s := shortPath(pos.Filename)
+ if pos.IsValid() {
+ if s != "" {
+ s += ":"
+ }
+ s += fmt.Sprintf("%d:%d", pos.Line, pos.Column)
+ }
+ if s == "" {
+ s = "-"
+ }
+ return s
+}
+
+type Statter interface {
+ Stats(total, errors, warnings int)
+}
+
+type Formatter interface {
+ Format(p lint.Problem)
+}
+
+type Text struct {
+ W io.Writer
+}
+
+func (o Text) Format(p lint.Problem) {
+ fmt.Fprintf(o.W, "%v: %s\n", relativePositionString(p.Pos), p.String())
+}
+
+type JSON struct {
+ W io.Writer
+}
+
+func severity(s lint.Severity) string {
+ switch s {
+ case lint.Error:
+ return "error"
+ case lint.Warning:
+ return "warning"
+ case lint.Ignored:
+ return "ignored"
+ }
+ return ""
+}
+
+func (o JSON) Format(p lint.Problem) {
+ type location struct {
+ File string `json:"file"`
+ Line int `json:"line"`
+ Column int `json:"column"`
+ }
+ jp := struct {
+ Code string `json:"code"`
+ Severity string `json:"severity,omitempty"`
+ Location location `json:"location"`
+ End location `json:"end"`
+ Message string `json:"message"`
+ }{
+ Code: p.Check,
+ Severity: severity(p.Severity),
+ Location: location{
+ File: p.Pos.Filename,
+ Line: p.Pos.Line,
+ Column: p.Pos.Column,
+ },
+ End: location{
+ File: p.End.Filename,
+ Line: p.End.Line,
+ Column: p.End.Column,
+ },
+ Message: p.Message,
+ }
+ _ = json.NewEncoder(o.W).Encode(jp)
+}
+
+type Stylish struct {
+ W io.Writer
+
+ prevFile string
+ tw *tabwriter.Writer
+}
+
+func (o *Stylish) Format(p lint.Problem) {
+ pos := p.Pos
+ if pos.Filename == "" {
+ pos.Filename = "-"
+ }
+
+ if pos.Filename != o.prevFile {
+ if o.prevFile != "" {
+ o.tw.Flush()
+ fmt.Fprintln(o.W)
+ }
+ fmt.Fprintln(o.W, pos.Filename)
+ o.prevFile = pos.Filename
+ o.tw = tabwriter.NewWriter(o.W, 0, 4, 2, ' ', 0)
+ }
+ fmt.Fprintf(o.tw, " (%d, %d)\t%s\t%s\n", pos.Line, pos.Column, p.Check, p.Message)
+}
+
+func (o *Stylish) Stats(total, errors, warnings int) {
+ if o.tw != nil {
+ o.tw.Flush()
+ fmt.Fprintln(o.W)
+ }
+ fmt.Fprintf(o.W, " ✖ %d problems (%d errors, %d warnings)\n",
+ total, errors, warnings)
+}
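
All three formatters satisfy the small `Formatter` interface, so a driver can pick one at runtime from a flag. A sketch of feeding a hand-built problem through the text formatter, assuming `lint.Problem`'s exported fields match their use in `runner.go` later in this diff:

```go
package main

import (
	"go/token"
	"os"

	"honnef.co/go/tools/lint"
	"honnef.co/go/tools/lint/lintutil/format"
)

func main() {
	// A hand-built problem; in a real driver these come from the runner.
	p := lint.Problem{
		Pos:     token.Position{Filename: "main.go", Line: 10, Column: 2},
		Message: "example diagnostic",
		Check:   "S1000",
	}

	var f format.Formatter = format.Text{W: os.Stdout}
	f.Format(p) // prints "main.go:10:2: ..." style output
}
```
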
diff --git a/vendor/honnef.co/go/tools/lint/lintutil/stats.go b/vendor/honnef.co/go/tools/lint/lintutil/stats.go
new file mode 100644
index 0000000000000..ba8caf0afddb9
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lintutil/stats.go
@@ -0,0 +1,7 @@
+// +build !aix,!android,!darwin,!dragonfly,!freebsd,!linux,!netbsd,!openbsd,!solaris
+
+package lintutil
+
+import "os"
+
+var infoSignals = []os.Signal{}
diff --git a/vendor/honnef.co/go/tools/lint/lintutil/stats_bsd.go b/vendor/honnef.co/go/tools/lint/lintutil/stats_bsd.go
new file mode 100644
index 0000000000000..3a62ede031c32
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lintutil/stats_bsd.go
@@ -0,0 +1,10 @@
+// +build darwin dragonfly freebsd netbsd openbsd
+
+package lintutil
+
+import (
+ "os"
+ "syscall"
+)
+
+var infoSignals = []os.Signal{syscall.SIGINFO}
diff --git a/vendor/honnef.co/go/tools/lint/lintutil/stats_posix.go b/vendor/honnef.co/go/tools/lint/lintutil/stats_posix.go
new file mode 100644
index 0000000000000..53f21c666b121
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lintutil/stats_posix.go
@@ -0,0 +1,10 @@
+// +build aix android linux solaris
+
+package lintutil
+
+import (
+ "os"
+ "syscall"
+)
+
+var infoSignals = []os.Signal{syscall.SIGUSR1}
diff --git a/vendor/honnef.co/go/tools/lint/lintutil/util.go b/vendor/honnef.co/go/tools/lint/lintutil/util.go
new file mode 100644
index 0000000000000..fe0279f921cef
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/lintutil/util.go
@@ -0,0 +1,392 @@
+// Copyright (c) 2013 The Go Authors. All rights reserved.
+//
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file or at
+// https://developers.google.com/open-source/licenses/bsd.
+
+// Package lintutil provides helpers for writing linter command lines.
+package lintutil // import "honnef.co/go/tools/lint/lintutil"
+
+import (
+ "crypto/sha256"
+ "errors"
+ "flag"
+ "fmt"
+ "go/build"
+ "go/token"
+ "io"
+ "log"
+ "os"
+ "os/signal"
+ "regexp"
+ "runtime"
+ "runtime/pprof"
+ "strconv"
+ "strings"
+ "sync/atomic"
+
+ "honnef.co/go/tools/config"
+ "honnef.co/go/tools/internal/cache"
+ "honnef.co/go/tools/lint"
+ "honnef.co/go/tools/lint/lintutil/format"
+ "honnef.co/go/tools/version"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/buildutil"
+ "golang.org/x/tools/go/packages"
+)
+
+func NewVersionFlag() flag.Getter {
+ tags := build.Default.ReleaseTags
+ v := tags[len(tags)-1][2:]
+ version := new(VersionFlag)
+ if err := version.Set(v); err != nil {
+ panic(fmt.Sprintf("internal error: %s", err))
+ }
+ return version
+}
+
+type VersionFlag int
+
+func (v *VersionFlag) String() string {
+ return fmt.Sprintf("1.%d", *v)
+}
+
+func (v *VersionFlag) Set(s string) error {
+ if len(s) < 3 {
+ return errors.New("invalid Go version")
+ }
+ if s[0] != '1' {
+ return errors.New("invalid Go version")
+ }
+ if s[1] != '.' {
+ return errors.New("invalid Go version")
+ }
+ i, err := strconv.Atoi(s[2:])
+ *v = VersionFlag(i)
+ return err
+}
+
+func (v *VersionFlag) Get() interface{} {
+ return int(*v)
+}
+
+func usage(name string, flags *flag.FlagSet) func() {
+ return func() {
+ fmt.Fprintf(os.Stderr, "Usage of %s:\n", name)
+ fmt.Fprintf(os.Stderr, "\t%s [flags] # runs on package in current directory\n", name)
+ fmt.Fprintf(os.Stderr, "\t%s [flags] packages\n", name)
+ fmt.Fprintf(os.Stderr, "\t%s [flags] directory\n", name)
+ fmt.Fprintf(os.Stderr, "\t%s [flags] files... # must be a single package\n", name)
+ fmt.Fprintf(os.Stderr, "Flags:\n")
+ flags.PrintDefaults()
+ }
+}
+
+type list []string
+
+func (list *list) String() string {
+ return `"` + strings.Join(*list, ",") + `"`
+}
+
+func (list *list) Set(s string) error {
+ if s == "" {
+ *list = nil
+ return nil
+ }
+
+ *list = strings.Split(s, ",")
+ return nil
+}
+
+func FlagSet(name string) *flag.FlagSet {
+ flags := flag.NewFlagSet("", flag.ExitOnError)
+ flags.Usage = usage(name, flags)
+ flags.String("tags", "", "List of `build tags`")
+ flags.Bool("tests", true, "Include tests")
+ flags.Bool("version", false, "Print version and exit")
+ flags.Bool("show-ignored", false, "Don't filter ignored problems")
+ flags.String("f", "text", "Output `format` (valid choices are 'stylish', 'text' and 'json')")
+ flags.String("explain", "", "Print description of `check`")
+
+ flags.String("debug.cpuprofile", "", "Write CPU profile to `file`")
+ flags.String("debug.memprofile", "", "Write memory profile to `file`")
+ flags.Bool("debug.version", false, "Print detailed version information about this program")
+ flags.Bool("debug.no-compile-errors", false, "Don't print compile errors")
+
+ checks := list{"inherit"}
+ fail := list{"all"}
+ flags.Var(&checks, "checks", "Comma-separated list of `checks` to enable.")
+ flags.Var(&fail, "fail", "Comma-separated list of `checks` that can cause a non-zero exit status.")
+
+ tags := build.Default.ReleaseTags
+ v := tags[len(tags)-1][2:]
+ version := new(VersionFlag)
+ if err := version.Set(v); err != nil {
+ panic(fmt.Sprintf("internal error: %s", err))
+ }
+
+ flags.Var(version, "go", "Target Go `version` in the format '1.x'")
+ return flags
+}
+
+func findCheck(cs []*analysis.Analyzer, check string) (*analysis.Analyzer, bool) {
+ for _, c := range cs {
+ if c.Name == check {
+ return c, true
+ }
+ }
+ return nil, false
+}
+
+func ProcessFlagSet(cs []*analysis.Analyzer, cums []lint.CumulativeChecker, fs *flag.FlagSet) {
+ tags := fs.Lookup("tags").Value.(flag.Getter).Get().(string)
+ tests := fs.Lookup("tests").Value.(flag.Getter).Get().(bool)
+ goVersion := fs.Lookup("go").Value.(flag.Getter).Get().(int)
+ formatter := fs.Lookup("f").Value.(flag.Getter).Get().(string)
+ printVersion := fs.Lookup("version").Value.(flag.Getter).Get().(bool)
+ showIgnored := fs.Lookup("show-ignored").Value.(flag.Getter).Get().(bool)
+ explain := fs.Lookup("explain").Value.(flag.Getter).Get().(string)
+
+ cpuProfile := fs.Lookup("debug.cpuprofile").Value.(flag.Getter).Get().(string)
+ memProfile := fs.Lookup("debug.memprofile").Value.(flag.Getter).Get().(string)
+ debugVersion := fs.Lookup("debug.version").Value.(flag.Getter).Get().(bool)
+ debugNoCompile := fs.Lookup("debug.no-compile-errors").Value.(flag.Getter).Get().(bool)
+
+ cfg := config.Config{}
+ cfg.Checks = *fs.Lookup("checks").Value.(*list)
+
+ exit := func(code int) {
+ if cpuProfile != "" {
+ pprof.StopCPUProfile()
+ }
+ if memProfile != "" {
+ f, err := os.Create(memProfile)
+ if err != nil {
+ panic(err)
+ }
+ runtime.GC()
+ pprof.WriteHeapProfile(f)
+ }
+ os.Exit(code)
+ }
+ if cpuProfile != "" {
+ f, err := os.Create(cpuProfile)
+ if err != nil {
+ log.Fatal(err)
+ }
+ pprof.StartCPUProfile(f)
+ }
+
+ if debugVersion {
+ version.Verbose()
+ exit(0)
+ }
+
+ if printVersion {
+ version.Print()
+ exit(0)
+ }
+
+ // Validate that the tags argument is well-formed. go/packages
+ // doesn't detect malformed build flags and returns unhelpful
+ // errors.
+ tf := buildutil.TagsFlag{}
+ if err := tf.Set(tags); err != nil {
+ fmt.Fprintln(os.Stderr, fmt.Errorf("invalid value %q for flag -tags: %s", tags, err))
+ exit(1)
+ }
+
+ if explain != "" {
+ var haystack []*analysis.Analyzer
+ haystack = append(haystack, cs...)
+ for _, cum := range cums {
+ haystack = append(haystack, cum.Analyzer())
+ }
+ check, ok := findCheck(haystack, explain)
+ if !ok {
+ fmt.Fprintln(os.Stderr, "Couldn't find check", explain)
+ exit(1)
+ }
+ if check.Doc == "" {
+ fmt.Fprintln(os.Stderr, explain, "has no documentation")
+ exit(1)
+ }
+ fmt.Println(check.Doc)
+ exit(0)
+ }
+
+ ps, err := Lint(cs, cums, fs.Args(), &Options{
+ Tags: tags,
+ LintTests: tests,
+ GoVersion: goVersion,
+ Config: cfg,
+ })
+ if err != nil {
+ fmt.Fprintln(os.Stderr, err)
+ exit(1)
+ }
+
+ var f format.Formatter
+ switch formatter {
+ case "text":
+ f = format.Text{W: os.Stdout}
+ case "stylish":
+ f = &format.Stylish{W: os.Stdout}
+ case "json":
+ f = format.JSON{W: os.Stdout}
+ default:
+ fmt.Fprintf(os.Stderr, "unsupported output format %q\n", formatter)
+ exit(2)
+ }
+
+ var (
+ total int
+ errors int
+ warnings int
+ )
+
+ fail := *fs.Lookup("fail").Value.(*list)
+ analyzers := make([]*analysis.Analyzer, len(cs), len(cs)+len(cums))
+ copy(analyzers, cs)
+ for _, cum := range cums {
+ analyzers = append(analyzers, cum.Analyzer())
+ }
+ shouldExit := lint.FilterChecks(analyzers, fail)
+ shouldExit["compile"] = true
+
+ total = len(ps)
+ for _, p := range ps {
+ if p.Check == "compile" && debugNoCompile {
+ continue
+ }
+ if p.Severity == lint.Ignored && !showIgnored {
+ continue
+ }
+ if shouldExit[p.Check] {
+ errors++
+ } else {
+ p.Severity = lint.Warning
+ warnings++
+ }
+ f.Format(p)
+ }
+ if f, ok := f.(format.Statter); ok {
+ f.Stats(total, errors, warnings)
+ }
+ if errors > 0 {
+ exit(1)
+ }
+ exit(0)
+}
+
+type Options struct {
+ Config config.Config
+
+ Tags string
+ LintTests bool
+ GoVersion int
+}
+
+func computeSalt() ([]byte, error) {
+ if version.Version != "devel" {
+ return []byte(version.Version), nil
+ }
+ p, err := os.Executable()
+ if err != nil {
+ return nil, err
+ }
+ f, err := os.Open(p)
+ if err != nil {
+ return nil, err
+ }
+ defer f.Close()
+ h := sha256.New()
+ if _, err := io.Copy(h, f); err != nil {
+ return nil, err
+ }
+ return h.Sum(nil), nil
+}
+
+func Lint(cs []*analysis.Analyzer, cums []lint.CumulativeChecker, paths []string, opt *Options) ([]lint.Problem, error) {
+ salt, err := computeSalt()
+ if err != nil {
+ return nil, fmt.Errorf("could not compute salt for cache: %s", err)
+ }
+ cache.SetSalt(salt)
+
+ if opt == nil {
+ opt = &Options{}
+ }
+
+ l := &lint.Linter{
+ Checkers: cs,
+ CumulativeCheckers: cums,
+ GoVersion: opt.GoVersion,
+ Config: opt.Config,
+ }
+ cfg := &packages.Config{}
+ if opt.LintTests {
+ cfg.Tests = true
+ }
+ if opt.Tags != "" {
+ cfg.BuildFlags = append(cfg.BuildFlags, "-tags", opt.Tags)
+ }
+
+ printStats := func() {
+ // Individual stats are read atomically, but overall there
+ // is no synchronisation. For printing rough progress
+ // information, this doesn't matter.
+ switch atomic.LoadUint32(&l.Stats.State) {
+ case lint.StateInitializing:
+ fmt.Fprintln(os.Stderr, "Status: initializing")
+ case lint.StateGraph:
+ fmt.Fprintln(os.Stderr, "Status: loading package graph")
+ case lint.StateProcessing:
+ fmt.Fprintf(os.Stderr, "Packages: %d/%d initial, %d/%d total; Workers: %d/%d; Problems: %d\n",
+ atomic.LoadUint32(&l.Stats.ProcessedInitialPackages),
+ atomic.LoadUint32(&l.Stats.InitialPackages),
+ atomic.LoadUint32(&l.Stats.ProcessedPackages),
+ atomic.LoadUint32(&l.Stats.TotalPackages),
+ atomic.LoadUint32(&l.Stats.ActiveWorkers),
+ atomic.LoadUint32(&l.Stats.TotalWorkers),
+ atomic.LoadUint32(&l.Stats.Problems),
+ )
+ case lint.StateCumulative:
+ fmt.Fprintln(os.Stderr, "Status: processing cumulative checkers")
+ }
+ }
+ if len(infoSignals) > 0 {
+ ch := make(chan os.Signal, 1)
+ signal.Notify(ch, infoSignals...)
+ defer signal.Stop(ch)
+ go func() {
+ for range ch {
+ printStats()
+ }
+ }()
+ }
+
+ return l.Lint(cfg, paths)
+}
+
+var posRe = regexp.MustCompile(`^(.+?):(\d+)(?::(\d+)?)?$`)
+
+func parsePos(pos string) token.Position {
+ if pos == "-" || pos == "" {
+ return token.Position{}
+ }
+ parts := posRe.FindStringSubmatch(pos)
+ if parts == nil {
+ panic(fmt.Sprintf("internal error: malformed position %q", pos))
+ }
+ file := parts[1]
+ line, _ := strconv.Atoi(parts[2])
+ col, _ := strconv.Atoi(parts[3])
+ return token.Position{
+ Filename: file,
+ Line: line,
+ Column: col,
+ }
+}
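
`FlagSet` and `ProcessFlagSet` together form a complete command-line driver: the former declares the flags, the latter loads packages, runs the analyzers, formats problems and exits. A minimal driver sketch, taking its analyzer set from the `simple.Analyzers` map added later in this diff:

```go
package main

import (
	"os"

	"golang.org/x/tools/go/analysis"
	"honnef.co/go/tools/lint/lintutil"
	"honnef.co/go/tools/simple"
)

func main() {
	// Collect every gosimple check into a flat analyzer list.
	var cs []*analysis.Analyzer
	for _, a := range simple.Analyzers {
		cs = append(cs, a)
	}

	fs := lintutil.FlagSet("gosimple")
	fs.Parse(os.Args[1:])

	// Loads packages, runs the analyzers, prints the problems in the
	// chosen format, then exits with an appropriate status code.
	lintutil.ProcessFlagSet(cs, nil, fs)
}
```

Note that `ProcessFlagSet` never returns on completion; it calls `os.Exit` via its internal `exit` helper.
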
diff --git a/vendor/honnef.co/go/tools/lint/runner.go b/vendor/honnef.co/go/tools/lint/runner.go
new file mode 100644
index 0000000000000..3b22a63fa219b
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/runner.go
@@ -0,0 +1,970 @@
+package lint
+
+/*
+Parallelism
+
+Runner implements parallel processing of packages by spawning one
+goroutine per package in the dependency graph, without any semaphores.
+Each goroutine initially waits on the completion of all of its
+dependencies, thus establishing correct order of processing. Once all
+dependencies finish processing, the goroutine will load the package
+from export data or source – this loading is guarded by a semaphore,
+sized according to the number of CPU cores. This way, we only have as
+many packages occupying memory and CPU resources as there are actual
+cores to process them.
+
+This combination of unbounded goroutines and bounded package loading
+means that if we have many parallel, independent subgraphs, they will
+all execute in parallel, while not wasting resources for long linear
+chains or trying to process more subgraphs in parallel than the system
+can handle.
+
+*/
+
+import (
+ "bytes"
+ "encoding/gob"
+ "encoding/hex"
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+ "reflect"
+ "regexp"
+ "runtime"
+ "sort"
+ "strconv"
+ "strings"
+ "sync"
+ "sync/atomic"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/packages"
+ "golang.org/x/tools/go/types/objectpath"
+ "honnef.co/go/tools/config"
+ "honnef.co/go/tools/facts"
+ "honnef.co/go/tools/internal/cache"
+ "honnef.co/go/tools/loader"
+)
+
+// If enabled, abuse of the go/analysis API will lead to panics
+const sanityCheck = true
+
+// OPT(dh): for a dependency tree A->B->C->D, if we have cached data
+// for B, there should be no need to load C and D individually. Go's
+// export data for B contains all the data we need on types, and our
+// fact cache could store the union of B, C and D in B.
+//
+// This may change unused's behavior, however, as it may observe fewer
+// interfaces from transitive dependencies.
+
+type Package struct {
+ dependents uint64
+
+ *packages.Package
+ Imports []*Package
+ initial bool
+ fromSource bool
+ hash string
+ done chan struct{}
+
+ resultsMu sync.Mutex
+ // results maps analyzer IDs to analyzer results
+ results []*result
+
+ cfg *config.Config
+ gen map[string]facts.Generator
+ problems []Problem
+ ignores []Ignore
+ errs []error
+
+ // these slices are indexed by analysis
+ facts []map[types.Object][]analysis.Fact
+ pkgFacts [][]analysis.Fact
+
+ canClearTypes bool
+}
+
+func (pkg *Package) decUse() {
+ atomic.AddUint64(&pkg.dependents, ^uint64(0))
+ if atomic.LoadUint64(&pkg.dependents) == 0 {
+ // nobody depends on this package anymore
+ if pkg.canClearTypes {
+ pkg.Types = nil
+ }
+ pkg.facts = nil
+ pkg.pkgFacts = nil
+
+ for _, imp := range pkg.Imports {
+ imp.decUse()
+ }
+ }
+}
+
+type result struct {
+ v interface{}
+ err error
+ ready chan struct{}
+}
+
+type Runner struct {
+ ld loader.Loader
+ cache *cache.Cache
+
+ analyzerIDs analyzerIDs
+
+ // limits parallelism of loading packages
+ loadSem chan struct{}
+
+ goVersion int
+ stats *Stats
+}
+
+type analyzerIDs struct {
+ m map[*analysis.Analyzer]int
+}
+
+func (ids analyzerIDs) get(a *analysis.Analyzer) int {
+ id, ok := ids.m[a]
+ if !ok {
+ panic(fmt.Sprintf("no analyzer ID for %s", a.Name))
+ }
+ return id
+}
+
+type Fact struct {
+ Path string
+ Fact analysis.Fact
+}
+
+type analysisAction struct {
+ analyzer *analysis.Analyzer
+ analyzerID int
+ pkg *Package
+ newPackageFacts []analysis.Fact
+ problems []Problem
+
+ pkgFacts map[*types.Package][]analysis.Fact
+}
+
+func (ac *analysisAction) String() string {
+ return fmt.Sprintf("%s @ %s", ac.analyzer, ac.pkg)
+}
+
+func (ac *analysisAction) allObjectFacts() []analysis.ObjectFact {
+ out := make([]analysis.ObjectFact, 0, len(ac.pkg.facts[ac.analyzerID]))
+ for obj, facts := range ac.pkg.facts[ac.analyzerID] {
+ for _, fact := range facts {
+ out = append(out, analysis.ObjectFact{
+ Object: obj,
+ Fact: fact,
+ })
+ }
+ }
+ return out
+}
+
+func (ac *analysisAction) allPackageFacts() []analysis.PackageFact {
+ out := make([]analysis.PackageFact, 0, len(ac.pkgFacts))
+ for pkg, facts := range ac.pkgFacts {
+ for _, fact := range facts {
+ out = append(out, analysis.PackageFact{
+ Package: pkg,
+ Fact: fact,
+ })
+ }
+ }
+ return out
+}
+
+func (ac *analysisAction) importObjectFact(obj types.Object, fact analysis.Fact) bool {
+ if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
+ panic("analysis doesn't export any facts")
+ }
+ for _, f := range ac.pkg.facts[ac.analyzerID][obj] {
+ if reflect.TypeOf(f) == reflect.TypeOf(fact) {
+ reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f).Elem())
+ return true
+ }
+ }
+ return false
+}
+
+func (ac *analysisAction) importPackageFact(pkg *types.Package, fact analysis.Fact) bool {
+ if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
+ panic("analysis doesn't export any facts")
+ }
+ for _, f := range ac.pkgFacts[pkg] {
+ if reflect.TypeOf(f) == reflect.TypeOf(fact) {
+ reflect.ValueOf(fact).Elem().Set(reflect.ValueOf(f).Elem())
+ return true
+ }
+ }
+ return false
+}
+
+func (ac *analysisAction) exportObjectFact(obj types.Object, fact analysis.Fact) {
+ if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
+ panic("analysis doesn't export any facts")
+ }
+ ac.pkg.facts[ac.analyzerID][obj] = append(ac.pkg.facts[ac.analyzerID][obj], fact)
+}
+
+func (ac *analysisAction) exportPackageFact(fact analysis.Fact) {
+ if sanityCheck && len(ac.analyzer.FactTypes) == 0 {
+ panic("analysis doesn't export any facts")
+ }
+ ac.pkgFacts[ac.pkg.Types] = append(ac.pkgFacts[ac.pkg.Types], fact)
+ ac.newPackageFacts = append(ac.newPackageFacts, fact)
+}
+
+func (ac *analysisAction) report(pass *analysis.Pass, d analysis.Diagnostic) {
+ p := Problem{
+ Pos: DisplayPosition(pass.Fset, d.Pos),
+ End: DisplayPosition(pass.Fset, d.End),
+ Message: d.Message,
+ Check: pass.Analyzer.Name,
+ }
+ ac.problems = append(ac.problems, p)
+}
+
+func (r *Runner) runAnalysis(ac *analysisAction) (ret interface{}, err error) {
+ ac.pkg.resultsMu.Lock()
+ res := ac.pkg.results[r.analyzerIDs.get(ac.analyzer)]
+ if res != nil {
+ ac.pkg.resultsMu.Unlock()
+ <-res.ready
+ return res.v, res.err
+ } else {
+ res = &result{
+ ready: make(chan struct{}),
+ }
+ ac.pkg.results[r.analyzerIDs.get(ac.analyzer)] = res
+ ac.pkg.resultsMu.Unlock()
+
+ defer func() {
+ res.v = ret
+ res.err = err
+ close(res.ready)
+ }()
+
+ pass := new(analysis.Pass)
+ *pass = analysis.Pass{
+ Analyzer: ac.analyzer,
+ Fset: ac.pkg.Fset,
+ Files: ac.pkg.Syntax,
+ // type information may be nil or may be populated. if it is
+ // nil, it will get populated later.
+ Pkg: ac.pkg.Types,
+ TypesInfo: ac.pkg.TypesInfo,
+ TypesSizes: ac.pkg.TypesSizes,
+ ResultOf: map[*analysis.Analyzer]interface{}{},
+ ImportObjectFact: ac.importObjectFact,
+ ImportPackageFact: ac.importPackageFact,
+ ExportObjectFact: ac.exportObjectFact,
+ ExportPackageFact: ac.exportPackageFact,
+ Report: func(d analysis.Diagnostic) {
+ ac.report(pass, d)
+ },
+ AllObjectFacts: ac.allObjectFacts,
+ AllPackageFacts: ac.allPackageFacts,
+ }
+
+ if !ac.pkg.initial {
+ // Don't report problems in dependencies
+ pass.Report = func(analysis.Diagnostic) {}
+ }
+ return r.runAnalysisUser(pass, ac)
+ }
+}
+
+func (r *Runner) loadCachedFacts(a *analysis.Analyzer, pkg *Package) ([]Fact, bool) {
+ if len(a.FactTypes) == 0 {
+ return nil, true
+ }
+
+ var facts []Fact
+ // Look in the cache for facts
+ aID, err := passActionID(pkg, a)
+ if err != nil {
+ return nil, false
+ }
+ aID = cache.Subkey(aID, "facts")
+ b, _, err := r.cache.GetBytes(aID)
+ if err != nil {
+ // No cached facts, analyse this package like a user-provided one, but ignore diagnostics
+ return nil, false
+ }
+
+ if err := gob.NewDecoder(bytes.NewReader(b)).Decode(&facts); err != nil {
+ // Cached facts are broken, analyse this package like a user-provided one, but ignore diagnostics
+ return nil, false
+ }
+ return facts, true
+}
+
+type dependencyError struct {
+ dep string
+ err error
+}
+
+func (err dependencyError) nested() dependencyError {
+ if o, ok := err.err.(dependencyError); ok {
+ return o.nested()
+ }
+ return err
+}
+
+func (err dependencyError) Error() string {
+ if o, ok := err.err.(dependencyError); ok {
+ return o.Error()
+ }
+ return fmt.Sprintf("error running dependency %s: %s", err.dep, err.err)
+}
+
+func (r *Runner) makeAnalysisAction(a *analysis.Analyzer, pkg *Package) *analysisAction {
+ aid := r.analyzerIDs.get(a)
+ ac := &analysisAction{
+ analyzer: a,
+ analyzerID: aid,
+ pkg: pkg,
+ }
+
+ if len(a.FactTypes) == 0 {
+ return ac
+ }
+
+ // Merge all package facts of dependencies
+ ac.pkgFacts = map[*types.Package][]analysis.Fact{}
+ seen := map[*Package]struct{}{}
+ var dfs func(*Package)
+ dfs = func(pkg *Package) {
+ if _, ok := seen[pkg]; ok {
+ return
+ }
+ seen[pkg] = struct{}{}
+ s := pkg.pkgFacts[aid]
+ ac.pkgFacts[pkg.Types] = s[0:len(s):len(s)]
+ for _, imp := range pkg.Imports {
+ dfs(imp)
+ }
+ }
+ dfs(pkg)
+
+ return ac
+}
+
+// Analyses that we always want to run, even if they're not being run
+// explicitly or as dependencies. These are necessary for the inner
+// workings of the runner.
+var injectedAnalyses = []*analysis.Analyzer{facts.Generated, config.Analyzer}
+
+func (r *Runner) runAnalysisUser(pass *analysis.Pass, ac *analysisAction) (interface{}, error) {
+ if !ac.pkg.fromSource {
+ panic(fmt.Sprintf("internal error: %s was not loaded from source", ac.pkg))
+ }
+
+ // User-provided package, analyse it
+ // First analyze it with dependencies
+ for _, req := range ac.analyzer.Requires {
+ acReq := r.makeAnalysisAction(req, ac.pkg)
+ ret, err := r.runAnalysis(acReq)
+ if err != nil {
+ // We couldn't run a dependency, no point in going on
+ return nil, dependencyError{req.Name, err}
+ }
+
+ pass.ResultOf[req] = ret
+ }
+
+ // Then with this analyzer
+ ret, err := ac.analyzer.Run(pass)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(ac.analyzer.FactTypes) > 0 {
+ // Merge new facts into the package and persist them.
+ var facts []Fact
+ for _, fact := range ac.newPackageFacts {
+ id := r.analyzerIDs.get(ac.analyzer)
+ ac.pkg.pkgFacts[id] = append(ac.pkg.pkgFacts[id], fact)
+ facts = append(facts, Fact{"", fact})
+ }
+ for obj, afacts := range ac.pkg.facts[ac.analyzerID] {
+ if obj.Pkg() != ac.pkg.Package.Types {
+ continue
+ }
+ path, err := objectpath.For(obj)
+ if err != nil {
+ continue
+ }
+ for _, fact := range afacts {
+ facts = append(facts, Fact{string(path), fact})
+ }
+ }
+
+ buf := &bytes.Buffer{}
+ if err := gob.NewEncoder(buf).Encode(facts); err != nil {
+ return nil, err
+ }
+ aID, err := passActionID(ac.pkg, ac.analyzer)
+ if err != nil {
+ return nil, err
+ }
+ aID = cache.Subkey(aID, "facts")
+ if err := r.cache.PutBytes(aID, buf.Bytes()); err != nil {
+ return nil, err
+ }
+ }
+
+ return ret, nil
+}
+
+func NewRunner(stats *Stats) (*Runner, error) {
+ cache, err := cache.Default()
+ if err != nil {
+ return nil, err
+ }
+
+ return &Runner{
+ cache: cache,
+ stats: stats,
+ }, nil
+}
+
+// Run loads packages corresponding to patterns and analyses them with
+// analyzers. It returns the loaded packages, which contain reported
+// diagnostics as well as extracted ignore directives.
+//
+// Note that diagnostics have not been filtered at this point yet, to
+// accommodate cumulative analyses that require additional steps to
+// produce diagnostics.
+func (r *Runner) Run(cfg *packages.Config, patterns []string, analyzers []*analysis.Analyzer, hasCumulative bool) ([]*Package, error) {
+ r.analyzerIDs = analyzerIDs{m: map[*analysis.Analyzer]int{}}
+ id := 0
+ seen := map[*analysis.Analyzer]struct{}{}
+ var dfs func(a *analysis.Analyzer)
+ dfs = func(a *analysis.Analyzer) {
+ if _, ok := seen[a]; ok {
+ return
+ }
+ seen[a] = struct{}{}
+ r.analyzerIDs.m[a] = id
+ id++
+ for _, f := range a.FactTypes {
+ gob.Register(f)
+ }
+ for _, req := range a.Requires {
+ dfs(req)
+ }
+ }
+ for _, a := range analyzers {
+ if v := a.Flags.Lookup("go"); v != nil {
+ v.Value.Set(fmt.Sprintf("1.%d", r.goVersion))
+ }
+ dfs(a)
+ }
+ for _, a := range injectedAnalyses {
+ dfs(a)
+ }
+
+ var dcfg packages.Config
+ if cfg != nil {
+ dcfg = *cfg
+ }
+
+ atomic.StoreUint32(&r.stats.State, StateGraph)
+ initialPkgs, err := r.ld.Graph(dcfg, patterns...)
+ if err != nil {
+ return nil, err
+ }
+
+ defer r.cache.Trim()
+
+ var allPkgs []*Package
+ m := map[*packages.Package]*Package{}
+ packages.Visit(initialPkgs, nil, func(l *packages.Package) {
+ m[l] = &Package{
+ Package: l,
+ results: make([]*result, len(r.analyzerIDs.m)),
+ facts: make([]map[types.Object][]analysis.Fact, len(r.analyzerIDs.m)),
+ pkgFacts: make([][]analysis.Fact, len(r.analyzerIDs.m)),
+ done: make(chan struct{}),
+ // every package needs itself
+ dependents: 1,
+ canClearTypes: !hasCumulative,
+ }
+ allPkgs = append(allPkgs, m[l])
+ for i := range m[l].facts {
+ m[l].facts[i] = map[types.Object][]analysis.Fact{}
+ }
+ for _, err := range l.Errors {
+ m[l].errs = append(m[l].errs, err)
+ }
+ for _, v := range l.Imports {
+ m[v].dependents++
+ m[l].Imports = append(m[l].Imports, m[v])
+ }
+
+ m[l].hash, err = packageHash(m[l])
+ if err != nil {
+ m[l].errs = append(m[l].errs, err)
+ }
+ })
+
+ pkgs := make([]*Package, len(initialPkgs))
+ for i, l := range initialPkgs {
+ pkgs[i] = m[l]
+ pkgs[i].initial = true
+ }
+
+ atomic.StoreUint32(&r.stats.InitialPackages, uint32(len(initialPkgs)))
+ atomic.StoreUint32(&r.stats.TotalPackages, uint32(len(allPkgs)))
+ atomic.StoreUint32(&r.stats.State, StateProcessing)
+
+ var wg sync.WaitGroup
+ wg.Add(len(allPkgs))
+ r.loadSem = make(chan struct{}, runtime.GOMAXPROCS(-1))
+ atomic.StoreUint32(&r.stats.TotalWorkers, uint32(cap(r.loadSem)))
+ for _, pkg := range allPkgs {
+ pkg := pkg
+ go func() {
+ r.processPkg(pkg, analyzers)
+
+ if pkg.initial {
+ atomic.AddUint32(&r.stats.ProcessedInitialPackages, 1)
+ }
+ atomic.AddUint32(&r.stats.Problems, uint32(len(pkg.problems)))
+ wg.Done()
+ }()
+ }
+ wg.Wait()
+
+ return pkgs, nil
+}
+
+var posRe = regexp.MustCompile(`^(.+?):(\d+)(?::(\d+)?)?`)
+
+func parsePos(pos string) (token.Position, int, error) {
+ if pos == "-" || pos == "" {
+ return token.Position{}, 0, nil
+ }
+ parts := posRe.FindStringSubmatch(pos)
+ if parts == nil {
+ return token.Position{}, 0, fmt.Errorf("malformed position %q", pos)
+ }
+ file := parts[1]
+ line, _ := strconv.Atoi(parts[2])
+ col, _ := strconv.Atoi(parts[3])
+ return token.Position{
+ Filename: file,
+ Line: line,
+ Column: col,
+ }, len(parts[0]), nil
+}
+
+// loadPkg loads a Go package. If the package is in the set of initial
+// packages, it will be loaded from source, otherwise it will be
+// loaded from export data. In the case that the package was loaded
+// from export data, cached facts will also be loaded.
+//
+// Currently, only cached facts for this package will be loaded, not
+// for any of its dependencies.
+func (r *Runner) loadPkg(pkg *Package, analyzers []*analysis.Analyzer) error {
+ if pkg.Types != nil {
+ panic(fmt.Sprintf("internal error: %s has already been loaded", pkg.Package))
+ }
+
+ // Load type information
+ if pkg.initial {
+ // Load package from source
+ pkg.fromSource = true
+ return r.ld.LoadFromSource(pkg.Package)
+ }
+
+ // Load package from export data
+ if err := r.ld.LoadFromExport(pkg.Package); err != nil {
+ // We asked Go to give us up to date export data, yet
+ // we can't load it. There must be something wrong.
+ //
+ // Attempt loading from source. This should fail (because
+ // otherwise there would be export data); we just want to
+ // get the compile errors. If loading from source succeeds
+ // we discard the result, anyway. Otherwise we'll fail
+ // when trying to reload from export data later.
+ //
+ // FIXME(dh): we no longer reload from export data, so
+ // theoretically we should be able to continue
+ pkg.fromSource = true
+ if err := r.ld.LoadFromSource(pkg.Package); err != nil {
+ return err
+ }
+ // Make sure this package can't be imported successfully
+ pkg.Package.Errors = append(pkg.Package.Errors, packages.Error{
+ Pos: "-",
+ Msg: fmt.Sprintf("could not load export data: %s", err),
+ Kind: packages.ParseError,
+ })
+ return fmt.Errorf("could not load export data: %s", err)
+ }
+
+ failed := false
+ seen := make([]bool, len(r.analyzerIDs.m))
+ var dfs func(*analysis.Analyzer)
+ dfs = func(a *analysis.Analyzer) {
+ if seen[r.analyzerIDs.get(a)] {
+ return
+ }
+ seen[r.analyzerIDs.get(a)] = true
+
+ if len(a.FactTypes) > 0 {
+ facts, ok := r.loadCachedFacts(a, pkg)
+ if !ok {
+ failed = true
+ return
+ }
+
+ for _, f := range facts {
+ if f.Path == "" {
+ // This is a package fact
+ pkg.pkgFacts[r.analyzerIDs.get(a)] = append(pkg.pkgFacts[r.analyzerIDs.get(a)], f.Fact)
+ continue
+ }
+ obj, err := objectpath.Object(pkg.Types, objectpath.Path(f.Path))
+ if err != nil {
+ // Be lenient about these errors. For example, when
+ // analysing io/ioutil from source, we may get a fact
+ // for methods on the devNull type, and objectpath
+ // will happily create a path for them. However, when
+ // we later load io/ioutil from export data, the path
+ // no longer resolves.
+ //
+ // If an exported type embeds the unexported type,
+ // then (part of) the unexported type will become part
+ // of the type information and our path will resolve
+ // again.
+ continue
+ }
+ pkg.facts[r.analyzerIDs.get(a)][obj] = append(pkg.facts[r.analyzerIDs.get(a)][obj], f.Fact)
+ }
+ }
+
+ for _, req := range a.Requires {
+ dfs(req)
+ }
+ }
+ for _, a := range analyzers {
+ dfs(a)
+ }
+
+ if failed {
+ pkg.fromSource = true
+ // XXX we added facts to the maps, we need to get rid of those
+ return r.ld.LoadFromSource(pkg.Package)
+ }
+
+ return nil
+}
+
+type analysisError struct {
+ analyzer *analysis.Analyzer
+ pkg *Package
+ err error
+}
+
+func (err analysisError) Error() string {
+ return fmt.Sprintf("error running analyzer %s on %s: %s", err.analyzer, err.pkg, err.err)
+}
+
+// processPkg processes a package. This involves loading the package,
+// either from export data or from source. For packages loaded from
+// source, the provided analyzers will be run on the package.
+func (r *Runner) processPkg(pkg *Package, analyzers []*analysis.Analyzer) {
+ defer func() {
+ // Clear information we no longer need. Make sure to do this
+ // when returning from processPkg so that we clear
+ // dependencies, not just initial packages.
+ pkg.TypesInfo = nil
+ pkg.Syntax = nil
+ pkg.results = nil
+
+ atomic.AddUint32(&r.stats.ProcessedPackages, 1)
+ pkg.decUse()
+ close(pkg.done)
+ }()
+
+ // Ensure all packages have the generated map and config. This is
+	// required by the internals of the runner. Analyses that themselves
+ // make use of either have an explicit dependency so that other
+ // runners work correctly, too.
+ analyzers = append(analyzers[0:len(analyzers):len(analyzers)], injectedAnalyses...)
+
+ if len(pkg.errs) != 0 {
+ return
+ }
+
+ for _, imp := range pkg.Imports {
+ <-imp.done
+ if len(imp.errs) > 0 {
+ if imp.initial {
+ // Don't print the error of the dependency since it's
+ // an initial package and we're already printing the
+ // error.
+ pkg.errs = append(pkg.errs, fmt.Errorf("could not analyze dependency %s of %s", imp, pkg))
+ } else {
+ var s string
+ for _, err := range imp.errs {
+ s += "\n\t" + err.Error()
+ }
+ pkg.errs = append(pkg.errs, fmt.Errorf("could not analyze dependency %s of %s: %s", imp, pkg, s))
+ }
+ return
+ }
+ }
+ if pkg.PkgPath == "unsafe" {
+ pkg.Types = types.Unsafe
+ return
+ }
+
+ r.loadSem <- struct{}{}
+ atomic.AddUint32(&r.stats.ActiveWorkers, 1)
+ defer func() {
+ <-r.loadSem
+ atomic.AddUint32(&r.stats.ActiveWorkers, ^uint32(0))
+ }()
+ if err := r.loadPkg(pkg, analyzers); err != nil {
+ pkg.errs = append(pkg.errs, err)
+ return
+ }
+
+	// A package's object facts are the union of those of its dependencies.
+ for _, imp := range pkg.Imports {
+ for ai, m := range imp.facts {
+ for obj, facts := range m {
+ pkg.facts[ai][obj] = facts[0:len(facts):len(facts)]
+ }
+ }
+ }
+
+ if !pkg.fromSource {
+ // Nothing left to do for the package.
+ return
+ }
+
+ // Run analyses on initial packages and those missing facts
+ var wg sync.WaitGroup
+ wg.Add(len(analyzers))
+ errs := make([]error, len(analyzers))
+ var acs []*analysisAction
+ for i, a := range analyzers {
+ i := i
+ a := a
+ ac := r.makeAnalysisAction(a, pkg)
+ acs = append(acs, ac)
+ go func() {
+ defer wg.Done()
+ // Only initial packages and packages with missing
+ // facts will have been loaded from source.
+ if pkg.initial || r.hasFacts(a) {
+ if _, err := r.runAnalysis(ac); err != nil {
+ errs[i] = analysisError{a, pkg, err}
+ return
+ }
+ }
+ }()
+ }
+ wg.Wait()
+
+ depErrors := map[dependencyError]int{}
+ for _, err := range errs {
+ if err == nil {
+ continue
+ }
+ switch err := err.(type) {
+ case analysisError:
+ switch err := err.err.(type) {
+ case dependencyError:
+ depErrors[err.nested()]++
+ default:
+ pkg.errs = append(pkg.errs, err)
+ }
+ default:
+ pkg.errs = append(pkg.errs, err)
+ }
+ }
+ for err, count := range depErrors {
+ pkg.errs = append(pkg.errs,
+ fmt.Errorf("could not run %s@%s, preventing %d analyzers from running: %s", err.dep, pkg, count, err.err))
+ }
+
+ // We can't process ignores at this point because `unused` needs
+ // to see more than one package to make its decision.
+ ignores, problems := parseDirectives(pkg.Package)
+ pkg.ignores = append(pkg.ignores, ignores...)
+ pkg.problems = append(pkg.problems, problems...)
+ for _, ac := range acs {
+ pkg.problems = append(pkg.problems, ac.problems...)
+ }
+
+ if pkg.initial {
+ // Only initial packages have these analyzers run, and only
+ // initial packages need these.
+ if pkg.results[r.analyzerIDs.get(config.Analyzer)].v != nil {
+ pkg.cfg = pkg.results[r.analyzerIDs.get(config.Analyzer)].v.(*config.Config)
+ }
+ pkg.gen = pkg.results[r.analyzerIDs.get(facts.Generated)].v.(map[string]facts.Generator)
+ }
+
+ // In a previous version of the code, we would throw away all type
+ // information and reload it from export data. That was
+ // nonsensical. The *types.Package doesn't keep any information
+ // live that export data wouldn't also. We only need to discard
+ // the AST and the TypesInfo maps; that happens after we return
+ // from processPkg.
+}
+
+// hasFacts reports whether an analysis exports any facts. An analysis
+// that has a transitive dependency that exports facts is considered
+// to be exporting facts.
+func (r *Runner) hasFacts(a *analysis.Analyzer) bool {
+ ret := false
+ seen := make([]bool, len(r.analyzerIDs.m))
+ var dfs func(*analysis.Analyzer)
+ dfs = func(a *analysis.Analyzer) {
+ if seen[r.analyzerIDs.get(a)] {
+ return
+ }
+ seen[r.analyzerIDs.get(a)] = true
+ if len(a.FactTypes) > 0 {
+ ret = true
+ }
+ for _, req := range a.Requires {
+ if ret {
+ break
+ }
+ dfs(req)
+ }
+ }
+ dfs(a)
+ return ret
+}
+
+func parseDirective(s string) (cmd string, args []string) {
+ if !strings.HasPrefix(s, "//lint:") {
+ return "", nil
+ }
+ s = strings.TrimPrefix(s, "//lint:")
+ fields := strings.Split(s, " ")
+ return fields[0], fields[1:]
+}
+
+// parseDirectives extracts all linter directives from the source
+// files of the package. Malformed directives are returned as problems.
+func parseDirectives(pkg *packages.Package) ([]Ignore, []Problem) {
+ var ignores []Ignore
+ var problems []Problem
+
+ for _, f := range pkg.Syntax {
+ found := false
+ commentLoop:
+ for _, cg := range f.Comments {
+ for _, c := range cg.List {
+ if strings.Contains(c.Text, "//lint:") {
+ found = true
+ break commentLoop
+ }
+ }
+ }
+ if !found {
+ continue
+ }
+ cm := ast.NewCommentMap(pkg.Fset, f, f.Comments)
+ for node, cgs := range cm {
+ for _, cg := range cgs {
+ for _, c := range cg.List {
+ if !strings.HasPrefix(c.Text, "//lint:") {
+ continue
+ }
+ cmd, args := parseDirective(c.Text)
+ switch cmd {
+ case "ignore", "file-ignore":
+ if len(args) < 2 {
+ p := Problem{
+ Pos: DisplayPosition(pkg.Fset, c.Pos()),
+ Message: "malformed linter directive; missing the required reason field?",
+ Severity: Error,
+ Check: "compile",
+ }
+ problems = append(problems, p)
+ continue
+ }
+ default:
+ // unknown directive, ignore
+ continue
+ }
+ checks := strings.Split(args[0], ",")
+ pos := DisplayPosition(pkg.Fset, node.Pos())
+ var ig Ignore
+ switch cmd {
+ case "ignore":
+ ig = &LineIgnore{
+ File: pos.Filename,
+ Line: pos.Line,
+ Checks: checks,
+ Pos: c.Pos(),
+ }
+ case "file-ignore":
+ ig = &FileIgnore{
+ File: pos.Filename,
+ Checks: checks,
+ }
+ }
+ ignores = append(ignores, ig)
+ }
+ }
+ }
+ }
+
+ return ignores, problems
+}
+
+// packageHash computes a package's hash. The hash is based on all Go
+// files that make up the package, as well as the hashes of imported
+// packages.
+func packageHash(pkg *Package) (string, error) {
+ key := cache.NewHash("package hash")
+ fmt.Fprintf(key, "pkgpath %s\n", pkg.PkgPath)
+ for _, f := range pkg.CompiledGoFiles {
+ h, err := cache.FileHash(f)
+ if err != nil {
+ return "", err
+ }
+ fmt.Fprintf(key, "file %s %x\n", f, h)
+ }
+
+ imps := make([]*Package, len(pkg.Imports))
+ copy(imps, pkg.Imports)
+ sort.Slice(imps, func(i, j int) bool {
+ return imps[i].PkgPath < imps[j].PkgPath
+ })
+ for _, dep := range imps {
+ if dep.PkgPath == "unsafe" {
+ continue
+ }
+
+ fmt.Fprintf(key, "import %s %s\n", dep.PkgPath, dep.hash)
+ }
+ h := key.Sum()
+ return hex.EncodeToString(h[:]), nil
+}
+
+// passActionID computes an ActionID for an analysis pass.
+func passActionID(pkg *Package, analyzer *analysis.Analyzer) (cache.ActionID, error) {
+ key := cache.NewHash("action ID")
+ fmt.Fprintf(key, "pkgpath %s\n", pkg.PkgPath)
+ fmt.Fprintf(key, "pkghash %s\n", pkg.hash)
+ fmt.Fprintf(key, "analyzer %s\n", analyzer.Name)
+
+ return key.Sum(), nil
+}
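
The parallelism model described in the comment at the top of this file is: one goroutine per package, dependency ordering via `done` channels, and the expensive load phase gated by a CPU-sized semaphore. The following standalone sketch mirrors that pattern without the runner's types (the semaphore plays the role of `Runner.loadSem`):

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type node struct {
	name string
	deps []*node
	done chan struct{}
}

func main() {
	// Tiny linear dependency chain: c -> b -> a.
	a := &node{name: "a", done: make(chan struct{})}
	b := &node{name: "b", deps: []*node{a}, done: make(chan struct{})}
	c := &node{name: "c", deps: []*node{b}, done: make(chan struct{})}
	all := []*node{a, b, c}

	// One goroutine per node, but the expensive phase is gated by a
	// semaphore sized to the CPU count.
	sem := make(chan struct{}, runtime.GOMAXPROCS(-1))
	var wg sync.WaitGroup
	wg.Add(len(all))
	for _, n := range all {
		n := n
		go func() {
			defer wg.Done()
			for _, d := range n.deps {
				<-d.done // block until all dependencies have finished
			}
			sem <- struct{}{} // acquire a worker slot
			fmt.Println("processing", n.name)
			<-sem // release the slot
			close(n.done) // signal dependents
		}()
	}
	wg.Wait()
}
```
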
diff --git a/vendor/honnef.co/go/tools/lint/stats.go b/vendor/honnef.co/go/tools/lint/stats.go
new file mode 100644
index 0000000000000..2f65085593707
--- /dev/null
+++ b/vendor/honnef.co/go/tools/lint/stats.go
@@ -0,0 +1,20 @@
+package lint
+
+const (
+ StateInitializing = 0
+ StateGraph = 1
+ StateProcessing = 2
+ StateCumulative = 3
+)
+
+type Stats struct {
+ State uint32
+
+ InitialPackages uint32
+ TotalPackages uint32
+ ProcessedPackages uint32
+ ProcessedInitialPackages uint32
+ Problems uint32
+ ActiveWorkers uint32
+ TotalWorkers uint32
+}
diff --git a/vendor/honnef.co/go/tools/loader/loader.go b/vendor/honnef.co/go/tools/loader/loader.go
new file mode 100644
index 0000000000000..9c6885d485f02
--- /dev/null
+++ b/vendor/honnef.co/go/tools/loader/loader.go
@@ -0,0 +1,197 @@
+package loader
+
+import (
+ "fmt"
+ "go/ast"
+ "go/parser"
+ "go/scanner"
+ "go/token"
+ "go/types"
+ "log"
+ "os"
+ "sync"
+
+ "golang.org/x/tools/go/gcexportdata"
+ "golang.org/x/tools/go/packages"
+)
+
+type Loader struct {
+ exportMu sync.RWMutex
+}
+
+// Graph resolves patterns and returns packages with all the
+// information required to later load type information, and optionally
+// syntax trees.
+//
+// The provided config can set any setting with the exception of Mode.
+func (ld *Loader) Graph(cfg packages.Config, patterns ...string) ([]*packages.Package, error) {
+ cfg.Mode = packages.NeedName | packages.NeedImports | packages.NeedDeps | packages.NeedExportsFile | packages.NeedFiles | packages.NeedCompiledGoFiles | packages.NeedTypesSizes
+ pkgs, err := packages.Load(&cfg, patterns...)
+ if err != nil {
+ return nil, err
+ }
+ fset := token.NewFileSet()
+ packages.Visit(pkgs, nil, func(pkg *packages.Package) {
+ pkg.Fset = fset
+ })
+ return pkgs, nil
+}
+
+// LoadFromExport loads a package from export data. All of its
+// dependencies must have been loaded already.
+func (ld *Loader) LoadFromExport(pkg *packages.Package) error {
+ ld.exportMu.Lock()
+ defer ld.exportMu.Unlock()
+
+ pkg.IllTyped = true
+ for path, pkg := range pkg.Imports {
+ if pkg.Types == nil {
+ return fmt.Errorf("dependency %q hasn't been loaded yet", path)
+ }
+ }
+ if pkg.ExportFile == "" {
+ return fmt.Errorf("no export data for %q", pkg.ID)
+ }
+ f, err := os.Open(pkg.ExportFile)
+ if err != nil {
+ return err
+ }
+ defer f.Close()
+
+ r, err := gcexportdata.NewReader(f)
+ if err != nil {
+ return err
+ }
+
+ view := make(map[string]*types.Package) // view seen by gcexportdata
+ seen := make(map[*packages.Package]bool) // all visited packages
+ var visit func(pkgs map[string]*packages.Package)
+ visit = func(pkgs map[string]*packages.Package) {
+ for _, pkg := range pkgs {
+ if !seen[pkg] {
+ seen[pkg] = true
+ view[pkg.PkgPath] = pkg.Types
+ visit(pkg.Imports)
+ }
+ }
+ }
+ visit(pkg.Imports)
+ tpkg, err := gcexportdata.Read(r, pkg.Fset, view, pkg.PkgPath)
+ if err != nil {
+ return err
+ }
+ pkg.Types = tpkg
+ pkg.IllTyped = false
+ return nil
+}
+
+// LoadFromSource loads a package from source. All of its dependencies
+// must have been loaded already.
+func (ld *Loader) LoadFromSource(pkg *packages.Package) error {
+ ld.exportMu.RLock()
+ defer ld.exportMu.RUnlock()
+
+ pkg.IllTyped = true
+ pkg.Types = types.NewPackage(pkg.PkgPath, pkg.Name)
+
+ // OPT(dh): many packages have few files, much fewer than there
+ // are CPU cores. Additionally, parsing each individual file is
+ // very fast. A naive parallel implementation of this loop won't
+ // be faster, and tends to be slower due to extra scheduling,
+ // bookkeeping and potentially false sharing of cache lines.
+ pkg.Syntax = make([]*ast.File, len(pkg.CompiledGoFiles))
+ for i, file := range pkg.CompiledGoFiles {
+ f, err := parser.ParseFile(pkg.Fset, file, nil, parser.ParseComments)
+ if err != nil {
+ pkg.Errors = append(pkg.Errors, convertError(err)...)
+ return err
+ }
+ pkg.Syntax[i] = f
+ }
+ pkg.TypesInfo = &types.Info{
+ Types: make(map[ast.Expr]types.TypeAndValue),
+ Defs: make(map[*ast.Ident]types.Object),
+ Uses: make(map[*ast.Ident]types.Object),
+ Implicits: make(map[ast.Node]types.Object),
+ Scopes: make(map[ast.Node]*types.Scope),
+ Selections: make(map[*ast.SelectorExpr]*types.Selection),
+ }
+
+ importer := func(path string) (*types.Package, error) {
+ if path == "unsafe" {
+ return types.Unsafe, nil
+ }
+ imp := pkg.Imports[path]
+ if imp == nil {
+ return nil, nil
+ }
+ if len(imp.Errors) > 0 {
+ return nil, imp.Errors[0]
+ }
+ return imp.Types, nil
+ }
+ tc := &types.Config{
+ Importer: importerFunc(importer),
+ Error: func(err error) {
+ pkg.Errors = append(pkg.Errors, convertError(err)...)
+ },
+ }
+ err := types.NewChecker(tc, pkg.Fset, pkg.Types, pkg.TypesInfo).Files(pkg.Syntax)
+ if err != nil {
+ return err
+ }
+ pkg.IllTyped = false
+ return nil
+}
+
+func convertError(err error) []packages.Error {
+ var errs []packages.Error
+ // taken from go/packages
+ switch err := err.(type) {
+ case packages.Error:
+ // from driver
+ errs = append(errs, err)
+
+ case *os.PathError:
+ // from parser
+ errs = append(errs, packages.Error{
+ Pos: err.Path + ":1",
+ Msg: err.Err.Error(),
+ Kind: packages.ParseError,
+ })
+
+ case scanner.ErrorList:
+ // from parser
+ for _, err := range err {
+ errs = append(errs, packages.Error{
+ Pos: err.Pos.String(),
+ Msg: err.Msg,
+ Kind: packages.ParseError,
+ })
+ }
+
+ case types.Error:
+ // from type checker
+ errs = append(errs, packages.Error{
+ Pos: err.Fset.Position(err.Pos).String(),
+ Msg: err.Msg,
+ Kind: packages.TypeError,
+ })
+
+ default:
+ // unexpected impoverished error from parser?
+ errs = append(errs, packages.Error{
+ Pos: "-",
+ Msg: err.Error(),
+ Kind: packages.UnknownError,
+ })
+
+ // If you see this error message, please file a bug.
+ log.Printf("internal error: error %q (%T) without position", err, err)
+ }
+ return errs
+}
+
+type importerFunc func(path string) (*types.Package, error)
+
+func (f importerFunc) Import(path string) (*types.Package, error) { return f(path) }
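
`Graph` resolves only names, files and the import graph; type information arrives later through `LoadFromExport` or `LoadFromSource`, one package at a time. A small sketch of the graph phase, using only the signatures shown above:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/go/packages"
	"honnef.co/go/tools/loader"
)

func main() {
	var ld loader.Loader

	// Resolve the import graph for a pattern; no syntax trees or type
	// information are loaded yet, so this stays cheap.
	pkgs, err := ld.Graph(packages.Config{}, "fmt")
	if err != nil {
		log.Fatal(err)
	}
	packages.Visit(pkgs, nil, func(pkg *packages.Package) {
		fmt.Println(pkg.PkgPath, "->", pkg.ExportFile)
	})
}
```
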
diff --git a/vendor/honnef.co/go/tools/printf/fuzz.go b/vendor/honnef.co/go/tools/printf/fuzz.go
new file mode 100644
index 0000000000000..8ebf357fb42c9
--- /dev/null
+++ b/vendor/honnef.co/go/tools/printf/fuzz.go
@@ -0,0 +1,11 @@
+// +build gofuzz
+
+package printf
+
+func Fuzz(data []byte) int {
+ _, err := Parse(string(data))
+ if err == nil {
+ return 1
+ }
+ return 0
+}
diff --git a/vendor/honnef.co/go/tools/printf/printf.go b/vendor/honnef.co/go/tools/printf/printf.go
new file mode 100644
index 0000000000000..754db9b16d8fb
--- /dev/null
+++ b/vendor/honnef.co/go/tools/printf/printf.go
@@ -0,0 +1,197 @@
+// Package printf implements a parser for fmt.Printf-style format
+// strings.
+//
+// It parses verbs according to the following syntax:
+// Numeric -> '0'-'9'
+// Letter -> 'a'-'z' | 'A'-'Z'
+// Index -> '[' Numeric+ ']'
+// Star -> '*'
+// Star -> Index '*'
+//
+// Precision -> Numeric+ | Star
+// Width -> Numeric+ | Star
+//
+// WidthAndPrecision -> Width '.' Precision
+// WidthAndPrecision -> Width '.'
+// WidthAndPrecision -> Width
+// WidthAndPrecision -> '.' Precision
+// WidthAndPrecision -> '.'
+//
+// Flag -> '+' | '-' | '#' | ' ' | '0'
+// Verb -> Letter | '%'
+//
+// Input -> '%' [ Flag+ ] [ WidthAndPrecision ] [ Index ] Verb
+package printf
+
+import (
+ "errors"
+ "regexp"
+ "strconv"
+ "strings"
+)
+
+// ErrInvalid is returned for invalid format strings or verbs.
+var ErrInvalid = errors.New("invalid format string")
+
+type Verb struct {
+ Letter rune
+ Flags string
+
+ Width Argument
+ Precision Argument
+ // Which value in the argument list the verb uses.
+ // -1 denotes the next argument,
+ // values > 0 denote explicit arguments.
+ // The value 0 denotes that no argument is consumed. This is the case for %%.
+ Value int
+
+ Raw string
+}
+
+// Argument is an implicit or explicit width or precision.
+type Argument interface {
+ isArgument()
+}
+
+// The Default value, when no width or precision is provided.
+type Default struct{}
+
+// Zero is the implicit zero value.
+// This value may only appear for precisions in format strings like %6.f
+type Zero struct{}
+
+// Star is a * value, which may either refer to the next argument (Index == -1) or an explicit argument.
+type Star struct{ Index int }
+
+// A Literal value, such as 6 in %6d.
+type Literal int
+
+func (Default) isArgument() {}
+func (Zero) isArgument() {}
+func (Star) isArgument() {}
+func (Literal) isArgument() {}
+
+// Parse parses f and returns a list of actions.
+// An action may either be a literal string, or a Verb.
+func Parse(f string) ([]interface{}, error) {
+ var out []interface{}
+ for len(f) > 0 {
+ if f[0] == '%' {
+ v, n, err := ParseVerb(f)
+ if err != nil {
+ return nil, err
+ }
+ f = f[n:]
+ out = append(out, v)
+ } else {
+ n := strings.IndexByte(f, '%')
+ if n > -1 {
+ out = append(out, f[:n])
+ f = f[n:]
+ } else {
+ out = append(out, f)
+ f = ""
+ }
+ }
+ }
+
+ return out, nil
+}
+
+func atoi(s string) int {
+ n, _ := strconv.Atoi(s)
+ return n
+}
+
+// ParseVerb parses the verb at the beginning of f.
+// It returns the verb, how much of the input was consumed, and an error, if any.
+func ParseVerb(f string) (Verb, int, error) {
+ if len(f) < 2 {
+ return Verb{}, 0, ErrInvalid
+ }
+ const (
+ flags = 1
+
+ width = 2
+ widthStar = 3
+ widthIndex = 5
+
+ dot = 6
+ prec = 7
+ precStar = 8
+ precIndex = 10
+
+ verbIndex = 11
+ verb = 12
+ )
+
+ m := re.FindStringSubmatch(f)
+ if m == nil {
+ return Verb{}, 0, ErrInvalid
+ }
+
+ v := Verb{
+ Letter: []rune(m[verb])[0],
+ Flags: m[flags],
+ Raw: m[0],
+ }
+
+ if m[width] != "" {
+ // Literal width
+ v.Width = Literal(atoi(m[width]))
+ } else if m[widthStar] != "" {
+ // Star width
+ if m[widthIndex] != "" {
+ v.Width = Star{atoi(m[widthIndex])}
+ } else {
+ v.Width = Star{-1}
+ }
+ } else {
+ // Default width
+ v.Width = Default{}
+ }
+
+ if m[dot] == "" {
+ // default precision
+ v.Precision = Default{}
+ } else {
+ if m[prec] != "" {
+ // Literal precision
+ v.Precision = Literal(atoi(m[prec]))
+ } else if m[precStar] != "" {
+ // Star precision
+ if m[precIndex] != "" {
+ v.Precision = Star{atoi(m[precIndex])}
+ } else {
+ v.Precision = Star{-1}
+ }
+ } else {
+ // Zero precision
+ v.Precision = Zero{}
+ }
+ }
+
+ if m[verb] == "%" {
+ v.Value = 0
+ } else if m[verbIndex] != "" {
+ v.Value = atoi(m[verbIndex])
+ } else {
+ v.Value = -1
+ }
+
+ return v, len(m[0]), nil
+}
+
+const (
+ flags = `([+#0 -]*)`
+ verb = `([a-zA-Z%])`
+ index = `(?:\[([0-9]+)\])`
+ star = `((` + index + `)?\*)`
+ width1 = `([0-9]+)`
+ width2 = star
+ width = `(?:` + width1 + `|` + width2 + `)`
+ precision = width
+ widthAndPrecision = `(?:(?:` + width + `)?(?:(\.)(?:` + precision + `)?)?)`
+)
+
+var re = regexp.MustCompile(`^%` + flags + widthAndPrecision + `?` + index + `?` + verb)
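
The grammar in the package comment maps directly onto the `Verb` struct: an explicit argument index ends up in `Value`, while widths and precisions come back as `Default`, `Zero`, `Star` or `Literal`. A quick sketch of parsing a format string and inspecting the result:

```go
package main

import (
	"fmt"
	"log"

	"honnef.co/go/tools/printf"
)

func main() {
	actions, err := printf.Parse("%[2]*.6f and %s")
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range actions {
		switch v := a.(type) {
		case printf.Verb:
			// For %[2]*.6f: Letter 'f', Width Star{2}, Precision Literal(6).
			fmt.Printf("verb %c width=%#v prec=%#v value=%d\n",
				v.Letter, v.Width, v.Precision, v.Value)
		case string:
			fmt.Printf("literal %q\n", v)
		}
	}
}
```
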
diff --git a/vendor/honnef.co/go/tools/simple/CONTRIBUTING.md b/vendor/honnef.co/go/tools/simple/CONTRIBUTING.md
new file mode 100644
index 0000000000000..c54c6c50ac95d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/simple/CONTRIBUTING.md
@@ -0,0 +1,15 @@
+# Contributing to gosimple
+
+## Before filing an issue:
+
+### Are you having trouble building gosimple?
+
+Check you have the latest version of its dependencies. Run
+```
+go get -u honnef.co/go/tools/simple
+```
+If you still have problems, consider searching for existing issues before filing a new issue.
+
+## Before sending a pull request:
+
+Have you understood the purpose of gosimple? Make sure to carefully read the `README`.
diff --git a/vendor/honnef.co/go/tools/simple/analysis.go b/vendor/honnef.co/go/tools/simple/analysis.go
new file mode 100644
index 0000000000000..abb1648fab0bc
--- /dev/null
+++ b/vendor/honnef.co/go/tools/simple/analysis.go
@@ -0,0 +1,223 @@
+package simple
+
+import (
+ "flag"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/analysis/passes/inspect"
+ "honnef.co/go/tools/facts"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/lint/lintutil"
+)
+
+func newFlagSet() flag.FlagSet {
+ fs := flag.NewFlagSet("", flag.PanicOnError)
+ fs.Var(lintutil.NewVersionFlag(), "go", "Target Go version")
+ return *fs
+}
+
+var Analyzers = map[string]*analysis.Analyzer{
+ "S1000": {
+ Name: "S1000",
+ Run: LintSingleCaseSelect,
+ Doc: Docs["S1000"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1001": {
+ Name: "S1001",
+ Run: LintLoopCopy,
+ Doc: Docs["S1001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1002": {
+ Name: "S1002",
+ Run: LintIfBoolCmp,
+ Doc: Docs["S1002"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1003": {
+ Name: "S1003",
+ Run: LintStringsContains,
+ Doc: Docs["S1003"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1004": {
+ Name: "S1004",
+ Run: LintBytesCompare,
+ Doc: Docs["S1004"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1005": {
+ Name: "S1005",
+ Run: LintUnnecessaryBlank,
+ Doc: Docs["S1005"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1006": {
+ Name: "S1006",
+ Run: LintForTrue,
+ Doc: Docs["S1006"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1007": {
+ Name: "S1007",
+ Run: LintRegexpRaw,
+ Doc: Docs["S1007"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1008": {
+ Name: "S1008",
+ Run: LintIfReturn,
+ Doc: Docs["S1008"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1009": {
+ Name: "S1009",
+ Run: LintRedundantNilCheckWithLen,
+ Doc: Docs["S1009"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1010": {
+ Name: "S1010",
+ Run: LintSlicing,
+ Doc: Docs["S1010"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1011": {
+ Name: "S1011",
+ Run: LintLoopAppend,
+ Doc: Docs["S1011"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1012": {
+ Name: "S1012",
+ Run: LintTimeSince,
+ Doc: Docs["S1012"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1016": {
+ Name: "S1016",
+ Run: LintSimplerStructConversion,
+ Doc: Docs["S1016"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1017": {
+ Name: "S1017",
+ Run: LintTrim,
+ Doc: Docs["S1017"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1018": {
+ Name: "S1018",
+ Run: LintLoopSlide,
+ Doc: Docs["S1018"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1019": {
+ Name: "S1019",
+ Run: LintMakeLenCap,
+ Doc: Docs["S1019"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1020": {
+ Name: "S1020",
+ Run: LintAssertNotNil,
+ Doc: Docs["S1020"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1021": {
+ Name: "S1021",
+ Run: LintDeclareAssign,
+ Doc: Docs["S1021"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1023": {
+ Name: "S1023",
+ Run: LintRedundantBreak,
+ Doc: Docs["S1023"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1024": {
+ Name: "S1024",
+ Run: LintTimeUntil,
+ Doc: Docs["S1024"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1025": {
+ Name: "S1025",
+ Run: LintRedundantSprintf,
+ Doc: Docs["S1025"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1028": {
+ Name: "S1028",
+ Run: LintErrorsNewSprintf,
+ Doc: Docs["S1028"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1029": {
+ Name: "S1029",
+ Run: LintRangeStringRunes,
+ Doc: Docs["S1029"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "S1030": {
+ Name: "S1030",
+ Run: LintBytesBufferConversions,
+ Doc: Docs["S1030"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1031": {
+ Name: "S1031",
+ Run: LintNilCheckAroundRange,
+ Doc: Docs["S1031"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1032": {
+ Name: "S1032",
+ Run: LintSortHelpers,
+ Doc: Docs["S1032"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1033": {
+ Name: "S1033",
+ Run: LintGuardedDelete,
+ Doc: Docs["S1033"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "S1034": {
+ Name: "S1034",
+ Run: LintSimplifyTypeSwitch,
+ Doc: Docs["S1034"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+}
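+
+// A minimal sketch of wiring these analyzers into a driver, assuming the
+// standard golang.org/x/tools multichecker (shown for illustration only):
+//
+//	package main
+//
+//	import (
+//		"golang.org/x/tools/go/analysis"
+//		"golang.org/x/tools/go/analysis/multichecker"
+//		"honnef.co/go/tools/simple"
+//	)
+//
+//	func main() {
+//		var as []*analysis.Analyzer
+//		for _, a := range simple.Analyzers {
+//			as = append(as, a)
+//		}
+//		multichecker.Main(as...)
+//	}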
diff --git a/vendor/honnef.co/go/tools/simple/doc.go b/vendor/honnef.co/go/tools/simple/doc.go
new file mode 100644
index 0000000000000..eb0072de5dc5d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/simple/doc.go
@@ -0,0 +1,425 @@
+package simple
+
+import "honnef.co/go/tools/lint"
+
+var Docs = map[string]*lint.Documentation{
+ "S1000": &lint.Documentation{
+ Title: `Use plain channel send or receive instead of single-case select`,
+ Text: `Select statements with a single case can be replaced with a simple
+send or receive.
+
+Before:
+
+ select {
+ case x := <-ch:
+ fmt.Println(x)
+ }
+
+After:
+
+ x := <-ch
+ fmt.Println(x)`,
+ Since: "2017.1",
+ },
+
+ "S1001": &lint.Documentation{
+ Title: `Replace for loop with call to copy`,
+ Text: `Use copy() for copying elements from one slice to another.
+
+Before:
+
+ for i, x := range src {
+ dst[i] = x
+ }
+
+After:
+
+ copy(dst, src)`,
+ Since: "2017.1",
+ },
+
+ "S1002": &lint.Documentation{
+ Title: `Omit comparison with boolean constant`,
+ Text: `Before:
+
+ if x == true {}
+
+After:
+
+ if x {}`,
+ Since: "2017.1",
+ },
+
+ "S1003": &lint.Documentation{
+ Title: `Replace call to strings.Index with strings.Contains`,
+ Text: `Before:
+
+ if strings.Index(x, y) != -1 {}
+
+After:
+
+ if strings.Contains(x, y) {}`,
+ Since: "2017.1",
+ },
+
+ "S1004": &lint.Documentation{
+ Title: `Replace call to bytes.Compare with bytes.Equal`,
+ Text: `Before:
+
+ if bytes.Compare(x, y) == 0 {}
+
+After:
+
+ if bytes.Equal(x, y) {}`,
+ Since: "2017.1",
+ },
+
+ "S1005": &lint.Documentation{
+ Title: `Drop unnecessary use of the blank identifier`,
+ Text: `In many cases, assigning to the blank identifier is unnecessary.
+
+Before:
+
+ for _ = range s {}
+ x, _ = someMap[key]
+ _ = <-ch
+
+After:
+
+ for range s{}
+ x = someMap[key]
+ <-ch`,
+ Since: "2017.1",
+ },
+
+ "S1006": &lint.Documentation{
+ Title: `Use for { ... } for infinite loops`,
+ Text: `For infinite loops, using for { ... } is the most idiomatic choice.`,
+ Since: "2017.1",
+ },
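+ // e.g. `for true { ... }` simplifies to `for { ... }`.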
+
+ "S1007": &lint.Documentation{
+ Title: `Simplify regular expression by using raw string literal`,
+ Text: `Raw string literals use ` + "`" + ` instead of " and do not support
+any escape sequences. This means that the backslash (\) can be used
+freely, without the need of escaping.
+
+Since regular expressions have their own escape sequences, raw strings
+can improve their readability.
+
+Before:
+
+ regexp.Compile("\\A(\\w+) profile: total \\d+\\n\\z")
+
+After:
+
+ regexp.Compile(` + "`" + `\A(\w+) profile: total \d+\n\z` + "`" + `)`,
+ Since: "2017.1",
+ },
+
+ "S1008": &lint.Documentation{
+ Title: `Simplify returning boolean expression`,
+ Text: `Before:
+
+ if <expr> {
+ return true
+ }
+ return false
+
+After:
+
+ return <expr>`,
+ Since: "2017.1",
+ },
+
+ "S1009": &lint.Documentation{
+ Title: `Omit redundant nil check on slices`,
+ Text: `The len function is defined for all slices, even nil ones, which have
+a length of zero. It is not necessary to check if a slice is not nil
+before checking that its length is not zero.
+
+Before:
+
+ if x != nil && len(x) != 0 {}
+
+After:
+
+ if len(x) != 0 {}`,
+ Since: "2017.1",
+ },
+
+ "S1010": &lint.Documentation{
+ Title: `Omit default slice index`,
+ Text: `When slicing, the second index defaults to the length of the value,
+making s[n:len(s)] and s[n:] equivalent.`,
+ Since: "2017.1",
+ },
+
+ "S1011": &lint.Documentation{
+ Title: `Use a single append to concatenate two slices`,
+ Text: `Before:
+
+ for _, e := range y {
+ x = append(x, e)
+ }
+
+After:
+
+ x = append(x, y...)`,
+ Since: "2017.1",
+ },
+
+ "S1012": &lint.Documentation{
+ Title: `Replace time.Now().Sub(x) with time.Since(x)`,
+ Text: `The time.Since helper has the same effect as using time.Now().Sub(x)
+but is easier to read.
+
+Before:
+
+ time.Now().Sub(x)
+
+After:
+
+ time.Since(x)`,
+ Since: "2017.1",
+ },
+
+ "S1016": &lint.Documentation{
+ Title: `Use a type conversion instead of manually copying struct fields`,
+ Text: `Two struct types with identical fields can be converted between each
+other. In older versions of Go, the fields had to have identical
+struct tags. Since Go 1.8, however, struct tags are ignored during
+conversions. It is thus not necessary to manually copy every field
+individually.
+
+Before:
+
+ var x T1
+ y := T2{
+ Field1: x.Field1,
+ Field2: x.Field2,
+ }
+
+After:
+
+ var x T1
+ y := T2(x)`,
+ Since: "2017.1",
+ },
+
+ "S1017": &lint.Documentation{
+ Title: `Replace manual trimming with strings.TrimPrefix`,
+ Text: `Instead of using strings.HasPrefix and manual slicing, use the
+strings.TrimPrefix function. If the string doesn't start with the
+prefix, the original string will be returned. Using strings.TrimPrefix
+reduces complexity, and avoids common bugs, such as off-by-one
+mistakes.
+
+Before:
+
+ if strings.HasPrefix(str, prefix) {
+ str = str[len(prefix):]
+ }
+
+After:
+
+ str = strings.TrimPrefix(str, prefix)`,
+ Since: "2017.1",
+ },
+
+ "S1018": &lint.Documentation{
+ Title: `Use copy for sliding elements`,
+ Text: `copy() permits using the same source and destination slice, even with
+overlapping ranges. This makes it ideal for sliding elements in a
+slice.
+
+Before:
+
+ for i := 0; i < n; i++ {
+ bs[i] = bs[offset+i]
+ }
+
+After:
+
+ copy(bs[:n], bs[offset:])`,
+ Since: "2017.1",
+ },
+
+ "S1019": &lint.Documentation{
+ Title: `Simplify make call by omitting redundant arguments`,
+ Text: `The make function has default values for the length and capacity
+arguments. For channels and maps, the length defaults to zero.
+Additionally, for slices the capacity defaults to the length.`,
+ Since: "2017.1",
+ },
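+ // e.g. make([]int, 1, 1) simplifies to make([]int, 1), and make(chan int, 0) to make(chan int).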
+
+ "S1020": &lint.Documentation{
+ Title: `Omit redundant nil check in type assertion`,
+ Text: `Before:
+
+ if _, ok := i.(T); ok && i != nil {}
+
+After:
+
+ if _, ok := i.(T); ok {}`,
+ Since: "2017.1",
+ },
+
+ "S1021": &lint.Documentation{
+ Title: `Merge variable declaration and assignment`,
+ Text: `Before:
+
+ var x uint
+ x = 1
+
+After:
+
+ var x uint = 1`,
+ Since: "2017.1",
+ },
+
+ "S1023": &lint.Documentation{
+ Title: `Omit redundant control flow`,
+ Text: `Functions that have no return value do not need a return statement as
+the final statement of the function.
+
+Switches in Go do not have automatic fallthrough, unlike languages
+like C. It is not necessary to have a break statement as the final
+statement in a case block.`,
+ Since: "2017.1",
+ },
+
+ "S1024": &lint.Documentation{
+ Title: `Replace x.Sub(time.Now()) with time.Until(x)`,
+ Text: `The time.Until helper has the same effect as using x.Sub(time.Now())
+but is easier to read.
+
+Before:
+
+ x.Sub(time.Now())
+
+After:
+
+ time.Until(x)`,
+ Since: "2017.1",
+ },
+
+ "S1025": &lint.Documentation{
+ Title: `Don't use fmt.Sprintf("%s", x) unnecessarily`,
+ Text: `In many instances, there are easier and more efficient ways of getting
+a value's string representation. Whenever a value's underlying type is
+a string already, or the type has a String method, they should be used
+directly.
+
+Given the following shared definitions
+
+ type T1 string
+ type T2 int
+
+ func (T2) String() string { return "Hello, world" }
+
+ var x string
+ var y T1
+ var z T2
+
+we can simplify the following
+
+ fmt.Sprintf("%s", x)
+ fmt.Sprintf("%s", y)
+ fmt.Sprintf("%s", z)
+
+to
+
+ x
+ string(y)
+ z.String()`,
+ Since: "2017.1",
+ },
+
+ "S1028": &lint.Documentation{
+ Title: `Simplify error construction with fmt.Errorf`,
+ Text: `Before:
+
+ errors.New(fmt.Sprintf(...))
+
+After:
+
+ fmt.Errorf(...)`,
+ Since: "2017.1",
+ },
+
+ "S1029": &lint.Documentation{
+ Title: `Range over the string directly`,
+ Text: `Ranging over a string will yield byte offsets and runes. If the offset
+isn't used, this is functionally equivalent to converting the string
+to a slice of runes and ranging over that. Ranging directly over the
+string will be more performant, however, as it avoids allocating a new
+slice, the size of which depends on the length of the string.
+
+Before:
+
+ for _, r := range []rune(s) {}
+
+After:
+
+ for _, r := range s {}`,
+ Since: "2017.1",
+ },
+
+ "S1030": &lint.Documentation{
+ Title: `Use bytes.Buffer.String or bytes.Buffer.Bytes`,
+ Text: `bytes.Buffer has both a String and a Bytes method. It is never
+necessary to use string(buf.Bytes()) or []byte(buf.String()) – simply
+use the other method.`,
+ Since: "2017.1",
+ },
+
+ "S1031": &lint.Documentation{
+ Title: `Omit redundant nil check around loop`,
+ Text: `You can use range on nil slices and maps, the loop will simply never
+execute. This makes an additional nil check around the loop
+unnecessary.
+
+Before:
+
+ if s != nil {
+ for _, x := range s {
+ ...
+ }
+ }
+
+After:
+
+ for _, x := range s {
+ ...
+ }`,
+ Since: "2017.1",
+ },
+
+ "S1032": &lint.Documentation{
+ Title: `Use sort.Ints(x), sort.Float64s(x), and sort.Strings(x)`,
+ Text: `The sort.Ints, sort.Float64s and sort.Strings functions are easier to
+read than sort.Sort(sort.IntSlice(x)), sort.Sort(sort.Float64Slice(x))
+and sort.Sort(sort.StringSlice(x)).
+
+Before:
+
+ sort.Sort(sort.StringSlice(x))
+
+After:
+
+ sort.Strings(x)`,
+ Since: "2019.1",
+ },
+
+ "S1033": &lint.Documentation{
+ Title: `Unnecessary guard around call to delete`,
+ Text: `Calling delete on a nil map is a no-op.`,
+ Since: "2019.2",
+ },
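+ // e.g. `if _, ok := m[k]; ok { delete(m, k) }` simplifies to `delete(m, k)`.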
+
+ "S1034": &lint.Documentation{
+ Title: `Use result of type assertion to simplify cases`,
+ Since: "2019.2",
+ },
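+ // e.g. `switch x.(type) { case T: f(x.(T)) }` simplifies to
+ // `switch x := x.(type) { case T: f(x) }`.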
+}
diff --git a/vendor/honnef.co/go/tools/simple/lint.go b/vendor/honnef.co/go/tools/simple/lint.go
new file mode 100644
index 0000000000000..c78a7bb7a4259
--- /dev/null
+++ b/vendor/honnef.co/go/tools/simple/lint.go
@@ -0,0 +1,1816 @@
+// Package simple contains a linter for Go source code.
+package simple // import "honnef.co/go/tools/simple"
+
+import (
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "reflect"
+ "sort"
+ "strings"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/analysis/passes/inspect"
+ "golang.org/x/tools/go/ast/inspector"
+ "golang.org/x/tools/go/types/typeutil"
+ . "honnef.co/go/tools/arg"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/internal/sharedcheck"
+ "honnef.co/go/tools/lint"
+ . "honnef.co/go/tools/lint/lintdsl"
+)
+
+func LintSingleCaseSelect(pass *analysis.Pass) (interface{}, error) {
+ isSingleSelect := func(node ast.Node) bool {
+ v, ok := node.(*ast.SelectStmt)
+ if !ok {
+ return false
+ }
+ return len(v.Body.List) == 1
+ }
+
+ seen := map[ast.Node]struct{}{}
+ fn := func(node ast.Node) {
+ switch v := node.(type) {
+ case *ast.ForStmt:
+ if len(v.Body.List) != 1 {
+ return
+ }
+ if !isSingleSelect(v.Body.List[0]) {
+ return
+ }
+ if _, ok := v.Body.List[0].(*ast.SelectStmt).Body.List[0].(*ast.CommClause).Comm.(*ast.SendStmt); ok {
+ // Don't suggest using range for channel sends
+ return
+ }
+ seen[v.Body.List[0]] = struct{}{}
+ ReportNodefFG(pass, node, "should use for range instead of for { select {} }")
+ case *ast.SelectStmt:
+ if _, ok := seen[v]; ok {
+ return
+ }
+ if !isSingleSelect(v) {
+ return
+ }
+ ReportNodefFG(pass, node, "should use a simple channel send/receive instead of select with a single case")
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil), (*ast.SelectStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintLoopCopy(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ loop := node.(*ast.RangeStmt)
+
+ if loop.Key == nil {
+ return
+ }
+ if len(loop.Body.List) != 1 {
+ return
+ }
+ stmt, ok := loop.Body.List[0].(*ast.AssignStmt)
+ if !ok {
+ return
+ }
+ if stmt.Tok != token.ASSIGN || len(stmt.Lhs) != 1 || len(stmt.Rhs) != 1 {
+ return
+ }
+ lhs, ok := stmt.Lhs[0].(*ast.IndexExpr)
+ if !ok {
+ return
+ }
+
+ if _, ok := pass.TypesInfo.TypeOf(lhs.X).(*types.Slice); !ok {
+ return
+ }
+ lidx, ok := lhs.Index.(*ast.Ident)
+ if !ok {
+ return
+ }
+ key, ok := loop.Key.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if pass.TypesInfo.TypeOf(lhs) == nil || pass.TypesInfo.TypeOf(stmt.Rhs[0]) == nil {
+ return
+ }
+ if pass.TypesInfo.ObjectOf(lidx) != pass.TypesInfo.ObjectOf(key) {
+ return
+ }
+ if !types.Identical(pass.TypesInfo.TypeOf(lhs), pass.TypesInfo.TypeOf(stmt.Rhs[0])) {
+ return
+ }
+ if _, ok := pass.TypesInfo.TypeOf(loop.X).(*types.Slice); !ok {
+ return
+ }
+
+ if rhs, ok := stmt.Rhs[0].(*ast.IndexExpr); ok {
+ if _, ok := rhs.X.(*ast.Ident); !ok {
+ return
+ }
+ ridx, ok := rhs.Index.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if pass.TypesInfo.ObjectOf(ridx) != pass.TypesInfo.ObjectOf(key) {
+ return
+ }
+ } else if rhs, ok := stmt.Rhs[0].(*ast.Ident); ok {
+ value, ok := loop.Value.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if pass.TypesInfo.ObjectOf(rhs) != pass.TypesInfo.ObjectOf(value) {
+ return
+ }
+ } else {
+ return
+ }
+ ReportNodefFG(pass, loop, "should use copy() instead of a loop")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.RangeStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintIfBoolCmp(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ expr := node.(*ast.BinaryExpr)
+ if expr.Op != token.EQL && expr.Op != token.NEQ {
+ return
+ }
+ x := IsBoolConst(pass, expr.X)
+ y := IsBoolConst(pass, expr.Y)
+ if !x && !y {
+ return
+ }
+ var other ast.Expr
+ var val bool
+ if x {
+ val = BoolConst(pass, expr.X)
+ other = expr.Y
+ } else {
+ val = BoolConst(pass, expr.Y)
+ other = expr.X
+ }
+ basic, ok := pass.TypesInfo.TypeOf(other).Underlying().(*types.Basic)
+ if !ok || basic.Kind() != types.Bool {
+ return
+ }
+ op := ""
+ if (expr.Op == token.EQL && !val) || (expr.Op == token.NEQ && val) {
+ op = "!"
+ }
+ r := op + Render(pass, other)
+ l1 := len(r)
+ r = strings.TrimLeft(r, "!")
+ if (l1-len(r))%2 == 1 {
+ r = "!" + r
+ }
+ if IsInTest(pass, node) {
+ return
+ }
+ ReportNodefFG(pass, expr, "should omit comparison to bool constant, can be simplified to %s", r)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintBytesBufferConversions(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if len(call.Args) != 1 {
+ return
+ }
+
+ argCall, ok := call.Args[0].(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ sel, ok := argCall.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+
+ typ := pass.TypesInfo.TypeOf(call.Fun)
+ if typ == types.Universe.Lookup("string").Type() && IsCallToAST(pass, call.Args[0], "(*bytes.Buffer).Bytes") {
+ ReportNodefFG(pass, call, "should use %v.String() instead of %v", Render(pass, sel.X), Render(pass, call))
+ } else if typ, ok := typ.(*types.Slice); ok && typ.Elem() == types.Universe.Lookup("byte").Type() && IsCallToAST(pass, call.Args[0], "(*bytes.Buffer).String") {
+ ReportNodefFG(pass, call, "should use %v.Bytes() instead of %v", Render(pass, sel.X), Render(pass, call))
+ }
+
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintStringsContains(pass *analysis.Pass) (interface{}, error) {
+ // map of value to token to bool value
+ allowed := map[int64]map[token.Token]bool{
+ -1: {token.GTR: true, token.NEQ: true, token.EQL: false},
+ 0: {token.GEQ: true, token.LSS: false},
+ }
+ fn := func(node ast.Node) {
+ expr := node.(*ast.BinaryExpr)
+ switch expr.Op {
+ case token.GEQ, token.GTR, token.NEQ, token.LSS, token.EQL:
+ default:
+ return
+ }
+
+ value, ok := ExprToInt(pass, expr.Y)
+ if !ok {
+ return
+ }
+
+ allowedOps, ok := allowed[value]
+ if !ok {
+ return
+ }
+ b, ok := allowedOps[expr.Op]
+ if !ok {
+ return
+ }
+
+ call, ok := expr.X.(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ sel, ok := call.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ pkgIdent, ok := sel.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ funIdent := sel.Sel
+ if pkgIdent.Name != "strings" && pkgIdent.Name != "bytes" {
+ return
+ }
+ newFunc := ""
+ switch funIdent.Name {
+ case "IndexRune":
+ newFunc = "ContainsRune"
+ case "IndexAny":
+ newFunc = "ContainsAny"
+ case "Index":
+ newFunc = "Contains"
+ default:
+ return
+ }
+
+ prefix := ""
+ if !b {
+ prefix = "!"
+ }
+ ReportNodefFG(pass, node, "should use %s%s.%s(%s) instead", prefix, pkgIdent.Name, newFunc, RenderArgs(pass, call.Args))
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintBytesCompare(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ expr := node.(*ast.BinaryExpr)
+ if expr.Op != token.NEQ && expr.Op != token.EQL {
+ return
+ }
+ call, ok := expr.X.(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ if !IsCallToAST(pass, call, "bytes.Compare") {
+ return
+ }
+ value, ok := ExprToInt(pass, expr.Y)
+ if !ok || value != 0 {
+ return
+ }
+ args := RenderArgs(pass, call.Args)
+ prefix := ""
+ if expr.Op == token.NEQ {
+ prefix = "!"
+ }
+ ReportNodefFG(pass, node, "should use %sbytes.Equal(%s) instead", prefix, args)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintForTrue(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ loop := node.(*ast.ForStmt)
+ if loop.Init != nil || loop.Post != nil {
+ return
+ }
+ if !IsBoolConst(pass, loop.Cond) || !BoolConst(pass, loop.Cond) {
+ return
+ }
+ ReportNodefFG(pass, loop, "should use for {} instead of for true {}")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintRegexpRaw(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if !IsCallToAST(pass, call, "regexp.MustCompile") &&
+ !IsCallToAST(pass, call, "regexp.Compile") {
+ return
+ }
+ sel, ok := call.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ if len(call.Args) != 1 {
+ // invalid function call
+ return
+ }
+ lit, ok := call.Args[Arg("regexp.Compile.expr")].(*ast.BasicLit)
+ if !ok {
+ // TODO(dominikh): support string concat, maybe support constants
+ return
+ }
+ if lit.Kind != token.STRING {
+ // invalid function call
+ return
+ }
+ if lit.Value[0] != '"' {
+ // already a raw string
+ return
+ }
+ val := lit.Value
+ if !strings.Contains(val, `\\`) {
+ return
+ }
+ if strings.Contains(val, "`") {
+ return
+ }
+
+ bs := false
+ for _, c := range val {
+ if !bs && c == '\\' {
+ bs = true
+ continue
+ }
+ if bs && c == '\\' {
+ bs = false
+ continue
+ }
+ if bs {
+ // backslash followed by non-backslash -> escape sequence
+ return
+ }
+ }
+
+ ReportNodefFG(pass, call, "should use raw string (`...`) with regexp.%s to avoid having to escape twice", sel.Sel.Name)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintIfReturn(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ block := node.(*ast.BlockStmt)
+ l := len(block.List)
+ if l < 2 {
+ return
+ }
+ n1, n2 := block.List[l-2], block.List[l-1]
+
+ if len(block.List) >= 3 {
+ if _, ok := block.List[l-3].(*ast.IfStmt); ok {
+ // Do not flag a series of if statements
+ return
+ }
+ }
+ // if statement with no init, no else, a single condition
+ // checking an identifier or function call and just a return
+ // statement in the body, that returns a boolean constant
+ ifs, ok := n1.(*ast.IfStmt)
+ if !ok {
+ return
+ }
+ if ifs.Else != nil || ifs.Init != nil {
+ return
+ }
+ if len(ifs.Body.List) != 1 {
+ return
+ }
+ if op, ok := ifs.Cond.(*ast.BinaryExpr); ok {
+ switch op.Op {
+ case token.EQL, token.LSS, token.GTR, token.NEQ, token.LEQ, token.GEQ:
+ default:
+ return
+ }
+ }
+ ret1, ok := ifs.Body.List[0].(*ast.ReturnStmt)
+ if !ok {
+ return
+ }
+ if len(ret1.Results) != 1 {
+ return
+ }
+ if !IsBoolConst(pass, ret1.Results[0]) {
+ return
+ }
+
+ ret2, ok := n2.(*ast.ReturnStmt)
+ if !ok {
+ return
+ }
+ if len(ret2.Results) != 1 {
+ return
+ }
+ if !IsBoolConst(pass, ret2.Results[0]) {
+ return
+ }
+
+ if ret1.Results[0].(*ast.Ident).Name == ret2.Results[0].(*ast.Ident).Name {
+ // we want the function to return true and false, not the
+ // same value both times.
+ return
+ }
+
+ ReportNodefFG(pass, n1, "should use 'return <expr>' instead of 'if <expr> { return <bool> }; return <bool>'")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BlockStmt)(nil)}, fn)
+ return nil, nil
+}
+
+// LintRedundantNilCheckWithLen checks for the following redundant nil-checks:
+//
+// if x == nil || len(x) == 0 {}
+// if x != nil && len(x) != 0 {}
+// if x != nil && len(x) == N {} (where N != 0)
+// if x != nil && len(x) > N {}
+// if x != nil && len(x) >= N {} (where N != 0)
+//
+func LintRedundantNilCheckWithLen(pass *analysis.Pass) (interface{}, error) {
+ isConstZero := func(expr ast.Expr) (isConst bool, isZero bool) {
+ _, ok := expr.(*ast.BasicLit)
+ if ok {
+ return true, IsZero(expr)
+ }
+ id, ok := expr.(*ast.Ident)
+ if !ok {
+ return false, false
+ }
+ c, ok := pass.TypesInfo.ObjectOf(id).(*types.Const)
+ if !ok {
+ return false, false
+ }
+ return true, c.Val().Kind() == constant.Int && c.Val().String() == "0"
+ }
+
+ fn := func(node ast.Node) {
+ // check that expr is "x || y" or "x && y"
+ expr := node.(*ast.BinaryExpr)
+ if expr.Op != token.LOR && expr.Op != token.LAND {
+ return
+ }
+ eqNil := expr.Op == token.LOR
+
+ // check that x is "xx == nil" or "xx != nil"
+ x, ok := expr.X.(*ast.BinaryExpr)
+ if !ok {
+ return
+ }
+ if eqNil && x.Op != token.EQL {
+ return
+ }
+ if !eqNil && x.Op != token.NEQ {
+ return
+ }
+ xx, ok := x.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if !IsNil(pass, x.Y) {
+ return
+ }
+
+ // check that y is "len(xx) == 0" or "len(xx) ... "
+ y, ok := expr.Y.(*ast.BinaryExpr)
+ if !ok {
+ return
+ }
+ if eqNil && y.Op != token.EQL { // must be len(xx) *==* 0
+ return
+ }
+ yx, ok := y.X.(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ yxFun, ok := yx.Fun.(*ast.Ident)
+ if !ok || yxFun.Name != "len" || len(yx.Args) != 1 {
+ return
+ }
+ yxArg, ok := yx.Args[Arg("len.v")].(*ast.Ident)
+ if !ok {
+ return
+ }
+ if yxArg.Name != xx.Name {
+ return
+ }
+
+ if eqNil && !IsZero(y.Y) { // must be len(x) == *0*
+ return
+ }
+
+ if !eqNil {
+ isConst, isZero := isConstZero(y.Y)
+ if !isConst {
+ return
+ }
+ switch y.Op {
+ case token.EQL:
+ // avoid false positive for "xx != nil && len(xx) == 0"
+ if isZero {
+ return
+ }
+ case token.GEQ:
+ // avoid false positive for "xx != nil && len(xx) >= 0"
+ if isZero {
+ return
+ }
+ case token.NEQ:
+ // avoid false positive for "xx != nil && len(xx) != <non-zero>"
+ if !isZero {
+ return
+ }
+ case token.GTR:
+ // ok
+ default:
+ return
+ }
+ }
+
+ // finally check that xx type is one of array, slice, map or chan
+ // this is to prevent false positive in case if xx is a pointer to an array
+ var nilType string
+ switch pass.TypesInfo.TypeOf(xx).(type) {
+ case *types.Slice:
+ nilType = "nil slices"
+ case *types.Map:
+ nilType = "nil maps"
+ case *types.Chan:
+ nilType = "nil channels"
+ default:
+ return
+ }
+ ReportNodefFG(pass, expr, "should omit nil check; len() for %s is defined as zero", nilType)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintSlicing(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ n := node.(*ast.SliceExpr)
+ if n.Max != nil {
+ return
+ }
+ s, ok := n.X.(*ast.Ident)
+ if !ok || s.Obj == nil {
+ return
+ }
+ call, ok := n.High.(*ast.CallExpr)
+ if !ok || len(call.Args) != 1 || call.Ellipsis.IsValid() {
+ return
+ }
+ fun, ok := call.Fun.(*ast.Ident)
+ if !ok || fun.Name != "len" {
+ return
+ }
+ if _, ok := pass.TypesInfo.ObjectOf(fun).(*types.Builtin); !ok {
+ return
+ }
+ arg, ok := call.Args[Arg("len.v")].(*ast.Ident)
+ if !ok || arg.Obj != s.Obj {
+ return
+ }
+ ReportNodefFG(pass, n, "should omit second index in slice, s[a:len(s)] is identical to s[a:]")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.SliceExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func refersTo(pass *analysis.Pass, expr ast.Expr, ident *ast.Ident) bool {
+ found := false
+ fn := func(node ast.Node) bool {
+ ident2, ok := node.(*ast.Ident)
+ if !ok {
+ return true
+ }
+ if pass.TypesInfo.ObjectOf(ident) == pass.TypesInfo.ObjectOf(ident2) {
+ found = true
+ return false
+ }
+ return true
+ }
+ ast.Inspect(expr, fn)
+ return found
+}
+
+func LintLoopAppend(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ loop := node.(*ast.RangeStmt)
+ if !IsBlank(loop.Key) {
+ return
+ }
+ val, ok := loop.Value.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if len(loop.Body.List) != 1 {
+ return
+ }
+ stmt, ok := loop.Body.List[0].(*ast.AssignStmt)
+ if !ok {
+ return
+ }
+ if stmt.Tok != token.ASSIGN || len(stmt.Lhs) != 1 || len(stmt.Rhs) != 1 {
+ return
+ }
+ if refersTo(pass, stmt.Lhs[0], val) {
+ return
+ }
+ call, ok := stmt.Rhs[0].(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ if len(call.Args) != 2 || call.Ellipsis.IsValid() {
+ return
+ }
+ fun, ok := call.Fun.(*ast.Ident)
+ if !ok {
+ return
+ }
+ obj := pass.TypesInfo.ObjectOf(fun)
+ fn, ok := obj.(*types.Builtin)
+ if !ok || fn.Name() != "append" {
+ return
+ }
+
+ src := pass.TypesInfo.TypeOf(loop.X)
+ dst := pass.TypesInfo.TypeOf(call.Args[Arg("append.slice")])
+ // TODO(dominikh) remove nil check once Go issue #15173 has
+ // been fixed
+ if src == nil {
+ return
+ }
+ if !types.Identical(src, dst) {
+ return
+ }
+
+ if Render(pass, stmt.Lhs[0]) != Render(pass, call.Args[Arg("append.slice")]) {
+ return
+ }
+
+ el, ok := call.Args[Arg("append.elems")].(*ast.Ident)
+ if !ok {
+ return
+ }
+ if pass.TypesInfo.ObjectOf(val) != pass.TypesInfo.ObjectOf(el) {
+ return
+ }
+ ReportNodefFG(pass, loop, "should replace loop with %s = append(%s, %s...)",
+ Render(pass, stmt.Lhs[0]), Render(pass, call.Args[Arg("append.slice")]), Render(pass, loop.X))
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.RangeStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintTimeSince(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ sel, ok := call.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ if !IsCallToAST(pass, sel.X, "time.Now") {
+ return
+ }
+ if sel.Sel.Name != "Sub" {
+ return
+ }
+ ReportNodefFG(pass, call, "should use time.Since instead of time.Now().Sub")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintTimeUntil(pass *analysis.Pass) (interface{}, error) {
+ if !IsGoVersion(pass, 8) {
+ return nil, nil
+ }
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if !IsCallToAST(pass, call, "(time.Time).Sub") {
+ return
+ }
+ if !IsCallToAST(pass, call.Args[Arg("(time.Time).Sub.u")], "time.Now") {
+ return
+ }
+ ReportNodefFG(pass, call, "should use time.Until instead of t.Sub(time.Now())")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintUnnecessaryBlank(pass *analysis.Pass) (interface{}, error) {
+ fn1 := func(node ast.Node) {
+ assign := node.(*ast.AssignStmt)
+ if len(assign.Lhs) != 2 || len(assign.Rhs) != 1 {
+ return
+ }
+ if !IsBlank(assign.Lhs[1]) {
+ return
+ }
+ switch rhs := assign.Rhs[0].(type) {
+ case *ast.IndexExpr:
+ // The type-checker should make sure that it's a map, but
+ // let's be safe.
+ if _, ok := pass.TypesInfo.TypeOf(rhs.X).Underlying().(*types.Map); !ok {
+ return
+ }
+ case *ast.UnaryExpr:
+ if rhs.Op != token.ARROW {
+ return
+ }
+ default:
+ return
+ }
+ cp := *assign
+ cp.Lhs = cp.Lhs[0:1]
+ ReportNodefFG(pass, assign, "should write %s instead of %s", Render(pass, &cp), Render(pass, assign))
+ }
+
+ fn2 := func(node ast.Node) {
+ stmt := node.(*ast.AssignStmt)
+ if len(stmt.Lhs) != len(stmt.Rhs) {
+ return
+ }
+ for i, lh := range stmt.Lhs {
+ rh := stmt.Rhs[i]
+ if !IsBlank(lh) {
+ continue
+ }
+ expr, ok := rh.(*ast.UnaryExpr)
+ if !ok {
+ continue
+ }
+ if expr.Op != token.ARROW {
+ continue
+ }
+ ReportNodefFG(pass, lh, "'_ = <-ch' can be simplified to '<-ch'")
+ }
+ }
+
+ fn3 := func(node ast.Node) {
+ rs := node.(*ast.RangeStmt)
+
+ // for x, _
+ if !IsBlank(rs.Key) && IsBlank(rs.Value) {
+ ReportNodefFG(pass, rs.Value, "should omit value from range; this loop is equivalent to `for %s %s range ...`", Render(pass, rs.Key), rs.Tok)
+ }
+ // for _, _ || for _
+ if IsBlank(rs.Key) && (IsBlank(rs.Value) || rs.Value == nil) {
+ ReportNodefFG(pass, rs.Key, "should omit values from range; this loop is equivalent to `for range ...`")
+ }
+ }
+
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.AssignStmt)(nil)}, fn1)
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.AssignStmt)(nil)}, fn2)
+ if IsGoVersion(pass, 4) {
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.RangeStmt)(nil)}, fn3)
+ }
+ return nil, nil
+}
+
+func LintSimplerStructConversion(pass *analysis.Pass) (interface{}, error) {
+ var skip ast.Node
+ fn := func(node ast.Node) {
+ // Do not suggest type conversion between pointers
+ if unary, ok := node.(*ast.UnaryExpr); ok && unary.Op == token.AND {
+ if lit, ok := unary.X.(*ast.CompositeLit); ok {
+ skip = lit
+ }
+ return
+ }
+
+ if node == skip {
+ return
+ }
+
+ lit, ok := node.(*ast.CompositeLit)
+ if !ok {
+ return
+ }
+ typ1, _ := pass.TypesInfo.TypeOf(lit.Type).(*types.Named)
+ if typ1 == nil {
+ return
+ }
+ s1, ok := typ1.Underlying().(*types.Struct)
+ if !ok {
+ return
+ }
+
+ var typ2 *types.Named
+ var ident *ast.Ident
+ getSelType := func(expr ast.Expr) (types.Type, *ast.Ident, bool) {
+ sel, ok := expr.(*ast.SelectorExpr)
+ if !ok {
+ return nil, nil, false
+ }
+ ident, ok := sel.X.(*ast.Ident)
+ if !ok {
+ return nil, nil, false
+ }
+ typ := pass.TypesInfo.TypeOf(sel.X)
+ return typ, ident, typ != nil
+ }
+ if len(lit.Elts) == 0 {
+ return
+ }
+ if s1.NumFields() != len(lit.Elts) {
+ return
+ }
+ for i, elt := range lit.Elts {
+ var t types.Type
+ var id *ast.Ident
+ var ok bool
+ switch elt := elt.(type) {
+ case *ast.SelectorExpr:
+ t, id, ok = getSelType(elt)
+ if !ok {
+ return
+ }
+ if i >= s1.NumFields() || s1.Field(i).Name() != elt.Sel.Name {
+ return
+ }
+ case *ast.KeyValueExpr:
+ var sel *ast.SelectorExpr
+ sel, ok = elt.Value.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+
+ if elt.Key.(*ast.Ident).Name != sel.Sel.Name {
+ return
+ }
+ t, id, ok = getSelType(elt.Value)
+ }
+ if !ok {
+ return
+ }
+ // All fields must be initialized from the same object
+ if ident != nil && ident.Obj != id.Obj {
+ return
+ }
+ typ2, _ = t.(*types.Named)
+ if typ2 == nil {
+ return
+ }
+ ident = id
+ }
+
+ if typ2 == nil {
+ return
+ }
+
+ if typ1.Obj().Pkg() != typ2.Obj().Pkg() {
+ // Do not suggest type conversions between different
+ // packages. Types in different packages might only match
+ // by coincidence. Furthermore, if the dependency ever
+ // adds more fields to its type, it could break the code
+ // that relies on the type conversion to work.
+ return
+ }
+
+ s2, ok := typ2.Underlying().(*types.Struct)
+ if !ok {
+ return
+ }
+ if typ1 == typ2 {
+ return
+ }
+ if IsGoVersion(pass, 8) {
+ if !types.IdenticalIgnoreTags(s1, s2) {
+ return
+ }
+ } else {
+ if !types.Identical(s1, s2) {
+ return
+ }
+ }
+ ReportNodefFG(pass, node, "should convert %s (type %s) to %s instead of using struct literal",
+ ident.Name, typ2.Obj().Name(), typ1.Obj().Name())
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.UnaryExpr)(nil), (*ast.CompositeLit)(nil)}, fn)
+ return nil, nil
+}
+
+func LintTrim(pass *analysis.Pass) (interface{}, error) {
+ sameNonDynamic := func(node1, node2 ast.Node) bool {
+ if reflect.TypeOf(node1) != reflect.TypeOf(node2) {
+ return false
+ }
+
+ switch node1 := node1.(type) {
+ case *ast.Ident:
+ return node1.Obj == node2.(*ast.Ident).Obj
+ case *ast.SelectorExpr:
+ return Render(pass, node1) == Render(pass, node2)
+ case *ast.IndexExpr:
+ return Render(pass, node1) == Render(pass, node2)
+ }
+ return false
+ }
+
+ isLenOnIdent := func(fn ast.Expr, ident ast.Expr) bool {
+ call, ok := fn.(*ast.CallExpr)
+ if !ok {
+ return false
+ }
+ if fn, ok := call.Fun.(*ast.Ident); !ok || fn.Name != "len" {
+ return false
+ }
+ if len(call.Args) != 1 {
+ return false
+ }
+ return sameNonDynamic(call.Args[Arg("len.v")], ident)
+ }
+
+ fn := func(node ast.Node) {
+ var pkg string
+ var fun string
+
+ ifstmt := node.(*ast.IfStmt)
+ if ifstmt.Init != nil {
+ return
+ }
+ if ifstmt.Else != nil {
+ return
+ }
+ if len(ifstmt.Body.List) != 1 {
+ return
+ }
+ condCall, ok := ifstmt.Cond.(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ switch {
+ case IsCallToAST(pass, condCall, "strings.HasPrefix"):
+ pkg = "strings"
+ fun = "HasPrefix"
+ case IsCallToAST(pass, condCall, "strings.HasSuffix"):
+ pkg = "strings"
+ fun = "HasSuffix"
+ case IsCallToAST(pass, condCall, "strings.Contains"):
+ pkg = "strings"
+ fun = "Contains"
+ case IsCallToAST(pass, condCall, "bytes.HasPrefix"):
+ pkg = "bytes"
+ fun = "HasPrefix"
+ case IsCallToAST(pass, condCall, "bytes.HasSuffix"):
+ pkg = "bytes"
+ fun = "HasSuffix"
+ case IsCallToAST(pass, condCall, "bytes.Contains"):
+ pkg = "bytes"
+ fun = "Contains"
+ default:
+ return
+ }
+
+ assign, ok := ifstmt.Body.List[0].(*ast.AssignStmt)
+ if !ok {
+ return
+ }
+ if assign.Tok != token.ASSIGN {
+ return
+ }
+ if len(assign.Lhs) != 1 || len(assign.Rhs) != 1 {
+ return
+ }
+ if !sameNonDynamic(condCall.Args[0], assign.Lhs[0]) {
+ return
+ }
+
+ switch rhs := assign.Rhs[0].(type) {
+ case *ast.CallExpr:
+ if len(rhs.Args) < 2 || !sameNonDynamic(condCall.Args[0], rhs.Args[0]) || !sameNonDynamic(condCall.Args[1], rhs.Args[1]) {
+ return
+ }
+ if IsCallToAST(pass, condCall, "strings.HasPrefix") && IsCallToAST(pass, rhs, "strings.TrimPrefix") ||
+ IsCallToAST(pass, condCall, "strings.HasSuffix") && IsCallToAST(pass, rhs, "strings.TrimSuffix") ||
+ IsCallToAST(pass, condCall, "strings.Contains") && IsCallToAST(pass, rhs, "strings.Replace") ||
+ IsCallToAST(pass, condCall, "bytes.HasPrefix") && IsCallToAST(pass, rhs, "bytes.TrimPrefix") ||
+ IsCallToAST(pass, condCall, "bytes.HasSuffix") && IsCallToAST(pass, rhs, "bytes.TrimSuffix") ||
+ IsCallToAST(pass, condCall, "bytes.Contains") && IsCallToAST(pass, rhs, "bytes.Replace") {
+ ReportNodefFG(pass, ifstmt, "should replace this if statement with an unconditional %s", CallNameAST(pass, rhs))
+ }
+ return
+ case *ast.SliceExpr:
+ slice := rhs
+ if slice.Slice3 {
+ return
+ }
+ if !sameNonDynamic(slice.X, condCall.Args[0]) {
+ return
+ }
+ var index ast.Expr
+ switch fun {
+ case "HasPrefix":
+ // TODO(dh) We could detect a High that is len(s), but another
+ // rule will already flag that, anyway.
+ if slice.High != nil {
+ return
+ }
+ index = slice.Low
+ case "HasSuffix":
+ if slice.Low != nil {
+ n, ok := ExprToInt(pass, slice.Low)
+ if !ok || n != 0 {
+ return
+ }
+ }
+ index = slice.High
+ }
+
+ switch index := index.(type) {
+ case *ast.CallExpr:
+ if fun != "HasPrefix" {
+ return
+ }
+ if fn, ok := index.Fun.(*ast.Ident); !ok || fn.Name != "len" {
+ return
+ }
+ if len(index.Args) != 1 {
+ return
+ }
+ id3 := index.Args[Arg("len.v")]
+ switch oid3 := condCall.Args[1].(type) {
+ case *ast.BasicLit:
+ if pkg != "strings" {
+ return
+ }
+ lit, ok := id3.(*ast.BasicLit)
+ if !ok {
+ return
+ }
+ s1, ok1 := ExprToString(pass, lit)
+ s2, ok2 := ExprToString(pass, condCall.Args[1])
+ if !ok1 || !ok2 || s1 != s2 {
+ return
+ }
+ default:
+ if !sameNonDynamic(id3, oid3) {
+ return
+ }
+ }
+ case *ast.BasicLit, *ast.Ident:
+ if fun != "HasPrefix" {
+ return
+ }
+ if pkg != "strings" {
+ return
+ }
+ s, ok1 := ExprToString(pass, condCall.Args[1])
+ n, ok2 := ExprToInt(pass, slice.Low)
+ if !ok1 || !ok2 || n != int64(len(s)) {
+ return
+ }
+ case *ast.BinaryExpr:
+ if fun != "HasSuffix" {
+ return
+ }
+ if index.Op != token.SUB {
+ return
+ }
+ if !isLenOnIdent(index.X, condCall.Args[0]) ||
+ !isLenOnIdent(index.Y, condCall.Args[1]) {
+ return
+ }
+ default:
+ return
+ }
+
+ var replacement string
+ switch fun {
+ case "HasPrefix":
+ replacement = "TrimPrefix"
+ case "HasSuffix":
+ replacement = "TrimSuffix"
+ }
+ ReportNodefFG(pass, ifstmt, "should replace this if statement with an unconditional %s.%s", pkg, replacement)
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.IfStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintLoopSlide(pass *analysis.Pass) (interface{}, error) {
+ // TODO(dh): detect bs[i+offset] in addition to bs[offset+i]
+ // TODO(dh): consider merging this function with LintLoopCopy
+ // TODO(dh): detect length that is an expression, not a variable name
+ // TODO(dh): support sliding to a different offset than the beginning of the slice
+
+ fn := func(node ast.Node) {
+ /*
+ for i := 0; i < n; i++ {
+ bs[i] = bs[offset+i]
+ }
+
+ ↓
+
+ copy(bs[:n], bs[offset:offset+n])
+ */
+
+ loop := node.(*ast.ForStmt)
+ if len(loop.Body.List) != 1 || loop.Init == nil || loop.Cond == nil || loop.Post == nil {
+ return
+ }
+ assign, ok := loop.Init.(*ast.AssignStmt)
+ if !ok || len(assign.Lhs) != 1 || len(assign.Rhs) != 1 || !IsZero(assign.Rhs[0]) {
+ return
+ }
+ initvar, ok := assign.Lhs[0].(*ast.Ident)
+ if !ok {
+ return
+ }
+ post, ok := loop.Post.(*ast.IncDecStmt)
+ if !ok || post.Tok != token.INC {
+ return
+ }
+ postvar, ok := post.X.(*ast.Ident)
+ if !ok || pass.TypesInfo.ObjectOf(postvar) != pass.TypesInfo.ObjectOf(initvar) {
+ return
+ }
+ bin, ok := loop.Cond.(*ast.BinaryExpr)
+ if !ok || bin.Op != token.LSS {
+ return
+ }
+ binx, ok := bin.X.(*ast.Ident)
+ if !ok || pass.TypesInfo.ObjectOf(binx) != pass.TypesInfo.ObjectOf(initvar) {
+ return
+ }
+ biny, ok := bin.Y.(*ast.Ident)
+ if !ok {
+ return
+ }
+
+ assign, ok = loop.Body.List[0].(*ast.AssignStmt)
+ if !ok || len(assign.Lhs) != 1 || len(assign.Rhs) != 1 || assign.Tok != token.ASSIGN {
+ return
+ }
+ lhs, ok := assign.Lhs[0].(*ast.IndexExpr)
+ if !ok {
+ return
+ }
+ rhs, ok := assign.Rhs[0].(*ast.IndexExpr)
+ if !ok {
+ return
+ }
+
+ bs1, ok := lhs.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ bs2, ok := rhs.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ obj1 := pass.TypesInfo.ObjectOf(bs1)
+ obj2 := pass.TypesInfo.ObjectOf(bs2)
+ if obj1 != obj2 {
+ return
+ }
+ if _, ok := obj1.Type().Underlying().(*types.Slice); !ok {
+ return
+ }
+
+ index1, ok := lhs.Index.(*ast.Ident)
+ if !ok || pass.TypesInfo.ObjectOf(index1) != pass.TypesInfo.ObjectOf(initvar) {
+ return
+ }
+ index2, ok := rhs.Index.(*ast.BinaryExpr)
+ if !ok || index2.Op != token.ADD {
+ return
+ }
+ add1, ok := index2.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ add2, ok := index2.Y.(*ast.Ident)
+ if !ok || pass.TypesInfo.ObjectOf(add2) != pass.TypesInfo.ObjectOf(initvar) {
+ return
+ }
+
+ ReportNodefFG(pass, loop, "should use copy(%s[:%s], %s[%s:]) instead", Render(pass, bs1), Render(pass, biny), Render(pass, bs1), Render(pass, add1))
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintMakeLenCap(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if fn, ok := call.Fun.(*ast.Ident); !ok || fn.Name != "make" {
+ // FIXME check whether make is indeed the built-in function
+ return
+ }
+ switch len(call.Args) {
+ case 2:
+ // make(T, len)
+ if _, ok := pass.TypesInfo.TypeOf(call.Args[Arg("make.t")]).Underlying().(*types.Slice); ok {
+ break
+ }
+ if IsZero(call.Args[Arg("make.size[0]")]) {
+ ReportNodefFG(pass, call.Args[Arg("make.size[0]")], "should use make(%s) instead", Render(pass, call.Args[Arg("make.t")]))
+ }
+ case 3:
+ // make(T, len, cap)
+ if Render(pass, call.Args[Arg("make.size[0]")]) == Render(pass, call.Args[Arg("make.size[1]")]) {
+ ReportNodefFG(pass, call.Args[Arg("make.size[0]")],
+ "should use make(%s, %s) instead",
+ Render(pass, call.Args[Arg("make.t")]), Render(pass, call.Args[Arg("make.size[0]")]))
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintAssertNotNil(pass *analysis.Pass) (interface{}, error) {
+ isNilCheck := func(ident *ast.Ident, expr ast.Expr) bool {
+ xbinop, ok := expr.(*ast.BinaryExpr)
+ if !ok || xbinop.Op != token.NEQ {
+ return false
+ }
+ xident, ok := xbinop.X.(*ast.Ident)
+ if !ok || xident.Obj != ident.Obj {
+ return false
+ }
+ if !IsNil(pass, xbinop.Y) {
+ return false
+ }
+ return true
+ }
+ isOKCheck := func(ident *ast.Ident, expr ast.Expr) bool {
+ yident, ok := expr.(*ast.Ident)
+ if !ok || yident.Obj != ident.Obj {
+ return false
+ }
+ return true
+ }
+ fn1 := func(node ast.Node) {
+ ifstmt := node.(*ast.IfStmt)
+ assign, ok := ifstmt.Init.(*ast.AssignStmt)
+ if !ok || len(assign.Lhs) != 2 || len(assign.Rhs) != 1 || !IsBlank(assign.Lhs[0]) {
+ return
+ }
+ assert, ok := assign.Rhs[0].(*ast.TypeAssertExpr)
+ if !ok {
+ return
+ }
+ binop, ok := ifstmt.Cond.(*ast.BinaryExpr)
+ if !ok || binop.Op != token.LAND {
+ return
+ }
+ assertIdent, ok := assert.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ assignIdent, ok := assign.Lhs[1].(*ast.Ident)
+ if !ok {
+ return
+ }
+ if !(isNilCheck(assertIdent, binop.X) && isOKCheck(assignIdent, binop.Y)) &&
+ !(isNilCheck(assertIdent, binop.Y) && isOKCheck(assignIdent, binop.X)) {
+ return
+ }
+ ReportNodefFG(pass, ifstmt, "when %s is true, %s can't be nil", Render(pass, assignIdent), Render(pass, assertIdent))
+ }
+ fn2 := func(node ast.Node) {
+ // Check that outer ifstmt is an 'if x != nil {}'
+ ifstmt := node.(*ast.IfStmt)
+ if ifstmt.Init != nil {
+ return
+ }
+ if ifstmt.Else != nil {
+ return
+ }
+ if len(ifstmt.Body.List) != 1 {
+ return
+ }
+ binop, ok := ifstmt.Cond.(*ast.BinaryExpr)
+ if !ok {
+ return
+ }
+ if binop.Op != token.NEQ {
+ return
+ }
+ lhs, ok := binop.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if !IsNil(pass, binop.Y) {
+ return
+ }
+
+ // Check that inner ifstmt is an `if _, ok := x.(T); ok {}`
+ ifstmt, ok = ifstmt.Body.List[0].(*ast.IfStmt)
+ if !ok {
+ return
+ }
+ assign, ok := ifstmt.Init.(*ast.AssignStmt)
+ if !ok || len(assign.Lhs) != 2 || len(assign.Rhs) != 1 || !IsBlank(assign.Lhs[0]) {
+ return
+ }
+ assert, ok := assign.Rhs[0].(*ast.TypeAssertExpr)
+ if !ok {
+ return
+ }
+ assertIdent, ok := assert.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if lhs.Obj != assertIdent.Obj {
+ return
+ }
+ assignIdent, ok := assign.Lhs[1].(*ast.Ident)
+ if !ok {
+ return
+ }
+ if !isOKCheck(assignIdent, ifstmt.Cond) {
+ return
+ }
+ ReportNodefFG(pass, ifstmt, "when %s is true, %s can't be nil", Render(pass, assignIdent), Render(pass, assertIdent))
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.IfStmt)(nil)}, fn1)
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.IfStmt)(nil)}, fn2)
+ return nil, nil
+}
+
+func LintDeclareAssign(pass *analysis.Pass) (interface{}, error) {
+ hasMultipleAssignments := func(root ast.Node, ident *ast.Ident) bool {
+ num := 0
+ ast.Inspect(root, func(node ast.Node) bool {
+ if num >= 2 {
+ return false
+ }
+ assign, ok := node.(*ast.AssignStmt)
+ if !ok {
+ return true
+ }
+ for _, lhs := range assign.Lhs {
+ if oident, ok := lhs.(*ast.Ident); ok {
+ if oident.Obj == ident.Obj {
+ num++
+ }
+ }
+ }
+
+ return true
+ })
+ return num >= 2
+ }
+ fn := func(node ast.Node) {
+ block := node.(*ast.BlockStmt)
+ if len(block.List) < 2 {
+ return
+ }
+ for i, stmt := range block.List[:len(block.List)-1] {
+ decl, ok := stmt.(*ast.DeclStmt)
+ if !ok {
+ continue
+ }
+ gdecl, ok := decl.Decl.(*ast.GenDecl)
+ if !ok || gdecl.Tok != token.VAR || len(gdecl.Specs) != 1 {
+ continue
+ }
+ vspec, ok := gdecl.Specs[0].(*ast.ValueSpec)
+ if !ok || len(vspec.Names) != 1 || len(vspec.Values) != 0 {
+ continue
+ }
+
+ assign, ok := block.List[i+1].(*ast.AssignStmt)
+ if !ok || assign.Tok != token.ASSIGN {
+ continue
+ }
+ if len(assign.Lhs) != 1 || len(assign.Rhs) != 1 {
+ continue
+ }
+ ident, ok := assign.Lhs[0].(*ast.Ident)
+ if !ok {
+ continue
+ }
+ if vspec.Names[0].Obj != ident.Obj {
+ continue
+ }
+
+ if refersTo(pass, assign.Rhs[0], ident) {
+ continue
+ }
+ if hasMultipleAssignments(block, ident) {
+ continue
+ }
+
+ ReportNodefFG(pass, decl, "should merge variable declaration with assignment on next line")
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BlockStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func LintRedundantBreak(pass *analysis.Pass) (interface{}, error) {
+ fn1 := func(node ast.Node) {
+ clause := node.(*ast.CaseClause)
+ if len(clause.Body) < 2 {
+ return
+ }
+ branch, ok := clause.Body[len(clause.Body)-1].(*ast.BranchStmt)
+ if !ok || branch.Tok != token.BREAK || branch.Label != nil {
+ return
+ }
+ ReportNodefFG(pass, branch, "redundant break statement")
+ }
+ fn2 := func(node ast.Node) {
+ var ret *ast.FieldList
+ var body *ast.BlockStmt
+ switch x := node.(type) {
+ case *ast.FuncDecl:
+ ret = x.Type.Results
+ body = x.Body
+ case *ast.FuncLit:
+ ret = x.Type.Results
+ body = x.Body
+ default:
+ panic(fmt.Sprintf("unreachable: %T", node))
+ }
+ // if the func has results, a return can't be redundant.
+ // similarly, if there are no statements, there can be
+ // no return.
+ if ret != nil || body == nil || len(body.List) < 1 {
+ return
+ }
+ rst, ok := body.List[len(body.List)-1].(*ast.ReturnStmt)
+ if !ok {
+ return
+ }
+ // we don't need to check rst.Results as we already
+ // checked x.Type.Results to be nil.
+ ReportNodefFG(pass, rst, "redundant return statement")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CaseClause)(nil)}, fn1)
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.FuncDecl)(nil), (*ast.FuncLit)(nil)}, fn2)
+ return nil, nil
+}
+
+func isStringer(T types.Type, msCache *typeutil.MethodSetCache) bool {
+ ms := msCache.MethodSet(T)
+ sel := ms.Lookup(nil, "String")
+ if sel == nil {
+ return false
+ }
+ fn, ok := sel.Obj().(*types.Func)
+ if !ok {
+ // should be unreachable
+ return false
+ }
+ sig := fn.Type().(*types.Signature)
+ if sig.Params().Len() != 0 {
+ return false
+ }
+ if sig.Results().Len() != 1 {
+ return false
+ }
+ if !IsType(sig.Results().At(0).Type(), "string") {
+ return false
+ }
+ return true
+}
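+
+// For illustration (a sketch): isStringer reports true for a type such as
+//
+//	type celsius float64
+//
+//	func (c celsius) String() string { return fmt.Sprintf("%g", float64(c)) }
+//
+// because its method set contains a String method with no parameters and a
+// single string result.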
+
+func LintRedundantSprintf(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if !IsCallToAST(pass, call, "fmt.Sprintf") {
+ return
+ }
+ if len(call.Args) != 2 {
+ return
+ }
+ if s, ok := ExprToString(pass, call.Args[Arg("fmt.Sprintf.format")]); !ok || s != "%s" {
+ return
+ }
+ arg := call.Args[Arg("fmt.Sprintf.a[0]")]
+ typ := pass.TypesInfo.TypeOf(arg)
+
+ ssapkg := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).Pkg
+ if isStringer(typ, &ssapkg.Prog.MethodSets) {
+ ReportNodef(pass, call, "should use String() instead of fmt.Sprintf")
+ return
+ }
+
+ if typ.Underlying() == types.Universe.Lookup("string").Type() {
+ if typ == types.Universe.Lookup("string").Type() {
+ ReportNodefFG(pass, call, "the argument is already a string, there's no need to use fmt.Sprintf")
+ } else {
+ ReportNodefFG(pass, call, "the argument's underlying type is a string, should use a simple conversion instead of fmt.Sprintf")
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintErrorsNewSprintf(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ if !IsCallToAST(pass, node, "errors.New") {
+ return
+ }
+ call := node.(*ast.CallExpr)
+ if !IsCallToAST(pass, call.Args[Arg("errors.New.text")], "fmt.Sprintf") {
+ return
+ }
+ ReportNodefFG(pass, node, "should use fmt.Errorf(...) instead of errors.New(fmt.Sprintf(...))")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func LintRangeStringRunes(pass *analysis.Pass) (interface{}, error) {
+ return sharedcheck.CheckRangeStringRunes(pass)
+}
+
+func LintNilCheckAroundRange(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ ifstmt := node.(*ast.IfStmt)
+ cond, ok := ifstmt.Cond.(*ast.BinaryExpr)
+ if !ok {
+ return
+ }
+
+ if cond.Op != token.NEQ || !IsNil(pass, cond.Y) || len(ifstmt.Body.List) != 1 {
+ return
+ }
+
+ loop, ok := ifstmt.Body.List[0].(*ast.RangeStmt)
+ if !ok {
+ return
+ }
+ ifXIdent, ok := cond.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ rangeXIdent, ok := loop.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if ifXIdent.Obj != rangeXIdent.Obj {
+ return
+ }
+ switch pass.TypesInfo.TypeOf(rangeXIdent).(type) {
+ case *types.Slice, *types.Map:
+ ReportNodefFG(pass, node, "unnecessary nil check around range")
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.IfStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func isPermissibleSort(pass *analysis.Pass, node ast.Node) bool {
+ call := node.(*ast.CallExpr)
+ typeconv, ok := call.Args[0].(*ast.CallExpr)
+ if !ok {
+ return true
+ }
+
+ sel, ok := typeconv.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return true
+ }
+ name := SelectorName(pass, sel)
+ switch name {
+ case "sort.IntSlice", "sort.Float64Slice", "sort.StringSlice":
+ default:
+ return true
+ }
+
+ return false
+}
+
+func LintSortHelpers(pass *analysis.Pass) (interface{}, error) {
+ type Error struct {
+ node ast.Node
+ msg string
+ }
+ var allErrors []Error
+ fn := func(node ast.Node) {
+ var body *ast.BlockStmt
+ switch node := node.(type) {
+ case *ast.FuncLit:
+ body = node.Body
+ case *ast.FuncDecl:
+ body = node.Body
+ default:
+ panic(fmt.Sprintf("unreachable: %T", node))
+ }
+ if body == nil {
+ return
+ }
+
+ var errors []Error
+ permissible := false
+ fnSorts := func(node ast.Node) bool {
+ if permissible {
+ return false
+ }
+ if !IsCallToAST(pass, node, "sort.Sort") {
+ return true
+ }
+ if isPermissibleSort(pass, node) {
+ permissible = true
+ return false
+ }
+ call := node.(*ast.CallExpr)
+ typeconv := call.Args[Arg("sort.Sort.data")].(*ast.CallExpr)
+ sel := typeconv.Fun.(*ast.SelectorExpr)
+ name := SelectorName(pass, sel)
+
+ switch name {
+ case "sort.IntSlice":
+ errors = append(errors, Error{node, "should use sort.Ints(...) instead of sort.Sort(sort.IntSlice(...))"})
+ case "sort.Float64Slice":
+ errors = append(errors, Error{node, "should use sort.Float64s(...) instead of sort.Sort(sort.Float64Slice(...))"})
+ case "sort.StringSlice":
+ errors = append(errors, Error{node, "should use sort.Strings(...) instead of sort.Sort(sort.StringSlice(...))"})
+ }
+ return true
+ }
+ ast.Inspect(body, fnSorts)
+
+ if permissible {
+ return
+ }
+ allErrors = append(allErrors, errors...)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.FuncLit)(nil), (*ast.FuncDecl)(nil)}, fn)
+ sort.Slice(allErrors, func(i, j int) bool {
+ return allErrors[i].node.Pos() < allErrors[j].node.Pos()
+ })
+ var prev token.Pos
+ for _, err := range allErrors {
+ if err.node.Pos() == prev {
+ continue
+ }
+ prev = err.node.Pos()
+ ReportNodefFG(pass, err.node, "%s", err.msg)
+ }
+ return nil, nil
+}
+
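+// LintGuardedDelete flags comma-ok map lookups whose only purpose is to
+// guard a delete on the same map and key, e.g.
+//
+//	if _, ok := m[k]; ok {
+//		delete(m, k)
+//	}
+//
+// delete is a no-op when the key is absent, so the guard can be dropped.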
+func LintGuardedDelete(pass *analysis.Pass) (interface{}, error) {
+ isCommaOkMapIndex := func(stmt ast.Stmt) (b *ast.Ident, m ast.Expr, key ast.Expr, ok bool) {
+		// Has to be of the form `_, <b:*ast.Ident> = <m:*types.Map>[<key>]`
+
+ assign, ok := stmt.(*ast.AssignStmt)
+ if !ok {
+ return nil, nil, nil, false
+ }
+ if len(assign.Lhs) != 2 || len(assign.Rhs) != 1 {
+ return nil, nil, nil, false
+ }
+ if !IsBlank(assign.Lhs[0]) {
+ return nil, nil, nil, false
+ }
+ ident, ok := assign.Lhs[1].(*ast.Ident)
+ if !ok {
+ return nil, nil, nil, false
+ }
+ index, ok := assign.Rhs[0].(*ast.IndexExpr)
+ if !ok {
+ return nil, nil, nil, false
+ }
+ if _, ok := pass.TypesInfo.TypeOf(index.X).(*types.Map); !ok {
+ return nil, nil, nil, false
+ }
+ key = index.Index
+ return ident, index.X, key, true
+ }
+ fn := func(node ast.Node) {
+ stmt := node.(*ast.IfStmt)
+ if len(stmt.Body.List) != 1 {
+ return
+ }
+ if stmt.Else != nil {
+ return
+ }
+ expr, ok := stmt.Body.List[0].(*ast.ExprStmt)
+ if !ok {
+ return
+ }
+ call, ok := expr.X.(*ast.CallExpr)
+ if !ok {
+ return
+ }
+ if !IsCallToAST(pass, call, "delete") {
+ return
+ }
+ b, m, key, ok := isCommaOkMapIndex(stmt.Init)
+ if !ok {
+ return
+ }
+ if cond, ok := stmt.Cond.(*ast.Ident); !ok || pass.TypesInfo.ObjectOf(cond) != pass.TypesInfo.ObjectOf(b) {
+ return
+ }
+ if Render(pass, call.Args[0]) != Render(pass, m) || Render(pass, call.Args[1]) != Render(pass, key) {
+ return
+ }
+ ReportNodefFG(pass, stmt, "unnecessary guard around call to delete")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.IfStmt)(nil)}, fn)
+ return nil, nil
+}
+
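+// LintSimplifyTypeSwitch flags type switches that discard the result of the
+// type assertion only to re-assert the same value inside a case, e.g.
+//
+//	switch v.(type) {
+//	case string:
+//		use(v.(string)) // switch s := v.(type) would make this s
+//	}
+//
+// (use is a stand-in for arbitrary code.)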
+func LintSimplifyTypeSwitch(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ stmt := node.(*ast.TypeSwitchStmt)
+ if stmt.Init != nil {
+ // bailing out for now, can't anticipate how type switches with initializers are being used
+ return
+ }
+ expr, ok := stmt.Assign.(*ast.ExprStmt)
+ if !ok {
+ // the user is in fact assigning the result
+ return
+ }
+ assert := expr.X.(*ast.TypeAssertExpr)
+ ident, ok := assert.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ x := pass.TypesInfo.ObjectOf(ident)
+ var allOffenders []ast.Node
+ for _, clause := range stmt.Body.List {
+ clause := clause.(*ast.CaseClause)
+ if len(clause.List) != 1 {
+ continue
+ }
+ hasUnrelatedAssertion := false
+ var offenders []ast.Node
+ ast.Inspect(clause, func(node ast.Node) bool {
+ assert2, ok := node.(*ast.TypeAssertExpr)
+ if !ok {
+ return true
+ }
+ ident, ok := assert2.X.(*ast.Ident)
+ if !ok {
+ hasUnrelatedAssertion = true
+ return false
+ }
+ if pass.TypesInfo.ObjectOf(ident) != x {
+ hasUnrelatedAssertion = true
+ return false
+ }
+
+ if !types.Identical(pass.TypesInfo.TypeOf(clause.List[0]), pass.TypesInfo.TypeOf(assert2.Type)) {
+ hasUnrelatedAssertion = true
+ return false
+ }
+ offenders = append(offenders, assert2)
+ return true
+ })
+ if !hasUnrelatedAssertion {
+ // don't flag cases that have other type assertions
+			// unrelated to the one in the case clause. Often
+			// this is done for symmetry, when two
+ // different values have to be asserted to the same
+ // type.
+ allOffenders = append(allOffenders, offenders...)
+ }
+ }
+ if len(allOffenders) != 0 {
+ at := ""
+ for _, offender := range allOffenders {
+ pos := lint.DisplayPosition(pass.Fset, offender.Pos())
+ at += "\n\t" + pos.String()
+ }
+ ReportNodefFG(pass, expr, "assigning the result of this type assertion to a variable (switch %s := %s.(type)) could eliminate the following type assertions:%s", Render(pass, ident), Render(pass, ident), at)
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.TypeSwitchStmt)(nil)}, fn)
+ return nil, nil
+}
diff --git a/vendor/honnef.co/go/tools/ssa/LICENSE b/vendor/honnef.co/go/tools/ssa/LICENSE
new file mode 100644
index 0000000000000..aee48041e113d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/LICENSE
@@ -0,0 +1,28 @@
+Copyright (c) 2009 The Go Authors. All rights reserved.
+Copyright (c) 2016 Dominik Honnef. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are
+met:
+
+ * Redistributions of source code must retain the above copyright
+notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above
+copyright notice, this list of conditions and the following disclaimer
+in the documentation and/or other materials provided with the
+distribution.
+ * Neither the name of Google Inc. nor the names of its
+contributors may be used to endorse or promote products derived from
+this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/vendor/honnef.co/go/tools/ssa/blockopt.go b/vendor/honnef.co/go/tools/ssa/blockopt.go
new file mode 100644
index 0000000000000..22c9a4c0d420d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/blockopt.go
@@ -0,0 +1,195 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// Simple block optimizations to simplify the control flow graph.
+
+// TODO(adonovan): opt: instead of creating several "unreachable" blocks
+// per function in the Builder, reuse a single one (e.g. at Blocks[1])
+// to reduce garbage.
+
+import (
+ "fmt"
+ "os"
+)
+
+// If true, perform sanity checking and show progress at each
+// successive iteration of optimizeBlocks. Very verbose.
+const debugBlockOpt = false
+
+// markReachable sets Index=-1 for all blocks reachable from b.
+func markReachable(b *BasicBlock) {
+ b.Index = -1
+ for _, succ := range b.Succs {
+ if succ.Index == 0 {
+ markReachable(succ)
+ }
+ }
+}
+
+func DeleteUnreachableBlocks(f *Function) {
+ deleteUnreachableBlocks(f)
+}
+
+// deleteUnreachableBlocks marks all reachable blocks of f and
+// eliminates (nils) all others, including possibly cyclic subgraphs.
+//
+func deleteUnreachableBlocks(f *Function) {
+ const white, black = 0, -1
+ // We borrow b.Index temporarily as the mark bit.
+ for _, b := range f.Blocks {
+ b.Index = white
+ }
+ markReachable(f.Blocks[0])
+ if f.Recover != nil {
+ markReachable(f.Recover)
+ }
+ for i, b := range f.Blocks {
+ if b.Index == white {
+ for _, c := range b.Succs {
+ if c.Index == black {
+ c.removePred(b) // delete white->black edge
+ }
+ }
+ if debugBlockOpt {
+ fmt.Fprintln(os.Stderr, "unreachable", b)
+ }
+ f.Blocks[i] = nil // delete b
+ }
+ }
+ f.removeNilBlocks()
+}
+
+// jumpThreading attempts to apply simple jump-threading to block b,
+// in which a->b->c become a->c if b is just a Jump.
+// The result is true if the optimization was applied.
+//
+func jumpThreading(f *Function, b *BasicBlock) bool {
+ if b.Index == 0 {
+ return false // don't apply to entry block
+ }
+ if b.Instrs == nil {
+ return false
+ }
+ if _, ok := b.Instrs[0].(*Jump); !ok {
+ return false // not just a jump
+ }
+ c := b.Succs[0]
+ if c == b {
+ return false // don't apply to degenerate jump-to-self.
+ }
+ if c.hasPhi() {
+ return false // not sound without more effort
+ }
+ for j, a := range b.Preds {
+ a.replaceSucc(b, c)
+
+ // If a now has two edges to c, replace its degenerate If by Jump.
+ if len(a.Succs) == 2 && a.Succs[0] == c && a.Succs[1] == c {
+ jump := new(Jump)
+ jump.setBlock(a)
+ a.Instrs[len(a.Instrs)-1] = jump
+ a.Succs = a.Succs[:1]
+ c.removePred(b)
+ } else {
+ if j == 0 {
+ c.replacePred(b, a)
+ } else {
+ c.Preds = append(c.Preds, a)
+ }
+ }
+
+ if debugBlockOpt {
+ fmt.Fprintln(os.Stderr, "jumpThreading", a, b, c)
+ }
+ }
+ f.Blocks[b.Index] = nil // delete b
+ return true
+}
+
+// fuseBlocks attempts to apply the block fusion optimization to block
+// a, in which a->b becomes ab if len(a.Succs)==len(b.Preds)==1.
+// The result is true if the optimization was applied.
+//
+func fuseBlocks(f *Function, a *BasicBlock) bool {
+ if len(a.Succs) != 1 {
+ return false
+ }
+ b := a.Succs[0]
+ if len(b.Preds) != 1 {
+ return false
+ }
+
+ // Degenerate &&/|| ops may result in a straight-line CFG
+	// containing φ-nodes. (Ideally we'd replace them with
+ // their sole operand but that requires Referrers, built later.)
+ if b.hasPhi() {
+ return false // not sound without further effort
+ }
+
+ // Eliminate jump at end of A, then copy all of B across.
+ a.Instrs = append(a.Instrs[:len(a.Instrs)-1], b.Instrs...)
+ for _, instr := range b.Instrs {
+ instr.setBlock(a)
+ }
+
+ // A inherits B's successors
+ a.Succs = append(a.succs2[:0], b.Succs...)
+
+ // Fix up Preds links of all successors of B.
+ for _, c := range b.Succs {
+ c.replacePred(b, a)
+ }
+
+ if debugBlockOpt {
+ fmt.Fprintln(os.Stderr, "fuseBlocks", a, b)
+ }
+
+ f.Blocks[b.Index] = nil // delete b
+ return true
+}
+
+func OptimizeBlocks(f *Function) {
+ optimizeBlocks(f)
+}
+
+// optimizeBlocks() performs some simple block optimizations on a
+// completed function: dead block elimination, block fusion, jump
+// threading.
+//
+func optimizeBlocks(f *Function) {
+ deleteUnreachableBlocks(f)
+
+ // Loop until no further progress.
+ changed := true
+ for changed {
+ changed = false
+
+ if debugBlockOpt {
+ f.WriteTo(os.Stderr)
+ mustSanityCheck(f, nil)
+ }
+
+ for _, b := range f.Blocks {
+ // f.Blocks will temporarily contain nils to indicate
+ // deleted blocks; we remove them at the end.
+ if b == nil {
+ continue
+ }
+
+ // Fuse blocks. b->c becomes bc.
+ if fuseBlocks(f, b) {
+ changed = true
+ }
+
+ // a->b->c becomes a->c if b contains only a Jump.
+ if jumpThreading(f, b) {
+ changed = true
+ continue // (b was disconnected)
+ }
+ }
+ }
+ f.removeNilBlocks()
+}
diff --git a/vendor/honnef.co/go/tools/ssa/builder.go b/vendor/honnef.co/go/tools/ssa/builder.go
new file mode 100644
index 0000000000000..317ac06116659
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/builder.go
@@ -0,0 +1,2379 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file implements the BUILD phase of SSA construction.
+//
+// SSA construction has two phases, CREATE and BUILD. In the CREATE phase
+// (create.go), all packages are constructed and type-checked and
+// definitions of all package members are created, method-sets are
+// computed, and wrapper methods are synthesized.
+// ssa.Packages are created in arbitrary order.
+//
+// In the BUILD phase (builder.go), the builder traverses the AST of
+// each Go source function and generates SSA instructions for the
+// function body. Initializer expressions for package-level variables
+// are emitted to the package's init() function in the order specified
+// by go/types.Info.InitOrder, then code for each function in the
+// package is generated in lexical order.
+// The BUILD phases for distinct packages are independent and are
+// executed in parallel.
+//
+// TODO(adonovan): indeed, building functions is now embarrassingly parallel.
+// Audit for concurrency then benchmark using more goroutines.
+//
+// The builder's and Program's indices (maps) are populated and
+// mutated during the CREATE phase, but during the BUILD phase they
+// remain constant. The sole exception is Prog.methodSets and its
+// related maps, which are protected by a dedicated mutex.
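+//
+// A typical client drives both phases roughly as follows (an illustrative
+// sketch, assuming the companion ssautil package mirrors
+// golang.org/x/tools/go/ssa/ssautil):
+//
+//	pkgs, _ := packages.Load(&packages.Config{Mode: packages.LoadAllSyntax}, "./...")
+//	prog, _ := ssautil.Packages(pkgs, 0) // CREATE: all packages and members
+//	prog.Build()                         // BUILD: function bodies, in parallel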
+
+import (
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "os"
+ "sync"
+)
+
+type opaqueType struct {
+ types.Type
+ name string
+}
+
+func (t *opaqueType) String() string { return t.name }
+
+var (
+ varOk = newVar("ok", tBool)
+ varIndex = newVar("index", tInt)
+
+ // Type constants.
+ tBool = types.Typ[types.Bool]
+ tByte = types.Typ[types.Byte]
+ tInt = types.Typ[types.Int]
+ tInvalid = types.Typ[types.Invalid]
+ tString = types.Typ[types.String]
+ tUntypedNil = types.Typ[types.UntypedNil]
+ tRangeIter = &opaqueType{nil, "iter"} // the type of all "range" iterators
+ tEface = types.NewInterfaceType(nil, nil).Complete()
+
+ // SSA Value constants.
+ vZero = intConst(0)
+ vOne = intConst(1)
+ vTrue = NewConst(constant.MakeBool(true), tBool)
+)
+
+// builder holds state associated with the package currently being built.
+// Its methods contain all the logic for AST-to-SSA conversion.
+type builder struct{}
+
+// cond emits to fn code to evaluate boolean condition e and jump
+// to t or f depending on its value, performing various simplifications.
+//
+// Postcondition: fn.currentBlock is nil.
+//
+func (b *builder) cond(fn *Function, e ast.Expr, t, f *BasicBlock) {
+ switch e := e.(type) {
+ case *ast.ParenExpr:
+ b.cond(fn, e.X, t, f)
+ return
+
+ case *ast.BinaryExpr:
+ switch e.Op {
+ case token.LAND:
+ ltrue := fn.newBasicBlock("cond.true")
+ b.cond(fn, e.X, ltrue, f)
+ fn.currentBlock = ltrue
+ b.cond(fn, e.Y, t, f)
+ return
+
+ case token.LOR:
+ lfalse := fn.newBasicBlock("cond.false")
+ b.cond(fn, e.X, t, lfalse)
+ fn.currentBlock = lfalse
+ b.cond(fn, e.Y, t, f)
+ return
+ }
+
+ case *ast.UnaryExpr:
+ if e.Op == token.NOT {
+ b.cond(fn, e.X, f, t)
+ return
+ }
+ }
+
+ // A traditional compiler would simplify "if false" (etc) here
+ // but we do not, for better fidelity to the source code.
+ //
+ // The value of a constant condition may be platform-specific,
+ // and may cause blocks that are reachable in some configuration
+ // to be hidden from subsequent analyses such as bug-finding tools.
+ emitIf(fn, b.expr(fn, e), t, f)
+}
+
+// logicalBinop emits code to fn to evaluate e, a &&- or
+// ||-expression whose reified boolean value is wanted.
+// The value is returned.
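+//
+// For e.X && e.Y, for instance, the non-degenerate result is a φ-node in the
+// "done" block merging the constant false (the short-circuit value, one edge
+// per jump from e.X) with the value of e.Y.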
+//
+func (b *builder) logicalBinop(fn *Function, e *ast.BinaryExpr) Value {
+ rhs := fn.newBasicBlock("binop.rhs")
+ done := fn.newBasicBlock("binop.done")
+
+ // T(e) = T(e.X) = T(e.Y) after untyped constants have been
+ // eliminated.
+ // TODO(adonovan): not true; MyBool==MyBool yields UntypedBool.
+ t := fn.Pkg.typeOf(e)
+
+ var short Value // value of the short-circuit path
+ switch e.Op {
+ case token.LAND:
+ b.cond(fn, e.X, rhs, done)
+ short = NewConst(constant.MakeBool(false), t)
+
+ case token.LOR:
+ b.cond(fn, e.X, done, rhs)
+ short = NewConst(constant.MakeBool(true), t)
+ }
+
+ // Is rhs unreachable?
+ if rhs.Preds == nil {
+ // Simplify false&&y to false, true||y to true.
+ fn.currentBlock = done
+ return short
+ }
+
+ // Is done unreachable?
+ if done.Preds == nil {
+ // Simplify true&&y (or false||y) to y.
+ fn.currentBlock = rhs
+ return b.expr(fn, e.Y)
+ }
+
+ // All edges from e.X to done carry the short-circuit value.
+ var edges []Value
+ for range done.Preds {
+ edges = append(edges, short)
+ }
+
+ // The edge from e.Y to done carries the value of e.Y.
+ fn.currentBlock = rhs
+ edges = append(edges, b.expr(fn, e.Y))
+ emitJump(fn, done)
+ fn.currentBlock = done
+
+ phi := &Phi{Edges: edges, Comment: e.Op.String()}
+ phi.pos = e.OpPos
+ phi.typ = t
+ return done.emit(phi)
+}
+
+// exprN lowers a multi-result expression e to SSA form, emitting code
+// to fn and returning a single Value whose type is a *types.Tuple.
+// The caller must access the components via Extract.
+//
+// Multi-result expressions include CallExprs in a multi-value
+// assignment or return statement, and "value,ok" uses of
+// TypeAssertExpr, IndexExpr (when X is a map), and UnaryExpr (when Op
+// is token.ARROW).
+//
+func (b *builder) exprN(fn *Function, e ast.Expr) Value {
+ typ := fn.Pkg.typeOf(e).(*types.Tuple)
+ switch e := e.(type) {
+ case *ast.ParenExpr:
+ return b.exprN(fn, e.X)
+
+ case *ast.CallExpr:
+ // Currently, no built-in function nor type conversion
+ // has multiple results, so we can avoid some of the
+ // cases for single-valued CallExpr.
+ var c Call
+ b.setCall(fn, e, &c.Call)
+ c.typ = typ
+ return fn.emit(&c)
+
+ case *ast.IndexExpr:
+ mapt := fn.Pkg.typeOf(e.X).Underlying().(*types.Map)
+ lookup := &Lookup{
+ X: b.expr(fn, e.X),
+ Index: emitConv(fn, b.expr(fn, e.Index), mapt.Key()),
+ CommaOk: true,
+ }
+ lookup.setType(typ)
+ lookup.setPos(e.Lbrack)
+ return fn.emit(lookup)
+
+ case *ast.TypeAssertExpr:
+ return emitTypeTest(fn, b.expr(fn, e.X), typ.At(0).Type(), e.Lparen)
+
+ case *ast.UnaryExpr: // must be receive <-
+ unop := &UnOp{
+ Op: token.ARROW,
+ X: b.expr(fn, e.X),
+ CommaOk: true,
+ }
+ unop.setType(typ)
+ unop.setPos(e.OpPos)
+ return fn.emit(unop)
+ }
+ panic(fmt.Sprintf("exprN(%T) in %s", e, fn))
+}
+
+// builtin emits to fn SSA instructions to implement a call to the
+// built-in function obj with the specified arguments
+// and return type. It returns the value defined by the result.
+//
+// The result is nil if no special handling was required; in this case
+// the caller should treat this like an ordinary library function
+// call.
+//
+func (b *builder) builtin(fn *Function, obj *types.Builtin, args []ast.Expr, typ types.Type, pos token.Pos) Value {
+ switch obj.Name() {
+ case "make":
+ switch typ.Underlying().(type) {
+ case *types.Slice:
+ n := b.expr(fn, args[1])
+ m := n
+ if len(args) == 3 {
+ m = b.expr(fn, args[2])
+ }
+ if m, ok := m.(*Const); ok {
+ // treat make([]T, n, m) as new([m]T)[:n]
+ cap := m.Int64()
+ at := types.NewArray(typ.Underlying().(*types.Slice).Elem(), cap)
+ alloc := emitNew(fn, at, pos)
+ alloc.Comment = "makeslice"
+ v := &Slice{
+ X: alloc,
+ High: n,
+ }
+ v.setPos(pos)
+ v.setType(typ)
+ return fn.emit(v)
+ }
+ v := &MakeSlice{
+ Len: n,
+ Cap: m,
+ }
+ v.setPos(pos)
+ v.setType(typ)
+ return fn.emit(v)
+
+ case *types.Map:
+ var res Value
+ if len(args) == 2 {
+ res = b.expr(fn, args[1])
+ }
+ v := &MakeMap{Reserve: res}
+ v.setPos(pos)
+ v.setType(typ)
+ return fn.emit(v)
+
+ case *types.Chan:
+ var sz Value = vZero
+ if len(args) == 2 {
+ sz = b.expr(fn, args[1])
+ }
+ v := &MakeChan{Size: sz}
+ v.setPos(pos)
+ v.setType(typ)
+ return fn.emit(v)
+ }
+
+ case "new":
+ alloc := emitNew(fn, deref(typ), pos)
+ alloc.Comment = "new"
+ return alloc
+
+ case "len", "cap":
+ // Special case: len or cap of an array or *array is
+ // based on the type, not the value which may be nil.
+ // We must still evaluate the value, though. (If it
+ // was side-effect free, the whole call would have
+ // been constant-folded.)
+ t := deref(fn.Pkg.typeOf(args[0])).Underlying()
+ if at, ok := t.(*types.Array); ok {
+ b.expr(fn, args[0]) // for effects only
+ return intConst(at.Len())
+ }
+ // Otherwise treat as normal.
+
+ case "panic":
+ fn.emit(&Panic{
+ X: emitConv(fn, b.expr(fn, args[0]), tEface),
+ pos: pos,
+ })
+ fn.currentBlock = fn.newBasicBlock("unreachable")
+ return vTrue // any non-nil Value will do
+ }
+ return nil // treat all others as a regular function call
+}
+
+// addr lowers a single-result addressable expression e to SSA form,
+// emitting code to fn and returning the location (an lvalue) defined
+// by the expression.
+//
+// If escaping is true, addr marks the base variable of the
+// addressable expression e as being a potentially escaping pointer
+// value. For example, in this code:
+//
+// a := A{
+// b: [1]B{B{c: 1}}
+// }
+// return &a.b[0].c
+//
+// the application of & causes a.b[0].c to have its address taken,
+// which means that ultimately the local variable a must be
+// heap-allocated. This is a simple but very conservative escape
+// analysis.
+//
+// Operations forming potentially escaping pointers include:
+// - &x, including when implicit in method call or composite literals.
+// - a[:] iff a is an array (not *array)
+// - references to variables in lexically enclosing functions.
+//
+func (b *builder) addr(fn *Function, e ast.Expr, escaping bool) lvalue {
+ switch e := e.(type) {
+ case *ast.Ident:
+ if isBlankIdent(e) {
+ return blank{}
+ }
+ obj := fn.Pkg.objectOf(e)
+ v := fn.Prog.packageLevelValue(obj) // var (address)
+ if v == nil {
+ v = fn.lookup(obj, escaping)
+ }
+ return &address{addr: v, pos: e.Pos(), expr: e}
+
+ case *ast.CompositeLit:
+ t := deref(fn.Pkg.typeOf(e))
+ var v *Alloc
+ if escaping {
+ v = emitNew(fn, t, e.Lbrace)
+ } else {
+ v = fn.addLocal(t, e.Lbrace)
+ }
+ v.Comment = "complit"
+ var sb storebuf
+ b.compLit(fn, v, e, true, &sb)
+ sb.emit(fn)
+ return &address{addr: v, pos: e.Lbrace, expr: e}
+
+ case *ast.ParenExpr:
+ return b.addr(fn, e.X, escaping)
+
+ case *ast.SelectorExpr:
+ sel, ok := fn.Pkg.info.Selections[e]
+ if !ok {
+ // qualified identifier
+ return b.addr(fn, e.Sel, escaping)
+ }
+ if sel.Kind() != types.FieldVal {
+ panic(sel)
+ }
+ wantAddr := true
+ v := b.receiver(fn, e.X, wantAddr, escaping, sel)
+ last := len(sel.Index()) - 1
+ return &address{
+ addr: emitFieldSelection(fn, v, sel.Index()[last], true, e.Sel),
+ pos: e.Sel.Pos(),
+ expr: e.Sel,
+ }
+
+ case *ast.IndexExpr:
+ var x Value
+ var et types.Type
+ switch t := fn.Pkg.typeOf(e.X).Underlying().(type) {
+ case *types.Array:
+ x = b.addr(fn, e.X, escaping).address(fn)
+ et = types.NewPointer(t.Elem())
+ case *types.Pointer: // *array
+ x = b.expr(fn, e.X)
+ et = types.NewPointer(t.Elem().Underlying().(*types.Array).Elem())
+ case *types.Slice:
+ x = b.expr(fn, e.X)
+ et = types.NewPointer(t.Elem())
+ case *types.Map:
+ return &element{
+ m: b.expr(fn, e.X),
+ k: emitConv(fn, b.expr(fn, e.Index), t.Key()),
+ t: t.Elem(),
+ pos: e.Lbrack,
+ }
+ default:
+ panic("unexpected container type in IndexExpr: " + t.String())
+ }
+ v := &IndexAddr{
+ X: x,
+ Index: emitConv(fn, b.expr(fn, e.Index), tInt),
+ }
+ v.setPos(e.Lbrack)
+ v.setType(et)
+ return &address{addr: fn.emit(v), pos: e.Lbrack, expr: e}
+
+ case *ast.StarExpr:
+ return &address{addr: b.expr(fn, e.X), pos: e.Star, expr: e}
+ }
+
+ panic(fmt.Sprintf("unexpected address expression: %T", e))
+}
+
+type store struct {
+ lhs lvalue
+ rhs Value
+}
+
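+// A storebuf queues stores so that, in a parallel assignment or composite
+// literal, all reads of the right-hand sides occur before any left-hand
+// side is written.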
+type storebuf struct{ stores []store }
+
+func (sb *storebuf) store(lhs lvalue, rhs Value) {
+ sb.stores = append(sb.stores, store{lhs, rhs})
+}
+
+func (sb *storebuf) emit(fn *Function) {
+ for _, s := range sb.stores {
+ s.lhs.store(fn, s.rhs)
+ }
+}
+
+// assign emits to fn code to initialize the lvalue loc with the value
+// of expression e. If isZero is true, assign assumes that loc holds
+// the zero value for its type.
+//
+// This is equivalent to loc.store(fn, b.expr(fn, e)), but may generate
+// better code in some cases, e.g., for composite literals in an
+// addressable location.
+//
+// If sb is not nil, assign generates code to evaluate expression e, but
+// not to update loc. Instead, the necessary stores are appended to the
+// storebuf sb so that they can be executed later. This allows correct
+// in-place update of existing variables when the RHS is a composite
+// literal that may reference parts of the LHS.
+//
+func (b *builder) assign(fn *Function, loc lvalue, e ast.Expr, isZero bool, sb *storebuf) {
+ // Can we initialize it in place?
+ if e, ok := unparen(e).(*ast.CompositeLit); ok {
+ // A CompositeLit never evaluates to a pointer,
+ // so if the type of the location is a pointer,
+ // an &-operation is implied.
+ if _, ok := loc.(blank); !ok { // avoid calling blank.typ()
+ if isPointer(loc.typ()) {
+ ptr := b.addr(fn, e, true).address(fn)
+ // copy address
+ if sb != nil {
+ sb.store(loc, ptr)
+ } else {
+ loc.store(fn, ptr)
+ }
+ return
+ }
+ }
+
+ if _, ok := loc.(*address); ok {
+ if isInterface(loc.typ()) {
+ // e.g. var x interface{} = T{...}
+ // Can't in-place initialize an interface value.
+ // Fall back to copying.
+ } else {
+ // x = T{...} or x := T{...}
+ addr := loc.address(fn)
+ if sb != nil {
+ b.compLit(fn, addr, e, isZero, sb)
+ } else {
+ var sb storebuf
+ b.compLit(fn, addr, e, isZero, &sb)
+ sb.emit(fn)
+ }
+
+ // Subtle: emit debug ref for aggregate types only;
+ // slice and map are handled by store ops in compLit.
+ switch loc.typ().Underlying().(type) {
+ case *types.Struct, *types.Array:
+ emitDebugRef(fn, e, addr, true)
+ }
+
+ return
+ }
+ }
+ }
+
+ // simple case: just copy
+ rhs := b.expr(fn, e)
+ if sb != nil {
+ sb.store(loc, rhs)
+ } else {
+ loc.store(fn, rhs)
+ }
+}
+
+// expr lowers a single-result expression e to SSA form, emitting code
+// to fn and returning the Value defined by the expression.
+//
+func (b *builder) expr(fn *Function, e ast.Expr) Value {
+ e = unparen(e)
+
+ tv := fn.Pkg.info.Types[e]
+
+ // Is expression a constant?
+ if tv.Value != nil {
+ return NewConst(tv.Value, tv.Type)
+ }
+
+ var v Value
+ if tv.Addressable() {
+ // Prefer pointer arithmetic ({Index,Field}Addr) followed
+ // by Load over subelement extraction (e.g. Index, Field),
+ // to avoid large copies.
+ v = b.addr(fn, e, false).load(fn)
+ } else {
+ v = b.expr0(fn, e, tv)
+ }
+ if fn.debugInfo() {
+ emitDebugRef(fn, e, v, false)
+ }
+ return v
+}
+
+func (b *builder) expr0(fn *Function, e ast.Expr, tv types.TypeAndValue) Value {
+ switch e := e.(type) {
+ case *ast.BasicLit:
+ panic("non-constant BasicLit") // unreachable
+
+ case *ast.FuncLit:
+ fn2 := &Function{
+ name: fmt.Sprintf("%s$%d", fn.Name(), 1+len(fn.AnonFuncs)),
+ Signature: fn.Pkg.typeOf(e.Type).Underlying().(*types.Signature),
+ pos: e.Type.Func,
+ parent: fn,
+ Pkg: fn.Pkg,
+ Prog: fn.Prog,
+ syntax: e,
+ }
+ fn.AnonFuncs = append(fn.AnonFuncs, fn2)
+ b.buildFunction(fn2)
+ if fn2.FreeVars == nil {
+ return fn2
+ }
+ v := &MakeClosure{Fn: fn2}
+ v.setType(tv.Type)
+ for _, fv := range fn2.FreeVars {
+ v.Bindings = append(v.Bindings, fv.outer)
+ fv.outer = nil
+ }
+ return fn.emit(v)
+
+ case *ast.TypeAssertExpr: // single-result form only
+ return emitTypeAssert(fn, b.expr(fn, e.X), tv.Type, e.Lparen)
+
+ case *ast.CallExpr:
+ if fn.Pkg.info.Types[e.Fun].IsType() {
+ // Explicit type conversion, e.g. string(x) or big.Int(x)
+ x := b.expr(fn, e.Args[0])
+ y := emitConv(fn, x, tv.Type)
+ if y != x {
+ switch y := y.(type) {
+ case *Convert:
+ y.pos = e.Lparen
+ case *ChangeType:
+ y.pos = e.Lparen
+ case *MakeInterface:
+ y.pos = e.Lparen
+ }
+ }
+ return y
+ }
+ // Call to "intrinsic" built-ins, e.g. new, make, panic.
+ if id, ok := unparen(e.Fun).(*ast.Ident); ok {
+ if obj, ok := fn.Pkg.info.Uses[id].(*types.Builtin); ok {
+ if v := b.builtin(fn, obj, e.Args, tv.Type, e.Lparen); v != nil {
+ return v
+ }
+ }
+ }
+ // Regular function call.
+ var v Call
+ b.setCall(fn, e, &v.Call)
+ v.setType(tv.Type)
+ return fn.emit(&v)
+
+ case *ast.UnaryExpr:
+ switch e.Op {
+ case token.AND: // &X --- potentially escaping.
+ addr := b.addr(fn, e.X, true)
+ if _, ok := unparen(e.X).(*ast.StarExpr); ok {
+ // &*p must panic if p is nil (http://golang.org/s/go12nil).
+ // For simplicity, we'll just (suboptimally) rely
+ // on the side effects of a load.
+ // TODO(adonovan): emit dedicated nilcheck.
+ addr.load(fn)
+ }
+ return addr.address(fn)
+ case token.ADD:
+ return b.expr(fn, e.X)
+ case token.NOT, token.ARROW, token.SUB, token.XOR: // ! <- - ^
+ v := &UnOp{
+ Op: e.Op,
+ X: b.expr(fn, e.X),
+ }
+ v.setPos(e.OpPos)
+ v.setType(tv.Type)
+ return fn.emit(v)
+ default:
+ panic(e.Op)
+ }
+
+ case *ast.BinaryExpr:
+ switch e.Op {
+ case token.LAND, token.LOR:
+ return b.logicalBinop(fn, e)
+ case token.SHL, token.SHR:
+ fallthrough
+ case token.ADD, token.SUB, token.MUL, token.QUO, token.REM, token.AND, token.OR, token.XOR, token.AND_NOT:
+ return emitArith(fn, e.Op, b.expr(fn, e.X), b.expr(fn, e.Y), tv.Type, e.OpPos)
+
+ case token.EQL, token.NEQ, token.GTR, token.LSS, token.LEQ, token.GEQ:
+ cmp := emitCompare(fn, e.Op, b.expr(fn, e.X), b.expr(fn, e.Y), e.OpPos)
+ // The type of x==y may be UntypedBool.
+ return emitConv(fn, cmp, DefaultType(tv.Type))
+ default:
+ panic("illegal op in BinaryExpr: " + e.Op.String())
+ }
+
+ case *ast.SliceExpr:
+ var low, high, max Value
+ var x Value
+ switch fn.Pkg.typeOf(e.X).Underlying().(type) {
+ case *types.Array:
+ // Potentially escaping.
+ x = b.addr(fn, e.X, true).address(fn)
+ case *types.Basic, *types.Slice, *types.Pointer: // *array
+ x = b.expr(fn, e.X)
+ default:
+ panic("unreachable")
+ }
+ if e.High != nil {
+ high = b.expr(fn, e.High)
+ }
+ if e.Low != nil {
+ low = b.expr(fn, e.Low)
+ }
+ if e.Slice3 {
+ max = b.expr(fn, e.Max)
+ }
+ v := &Slice{
+ X: x,
+ Low: low,
+ High: high,
+ Max: max,
+ }
+ v.setPos(e.Lbrack)
+ v.setType(tv.Type)
+ return fn.emit(v)
+
+ case *ast.Ident:
+ obj := fn.Pkg.info.Uses[e]
+ // Universal built-in or nil?
+ switch obj := obj.(type) {
+ case *types.Builtin:
+ return &Builtin{name: obj.Name(), sig: tv.Type.(*types.Signature)}
+ case *types.Nil:
+ return nilConst(tv.Type)
+ }
+ // Package-level func or var?
+ if v := fn.Prog.packageLevelValue(obj); v != nil {
+ if _, ok := obj.(*types.Var); ok {
+ return emitLoad(fn, v) // var (address)
+ }
+ return v // (func)
+ }
+ // Local var.
+ return emitLoad(fn, fn.lookup(obj, false)) // var (address)
+
+ case *ast.SelectorExpr:
+ sel, ok := fn.Pkg.info.Selections[e]
+ if !ok {
+ // qualified identifier
+ return b.expr(fn, e.Sel)
+ }
+ switch sel.Kind() {
+ case types.MethodExpr:
+ // (*T).f or T.f, the method f from the method-set of type T.
+ // The result is a "thunk".
+ return emitConv(fn, makeThunk(fn.Prog, sel), tv.Type)
+
+ case types.MethodVal:
+ // e.f where e is an expression and f is a method.
+ // The result is a "bound".
+ obj := sel.Obj().(*types.Func)
+ rt := recvType(obj)
+ wantAddr := isPointer(rt)
+ escaping := true
+ v := b.receiver(fn, e.X, wantAddr, escaping, sel)
+ if isInterface(rt) {
+ // If v has interface type I,
+ // we must emit a check that v is non-nil.
+ // We use: typeassert v.(I).
+ emitTypeAssert(fn, v, rt, token.NoPos)
+ }
+ c := &MakeClosure{
+ Fn: makeBound(fn.Prog, obj),
+ Bindings: []Value{v},
+ }
+ c.setPos(e.Sel.Pos())
+ c.setType(tv.Type)
+ return fn.emit(c)
+
+ case types.FieldVal:
+ indices := sel.Index()
+ last := len(indices) - 1
+ v := b.expr(fn, e.X)
+ v = emitImplicitSelections(fn, v, indices[:last])
+ v = emitFieldSelection(fn, v, indices[last], false, e.Sel)
+ return v
+ }
+
+ panic("unexpected expression-relative selector")
+
+ case *ast.IndexExpr:
+ switch t := fn.Pkg.typeOf(e.X).Underlying().(type) {
+ case *types.Array:
+ // Non-addressable array (in a register).
+ v := &Index{
+ X: b.expr(fn, e.X),
+ Index: emitConv(fn, b.expr(fn, e.Index), tInt),
+ }
+ v.setPos(e.Lbrack)
+ v.setType(t.Elem())
+ return fn.emit(v)
+
+ case *types.Map:
+ // Maps are not addressable.
+ mapt := fn.Pkg.typeOf(e.X).Underlying().(*types.Map)
+ v := &Lookup{
+ X: b.expr(fn, e.X),
+ Index: emitConv(fn, b.expr(fn, e.Index), mapt.Key()),
+ }
+ v.setPos(e.Lbrack)
+ v.setType(mapt.Elem())
+ return fn.emit(v)
+
+ case *types.Basic: // => string
+ // Strings are not addressable.
+ v := &Lookup{
+ X: b.expr(fn, e.X),
+ Index: b.expr(fn, e.Index),
+ }
+ v.setPos(e.Lbrack)
+ v.setType(tByte)
+ return fn.emit(v)
+
+ case *types.Slice, *types.Pointer: // *array
+ // Addressable slice/array; use IndexAddr and Load.
+ return b.addr(fn, e, false).load(fn)
+
+ default:
+ panic("unexpected container type in IndexExpr: " + t.String())
+ }
+
+ case *ast.CompositeLit, *ast.StarExpr:
+ // Addressable types (lvalues)
+ return b.addr(fn, e, false).load(fn)
+ }
+
+ panic(fmt.Sprintf("unexpected expr: %T", e))
+}
+
+// stmtList emits to fn code for all statements in list.
+func (b *builder) stmtList(fn *Function, list []ast.Stmt) {
+ for _, s := range list {
+ b.stmt(fn, s)
+ }
+}
+
+// receiver emits to fn code for expression e in the "receiver"
+// position of selection e.f (where f may be a field or a method) and
+// returns the effective receiver after applying the implicit field
+// selections of sel.
+//
+// wantAddr requests that the result is an address. If
+// !sel.Indirect(), this may require that e be built in addr() mode; it
+// must thus be addressable.
+//
+// escaping is defined as per builder.addr().
+//
+func (b *builder) receiver(fn *Function, e ast.Expr, wantAddr, escaping bool, sel *types.Selection) Value {
+ var v Value
+ if wantAddr && !sel.Indirect() && !isPointer(fn.Pkg.typeOf(e)) {
+ v = b.addr(fn, e, escaping).address(fn)
+ } else {
+ v = b.expr(fn, e)
+ }
+
+ last := len(sel.Index()) - 1
+ v = emitImplicitSelections(fn, v, sel.Index()[:last])
+ if !wantAddr && isPointer(v.Type()) {
+ v = emitLoad(fn, v)
+ }
+ return v
+}
+
+// setCallFunc populates the function parts of a CallCommon structure
+// (Func, Method, Recv, Args[0]) based on the kind of invocation
+// occurring in e.
+//
+func (b *builder) setCallFunc(fn *Function, e *ast.CallExpr, c *CallCommon) {
+ c.pos = e.Lparen
+
+ // Is this a method call?
+ if selector, ok := unparen(e.Fun).(*ast.SelectorExpr); ok {
+ sel, ok := fn.Pkg.info.Selections[selector]
+ if ok && sel.Kind() == types.MethodVal {
+ obj := sel.Obj().(*types.Func)
+ recv := recvType(obj)
+ wantAddr := isPointer(recv)
+ escaping := true
+ v := b.receiver(fn, selector.X, wantAddr, escaping, sel)
+ if isInterface(recv) {
+ // Invoke-mode call.
+ c.Value = v
+ c.Method = obj
+ } else {
+ // "Call"-mode call.
+ c.Value = fn.Prog.declaredFunc(obj)
+ c.Args = append(c.Args, v)
+ }
+ return
+ }
+
+ // sel.Kind()==MethodExpr indicates T.f() or (*T).f():
+ // a statically dispatched call to the method f in the
+ // method-set of T or *T. T may be an interface.
+ //
+ // e.Fun would evaluate to a concrete method, interface
+ // wrapper function, or promotion wrapper.
+ //
+ // For now, we evaluate it in the usual way.
+ //
+ // TODO(adonovan): opt: inline expr() here, to make the
+ // call static and to avoid generation of wrappers.
+ // It's somewhat tricky as it may consume the first
+ // actual parameter if the call is "invoke" mode.
+ //
+ // Examples:
+ // type T struct{}; func (T) f() {} // "call" mode
+ // type T interface { f() } // "invoke" mode
+ //
+ // type S struct{ T }
+ //
+ // var s S
+ // S.f(s)
+ // (*S).f(&s)
+ //
+ // Suggested approach:
+ // - consume the first actual parameter expression
+ // and build it with b.expr().
+ // - apply implicit field selections.
+ // - use MethodVal logic to populate fields of c.
+ }
+
+ // Evaluate the function operand in the usual way.
+ c.Value = b.expr(fn, e.Fun)
+}
+
+// emitCallArgs emits to f code for the actual parameters of call e to
+// a (possibly built-in) function of effective type sig.
+// The argument values are appended to args, which is then returned.
+//
+func (b *builder) emitCallArgs(fn *Function, sig *types.Signature, e *ast.CallExpr, args []Value) []Value {
+ // f(x, y, z...): pass slice z straight through.
+ if e.Ellipsis != 0 {
+ for i, arg := range e.Args {
+ v := emitConv(fn, b.expr(fn, arg), sig.Params().At(i).Type())
+ args = append(args, v)
+ }
+ return args
+ }
+
+ offset := len(args) // 1 if call has receiver, 0 otherwise
+
+ // Evaluate actual parameter expressions.
+ //
+ // If this is a chained call of the form f(g()) where g has
+ // multiple return values (MRV), they are flattened out into
+ // args; a suffix of them may end up in a varargs slice.
+ for _, arg := range e.Args {
+ v := b.expr(fn, arg)
+ if ttuple, ok := v.Type().(*types.Tuple); ok { // MRV chain
+ for i, n := 0, ttuple.Len(); i < n; i++ {
+ args = append(args, emitExtract(fn, v, i))
+ }
+ } else {
+ args = append(args, v)
+ }
+ }
+
+ // Actual->formal assignability conversions for normal parameters.
+ np := sig.Params().Len() // number of normal parameters
+ if sig.Variadic() {
+ np--
+ }
+ for i := 0; i < np; i++ {
+ args[offset+i] = emitConv(fn, args[offset+i], sig.Params().At(i).Type())
+ }
+
+ // Actual->formal assignability conversions for variadic parameter,
+ // and construction of slice.
+ if sig.Variadic() {
+ varargs := args[offset+np:]
+ st := sig.Params().At(np).Type().(*types.Slice)
+ vt := st.Elem()
+ if len(varargs) == 0 {
+ args = append(args, nilConst(st))
+ } else {
+ // Replace a suffix of args with a slice containing it.
+ at := types.NewArray(vt, int64(len(varargs)))
+ a := emitNew(fn, at, token.NoPos)
+ a.setPos(e.Rparen)
+ a.Comment = "varargs"
+ for i, arg := range varargs {
+ iaddr := &IndexAddr{
+ X: a,
+ Index: intConst(int64(i)),
+ }
+ iaddr.setType(types.NewPointer(vt))
+ fn.emit(iaddr)
+ emitStore(fn, iaddr, arg, arg.Pos())
+ }
+ s := &Slice{X: a}
+ s.setType(st)
+ args[offset+np] = fn.emit(s)
+ args = args[:offset+np+1]
+ }
+ }
+ return args
+}
+
+// setCall emits to fn code to evaluate all the parameters of a function
+// call e, and populates *c with those values.
+//
+func (b *builder) setCall(fn *Function, e *ast.CallExpr, c *CallCommon) {
+ // First deal with the f(...) part and optional receiver.
+ b.setCallFunc(fn, e, c)
+
+ // Then append the other actual parameters.
+ sig, _ := fn.Pkg.typeOf(e.Fun).Underlying().(*types.Signature)
+ if sig == nil {
+ panic(fmt.Sprintf("no signature for call of %s", e.Fun))
+ }
+ c.Args = b.emitCallArgs(fn, sig, e, c.Args)
+}
+
+// assignOp emits to fn code to perform loc <op>= val.
+func (b *builder) assignOp(fn *Function, loc lvalue, val Value, op token.Token, pos token.Pos) {
+ oldv := loc.load(fn)
+ loc.store(fn, emitArith(fn, op, oldv, emitConv(fn, val, oldv.Type()), loc.typ(), pos))
+}
+
+// localValueSpec emits to fn code to define all of the vars in the
+// function-local ValueSpec, spec.
+//
+func (b *builder) localValueSpec(fn *Function, spec *ast.ValueSpec) {
+ switch {
+ case len(spec.Values) == len(spec.Names):
+ // e.g. var x, y = 0, 1
+ // 1:1 assignment
+ for i, id := range spec.Names {
+ if !isBlankIdent(id) {
+ fn.addLocalForIdent(id)
+ }
+ lval := b.addr(fn, id, false) // non-escaping
+ b.assign(fn, lval, spec.Values[i], true, nil)
+ }
+
+ case len(spec.Values) == 0:
+ // e.g. var x, y int
+ // Locals are implicitly zero-initialized.
+ for _, id := range spec.Names {
+ if !isBlankIdent(id) {
+ lhs := fn.addLocalForIdent(id)
+ if fn.debugInfo() {
+ emitDebugRef(fn, id, lhs, true)
+ }
+ }
+ }
+
+ default:
+ // e.g. var x, y = pos()
+ tuple := b.exprN(fn, spec.Values[0])
+ for i, id := range spec.Names {
+ if !isBlankIdent(id) {
+ fn.addLocalForIdent(id)
+ lhs := b.addr(fn, id, false) // non-escaping
+ lhs.store(fn, emitExtract(fn, tuple, i))
+ }
+ }
+ }
+}
+
+// assignStmt emits code to fn for a parallel assignment of rhss to lhss.
+// isDef is true if this is a short variable declaration (:=).
+//
+// Note the similarity with localValueSpec.
+//
+func (b *builder) assignStmt(fn *Function, lhss, rhss []ast.Expr, isDef bool) {
+ // Side effects of all LHSs and RHSs must occur in left-to-right order.
+ lvals := make([]lvalue, len(lhss))
+ isZero := make([]bool, len(lhss))
+ for i, lhs := range lhss {
+ var lval lvalue = blank{}
+ if !isBlankIdent(lhs) {
+ if isDef {
+ if obj := fn.Pkg.info.Defs[lhs.(*ast.Ident)]; obj != nil {
+ fn.addNamedLocal(obj)
+ isZero[i] = true
+ }
+ }
+ lval = b.addr(fn, lhs, false) // non-escaping
+ }
+ lvals[i] = lval
+ }
+ if len(lhss) == len(rhss) {
+ // Simple assignment: x = f() (!isDef)
+ // Parallel assignment: x, y = f(), g() (!isDef)
+ // or short var decl: x, y := f(), g() (isDef)
+ //
+ // In all cases, the RHSs may refer to the LHSs,
+ // so we need a storebuf.
+ var sb storebuf
+ for i := range rhss {
+ b.assign(fn, lvals[i], rhss[i], isZero[i], &sb)
+ }
+ sb.emit(fn)
+ } else {
+ // e.g. x, y = pos()
+ tuple := b.exprN(fn, rhss[0])
+ emitDebugRef(fn, rhss[0], tuple, false)
+ for i, lval := range lvals {
+ lval.store(fn, emitExtract(fn, tuple, i))
+ }
+ }
+}
+
+// arrayLen returns the length of the array whose composite literal elements are elts.
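+// For example, the elements of [...]int{0, 5: 1, 2} have maximum index 6,
+// so arrayLen returns 7.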
+func (b *builder) arrayLen(fn *Function, elts []ast.Expr) int64 {
+ var max int64 = -1
+ var i int64 = -1
+ for _, e := range elts {
+ if kv, ok := e.(*ast.KeyValueExpr); ok {
+ i = b.expr(fn, kv.Key).(*Const).Int64()
+ } else {
+ i++
+ }
+ if i > max {
+ max = i
+ }
+ }
+ return max + 1
+}
+
+// compLit emits to fn code to initialize a composite literal e at
+// address addr with type typ.
+//
+// Nested composite literals are recursively initialized in place
+// where possible. If isZero is true, compLit assumes that addr
+// holds the zero value for typ.
+//
+// Because the elements of a composite literal may refer to the
+// variables being updated, as in the second line below,
+// x := T{a: 1}
+// x = T{a: x.a}
+// all the reads must occur before all the writes. Thus all stores to
+// loc are emitted to the storebuf sb for later execution.
+//
+// A CompositeLit may have pointer type only in the recursive (nested)
+// case when the type name is implicit. e.g. in []*T{{}}, the inner
+// literal has type *T and behaves like &T{}.
+// In that case, addr must hold a T, not a *T.
+//
+func (b *builder) compLit(fn *Function, addr Value, e *ast.CompositeLit, isZero bool, sb *storebuf) {
+ typ := deref(fn.Pkg.typeOf(e))
+ switch t := typ.Underlying().(type) {
+ case *types.Struct:
+ if !isZero && len(e.Elts) != t.NumFields() {
+ // memclear
+ sb.store(&address{addr, e.Lbrace, nil},
+ zeroValue(fn, deref(addr.Type())))
+ isZero = true
+ }
+ for i, e := range e.Elts {
+ fieldIndex := i
+ pos := e.Pos()
+ if kv, ok := e.(*ast.KeyValueExpr); ok {
+ fname := kv.Key.(*ast.Ident).Name
+ for i, n := 0, t.NumFields(); i < n; i++ {
+ sf := t.Field(i)
+ if sf.Name() == fname {
+ fieldIndex = i
+ pos = kv.Colon
+ e = kv.Value
+ break
+ }
+ }
+ }
+ sf := t.Field(fieldIndex)
+ faddr := &FieldAddr{
+ X: addr,
+ Field: fieldIndex,
+ }
+ faddr.setType(types.NewPointer(sf.Type()))
+ fn.emit(faddr)
+ b.assign(fn, &address{addr: faddr, pos: pos, expr: e}, e, isZero, sb)
+ }
+
+ case *types.Array, *types.Slice:
+ var at *types.Array
+ var array Value
+ switch t := t.(type) {
+ case *types.Slice:
+ at = types.NewArray(t.Elem(), b.arrayLen(fn, e.Elts))
+ alloc := emitNew(fn, at, e.Lbrace)
+ alloc.Comment = "slicelit"
+ array = alloc
+ case *types.Array:
+ at = t
+ array = addr
+
+ if !isZero && int64(len(e.Elts)) != at.Len() {
+ // memclear
+ sb.store(&address{array, e.Lbrace, nil},
+ zeroValue(fn, deref(array.Type())))
+ }
+ }
+
+ var idx *Const
+ for _, e := range e.Elts {
+ pos := e.Pos()
+ if kv, ok := e.(*ast.KeyValueExpr); ok {
+ idx = b.expr(fn, kv.Key).(*Const)
+ pos = kv.Colon
+ e = kv.Value
+ } else {
+ var idxval int64
+ if idx != nil {
+ idxval = idx.Int64() + 1
+ }
+ idx = intConst(idxval)
+ }
+ iaddr := &IndexAddr{
+ X: array,
+ Index: idx,
+ }
+ iaddr.setType(types.NewPointer(at.Elem()))
+ fn.emit(iaddr)
+ if t != at { // slice
+ // backing array is unaliased => storebuf not needed.
+ b.assign(fn, &address{addr: iaddr, pos: pos, expr: e}, e, true, nil)
+ } else {
+ b.assign(fn, &address{addr: iaddr, pos: pos, expr: e}, e, true, sb)
+ }
+ }
+
+ if t != at { // slice
+ s := &Slice{X: array}
+ s.setPos(e.Lbrace)
+ s.setType(typ)
+ sb.store(&address{addr: addr, pos: e.Lbrace, expr: e}, fn.emit(s))
+ }
+
+ case *types.Map:
+ m := &MakeMap{Reserve: intConst(int64(len(e.Elts)))}
+ m.setPos(e.Lbrace)
+ m.setType(typ)
+ fn.emit(m)
+ for _, e := range e.Elts {
+ e := e.(*ast.KeyValueExpr)
+
+ // If a key expression in a map literal is itself a
+ // composite literal, the type may be omitted.
+ // For example:
+ // map[*struct{}]bool{{}: true}
+ // An &-operation may be implied:
+ // map[*struct{}]bool{&struct{}{}: true}
+ var key Value
+ if _, ok := unparen(e.Key).(*ast.CompositeLit); ok && isPointer(t.Key()) {
+ // A CompositeLit never evaluates to a pointer,
+ // so if the type of the location is a pointer,
+ // an &-operation is implied.
+ key = b.addr(fn, e.Key, true).address(fn)
+ } else {
+ key = b.expr(fn, e.Key)
+ }
+
+ loc := element{
+ m: m,
+ k: emitConv(fn, key, t.Key()),
+ t: t.Elem(),
+ pos: e.Colon,
+ }
+
+ // We call assign() only because it takes care
+ // of any &-operation required in the recursive
+ // case, e.g.,
+ // map[int]*struct{}{0: {}} implies &struct{}{}.
+ // In-place update is of course impossible,
+ // and no storebuf is needed.
+ b.assign(fn, &loc, e.Value, true, nil)
+ }
+ sb.store(&address{addr: addr, pos: e.Lbrace, expr: e}, m)
+
+ default:
+ panic("unexpected CompositeLit type: " + t.String())
+ }
+}
+
+// switchStmt emits to fn code for the switch statement s, optionally
+// labelled by label.
+//
+func (b *builder) switchStmt(fn *Function, s *ast.SwitchStmt, label *lblock) {
+ // We treat SwitchStmt like a sequential if-else chain.
+ // Multiway dispatch can be recovered later by ssautil.Switches()
+ // to those cases that are free of side effects.
+ if s.Init != nil {
+ b.stmt(fn, s.Init)
+ }
+ var tag Value = vTrue
+ if s.Tag != nil {
+ tag = b.expr(fn, s.Tag)
+ }
+ done := fn.newBasicBlock("switch.done")
+ if label != nil {
+ label._break = done
+ }
+ // We pull the default case (if present) down to the end.
+ // But each fallthrough label must point to the next
+ // body block in source order, so we preallocate a
+ // body block (fallthru) for the next case.
+ // Unfortunately this makes for a confusing block order.
+ var dfltBody *[]ast.Stmt
+ var dfltFallthrough *BasicBlock
+ var fallthru, dfltBlock *BasicBlock
+ ncases := len(s.Body.List)
+ for i, clause := range s.Body.List {
+ body := fallthru
+ if body == nil {
+ body = fn.newBasicBlock("switch.body") // first case only
+ }
+
+ // Preallocate body block for the next case.
+ fallthru = done
+ if i+1 < ncases {
+ fallthru = fn.newBasicBlock("switch.body")
+ }
+
+ cc := clause.(*ast.CaseClause)
+ if cc.List == nil {
+ // Default case.
+ dfltBody = &cc.Body
+ dfltFallthrough = fallthru
+ dfltBlock = body
+ continue
+ }
+
+ var nextCond *BasicBlock
+ for _, cond := range cc.List {
+ nextCond = fn.newBasicBlock("switch.next")
+ // TODO(adonovan): opt: when tag==vTrue, we'd
+ // get better code if we use b.cond(cond)
+ // instead of BinOp(EQL, tag, b.expr(cond))
+ // followed by If. Don't forget conversions
+ // though.
+ cond := emitCompare(fn, token.EQL, tag, b.expr(fn, cond), cond.Pos())
+ emitIf(fn, cond, body, nextCond)
+ fn.currentBlock = nextCond
+ }
+ fn.currentBlock = body
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ _fallthrough: fallthru,
+ }
+ b.stmtList(fn, cc.Body)
+ fn.targets = fn.targets.tail
+ emitJump(fn, done)
+ fn.currentBlock = nextCond
+ }
+ if dfltBlock != nil {
+ emitJump(fn, dfltBlock)
+ fn.currentBlock = dfltBlock
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ _fallthrough: dfltFallthrough,
+ }
+ b.stmtList(fn, *dfltBody)
+ fn.targets = fn.targets.tail
+ }
+ emitJump(fn, done)
+ fn.currentBlock = done
+}
+
+// typeSwitchStmt emits to fn code for the type switch statement s, optionally
+// labelled by label.
+//
+func (b *builder) typeSwitchStmt(fn *Function, s *ast.TypeSwitchStmt, label *lblock) {
+ // We treat TypeSwitchStmt like a sequential if-else chain.
+ // Multiway dispatch can be recovered later by ssautil.Switches().
+
+ // Typeswitch lowering:
+ //
+ // var x X
+ // switch y := x.(type) {
+ // case T1, T2: S1 // >1 (y := x)
+ // case nil: SN // nil (y := x)
+ // default: SD // 0 types (y := x)
+ // case T3: S3 // 1 type (y := x.(T3))
+ // }
+ //
+ // ...s.Init...
+ // x := eval x
+ // .caseT1:
+ // t1, ok1 := typeswitch,ok x <T1>
+ // if ok1 then goto S1 else goto .caseT2
+ // .caseT2:
+ // t2, ok2 := typeswitch,ok x <T2>
+ // if ok2 then goto S1 else goto .caseNil
+ // .S1:
+ // y := x
+ // ...S1...
+ // goto done
+ // .caseNil:
+ // if x == nil then goto SN else goto .caseT3
+ // .SN:
+ // y := x
+ // ...SN...
+ // goto done
+ // .caseT3:
+ // t3, ok3 := typeswitch,ok x <T3>
+ // if ok3 then goto S3 else goto default
+ // .S3:
+ // y := t3
+ // ...S3...
+ // goto done
+ // .default:
+ // y := x
+ // ...SD...
+ // goto done
+ // .done:
+
+ if s.Init != nil {
+ b.stmt(fn, s.Init)
+ }
+
+ var x Value
+ switch ass := s.Assign.(type) {
+ case *ast.ExprStmt: // x.(type)
+ x = b.expr(fn, unparen(ass.X).(*ast.TypeAssertExpr).X)
+ case *ast.AssignStmt: // y := x.(type)
+ x = b.expr(fn, unparen(ass.Rhs[0]).(*ast.TypeAssertExpr).X)
+ }
+
+ done := fn.newBasicBlock("typeswitch.done")
+ if label != nil {
+ label._break = done
+ }
+ var default_ *ast.CaseClause
+ for _, clause := range s.Body.List {
+ cc := clause.(*ast.CaseClause)
+ if cc.List == nil {
+ default_ = cc
+ continue
+ }
+ body := fn.newBasicBlock("typeswitch.body")
+ var next *BasicBlock
+ var casetype types.Type
+ var ti Value // ti, ok := typeassert,ok x <Ti>
+ for _, cond := range cc.List {
+ next = fn.newBasicBlock("typeswitch.next")
+ casetype = fn.Pkg.typeOf(cond)
+ var condv Value
+ if casetype == tUntypedNil {
+ condv = emitCompare(fn, token.EQL, x, nilConst(x.Type()), token.NoPos)
+ ti = x
+ } else {
+ yok := emitTypeTest(fn, x, casetype, cc.Case)
+ ti = emitExtract(fn, yok, 0)
+ condv = emitExtract(fn, yok, 1)
+ }
+ emitIf(fn, condv, body, next)
+ fn.currentBlock = next
+ }
+ if len(cc.List) != 1 {
+ ti = x
+ }
+ fn.currentBlock = body
+ b.typeCaseBody(fn, cc, ti, done)
+ fn.currentBlock = next
+ }
+ if default_ != nil {
+ b.typeCaseBody(fn, default_, x, done)
+ } else {
+ emitJump(fn, done)
+ }
+ fn.currentBlock = done
+}
+
+func (b *builder) typeCaseBody(fn *Function, cc *ast.CaseClause, x Value, done *BasicBlock) {
+ if obj := fn.Pkg.info.Implicits[cc]; obj != nil {
+ // In a switch y := x.(type), each case clause
+ // implicitly declares a distinct object y.
+ // In a single-type case, y has that type.
+ // In multi-type cases, 'case nil' and default,
+ // y has the same type as the interface operand.
+ emitStore(fn, fn.addNamedLocal(obj), x, obj.Pos())
+ }
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ }
+ b.stmtList(fn, cc.Body)
+ fn.targets = fn.targets.tail
+ emitJump(fn, done)
+}
+
+// selectStmt emits to fn code for the select statement s, optionally
+// labelled by label.
+//
+func (b *builder) selectStmt(fn *Function, s *ast.SelectStmt, label *lblock) {
+ // A blocking select of a single case degenerates to a
+ // simple send or receive.
+ // TODO(adonovan): opt: is this optimization worth its weight?
+ if len(s.Body.List) == 1 {
+ clause := s.Body.List[0].(*ast.CommClause)
+ if clause.Comm != nil {
+ b.stmt(fn, clause.Comm)
+ done := fn.newBasicBlock("select.done")
+ if label != nil {
+ label._break = done
+ }
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ }
+ b.stmtList(fn, clause.Body)
+ fn.targets = fn.targets.tail
+ emitJump(fn, done)
+ fn.currentBlock = done
+ return
+ }
+ }
+
+ // First evaluate all channels in all cases, and find
+ // the directions of each state.
+ var states []*SelectState
+ blocking := true
+ debugInfo := fn.debugInfo()
+ for _, clause := range s.Body.List {
+ var st *SelectState
+ switch comm := clause.(*ast.CommClause).Comm.(type) {
+ case nil: // default case
+ blocking = false
+ continue
+
+ case *ast.SendStmt: // ch<- i
+ ch := b.expr(fn, comm.Chan)
+ st = &SelectState{
+ Dir: types.SendOnly,
+ Chan: ch,
+ Send: emitConv(fn, b.expr(fn, comm.Value),
+ ch.Type().Underlying().(*types.Chan).Elem()),
+ Pos: comm.Arrow,
+ }
+ if debugInfo {
+ st.DebugNode = comm
+ }
+
+ case *ast.AssignStmt: // x := <-ch
+ recv := unparen(comm.Rhs[0]).(*ast.UnaryExpr)
+ st = &SelectState{
+ Dir: types.RecvOnly,
+ Chan: b.expr(fn, recv.X),
+ Pos: recv.OpPos,
+ }
+ if debugInfo {
+ st.DebugNode = recv
+ }
+
+ case *ast.ExprStmt: // <-ch
+ recv := unparen(comm.X).(*ast.UnaryExpr)
+ st = &SelectState{
+ Dir: types.RecvOnly,
+ Chan: b.expr(fn, recv.X),
+ Pos: recv.OpPos,
+ }
+ if debugInfo {
+ st.DebugNode = recv
+ }
+ }
+ states = append(states, st)
+ }
+
+ // We dispatch on the (fair) result of Select using a
+ // sequential if-else chain, in effect:
+ //
+ // idx, recvOk, r0...r_n-1 := select(...)
+ // if idx == 0 { // receive on channel 0 (first receive => r0)
+ // x, ok := r0, recvOk
+ // ...state0...
+	// } else if idx == 1 { // send on channel 1
+ // ...state1...
+ // } else {
+ // ...default...
+ // }
+ sel := &Select{
+ States: states,
+ Blocking: blocking,
+ }
+ sel.setPos(s.Select)
+ var vars []*types.Var
+ vars = append(vars, varIndex, varOk)
+ for _, st := range states {
+ if st.Dir == types.RecvOnly {
+ tElem := st.Chan.Type().Underlying().(*types.Chan).Elem()
+ vars = append(vars, anonVar(tElem))
+ }
+ }
+ sel.setType(types.NewTuple(vars...))
+
+ fn.emit(sel)
+ idx := emitExtract(fn, sel, 0)
+
+ done := fn.newBasicBlock("select.done")
+ if label != nil {
+ label._break = done
+ }
+
+ var defaultBody *[]ast.Stmt
+ state := 0
+ r := 2 // index in 'sel' tuple of value; increments if st.Dir==RECV
+ for _, cc := range s.Body.List {
+ clause := cc.(*ast.CommClause)
+ if clause.Comm == nil {
+ defaultBody = &clause.Body
+ continue
+ }
+ body := fn.newBasicBlock("select.body")
+ next := fn.newBasicBlock("select.next")
+ emitIf(fn, emitCompare(fn, token.EQL, idx, intConst(int64(state)), token.NoPos), body, next)
+ fn.currentBlock = body
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ }
+ switch comm := clause.Comm.(type) {
+ case *ast.ExprStmt: // <-ch
+ if debugInfo {
+ v := emitExtract(fn, sel, r)
+ emitDebugRef(fn, states[state].DebugNode.(ast.Expr), v, false)
+ }
+ r++
+
+ case *ast.AssignStmt: // x := <-states[state].Chan
+ if comm.Tok == token.DEFINE {
+ fn.addLocalForIdent(comm.Lhs[0].(*ast.Ident))
+ }
+ x := b.addr(fn, comm.Lhs[0], false) // non-escaping
+ v := emitExtract(fn, sel, r)
+ if debugInfo {
+ emitDebugRef(fn, states[state].DebugNode.(ast.Expr), v, false)
+ }
+ x.store(fn, v)
+
+ if len(comm.Lhs) == 2 { // x, ok := ...
+ if comm.Tok == token.DEFINE {
+ fn.addLocalForIdent(comm.Lhs[1].(*ast.Ident))
+ }
+ ok := b.addr(fn, comm.Lhs[1], false) // non-escaping
+ ok.store(fn, emitExtract(fn, sel, 1))
+ }
+ r++
+ }
+ b.stmtList(fn, clause.Body)
+ fn.targets = fn.targets.tail
+ emitJump(fn, done)
+ fn.currentBlock = next
+ state++
+ }
+ if defaultBody != nil {
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ }
+ b.stmtList(fn, *defaultBody)
+ fn.targets = fn.targets.tail
+ } else {
+ // A blocking select must match some case.
+ // (This should really be a runtime.errorString, not a string.)
+ fn.emit(&Panic{
+ X: emitConv(fn, stringConst("blocking select matched no case"), tEface),
+ })
+ fn.currentBlock = fn.newBasicBlock("unreachable")
+ }
+ emitJump(fn, done)
+ fn.currentBlock = done
+}
+
+// forStmt emits to fn code for the for statement s, optionally
+// labelled by label.
+//
+func (b *builder) forStmt(fn *Function, s *ast.ForStmt, label *lblock) {
+ // ...init...
+ // jump loop
+ // loop:
+ // if cond goto body else done
+ // body:
+ // ...body...
+ // jump post
+ // post: (target of continue)
+ // ...post...
+ // jump loop
+ // done: (target of break)
+ if s.Init != nil {
+ b.stmt(fn, s.Init)
+ }
+ body := fn.newBasicBlock("for.body")
+ done := fn.newBasicBlock("for.done") // target of 'break'
+ loop := body // target of back-edge
+ if s.Cond != nil {
+ loop = fn.newBasicBlock("for.loop")
+ }
+ cont := loop // target of 'continue'
+ if s.Post != nil {
+ cont = fn.newBasicBlock("for.post")
+ }
+ if label != nil {
+ label._break = done
+ label._continue = cont
+ }
+ emitJump(fn, loop)
+ fn.currentBlock = loop
+ if loop != body {
+ b.cond(fn, s.Cond, body, done)
+ fn.currentBlock = body
+ }
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ _continue: cont,
+ }
+ b.stmt(fn, s.Body)
+ fn.targets = fn.targets.tail
+ emitJump(fn, cont)
+
+ if s.Post != nil {
+ fn.currentBlock = cont
+ b.stmt(fn, s.Post)
+ emitJump(fn, loop) // back-edge
+ }
+ fn.currentBlock = done
+}
+
+// rangeIndexed emits to fn the header for an integer-indexed loop
+// over array, *array or slice value x.
+// The v result is defined only if tv is non-nil.
+// pos is the position of the "for" token.
+//
+func (b *builder) rangeIndexed(fn *Function, x Value, tv types.Type, pos token.Pos) (k, v Value, loop, done *BasicBlock) {
+ //
+ // length = len(x)
+ // index = -1
+ // loop: (target of continue)
+ // index++
+ // if index < length goto body else done
+ // body:
+ // k = index
+ // v = x[index]
+ // ...body...
+ // jump loop
+ // done: (target of break)
+
+ // Determine number of iterations.
+ var length Value
+ if arr, ok := deref(x.Type()).Underlying().(*types.Array); ok {
+ // For array or *array, the number of iterations is
+ // known statically thanks to the type. We avoid a
+ // data dependence upon x, permitting later dead-code
+ // elimination if x is pure, static unrolling, etc.
+ // Ranging over a nil *array may have >0 iterations.
+ // We still generate code for x, in case it has effects.
+ length = intConst(arr.Len())
+ } else {
+ // length = len(x).
+ var c Call
+ c.Call.Value = makeLen(x.Type())
+ c.Call.Args = []Value{x}
+ c.setType(tInt)
+ length = fn.emit(&c)
+ }
+
+ index := fn.addLocal(tInt, token.NoPos)
+ emitStore(fn, index, intConst(-1), pos)
+
+ loop = fn.newBasicBlock("rangeindex.loop")
+ emitJump(fn, loop)
+ fn.currentBlock = loop
+
+ incr := &BinOp{
+ Op: token.ADD,
+ X: emitLoad(fn, index),
+ Y: vOne,
+ }
+ incr.setType(tInt)
+ emitStore(fn, index, fn.emit(incr), pos)
+
+ body := fn.newBasicBlock("rangeindex.body")
+ done = fn.newBasicBlock("rangeindex.done")
+ emitIf(fn, emitCompare(fn, token.LSS, incr, length, token.NoPos), body, done)
+ fn.currentBlock = body
+
+ k = emitLoad(fn, index)
+ if tv != nil {
+ switch t := x.Type().Underlying().(type) {
+ case *types.Array:
+ instr := &Index{
+ X: x,
+ Index: k,
+ }
+ instr.setType(t.Elem())
+ v = fn.emit(instr)
+
+ case *types.Pointer: // *array
+ instr := &IndexAddr{
+ X: x,
+ Index: k,
+ }
+ instr.setType(types.NewPointer(t.Elem().Underlying().(*types.Array).Elem()))
+ v = emitLoad(fn, fn.emit(instr))
+
+ case *types.Slice:
+ instr := &IndexAddr{
+ X: x,
+ Index: k,
+ }
+ instr.setType(types.NewPointer(t.Elem()))
+ v = emitLoad(fn, fn.emit(instr))
+
+ default:
+ panic("rangeIndexed x:" + t.String())
+ }
+ }
+ return
+}
+
+// rangeIter emits to fn the header for a loop using
+// Range/Next/Extract to iterate over map or string value x.
+// tk and tv are the types of the key/value results k and v, or nil
+// if the respective component is not wanted.
+//
+func (b *builder) rangeIter(fn *Function, x Value, tk, tv types.Type, pos token.Pos) (k, v Value, loop, done *BasicBlock) {
+ //
+ // it = range x
+ // loop: (target of continue)
+ // okv = next it (ok, key, value)
+ // ok = extract okv #0
+ // if ok goto body else done
+ // body:
+ // k = extract okv #1
+ // v = extract okv #2
+ // ...body...
+ // jump loop
+ // done: (target of break)
+ //
+
+ if tk == nil {
+ tk = tInvalid
+ }
+ if tv == nil {
+ tv = tInvalid
+ }
+
+ rng := &Range{X: x}
+ rng.setPos(pos)
+ rng.setType(tRangeIter)
+ it := fn.emit(rng)
+
+ loop = fn.newBasicBlock("rangeiter.loop")
+ emitJump(fn, loop)
+ fn.currentBlock = loop
+
+ _, isString := x.Type().Underlying().(*types.Basic)
+
+ okv := &Next{
+ Iter: it,
+ IsString: isString,
+ }
+ okv.setType(types.NewTuple(
+ varOk,
+ newVar("k", tk),
+ newVar("v", tv),
+ ))
+ fn.emit(okv)
+
+ body := fn.newBasicBlock("rangeiter.body")
+ done = fn.newBasicBlock("rangeiter.done")
+ emitIf(fn, emitExtract(fn, okv, 0), body, done)
+ fn.currentBlock = body
+
+ if tk != tInvalid {
+ k = emitExtract(fn, okv, 1)
+ }
+ if tv != tInvalid {
+ v = emitExtract(fn, okv, 2)
+ }
+ return
+}
+
+// rangeChan emits to fn the header for a loop that receives from
+// channel x until it fails.
+// tk is the channel's element type, or nil if the k result is
+// not wanted
+// pos is the position of the '=' or ':=' token.
+//
+func (b *builder) rangeChan(fn *Function, x Value, tk types.Type, pos token.Pos) (k Value, loop, done *BasicBlock) {
+ //
+ // loop: (target of continue)
+ // ko = <-x (key, ok)
+ // ok = extract ko #1
+ // if ok goto body else done
+ // body:
+ // k = extract ko #0
+ // ...
+ // goto loop
+ // done: (target of break)
+
+ loop = fn.newBasicBlock("rangechan.loop")
+ emitJump(fn, loop)
+ fn.currentBlock = loop
+ recv := &UnOp{
+ Op: token.ARROW,
+ X: x,
+ CommaOk: true,
+ }
+ recv.setPos(pos)
+ recv.setType(types.NewTuple(
+ newVar("k", x.Type().Underlying().(*types.Chan).Elem()),
+ varOk,
+ ))
+ ko := fn.emit(recv)
+ body := fn.newBasicBlock("rangechan.body")
+ done = fn.newBasicBlock("rangechan.done")
+ emitIf(fn, emitExtract(fn, ko, 1), body, done)
+ fn.currentBlock = body
+ if tk != nil {
+ k = emitExtract(fn, ko, 0)
+ }
+ return
+}
+
+// rangeStmt emits to fn code for the range statement s, optionally
+// labelled by label.
+//
+func (b *builder) rangeStmt(fn *Function, s *ast.RangeStmt, label *lblock) {
+ var tk, tv types.Type
+ if s.Key != nil && !isBlankIdent(s.Key) {
+ tk = fn.Pkg.typeOf(s.Key)
+ }
+ if s.Value != nil && !isBlankIdent(s.Value) {
+ tv = fn.Pkg.typeOf(s.Value)
+ }
+
+ // If iteration variables are defined (:=), this
+ // occurs once outside the loop.
+ //
+ // Unlike a short variable declaration, a RangeStmt
+ // using := never redeclares an existing variable; it
+ // always creates a new one.
+ if s.Tok == token.DEFINE {
+ if tk != nil {
+ fn.addLocalForIdent(s.Key.(*ast.Ident))
+ }
+ if tv != nil {
+ fn.addLocalForIdent(s.Value.(*ast.Ident))
+ }
+ }
+
+ x := b.expr(fn, s.X)
+
+ var k, v Value
+ var loop, done *BasicBlock
+ switch rt := x.Type().Underlying().(type) {
+ case *types.Slice, *types.Array, *types.Pointer: // *array
+ k, v, loop, done = b.rangeIndexed(fn, x, tv, s.For)
+
+ case *types.Chan:
+ k, loop, done = b.rangeChan(fn, x, tk, s.For)
+
+ case *types.Map, *types.Basic: // string
+ k, v, loop, done = b.rangeIter(fn, x, tk, tv, s.For)
+
+ default:
+ panic("Cannot range over: " + rt.String())
+ }
+
+ // Evaluate both LHS expressions before we update either.
+ var kl, vl lvalue
+ if tk != nil {
+ kl = b.addr(fn, s.Key, false) // non-escaping
+ }
+ if tv != nil {
+ vl = b.addr(fn, s.Value, false) // non-escaping
+ }
+ if tk != nil {
+ kl.store(fn, k)
+ }
+ if tv != nil {
+ vl.store(fn, v)
+ }
+
+ if label != nil {
+ label._break = done
+ label._continue = loop
+ }
+
+ fn.targets = &targets{
+ tail: fn.targets,
+ _break: done,
+ _continue: loop,
+ }
+ b.stmt(fn, s.Body)
+ fn.targets = fn.targets.tail
+ emitJump(fn, loop) // back-edge
+ fn.currentBlock = done
+}
+
+// stmt lowers statement s to SSA form, emitting code to fn.
+func (b *builder) stmt(fn *Function, _s ast.Stmt) {
+ // The label of the current statement. If non-nil, its _goto
+ // target is always set; its _break and _continue are set only
+ // within the body of switch/typeswitch/select/for/range.
+ // It is effectively an additional default-nil parameter of stmt().
+ var label *lblock
+start:
+ switch s := _s.(type) {
+ case *ast.EmptyStmt:
+ // ignore. (Usually removed by gofmt.)
+
+ case *ast.DeclStmt: // Con, Var or Typ
+ d := s.Decl.(*ast.GenDecl)
+ if d.Tok == token.VAR {
+ for _, spec := range d.Specs {
+ if vs, ok := spec.(*ast.ValueSpec); ok {
+ b.localValueSpec(fn, vs)
+ }
+ }
+ }
+
+ case *ast.LabeledStmt:
+ label = fn.labelledBlock(s.Label)
+ emitJump(fn, label._goto)
+ fn.currentBlock = label._goto
+ _s = s.Stmt
+ goto start // effectively: tailcall stmt(fn, s.Stmt, label)
+
+ case *ast.ExprStmt:
+ b.expr(fn, s.X)
+
+ case *ast.SendStmt:
+ fn.emit(&Send{
+ Chan: b.expr(fn, s.Chan),
+ X: emitConv(fn, b.expr(fn, s.Value),
+ fn.Pkg.typeOf(s.Chan).Underlying().(*types.Chan).Elem()),
+ pos: s.Arrow,
+ })
+
+ case *ast.IncDecStmt:
+ op := token.ADD
+ if s.Tok == token.DEC {
+ op = token.SUB
+ }
+ loc := b.addr(fn, s.X, false)
+ b.assignOp(fn, loc, NewConst(constant.MakeInt64(1), loc.typ()), op, s.Pos())
+
+ case *ast.AssignStmt:
+ switch s.Tok {
+ case token.ASSIGN, token.DEFINE:
+ b.assignStmt(fn, s.Lhs, s.Rhs, s.Tok == token.DEFINE)
+
+ default: // +=, etc.
+ op := s.Tok + token.ADD - token.ADD_ASSIGN
+ b.assignOp(fn, b.addr(fn, s.Lhs[0], false), b.expr(fn, s.Rhs[0]), op, s.Pos())
+ }
+
+ case *ast.GoStmt:
+ // The "intrinsics" new/make/len/cap are forbidden here.
+ // panic is treated like an ordinary function call.
+ v := Go{pos: s.Go}
+ b.setCall(fn, s.Call, &v.Call)
+ fn.emit(&v)
+
+ case *ast.DeferStmt:
+ // The "intrinsics" new/make/len/cap are forbidden here.
+ // panic is treated like an ordinary function call.
+ v := Defer{pos: s.Defer}
+ b.setCall(fn, s.Call, &v.Call)
+ fn.emit(&v)
+
+ // A deferred call can cause recovery from panic,
+ // and control resumes at the Recover block.
+ createRecoverBlock(fn)
+
+ case *ast.ReturnStmt:
+ var results []Value
+ if len(s.Results) == 1 && fn.Signature.Results().Len() > 1 {
+ // Return of one expression in a multi-valued function.
+ tuple := b.exprN(fn, s.Results[0])
+ ttuple := tuple.Type().(*types.Tuple)
+ for i, n := 0, ttuple.Len(); i < n; i++ {
+ results = append(results,
+ emitConv(fn, emitExtract(fn, tuple, i),
+ fn.Signature.Results().At(i).Type()))
+ }
+ } else {
+ // 1:1 return, or no-arg return in non-void function.
+ for i, r := range s.Results {
+ v := emitConv(fn, b.expr(fn, r), fn.Signature.Results().At(i).Type())
+ results = append(results, v)
+ }
+ }
+ if fn.namedResults != nil {
+ // Function has named result parameters (NRPs).
+ // Perform parallel assignment of return operands to NRPs.
+ for i, r := range results {
+ emitStore(fn, fn.namedResults[i], r, s.Return)
+ }
+ }
+ // Run function calls deferred in this
+ // function when explicitly returning from it.
+ fn.emit(new(RunDefers))
+ if fn.namedResults != nil {
+ // Reload NRPs to form the result tuple.
+ results = results[:0]
+ for _, r := range fn.namedResults {
+ results = append(results, emitLoad(fn, r))
+ }
+ }
+ fn.emit(&Return{Results: results, pos: s.Return})
+ fn.currentBlock = fn.newBasicBlock("unreachable")
+
+ case *ast.BranchStmt:
+ var block *BasicBlock
+ switch s.Tok {
+ case token.BREAK:
+ if s.Label != nil {
+ block = fn.labelledBlock(s.Label)._break
+ } else {
+ for t := fn.targets; t != nil && block == nil; t = t.tail {
+ block = t._break
+ }
+ }
+
+ case token.CONTINUE:
+ if s.Label != nil {
+ block = fn.labelledBlock(s.Label)._continue
+ } else {
+ for t := fn.targets; t != nil && block == nil; t = t.tail {
+ block = t._continue
+ }
+ }
+
+ case token.FALLTHROUGH:
+ for t := fn.targets; t != nil && block == nil; t = t.tail {
+ block = t._fallthrough
+ }
+
+ case token.GOTO:
+ block = fn.labelledBlock(s.Label)._goto
+ }
+ emitJump(fn, block)
+ fn.currentBlock = fn.newBasicBlock("unreachable")
+
+ case *ast.BlockStmt:
+ b.stmtList(fn, s.List)
+
+ case *ast.IfStmt:
+ if s.Init != nil {
+ b.stmt(fn, s.Init)
+ }
+ then := fn.newBasicBlock("if.then")
+ done := fn.newBasicBlock("if.done")
+ els := done
+ if s.Else != nil {
+ els = fn.newBasicBlock("if.else")
+ }
+ b.cond(fn, s.Cond, then, els)
+ fn.currentBlock = then
+ b.stmt(fn, s.Body)
+ emitJump(fn, done)
+
+ if s.Else != nil {
+ fn.currentBlock = els
+ b.stmt(fn, s.Else)
+ emitJump(fn, done)
+ }
+
+ fn.currentBlock = done
+
+ case *ast.SwitchStmt:
+ b.switchStmt(fn, s, label)
+
+ case *ast.TypeSwitchStmt:
+ b.typeSwitchStmt(fn, s, label)
+
+ case *ast.SelectStmt:
+ b.selectStmt(fn, s, label)
+
+ case *ast.ForStmt:
+ b.forStmt(fn, s, label)
+
+ case *ast.RangeStmt:
+ b.rangeStmt(fn, s, label)
+
+ default:
+ panic(fmt.Sprintf("unexpected statement kind: %T", s))
+ }
+}
+
+// buildFunction builds SSA code for the body of function fn. Idempotent.
+func (b *builder) buildFunction(fn *Function) {
+ if fn.Blocks != nil {
+ return // building already started
+ }
+
+ var recvField *ast.FieldList
+ var body *ast.BlockStmt
+ var functype *ast.FuncType
+ switch n := fn.syntax.(type) {
+ case nil:
+ return // not a Go source function. (Synthetic, or from object file.)
+ case *ast.FuncDecl:
+ functype = n.Type
+ recvField = n.Recv
+ body = n.Body
+ case *ast.FuncLit:
+ functype = n.Type
+ body = n.Body
+ default:
+ panic(n)
+ }
+
+ if body == nil {
+ // External function.
+ if fn.Params == nil {
+ // This condition ensures we add a non-empty
+ // params list once only, but we may attempt
+ // the degenerate empty case repeatedly.
+ // TODO(adonovan): opt: don't do that.
+
+ // We set Function.Params even though there is no body
+ // code to reference them. This simplifies clients.
+ if recv := fn.Signature.Recv(); recv != nil {
+ fn.addParamObj(recv)
+ }
+ params := fn.Signature.Params()
+ for i, n := 0, params.Len(); i < n; i++ {
+ fn.addParamObj(params.At(i))
+ }
+ }
+ return
+ }
+ if fn.Prog.mode&LogSource != 0 {
+ defer logStack("build function %s @ %s", fn, fn.Prog.Fset.Position(fn.pos))()
+ }
+ fn.startBody()
+ fn.createSyntacticParams(recvField, functype)
+ b.stmt(fn, body)
+ if cb := fn.currentBlock; cb != nil && (cb == fn.Blocks[0] || cb == fn.Recover || cb.Preds != nil) {
+ // Control fell off the end of the function's body block.
+ //
+ // Block optimizations eliminate the current block, if
+ // unreachable. It is a builder invariant that
+ // if this no-arg return is ill-typed for
+ // fn.Signature.Results, this block must be
+ // unreachable. The sanity checker checks this.
+ fn.emit(new(RunDefers))
+ fn.emit(new(Return))
+ }
+ fn.finishBody()
+}
+
+// buildFuncDecl builds SSA code for the function or method declared
+// by decl in package pkg.
+//
+func (b *builder) buildFuncDecl(pkg *Package, decl *ast.FuncDecl) {
+ id := decl.Name
+ if isBlankIdent(id) {
+ return // discard
+ }
+ fn := pkg.values[pkg.info.Defs[id]].(*Function)
+ if decl.Recv == nil && id.Name == "init" {
+ var v Call
+ v.Call.Value = fn
+ v.setType(types.NewTuple())
+ pkg.init.emit(&v)
+ }
+ b.buildFunction(fn)
+}
+
+// Build calls Package.Build for each package in prog.
+// Building occurs in parallel unless the BuildSerially mode flag was set.
+//
+// Build is intended for whole-program analysis; a typical compiler
+// need only build a single package.
+//
+// Build is idempotent and thread-safe.
+//
+func (prog *Program) Build() {
+ var wg sync.WaitGroup
+ for _, p := range prog.packages {
+ if prog.mode&BuildSerially != 0 {
+ p.Build()
+ } else {
+ wg.Add(1)
+ go func(p *Package) {
+ p.Build()
+ wg.Done()
+ }(p)
+ }
+ }
+ wg.Wait()
+}
+
+// Build builds SSA code for all functions and vars in package p.
+//
+// Precondition: CreatePackage must have been called for all of p's
+// direct imports (and hence its direct imports must have been
+// error-free).
+//
+// Build is idempotent and thread-safe.
+//
+func (p *Package) Build() { p.buildOnce.Do(p.build) }
+
+func (p *Package) build() {
+ if p.info == nil {
+ return // synthetic package, e.g. "testmain"
+ }
+
+ // Ensure we have runtime type info for all exported members.
+ // TODO(adonovan): ideally belongs in memberFromObject, but
+ // that would require package creation in topological order.
+ for name, mem := range p.Members {
+ if ast.IsExported(name) {
+ p.Prog.needMethodsOf(mem.Type())
+ }
+ }
+ if p.Prog.mode&LogSource != 0 {
+ defer logStack("build %s", p)()
+ }
+ init := p.init
+ init.startBody()
+
+ var done *BasicBlock
+
+ if p.Prog.mode&BareInits == 0 {
+ // Make init() skip if package is already initialized.
+ initguard := p.Var("init$guard")
+ doinit := init.newBasicBlock("init.start")
+ done = init.newBasicBlock("init.done")
+ emitIf(init, emitLoad(init, initguard), done, doinit)
+ init.currentBlock = doinit
+ emitStore(init, initguard, vTrue, token.NoPos)
+
+ // Call the init() function of each package we import.
+ for _, pkg := range p.Pkg.Imports() {
+ prereq := p.Prog.packages[pkg]
+ if prereq == nil {
+ panic(fmt.Sprintf("Package(%q).Build(): unsatisfied import: Program.CreatePackage(%q) was not called", p.Pkg.Path(), pkg.Path()))
+ }
+ var v Call
+ v.Call.Value = prereq.init
+ v.Call.pos = init.pos
+ v.setType(types.NewTuple())
+ init.emit(&v)
+ }
+ }
+
+ var b builder
+
+ // Initialize package-level vars in correct order.
+ for _, varinit := range p.info.InitOrder {
+ if init.Prog.mode&LogSource != 0 {
+ fmt.Fprintf(os.Stderr, "build global initializer %v @ %s\n",
+ varinit.Lhs, p.Prog.Fset.Position(varinit.Rhs.Pos()))
+ }
+ if len(varinit.Lhs) == 1 {
+ // 1:1 initialization: var x, y = a(), b()
+ var lval lvalue
+ if v := varinit.Lhs[0]; v.Name() != "_" {
+ lval = &address{addr: p.values[v].(*Global), pos: v.Pos()}
+ } else {
+ lval = blank{}
+ }
+ b.assign(init, lval, varinit.Rhs, true, nil)
+ } else {
+ // n:1 initialization: var x, y = f()
+ tuple := b.exprN(init, varinit.Rhs)
+ for i, v := range varinit.Lhs {
+ if v.Name() == "_" {
+ continue
+ }
+ emitStore(init, p.values[v].(*Global), emitExtract(init, tuple, i), v.Pos())
+ }
+ }
+ }
+
+ // Build all package-level functions, init functions
+ // and methods, including unreachable/blank ones.
+ // We build them in source order, but it's not significant.
+ for _, file := range p.files {
+ for _, decl := range file.Decls {
+ if decl, ok := decl.(*ast.FuncDecl); ok {
+ b.buildFuncDecl(p, decl)
+ }
+ }
+ }
+
+ // Finish up init().
+ if p.Prog.mode&BareInits == 0 {
+ emitJump(init, done)
+ init.currentBlock = done
+ }
+ init.emit(new(Return))
+ init.finishBody()
+
+ p.info = nil // We no longer need ASTs or go/types deductions.
+
+ if p.Prog.mode&SanityCheckFunctions != 0 {
+ sanityCheckPackage(p)
+ }
+}
+
+// Like ObjectOf, but panics instead of returning nil.
+// Only valid during p's create and build phases.
+func (p *Package) objectOf(id *ast.Ident) types.Object {
+ if o := p.info.ObjectOf(id); o != nil {
+ return o
+ }
+ panic(fmt.Sprintf("no types.Object for ast.Ident %s @ %s",
+ id.Name, p.Prog.Fset.Position(id.Pos())))
+}
+
+// Like TypeOf, but panics instead of returning nil.
+// Only valid during p's create and build phases.
+func (p *Package) typeOf(e ast.Expr) types.Type {
+ if T := p.info.TypeOf(e); T != nil {
+ return T
+ }
+ panic(fmt.Sprintf("no type for %T @ %s",
+ e, p.Prog.Fset.Position(e.Pos())))
+}
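
To see the lowering above in action, here is a small editorial sketch (not part of this diff) that lists the blocks of a built function. The labels are the strings passed to newBasicBlock by forStmt, rangeIndexed, selectStmt, and friends; the exported Index and Comment fields on BasicBlock are assumed to match golang.org/x/tools/go/ssa, from which this package derives.

```go
package example

import (
	"fmt"

	"honnef.co/go/tools/ssa"
)

// dumpBlocks prints one line per basic block of a built function,
// e.g. "1	for.loop	(3 instructions)".
func dumpBlocks(fn *ssa.Function) {
	for _, b := range fn.Blocks {
		fmt.Printf("%d\t%s\t(%d instructions)\n", b.Index, b.Comment, len(b.Instrs))
	}
}
```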
diff --git a/vendor/honnef.co/go/tools/ssa/const.go b/vendor/honnef.co/go/tools/ssa/const.go
new file mode 100644
index 0000000000000..f95d9e114008e
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/const.go
@@ -0,0 +1,169 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines the Const SSA value type.
+
+import (
+ "fmt"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "strconv"
+)
+
+// NewConst returns a new constant of the specified value and type.
+// val must be valid according to the specification of Const.Value.
+//
+func NewConst(val constant.Value, typ types.Type) *Const {
+ return &Const{typ, val}
+}
+
+// intConst returns an 'int' constant that evaluates to i.
+// (i is an int64 in case the host is narrower than the target.)
+func intConst(i int64) *Const {
+ return NewConst(constant.MakeInt64(i), tInt)
+}
+
+// nilConst returns a nil constant of the specified type, which may
+// be any reference type, including interfaces.
+//
+func nilConst(typ types.Type) *Const {
+ return NewConst(nil, typ)
+}
+
+// stringConst returns a 'string' constant that evaluates to s.
+func stringConst(s string) *Const {
+ return NewConst(constant.MakeString(s), tString)
+}
+
+// zeroConst returns a new "zero" constant of the specified type,
+// which must not be an array or struct type: the zero values of
+// aggregates are well-defined but cannot be represented by Const.
+//
+func zeroConst(t types.Type) *Const {
+ switch t := t.(type) {
+ case *types.Basic:
+ switch {
+ case t.Info()&types.IsBoolean != 0:
+ return NewConst(constant.MakeBool(false), t)
+ case t.Info()&types.IsNumeric != 0:
+ return NewConst(constant.MakeInt64(0), t)
+ case t.Info()&types.IsString != 0:
+ return NewConst(constant.MakeString(""), t)
+ case t.Kind() == types.UnsafePointer:
+ fallthrough
+ case t.Kind() == types.UntypedNil:
+ return nilConst(t)
+ default:
+ panic(fmt.Sprint("zeroConst for unexpected type:", t))
+ }
+ case *types.Pointer, *types.Slice, *types.Interface, *types.Chan, *types.Map, *types.Signature:
+ return nilConst(t)
+ case *types.Named:
+ return NewConst(zeroConst(t.Underlying()).Value, t)
+ case *types.Array, *types.Struct, *types.Tuple:
+ panic(fmt.Sprint("zeroConst applied to aggregate:", t))
+ }
+ panic(fmt.Sprint("zeroConst: unexpected ", t))
+}
+
+func (c *Const) RelString(from *types.Package) string {
+ var s string
+ if c.Value == nil {
+ s = "nil"
+ } else if c.Value.Kind() == constant.String {
+ s = constant.StringVal(c.Value)
+ const max = 20
+ // TODO(adonovan): don't cut a rune in half.
+ if len(s) > max {
+ s = s[:max-3] + "..." // abbreviate
+ }
+ s = strconv.Quote(s)
+ } else {
+ s = c.Value.String()
+ }
+ return s + ":" + relType(c.Type(), from)
+}
+
+func (c *Const) Name() string {
+ return c.RelString(nil)
+}
+
+func (c *Const) String() string {
+ return c.Name()
+}
+
+func (c *Const) Type() types.Type {
+ return c.typ
+}
+
+func (c *Const) Referrers() *[]Instruction {
+ return nil
+}
+
+func (c *Const) Parent() *Function { return nil }
+
+func (c *Const) Pos() token.Pos {
+ return token.NoPos
+}
+
+// IsNil returns true if this constant represents a typed or untyped nil value.
+func (c *Const) IsNil() bool {
+ return c.Value == nil
+}
+
+// TODO(adonovan): move everything below into honnef.co/go/tools/ssa/interp.
+
+// Int64 returns the numeric value of this constant truncated to fit
+// a signed 64-bit integer.
+//
+func (c *Const) Int64() int64 {
+ switch x := constant.ToInt(c.Value); x.Kind() {
+ case constant.Int:
+ if i, ok := constant.Int64Val(x); ok {
+ return i
+ }
+ return 0
+ case constant.Float:
+ f, _ := constant.Float64Val(x)
+ return int64(f)
+ }
+ panic(fmt.Sprintf("unexpected constant value: %T", c.Value))
+}
+
+// Uint64 returns the numeric value of this constant truncated to fit
+// an unsigned 64-bit integer.
+//
+func (c *Const) Uint64() uint64 {
+ switch x := constant.ToInt(c.Value); x.Kind() {
+ case constant.Int:
+ if u, ok := constant.Uint64Val(x); ok {
+ return u
+ }
+ return 0
+ case constant.Float:
+ f, _ := constant.Float64Val(x)
+ return uint64(f)
+ }
+ panic(fmt.Sprintf("unexpected constant value: %T", c.Value))
+}
+
+// Float64 returns the numeric value of this constant truncated to fit
+// a float64.
+//
+func (c *Const) Float64() float64 {
+ f, _ := constant.Float64Val(c.Value)
+ return f
+}
+
+// Complex128 returns the complex value of this constant truncated to
+// fit a complex128.
+//
+func (c *Const) Complex128() complex128 {
+ re, _ := constant.Float64Val(constant.Real(c.Value))
+ im, _ := constant.Float64Val(constant.Imag(c.Value))
+ return complex(re, im)
+}
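
A brief editorial sketch of the exported constant API defined above, with expected output in comments (RelString quotes string constants and appends the type; the accessors truncate as documented):

```go
package example

import (
	"fmt"
	"go/constant"
	"go/types"

	"honnef.co/go/tools/ssa"
)

func constExamples() {
	s := ssa.NewConst(constant.MakeString("hello"), types.Typ[types.String])
	n := ssa.NewConst(constant.MakeInt64(42), types.Typ[types.Int])
	p := ssa.NewConst(nil, types.NewPointer(types.Typ[types.Int]))

	fmt.Println(s.Name())  // "hello":string
	fmt.Println(n.Int64()) // 42
	fmt.Println(p.IsNil()) // true: a typed nil constant
}
```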
diff --git a/vendor/honnef.co/go/tools/ssa/create.go b/vendor/honnef.co/go/tools/ssa/create.go
new file mode 100644
index 0000000000000..85163a0c5a744
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/create.go
@@ -0,0 +1,270 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file implements the CREATE phase of SSA construction.
+// See builder.go for explanation.
+
+import (
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+ "os"
+ "sync"
+
+ "golang.org/x/tools/go/types/typeutil"
+)
+
+// NewProgram returns a new SSA Program.
+//
+// mode controls diagnostics and checking during SSA construction.
+//
+func NewProgram(fset *token.FileSet, mode BuilderMode) *Program {
+ prog := &Program{
+ Fset: fset,
+ imported: make(map[string]*Package),
+ packages: make(map[*types.Package]*Package),
+ thunks: make(map[selectionKey]*Function),
+ bounds: make(map[*types.Func]*Function),
+ mode: mode,
+ }
+
+ h := typeutil.MakeHasher() // protected by methodsMu, in effect
+ prog.methodSets.SetHasher(h)
+ prog.canon.SetHasher(h)
+
+ return prog
+}
+
+// memberFromObject populates package pkg with a member for the
+// typechecker object obj.
+//
+// For objects from Go source code, syntax is the associated syntax
+// tree (for funcs and vars only); it will be used during the build
+// phase.
+//
+func memberFromObject(pkg *Package, obj types.Object, syntax ast.Node) {
+ name := obj.Name()
+ switch obj := obj.(type) {
+ case *types.Builtin:
+ if pkg.Pkg != types.Unsafe {
+ panic("unexpected builtin object: " + obj.String())
+ }
+
+ case *types.TypeName:
+ pkg.Members[name] = &Type{
+ object: obj,
+ pkg: pkg,
+ }
+
+ case *types.Const:
+ c := &NamedConst{
+ object: obj,
+ Value: NewConst(obj.Val(), obj.Type()),
+ pkg: pkg,
+ }
+ pkg.values[obj] = c.Value
+ pkg.Members[name] = c
+
+ case *types.Var:
+ g := &Global{
+ Pkg: pkg,
+ name: name,
+ object: obj,
+ typ: types.NewPointer(obj.Type()), // address
+ pos: obj.Pos(),
+ }
+ pkg.values[obj] = g
+ pkg.Members[name] = g
+
+ case *types.Func:
+ sig := obj.Type().(*types.Signature)
+ if sig.Recv() == nil && name == "init" {
+ pkg.ninit++
+ name = fmt.Sprintf("init#%d", pkg.ninit)
+ }
+ fn := &Function{
+ name: name,
+ object: obj,
+ Signature: sig,
+ syntax: syntax,
+ pos: obj.Pos(),
+ Pkg: pkg,
+ Prog: pkg.Prog,
+ }
+ if syntax == nil {
+ fn.Synthetic = "loaded from gc object file"
+ }
+
+ pkg.values[obj] = fn
+ if sig.Recv() == nil {
+ pkg.Members[name] = fn // package-level function
+ }
+
+ default: // (incl. *types.Package)
+ panic("unexpected Object type: " + obj.String())
+ }
+}
+
+// membersFromDecl populates package pkg with members for each
+// typechecker object (var, func, const or type) associated with the
+// specified decl.
+//
+func membersFromDecl(pkg *Package, decl ast.Decl) {
+ switch decl := decl.(type) {
+ case *ast.GenDecl: // import, const, type or var
+ switch decl.Tok {
+ case token.CONST:
+ for _, spec := range decl.Specs {
+ for _, id := range spec.(*ast.ValueSpec).Names {
+ if !isBlankIdent(id) {
+ memberFromObject(pkg, pkg.info.Defs[id], nil)
+ }
+ }
+ }
+
+ case token.VAR:
+ for _, spec := range decl.Specs {
+ for _, id := range spec.(*ast.ValueSpec).Names {
+ if !isBlankIdent(id) {
+ memberFromObject(pkg, pkg.info.Defs[id], spec)
+ }
+ }
+ }
+
+ case token.TYPE:
+ for _, spec := range decl.Specs {
+ id := spec.(*ast.TypeSpec).Name
+ if !isBlankIdent(id) {
+ memberFromObject(pkg, pkg.info.Defs[id], nil)
+ }
+ }
+ }
+
+ case *ast.FuncDecl:
+ id := decl.Name
+ if !isBlankIdent(id) {
+ memberFromObject(pkg, pkg.info.Defs[id], decl)
+ }
+ }
+}
+
+// CreatePackage constructs and returns an SSA Package from the
+// specified type-checked, error-free file ASTs, and populates its
+// Members mapping.
+//
+// importable determines whether this package should be returned by a
+// subsequent call to ImportedPackage(pkg.Path()).
+//
+// The real work of building SSA form for each function is not done
+// until a subsequent call to Package.Build().
+//
+func (prog *Program) CreatePackage(pkg *types.Package, files []*ast.File, info *types.Info, importable bool) *Package {
+ p := &Package{
+ Prog: prog,
+ Members: make(map[string]Member),
+ values: make(map[types.Object]Value),
+ Pkg: pkg,
+ info: info, // transient (CREATE and BUILD phases)
+ files: files, // transient (CREATE and BUILD phases)
+ }
+
+ // Add init() function.
+ p.init = &Function{
+ name: "init",
+ Signature: new(types.Signature),
+ Synthetic: "package initializer",
+ Pkg: p,
+ Prog: prog,
+ }
+ p.Members[p.init.name] = p.init
+
+ // CREATE phase.
+ // Allocate all package members: vars, funcs, consts and types.
+ if len(files) > 0 {
+ // Go source package.
+ for _, file := range files {
+ for _, decl := range file.Decls {
+ membersFromDecl(p, decl)
+ }
+ }
+ } else {
+ // GC-compiled binary package (or "unsafe")
+ // No code.
+ // No position information.
+ scope := p.Pkg.Scope()
+ for _, name := range scope.Names() {
+ obj := scope.Lookup(name)
+ memberFromObject(p, obj, nil)
+ if obj, ok := obj.(*types.TypeName); ok {
+ if named, ok := obj.Type().(*types.Named); ok {
+ for i, n := 0, named.NumMethods(); i < n; i++ {
+ memberFromObject(p, named.Method(i), nil)
+ }
+ }
+ }
+ }
+ }
+
+ if prog.mode&BareInits == 0 {
+ // Add initializer guard variable.
+ initguard := &Global{
+ Pkg: p,
+ name: "init$guard",
+ typ: types.NewPointer(tBool),
+ }
+ p.Members[initguard.Name()] = initguard
+ }
+
+ if prog.mode&GlobalDebug != 0 {
+ p.SetDebugMode(true)
+ }
+
+ if prog.mode&PrintPackages != 0 {
+ printMu.Lock()
+ p.WriteTo(os.Stdout)
+ printMu.Unlock()
+ }
+
+ if importable {
+ prog.imported[p.Pkg.Path()] = p
+ }
+ prog.packages[p.Pkg] = p
+
+ return p
+}
+
+// printMu serializes printing of Packages/Functions to stdout.
+var printMu sync.Mutex
+
+// AllPackages returns a new slice containing all packages in the
+// program prog in unspecified order.
+//
+func (prog *Program) AllPackages() []*Package {
+ pkgs := make([]*Package, 0, len(prog.packages))
+ for _, pkg := range prog.packages {
+ pkgs = append(pkgs, pkg)
+ }
+ return pkgs
+}
+
+// ImportedPackage returns the importable Package whose PkgPath
+// is path, or nil if no such Package has been created.
+//
+// A parameter to CreatePackage determines whether a package should be
+// considered importable. For example, no import declaration can resolve
+// to the ad-hoc main package created by 'go build foo.go'.
+//
+// TODO(adonovan): rethink this function and the "importable" concept;
+// most packages are importable. This function assumes that all
+// types.Package.Path values are unique within the ssa.Program, which is
+// false---yet this function remains very convenient.
+// Clients should use (*Program).Package instead where possible.
+// SSA doesn't really need a string-keyed map of packages.
+//
+func (prog *Program) ImportedPackage(path string) *Package {
+ return prog.imported[path]
+}
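
Putting CreatePackage together with the build phase: a self-contained editorial sketch of the low-level pipeline. It assumes Package.Func and Function.WriteTo, which are defined elsewhere in this package, and uses only the standard go/parser and go/types machinery to satisfy CreatePackage's preconditions.

```go
package main

import (
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
	"log"
	"os"

	"honnef.co/go/tools/ssa"
)

const src = `package hello

func greeting(n int) string {
	s := ""
	for i := 0; i < n; i++ {
		s += "hi "
	}
	return s
}
`

func main() {
	fset := token.NewFileSet()
	file, err := parser.ParseFile(fset, "hello.go", src, 0)
	if err != nil {
		log.Fatal(err)
	}

	// Type-check, recording the maps the builder reads via objectOf/typeOf.
	info := &types.Info{
		Types:      make(map[ast.Expr]types.TypeAndValue),
		Defs:       make(map[*ast.Ident]types.Object),
		Uses:       make(map[*ast.Ident]types.Object),
		Implicits:  make(map[ast.Node]types.Object),
		Selections: make(map[*ast.SelectorExpr]*types.Selection),
		Scopes:     make(map[ast.Node]*types.Scope),
	}
	conf := types.Config{Importer: importer.Default()}
	pkg, err := conf.Check("hello", fset, []*ast.File{file}, info)
	if err != nil {
		log.Fatal(err)
	}

	// CREATE, then BUILD. The example imports nothing, so the
	// "CreatePackage for all direct imports" precondition holds trivially.
	prog := ssa.NewProgram(fset, ssa.SanityCheckFunctions)
	hello := prog.CreatePackage(pkg, []*ast.File{file}, info, false)
	hello.Build()

	hello.Func("greeting").WriteTo(os.Stdout)
}
```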
diff --git a/vendor/honnef.co/go/tools/ssa/doc.go b/vendor/honnef.co/go/tools/ssa/doc.go
new file mode 100644
index 0000000000000..0f71fda001372
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/doc.go
@@ -0,0 +1,125 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+// Package ssa defines a representation of the elements of Go programs
+// (packages, types, functions, variables and constants) using a
+// static single-assignment (SSA) form intermediate representation
+// (IR) for the bodies of functions.
+//
+// THIS INTERFACE IS EXPERIMENTAL AND IS LIKELY TO CHANGE.
+//
+// For an introduction to SSA form, see
+// http://en.wikipedia.org/wiki/Static_single_assignment_form.
+// This page provides a broader reading list:
+// http://www.dcs.gla.ac.uk/~jsinger/ssa.html.
+//
+// The level of abstraction of the SSA form is intentionally close to
+// the source language to facilitate construction of source analysis
+// tools. It is not intended for machine code generation.
+//
+// All looping, branching and switching constructs are replaced with
+// unstructured control flow. Higher-level control flow constructs
+// such as multi-way branch can be reconstructed as needed; see
+// ssautil.Switches() for an example.
+//
+// The simplest way to create the SSA representation of a package is
+// to load typed syntax trees using golang.org/x/tools/go/packages, then
+// invoke the ssautil.Packages helper function. See ExampleLoadPackages
+// and ExampleWholeProgram for examples.
+// The resulting ssa.Program contains all the packages and their
+// members, but SSA code is not created for function bodies until a
+// subsequent call to (*Package).Build or (*Program).Build.
+//
+// The builder initially builds a naive SSA form in which all local
+// variables are addresses of stack locations with explicit loads and
+// stores. Registerisation of eligible locals and φ-node insertion
+// using dominance and dataflow are then performed as a second pass
+// called "lifting" to improve the accuracy and performance of
+// subsequent analyses; this pass can be skipped by setting the
+// NaiveForm builder flag.
+//
+// The primary interfaces of this package are:
+//
+// - Member: a named member of a Go package.
+// - Value: an expression that yields a value.
+// - Instruction: a statement that consumes values and performs computation.
+// - Node: a Value or Instruction (emphasizing its membership in the SSA value graph)
+//
+// A computation that yields a result implements both the Value and
+// Instruction interfaces. The following table shows for each
+// concrete type which of these interfaces it implements.
+//
+// Value? Instruction? Member?
+// *Alloc ✔ ✔
+// *BinOp ✔ ✔
+// *Builtin ✔
+// *Call ✔ ✔
+// *ChangeInterface ✔ ✔
+// *ChangeType ✔ ✔
+// *Const ✔
+// *Convert ✔ ✔
+// *DebugRef ✔
+// *Defer ✔
+// *Extract ✔ ✔
+// *Field ✔ ✔
+// *FieldAddr ✔ ✔
+// *FreeVar ✔
+// *Function ✔ ✔ (func)
+// *Global ✔ ✔ (var)
+// *Go ✔
+// *If ✔
+// *Index ✔ ✔
+// *IndexAddr ✔ ✔
+// *Jump ✔
+// *Lookup ✔ ✔
+// *MakeChan ✔ ✔
+// *MakeClosure ✔ ✔
+// *MakeInterface ✔ ✔
+// *MakeMap ✔ ✔
+// *MakeSlice ✔ ✔
+// *MapUpdate ✔
+// *NamedConst ✔ (const)
+// *Next ✔ ✔
+// *Panic ✔
+// *Parameter ✔
+// *Phi ✔ ✔
+// *Range ✔ ✔
+// *Return ✔
+// *RunDefers ✔
+// *Select ✔ ✔
+// *Send ✔
+// *Slice ✔ ✔
+// *Store ✔
+// *Type ✔ (type)
+// *TypeAssert ✔ ✔
+// *UnOp ✔ ✔
+//
+// Other key types in this package include: Program, Package, Function
+// and BasicBlock.
+//
+// The program representation constructed by this package is fully
+// resolved internally, i.e. it does not rely on the names of Values,
+// Packages, Functions, Types or BasicBlocks for the correct
+// interpretation of the program. Only the identities of objects and
+// the topology of the SSA and type graphs are semantically
+// significant. (There is one exception: Ids, used to identify field
+// and method names, contain strings.) Avoidance of name-based
+// operations simplifies the implementation of subsequent passes and
+// can make them very efficient. Many objects are nonetheless named
+// to aid in debugging, but it is not essential that the names be
+// either accurate or unambiguous. The public API exposes a number of
+// name-based maps for client convenience.
+//
+// The ssa/ssautil package provides various utilities that depend only
+// on the public API of this package.
+//
+// TODO(adonovan): Consider the exceptional control-flow implications
+// of defer and recover().
+//
+// TODO(adonovan): write a how-to document for all the various cases
+// of trying to determine corresponding elements across the four
+// domains of source locations, ast.Nodes, types.Objects,
+// ssa.Values/Instructions.
+//
+package ssa // import "honnef.co/go/tools/ssa"
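
As a concrete companion to the documentation above, a minimal editorial sketch of the go/packages route. The ssautil.Packages helper mentioned above wraps essentially this sequence; it is inlined here so the example relies only on APIs defined in this diff plus golang.org/x/tools/go/packages:

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/go/packages"
	"honnef.co/go/tools/ssa"
)

func main() {
	// Load a package and its dependencies with full syntax and type info.
	cfg := &packages.Config{Mode: packages.LoadAllSyntax}
	initial, err := packages.Load(cfg, "errors")
	if err != nil {
		log.Fatal(err)
	}

	prog := ssa.NewProgram(initial[0].Fset, ssa.SanityCheckFunctions)

	// CreatePackage must see dependencies before their importers;
	// packages.Visit's post-order callback visits them in that order.
	packages.Visit(initial, nil, func(p *packages.Package) {
		if p.Types != nil && p.TypesInfo != nil {
			prog.CreatePackage(p.Types, p.Syntax, p.TypesInfo, true)
		}
	})

	prog.Build() // emit SSA for all function bodies

	for _, p := range prog.AllPackages() {
		fmt.Println(p.Pkg.Path())
	}
}
```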
diff --git a/vendor/honnef.co/go/tools/ssa/dom.go b/vendor/honnef.co/go/tools/ssa/dom.go
new file mode 100644
index 0000000000000..a036be87c4c36
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/dom.go
@@ -0,0 +1,343 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines algorithms related to dominance.
+
+// Dominator tree construction ----------------------------------------
+//
+// We use the algorithm described in Lengauer & Tarjan. 1979. A fast
+// algorithm for finding dominators in a flowgraph.
+// http://doi.acm.org/10.1145/357062.357071
+//
+// We also apply the optimizations to SLT described in Georgiadis et
+// al., Finding Dominators in Practice, JGAA 2006,
+// http://jgaa.info/accepted/2006/GeorgiadisTarjanWerneck2006.10.1.pdf
+// to avoid the need for buckets of size > 1.
+
+import (
+ "bytes"
+ "fmt"
+ "math/big"
+ "os"
+ "sort"
+)
+
+// Idom returns the block that immediately dominates b:
+// its parent in the dominator tree, if any.
+// Neither the entry node (b.Index==0) nor the recover node
+// (b==b.Parent().Recover()) has a parent.
+//
+func (b *BasicBlock) Idom() *BasicBlock { return b.dom.idom }
+
+// Dominees returns the list of blocks that b immediately dominates:
+// its children in the dominator tree.
+//
+func (b *BasicBlock) Dominees() []*BasicBlock { return b.dom.children }
+
+// Dominates reports whether b dominates c.
+func (b *BasicBlock) Dominates(c *BasicBlock) bool {
+ return b.dom.pre <= c.dom.pre && c.dom.post <= b.dom.post
+}
+
+type byDomPreorder []*BasicBlock
+
+func (a byDomPreorder) Len() int { return len(a) }
+func (a byDomPreorder) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
+func (a byDomPreorder) Less(i, j int) bool { return a[i].dom.pre < a[j].dom.pre }
+
+// DomPreorder returns a new slice containing the blocks of f in
+// dominator tree preorder.
+//
+func (f *Function) DomPreorder() []*BasicBlock {
+ n := len(f.Blocks)
+ order := make(byDomPreorder, n)
+ copy(order, f.Blocks)
+ sort.Sort(order)
+ return order
+}
+
+// domInfo contains a BasicBlock's dominance information.
+type domInfo struct {
+ idom *BasicBlock // immediate dominator (parent in domtree)
+ children []*BasicBlock // nodes immediately dominated by this one
+ pre, post int32 // pre- and post-order numbering within domtree
+}
+
+// ltState holds the working state for the Lengauer-Tarjan algorithm
+// (during which domInfo.pre is repurposed for CFG DFS preorder number).
+type ltState struct {
+ // Each slice is indexed by b.Index.
+ sdom []*BasicBlock // b's semidominator
+ parent []*BasicBlock // b's parent in DFS traversal of CFG
+ ancestor []*BasicBlock // b's ancestor with least sdom
+}
+
+// dfs implements the depth-first search part of the LT algorithm.
+func (lt *ltState) dfs(v *BasicBlock, i int32, preorder []*BasicBlock) int32 {
+ preorder[i] = v
+ v.dom.pre = i // For now: DFS preorder of spanning tree of CFG
+ i++
+ lt.sdom[v.Index] = v
+ lt.link(nil, v)
+ for _, w := range v.Succs {
+ if lt.sdom[w.Index] == nil {
+ lt.parent[w.Index] = v
+ i = lt.dfs(w, i, preorder)
+ }
+ }
+ return i
+}
+
+// eval implements the EVAL part of the LT algorithm.
+func (lt *ltState) eval(v *BasicBlock) *BasicBlock {
+ // TODO(adonovan): opt: do path compression per simple LT.
+ u := v
+ for ; lt.ancestor[v.Index] != nil; v = lt.ancestor[v.Index] {
+ if lt.sdom[v.Index].dom.pre < lt.sdom[u.Index].dom.pre {
+ u = v
+ }
+ }
+ return u
+}
+
+// link implements the LINK part of the LT algorithm.
+func (lt *ltState) link(v, w *BasicBlock) {
+ lt.ancestor[w.Index] = v
+}
+
+// buildDomTree computes the dominator tree of f using the LT algorithm.
+// Precondition: all blocks are reachable (e.g. optimizeBlocks has been run).
+//
+func buildDomTree(f *Function) {
+ // The step numbers refer to the original LT paper; the
+ // reordering is due to Georgiadis.
+
+ // Clear any previous domInfo.
+ for _, b := range f.Blocks {
+ b.dom = domInfo{}
+ }
+
+ n := len(f.Blocks)
+ // Allocate space for 5 contiguous [n]*BasicBlock arrays:
+ // sdom, parent, ancestor, preorder, buckets.
+ space := make([]*BasicBlock, 5*n)
+ lt := ltState{
+ sdom: space[0:n],
+ parent: space[n : 2*n],
+ ancestor: space[2*n : 3*n],
+ }
+
+ // Step 1. Number vertices by depth-first preorder.
+ preorder := space[3*n : 4*n]
+ root := f.Blocks[0]
+ prenum := lt.dfs(root, 0, preorder)
+ recover := f.Recover
+ if recover != nil {
+ lt.dfs(recover, prenum, preorder)
+ }
+
+ buckets := space[4*n : 5*n]
+ copy(buckets, preorder)
+
+ // In reverse preorder...
+ for i := int32(n) - 1; i > 0; i-- {
+ w := preorder[i]
+
+ // Step 3. Implicitly define the immediate dominator of each node.
+ for v := buckets[i]; v != w; v = buckets[v.dom.pre] {
+ u := lt.eval(v)
+ if lt.sdom[u.Index].dom.pre < i {
+ v.dom.idom = u
+ } else {
+ v.dom.idom = w
+ }
+ }
+
+ // Step 2. Compute the semidominators of all nodes.
+ lt.sdom[w.Index] = lt.parent[w.Index]
+ for _, v := range w.Preds {
+ u := lt.eval(v)
+ if lt.sdom[u.Index].dom.pre < lt.sdom[w.Index].dom.pre {
+ lt.sdom[w.Index] = lt.sdom[u.Index]
+ }
+ }
+
+ lt.link(lt.parent[w.Index], w)
+
+ if lt.parent[w.Index] == lt.sdom[w.Index] {
+ w.dom.idom = lt.parent[w.Index]
+ } else {
+ buckets[i] = buckets[lt.sdom[w.Index].dom.pre]
+ buckets[lt.sdom[w.Index].dom.pre] = w
+ }
+ }
+
+ // The final 'Step 3' is now outside the loop.
+ for v := buckets[0]; v != root; v = buckets[v.dom.pre] {
+ v.dom.idom = root
+ }
+
+ // Step 4. Explicitly define the immediate dominator of each
+ // node, in preorder.
+ for _, w := range preorder[1:] {
+ if w == root || w == recover {
+ w.dom.idom = nil
+ } else {
+ if w.dom.idom != lt.sdom[w.Index] {
+ w.dom.idom = w.dom.idom.dom.idom
+ }
+ // Calculate Children relation as inverse of Idom.
+ w.dom.idom.dom.children = append(w.dom.idom.dom.children, w)
+ }
+ }
+
+ pre, post := numberDomTree(root, 0, 0)
+ if recover != nil {
+ numberDomTree(recover, pre, post)
+ }
+
+ // printDomTreeDot(os.Stderr, f) // debugging
+ // printDomTreeText(os.Stderr, root, 0) // debugging
+
+ if f.Prog.mode&SanityCheckFunctions != 0 {
+ sanityCheckDomTree(f)
+ }
+}
+
+// numberDomTree sets the pre- and post-order numbers of a depth-first
+// traversal of the dominator tree rooted at v. These are used to
+// answer dominance queries in constant time.
+//
+func numberDomTree(v *BasicBlock, pre, post int32) (int32, int32) {
+ v.dom.pre = pre
+ pre++
+ for _, child := range v.dom.children {
+ pre, post = numberDomTree(child, pre, post)
+ }
+ v.dom.post = post
+ post++
+ return pre, post
+}
+
+// Testing utilities ----------------------------------------
+
+// sanityCheckDomTree checks the correctness of the dominator tree
+// computed by the LT algorithm by comparing against the dominance
+// relation computed by a naive Kildall-style forward dataflow
+// analysis (Algorithm 10.16 from the "Dragon" book).
+//
+func sanityCheckDomTree(f *Function) {
+ n := len(f.Blocks)
+
+ // D[i] is the set of blocks that dominate f.Blocks[i],
+ // represented as a bit-set of block indices.
+ D := make([]big.Int, n)
+
+ one := big.NewInt(1)
+
+ // all is the set of all blocks; constant.
+ var all big.Int
+ all.Set(one).Lsh(&all, uint(n)).Sub(&all, one)
+
+ // Initialization.
+ for i, b := range f.Blocks {
+ if i == 0 || b == f.Recover {
+ // A root is dominated only by itself.
+ D[i].SetBit(&D[0], 0, 1)
+ } else {
+ // All other blocks are (initially) dominated
+ // by every block.
+ D[i].Set(&all)
+ }
+ }
+
+ // Iteration until fixed point.
+ for changed := true; changed; {
+ changed = false
+ for i, b := range f.Blocks {
+ if i == 0 || b == f.Recover {
+ continue
+ }
+ // Compute intersection across predecessors.
+ var x big.Int
+ x.Set(&all)
+ for _, pred := range b.Preds {
+ x.And(&x, &D[pred.Index])
+ }
+ x.SetBit(&x, i, 1) // a block always dominates itself.
+ if D[i].Cmp(&x) != 0 {
+ D[i].Set(&x)
+ changed = true
+ }
+ }
+ }
+
+ // Check the entire relation. O(n^2).
+ // The Recover block (if any) must be treated specially so we skip it.
+ ok := true
+ for i := 0; i < n; i++ {
+ for j := 0; j < n; j++ {
+ b, c := f.Blocks[i], f.Blocks[j]
+ if c == f.Recover {
+ continue
+ }
+ actual := b.Dominates(c)
+ expected := D[j].Bit(i) == 1
+ if actual != expected {
+ fmt.Fprintf(os.Stderr, "dominates(%s, %s)==%t, want %t\n", b, c, actual, expected)
+ ok = false
+ }
+ }
+ }
+
+ preorder := f.DomPreorder()
+ for _, b := range f.Blocks {
+ if got := preorder[b.dom.pre]; got != b {
+ fmt.Fprintf(os.Stderr, "preorder[%d]==%s, want %s\n", b.dom.pre, got, b)
+ ok = false
+ }
+ }
+
+ if !ok {
+ panic("sanityCheckDomTree failed for " + f.String())
+ }
+
+}
+
+// Printing functions ----------------------------------------
+
+// printDomTree prints the dominator tree as text, using indentation.
+//lint:ignore U1000 used during debugging
+func printDomTreeText(buf *bytes.Buffer, v *BasicBlock, indent int) {
+ fmt.Fprintf(buf, "%*s%s\n", 4*indent, "", v)
+ for _, child := range v.dom.children {
+ printDomTreeText(buf, child, indent+1)
+ }
+}
+
+// printDomTreeDot prints the dominator tree of f in AT&T GraphViz
+// (.dot) format.
+//lint:ignore U1000 used during debugging
+func printDomTreeDot(buf *bytes.Buffer, f *Function) {
+ fmt.Fprintln(buf, "//", f)
+ fmt.Fprintln(buf, "digraph domtree {")
+ for i, b := range f.Blocks {
+ v := b.dom
+ fmt.Fprintf(buf, "\tn%d [label=\"%s (%d, %d)\",shape=\"rectangle\"];\n", v.pre, b, v.pre, v.post)
+ // TODO(adonovan): improve appearance of edges
+ // belonging to both dominator tree and CFG.
+
+ // Dominator tree edge.
+ if i != 0 {
+ fmt.Fprintf(buf, "\tn%d -> n%d [style=\"solid\",weight=100];\n", v.idom.dom.pre, v.pre)
+ }
+ // CFG edges.
+ for _, pred := range b.Preds {
+ fmt.Fprintf(buf, "\tn%d -> n%d [style=\"dotted\",weight=0];\n", pred.dom.pre, v.pre)
+ }
+ }
+ fmt.Fprintln(buf, "}")
+}
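
A short editorial sketch of the dominance API above. It assumes fn has already been built without the NaiveForm flag, since the dominator tree is computed during the lifting pass:

```go
package example

import (
	"fmt"

	"honnef.co/go/tools/ssa"
)

// printIdoms walks the dominator tree in preorder and prints each
// block's immediate dominator. Idom is nil for the entry block and
// for the recover block, if any.
func printIdoms(fn *ssa.Function) {
	for _, b := range fn.DomPreorder() {
		if idom := b.Idom(); idom != nil {
			fmt.Printf("idom(%s) = %s\n", b, idom)
		}
	}
	// Dominates answers queries in constant time via the pre/post
	// numbering assigned by numberDomTree.
	entry := fn.Blocks[0]
	fmt.Println(entry.Dominates(fn.Blocks[len(fn.Blocks)-1]))
}
```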
diff --git a/vendor/honnef.co/go/tools/ssa/emit.go b/vendor/honnef.co/go/tools/ssa/emit.go
new file mode 100644
index 0000000000000..6bf9ec32dae9d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/emit.go
@@ -0,0 +1,469 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// Helpers for emitting SSA instructions.
+
+import (
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+)
+
+// emitNew emits to f a new (heap Alloc) instruction allocating an
+// object of type typ. pos is the optional source location.
+//
+func emitNew(f *Function, typ types.Type, pos token.Pos) *Alloc {
+ v := &Alloc{Heap: true}
+ v.setType(types.NewPointer(typ))
+ v.setPos(pos)
+ f.emit(v)
+ return v
+}
+
+// emitLoad emits to f an instruction to load the value at address addr
+// into a new temporary, and returns the value so defined.
+//
+func emitLoad(f *Function, addr Value) *UnOp {
+ v := &UnOp{Op: token.MUL, X: addr}
+ v.setType(deref(addr.Type()))
+ f.emit(v)
+ return v
+}
+
+// emitDebugRef emits to f a DebugRef pseudo-instruction associating
+// expression e with value v.
+//
+func emitDebugRef(f *Function, e ast.Expr, v Value, isAddr bool) {
+ if !f.debugInfo() {
+ return // debugging not enabled
+ }
+ if v == nil || e == nil {
+ panic("nil")
+ }
+ var obj types.Object
+ e = unparen(e)
+ if id, ok := e.(*ast.Ident); ok {
+ if isBlankIdent(id) {
+ return
+ }
+ obj = f.Pkg.objectOf(id)
+ switch obj.(type) {
+ case *types.Nil, *types.Const, *types.Builtin:
+ return
+ }
+ }
+ f.emit(&DebugRef{
+ X: v,
+ Expr: e,
+ IsAddr: isAddr,
+ object: obj,
+ })
+}
+
+// emitArith emits to f code to compute the binary operation op(x, y)
+// where op is an eager shift, logical or arithmetic operation.
+// (Use emitCompare() for comparisons and Builder.logicalBinop() for
+// non-eager operations.)
+//
+func emitArith(f *Function, op token.Token, x, y Value, t types.Type, pos token.Pos) Value {
+ switch op {
+ case token.SHL, token.SHR:
+ x = emitConv(f, x, t)
+ // y may be signed or an 'untyped' constant.
+ // TODO(adonovan): whence signed values?
+ if b, ok := y.Type().Underlying().(*types.Basic); ok && b.Info()&types.IsUnsigned == 0 {
+ y = emitConv(f, y, types.Typ[types.Uint64])
+ }
+
+ case token.ADD, token.SUB, token.MUL, token.QUO, token.REM, token.AND, token.OR, token.XOR, token.AND_NOT:
+ x = emitConv(f, x, t)
+ y = emitConv(f, y, t)
+
+ default:
+ panic("illegal op in emitArith: " + op.String())
+
+ }
+ v := &BinOp{
+ Op: op,
+ X: x,
+ Y: y,
+ }
+ v.setPos(pos)
+ v.setType(t)
+ return f.emit(v)
+}
+
+// emitCompare emits to f code to compute the boolean result of the
+// comparison 'x op y'.
+//
+func emitCompare(f *Function, op token.Token, x, y Value, pos token.Pos) Value {
+ xt := x.Type().Underlying()
+ yt := y.Type().Underlying()
+
+ // Special case to optimise a tagless SwitchStmt so that
+ // these are equivalent
+ // switch { case e: ...}
+ // switch true { case e: ... }
+ // if e==true { ... }
+ // even in the case when e's type is an interface.
+ // TODO(adonovan): opt: generalise to x==true, false!=y, etc.
+ if x == vTrue && op == token.EQL {
+ if yt, ok := yt.(*types.Basic); ok && yt.Info()&types.IsBoolean != 0 {
+ return y
+ }
+ }
+
+ if types.Identical(xt, yt) {
+ // no conversion necessary
+ } else if _, ok := xt.(*types.Interface); ok {
+ y = emitConv(f, y, x.Type())
+ } else if _, ok := yt.(*types.Interface); ok {
+ x = emitConv(f, x, y.Type())
+ } else if _, ok := x.(*Const); ok {
+ x = emitConv(f, x, y.Type())
+ } else if _, ok := y.(*Const); ok {
+ y = emitConv(f, y, x.Type())
+ //lint:ignore SA9003 no-op
+ } else {
+ // other cases, e.g. channels. No-op.
+ }
+
+ v := &BinOp{
+ Op: op,
+ X: x,
+ Y: y,
+ }
+ v.setPos(pos)
+ v.setType(tBool)
+ return f.emit(v)
+}
+
+// isValuePreserving returns true if a conversion from ut_src to
+// ut_dst is value-preserving, i.e. just a change of type.
+// Precondition: neither argument is a named type.
+//
+func isValuePreserving(ut_src, ut_dst types.Type) bool {
+ // Identical underlying types?
+ if structTypesIdentical(ut_dst, ut_src) {
+ return true
+ }
+
+ switch ut_dst.(type) {
+ case *types.Chan:
+ // Conversion between channel types?
+ _, ok := ut_src.(*types.Chan)
+ return ok
+
+ case *types.Pointer:
+ // Conversion between pointers with identical base types?
+ _, ok := ut_src.(*types.Pointer)
+ return ok
+ }
+ return false
+}
+
+// emitConv emits to f code to convert Value val to exactly type typ,
+// and returns the converted value. Implicit conversions are required
+// by language assignability rules in assignments, parameter passing,
+// etc. Conversions cannot fail dynamically.
+//
+func emitConv(f *Function, val Value, typ types.Type) Value {
+ t_src := val.Type()
+
+ // Identical types? Conversion is a no-op.
+ if types.Identical(t_src, typ) {
+ return val
+ }
+
+ ut_dst := typ.Underlying()
+ ut_src := t_src.Underlying()
+
+ // Just a change of type, but not value or representation?
+ if isValuePreserving(ut_src, ut_dst) {
+ c := &ChangeType{X: val}
+ c.setType(typ)
+ return f.emit(c)
+ }
+
+ // Conversion to, or construction of a value of, an interface type?
+ if _, ok := ut_dst.(*types.Interface); ok {
+ // Assignment from one interface type to another?
+ if _, ok := ut_src.(*types.Interface); ok {
+ c := &ChangeInterface{X: val}
+ c.setType(typ)
+ return f.emit(c)
+ }
+
+ // Untyped nil constant? Return interface-typed nil constant.
+ if ut_src == tUntypedNil {
+ return nilConst(typ)
+ }
+
+ // Convert (non-nil) "untyped" literals to their default type.
+ if t, ok := ut_src.(*types.Basic); ok && t.Info()&types.IsUntyped != 0 {
+ val = emitConv(f, val, DefaultType(ut_src))
+ }
+
+ f.Pkg.Prog.needMethodsOf(val.Type())
+ mi := &MakeInterface{X: val}
+ mi.setType(typ)
+ return f.emit(mi)
+ }
+
+ // Conversion of a compile-time constant value?
+ if c, ok := val.(*Const); ok {
+ if _, ok := ut_dst.(*types.Basic); ok || c.IsNil() {
+ // Conversion of a compile-time constant to
+ // another constant type results in a new
+ // constant of the destination type and
+ // (initially) the same abstract value.
+ // We don't truncate the value yet.
+ return NewConst(c.Value, typ)
+ }
+
+ // We're converting from constant to non-constant type,
+ // e.g. string -> []byte/[]rune.
+ }
+
+ // A representation-changing conversion?
+ // At least one of {ut_src,ut_dst} must be *Basic.
+ // (The other may be []byte or []rune.)
+ _, ok1 := ut_src.(*types.Basic)
+ _, ok2 := ut_dst.(*types.Basic)
+ if ok1 || ok2 {
+ c := &Convert{X: val}
+ c.setType(typ)
+ return f.emit(c)
+ }
+
+ panic(fmt.Sprintf("in %s: cannot convert %s (%s) to %s", f, val, val.Type(), typ))
+}
+
+// emitStore emits to f an instruction to store value val at location
+// addr, applying implicit conversions as required by assignability rules.
+//
+func emitStore(f *Function, addr, val Value, pos token.Pos) *Store {
+ s := &Store{
+ Addr: addr,
+ Val: emitConv(f, val, deref(addr.Type())),
+ pos: pos,
+ }
+ f.emit(s)
+ return s
+}
+
+// emitJump emits to f a jump to target, and updates the control-flow graph.
+// Postcondition: f.currentBlock is nil.
+//
+func emitJump(f *Function, target *BasicBlock) {
+ b := f.currentBlock
+ b.emit(new(Jump))
+ addEdge(b, target)
+ f.currentBlock = nil
+}
+
+// emitIf emits to f a conditional jump to tblock or fblock based on
+// cond, and updates the control-flow graph.
+// Postcondition: f.currentBlock is nil.
+//
+func emitIf(f *Function, cond Value, tblock, fblock *BasicBlock) {
+ b := f.currentBlock
+ b.emit(&If{Cond: cond})
+ addEdge(b, tblock)
+ addEdge(b, fblock)
+ f.currentBlock = nil
+}
+
+// emitExtract emits to f an instruction to extract the index'th
+// component of tuple. It returns the extracted value.
+//
+func emitExtract(f *Function, tuple Value, index int) Value {
+ e := &Extract{Tuple: tuple, Index: index}
+ e.setType(tuple.Type().(*types.Tuple).At(index).Type())
+ return f.emit(e)
+}
+
+// emitTypeAssert emits to f a type assertion value := x.(t) and
+// returns the value. x.Type() must be an interface.
+//
+func emitTypeAssert(f *Function, x Value, t types.Type, pos token.Pos) Value {
+ a := &TypeAssert{X: x, AssertedType: t}
+ a.setPos(pos)
+ a.setType(t)
+ return f.emit(a)
+}
+
+// emitTypeTest emits to f a type test value,ok := x.(t) and returns
+// a (value, ok) tuple. x.Type() must be an interface.
+//
+func emitTypeTest(f *Function, x Value, t types.Type, pos token.Pos) Value {
+ a := &TypeAssert{
+ X: x,
+ AssertedType: t,
+ CommaOk: true,
+ }
+ a.setPos(pos)
+ a.setType(types.NewTuple(
+ newVar("value", t),
+ varOk,
+ ))
+ return f.emit(a)
+}
+
+// emitTailCall emits to f a function call in tail position. The
+// caller is responsible for all fields of 'call' except its type.
+// Intended for wrapper methods.
+// Precondition: f does/will not use deferred procedure calls.
+// Postcondition: f.currentBlock is nil.
+//
+func emitTailCall(f *Function, call *Call) {
+ tresults := f.Signature.Results()
+ nr := tresults.Len()
+ if nr == 1 {
+ call.typ = tresults.At(0).Type()
+ } else {
+ call.typ = tresults
+ }
+ tuple := f.emit(call)
+ var ret Return
+ switch nr {
+ case 0:
+ // no-op
+ case 1:
+ ret.Results = []Value{tuple}
+ default:
+ for i := 0; i < nr; i++ {
+ v := emitExtract(f, tuple, i)
+ // TODO(adonovan): in principle, this is required:
+ // v = emitConv(f, o.Type, f.Signature.Results[i].Type)
+ // but in practice emitTailCall is only used when
+ // the types exactly match.
+ ret.Results = append(ret.Results, v)
+ }
+ }
+ f.emit(&ret)
+ f.currentBlock = nil
+}
+
+// emitImplicitSelections emits to f code to apply the sequence of
+// implicit field selections specified by indices to base value v, and
+// returns the selected value.
+//
+// If v is the address of a struct, the result will be the address of
+// a field; if it is the value of a struct, the result will be the
+// value of a field.
+//
+func emitImplicitSelections(f *Function, v Value, indices []int) Value {
+ for _, index := range indices {
+ fld := deref(v.Type()).Underlying().(*types.Struct).Field(index)
+
+ if isPointer(v.Type()) {
+ instr := &FieldAddr{
+ X: v,
+ Field: index,
+ }
+ instr.setType(types.NewPointer(fld.Type()))
+ v = f.emit(instr)
+ // Load the field's value iff indirectly embedded.
+ if isPointer(fld.Type()) {
+ v = emitLoad(f, v)
+ }
+ } else {
+ instr := &Field{
+ X: v,
+ Field: index,
+ }
+ instr.setType(fld.Type())
+ v = f.emit(instr)
+ }
+ }
+ return v
+}
+
+// emitFieldSelection emits to f code to select the index'th field of v.
+//
+// If wantAddr, the input must be a pointer-to-struct and the result
+// will be the field's address; otherwise the result will be the
+// field's value.
+// Ident id is used for position and debug info.
+//
+func emitFieldSelection(f *Function, v Value, index int, wantAddr bool, id *ast.Ident) Value {
+ fld := deref(v.Type()).Underlying().(*types.Struct).Field(index)
+ if isPointer(v.Type()) {
+ instr := &FieldAddr{
+ X: v,
+ Field: index,
+ }
+ instr.setPos(id.Pos())
+ instr.setType(types.NewPointer(fld.Type()))
+ v = f.emit(instr)
+ // Load the field's value iff we don't want its address.
+ if !wantAddr {
+ v = emitLoad(f, v)
+ }
+ } else {
+ instr := &Field{
+ X: v,
+ Field: index,
+ }
+ instr.setPos(id.Pos())
+ instr.setType(fld.Type())
+ v = f.emit(instr)
+ }
+ emitDebugRef(f, id, v, wantAddr)
+ return v
+}
+
+// zeroValue emits to f code to produce a zero value of type t,
+// and returns it.
+//
+func zeroValue(f *Function, t types.Type) Value {
+ switch t.Underlying().(type) {
+ case *types.Struct, *types.Array:
+ return emitLoad(f, f.addLocal(t, token.NoPos))
+ default:
+ return zeroConst(t)
+ }
+}
+
+// createRecoverBlock emits to f a block of code to return after a
+// recovered panic, and sets f.Recover to it.
+//
+// If f's result parameters are named, the code loads and returns
+// their current values, otherwise it returns the zero values of their
+// type.
+//
+// Idempotent.
+//
+func createRecoverBlock(f *Function) {
+ if f.Recover != nil {
+ return // already created
+ }
+ saved := f.currentBlock
+
+ f.Recover = f.newBasicBlock("recover")
+ f.currentBlock = f.Recover
+
+ var results []Value
+ if f.namedResults != nil {
+ // Reload NRPs to form value tuple.
+ for _, r := range f.namedResults {
+ results = append(results, emitLoad(f, r))
+ }
+ } else {
+ R := f.Signature.Results()
+ for i, n := 0, R.Len(); i < n; i++ {
+ T := R.At(i).Type()
+
+ // Return zero value of each result type.
+ results = append(results, zeroValue(f, T))
+ }
+ }
+ f.emit(&Return{Results: results})
+
+ f.currentBlock = saved
+}
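
The emit helpers are unexported, but the conversion rules they implement are visible at the source level. An editorial sketch of which instruction emitConv produces for a few representative conversions, derived from the case analysis above:

```go
package example

import "io"

type MyInt int

func conversions(i int, s string, rw io.ReadWriter) {
	_ = MyInt(i)          // ChangeType: value-preserving change of type
	var e interface{} = i // MakeInterface (preceded by needMethodsOf)
	var r io.Reader = rw  // ChangeInterface: interface to interface
	_ = []byte(s)         // Convert: representation-changing conversion
	var p *int = nil      // no instruction: a typed nil Const suffices
	_, _, _ = e, r, p
}
```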
diff --git a/vendor/honnef.co/go/tools/ssa/func.go b/vendor/honnef.co/go/tools/ssa/func.go
new file mode 100644
index 0000000000000..222eea64183f5
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/func.go
@@ -0,0 +1,765 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file implements the Function and BasicBlock types.
+
+import (
+ "bytes"
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+ "io"
+ "os"
+ "strings"
+)
+
+// addEdge adds a control-flow graph edge from from to to.
+func addEdge(from, to *BasicBlock) {
+ from.Succs = append(from.Succs, to)
+ to.Preds = append(to.Preds, from)
+}
+
+// Parent returns the function that contains block b.
+func (b *BasicBlock) Parent() *Function { return b.parent }
+
+// String returns a human-readable label of this block.
+// It is not guaranteed unique within the function.
+//
+func (b *BasicBlock) String() string {
+ return fmt.Sprintf("%d", b.Index)
+}
+
+// emit appends an instruction to the current basic block.
+// If the instruction defines a Value, it is returned.
+//
+func (b *BasicBlock) emit(i Instruction) Value {
+ i.setBlock(b)
+ b.Instrs = append(b.Instrs, i)
+ v, _ := i.(Value)
+ return v
+}
+
+// predIndex returns the i such that b.Preds[i] == c or panics if
+// there is none.
+func (b *BasicBlock) predIndex(c *BasicBlock) int {
+ for i, pred := range b.Preds {
+ if pred == c {
+ return i
+ }
+ }
+ panic(fmt.Sprintf("no edge %s -> %s", c, b))
+}
+
+// hasPhi returns true if b.Instrs contains φ-nodes.
+func (b *BasicBlock) hasPhi() bool {
+ _, ok := b.Instrs[0].(*Phi)
+ return ok
+}
+
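+// Phis returns the prefix of b.Instrs containing all the block's φ-nodes.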
+func (b *BasicBlock) Phis() []Instruction {
+ return b.phis()
+}
+
+// phis returns the prefix of b.Instrs containing all the block's φ-nodes.
+func (b *BasicBlock) phis() []Instruction {
+ for i, instr := range b.Instrs {
+ if _, ok := instr.(*Phi); !ok {
+ return b.Instrs[:i]
+ }
+ }
+ return nil // unreachable in well-formed blocks
+}
+
+// replacePred replaces all occurrences of p in b's predecessor list with q.
+// Ordinarily there should be at most one.
+//
+func (b *BasicBlock) replacePred(p, q *BasicBlock) {
+ for i, pred := range b.Preds {
+ if pred == p {
+ b.Preds[i] = q
+ }
+ }
+}
+
+// replaceSucc replaces all occurrences of p in b's successor list with q.
+// Ordinarily there should be at most one.
+//
+func (b *BasicBlock) replaceSucc(p, q *BasicBlock) {
+ for i, succ := range b.Succs {
+ if succ == p {
+ b.Succs[i] = q
+ }
+ }
+}
+
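+// RemovePred removes all occurrences of p in b's predecessor list and φ-nodes.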
+func (b *BasicBlock) RemovePred(p *BasicBlock) {
+ b.removePred(p)
+}
+
+// removePred removes all occurrences of p in b's
+// predecessor list and φ-nodes.
+// Ordinarily there should be at most one.
+//
+func (b *BasicBlock) removePred(p *BasicBlock) {
+ phis := b.phis()
+
+ // We must preserve edge order for φ-nodes.
+ j := 0
+ for i, pred := range b.Preds {
+ if pred != p {
+ b.Preds[j] = b.Preds[i]
+ // Strike out φ-edge too.
+ for _, instr := range phis {
+ phi := instr.(*Phi)
+ phi.Edges[j] = phi.Edges[i]
+ }
+ j++
+ }
+ }
+ // Nil out b.Preds[j:] and φ-edges[j:] to aid GC.
+ for i := j; i < len(b.Preds); i++ {
+ b.Preds[i] = nil
+ for _, instr := range phis {
+ instr.(*Phi).Edges[i] = nil
+ }
+ }
+ b.Preds = b.Preds[:j]
+ for _, instr := range phis {
+ phi := instr.(*Phi)
+ phi.Edges = phi.Edges[:j]
+ }
+}
+
+// Destinations associated with unlabelled for/switch/select stmts.
+// We push/pop one of these as we enter/leave each construct and for
+// each BranchStmt we scan for the innermost target of the right type.
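+//
+// For example, an unlabelled "break" inside a for loop branches to
+// the _break block of the innermost enclosing targets frame.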
+//
+type targets struct {
+ tail *targets // rest of stack
+ _break *BasicBlock
+ _continue *BasicBlock
+ _fallthrough *BasicBlock
+}
+
+// Destinations associated with a labelled block.
+// We populate these as labels are encountered in forward gotos or
+// labelled statements.
+//
+type lblock struct {
+ _goto *BasicBlock
+ _break *BasicBlock
+ _continue *BasicBlock
+}
+
+// labelledBlock returns the branch target associated with the
+// specified label, creating it if needed.
+//
+func (f *Function) labelledBlock(label *ast.Ident) *lblock {
+ lb := f.lblocks[label.Obj]
+ if lb == nil {
+ lb = &lblock{_goto: f.newBasicBlock(label.Name)}
+ if f.lblocks == nil {
+ f.lblocks = make(map[*ast.Object]*lblock)
+ }
+ f.lblocks[label.Obj] = lb
+ }
+ return lb
+}
+
+// addParam adds a (non-escaping) parameter to f.Params of the
+// specified name, type and source position.
+//
+func (f *Function) addParam(name string, typ types.Type, pos token.Pos) *Parameter {
+ v := &Parameter{
+ name: name,
+ typ: typ,
+ pos: pos,
+ parent: f,
+ }
+ f.Params = append(f.Params, v)
+ return v
+}
+
+func (f *Function) addParamObj(obj types.Object) *Parameter {
+ name := obj.Name()
+ if name == "" {
+ name = fmt.Sprintf("arg%d", len(f.Params))
+ }
+ param := f.addParam(name, obj.Type(), obj.Pos())
+ param.object = obj
+ return param
+}
+
+// addSpilledParam declares a parameter that is pre-spilled to the
+// stack; the function body will load/store the spilled location.
+// Subsequent lifting will eliminate spills where possible.
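+//
+// For example (a sketch), a parameter x of type int produces an
+// Alloc cell plus a Store of x into it; subsequent reads of x load
+// from the cell until lifting replaces them with SSA registers.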
+//
+func (f *Function) addSpilledParam(obj types.Object) {
+ param := f.addParamObj(obj)
+ spill := &Alloc{Comment: obj.Name()}
+ spill.setType(types.NewPointer(obj.Type()))
+ spill.setPos(obj.Pos())
+ f.objects[obj] = spill
+ f.Locals = append(f.Locals, spill)
+ f.emit(spill)
+ f.emit(&Store{Addr: spill, Val: param})
+}
+
+// startBody initializes the function prior to generating SSA code for its body.
+// Precondition: f.Type() already set.
+//
+func (f *Function) startBody() {
+ f.currentBlock = f.newBasicBlock("entry")
+ f.objects = make(map[types.Object]Value) // needed for some synthetics, e.g. init
+}
+
+// createSyntacticParams populates f.Params and generates code (spills
+// and named result locals) for all the parameters declared in the
+// syntax. In addition it populates the f.objects mapping.
+//
+// Preconditions:
+// f.startBody() was called.
+// Postcondition:
+// len(f.Params) == len(f.Signature.Params) + (f.Signature.Recv() ? 1 : 0)
+//
+func (f *Function) createSyntacticParams(recv *ast.FieldList, functype *ast.FuncType) {
+ // Receiver (at most one inner iteration).
+ if recv != nil {
+ for _, field := range recv.List {
+ for _, n := range field.Names {
+ f.addSpilledParam(f.Pkg.info.Defs[n])
+ }
+ // Anonymous receiver? No need to spill.
+ if field.Names == nil {
+ f.addParamObj(f.Signature.Recv())
+ }
+ }
+ }
+
+ // Parameters.
+ if functype.Params != nil {
+ n := len(f.Params) // 1 if has recv, 0 otherwise
+ for _, field := range functype.Params.List {
+ for _, n := range field.Names {
+ f.addSpilledParam(f.Pkg.info.Defs[n])
+ }
+ // Anonymous parameter? No need to spill.
+ if field.Names == nil {
+ f.addParamObj(f.Signature.Params().At(len(f.Params) - n))
+ }
+ }
+ }
+
+ // Named results.
+ if functype.Results != nil {
+ for _, field := range functype.Results.List {
+ // Implicit "var" decl of locals for named results.
+ for _, n := range field.Names {
+ f.namedResults = append(f.namedResults, f.addLocalForIdent(n))
+ }
+ }
+ }
+}
+
+// numberRegisters assigns numbers to all SSA registers
+// (value-defining Instructions) in f, to aid debugging.
+// (Non-Instruction Values are named at construction.)
+//
+func numberRegisters(f *Function) {
+ v := 0
+ for _, b := range f.Blocks {
+ for _, instr := range b.Instrs {
+ switch instr.(type) {
+ case Value:
+ instr.(interface {
+ setNum(int)
+ }).setNum(v)
+ v++
+ }
+ }
+ }
+}
+
+// buildReferrers populates the def/use information in all non-nil
+// Value.Referrers slices.
+// Precondition: all such slices are initially empty.
+func buildReferrers(f *Function) {
+ var rands []*Value
+ for _, b := range f.Blocks {
+ for _, instr := range b.Instrs {
+ rands = instr.Operands(rands[:0]) // recycle storage
+ for _, rand := range rands {
+ if r := *rand; r != nil {
+ if ref := r.Referrers(); ref != nil {
+ *ref = append(*ref, instr)
+ }
+ }
+ }
+ }
+ }
+}
+
+// finishBody() finalizes the function after SSA code generation of its body.
+func (f *Function) finishBody() {
+ f.objects = nil
+ f.currentBlock = nil
+ f.lblocks = nil
+
+ // Don't pin the AST in memory (except in debug mode).
+ if n := f.syntax; n != nil && !f.debugInfo() {
+ f.syntax = extentNode{n.Pos(), n.End()}
+ }
+
+ // Remove from f.Locals any Allocs that escape to the heap.
+ j := 0
+ for _, l := range f.Locals {
+ if !l.Heap {
+ f.Locals[j] = l
+ j++
+ }
+ }
+ // Nil out f.Locals[j:] to aid GC.
+ for i := j; i < len(f.Locals); i++ {
+ f.Locals[i] = nil
+ }
+ f.Locals = f.Locals[:j]
+
+ // comma-ok receiving from a time.Tick channel will never return
+ // ok == false, so any branching on the value of ok can be
+ // replaced with an unconditional jump. This will primarily match
+ // `for range time.Tick(x)` loops, but it can also match
+ // user-written code.
+ for _, block := range f.Blocks {
+ if len(block.Instrs) < 3 {
+ continue
+ }
+ if len(block.Succs) != 2 {
+ continue
+ }
+ var instrs []*Instruction
+ for i, ins := range block.Instrs {
+ if _, ok := ins.(*DebugRef); ok {
+ continue
+ }
+ instrs = append(instrs, &block.Instrs[i])
+ }
+
+ for i, ins := range instrs {
+ unop, ok := (*ins).(*UnOp)
+ if !ok || unop.Op != token.ARROW {
+ continue
+ }
+ call, ok := unop.X.(*Call)
+ if !ok {
+ continue
+ }
+ if call.Common().IsInvoke() {
+ continue
+ }
+
+ // OPT(dh): surely there is a more efficient way of doing
+ // this, than using FullName. We should already have
+ // resolved time.Tick somewhere?
+ v, ok := call.Common().Value.(*Function)
+ if !ok {
+ continue
+ }
+ t, ok := v.Object().(*types.Func)
+ if !ok {
+ continue
+ }
+ if t.FullName() != "time.Tick" {
+ continue
+ }
+ ex, ok := (*instrs[i+1]).(*Extract)
+ if !ok || ex.Tuple != unop || ex.Index != 1 {
+ continue
+ }
+
+ ifstmt, ok := (*instrs[i+2]).(*If)
+ if !ok || ifstmt.Cond != ex {
+ continue
+ }
+
+ *instrs[i+2] = NewJump(block)
+ succ := block.Succs[1]
+ block.Succs = block.Succs[0:1]
+ succ.RemovePred(block)
+ }
+ }
+
+ optimizeBlocks(f)
+
+ buildReferrers(f)
+
+ buildDomTree(f)
+
+ if f.Prog.mode&NaiveForm == 0 {
+ // For debugging pre-state of lifting pass:
+ // numberRegisters(f)
+ // f.WriteTo(os.Stderr)
+ lift(f)
+ }
+
+ f.namedResults = nil // (used by lifting)
+
+ numberRegisters(f)
+
+ if f.Prog.mode&PrintFunctions != 0 {
+ printMu.Lock()
+ f.WriteTo(os.Stdout)
+ printMu.Unlock()
+ }
+
+ if f.Prog.mode&SanityCheckFunctions != 0 {
+ mustSanityCheck(f, nil)
+ }
+}
+
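+// RemoveNilBlocks eliminates nils from f.Blocks and updates each BasicBlock.Index.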
+func (f *Function) RemoveNilBlocks() {
+ f.removeNilBlocks()
+}
+
+// removeNilBlocks eliminates nils from f.Blocks and updates each
+// BasicBlock.Index. Use this after any pass that may delete blocks.
+//
+func (f *Function) removeNilBlocks() {
+ j := 0
+ for _, b := range f.Blocks {
+ if b != nil {
+ b.Index = j
+ f.Blocks[j] = b
+ j++
+ }
+ }
+ // Nil out f.Blocks[j:] to aid GC.
+ for i := j; i < len(f.Blocks); i++ {
+ f.Blocks[i] = nil
+ }
+ f.Blocks = f.Blocks[:j]
+}
+
+// SetDebugMode sets the debug mode for package pkg. If true, all its
+// functions will include full debug info. This greatly increases the
+// size of the instruction stream, and causes Functions to depend upon
+// the ASTs, potentially keeping them live in memory for longer.
+//
+func (pkg *Package) SetDebugMode(debug bool) {
+ // TODO(adonovan): do we want ast.File granularity?
+ pkg.debug = debug
+}
+
+// debugInfo reports whether debug info is wanted for this function.
+func (f *Function) debugInfo() bool {
+ return f.Pkg != nil && f.Pkg.debug
+}
+
+// addNamedLocal creates a local variable, adds it to function f and
+// returns it. Its name and type are taken from obj. Subsequent
+// calls to f.lookup(obj) will return the same local.
+//
+func (f *Function) addNamedLocal(obj types.Object) *Alloc {
+ l := f.addLocal(obj.Type(), obj.Pos())
+ l.Comment = obj.Name()
+ f.objects[obj] = l
+ return l
+}
+
+func (f *Function) addLocalForIdent(id *ast.Ident) *Alloc {
+ return f.addNamedLocal(f.Pkg.info.Defs[id])
+}
+
+// addLocal creates an anonymous local variable of type typ, adds it
+// to function f and returns it. pos is the optional source location.
+//
+func (f *Function) addLocal(typ types.Type, pos token.Pos) *Alloc {
+ v := &Alloc{}
+ v.setType(types.NewPointer(typ))
+ v.setPos(pos)
+ f.Locals = append(f.Locals, v)
+ f.emit(v)
+ return v
+}
+
+// lookup returns the address of the named variable identified by obj
+// that is local to function f or one of its enclosing functions.
+// If escaping, the reference comes from a potentially escaping pointer
+// expression and the referent must be heap-allocated.
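+//
+// For example, a variable of an enclosing function that is
+// referenced from within a closure is reached through a FreeVar,
+// plumbed through each intervening function, and its Alloc is
+// marked as escaping to the heap.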
+//
+func (f *Function) lookup(obj types.Object, escaping bool) Value {
+ if v, ok := f.objects[obj]; ok {
+ if alloc, ok := v.(*Alloc); ok && escaping {
+ alloc.Heap = true
+ }
+ return v // function-local var (address)
+ }
+
+ // Definition must be in an enclosing function;
+ // plumb it through intervening closures.
+ if f.parent == nil {
+ panic("no ssa.Value for " + obj.String())
+ }
+ outer := f.parent.lookup(obj, true) // escaping
+ v := &FreeVar{
+ name: obj.Name(),
+ typ: outer.Type(),
+ pos: outer.Pos(),
+ outer: outer,
+ parent: f,
+ }
+ f.objects[obj] = v
+ f.FreeVars = append(f.FreeVars, v)
+ return v
+}
+
+// emit emits the specified instruction to function f.
+func (f *Function) emit(instr Instruction) Value {
+ return f.currentBlock.emit(instr)
+}
+
+// RelString returns the full name of this function, qualified by
+// package name, receiver type, etc.
+//
+// The specific formatting rules are not guaranteed and may change.
+//
+// Examples:
+// "math.IsNaN" // a package-level function
+// "(*bytes.Buffer).Bytes" // a declared method or a wrapper
+// "(*bytes.Buffer).Bytes$thunk" // thunk (func wrapping method; receiver is param 0)
+// "(*bytes.Buffer).Bytes$bound" // bound (func wrapping method; receiver supplied by closure)
+// "main.main$1" // an anonymous function in main
+// "main.init#1" // a declared init function
+// "main.init" // the synthesized package initializer
+//
+// When these functions are referred to from within the same package
+// (i.e. from == f.Pkg.Object), they are rendered without the package path.
+// For example: "IsNaN", "(*Buffer).Bytes", etc.
+//
+// All non-synthetic functions have distinct package-qualified names.
+// (But two methods may have the same name "(T).f" if one is a synthetic
+// wrapper promoting a non-exported method "f" from another package; in
+// that case, the strings are equal but the identifiers "f" are distinct.)
+//
+func (f *Function) RelString(from *types.Package) string {
+ // Anonymous?
+ if f.parent != nil {
+ // An anonymous function's Name() looks like "parentName$1",
+ // but its String() should include the type/package/etc.
+ parent := f.parent.RelString(from)
+ for i, anon := range f.parent.AnonFuncs {
+ if anon == f {
+ return fmt.Sprintf("%s$%d", parent, 1+i)
+ }
+ }
+
+ return f.name // should never happen
+ }
+
+ // Method (declared or wrapper)?
+ if recv := f.Signature.Recv(); recv != nil {
+ return f.relMethod(from, recv.Type())
+ }
+
+ // Thunk?
+ if f.method != nil {
+ return f.relMethod(from, f.method.Recv())
+ }
+
+ // Bound?
+ if len(f.FreeVars) == 1 && strings.HasSuffix(f.name, "$bound") {
+ return f.relMethod(from, f.FreeVars[0].Type())
+ }
+
+ // Package-level function?
+ // Prefix with package name for cross-package references only.
+ if p := f.pkg(); p != nil && p != from {
+ return fmt.Sprintf("%s.%s", p.Path(), f.name)
+ }
+
+ // Unknown.
+ return f.name
+}
+
+func (f *Function) relMethod(from *types.Package, recv types.Type) string {
+ return fmt.Sprintf("(%s).%s", relType(recv, from), f.name)
+}
+
+// writeSignature writes to buf the signature sig in declaration syntax.
+func writeSignature(buf *bytes.Buffer, from *types.Package, name string, sig *types.Signature, params []*Parameter) {
+ buf.WriteString("func ")
+ if recv := sig.Recv(); recv != nil {
+ buf.WriteString("(")
+ if n := params[0].Name(); n != "" {
+ buf.WriteString(n)
+ buf.WriteString(" ")
+ }
+ types.WriteType(buf, params[0].Type(), types.RelativeTo(from))
+ buf.WriteString(") ")
+ }
+ buf.WriteString(name)
+ types.WriteSignature(buf, sig, types.RelativeTo(from))
+}
+
+func (f *Function) pkg() *types.Package {
+ if f.Pkg != nil {
+ return f.Pkg.Pkg
+ }
+ return nil
+}
+
+var _ io.WriterTo = (*Function)(nil) // *Function implements io.WriterTo
+
+func (f *Function) WriteTo(w io.Writer) (int64, error) {
+ var buf bytes.Buffer
+ WriteFunction(&buf, f)
+ n, err := w.Write(buf.Bytes())
+ return int64(n), err
+}
+
+// WriteFunction writes to buf a human-readable "disassembly" of f.
+func WriteFunction(buf *bytes.Buffer, f *Function) {
+ fmt.Fprintf(buf, "# Name: %s\n", f.String())
+ if f.Pkg != nil {
+ fmt.Fprintf(buf, "# Package: %s\n", f.Pkg.Pkg.Path())
+ }
+ if syn := f.Synthetic; syn != "" {
+ fmt.Fprintln(buf, "# Synthetic:", syn)
+ }
+ if pos := f.Pos(); pos.IsValid() {
+ fmt.Fprintf(buf, "# Location: %s\n", f.Prog.Fset.Position(pos))
+ }
+
+ if f.parent != nil {
+ fmt.Fprintf(buf, "# Parent: %s\n", f.parent.Name())
+ }
+
+ if f.Recover != nil {
+ fmt.Fprintf(buf, "# Recover: %s\n", f.Recover)
+ }
+
+ from := f.pkg()
+
+ if f.FreeVars != nil {
+ buf.WriteString("# Free variables:\n")
+ for i, fv := range f.FreeVars {
+ fmt.Fprintf(buf, "# % 3d:\t%s %s\n", i, fv.Name(), relType(fv.Type(), from))
+ }
+ }
+
+ if len(f.Locals) > 0 {
+ buf.WriteString("# Locals:\n")
+ for i, l := range f.Locals {
+ fmt.Fprintf(buf, "# % 3d:\t%s %s\n", i, l.Name(), relType(deref(l.Type()), from))
+ }
+ }
+ writeSignature(buf, from, f.Name(), f.Signature, f.Params)
+ buf.WriteString(":\n")
+
+ if f.Blocks == nil {
+ buf.WriteString("\t(external)\n")
+ }
+
+ // NB. column calculations are confused by non-ASCII
+ // characters and assume 8-space tabs.
+ const punchcard = 80 // for old time's sake.
+ const tabwidth = 8
+ for _, b := range f.Blocks {
+ if b == nil {
+ // Corrupt CFG.
+ fmt.Fprintf(buf, ".nil:\n")
+ continue
+ }
+ n, _ := fmt.Fprintf(buf, "%d:", b.Index)
+ bmsg := fmt.Sprintf("%s P:%d S:%d", b.Comment, len(b.Preds), len(b.Succs))
+ fmt.Fprintf(buf, "%*s%s\n", punchcard-1-n-len(bmsg), "", bmsg)
+
+ if false { // CFG debugging
+ fmt.Fprintf(buf, "\t# CFG: %s --> %s --> %s\n", b.Preds, b, b.Succs)
+ }
+ for _, instr := range b.Instrs {
+ buf.WriteString("\t")
+ switch v := instr.(type) {
+ case Value:
+ l := punchcard - tabwidth
+ // Left-align the instruction.
+ if name := v.Name(); name != "" {
+ n, _ := fmt.Fprintf(buf, "%s = ", name)
+ l -= n
+ }
+ n, _ := buf.WriteString(instr.String())
+ l -= n
+ // Right-align the type if there's space.
+ if t := v.Type(); t != nil {
+ buf.WriteByte(' ')
+ ts := relType(t, from)
+ l -= len(ts) + len(" ") // (spaces before and after type)
+ if l > 0 {
+ fmt.Fprintf(buf, "%*s", l, "")
+ }
+ buf.WriteString(ts)
+ }
+ case nil:
+ // Be robust against bad transforms.
+ buf.WriteString("<deleted>")
+ default:
+ buf.WriteString(instr.String())
+ }
+ buf.WriteString("\n")
+ }
+ }
+ fmt.Fprintf(buf, "\n")
+}
+
+// newBasicBlock adds to f a new basic block and returns it. It does
+// not automatically become the current block for subsequent calls to emit.
+// comment is an optional string for more readable debugging output.
+//
+func (f *Function) newBasicBlock(comment string) *BasicBlock {
+ b := &BasicBlock{
+ Index: len(f.Blocks),
+ Comment: comment,
+ parent: f,
+ }
+ b.Succs = b.succs2[:0]
+ f.Blocks = append(f.Blocks, b)
+ return b
+}
+
+// NewFunction returns a new synthetic Function instance belonging to
+// prog, with its name and signature fields set as specified.
+//
+// The caller is responsible for initializing the remaining fields of
+// the function object, e.g. Pkg, Params, Blocks.
+//
+// It is practically impossible for clients to construct well-formed
+// SSA functions/packages/programs directly, so we assume this is the
+// job of the Builder alone. NewFunction exists to provide clients a
+// little flexibility. For example, analysis tools may wish to
+// construct fake Functions for the root of the callgraph, a fake
+// "reflect" package, etc.
+//
+// TODO(adonovan): think harder about the API here.
+//
+func (prog *Program) NewFunction(name string, sig *types.Signature, provenance string) *Function {
+ return &Function{Prog: prog, name: name, Signature: sig, Synthetic: provenance}
+}
+
+type extentNode [2]token.Pos
+
+func (n extentNode) Pos() token.Pos { return n[0] }
+func (n extentNode) End() token.Pos { return n[1] }
+
+// Syntax returns an ast.Node whose Pos/End methods provide the
+// lexical extent of the function if it was defined by Go source code
+// (f.Synthetic==""), or nil otherwise.
+//
+// If f was built with debug information (see Package.SetDebugMode),
+// the result is the *ast.FuncDecl or *ast.FuncLit that declared the
+// function. Otherwise, it is an opaque Node providing only position
+// information; this avoids pinning the AST in memory.
+//
+func (f *Function) Syntax() ast.Node { return f.syntax }
diff --git a/vendor/honnef.co/go/tools/ssa/identical.go b/vendor/honnef.co/go/tools/ssa/identical.go
new file mode 100644
index 0000000000000..53cbee107b6ba
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/identical.go
@@ -0,0 +1,7 @@
+// +build go1.8
+
+package ssa
+
+import "go/types"
+
+var structTypesIdentical = types.IdenticalIgnoreTags
diff --git a/vendor/honnef.co/go/tools/ssa/identical_17.go b/vendor/honnef.co/go/tools/ssa/identical_17.go
new file mode 100644
index 0000000000000..da89d3339a5d5
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/identical_17.go
@@ -0,0 +1,7 @@
+// +build !go1.8
+
+package ssa
+
+import "go/types"
+
+var structTypesIdentical = types.Identical
diff --git a/vendor/honnef.co/go/tools/ssa/lift.go b/vendor/honnef.co/go/tools/ssa/lift.go
new file mode 100644
index 0000000000000..531358fa3bb2b
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/lift.go
@@ -0,0 +1,657 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines the lifting pass which tries to "lift" Alloc
+// cells (new/local variables) into SSA registers, replacing loads
+// with the dominating stored value, eliminating loads and stores, and
+// inserting φ-nodes as needed.
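+//
+// For example (a minimal sketch of the effect), a body containing
+//
+//	t0 = local int (x)   // Alloc
+//	*t0 = 1              // Store
+//	t1 = *t0             // Load
+//
+// is rewritten so that uses of t1 refer directly to the stored value
+// 1, and the Alloc, Store and Load are all deleted.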
+
+// Cited papers and resources:
+//
+// Ron Cytron et al. 1991. Efficiently computing SSA form...
+// http://doi.acm.org/10.1145/115372.115320
+//
+// Cooper, Harvey, Kennedy. 2001. A Simple, Fast Dominance Algorithm.
+// Software Practice and Experience 2001, 4:1-10.
+// http://www.hipersoft.rice.edu/grads/publications/dom14.pdf
+//
+// Daniel Berlin, llvmdev mailing list, 2012.
+// http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-January/046638.html
+// (Be sure to expand the whole thread.)
+
+// TODO(adonovan): opt: there are many optimizations worth evaluating, and
+// the conventional wisdom for SSA construction is that a simple
+// algorithm well engineered often beats those of better asymptotic
+// complexity on all but the most egregious inputs.
+//
+// Danny Berlin suggests that the Cooper et al. algorithm for
+// computing the dominance frontier is superior to Cytron et al.
+// Furthermore he recommends that rather than computing the DF for the
+// whole function then renaming all alloc cells, it may be cheaper to
+// compute the DF for each alloc cell separately and throw it away.
+//
+// Consider exploiting liveness information to avoid creating dead
+// φ-nodes which we then immediately remove.
+//
+// Also see many other "TODO: opt" suggestions in the code.
+
+import (
+ "fmt"
+ "go/token"
+ "go/types"
+ "math/big"
+ "os"
+)
+
+// If true, show diagnostic information at each step of lifting.
+// Very verbose.
+const debugLifting = false
+
+// domFrontier maps each block to the set of blocks in its dominance
+// frontier. The outer slice is conceptually a map keyed by
+// Block.Index. The inner slice is conceptually a set, possibly
+// containing duplicates.
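+//
+// For example (an illustrative sketch), in a diamond-shaped CFG
+// entry -> {then, else} -> join, the dominance frontier of both then
+// and else is {join}: each dominates a predecessor of join but does
+// not dominate join itself.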
+//
+// TODO(adonovan): opt: measure impact of dups; consider a packed bit
+// representation, e.g. big.Int, and bitwise parallel operations for
+// the union step in the Children loop.
+//
+// domFrontier's methods mutate the slice's elements but not its
+// length, so their receivers needn't be pointers.
+//
+type domFrontier [][]*BasicBlock
+
+func (df domFrontier) add(u, v *BasicBlock) {
+ p := &df[u.Index]
+ *p = append(*p, v)
+}
+
+// build builds the dominance frontier df for the dominator (sub)tree
+// rooted at u, using the Cytron et al. algorithm.
+//
+// TODO(adonovan): opt: consider Berlin approach, computing pruned SSA
+// by pruning the entire IDF computation, rather than merely pruning
+// the DF -> IDF step.
+func (df domFrontier) build(u *BasicBlock) {
+ // Encounter each node u in postorder of dom tree.
+ for _, child := range u.dom.children {
+ df.build(child)
+ }
+ for _, vb := range u.Succs {
+ if v := vb.dom; v.idom != u {
+ df.add(u, vb)
+ }
+ }
+ for _, w := range u.dom.children {
+ for _, vb := range df[w.Index] {
+ // TODO(adonovan): opt: use word-parallel bitwise union.
+ if v := vb.dom; v.idom != u {
+ df.add(u, vb)
+ }
+ }
+ }
+}
+
+func buildDomFrontier(fn *Function) domFrontier {
+ df := make(domFrontier, len(fn.Blocks))
+ df.build(fn.Blocks[0])
+ if fn.Recover != nil {
+ df.build(fn.Recover)
+ }
+ return df
+}
+
+func removeInstr(refs []Instruction, instr Instruction) []Instruction {
+ i := 0
+ for _, ref := range refs {
+ if ref == instr {
+ continue
+ }
+ refs[i] = ref
+ i++
+ }
+ for j := i; j != len(refs); j++ {
+ refs[j] = nil // aid GC
+ }
+ return refs[:i]
+}
+
+// lift replaces local and new Allocs accessed only with
+// load/store by SSA registers, inserting φ-nodes where necessary.
+// The result is a program in classical pruned SSA form.
+//
+// Preconditions:
+// - fn has no dead blocks (blockopt has run).
+// - Def/use info (Operands and Referrers) is up-to-date.
+// - The dominator tree is up-to-date.
+//
+func lift(fn *Function) {
+ // TODO(adonovan): opt: lots of little optimizations may be
+ // worthwhile here, especially if they cause us to avoid
+ // buildDomFrontier. For example:
+ //
+ // - Alloc never loaded? Eliminate.
+ // - Alloc never stored? Replace all loads with a zero constant.
+ // - Alloc stored once? Replace loads with dominating store;
+ // don't forget that an Alloc is itself an effective store
+ // of zero.
+ // - Alloc used only within a single block?
+ // Use degenerate algorithm avoiding φ-nodes.
+ // - Consider synergy with scalar replacement of aggregates (SRA).
+ // e.g. *(&x.f) where x is an Alloc.
+ // Perhaps we'd get better results if we generated this as x.f
+ // i.e. Field(x, .f) instead of Load(FieldIndex(x, .f)).
+ // Unclear.
+ //
+ // But we will start with the simplest correct code.
+ df := buildDomFrontier(fn)
+
+ if debugLifting {
+ title := false
+ for i, blocks := range df {
+ if blocks != nil {
+ if !title {
+ fmt.Fprintf(os.Stderr, "Dominance frontier of %s:\n", fn)
+ title = true
+ }
+ fmt.Fprintf(os.Stderr, "\t%s: %s\n", fn.Blocks[i], blocks)
+ }
+ }
+ }
+
+ newPhis := make(newPhiMap)
+
+ // During this pass we will replace some BasicBlock.Instrs
+ // (allocs, loads and stores) with nil, keeping a count in
+ // BasicBlock.gaps. At the end we will reset Instrs to the
+ // concatenation of all non-dead newPhis and non-nil Instrs
+ // for the block, reusing the original array if space permits.
+
+ // While we're here, we also eliminate 'rundefers'
+ // instructions in functions that contain no 'defer'
+ // instructions.
+ usesDefer := false
+
+ // A counter used to generate ~unique ids for Phi nodes, as an
+ // aid to debugging. We use large numbers to make them highly
+ // visible. All nodes are renumbered later.
+ fresh := 1000
+
+ // Determine which allocs we can lift and number them densely.
+ // The renaming phase uses this numbering for compact maps.
+ numAllocs := 0
+ for _, b := range fn.Blocks {
+ b.gaps = 0
+ b.rundefers = 0
+ for _, instr := range b.Instrs {
+ switch instr := instr.(type) {
+ case *Alloc:
+ index := -1
+ if liftAlloc(df, instr, newPhis, &fresh) {
+ index = numAllocs
+ numAllocs++
+ }
+ instr.index = index
+ case *Defer:
+ usesDefer = true
+ case *RunDefers:
+ b.rundefers++
+ }
+ }
+ }
+
+ // renaming maps an alloc (keyed by index) to its replacement
+ // value. Initially the renaming contains nil, signifying the
+ // zero constant of the appropriate type; we construct the
+ // Const lazily at most once on each path through the domtree.
+ // TODO(adonovan): opt: cache per-function not per subtree.
+ renaming := make([]Value, numAllocs)
+
+ // Renaming.
+ rename(fn.Blocks[0], renaming, newPhis)
+
+ // Eliminate dead φ-nodes.
+ removeDeadPhis(fn.Blocks, newPhis)
+
+ // Prepend remaining live φ-nodes to each block.
+ for _, b := range fn.Blocks {
+ nps := newPhis[b]
+ j := len(nps)
+
+ rundefersToKill := b.rundefers
+ if usesDefer {
+ rundefersToKill = 0
+ }
+
+ if j+b.gaps+rundefersToKill == 0 {
+ continue // fast path: no new phis or gaps
+ }
+
+ // Compact nps + non-nil Instrs into a new slice.
+ // TODO(adonovan): opt: compact in situ (rightwards)
+ // if Instrs has sufficient space or slack.
+ dst := make([]Instruction, len(b.Instrs)+j-b.gaps-rundefersToKill)
+ for i, np := range nps {
+ dst[i] = np.phi
+ }
+ for _, instr := range b.Instrs {
+ if instr == nil {
+ continue
+ }
+ if !usesDefer {
+ if _, ok := instr.(*RunDefers); ok {
+ continue
+ }
+ }
+ dst[j] = instr
+ j++
+ }
+ b.Instrs = dst
+ }
+
+ // Remove any fn.Locals that were lifted.
+ j := 0
+ for _, l := range fn.Locals {
+ if l.index < 0 {
+ fn.Locals[j] = l
+ j++
+ }
+ }
+ // Nil out fn.Locals[j:] to aid GC.
+ for i := j; i < len(fn.Locals); i++ {
+ fn.Locals[i] = nil
+ }
+ fn.Locals = fn.Locals[:j]
+}
+
+// removeDeadPhis removes φ-nodes not transitively needed by a
+// non-Phi, non-DebugRef instruction.
+func removeDeadPhis(blocks []*BasicBlock, newPhis newPhiMap) {
+ // First pass: find the set of "live" φ-nodes: those reachable
+ // from some non-Phi instruction.
+ //
+ // We compute reachability in reverse, starting from each φ,
+ // rather than forwards, starting from each live non-Phi
+ // instruction, because this way visits much less of the
+ // Value graph.
+ livePhis := make(map[*Phi]bool)
+ for _, npList := range newPhis {
+ for _, np := range npList {
+ phi := np.phi
+ if !livePhis[phi] && phiHasDirectReferrer(phi) {
+ markLivePhi(livePhis, phi)
+ }
+ }
+ }
+
+ // Existing φ-nodes due to && and || operators
+ // are all considered live (see Go issue 19622).
+ for _, b := range blocks {
+ for _, phi := range b.phis() {
+ markLivePhi(livePhis, phi.(*Phi))
+ }
+ }
+
+ // Second pass: eliminate unused phis from newPhis.
+ for block, npList := range newPhis {
+ j := 0
+ for _, np := range npList {
+ if livePhis[np.phi] {
+ npList[j] = np
+ j++
+ } else {
+ // discard it, first removing it from referrers
+ for _, val := range np.phi.Edges {
+ if refs := val.Referrers(); refs != nil {
+ *refs = removeInstr(*refs, np.phi)
+ }
+ }
+ np.phi.block = nil
+ }
+ }
+ newPhis[block] = npList[:j]
+ }
+}
+
+// markLivePhi marks phi, and all φ-nodes transitively reachable via
+// its Operands, live.
+func markLivePhi(livePhis map[*Phi]bool, phi *Phi) {
+ livePhis[phi] = true
+ for _, rand := range phi.Operands(nil) {
+ if q, ok := (*rand).(*Phi); ok {
+ if !livePhis[q] {
+ markLivePhi(livePhis, q)
+ }
+ }
+ }
+}
+
+// phiHasDirectReferrer reports whether phi is directly referred to by
+// a non-Phi instruction. Such instructions are the
+// roots of the liveness traversal.
+func phiHasDirectReferrer(phi *Phi) bool {
+ for _, instr := range *phi.Referrers() {
+ if _, ok := instr.(*Phi); !ok {
+ return true
+ }
+ }
+ return false
+}
+
+type BlockSet struct{ big.Int } // (inherit methods from Int)
+
+// Add adds b to the set and returns true if the set changed.
+func (s *BlockSet) Add(b *BasicBlock) bool {
+ i := b.Index
+ if s.Bit(i) != 0 {
+ return false
+ }
+ s.SetBit(&s.Int, i, 1)
+ return true
+}
+
+func (s *BlockSet) Has(b *BasicBlock) bool {
+ return s.Bit(b.Index) == 1
+}
+
+// Take removes an arbitrary element from the set s and
+// returns its index, or -1 if the set is empty.
+func (s *BlockSet) Take() int {
+ l := s.BitLen()
+ for i := 0; i < l; i++ {
+ if s.Bit(i) == 1 {
+ s.SetBit(&s.Int, i, 0)
+ return i
+ }
+ }
+ return -1
+}
+
+// newPhi is a pair of a newly introduced φ-node and the lifted Alloc
+// it replaces.
+type newPhi struct {
+ phi *Phi
+ alloc *Alloc
+}
+
+// newPhiMap records for each basic block, the set of newPhis that
+// must be prepended to the block.
+type newPhiMap map[*BasicBlock][]newPhi
+
+// liftAlloc determines whether alloc can be lifted into registers,
+// and if so, it populates newPhis with all the φ-nodes it may require
+// and returns true.
+//
+// fresh is a source of fresh ids for phi nodes.
+//
+func liftAlloc(df domFrontier, alloc *Alloc, newPhis newPhiMap, fresh *int) bool {
+ // Don't lift aggregates into registers, because we don't have
+ // a way to express their zero-constants.
+ switch deref(alloc.Type()).Underlying().(type) {
+ case *types.Array, *types.Struct:
+ return false
+ }
+
+ // Don't lift named return values in functions that defer
+ // calls that may recover from panic.
+ if fn := alloc.Parent(); fn.Recover != nil {
+ for _, nr := range fn.namedResults {
+ if nr == alloc {
+ return false
+ }
+ }
+ }
+
+ // Compute defblocks, the set of blocks containing a
+ // definition of the alloc cell.
+ var defblocks BlockSet
+ for _, instr := range *alloc.Referrers() {
+ // Bail out if we discover the alloc is not liftable;
+ // the only operations permitted to use the alloc are
+ // loads/stores into the cell, and DebugRef.
+ switch instr := instr.(type) {
+ case *Store:
+ if instr.Val == alloc {
+ return false // address used as value
+ }
+ if instr.Addr != alloc {
+ panic("Alloc.Referrers is inconsistent")
+ }
+ defblocks.Add(instr.Block())
+ case *UnOp:
+ if instr.Op != token.MUL {
+ return false // not a load
+ }
+ if instr.X != alloc {
+ panic("Alloc.Referrers is inconsistent")
+ }
+ case *DebugRef:
+ // ok
+ default:
+ return false // some other instruction
+ }
+ }
+ // The Alloc itself counts as a (zero) definition of the cell.
+ defblocks.Add(alloc.Block())
+
+ if debugLifting {
+ fmt.Fprintln(os.Stderr, "\tlifting ", alloc, alloc.Name())
+ }
+
+ fn := alloc.Parent()
+
+ // Φ-insertion.
+ //
+ // What follows is the body of the main loop of the insert-φ
+ // function described by Cytron et al, but instead of using
+ // counter tricks, we just reset the 'hasAlready' and 'work'
+ // sets each iteration. These are bitmaps so it's pretty cheap.
+ //
+ // TODO(adonovan): opt: recycle slice storage for W,
+ // hasAlready, defBlocks across liftAlloc calls.
+ var hasAlready BlockSet
+
+ // Initialize W and work to defblocks.
+ var work BlockSet = defblocks // blocks seen
+ var W BlockSet // blocks to do
+ W.Set(&defblocks.Int)
+
+ // Traverse iterated dominance frontier, inserting φ-nodes.
+ for i := W.Take(); i != -1; i = W.Take() {
+ u := fn.Blocks[i]
+ for _, v := range df[u.Index] {
+ if hasAlready.Add(v) {
+ // Create φ-node.
+ // It will be prepended to v.Instrs later, if needed.
+ phi := &Phi{
+ Edges: make([]Value, len(v.Preds)),
+ Comment: alloc.Comment,
+ }
+ // This is merely a debugging aid:
+ phi.setNum(*fresh)
+ *fresh++
+
+ phi.pos = alloc.Pos()
+ phi.setType(deref(alloc.Type()))
+ phi.block = v
+ if debugLifting {
+ fmt.Fprintf(os.Stderr, "\tplace %s = %s at block %s\n", phi.Name(), phi, v)
+ }
+ newPhis[v] = append(newPhis[v], newPhi{phi, alloc})
+
+ if work.Add(v) {
+ W.Add(v)
+ }
+ }
+ }
+ }
+
+ return true
+}
+
+// replaceAll replaces all intraprocedural uses of x with y,
+// updating x.Referrers and y.Referrers.
+// Precondition: x.Referrers() != nil, i.e. x must be local to some function.
+//
+func replaceAll(x, y Value) {
+ var rands []*Value
+ pxrefs := x.Referrers()
+ pyrefs := y.Referrers()
+ for _, instr := range *pxrefs {
+ rands = instr.Operands(rands[:0]) // recycle storage
+ for _, rand := range rands {
+ if *rand != nil {
+ if *rand == x {
+ *rand = y
+ }
+ }
+ }
+ if pyrefs != nil {
+ *pyrefs = append(*pyrefs, instr) // dups ok
+ }
+ }
+ *pxrefs = nil // x is now unreferenced
+}
+
+// renamed returns the value to which alloc is being renamed,
+// constructing it lazily if it's the implicit zero initialization.
+//
+func renamed(renaming []Value, alloc *Alloc) Value {
+ v := renaming[alloc.index]
+ if v == nil {
+ v = zeroConst(deref(alloc.Type()))
+ renaming[alloc.index] = v
+ }
+ return v
+}
+
+// rename implements the (Cytron et al) SSA renaming algorithm, a
+// preorder traversal of the dominator tree replacing all loads of
+// Alloc cells with the value stored to that cell by the dominating
+// store instruction. For lifting, we need only consider loads,
+// stores and φ-nodes.
+//
+// renaming is a map from *Alloc (keyed by index number) to its
+// dominating stored value; newPhis[x] is the set of new φ-nodes to be
+// prepended to block x.
+//
+func rename(u *BasicBlock, renaming []Value, newPhis newPhiMap) {
+ // Each φ-node becomes the new name for its associated Alloc.
+ for _, np := range newPhis[u] {
+ phi := np.phi
+ alloc := np.alloc
+ renaming[alloc.index] = phi
+ }
+
+ // Rename loads and stores of allocs.
+ for i, instr := range u.Instrs {
+ switch instr := instr.(type) {
+ case *Alloc:
+ if instr.index >= 0 { // store of zero to Alloc cell
+ // Replace dominated loads by the zero value.
+ renaming[instr.index] = nil
+ if debugLifting {
+ fmt.Fprintf(os.Stderr, "\tkill alloc %s\n", instr)
+ }
+ // Delete the Alloc.
+ u.Instrs[i] = nil
+ u.gaps++
+ }
+
+ case *Store:
+ if alloc, ok := instr.Addr.(*Alloc); ok && alloc.index >= 0 { // store to Alloc cell
+ // Replace dominated loads by the stored value.
+ renaming[alloc.index] = instr.Val
+ if debugLifting {
+ fmt.Fprintf(os.Stderr, "\tkill store %s; new value: %s\n",
+ instr, instr.Val.Name())
+ }
+ // Remove the store from the referrer list of the stored value.
+ if refs := instr.Val.Referrers(); refs != nil {
+ *refs = removeInstr(*refs, instr)
+ }
+ // Delete the Store.
+ u.Instrs[i] = nil
+ u.gaps++
+ }
+
+ case *UnOp:
+ if instr.Op == token.MUL {
+ if alloc, ok := instr.X.(*Alloc); ok && alloc.index >= 0 { // load of Alloc cell
+ newval := renamed(renaming, alloc)
+ if debugLifting {
+ fmt.Fprintf(os.Stderr, "\tupdate load %s = %s with %s\n",
+ instr.Name(), instr, newval.Name())
+ }
+ // Replace all references to
+ // the loaded value by the
+ // dominating stored value.
+ replaceAll(instr, newval)
+ // Delete the Load.
+ u.Instrs[i] = nil
+ u.gaps++
+ }
+ }
+
+ case *DebugRef:
+ if alloc, ok := instr.X.(*Alloc); ok && alloc.index >= 0 { // ref of Alloc cell
+ if instr.IsAddr {
+ instr.X = renamed(renaming, alloc)
+ instr.IsAddr = false
+
+ // Add DebugRef to instr.X's referrers.
+ if refs := instr.X.Referrers(); refs != nil {
+ *refs = append(*refs, instr)
+ }
+ } else {
+ // A source expression denotes the address
+ // of an Alloc that was optimized away.
+ instr.X = nil
+
+ // Delete the DebugRef.
+ u.Instrs[i] = nil
+ u.gaps++
+ }
+ }
+ }
+ }
+
+ // For each φ-node in a CFG successor, rename the edge.
+ for _, v := range u.Succs {
+ phis := newPhis[v]
+ if len(phis) == 0 {
+ continue
+ }
+ i := v.predIndex(u)
+ for _, np := range phis {
+ phi := np.phi
+ alloc := np.alloc
+ newval := renamed(renaming, alloc)
+ if debugLifting {
+ fmt.Fprintf(os.Stderr, "\tsetphi %s edge %s -> %s (#%d) (alloc=%s) := %s\n",
+ phi.Name(), u, v, i, alloc.Name(), newval.Name())
+ }
+ phi.Edges[i] = newval
+ if prefs := newval.Referrers(); prefs != nil {
+ *prefs = append(*prefs, phi)
+ }
+ }
+ }
+
+ // Continue depth-first recursion over domtree, pushing a
+ // fresh copy of the renaming map for each subtree.
+ for i, v := range u.dom.children {
+ r := renaming
+ if i < len(u.dom.children)-1 {
+ // On all but the final iteration, we must make
+ // a copy to avoid destructive update.
+ r = make([]Value, len(renaming))
+ copy(r, renaming)
+ }
+ rename(v, r, newPhis)
+ }
+}
diff --git a/vendor/honnef.co/go/tools/ssa/lvalue.go b/vendor/honnef.co/go/tools/ssa/lvalue.go
new file mode 100644
index 0000000000000..eb5d71e188fb1
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/lvalue.go
@@ -0,0 +1,123 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// lvalues are the union of addressable expressions and map-index
+// expressions.
+
+import (
+ "go/ast"
+ "go/token"
+ "go/types"
+)
+
+// An lvalue represents an assignable location that may appear on the
+// left-hand side of an assignment. This is a generalization of a
+// pointer to permit updates to elements of maps.
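+//
+// For example, in "x = 1" the location is an *address; in
+// "m[k] = 1" it is an *element, which is assignable but not
+// addressable; and in "_ = f()" it is a blank.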
+//
+type lvalue interface {
+ store(fn *Function, v Value) // stores v into the location
+ load(fn *Function) Value // loads the contents of the location
+ address(fn *Function) Value // address of the location
+ typ() types.Type // returns the type of the location
+}
+
+// An address is an lvalue represented by a true pointer.
+type address struct {
+ addr Value
+ pos token.Pos // source position
+ expr ast.Expr // source syntax of the value (not address) [debug mode]
+}
+
+func (a *address) load(fn *Function) Value {
+ load := emitLoad(fn, a.addr)
+ load.pos = a.pos
+ return load
+}
+
+func (a *address) store(fn *Function, v Value) {
+ store := emitStore(fn, a.addr, v, a.pos)
+ if a.expr != nil {
+ // store.Val is v, converted for assignability.
+ emitDebugRef(fn, a.expr, store.Val, false)
+ }
+}
+
+func (a *address) address(fn *Function) Value {
+ if a.expr != nil {
+ emitDebugRef(fn, a.expr, a.addr, true)
+ }
+ return a.addr
+}
+
+func (a *address) typ() types.Type {
+ return deref(a.addr.Type())
+}
+
+// An element is an lvalue represented by m[k], the location of an
+// element of a map or string. These locations are not addressable
+// since pointers cannot be formed from them, but they do support
+// load(), and in the case of maps, store().
+//
+type element struct {
+ m, k Value // map or string
+ t types.Type // map element type or string byte type
+ pos token.Pos // source position of colon ({k:v}) or lbrack (m[k]=v)
+}
+
+func (e *element) load(fn *Function) Value {
+ l := &Lookup{
+ X: e.m,
+ Index: e.k,
+ }
+ l.setPos(e.pos)
+ l.setType(e.t)
+ return fn.emit(l)
+}
+
+func (e *element) store(fn *Function, v Value) {
+ up := &MapUpdate{
+ Map: e.m,
+ Key: e.k,
+ Value: emitConv(fn, v, e.t),
+ }
+ up.pos = e.pos
+ fn.emit(up)
+}
+
+func (e *element) address(fn *Function) Value {
+ panic("map/string elements are not addressable")
+}
+
+func (e *element) typ() types.Type {
+ return e.t
+}
+
+// A blank is a dummy variable whose name is "_".
+// It is not reified: loads are illegal and stores are ignored.
+//
+type blank struct{}
+
+func (bl blank) load(fn *Function) Value {
+ panic("blank.load is illegal")
+}
+
+func (bl blank) store(fn *Function, v Value) {
+ s := &BlankStore{
+ Val: v,
+ }
+ fn.emit(s)
+}
+
+func (bl blank) address(fn *Function) Value {
+ panic("blank var is not addressable")
+}
+
+func (bl blank) typ() types.Type {
+ // This should be the type of the blank Ident; the typechecker
+ // doesn't provide this yet, but fortunately, we don't need it
+ // yet either.
+ panic("blank.typ is unimplemented")
+}
diff --git a/vendor/honnef.co/go/tools/ssa/methods.go b/vendor/honnef.co/go/tools/ssa/methods.go
new file mode 100644
index 0000000000000..9cf383916bbea
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/methods.go
@@ -0,0 +1,239 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines utilities for population of method sets.
+
+import (
+ "fmt"
+ "go/types"
+)
+
+// MethodValue returns the Function implementing method sel, building
+// wrapper methods on demand. It returns nil if sel denotes an
+// abstract (interface) method.
+//
+// Precondition: sel.Kind() == MethodVal.
+//
+// Thread-safe.
+//
+// EXCLUSIVE_LOCKS_ACQUIRED(prog.methodsMu)
+//
+func (prog *Program) MethodValue(sel *types.Selection) *Function {
+ if sel.Kind() != types.MethodVal {
+ panic(fmt.Sprintf("MethodValue(%s) kind != MethodVal", sel))
+ }
+ T := sel.Recv()
+ if isInterface(T) {
+ return nil // abstract method
+ }
+ if prog.mode&LogSource != 0 {
+ defer logStack("MethodValue %s %v", T, sel)()
+ }
+
+ prog.methodsMu.Lock()
+ defer prog.methodsMu.Unlock()
+
+ return prog.addMethod(prog.createMethodSet(T), sel)
+}
+
+// LookupMethod returns the implementation of the method of type T
+// identified by (pkg, name). It returns nil if the method exists but
+// is abstract, and panics if T has no such method.
+//
+func (prog *Program) LookupMethod(T types.Type, pkg *types.Package, name string) *Function {
+ sel := prog.MethodSets.MethodSet(T).Lookup(pkg, name)
+ if sel == nil {
+ panic(fmt.Sprintf("%s has no method %s", T, types.Id(pkg, name)))
+ }
+ return prog.MethodValue(sel)
+}
+
+// methodSet contains the (concrete) methods of a non-interface type.
+type methodSet struct {
+ mapping map[string]*Function // populated lazily
+ complete bool // mapping contains all methods
+}
+
+// Precondition: !isInterface(T).
+// EXCLUSIVE_LOCKS_REQUIRED(prog.methodsMu)
+func (prog *Program) createMethodSet(T types.Type) *methodSet {
+ mset, ok := prog.methodSets.At(T).(*methodSet)
+ if !ok {
+ mset = &methodSet{mapping: make(map[string]*Function)}
+ prog.methodSets.Set(T, mset)
+ }
+ return mset
+}
+
+// EXCLUSIVE_LOCKS_REQUIRED(prog.methodsMu)
+func (prog *Program) addMethod(mset *methodSet, sel *types.Selection) *Function {
+ if sel.Kind() == types.MethodExpr {
+ panic(sel)
+ }
+ id := sel.Obj().Id()
+ fn := mset.mapping[id]
+ if fn == nil {
+ obj := sel.Obj().(*types.Func)
+
+ needsPromotion := len(sel.Index()) > 1
+ needsIndirection := !isPointer(recvType(obj)) && isPointer(sel.Recv())
+ if needsPromotion || needsIndirection {
+ fn = makeWrapper(prog, sel)
+ } else {
+ fn = prog.declaredFunc(obj)
+ }
+ if fn.Signature.Recv() == nil {
+ panic(fn) // missing receiver
+ }
+ mset.mapping[id] = fn
+ }
+ return fn
+}
+
+// RuntimeTypes returns a new unordered slice containing all
+// concrete types in the program for which a complete (non-empty)
+// method set is required at run-time.
+//
+// Thread-safe.
+//
+// EXCLUSIVE_LOCKS_ACQUIRED(prog.methodsMu)
+//
+func (prog *Program) RuntimeTypes() []types.Type {
+ prog.methodsMu.Lock()
+ defer prog.methodsMu.Unlock()
+
+ var res []types.Type
+ prog.methodSets.Iterate(func(T types.Type, v interface{}) {
+ if v.(*methodSet).complete {
+ res = append(res, T)
+ }
+ })
+ return res
+}
+
+// declaredFunc returns the concrete function/method denoted by obj.
+// Panic ensues if there is none.
+//
+func (prog *Program) declaredFunc(obj *types.Func) *Function {
+ if v := prog.packageLevelValue(obj); v != nil {
+ return v.(*Function)
+ }
+ panic("no concrete method: " + obj.String())
+}
+
+// needMethodsOf ensures that runtime type information (including the
+// complete method set) is available for the specified type T and all
+// its subcomponents.
+//
+// needMethodsOf must be called for at least every type that is an
+// operand of some MakeInterface instruction, and for the type of
+// every exported package member.
+//
+// Precondition: T is not a method signature (*Signature with Recv()!=nil).
+//
+// Thread-safe. (Called via emitConv from multiple builder goroutines.)
+//
+// TODO(adonovan): make this faster. It accounts for 20% of SSA build time.
+//
+// EXCLUSIVE_LOCKS_ACQUIRED(prog.methodsMu)
+//
+func (prog *Program) needMethodsOf(T types.Type) {
+ prog.methodsMu.Lock()
+ prog.needMethods(T, false)
+ prog.methodsMu.Unlock()
+}
+
+// Precondition: T is not a method signature (*Signature with Recv()!=nil).
+// Recursive case: skip => don't create methods for T.
+//
+// EXCLUSIVE_LOCKS_REQUIRED(prog.methodsMu)
+//
+func (prog *Program) needMethods(T types.Type, skip bool) {
+ // Each package maintains its own set of types it has visited.
+ if prevSkip, ok := prog.runtimeTypes.At(T).(bool); ok {
+ // needMethods(T) was previously called
+ if !prevSkip || skip {
+ return // already seen, with same or false 'skip' value
+ }
+ }
+ prog.runtimeTypes.Set(T, skip)
+
+ tmset := prog.MethodSets.MethodSet(T)
+
+ if !skip && !isInterface(T) && tmset.Len() > 0 {
+ // Create methods of T.
+ mset := prog.createMethodSet(T)
+ if !mset.complete {
+ mset.complete = true
+ n := tmset.Len()
+ for i := 0; i < n; i++ {
+ prog.addMethod(mset, tmset.At(i))
+ }
+ }
+ }
+
+ // Recursion over signatures of each method.
+ for i := 0; i < tmset.Len(); i++ {
+ sig := tmset.At(i).Type().(*types.Signature)
+ prog.needMethods(sig.Params(), false)
+ prog.needMethods(sig.Results(), false)
+ }
+
+ switch t := T.(type) {
+ case *types.Basic:
+ // nop
+
+ case *types.Interface:
+ // nop---handled by recursion over method set.
+
+ case *types.Pointer:
+ prog.needMethods(t.Elem(), false)
+
+ case *types.Slice:
+ prog.needMethods(t.Elem(), false)
+
+ case *types.Chan:
+ prog.needMethods(t.Elem(), false)
+
+ case *types.Map:
+ prog.needMethods(t.Key(), false)
+ prog.needMethods(t.Elem(), false)
+
+ case *types.Signature:
+ if t.Recv() != nil {
+ panic(fmt.Sprintf("Signature %s has Recv %s", t, t.Recv()))
+ }
+ prog.needMethods(t.Params(), false)
+ prog.needMethods(t.Results(), false)
+
+ case *types.Named:
+ // A pointer-to-named type can be derived from a named
+ // type via reflection. It may have methods too.
+ prog.needMethods(types.NewPointer(T), false)
+
+ // Consider 'type T struct{S}' where S has methods.
+ // Reflection provides no way to get from T to struct{S},
+ // only to S, so the method set of struct{S} is unwanted,
+ // so set 'skip' flag during recursion.
+ prog.needMethods(t.Underlying(), true)
+
+ case *types.Array:
+ prog.needMethods(t.Elem(), false)
+
+ case *types.Struct:
+ for i, n := 0, t.NumFields(); i < n; i++ {
+ prog.needMethods(t.Field(i).Type(), false)
+ }
+
+ case *types.Tuple:
+ for i, n := 0, t.Len(); i < n; i++ {
+ prog.needMethods(t.At(i).Type(), false)
+ }
+
+ default:
+ panic(T)
+ }
+}
diff --git a/vendor/honnef.co/go/tools/ssa/mode.go b/vendor/honnef.co/go/tools/ssa/mode.go
new file mode 100644
index 0000000000000..d2a269893a712
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/mode.go
@@ -0,0 +1,100 @@
+// Copyright 2015 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines the BuilderMode type and its command-line flag.
+
+import (
+ "bytes"
+ "fmt"
+)
+
+// BuilderMode is a bitmask of options for diagnostics and checking.
+//
+// *BuilderMode satisfies the flag.Value interface. Example:
+//
+// var mode = ssa.BuilderMode(0)
+// func init() { flag.Var(&mode, "build", ssa.BuilderModeDoc) }
+//
+type BuilderMode uint
+
+const (
+ PrintPackages BuilderMode = 1 << iota // Print package inventory to stdout
+ PrintFunctions // Print function SSA code to stdout
+ LogSource // Log source locations as SSA builder progresses
+ SanityCheckFunctions // Perform sanity checking of function bodies
+ NaiveForm // Build naïve SSA form: don't replace local loads/stores with registers
+ BuildSerially // Build packages serially, not in parallel.
+ GlobalDebug // Enable debug info for all packages
+ BareInits // Build init functions without guards or calls to dependent inits
+)
+
+const BuilderModeDoc = `Options controlling the SSA builder.
+The value is a sequence of zero or more of these letters:
+C perform sanity [C]hecking of the SSA form.
+D include [D]ebug info for every function.
+P print [P]ackage inventory.
+F print [F]unction SSA code.
+S log [S]ource locations as SSA builder progresses.
+L build distinct packages seria[L]ly instead of in parallel.
+N build [N]aive SSA form: don't replace local loads/stores with registers.
+I build bare [I]nit functions: no init guards or calls to dependent inits.
+`
+
+func (m BuilderMode) String() string {
+ var buf bytes.Buffer
+ if m&GlobalDebug != 0 {
+ buf.WriteByte('D')
+ }
+ if m&PrintPackages != 0 {
+ buf.WriteByte('P')
+ }
+ if m&PrintFunctions != 0 {
+ buf.WriteByte('F')
+ }
+ if m&LogSource != 0 {
+ buf.WriteByte('S')
+ }
+ if m&SanityCheckFunctions != 0 {
+ buf.WriteByte('C')
+ }
+ if m&NaiveForm != 0 {
+ buf.WriteByte('N')
+ }
+ if m&BuildSerially != 0 {
+ buf.WriteByte('L')
+	}
+	if m&BareInits != 0 {
+		buf.WriteByte('I')
+	}
+	return buf.String()
+}
+
+// Set parses the flag characters in s and updates *m.
+func (m *BuilderMode) Set(s string) error {
+ var mode BuilderMode
+ for _, c := range s {
+ switch c {
+ case 'D':
+ mode |= GlobalDebug
+ case 'P':
+ mode |= PrintPackages
+ case 'F':
+ mode |= PrintFunctions
+ case 'S':
+ mode |= LogSource | BuildSerially
+ case 'C':
+ mode |= SanityCheckFunctions
+ case 'N':
+ mode |= NaiveForm
+ case 'L':
+ mode |= BuildSerially
+		case 'L':
+			mode |= BuildSerially
+		case 'I':
+			mode |= BareInits
+ }
+ }
+ *m = mode
+ return nil
+}
+
+// Get returns m.
+func (m BuilderMode) Get() interface{} { return m }
diff --git a/vendor/honnef.co/go/tools/ssa/print.go b/vendor/honnef.co/go/tools/ssa/print.go
new file mode 100644
index 0000000000000..6fd277277c053
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/print.go
@@ -0,0 +1,435 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file implements the String() methods for all Value and
+// Instruction types.
+
+import (
+ "bytes"
+ "fmt"
+ "go/types"
+ "io"
+ "reflect"
+ "sort"
+
+ "golang.org/x/tools/go/types/typeutil"
+)
+
+// relName returns the name of v relative to i.
+// In most cases, this is identical to v.Name(), but references to
+// Functions (including methods) and Globals use RelString and
+// all types are displayed with relType, so that only cross-package
+// references are package-qualified.
+//
+func relName(v Value, i Instruction) string {
+ var from *types.Package
+ if i != nil {
+ from = i.Parent().pkg()
+ }
+ switch v := v.(type) {
+ case Member: // *Function or *Global
+ return v.RelString(from)
+ case *Const:
+ return v.RelString(from)
+ }
+ return v.Name()
+}
+
+func relType(t types.Type, from *types.Package) string {
+ return types.TypeString(t, types.RelativeTo(from))
+}
+
+func relString(m Member, from *types.Package) string {
+ // NB: not all globals have an Object (e.g. init$guard),
+ // so use Package().Object not Object.Package().
+ if pkg := m.Package().Pkg; pkg != nil && pkg != from {
+ return fmt.Sprintf("%s.%s", pkg.Path(), m.Name())
+ }
+ return m.Name()
+}
+
+// Value.String()
+//
+// This method is provided only for debugging.
+// It never appears in disassembly, which uses Value.Name().
+
+func (v *Parameter) String() string {
+ from := v.Parent().pkg()
+ return fmt.Sprintf("parameter %s : %s", v.Name(), relType(v.Type(), from))
+}
+
+func (v *FreeVar) String() string {
+ from := v.Parent().pkg()
+ return fmt.Sprintf("freevar %s : %s", v.Name(), relType(v.Type(), from))
+}
+
+func (v *Builtin) String() string {
+ return fmt.Sprintf("builtin %s", v.Name())
+}
+
+// Instruction.String()
+
+func (v *Alloc) String() string {
+ op := "local"
+ if v.Heap {
+ op = "new"
+ }
+ from := v.Parent().pkg()
+ return fmt.Sprintf("%s %s (%s)", op, relType(deref(v.Type()), from), v.Comment)
+}
+
+func (v *Phi) String() string {
+ var b bytes.Buffer
+ b.WriteString("phi [")
+ for i, edge := range v.Edges {
+ if i > 0 {
+ b.WriteString(", ")
+ }
+ // Be robust against malformed CFG.
+ if v.block == nil {
+ b.WriteString("??")
+ continue
+ }
+ block := -1
+ if i < len(v.block.Preds) {
+ block = v.block.Preds[i].Index
+ }
+ fmt.Fprintf(&b, "%d: ", block)
+ edgeVal := "<nil>" // be robust
+ if edge != nil {
+ edgeVal = relName(edge, v)
+ }
+ b.WriteString(edgeVal)
+ }
+ b.WriteString("]")
+ if v.Comment != "" {
+ b.WriteString(" #")
+ b.WriteString(v.Comment)
+ }
+ return b.String()
+}
+
+func printCall(v *CallCommon, prefix string, instr Instruction) string {
+ var b bytes.Buffer
+ b.WriteString(prefix)
+ if !v.IsInvoke() {
+ b.WriteString(relName(v.Value, instr))
+ } else {
+ fmt.Fprintf(&b, "invoke %s.%s", relName(v.Value, instr), v.Method.Name())
+ }
+ b.WriteString("(")
+ for i, arg := range v.Args {
+ if i > 0 {
+ b.WriteString(", ")
+ }
+ b.WriteString(relName(arg, instr))
+ }
+ if v.Signature().Variadic() {
+ b.WriteString("...")
+ }
+ b.WriteString(")")
+ return b.String()
+}
+
+func (c *CallCommon) String() string {
+ return printCall(c, "", nil)
+}
+
+func (v *Call) String() string {
+ return printCall(&v.Call, "", v)
+}
+
+func (v *BinOp) String() string {
+ return fmt.Sprintf("%s %s %s", relName(v.X, v), v.Op.String(), relName(v.Y, v))
+}
+
+func (v *UnOp) String() string {
+ return fmt.Sprintf("%s%s%s", v.Op, relName(v.X, v), commaOk(v.CommaOk))
+}
+
+func printConv(prefix string, v, x Value) string {
+ from := v.Parent().pkg()
+ return fmt.Sprintf("%s %s <- %s (%s)",
+ prefix,
+ relType(v.Type(), from),
+ relType(x.Type(), from),
+ relName(x, v.(Instruction)))
+}
+
+func (v *ChangeType) String() string { return printConv("changetype", v, v.X) }
+func (v *Convert) String() string { return printConv("convert", v, v.X) }
+func (v *ChangeInterface) String() string { return printConv("change interface", v, v.X) }
+func (v *MakeInterface) String() string { return printConv("make", v, v.X) }
+
+func (v *MakeClosure) String() string {
+ var b bytes.Buffer
+ fmt.Fprintf(&b, "make closure %s", relName(v.Fn, v))
+ if v.Bindings != nil {
+ b.WriteString(" [")
+ for i, c := range v.Bindings {
+ if i > 0 {
+ b.WriteString(", ")
+ }
+ b.WriteString(relName(c, v))
+ }
+ b.WriteString("]")
+ }
+ return b.String()
+}
+
+func (v *MakeSlice) String() string {
+ from := v.Parent().pkg()
+ return fmt.Sprintf("make %s %s %s",
+ relType(v.Type(), from),
+ relName(v.Len, v),
+ relName(v.Cap, v))
+}
+
+func (v *Slice) String() string {
+ var b bytes.Buffer
+ b.WriteString("slice ")
+ b.WriteString(relName(v.X, v))
+ b.WriteString("[")
+ if v.Low != nil {
+ b.WriteString(relName(v.Low, v))
+ }
+ b.WriteString(":")
+ if v.High != nil {
+ b.WriteString(relName(v.High, v))
+ }
+ if v.Max != nil {
+ b.WriteString(":")
+ b.WriteString(relName(v.Max, v))
+ }
+ b.WriteString("]")
+ return b.String()
+}
+
+func (v *MakeMap) String() string {
+ res := ""
+ if v.Reserve != nil {
+ res = relName(v.Reserve, v)
+ }
+ from := v.Parent().pkg()
+ return fmt.Sprintf("make %s %s", relType(v.Type(), from), res)
+}
+
+func (v *MakeChan) String() string {
+ from := v.Parent().pkg()
+ return fmt.Sprintf("make %s %s", relType(v.Type(), from), relName(v.Size, v))
+}
+
+func (v *FieldAddr) String() string {
+ st := deref(v.X.Type()).Underlying().(*types.Struct)
+ // Be robust against a bad index.
+ name := "?"
+ if 0 <= v.Field && v.Field < st.NumFields() {
+ name = st.Field(v.Field).Name()
+ }
+ return fmt.Sprintf("&%s.%s [#%d]", relName(v.X, v), name, v.Field)
+}
+
+func (v *Field) String() string {
+ st := v.X.Type().Underlying().(*types.Struct)
+ // Be robust against a bad index.
+ name := "?"
+ if 0 <= v.Field && v.Field < st.NumFields() {
+ name = st.Field(v.Field).Name()
+ }
+ return fmt.Sprintf("%s.%s [#%d]", relName(v.X, v), name, v.Field)
+}
+
+func (v *IndexAddr) String() string {
+ return fmt.Sprintf("&%s[%s]", relName(v.X, v), relName(v.Index, v))
+}
+
+func (v *Index) String() string {
+ return fmt.Sprintf("%s[%s]", relName(v.X, v), relName(v.Index, v))
+}
+
+func (v *Lookup) String() string {
+ return fmt.Sprintf("%s[%s]%s", relName(v.X, v), relName(v.Index, v), commaOk(v.CommaOk))
+}
+
+func (v *Range) String() string {
+ return "range " + relName(v.X, v)
+}
+
+func (v *Next) String() string {
+ return "next " + relName(v.Iter, v)
+}
+
+func (v *TypeAssert) String() string {
+ from := v.Parent().pkg()
+ return fmt.Sprintf("typeassert%s %s.(%s)", commaOk(v.CommaOk), relName(v.X, v), relType(v.AssertedType, from))
+}
+
+func (v *Extract) String() string {
+ return fmt.Sprintf("extract %s #%d", relName(v.Tuple, v), v.Index)
+}
+
+func (s *Jump) String() string {
+ // Be robust against malformed CFG.
+ block := -1
+ if s.block != nil && len(s.block.Succs) == 1 {
+ block = s.block.Succs[0].Index
+ }
+ return fmt.Sprintf("jump %d", block)
+}
+
+func (s *If) String() string {
+ // Be robust against malformed CFG.
+ tblock, fblock := -1, -1
+ if s.block != nil && len(s.block.Succs) == 2 {
+ tblock = s.block.Succs[0].Index
+ fblock = s.block.Succs[1].Index
+ }
+ return fmt.Sprintf("if %s goto %d else %d", relName(s.Cond, s), tblock, fblock)
+}
+
+func (s *Go) String() string {
+ return printCall(&s.Call, "go ", s)
+}
+
+func (s *Panic) String() string {
+ return "panic " + relName(s.X, s)
+}
+
+func (s *Return) String() string {
+ var b bytes.Buffer
+ b.WriteString("return")
+ for i, r := range s.Results {
+ if i == 0 {
+ b.WriteString(" ")
+ } else {
+ b.WriteString(", ")
+ }
+ b.WriteString(relName(r, s))
+ }
+ return b.String()
+}
+
+func (*RunDefers) String() string {
+ return "rundefers"
+}
+
+func (s *Send) String() string {
+ return fmt.Sprintf("send %s <- %s", relName(s.Chan, s), relName(s.X, s))
+}
+
+func (s *Defer) String() string {
+ return printCall(&s.Call, "defer ", s)
+}
+
+func (s *Select) String() string {
+ var b bytes.Buffer
+ for i, st := range s.States {
+ if i > 0 {
+ b.WriteString(", ")
+ }
+ if st.Dir == types.RecvOnly {
+ b.WriteString("<-")
+ b.WriteString(relName(st.Chan, s))
+ } else {
+ b.WriteString(relName(st.Chan, s))
+ b.WriteString("<-")
+ b.WriteString(relName(st.Send, s))
+ }
+ }
+ non := ""
+ if !s.Blocking {
+ non = "non"
+ }
+ return fmt.Sprintf("select %sblocking [%s]", non, b.String())
+}
+
+func (s *Store) String() string {
+ return fmt.Sprintf("*%s = %s", relName(s.Addr, s), relName(s.Val, s))
+}
+
+func (s *BlankStore) String() string {
+ return fmt.Sprintf("_ = %s", relName(s.Val, s))
+}
+
+func (s *MapUpdate) String() string {
+ return fmt.Sprintf("%s[%s] = %s", relName(s.Map, s), relName(s.Key, s), relName(s.Value, s))
+}
+
+func (s *DebugRef) String() string {
+ p := s.Parent().Prog.Fset.Position(s.Pos())
+ var descr interface{}
+ if s.object != nil {
+ descr = s.object // e.g. "var x int"
+ } else {
+ descr = reflect.TypeOf(s.Expr) // e.g. "*ast.CallExpr"
+ }
+ var addr string
+ if s.IsAddr {
+ addr = "address of "
+ }
+ return fmt.Sprintf("; %s%s @ %d:%d is %s", addr, descr, p.Line, p.Column, s.X.Name())
+}
+
+func (p *Package) String() string {
+ return "package " + p.Pkg.Path()
+}
+
+var _ io.WriterTo = (*Package)(nil) // *Package implements io.WriterTo
+
+func (p *Package) WriteTo(w io.Writer) (int64, error) {
+ var buf bytes.Buffer
+ WritePackage(&buf, p)
+ n, err := w.Write(buf.Bytes())
+ return int64(n), err
+}
+
+// WritePackage writes to buf a human-readable summary of p.
+func WritePackage(buf *bytes.Buffer, p *Package) {
+ fmt.Fprintf(buf, "%s:\n", p)
+
+ var names []string
+ maxname := 0
+ for name := range p.Members {
+ if l := len(name); l > maxname {
+ maxname = l
+ }
+ names = append(names, name)
+ }
+
+ from := p.Pkg
+ sort.Strings(names)
+ for _, name := range names {
+ switch mem := p.Members[name].(type) {
+ case *NamedConst:
+ fmt.Fprintf(buf, " const %-*s %s = %s\n",
+ maxname, name, mem.Name(), mem.Value.RelString(from))
+
+ case *Function:
+ fmt.Fprintf(buf, " func %-*s %s\n",
+ maxname, name, relType(mem.Type(), from))
+
+ case *Type:
+ fmt.Fprintf(buf, " type %-*s %s\n",
+ maxname, name, relType(mem.Type().Underlying(), from))
+ for _, meth := range typeutil.IntuitiveMethodSet(mem.Type(), &p.Prog.MethodSets) {
+ fmt.Fprintf(buf, " %s\n", types.SelectionString(meth, types.RelativeTo(from)))
+ }
+
+ case *Global:
+ fmt.Fprintf(buf, " var %-*s %s\n",
+ maxname, name, relType(mem.Type().(*types.Pointer).Elem(), from))
+ }
+ }
+
+ fmt.Fprintf(buf, "\n")
+}
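+
+// Usage sketch (illustrative, not part of the original sources): to
+// dump a package summary to stdout, a client can write
+//
+//	var buf bytes.Buffer
+//	WritePackage(&buf, pkg)
+//	os.Stdout.Write(buf.Bytes())
+//
+// or, equivalently, pkg.WriteTo(os.Stdout), since *Package implements
+// io.WriterTo.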
+
+func commaOk(x bool) string {
+ if x {
+ return ",ok"
+ }
+ return ""
+}
diff --git a/vendor/honnef.co/go/tools/ssa/sanity.go b/vendor/honnef.co/go/tools/ssa/sanity.go
new file mode 100644
index 0000000000000..1d29b66b02c3a
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/sanity.go
@@ -0,0 +1,535 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// An optional pass for sanity-checking invariants of the SSA representation.
+// Currently it checks CFG invariants but little at the instruction level.
+
+import (
+ "fmt"
+ "go/types"
+ "io"
+ "os"
+ "strings"
+)
+
+type sanity struct {
+ reporter io.Writer
+ fn *Function
+ block *BasicBlock
+ instrs map[Instruction]struct{}
+ insane bool
+}
+
+// sanityCheck performs integrity checking of the SSA representation
+// of the function fn and returns true if it was valid. Diagnostics
+// are written to reporter if non-nil, os.Stderr otherwise. Some
+// diagnostics are only warnings and do not imply a negative result.
+//
+// Sanity-checking is intended to facilitate the debugging of code
+// transformation passes.
+//
+func sanityCheck(fn *Function, reporter io.Writer) bool {
+ if reporter == nil {
+ reporter = os.Stderr
+ }
+ return (&sanity{reporter: reporter}).checkFunction(fn)
+}
+
+// mustSanityCheck is like sanityCheck but panics instead of returning
+// a negative result.
+//
+func mustSanityCheck(fn *Function, reporter io.Writer) {
+ if !sanityCheck(fn, reporter) {
+ fn.WriteTo(os.Stderr)
+ panic("SanityCheck failed")
+ }
+}
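+
+// Illustrative sketch (not part of the original sources): within this
+// package, a caller that wants to capture diagnostics rather than print
+// them to os.Stderr can pass any io.Writer, e.g. a bytes.Buffer, where
+// fn is some *Function already in scope:
+//
+//	var buf bytes.Buffer
+//	if !sanityCheck(fn, &buf) {
+//		fmt.Printf("sanity check failed for %s:\n%s", fn, buf.String())
+//	}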
+
+func (s *sanity) diagnostic(prefix, format string, args ...interface{}) {
+ fmt.Fprintf(s.reporter, "%s: function %s", prefix, s.fn)
+ if s.block != nil {
+ fmt.Fprintf(s.reporter, ", block %s", s.block)
+ }
+ io.WriteString(s.reporter, ": ")
+ fmt.Fprintf(s.reporter, format, args...)
+ io.WriteString(s.reporter, "\n")
+}
+
+func (s *sanity) errorf(format string, args ...interface{}) {
+ s.insane = true
+ s.diagnostic("Error", format, args...)
+}
+
+func (s *sanity) warnf(format string, args ...interface{}) {
+ s.diagnostic("Warning", format, args...)
+}
+
+// findDuplicate returns an arbitrary basic block that appeared more
+// than once in blocks, or nil if all were unique.
+func findDuplicate(blocks []*BasicBlock) *BasicBlock {
+ if len(blocks) < 2 {
+ return nil
+ }
+ if blocks[0] == blocks[1] {
+ return blocks[0]
+ }
+ // Slow path:
+ m := make(map[*BasicBlock]bool)
+ for _, b := range blocks {
+ if m[b] {
+ return b
+ }
+ m[b] = true
+ }
+ return nil
+}
+
+func (s *sanity) checkInstr(idx int, instr Instruction) {
+ switch instr := instr.(type) {
+ case *If, *Jump, *Return, *Panic:
+ s.errorf("control flow instruction not at end of block")
+ case *Phi:
+ if idx == 0 {
+ // It suffices to apply this check to just the first phi node.
+ if dup := findDuplicate(s.block.Preds); dup != nil {
+ s.errorf("phi node in block with duplicate predecessor %s", dup)
+ }
+ } else {
+ prev := s.block.Instrs[idx-1]
+ if _, ok := prev.(*Phi); !ok {
+ s.errorf("Phi instruction follows a non-Phi: %T", prev)
+ }
+ }
+ if ne, np := len(instr.Edges), len(s.block.Preds); ne != np {
+ s.errorf("phi node has %d edges but %d predecessors", ne, np)
+
+ } else {
+ for i, e := range instr.Edges {
+ if e == nil {
+ s.errorf("phi node '%s' has no value for edge #%d from %s", instr.Comment, i, s.block.Preds[i])
+ }
+ }
+ }
+
+ case *Alloc:
+ if !instr.Heap {
+ found := false
+ for _, l := range s.fn.Locals {
+ if l == instr {
+ found = true
+ break
+ }
+ }
+ if !found {
+ s.errorf("local alloc %s = %s does not appear in Function.Locals", instr.Name(), instr)
+ }
+ }
+
+ case *BinOp:
+ case *Call:
+ case *ChangeInterface:
+ case *ChangeType:
+ case *Convert:
+ if _, ok := instr.X.Type().Underlying().(*types.Basic); !ok {
+ if _, ok := instr.Type().Underlying().(*types.Basic); !ok {
+ s.errorf("convert %s -> %s: at least one type must be basic", instr.X.Type(), instr.Type())
+ }
+ }
+
+ case *Defer:
+ case *Extract:
+ case *Field:
+ case *FieldAddr:
+ case *Go:
+ case *Index:
+ case *IndexAddr:
+ case *Lookup:
+ case *MakeChan:
+ case *MakeClosure:
+ numFree := len(instr.Fn.(*Function).FreeVars)
+ numBind := len(instr.Bindings)
+ if numFree != numBind {
+ s.errorf("MakeClosure has %d Bindings for function %s with %d free vars",
+ numBind, instr.Fn, numFree)
+
+ }
+ if recv := instr.Type().(*types.Signature).Recv(); recv != nil {
+ s.errorf("MakeClosure's type includes receiver %s", recv.Type())
+ }
+
+ case *MakeInterface:
+ case *MakeMap:
+ case *MakeSlice:
+ case *MapUpdate:
+ case *Next:
+ case *Range:
+ case *RunDefers:
+ case *Select:
+ case *Send:
+ case *Slice:
+ case *Store:
+ case *TypeAssert:
+ case *UnOp:
+ case *DebugRef:
+ case *BlankStore:
+ case *Sigma:
+ // TODO(adonovan): implement checks.
+ default:
+ panic(fmt.Sprintf("Unknown instruction type: %T", instr))
+ }
+
+ if call, ok := instr.(CallInstruction); ok {
+ if call.Common().Signature() == nil {
+ s.errorf("nil signature: %s", call)
+ }
+ }
+
+ // Check that value-defining instructions have valid types
+ // and a valid referrer list.
+ if v, ok := instr.(Value); ok {
+ t := v.Type()
+ if t == nil {
+ s.errorf("no type: %s = %s", v.Name(), v)
+ } else if t == tRangeIter {
+ // not a proper type; ignore.
+ } else if b, ok := t.Underlying().(*types.Basic); ok && b.Info()&types.IsUntyped != 0 {
+ s.errorf("instruction has 'untyped' result: %s = %s : %s", v.Name(), v, t)
+ }
+ s.checkReferrerList(v)
+ }
+
+ // Untyped constants are legal as instruction Operands(),
+ // for example:
+ // _ = "foo"[0]
+ // or:
+ // if wordsize==64 {...}
+
+ // All other non-Instruction Values can be found via their
+ // enclosing Function or Package.
+}
+
+func (s *sanity) checkFinalInstr(instr Instruction) {
+ switch instr := instr.(type) {
+ case *If:
+ if nsuccs := len(s.block.Succs); nsuccs != 2 {
+ s.errorf("If-terminated block has %d successors; expected 2", nsuccs)
+ return
+ }
+ if s.block.Succs[0] == s.block.Succs[1] {
+ s.errorf("If-instruction has same True, False target blocks: %s", s.block.Succs[0])
+ return
+ }
+
+ case *Jump:
+ if nsuccs := len(s.block.Succs); nsuccs != 1 {
+ s.errorf("Jump-terminated block has %d successors; expected 1", nsuccs)
+ return
+ }
+
+ case *Return:
+ if nsuccs := len(s.block.Succs); nsuccs != 0 {
+ s.errorf("Return-terminated block has %d successors; expected none", nsuccs)
+ return
+ }
+ if na, nf := len(instr.Results), s.fn.Signature.Results().Len(); nf != na {
+ s.errorf("%d-ary return in %d-ary function", na, nf)
+ }
+
+ case *Panic:
+ if nsuccs := len(s.block.Succs); nsuccs != 0 {
+ s.errorf("Panic-terminated block has %d successors; expected none", nsuccs)
+ return
+ }
+
+ default:
+ s.errorf("non-control flow instruction at end of block")
+ }
+}
+
+func (s *sanity) checkBlock(b *BasicBlock, index int) {
+ s.block = b
+
+ if b.Index != index {
+ s.errorf("block has incorrect Index %d", b.Index)
+ }
+ if b.parent != s.fn {
+ s.errorf("block has incorrect parent %s", b.parent)
+ }
+
+ // Check all blocks are reachable.
+ // (The entry block is always implicitly reachable,
+ // as is the Recover block, if any.)
+ if (index > 0 && b != b.parent.Recover) && len(b.Preds) == 0 {
+ s.warnf("unreachable block")
+ if b.Instrs == nil {
+ // Since this block is about to be pruned,
+ // tolerating transient problems in it
+ // simplifies other optimizations.
+ return
+ }
+ }
+
+ // Check predecessor and successor relations are dual,
+ // and that all blocks in CFG belong to same function.
+ for _, a := range b.Preds {
+ found := false
+ for _, bb := range a.Succs {
+ if bb == b {
+ found = true
+ break
+ }
+ }
+ if !found {
+ s.errorf("expected successor edge in predecessor %s; found only: %s", a, a.Succs)
+ }
+ if a.parent != s.fn {
+ s.errorf("predecessor %s belongs to different function %s", a, a.parent)
+ }
+ }
+ for _, c := range b.Succs {
+ found := false
+ for _, bb := range c.Preds {
+ if bb == b {
+ found = true
+ break
+ }
+ }
+ if !found {
+ s.errorf("expected predecessor edge in successor %s; found only: %s", c, c.Preds)
+ }
+ if c.parent != s.fn {
+ s.errorf("successor %s belongs to different function %s", c, c.parent)
+ }
+ }
+
+ // Check each instruction is sane.
+ n := len(b.Instrs)
+ if n == 0 {
+ s.errorf("basic block contains no instructions")
+ }
+ var rands [10]*Value // reuse storage
+ for j, instr := range b.Instrs {
+ if instr == nil {
+ s.errorf("nil instruction at index %d", j)
+ continue
+ }
+ if b2 := instr.Block(); b2 == nil {
+ s.errorf("nil Block() for instruction at index %d", j)
+ continue
+ } else if b2 != b {
+ s.errorf("wrong Block() (%s) for instruction at index %d ", b2, j)
+ continue
+ }
+ if j < n-1 {
+ s.checkInstr(j, instr)
+ } else {
+ s.checkFinalInstr(instr)
+ }
+
+ // Check Instruction.Operands.
+ operands:
+ for i, op := range instr.Operands(rands[:0]) {
+ if op == nil {
+ s.errorf("nil operand pointer %d of %s", i, instr)
+ continue
+ }
+ val := *op
+ if val == nil {
+ continue // a nil operand is ok
+ }
+
+ // Check that "untyped" types only appear on constant operands.
+ if _, ok := (*op).(*Const); !ok {
+ if basic, ok := (*op).Type().(*types.Basic); ok {
+ if basic.Info()&types.IsUntyped != 0 {
+ s.errorf("operand #%d of %s is untyped: %s", i, instr, basic)
+ }
+ }
+ }
+
+ // Check that Operands that are also Instructions belong to same function.
+ // TODO(adonovan): also check their block dominates block b.
+ if val, ok := val.(Instruction); ok {
+ if val.Block() == nil {
+ s.errorf("operand %d of %s is an instruction (%s) that belongs to no block", i, instr, val)
+ } else if val.Parent() != s.fn {
+ s.errorf("operand %d of %s is an instruction (%s) from function %s", i, instr, val, val.Parent())
+ }
+ }
+
+ // Check that each function-local operand of
+ // instr refers back to instr. (NB: quadratic)
+ switch val := val.(type) {
+ case *Const, *Global, *Builtin:
+ continue // not local
+ case *Function:
+ if val.parent == nil {
+ continue // only anon functions are local
+ }
+ }
+
+ // TODO(adonovan): check val.Parent() != nil <=> val.Referrers() is defined.
+
+ if refs := val.Referrers(); refs != nil {
+ for _, ref := range *refs {
+ if ref == instr {
+ continue operands
+ }
+ }
+ s.errorf("operand %d of %s (%s) does not refer to us", i, instr, val)
+ } else {
+ s.errorf("operand %d of %s (%s) has no referrers", i, instr, val)
+ }
+ }
+ }
+}
+
+func (s *sanity) checkReferrerList(v Value) {
+ refs := v.Referrers()
+ if refs == nil {
+ s.errorf("%s has missing referrer list", v.Name())
+ return
+ }
+ for i, ref := range *refs {
+ if _, ok := s.instrs[ref]; !ok {
+ s.errorf("%s.Referrers()[%d] = %s is not an instruction belonging to this function", v.Name(), i, ref)
+ }
+ }
+}
+
+func (s *sanity) checkFunction(fn *Function) bool {
+ // TODO(adonovan): check Function invariants:
+ // - check params match signature
+ // - check transient fields are nil
+ // - warn if any fn.Locals do not appear among block instructions.
+ s.fn = fn
+ if fn.Prog == nil {
+ s.errorf("nil Prog")
+ }
+
+ _ = fn.String() // must not crash
+ _ = fn.RelString(fn.pkg()) // must not crash
+
+ // All functions have a package, except delegates (which are
+ // shared across packages, or duplicated as weak symbols in a
+ // separate-compilation model), and error.Error.
+ if fn.Pkg == nil {
+ if strings.HasPrefix(fn.Synthetic, "wrapper ") ||
+ strings.HasPrefix(fn.Synthetic, "bound ") ||
+ strings.HasPrefix(fn.Synthetic, "thunk ") ||
+ strings.HasSuffix(fn.name, "Error") {
+ // ok
+ } else {
+ s.errorf("nil Pkg")
+ }
+ }
+ if src, syn := fn.Synthetic == "", fn.Syntax() != nil; src != syn {
+ s.errorf("got fromSource=%t, hasSyntax=%t; want same values", src, syn)
+ }
+ for i, l := range fn.Locals {
+ if l.Parent() != fn {
+ s.errorf("Local %s at index %d has wrong parent", l.Name(), i)
+ }
+ if l.Heap {
+ s.errorf("Local %s at index %d has Heap flag set", l.Name(), i)
+ }
+ }
+ // Build the set of valid referrers.
+ s.instrs = make(map[Instruction]struct{})
+ for _, b := range fn.Blocks {
+ for _, instr := range b.Instrs {
+ s.instrs[instr] = struct{}{}
+ }
+ }
+ for i, p := range fn.Params {
+ if p.Parent() != fn {
+ s.errorf("Param %s at index %d has wrong parent", p.Name(), i)
+ }
+ // Check common suffix of Signature and Params match type.
+ if sig := fn.Signature; sig != nil {
+ j := i - len(fn.Params) + sig.Params().Len() // index within sig.Params
+ if j < 0 {
+ continue
+ }
+ if !types.Identical(p.Type(), sig.Params().At(j).Type()) {
+ s.errorf("Param %s at index %d has wrong type (%s, versus %s in Signature)", p.Name(), i, p.Type(), sig.Params().At(j).Type())
+
+ }
+ }
+
+ s.checkReferrerList(p)
+ }
+ for i, fv := range fn.FreeVars {
+ if fv.Parent() != fn {
+ s.errorf("FreeVar %s at index %d has wrong parent", fv.Name(), i)
+ }
+ s.checkReferrerList(fv)
+ }
+
+ if fn.Blocks != nil && len(fn.Blocks) == 0 {
+ // Function _had_ blocks (so it's not external) but
+ // they were "optimized" away, even the entry block.
+ s.errorf("Blocks slice is non-nil but empty")
+ }
+ for i, b := range fn.Blocks {
+ if b == nil {
+ s.warnf("nil *BasicBlock at f.Blocks[%d]", i)
+ continue
+ }
+ s.checkBlock(b, i)
+ }
+ if fn.Recover != nil && fn.Blocks[fn.Recover.Index] != fn.Recover {
+ s.errorf("Recover block is not in Blocks slice")
+ }
+
+ s.block = nil
+ for i, anon := range fn.AnonFuncs {
+ if anon.Parent() != fn {
+ s.errorf("AnonFuncs[%d]=%s but %s.Parent()=%s", i, anon, anon, anon.Parent())
+ }
+ }
+ s.fn = nil
+ return !s.insane
+}
+
+// sanityCheckPackage checks invariants of packages upon creation.
+// It does not require that the package is built.
+// Unlike sanityCheck (for functions), it just panics at the first error.
+func sanityCheckPackage(pkg *Package) {
+ if pkg.Pkg == nil {
+ panic(fmt.Sprintf("Package %s has no Object", pkg))
+ }
+ _ = pkg.String() // must not crash
+
+ for name, mem := range pkg.Members {
+ if name != mem.Name() {
+ panic(fmt.Sprintf("%s: %T.Name() = %s, want %s",
+ pkg.Pkg.Path(), mem, mem.Name(), name))
+ }
+ obj := mem.Object()
+ if obj == nil {
+ // This check is sound because fields
+ // {Global,Function}.object have type
+ // types.Object. (If they were declared as
+ // *types.{Var,Func}, we'd have a non-empty
+ // interface containing a nil pointer.)
+
+ continue // not all members have typechecker objects
+ }
+ if obj.Name() != name {
+ if obj.Name() == "init" && strings.HasPrefix(mem.Name(), "init#") {
+ // Ok. The name of a declared init function varies between
+ // its types.Func ("init") and its ssa.Function ("init#%d").
+ } else {
+ panic(fmt.Sprintf("%s: %T.Object().Name() = %s, want %s",
+ pkg.Pkg.Path(), mem, obj.Name(), name))
+ }
+ }
+ if obj.Pos() != mem.Pos() {
+ panic(fmt.Sprintf("%s Pos=%d obj.Pos=%d", mem, mem.Pos(), obj.Pos()))
+ }
+ }
+}
diff --git a/vendor/honnef.co/go/tools/ssa/source.go b/vendor/honnef.co/go/tools/ssa/source.go
new file mode 100644
index 0000000000000..8d9cca1703956
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/source.go
@@ -0,0 +1,293 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines utilities for working with source positions
+// or source-level named entities ("objects").
+
+// TODO(adonovan): test that {Value,Instruction}.Pos() positions match
+// the originating syntax, as specified.
+
+import (
+ "go/ast"
+ "go/token"
+ "go/types"
+)
+
+// EnclosingFunction returns the function that contains the syntax
+// node denoted by path.
+//
+// Syntax associated with package-level variable specifications is
+// enclosed by the package's init() function.
+//
+// Returns nil if not found; reasons might include:
+// - the node is not enclosed by any function.
+// - the node is within an anonymous function (FuncLit) and
+// its SSA function has not been created yet
+// (pkg.Build() has not yet been called).
+//
+func EnclosingFunction(pkg *Package, path []ast.Node) *Function {
+ // Start with package-level function...
+ fn := findEnclosingPackageLevelFunction(pkg, path)
+ if fn == nil {
+ return nil // not in any function
+ }
+
+ // ...then walk down the nested anonymous functions.
+ n := len(path)
+outer:
+ for i := range path {
+ if lit, ok := path[n-1-i].(*ast.FuncLit); ok {
+ for _, anon := range fn.AnonFuncs {
+ if anon.Pos() == lit.Type.Func {
+ fn = anon
+ continue outer
+ }
+ }
+ // SSA function not found:
+ // - package not yet built, or maybe
+ // - builder skipped FuncLit in dead block
+ // (in principle; but currently the Builder
+ // generates even dead FuncLits).
+ return nil
+ }
+ }
+ return fn
+}
+
+// HasEnclosingFunction returns true if the AST node denoted by path
+// is contained within the declaration of some function or
+// package-level variable.
+//
+// Unlike EnclosingFunction, the behaviour of this function does not
+// depend on whether SSA code for pkg has been built, so it can be
+// used to quickly reject inputs that will cause
+// EnclosingFunction to fail, prior to SSA building.
+//
+func HasEnclosingFunction(pkg *Package, path []ast.Node) bool {
+ return findEnclosingPackageLevelFunction(pkg, path) != nil
+}
+
+// findEnclosingPackageLevelFunction returns the Function
+// corresponding to the package-level function enclosing path.
+//
+func findEnclosingPackageLevelFunction(pkg *Package, path []ast.Node) *Function {
+ if n := len(path); n >= 2 { // [... {Gen,Func}Decl File]
+ switch decl := path[n-2].(type) {
+ case *ast.GenDecl:
+ if decl.Tok == token.VAR && n >= 3 {
+ // Package-level 'var' initializer.
+ return pkg.init
+ }
+
+ case *ast.FuncDecl:
+ if decl.Recv == nil && decl.Name.Name == "init" {
+ // Explicit init() function.
+ for _, b := range pkg.init.Blocks {
+ for _, instr := range b.Instrs {
+ if instr, ok := instr.(*Call); ok {
+ if callee, ok := instr.Call.Value.(*Function); ok && callee.Pkg == pkg && callee.Pos() == decl.Name.NamePos {
+ return callee
+ }
+ }
+ }
+ }
+ // Hack: return non-nil when SSA is not yet
+ // built so that HasEnclosingFunction works.
+ return pkg.init
+ }
+ // Declared function/method.
+ return findNamedFunc(pkg, decl.Name.NamePos)
+ }
+ }
+ return nil // not in any function
+}
+
+// findNamedFunc returns the named function whose FuncDecl.Ident is at
+// position pos.
+//
+func findNamedFunc(pkg *Package, pos token.Pos) *Function {
+ // Look at all package members and method sets of named types.
+ // Not very efficient.
+ for _, mem := range pkg.Members {
+ switch mem := mem.(type) {
+ case *Function:
+ if mem.Pos() == pos {
+ return mem
+ }
+ case *Type:
+ mset := pkg.Prog.MethodSets.MethodSet(types.NewPointer(mem.Type()))
+ for i, n := 0, mset.Len(); i < n; i++ {
+ // Don't call Program.Method: avoid creating wrappers.
+ obj := mset.At(i).Obj().(*types.Func)
+ if obj.Pos() == pos {
+ return pkg.values[obj].(*Function)
+ }
+ }
+ }
+ }
+ return nil
+}
+
+// ValueForExpr returns the SSA Value that corresponds to non-constant
+// expression e.
+//
+// It returns nil if no value was found, e.g.
+// - the expression is not lexically contained within f;
+// - f was not built with debug information; or
+// - e is a constant expression. (For efficiency, no debug
+// information is stored for constants. Use
+// go/types.Info.Types[e].Value instead.)
+// - e is a reference to nil or a built-in function.
+// - the value was optimised away.
+//
+// If e is an addressable expression used in an lvalue context,
+// value is the address denoted by e, and isAddr is true.
+//
+// The types of e (or &e, if isAddr) and the result are equal
+// (modulo "untyped" bools resulting from comparisons).
+//
+// (Tip: to find the ssa.Value given a source position, use
+// astutil.PathEnclosingInterval to locate the ast.Node, then
+// EnclosingFunction to locate the Function, then ValueForExpr to find
+// the ssa.Value.)
+//
+func (f *Function) ValueForExpr(e ast.Expr) (value Value, isAddr bool) {
+ if f.debugInfo() { // (opt)
+ e = unparen(e)
+ for _, b := range f.Blocks {
+ for _, instr := range b.Instrs {
+ if ref, ok := instr.(*DebugRef); ok {
+ if ref.Expr == e {
+ return ref.X, ref.IsAddr
+ }
+ }
+ }
+ }
+ }
+ return
+}
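+
+// A sketch of the tip above (illustrative, not part of the original
+// sources; fset, file, pkg, start and end are assumed to be in scope,
+// and pkg must have been built with debug information):
+//
+//	path, _ := astutil.PathEnclosingInterval(file, start, end)
+//	if fn := EnclosingFunction(pkg, path); fn != nil {
+//		if e, ok := path[0].(ast.Expr); ok {
+//			v, isAddr := fn.ValueForExpr(e)
+//			_, _ = v, isAddr
+//		}
+//	}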
+
+// --- Lookup functions for source-level named entities (types.Objects) ---
+
+// Package returns the SSA Package corresponding to the specified
+// type-checker package object.
+// It returns nil if no such SSA package has been created.
+//
+func (prog *Program) Package(obj *types.Package) *Package {
+ return prog.packages[obj]
+}
+
+// packageLevelValue returns the package-level value corresponding to
+// the specified named object, which may be a package-level const
+// (*Const), var (*Global) or func (*Function) of some package in
+// prog. It returns nil if the object is not found.
+//
+func (prog *Program) packageLevelValue(obj types.Object) Value {
+ if pkg, ok := prog.packages[obj.Pkg()]; ok {
+ return pkg.values[obj]
+ }
+ return nil
+}
+
+// FuncValue returns the concrete Function denoted by the source-level
+// named function obj, or nil if obj denotes an interface method.
+//
+// TODO(adonovan): check the invariant that obj.Type() matches the
+// result's Signature, both in the params/results and in the receiver.
+//
+func (prog *Program) FuncValue(obj *types.Func) *Function {
+ fn, _ := prog.packageLevelValue(obj).(*Function)
+ return fn
+}
+
+// ConstValue returns the SSA Value denoted by the source-level named
+// constant obj.
+//
+func (prog *Program) ConstValue(obj *types.Const) *Const {
+ // TODO(adonovan): opt: share (don't reallocate)
+ // Consts for const objects and constant ast.Exprs.
+
+ // Universal constant? {true,false,nil}
+ if obj.Parent() == types.Universe {
+ return NewConst(obj.Val(), obj.Type())
+ }
+ // Package-level named constant?
+ if v := prog.packageLevelValue(obj); v != nil {
+ return v.(*Const)
+ }
+ return NewConst(obj.Val(), obj.Type())
+}
+
+// VarValue returns the SSA Value that corresponds to a specific
+// identifier denoting the source-level named variable obj.
+//
+// VarValue returns nil if a local variable was not found, perhaps
+// because its package was not built, the debug information was not
+// requested during SSA construction, or the value was optimized away.
+//
+// ref is the path to an ast.Ident (e.g. from PathEnclosingInterval),
+// and that ident must resolve to obj.
+//
+// pkg is the package enclosing the reference. (A reference to a var
+// always occurs within a function, so we need to know where to find it.)
+//
+// If the identifier is a field selector and its base expression is
+// non-addressable, then VarValue returns the value of that field.
+// For example:
+// func f() struct {x int}
+// f().x // VarValue(x) returns a *Field instruction of type int
+//
+// All other identifiers denote addressable locations (variables).
+// For them, VarValue may return either the variable's address or its
+// value, even when the expression is evaluated only for its value; the
+// situation is reported by isAddr, the second component of the result.
+//
+// If !isAddr, the returned value is the one associated with the
+// specific identifier. For example,
+// var x int // VarValue(x) returns Const 0 here
+// x = 1 // VarValue(x) returns Const 1 here
+//
+// It is not specified whether the value or the address is returned in
+// any particular case, as it may depend upon optimizations performed
+// during SSA code generation, such as registerization, constant
+// folding, avoidance of materialization of subexpressions, etc.
+//
+func (prog *Program) VarValue(obj *types.Var, pkg *Package, ref []ast.Node) (value Value, isAddr bool) {
+ // All references to a var are local to some function, possibly init.
+ fn := EnclosingFunction(pkg, ref)
+ if fn == nil {
+ return // e.g. def of struct field; SSA not built?
+ }
+
+ id := ref[0].(*ast.Ident)
+
+ // Defining ident of a parameter?
+ if id.Pos() == obj.Pos() {
+ for _, param := range fn.Params {
+ if param.Object() == obj {
+ return param, false
+ }
+ }
+ }
+
+ // Other ident?
+ for _, b := range fn.Blocks {
+ for _, instr := range b.Instrs {
+ if dr, ok := instr.(*DebugRef); ok {
+ if dr.Pos() == id.Pos() {
+ return dr.X, dr.IsAddr
+ }
+ }
+ }
+ }
+
+ // Defining ident of package-level var?
+ if v := prog.packageLevelValue(obj); v != nil {
+ return v.(*Global), true
+ }
+
+ return // e.g. debug info not requested, or var optimized away
+}
diff --git a/vendor/honnef.co/go/tools/ssa/ssa.go b/vendor/honnef.co/go/tools/ssa/ssa.go
new file mode 100644
index 0000000000000..aeddd65e58143
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/ssa.go
@@ -0,0 +1,1745 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This package defines a high-level intermediate representation for
+// Go programs using static single-assignment (SSA) form.
+
+import (
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "sync"
+
+ "golang.org/x/tools/go/types/typeutil"
+)
+
+// A Program is a partial or complete Go program converted to SSA form.
+type Program struct {
+ Fset *token.FileSet // position information for the files of this Program
+ imported map[string]*Package // all importable Packages, keyed by import path
+ packages map[*types.Package]*Package // all loaded Packages, keyed by object
+ mode BuilderMode // set of mode bits for SSA construction
+ MethodSets typeutil.MethodSetCache // cache of type-checker's method-sets
+
+ methodsMu sync.Mutex // guards the following maps:
+ methodSets typeutil.Map // maps type to its concrete methodSet
+ runtimeTypes typeutil.Map // types for which rtypes are needed
+ canon typeutil.Map // type canonicalization map
+ bounds map[*types.Func]*Function // bounds for curried x.Method closures
+ thunks map[selectionKey]*Function // thunks for T.Method expressions
+}
+
+// A Package is a single analyzed Go package containing Members for
+// all package-level functions, variables, constants and types it
+// declares. These may be accessed directly via Members, or via the
+// type-specific accessor methods Func, Type, Var and Const.
+//
+// Members also contains entries for "init" (the synthetic package
+// initializer) and "init#%d", the nth declared init function,
+// and unspecified other things too.
+//
+type Package struct {
+ Prog *Program // the owning program
+ Pkg *types.Package // the corresponding go/types.Package
+ Members map[string]Member // all package members keyed by name (incl. init and init#%d)
+ values map[types.Object]Value // package members (incl. types and methods), keyed by object
+ init *Function // Func("init"); the package's init function
+ debug bool // include full debug info in this package
+
+ // The following fields are set transiently, then cleared
+ // after building.
+ buildOnce sync.Once // ensures package building occurs once
+ ninit int32 // number of init functions
+ info *types.Info // package type information
+ files []*ast.File // package ASTs
+}
+
+// A Member is a member of a Go package, implemented by *NamedConst,
+// *Global, *Function, or *Type; they are created by package-level
+// const, var, func and type declarations respectively.
+//
+type Member interface {
+ Name() string // declared name of the package member
+ String() string // package-qualified name of the package member
+ RelString(*types.Package) string // like String, but relative refs are unqualified
+ Object() types.Object // typechecker's object for this member, if any
+ Pos() token.Pos // position of member's declaration, if known
+ Type() types.Type // type of the package member
+ Token() token.Token // token.{VAR,FUNC,CONST,TYPE}
+ Package() *Package // the containing package
+}
+
+// A Type is a Member of a Package representing a package-level named type.
+type Type struct {
+ object *types.TypeName
+ pkg *Package
+}
+
+// A NamedConst is a Member of a Package representing a package-level
+// named constant.
+//
+// Pos() returns the position of the declaring ast.ValueSpec.Names[*]
+// identifier.
+//
+// NB: a NamedConst is not a Value; it contains a constant Value, which
+// it augments with the name and position of its 'const' declaration.
+//
+type NamedConst struct {
+ object *types.Const
+ Value *Const
+ pkg *Package
+}
+
+// A Value is an SSA value that can be referenced by an instruction.
+type Value interface {
+ // Name returns the name of this value, and determines how
+ // this Value appears when used as an operand of an
+ // Instruction.
+ //
+ // This is the same as the source name for Parameters,
+ // Builtins, Functions, FreeVars, Globals.
+ // For constants, it is a representation of the constant's value
+ // and type. For all other Values this is the name of the
+ // virtual register defined by the instruction.
+ //
+ // The name of an SSA Value is not semantically significant,
+ // and may not even be unique within a function.
+ Name() string
+
+ // If this value is an Instruction, String returns its
+ // disassembled form; otherwise it returns unspecified
+ // human-readable information about the Value, such as its
+ // kind, name and type.
+ String() string
+
+ // Type returns the type of this value. Many instructions
+ // (e.g. IndexAddr) change their behaviour depending on the
+ // types of their operands.
+ Type() types.Type
+
+ // Parent returns the function to which this Value belongs.
+ // It returns nil for named Functions, Builtin, Const and Global.
+ Parent() *Function
+
+ // Referrers returns the list of instructions that have this
+ // value as one of their operands; it may contain duplicates
+ // if an instruction has a repeated operand.
+ //
+ // Referrers actually returns a pointer through which the
+ // caller may perform mutations to the object's state.
+ //
+ // Referrers is currently only defined if Parent()!=nil,
+ // i.e. for the function-local values FreeVar, Parameter,
+ // Functions (iff anonymous) and all value-defining instructions.
+ // It returns nil for named Functions, Builtin, Const and Global.
+ //
+ // Instruction.Operands contains the inverse of this relation.
+ Referrers() *[]Instruction
+
+ // Pos returns the location of the AST token most closely
+ // associated with the operation that gave rise to this value,
+ // or token.NoPos if it was not explicit in the source.
+ //
+ // For each ast.Node type, a particular token is designated as
+ // the closest location for the expression, e.g. the Lparen
+ // for an *ast.CallExpr. This permits a compact but
+ // approximate mapping from Values to source positions for use
+ // in diagnostic messages, for example.
+ //
+ // (Do not use this position to determine which Value
+ // corresponds to an ast.Expr; use Function.ValueForExpr
+ // instead. NB: it requires that the function was built with
+ // debug information.)
+ Pos() token.Pos
+}
+
+// An Instruction is an SSA instruction that computes a new Value or
+// has some effect.
+//
+// An Instruction that defines a value (e.g. BinOp) also implements
+// the Value interface; an Instruction that only has an effect (e.g. Store)
+// does not.
+//
+type Instruction interface {
+ // String returns the disassembled form of this value.
+ //
+ // Examples of Instructions that are Values:
+ // "x + y" (BinOp)
+ // "len([])" (Call)
+ // Note that the name of the Value is not printed.
+ //
+ // Examples of Instructions that are not Values:
+ // "return x" (Return)
+ // "*y = x" (Store)
+ //
+ // (The separation Value.Name() from Value.String() is useful
+ // for some analyses which distinguish the operation from the
+ // value it defines, e.g., 'y = local int' is both an allocation
+ // of memory 'local int' and a definition of a pointer y.)
+ String() string
+
+ // Parent returns the function to which this instruction
+ // belongs.
+ Parent() *Function
+
+ // Block returns the basic block to which this instruction
+ // belongs.
+ Block() *BasicBlock
+
+ // setBlock sets the basic block to which this instruction belongs.
+ setBlock(*BasicBlock)
+
+ // Operands returns the operands of this instruction: the
+ // set of Values it references.
+ //
+ // Specifically, it appends their addresses to rands, a
+ // user-provided slice, and returns the resulting slice,
+ // permitting avoidance of memory allocation.
+ //
+ // The operands are appended in undefined order, but the order
+ // is consistent for a given Instruction; the addresses are
+ // always non-nil but may point to a nil Value. Clients may
+ // store through the pointers, e.g. to effect a value
+ // renaming.
+ //
+ // Value.Referrers is a subset of the inverse of this
+ // relation. (Referrers are not tracked for all types of
+ // Values.)
+ Operands(rands []*Value) []*Value
+
+ // Pos returns the location of the AST token most closely
+ // associated with the operation that gave rise to this
+ // instruction, or token.NoPos if it was not explicit in the
+ // source.
+ //
+ // For each ast.Node type, a particular token is designated as
+ // the closest location for the expression, e.g. the Go token
+ // for an *ast.GoStmt. This permits a compact but approximate
+ // mapping from Instructions to source positions for use in
+ // diagnostic messages, for example.
+ //
+ // (Do not use this position to determine which Instruction
+ // corresponds to an ast.Expr; see the notes for Value.Pos.
+ // This position may be used to determine which non-Value
+ // Instruction corresponds to some ast.Stmts, but not all: If
+ // and Jump instructions have no Pos(), for example.)
+ Pos() token.Pos
+}
+
+// A Node is a node in the SSA value graph. Every concrete type that
+// implements Node is also either a Value, an Instruction, or both.
+//
+// Node contains the methods common to Value and Instruction, plus the
+// Operands and Referrers methods generalized to return nil for
+// non-Instructions and non-Values, respectively.
+//
+// Node is provided to simplify SSA graph algorithms. Clients should
+// use the more specific and informative Value or Instruction
+// interfaces where appropriate.
+//
+type Node interface {
+ // Common methods:
+ String() string
+ Pos() token.Pos
+ Parent() *Function
+
+ // Partial methods:
+ Operands(rands []*Value) []*Value // nil for non-Instructions
+ Referrers() *[]Instruction // nil for non-Values
+}
+
+// Function represents the parameters, results, and code of a function
+// or method.
+//
+// If Blocks is nil, this indicates an external function for which no
+// Go source code is available. In this case, FreeVars and Locals
+// are nil too. Clients performing whole-program analysis must
+// handle external functions specially.
+//
+// Blocks contains the function's control-flow graph (CFG).
+// Blocks[0] is the function entry point; block order is not otherwise
+// semantically significant, though it may affect the readability of
+// the disassembly.
+// To iterate over the blocks in dominance order, use DomPreorder().
+//
+// Recover is an optional second entry point to which control resumes
+// after a recovered panic. The Recover block may contain only a return
+// statement, preceded by a load of the function's named return
+// parameters, if any.
+//
+// A nested function (Parent()!=nil) that refers to one or more
+// lexically enclosing local variables ("free variables") has FreeVars.
+// Such functions cannot be called directly but require a
+// value created by MakeClosure which, via its Bindings, supplies
+// values for these parameters.
+//
+// If the function is a method (Signature.Recv() != nil) then the first
+// element of Params is the receiver parameter.
+//
+// A Go package may declare many functions called "init".
+// For each one, Object().Name() returns "init" but Name() returns
+// "init#1", etc, in declaration order.
+//
+// Pos() returns the declaring ast.FuncLit.Type.Func or the position
+// of the ast.FuncDecl.Name, if the function was explicit in the
+// source. Synthetic wrappers, for which Synthetic != "", may share
+// the same position as the function they wrap.
+// Syntax.Pos() always returns the position of the declaring "func" token.
+//
+// Type() returns the function's Signature.
+//
+type Function struct {
+ name string
+ object types.Object // a declared *types.Func or one of its wrappers
+ method *types.Selection // info about provenance of synthetic methods
+ Signature *types.Signature
+ pos token.Pos
+
+ Synthetic string // provenance of synthetic function; "" for true source functions
+ syntax ast.Node // *ast.Func{Decl,Lit}; replaced with simple ast.Node after build, unless debug mode
+ parent *Function // enclosing function if anon; nil if global
+ Pkg *Package // enclosing package; nil for shared funcs (wrappers and error.Error)
+ Prog *Program // enclosing program
+ Params []*Parameter // function parameters; for methods, includes receiver
+ FreeVars []*FreeVar // free variables whose values must be supplied by closure
+ Locals []*Alloc // local variables of this function
+ Blocks []*BasicBlock // basic blocks of the function; nil => external
+ Recover *BasicBlock // optional; control transfers here after recovered panic
+ AnonFuncs []*Function // anonymous functions directly beneath this one
+ referrers []Instruction // referring instructions (iff Parent() != nil)
+
+ // The following fields are set transiently during building,
+ // then cleared.
+ currentBlock *BasicBlock // where to emit code
+ objects map[types.Object]Value // addresses of local variables
+ namedResults []*Alloc // tuple of named results
+ targets *targets // linked stack of branch targets
+ lblocks map[*ast.Object]*lblock // labelled blocks
+}
+
+// BasicBlock represents an SSA basic block.
+//
+// The final element of Instrs is always an explicit transfer of
+// control (If, Jump, Return, or Panic).
+//
+// A block may contain no Instructions only if it is unreachable,
+// i.e., Preds is nil. Empty blocks are typically pruned.
+//
+// BasicBlocks and their Preds/Succs relation form a (possibly cyclic)
+// graph independent of the SSA Value graph: the control-flow graph or
+// CFG. It is illegal for multiple edges to exist between the same
+// pair of blocks.
+//
+// Each BasicBlock is also a node in the dominator tree of the CFG.
+// The tree may be navigated using Idom()/Dominees() and queried using
+// Dominates().
+//
+// The order of Preds and Succs is significant (to Phi and If
+// instructions, respectively).
+//
+type BasicBlock struct {
+ Index int // index of this block within Parent().Blocks
+ Comment string // optional label; no semantic significance
+ parent *Function // parent function
+ Instrs []Instruction // instructions in order
+ Preds, Succs []*BasicBlock // predecessors and successors
+ succs2 [2]*BasicBlock // initial space for Succs
+ dom domInfo // dominator tree info
+ gaps int // number of nil Instrs (transient)
+ rundefers int // number of rundefers (transient)
+}
+
+// Pure values ----------------------------------------
+
+// A FreeVar represents a free variable of the function to which it
+// belongs.
+//
+// FreeVars are used to implement anonymous functions, whose free
+// variables are lexically captured in a closure formed by
+// MakeClosure. The value of such a free var is an Alloc or another
+// FreeVar and is considered a potentially escaping heap address, with
+// pointer type.
+//
+// FreeVars are also used to implement bound method closures. Such a
+// free var represents the receiver value and may be of any type that
+// has concrete methods.
+//
+// Pos() returns the position of the value that was captured, which
+// belongs to an enclosing function.
+//
+type FreeVar struct {
+ name string
+ typ types.Type
+ pos token.Pos
+ parent *Function
+ referrers []Instruction
+
+ // Transiently needed during building.
+ outer Value // the Value captured from the enclosing context.
+}
+
+// A Parameter represents an input parameter of a function.
+//
+type Parameter struct {
+ name string
+ object types.Object // a *types.Var; nil for non-source locals
+ typ types.Type
+ pos token.Pos
+ parent *Function
+ referrers []Instruction
+}
+
+// A Const represents the value of a constant expression.
+//
+// The underlying type of a constant may be any boolean, numeric, or
+// string type. In addition, a Const may represent the nil value of
+// any reference type---interface, map, channel, pointer, slice, or
+// function---but not "untyped nil".
+//
+// All source-level constant expressions are represented by a Const
+// of the same type and value.
+//
+// Value holds the exact value of the constant, independent of its
+// Type(), using the same representation as package go/constant uses for
+// constants, or nil for a typed nil value.
+//
+// Pos() returns token.NoPos.
+//
+// Example printed form:
+// 42:int
+// "hello":untyped string
+// 3+4i:MyComplex
+//
+type Const struct {
+ typ types.Type
+ Value constant.Value
+}
+
+// A Global is a named Value holding the address of a package-level
+// variable.
+//
+// Pos() returns the position of the ast.ValueSpec.Names[*]
+// identifier.
+//
+type Global struct {
+ name string
+ object types.Object // a *types.Var; may be nil for synthetics e.g. init$guard
+ typ types.Type
+ pos token.Pos
+
+ Pkg *Package
+}
+
+// A Builtin represents a specific use of a built-in function, e.g. len.
+//
+// Builtins are immutable values. Builtins do not have addresses.
+// Builtins can only appear in CallCommon.Func.
+//
+// Name() indicates the function: one of the built-in functions from the
+// Go spec (excluding "make" and "new") or one of these ssa-defined
+// intrinsics:
+//
+// // wrapnilchk returns ptr if non-nil, panics otherwise.
+// // (For use in indirection wrappers.)
+// func ssa:wrapnilchk(ptr *T, recvType, methodName string) *T
+//
+// Object() returns a *types.Builtin for built-ins defined by the spec,
+// nil for others.
+//
+// Type() returns a *types.Signature representing the effective
+// signature of the built-in for this call.
+//
+type Builtin struct {
+ name string
+ sig *types.Signature
+}
+
+// Value-defining instructions ----------------------------------------
+
+// The Alloc instruction reserves space for a variable of the given type,
+// zero-initializes it, and yields its address.
+//
+// Alloc values are always addresses, and have pointer types, so the
+// type of the allocated variable is actually
+// Type().Underlying().(*types.Pointer).Elem().
+//
+// If Heap is false, Alloc allocates space in the function's
+// activation record (frame); we refer to an Alloc(Heap=false) as a
+// "local" alloc. Each local Alloc returns the same address each time
+// it is executed within the same activation; the space is
+// re-initialized to zero.
+//
+// If Heap is true, Alloc allocates space in the heap; we
+// refer to an Alloc(Heap=true) as a "new" alloc. Each new Alloc
+// returns a different address each time it is executed.
+//
+// When Alloc is applied to a channel, map or slice type, it returns
+// the address of an uninitialized (nil) reference of that kind; store
+// the result of MakeSlice, MakeMap or MakeChan in that location to
+// instantiate these types.
+//
+// Pos() returns the ast.CompositeLit.Lbrace for a composite literal,
+// or the ast.CallExpr.Rparen for a call to new() or for a call that
+// allocates a varargs slice.
+//
+// Example printed form:
+// t0 = local int
+// t1 = new int
+//
+type Alloc struct {
+ register
+ Comment string
+ Heap bool
+ index int // dense numbering; for lifting
+}
+
+var _ Instruction = (*Sigma)(nil)
+var _ Value = (*Sigma)(nil)
+
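+// Sigma is an extension over the x/tools go/ssa form (a best-effort
+// note, not part of the original documentation): a σ-node renames the
+// value X on one outgoing edge of a conditional branch, with Branch
+// reporting which arm of the If the edge belongs to, so that each
+// successor block gets its own name for the refined value.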
+type Sigma struct {
+ register
+ X Value
+ Branch bool
+}
+
+func (p *Sigma) Value() Value {
+ v := p.X
+ for {
+ sigma, ok := v.(*Sigma)
+ if !ok {
+ break
+ }
+		v = sigma.X // walk past chained σ-nodes to the underlying value
+ }
+ return v
+}
+
+func (p *Sigma) String() string {
+ return fmt.Sprintf("σ [%s.%t]", relName(p.X, p), p.Branch)
+}
+
+// The Phi instruction represents an SSA φ-node, which combines values
+// that differ across incoming control-flow edges and yields a new
+// value. Within a block, all φ-nodes must appear before all non-φ
+// nodes.
+//
+// Pos() returns the position of the && or || for short-circuit
+// control-flow joins, or that of the *Alloc for φ-nodes inserted
+// during SSA renaming.
+//
+// Example printed form:
+// t2 = phi [0: t0, 1: t1]
+//
+type Phi struct {
+ register
+ Comment string // a hint as to its purpose
+ Edges []Value // Edges[i] is value for Block().Preds[i]
+}
+
+// The Call instruction represents a function or method call.
+//
+// The Call instruction yields the function result if there is exactly
+// one. Otherwise it returns a tuple, the components of which are
+// accessed via Extract.
+//
+// See CallCommon for generic function call documentation.
+//
+// Pos() returns the ast.CallExpr.Lparen, if explicit in the source.
+//
+// Example printed form:
+// t2 = println(t0, t1)
+// t4 = t3()
+// t7 = invoke t5.Println(...t6)
+//
+type Call struct {
+ register
+ Call CallCommon
+}
+
+// The BinOp instruction yields the result of binary operation X Op Y.
+//
+// Pos() returns the ast.BinaryExpr.OpPos, if explicit in the source.
+//
+// Example printed form:
+// t1 = t0 + 1:int
+//
+type BinOp struct {
+ register
+ // One of:
+ // ADD SUB MUL QUO REM + - * / %
+ // AND OR XOR SHL SHR AND_NOT & | ^ << >> &^
+	// EQL NEQ LSS LEQ GTR GEQ == != < <= > >=
+ Op token.Token
+ X, Y Value
+}
+
+// The UnOp instruction yields the result of Op X.
+// ARROW is channel receive.
+// MUL is pointer indirection (load).
+// XOR is bitwise complement.
+// SUB is negation.
+// NOT is logical negation.
+//
+// If CommaOk and Op=ARROW, the result is a 2-tuple of the value above
+// and a boolean indicating the success of the receive. The
+// components of the tuple are accessed using Extract.
+//
+// Pos() returns the ast.UnaryExpr.OpPos, if explicit in the source.
+// For receive operations (ARROW) implicit in ranging over a channel,
+// Pos() returns the ast.RangeStmt.For.
+// For implicit memory loads (MUL), Pos() returns the position of the
+// most closely associated source-level construct; the details are not
+// specified.
+//
+// Example printed form:
+// t0 = *x
+// t2 = <-t1,ok
+//
+type UnOp struct {
+ register
+ Op token.Token // One of: NOT SUB ARROW MUL XOR ! - <- * ^
+ X Value
+ CommaOk bool
+}
+
+// The ChangeType instruction applies to X a value-preserving type
+// change to Type().
+//
+// Type changes are permitted:
+// - between a named type and its underlying type.
+// - between two named types of the same underlying type.
+// - between (possibly named) pointers to identical base types.
+// - from a bidirectional channel to a read- or write-channel,
+// optionally adding/removing a name.
+//
+// This operation cannot fail dynamically.
+//
+// Pos() returns the ast.CallExpr.Lparen, if the instruction arose
+// from an explicit conversion in the source.
+//
+// Example printed form:
+// t1 = changetype *int <- IntPtr (t0)
+//
+type ChangeType struct {
+ register
+ X Value
+}
+
+// The Convert instruction yields the conversion of value X to type
+// Type(). One or both of those types is basic (but possibly named).
+//
+// A conversion may change the value and representation of its operand.
+// Conversions are permitted:
+// - between real numeric types.
+// - between complex numeric types.
+// - between string and []byte or []rune.
+// - between pointers and unsafe.Pointer.
+// - between unsafe.Pointer and uintptr.
+// - from (Unicode) integer to (UTF-8) string.
+// A conversion may imply a type name change also.
+//
+// This operation cannot fail dynamically.
+//
+// Conversions of untyped string/number/bool constants to a specific
+// representation are eliminated during SSA construction.
+//
+// Pos() returns the ast.CallExpr.Lparen, if the instruction arose
+// from an explicit conversion in the source.
+//
+// Example printed form:
+// t1 = convert []byte <- string (t0)
+//
+type Convert struct {
+ register
+ X Value
+}
+
+// ChangeInterface constructs a value of one interface type from a
+// value of another interface type known to be assignable to it.
+// This operation cannot fail.
+//
+// Pos() returns the ast.CallExpr.Lparen if the instruction arose from
+// an explicit T(e) conversion; the ast.TypeAssertExpr.Lparen if the
+// instruction arose from an explicit e.(T) operation; or token.NoPos
+// otherwise.
+//
+// Example printed form:
+// t1 = change interface interface{} <- I (t0)
+//
+type ChangeInterface struct {
+ register
+ X Value
+}
+
+// MakeInterface constructs an instance of an interface type from a
+// value of a concrete type.
+//
+// Use Program.MethodSets.MethodSet(X.Type()) to find the method-set
+// of X, and Program.MethodValue(m) to find the implementation of a method.
+//
+// To construct the zero value of an interface type T, use:
+// NewConst(constant.MakeNil(), T, pos)
+//
+// Pos() returns the ast.CallExpr.Lparen, if the instruction arose
+// from an explicit conversion in the source.
+//
+// Example printed form:
+// t1 = make interface{} <- int (42:int)
+// t2 = make Stringer <- t0
+//
+type MakeInterface struct {
+ register
+ X Value
+}
+
+// The MakeClosure instruction yields a closure value whose code is
+// Fn and whose free variables' values are supplied by Bindings.
+//
+// Type() returns a (possibly named) *types.Signature.
+//
+// Pos() returns the ast.FuncLit.Type.Func for a function literal
+// closure or the ast.SelectorExpr.Sel for a bound method closure.
+//
+// Example printed form:
+//	t0 = make closure anon@1.2 [x y z]
+// t1 = make closure bound$(main.I).add [i]
+//
+type MakeClosure struct {
+ register
+ Fn Value // always a *Function
+ Bindings []Value // values for each free variable in Fn.FreeVars
+}
+
+// The MakeMap instruction creates a new hash-table-based map object
+// and yields a value of kind map.
+//
+// Type() returns a (possibly named) *types.Map.
+//
+// Pos() returns the ast.CallExpr.Lparen, if created by make(map), or
+// the ast.CompositeLit.Lbrack if created by a literal.
+//
+// Example printed form:
+// t1 = make map[string]int t0
+// t1 = make StringIntMap t0
+//
+type MakeMap struct {
+ register
+ Reserve Value // initial space reservation; nil => default
+}
+
+// The MakeChan instruction creates a new channel object and yields a
+// value of kind chan.
+//
+// Type() returns a (possibly named) *types.Chan.
+//
+// Pos() returns the ast.CallExpr.Lparen for the make(chan) that
+// created it.
+//
+// Example printed form:
+// t0 = make chan int 0
+// t0 = make IntChan 0
+//
+type MakeChan struct {
+ register
+ Size Value // int; size of buffer; zero => synchronous.
+}
+
+// The MakeSlice instruction yields a slice of length Len backed by a
+// newly allocated array of length Cap.
+//
+// Both Len and Cap must be non-nil Values of integer type.
+//
+// (Alloc(types.Array) followed by Slice will not suffice because
+// Alloc can only create arrays of constant length.)
+//
+// Type() returns a (possibly named) *types.Slice.
+//
+// Pos() returns the ast.CallExpr.Lparen for the make([]T) that
+// created it.
+//
+// Example printed form:
+// t1 = make []string 1:int t0
+// t1 = make StringSlice 1:int t0
+//
+type MakeSlice struct {
+ register
+ Len Value
+ Cap Value
+}
+
+// The Slice instruction yields a slice of an existing string, slice
+// or *array X between optional integer bounds Low and High.
+//
+// Dynamically, this instruction panics if X evaluates to a nil *array
+// pointer.
+//
+// Type() returns string if the type of X was string, otherwise a
+// *types.Slice with the same element type as X.
+//
+// Pos() returns the ast.SliceExpr.Lbrack if created by a x[:] slice
+// operation, the ast.CompositeLit.Lbrace if created by a literal, or
+// NoPos if not explicit in the source (e.g. a variadic argument slice).
+//
+// Example printed form:
+// t1 = slice t0[1:]
+//
+type Slice struct {
+ register
+ X Value // slice, string, or *array
+ Low, High, Max Value // each may be nil
+}
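+
+// An illustrative source-level sketch (not upstream documentation);
+// register names are hypothetical:
+//
+//	xs := []int{1, 2, 3}
+//	ys := xs[1:] // t1 = slice t0[1:]
+//	_ = ys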
+
+// The FieldAddr instruction yields the address of Field of *struct X.
+//
+// The field is identified by its index within the field list of the
+// struct type of X.
+//
+// Dynamically, this instruction panics if X evaluates to a nil
+// pointer.
+//
+// Type() returns a (possibly named) *types.Pointer.
+//
+// Pos() returns the position of the ast.SelectorExpr.Sel for the
+// field, if explicit in the source.
+//
+// Example printed form:
+// t1 = &t0.name [#1]
+//
+type FieldAddr struct {
+ register
+ X Value // *struct
+ Field int // field is X.Type().Underlying().(*types.Pointer).Elem().Underlying().(*types.Struct).Field(Field)
+}
+
+// The Field instruction yields the Field of struct X.
+//
+// The field is identified by its index within the field list of the
+// struct type of X; by using numeric indices we avoid ambiguity of
+// package-local identifiers and permit compact representations.
+//
+// Pos() returns the position of the ast.SelectorExpr.Sel for the
+// field, if explicit in the source.
+//
+// Example printed form:
+// t1 = t0.name [#1]
+//
+type Field struct {
+ register
+ X Value // struct
+ Field int // index into X.Type().(*types.Struct).Fields
+}
+
+// The IndexAddr instruction yields the address of the element at
+// index Index of collection X. Index is an integer expression.
+//
+// The elements of maps and strings are not addressable; use Lookup or
+// MapUpdate instead.
+//
+// Dynamically, this instruction panics if X evaluates to a nil *array
+// pointer.
+//
+// Type() returns a (possibly named) *types.Pointer.
+//
+// Pos() returns the ast.IndexExpr.Lbrack for the index operation, if
+// explicit in the source.
+//
+// Example printed form:
+// t2 = &t0[t1]
+//
+type IndexAddr struct {
+ register
+ X Value // slice or *array,
+ Index Value // numeric index
+}
+
+// The Index instruction yields element Index of array X.
+//
+// Pos() returns the ast.IndexExpr.Lbrack for the index operation, if
+// explicit in the source.
+//
+// Example printed form:
+// t2 = t0[t1]
+//
+type Index struct {
+ register
+ X Value // array
+ Index Value // integer index
+}
+
+// The Lookup instruction yields element Index of collection X, a map
+// or string. Index is an integer expression if X is a string, or a
+// value of the appropriate key type if X is a map.
+//
+// If CommaOk, the result is a 2-tuple of the value above and a
+// boolean indicating the result of a map membership test for the key.
+// The components of the tuple are accessed using Extract.
+//
+// Pos() returns the ast.IndexExpr.Lbrack, if explicit in the source.
+//
+// Example printed form:
+// t2 = t0[t1]
+// t5 = t3[t4],ok
+//
+type Lookup struct {
+ register
+ X Value // string or map
+ Index Value // numeric or key-typed index
+ CommaOk bool // return a value,ok pair
+}
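+
+// An illustrative source-level sketch (not upstream documentation):
+// the comma-ok form yields a 2-tuple read with Extract; register
+// names are hypothetical:
+//
+//	v, ok := m["key"] // t1 = t0["key":string],ok
+//	                  // v = extract t1 #0; ok = extract t1 #1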
+
+// SelectState is a helper for Select.
+// It represents one goal state and its corresponding communication.
+//
+type SelectState struct {
+ Dir types.ChanDir // direction of case (SendOnly or RecvOnly)
+ Chan Value // channel to use (for send or receive)
+ Send Value // value to send (for send)
+ Pos token.Pos // position of token.ARROW
+ DebugNode ast.Node // ast.SendStmt or ast.UnaryExpr(<-) [debug mode]
+}
+
+// The Select instruction tests whether (or blocks until) one
+// of the specified sent or received states is entered.
+//
+// Let n be the number of States for which Dir==RECV and T_i (0<=i<n)
+// be the element type of each such state's Chan.
+// Select returns an n+2-tuple
+// (index int, recvOk bool, r_0 T_0, ... r_n-1 T_n-1)
+// The tuple's components, described below, must be accessed via the
+// Extract instruction.
+//
+// If Blocking, select waits until exactly one state holds, i.e. a
+// channel becomes ready for the designated operation of sending or
+// receiving; select chooses one among the ready states
+// pseudorandomly, performs the send or receive operation, and sets
+// 'index' to the index of the chosen channel.
+//
+// If !Blocking, select doesn't block if no states hold; instead it
+// returns immediately with index equal to -1.
+//
+// If the chosen channel was used for a receive, the r_i component is
+// set to the received value, where i is the index of that state among
+// all n receive states; otherwise r_i has the zero value of type T_i.
+// Note that the receive index i is not the same as the 'index'
+// component of the tuple.
+//
+// The second component of the tuple, recvOk, is a boolean whose value
+// is true iff the selected operation was a receive and the receive
+// successfully yielded a value.
+//
+// Pos() returns the ast.SelectStmt.Select.
+//
+// Example printed form:
+// t3 = select nonblocking [<-t0, t1<-t2]
+// t4 = select blocking []
+//
+type Select struct {
+ register
+ States []*SelectState
+ Blocking bool
+}
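+
+// An illustrative source-level sketch (not upstream documentation):
+// one receive state (n=1) and one send state, so the result is the
+// 3-tuple (index, recvOk, r_0); names are hypothetical:
+//
+//	select {
+//	case v := <-in: // state 0: r_0 holds v when index == 0
+//		_ = v
+//	case out <- x: // state 1
+//	}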
+
+// The Range instruction yields an iterator over the domain and range
+// of X, which must be a string or map.
+//
+// Elements are accessed via Next.
+//
+// Type() returns an opaque and degenerate "rangeIter" type.
+//
+// Pos() returns the ast.RangeStmt.For.
+//
+// Example printed form:
+// t0 = range "hello":string
+//
+type Range struct {
+ register
+ X Value // string or map
+}
+
+// The Next instruction reads and advances the (map or string)
+// iterator Iter and returns a 3-tuple value (ok, k, v). If the
+// iterator is not exhausted, ok is true and k and v are the next
+// elements of the domain and range, respectively. Otherwise ok is
+// false and k and v are undefined.
+//
+// Components of the tuple are accessed using Extract.
+//
+// The IsString field distinguishes iterators over strings from those
+// over maps, as the Type() alone is insufficient: consider
+// map[int]rune.
+//
+// Type() returns a *types.Tuple for the triple (ok, k, v).
+// The types of k and/or v may be types.Invalid.
+//
+// Example printed form:
+// t1 = next t0
+//
+type Next struct {
+ register
+ Iter Value
+ IsString bool // true => string iterator; false => map iterator.
+}
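+
+// An illustrative source-level sketch (not upstream documentation): a
+// range loop over a map emits one Range, then Next and Extract per
+// iteration, with the ok component guarding the loop:
+//
+//	for k, v := range m {
+//		// t1 = next t0; ok, k, v are read with Extract
+//		_, _ = k, v
+//	}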
+
+// The TypeAssert instruction tests whether interface value X has type
+// AssertedType.
+//
+// If !CommaOk, on success it returns v, the result of the conversion
+// (defined below); on failure it panics.
+//
+// If CommaOk: on success it returns a pair (v, true) where v is the
+// result of the conversion; on failure it returns (z, false) where z
+// is AssertedType's zero value. The components of the pair must be
+// accessed using the Extract instruction.
+//
+// If AssertedType is a concrete type, TypeAssert checks whether the
+// dynamic type in interface X is equal to it, and if so, the result
+// of the conversion is a copy of the value in the interface.
+//
+// If AssertedType is an interface, TypeAssert checks whether the
+// dynamic type of the interface is assignable to it, and if so, the
+// result of the conversion is a copy of the interface value X.
+// If AssertedType is a superinterface of X.Type(), the operation will
+// fail iff the operand is nil. (Contrast with ChangeInterface, which
+// performs no nil-check.)
+//
+// Type() reflects the actual type of the result, possibly a
+// 2-types.Tuple; AssertedType is the asserted type.
+//
+// Pos() returns the ast.CallExpr.Lparen if the instruction arose from
+// an explicit T(e) conversion; the ast.TypeAssertExpr.Lparen if the
+// instruction arose from an explicit e.(T) operation; or the
+// ast.CaseClause.Case if the instruction arose from a case of a
+// type-switch statement.
+//
+// Example printed form:
+// t1 = typeassert t0.(int)
+// t3 = typeassert,ok t2.(T)
+//
+type TypeAssert struct {
+ register
+ X Value
+ AssertedType types.Type
+ CommaOk bool
+}
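+
+// An illustrative source-level sketch (not upstream documentation);
+// register names are hypothetical:
+//
+//	s, ok := x.(fmt.Stringer) // t3 = typeassert,ok t2.(fmt.Stringer)
+//	_, _ = s, ok              // s and ok are read with Extract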
+
+// The Extract instruction yields component Index of Tuple.
+//
+// This is used to access the results of instructions with multiple
+// return values, such as Call, TypeAssert, Next, UnOp(ARROW) and
+// IndexExpr(Map).
+//
+// Example printed form:
+// t1 = extract t0 #1
+//
+type Extract struct {
+ register
+ Tuple Value
+ Index int
+}
+
+// Instructions executed for effect. They do not yield a value. --------------------
+
+// The Jump instruction transfers control to the sole successor of its
+// owning block.
+//
+// A Jump must be the last instruction of its containing BasicBlock.
+//
+// Pos() returns NoPos.
+//
+// Example printed form:
+// jump done
+//
+type Jump struct {
+ anInstruction
+}
+
+// The If instruction transfers control to one of the two successors
+// of its owning block, depending on the boolean Cond: the first if
+// true, the second if false.
+//
+// An If instruction must be the last instruction of its containing
+// BasicBlock.
+//
+// Pos() returns NoPos.
+//
+// Example printed form:
+// if t0 goto done else body
+//
+type If struct {
+ anInstruction
+ Cond Value
+}
+
+// The Return instruction returns values and control back to the calling
+// function.
+//
+// len(Results) is always equal to the number of results in the
+// function's signature.
+//
+// If len(Results) > 1, Return returns a tuple value with the specified
+// components which the caller must access using Extract instructions.
+//
+// There is no instruction to return a ready-made tuple like those
+// returned by a "value,ok"-mode TypeAssert, Lookup or UnOp(ARROW) or
+// a tail-call to a function with multiple result parameters.
+//
+// Return must be the last instruction of its containing BasicBlock.
+// Such a block has no successors.
+//
+// Pos() returns the ast.ReturnStmt.Return, if explicit in the source.
+//
+// Example printed form:
+// return
+// return nil:I, 2:int
+//
+type Return struct {
+ anInstruction
+ Results []Value
+ pos token.Pos
+}
+
+// The RunDefers instruction pops and invokes the entire stack of
+// procedure calls pushed by Defer instructions in this function.
+//
+// It is legal to encounter multiple 'rundefers' instructions in a
+// single control-flow path through a function; this is useful in
+// the combined init() function, for example.
+//
+// Pos() returns NoPos.
+//
+// Example printed form:
+// rundefers
+//
+type RunDefers struct {
+ anInstruction
+}
+
+// The Panic instruction initiates a panic with value X.
+//
+// A Panic instruction must be the last instruction of its containing
+// BasicBlock, which must have no successors.
+//
+// NB: 'go panic(x)' and 'defer panic(x)' do not use this instruction;
+// they are treated as calls to a built-in function.
+//
+// Pos() returns the ast.CallExpr.Lparen if this panic was explicit
+// in the source.
+//
+// Example printed form:
+// panic t0
+//
+type Panic struct {
+ anInstruction
+ X Value // an interface{}
+ pos token.Pos
+}
+
+// The Go instruction creates a new goroutine and calls the specified
+// function within it.
+//
+// See CallCommon for generic function call documentation.
+//
+// Pos() returns the ast.GoStmt.Go.
+//
+// Example printed form:
+// go println(t0, t1)
+// go t3()
+// go invoke t5.Println(...t6)
+//
+type Go struct {
+ anInstruction
+ Call CallCommon
+ pos token.Pos
+}
+
+// The Defer instruction pushes the specified call onto a stack of
+// functions to be called by a RunDefers instruction or by a panic.
+//
+// See CallCommon for generic function call documentation.
+//
+// Pos() returns the ast.DeferStmt.Defer.
+//
+// Example printed form:
+// defer println(t0, t1)
+// defer t3()
+// defer invoke t5.Println(...t6)
+//
+type Defer struct {
+ anInstruction
+ Call CallCommon
+ pos token.Pos
+}
+
+// The Send instruction sends X on channel Chan.
+//
+// Pos() returns the ast.SendStmt.Arrow, if explicit in the source.
+//
+// Example printed form:
+// send t0 <- t1
+//
+type Send struct {
+ anInstruction
+ Chan, X Value
+ pos token.Pos
+}
+
+// The Store instruction stores Val at address Addr.
+// Stores can be of arbitrary types.
+//
+// Pos() returns the position of the source-level construct most closely
+// associated with the memory store operation.
+// Since implicit memory stores are numerous and varied and depend upon
+// implementation choices, the details are not specified.
+//
+// Example printed form:
+// *x = y
+//
+type Store struct {
+ anInstruction
+ Addr Value
+ Val Value
+ pos token.Pos
+}
+
+// The BlankStore instruction is emitted for assignments to the blank
+// identifier.
+//
+// BlankStore is a pseudo-instruction: it has no dynamic effect.
+//
+// Pos() returns NoPos.
+//
+// Example printed form:
+// _ = t0
+//
+type BlankStore struct {
+ anInstruction
+ Val Value
+}
+
+// The MapUpdate instruction updates the association of Map[Key] to
+// Value.
+//
+// Pos() returns the ast.KeyValueExpr.Colon or ast.IndexExpr.Lbrack,
+// if explicit in the source.
+//
+// Example printed form:
+// t0[t1] = t2
+//
+type MapUpdate struct {
+ anInstruction
+ Map Value
+ Key Value
+ Value Value
+ pos token.Pos
+}
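+
+// An illustrative source-level sketch (not upstream documentation);
+// the printed constants are approximate:
+//
+//	m["key"] = 1 // t0["key":string] = 1:int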
+
+// A DebugRef instruction maps a source-level expression Expr to the
+// SSA value X that represents the value (!IsAddr) or address (IsAddr)
+// of that expression.
+//
+// DebugRef is a pseudo-instruction: it has no dynamic effect.
+//
+// Pos() returns Expr.Pos(), the start position of the source-level
+// expression. This is not the same as the "designated" token as
+// documented at Value.Pos(). e.g. CallExpr.Pos() does not return the
+// position of the ("designated") Lparen token.
+//
+// If Expr is an *ast.Ident denoting a var or func, Object() returns
+// the object; though this information can be obtained from the type
+// checker, including it here greatly facilitates debugging.
+// For non-Ident expressions, Object() returns nil.
+//
+// DebugRefs are generated only for functions built with debugging
+// enabled; see Package.SetDebugMode() and the GlobalDebug builder
+// mode flag.
+//
+// DebugRefs are not emitted for ast.Idents referring to constants or
+// predeclared identifiers, since they are trivial and numerous.
+// Nor are they emitted for ast.ParenExprs.
+//
+// (By representing these as instructions, rather than out-of-band,
+// consistency is maintained during transformation passes by the
+// ordinary SSA renaming machinery.)
+//
+// Example printed form:
+// ; *ast.CallExpr @ 102:9 is t5
+// ; var x float64 @ 109:72 is x
+// ; address of *ast.CompositeLit @ 216:10 is t0
+//
+type DebugRef struct {
+ anInstruction
+ Expr ast.Expr // the referring expression (never *ast.ParenExpr)
+ object types.Object // the identity of the source var/func
+ IsAddr bool // Expr is addressable and X is the address it denotes
+ X Value // the value or address of Expr
+}
+
+// Embeddable mix-ins and helpers for common parts of other structs. -----------
+
+// register is a mix-in embedded by all SSA values that are also
+// instructions, i.e. virtual registers, and provides a uniform
+// implementation of most of the Value interface: Value.Name() is a
+// numbered register (e.g. "t0"); the other methods are field accessors.
+//
+// Temporary names are automatically assigned to each register on
+// completion of building a function in SSA form.
+//
+// Clients must not assume that the 'id' value (and the Name() derived
+// from it) is unique within a function. As always in this API,
+// semantics are determined only by identity; names exist only to
+// facilitate debugging.
+//
+type register struct {
+ anInstruction
+ num int // "name" of virtual register, e.g. "t0". Not guaranteed unique.
+ typ types.Type // type of virtual register
+ pos token.Pos // position of source expression, or NoPos
+ referrers []Instruction
+}
+
+// anInstruction is a mix-in embedded by all Instructions.
+// It provides the implementations of the Block and setBlock methods.
+type anInstruction struct {
+ block *BasicBlock // the basic block of this instruction
+}
+
+// CallCommon is contained by Go, Defer and Call to hold the
+// common parts of a function or method call.
+//
+// Each CallCommon exists in one of two modes, function call and
+// interface method invocation, or "call" and "invoke" for short.
+//
+// 1. "call" mode: when Method is nil (!IsInvoke), a CallCommon
+// represents an ordinary function call of the value in Value,
+// which may be a *Builtin, a *Function or any other value of kind
+// 'func'.
+//
+// Value may be one of:
+// (a) a *Function, indicating a statically dispatched call
+// to a package-level function, an anonymous function, or
+// a method of a named type.
+// (b) a *MakeClosure, indicating an immediately applied
+// function literal with free variables.
+// (c) a *Builtin, indicating a statically dispatched call
+// to a built-in function.
+// (d) any other value, indicating a dynamically dispatched
+// function call.
+// StaticCallee returns the identity of the callee in cases
+// (a) and (b), nil otherwise.
+//
+// Args contains the arguments to the call. If Value is a method,
+// Args[0] contains the receiver parameter.
+//
+// Example printed form:
+// t2 = println(t0, t1)
+// go t3()
+// defer t5(...t6)
+//
+// 2. "invoke" mode: when Method is non-nil (IsInvoke), a CallCommon
+// represents a dynamically dispatched call to an interface method.
+// In this mode, Value is the interface value and Method is the
+// interface's abstract method. Note: an abstract method may be
+// shared by multiple interfaces due to embedding; Value.Type()
+// provides the specific interface used for this call.
+//
+// Value is implicitly supplied to the concrete method implementation
+// as the receiver parameter; in other words, Args[0] holds not the
+// receiver but the first true argument.
+//
+// Example printed form:
+// t1 = invoke t0.String()
+// go invoke t3.Run(t2)
+// defer invoke t4.Handle(...t5)
+//
+// For all calls to variadic functions (Signature().Variadic()),
+// the last element of Args is a slice.
+//
+type CallCommon struct {
+ Value Value // receiver (invoke mode) or func value (call mode)
+ Method *types.Func // abstract method (invoke mode)
+ Args []Value // actual parameters (in static method call, includes receiver)
+ pos token.Pos // position of CallExpr.Lparen, iff explicit in source
+}
+
+// IsInvoke returns true if this call has "invoke" (not "call") mode.
+func (c *CallCommon) IsInvoke() bool {
+ return c.Method != nil
+}
+
+func (c *CallCommon) Pos() token.Pos { return c.pos }
+
+// Signature returns the signature of the called function.
+//
+// For an "invoke"-mode call, the signature of the interface method is
+// returned.
+//
+// In either "call" or "invoke" mode, if the callee is a method, its
+// receiver is represented by sig.Recv, not sig.Params().At(0).
+//
+func (c *CallCommon) Signature() *types.Signature {
+ if c.Method != nil {
+ return c.Method.Type().(*types.Signature)
+ }
+ return c.Value.Type().Underlying().(*types.Signature)
+}
+
+// StaticCallee returns the callee if this is a trivially static
+// "call"-mode call to a function.
+func (c *CallCommon) StaticCallee() *Function {
+ switch fn := c.Value.(type) {
+ case *Function:
+ return fn
+ case *MakeClosure:
+ return fn.Fn.(*Function)
+ }
+ return nil
+}
+
+// Description returns a description of the mode of this call suitable
+// for a user interface, e.g., "static method call".
+func (c *CallCommon) Description() string {
+ switch fn := c.Value.(type) {
+ case *Builtin:
+ return "built-in function call"
+ case *MakeClosure:
+ return "static function closure call"
+ case *Function:
+ if fn.Signature.Recv() != nil {
+ return "static method call"
+ }
+ return "static function call"
+ }
+ if c.IsInvoke() {
+ return "dynamic method call" // ("invoke" mode)
+ }
+ return "dynamic function call"
+}
+
+// The CallInstruction interface, implemented by *Go, *Defer and *Call,
+// exposes the common parts of function-calling instructions,
+// yet provides a way back to the Value defined by *Call alone.
+//
+type CallInstruction interface {
+ Instruction
+ Common() *CallCommon // returns the common parts of the call
+ Value() *Call // returns the result value of the call (*Call) or nil (*Go, *Defer)
+}
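+
+// An illustrative client-side sketch (not upstream documentation):
+// CallInstruction lets one loop classify every call site, whether it
+// is a *Call, *Go or *Defer; fn is a hypothetical *Function:
+//
+//	for _, b := range fn.Blocks {
+//		for _, instr := range b.Instrs {
+//			if call, ok := instr.(CallInstruction); ok {
+//				common := call.Common()
+//				if callee := common.StaticCallee(); callee != nil {
+//					fmt.Println(common.Description(), "->", callee)
+//				}
+//			}
+//		}
+//	}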
+
+func (s *Call) Common() *CallCommon { return &s.Call }
+func (s *Defer) Common() *CallCommon { return &s.Call }
+func (s *Go) Common() *CallCommon { return &s.Call }
+
+func (s *Call) Value() *Call { return s }
+func (s *Defer) Value() *Call { return nil }
+func (s *Go) Value() *Call { return nil }
+
+func (v *Builtin) Type() types.Type { return v.sig }
+func (v *Builtin) Name() string { return v.name }
+func (*Builtin) Referrers() *[]Instruction { return nil }
+func (v *Builtin) Pos() token.Pos { return token.NoPos }
+func (v *Builtin) Object() types.Object { return types.Universe.Lookup(v.name) }
+func (v *Builtin) Parent() *Function { return nil }
+
+func (v *FreeVar) Type() types.Type { return v.typ }
+func (v *FreeVar) Name() string { return v.name }
+func (v *FreeVar) Referrers() *[]Instruction { return &v.referrers }
+func (v *FreeVar) Pos() token.Pos { return v.pos }
+func (v *FreeVar) Parent() *Function { return v.parent }
+
+func (v *Global) Type() types.Type { return v.typ }
+func (v *Global) Name() string { return v.name }
+func (v *Global) Parent() *Function { return nil }
+func (v *Global) Pos() token.Pos { return v.pos }
+func (v *Global) Referrers() *[]Instruction { return nil }
+func (v *Global) Token() token.Token { return token.VAR }
+func (v *Global) Object() types.Object { return v.object }
+func (v *Global) String() string { return v.RelString(nil) }
+func (v *Global) Package() *Package { return v.Pkg }
+func (v *Global) RelString(from *types.Package) string { return relString(v, from) }
+
+func (v *Function) Name() string { return v.name }
+func (v *Function) Type() types.Type { return v.Signature }
+func (v *Function) Pos() token.Pos { return v.pos }
+func (v *Function) Token() token.Token { return token.FUNC }
+func (v *Function) Object() types.Object { return v.object }
+func (v *Function) String() string { return v.RelString(nil) }
+func (v *Function) Package() *Package { return v.Pkg }
+func (v *Function) Parent() *Function { return v.parent }
+func (v *Function) Referrers() *[]Instruction {
+ if v.parent != nil {
+ return &v.referrers
+ }
+ return nil
+}
+
+func (v *Parameter) Type() types.Type { return v.typ }
+func (v *Parameter) Name() string { return v.name }
+func (v *Parameter) Object() types.Object { return v.object }
+func (v *Parameter) Referrers() *[]Instruction { return &v.referrers }
+func (v *Parameter) Pos() token.Pos { return v.pos }
+func (v *Parameter) Parent() *Function { return v.parent }
+
+func (v *Alloc) Type() types.Type { return v.typ }
+func (v *Alloc) Referrers() *[]Instruction { return &v.referrers }
+func (v *Alloc) Pos() token.Pos { return v.pos }
+
+func (v *register) Type() types.Type { return v.typ }
+func (v *register) setType(typ types.Type) { v.typ = typ }
+func (v *register) Name() string { return fmt.Sprintf("t%d", v.num) }
+func (v *register) setNum(num int) { v.num = num }
+func (v *register) Referrers() *[]Instruction { return &v.referrers }
+func (v *register) Pos() token.Pos { return v.pos }
+func (v *register) setPos(pos token.Pos) { v.pos = pos }
+
+func (v *anInstruction) Parent() *Function { return v.block.parent }
+func (v *anInstruction) Block() *BasicBlock { return v.block }
+func (v *anInstruction) setBlock(block *BasicBlock) { v.block = block }
+func (v *anInstruction) Referrers() *[]Instruction { return nil }
+
+func (t *Type) Name() string { return t.object.Name() }
+func (t *Type) Pos() token.Pos { return t.object.Pos() }
+func (t *Type) Type() types.Type { return t.object.Type() }
+func (t *Type) Token() token.Token { return token.TYPE }
+func (t *Type) Object() types.Object { return t.object }
+func (t *Type) String() string { return t.RelString(nil) }
+func (t *Type) Package() *Package { return t.pkg }
+func (t *Type) RelString(from *types.Package) string { return relString(t, from) }
+
+func (c *NamedConst) Name() string { return c.object.Name() }
+func (c *NamedConst) Pos() token.Pos { return c.object.Pos() }
+func (c *NamedConst) String() string { return c.RelString(nil) }
+func (c *NamedConst) Type() types.Type { return c.object.Type() }
+func (c *NamedConst) Token() token.Token { return token.CONST }
+func (c *NamedConst) Object() types.Object { return c.object }
+func (c *NamedConst) Package() *Package { return c.pkg }
+func (c *NamedConst) RelString(from *types.Package) string { return relString(c, from) }
+
+// Func returns the package-level function of the specified name,
+// or nil if not found.
+//
+func (p *Package) Func(name string) (f *Function) {
+ f, _ = p.Members[name].(*Function)
+ return
+}
+
+// Var returns the package-level variable of the specified name,
+// or nil if not found.
+//
+func (p *Package) Var(name string) (g *Global) {
+ g, _ = p.Members[name].(*Global)
+ return
+}
+
+// Const returns the package-level constant of the specified name,
+// or nil if not found.
+//
+func (p *Package) Const(name string) (c *NamedConst) {
+ c, _ = p.Members[name].(*NamedConst)
+ return
+}
+
+// Type returns the package-level type of the specified name,
+// or nil if not found.
+//
+func (p *Package) Type(name string) (t *Type) {
+ t, _ = p.Members[name].(*Type)
+ return
+}
+
+func (v *Call) Pos() token.Pos { return v.Call.pos }
+func (s *Defer) Pos() token.Pos { return s.pos }
+func (s *Go) Pos() token.Pos { return s.pos }
+func (s *MapUpdate) Pos() token.Pos { return s.pos }
+func (s *Panic) Pos() token.Pos { return s.pos }
+func (s *Return) Pos() token.Pos { return s.pos }
+func (s *Send) Pos() token.Pos { return s.pos }
+func (s *Store) Pos() token.Pos { return s.pos }
+func (s *BlankStore) Pos() token.Pos { return token.NoPos }
+func (s *If) Pos() token.Pos { return token.NoPos }
+func (s *Jump) Pos() token.Pos { return token.NoPos }
+func (s *RunDefers) Pos() token.Pos { return token.NoPos }
+func (s *DebugRef) Pos() token.Pos { return s.Expr.Pos() }
+
+// Operands.
+
+func (v *Alloc) Operands(rands []*Value) []*Value {
+ return rands
+}
+
+func (v *BinOp) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X, &v.Y)
+}
+
+func (c *CallCommon) Operands(rands []*Value) []*Value {
+ rands = append(rands, &c.Value)
+ for i := range c.Args {
+ rands = append(rands, &c.Args[i])
+ }
+ return rands
+}
+
+func (s *Go) Operands(rands []*Value) []*Value {
+ return s.Call.Operands(rands)
+}
+
+func (s *Call) Operands(rands []*Value) []*Value {
+ return s.Call.Operands(rands)
+}
+
+func (s *Defer) Operands(rands []*Value) []*Value {
+ return s.Call.Operands(rands)
+}
+
+func (v *ChangeInterface) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (v *ChangeType) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (v *Convert) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (s *DebugRef) Operands(rands []*Value) []*Value {
+ return append(rands, &s.X)
+}
+
+func (v *Extract) Operands(rands []*Value) []*Value {
+ return append(rands, &v.Tuple)
+}
+
+func (v *Field) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (v *FieldAddr) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (s *If) Operands(rands []*Value) []*Value {
+ return append(rands, &s.Cond)
+}
+
+func (v *Index) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X, &v.Index)
+}
+
+func (v *IndexAddr) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X, &v.Index)
+}
+
+func (*Jump) Operands(rands []*Value) []*Value {
+ return rands
+}
+
+func (v *Lookup) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X, &v.Index)
+}
+
+func (v *MakeChan) Operands(rands []*Value) []*Value {
+ return append(rands, &v.Size)
+}
+
+func (v *MakeClosure) Operands(rands []*Value) []*Value {
+ rands = append(rands, &v.Fn)
+ for i := range v.Bindings {
+ rands = append(rands, &v.Bindings[i])
+ }
+ return rands
+}
+
+func (v *MakeInterface) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (v *MakeMap) Operands(rands []*Value) []*Value {
+ return append(rands, &v.Reserve)
+}
+
+func (v *MakeSlice) Operands(rands []*Value) []*Value {
+ return append(rands, &v.Len, &v.Cap)
+}
+
+func (v *MapUpdate) Operands(rands []*Value) []*Value {
+ return append(rands, &v.Map, &v.Key, &v.Value)
+}
+
+func (v *Next) Operands(rands []*Value) []*Value {
+ return append(rands, &v.Iter)
+}
+
+func (s *Panic) Operands(rands []*Value) []*Value {
+ return append(rands, &s.X)
+}
+
+func (v *Sigma) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (v *Phi) Operands(rands []*Value) []*Value {
+ for i := range v.Edges {
+ rands = append(rands, &v.Edges[i])
+ }
+ return rands
+}
+
+func (v *Range) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (s *Return) Operands(rands []*Value) []*Value {
+ for i := range s.Results {
+ rands = append(rands, &s.Results[i])
+ }
+ return rands
+}
+
+func (*RunDefers) Operands(rands []*Value) []*Value {
+ return rands
+}
+
+func (v *Select) Operands(rands []*Value) []*Value {
+ for i := range v.States {
+ rands = append(rands, &v.States[i].Chan, &v.States[i].Send)
+ }
+ return rands
+}
+
+func (s *Send) Operands(rands []*Value) []*Value {
+ return append(rands, &s.Chan, &s.X)
+}
+
+func (v *Slice) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X, &v.Low, &v.High, &v.Max)
+}
+
+func (s *Store) Operands(rands []*Value) []*Value {
+ return append(rands, &s.Addr, &s.Val)
+}
+
+func (s *BlankStore) Operands(rands []*Value) []*Value {
+ return append(rands, &s.Val)
+}
+
+func (v *TypeAssert) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+func (v *UnOp) Operands(rands []*Value) []*Value {
+ return append(rands, &v.X)
+}
+
+// Non-Instruction Values:
+func (v *Builtin) Operands(rands []*Value) []*Value { return rands }
+func (v *FreeVar) Operands(rands []*Value) []*Value { return rands }
+func (v *Const) Operands(rands []*Value) []*Value { return rands }
+func (v *Function) Operands(rands []*Value) []*Value { return rands }
+func (v *Global) Operands(rands []*Value) []*Value { return rands }
+func (v *Parameter) Operands(rands []*Value) []*Value { return rands }
diff --git a/vendor/honnef.co/go/tools/ssa/staticcheck.conf b/vendor/honnef.co/go/tools/ssa/staticcheck.conf
new file mode 100644
index 0000000000000..d7b38bc3563d9
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/staticcheck.conf
@@ -0,0 +1,3 @@
+# ssa/... is mostly imported from upstream and we don't want to
+# deviate from it too much, hence disabling SA1019
+checks = ["inherit", "-SA1019"]
diff --git a/vendor/honnef.co/go/tools/ssa/testmain.go b/vendor/honnef.co/go/tools/ssa/testmain.go
new file mode 100644
index 0000000000000..8ec15ba50513e
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/testmain.go
@@ -0,0 +1,271 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// CreateTestMainPackage synthesizes a main package that runs all the
+// tests of the supplied packages.
+// It is closely coupled to $GOROOT/src/cmd/go/test.go and $GOROOT/src/testing.
+//
+// TODO(adonovan): throw this all away now that x/tools/go/packages
+// provides access to the actual synthetic test main files.
+
+import (
+ "bytes"
+ "fmt"
+ "go/ast"
+ "go/parser"
+ "go/types"
+ "log"
+ "os"
+ "strings"
+ "text/template"
+)
+
+// FindTests returns the Test, Benchmark, and Example functions
+// (as defined by "go test") defined in the specified package,
+// and its TestMain function, if any.
+//
+// Deprecated: use x/tools/go/packages to access synthetic testmain packages.
+func FindTests(pkg *Package) (tests, benchmarks, examples []*Function, main *Function) {
+ prog := pkg.Prog
+
+ // The first two of these may be nil: if the program doesn't import "testing",
+ // it can't contain any tests, but it may yet contain Examples.
+ var testSig *types.Signature // func(*testing.T)
+ var benchmarkSig *types.Signature // func(*testing.B)
+ var exampleSig = types.NewSignature(nil, nil, nil, false) // func()
+
+ // Obtain the types from the parameters of testing.MainStart.
+ if testingPkg := prog.ImportedPackage("testing"); testingPkg != nil {
+ mainStart := testingPkg.Func("MainStart")
+ params := mainStart.Signature.Params()
+ testSig = funcField(params.At(1).Type())
+ benchmarkSig = funcField(params.At(2).Type())
+
+ // Does the package define this function?
+ // func TestMain(*testing.M)
+ if f := pkg.Func("TestMain"); f != nil {
+ sig := f.Type().(*types.Signature)
+ starM := mainStart.Signature.Results().At(0).Type() // *testing.M
+ if sig.Results().Len() == 0 &&
+ sig.Params().Len() == 1 &&
+ types.Identical(sig.Params().At(0).Type(), starM) {
+ main = f
+ }
+ }
+ }
+
+ // TODO(adonovan): use a stable order, e.g. lexical.
+ for _, mem := range pkg.Members {
+ if f, ok := mem.(*Function); ok &&
+ ast.IsExported(f.Name()) &&
+ strings.HasSuffix(prog.Fset.Position(f.Pos()).Filename, "_test.go") {
+
+ switch {
+ case testSig != nil && isTestSig(f, "Test", testSig):
+ tests = append(tests, f)
+ case benchmarkSig != nil && isTestSig(f, "Benchmark", benchmarkSig):
+ benchmarks = append(benchmarks, f)
+ case isTestSig(f, "Example", exampleSig):
+ examples = append(examples, f)
+ default:
+ continue
+ }
+ }
+ }
+ return
+}
+
+// Like isTest, but checks the signature too.
+func isTestSig(f *Function, prefix string, sig *types.Signature) bool {
+ return isTest(f.Name(), prefix) && types.Identical(f.Signature, sig)
+}
+
+// funcField, given the type of one of the three slice parameters of
+// testing.Main, returns the function type.
+func funcField(slice types.Type) *types.Signature {
+ return slice.(*types.Slice).Elem().Underlying().(*types.Struct).Field(1).Type().(*types.Signature)
+}
+
+// isTest tells whether name looks like a test (or benchmark, according to prefix).
+// It is a Test (say) if there is a character after Test that is not a lower-case letter.
+// We don't want TesticularCancer.
+// Plundered from $GOROOT/src/cmd/go/test.go
+func isTest(name, prefix string) bool {
+ if !strings.HasPrefix(name, prefix) {
+ return false
+ }
+ if len(name) == len(prefix) { // "Test" is ok
+ return true
+ }
+ return ast.IsExported(name[len(prefix):])
+}
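+
+// For example, isTest("TestFoo", "Test") and isTest("Test", "Test")
+// are true, while isTest("Testify", "Test") is false because the
+// character after the prefix ("i") is lower-case.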
+
+// CreateTestMainPackage creates and returns a synthetic "testmain"
+// package for the specified package if it defines tests, benchmarks or
+// executable examples, or nil otherwise. The new package is named
+// "main" and provides a function named "main" that runs the tests,
+// similar to the one that would be created by the 'go test' tool.
+//
+// Subsequent calls to prog.AllPackages include the new package.
+// The package pkg must belong to the program prog.
+//
+// Deprecated: use x/tools/go/packages to access synthetic testmain packages.
+func (prog *Program) CreateTestMainPackage(pkg *Package) *Package {
+ if pkg.Prog != prog {
+ log.Fatal("Package does not belong to Program")
+ }
+
+ // Template data
+ var data struct {
+ Pkg *Package
+ Tests, Benchmarks, Examples []*Function
+ Main *Function
+ Go18 bool
+ }
+ data.Pkg = pkg
+
+ // Enumerate tests.
+ data.Tests, data.Benchmarks, data.Examples, data.Main = FindTests(pkg)
+ if data.Main == nil &&
+ data.Tests == nil && data.Benchmarks == nil && data.Examples == nil {
+ return nil
+ }
+
+ // Synthesize source for testmain package.
+ path := pkg.Pkg.Path() + "$testmain"
+ tmpl := testmainTmpl
+ if testingPkg := prog.ImportedPackage("testing"); testingPkg != nil {
+ // In Go 1.8, testing.MainStart's first argument is an interface, not a func.
+ data.Go18 = types.IsInterface(testingPkg.Func("MainStart").Signature.Params().At(0).Type())
+ } else {
+ // The program does not import "testing", but FindTests
+ // returned non-nil, which must mean there were Examples
+ // but no Test, Benchmark, or TestMain functions.
+
+ // We'll simply call them from testmain.main; this will
+ // ensure they don't panic, but will not check any
+ // "Output:" comments.
+ // (We should not execute an Example that has no
+ // "Output:" comment, but it's impossible to tell here.)
+ tmpl = examplesOnlyTmpl
+ }
+ var buf bytes.Buffer
+ if err := tmpl.Execute(&buf, data); err != nil {
+ log.Fatalf("internal error expanding template for %s: %v", path, err)
+ }
+ if false { // debugging
+ fmt.Fprintln(os.Stderr, buf.String())
+ }
+
+ // Parse and type-check the testmain package.
+ f, err := parser.ParseFile(prog.Fset, path+".go", &buf, parser.Mode(0))
+ if err != nil {
+ log.Fatalf("internal error parsing %s: %v", path, err)
+ }
+ conf := types.Config{
+ DisableUnusedImportCheck: true,
+ Importer: importer{pkg},
+ }
+ files := []*ast.File{f}
+ info := &types.Info{
+ Types: make(map[ast.Expr]types.TypeAndValue),
+ Defs: make(map[*ast.Ident]types.Object),
+ Uses: make(map[*ast.Ident]types.Object),
+ Implicits: make(map[ast.Node]types.Object),
+ Scopes: make(map[ast.Node]*types.Scope),
+ Selections: make(map[*ast.SelectorExpr]*types.Selection),
+ }
+ testmainPkg, err := conf.Check(path, prog.Fset, files, info)
+ if err != nil {
+ log.Fatalf("internal error type-checking %s: %v", path, err)
+ }
+
+ // Create and build SSA code.
+ testmain := prog.CreatePackage(testmainPkg, files, info, false)
+ testmain.SetDebugMode(false)
+ testmain.Build()
+ testmain.Func("main").Synthetic = "test main function"
+ testmain.Func("init").Synthetic = "package initializer"
+ return testmain
+}
+
+// An implementation of types.Importer for an already loaded SSA program.
+type importer struct {
+ pkg *Package // package under test; may be non-importable
+}
+
+func (imp importer) Import(path string) (*types.Package, error) {
+ if p := imp.pkg.Prog.ImportedPackage(path); p != nil {
+ return p.Pkg, nil
+ }
+ if path == imp.pkg.Pkg.Path() {
+ return imp.pkg.Pkg, nil
+ }
+ return nil, fmt.Errorf("not found") // can't happen
+}
+
+var testmainTmpl = template.Must(template.New("testmain").Parse(`
+package main
+
+import "io"
+import "os"
+import "testing"
+import p {{printf "%q" .Pkg.Pkg.Path}}
+
+{{if .Go18}}
+type deps struct{}
+
+func (deps) ImportPath() string { return "" }
+func (deps) MatchString(pat, str string) (bool, error) { return true, nil }
+func (deps) StartCPUProfile(io.Writer) error { return nil }
+func (deps) StartTestLog(io.Writer) {}
+func (deps) StopCPUProfile() {}
+func (deps) StopTestLog() error { return nil }
+func (deps) WriteHeapProfile(io.Writer) error { return nil }
+func (deps) WriteProfileTo(string, io.Writer, int) error { return nil }
+
+var match deps
+{{else}}
+func match(_, _ string) (bool, error) { return true, nil }
+{{end}}
+
+func main() {
+ tests := []testing.InternalTest{
+{{range .Tests}}
+ { {{printf "%q" .Name}}, p.{{.Name}} },
+{{end}}
+ }
+ benchmarks := []testing.InternalBenchmark{
+{{range .Benchmarks}}
+ { {{printf "%q" .Name}}, p.{{.Name}} },
+{{end}}
+ }
+ examples := []testing.InternalExample{
+{{range .Examples}}
+ {Name: {{printf "%q" .Name}}, F: p.{{.Name}}},
+{{end}}
+ }
+ m := testing.MainStart(match, tests, benchmarks, examples)
+{{with .Main}}
+ p.{{.Name}}(m)
+{{else}}
+ os.Exit(m.Run())
+{{end}}
+}
+
+`))
+
+var examplesOnlyTmpl = template.Must(template.New("examples").Parse(`
+package main
+
+import p {{printf "%q" .Pkg.Pkg.Path}}
+
+func main() {
+{{range .Examples}}
+ p.{{.Name}}()
+{{end}}
+}
+`))
diff --git a/vendor/honnef.co/go/tools/ssa/util.go b/vendor/honnef.co/go/tools/ssa/util.go
new file mode 100644
index 0000000000000..ddb1184609694
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/util.go
@@ -0,0 +1,119 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines a number of miscellaneous utility functions.
+
+import (
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+ "io"
+ "os"
+
+ "golang.org/x/tools/go/ast/astutil"
+)
+
+//// AST utilities
+
+func unparen(e ast.Expr) ast.Expr { return astutil.Unparen(e) }
+
+// isBlankIdent returns true iff e is an Ident with name "_".
+// Blank identifiers have no associated types.Object, and thus no type.
+//
+func isBlankIdent(e ast.Expr) bool {
+ id, ok := e.(*ast.Ident)
+ return ok && id.Name == "_"
+}
+
+//// Type utilities. Some of these belong in go/types.
+
+// isPointer returns true for types whose underlying type is a pointer.
+func isPointer(typ types.Type) bool {
+ _, ok := typ.Underlying().(*types.Pointer)
+ return ok
+}
+
+func isInterface(T types.Type) bool { return types.IsInterface(T) }
+
+// deref returns a pointer's element type; otherwise it returns typ.
+func deref(typ types.Type) types.Type {
+ if p, ok := typ.Underlying().(*types.Pointer); ok {
+ return p.Elem()
+ }
+ return typ
+}
+
+// recvType returns the receiver type of method obj.
+func recvType(obj *types.Func) types.Type {
+ return obj.Type().(*types.Signature).Recv().Type()
+}
+
+// DefaultType returns the default "typed" type for an "untyped" type;
+// it returns the incoming type for all other types. The default type
+// for untyped nil is untyped nil.
+//
+// Exported to ssa/interp.
+//
+// TODO(adonovan): use go/types.DefaultType after 1.8.
+//
+func DefaultType(typ types.Type) types.Type {
+ if t, ok := typ.(*types.Basic); ok {
+ k := t.Kind()
+ switch k {
+ case types.UntypedBool:
+ k = types.Bool
+ case types.UntypedInt:
+ k = types.Int
+ case types.UntypedRune:
+ k = types.Rune
+ case types.UntypedFloat:
+ k = types.Float64
+ case types.UntypedComplex:
+ k = types.Complex128
+ case types.UntypedString:
+ k = types.String
+ }
+ typ = types.Typ[k]
+ }
+ return typ
+}
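+
+// For example, DefaultType(types.Typ[types.UntypedInt]) is
+// types.Typ[types.Int], while an already-typed type such as
+// types.Typ[types.String] is returned unchanged.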
+
+// logStack prints the formatted "start" message to stderr and
+// returns a closure that prints the corresponding "end" message.
+// Call using 'defer logStack(...)()' to show builder stack on panic.
+// Don't forget trailing parens!
+//
+func logStack(format string, args ...interface{}) func() {
+ msg := fmt.Sprintf(format, args...)
+ io.WriteString(os.Stderr, msg)
+ io.WriteString(os.Stderr, "\n")
+ return func() {
+ io.WriteString(os.Stderr, msg)
+ io.WriteString(os.Stderr, " end\n")
+ }
+}
+
+// newVar creates a 'var' for use in a types.Tuple.
+func newVar(name string, typ types.Type) *types.Var {
+ return types.NewParam(token.NoPos, nil, name, typ)
+}
+
+// anonVar creates an anonymous 'var' for use in a types.Tuple.
+func anonVar(typ types.Type) *types.Var {
+ return newVar("", typ)
+}
+
+var lenResults = types.NewTuple(anonVar(tInt))
+
+// makeLen returns the len builtin specialized to type func(T)int.
+func makeLen(T types.Type) *Builtin {
+ lenParams := types.NewTuple(anonVar(T))
+ return &Builtin{
+ name: "len",
+ sig: types.NewSignature(nil, lenParams, lenResults, false),
+ }
+}
diff --git a/vendor/honnef.co/go/tools/ssa/wrappers.go b/vendor/honnef.co/go/tools/ssa/wrappers.go
new file mode 100644
index 0000000000000..a4ae71d8cfcf0
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/wrappers.go
@@ -0,0 +1,290 @@
+// Copyright 2013 The Go Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package ssa
+
+// This file defines synthesis of Functions that delegate to declared
+// methods; they come in three kinds:
+//
+// (1) wrappers: methods that wrap declared methods, performing
+// implicit pointer indirections and embedded field selections.
+//
+// (2) thunks: funcs that wrap declared methods. Like wrappers,
+// thunks perform indirections and field selections. The thunk's
+// first parameter is used as the receiver for the method call.
+//
+// (3) bounds: funcs that wrap declared methods. The bound's sole
+// free variable, supplied by a closure, is used as the receiver
+// for the method call. No indirections or field selections are
+// performed since they can be done before the call.
+
+import (
+ "fmt"
+
+ "go/types"
+)
+
+// -- wrappers -----------------------------------------------------------
+
+// makeWrapper returns a synthetic method that delegates to the
+// declared method denoted by meth.Obj(), first performing any
+// necessary pointer indirections or field selections implied by meth.
+//
+// The resulting method's receiver type is meth.Recv().
+//
+// This function is versatile but quite subtle! Consider the
+// following axes of variation when making changes:
+// - optional receiver indirection
+// - optional implicit field selections
+// - meth.Obj() may denote a concrete or an interface method
+// - the result may be a thunk or a wrapper.
+//
+// EXCLUSIVE_LOCKS_REQUIRED(prog.methodsMu)
+//
+func makeWrapper(prog *Program, sel *types.Selection) *Function {
+ obj := sel.Obj().(*types.Func) // the declared function
+ sig := sel.Type().(*types.Signature) // type of this wrapper
+
+ var recv *types.Var // wrapper's receiver or thunk's params[0]
+ name := obj.Name()
+ var description string
+ var start int // first regular param
+ if sel.Kind() == types.MethodExpr {
+ name += "$thunk"
+ description = "thunk"
+ recv = sig.Params().At(0)
+ start = 1
+ } else {
+ description = "wrapper"
+ recv = sig.Recv()
+ }
+
+ description = fmt.Sprintf("%s for %s", description, sel.Obj())
+ if prog.mode&LogSource != 0 {
+ defer logStack("make %s to (%s)", description, recv.Type())()
+ }
+ fn := &Function{
+ name: name,
+ method: sel,
+ object: obj,
+ Signature: sig,
+ Synthetic: description,
+ Prog: prog,
+ pos: obj.Pos(),
+ }
+ fn.startBody()
+ fn.addSpilledParam(recv)
+ createParams(fn, start)
+
+ indices := sel.Index()
+
+ var v Value = fn.Locals[0] // spilled receiver
+ if isPointer(sel.Recv()) {
+ v = emitLoad(fn, v)
+
+ // For simple indirection wrappers, perform an informative nil-check:
+ // "value method (T).f called using nil *T pointer"
+ if len(indices) == 1 && !isPointer(recvType(obj)) {
+ var c Call
+ c.Call.Value = &Builtin{
+ name: "ssa:wrapnilchk",
+ sig: types.NewSignature(nil,
+ types.NewTuple(anonVar(sel.Recv()), anonVar(tString), anonVar(tString)),
+ types.NewTuple(anonVar(sel.Recv())), false),
+ }
+ c.Call.Args = []Value{
+ v,
+ stringConst(deref(sel.Recv()).String()),
+ stringConst(sel.Obj().Name()),
+ }
+ c.setType(v.Type())
+ v = fn.emit(&c)
+ }
+ }
+
+ // Invariant: v is a pointer, either
+ // value of *A receiver param, or
+ // address of A spilled receiver.
+
+ // We use pointer arithmetic (FieldAddr possibly followed by
+ // Load) in preference to value extraction (Field possibly
+ // preceded by Load).
+
+ v = emitImplicitSelections(fn, v, indices[:len(indices)-1])
+
+ // Invariant: v is a pointer, either
+ // value of implicit *C field, or
+ // address of implicit C field.
+
+ var c Call
+ if r := recvType(obj); !isInterface(r) { // concrete method
+ if !isPointer(r) {
+ v = emitLoad(fn, v)
+ }
+ c.Call.Value = prog.declaredFunc(obj)
+ c.Call.Args = append(c.Call.Args, v)
+ } else {
+ c.Call.Method = obj
+ c.Call.Value = emitLoad(fn, v)
+ }
+ for _, arg := range fn.Params[1:] {
+ c.Call.Args = append(c.Call.Args, arg)
+ }
+ emitTailCall(fn, &c)
+ fn.finishBody()
+ return fn
+}
+
+// createParams creates parameters for wrapper method fn based on its
+// Signature.Params, which do not include the receiver.
+// start is the index of the first regular parameter to use.
+//
+func createParams(fn *Function, start int) {
+ tparams := fn.Signature.Params()
+ for i, n := start, tparams.Len(); i < n; i++ {
+ fn.addParamObj(tparams.At(i))
+ }
+}
+
+// -- bounds -----------------------------------------------------------
+
+// makeBound returns a bound method wrapper (or "bound"), a synthetic
+// function that delegates to a concrete or interface method denoted
+// by obj. The resulting function has no receiver, but has one free
+// variable which will be used as the method's receiver in the
+// tail-call.
+//
+// Use MakeClosure with such a wrapper to construct a bound method
+// closure. e.g.:
+//
+// type T int or: type T interface { meth() }
+// func (t T) meth()
+// var t T
+// f := t.meth
+// f() // calls t.meth()
+//
+// f is a closure of a synthetic wrapper defined as if by:
+//
+// f := func() { return t.meth() }
+//
+// Unlike makeWrapper, makeBound need perform no indirection or field
+// selections because that can be done before the closure is
+// constructed.
+//
+// EXCLUSIVE_LOCKS_ACQUIRED(meth.Prog.methodsMu)
+//
+func makeBound(prog *Program, obj *types.Func) *Function {
+ prog.methodsMu.Lock()
+ defer prog.methodsMu.Unlock()
+ fn, ok := prog.bounds[obj]
+ if !ok {
+ description := fmt.Sprintf("bound method wrapper for %s", obj)
+ if prog.mode&LogSource != 0 {
+ defer logStack("%s", description)()
+ }
+ fn = &Function{
+ name: obj.Name() + "$bound",
+ object: obj,
+ Signature: changeRecv(obj.Type().(*types.Signature), nil), // drop receiver
+ Synthetic: description,
+ Prog: prog,
+ pos: obj.Pos(),
+ }
+
+ fv := &FreeVar{name: "recv", typ: recvType(obj), parent: fn}
+ fn.FreeVars = []*FreeVar{fv}
+ fn.startBody()
+ createParams(fn, 0)
+ var c Call
+
+ if !isInterface(recvType(obj)) { // concrete
+ c.Call.Value = prog.declaredFunc(obj)
+ c.Call.Args = []Value{fv}
+ } else {
+ c.Call.Value = fv
+ c.Call.Method = obj
+ }
+ for _, arg := range fn.Params {
+ c.Call.Args = append(c.Call.Args, arg)
+ }
+ emitTailCall(fn, &c)
+ fn.finishBody()
+
+ prog.bounds[obj] = fn
+ }
+ return fn
+}
+
+// -- thunks -----------------------------------------------------------
+
+// makeThunk returns a thunk, a synthetic function that delegates to a
+// concrete or interface method denoted by sel.Obj(). The resulting
+// function has no receiver, but has an additional (first) regular
+// parameter.
+//
+// Precondition: sel.Kind() == types.MethodExpr.
+//
+// type T int or: type T interface { meth() }
+// func (t T) meth()
+// f := T.meth
+// var t T
+// f(t) // calls t.meth()
+//
+// f is a synthetic wrapper defined as if by:
+//
+// f := func(t T) { return t.meth() }
+//
+// TODO(adonovan): opt: currently the stub is created even when used
+// directly in a function call: C.f(i, 0). This is less efficient
+// than inlining the stub.
+//
+// EXCLUSIVE_LOCKS_ACQUIRED(meth.Prog.methodsMu)
+//
+func makeThunk(prog *Program, sel *types.Selection) *Function {
+ if sel.Kind() != types.MethodExpr {
+ panic(sel)
+ }
+
+ key := selectionKey{
+ kind: sel.Kind(),
+ recv: sel.Recv(),
+ obj: sel.Obj(),
+ index: fmt.Sprint(sel.Index()),
+ indirect: sel.Indirect(),
+ }
+
+ prog.methodsMu.Lock()
+ defer prog.methodsMu.Unlock()
+
+ // Canonicalize key.recv to avoid constructing duplicate thunks.
+ canonRecv, ok := prog.canon.At(key.recv).(types.Type)
+ if !ok {
+ canonRecv = key.recv
+ prog.canon.Set(key.recv, canonRecv)
+ }
+ key.recv = canonRecv
+
+ fn, ok := prog.thunks[key]
+ if !ok {
+ fn = makeWrapper(prog, sel)
+ if fn.Signature.Recv() != nil {
+ panic(fn) // unexpected receiver
+ }
+ prog.thunks[key] = fn
+ }
+ return fn
+}
+
+func changeRecv(s *types.Signature, recv *types.Var) *types.Signature {
+ return types.NewSignature(recv, s.Params(), s.Results(), s.Variadic())
+}
+
+// selectionKey is like types.Selection but a usable map key.
+type selectionKey struct {
+ kind types.SelectionKind
+ recv types.Type // canonicalized via Program.canon
+ obj types.Object
+ index string
+ indirect bool
+}
diff --git a/vendor/honnef.co/go/tools/ssa/write.go b/vendor/honnef.co/go/tools/ssa/write.go
new file mode 100644
index 0000000000000..89761a18a55e3
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssa/write.go
@@ -0,0 +1,5 @@
+package ssa
+
+func NewJump(parent *BasicBlock) *Jump {
+ return &Jump{anInstruction{parent}}
+}
diff --git a/vendor/honnef.co/go/tools/ssautil/ssautil.go b/vendor/honnef.co/go/tools/ssautil/ssautil.go
new file mode 100644
index 0000000000000..72c3c919d62c2
--- /dev/null
+++ b/vendor/honnef.co/go/tools/ssautil/ssautil.go
@@ -0,0 +1,58 @@
+package ssautil
+
+import (
+ "honnef.co/go/tools/ssa"
+)
+
+func Reachable(from, to *ssa.BasicBlock) bool {
+ if from == to {
+ return true
+ }
+ if from.Dominates(to) {
+ return true
+ }
+
+ found := false
+ Walk(from, func(b *ssa.BasicBlock) bool {
+ if b == to {
+ found = true
+ return false
+ }
+ return true
+ })
+ return found
+}
+
+func Walk(b *ssa.BasicBlock, fn func(*ssa.BasicBlock) bool) {
+ seen := map[*ssa.BasicBlock]bool{}
+ wl := []*ssa.BasicBlock{b}
+ for len(wl) > 0 {
+ b := wl[len(wl)-1]
+ wl = wl[:len(wl)-1]
+ if seen[b] {
+ continue
+ }
+ seen[b] = true
+ if !fn(b) {
+ continue
+ }
+ wl = append(wl, b.Succs...)
+ }
+}
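+
+// An illustrative sketch (not part of upstream): counting the basic
+// blocks reachable from a function's entry block, where fn is a
+// hypothetical *ssa.Function:
+//
+//	n := 0
+//	Walk(fn.Blocks[0], func(b *ssa.BasicBlock) bool {
+//		n++
+//		return true
+//	})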
+
+func Vararg(x *ssa.Slice) ([]ssa.Value, bool) {
+ var out []ssa.Value
+ slice, ok := x.X.(*ssa.Alloc)
+ if !ok || slice.Comment != "varargs" {
+ return nil, false
+ }
+ for _, ref := range *slice.Referrers() {
+ idx, ok := ref.(*ssa.IndexAddr)
+ if !ok {
+ continue
+ }
+ v := (*idx.Referrers())[0].(*ssa.Store).Val
+ out = append(out, v)
+ }
+ return out, true
+}
diff --git a/vendor/honnef.co/go/tools/staticcheck/CONTRIBUTING.md b/vendor/honnef.co/go/tools/staticcheck/CONTRIBUTING.md
new file mode 100644
index 0000000000000..b12c7afc748ed
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/CONTRIBUTING.md
@@ -0,0 +1,15 @@
+# Contributing to staticcheck
+
+## Before filing an issue:
+
+### Are you having trouble building staticcheck?
+
+Check that you have the latest version of its dependencies. Run
+```
+go get -u honnef.co/go/tools/staticcheck
+```
+If you still have problems, consider searching for existing issues before filing a new issue.
+
+## Before sending a pull request:
+
+Have you understood the purpose of staticcheck? Make sure to carefully read `README`.
diff --git a/vendor/honnef.co/go/tools/staticcheck/analysis.go b/vendor/honnef.co/go/tools/staticcheck/analysis.go
new file mode 100644
index 0000000000000..442aebe5a18d3
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/analysis.go
@@ -0,0 +1,525 @@
+package staticcheck
+
+import (
+ "flag"
+
+ "honnef.co/go/tools/facts"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/lint/lintutil"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/analysis/passes/inspect"
+)
+
+func newFlagSet() flag.FlagSet {
+ fs := flag.NewFlagSet("", flag.PanicOnError)
+ fs.Var(lintutil.NewVersionFlag(), "go", "Target Go version")
+ return *fs
+}
+
+var Analyzers = map[string]*analysis.Analyzer{
+ "SA1000": {
+ Name: "SA1000",
+ Run: callChecker(checkRegexpRules),
+ Doc: Docs["SA1000"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1001": {
+ Name: "SA1001",
+ Run: CheckTemplate,
+ Doc: Docs["SA1001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1002": {
+ Name: "SA1002",
+ Run: callChecker(checkTimeParseRules),
+ Doc: Docs["SA1002"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1003": {
+ Name: "SA1003",
+ Run: callChecker(checkEncodingBinaryRules),
+ Doc: Docs["SA1003"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1004": {
+ Name: "SA1004",
+ Run: CheckTimeSleepConstant,
+ Doc: Docs["SA1004"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1005": {
+ Name: "SA1005",
+ Run: CheckExec,
+ Doc: Docs["SA1005"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1006": {
+ Name: "SA1006",
+ Run: CheckUnsafePrintf,
+ Doc: Docs["SA1006"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1007": {
+ Name: "SA1007",
+ Run: callChecker(checkURLsRules),
+ Doc: Docs["SA1007"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1008": {
+ Name: "SA1008",
+ Run: CheckCanonicalHeaderKey,
+ Doc: Docs["SA1008"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1010": {
+ Name: "SA1010",
+ Run: callChecker(checkRegexpFindAllRules),
+ Doc: Docs["SA1010"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1011": {
+ Name: "SA1011",
+ Run: callChecker(checkUTF8CutsetRules),
+ Doc: Docs["SA1011"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1012": {
+ Name: "SA1012",
+ Run: CheckNilContext,
+ Doc: Docs["SA1012"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1013": {
+ Name: "SA1013",
+ Run: CheckSeeker,
+ Doc: Docs["SA1013"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1014": {
+ Name: "SA1014",
+ Run: callChecker(checkUnmarshalPointerRules),
+ Doc: Docs["SA1014"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1015": {
+ Name: "SA1015",
+ Run: CheckLeakyTimeTick,
+ Doc: Docs["SA1015"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1016": {
+ Name: "SA1016",
+ Run: CheckUntrappableSignal,
+ Doc: Docs["SA1016"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1017": {
+ Name: "SA1017",
+ Run: callChecker(checkUnbufferedSignalChanRules),
+ Doc: Docs["SA1017"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1018": {
+ Name: "SA1018",
+ Run: callChecker(checkStringsReplaceZeroRules),
+ Doc: Docs["SA1018"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1019": {
+ Name: "SA1019",
+ Run: CheckDeprecated,
+ Doc: Docs["SA1019"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Deprecated},
+ Flags: newFlagSet(),
+ },
+ "SA1020": {
+ Name: "SA1020",
+ Run: callChecker(checkListenAddressRules),
+ Doc: Docs["SA1020"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1021": {
+ Name: "SA1021",
+ Run: callChecker(checkBytesEqualIPRules),
+ Doc: Docs["SA1021"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1023": {
+ Name: "SA1023",
+ Run: CheckWriterBufferModified,
+ Doc: Docs["SA1023"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1024": {
+ Name: "SA1024",
+ Run: callChecker(checkUniqueCutsetRules),
+ Doc: Docs["SA1024"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1025": {
+ Name: "SA1025",
+ Run: CheckTimerResetReturnValue,
+ Doc: Docs["SA1025"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1026": {
+ Name: "SA1026",
+ Run: callChecker(checkUnsupportedMarshal),
+ Doc: Docs["SA1026"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA1027": {
+ Name: "SA1027",
+ Run: callChecker(checkAtomicAlignment),
+ Doc: Docs["SA1027"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+
+ "SA2000": {
+ Name: "SA2000",
+ Run: CheckWaitgroupAdd,
+ Doc: Docs["SA2000"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA2001": {
+ Name: "SA2001",
+ Run: CheckEmptyCriticalSection,
+ Doc: Docs["SA2001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA2002": {
+ Name: "SA2002",
+ Run: CheckConcurrentTesting,
+ Doc: Docs["SA2002"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA2003": {
+ Name: "SA2003",
+ Run: CheckDeferLock,
+ Doc: Docs["SA2003"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+
+ "SA3000": {
+ Name: "SA3000",
+ Run: CheckTestMainExit,
+ Doc: Docs["SA3000"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA3001": {
+ Name: "SA3001",
+ Run: CheckBenchmarkN,
+ Doc: Docs["SA3001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+
+ "SA4000": {
+ Name: "SA4000",
+ Run: CheckLhsRhsIdentical,
+ Doc: Docs["SA4000"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.TokenFile, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "SA4001": {
+ Name: "SA4001",
+ Run: CheckIneffectiveCopy,
+ Doc: Docs["SA4001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4002": {
+ Name: "SA4002",
+ Run: CheckDiffSizeComparison,
+ Doc: Docs["SA4002"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4003": {
+ Name: "SA4003",
+ Run: CheckExtremeComparison,
+ Doc: Docs["SA4003"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4004": {
+ Name: "SA4004",
+ Run: CheckIneffectiveLoop,
+ Doc: Docs["SA4004"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4006": {
+ Name: "SA4006",
+ Run: CheckUnreadVariableValues,
+ Doc: Docs["SA4006"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "SA4008": {
+ Name: "SA4008",
+ Run: CheckLoopCondition,
+ Doc: Docs["SA4008"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4009": {
+ Name: "SA4009",
+ Run: CheckArgOverwritten,
+ Doc: Docs["SA4009"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4010": {
+ Name: "SA4010",
+ Run: CheckIneffectiveAppend,
+ Doc: Docs["SA4010"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4011": {
+ Name: "SA4011",
+ Run: CheckScopedBreak,
+ Doc: Docs["SA4011"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4012": {
+ Name: "SA4012",
+ Run: CheckNaNComparison,
+ Doc: Docs["SA4012"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4013": {
+ Name: "SA4013",
+ Run: CheckDoubleNegation,
+ Doc: Docs["SA4013"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4014": {
+ Name: "SA4014",
+ Run: CheckRepeatedIfElse,
+ Doc: Docs["SA4014"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4015": {
+ Name: "SA4015",
+ Run: callChecker(checkMathIntRules),
+ Doc: Docs["SA4015"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4016": {
+ Name: "SA4016",
+ Run: CheckSillyBitwiseOps,
+ Doc: Docs["SA4016"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, facts.TokenFile},
+ Flags: newFlagSet(),
+ },
+ "SA4017": {
+ Name: "SA4017",
+ Run: CheckPureFunctions,
+ Doc: Docs["SA4017"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, facts.Purity},
+ Flags: newFlagSet(),
+ },
+ "SA4018": {
+ Name: "SA4018",
+ Run: CheckSelfAssignment,
+ Doc: Docs["SA4018"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated, facts.TokenFile},
+ Flags: newFlagSet(),
+ },
+ "SA4019": {
+ Name: "SA4019",
+ Run: CheckDuplicateBuildConstraints,
+ Doc: Docs["SA4019"].String(),
+ Requires: []*analysis.Analyzer{facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "SA4020": {
+ Name: "SA4020",
+ Run: CheckUnreachableTypeCases,
+ Doc: Docs["SA4020"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA4021": {
+ Name: "SA4021",
+ Run: CheckSingleArgAppend,
+ Doc: Docs["SA4021"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated, facts.TokenFile},
+ Flags: newFlagSet(),
+ },
+
+ "SA5000": {
+ Name: "SA5000",
+ Run: CheckNilMaps,
+ Doc: Docs["SA5000"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5001": {
+ Name: "SA5001",
+ Run: CheckEarlyDefer,
+ Doc: Docs["SA5001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5002": {
+ Name: "SA5002",
+ Run: CheckInfiniteEmptyLoop,
+ Doc: Docs["SA5002"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5003": {
+ Name: "SA5003",
+ Run: CheckDeferInInfiniteLoop,
+ Doc: Docs["SA5003"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5004": {
+ Name: "SA5004",
+ Run: CheckLoopEmptyDefault,
+ Doc: Docs["SA5004"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5005": {
+ Name: "SA5005",
+ Run: CheckCyclicFinalizer,
+ Doc: Docs["SA5005"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5007": {
+ Name: "SA5007",
+ Run: CheckInfiniteRecursion,
+ Doc: Docs["SA5007"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5008": {
+ Name: "SA5008",
+ Run: CheckStructTags,
+ Doc: Docs["SA5008"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA5009": {
+ Name: "SA5009",
+ Run: callChecker(checkPrintfRules),
+ Doc: Docs["SA5009"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+
+ "SA6000": {
+ Name: "SA6000",
+ Run: callChecker(checkRegexpMatchLoopRules),
+ Doc: Docs["SA6000"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA6001": {
+ Name: "SA6001",
+ Run: CheckMapBytesKey,
+ Doc: Docs["SA6001"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA6002": {
+ Name: "SA6002",
+ Run: callChecker(checkSyncPoolValueRules),
+ Doc: Docs["SA6002"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer},
+ Flags: newFlagSet(),
+ },
+ "SA6003": {
+ Name: "SA6003",
+ Run: CheckRangeStringRunes,
+ Doc: Docs["SA6003"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA6005": {
+ Name: "SA6005",
+ Run: CheckToLowerToUpperComparison,
+ Doc: Docs["SA6005"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+
+ "SA9001": {
+ Name: "SA9001",
+ Run: CheckDubiousDeferInChannelRangeLoop,
+ Doc: Docs["SA9001"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA9002": {
+ Name: "SA9002",
+ Run: CheckNonOctalFileMode,
+ Doc: Docs["SA9002"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "SA9003": {
+ Name: "SA9003",
+ Run: CheckEmptyBranch,
+ Doc: Docs["SA9003"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, facts.TokenFile, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "SA9004": {
+ Name: "SA9004",
+ Run: CheckMissingEnumTypesInDeclaration,
+ Doc: Docs["SA9004"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+ // Filtering generated code because it may include empty structs generated from data models.
+ "SA9005": {
+ Name: "SA9005",
+ Run: callChecker(checkNoopMarshal),
+ Doc: Docs["SA9005"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, valueRangesAnalyzer, facts.Generated, facts.TokenFile},
+ Flags: newFlagSet(),
+ },
+}
diff --git a/vendor/honnef.co/go/tools/staticcheck/buildtag.go b/vendor/honnef.co/go/tools/staticcheck/buildtag.go
new file mode 100644
index 0000000000000..888d3e9dc056e
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/buildtag.go
@@ -0,0 +1,21 @@
+package staticcheck
+
+import (
+ "go/ast"
+ "strings"
+
+ . "honnef.co/go/tools/lint/lintdsl"
+)
+
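+// buildTags returns the +build constraints found in the file's comment
+// preamble, one slice of tags per +build line.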
+func buildTags(f *ast.File) [][]string {
+ var out [][]string
+ for _, line := range strings.Split(Preamble(f), "\n") {
+ if !strings.HasPrefix(line, "+build ") {
+ continue
+ }
+ line = strings.TrimSpace(strings.TrimPrefix(line, "+build "))
+ fields := strings.Fields(line)
+ out = append(out, fields)
+ }
+ return out
+}
diff --git a/vendor/honnef.co/go/tools/staticcheck/doc.go b/vendor/honnef.co/go/tools/staticcheck/doc.go
new file mode 100644
index 0000000000000..4a87d4a24cea4
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/doc.go
@@ -0,0 +1,764 @@
+package staticcheck
+
+import "honnef.co/go/tools/lint"
+
+var Docs = map[string]*lint.Documentation{
+ "SA1000": &lint.Documentation{
+ Title: `Invalid regular expression`,
+ Since: "2017.1",
+ },
+
+ "SA1001": &lint.Documentation{
+ Title: `Invalid template`,
+ Since: "2017.1",
+ },
+
+ "SA1002": &lint.Documentation{
+ Title: `Invalid format in time.Parse`,
+ Since: "2017.1",
+ },
+
+ "SA1003": &lint.Documentation{
+ Title: `Unsupported argument to functions in encoding/binary`,
+ Text: `The encoding/binary package can only serialize types with known sizes.
+This precludes the use of the int and uint types, as their sizes
+differ on different architectures. Furthermore, it doesn't support
+serializing maps, channels, strings, or functions.
+
+Before Go 1.8, bool wasn't supported, either.`,
+ Since: "2017.1",
+ },
+
+ "SA1004": &lint.Documentation{
+ Title: `Suspiciously small untyped constant in time.Sleep`,
+ Text: `The time.Sleep function takes a time.Duration as its only argument.
+Durations are expressed in nanoseconds. Thus, calling time.Sleep(1)
+will sleep for 1 nanosecond. This is a common source of bugs, as sleep
+functions in other languages often accept seconds or milliseconds.
+
+The time package provides constants such as time.Second to express
+large durations. These can be combined with arithmetic to express
+arbitrary durations, for example '5 * time.Second' for 5 seconds.
+
+If you truly meant to sleep for a tiny amount of time, use
+'n * time.Nanosecond' to signal to staticcheck that you did mean to sleep
+for some amount of nanoseconds.`,
+ Since: "2017.1",
+ },
+
+ "SA1005": &lint.Documentation{
+ Title: `Invalid first argument to exec.Command`,
+ Text: `os/exec runs programs directly (using variants of the fork and exec
+system calls on Unix systems). This shouldn't be confused with running
+a command in a shell. The shell will allow for features such as input
+redirection, pipes, and general scripting. The shell is also
+responsible for splitting the user's input into a program name and its
+arguments. For example, the equivalent to
+
+ ls / /tmp
+
+would be
+
+ exec.Command("ls", "/", "/tmp")
+
+If you want to run a command in a shell, consider using something like
+the following – but be aware that not all systems, particularly
+Windows, will have a /bin/sh program:
+
+ exec.Command("/bin/sh", "-c", "ls | grep Awesome")`,
+ Since: "2017.1",
+ },
+
+ "SA1006": &lint.Documentation{
+ Title: `Printf with dynamic first argument and no further arguments`,
+ Text: `Using fmt.Printf with a dynamic first argument can lead to unexpected
+output. The first argument is a format string, where certain character
+combinations have special meaning. If, for example, a user were to
+enter a string such as
+
+ Interest rate: 5%
+
+and you printed it with
+
+ fmt.Printf(s)
+
+it would lead to the following output:
+
+ Interest rate: 5%!(NOVERB).
+
+Similarly, forming the first parameter via string concatenation with
+user input should be avoided for the same reason. When printing user
+input, either use a variant of fmt.Print, or use the %s Printf verb
+and pass the string as an argument.`,
+ Since: "2017.1",
+ },
+
+ "SA1007": &lint.Documentation{
+ Title: `Invalid URL in net/url.Parse`,
+ Since: "2017.1",
+ },
+
+ "SA1008": &lint.Documentation{
+ Title: `Non-canonical key in http.Header map`,
+ Text: `Keys in http.Header maps are canonical, meaning they follow a specific
+combination of uppercase and lowercase letters. Methods such as
+http.Header.Add and http.Header.Del convert inputs into this canonical
+form before manipulating the map.
+
+When manipulating http.Header maps directly, as opposed to using the
+provided methods, care should be taken to stick to canonical form in
+order to avoid inconsistencies. The following piece of code
+demonstrates one such inconsistency:
+
+ h := http.Header{}
+ h["etag"] = []string{"1234"}
+ h.Add("etag", "5678")
+ fmt.Println(h)
+
+ // Output:
+ // map[Etag:[5678] etag:[1234]]
+
+The easiest way of obtaining the canonical form of a key is to use
+http.CanonicalHeaderKey.`,
+ Since: "2017.1",
+ },
+
+ "SA1010": &lint.Documentation{
+ Title: `(*regexp.Regexp).FindAll called with n == 0, which will always return zero results`,
+ Text: `If n >= 0, the function returns at most n matches/submatches. To
+return all results, specify a negative number, as in this sketch (rx
+being any compiled *regexp.Regexp and s any input string):
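+
+    matches := rx.FindAllString(s, -1) // n < 0 returns every match
+`,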
+ Since: "2017.1",
+ },
+
+ "SA1011": &lint.Documentation{
+ Title: `Various methods in the strings package expect valid UTF-8, but invalid input is provided`,
+ Since: "2017.1",
+ },
+
+ "SA1012": &lint.Documentation{
+ Title: `A nil context.Context is being passed to a function, consider using context.TODO instead`,
+ Since: "2017.1",
+ },
+
+ "SA1013": &lint.Documentation{
+ Title: `io.Seeker.Seek is being called with the whence constant as the first argument, but it should be the second`,
+ Since: "2017.1",
+ },
+
+ "SA1014": &lint.Documentation{
+ Title: `Non-pointer value passed to Unmarshal or Decode`,
+ Since: "2017.1",
+ },
+
+ "SA1015": &lint.Documentation{
+ Title: `Using time.Tick in a way that will leak. Consider using time.NewTicker, and only use time.Tick in tests, commands and endless functions`,
+ Since: "2017.1",
+ },
+
+ "SA1016": &lint.Documentation{
+ Title: `Trapping a signal that cannot be trapped`,
+		Text: `Not all signals can be intercepted by a process. Specifically, on
+UNIX-like systems, the syscall.SIGKILL and syscall.SIGSTOP signals are
+never passed to the process, but instead handled directly by the
+kernel. It is therefore pointless to try to handle these signals.`,
+ Since: "2017.1",
+ },
+
+ "SA1017": &lint.Documentation{
+ Title: `Channels used with os/signal.Notify should be buffered`,
+ Text: `The os/signal package uses non-blocking channel sends when delivering
+signals. If the receiving end of the channel isn't ready and the
+channel is either unbuffered or full, the signal will be dropped. To
+avoid missing signals, the channel should be buffered and of the
+appropriate size. For a channel used for notification of just one
+signal value, a buffer of size 1 is sufficient, as in this sketch:
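+
+    c := make(chan os.Signal, 1)
+    signal.Notify(c, os.Interrupt)
+`,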
+ Since: "2017.1",
+ },
+
+ "SA1018": &lint.Documentation{
+ Title: `strings.Replace called with n == 0, which does nothing`,
+ Text: `With n == 0, zero instances will be replaced. To replace all
+instances, use a negative number, or use strings.ReplaceAll. For
+example, the following two calls are equivalent:
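+
+    strings.Replace(s, "a", "b", -1)
+    strings.ReplaceAll(s, "a", "b")
+`,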
+ Since: "2017.1",
+ },
+
+ "SA1019": &lint.Documentation{
+ Title: `Using a deprecated function, variable, constant or field`,
+ Since: "2017.1",
+ },
+
+ "SA1020": &lint.Documentation{
+ Title: `Using an invalid host:port pair with a net.Listen-related function`,
+ Since: "2017.1",
+ },
+
+ "SA1021": &lint.Documentation{
+ Title: `Using bytes.Equal to compare two net.IP`,
+ Text: `A net.IP stores an IPv4 or IPv6 address as a slice of bytes. The
+length of the slice for an IPv4 address, however, can be either 4 or
+16 bytes long, using different ways of representing IPv4 addresses. In
+order to correctly compare two net.IPs, the net.IP.Equal method should
+be used, as it takes both representations into account. With ip1 and
+ip2 being any two net.IPs:
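+
+    ip1.Equal(ip2) // instead of bytes.Equal(ip1, ip2)
+`,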
+ Since: "2017.1",
+ },
+
+ "SA1023": &lint.Documentation{
+ Title: `Modifying the buffer in an io.Writer implementation`,
+ Text: `Write must not modify the slice data, even temporarily.`,
+ Since: "2017.1",
+ },
+
+ "SA1024": &lint.Documentation{
+ Title: `A string cutset contains duplicate characters`,
+ Text: `The strings.TrimLeft and strings.TrimRight functions take cutsets, not
+prefixes. A cutset is treated as a set of characters to remove from a
+string. For example,
+
+    strings.TrimLeft("42133word", "1234")
+
+will result in the string "word" – any characters that are 1, 2, 3 or
+4 are cut from the left of the string.
+
+In order to remove one string from another, use strings.TrimPrefix instead.`,
+ Since: "2017.1",
+ },
+
+ "SA1025": &lint.Documentation{
+ Title: `It is not possible to use (*time.Timer).Reset's return value correctly`,
+ Since: "2019.1",
+ },
+
+ "SA1026": &lint.Documentation{
+ Title: `Cannot marshal channels or functions`,
+ Since: "2019.2",
+ },
+
+ "SA1027": &lint.Documentation{
+ Title: `Atomic access to 64-bit variable must be 64-bit aligned`,
+ Text: `On ARM, x86-32, and 32-bit MIPS, it is the caller's responsibility to
+arrange for 64-bit alignment of 64-bit words accessed atomically. The
+first word in a variable or in an allocated struct, array, or slice
+can be relied upon to be 64-bit aligned.
+
+You can use the structlayout tool to inspect the alignment of fields
+in a struct. A sketch of a safe layout, placing the 64-bit word first:
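+
+    type S struct {
+        n int64 // first word, guaranteed 64-bit aligned
+        b bool
+    }
+`,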
+ Since: "2019.2",
+ },
+
+ "SA2000": &lint.Documentation{
+ Title: `sync.WaitGroup.Add called inside the goroutine, leading to a race condition`,
+ Since: "2017.1",
+ },
+
+ "SA2001": &lint.Documentation{
+ Title: `Empty critical section, did you mean to defer the unlock?`,
+ Text: `Empty critical sections of the kind
+
+ mu.Lock()
+ mu.Unlock()
+
+are very often a typo, and the following was intended instead:
+
+ mu.Lock()
+ defer mu.Unlock()
+
+Do note that sometimes empty critical sections can be useful, as a
+form of signaling to wait on another goroutine. Many times, there are
+simpler ways of achieving the same effect. When that isn't the case,
+the code should be amply commented to avoid confusion. Combining such
+comments with a //lint:ignore directive can be used to suppress this
+rare false positive.`,
+ Since: "2017.1",
+ },
+
+ "SA2002": &lint.Documentation{
+ Title: `Called testing.T.FailNow or SkipNow in a goroutine, which isn't allowed`,
+ Since: "2017.1",
+ },
+
+ "SA2003": &lint.Documentation{
+ Title: `Deferred Lock right after locking, likely meant to defer Unlock instead`,
+ Since: "2017.1",
+ },
+
+ "SA3000": &lint.Documentation{
+ Title: `TestMain doesn't call os.Exit, hiding test failures`,
+ Text: `Test executables (and in turn 'go test') exit with a non-zero status
+code if any tests failed. When specifying your own TestMain function,
+it is your responsibility to arrange for this, by calling os.Exit with
+the correct code. The correct code is returned by (*testing.M).Run, so
+the usual way of implementing TestMain is to end it with
+os.Exit(m.Run()), for example:
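+
+    func TestMain(m *testing.M) {
+        // setup and teardown around m.Run as needed
+        os.Exit(m.Run())
+    }
+`,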
+ Since: "2017.1",
+ },
+
+ "SA3001": &lint.Documentation{
+ Title: `Assigning to b.N in benchmarks distorts the results`,
+ Text: `The testing package dynamically sets b.N to improve the reliability of
+benchmarks and uses it in computations to determine the duration of a
+single operation. Benchmark code must not alter b.N as this would
+falsify results. Benchmarks should only read b.N, typically as the
+loop bound:
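+
+    func BenchmarkFoo(b *testing.B) {
+        for i := 0; i < b.N; i++ {
+            // code under test
+        }
+    }
+`,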
+ Since: "2017.1",
+ },
+
+ "SA4000": &lint.Documentation{
+ Title: `Boolean expression has identical expressions on both sides`,
+ Since: "2017.1",
+ },
+
+ "SA4001": &lint.Documentation{
+ Title: `&*x gets simplified to x, it does not copy x`,
+ Since: "2017.1",
+ },
+
+ "SA4002": &lint.Documentation{
+ Title: `Comparing strings with known different sizes has predictable results`,
+ Since: "2017.1",
+ },
+
+ "SA4003": &lint.Documentation{
+ Title: `Comparing unsigned values against negative values is pointless`,
+ Since: "2017.1",
+ },
+
+ "SA4004": &lint.Documentation{
+ Title: `The loop exits unconditionally after one iteration`,
+ Since: "2017.1",
+ },
+
+ "SA4005": &lint.Documentation{
+ Title: `Field assignment that will never be observed. Did you mean to use a pointer receiver?`,
+ Since: "2017.1",
+ },
+
+ "SA4006": &lint.Documentation{
+ Title: `A value assigned to a variable is never read before being overwritten. Forgotten error check or dead code?`,
+ Since: "2017.1",
+ },
+
+ "SA4008": &lint.Documentation{
+ Title: `The variable in the loop condition never changes, are you incrementing the wrong variable?`,
+ Since: "2017.1",
+ },
+
+ "SA4009": &lint.Documentation{
+ Title: `A function argument is overwritten before its first use`,
+ Since: "2017.1",
+ },
+
+ "SA4010": &lint.Documentation{
+ Title: `The result of append will never be observed anywhere`,
+ Since: "2017.1",
+ },
+
+ "SA4011": &lint.Documentation{
+ Title: `Break statement with no effect. Did you mean to break out of an outer loop?`,
+ Since: "2017.1",
+ },
+
+ "SA4012": &lint.Documentation{
+ Title: `Comparing a value against NaN even though no value is equal to NaN`,
+ Since: "2017.1",
+ },
+
+ "SA4013": &lint.Documentation{
+ Title: `Negating a boolean twice (!!b) is the same as writing b. This is either redundant, or a typo.`,
+ Since: "2017.1",
+ },
+
+ "SA4014": &lint.Documentation{
+ Title: `An if/else if chain has repeated conditions and no side-effects; if the condition didn't match the first time, it won't match the second time, either`,
+ Since: "2017.1",
+ },
+
+ "SA4015": &lint.Documentation{
+ Title: `Calling functions like math.Ceil on floats converted from integers doesn't do anything useful`,
+ Since: "2017.1",
+ },
+
+ "SA4016": &lint.Documentation{
+ Title: `Certain bitwise operations, such as x ^ 0, do not do anything useful`,
+ Since: "2017.1",
+ },
+
+ "SA4017": &lint.Documentation{
+ Title: `A pure function's return value is discarded, making the call pointless`,
+ Since: "2017.1",
+ },
+
+ "SA4018": &lint.Documentation{
+ Title: `Self-assignment of variables`,
+ Since: "2017.1",
+ },
+
+ "SA4019": &lint.Documentation{
+ Title: `Multiple, identical build constraints in the same file`,
+ Since: "2017.1",
+ },
+
+ "SA4020": &lint.Documentation{
+ Title: `Unreachable case clause in a type switch`,
+ Text: `In a type switch like the following
+
+ type T struct{}
+ func (T) Read(b []byte) (int, error) { return 0, nil }
+
+ var v interface{} = T{}
+
+ switch v.(type) {
+ case io.Reader:
+ // ...
+ case T:
+ // unreachable
+ }
+
+the second case clause can never be reached because T implements
+io.Reader and case clauses are evaluated in source order.
+
+Another example:
+
+ type T struct{}
+ func (T) Read(b []byte) (int, error) { return 0, nil }
+ func (T) Close() error { return nil }
+
+ var v interface{} = T{}
+
+ switch v.(type) {
+ case io.Reader:
+ // ...
+ case io.ReadCloser:
+ // unreachable
+ }
+
+Even though T has a Close method and thus implements io.ReadCloser,
+io.Reader will always match first. The method set of io.Reader is a
+subset of io.ReadCloser. Thus it is impossible to match the second
+case without matching the first case.
+
+
+Structurally equivalent interfaces
+
+A special case of the previous example is structurally identical
+interfaces. Given these declarations
+
+ type T error
+ type V error
+
+ func doSomething() error {
+ err, ok := doAnotherThing()
+ if ok {
+ return T(err)
+ }
+
+        return V(err)
+ }
+
+the following type switch will have an unreachable case clause:
+
+ switch doSomething().(type) {
+ case T:
+ // ...
+ case V:
+ // unreachable
+ }
+
+T will always match before V because they are structurally equivalent
+and therefore doSomething()'s return value implements both.`,
+ Since: "2019.2",
+ },
+
+ "SA4021": &lint.Documentation{
+ Title: `x = append(y) is equivalent to x = y`,
+ Since: "2019.2",
+ },
+
+ "SA5000": &lint.Documentation{
+ Title: `Assignment to nil map`,
+ Since: "2017.1",
+ },
+
+ "SA5001": &lint.Documentation{
+		Title: `Deferring Close before checking for a possible error`,
+ Since: "2017.1",
+ },
+
+ "SA5002": &lint.Documentation{
+ Title: `The empty for loop (for {}) spins and can block the scheduler`,
+ Since: "2017.1",
+ },
+
+ "SA5003": &lint.Documentation{
+ Title: `Defers in infinite loops will never execute`,
+ Text: `Defers are scoped to the surrounding function, not the surrounding
+block. In a function that never returns, i.e. one containing an
+infinite loop, defers will never execute. A sketch of the problem
+(getRow standing in for any resource acquisition):
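+
+    for {
+        row := getRow()
+        defer row.Close() // never runs; call Close in the loop body instead
+    }
+`,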
+ Since: "2017.1",
+ },
+
+ "SA5004": &lint.Documentation{
+ Title: `for { select { ... with an empty default branch spins`,
+ Since: "2017.1",
+ },
+
+ "SA5005": &lint.Documentation{
+ Title: `The finalizer references the finalized object, preventing garbage collection`,
+ Text: `A finalizer is a function associated with an object that runs when the
+garbage collector is ready to collect said object, that is when the
+object is no longer referenced by anything.
+
+If the finalizer references the object, however, it will always remain
+as the final reference to that object, preventing the garbage
+collector from collecting the object. The finalizer will never run,
+and the object will never be collected, leading to a memory leak. That
+is why the finalizer should instead use its first argument to operate
+on the object. That way, the number of references can temporarily go
+to zero before the object is passed to the finalizer. For instance
+(T and Close being placeholders):
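+
+    runtime.SetFinalizer(obj, func(obj *T) {
+        obj.Close() // use the argument, not a captured reference
+    })
+`,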
+ Since: "2017.1",
+ },
+
+ "SA5006": &lint.Documentation{
+ Title: `Slice index out of bounds`,
+ Since: "2017.1",
+ },
+
+ "SA5007": &lint.Documentation{
+ Title: `Infinite recursive call`,
+ Text: `A function that calls itself recursively needs to have an exit
+condition. Otherwise it will recurse forever, until the system runs
+out of memory.
+
+This issue can be caused by simple bugs such as forgetting to add an
+exit condition. It can also happen "on purpose". Some languages have
+tail call optimization which makes certain infinite recursive calls
+safe to use. Go, however, does not implement TCO, and as such a loop
+should be used instead.`,
+ Since: "2017.1",
+ },
+
+ "SA5008": &lint.Documentation{
+ Title: `Invalid struct tag`,
+ Since: "2019.2",
+ },
+
+ "SA5009": &lint.Documentation{
+ Title: `Invalid Printf call`,
+ Since: "2019.2",
+ },
+
+ "SA6000": &lint.Documentation{
+ Title: `Using regexp.Match or related in a loop, should use regexp.Compile`,
+ Since: "2017.1",
+ },
+
+ "SA6001": &lint.Documentation{
+ Title: `Missing an optimization opportunity when indexing maps by byte slices`,
+
+ Text: `Map keys must be comparable, which precludes the use of byte slices.
+This usually leads to using string keys and converting byte slices to
+strings.
+
+Normally, a conversion of a byte slice to a string needs to copy the data and
+causes allocations. The compiler, however, recognizes m[string(b)] and
+uses the data of b directly, without copying it, because it knows that
+the data can't change during the map lookup. This leads to the
+counter-intuitive situation that
+
+ k := string(b)
+ println(m[k])
+ println(m[k])
+
+will be less efficient than
+
+ println(m[string(b)])
+ println(m[string(b)])
+
+because the first version needs to copy and allocate, while the second
+one does not.
+
+For some history on this optimization, check out commit
+f5f5a8b6209f84961687d993b93ea0d397f5d5bf in the Go repository.`,
+ Since: "2017.1",
+ },
+
+ "SA6002": &lint.Documentation{
+ Title: `Storing non-pointer values in sync.Pool allocates memory`,
+ Text: `A sync.Pool is used to avoid unnecessary allocations and reduce the
+amount of work the garbage collector has to do.
+
+When passing a value that is not a pointer to a function that accepts
+an interface, the value needs to be placed on the heap, which means an
+additional allocation. Slices are a common thing to put in sync.Pools,
+and they're structs with 3 fields (length, capacity, and a pointer to
+an array). In order to avoid the extra allocation, one should store a
+pointer to the slice instead. A sketch, with pool a sync.Pool and buf
+a []byte:
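+
+    pool.Put(&buf) // storing &buf avoids the extra allocation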
+
+See the comments on https://go-review.googlesource.com/c/go/+/24371
+that discuss this problem.`,
+ Since: "2017.1",
+ },
+
+ "SA6003": &lint.Documentation{
+ Title: `Converting a string to a slice of runes before ranging over it`,
+ Text: `You may want to loop over the runes in a string. Instead of converting
+the string to a slice of runes and looping over that, you can loop
+over the string itself. That is,
+
+ for _, r := range s {}
+
+and
+
+ for _, r := range []rune(s) {}
+
+will yield the same values. The first version, however, will be faster
+and avoid unnecessary memory allocations.
+
+Do note that if you are interested in the indices, ranging over a
+string and over a slice of runes will yield different indices. The
+first one yields byte offsets, while the second one yields indices in
+the slice of runes.`,
+ Since: "2017.1",
+ },
+
+ "SA6005": &lint.Documentation{
+ Title: `Inefficient string comparison with strings.ToLower or strings.ToUpper`,
+ Text: `Converting two strings to the same case and comparing them like so
+
+ if strings.ToLower(s1) == strings.ToLower(s2) {
+ ...
+ }
+
+is significantly more expensive than comparing them with
+strings.EqualFold(s1, s2). This is due to memory usage as well as
+computational complexity.
+
+strings.ToLower will have to allocate memory for the new strings, as
+well as convert both strings fully, even if they differ on the very
+first byte. strings.EqualFold, on the other hand, compares the strings
+one character at a time. It doesn't need to create two intermediate
+strings and can return as soon as the first non-matching character has
+been found.
+
+For a more in-depth explanation of this issue, see
+https://blog.digitalocean.com/how-to-efficiently-compare-strings-in-go/`,
+ Since: "2019.2",
+ },
+
+ "SA9001": &lint.Documentation{
+ Title: `Defers in range loops may not run when you expect them to`,
+ Since: "2017.1",
+ },
+
+ "SA9002": &lint.Documentation{
+ Title: `Using a non-octal os.FileMode that looks like it was meant to be in octal.`,
+ Since: "2017.1",
+ },
+
+ "SA9003": &lint.Documentation{
+ Title: `Empty body in an if or else branch`,
+ Since: "2017.1",
+ },
+
+ "SA9004": &lint.Documentation{
+ Title: `Only the first constant has an explicit type`,
+
+ Text: `In a constant declaration such as the following:
+
+ const (
+ First byte = 1
+ Second = 2
+ )
+
+the constant Second does not have the same type as the constant First.
+This construct shouldn't be confused with
+
+ const (
+ First byte = iota
+ Second
+ )
+
+where First and Second do indeed have the same type. The type is only
+passed on when no explicit value is assigned to the constant.
+
+When declaring enumerations with explicit values it is therefore
+important not to write
+
+ const (
+ EnumFirst EnumType = 1
+ EnumSecond = 2
+ EnumThird = 3
+ )
+
+This discrepancy in types can cause various confusing behaviors and
+bugs.
+
+
+Wrong type in variable declarations
+
+The most obvious issue with such incorrect enumerations expresses
+itself as a compile error:
+
+ package pkg
+
+ const (
+ EnumFirst uint8 = 1
+ EnumSecond = 2
+ )
+
+ func fn(useFirst bool) {
+ x := EnumSecond
+ if useFirst {
+ x = EnumFirst
+ }
+ }
+
+fails to compile with
+
+ ./const.go:11:5: cannot use EnumFirst (type uint8) as type int in assignment
+
+
+Losing method sets
+
+A more subtle issue occurs with types that have methods and optional
+interfaces. Consider the following:
+
+ package main
+
+ import "fmt"
+
+ type Enum int
+
+ func (e Enum) String() string {
+ return "an enum"
+ }
+
+ const (
+ EnumFirst Enum = 1
+ EnumSecond = 2
+ )
+
+ func main() {
+ fmt.Println(EnumFirst)
+ fmt.Println(EnumSecond)
+ }
+
+This code will output
+
+ an enum
+ 2
+
+as EnumSecond has no explicit type, and thus defaults to int.`,
+ Since: "2019.1",
+ },
+
+ "SA9005": &lint.Documentation{
+ Title: `Trying to marshal a struct with no public fields nor custom marshaling`,
+ Text: `The encoding/json and encoding/xml packages only operate on exported
+fields in structs, not unexported ones. It is usually an error to try
+to (un)marshal structs that only consist of unexported fields.
+
+This check will not flag calls involving types that define custom
+marshaling behavior, e.g. via MarshalJSON methods. It will also not
+flag empty structs.`,
+ Since: "2019.2",
+ },
+}
diff --git a/vendor/honnef.co/go/tools/staticcheck/knowledge.go b/vendor/honnef.co/go/tools/staticcheck/knowledge.go
new file mode 100644
index 0000000000000..4c12b866a2041
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/knowledge.go
@@ -0,0 +1,25 @@
+package staticcheck
+
+import (
+ "reflect"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/ssa"
+ "honnef.co/go/tools/staticcheck/vrp"
+)
+
+var valueRangesAnalyzer = &analysis.Analyzer{
+ Name: "vrp",
+ Doc: "calculate value ranges of functions",
+ Run: func(pass *analysis.Pass) (interface{}, error) {
+ m := map[*ssa.Function]vrp.Ranges{}
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ vr := vrp.BuildGraph(ssafn).Solve()
+ m[ssafn] = vr
+ }
+ return m, nil
+ },
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ ResultType: reflect.TypeOf(map[*ssa.Function]vrp.Ranges{}),
+}
diff --git a/vendor/honnef.co/go/tools/staticcheck/lint.go b/vendor/honnef.co/go/tools/staticcheck/lint.go
new file mode 100644
index 0000000000000..1558cbf941507
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/lint.go
@@ -0,0 +1,3360 @@
+// Package staticcheck contains a linter for Go source code.
+package staticcheck // import "honnef.co/go/tools/staticcheck"
+
+import (
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ htmltemplate "html/template"
+ "net/http"
+ "reflect"
+ "regexp"
+ "regexp/syntax"
+ "sort"
+ "strconv"
+ "strings"
+ texttemplate "text/template"
+ "unicode"
+
+ . "honnef.co/go/tools/arg"
+ "honnef.co/go/tools/deprecated"
+ "honnef.co/go/tools/facts"
+ "honnef.co/go/tools/functions"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/internal/sharedcheck"
+ "honnef.co/go/tools/lint"
+ . "honnef.co/go/tools/lint/lintdsl"
+ "honnef.co/go/tools/printf"
+ "honnef.co/go/tools/ssa"
+ "honnef.co/go/tools/ssautil"
+ "honnef.co/go/tools/staticcheck/vrp"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/analysis/passes/inspect"
+ "golang.org/x/tools/go/ast/astutil"
+ "golang.org/x/tools/go/ast/inspector"
+ "golang.org/x/tools/go/types/typeutil"
+)
+
+func validRegexp(call *Call) {
+ arg := call.Args[0]
+ err := ValidateRegexp(arg.Value)
+ if err != nil {
+ arg.Invalid(err.Error())
+ }
+}
+
+type runeSlice []rune
+
+func (rs runeSlice) Len() int { return len(rs) }
+func (rs runeSlice) Less(i int, j int) bool { return rs[i] < rs[j] }
+func (rs runeSlice) Swap(i int, j int) { rs[i], rs[j] = rs[j], rs[i] }
+
+func utf8Cutset(call *Call) {
+ arg := call.Args[1]
+ if InvalidUTF8(arg.Value) {
+ arg.Invalid(MsgInvalidUTF8)
+ }
+}
+
+func uniqueCutset(call *Call) {
+ arg := call.Args[1]
+ if !UniqueStringCutset(arg.Value) {
+ arg.Invalid(MsgNonUniqueCutset)
+ }
+}
+
+func unmarshalPointer(name string, arg int) CallCheck {
+ return func(call *Call) {
+ if !Pointer(call.Args[arg].Value) {
+ call.Args[arg].Invalid(fmt.Sprintf("%s expects to unmarshal into a pointer, but the provided value is not a pointer", name))
+ }
+ }
+}
+
+func pointlessIntMath(call *Call) {
+ if ConvertedFromInt(call.Args[0].Value) {
+ call.Invalid(fmt.Sprintf("calling %s on a converted integer is pointless", CallName(call.Instr.Common())))
+ }
+}
+
+func checkValidHostPort(arg int) CallCheck {
+ return func(call *Call) {
+ if !ValidHostPort(call.Args[arg].Value) {
+ call.Args[arg].Invalid(MsgInvalidHostPort)
+ }
+ }
+}
+
+var (
+ checkRegexpRules = map[string]CallCheck{
+ "regexp.MustCompile": validRegexp,
+ "regexp.Compile": validRegexp,
+ "regexp.Match": validRegexp,
+ "regexp.MatchReader": validRegexp,
+ "regexp.MatchString": validRegexp,
+ }
+
+ checkTimeParseRules = map[string]CallCheck{
+ "time.Parse": func(call *Call) {
+ arg := call.Args[Arg("time.Parse.layout")]
+ err := ValidateTimeLayout(arg.Value)
+ if err != nil {
+ arg.Invalid(err.Error())
+ }
+ },
+ }
+
+ checkEncodingBinaryRules = map[string]CallCheck{
+ "encoding/binary.Write": func(call *Call) {
+ arg := call.Args[Arg("encoding/binary.Write.data")]
+ if !CanBinaryMarshal(call.Pass, arg.Value) {
+ arg.Invalid(fmt.Sprintf("value of type %s cannot be used with binary.Write", arg.Value.Value.Type()))
+ }
+ },
+ }
+
+ checkURLsRules = map[string]CallCheck{
+ "net/url.Parse": func(call *Call) {
+ arg := call.Args[Arg("net/url.Parse.rawurl")]
+ err := ValidateURL(arg.Value)
+ if err != nil {
+ arg.Invalid(err.Error())
+ }
+ },
+ }
+
+ checkSyncPoolValueRules = map[string]CallCheck{
+ "(*sync.Pool).Put": func(call *Call) {
+ arg := call.Args[Arg("(*sync.Pool).Put.x")]
+ typ := arg.Value.Value.Type()
+ if !IsPointerLike(typ) {
+ arg.Invalid("argument should be pointer-like to avoid allocations")
+ }
+ },
+ }
+
+ checkRegexpFindAllRules = map[string]CallCheck{
+ "(*regexp.Regexp).FindAll": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllIndex": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllString": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllStringIndex": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllStringSubmatch": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllStringSubmatchIndex": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllSubmatch": RepeatZeroTimes("a FindAll method", 1),
+ "(*regexp.Regexp).FindAllSubmatchIndex": RepeatZeroTimes("a FindAll method", 1),
+ }
+
+ checkUTF8CutsetRules = map[string]CallCheck{
+ "strings.IndexAny": utf8Cutset,
+ "strings.LastIndexAny": utf8Cutset,
+ "strings.ContainsAny": utf8Cutset,
+ "strings.Trim": utf8Cutset,
+ "strings.TrimLeft": utf8Cutset,
+ "strings.TrimRight": utf8Cutset,
+ }
+
+ checkUniqueCutsetRules = map[string]CallCheck{
+ "strings.Trim": uniqueCutset,
+ "strings.TrimLeft": uniqueCutset,
+ "strings.TrimRight": uniqueCutset,
+ }
+
+ checkUnmarshalPointerRules = map[string]CallCheck{
+ "encoding/xml.Unmarshal": unmarshalPointer("xml.Unmarshal", 1),
+ "(*encoding/xml.Decoder).Decode": unmarshalPointer("Decode", 0),
+ "(*encoding/xml.Decoder).DecodeElement": unmarshalPointer("DecodeElement", 0),
+ "encoding/json.Unmarshal": unmarshalPointer("json.Unmarshal", 1),
+ "(*encoding/json.Decoder).Decode": unmarshalPointer("Decode", 0),
+ }
+
+ checkUnbufferedSignalChanRules = map[string]CallCheck{
+ "os/signal.Notify": func(call *Call) {
+ arg := call.Args[Arg("os/signal.Notify.c")]
+ if UnbufferedChannel(arg.Value) {
+ arg.Invalid("the channel used with signal.Notify should be buffered")
+ }
+ },
+ }
+
+ checkMathIntRules = map[string]CallCheck{
+ "math.Ceil": pointlessIntMath,
+ "math.Floor": pointlessIntMath,
+ "math.IsNaN": pointlessIntMath,
+ "math.Trunc": pointlessIntMath,
+ "math.IsInf": pointlessIntMath,
+ }
+
+ checkStringsReplaceZeroRules = map[string]CallCheck{
+ "strings.Replace": RepeatZeroTimes("strings.Replace", 3),
+ "bytes.Replace": RepeatZeroTimes("bytes.Replace", 3),
+ }
+
+ checkListenAddressRules = map[string]CallCheck{
+ "net/http.ListenAndServe": checkValidHostPort(0),
+ "net/http.ListenAndServeTLS": checkValidHostPort(0),
+ }
+
+ checkBytesEqualIPRules = map[string]CallCheck{
+ "bytes.Equal": func(call *Call) {
+ if ConvertedFrom(call.Args[Arg("bytes.Equal.a")].Value, "net.IP") &&
+ ConvertedFrom(call.Args[Arg("bytes.Equal.b")].Value, "net.IP") {
+ call.Invalid("use net.IP.Equal to compare net.IPs, not bytes.Equal")
+ }
+ },
+ }
+
+ checkRegexpMatchLoopRules = map[string]CallCheck{
+ "regexp.Match": loopedRegexp("regexp.Match"),
+ "regexp.MatchReader": loopedRegexp("regexp.MatchReader"),
+ "regexp.MatchString": loopedRegexp("regexp.MatchString"),
+ }
+
+ checkNoopMarshal = map[string]CallCheck{
+ // TODO(dh): should we really flag XML? Even an empty struct
+ // produces a non-zero amount of data, namely its type name.
+ // Let's see if we encounter any false positives.
+ //
+ // Also, should we flag gob?
+ "encoding/json.Marshal": checkNoopMarshalImpl(Arg("json.Marshal.v"), "MarshalJSON", "MarshalText"),
+ "encoding/xml.Marshal": checkNoopMarshalImpl(Arg("xml.Marshal.v"), "MarshalXML", "MarshalText"),
+ "(*encoding/json.Encoder).Encode": checkNoopMarshalImpl(Arg("(*encoding/json.Encoder).Encode.v"), "MarshalJSON", "MarshalText"),
+ "(*encoding/xml.Encoder).Encode": checkNoopMarshalImpl(Arg("(*encoding/xml.Encoder).Encode.v"), "MarshalXML", "MarshalText"),
+
+ "encoding/json.Unmarshal": checkNoopMarshalImpl(Arg("json.Unmarshal.v"), "UnmarshalJSON", "UnmarshalText"),
+ "encoding/xml.Unmarshal": checkNoopMarshalImpl(Arg("xml.Unmarshal.v"), "UnmarshalXML", "UnmarshalText"),
+ "(*encoding/json.Decoder).Decode": checkNoopMarshalImpl(Arg("(*encoding/json.Decoder).Decode.v"), "UnmarshalJSON", "UnmarshalText"),
+ "(*encoding/xml.Decoder).Decode": checkNoopMarshalImpl(Arg("(*encoding/xml.Decoder).Decode.v"), "UnmarshalXML", "UnmarshalText"),
+ }
+
+ checkUnsupportedMarshal = map[string]CallCheck{
+ "encoding/json.Marshal": checkUnsupportedMarshalImpl(Arg("json.Marshal.v"), "json", "MarshalJSON", "MarshalText"),
+ "encoding/xml.Marshal": checkUnsupportedMarshalImpl(Arg("xml.Marshal.v"), "xml", "MarshalXML", "MarshalText"),
+ "(*encoding/json.Encoder).Encode": checkUnsupportedMarshalImpl(Arg("(*encoding/json.Encoder).Encode.v"), "json", "MarshalJSON", "MarshalText"),
+ "(*encoding/xml.Encoder).Encode": checkUnsupportedMarshalImpl(Arg("(*encoding/xml.Encoder).Encode.v"), "xml", "MarshalXML", "MarshalText"),
+ }
+
+ checkAtomicAlignment = map[string]CallCheck{
+ "sync/atomic.AddInt64": checkAtomicAlignmentImpl,
+ "sync/atomic.AddUint64": checkAtomicAlignmentImpl,
+ "sync/atomic.CompareAndSwapInt64": checkAtomicAlignmentImpl,
+ "sync/atomic.CompareAndSwapUint64": checkAtomicAlignmentImpl,
+ "sync/atomic.LoadInt64": checkAtomicAlignmentImpl,
+ "sync/atomic.LoadUint64": checkAtomicAlignmentImpl,
+ "sync/atomic.StoreInt64": checkAtomicAlignmentImpl,
+ "sync/atomic.StoreUint64": checkAtomicAlignmentImpl,
+ "sync/atomic.SwapInt64": checkAtomicAlignmentImpl,
+ "sync/atomic.SwapUint64": checkAtomicAlignmentImpl,
+ }
+
+ // TODO(dh): detect printf wrappers
+ checkPrintfRules = map[string]CallCheck{
+ "fmt.Errorf": func(call *Call) { checkPrintfCall(call, 0, 1) },
+ "fmt.Printf": func(call *Call) { checkPrintfCall(call, 0, 1) },
+ "fmt.Sprintf": func(call *Call) { checkPrintfCall(call, 0, 1) },
+ "fmt.Fprintf": func(call *Call) { checkPrintfCall(call, 1, 2) },
+ }
+)
+
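+// checkPrintfCall validates a fmt-style call: the format string is
+// argument fIdx and the variadic arguments are argument vIdx.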
+func checkPrintfCall(call *Call, fIdx, vIdx int) {
+ f := call.Args[fIdx]
+ var args []ssa.Value
+ switch v := call.Args[vIdx].Value.Value.(type) {
+ case *ssa.Slice:
+ var ok bool
+ args, ok = ssautil.Vararg(v)
+ if !ok {
+ // We don't know what the actual arguments to the function are
+ return
+ }
+ case *ssa.Const:
+ // nil, i.e. no arguments
+ default:
+ // We don't know what the actual arguments to the function are
+ return
+ }
+ checkPrintfCallImpl(call, f.Value.Value, args)
+}
+
+type verbFlag int
+
+const (
+ isInt verbFlag = 1 << iota
+ isBool
+ isFP
+ isString
+ isPointer
+ isPseudoPointer
+ isSlice
+ isAny
+ noRecurse
+)
+
+var verbs = [...]verbFlag{
+ 'b': isPseudoPointer | isInt | isFP,
+ 'c': isInt,
+ 'd': isPseudoPointer | isInt,
+ 'e': isFP,
+ 'E': isFP,
+ 'f': isFP,
+ 'F': isFP,
+ 'g': isFP,
+ 'G': isFP,
+ 'o': isPseudoPointer | isInt,
+ 'p': isSlice | isPointer | noRecurse,
+ 'q': isInt | isString,
+ 's': isString,
+ 't': isBool,
+ 'T': isAny,
+ 'U': isInt,
+ 'v': isAny,
+ 'X': isPseudoPointer | isInt | isString,
+ 'x': isPseudoPointer | isInt | isString,
+}
+
+func checkPrintfCallImpl(call *Call, f ssa.Value, args []ssa.Value) {
+ var msCache *typeutil.MethodSetCache
+ if f.Parent() != nil {
+ msCache = &f.Parent().Prog.MethodSets
+ }
+
+ elem := func(T types.Type, verb rune) ([]types.Type, bool) {
+ if verbs[verb]&noRecurse != 0 {
+ return []types.Type{T}, false
+ }
+ switch T := T.(type) {
+ case *types.Slice:
+ if verbs[verb]&isSlice != 0 {
+ return []types.Type{T}, false
+ }
+ if verbs[verb]&isString != 0 && IsType(T.Elem().Underlying(), "byte") {
+ return []types.Type{T}, false
+ }
+ return []types.Type{T.Elem()}, true
+ case *types.Map:
+ key := T.Key()
+ val := T.Elem()
+ return []types.Type{key, val}, true
+ case *types.Struct:
+ out := make([]types.Type, 0, T.NumFields())
+ for i := 0; i < T.NumFields(); i++ {
+ out = append(out, T.Field(i).Type())
+ }
+ return out, true
+ case *types.Array:
+ return []types.Type{T.Elem()}, true
+ default:
+ return []types.Type{T}, false
+ }
+ }
+ isInfo := func(T types.Type, info types.BasicInfo) bool {
+ basic, ok := T.Underlying().(*types.Basic)
+ return ok && basic.Info()&info != 0
+ }
+
+ isStringer := func(T types.Type, ms *types.MethodSet) bool {
+ sel := ms.Lookup(nil, "String")
+ if sel == nil {
+ return false
+ }
+ fn, ok := sel.Obj().(*types.Func)
+ if !ok {
+ // should be unreachable
+ return false
+ }
+ sig := fn.Type().(*types.Signature)
+ if sig.Params().Len() != 0 {
+ return false
+ }
+ if sig.Results().Len() != 1 {
+ return false
+ }
+ if !IsType(sig.Results().At(0).Type(), "string") {
+ return false
+ }
+ return true
+ }
+ isError := func(T types.Type, ms *types.MethodSet) bool {
+ sel := ms.Lookup(nil, "Error")
+ if sel == nil {
+ return false
+ }
+ fn, ok := sel.Obj().(*types.Func)
+ if !ok {
+ // should be unreachable
+ return false
+ }
+ sig := fn.Type().(*types.Signature)
+ if sig.Params().Len() != 0 {
+ return false
+ }
+ if sig.Results().Len() != 1 {
+ return false
+ }
+ if !IsType(sig.Results().At(0).Type(), "string") {
+ return false
+ }
+ return true
+ }
+
+ isFormatter := func(T types.Type, ms *types.MethodSet) bool {
+ sel := ms.Lookup(nil, "Format")
+ if sel == nil {
+ return false
+ }
+ fn, ok := sel.Obj().(*types.Func)
+ if !ok {
+ // should be unreachable
+ return false
+ }
+ sig := fn.Type().(*types.Signature)
+ if sig.Params().Len() != 2 {
+ return false
+ }
+ // TODO(dh): check the types of the arguments for more
+ // precision
+ if sig.Results().Len() != 0 {
+ return false
+ }
+ return true
+ }
+
+ seen := map[types.Type]bool{}
+ var checkType func(verb rune, T types.Type, top bool) bool
+ checkType = func(verb rune, T types.Type, top bool) bool {
+ if top {
+ for k := range seen {
+ delete(seen, k)
+ }
+ }
+ if seen[T] {
+ return true
+ }
+ seen[T] = true
+ if int(verb) >= len(verbs) {
+ // Unknown verb
+ return true
+ }
+
+ flags := verbs[verb]
+ if flags == 0 {
+ // Unknown verb
+ return true
+ }
+
+ ms := msCache.MethodSet(T)
+ if isFormatter(T, ms) {
+ // the value is responsible for formatting itself
+ return true
+ }
+
+ if flags&isString != 0 && (isStringer(T, ms) || isError(T, ms)) {
+ // Check for stringer early because we're about to dereference
+ return true
+ }
+
+ T = T.Underlying()
+ if flags&(isPointer|isPseudoPointer) == 0 && top {
+ T = Dereference(T)
+ }
+ if flags&isPseudoPointer != 0 && top {
+ t := Dereference(T)
+ if _, ok := t.Underlying().(*types.Struct); ok {
+ T = t
+ }
+ }
+
+ if _, ok := T.(*types.Interface); ok {
+ // We don't know what's in the interface
+ return true
+ }
+
+ var info types.BasicInfo
+ if flags&isInt != 0 {
+ info |= types.IsInteger
+ }
+ if flags&isBool != 0 {
+ info |= types.IsBoolean
+ }
+ if flags&isFP != 0 {
+ info |= types.IsFloat | types.IsComplex
+ }
+ if flags&isString != 0 {
+ info |= types.IsString
+ }
+
+ if info != 0 && isInfo(T, info) {
+ return true
+ }
+
+ if flags&isString != 0 && (IsType(T, "[]byte") || isStringer(T, ms) || isError(T, ms)) {
+ return true
+ }
+
+ if flags&isPointer != 0 && IsPointerLike(T) {
+ return true
+ }
+ if flags&isPseudoPointer != 0 {
+ switch U := T.Underlying().(type) {
+ case *types.Pointer:
+ if !top {
+ return true
+ }
+
+ if _, ok := U.Elem().Underlying().(*types.Struct); !ok {
+ return true
+ }
+ case *types.Chan, *types.Signature:
+ return true
+ }
+ }
+
+ if flags&isSlice != 0 {
+ if _, ok := T.(*types.Slice); ok {
+ return true
+ }
+ }
+
+ if flags&isAny != 0 {
+ return true
+ }
+
+ elems, ok := elem(T.Underlying(), verb)
+ if !ok {
+ return false
+ }
+ for _, elem := range elems {
+ if !checkType(verb, elem, false) {
+ return false
+ }
+ }
+
+ return true
+ }
+
+ k, ok := f.(*ssa.Const)
+ if !ok {
+ return
+ }
+ actions, err := printf.Parse(constant.StringVal(k.Value))
+ if err != nil {
+ call.Invalid("couldn't parse format string")
+ return
+ }
+
+ ptr := 1
+ hasExplicit := false
+
+ checkStar := func(verb printf.Verb, star printf.Argument) bool {
+ if star, ok := star.(printf.Star); ok {
+ idx := 0
+ if star.Index == -1 {
+ idx = ptr
+ ptr++
+ } else {
+ hasExplicit = true
+ idx = star.Index
+ ptr = star.Index + 1
+ }
+ if idx == 0 {
+ call.Invalid(fmt.Sprintf("Printf format %s reads invalid arg 0; indices are 1-based", verb.Raw))
+ return false
+ }
+ if idx > len(args) {
+ call.Invalid(
+ fmt.Sprintf("Printf format %s reads arg #%d, but call has only %d args",
+ verb.Raw, idx, len(args)))
+ return false
+ }
+ if arg, ok := args[idx-1].(*ssa.MakeInterface); ok {
+ if !isInfo(arg.X.Type(), types.IsInteger) {
+ call.Invalid(fmt.Sprintf("Printf format %s reads non-int arg #%d as argument of *", verb.Raw, idx))
+ }
+ }
+ }
+ return true
+ }
+
+ // We only report one problem per format string. Making a
+ // mistake with an index tends to invalidate all future
+ // implicit indices.
+ for _, action := range actions {
+ verb, ok := action.(printf.Verb)
+ if !ok {
+ continue
+ }
+
+ if !checkStar(verb, verb.Width) || !checkStar(verb, verb.Precision) {
+ return
+ }
+
+ off := ptr
+ if verb.Value != -1 {
+ hasExplicit = true
+ off = verb.Value
+ }
+ if off > len(args) {
+ call.Invalid(
+ fmt.Sprintf("Printf format %s reads arg #%d, but call has only %d args",
+ verb.Raw, off, len(args)))
+ return
+ } else if verb.Value == 0 && verb.Letter != '%' {
+ call.Invalid(fmt.Sprintf("Printf format %s reads invalid arg 0; indices are 1-based", verb.Raw))
+ return
+ } else if off != 0 {
+ arg, ok := args[off-1].(*ssa.MakeInterface)
+ if ok {
+ if !checkType(verb.Letter, arg.X.Type(), true) {
+					call.Invalid(fmt.Sprintf("Printf format %s has arg #%d of wrong type %s",
+						verb.Raw, off, arg.X.Type()))
+ return
+ }
+ }
+ }
+
+ switch verb.Value {
+ case -1:
+ // Consume next argument
+ ptr++
+ case 0:
+ // Don't consume any arguments
+ default:
+ ptr = verb.Value + 1
+ }
+ }
+
+ if !hasExplicit && ptr <= len(args) {
+ call.Invalid(fmt.Sprintf("Printf call needs %d args but has %d args", ptr-1, len(args)))
+ }
+}
+
+func checkAtomicAlignmentImpl(call *Call) {
+ sizes := call.Pass.TypesSizes
+ if sizes.Sizeof(types.Typ[types.Uintptr]) != 4 {
+ // Not running on a 32-bit platform
+ return
+ }
+ v, ok := call.Args[0].Value.Value.(*ssa.FieldAddr)
+ if !ok {
+ // TODO(dh): also check indexing into arrays and slices
+ return
+ }
+ T := v.X.Type().Underlying().(*types.Pointer).Elem().Underlying().(*types.Struct)
+ fields := make([]*types.Var, 0, T.NumFields())
+ for i := 0; i < T.NumFields() && i <= v.Field; i++ {
+ fields = append(fields, T.Field(i))
+ }
+
+ off := sizes.Offsetsof(fields)[v.Field]
+ if off%8 != 0 {
+ msg := fmt.Sprintf("address of non 64-bit aligned field %s passed to %s",
+ T.Field(v.Field).Name(),
+ CallName(call.Instr.Common()))
+ call.Invalid(msg)
+ }
+}
+
+func checkNoopMarshalImpl(argN int, meths ...string) CallCheck {
+ return func(call *Call) {
+ if IsGenerated(call.Pass, call.Instr.Pos()) {
+ return
+ }
+ arg := call.Args[argN]
+ T := arg.Value.Value.Type()
+ Ts, ok := Dereference(T).Underlying().(*types.Struct)
+ if !ok {
+ return
+ }
+ if Ts.NumFields() == 0 {
+ return
+ }
+ fields := FlattenFields(Ts)
+ for _, field := range fields {
+ if field.Var.Exported() {
+ return
+ }
+ }
+ // OPT(dh): we could use a method set cache here
+ ms := call.Instr.Parent().Prog.MethodSets.MethodSet(T)
+ // TODO(dh): we're not checking the signature, which can cause false negatives.
+ // This isn't a huge problem, however, since vet complains about incorrect signatures.
+ for _, meth := range meths {
+ if ms.Lookup(nil, meth) != nil {
+ return
+ }
+ }
+ arg.Invalid("struct doesn't have any exported fields, nor custom marshaling")
+ }
+}
+
+func checkUnsupportedMarshalImpl(argN int, tag string, meths ...string) CallCheck {
+ // TODO(dh): flag slices and maps of unsupported types
+ return func(call *Call) {
+ msCache := &call.Instr.Parent().Prog.MethodSets
+
+ arg := call.Args[argN]
+ T := arg.Value.Value.Type()
+ Ts, ok := Dereference(T).Underlying().(*types.Struct)
+ if !ok {
+ return
+ }
+ ms := msCache.MethodSet(T)
+ // TODO(dh): we're not checking the signature, which can cause false negatives.
+ // This isn't a huge problem, however, since vet complains about incorrect signatures.
+ for _, meth := range meths {
+ if ms.Lookup(nil, meth) != nil {
+ return
+ }
+ }
+ fields := FlattenFields(Ts)
+ for _, field := range fields {
+ if !(field.Var.Exported()) {
+ continue
+ }
+ if reflect.StructTag(field.Tag).Get(tag) == "-" {
+ continue
+ }
+ ms := msCache.MethodSet(field.Var.Type())
+ // TODO(dh): we're not checking the signature, which can cause false negatives.
+ // This isn't a huge problem, however, since vet complains about incorrect signatures.
+ for _, meth := range meths {
+ if ms.Lookup(nil, meth) != nil {
+ return
+ }
+ }
+ switch field.Var.Type().Underlying().(type) {
+ case *types.Chan, *types.Signature:
+ arg.Invalid(fmt.Sprintf("trying to marshal chan or func value, field %s", fieldPath(T, field.Path)))
+ }
+ }
+ }
+}
+
+func fieldPath(start types.Type, indices []int) string {
+ p := start.String()
+ for _, idx := range indices {
+ field := Dereference(start).Underlying().(*types.Struct).Field(idx)
+ start = field.Type()
+ p += "." + field.Name()
+ }
+ return p
+}
+
+func isInLoop(b *ssa.BasicBlock) bool {
+ sets := functions.FindLoops(b.Parent())
+ for _, set := range sets {
+ if set.Has(b) {
+ return true
+ }
+ }
+ return false
+}
+
+func CheckUntrappableSignal(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if !IsCallToAnyAST(pass, call,
+ "os/signal.Ignore", "os/signal.Notify", "os/signal.Reset") {
+ return
+ }
+ for _, arg := range call.Args {
+ if conv, ok := arg.(*ast.CallExpr); ok && isName(pass, conv.Fun, "os.Signal") {
+ arg = conv.Args[0]
+ }
+
+ if isName(pass, arg, "os.Kill") || isName(pass, arg, "syscall.SIGKILL") {
+ ReportNodef(pass, arg, "%s cannot be trapped (did you mean syscall.SIGTERM?)", Render(pass, arg))
+ }
+ if isName(pass, arg, "syscall.SIGSTOP") {
+ ReportNodef(pass, arg, "%s signal cannot be trapped", Render(pass, arg))
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckTemplate(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ var kind string
+ if IsCallToAST(pass, call, "(*text/template.Template).Parse") {
+ kind = "text"
+ } else if IsCallToAST(pass, call, "(*html/template.Template).Parse") {
+ kind = "html"
+ } else {
+ return
+ }
+ sel := call.Fun.(*ast.SelectorExpr)
+ if !IsCallToAST(pass, sel.X, "text/template.New") &&
+ !IsCallToAST(pass, sel.X, "html/template.New") {
+ // TODO(dh): this is a cheap workaround for templates with
+			// different delims. A better solution with fewer false
+			// negatives would use data flow analysis to see where the
+			// template comes from and where it has been modified.
+ return
+ }
+ s, ok := ExprToString(pass, call.Args[Arg("(*text/template.Template).Parse.text")])
+ if !ok {
+ return
+ }
+ var err error
+ switch kind {
+ case "text":
+ _, err = texttemplate.New("").Parse(s)
+ case "html":
+ _, err = htmltemplate.New("").Parse(s)
+ }
+ if err != nil {
+ // TODO(dominikh): whitelist other parse errors, if any
+ if strings.Contains(err.Error(), "unexpected") {
+ ReportNodef(pass, call.Args[Arg("(*text/template.Template).Parse.text")], "%s", err)
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckTimeSleepConstant(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if !IsCallToAST(pass, call, "time.Sleep") {
+ return
+ }
+ lit, ok := call.Args[Arg("time.Sleep.d")].(*ast.BasicLit)
+ if !ok {
+ return
+ }
+ n, err := strconv.Atoi(lit.Value)
+ if err != nil {
+ return
+ }
+ if n == 0 || n > 120 {
+			// time.Sleep(0) is a seldom-used pattern in concurrency
+ // tests. >120 might be intentional. 120 was chosen
+ // because the user could've meant 2 minutes.
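+			// Small values such as time.Sleep(10), by contrast, are
+			// flagged below: they sleep for mere nanoseconds, where
+			// the caller almost certainly meant a larger unit.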
+ return
+ }
+ recommendation := "time.Sleep(time.Nanosecond)"
+ if n != 1 {
+ recommendation = fmt.Sprintf("time.Sleep(%d * time.Nanosecond)", n)
+ }
+ ReportNodef(pass, call.Args[Arg("time.Sleep.d")],
+ "sleeping for %d nanoseconds is probably a bug. Be explicit if it isn't: %s", n, recommendation)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckWaitgroupAdd(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ g := node.(*ast.GoStmt)
+ fun, ok := g.Call.Fun.(*ast.FuncLit)
+ if !ok {
+ return
+ }
+ if len(fun.Body.List) == 0 {
+ return
+ }
+ stmt, ok := fun.Body.List[0].(*ast.ExprStmt)
+ if !ok {
+ return
+ }
+ if IsCallToAST(pass, stmt.X, "(*sync.WaitGroup).Add") {
+ ReportNodef(pass, stmt, "should call %s before starting the goroutine to avoid a race",
+ Render(pass, stmt))
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.GoStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckInfiniteEmptyLoop(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ loop := node.(*ast.ForStmt)
+ if len(loop.Body.List) != 0 || loop.Post != nil {
+ return
+ }
+
+ if loop.Init != nil {
+ // TODO(dh): this isn't strictly necessary, it just makes
+ // the check easier.
+ return
+ }
+ // An empty loop is bad news in two cases: 1) The loop has no
+ // condition. In that case, it's just a loop that spins
+ // forever and as fast as it can, keeping a core busy. 2) The
+ // loop condition only consists of variable or field reads and
+ // operators on those. The only way those could change their
+ // value is with unsynchronised access, which constitutes a
+ // data race.
+ //
+ // If the condition contains any function calls, its behaviour
+ // is dynamic and the loop might terminate. Similarly for
+ // channel receives.
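+		//
+		// Illustrative examples (hypothetical):
+		//
+		//	for { }           // spins at 100% CPU
+		//	for !done { }     // unsynchronised read of done: a data race
+		//	for !check() { }  // not flagged: the call may have side effects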
+
+ if loop.Cond != nil {
+ if hasSideEffects(loop.Cond) {
+ return
+ }
+ if ident, ok := loop.Cond.(*ast.Ident); ok {
+ if k, ok := pass.TypesInfo.ObjectOf(ident).(*types.Const); ok {
+ if !constant.BoolVal(k.Val()) {
+ // don't flag `for false {}` loops. They're a debug aid.
+ return
+ }
+ }
+ }
+ ReportNodef(pass, loop, "loop condition never changes or has a race condition")
+ }
+ ReportNodef(pass, loop, "this loop will spin, using 100%% CPU")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckDeferInInfiniteLoop(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ mightExit := false
+ var defers []ast.Stmt
+ loop := node.(*ast.ForStmt)
+ if loop.Cond != nil {
+ return
+ }
+ fn2 := func(node ast.Node) bool {
+ switch stmt := node.(type) {
+ case *ast.ReturnStmt:
+ mightExit = true
+ return false
+ case *ast.BranchStmt:
+ // TODO(dominikh): if this sees a break in a switch or
+ // select, it doesn't check if it breaks the loop or
+ // just the select/switch. This causes some false
+ // negatives.
+ if stmt.Tok == token.BREAK {
+ mightExit = true
+ return false
+ }
+ case *ast.DeferStmt:
+ defers = append(defers, stmt)
+ case *ast.FuncLit:
+ // Don't look into function bodies
+ return false
+ }
+ return true
+ }
+ ast.Inspect(loop.Body, fn2)
+ if mightExit {
+ return
+ }
+ for _, stmt := range defers {
+ ReportNodef(pass, stmt, "defers in this infinite loop will never run")
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckDubiousDeferInChannelRangeLoop(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ loop := node.(*ast.RangeStmt)
+ typ := pass.TypesInfo.TypeOf(loop.X)
+ _, ok := typ.Underlying().(*types.Chan)
+ if !ok {
+ return
+ }
+ fn2 := func(node ast.Node) bool {
+ switch stmt := node.(type) {
+ case *ast.DeferStmt:
+ ReportNodef(pass, stmt, "defers in this range loop won't run unless the channel gets closed")
+ case *ast.FuncLit:
+ // Don't look into function bodies
+ return false
+ }
+ return true
+ }
+ ast.Inspect(loop.Body, fn2)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.RangeStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckTestMainExit(pass *analysis.Pass) (interface{}, error) {
+ var (
+ fnmain ast.Node
+ callsExit bool
+ callsRun bool
+ arg types.Object
+ )
+ fn := func(node ast.Node, push bool) bool {
+ if !push {
+ if fnmain != nil && node == fnmain {
+ if !callsExit && callsRun {
+ ReportNodef(pass, fnmain, "TestMain should call os.Exit to set exit code")
+ }
+ fnmain = nil
+ callsExit = false
+ callsRun = false
+ arg = nil
+ }
+ return true
+ }
+
+ switch node := node.(type) {
+ case *ast.FuncDecl:
+ if fnmain != nil {
+ return true
+ }
+ if !isTestMain(pass, node) {
+ return false
+ }
+ fnmain = node
+ arg = pass.TypesInfo.ObjectOf(node.Type.Params.List[0].Names[0])
+ return true
+ case *ast.CallExpr:
+ if IsCallToAST(pass, node, "os.Exit") {
+ callsExit = true
+ return false
+ }
+ sel, ok := node.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return true
+ }
+ ident, ok := sel.X.(*ast.Ident)
+ if !ok {
+ return true
+ }
+ if arg != pass.TypesInfo.ObjectOf(ident) {
+ return true
+ }
+ if sel.Sel.Name == "Run" {
+ callsRun = true
+ return false
+ }
+ return true
+ default:
+ // unreachable
+ return true
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Nodes([]ast.Node{(*ast.FuncDecl)(nil), (*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func isTestMain(pass *analysis.Pass, decl *ast.FuncDecl) bool {
+ if decl.Name.Name != "TestMain" {
+ return false
+ }
+ if len(decl.Type.Params.List) != 1 {
+ return false
+ }
+ arg := decl.Type.Params.List[0]
+ if len(arg.Names) != 1 {
+ return false
+ }
+ return IsOfType(pass, arg.Type, "*testing.M")
+}
+
+func CheckExec(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if !IsCallToAST(pass, call, "os/exec.Command") {
+ return
+ }
+ val, ok := ExprToString(pass, call.Args[Arg("os/exec.Command.name")])
+ if !ok {
+ return
+ }
+ if !strings.Contains(val, " ") || strings.Contains(val, `\`) || strings.Contains(val, "/") {
+ return
+ }
+ ReportNodef(pass, call.Args[Arg("os/exec.Command.name")],
+			"first argument to exec.Command looks like a shell command, but a program name or path is expected")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckLoopEmptyDefault(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ loop := node.(*ast.ForStmt)
+ if len(loop.Body.List) != 1 || loop.Cond != nil || loop.Init != nil {
+ return
+ }
+ sel, ok := loop.Body.List[0].(*ast.SelectStmt)
+ if !ok {
+ return
+ }
+ for _, c := range sel.Body.List {
+ if comm, ok := c.(*ast.CommClause); ok && comm.Comm == nil && len(comm.Body) == 0 {
+ ReportNodef(pass, comm, "should not have an empty default case in a for+select loop. The loop will spin.")
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckLhsRhsIdentical(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ op := node.(*ast.BinaryExpr)
+ switch op.Op {
+ case token.EQL, token.NEQ:
+ if basic, ok := pass.TypesInfo.TypeOf(op.X).Underlying().(*types.Basic); ok {
+ if kind := basic.Kind(); kind == types.Float32 || kind == types.Float64 {
+ // f == f and f != f might be used to check for NaN
+ return
+ }
+ }
+ case token.SUB, token.QUO, token.AND, token.REM, token.OR, token.XOR, token.AND_NOT,
+ token.LAND, token.LOR, token.LSS, token.GTR, token.LEQ, token.GEQ:
+ default:
+ // For some ops, such as + and *, it can make sense to
+ // have identical operands
+ return
+ }
+
+ if Render(pass, op.X) != Render(pass, op.Y) {
+ return
+ }
+ l1, ok1 := op.X.(*ast.BasicLit)
+ l2, ok2 := op.Y.(*ast.BasicLit)
+ if ok1 && ok2 && l1.Kind == token.INT && l2.Kind == l1.Kind && l1.Value == "0" && l2.Value == l1.Value && IsGenerated(pass, l1.Pos()) {
+ // cgo generates the following function call:
+ // _cgoCheckPointer(_cgoBase0, 0 == 0) – it uses 0 == 0
+ // instead of true in case the user shadowed the
+ // identifier. Ideally we'd restrict this exception to
+ // calls of _cgoCheckPointer, but it's not worth the
+ // hassle of keeping track of the stack. <lit> <op> <lit>
+ // are very rare to begin with, and we're mostly checking
+ // for them to catch typos such as 1 == 1 where the user
+ // meant to type i == 1. The odds of a false negative for
+ // 0 == 0 are slim.
+ return
+ }
+ ReportNodef(pass, op, "identical expressions on the left and right side of the '%s' operator", op.Op)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckScopedBreak(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ var body *ast.BlockStmt
+ switch node := node.(type) {
+ case *ast.ForStmt:
+ body = node.Body
+ case *ast.RangeStmt:
+ body = node.Body
+ default:
+ panic(fmt.Sprintf("unreachable: %T", node))
+ }
+ for _, stmt := range body.List {
+ var blocks [][]ast.Stmt
+ switch stmt := stmt.(type) {
+ case *ast.SwitchStmt:
+ for _, c := range stmt.Body.List {
+ blocks = append(blocks, c.(*ast.CaseClause).Body)
+ }
+ case *ast.SelectStmt:
+ for _, c := range stmt.Body.List {
+ blocks = append(blocks, c.(*ast.CommClause).Body)
+ }
+ default:
+ continue
+ }
+
+ for _, body := range blocks {
+ if len(body) == 0 {
+ continue
+ }
+ lasts := []ast.Stmt{body[len(body)-1]}
+ // TODO(dh): unfold all levels of nested block
+ // statements, not just a single level if statement
+ if ifs, ok := lasts[0].(*ast.IfStmt); ok {
+ if len(ifs.Body.List) == 0 {
+ continue
+ }
+ lasts[0] = ifs.Body.List[len(ifs.Body.List)-1]
+
+ if block, ok := ifs.Else.(*ast.BlockStmt); ok {
+ if len(block.List) != 0 {
+ lasts = append(lasts, block.List[len(block.List)-1])
+ }
+ }
+ }
+ for _, last := range lasts {
+ branch, ok := last.(*ast.BranchStmt)
+ if !ok || branch.Tok != token.BREAK || branch.Label != nil {
+ continue
+ }
+ ReportNodef(pass, branch, "ineffective break statement. Did you mean to break out of the outer loop?")
+ }
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ForStmt)(nil), (*ast.RangeStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckUnsafePrintf(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ var arg int
+ if IsCallToAnyAST(pass, call, "fmt.Printf", "fmt.Sprintf", "log.Printf") {
+ arg = Arg("fmt.Printf.format")
+ } else if IsCallToAnyAST(pass, call, "fmt.Fprintf") {
+ arg = Arg("fmt.Fprintf.format")
+ } else {
+ return
+ }
+ if len(call.Args) != arg+1 {
+ return
+ }
+ switch call.Args[arg].(type) {
+ case *ast.CallExpr, *ast.Ident:
+ default:
+ return
+ }
+ ReportNodef(pass, call.Args[arg],
+ "printf-style function with dynamic format string and no further arguments should use print-style function instead")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckEarlyDefer(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ block := node.(*ast.BlockStmt)
+ if len(block.List) < 2 {
+ return
+ }
+ for i, stmt := range block.List {
+ if i == len(block.List)-1 {
+ break
+ }
+ assign, ok := stmt.(*ast.AssignStmt)
+ if !ok {
+ continue
+ }
+ if len(assign.Rhs) != 1 {
+ continue
+ }
+ if len(assign.Lhs) < 2 {
+ continue
+ }
+ if lhs, ok := assign.Lhs[len(assign.Lhs)-1].(*ast.Ident); ok && lhs.Name == "_" {
+ continue
+ }
+ call, ok := assign.Rhs[0].(*ast.CallExpr)
+ if !ok {
+ continue
+ }
+ sig, ok := pass.TypesInfo.TypeOf(call.Fun).(*types.Signature)
+ if !ok {
+ continue
+ }
+ if sig.Results().Len() < 2 {
+ continue
+ }
+ last := sig.Results().At(sig.Results().Len() - 1)
+ // FIXME(dh): check that it's error from universe, not
+ // another type of the same name
+ if last.Type().String() != "error" {
+ continue
+ }
+ lhs, ok := assign.Lhs[0].(*ast.Ident)
+ if !ok {
+ continue
+ }
+ def, ok := block.List[i+1].(*ast.DeferStmt)
+ if !ok {
+ continue
+ }
+ sel, ok := def.Call.Fun.(*ast.SelectorExpr)
+ if !ok {
+ continue
+ }
+ ident, ok := selectorX(sel).(*ast.Ident)
+ if !ok {
+ continue
+ }
+ if ident.Obj != lhs.Obj {
+ continue
+ }
+ if sel.Sel.Name != "Close" {
+ continue
+ }
+ ReportNodef(pass, def, "should check returned error before deferring %s", Render(pass, def.Call))
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BlockStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func selectorX(sel *ast.SelectorExpr) ast.Node {
+ switch x := sel.X.(type) {
+ case *ast.SelectorExpr:
+ return selectorX(x)
+ default:
+ return x
+ }
+}
+
+func CheckEmptyCriticalSection(pass *analysis.Pass) (interface{}, error) {
+ // Initially it might seem like this check would be easier to
+ // implement in SSA. After all, we're only checking for two
+ // consecutive method calls. In reality, however, there may be any
+ // number of other instructions between the lock and unlock, while
+ // still constituting an empty critical section. For example,
+ // given `m.x().Lock(); m.x().Unlock()`, there will be a call to
+ // x(). In the AST-based approach, this has a tiny potential for a
+ // false positive (the second call to x might be doing work that
+ // is protected by the mutex). In an SSA-based approach, however,
+ // it would miss a lot of real bugs.
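+	//
+	// For illustration, the check flags code such as (hypothetical):
+	//
+	//	mu.Lock()
+	//	mu.Unlock()
+	//
+	// where nothing is done while the lock is held.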
+
+ mutexParams := func(s ast.Stmt) (x ast.Expr, funcName string, ok bool) {
+ expr, ok := s.(*ast.ExprStmt)
+ if !ok {
+ return nil, "", false
+ }
+ call, ok := expr.X.(*ast.CallExpr)
+ if !ok {
+ return nil, "", false
+ }
+ sel, ok := call.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return nil, "", false
+ }
+
+ fn, ok := pass.TypesInfo.ObjectOf(sel.Sel).(*types.Func)
+ if !ok {
+ return nil, "", false
+ }
+ sig := fn.Type().(*types.Signature)
+ if sig.Params().Len() != 0 || sig.Results().Len() != 0 {
+ return nil, "", false
+ }
+
+ return sel.X, fn.Name(), true
+ }
+
+ fn := func(node ast.Node) {
+ block := node.(*ast.BlockStmt)
+ if len(block.List) < 2 {
+ return
+ }
+ for i := range block.List[:len(block.List)-1] {
+ sel1, method1, ok1 := mutexParams(block.List[i])
+ sel2, method2, ok2 := mutexParams(block.List[i+1])
+
+ if !ok1 || !ok2 || Render(pass, sel1) != Render(pass, sel2) {
+ continue
+ }
+ if (method1 == "Lock" && method2 == "Unlock") ||
+ (method1 == "RLock" && method2 == "RUnlock") {
+ ReportNodef(pass, block.List[i+1], "empty critical section")
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BlockStmt)(nil)}, fn)
+ return nil, nil
+}
+
+// cgo produces code like fn(&*_Cvar_kSomeCallbacks) which we don't
+// want to flag.
+var cgoIdent = regexp.MustCompile(`^_C(func|var)_.+$`)
+
+func CheckIneffectiveCopy(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ if unary, ok := node.(*ast.UnaryExpr); ok {
+ if star, ok := unary.X.(*ast.StarExpr); ok && unary.Op == token.AND {
+ ident, ok := star.X.(*ast.Ident)
+ if !ok || !cgoIdent.MatchString(ident.Name) {
+ ReportNodef(pass, unary, "&*x will be simplified to x. It will not copy x.")
+ }
+ }
+ }
+
+ if star, ok := node.(*ast.StarExpr); ok {
+ if unary, ok := star.X.(*ast.UnaryExpr); ok && unary.Op == token.AND {
+ ReportNodef(pass, star, "*&x will be simplified to x. It will not copy x.")
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.UnaryExpr)(nil), (*ast.StarExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckDiffSizeComparison(pass *analysis.Pass) (interface{}, error) {
+ ranges := pass.ResultOf[valueRangesAnalyzer].(map[*ssa.Function]vrp.Ranges)
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, b := range ssafn.Blocks {
+ for _, ins := range b.Instrs {
+ binop, ok := ins.(*ssa.BinOp)
+ if !ok {
+ continue
+ }
+ if binop.Op != token.EQL && binop.Op != token.NEQ {
+ continue
+ }
+ _, ok1 := binop.X.(*ssa.Slice)
+ _, ok2 := binop.Y.(*ssa.Slice)
+ if !ok1 && !ok2 {
+ continue
+ }
+ r := ranges[ssafn]
+ r1, ok1 := r.Get(binop.X).(vrp.StringInterval)
+ r2, ok2 := r.Get(binop.Y).(vrp.StringInterval)
+ if !ok1 || !ok2 {
+ continue
+ }
+ if r1.Length.Intersection(r2.Length).Empty() {
+ pass.Reportf(binop.Pos(), "comparing strings of different sizes for equality will always return false")
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckCanonicalHeaderKey(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node, push bool) bool {
+ if !push {
+ return false
+ }
+ assign, ok := node.(*ast.AssignStmt)
+ if ok {
+ // TODO(dh): This risks missing some Header reads, for
+ // example in `h1["foo"] = h2["foo"]` – these edge
+ // cases are probably rare enough to ignore for now.
+ for _, expr := range assign.Lhs {
+ op, ok := expr.(*ast.IndexExpr)
+ if !ok {
+ continue
+ }
+ if IsOfType(pass, op.X, "net/http.Header") {
+ return false
+ }
+ }
+ return true
+ }
+ op, ok := node.(*ast.IndexExpr)
+ if !ok {
+ return true
+ }
+ if !IsOfType(pass, op.X, "net/http.Header") {
+ return true
+ }
+ s, ok := ExprToString(pass, op.Index)
+ if !ok {
+ return true
+ }
+ if s == http.CanonicalHeaderKey(s) {
+ return true
+ }
+ ReportNodef(pass, op, "keys in http.Header are canonicalized, %q is not canonical; fix the constant or use http.CanonicalHeaderKey", s)
+ return true
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Nodes([]ast.Node{(*ast.AssignStmt)(nil), (*ast.IndexExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckBenchmarkN(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ assign := node.(*ast.AssignStmt)
+ if len(assign.Lhs) != 1 || len(assign.Rhs) != 1 {
+ return
+ }
+ sel, ok := assign.Lhs[0].(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ if sel.Sel.Name != "N" {
+ return
+ }
+ if !IsOfType(pass, sel.X, "*testing.B") {
+ return
+ }
+ ReportNodef(pass, assign, "should not assign to %s", Render(pass, sel))
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.AssignStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckUnreadVariableValues(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if IsExample(ssafn) {
+ continue
+ }
+ node := ssafn.Syntax()
+ if node == nil {
+ continue
+ }
+ if gen, ok := Generator(pass, node.Pos()); ok && gen == facts.Goyacc {
+ // Don't flag unused values in code generated by goyacc.
+ // There may be hundreds of those due to the way the state
+ // machine is constructed.
+ continue
+ }
+
+ switchTags := map[ssa.Value]struct{}{}
+ ast.Inspect(node, func(node ast.Node) bool {
+ s, ok := node.(*ast.SwitchStmt)
+ if !ok {
+ return true
+ }
+ v, _ := ssafn.ValueForExpr(s.Tag)
+ switchTags[v] = struct{}{}
+ return true
+ })
+
+ hasUse := func(v ssa.Value) bool {
+ if _, ok := switchTags[v]; ok {
+ return true
+ }
+ refs := v.Referrers()
+ if refs == nil {
+ // TODO investigate why refs can be nil
+ return true
+ }
+ return len(FilterDebug(*refs)) > 0
+ }
+
+ ast.Inspect(node, func(node ast.Node) bool {
+ assign, ok := node.(*ast.AssignStmt)
+ if !ok {
+ return true
+ }
+ if len(assign.Lhs) > 1 && len(assign.Rhs) == 1 {
+ // Either a function call with multiple return values,
+ // or a comma-ok assignment
+
+ val, _ := ssafn.ValueForExpr(assign.Rhs[0])
+ if val == nil {
+ return true
+ }
+ refs := val.Referrers()
+ if refs == nil {
+ return true
+ }
+ for _, ref := range *refs {
+ ex, ok := ref.(*ssa.Extract)
+ if !ok {
+ continue
+ }
+ if !hasUse(ex) {
+ lhs := assign.Lhs[ex.Index]
+						if ident, ok := lhs.(*ast.Ident); !ok || ident.Name == "_" {
+ continue
+ }
+ ReportNodef(pass, lhs, "this value of %s is never used", lhs)
+ }
+ }
+ return true
+ }
+ for i, lhs := range assign.Lhs {
+ rhs := assign.Rhs[i]
+					if ident, ok := lhs.(*ast.Ident); !ok || ident.Name == "_" {
+ continue
+ }
+ val, _ := ssafn.ValueForExpr(rhs)
+ if val == nil {
+ continue
+ }
+
+ if !hasUse(val) {
+ ReportNodef(pass, lhs, "this value of %s is never used", lhs)
+ }
+ }
+ return true
+ })
+ }
+ return nil, nil
+}
+
+func CheckPredeterminedBooleanExprs(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ ssabinop, ok := ins.(*ssa.BinOp)
+ if !ok {
+ continue
+ }
+ switch ssabinop.Op {
+ case token.GTR, token.LSS, token.EQL, token.NEQ, token.LEQ, token.GEQ:
+ default:
+ continue
+ }
+
+ xs, ok1 := consts(ssabinop.X, nil, nil)
+ ys, ok2 := consts(ssabinop.Y, nil, nil)
+ if !ok1 || !ok2 || len(xs) == 0 || len(ys) == 0 {
+ continue
+ }
+
+ trues := 0
+ for _, x := range xs {
+ for _, y := range ys {
+ if x.Value == nil {
+ if y.Value == nil {
+ trues++
+ }
+ continue
+ }
+ if constant.Compare(x.Value, ssabinop.Op, y.Value) {
+ trues++
+ }
+ }
+ }
+ b := trues != 0
+ if trues == 0 || trues == len(xs)*len(ys) {
+ pass.Reportf(ssabinop.Pos(), "binary expression is always %t for all possible values (%s %s %s)",
+ b, xs, ssabinop.Op, ys)
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckNilMaps(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ mu, ok := ins.(*ssa.MapUpdate)
+ if !ok {
+ continue
+ }
+ c, ok := mu.Map.(*ssa.Const)
+ if !ok {
+ continue
+ }
+ if c.Value != nil {
+ continue
+ }
+ pass.Reportf(mu.Pos(), "assignment to nil map")
+ }
+ }
+ }
+ return nil, nil
+}
+
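+// CheckExtremeComparison flags comparisons that can never vary, e.g.
+// (hypothetically, for a var x uint8) x > math.MaxUint8, which is
+// always false, or x >= 0, which is always true.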
+func CheckExtremeComparison(pass *analysis.Pass) (interface{}, error) {
+ isobj := func(expr ast.Expr, name string) bool {
+ sel, ok := expr.(*ast.SelectorExpr)
+ if !ok {
+ return false
+ }
+ return IsObject(pass.TypesInfo.ObjectOf(sel.Sel), name)
+ }
+
+ fn := func(node ast.Node) {
+ expr := node.(*ast.BinaryExpr)
+ tx := pass.TypesInfo.TypeOf(expr.X)
+ basic, ok := tx.Underlying().(*types.Basic)
+ if !ok {
+ return
+ }
+
+ var max string
+ var min string
+
+ switch basic.Kind() {
+ case types.Uint8:
+ max = "math.MaxUint8"
+ case types.Uint16:
+ max = "math.MaxUint16"
+ case types.Uint32:
+ max = "math.MaxUint32"
+ case types.Uint64:
+ max = "math.MaxUint64"
+ case types.Uint:
+ max = "math.MaxUint64"
+
+ case types.Int8:
+ min = "math.MinInt8"
+ max = "math.MaxInt8"
+ case types.Int16:
+ min = "math.MinInt16"
+ max = "math.MaxInt16"
+ case types.Int32:
+ min = "math.MinInt32"
+ max = "math.MaxInt32"
+ case types.Int64:
+ min = "math.MinInt64"
+ max = "math.MaxInt64"
+ case types.Int:
+ min = "math.MinInt64"
+ max = "math.MaxInt64"
+ }
+
+ if (expr.Op == token.GTR || expr.Op == token.GEQ) && isobj(expr.Y, max) ||
+ (expr.Op == token.LSS || expr.Op == token.LEQ) && isobj(expr.X, max) {
+ ReportNodef(pass, expr, "no value of type %s is greater than %s", basic, max)
+ }
+ if expr.Op == token.LEQ && isobj(expr.Y, max) ||
+ expr.Op == token.GEQ && isobj(expr.X, max) {
+ ReportNodef(pass, expr, "every value of type %s is <= %s", basic, max)
+ }
+
+ if (basic.Info() & types.IsUnsigned) != 0 {
+ if (expr.Op == token.LSS || expr.Op == token.LEQ) && IsIntLiteral(expr.Y, "0") ||
+ (expr.Op == token.GTR || expr.Op == token.GEQ) && IsIntLiteral(expr.X, "0") {
+ ReportNodef(pass, expr, "no value of type %s is less than 0", basic)
+ }
+ if expr.Op == token.GEQ && IsIntLiteral(expr.Y, "0") ||
+ expr.Op == token.LEQ && IsIntLiteral(expr.X, "0") {
+ ReportNodef(pass, expr, "every value of type %s is >= 0", basic)
+ }
+ } else {
+ if (expr.Op == token.LSS || expr.Op == token.LEQ) && isobj(expr.Y, min) ||
+ (expr.Op == token.GTR || expr.Op == token.GEQ) && isobj(expr.X, min) {
+ ReportNodef(pass, expr, "no value of type %s is less than %s", basic, min)
+ }
+ if expr.Op == token.GEQ && isobj(expr.Y, min) ||
+ expr.Op == token.LEQ && isobj(expr.X, min) {
+ ReportNodef(pass, expr, "every value of type %s is >= %s", basic, min)
+ }
+ }
+
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func consts(val ssa.Value, out []*ssa.Const, visitedPhis map[string]bool) ([]*ssa.Const, bool) {
+ if visitedPhis == nil {
+ visitedPhis = map[string]bool{}
+ }
+ var ok bool
+ switch val := val.(type) {
+ case *ssa.Phi:
+ if visitedPhis[val.Name()] {
+ break
+ }
+ visitedPhis[val.Name()] = true
+ vals := val.Operands(nil)
+ for _, phival := range vals {
+ out, ok = consts(*phival, out, visitedPhis)
+ if !ok {
+ return nil, false
+ }
+ }
+ case *ssa.Const:
+ out = append(out, val)
+ case *ssa.Convert:
+ out, ok = consts(val.X, out, visitedPhis)
+ if !ok {
+ return nil, false
+ }
+ default:
+ return nil, false
+ }
+ if len(out) < 2 {
+ return out, true
+ }
+ uniq := []*ssa.Const{out[0]}
+ for _, val := range out[1:] {
+ if val.Value == uniq[len(uniq)-1].Value {
+ continue
+ }
+ uniq = append(uniq, val)
+ }
+ return uniq, true
+}
+
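+// CheckLoopCondition flags loops whose condition reads a variable that
+// the loop body and post statement never change, e.g. (hypothetically)
+//
+//	for i := 0; i < n; j++ { ... }
+//
+// where the post statement increments j while the condition tests i.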
+func CheckLoopCondition(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ fn := func(node ast.Node) bool {
+ loop, ok := node.(*ast.ForStmt)
+ if !ok {
+ return true
+ }
+ if loop.Init == nil || loop.Cond == nil || loop.Post == nil {
+ return true
+ }
+ init, ok := loop.Init.(*ast.AssignStmt)
+ if !ok || len(init.Lhs) != 1 || len(init.Rhs) != 1 {
+ return true
+ }
+ cond, ok := loop.Cond.(*ast.BinaryExpr)
+ if !ok {
+ return true
+ }
+ x, ok := cond.X.(*ast.Ident)
+ if !ok {
+ return true
+ }
+ lhs, ok := init.Lhs[0].(*ast.Ident)
+ if !ok {
+ return true
+ }
+ if x.Obj != lhs.Obj {
+ return true
+ }
+ if _, ok := loop.Post.(*ast.IncDecStmt); !ok {
+ return true
+ }
+
+ v, isAddr := ssafn.ValueForExpr(cond.X)
+ if v == nil || isAddr {
+ return true
+ }
+ switch v := v.(type) {
+ case *ssa.Phi:
+ ops := v.Operands(nil)
+ if len(ops) != 2 {
+ return true
+ }
+ _, ok := (*ops[0]).(*ssa.Const)
+ if !ok {
+ return true
+ }
+ sigma, ok := (*ops[1]).(*ssa.Sigma)
+ if !ok {
+ return true
+ }
+ if sigma.X != v {
+ return true
+ }
+ case *ssa.UnOp:
+ return true
+ }
+ ReportNodef(pass, cond, "variable in loop condition never changes")
+
+ return true
+ }
+ Inspect(ssafn.Syntax(), fn)
+ }
+ return nil, nil
+}
+
+func CheckArgOverwritten(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ fn := func(node ast.Node) bool {
+ var typ *ast.FuncType
+ var body *ast.BlockStmt
+ switch fn := node.(type) {
+ case *ast.FuncDecl:
+ typ = fn.Type
+ body = fn.Body
+ case *ast.FuncLit:
+ typ = fn.Type
+ body = fn.Body
+ }
+ if body == nil {
+ return true
+ }
+ if len(typ.Params.List) == 0 {
+ return true
+ }
+ for _, field := range typ.Params.List {
+ for _, arg := range field.Names {
+ obj := pass.TypesInfo.ObjectOf(arg)
+ var ssaobj *ssa.Parameter
+ for _, param := range ssafn.Params {
+ if param.Object() == obj {
+ ssaobj = param
+ break
+ }
+ }
+ if ssaobj == nil {
+ continue
+ }
+ refs := ssaobj.Referrers()
+ if refs == nil {
+ continue
+ }
+ if len(FilterDebug(*refs)) != 0 {
+ continue
+ }
+
+ assigned := false
+ ast.Inspect(body, func(node ast.Node) bool {
+ assign, ok := node.(*ast.AssignStmt)
+ if !ok {
+ return true
+ }
+ for _, lhs := range assign.Lhs {
+ ident, ok := lhs.(*ast.Ident)
+ if !ok {
+ continue
+ }
+ if pass.TypesInfo.ObjectOf(ident) == obj {
+ assigned = true
+ return false
+ }
+ }
+ return true
+ })
+ if assigned {
+ ReportNodef(pass, arg, "argument %s is overwritten before first use", arg)
+ }
+ }
+ }
+ return true
+ }
+ Inspect(ssafn.Syntax(), fn)
+ }
+ return nil, nil
+}
+
+func CheckIneffectiveLoop(pass *analysis.Pass) (interface{}, error) {
+ // This check detects some, but not all unconditional loop exits.
+ // We give up in the following cases:
+ //
+ // - a goto anywhere in the loop. The goto might skip over our
+ // return, and we don't check that it doesn't.
+ //
+ // - any nested, unlabelled continue, even if it is in another
+ // loop or closure.
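+	//
+	// For illustration, a loop such as (hypothetical)
+	//
+	//	for _, x := range xs {
+	//		if x < 0 {
+	//			log.Println(x)
+	//		}
+	//		return x
+	//	}
+	//
+	// is flagged, because every iteration ends at the return statement.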
+ fn := func(node ast.Node) {
+ var body *ast.BlockStmt
+ switch fn := node.(type) {
+ case *ast.FuncDecl:
+ body = fn.Body
+ case *ast.FuncLit:
+ body = fn.Body
+ default:
+ panic(fmt.Sprintf("unreachable: %T", node))
+ }
+ if body == nil {
+ return
+ }
+ labels := map[*ast.Object]ast.Stmt{}
+ ast.Inspect(body, func(node ast.Node) bool {
+ label, ok := node.(*ast.LabeledStmt)
+ if !ok {
+ return true
+ }
+ labels[label.Label.Obj] = label.Stmt
+ return true
+ })
+
+ ast.Inspect(body, func(node ast.Node) bool {
+ var loop ast.Node
+ var body *ast.BlockStmt
+ switch node := node.(type) {
+ case *ast.ForStmt:
+ body = node.Body
+ loop = node
+ case *ast.RangeStmt:
+ typ := pass.TypesInfo.TypeOf(node.X)
+ if _, ok := typ.Underlying().(*types.Map); ok {
+ // looping once over a map is a valid pattern for
+ // getting an arbitrary element.
+ return true
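+					// (e.g. for k := range m { return k } picks an
+					// arbitrary key and is deliberately not flagged)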
+ }
+ body = node.Body
+ loop = node
+ default:
+ return true
+ }
+ if len(body.List) < 2 {
+ // avoid flagging the somewhat common pattern of using
+ // a range loop to get the first element in a slice,
+ // or the first rune in a string.
+ return true
+ }
+ var unconditionalExit ast.Node
+ hasBranching := false
+ for _, stmt := range body.List {
+ switch stmt := stmt.(type) {
+ case *ast.BranchStmt:
+ switch stmt.Tok {
+ case token.BREAK:
+ if stmt.Label == nil || labels[stmt.Label.Obj] == loop {
+ unconditionalExit = stmt
+ }
+ case token.CONTINUE:
+ if stmt.Label == nil || labels[stmt.Label.Obj] == loop {
+ unconditionalExit = nil
+ return false
+ }
+ }
+ case *ast.ReturnStmt:
+ unconditionalExit = stmt
+ case *ast.IfStmt, *ast.ForStmt, *ast.RangeStmt, *ast.SwitchStmt, *ast.SelectStmt:
+ hasBranching = true
+ }
+ }
+ if unconditionalExit == nil || !hasBranching {
+ return false
+ }
+ ast.Inspect(body, func(node ast.Node) bool {
+ if branch, ok := node.(*ast.BranchStmt); ok {
+
+ switch branch.Tok {
+ case token.GOTO:
+ unconditionalExit = nil
+ return false
+ case token.CONTINUE:
+ if branch.Label != nil && labels[branch.Label.Obj] != loop {
+ return true
+ }
+ unconditionalExit = nil
+ return false
+ }
+ }
+ return true
+ })
+ if unconditionalExit != nil {
+ ReportNodef(pass, unconditionalExit, "the surrounding loop is unconditionally terminated")
+ }
+ return true
+ })
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.FuncDecl)(nil), (*ast.FuncLit)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckNilContext(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ if len(call.Args) == 0 {
+ return
+ }
+ if typ, ok := pass.TypesInfo.TypeOf(call.Args[0]).(*types.Basic); !ok || typ.Kind() != types.UntypedNil {
+ return
+ }
+ sig, ok := pass.TypesInfo.TypeOf(call.Fun).(*types.Signature)
+ if !ok {
+ return
+ }
+ if sig.Params().Len() == 0 {
+ return
+ }
+ if !IsType(sig.Params().At(0).Type(), "context.Context") {
+ return
+ }
+ ReportNodef(pass, call.Args[0],
+ "do not pass a nil Context, even if a function permits it; pass context.TODO if you are unsure about which Context to use")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckSeeker(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ sel, ok := call.Fun.(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ if sel.Sel.Name != "Seek" {
+ return
+ }
+ if len(call.Args) != 2 {
+ return
+ }
+ arg0, ok := call.Args[Arg("(io.Seeker).Seek.offset")].(*ast.SelectorExpr)
+ if !ok {
+ return
+ }
+ switch arg0.Sel.Name {
+ case "SeekStart", "SeekCurrent", "SeekEnd":
+ default:
+ return
+ }
+ pkg, ok := arg0.X.(*ast.Ident)
+ if !ok {
+ return
+ }
+ if pkg.Name != "io" {
+ return
+ }
+ ReportNodef(pass, call, "the first argument of io.Seeker is the offset, but an io.Seek* constant is being used instead")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckIneffectiveAppend(pass *analysis.Pass) (interface{}, error) {
+ isAppend := func(ins ssa.Value) bool {
+ call, ok := ins.(*ssa.Call)
+ if !ok {
+ return false
+ }
+ if call.Call.IsInvoke() {
+ return false
+ }
+ if builtin, ok := call.Call.Value.(*ssa.Builtin); !ok || builtin.Name() != "append" {
+ return false
+ }
+ return true
+ }
+
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ val, ok := ins.(ssa.Value)
+ if !ok || !isAppend(val) {
+ continue
+ }
+
+ isUsed := false
+ visited := map[ssa.Instruction]bool{}
+ var walkRefs func(refs []ssa.Instruction)
+ walkRefs = func(refs []ssa.Instruction) {
+ loop:
+ for _, ref := range refs {
+ if visited[ref] {
+ continue
+ }
+ visited[ref] = true
+ if _, ok := ref.(*ssa.DebugRef); ok {
+ continue
+ }
+ switch ref := ref.(type) {
+ case *ssa.Phi:
+ walkRefs(*ref.Referrers())
+ case *ssa.Sigma:
+ walkRefs(*ref.Referrers())
+ case ssa.Value:
+ if !isAppend(ref) {
+ isUsed = true
+ } else {
+ walkRefs(*ref.Referrers())
+ }
+ case ssa.Instruction:
+ isUsed = true
+ break loop
+ }
+ }
+ }
+ refs := val.Referrers()
+ if refs == nil {
+ continue
+ }
+ walkRefs(*refs)
+ if !isUsed {
+ pass.Reportf(ins.Pos(), "this result of append is never used, except maybe in other appends")
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckConcurrentTesting(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ gostmt, ok := ins.(*ssa.Go)
+ if !ok {
+ continue
+ }
+ var fn *ssa.Function
+ switch val := gostmt.Call.Value.(type) {
+ case *ssa.Function:
+ fn = val
+ case *ssa.MakeClosure:
+ fn = val.Fn.(*ssa.Function)
+ default:
+ continue
+ }
+ if fn.Blocks == nil {
+ continue
+ }
+ for _, block := range fn.Blocks {
+ for _, ins := range block.Instrs {
+ call, ok := ins.(*ssa.Call)
+ if !ok {
+ continue
+ }
+ if call.Call.IsInvoke() {
+ continue
+ }
+ callee := call.Call.StaticCallee()
+ if callee == nil {
+ continue
+ }
+ recv := callee.Signature.Recv()
+ if recv == nil {
+ continue
+ }
+ if !IsType(recv.Type(), "*testing.common") {
+ continue
+ }
+ fn, ok := call.Call.StaticCallee().Object().(*types.Func)
+ if !ok {
+ continue
+ }
+ name := fn.Name()
+ switch name {
+ case "FailNow", "Fatal", "Fatalf", "SkipNow", "Skip", "Skipf":
+ default:
+ continue
+ }
+ pass.Reportf(gostmt.Pos(), "the goroutine calls T.%s, which must be called in the same goroutine as the test", name)
+ }
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func eachCall(ssafn *ssa.Function, fn func(caller *ssa.Function, site ssa.CallInstruction, callee *ssa.Function)) {
+ for _, b := range ssafn.Blocks {
+ for _, instr := range b.Instrs {
+ if site, ok := instr.(ssa.CallInstruction); ok {
+ if g := site.Common().StaticCallee(); g != nil {
+ fn(ssafn, site, g)
+ }
+ }
+ }
+ }
+}
+
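+// CheckCyclicFinalizer flags runtime.SetFinalizer calls whose finalizer
+// closure captures the object itself, e.g. (hypothetically)
+//
+//	x := new(T)
+//	runtime.SetFinalizer(x, func(*T) { fmt.Println(x) })
+//
+// The closure keeps x reachable, so the finalizer can never run.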
+func CheckCyclicFinalizer(pass *analysis.Pass) (interface{}, error) {
+ fn := func(caller *ssa.Function, site ssa.CallInstruction, callee *ssa.Function) {
+ if callee.RelString(nil) != "runtime.SetFinalizer" {
+ return
+ }
+ arg0 := site.Common().Args[Arg("runtime.SetFinalizer.obj")]
+ if iface, ok := arg0.(*ssa.MakeInterface); ok {
+ arg0 = iface.X
+ }
+ unop, ok := arg0.(*ssa.UnOp)
+ if !ok {
+ return
+ }
+ v, ok := unop.X.(*ssa.Alloc)
+ if !ok {
+ return
+ }
+ arg1 := site.Common().Args[Arg("runtime.SetFinalizer.finalizer")]
+ if iface, ok := arg1.(*ssa.MakeInterface); ok {
+ arg1 = iface.X
+ }
+ mc, ok := arg1.(*ssa.MakeClosure)
+ if !ok {
+ return
+ }
+ for _, b := range mc.Bindings {
+ if b == v {
+ pos := lint.DisplayPosition(pass.Fset, mc.Fn.Pos())
+ pass.Reportf(site.Pos(), "the finalizer closes over the object, preventing the finalizer from ever running (at %s)", pos)
+ }
+ }
+ }
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ eachCall(ssafn, fn)
+ }
+ return nil, nil
+}
+
+/*
+func CheckSliceOutOfBounds(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ ia, ok := ins.(*ssa.IndexAddr)
+ if !ok {
+ continue
+ }
+ if _, ok := ia.X.Type().Underlying().(*types.Slice); !ok {
+ continue
+ }
+ sr, ok1 := c.funcDescs.Get(ssafn).Ranges[ia.X].(vrp.SliceInterval)
+ idxr, ok2 := c.funcDescs.Get(ssafn).Ranges[ia.Index].(vrp.IntInterval)
+ if !ok1 || !ok2 || !sr.IsKnown() || !idxr.IsKnown() || sr.Length.Empty() || idxr.Empty() {
+ continue
+ }
+ if idxr.Lower.Cmp(sr.Length.Upper) >= 0 {
+ ReportNodef(pass, ia, "index out of bounds")
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+*/
+
+func CheckDeferLock(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ instrs := FilterDebug(block.Instrs)
+ if len(instrs) < 2 {
+ continue
+ }
+ for i, ins := range instrs[:len(instrs)-1] {
+ call, ok := ins.(*ssa.Call)
+ if !ok {
+ continue
+ }
+ if !IsCallTo(call.Common(), "(*sync.Mutex).Lock") && !IsCallTo(call.Common(), "(*sync.RWMutex).RLock") {
+ continue
+ }
+ nins, ok := instrs[i+1].(*ssa.Defer)
+ if !ok {
+ continue
+ }
+ if !IsCallTo(&nins.Call, "(*sync.Mutex).Lock") && !IsCallTo(&nins.Call, "(*sync.RWMutex).RLock") {
+ continue
+ }
+ if call.Common().Args[0] != nins.Call.Args[0] {
+ continue
+ }
+ name := shortCallName(call.Common())
+ alt := ""
+ switch name {
+ case "Lock":
+ alt = "Unlock"
+ case "RLock":
+ alt = "RUnlock"
+ }
+ pass.Reportf(nins.Pos(), "deferring %s right after having locked already; did you mean to defer %s?", name, alt)
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckNaNComparison(pass *analysis.Pass) (interface{}, error) {
+ isNaN := func(v ssa.Value) bool {
+ call, ok := v.(*ssa.Call)
+ if !ok {
+ return false
+ }
+ return IsCallTo(call.Common(), "math.NaN")
+ }
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ ins, ok := ins.(*ssa.BinOp)
+ if !ok {
+ continue
+ }
+ if isNaN(ins.X) || isNaN(ins.Y) {
+ pass.Reportf(ins.Pos(), "no value is equal to NaN, not even NaN itself")
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckInfiniteRecursion(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ eachCall(ssafn, func(caller *ssa.Function, site ssa.CallInstruction, callee *ssa.Function) {
+ if callee != ssafn {
+ return
+ }
+ if _, ok := site.(*ssa.Go); ok {
+ // Recursively spawning goroutines doesn't consume
+ // stack space infinitely, so don't flag it.
+ return
+ }
+
+ block := site.Block()
+ canReturn := false
+ for _, b := range ssafn.Blocks {
+ if block.Dominates(b) {
+ continue
+ }
+ if len(b.Instrs) == 0 {
+ continue
+ }
+ if _, ok := b.Instrs[len(b.Instrs)-1].(*ssa.Return); ok {
+ canReturn = true
+ break
+ }
+ }
+ if canReturn {
+ return
+ }
+ pass.Reportf(site.Pos(), "infinite recursive call")
+ })
+ }
+ return nil, nil
+}
+
+func objectName(obj types.Object) string {
+ if obj == nil {
+ return "<nil>"
+ }
+ var name string
+ if obj.Pkg() != nil && obj.Pkg().Scope().Lookup(obj.Name()) == obj {
+ s := obj.Pkg().Path()
+ if s != "" {
+ name += s + "."
+ }
+ }
+ name += obj.Name()
+ return name
+}
+
+func isName(pass *analysis.Pass, expr ast.Expr, name string) bool {
+ var obj types.Object
+ switch expr := expr.(type) {
+ case *ast.Ident:
+ obj = pass.TypesInfo.ObjectOf(expr)
+ case *ast.SelectorExpr:
+ obj = pass.TypesInfo.ObjectOf(expr.Sel)
+ }
+ return objectName(obj) == name
+}
+
+func CheckLeakyTimeTick(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if IsInMain(pass, ssafn) || IsInTest(pass, ssafn) {
+ continue
+ }
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ call, ok := ins.(*ssa.Call)
+ if !ok || !IsCallTo(call.Common(), "time.Tick") {
+ continue
+ }
+ if !functions.Terminates(call.Parent()) {
+ continue
+ }
+ pass.Reportf(call.Pos(), "using time.Tick leaks the underlying ticker, consider using it only in endless functions, tests and the main package, and use time.NewTicker here")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckDoubleNegation(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ unary1 := node.(*ast.UnaryExpr)
+ unary2, ok := unary1.X.(*ast.UnaryExpr)
+ if !ok {
+ return
+ }
+ if unary1.Op != token.NOT || unary2.Op != token.NOT {
+ return
+ }
+ ReportNodef(pass, unary1, "negating a boolean twice has no effect; is this a typo?")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.UnaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func hasSideEffects(node ast.Node) bool {
+ dynamic := false
+ ast.Inspect(node, func(node ast.Node) bool {
+ switch node := node.(type) {
+ case *ast.CallExpr:
+ dynamic = true
+ return false
+ case *ast.UnaryExpr:
+ if node.Op == token.ARROW {
+ dynamic = true
+ return false
+ }
+ }
+ return true
+ })
+ return dynamic
+}
+
+func CheckRepeatedIfElse(pass *analysis.Pass) (interface{}, error) {
+ seen := map[ast.Node]bool{}
+
+ var collectConds func(ifstmt *ast.IfStmt, inits []ast.Stmt, conds []ast.Expr) ([]ast.Stmt, []ast.Expr)
+ collectConds = func(ifstmt *ast.IfStmt, inits []ast.Stmt, conds []ast.Expr) ([]ast.Stmt, []ast.Expr) {
+ seen[ifstmt] = true
+ if ifstmt.Init != nil {
+ inits = append(inits, ifstmt.Init)
+ }
+ conds = append(conds, ifstmt.Cond)
+ if elsestmt, ok := ifstmt.Else.(*ast.IfStmt); ok {
+ return collectConds(elsestmt, inits, conds)
+ }
+ return inits, conds
+ }
+ fn := func(node ast.Node) {
+ ifstmt := node.(*ast.IfStmt)
+ if seen[ifstmt] {
+ return
+ }
+ inits, conds := collectConds(ifstmt, nil, nil)
+ if len(inits) > 0 {
+ return
+ }
+ for _, cond := range conds {
+ if hasSideEffects(cond) {
+ return
+ }
+ }
+ counts := map[string]int{}
+ for _, cond := range conds {
+ s := Render(pass, cond)
+ counts[s]++
+ if counts[s] == 2 {
+ ReportNodef(pass, cond, "this condition occurs multiple times in this if/else if chain")
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.IfStmt)(nil)}, fn)
+ return nil, nil
+}
+
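+// CheckSillyBitwiseOps flags expressions such as x & 0, which is always
+// 0, and x | 0 or x ^ 0, which always equal x; these usually indicate a
+// typo or a leftover line in a table of constants.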
+func CheckSillyBitwiseOps(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ ins, ok := ins.(*ssa.BinOp)
+ if !ok {
+ continue
+ }
+
+ if c, ok := ins.Y.(*ssa.Const); !ok || c.Value == nil || c.Value.Kind() != constant.Int || c.Uint64() != 0 {
+ continue
+ }
+ switch ins.Op {
+ case token.AND, token.OR, token.XOR:
+ default:
+ // we do not flag shifts because too often, x<<0 is part
+ // of a pattern, x<<0, x<<8, x<<16, ...
+ continue
+ }
+ path, _ := astutil.PathEnclosingInterval(File(pass, ins), ins.Pos(), ins.Pos())
+ if len(path) == 0 {
+ continue
+ }
+ if node, ok := path[0].(*ast.BinaryExpr); !ok || !IsZero(node.Y) {
+ continue
+ }
+
+ switch ins.Op {
+ case token.AND:
+ pass.Reportf(ins.Pos(), "x & 0 always equals 0")
+ case token.OR, token.XOR:
+ pass.Reportf(ins.Pos(), "x %s 0 always equals x", ins.Op)
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
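+// CheckNonOctalFileMode flags decimal literals that look like octal
+// file modes, e.g. (hypothetically) os.Chmod(path, 755): the literal
+// 755 is decimal and evaluates to mode 01363, not the intended 0755.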
+func CheckNonOctalFileMode(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ call := node.(*ast.CallExpr)
+ sig, ok := pass.TypesInfo.TypeOf(call.Fun).(*types.Signature)
+ if !ok {
+ return
+ }
+ n := sig.Params().Len()
+ var args []int
+ for i := 0; i < n; i++ {
+ typ := sig.Params().At(i).Type()
+ if IsType(typ, "os.FileMode") {
+ args = append(args, i)
+ }
+ }
+ for _, i := range args {
+ lit, ok := call.Args[i].(*ast.BasicLit)
+ if !ok {
+ continue
+ }
+ if len(lit.Value) == 3 &&
+ lit.Value[0] != '0' &&
+ lit.Value[0] >= '0' && lit.Value[0] <= '7' &&
+ lit.Value[1] >= '0' && lit.Value[1] <= '7' &&
+ lit.Value[2] >= '0' && lit.Value[2] <= '7' {
+
+ v, err := strconv.ParseInt(lit.Value, 10, 64)
+ if err != nil {
+ continue
+ }
+ ReportNodef(pass, call.Args[i], "file mode '%s' evaluates to %#o; did you mean '0%s'?", lit.Value, v, lit.Value)
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckPureFunctions(pass *analysis.Pass) (interface{}, error) {
+ pure := pass.ResultOf[facts.Purity].(facts.PurityResult)
+
+fnLoop:
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if IsInTest(pass, ssafn) {
+ params := ssafn.Signature.Params()
+ for i := 0; i < params.Len(); i++ {
+ param := params.At(i)
+ if IsType(param.Type(), "*testing.B") {
+ // Ignore discarded pure functions in code related
+ // to benchmarks. Instead of matching BenchmarkFoo
+ // functions, we match any function accepting a
+ // *testing.B. Benchmarks sometimes call generic
+ // functions for doing the actual work, and
+ // checking for the parameter is a lot easier and
+ // faster than analyzing call trees.
+ continue fnLoop
+ }
+ }
+ }
+
+ for _, b := range ssafn.Blocks {
+ for _, ins := range b.Instrs {
+ ins, ok := ins.(*ssa.Call)
+ if !ok {
+ continue
+ }
+ refs := ins.Referrers()
+ if refs == nil || len(FilterDebug(*refs)) > 0 {
+ continue
+ }
+ callee := ins.Common().StaticCallee()
+ if callee == nil {
+ continue
+ }
+ if callee.Object() == nil {
+ // TODO(dh): support anonymous functions
+ continue
+ }
+ if _, ok := pure[callee.Object().(*types.Func)]; ok {
+ pass.Reportf(ins.Pos(), "%s is a pure function but its return value is ignored", callee.Name())
+ continue
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckDeprecated(pass *analysis.Pass) (interface{}, error) {
+ deprs := pass.ResultOf[facts.Deprecated].(facts.DeprecatedResult)
+
+ // Selectors can appear outside of function literals, e.g. when
+ // declaring package level variables.
+
+ var tfn types.Object
+ stack := 0
+ fn := func(node ast.Node, push bool) bool {
+ if !push {
+ stack--
+ return false
+ }
+ stack++
+ if stack == 1 {
+ tfn = nil
+ }
+ if fn, ok := node.(*ast.FuncDecl); ok {
+ tfn = pass.TypesInfo.ObjectOf(fn.Name)
+ }
+ sel, ok := node.(*ast.SelectorExpr)
+ if !ok {
+ return true
+ }
+
+ obj := pass.TypesInfo.ObjectOf(sel.Sel)
+ if obj.Pkg() == nil {
+ return true
+ }
+ if pass.Pkg == obj.Pkg() || obj.Pkg().Path()+"_test" == pass.Pkg.Path() {
+ // Don't flag stuff in our own package
+ return true
+ }
+ if depr, ok := deprs.Objects[obj]; ok {
+ // Look for the first available alternative, not the first
+			// version something was deprecated in. If a function was
+			// deprecated in Go 1.6 but an alternative had already been
+			// available since 1.0, and we're targeting 1.2, it still
+			// makes sense to use the alternative from 1.0, to be
+			// future-proof.
+ minVersion := deprecated.Stdlib[SelectorName(pass, sel)].AlternativeAvailableSince
+ if !IsGoVersion(pass, minVersion) {
+ return true
+ }
+
+ if tfn != nil {
+ if _, ok := deprs.Objects[tfn]; ok {
+ // functions that are deprecated may use deprecated
+ // symbols
+ return true
+ }
+ }
+ ReportNodef(pass, sel, "%s is deprecated: %s", Render(pass, sel), depr.Msg)
+ return true
+ }
+ return true
+ }
+
+ imps := map[string]*types.Package{}
+ for _, imp := range pass.Pkg.Imports() {
+ imps[imp.Path()] = imp
+ }
+ fn2 := func(node ast.Node) {
+ spec := node.(*ast.ImportSpec)
+ p := spec.Path.Value
+ path := p[1 : len(p)-1]
+ imp := imps[path]
+ if depr, ok := deprs.Packages[imp]; ok {
+ ReportNodef(pass, spec, "Package %s is deprecated: %s", path, depr.Msg)
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Nodes(nil, fn)
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.ImportSpec)(nil)}, fn2)
+ return nil, nil
+}
+
+func callChecker(rules map[string]CallCheck) func(pass *analysis.Pass) (interface{}, error) {
+ return func(pass *analysis.Pass) (interface{}, error) {
+ return checkCalls(pass, rules)
+ }
+}
+
+func checkCalls(pass *analysis.Pass, rules map[string]CallCheck) (interface{}, error) {
+ ranges := pass.ResultOf[valueRangesAnalyzer].(map[*ssa.Function]vrp.Ranges)
+ fn := func(caller *ssa.Function, site ssa.CallInstruction, callee *ssa.Function) {
+ obj, ok := callee.Object().(*types.Func)
+ if !ok {
+ return
+ }
+
+ r, ok := rules[lint.FuncName(obj)]
+ if !ok {
+ return
+ }
+ var args []*Argument
+ ssaargs := site.Common().Args
+ if callee.Signature.Recv() != nil {
+ ssaargs = ssaargs[1:]
+ }
+ for _, arg := range ssaargs {
+ if iarg, ok := arg.(*ssa.MakeInterface); ok {
+ arg = iarg.X
+ }
+ vr := ranges[site.Parent()][arg]
+ args = append(args, &Argument{Value: Value{arg, vr}})
+ }
+ call := &Call{
+ Pass: pass,
+ Instr: site,
+ Args: args,
+ Parent: site.Parent(),
+ }
+ r(call)
+ for idx, arg := range call.Args {
+ _ = idx
+ for _, e := range arg.invalids {
+ // path, _ := astutil.PathEnclosingInterval(f.File, edge.Site.Pos(), edge.Site.Pos())
+ // if len(path) < 2 {
+ // continue
+ // }
+ // astcall, ok := path[0].(*ast.CallExpr)
+ // if !ok {
+ // continue
+ // }
+ // pass.Reportf(astcall.Args[idx], "%s", e)
+
+ pass.Reportf(site.Pos(), "%s", e)
+ }
+ }
+ for _, e := range call.invalids {
+ pass.Reportf(call.Instr.Common().Pos(), "%s", e)
+ }
+ }
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ eachCall(ssafn, fn)
+ }
+ return nil, nil
+}
+
+func shortCallName(call *ssa.CallCommon) string {
+ if call.IsInvoke() {
+ return ""
+ }
+ switch v := call.Value.(type) {
+ case *ssa.Function:
+ fn, ok := v.Object().(*types.Func)
+ if !ok {
+ return ""
+ }
+ return fn.Name()
+ case *ssa.Builtin:
+ return v.Name()
+ }
+ return ""
+}
+
+func CheckWriterBufferModified(pass *analysis.Pass) (interface{}, error) {
+ // TODO(dh): this might be a good candidate for taint analysis.
+ // Taint the argument as MUST_NOT_MODIFY, then propagate that
+ // through functions like bytes.Split
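+	//
+	// For illustration, a Write method like (hypothetical)
+	//
+	//	func (w *W) Write(p []byte) (int, error) {
+	//		p[0] ^= 0xff // mutates the caller's buffer
+	//		return len(p), nil
+	//	}
+	//
+	// violates the io.Writer contract and is reported below.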
+
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ sig := ssafn.Signature
+ if ssafn.Name() != "Write" || sig.Recv() == nil || sig.Params().Len() != 1 || sig.Results().Len() != 2 {
+ continue
+ }
+ tArg, ok := sig.Params().At(0).Type().(*types.Slice)
+ if !ok {
+ continue
+ }
+ if basic, ok := tArg.Elem().(*types.Basic); !ok || basic.Kind() != types.Byte {
+ continue
+ }
+ if basic, ok := sig.Results().At(0).Type().(*types.Basic); !ok || basic.Kind() != types.Int {
+ continue
+ }
+ if named, ok := sig.Results().At(1).Type().(*types.Named); !ok || !IsType(named, "error") {
+ continue
+ }
+
+ for _, block := range ssafn.Blocks {
+ for _, ins := range block.Instrs {
+ switch ins := ins.(type) {
+ case *ssa.Store:
+ addr, ok := ins.Addr.(*ssa.IndexAddr)
+ if !ok {
+ continue
+ }
+ if addr.X != ssafn.Params[1] {
+ continue
+ }
+ pass.Reportf(ins.Pos(), "io.Writer.Write must not modify the provided buffer, not even temporarily")
+ case *ssa.Call:
+ if !IsCallTo(ins.Common(), "append") {
+ continue
+ }
+ if ins.Common().Args[0] != ssafn.Params[1] {
+ continue
+ }
+ pass.Reportf(ins.Pos(), "io.Writer.Write must not modify the provided buffer, not even temporarily")
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func loopedRegexp(name string) CallCheck {
+ return func(call *Call) {
+ if len(extractConsts(call.Args[0].Value.Value)) == 0 {
+ return
+ }
+ if !isInLoop(call.Instr.Block()) {
+ return
+ }
+ call.Invalid(fmt.Sprintf("calling %s in a loop has poor performance, consider using regexp.Compile", name))
+ }
+}
+
+func CheckEmptyBranch(pass *analysis.Pass) (interface{}, error) {
+ for _, ssafn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if ssafn.Syntax() == nil {
+ continue
+ }
+ if IsExample(ssafn) {
+ continue
+ }
+ fn := func(node ast.Node) bool {
+ ifstmt, ok := node.(*ast.IfStmt)
+ if !ok {
+ return true
+ }
+ if ifstmt.Else != nil {
+ b, ok := ifstmt.Else.(*ast.BlockStmt)
+ if !ok || len(b.List) != 0 {
+ return true
+ }
+ ReportfFG(pass, ifstmt.Else.Pos(), "empty branch")
+ }
+ if len(ifstmt.Body.List) != 0 {
+ return true
+ }
+ ReportfFG(pass, ifstmt.Pos(), "empty branch")
+ return true
+ }
+ Inspect(ssafn.Syntax(), fn)
+ }
+ return nil, nil
+}
+
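+// CheckMapBytesKey flags lookups written as (hypothetically)
+//
+//	k := string(byteKey)
+//	v := m[k]
+//
+// where m[string(byteKey)] would let the compiler avoid allocating the
+// intermediate string.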
+func CheckMapBytesKey(pass *analysis.Pass) (interface{}, error) {
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, b := range fn.Blocks {
+ insLoop:
+ for _, ins := range b.Instrs {
+ // find []byte -> string conversions
+ conv, ok := ins.(*ssa.Convert)
+ if !ok || conv.Type() != types.Universe.Lookup("string").Type() {
+ continue
+ }
+ if s, ok := conv.X.Type().(*types.Slice); !ok || s.Elem() != types.Universe.Lookup("byte").Type() {
+ continue
+ }
+ refs := conv.Referrers()
+ // need at least two (DebugRef) references: the
+ // conversion and the *ast.Ident
+ if refs == nil || len(*refs) < 2 {
+ continue
+ }
+ ident := false
+ // skip first reference, that's the conversion itself
+ for _, ref := range (*refs)[1:] {
+ switch ref := ref.(type) {
+ case *ssa.DebugRef:
+ if _, ok := ref.Expr.(*ast.Ident); !ok {
+ // the string seems to be used somewhere
+ // unexpected; the default branch should
+ // catch this already, but be safe
+ continue insLoop
+ } else {
+ ident = true
+ }
+ case *ssa.Lookup:
+ default:
+ // the string is used somewhere other
+ // than a map lookup
+ continue insLoop
+ }
+ }
+
+ // the result of the conversion wasn't assigned to an
+ // identifier
+ if !ident {
+ continue
+ }
+ pass.Reportf(conv.Pos(), "m[string(key)] would be more efficient than k := string(key); m[k]")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckRangeStringRunes(pass *analysis.Pass) (interface{}, error) {
+ return sharedcheck.CheckRangeStringRunes(pass)
+}
+
+func CheckSelfAssignment(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ assign := node.(*ast.AssignStmt)
+ if assign.Tok != token.ASSIGN || len(assign.Lhs) != len(assign.Rhs) {
+ return
+ }
+ for i, stmt := range assign.Lhs {
+ rlh := Render(pass, stmt)
+ rrh := Render(pass, assign.Rhs[i])
+ if rlh == rrh {
+ ReportfFG(pass, assign.Pos(), "self-assignment of %s to %s", rrh, rlh)
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.AssignStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func buildTagsIdentical(s1, s2 []string) bool {
+ if len(s1) != len(s2) {
+ return false
+ }
+ s1s := make([]string, len(s1))
+ copy(s1s, s1)
+ sort.Strings(s1s)
+ s2s := make([]string, len(s2))
+ copy(s2s, s2)
+ sort.Strings(s2s)
+ for i, s := range s1s {
+ if s != s2s[i] {
+ return false
+ }
+ }
+ return true
+}
+
+func CheckDuplicateBuildConstraints(pass *analysis.Pass) (interface{}, error) {
+ for _, f := range pass.Files {
+ constraints := buildTags(f)
+ for i, constraint1 := range constraints {
+ for j, constraint2 := range constraints {
+ if i >= j {
+ continue
+ }
+ if buildTagsIdentical(constraint1, constraint2) {
+ ReportfFG(pass, f.Pos(), "identical build constraints %q and %q",
+ strings.Join(constraint1, " "),
+ strings.Join(constraint2, " "))
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckSillyRegexp(pass *analysis.Pass) (interface{}, error) {
+ // We could use the rule checking engine for this, but the
+ // arguments aren't really invalid.
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, b := range fn.Blocks {
+ for _, ins := range b.Instrs {
+ call, ok := ins.(*ssa.Call)
+ if !ok {
+ continue
+ }
+ switch CallName(call.Common()) {
+ case "regexp.MustCompile", "regexp.Compile", "regexp.Match", "regexp.MatchReader", "regexp.MatchString":
+ default:
+ continue
+ }
+ c, ok := call.Common().Args[0].(*ssa.Const)
+ if !ok {
+ continue
+ }
+ s := constant.StringVal(c.Value)
+ re, err := syntax.Parse(s, 0)
+ if err != nil {
+ continue
+ }
+ if re.Op != syntax.OpLiteral && re.Op != syntax.OpEmptyMatch {
+ continue
+ }
+ pass.Reportf(call.Pos(), "regular expression does not contain any meta characters")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckMissingEnumTypesInDeclaration(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ decl := node.(*ast.GenDecl)
+ if !decl.Lparen.IsValid() {
+ return
+ }
+ if decl.Tok != token.CONST {
+ return
+ }
+
+ groups := GroupSpecs(pass.Fset, decl.Specs)
+ groupLoop:
+ for _, group := range groups {
+ if len(group) < 2 {
+ continue
+ }
+ if group[0].(*ast.ValueSpec).Type == nil {
+ // first constant doesn't have a type
+ continue groupLoop
+ }
+ for i, spec := range group {
+ spec := spec.(*ast.ValueSpec)
+ if len(spec.Names) != 1 || len(spec.Values) != 1 {
+ continue groupLoop
+ }
+ switch v := spec.Values[0].(type) {
+ case *ast.BasicLit:
+ case *ast.UnaryExpr:
+ if _, ok := v.X.(*ast.BasicLit); !ok {
+ continue groupLoop
+ }
+ default:
+ // if it's not a literal it might be typed, such as
+ // time.Microsecond = 1000 * Nanosecond
+ continue groupLoop
+ }
+ if i == 0 {
+ continue
+ }
+ if spec.Type != nil {
+ continue groupLoop
+ }
+ }
+ ReportNodef(pass, group[0], "only the first constant in this group has an explicit type")
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.GenDecl)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckTimerResetReturnValue(pass *analysis.Pass) (interface{}, error) {
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ for _, block := range fn.Blocks {
+ for _, ins := range block.Instrs {
+ call, ok := ins.(*ssa.Call)
+ if !ok {
+ continue
+ }
+ if !IsCallTo(call.Common(), "(*time.Timer).Reset") {
+ continue
+ }
+ refs := call.Referrers()
+ if refs == nil {
+ continue
+ }
+ for _, ref := range FilterDebug(*refs) {
+ ifstmt, ok := ref.(*ssa.If)
+ if !ok {
+ continue
+ }
+
+ found := false
+ for _, succ := range ifstmt.Block().Succs {
+ if len(succ.Preds) != 1 {
+ // Merge point, not a branch in the
+ // syntactical sense.
+
+ // FIXME(dh): this is broken for if
+ // statements a la "if x || y"
+ continue
+ }
+ ssautil.Walk(succ, func(b *ssa.BasicBlock) bool {
+ if !succ.Dominates(b) {
+ // We've reached the end of the branch
+ return false
+ }
+ for _, ins := range b.Instrs {
+ // TODO(dh): we should check that
+ // we're receiving from the channel of
+ // a time.Timer to further reduce
+ // false positives. Not a key
+ // priority, considering the rarity of
+ // Reset and the tiny likelihood of a
+ // false positive
+ if ins, ok := ins.(*ssa.UnOp); ok && ins.Op == token.ARROW && IsType(ins.X.Type(), "<-chan time.Time") {
+ found = true
+ return false
+ }
+ }
+ return true
+ })
+ }
+
+ if found {
+ pass.Reportf(call.Pos(), "it is not possible to use Reset's return value correctly, as there is a race condition between draining the channel and the new timer expiring")
+ }
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckToLowerToUpperComparison(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ binExpr := node.(*ast.BinaryExpr)
+
+ var negative bool
+ switch binExpr.Op {
+ case token.EQL:
+ negative = false
+ case token.NEQ:
+ negative = true
+ default:
+ return
+ }
+
+ const (
+ lo = "strings.ToLower"
+ up = "strings.ToUpper"
+ )
+
+ var call string
+ if IsCallToAST(pass, binExpr.X, lo) && IsCallToAST(pass, binExpr.Y, lo) {
+ call = lo
+ } else if IsCallToAST(pass, binExpr.X, up) && IsCallToAST(pass, binExpr.Y, up) {
+ call = up
+ } else {
+ return
+ }
+
+ bang := ""
+ if negative {
+ bang = "!"
+ }
+
+ ReportNodef(pass, binExpr, "should use %sstrings.EqualFold(a, b) instead of %s(a) %s %s(b)", bang, call, binExpr.Op, call)
+ }
+
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckUnreachableTypeCases(pass *analysis.Pass) (interface{}, error) {
+ // Check if T subsumes V in a type switch. T subsumes V if T is an interface and T's method set is a subset of V's method set.
+ subsumes := func(T, V types.Type) bool {
+ tIface, ok := T.Underlying().(*types.Interface)
+ if !ok {
+ return false
+ }
+
+ return types.Implements(V, tIface)
+ }
+
+ subsumesAny := func(Ts, Vs []types.Type) (types.Type, types.Type, bool) {
+ for _, T := range Ts {
+ for _, V := range Vs {
+ if subsumes(T, V) {
+ return T, V, true
+ }
+ }
+ }
+
+ return nil, nil, false
+ }
+
+ fn := func(node ast.Node) {
+ tsStmt := node.(*ast.TypeSwitchStmt)
+
+ type ccAndTypes struct {
+ cc *ast.CaseClause
+ types []types.Type
+ }
+
+ // All asserted types in the order of case clauses.
+ ccs := make([]ccAndTypes, 0, len(tsStmt.Body.List))
+ for _, stmt := range tsStmt.Body.List {
+ cc, _ := stmt.(*ast.CaseClause)
+
+ // Exclude the 'default' case.
+ if len(cc.List) == 0 {
+ continue
+ }
+
+ Ts := make([]types.Type, len(cc.List))
+ for i, expr := range cc.List {
+ Ts[i] = pass.TypesInfo.TypeOf(expr)
+ }
+
+ ccs = append(ccs, ccAndTypes{cc: cc, types: Ts})
+ }
+
+ if len(ccs) <= 1 {
+ // Zero or one case clauses, nothing to check.
+ return
+ }
+
+ // Check if case clauses following cc have types that are subsumed by cc.
+ for i, cc := range ccs[:len(ccs)-1] {
+ for _, next := range ccs[i+1:] {
+ if T, V, yes := subsumesAny(cc.types, next.types); yes {
+ ReportNodef(pass, next.cc, "unreachable case clause: %s will always match before %s", T.String(), V.String())
+ }
+ }
+ }
+ }
+
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.TypeSwitchStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckSingleArgAppend(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ if !IsCallToAST(pass, node, "append") {
+ return
+ }
+ call := node.(*ast.CallExpr)
+ if len(call.Args) != 1 {
+ return
+ }
+ ReportfFG(pass, call.Pos(), "x = append(y) is equivalent to x = y")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.CallExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckStructTags(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ for _, field := range node.(*ast.StructType).Fields.List {
+ if field.Tag == nil {
+ continue
+ }
+ tags, err := parseStructTag(field.Tag.Value[1 : len(field.Tag.Value)-1])
+ if err != nil {
+ ReportNodef(pass, field.Tag, "unparseable struct tag: %s", err)
+ continue
+ }
+ for k, v := range tags {
+ if len(v) > 1 {
+ ReportNodef(pass, field.Tag, "duplicate struct tag %q", k)
+ continue
+ }
+
+ switch k {
+ case "json":
+ checkJSONTag(pass, field, v[0])
+ case "xml":
+ checkXMLTag(pass, field, v[0])
+ }
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.StructType)(nil)}, fn)
+ return nil, nil
+}
+
+func checkJSONTag(pass *analysis.Pass, field *ast.Field, tag string) {
+ //lint:ignore SA9003 TODO(dh): should we flag empty tags?
+ if len(tag) == 0 {
+ }
+ fields := strings.Split(tag, ",")
+ for _, r := range fields[0] {
+ if !unicode.IsLetter(r) && !unicode.IsDigit(r) && !strings.ContainsRune("!#$%&()*+-./:<=>?@[]^_{|}~ ", r) {
+ ReportNodef(pass, field.Tag, "invalid JSON field name %q", fields[0])
+ }
+ }
+ var co, cs, ci int
+ for _, s := range fields[1:] {
+ switch s {
+ case "omitempty":
+ co++
+ case "":
+ // allow stuff like "-,"
+ case "string":
+ cs++
+ // only for string, floating point, integer and bool
+ T := Dereference(pass.TypesInfo.TypeOf(field.Type).Underlying()).Underlying()
+ basic, ok := T.(*types.Basic)
+ if !ok || (basic.Info()&(types.IsBoolean|types.IsInteger|types.IsFloat|types.IsString)) == 0 {
+ ReportNodef(pass, field.Tag, "the JSON string option only applies to fields of type string, floating point, integer or bool, or pointers to those")
+ }
+ case "inline":
+ ci++
+ default:
+ ReportNodef(pass, field.Tag, "unknown JSON option %q", s)
+ }
+ }
+ if co > 1 {
+ ReportNodef(pass, field.Tag, `duplicate JSON option "omitempty"`)
+ }
+ if cs > 1 {
+ ReportNodef(pass, field.Tag, `duplicate JSON option "string"`)
+ }
+ if ci > 1 {
+ ReportNodef(pass, field.Tag, `duplicate JSON option "inline"`)
+ }
+}
+
+func checkXMLTag(pass *analysis.Pass, field *ast.Field, tag string) {
+ //lint:ignore SA9003 TODO(dh): should we flag empty tags?
+ if len(tag) == 0 {
+ }
+ fields := strings.Split(tag, ",")
+ counts := map[string]int{}
+ var exclusives []string
+ for _, s := range fields[1:] {
+ switch s {
+ case "attr", "chardata", "cdata", "innerxml", "comment":
+ counts[s]++
+ if counts[s] == 1 {
+ exclusives = append(exclusives, s)
+ }
+ case "omitempty", "any":
+ counts[s]++
+ case "":
+ default:
+ ReportNodef(pass, field.Tag, "unknown XML option %q", s)
+ }
+ }
+ for k, v := range counts {
+ if v > 1 {
+ ReportNodef(pass, field.Tag, "duplicate XML option %q", k)
+ }
+ }
+ if len(exclusives) > 1 {
+ ReportNodef(pass, field.Tag, "XML options %s are mutually exclusive", strings.Join(exclusives, " and "))
+ }
+}
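
The struct-tag validation above is easiest to follow with concrete field tags. A minimal sketch (the type and field names are made up for illustration) of declarations that checkJSONTag and checkXMLTag would report:

```go
package main

// Illustrative only; names are hypothetical.
type payload struct {
	A int   `json:"a,omitempty,omitempty"` // duplicate JSON option "omitempty"
	B int   `json:"b,string"`              // ok: ",string" applies to integer fields
	C []int `json:"c,string"`              // flagged: ",string" only applies to string, float, integer or bool fields
	D int   `xml:"d,attr,chardata"`        // flagged: "attr" and "chardata" are mutually exclusive XML options
}

func main() { _ = payload{} }
```
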
diff --git a/vendor/honnef.co/go/tools/staticcheck/rules.go b/vendor/honnef.co/go/tools/staticcheck/rules.go
new file mode 100644
index 0000000000000..0152cac1af139
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/rules.go
@@ -0,0 +1,321 @@
+package staticcheck
+
+import (
+ "fmt"
+ "go/constant"
+ "go/types"
+ "net"
+ "net/url"
+ "regexp"
+ "sort"
+ "strconv"
+ "strings"
+ "time"
+ "unicode/utf8"
+
+ "golang.org/x/tools/go/analysis"
+ . "honnef.co/go/tools/lint/lintdsl"
+ "honnef.co/go/tools/ssa"
+ "honnef.co/go/tools/staticcheck/vrp"
+)
+
+const (
+ MsgInvalidHostPort = "invalid port or service name in host:port pair"
+ MsgInvalidUTF8 = "argument is not a valid UTF-8 encoded string"
+ MsgNonUniqueCutset = "cutset contains duplicate characters"
+)
+
+type Call struct {
+ Pass *analysis.Pass
+ Instr ssa.CallInstruction
+ Args []*Argument
+
+ Parent *ssa.Function
+
+ invalids []string
+}
+
+func (c *Call) Invalid(msg string) {
+ c.invalids = append(c.invalids, msg)
+}
+
+type Argument struct {
+ Value Value
+ invalids []string
+}
+
+func (arg *Argument) Invalid(msg string) {
+ arg.invalids = append(arg.invalids, msg)
+}
+
+type Value struct {
+ Value ssa.Value
+ Range vrp.Range
+}
+
+type CallCheck func(call *Call)
+
+func extractConsts(v ssa.Value) []*ssa.Const {
+ switch v := v.(type) {
+ case *ssa.Const:
+ return []*ssa.Const{v}
+ case *ssa.MakeInterface:
+ return extractConsts(v.X)
+ default:
+ return nil
+ }
+}
+
+func ValidateRegexp(v Value) error {
+ for _, c := range extractConsts(v.Value) {
+ if c.Value == nil {
+ continue
+ }
+ if c.Value.Kind() != constant.String {
+ continue
+ }
+ s := constant.StringVal(c.Value)
+ if _, err := regexp.Compile(s); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func ValidateTimeLayout(v Value) error {
+ for _, c := range extractConsts(v.Value) {
+ if c.Value == nil {
+ continue
+ }
+ if c.Value.Kind() != constant.String {
+ continue
+ }
+ s := constant.StringVal(c.Value)
+ s = strings.Replace(s, "_", " ", -1)
+ s = strings.Replace(s, "Z", "-", -1)
+ _, err := time.Parse(s, s)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func ValidateURL(v Value) error {
+ for _, c := range extractConsts(v.Value) {
+ if c.Value == nil {
+ continue
+ }
+ if c.Value.Kind() != constant.String {
+ continue
+ }
+ s := constant.StringVal(c.Value)
+ _, err := url.Parse(s)
+ if err != nil {
+ return fmt.Errorf("%q is not a valid URL: %s", s, err)
+ }
+ }
+ return nil
+}
+
+func IntValue(v Value, z vrp.Z) bool {
+ r, ok := v.Range.(vrp.IntInterval)
+ if !ok || !r.IsKnown() {
+ return false
+ }
+ if r.Lower != r.Upper {
+ return false
+ }
+ if r.Lower.Cmp(z) == 0 {
+ return true
+ }
+ return false
+}
+
+func InvalidUTF8(v Value) bool {
+ for _, c := range extractConsts(v.Value) {
+ if c.Value == nil {
+ continue
+ }
+ if c.Value.Kind() != constant.String {
+ continue
+ }
+ s := constant.StringVal(c.Value)
+ if !utf8.ValidString(s) {
+ return true
+ }
+ }
+ return false
+}
+
+func UnbufferedChannel(v Value) bool {
+ r, ok := v.Range.(vrp.ChannelInterval)
+ if !ok || !r.IsKnown() {
+ return false
+ }
+ if r.Size.Lower.Cmp(vrp.NewZ(0)) == 0 &&
+ r.Size.Upper.Cmp(vrp.NewZ(0)) == 0 {
+ return true
+ }
+ return false
+}
+
+func Pointer(v Value) bool {
+ switch v.Value.Type().Underlying().(type) {
+ case *types.Pointer, *types.Interface:
+ return true
+ }
+ return false
+}
+
+func ConvertedFromInt(v Value) bool {
+ conv, ok := v.Value.(*ssa.Convert)
+ if !ok {
+ return false
+ }
+ b, ok := conv.X.Type().Underlying().(*types.Basic)
+ if !ok {
+ return false
+ }
+ if (b.Info() & types.IsInteger) == 0 {
+ return false
+ }
+ return true
+}
+
+func validEncodingBinaryType(pass *analysis.Pass, typ types.Type) bool {
+ typ = typ.Underlying()
+ switch typ := typ.(type) {
+ case *types.Basic:
+ switch typ.Kind() {
+ case types.Uint8, types.Uint16, types.Uint32, types.Uint64,
+ types.Int8, types.Int16, types.Int32, types.Int64,
+ types.Float32, types.Float64, types.Complex64, types.Complex128, types.Invalid:
+ return true
+ case types.Bool:
+ return IsGoVersion(pass, 8)
+ }
+ return false
+ case *types.Struct:
+ n := typ.NumFields()
+ for i := 0; i < n; i++ {
+ if !validEncodingBinaryType(pass, typ.Field(i).Type()) {
+ return false
+ }
+ }
+ return true
+ case *types.Array:
+ return validEncodingBinaryType(pass, typ.Elem())
+ case *types.Interface:
+ // we can't determine if it's a valid type or not
+ return true
+ }
+ return false
+}
+
+func CanBinaryMarshal(pass *analysis.Pass, v Value) bool {
+ typ := v.Value.Type().Underlying()
+ if ttyp, ok := typ.(*types.Pointer); ok {
+ typ = ttyp.Elem().Underlying()
+ }
+ if ttyp, ok := typ.(interface {
+ Elem() types.Type
+ }); ok {
+ if _, ok := ttyp.(*types.Pointer); !ok {
+ typ = ttyp.Elem()
+ }
+ }
+
+ return validEncodingBinaryType(pass, typ)
+}
+
+func RepeatZeroTimes(name string, arg int) CallCheck {
+ return func(call *Call) {
+ arg := call.Args[arg]
+ if IntValue(arg.Value, vrp.NewZ(0)) {
+ arg.Invalid(fmt.Sprintf("calling %s with n == 0 will return no results, did you mean -1?", name))
+ }
+ }
+}
+
+func validateServiceName(s string) bool {
+ if len(s) < 1 || len(s) > 15 {
+ return false
+ }
+ if s[0] == '-' || s[len(s)-1] == '-' {
+ return false
+ }
+ if strings.Contains(s, "--") {
+ return false
+ }
+ hasLetter := false
+ for _, r := range s {
+ if (r >= 'A' && r <= 'Z') || (r >= 'a' && r <= 'z') {
+ hasLetter = true
+ continue
+ }
+ if r >= '0' && r <= '9' {
+ continue
+ }
+ return false
+ }
+ return hasLetter
+}
+
+func validatePort(s string) bool {
+ n, err := strconv.ParseInt(s, 10, 64)
+ if err != nil {
+ return validateServiceName(s)
+ }
+ return n >= 0 && n <= 65535
+}
+
+func ValidHostPort(v Value) bool {
+ for _, k := range extractConsts(v.Value) {
+ if k.Value == nil {
+ continue
+ }
+ if k.Value.Kind() != constant.String {
+ continue
+ }
+ s := constant.StringVal(k.Value)
+ _, port, err := net.SplitHostPort(s)
+ if err != nil {
+ return false
+ }
+ // TODO(dh): check hostname
+ if !validatePort(port) {
+ return false
+ }
+ }
+ return true
+}
+
+// ConvertedFrom reports whether value v was converted from type typ.
+func ConvertedFrom(v Value, typ string) bool {
+ change, ok := v.Value.(*ssa.ChangeType)
+ return ok && IsType(change.X.Type(), typ)
+}
+
+func UniqueStringCutset(v Value) bool {
+ for _, c := range extractConsts(v.Value) {
+ if c.Value == nil {
+ continue
+ }
+ if c.Value.Kind() != constant.String {
+ continue
+ }
+ s := constant.StringVal(c.Value)
+ rs := runeSlice(s)
+ if len(rs) < 2 {
+ continue
+ }
+ sort.Sort(rs)
+ for i, r := range rs[1:] {
+ if rs[i] == r {
+ return false
+ }
+ }
+ }
+ return true
+}
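
The helpers above are generic CallCheck rules; the mapping from each rule to concrete library functions is configured elsewhere in the package, so the snippet below is only a sketch of the call patterns they exist to catch:

```go
package main

import "strings"

func main() {
	v := "a,b,c"
	// UniqueStringCutset returns false for "abca" ('a' appears twice),
	// which surfaces as MsgNonUniqueCutset.
	_ = strings.Trim(v, "abca")
	// RepeatZeroTimes flags an n argument known to be 0, which yields
	// no results; the report suggests -1 instead.
	_ = strings.SplitN(v, ",", 0)
}
```
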
diff --git a/vendor/honnef.co/go/tools/staticcheck/structtag.go b/vendor/honnef.co/go/tools/staticcheck/structtag.go
new file mode 100644
index 0000000000000..38830a22c6391
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/structtag.go
@@ -0,0 +1,58 @@
+// Copyright 2009 The Go Authors. All rights reserved.
+// Copyright 2019 Dominik Honnef. All rights reserved.
+
+package staticcheck
+
+import "strconv"
+
+func parseStructTag(tag string) (map[string][]string, error) {
+ // FIXME(dh): detect missing closing quote
+ out := map[string][]string{}
+
+ for tag != "" {
+ // Skip leading space.
+ i := 0
+ for i < len(tag) && tag[i] == ' ' {
+ i++
+ }
+ tag = tag[i:]
+ if tag == "" {
+ break
+ }
+
+ // Scan to colon. A space, a quote or a control character is a syntax error.
+ // Strictly speaking, control chars include the range [0x7f, 0x9f], not just
+ // [0x00, 0x1f], but in practice, we ignore the multi-byte control characters
+ // as it is simpler to inspect the tag's bytes than the tag's runes.
+ i = 0
+ for i < len(tag) && tag[i] > ' ' && tag[i] != ':' && tag[i] != '"' && tag[i] != 0x7f {
+ i++
+ }
+ if i == 0 || i+1 >= len(tag) || tag[i] != ':' || tag[i+1] != '"' {
+ break
+ }
+ name := string(tag[:i])
+ tag = tag[i+1:]
+
+ // Scan quoted string to find value.
+ i = 1
+ for i < len(tag) && tag[i] != '"' {
+ if tag[i] == '\\' {
+ i++
+ }
+ i++
+ }
+ if i >= len(tag) {
+ break
+ }
+ qvalue := string(tag[:i+1])
+ tag = tag[i+1:]
+
+ value, err := strconv.Unquote(qvalue)
+ if err != nil {
+ return nil, err
+ }
+ out[name] = append(out[name], value)
+ }
+ return out, nil
+}
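
parseStructTag returns a key → values map and deliberately keeps duplicate keys so that CheckStructTags can report them. A quick sketch of its behavior (it is unexported, so this assumes a caller inside the same package):

```go
package main

import "fmt"

func main() {
	tags, err := parseStructTag(`json:"name,omitempty" json:"other" xml:"n,attr"`)
	if err != nil {
		panic(err)
	}
	// Prints: map[json:[name,omitempty other] xml:[n,attr]]
	// Two values under "json" is what CheckStructTags reports as a
	// duplicate struct tag.
	fmt.Println(tags)
}
```
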
diff --git a/vendor/honnef.co/go/tools/staticcheck/vrp/channel.go b/vendor/honnef.co/go/tools/staticcheck/vrp/channel.go
new file mode 100644
index 0000000000000..0ef73787ba391
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/vrp/channel.go
@@ -0,0 +1,73 @@
+package vrp
+
+import (
+ "fmt"
+
+ "honnef.co/go/tools/ssa"
+)
+
+type ChannelInterval struct {
+ Size IntInterval
+}
+
+func (c ChannelInterval) Union(other Range) Range {
+ i, ok := other.(ChannelInterval)
+ if !ok {
+ i = ChannelInterval{EmptyIntInterval}
+ }
+ if c.Size.Empty() || !c.Size.IsKnown() {
+ return i
+ }
+ if i.Size.Empty() || !i.Size.IsKnown() {
+ return c
+ }
+ return ChannelInterval{
+ Size: c.Size.Union(i.Size).(IntInterval),
+ }
+}
+
+func (c ChannelInterval) String() string {
+ return c.Size.String()
+}
+
+func (c ChannelInterval) IsKnown() bool {
+ return c.Size.IsKnown()
+}
+
+type MakeChannelConstraint struct {
+ aConstraint
+ Buffer ssa.Value
+}
+type ChannelChangeTypeConstraint struct {
+ aConstraint
+ X ssa.Value
+}
+
+func NewMakeChannelConstraint(buffer, y ssa.Value) Constraint {
+ return &MakeChannelConstraint{NewConstraint(y), buffer}
+}
+func NewChannelChangeTypeConstraint(x, y ssa.Value) Constraint {
+ return &ChannelChangeTypeConstraint{NewConstraint(y), x}
+}
+
+func (c *MakeChannelConstraint) Operands() []ssa.Value { return []ssa.Value{c.Buffer} }
+func (c *ChannelChangeTypeConstraint) Operands() []ssa.Value { return []ssa.Value{c.X} }
+
+func (c *MakeChannelConstraint) String() string {
+ return fmt.Sprintf("%s = make(chan, %s)", c.Y().Name(), c.Buffer.Name())
+}
+func (c *ChannelChangeTypeConstraint) String() string {
+ return fmt.Sprintf("%s = changetype(%s)", c.Y().Name(), c.X.Name())
+}
+
+func (c *MakeChannelConstraint) Eval(g *Graph) Range {
+ i, ok := g.Range(c.Buffer).(IntInterval)
+ if !ok {
+ return ChannelInterval{NewIntInterval(NewZ(0), PInfinity)}
+ }
+ if i.Lower.Sign() == -1 {
+ i.Lower = NewZ(0)
+ }
+ return ChannelInterval{i}
+}
+func (c *ChannelChangeTypeConstraint) Eval(g *Graph) Range { return g.Range(c.X) }
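
A ChannelInterval only tracks the possible buffer sizes of a channel; Union widens two known size ranges. A small sketch using the exported API (import path as vendored here):

```go
package main

import (
	"fmt"

	"honnef.co/go/tools/staticcheck/vrp"
)

func main() {
	a := vrp.ChannelInterval{Size: vrp.NewIntInterval(vrp.NewZ(0), vrp.NewZ(0))} // unbuffered
	b := vrp.ChannelInterval{Size: vrp.NewIntInterval(vrp.NewZ(4), vrp.NewZ(4))} // make(chan T, 4)
	fmt.Println(a.Union(b)) // [0, 4]
}
```
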
diff --git a/vendor/honnef.co/go/tools/staticcheck/vrp/int.go b/vendor/honnef.co/go/tools/staticcheck/vrp/int.go
new file mode 100644
index 0000000000000..926bb7af3d623
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/vrp/int.go
@@ -0,0 +1,476 @@
+package vrp
+
+import (
+ "fmt"
+ "go/token"
+ "go/types"
+ "math/big"
+
+ "honnef.co/go/tools/ssa"
+)
+
+type Zs []Z
+
+func (zs Zs) Len() int {
+ return len(zs)
+}
+
+func (zs Zs) Less(i int, j int) bool {
+ return zs[i].Cmp(zs[j]) == -1
+}
+
+func (zs Zs) Swap(i int, j int) {
+ zs[i], zs[j] = zs[j], zs[i]
+}
+
+type Z struct {
+ infinity int8
+ integer *big.Int
+}
+
+func NewZ(n int64) Z {
+ return NewBigZ(big.NewInt(n))
+}
+
+func NewBigZ(n *big.Int) Z {
+ return Z{integer: n}
+}
+
+func (z1 Z) Infinite() bool {
+ return z1.infinity != 0
+}
+
+func (z1 Z) Add(z2 Z) Z {
+ if z2.Sign() == -1 {
+ return z1.Sub(z2.Negate())
+ }
+ if z1 == NInfinity {
+ return NInfinity
+ }
+ if z1 == PInfinity {
+ return PInfinity
+ }
+ if z2 == PInfinity {
+ return PInfinity
+ }
+
+ if !z1.Infinite() && !z2.Infinite() {
+ n := &big.Int{}
+ n.Add(z1.integer, z2.integer)
+ return NewBigZ(n)
+ }
+
+ panic(fmt.Sprintf("%s + %s is not defined", z1, z2))
+}
+
+func (z1 Z) Sub(z2 Z) Z {
+ if z2.Sign() == -1 {
+ return z1.Add(z2.Negate())
+ }
+ if !z1.Infinite() && !z2.Infinite() {
+ n := &big.Int{}
+ n.Sub(z1.integer, z2.integer)
+ return NewBigZ(n)
+ }
+
+ if z1 != PInfinity && z2 == PInfinity {
+ return NInfinity
+ }
+ if z1.Infinite() && !z2.Infinite() {
+ return Z{infinity: z1.infinity}
+ }
+ if z1 == PInfinity && z2 == PInfinity {
+ return PInfinity
+ }
+ panic(fmt.Sprintf("%s - %s is not defined", z1, z2))
+}
+
+func (z1 Z) Mul(z2 Z) Z {
+ if (z1.integer != nil && z1.integer.Sign() == 0) ||
+ (z2.integer != nil && z2.integer.Sign() == 0) {
+ return NewBigZ(&big.Int{})
+ }
+
+ if z1.infinity != 0 || z2.infinity != 0 {
+ return Z{infinity: int8(z1.Sign() * z2.Sign())}
+ }
+
+ n := &big.Int{}
+ n.Mul(z1.integer, z2.integer)
+ return NewBigZ(n)
+}
+
+func (z1 Z) Negate() Z {
+ if z1.infinity == 1 {
+ return NInfinity
+ }
+ if z1.infinity == -1 {
+ return PInfinity
+ }
+ n := &big.Int{}
+ n.Neg(z1.integer)
+ return NewBigZ(n)
+}
+
+func (z1 Z) Sign() int {
+ if z1.infinity != 0 {
+ return int(z1.infinity)
+ }
+ return z1.integer.Sign()
+}
+
+func (z1 Z) String() string {
+ if z1 == NInfinity {
+ return "-∞"
+ }
+ if z1 == PInfinity {
+ return "∞"
+ }
+ return fmt.Sprintf("%d", z1.integer)
+}
+
+func (z1 Z) Cmp(z2 Z) int {
+ if z1.infinity == z2.infinity && z1.infinity != 0 {
+ return 0
+ }
+ if z1 == PInfinity {
+ return 1
+ }
+ if z1 == NInfinity {
+ return -1
+ }
+ if z2 == NInfinity {
+ return 1
+ }
+ if z2 == PInfinity {
+ return -1
+ }
+ return z1.integer.Cmp(z2.integer)
+}
+
+func MaxZ(zs ...Z) Z {
+ if len(zs) == 0 {
+ panic("Max called with no arguments")
+ }
+ if len(zs) == 1 {
+ return zs[0]
+ }
+ ret := zs[0]
+ for _, z := range zs[1:] {
+ if z.Cmp(ret) == 1 {
+ ret = z
+ }
+ }
+ return ret
+}
+
+func MinZ(zs ...Z) Z {
+ if len(zs) == 0 {
+ panic("Min called with no arguments")
+ }
+ if len(zs) == 1 {
+ return zs[0]
+ }
+ ret := zs[0]
+ for _, z := range zs[1:] {
+ if z.Cmp(ret) == -1 {
+ ret = z
+ }
+ }
+ return ret
+}
+
+var NInfinity = Z{infinity: -1}
+var PInfinity = Z{infinity: 1}
+var EmptyIntInterval = IntInterval{true, PInfinity, NInfinity}
+
+func InfinityFor(v ssa.Value) IntInterval {
+ if b, ok := v.Type().Underlying().(*types.Basic); ok {
+ if (b.Info() & types.IsUnsigned) != 0 {
+ return NewIntInterval(NewZ(0), PInfinity)
+ }
+ }
+ return NewIntInterval(NInfinity, PInfinity)
+}
+
+type IntInterval struct {
+ known bool
+ Lower Z
+ Upper Z
+}
+
+func NewIntInterval(l, u Z) IntInterval {
+ if u.Cmp(l) == -1 {
+ return EmptyIntInterval
+ }
+ return IntInterval{known: true, Lower: l, Upper: u}
+}
+
+func (i IntInterval) IsKnown() bool {
+ return i.known
+}
+
+func (i IntInterval) Empty() bool {
+ return i.Lower == PInfinity && i.Upper == NInfinity
+}
+
+func (i IntInterval) IsMaxRange() bool {
+ return i.Lower == NInfinity && i.Upper == PInfinity
+}
+
+func (i1 IntInterval) Intersection(i2 IntInterval) IntInterval {
+ if !i1.IsKnown() {
+ return i2
+ }
+ if !i2.IsKnown() {
+ return i1
+ }
+ if i1.Empty() || i2.Empty() {
+ return EmptyIntInterval
+ }
+ i3 := NewIntInterval(MaxZ(i1.Lower, i2.Lower), MinZ(i1.Upper, i2.Upper))
+ if i3.Lower.Cmp(i3.Upper) == 1 {
+ return EmptyIntInterval
+ }
+ return i3
+}
+
+func (i1 IntInterval) Union(other Range) Range {
+ i2, ok := other.(IntInterval)
+ if !ok {
+ i2 = EmptyIntInterval
+ }
+ if i1.Empty() || !i1.IsKnown() {
+ return i2
+ }
+ if i2.Empty() || !i2.IsKnown() {
+ return i1
+ }
+ return NewIntInterval(MinZ(i1.Lower, i2.Lower), MaxZ(i1.Upper, i2.Upper))
+}
+
+func (i1 IntInterval) Add(i2 IntInterval) IntInterval {
+ if i1.Empty() || i2.Empty() {
+ return EmptyIntInterval
+ }
+ l1, u1, l2, u2 := i1.Lower, i1.Upper, i2.Lower, i2.Upper
+ return NewIntInterval(l1.Add(l2), u1.Add(u2))
+}
+
+func (i1 IntInterval) Sub(i2 IntInterval) IntInterval {
+ if i1.Empty() || i2.Empty() {
+ return EmptyIntInterval
+ }
+ l1, u1, l2, u2 := i1.Lower, i1.Upper, i2.Lower, i2.Upper
+ return NewIntInterval(l1.Sub(u2), u1.Sub(l2))
+}
+
+func (i1 IntInterval) Mul(i2 IntInterval) IntInterval {
+ if i1.Empty() || i2.Empty() {
+ return EmptyIntInterval
+ }
+ x1, x2 := i1.Lower, i1.Upper
+ y1, y2 := i2.Lower, i2.Upper
+ return NewIntInterval(
+ MinZ(x1.Mul(y1), x1.Mul(y2), x2.Mul(y1), x2.Mul(y2)),
+ MaxZ(x1.Mul(y1), x1.Mul(y2), x2.Mul(y1), x2.Mul(y2)),
+ )
+}
+
+func (i1 IntInterval) String() string {
+ if !i1.IsKnown() {
+ return "[⊥, ⊥]"
+ }
+ if i1.Empty() {
+ return "{}"
+ }
+ return fmt.Sprintf("[%s, %s]", i1.Lower, i1.Upper)
+}
+
+type IntArithmeticConstraint struct {
+ aConstraint
+ A ssa.Value
+ B ssa.Value
+ Op token.Token
+ Fn func(IntInterval, IntInterval) IntInterval
+}
+
+type IntAddConstraint struct{ *IntArithmeticConstraint }
+type IntSubConstraint struct{ *IntArithmeticConstraint }
+type IntMulConstraint struct{ *IntArithmeticConstraint }
+
+type IntConversionConstraint struct {
+ aConstraint
+ X ssa.Value
+}
+
+type IntIntersectionConstraint struct {
+ aConstraint
+ ranges Ranges
+ A ssa.Value
+ B ssa.Value
+ Op token.Token
+ I IntInterval
+ resolved bool
+}
+
+type IntIntervalConstraint struct {
+ aConstraint
+ I IntInterval
+}
+
+func NewIntArithmeticConstraint(a, b, y ssa.Value, op token.Token, fn func(IntInterval, IntInterval) IntInterval) *IntArithmeticConstraint {
+ return &IntArithmeticConstraint{NewConstraint(y), a, b, op, fn}
+}
+func NewIntAddConstraint(a, b, y ssa.Value) Constraint {
+ return &IntAddConstraint{NewIntArithmeticConstraint(a, b, y, token.ADD, IntInterval.Add)}
+}
+func NewIntSubConstraint(a, b, y ssa.Value) Constraint {
+ return &IntSubConstraint{NewIntArithmeticConstraint(a, b, y, token.SUB, IntInterval.Sub)}
+}
+func NewIntMulConstraint(a, b, y ssa.Value) Constraint {
+ return &IntMulConstraint{NewIntArithmeticConstraint(a, b, y, token.MUL, IntInterval.Mul)}
+}
+func NewIntConversionConstraint(x, y ssa.Value) Constraint {
+ return &IntConversionConstraint{NewConstraint(y), x}
+}
+func NewIntIntersectionConstraint(a, b ssa.Value, op token.Token, ranges Ranges, y ssa.Value) Constraint {
+ return &IntIntersectionConstraint{
+ aConstraint: NewConstraint(y),
+ ranges: ranges,
+ A: a,
+ B: b,
+ Op: op,
+ }
+}
+func NewIntIntervalConstraint(i IntInterval, y ssa.Value) Constraint {
+ return &IntIntervalConstraint{NewConstraint(y), i}
+}
+
+func (c *IntArithmeticConstraint) Operands() []ssa.Value { return []ssa.Value{c.A, c.B} }
+func (c *IntConversionConstraint) Operands() []ssa.Value { return []ssa.Value{c.X} }
+func (c *IntIntersectionConstraint) Operands() []ssa.Value { return []ssa.Value{c.A} }
+func (s *IntIntervalConstraint) Operands() []ssa.Value { return nil }
+
+func (c *IntArithmeticConstraint) String() string {
+ return fmt.Sprintf("%s = %s %s %s", c.Y().Name(), c.A.Name(), c.Op, c.B.Name())
+}
+func (c *IntConversionConstraint) String() string {
+ return fmt.Sprintf("%s = %s(%s)", c.Y().Name(), c.Y().Type(), c.X.Name())
+}
+func (c *IntIntersectionConstraint) String() string {
+ return fmt.Sprintf("%s = %s %s %s (%t branch)", c.Y().Name(), c.A.Name(), c.Op, c.B.Name(), c.Y().(*ssa.Sigma).Branch)
+}
+func (c *IntIntervalConstraint) String() string { return fmt.Sprintf("%s = %s", c.Y().Name(), c.I) }
+
+func (c *IntArithmeticConstraint) Eval(g *Graph) Range {
+ i1, i2 := g.Range(c.A).(IntInterval), g.Range(c.B).(IntInterval)
+ if !i1.IsKnown() || !i2.IsKnown() {
+ return IntInterval{}
+ }
+ return c.Fn(i1, i2)
+}
+func (c *IntConversionConstraint) Eval(g *Graph) Range {
+ s := &types.StdSizes{
+ // XXX is it okay to assume the largest word size, or do we
+ // need to be platform specific?
+ WordSize: 8,
+ MaxAlign: 1,
+ }
+ fromI := g.Range(c.X).(IntInterval)
+ toI := g.Range(c.Y()).(IntInterval)
+ fromT := c.X.Type().Underlying().(*types.Basic)
+ toT := c.Y().Type().Underlying().(*types.Basic)
+ fromB := s.Sizeof(c.X.Type())
+ toB := s.Sizeof(c.Y().Type())
+
+ if !fromI.IsKnown() {
+ return toI
+ }
+ if !toI.IsKnown() {
+ return fromI
+ }
+
+ // uint<N> -> sint/uint<M>, M > N: [max(0, l1), min(2**N-1, u2)]
+ if (fromT.Info()&types.IsUnsigned != 0) &&
+ toB > fromB {
+
+ n := big.NewInt(1)
+ n.Lsh(n, uint(fromB*8))
+ n.Sub(n, big.NewInt(1))
+ return NewIntInterval(
+ MaxZ(NewZ(0), fromI.Lower),
+ MinZ(NewBigZ(n), toI.Upper),
+ )
+ }
+
+ // sint<N> -> sint<M>, M > N: [max(-∞, l1), min(2**N-1, u2)]
+ if (fromT.Info()&types.IsUnsigned == 0) &&
+ (toT.Info()&types.IsUnsigned == 0) &&
+ toB > fromB {
+
+ n := big.NewInt(1)
+ n.Lsh(n, uint(fromB*8))
+ n.Sub(n, big.NewInt(1))
+ return NewIntInterval(
+ MaxZ(NInfinity, fromI.Lower),
+ MinZ(NewBigZ(n), toI.Upper),
+ )
+ }
+
+ return fromI
+}
+func (c *IntIntersectionConstraint) Eval(g *Graph) Range {
+ xi := g.Range(c.A).(IntInterval)
+ if !xi.IsKnown() {
+ return c.I
+ }
+ return xi.Intersection(c.I)
+}
+func (c *IntIntervalConstraint) Eval(*Graph) Range { return c.I }
+
+func (c *IntIntersectionConstraint) Futures() []ssa.Value {
+ return []ssa.Value{c.B}
+}
+
+func (c *IntIntersectionConstraint) Resolve() {
+ r, ok := c.ranges[c.B].(IntInterval)
+ if !ok {
+ c.I = InfinityFor(c.Y())
+ return
+ }
+
+ switch c.Op {
+ case token.EQL:
+ c.I = r
+ case token.GTR:
+ c.I = NewIntInterval(r.Lower.Add(NewZ(1)), PInfinity)
+ case token.GEQ:
+ c.I = NewIntInterval(r.Lower, PInfinity)
+ case token.LSS:
+ // TODO(dh): do we need 0 instead of NInfinity for uints?
+ c.I = NewIntInterval(NInfinity, r.Upper.Sub(NewZ(1)))
+ case token.LEQ:
+ c.I = NewIntInterval(NInfinity, r.Upper)
+ case token.NEQ:
+ c.I = InfinityFor(c.Y())
+ default:
+ panic("unsupported op " + c.Op.String())
+ }
+}
+
+func (c *IntIntersectionConstraint) IsKnown() bool {
+ return c.I.IsKnown()
+}
+
+func (c *IntIntersectionConstraint) MarkUnresolved() {
+ c.resolved = false
+}
+
+func (c *IntIntersectionConstraint) MarkResolved() {
+ c.resolved = true
+}
+
+func (c *IntIntersectionConstraint) IsResolved() bool {
+ return c.resolved
+}
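
Z is an arbitrary-precision integer extended with ±∞, and IntInterval layers standard interval arithmetic on top of it. A worked sketch of the arithmetic defined above:

```go
package main

import (
	"fmt"

	"honnef.co/go/tools/staticcheck/vrp"
)

func main() {
	fmt.Println(vrp.NewZ(5).Add(vrp.PInfinity)) // ∞
	fmt.Println(vrp.PInfinity.Sub(vrp.NewZ(3))) // ∞

	i1 := vrp.NewIntInterval(vrp.NewZ(1), vrp.NewZ(3))    // [1, 3]
	i2 := vrp.NewIntInterval(vrp.NewZ(10), vrp.PInfinity) // [10, ∞]
	fmt.Println(i1.Add(i2))          // [11, ∞]
	fmt.Println(i1.Intersection(i2)) // {} — the intervals are disjoint
}
```
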
diff --git a/vendor/honnef.co/go/tools/staticcheck/vrp/slice.go b/vendor/honnef.co/go/tools/staticcheck/vrp/slice.go
new file mode 100644
index 0000000000000..40658dd8d8607
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/vrp/slice.go
@@ -0,0 +1,273 @@
+package vrp
+
+// TODO(dh): most of the constraints have implementations identical to
+// those of the string constraints. Consider reusing them.
+
+import (
+ "fmt"
+ "go/types"
+
+ "honnef.co/go/tools/ssa"
+)
+
+type SliceInterval struct {
+ Length IntInterval
+}
+
+func (s SliceInterval) Union(other Range) Range {
+ i, ok := other.(SliceInterval)
+ if !ok {
+ i = SliceInterval{EmptyIntInterval}
+ }
+ if s.Length.Empty() || !s.Length.IsKnown() {
+ return i
+ }
+ if i.Length.Empty() || !i.Length.IsKnown() {
+ return s
+ }
+ return SliceInterval{
+ Length: s.Length.Union(i.Length).(IntInterval),
+ }
+}
+func (s SliceInterval) String() string { return s.Length.String() }
+func (s SliceInterval) IsKnown() bool { return s.Length.IsKnown() }
+
+type SliceAppendConstraint struct {
+ aConstraint
+ A ssa.Value
+ B ssa.Value
+}
+
+type SliceSliceConstraint struct {
+ aConstraint
+ X ssa.Value
+ Lower ssa.Value
+ Upper ssa.Value
+}
+
+type ArraySliceConstraint struct {
+ aConstraint
+ X ssa.Value
+ Lower ssa.Value
+ Upper ssa.Value
+}
+
+type SliceIntersectionConstraint struct {
+ aConstraint
+ X ssa.Value
+ I IntInterval
+}
+
+type SliceLengthConstraint struct {
+ aConstraint
+ X ssa.Value
+}
+
+type MakeSliceConstraint struct {
+ aConstraint
+ Size ssa.Value
+}
+
+type SliceIntervalConstraint struct {
+ aConstraint
+ I IntInterval
+}
+
+func NewSliceAppendConstraint(a, b, y ssa.Value) Constraint {
+ return &SliceAppendConstraint{NewConstraint(y), a, b}
+}
+func NewSliceSliceConstraint(x, lower, upper, y ssa.Value) Constraint {
+ return &SliceSliceConstraint{NewConstraint(y), x, lower, upper}
+}
+func NewArraySliceConstraint(x, lower, upper, y ssa.Value) Constraint {
+ return &ArraySliceConstraint{NewConstraint(y), x, lower, upper}
+}
+func NewSliceIntersectionConstraint(x ssa.Value, i IntInterval, y ssa.Value) Constraint {
+ return &SliceIntersectionConstraint{NewConstraint(y), x, i}
+}
+func NewSliceLengthConstraint(x, y ssa.Value) Constraint {
+ return &SliceLengthConstraint{NewConstraint(y), x}
+}
+func NewMakeSliceConstraint(size, y ssa.Value) Constraint {
+ return &MakeSliceConstraint{NewConstraint(y), size}
+}
+func NewSliceIntervalConstraint(i IntInterval, y ssa.Value) Constraint {
+ return &SliceIntervalConstraint{NewConstraint(y), i}
+}
+
+func (c *SliceAppendConstraint) Operands() []ssa.Value { return []ssa.Value{c.A, c.B} }
+func (c *SliceSliceConstraint) Operands() []ssa.Value {
+ ops := []ssa.Value{c.X}
+ if c.Lower != nil {
+ ops = append(ops, c.Lower)
+ }
+ if c.Upper != nil {
+ ops = append(ops, c.Upper)
+ }
+ return ops
+}
+func (c *ArraySliceConstraint) Operands() []ssa.Value {
+ ops := []ssa.Value{c.X}
+ if c.Lower != nil {
+ ops = append(ops, c.Lower)
+ }
+ if c.Upper != nil {
+ ops = append(ops, c.Upper)
+ }
+ return ops
+}
+func (c *SliceIntersectionConstraint) Operands() []ssa.Value { return []ssa.Value{c.X} }
+func (c *SliceLengthConstraint) Operands() []ssa.Value { return []ssa.Value{c.X} }
+func (c *MakeSliceConstraint) Operands() []ssa.Value { return []ssa.Value{c.Size} }
+func (s *SliceIntervalConstraint) Operands() []ssa.Value { return nil }
+
+func (c *SliceAppendConstraint) String() string {
+ return fmt.Sprintf("%s = append(%s, %s)", c.Y().Name(), c.A.Name(), c.B.Name())
+}
+func (c *SliceSliceConstraint) String() string {
+ var lname, uname string
+ if c.Lower != nil {
+ lname = c.Lower.Name()
+ }
+ if c.Upper != nil {
+ uname = c.Upper.Name()
+ }
+ return fmt.Sprintf("%s[%s:%s]", c.X.Name(), lname, uname)
+}
+func (c *ArraySliceConstraint) String() string {
+ var lname, uname string
+ if c.Lower != nil {
+ lname = c.Lower.Name()
+ }
+ if c.Upper != nil {
+ uname = c.Upper.Name()
+ }
+ return fmt.Sprintf("%s[%s:%s]", c.X.Name(), lname, uname)
+}
+func (c *SliceIntersectionConstraint) String() string {
+ return fmt.Sprintf("%s = %s.%t ⊓ %s", c.Y().Name(), c.X.Name(), c.Y().(*ssa.Sigma).Branch, c.I)
+}
+func (c *SliceLengthConstraint) String() string {
+ return fmt.Sprintf("%s = len(%s)", c.Y().Name(), c.X.Name())
+}
+func (c *MakeSliceConstraint) String() string {
+ return fmt.Sprintf("%s = make(slice, %s)", c.Y().Name(), c.Size.Name())
+}
+func (c *SliceIntervalConstraint) String() string { return fmt.Sprintf("%s = %s", c.Y().Name(), c.I) }
+
+func (c *SliceAppendConstraint) Eval(g *Graph) Range {
+ l1 := g.Range(c.A).(SliceInterval).Length
+ var l2 IntInterval
+ switch r := g.Range(c.B).(type) {
+ case SliceInterval:
+ l2 = r.Length
+ case StringInterval:
+ l2 = r.Length
+ default:
+ return SliceInterval{}
+ }
+ if !l1.IsKnown() || !l2.IsKnown() {
+ return SliceInterval{}
+ }
+ return SliceInterval{
+ Length: l1.Add(l2),
+ }
+}
+func (c *SliceSliceConstraint) Eval(g *Graph) Range {
+ lr := NewIntInterval(NewZ(0), NewZ(0))
+ if c.Lower != nil {
+ lr = g.Range(c.Lower).(IntInterval)
+ }
+ ur := g.Range(c.X).(SliceInterval).Length
+ if c.Upper != nil {
+ ur = g.Range(c.Upper).(IntInterval)
+ }
+ if !lr.IsKnown() || !ur.IsKnown() {
+ return SliceInterval{}
+ }
+
+ ls := []Z{
+ ur.Lower.Sub(lr.Lower),
+ ur.Upper.Sub(lr.Lower),
+ ur.Lower.Sub(lr.Upper),
+ ur.Upper.Sub(lr.Upper),
+ }
+ // TODO(dh): if we don't truncate lengths to 0 we might be able to
+ // easily detect slices with high < low. we'd need to treat -∞
+ // specially, though.
+ for i, l := range ls {
+ if l.Sign() == -1 {
+ ls[i] = NewZ(0)
+ }
+ }
+
+ return SliceInterval{
+ Length: NewIntInterval(MinZ(ls...), MaxZ(ls...)),
+ }
+}
+func (c *ArraySliceConstraint) Eval(g *Graph) Range {
+ lr := NewIntInterval(NewZ(0), NewZ(0))
+ if c.Lower != nil {
+ lr = g.Range(c.Lower).(IntInterval)
+ }
+ var l int64
+ switch typ := c.X.Type().(type) {
+ case *types.Array:
+ l = typ.Len()
+ case *types.Pointer:
+ l = typ.Elem().(*types.Array).Len()
+ }
+ ur := NewIntInterval(NewZ(l), NewZ(l))
+ if c.Upper != nil {
+ ur = g.Range(c.Upper).(IntInterval)
+ }
+ if !lr.IsKnown() || !ur.IsKnown() {
+ return SliceInterval{}
+ }
+
+ ls := []Z{
+ ur.Lower.Sub(lr.Lower),
+ ur.Upper.Sub(lr.Lower),
+ ur.Lower.Sub(lr.Upper),
+ ur.Upper.Sub(lr.Upper),
+ }
+ // TODO(dh): if we don't truncate lengths to 0 we might be able to
+ // easily detect slices with high < low. we'd need to treat -∞
+ // specially, though.
+ for i, l := range ls {
+ if l.Sign() == -1 {
+ ls[i] = NewZ(0)
+ }
+ }
+
+ return SliceInterval{
+ Length: NewIntInterval(MinZ(ls...), MaxZ(ls...)),
+ }
+}
+func (c *SliceIntersectionConstraint) Eval(g *Graph) Range {
+ xi := g.Range(c.X).(SliceInterval)
+ if !xi.IsKnown() {
+ return c.I
+ }
+ return SliceInterval{
+ Length: xi.Length.Intersection(c.I),
+ }
+}
+func (c *SliceLengthConstraint) Eval(g *Graph) Range {
+ i := g.Range(c.X).(SliceInterval).Length
+ if !i.IsKnown() {
+ return NewIntInterval(NewZ(0), PInfinity)
+ }
+ return i
+}
+func (c *MakeSliceConstraint) Eval(g *Graph) Range {
+ i, ok := g.Range(c.Size).(IntInterval)
+ if !ok {
+ return SliceInterval{NewIntInterval(NewZ(0), PInfinity)}
+ }
+ if i.Lower.Sign() == -1 {
+ i.Lower = NewZ(0)
+ }
+ return SliceInterval{i}
+}
+func (c *SliceIntervalConstraint) Eval(*Graph) Range { return SliceInterval{c.I} }
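
All of the slicing constraints compute the four corner differences between the high- and low-index ranges and clamp negative candidates to zero. Reproducing that arithmetic by hand for x[l:u] with l ∈ [0, 2] and u ∈ [5, ∞]:

```go
package main

import (
	"fmt"

	"honnef.co/go/tools/staticcheck/vrp"
)

func main() {
	lr := vrp.NewIntInterval(vrp.NewZ(0), vrp.NewZ(2))   // range of the low index
	ur := vrp.NewIntInterval(vrp.NewZ(5), vrp.PInfinity) // range of the high index
	ls := []vrp.Z{
		ur.Lower.Sub(lr.Lower), // 5
		ur.Upper.Sub(lr.Lower), // ∞
		ur.Lower.Sub(lr.Upper), // 3
		ur.Upper.Sub(lr.Upper), // ∞
	}
	// Length interval of the slice, as in SliceSliceConstraint.Eval:
	fmt.Println(vrp.NewIntInterval(vrp.MinZ(ls...), vrp.MaxZ(ls...))) // [3, ∞]
}
```
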
diff --git a/vendor/honnef.co/go/tools/staticcheck/vrp/string.go b/vendor/honnef.co/go/tools/staticcheck/vrp/string.go
new file mode 100644
index 0000000000000..e05877f9f782a
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/vrp/string.go
@@ -0,0 +1,258 @@
+package vrp
+
+import (
+ "fmt"
+ "go/token"
+ "go/types"
+
+ "honnef.co/go/tools/ssa"
+)
+
+type StringInterval struct {
+ Length IntInterval
+}
+
+func (s StringInterval) Union(other Range) Range {
+ i, ok := other.(StringInterval)
+ if !ok {
+ i = StringInterval{EmptyIntInterval}
+ }
+ if s.Length.Empty() || !s.Length.IsKnown() {
+ return i
+ }
+ if i.Length.Empty() || !i.Length.IsKnown() {
+ return s
+ }
+ return StringInterval{
+ Length: s.Length.Union(i.Length).(IntInterval),
+ }
+}
+
+func (s StringInterval) String() string {
+ return s.Length.String()
+}
+
+func (s StringInterval) IsKnown() bool {
+ return s.Length.IsKnown()
+}
+
+type StringSliceConstraint struct {
+ aConstraint
+ X ssa.Value
+ Lower ssa.Value
+ Upper ssa.Value
+}
+
+type StringIntersectionConstraint struct {
+ aConstraint
+ ranges Ranges
+ A ssa.Value
+ B ssa.Value
+ Op token.Token
+ I IntInterval
+ resolved bool
+}
+
+type StringConcatConstraint struct {
+ aConstraint
+ A ssa.Value
+ B ssa.Value
+}
+
+type StringLengthConstraint struct {
+ aConstraint
+ X ssa.Value
+}
+
+type StringIntervalConstraint struct {
+ aConstraint
+ I IntInterval
+}
+
+func NewStringSliceConstraint(x, lower, upper, y ssa.Value) Constraint {
+ return &StringSliceConstraint{NewConstraint(y), x, lower, upper}
+}
+func NewStringIntersectionConstraint(a, b ssa.Value, op token.Token, ranges Ranges, y ssa.Value) Constraint {
+ return &StringIntersectionConstraint{
+ aConstraint: NewConstraint(y),
+ ranges: ranges,
+ A: a,
+ B: b,
+ Op: op,
+ }
+}
+func NewStringConcatConstraint(a, b, y ssa.Value) Constraint {
+ return &StringConcatConstraint{NewConstraint(y), a, b}
+}
+func NewStringLengthConstraint(x ssa.Value, y ssa.Value) Constraint {
+ return &StringLengthConstraint{NewConstraint(y), x}
+}
+func NewStringIntervalConstraint(i IntInterval, y ssa.Value) Constraint {
+ return &StringIntervalConstraint{NewConstraint(y), i}
+}
+
+func (c *StringSliceConstraint) Operands() []ssa.Value {
+ vs := []ssa.Value{c.X}
+ if c.Lower != nil {
+ vs = append(vs, c.Lower)
+ }
+ if c.Upper != nil {
+ vs = append(vs, c.Upper)
+ }
+ return vs
+}
+func (c *StringIntersectionConstraint) Operands() []ssa.Value { return []ssa.Value{c.A} }
+func (c StringConcatConstraint) Operands() []ssa.Value { return []ssa.Value{c.A, c.B} }
+func (c *StringLengthConstraint) Operands() []ssa.Value { return []ssa.Value{c.X} }
+func (s *StringIntervalConstraint) Operands() []ssa.Value { return nil }
+
+func (c *StringSliceConstraint) String() string {
+ var lname, uname string
+ if c.Lower != nil {
+ lname = c.Lower.Name()
+ }
+ if c.Upper != nil {
+ uname = c.Upper.Name()
+ }
+ return fmt.Sprintf("%s[%s:%s]", c.X.Name(), lname, uname)
+}
+func (c *StringIntersectionConstraint) String() string {
+ return fmt.Sprintf("%s = %s %s %s (%t branch)", c.Y().Name(), c.A.Name(), c.Op, c.B.Name(), c.Y().(*ssa.Sigma).Branch)
+}
+func (c StringConcatConstraint) String() string {
+ return fmt.Sprintf("%s = %s + %s", c.Y().Name(), c.A.Name(), c.B.Name())
+}
+func (c *StringLengthConstraint) String() string {
+ return fmt.Sprintf("%s = len(%s)", c.Y().Name(), c.X.Name())
+}
+func (c *StringIntervalConstraint) String() string { return fmt.Sprintf("%s = %s", c.Y().Name(), c.I) }
+
+func (c *StringSliceConstraint) Eval(g *Graph) Range {
+ lr := NewIntInterval(NewZ(0), NewZ(0))
+ if c.Lower != nil {
+ lr = g.Range(c.Lower).(IntInterval)
+ }
+ ur := g.Range(c.X).(StringInterval).Length
+ if c.Upper != nil {
+ ur = g.Range(c.Upper).(IntInterval)
+ }
+ if !lr.IsKnown() || !ur.IsKnown() {
+ return StringInterval{}
+ }
+
+ ls := []Z{
+ ur.Lower.Sub(lr.Lower),
+ ur.Upper.Sub(lr.Lower),
+ ur.Lower.Sub(lr.Upper),
+ ur.Upper.Sub(lr.Upper),
+ }
+ // TODO(dh): if we don't truncate lengths to 0 we might be able to
+ // easily detect slices with high < low. we'd need to treat -∞
+ // specially, though.
+ for i, l := range ls {
+ if l.Sign() == -1 {
+ ls[i] = NewZ(0)
+ }
+ }
+
+ return StringInterval{
+ Length: NewIntInterval(MinZ(ls...), MaxZ(ls...)),
+ }
+}
+func (c *StringIntersectionConstraint) Eval(g *Graph) Range {
+ var l IntInterval
+ switch r := g.Range(c.A).(type) {
+ case StringInterval:
+ l = r.Length
+ case IntInterval:
+ l = r
+ }
+
+ if !l.IsKnown() {
+ return StringInterval{c.I}
+ }
+ return StringInterval{
+ Length: l.Intersection(c.I),
+ }
+}
+func (c StringConcatConstraint) Eval(g *Graph) Range {
+ i1, i2 := g.Range(c.A).(StringInterval), g.Range(c.B).(StringInterval)
+ if !i1.Length.IsKnown() || !i2.Length.IsKnown() {
+ return StringInterval{}
+ }
+ return StringInterval{
+ Length: i1.Length.Add(i2.Length),
+ }
+}
+func (c *StringLengthConstraint) Eval(g *Graph) Range {
+ i := g.Range(c.X).(StringInterval).Length
+ if !i.IsKnown() {
+ return NewIntInterval(NewZ(0), PInfinity)
+ }
+ return i
+}
+func (c *StringIntervalConstraint) Eval(*Graph) Range { return StringInterval{c.I} }
+
+func (c *StringIntersectionConstraint) Futures() []ssa.Value {
+ return []ssa.Value{c.B}
+}
+
+func (c *StringIntersectionConstraint) Resolve() {
+ if (c.A.Type().Underlying().(*types.Basic).Info() & types.IsString) != 0 {
+ // comparing two strings
+ r, ok := c.ranges[c.B].(StringInterval)
+ if !ok {
+ c.I = NewIntInterval(NewZ(0), PInfinity)
+ return
+ }
+ switch c.Op {
+ case token.EQL:
+ c.I = r.Length
+ case token.GTR, token.GEQ:
+ c.I = NewIntInterval(r.Length.Lower, PInfinity)
+ case token.LSS, token.LEQ:
+ c.I = NewIntInterval(NewZ(0), r.Length.Upper)
+ case token.NEQ:
+ default:
+ panic("unsupported op " + c.Op.String())
+ }
+ } else {
+ r, ok := c.ranges[c.B].(IntInterval)
+ if !ok {
+ c.I = NewIntInterval(NewZ(0), PInfinity)
+ return
+ }
+ // comparing two lengths
+ switch c.Op {
+ case token.EQL:
+ c.I = r
+ case token.GTR:
+ c.I = NewIntInterval(r.Lower.Add(NewZ(1)), PInfinity)
+ case token.GEQ:
+ c.I = NewIntInterval(r.Lower, PInfinity)
+ case token.LSS:
+ c.I = NewIntInterval(NInfinity, r.Upper.Sub(NewZ(1)))
+ case token.LEQ:
+ c.I = NewIntInterval(NInfinity, r.Upper)
+ case token.NEQ:
+ default:
+ panic("unsupported op " + c.Op.String())
+ }
+ }
+}
+
+func (c *StringIntersectionConstraint) IsKnown() bool {
+ return c.I.IsKnown()
+}
+
+func (c *StringIntersectionConstraint) MarkUnresolved() {
+ c.resolved = false
+}
+
+func (c *StringIntersectionConstraint) MarkResolved() {
+ c.resolved = true
+}
+
+func (c *StringIntersectionConstraint) IsResolved() bool {
+ return c.resolved
+}
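
Resolve translates a branch condition on a string (or on its length) into a length interval: taking the true branch of len(s) > 5 narrows len(s) to [6, ∞], while the false branch, via the inverted operator, narrows it to [-∞, 5]. Mirroring the GTR and LEQ cases by hand, assuming the right-hand side resolved to exactly 5:

```go
package main

import (
	"fmt"

	"honnef.co/go/tools/staticcheck/vrp"
)

func main() {
	r := vrp.NewIntInterval(vrp.NewZ(5), vrp.NewZ(5)) // range of the RHS of len(s) > 5
	// token.GTR case: [lower+1, ∞]
	fmt.Println(vrp.NewIntInterval(r.Lower.Add(vrp.NewZ(1)), vrp.PInfinity)) // [6, ∞]
	// token.LEQ case (the false branch of >): [-∞, upper]
	fmt.Println(vrp.NewIntInterval(vrp.NInfinity, r.Upper)) // [-∞, 5]
}
```
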
diff --git a/vendor/honnef.co/go/tools/staticcheck/vrp/vrp.go b/vendor/honnef.co/go/tools/staticcheck/vrp/vrp.go
new file mode 100644
index 0000000000000..3c138e51229a5
--- /dev/null
+++ b/vendor/honnef.co/go/tools/staticcheck/vrp/vrp.go
@@ -0,0 +1,1056 @@
+package vrp
+
+// TODO(dh): widening and narrowing have a lot of code in common. Make
+// it reusable.
+
+import (
+ "fmt"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "math/big"
+ "sort"
+ "strings"
+
+ "honnef.co/go/tools/lint"
+ "honnef.co/go/tools/ssa"
+)
+
+type Future interface {
+ Constraint
+ Futures() []ssa.Value
+ Resolve()
+ IsKnown() bool
+ MarkUnresolved()
+ MarkResolved()
+ IsResolved() bool
+}
+
+type Range interface {
+ Union(other Range) Range
+ IsKnown() bool
+}
+
+type Constraint interface {
+ Y() ssa.Value
+ isConstraint()
+ String() string
+ Eval(*Graph) Range
+ Operands() []ssa.Value
+}
+
+type aConstraint struct {
+ y ssa.Value
+}
+
+func NewConstraint(y ssa.Value) aConstraint {
+ return aConstraint{y}
+}
+
+func (aConstraint) isConstraint() {}
+func (c aConstraint) Y() ssa.Value { return c.y }
+
+type PhiConstraint struct {
+ aConstraint
+ Vars []ssa.Value
+}
+
+func NewPhiConstraint(vars []ssa.Value, y ssa.Value) Constraint {
+ uniqm := map[ssa.Value]struct{}{}
+ for _, v := range vars {
+ uniqm[v] = struct{}{}
+ }
+ var uniq []ssa.Value
+ for v := range uniqm {
+ uniq = append(uniq, v)
+ }
+ return &PhiConstraint{
+ aConstraint: NewConstraint(y),
+ Vars: uniq,
+ }
+}
+
+func (c *PhiConstraint) Operands() []ssa.Value {
+ return c.Vars
+}
+
+func (c *PhiConstraint) Eval(g *Graph) Range {
+ i := Range(nil)
+ for _, v := range c.Vars {
+ i = g.Range(v).Union(i)
+ }
+ return i
+}
+
+func (c *PhiConstraint) String() string {
+ names := make([]string, len(c.Vars))
+ for i, v := range c.Vars {
+ names[i] = v.Name()
+ }
+ return fmt.Sprintf("%s = φ(%s)", c.Y().Name(), strings.Join(names, ", "))
+}
+
+func isSupportedType(typ types.Type) bool {
+ switch typ := typ.Underlying().(type) {
+ case *types.Basic:
+ switch typ.Kind() {
+ case types.String, types.UntypedString:
+ return true
+ default:
+ if (typ.Info() & types.IsInteger) == 0 {
+ return false
+ }
+ }
+ case *types.Chan:
+ return true
+ case *types.Slice:
+ return true
+ default:
+ return false
+ }
+ return true
+}
+
+func ConstantToZ(c constant.Value) Z {
+ s := constant.ToInt(c).ExactString()
+ n := &big.Int{}
+ n.SetString(s, 10)
+ return NewBigZ(n)
+}
+
+func sigmaInteger(g *Graph, ins *ssa.Sigma, cond *ssa.BinOp, ops []*ssa.Value) Constraint {
+ op := cond.Op
+ if !ins.Branch {
+ op = (invertToken(op))
+ }
+
+ switch op {
+ case token.EQL, token.GTR, token.GEQ, token.LSS, token.LEQ:
+ default:
+ return nil
+ }
+ var a, b ssa.Value
+ if (*ops[0]) == ins.X {
+ a = *ops[0]
+ b = *ops[1]
+ } else {
+ a = *ops[1]
+ b = *ops[0]
+ op = flipToken(op)
+ }
+ return NewIntIntersectionConstraint(a, b, op, g.ranges, ins)
+}
+
+func sigmaString(g *Graph, ins *ssa.Sigma, cond *ssa.BinOp, ops []*ssa.Value) Constraint {
+ op := cond.Op
+ if !ins.Branch {
+ op = (invertToken(op))
+ }
+
+ switch op {
+ case token.EQL, token.GTR, token.GEQ, token.LSS, token.LEQ:
+ default:
+ return nil
+ }
+
+ if ((*ops[0]).Type().Underlying().(*types.Basic).Info() & types.IsString) == 0 {
+ var a, b ssa.Value
+ call, ok := (*ops[0]).(*ssa.Call)
+ if ok && call.Common().Args[0] == ins.X {
+ a = *ops[0]
+ b = *ops[1]
+ } else {
+ a = *ops[1]
+ b = *ops[0]
+ op = flipToken(op)
+ }
+ return NewStringIntersectionConstraint(a, b, op, g.ranges, ins)
+ }
+ var a, b ssa.Value
+ if (*ops[0]) == ins.X {
+ a = *ops[0]
+ b = *ops[1]
+ } else {
+ a = *ops[1]
+ b = *ops[0]
+ op = flipToken(op)
+ }
+ return NewStringIntersectionConstraint(a, b, op, g.ranges, ins)
+}
+
+func sigmaSlice(g *Graph, ins *ssa.Sigma, cond *ssa.BinOp, ops []*ssa.Value) Constraint {
+ // TODO(dh): sigmaSlice and sigmaString are a lot alike. Can they
+ // be merged?
+ //
+ // XXX support futures
+
+ op := cond.Op
+ if !ins.Branch {
+ op = (invertToken(op))
+ }
+
+ k, ok := (*ops[1]).(*ssa.Const)
+ // XXX investigate in what cases this wouldn't be a Const
+ //
+ // XXX what if left and right are swapped?
+ if !ok {
+ return nil
+ }
+
+ call, ok := (*ops[0]).(*ssa.Call)
+ if !ok {
+ return nil
+ }
+ builtin, ok := call.Common().Value.(*ssa.Builtin)
+ if !ok {
+ return nil
+ }
+ if builtin.Name() != "len" {
+ return nil
+ }
+ callops := call.Operands(nil)
+
+ v := ConstantToZ(k.Value)
+ c := NewSliceIntersectionConstraint(*callops[1], IntInterval{}, ins).(*SliceIntersectionConstraint)
+ switch op {
+ case token.EQL:
+ c.I = NewIntInterval(v, v)
+ case token.GTR, token.GEQ:
+ off := int64(0)
+ if op == token.GTR {
+ off = 1
+ }
+ c.I = NewIntInterval(
+ v.Add(NewZ(off)),
+ PInfinity,
+ )
+ case token.LSS, token.LEQ:
+ off := int64(0)
+ if op == token.LSS {
+ off = -1
+ }
+ c.I = NewIntInterval(
+ NInfinity,
+ v.Add(NewZ(off)),
+ )
+ default:
+ return nil
+ }
+ return c
+}
+
+func BuildGraph(f *ssa.Function) *Graph {
+ g := &Graph{
+ Vertices: map[interface{}]*Vertex{},
+ ranges: Ranges{},
+ }
+
+ var cs []Constraint
+
+ ops := make([]*ssa.Value, 16)
+ seen := map[ssa.Value]bool{}
+ for _, block := range f.Blocks {
+ for _, ins := range block.Instrs {
+ ops = ins.Operands(ops[:0])
+ for _, op := range ops {
+ if c, ok := (*op).(*ssa.Const); ok {
+ if seen[c] {
+ continue
+ }
+ seen[c] = true
+ if c.Value == nil {
+ switch c.Type().Underlying().(type) {
+ case *types.Slice:
+ cs = append(cs, NewSliceIntervalConstraint(NewIntInterval(NewZ(0), NewZ(0)), c))
+ }
+ continue
+ }
+ switch c.Value.Kind() {
+ case constant.Int:
+ v := ConstantToZ(c.Value)
+ cs = append(cs, NewIntIntervalConstraint(NewIntInterval(v, v), c))
+ case constant.String:
+ s := constant.StringVal(c.Value)
+ n := NewZ(int64(len(s)))
+ cs = append(cs, NewStringIntervalConstraint(NewIntInterval(n, n), c))
+ }
+ }
+ }
+ }
+ }
+ for _, block := range f.Blocks {
+ for _, ins := range block.Instrs {
+ switch ins := ins.(type) {
+ case *ssa.Convert:
+ switch v := ins.Type().Underlying().(type) {
+ case *types.Basic:
+ if (v.Info() & types.IsInteger) == 0 {
+ continue
+ }
+ cs = append(cs, NewIntConversionConstraint(ins.X, ins))
+ }
+ case *ssa.Call:
+ if static := ins.Common().StaticCallee(); static != nil {
+ if fn, ok := static.Object().(*types.Func); ok {
+ switch lint.FuncName(fn) {
+ case "bytes.Index", "bytes.IndexAny", "bytes.IndexByte",
+ "bytes.IndexFunc", "bytes.IndexRune", "bytes.LastIndex",
+ "bytes.LastIndexAny", "bytes.LastIndexByte", "bytes.LastIndexFunc",
+ "strings.Index", "strings.IndexAny", "strings.IndexByte",
+ "strings.IndexFunc", "strings.IndexRune", "strings.LastIndex",
+ "strings.LastIndexAny", "strings.LastIndexByte", "strings.LastIndexFunc":
+ // TODO(dh): instead of limiting by +∞,
+ // limit by the upper bound of the passed
+ // string
+ cs = append(cs, NewIntIntervalConstraint(NewIntInterval(NewZ(-1), PInfinity), ins))
+ case "bytes.Title", "bytes.ToLower", "bytes.ToTitle", "bytes.ToUpper",
+ "strings.Title", "strings.ToLower", "strings.ToTitle", "strings.ToUpper":
+ cs = append(cs, NewCopyConstraint(ins.Common().Args[0], ins))
+ case "bytes.ToLowerSpecial", "bytes.ToTitleSpecial", "bytes.ToUpperSpecial",
+ "strings.ToLowerSpecial", "strings.ToTitleSpecial", "strings.ToUpperSpecial":
+ cs = append(cs, NewCopyConstraint(ins.Common().Args[1], ins))
+ case "bytes.Compare", "strings.Compare":
+ cs = append(cs, NewIntIntervalConstraint(NewIntInterval(NewZ(-1), NewZ(1)), ins))
+ case "bytes.Count", "strings.Count":
+ // TODO(dh): instead of limiting by +∞,
+ // limit by the upper bound of the passed
+ // string.
+ cs = append(cs, NewIntIntervalConstraint(NewIntInterval(NewZ(0), PInfinity), ins))
+ case "bytes.Map", "bytes.TrimFunc", "bytes.TrimLeft", "bytes.TrimLeftFunc",
+ "bytes.TrimRight", "bytes.TrimRightFunc", "bytes.TrimSpace",
+ "strings.Map", "strings.TrimFunc", "strings.TrimLeft", "strings.TrimLeftFunc",
+ "strings.TrimRight", "strings.TrimRightFunc", "strings.TrimSpace":
+ // TODO(dh): lower = 0, upper = upper of passed string
+ case "bytes.TrimPrefix", "bytes.TrimSuffix",
+ "strings.TrimPrefix", "strings.TrimSuffix":
+ // TODO(dh) range between "unmodified" and len(cutset) removed
+ case "(*bytes.Buffer).Cap", "(*bytes.Buffer).Len", "(*bytes.Reader).Len", "(*bytes.Reader).Size":
+ cs = append(cs, NewIntIntervalConstraint(NewIntInterval(NewZ(0), PInfinity), ins))
+ }
+ }
+ }
+ builtin, ok := ins.Common().Value.(*ssa.Builtin)
+ ops := ins.Operands(nil)
+ if !ok {
+ continue
+ }
+ switch builtin.Name() {
+ case "len":
+ switch op1 := (*ops[1]).Type().Underlying().(type) {
+ case *types.Basic:
+ if op1.Kind() == types.String || op1.Kind() == types.UntypedString {
+ cs = append(cs, NewStringLengthConstraint(*ops[1], ins))
+ }
+ case *types.Slice:
+ cs = append(cs, NewSliceLengthConstraint(*ops[1], ins))
+ }
+
+ case "append":
+ cs = append(cs, NewSliceAppendConstraint(ins.Common().Args[0], ins.Common().Args[1], ins))
+ }
+ case *ssa.BinOp:
+ ops := ins.Operands(nil)
+ basic, ok := (*ops[0]).Type().Underlying().(*types.Basic)
+ if !ok {
+ continue
+ }
+ switch basic.Kind() {
+ case types.Int, types.Int8, types.Int16, types.Int32, types.Int64,
+ types.Uint, types.Uint8, types.Uint16, types.Uint32, types.Uint64, types.UntypedInt:
+ fns := map[token.Token]func(ssa.Value, ssa.Value, ssa.Value) Constraint{
+ token.ADD: NewIntAddConstraint,
+ token.SUB: NewIntSubConstraint,
+ token.MUL: NewIntMulConstraint,
+ // XXX support QUO, REM, SHL, SHR
+ }
+ fn, ok := fns[ins.Op]
+ if ok {
+ cs = append(cs, fn(*ops[0], *ops[1], ins))
+ }
+ case types.String, types.UntypedString:
+ if ins.Op == token.ADD {
+ cs = append(cs, NewStringConcatConstraint(*ops[0], *ops[1], ins))
+ }
+ }
+ case *ssa.Slice:
+ typ := ins.X.Type().Underlying()
+ switch typ := typ.(type) {
+ case *types.Basic:
+ cs = append(cs, NewStringSliceConstraint(ins.X, ins.Low, ins.High, ins))
+ case *types.Slice:
+ cs = append(cs, NewSliceSliceConstraint(ins.X, ins.Low, ins.High, ins))
+ case *types.Array:
+ cs = append(cs, NewArraySliceConstraint(ins.X, ins.Low, ins.High, ins))
+ case *types.Pointer:
+ if _, ok := typ.Elem().(*types.Array); !ok {
+ continue
+ }
+ cs = append(cs, NewArraySliceConstraint(ins.X, ins.Low, ins.High, ins))
+ }
+ case *ssa.Phi:
+ if !isSupportedType(ins.Type()) {
+ continue
+ }
+ ops := ins.Operands(nil)
+ dops := make([]ssa.Value, len(ops))
+ for i, op := range ops {
+ dops[i] = *op
+ }
+ cs = append(cs, NewPhiConstraint(dops, ins))
+ case *ssa.Sigma:
+ pred := ins.Block().Preds[0]
+ instrs := pred.Instrs
+ cond, ok := instrs[len(instrs)-1].(*ssa.If).Cond.(*ssa.BinOp)
+ ops := cond.Operands(nil)
+ if !ok {
+ continue
+ }
+ switch typ := ins.Type().Underlying().(type) {
+ case *types.Basic:
+ var c Constraint
+ switch typ.Kind() {
+ case types.Int, types.Int8, types.Int16, types.Int32, types.Int64,
+ types.Uint, types.Uint8, types.Uint16, types.Uint32, types.Uint64, types.UntypedInt:
+ c = sigmaInteger(g, ins, cond, ops)
+ case types.String, types.UntypedString:
+ c = sigmaString(g, ins, cond, ops)
+ }
+ if c != nil {
+ cs = append(cs, c)
+ }
+ case *types.Slice:
+ c := sigmaSlice(g, ins, cond, ops)
+ if c != nil {
+ cs = append(cs, c)
+ }
+ default:
+ //log.Printf("unsupported sigma type %T", typ) // XXX
+ }
+ case *ssa.MakeChan:
+ cs = append(cs, NewMakeChannelConstraint(ins.Size, ins))
+ case *ssa.MakeSlice:
+ cs = append(cs, NewMakeSliceConstraint(ins.Len, ins))
+ case *ssa.ChangeType:
+ switch ins.X.Type().Underlying().(type) {
+ case *types.Chan:
+ cs = append(cs, NewChannelChangeTypeConstraint(ins.X, ins))
+ }
+ }
+ }
+ }
+
+ for _, c := range cs {
+ if c == nil {
+ panic("nil constraint")
+ }
+ // If V is used in constraint C, then we create an edge V->C
+ for _, op := range c.Operands() {
+ g.AddEdge(op, c, false)
+ }
+ if c, ok := c.(Future); ok {
+ for _, op := range c.Futures() {
+ g.AddEdge(op, c, true)
+ }
+ }
+ // If constraint C defines variable V, then we create an edge
+ // C->V
+ g.AddEdge(c, c.Y(), false)
+ }
+
+ g.FindSCCs()
+ g.sccEdges = make([][]Edge, len(g.SCCs))
+ g.futures = make([][]Future, len(g.SCCs))
+ for _, e := range g.Edges {
+ g.sccEdges[e.From.SCC] = append(g.sccEdges[e.From.SCC], e)
+ if !e.control {
+ continue
+ }
+ if c, ok := e.To.Value.(Future); ok {
+ g.futures[e.From.SCC] = append(g.futures[e.From.SCC], c)
+ }
+ }
+ return g
+}
+
+func (g *Graph) Solve() Ranges {
+ var consts []Z
+ off := NewZ(1)
+ for _, n := range g.Vertices {
+ if c, ok := n.Value.(*ssa.Const); ok {
+ basic, ok := c.Type().Underlying().(*types.Basic)
+ if !ok {
+ continue
+ }
+ if (basic.Info() & types.IsInteger) != 0 {
+ z := ConstantToZ(c.Value)
+ consts = append(consts, z)
+ consts = append(consts, z.Add(off))
+ consts = append(consts, z.Sub(off))
+ }
+ }
+
+ }
+ sort.Sort(Zs(consts))
+
+ for scc, vertices := range g.SCCs {
+ n := len(vertices)
+ if n == 1 {
+ g.resolveFutures(scc)
+ v := vertices[0]
+ if v, ok := v.Value.(ssa.Value); ok {
+ switch typ := v.Type().Underlying().(type) {
+ case *types.Basic:
+ switch typ.Kind() {
+ case types.String, types.UntypedString:
+ if !g.Range(v).(StringInterval).IsKnown() {
+ g.SetRange(v, StringInterval{NewIntInterval(NewZ(0), PInfinity)})
+ }
+ default:
+ if !g.Range(v).(IntInterval).IsKnown() {
+ g.SetRange(v, InfinityFor(v))
+ }
+ }
+ case *types.Chan:
+ if !g.Range(v).(ChannelInterval).IsKnown() {
+ g.SetRange(v, ChannelInterval{NewIntInterval(NewZ(0), PInfinity)})
+ }
+ case *types.Slice:
+ if !g.Range(v).(SliceInterval).IsKnown() {
+ g.SetRange(v, SliceInterval{NewIntInterval(NewZ(0), PInfinity)})
+ }
+ }
+ }
+ if c, ok := v.Value.(Constraint); ok {
+ g.SetRange(c.Y(), c.Eval(g))
+ }
+ } else {
+ uses := g.uses(scc)
+ entries := g.entries(scc)
+ for len(entries) > 0 {
+ v := entries[len(entries)-1]
+ entries = entries[:len(entries)-1]
+ for _, use := range uses[v] {
+ if g.widen(use, consts) {
+ entries = append(entries, use.Y())
+ }
+ }
+ }
+
+ g.resolveFutures(scc)
+
+ // XXX this seems to be necessary, but shouldn't be.
+ // removing it leads to nil pointer derefs; investigate
+ // where we're not setting values correctly.
+ for _, n := range vertices {
+ if v, ok := n.Value.(ssa.Value); ok {
+ i, ok := g.Range(v).(IntInterval)
+ if !ok {
+ continue
+ }
+ if !i.IsKnown() {
+ g.SetRange(v, InfinityFor(v))
+ }
+ }
+ }
+
+ actives := g.actives(scc)
+ for len(actives) > 0 {
+ v := actives[len(actives)-1]
+ actives = actives[:len(actives)-1]
+ for _, use := range uses[v] {
+ if g.narrow(use) {
+ actives = append(actives, use.Y())
+ }
+ }
+ }
+ }
+ // propagate scc
+ for _, edge := range g.sccEdges[scc] {
+ if edge.control {
+ continue
+ }
+ if edge.From.SCC == edge.To.SCC {
+ continue
+ }
+ if c, ok := edge.To.Value.(Constraint); ok {
+ g.SetRange(c.Y(), c.Eval(g))
+ }
+ if c, ok := edge.To.Value.(Future); ok {
+ if !c.IsKnown() {
+ c.MarkUnresolved()
+ }
+ }
+ }
+ }
+
+ for v, r := range g.ranges {
+ i, ok := r.(IntInterval)
+ if !ok {
+ continue
+ }
+ if (v.Type().Underlying().(*types.Basic).Info() & types.IsUnsigned) == 0 {
+ if i.Upper != PInfinity {
+ s := &types.StdSizes{
+ // XXX is it okay to assume the largest word size, or do we
+ // need to be platform specific?
+ WordSize: 8,
+ MaxAlign: 1,
+ }
+ bits := (s.Sizeof(v.Type()) * 8) - 1
+ n := big.NewInt(1)
+ n = n.Lsh(n, uint(bits))
+ upper, lower := &big.Int{}, &big.Int{}
+ upper.Sub(n, big.NewInt(1))
+ lower.Neg(n)
+
+ if i.Upper.Cmp(NewBigZ(upper)) == 1 {
+ i = NewIntInterval(NInfinity, PInfinity)
+ } else if i.Lower.Cmp(NewBigZ(lower)) == -1 {
+ i = NewIntInterval(NInfinity, PInfinity)
+ }
+ }
+ }
+
+ g.ranges[v] = i
+ }
+
+ return g.ranges
+}
+
+func VertexString(v *Vertex) string {
+ switch v := v.Value.(type) {
+ case Constraint:
+ return v.String()
+ case ssa.Value:
+ return v.Name()
+ case nil:
+ return "BUG: nil vertex value"
+ default:
+ panic(fmt.Sprintf("unexpected type %T", v))
+ }
+}
+
+type Vertex struct {
+ Value interface{} // one of Constraint or ssa.Value
+ SCC int
+ index int
+ lowlink int
+ stack bool
+
+ Succs []Edge
+}
+
+type Ranges map[ssa.Value]Range
+
+func (r Ranges) Get(x ssa.Value) Range {
+ if x == nil {
+ return nil
+ }
+ i, ok := r[x]
+ if !ok {
+ switch x := x.Type().Underlying().(type) {
+ case *types.Basic:
+ switch x.Kind() {
+ case types.String, types.UntypedString:
+ return StringInterval{}
+ default:
+ return IntInterval{}
+ }
+ case *types.Chan:
+ return ChannelInterval{}
+ case *types.Slice:
+ return SliceInterval{}
+ }
+ }
+ return i
+}
+
+type Graph struct {
+ Vertices map[interface{}]*Vertex
+ Edges []Edge
+ SCCs [][]*Vertex
+ ranges Ranges
+
+ // map SCCs to futures
+ futures [][]Future
+ // map SCCs to edges
+ sccEdges [][]Edge
+}
+
+func (g Graph) Graphviz() string {
+ var lines []string
+ lines = append(lines, "digraph{")
+ ids := map[interface{}]int{}
+ i := 1
+ for _, v := range g.Vertices {
+ ids[v] = i
+ shape := "box"
+ if _, ok := v.Value.(ssa.Value); ok {
+ shape = "oval"
+ }
+ lines = append(lines, fmt.Sprintf(`n%d [shape="%s", label=%q, colorscheme=spectral11, style="filled", fillcolor="%d"]`,
+ i, shape, VertexString(v), (v.SCC%11)+1))
+ i++
+ }
+ for _, e := range g.Edges {
+ style := "solid"
+ if e.control {
+ style = "dashed"
+ }
+ lines = append(lines, fmt.Sprintf(`n%d -> n%d [style="%s"]`, ids[e.From], ids[e.To], style))
+ }
+ lines = append(lines, "}")
+ return strings.Join(lines, "\n")
+}
+
+func (g *Graph) SetRange(x ssa.Value, r Range) {
+ g.ranges[x] = r
+}
+
+func (g *Graph) Range(x ssa.Value) Range {
+ return g.ranges.Get(x)
+}
+
+func (g *Graph) widen(c Constraint, consts []Z) bool {
+ setRange := func(i Range) {
+ g.SetRange(c.Y(), i)
+ }
+ widenIntInterval := func(oi, ni IntInterval) (IntInterval, bool) {
+ if !ni.IsKnown() {
+ return oi, false
+ }
+ nlc := NInfinity
+ nuc := PInfinity
+
+ // Don't get stuck widening for an absurd amount of time due
+ // to an excess number of constants, as may be present in
+ // table-based scanners.
+ if len(consts) < 1000 {
+ for _, co := range consts {
+ if co.Cmp(ni.Lower) <= 0 {
+ nlc = co
+ break
+ }
+ }
+ for _, co := range consts {
+ if co.Cmp(ni.Upper) >= 0 {
+ nuc = co
+ break
+ }
+ }
+ }
+
+ if !oi.IsKnown() {
+ return ni, true
+ }
+ if ni.Lower.Cmp(oi.Lower) == -1 && ni.Upper.Cmp(oi.Upper) == 1 {
+ return NewIntInterval(nlc, nuc), true
+ }
+ if ni.Lower.Cmp(oi.Lower) == -1 {
+ return NewIntInterval(nlc, oi.Upper), true
+ }
+ if ni.Upper.Cmp(oi.Upper) == 1 {
+ return NewIntInterval(oi.Lower, nuc), true
+ }
+ return oi, false
+ }
+ switch oi := g.Range(c.Y()).(type) {
+ case IntInterval:
+ ni := c.Eval(g).(IntInterval)
+ si, changed := widenIntInterval(oi, ni)
+ if changed {
+ setRange(si)
+ return true
+ }
+ return false
+ case StringInterval:
+ ni := c.Eval(g).(StringInterval)
+ si, changed := widenIntInterval(oi.Length, ni.Length)
+ if changed {
+ setRange(StringInterval{si})
+ return true
+ }
+ return false
+ case SliceInterval:
+ ni := c.Eval(g).(SliceInterval)
+ si, changed := widenIntInterval(oi.Length, ni.Length)
+ if changed {
+ setRange(SliceInterval{si})
+ return true
+ }
+ return false
+ default:
+ return false
+ }
+}
+
+func (g *Graph) narrow(c Constraint) bool {
+ narrowIntInterval := func(oi, ni IntInterval) (IntInterval, bool) {
+ oLower := oi.Lower
+ oUpper := oi.Upper
+ nLower := ni.Lower
+ nUpper := ni.Upper
+
+ if oLower == NInfinity && nLower != NInfinity {
+ return NewIntInterval(nLower, oUpper), true
+ }
+ if oUpper == PInfinity && nUpper != PInfinity {
+ return NewIntInterval(oLower, nUpper), true
+ }
+ if oLower.Cmp(nLower) == 1 {
+ return NewIntInterval(nLower, oUpper), true
+ }
+ if oUpper.Cmp(nUpper) == -1 {
+ return NewIntInterval(oLower, nUpper), true
+ }
+ return oi, false
+ }
+ switch oi := g.Range(c.Y()).(type) {
+ case IntInterval:
+ ni := c.Eval(g).(IntInterval)
+ si, changed := narrowIntInterval(oi, ni)
+ if changed {
+ g.SetRange(c.Y(), si)
+ return true
+ }
+ return false
+ case StringInterval:
+ ni := c.Eval(g).(StringInterval)
+ si, changed := narrowIntInterval(oi.Length, ni.Length)
+ if changed {
+ g.SetRange(c.Y(), StringInterval{si})
+ return true
+ }
+ return false
+ case SliceInterval:
+ ni := c.Eval(g).(SliceInterval)
+ si, changed := narrowIntInterval(oi.Length, ni.Length)
+ if changed {
+ g.SetRange(c.Y(), SliceInterval{si})
+ return true
+ }
+ return false
+ default:
+ return false
+ }
+}
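
The widen/narrow pair above is the usual two-phase fixpoint of range analysis: widening jumps bounds outward (to a nearby known constant, or to infinity) so loops terminate quickly, and narrowing then pulls the bounds back in where the constraints allow. A minimal sketch of the jump-set idea for a single upper bound, detached from the ssa machinery (the types and constants here are illustrative, not the package's own):

```go
package main

import (
	"fmt"
	"math"
)

// widenUpper jumps the upper bound to the smallest known constant that
// still covers the new value, falling back to "infinity" (MaxInt64 here)
// when no constant is large enough. consts must be sorted ascending.
func widenUpper(old, new int64, consts []int64) int64 {
	if new <= old {
		return old
	}
	for _, c := range consts {
		if c >= new {
			return c
		}
	}
	return math.MaxInt64
}

func main() {
	consts := []int64{0, 10, 100} // mirrors the consts slice built in Solve
	upper := int64(0)
	for _, seen := range []int64{1, 3, 7, 42} {
		upper = widenUpper(upper, seen, consts)
		fmt.Println(upper) // 10, 10, 10, 100 — bounds jump, never creep
	}
}
```
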
+
+func (g *Graph) resolveFutures(scc int) {
+ for _, c := range g.futures[scc] {
+ c.Resolve()
+ }
+}
+
+func (g *Graph) entries(scc int) []ssa.Value {
+ var entries []ssa.Value
+ for _, n := range g.Vertices {
+ if n.SCC != scc {
+ continue
+ }
+ if v, ok := n.Value.(ssa.Value); ok {
+ // XXX avoid quadratic runtime
+ //
+ // XXX I cannot think of any code where the future and its
+ // variables aren't in the same SCC, in which case this
+ // code isn't very useful (the variables won't be resolved
+ // yet). Until we have a cross-SCC example, however, we
+ // can't really verify that this code is working
+ // correctly, or indeed doing anything useful.
+ for _, on := range g.Vertices {
+ if c, ok := on.Value.(Future); ok {
+ if c.Y() == v {
+ if !c.IsResolved() {
+ g.SetRange(c.Y(), c.Eval(g))
+ c.MarkResolved()
+ }
+ break
+ }
+ }
+ }
+ if g.Range(v).IsKnown() {
+ entries = append(entries, v)
+ }
+ }
+ }
+ return entries
+}
+
+func (g *Graph) uses(scc int) map[ssa.Value][]Constraint {
+ m := map[ssa.Value][]Constraint{}
+ for _, e := range g.sccEdges[scc] {
+ if e.control {
+ continue
+ }
+ if v, ok := e.From.Value.(ssa.Value); ok {
+ c := e.To.Value.(Constraint)
+ sink := c.Y()
+ if g.Vertices[sink].SCC == scc {
+ m[v] = append(m[v], c)
+ }
+ }
+ }
+ return m
+}
+
+func (g *Graph) actives(scc int) []ssa.Value {
+ var actives []ssa.Value
+ for _, n := range g.Vertices {
+ if n.SCC != scc {
+ continue
+ }
+ if v, ok := n.Value.(ssa.Value); ok {
+ if _, ok := v.(*ssa.Const); !ok {
+ actives = append(actives, v)
+ }
+ }
+ }
+ return actives
+}
+
+func (g *Graph) AddEdge(from, to interface{}, ctrl bool) {
+ vf, ok := g.Vertices[from]
+ if !ok {
+ vf = &Vertex{Value: from}
+ g.Vertices[from] = vf
+ }
+ vt, ok := g.Vertices[to]
+ if !ok {
+ vt = &Vertex{Value: to}
+ g.Vertices[to] = vt
+ }
+ e := Edge{From: vf, To: vt, control: ctrl}
+ g.Edges = append(g.Edges, e)
+ vf.Succs = append(vf.Succs, e)
+}
+
+type Edge struct {
+ From, To *Vertex
+ control bool
+}
+
+func (e Edge) String() string {
+ return fmt.Sprintf("%s -> %s", VertexString(e.From), VertexString(e.To))
+}
+
+func (g *Graph) FindSCCs() {
+ // use Tarjan to find the SCCs
+
+ index := 1
+ var s []*Vertex
+
+ scc := 0
+ var strongconnect func(v *Vertex)
+ strongconnect = func(v *Vertex) {
+ // set the depth index for v to the smallest unused index
+ v.index = index
+ v.lowlink = index
+ index++
+ s = append(s, v)
+ v.stack = true
+
+ for _, e := range v.Succs {
+ w := e.To
+ if w.index == 0 {
+ // successor w has not yet been visited; recurse on it
+ strongconnect(w)
+ if w.lowlink < v.lowlink {
+ v.lowlink = w.lowlink
+ }
+ } else if w.stack {
+ // successor w is in stack s and hence in the current scc
+ if w.index < v.lowlink {
+ v.lowlink = w.index
+ }
+ }
+ }
+
+ if v.lowlink == v.index {
+ for {
+ w := s[len(s)-1]
+ s = s[:len(s)-1]
+ w.stack = false
+ w.SCC = scc
+ if w == v {
+ break
+ }
+ }
+ scc++
+ }
+ }
+ for _, v := range g.Vertices {
+ if v.index == 0 {
+ strongconnect(v)
+ }
+ }
+
+ g.SCCs = make([][]*Vertex, scc)
+ for _, n := range g.Vertices {
+ n.SCC = scc - n.SCC - 1
+ g.SCCs[n.SCC] = append(g.SCCs[n.SCC], n)
+ }
+}
+
+func invertToken(tok token.Token) token.Token {
+ switch tok {
+ case token.LSS:
+ return token.GEQ
+ case token.GTR:
+ return token.LEQ
+ case token.EQL:
+ return token.NEQ
+ case token.NEQ:
+ return token.EQL
+ case token.GEQ:
+ return token.LSS
+ case token.LEQ:
+ return token.GTR
+ default:
+ panic(fmt.Sprintf("unsupported token %s", tok))
+ }
+}
+
+func flipToken(tok token.Token) token.Token {
+ switch tok {
+ case token.LSS:
+ return token.GTR
+ case token.GTR:
+ return token.LSS
+ case token.EQL:
+ return token.EQL
+ case token.NEQ:
+ return token.NEQ
+ case token.GEQ:
+ return token.LEQ
+ case token.LEQ:
+ return token.GEQ
+ default:
+ panic(fmt.Sprintf("unsupported token %s", tok))
+ }
+}
+
+type CopyConstraint struct {
+ aConstraint
+ X ssa.Value
+}
+
+func (c *CopyConstraint) String() string {
+ return fmt.Sprintf("%s = copy(%s)", c.Y().Name(), c.X.Name())
+}
+
+func (c *CopyConstraint) Eval(g *Graph) Range {
+ return g.Range(c.X)
+}
+
+func (c *CopyConstraint) Operands() []ssa.Value {
+ return []ssa.Value{c.X}
+}
+
+func NewCopyConstraint(x, y ssa.Value) Constraint {
+ return &CopyConstraint{
+ aConstraint: aConstraint{
+ y: y,
+ },
+ X: x,
+ }
+}
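
End to end, this file implements a constraint-graph value-range analysis: build vertices for values and constraints, find SCCs, widen within each SCC, resolve futures, narrow, then propagate across SCC edges. A hedged driver sketch — the builder's name and signature (BuildGraph(f *ssa.Function) *Graph) and the import path are assumed from the truncated top of the file and staticcheck's layout, not confirmed here:

```go
package main

import (
	"fmt"

	"honnef.co/go/tools/ssa"
	"honnef.co/go/tools/staticcheck/vrp" // assumed import path for this file
)

// rangeOf runs the analysis over one function and reads back the interval
// inferred for a single value. Ranges.Get falls back to an unknown interval
// of the right kind (int, string, slice, channel) for untracked values.
func rangeOf(fn *ssa.Function, v ssa.Value) vrp.Range {
	g := vrp.BuildGraph(fn) // assumed builder: collect constraints, find SCCs
	ranges := g.Solve()     // widen, resolve futures, narrow, propagate
	return ranges.Get(v)
}

func main() {
	fmt.Println("rangeOf is a sketch; wire it to an *ssa.Function to use it")
}
```
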
diff --git a/vendor/honnef.co/go/tools/stylecheck/analysis.go b/vendor/honnef.co/go/tools/stylecheck/analysis.go
new file mode 100644
index 0000000000000..f252487f73572
--- /dev/null
+++ b/vendor/honnef.co/go/tools/stylecheck/analysis.go
@@ -0,0 +1,111 @@
+package stylecheck
+
+import (
+ "flag"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/analysis/passes/inspect"
+ "honnef.co/go/tools/config"
+ "honnef.co/go/tools/facts"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/lint/lintutil"
+)
+
+func newFlagSet() flag.FlagSet {
+ fs := flag.NewFlagSet("", flag.PanicOnError)
+ fs.Var(lintutil.NewVersionFlag(), "go", "Target Go version")
+ return *fs
+}
+
+var Analyzers = map[string]*analysis.Analyzer{
+ "ST1000": {
+ Name: "ST1000",
+ Run: CheckPackageComment,
+ Doc: Docs["ST1000"].String(),
+ Requires: []*analysis.Analyzer{},
+ Flags: newFlagSet(),
+ },
+ "ST1001": {
+ Name: "ST1001",
+ Run: CheckDotImports,
+ Doc: Docs["ST1001"].String(),
+ Requires: []*analysis.Analyzer{facts.Generated, config.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1003": {
+ Name: "ST1003",
+ Run: CheckNames,
+ Doc: Docs["ST1003"].String(),
+ Requires: []*analysis.Analyzer{facts.Generated, config.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1005": {
+ Name: "ST1005",
+ Run: CheckErrorStrings,
+ Doc: Docs["ST1005"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1006": {
+ Name: "ST1006",
+ Run: CheckReceiverNames,
+ Doc: Docs["ST1006"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer, facts.Generated},
+ Flags: newFlagSet(),
+ },
+ "ST1008": {
+ Name: "ST1008",
+ Run: CheckErrorReturn,
+ Doc: Docs["ST1008"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1011": {
+ Name: "ST1011",
+ Run: CheckTimeNames,
+ Doc: Docs["ST1011"].String(),
+ Flags: newFlagSet(),
+ },
+ "ST1012": {
+ Name: "ST1012",
+ Run: CheckErrorVarNames,
+ Doc: Docs["ST1012"].String(),
+ Requires: []*analysis.Analyzer{config.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1013": {
+ Name: "ST1013",
+ Run: CheckHTTPStatusCodes,
+ Doc: Docs["ST1013"].String(),
+ Requires: []*analysis.Analyzer{facts.Generated, facts.TokenFile, config.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1015": {
+ Name: "ST1015",
+ Run: CheckDefaultCaseOrder,
+ Doc: Docs["ST1015"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated, facts.TokenFile},
+ Flags: newFlagSet(),
+ },
+ "ST1016": {
+ Name: "ST1016",
+ Run: CheckReceiverNamesIdentical,
+ Doc: Docs["ST1016"].String(),
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ Flags: newFlagSet(),
+ },
+ "ST1017": {
+ Name: "ST1017",
+ Run: CheckYodaConditions,
+ Doc: Docs["ST1017"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer, facts.Generated, facts.TokenFile},
+ Flags: newFlagSet(),
+ },
+ "ST1018": {
+ Name: "ST1018",
+ Run: CheckInvisibleCharacters,
+ Doc: Docs["ST1018"].String(),
+ Requires: []*analysis.Analyzer{inspect.Analyzer},
+ Flags: newFlagSet(),
+ },
+}
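
Because every entry is a plain *analysis.Analyzer, each check can be run on its own with the standard x/tools drivers; singlechecker resolves the Requires chain (inspect, facts, config) automatically. A minimal sketch:

```go
package main

import (
	"golang.org/x/tools/go/analysis/singlechecker"

	"honnef.co/go/tools/stylecheck"
)

func main() {
	// Run only ST1015 (default case ordering) over the packages given on
	// the command line, e.g. `go run . ./...`.
	singlechecker.Main(stylecheck.Analyzers["ST1015"])
}
```
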
diff --git a/vendor/honnef.co/go/tools/stylecheck/doc.go b/vendor/honnef.co/go/tools/stylecheck/doc.go
new file mode 100644
index 0000000000000..9097214d9bf5b
--- /dev/null
+++ b/vendor/honnef.co/go/tools/stylecheck/doc.go
@@ -0,0 +1,154 @@
+package stylecheck
+
+import "honnef.co/go/tools/lint"
+
+var Docs = map[string]*lint.Documentation{
+ "ST1000": &lint.Documentation{
+ Title: `Incorrect or missing package comment`,
+ Text: `Packages must have a package comment that is formatted according to
+the guidelines laid out in
+https://github.com/golang/go/wiki/CodeReviewComments#package-comments.`,
+ Since: "2019.1",
+ NonDefault: true,
+ },
+
+ "ST1001": &lint.Documentation{
+ Title: `Dot imports are discouraged`,
+ Text: `Dot imports that aren't in external test packages are discouraged.
+
+The dot_import_whitelist option can be used to whitelist certain
+imports.
+
+Quoting Go Code Review Comments:
+
+ The import . form can be useful in tests that, due to circular
+ dependencies, cannot be made part of the package being tested:
+
+ package foo_test
+
+ import (
+ "bar/testutil" // also imports "foo"
+ . "foo"
+ )
+
+ In this case, the test file cannot be in package foo because it
+ uses bar/testutil, which imports foo. So we use the 'import .'
+ form to let the file pretend to be part of package foo even though
+ it is not. Except for this one case, do not use import . in your
+ programs. It makes the programs much harder to read because it is
+ unclear whether a name like Quux is a top-level identifier in the
+ current package or in an imported package.`,
+ Since: "2019.1",
+ Options: []string{"dot_import_whitelist"},
+ },
+
+ "ST1003": &lint.Documentation{
+ Title: `Poorly chosen identifier`,
+ Text: `Identifiers, such as variable and package names, follow certain rules.
+
+See the following links for details:
+
+- https://golang.org/doc/effective_go.html#package-names
+- https://golang.org/doc/effective_go.html#mixed-caps
+- https://github.com/golang/go/wiki/CodeReviewComments#initialisms
+- https://github.com/golang/go/wiki/CodeReviewComments#variable-names`,
+ Since: "2019.1",
+ NonDefault: true,
+ Options: []string{"initialisms"},
+ },
+
+ "ST1005": &lint.Documentation{
+ Title: `Incorrectly formatted error string`,
+ Text: `Error strings follow a set of guidelines to ensure uniformity and good
+composability.
+
+Quoting Go Code Review Comments:
+
+ Error strings should not be capitalized (unless beginning with
+ proper nouns or acronyms) or end with punctuation, since they are
+ usually printed following other context. That is, use
+ fmt.Errorf("something bad") not fmt.Errorf("Something bad"), so
+ that log.Printf("Reading %s: %v", filename, err) formats without a
+ spurious capital letter mid-message.`,
+ Since: "2019.1",
+ },
+
+ "ST1006": &lint.Documentation{
+ Title: `Poorly chosen receiver name`,
+ Text: `Quoting Go Code Review Comments:
+
+ The name of a method's receiver should be a reflection of its
+ identity; often a one or two letter abbreviation of its type
+ suffices (such as "c" or "cl" for "Client"). Don't use generic
+ names such as "me", "this" or "self", identifiers typical of
+ object-oriented languages that place more emphasis on methods as
+ opposed to functions. The name need not be as descriptive as that
+ of a method argument, as its role is obvious and serves no
+ documentary purpose. It can be very short as it will appear on
+ almost every line of every method of the type; familiarity admits
+ brevity. Be consistent, too: if you call the receiver "c" in one
+ method, don't call it "cl" in another.`,
+ Since: "2019.1",
+ },
+
+ "ST1008": &lint.Documentation{
+ Title: `A function's error value should be its last return value`,
+ Text: `A function's error value should be its last return value.`,
+ Since: `2019.1`,
+ },
+
+ "ST1011": &lint.Documentation{
+ Title: `Poorly chosen name for variable of type time.Duration`,
+ Text: `time.Duration values represent an amount of time, which is represented
+as a count of nanoseconds. An expression like 5 * time.Microsecond
+yields the value 5000. It is therefore not appropriate to suffix a
+variable of type time.Duration with any time unit, such as Msec or
+Milli.`,
+ Since: `2019.1`,
+ },
+
+ "ST1012": &lint.Documentation{
+ Title: `Poorly chosen name for error variable`,
+ Text: `Error variables that are part of an API should be called errFoo or
+ErrFoo.`,
+ Since: "2019.1",
+ },
+
+ "ST1013": &lint.Documentation{
+ Title: `Should use constants for HTTP error codes, not magic numbers`,
+ Text: `HTTP has a tremendous number of status codes. While some of those are
+well known (200, 400, 404, 500), most of them are not. The net/http
+package provides constants for all status codes that are part of the
+various specifications. It is recommended to use these constants
+instead of hard-coding magic numbers, to vastly improve the
+readability of your code.`,
+ Since: "2019.1",
+ Options: []string{"http_status_code_whitelist"},
+ },
+
+ "ST1015": &lint.Documentation{
+ Title: `A switch's default case should be the first or last case`,
+ Since: "2019.1",
+ },
+
+ "ST1016": &lint.Documentation{
+ Title: `Use consistent method receiver names`,
+ Since: "2019.1",
+ NonDefault: true,
+ },
+
+ "ST1017": &lint.Documentation{
+ Title: `Don't use Yoda conditions`,
+ Text: `Yoda conditions are conditions of the kind 'if 42 == x', where the
+literal is on the left side of the comparison. These are a common
+idiom in languages in which assignment is an expression, to avoid bugs
+of the kind 'if (x = 42)'. In Go, which doesn't allow for this kind of
+bug, we prefer the more idiomatic 'if x == 42'.`,
+ Since: "2019.2",
+ },
+
+ "ST1018": &lint.Documentation{
+ Title: `Avoid zero-width and control characters in string literals`,
+ Since: "2019.2",
+ },
+}
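
The ST1011 text is easy to confirm: a time.Duration is a count of nanoseconds, so a unit suffix in the variable name contradicts the stored value. For example:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	var timeoutMsec time.Duration = 5 * time.Microsecond // ST1011 flags the Msec suffix
	fmt.Println(int64(timeoutMsec))                      // prints 5000 (nanoseconds), not 5
}
```
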
diff --git a/vendor/honnef.co/go/tools/stylecheck/lint.go b/vendor/honnef.co/go/tools/stylecheck/lint.go
new file mode 100644
index 0000000000000..1699d5898c077
--- /dev/null
+++ b/vendor/honnef.co/go/tools/stylecheck/lint.go
@@ -0,0 +1,629 @@
+package stylecheck // import "honnef.co/go/tools/stylecheck"
+
+import (
+ "fmt"
+ "go/ast"
+ "go/constant"
+ "go/token"
+ "go/types"
+ "strconv"
+ "strings"
+ "unicode"
+ "unicode/utf8"
+
+ "honnef.co/go/tools/config"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ . "honnef.co/go/tools/lint/lintdsl"
+ "honnef.co/go/tools/ssa"
+
+ "golang.org/x/tools/go/analysis"
+ "golang.org/x/tools/go/analysis/passes/inspect"
+ "golang.org/x/tools/go/ast/inspector"
+ "golang.org/x/tools/go/types/typeutil"
+)
+
+func CheckPackageComment(pass *analysis.Pass) (interface{}, error) {
+ // - At least one file in a non-main package should have a package comment
+ //
+ // - The comment should be of the form
+ // "Package x ...". This has a slight potential for false
+ // positives, as multiple files can have package comments, in
+ // which case they get appended. But that doesn't happen a lot in
+ // the real world.
+
+ if pass.Pkg.Name() == "main" {
+ return nil, nil
+ }
+ hasDocs := false
+ for _, f := range pass.Files {
+ if IsInTest(pass, f) {
+ continue
+ }
+ if f.Doc != nil && len(f.Doc.List) > 0 {
+ hasDocs = true
+ prefix := "Package " + f.Name.Name + " "
+ if !strings.HasPrefix(strings.TrimSpace(f.Doc.Text()), prefix) {
+ ReportNodef(pass, f.Doc, `package comment should be of the form "%s..."`, prefix)
+ }
+ }
+ }
+
+ if !hasDocs {
+ for _, f := range pass.Files {
+ if IsInTest(pass, f) {
+ continue
+ }
+ ReportNodef(pass, f, "at least one file in a package should have a package comment")
+ }
+ }
+ return nil, nil
+}
+
+func CheckDotImports(pass *analysis.Pass) (interface{}, error) {
+ for _, f := range pass.Files {
+ imports:
+ for _, imp := range f.Imports {
+ path := imp.Path.Value
+ path = path[1 : len(path)-1]
+ for _, w := range config.For(pass).DotImportWhitelist {
+ if w == path {
+ continue imports
+ }
+ }
+
+ if imp.Name != nil && imp.Name.Name == "." && !IsInTest(pass, f) {
+ ReportNodefFG(pass, imp, "should not use dot imports")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckBlankImports(pass *analysis.Pass) (interface{}, error) {
+ fset := pass.Fset
+ for _, f := range pass.Files {
+ if IsInMain(pass, f) || IsInTest(pass, f) {
+ continue
+ }
+
+ // Collect imports of the form `import _ "foo"`, i.e. with no
+ // parentheses, as their comment will be associated with the
+ // (paren-free) GenDecl, not the import spec itself.
+ //
+ // We don't directly process the GenDecl so that we can
+ // correctly handle the following:
+ //
+ // import _ "foo"
+ // import _ "bar"
+ //
+ // where only the first import should get flagged.
+ skip := map[ast.Spec]bool{}
+ ast.Inspect(f, func(node ast.Node) bool {
+ switch node := node.(type) {
+ case *ast.File:
+ return true
+ case *ast.GenDecl:
+ if node.Tok != token.IMPORT {
+ return false
+ }
+ if node.Lparen == token.NoPos && node.Doc != nil {
+ skip[node.Specs[0]] = true
+ }
+ return false
+ }
+ return false
+ })
+ for i, imp := range f.Imports {
+ pos := fset.Position(imp.Pos())
+
+ if !IsBlank(imp.Name) {
+ continue
+ }
+ // Only flag the first blank import in a group of imports;
+ // don't flag any of them if the first one is commented.
+ if i > 0 {
+ prev := f.Imports[i-1]
+ prevPos := fset.Position(prev.Pos())
+ if pos.Line-1 == prevPos.Line && IsBlank(prev.Name) {
+ continue
+ }
+ }
+
+ if imp.Doc == nil && imp.Comment == nil && !skip[imp] {
+ ReportNodef(pass, imp, "a blank import should be only in a main or test package, or have a comment justifying it")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckIncDec(pass *analysis.Pass) (interface{}, error) {
+ // TODO(dh): this can be noisy for function bodies that look like this:
+ // x += 3
+ // ...
+ // x += 2
+ // ...
+ // x += 1
+ fn := func(node ast.Node) {
+ assign := node.(*ast.AssignStmt)
+ if assign.Tok != token.ADD_ASSIGN && assign.Tok != token.SUB_ASSIGN {
+ return
+ }
+ if (len(assign.Lhs) != 1 || len(assign.Rhs) != 1) ||
+ !IsIntLiteral(assign.Rhs[0], "1") {
+ return
+ }
+
+ suffix := ""
+ switch assign.Tok {
+ case token.ADD_ASSIGN:
+ suffix = "++"
+ case token.SUB_ASSIGN:
+ suffix = "--"
+ }
+
+ ReportNodef(pass, assign, "should replace %s with %s%s", Render(pass, assign), Render(pass, assign.Lhs[0]), suffix)
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.AssignStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckErrorReturn(pass *analysis.Pass) (interface{}, error) {
+fnLoop:
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ sig := fn.Type().(*types.Signature)
+ rets := sig.Results()
+ if rets == nil || rets.Len() < 2 {
+ continue
+ }
+
+ if rets.At(rets.Len()-1).Type() == types.Universe.Lookup("error").Type() {
+ // Last return type is error. If the function also returns
+ // errors in other positions, that's fine.
+ continue
+ }
+ for i := rets.Len() - 2; i >= 0; i-- {
+ if rets.At(i).Type() == types.Universe.Lookup("error").Type() {
+ pass.Reportf(rets.At(i).Pos(), "error should be returned as the last argument")
+ continue fnLoop
+ }
+ }
+ }
+ return nil, nil
+}
+
+// CheckUnexportedReturn checks that exported functions on exported
+// types do not return unexported types.
+func CheckUnexportedReturn(pass *analysis.Pass) (interface{}, error) {
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if fn.Synthetic != "" || fn.Parent() != nil {
+ continue
+ }
+ if !ast.IsExported(fn.Name()) || IsInMain(pass, fn) || IsInTest(pass, fn) {
+ continue
+ }
+ sig := fn.Type().(*types.Signature)
+ if sig.Recv() != nil && !ast.IsExported(Dereference(sig.Recv().Type()).(*types.Named).Obj().Name()) {
+ continue
+ }
+ res := sig.Results()
+ for i := 0; i < res.Len(); i++ {
+ if named, ok := DereferenceR(res.At(i).Type()).(*types.Named); ok &&
+ !ast.IsExported(named.Obj().Name()) &&
+ named != types.Universe.Lookup("error").Type() {
+ pass.Reportf(fn.Pos(), "should not return unexported type")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckReceiverNames(pass *analysis.Pass) (interface{}, error) {
+ ssapkg := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).Pkg
+ for _, m := range ssapkg.Members {
+ if T, ok := m.Object().(*types.TypeName); ok && !T.IsAlias() {
+ ms := typeutil.IntuitiveMethodSet(T.Type(), nil)
+ for _, sel := range ms {
+ fn := sel.Obj().(*types.Func)
+ recv := fn.Type().(*types.Signature).Recv()
+ if Dereference(recv.Type()) != T.Type() {
+ // skip embedded methods
+ continue
+ }
+ if recv.Name() == "self" || recv.Name() == "this" {
+ ReportfFG(pass, recv.Pos(), `receiver name should be a reflection of its identity; don't use generic names such as "this" or "self"`)
+ }
+ if recv.Name() == "_" {
+ ReportfFG(pass, recv.Pos(), "receiver name should not be an underscore, omit the name if it is unused")
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckReceiverNamesIdentical(pass *analysis.Pass) (interface{}, error) {
+ ssapkg := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).Pkg
+ for _, m := range ssapkg.Members {
+ names := map[string]int{}
+
+ var firstFn *types.Func
+ if T, ok := m.Object().(*types.TypeName); ok && !T.IsAlias() {
+ ms := typeutil.IntuitiveMethodSet(T.Type(), nil)
+ for _, sel := range ms {
+ fn := sel.Obj().(*types.Func)
+ recv := fn.Type().(*types.Signature).Recv()
+ if Dereference(recv.Type()) != T.Type() {
+ // skip embedded methods
+ continue
+ }
+ if firstFn == nil {
+ firstFn = fn
+ }
+ if recv.Name() != "" && recv.Name() != "_" {
+ names[recv.Name()]++
+ }
+ }
+ }
+
+ if len(names) > 1 {
+ var seen []string
+ for name, count := range names {
+ seen = append(seen, fmt.Sprintf("%dx %q", count, name))
+ }
+
+ pass.Reportf(firstFn.Pos(), "methods on the same type should have the same receiver name (seen %s)", strings.Join(seen, ", "))
+ }
+ }
+ return nil, nil
+}
+
+func CheckContextFirstArg(pass *analysis.Pass) (interface{}, error) {
+ // TODO(dh): this check doesn't apply to test helpers. Example from the stdlib:
+ // func helperCommandContext(t *testing.T, ctx context.Context, s ...string) (cmd *exec.Cmd) {
+fnLoop:
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if fn.Synthetic != "" || fn.Parent() != nil {
+ continue
+ }
+ params := fn.Signature.Params()
+ if params.Len() < 2 {
+ continue
+ }
+ if types.TypeString(params.At(0).Type(), nil) == "context.Context" {
+ continue
+ }
+ for i := 1; i < params.Len(); i++ {
+ param := params.At(i)
+ if types.TypeString(param.Type(), nil) == "context.Context" {
+ pass.Reportf(param.Pos(), "context.Context should be the first argument of a function")
+ continue fnLoop
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckErrorStrings(pass *analysis.Pass) (interface{}, error) {
+ objNames := map[*ssa.Package]map[string]bool{}
+ ssapkg := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).Pkg
+ objNames[ssapkg] = map[string]bool{}
+ for _, m := range ssapkg.Members {
+ if typ, ok := m.(*ssa.Type); ok {
+ objNames[ssapkg][typ.Name()] = true
+ }
+ }
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ objNames[fn.Package()][fn.Name()] = true
+ }
+
+ for _, fn := range pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA).SrcFuncs {
+ if IsInTest(pass, fn) {
+ // We don't care about malformed error messages in tests;
+ // they're usually for direct human consumption, not part
+ // of an API
+ continue
+ }
+ for _, block := range fn.Blocks {
+ instrLoop:
+ for _, ins := range block.Instrs {
+ call, ok := ins.(*ssa.Call)
+ if !ok {
+ continue
+ }
+ if !IsCallTo(call.Common(), "errors.New") && !IsCallTo(call.Common(), "fmt.Errorf") {
+ continue
+ }
+
+ k, ok := call.Common().Args[0].(*ssa.Const)
+ if !ok {
+ continue
+ }
+
+ s := constant.StringVal(k.Value)
+ if len(s) == 0 {
+ continue
+ }
+ switch s[len(s)-1] {
+ case '.', ':', '!', '\n':
+ pass.Reportf(call.Pos(), "error strings should not end with punctuation or a newline")
+ }
+ idx := strings.IndexByte(s, ' ')
+ if idx == -1 {
+ // single word error message, probably not a real
+ // error but something used in tests or during
+ // debugging
+ continue
+ }
+ word := s[:idx]
+ first, n := utf8.DecodeRuneInString(word)
+ if !unicode.IsUpper(first) {
+ continue
+ }
+ for _, c := range word[n:] {
+ if unicode.IsUpper(c) {
+ // Word is probably an initialism or
+ // multi-word function name
+ continue instrLoop
+ }
+ }
+
+ word = strings.TrimRightFunc(word, func(r rune) bool { return unicode.IsPunct(r) })
+ if objNames[fn.Package()][word] {
+ // Word is probably the name of a function or type in this package
+ continue
+ }
+ // First word in error starts with a capital
+ // letter, and the word doesn't contain any other
+ // capitals, making it unlikely to be an
+ // initialism or multi-word function name.
+ //
+ // It could still be a proper noun, though.
+
+ pass.Reportf(call.Pos(), "error strings should not be capitalized")
+ }
+ }
+ }
+ return nil, nil
+}
+
+func CheckTimeNames(pass *analysis.Pass) (interface{}, error) {
+ suffixes := []string{
+ "Sec", "Secs", "Seconds",
+ "Msec", "Msecs",
+ "Milli", "Millis", "Milliseconds",
+ "Usec", "Usecs", "Microseconds",
+ "MS", "Ms",
+ }
+ fn := func(T types.Type, names []*ast.Ident) {
+ if !IsType(T, "time.Duration") && !IsType(T, "*time.Duration") {
+ return
+ }
+ for _, name := range names {
+ for _, suffix := range suffixes {
+ if strings.HasSuffix(name.Name, suffix) {
+ ReportNodef(pass, name, "var %s is of type %v; don't use unit-specific suffix %q", name.Name, T, suffix)
+ break
+ }
+ }
+ }
+ }
+ for _, f := range pass.Files {
+ ast.Inspect(f, func(node ast.Node) bool {
+ switch node := node.(type) {
+ case *ast.ValueSpec:
+ T := pass.TypesInfo.TypeOf(node.Type)
+ fn(T, node.Names)
+ case *ast.FieldList:
+ for _, field := range node.List {
+ T := pass.TypesInfo.TypeOf(field.Type)
+ fn(T, field.Names)
+ }
+ }
+ return true
+ })
+ }
+ return nil, nil
+}
+
+func CheckErrorVarNames(pass *analysis.Pass) (interface{}, error) {
+ for _, f := range pass.Files {
+ for _, decl := range f.Decls {
+ gen, ok := decl.(*ast.GenDecl)
+ if !ok || gen.Tok != token.VAR {
+ continue
+ }
+ for _, spec := range gen.Specs {
+ spec := spec.(*ast.ValueSpec)
+ if len(spec.Names) != len(spec.Values) {
+ continue
+ }
+
+ for i, name := range spec.Names {
+ val := spec.Values[i]
+ if !IsCallToAST(pass, val, "errors.New") && !IsCallToAST(pass, val, "fmt.Errorf") {
+ continue
+ }
+
+ prefix := "err"
+ if name.IsExported() {
+ prefix = "Err"
+ }
+ if !strings.HasPrefix(name.Name, prefix) {
+ ReportNodef(pass, name, "error var %s should have name of the form %sFoo", name.Name, prefix)
+ }
+ }
+ }
+ }
+ }
+ return nil, nil
+}
+
+var httpStatusCodes = map[int]string{
+ 100: "StatusContinue",
+ 101: "StatusSwitchingProtocols",
+ 102: "StatusProcessing",
+ 200: "StatusOK",
+ 201: "StatusCreated",
+ 202: "StatusAccepted",
+ 203: "StatusNonAuthoritativeInfo",
+ 204: "StatusNoContent",
+ 205: "StatusResetContent",
+ 206: "StatusPartialContent",
+ 207: "StatusMultiStatus",
+ 208: "StatusAlreadyReported",
+ 226: "StatusIMUsed",
+ 300: "StatusMultipleChoices",
+ 301: "StatusMovedPermanently",
+ 302: "StatusFound",
+ 303: "StatusSeeOther",
+ 304: "StatusNotModified",
+ 305: "StatusUseProxy",
+ 307: "StatusTemporaryRedirect",
+ 308: "StatusPermanentRedirect",
+ 400: "StatusBadRequest",
+ 401: "StatusUnauthorized",
+ 402: "StatusPaymentRequired",
+ 403: "StatusForbidden",
+ 404: "StatusNotFound",
+ 405: "StatusMethodNotAllowed",
+ 406: "StatusNotAcceptable",
+ 407: "StatusProxyAuthRequired",
+ 408: "StatusRequestTimeout",
+ 409: "StatusConflict",
+ 410: "StatusGone",
+ 411: "StatusLengthRequired",
+ 412: "StatusPreconditionFailed",
+ 413: "StatusRequestEntityTooLarge",
+ 414: "StatusRequestURITooLong",
+ 415: "StatusUnsupportedMediaType",
+ 416: "StatusRequestedRangeNotSatisfiable",
+ 417: "StatusExpectationFailed",
+ 418: "StatusTeapot",
+ 422: "StatusUnprocessableEntity",
+ 423: "StatusLocked",
+ 424: "StatusFailedDependency",
+ 426: "StatusUpgradeRequired",
+ 428: "StatusPreconditionRequired",
+ 429: "StatusTooManyRequests",
+ 431: "StatusRequestHeaderFieldsTooLarge",
+ 451: "StatusUnavailableForLegalReasons",
+ 500: "StatusInternalServerError",
+ 501: "StatusNotImplemented",
+ 502: "StatusBadGateway",
+ 503: "StatusServiceUnavailable",
+ 504: "StatusGatewayTimeout",
+ 505: "StatusHTTPVersionNotSupported",
+ 506: "StatusVariantAlsoNegotiates",
+ 507: "StatusInsufficientStorage",
+ 508: "StatusLoopDetected",
+ 510: "StatusNotExtended",
+ 511: "StatusNetworkAuthenticationRequired",
+}
+
+func CheckHTTPStatusCodes(pass *analysis.Pass) (interface{}, error) {
+ whitelist := map[string]bool{}
+ for _, code := range config.For(pass).HTTPStatusCodeWhitelist {
+ whitelist[code] = true
+ }
+ fn := func(node ast.Node) bool {
+ if node == nil {
+ return true
+ }
+ call, ok := node.(*ast.CallExpr)
+ if !ok {
+ return true
+ }
+
+ var arg int
+ switch CallNameAST(pass, call) {
+ case "net/http.Error":
+ arg = 2
+ case "net/http.Redirect":
+ arg = 3
+ case "net/http.StatusText":
+ arg = 0
+ case "net/http.RedirectHandler":
+ arg = 1
+ default:
+ return true
+ }
+ lit, ok := call.Args[arg].(*ast.BasicLit)
+ if !ok {
+ return true
+ }
+ if whitelist[lit.Value] {
+ return true
+ }
+
+ n, err := strconv.Atoi(lit.Value)
+ if err != nil {
+ return true
+ }
+ s, ok := httpStatusCodes[n]
+ if !ok {
+ return true
+ }
+ ReportNodefFG(pass, lit, "should use constant http.%s instead of numeric literal %d", s, n)
+ return true
+ }
+ // OPT(dh): replace with inspector
+ for _, f := range pass.Files {
+ ast.Inspect(f, fn)
+ }
+ return nil, nil
+}
+
+func CheckDefaultCaseOrder(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ stmt := node.(*ast.SwitchStmt)
+ list := stmt.Body.List
+ for i, c := range list {
+ if c.(*ast.CaseClause).List == nil && i != 0 && i != len(list)-1 {
+ ReportNodefFG(pass, c, "default case should be first or last in switch statement")
+ break
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.SwitchStmt)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckYodaConditions(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ cond := node.(*ast.BinaryExpr)
+ if cond.Op != token.EQL && cond.Op != token.NEQ {
+ return
+ }
+ if _, ok := cond.X.(*ast.BasicLit); !ok {
+ return
+ }
+ if _, ok := cond.Y.(*ast.BasicLit); ok {
+ // Don't flag lit == lit conditions, just in case
+ return
+ }
+ ReportNodefFG(pass, cond, "don't use Yoda conditions")
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BinaryExpr)(nil)}, fn)
+ return nil, nil
+}
+
+func CheckInvisibleCharacters(pass *analysis.Pass) (interface{}, error) {
+ fn := func(node ast.Node) {
+ lit := node.(*ast.BasicLit)
+ if lit.Kind != token.STRING {
+ return
+ }
+ for _, r := range lit.Value {
+ if unicode.Is(unicode.Cf, r) {
+ ReportNodef(pass, lit, "string literal contains the Unicode format character %U, consider using the %q escape sequence", r, r)
+ } else if unicode.Is(unicode.Cc, r) && r != '\n' && r != '\t' && r != '\r' {
+ ReportNodef(pass, lit, "string literal contains the Unicode control character %U, consider using the %q escape sequence", r, r)
+ }
+ }
+ }
+ pass.ResultOf[inspect.Analyzer].(*inspector.Inspector).Preorder([]ast.Node{(*ast.BasicLit)(nil)}, fn)
+ return nil, nil
+}
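
For reference, a small snippet that trips several of the checks above; the diagnostics named in the comments are the ones hard-coded in the functions:

```go
package demo

import "errors"

// CheckErrorStrings: capitalized first word and trailing punctuation.
var errBad = errors.New("Something failed.")

// CheckReceiverNames: generic receiver name.
type counter int

func (self counter) value() int { return int(self) }

// CheckYodaConditions: literal on the left of ==.
func isAnswer(x int) bool { return 42 == x }
```
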
diff --git a/vendor/honnef.co/go/tools/stylecheck/names.go b/vendor/honnef.co/go/tools/stylecheck/names.go
new file mode 100644
index 0000000000000..160f9d7ff71bc
--- /dev/null
+++ b/vendor/honnef.co/go/tools/stylecheck/names.go
@@ -0,0 +1,264 @@
+// Copyright (c) 2013 The Go Authors. All rights reserved.
+// Copyright (c) 2018 Dominik Honnef. All rights reserved.
+
+package stylecheck
+
+import (
+ "go/ast"
+ "go/token"
+ "strings"
+ "unicode"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/config"
+ . "honnef.co/go/tools/lint/lintdsl"
+)
+
+// knownNameExceptions is a set of names that are known to be exempt from naming checks.
+// This is usually because they are constrained by having to match names in the
+// standard library.
+var knownNameExceptions = map[string]bool{
+ "LastInsertId": true, // must match database/sql
+ "kWh": true,
+}
+
+func CheckNames(pass *analysis.Pass) (interface{}, error) {
+ // A large part of this function is copied from
+ // github.com/golang/lint, Copyright (c) 2013 The Go Authors,
+ // licensed under the BSD 3-clause license.
+
+ allCaps := func(s string) bool {
+ for _, r := range s {
+ if !((r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '_') {
+ return false
+ }
+ }
+ return true
+ }
+
+ check := func(id *ast.Ident, thing string, initialisms map[string]bool) {
+ if id.Name == "_" {
+ return
+ }
+ if knownNameExceptions[id.Name] {
+ return
+ }
+
+ // Handle two common styles from other languages that don't belong in Go.
+ if len(id.Name) >= 5 && allCaps(id.Name) && strings.Contains(id.Name, "_") {
+ ReportfFG(pass, id.Pos(), "should not use ALL_CAPS in Go names; use CamelCase instead")
+ return
+ }
+
+ should := lintName(id.Name, initialisms)
+ if id.Name == should {
+ return
+ }
+
+ if len(id.Name) > 2 && strings.Contains(id.Name[1:len(id.Name)-1], "_") {
+ ReportfFG(pass, id.Pos(), "should not use underscores in Go names; %s %s should be %s", thing, id.Name, should)
+ return
+ }
+ ReportfFG(pass, id.Pos(), "%s %s should be %s", thing, id.Name, should)
+ }
+ checkList := func(fl *ast.FieldList, thing string, initialisms map[string]bool) {
+ if fl == nil {
+ return
+ }
+ for _, f := range fl.List {
+ for _, id := range f.Names {
+ check(id, thing, initialisms)
+ }
+ }
+ }
+
+ il := config.For(pass).Initialisms
+ initialisms := make(map[string]bool, len(il))
+ for _, word := range il {
+ initialisms[word] = true
+ }
+ for _, f := range pass.Files {
+ // Package names need slightly different handling than other names.
+ if !strings.HasSuffix(f.Name.Name, "_test") && strings.Contains(f.Name.Name, "_") {
+ ReportfFG(pass, f.Pos(), "should not use underscores in package names")
+ }
+ if strings.IndexFunc(f.Name.Name, unicode.IsUpper) != -1 {
+ ReportfFG(pass, f.Pos(), "should not use MixedCaps in package name; %s should be %s", f.Name.Name, strings.ToLower(f.Name.Name))
+ }
+
+ ast.Inspect(f, func(node ast.Node) bool {
+ switch v := node.(type) {
+ case *ast.AssignStmt:
+ if v.Tok != token.DEFINE {
+ return true
+ }
+ for _, exp := range v.Lhs {
+ if id, ok := exp.(*ast.Ident); ok {
+ check(id, "var", initialisms)
+ }
+ }
+ case *ast.FuncDecl:
+ // Functions with no body are defined elsewhere (in
+ // assembly, or via go:linkname). These are likely to
+ // be something very low level (such as the runtime),
+ // where our rules don't apply.
+ if v.Body == nil {
+ return true
+ }
+
+ if IsInTest(pass, v) && (strings.HasPrefix(v.Name.Name, "Example") || strings.HasPrefix(v.Name.Name, "Test") || strings.HasPrefix(v.Name.Name, "Benchmark")) {
+ return true
+ }
+
+ thing := "func"
+ if v.Recv != nil {
+ thing = "method"
+ }
+
+ if !isTechnicallyExported(v) {
+ check(v.Name, thing, initialisms)
+ }
+
+ checkList(v.Type.Params, thing+" parameter", initialisms)
+ checkList(v.Type.Results, thing+" result", initialisms)
+ case *ast.GenDecl:
+ if v.Tok == token.IMPORT {
+ return true
+ }
+ var thing string
+ switch v.Tok {
+ case token.CONST:
+ thing = "const"
+ case token.TYPE:
+ thing = "type"
+ case token.VAR:
+ thing = "var"
+ }
+ for _, spec := range v.Specs {
+ switch s := spec.(type) {
+ case *ast.TypeSpec:
+ check(s.Name, thing, initialisms)
+ case *ast.ValueSpec:
+ for _, id := range s.Names {
+ check(id, thing, initialisms)
+ }
+ }
+ }
+ case *ast.InterfaceType:
+ // Do not check interface method names.
+ // They are often constrained by the method names of concrete types.
+ for _, x := range v.Methods.List {
+ ft, ok := x.Type.(*ast.FuncType)
+ if !ok { // might be an embedded interface name
+ continue
+ }
+ checkList(ft.Params, "interface method parameter", initialisms)
+ checkList(ft.Results, "interface method result", initialisms)
+ }
+ case *ast.RangeStmt:
+ if v.Tok == token.ASSIGN {
+ return true
+ }
+ if id, ok := v.Key.(*ast.Ident); ok {
+ check(id, "range var", initialisms)
+ }
+ if id, ok := v.Value.(*ast.Ident); ok {
+ check(id, "range var", initialisms)
+ }
+ case *ast.StructType:
+ for _, f := range v.Fields.List {
+ for _, id := range f.Names {
+ check(id, "struct field", initialisms)
+ }
+ }
+ }
+ return true
+ })
+ }
+ return nil, nil
+}
+
+// lintName returns a different name if it should be different.
+func lintName(name string, initialisms map[string]bool) (should string) {
+ // A large part of this function is copied from
+ // github.com/golang/lint, Copyright (c) 2013 The Go Authors,
+ // licensed under the BSD 3-clause license.
+
+ // Fast path for simple cases: "_" and all lowercase.
+ if name == "_" {
+ return name
+ }
+ if strings.IndexFunc(name, func(r rune) bool { return !unicode.IsLower(r) }) == -1 {
+ return name
+ }
+
+ // Split camelCase at any lower->upper transition, and split on underscores.
+ // Check each word for common initialisms.
+ runes := []rune(name)
+ w, i := 0, 0 // index of start of word, scan
+ for i+1 <= len(runes) {
+ eow := false // whether we hit the end of a word
+ if i+1 == len(runes) {
+ eow = true
+ } else if runes[i+1] == '_' && i+1 != len(runes)-1 {
+ // underscore; shift the remainder forward over any run of underscores
+ eow = true
+ n := 1
+ for i+n+1 < len(runes) && runes[i+n+1] == '_' {
+ n++
+ }
+
+ // Leave at most one underscore if the underscore is between two digits
+ if i+n+1 < len(runes) && unicode.IsDigit(runes[i]) && unicode.IsDigit(runes[i+n+1]) {
+ n--
+ }
+
+ copy(runes[i+1:], runes[i+n+1:])
+ runes = runes[:len(runes)-n]
+ } else if unicode.IsLower(runes[i]) && !unicode.IsLower(runes[i+1]) {
+ // lower->non-lower
+ eow = true
+ }
+ i++
+ if !eow {
+ continue
+ }
+
+ // [w,i) is a word.
+ word := string(runes[w:i])
+ if u := strings.ToUpper(word); initialisms[u] {
+ // Keep consistent case, which is lowercase only at the start.
+ if w == 0 && unicode.IsLower(runes[w]) {
+ u = strings.ToLower(u)
+ }
+ // All the common initialisms are ASCII,
+ // so we can replace the bytes exactly.
+ // TODO(dh): this won't be true once we allow custom initialisms
+ copy(runes[w:], []rune(u))
+ } else if w > 0 && strings.ToLower(word) == word {
+ // already all lowercase, and not the first word, so uppercase the first character.
+ runes[w] = unicode.ToUpper(runes[w])
+ }
+ w = i
+ }
+ return string(runes)
+}
+
+func isTechnicallyExported(f *ast.FuncDecl) bool {
+ if f.Recv != nil || f.Doc == nil {
+ return false
+ }
+
+ const export = "//export "
+ const linkname = "//go:linkname "
+ for _, c := range f.Doc.List {
+ if strings.HasPrefix(c.Text, export) && len(c.Text) == len(export)+len(f.Name.Name) && c.Text[len(export):] == f.Name.Name {
+ return true
+ }
+
+ if strings.HasPrefix(c.Text, linkname) {
+ return true
+ }
+ }
+ return false
+}
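
lintName's splitting rules are easiest to see on concrete inputs. A sketch of a table test (it would have to live in this package, since lintName is unexported; the initialisms set is the caller's, normally taken from the config's initialisms option):

```go
package stylecheck

import "testing"

func TestLintNameSketch(t *testing.T) {
	inits := map[string]bool{"ID": true, "URL": true}
	cases := map[string]string{
		"foo_bar_id": "fooBarID", // underscores split words; id becomes the initialism ID
		"parseUrl":   "parseURL", // lower->upper transitions split words too
		"_":          "_",        // fast path: blank identifier is left alone
	}
	for in, want := range cases {
		if got := lintName(in, inits); got != want {
			t.Errorf("lintName(%q) = %q, want %q", in, got, want)
		}
	}
}
```
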
diff --git a/vendor/honnef.co/go/tools/unused/edge.go b/vendor/honnef.co/go/tools/unused/edge.go
new file mode 100644
index 0000000000000..02e0d09cf2ae9
--- /dev/null
+++ b/vendor/honnef.co/go/tools/unused/edge.go
@@ -0,0 +1,54 @@
+package unused
+
+//go:generate stringer -type edgeKind
+type edgeKind uint64
+
+func (e edgeKind) is(o edgeKind) bool {
+ return e&o != 0
+}
+
+const (
+ edgeAlias edgeKind = 1 << iota
+ edgeBlankField
+ edgeAnonymousStruct
+ edgeCgoExported
+ edgeConstGroup
+ edgeElementType
+ edgeEmbeddedInterface
+ edgeExportedConstant
+ edgeExportedField
+ edgeExportedFunction
+ edgeExportedMethod
+ edgeExportedType
+ edgeExportedVariable
+ edgeExtendsExportedFields
+ edgeExtendsExportedMethodSet
+ edgeFieldAccess
+ edgeFunctionArgument
+ edgeFunctionResult
+ edgeFunctionSignature
+ edgeImplements
+ edgeInstructionOperand
+ edgeInterfaceCall
+ edgeInterfaceMethod
+ edgeKeyType
+ edgeLinkname
+ edgeMainFunction
+ edgeNamedType
+ edgeNetRPCRegister
+ edgeNoCopySentinel
+ edgeProvidesMethod
+ edgeReceiver
+ edgeRuntimeFunction
+ edgeSignature
+ edgeStructConversion
+ edgeTestSink
+ edgeTupleElement
+ edgeType
+ edgeTypeName
+ edgeUnderlyingType
+ edgePointerType
+ edgeUnsafeConversion
+ edgeUsedConstant
+ edgeVarDecl
+)
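
Each edgeKind is a distinct bit, so one edge can record several reasons at once and be queried with is. A tiny illustration (it uses the unexported names, so it would sit inside this package):

```go
package unused

import "fmt"

func exampleEdgeKind() {
	k := edgeFieldAccess | edgeInstructionOperand // one edge, two reasons
	fmt.Println(k.is(edgeFieldAccess))            // true
	fmt.Println(k.is(edgeAlias))                  // false
	// Combined bits aren't in the generated stringer map, so String
	// falls back to the numeric form: edgeKind(1081344).
	fmt.Println(k)
}
```
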
diff --git a/vendor/honnef.co/go/tools/unused/edgekind_string.go b/vendor/honnef.co/go/tools/unused/edgekind_string.go
new file mode 100644
index 0000000000000..7629636cf13b2
--- /dev/null
+++ b/vendor/honnef.co/go/tools/unused/edgekind_string.go
@@ -0,0 +1,109 @@
+// Code generated by "stringer -type edgeKind"; DO NOT EDIT.
+
+package unused
+
+import "strconv"
+
+func _() {
+ // An "invalid array index" compiler error signifies that the constant values have changed.
+ // Re-run the stringer command to generate them again.
+ var x [1]struct{}
+ _ = x[edgeAlias-1]
+ _ = x[edgeBlankField-2]
+ _ = x[edgeAnonymousStruct-4]
+ _ = x[edgeCgoExported-8]
+ _ = x[edgeConstGroup-16]
+ _ = x[edgeElementType-32]
+ _ = x[edgeEmbeddedInterface-64]
+ _ = x[edgeExportedConstant-128]
+ _ = x[edgeExportedField-256]
+ _ = x[edgeExportedFunction-512]
+ _ = x[edgeExportedMethod-1024]
+ _ = x[edgeExportedType-2048]
+ _ = x[edgeExportedVariable-4096]
+ _ = x[edgeExtendsExportedFields-8192]
+ _ = x[edgeExtendsExportedMethodSet-16384]
+ _ = x[edgeFieldAccess-32768]
+ _ = x[edgeFunctionArgument-65536]
+ _ = x[edgeFunctionResult-131072]
+ _ = x[edgeFunctionSignature-262144]
+ _ = x[edgeImplements-524288]
+ _ = x[edgeInstructionOperand-1048576]
+ _ = x[edgeInterfaceCall-2097152]
+ _ = x[edgeInterfaceMethod-4194304]
+ _ = x[edgeKeyType-8388608]
+ _ = x[edgeLinkname-16777216]
+ _ = x[edgeMainFunction-33554432]
+ _ = x[edgeNamedType-67108864]
+ _ = x[edgeNetRPCRegister-134217728]
+ _ = x[edgeNoCopySentinel-268435456]
+ _ = x[edgeProvidesMethod-536870912]
+ _ = x[edgeReceiver-1073741824]
+ _ = x[edgeRuntimeFunction-2147483648]
+ _ = x[edgeSignature-4294967296]
+ _ = x[edgeStructConversion-8589934592]
+ _ = x[edgeTestSink-17179869184]
+ _ = x[edgeTupleElement-34359738368]
+ _ = x[edgeType-68719476736]
+ _ = x[edgeTypeName-137438953472]
+ _ = x[edgeUnderlyingType-274877906944]
+ _ = x[edgePointerType-549755813888]
+ _ = x[edgeUnsafeConversion-1099511627776]
+ _ = x[edgeUsedConstant-2199023255552]
+ _ = x[edgeVarDecl-4398046511104]
+}
+
+const _edgeKind_name = "edgeAliasedgeBlankFieldedgeAnonymousStructedgeCgoExportededgeConstGroupedgeElementTypeedgeEmbeddedInterfaceedgeExportedConstantedgeExportedFieldedgeExportedFunctionedgeExportedMethodedgeExportedTypeedgeExportedVariableedgeExtendsExportedFieldsedgeExtendsExportedMethodSetedgeFieldAccessedgeFunctionArgumentedgeFunctionResultedgeFunctionSignatureedgeImplementsedgeInstructionOperandedgeInterfaceCalledgeInterfaceMethodedgeKeyTypeedgeLinknameedgeMainFunctionedgeNamedTypeedgeNetRPCRegisteredgeNoCopySentineledgeProvidesMethodedgeReceiveredgeRuntimeFunctionedgeSignatureedgeStructConversionedgeTestSinkedgeTupleElementedgeTypeedgeTypeNameedgeUnderlyingTypeedgePointerTypeedgeUnsafeConversionedgeUsedConstantedgeVarDecl"
+
+var _edgeKind_map = map[edgeKind]string{
+ 1: _edgeKind_name[0:9],
+ 2: _edgeKind_name[9:23],
+ 4: _edgeKind_name[23:42],
+ 8: _edgeKind_name[42:57],
+ 16: _edgeKind_name[57:71],
+ 32: _edgeKind_name[71:86],
+ 64: _edgeKind_name[86:107],
+ 128: _edgeKind_name[107:127],
+ 256: _edgeKind_name[127:144],
+ 512: _edgeKind_name[144:164],
+ 1024: _edgeKind_name[164:182],
+ 2048: _edgeKind_name[182:198],
+ 4096: _edgeKind_name[198:218],
+ 8192: _edgeKind_name[218:243],
+ 16384: _edgeKind_name[243:271],
+ 32768: _edgeKind_name[271:286],
+ 65536: _edgeKind_name[286:306],
+ 131072: _edgeKind_name[306:324],
+ 262144: _edgeKind_name[324:345],
+ 524288: _edgeKind_name[345:359],
+ 1048576: _edgeKind_name[359:381],
+ 2097152: _edgeKind_name[381:398],
+ 4194304: _edgeKind_name[398:417],
+ 8388608: _edgeKind_name[417:428],
+ 16777216: _edgeKind_name[428:440],
+ 33554432: _edgeKind_name[440:456],
+ 67108864: _edgeKind_name[456:469],
+ 134217728: _edgeKind_name[469:487],
+ 268435456: _edgeKind_name[487:505],
+ 536870912: _edgeKind_name[505:523],
+ 1073741824: _edgeKind_name[523:535],
+ 2147483648: _edgeKind_name[535:554],
+ 4294967296: _edgeKind_name[554:567],
+ 8589934592: _edgeKind_name[567:587],
+ 17179869184: _edgeKind_name[587:599],
+ 34359738368: _edgeKind_name[599:615],
+ 68719476736: _edgeKind_name[615:623],
+ 137438953472: _edgeKind_name[623:635],
+ 274877906944: _edgeKind_name[635:653],
+ 549755813888: _edgeKind_name[653:668],
+ 1099511627776: _edgeKind_name[668:688],
+ 2199023255552: _edgeKind_name[688:704],
+ 4398046511104: _edgeKind_name[704:715],
+}
+
+func (i edgeKind) String() string {
+ if str, ok := _edgeKind_map[i]; ok {
+ return str
+ }
+ return "edgeKind(" + strconv.FormatInt(int64(i), 10) + ")"
+}
diff --git a/vendor/honnef.co/go/tools/unused/implements.go b/vendor/honnef.co/go/tools/unused/implements.go
new file mode 100644
index 0000000000000..835baac692531
--- /dev/null
+++ b/vendor/honnef.co/go/tools/unused/implements.go
@@ -0,0 +1,82 @@
+package unused
+
+import "go/types"
+
+// lookupMethod returns the index of, and the method with, a matching package and name, or (-1, nil).
+func lookupMethod(T *types.Interface, pkg *types.Package, name string) (int, *types.Func) {
+ if name != "_" {
+ for i := 0; i < T.NumMethods(); i++ {
+ m := T.Method(i)
+ if sameId(m, pkg, name) {
+ return i, m
+ }
+ }
+ }
+ return -1, nil
+}
+
+func sameId(obj types.Object, pkg *types.Package, name string) bool {
+ // spec:
+ // "Two identifiers are different if they are spelled differently,
+ // or if they appear in different packages and are not exported.
+ // Otherwise, they are the same."
+ if name != obj.Name() {
+ return false
+ }
+ // obj.Name == name
+ if obj.Exported() {
+ return true
+ }
+ // not exported, so packages must be the same (pkg == nil for
+ // fields in Universe scope; this can only happen for types
+ // introduced via Eval)
+ if pkg == nil || obj.Pkg() == nil {
+ return pkg == obj.Pkg()
+ }
+ // pkg != nil && obj.pkg != nil
+ return pkg.Path() == obj.Pkg().Path()
+}
+
+func (g *Graph) implements(V types.Type, T *types.Interface, msV *types.MethodSet) ([]*types.Selection, bool) {
+ // fast path for common case
+ if T.Empty() {
+ return nil, true
+ }
+
+ if ityp, _ := V.Underlying().(*types.Interface); ityp != nil {
+ // TODO(dh): is this code reachable?
+ for i := 0; i < T.NumMethods(); i++ {
+ m := T.Method(i)
+ _, obj := lookupMethod(ityp, m.Pkg(), m.Name())
+ switch {
+ case obj == nil:
+ return nil, false
+ case !types.Identical(obj.Type(), m.Type()):
+ return nil, false
+ }
+ }
+ return nil, true
+ }
+
+ // A concrete type implements T if it implements all methods of T.
+ var sels []*types.Selection
+ for i := 0; i < T.NumMethods(); i++ {
+ m := T.Method(i)
+ sel := msV.Lookup(m.Pkg(), m.Name())
+ if sel == nil {
+ return nil, false
+ }
+
+ f, _ := sel.Obj().(*types.Func)
+ if f == nil {
+ return nil, false
+ }
+
+ if !types.Identical(f.Type(), m.Type()) {
+ return nil, false
+ }
+
+ sels = append(sels, sel)
+ }
+ return sels, true
+}
diff --git a/vendor/honnef.co/go/tools/unused/unused.go b/vendor/honnef.co/go/tools/unused/unused.go
new file mode 100644
index 0000000000000..152d3692dd8fc
--- /dev/null
+++ b/vendor/honnef.co/go/tools/unused/unused.go
@@ -0,0 +1,1964 @@
+package unused
+
+import (
+ "fmt"
+ "go/ast"
+ "go/token"
+ "go/types"
+ "io"
+ "strings"
+ "sync"
+ "sync/atomic"
+
+ "golang.org/x/tools/go/analysis"
+ "honnef.co/go/tools/go/types/typeutil"
+ "honnef.co/go/tools/internal/passes/buildssa"
+ "honnef.co/go/tools/lint"
+ "honnef.co/go/tools/lint/lintdsl"
+ "honnef.co/go/tools/ssa"
+)
+
+// The graph we construct omits nodes along a path that do not
+// contribute any new information to the solution. For example, the
+// full graph for a function with a receiver would be Func ->
+// Signature -> Var -> Type. However, since signatures cannot be
+// unused, and receivers are always considered used, we can compact
+// the graph down to Func -> Type. This makes the graph smaller, but
+// harder to debug.
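+//
+// For instance (a sketch, not tied to any concrete code in this
+// package): for
+//
+//	func (t T) M() {}
+//
+// the explicit path M -> Signature -> receiver t -> T collapses to
+// the single edge M -> T.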
+
+// TODO(dh): conversions between structs mark fields as used, but the
+// conversion itself isn't part of that subgraph. even if the function
+// containing the conversion is unused, the fields will be marked as
+// used.
+
+// TODO(dh): we cannot observe function calls in assembly files.
+
+/*
+
+- packages use:
+ - (1.1) exported named types (unless in package main)
+ - (1.2) exported functions (unless in package main)
+ - (1.3) exported variables (unless in package main)
+ - (1.4) exported constants (unless in package main)
+ - (1.5) init functions
+ - (1.6) functions exported to cgo
+ - (1.7) the main function iff in the main package
+ - (1.8) symbols linked via go:linkname
+
+- named types use:
+ - (2.1) exported methods
+ - (2.2) the type they're based on
+ - (2.3) all their aliases. we can't easily track uses of aliases
+ because go/types turns them into uses of the aliased types. assume
+ that if a type is used, so are all of its aliases.
+ - (2.4) the pointer type. this aids with eagerly implementing
+ interfaces. if a method that implements an interface is defined on
+ a pointer receiver, and the pointer type is never used, but the
+ named type is, then we still want to mark the method as used.
+
+- variables and constants use:
+ - their types
+
+- functions use:
+ - (4.1) all their arguments, return parameters and receivers
+ - (4.2) anonymous functions defined beneath them
+ - (4.3) closures and bound methods.
+ this implements a simplified model where a function is used merely by being referenced, even if it is never called.
+ that way we don't have to keep track of closures escaping functions.
+ - (4.4) functions they return. we assume that someone else will call the returned function
+ - (4.5) functions/interface methods they call
+ - types they instantiate or convert to
+ - (4.7) fields they access
+ - (4.8) types of all instructions
+ - (4.9) package-level variables they assign to iff in tests (sinks for benchmarks)
+
+- conversions use:
+ - (5.1) when converting between two equivalent structs, the fields in
+ either struct use each other. the fields are relevant for the
+ conversion, but only if the fields are also accessed outside the
+ conversion.
+ - (5.2) when converting to or from unsafe.Pointer, mark all fields as used.
+
+- structs use:
+ - (6.1) fields of type NoCopy sentinel
+ - (6.2) exported fields
+ - (6.3) embedded fields that help implement interfaces (either fully implements it, or contributes required methods) (recursively)
+ - (6.4) embedded fields that have exported methods (recursively)
+ - (6.5) embedded structs that have exported fields (recursively)
+
+- (7.1) field accesses use fields
+- (7.2) fields use their types
+
+- (8.0) How we handle interfaces:
+ - (8.1) We do not technically care about interfaces that only consist of
+ exported methods. Exported methods on concrete types are always
+ marked as used.
+ - Any concrete type implements all known interfaces. Even if it isn't
+ assigned to any interfaces in our code, the user may receive a value
+ of the type and expect to pass it back to us through an interface.
+
+ Concrete types use their methods that implement interfaces. If the
+ type is used, it uses those methods. Otherwise, it doesn't. This
+ way, types aren't incorrectly marked reachable through the edge
+ from method to type.
+
+ - (8.3) All interface methods are marked as used, even if they never get
+    called. This is to accommodate sum types (unexported interface
+ method that must exist but never gets called.)
+
+ - (8.4) All embedded interfaces are marked as used. This is an
+ extension of 8.3, but we have to explicitly track embedded
+ interfaces because in a chain C->B->A, B wouldn't be marked as
+ used by 8.3 just because it contributes A's methods to C.
+
+- Inherent uses:
+ - thunks and other generated wrappers call the real function
+ - (9.2) variables use their types
+ - (9.3) types use their underlying and element types
+ - (9.4) conversions use the type they convert to
+ - (9.5) instructions use their operands
+ - (9.6) instructions use their operands' types
+ - (9.7) variable _reads_ use variables, writes do not, except in tests
+ - (9.8) runtime functions that may be called from user code via the compiler
+
+
+- const groups:
+ (10.1) if one constant out of a block of constants is used, mark all
+ of them used. a lot of the time, unused constants exist for the sake
+ of completeness. See also
+ https://github.com/dominikh/go-tools/issues/365
+
+
+- (11.1) anonymous struct types use all their fields. we cannot
+ deduplicate struct types, as that leads to order-dependent
+ reportings. we can't not deduplicate struct types while still
+ tracking fields, because then each instance of the unnamed type in
+ the data flow chain will get its own fields, causing false
+ positives. Thus, we only accurately track fields of named struct
+ types, and assume that unnamed struct types use all their fields.
+
+
+- Differences in whole program mode:
+ - (e2) types aim to implement all exported interfaces from all packages
+ - (e3) exported identifiers aren't automatically used. for fields and
+ methods this poses extra issues due to reflection. We assume
+ that all exported fields are used. We also maintain a list of
+ known reflection-based method callers.
+
+*/
+
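+// As a minimal, hypothetical sketch of the rules above (not code from
+// this package): given
+//
+//	package lib
+//
+//	type T struct{}        // reachable via (1.1), exported named type
+//	func (T) Exported() {} // reachable via (2.1), exported method
+//	func (T) helper()   {} // unreachable unless something uses it
+//
+// the checker reports helper as unused: no chain of edges connects it
+// to the root of the graph.
+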
+func assert(b bool) {
+ if !b {
+ panic("failed assertion")
+ }
+}
+
+func typString(obj types.Object) string {
+ switch obj := obj.(type) {
+ case *types.Func:
+ return "func"
+ case *types.Var:
+ if obj.IsField() {
+ return "field"
+ }
+ return "var"
+ case *types.Const:
+ return "const"
+ case *types.TypeName:
+ return "type"
+ default:
+ return "identifier"
+ }
+}
+
+// /usr/lib/go/src/runtime/proc.go:433:6: func badmorestackg0 is unused (U1000)
+
+// Functions defined in the Go runtime that may be called through
+// compiler magic or via assembly.
+var runtimeFuncs = map[string]bool{
+ // The first part of the list is copied from
+ // cmd/compile/internal/gc/builtin.go, var runtimeDecls
+ "newobject": true,
+ "panicindex": true,
+ "panicslice": true,
+ "panicdivide": true,
+ "panicmakeslicelen": true,
+ "throwinit": true,
+ "panicwrap": true,
+ "gopanic": true,
+ "gorecover": true,
+ "goschedguarded": true,
+ "printbool": true,
+ "printfloat": true,
+ "printint": true,
+ "printhex": true,
+ "printuint": true,
+ "printcomplex": true,
+ "printstring": true,
+ "printpointer": true,
+ "printiface": true,
+ "printeface": true,
+ "printslice": true,
+ "printnl": true,
+ "printsp": true,
+ "printlock": true,
+ "printunlock": true,
+ "concatstring2": true,
+ "concatstring3": true,
+ "concatstring4": true,
+ "concatstring5": true,
+ "concatstrings": true,
+ "cmpstring": true,
+ "intstring": true,
+ "slicebytetostring": true,
+ "slicebytetostringtmp": true,
+ "slicerunetostring": true,
+ "stringtoslicebyte": true,
+ "stringtoslicerune": true,
+ "slicecopy": true,
+ "slicestringcopy": true,
+ "decoderune": true,
+ "countrunes": true,
+ "convI2I": true,
+ "convT16": true,
+ "convT32": true,
+ "convT64": true,
+ "convTstring": true,
+ "convTslice": true,
+ "convT2E": true,
+ "convT2Enoptr": true,
+ "convT2I": true,
+ "convT2Inoptr": true,
+ "assertE2I": true,
+ "assertE2I2": true,
+ "assertI2I": true,
+ "assertI2I2": true,
+ "panicdottypeE": true,
+ "panicdottypeI": true,
+ "panicnildottype": true,
+ "ifaceeq": true,
+ "efaceeq": true,
+ "fastrand": true,
+ "makemap64": true,
+ "makemap": true,
+ "makemap_small": true,
+ "mapaccess1": true,
+ "mapaccess1_fast32": true,
+ "mapaccess1_fast64": true,
+ "mapaccess1_faststr": true,
+ "mapaccess1_fat": true,
+ "mapaccess2": true,
+ "mapaccess2_fast32": true,
+ "mapaccess2_fast64": true,
+ "mapaccess2_faststr": true,
+ "mapaccess2_fat": true,
+ "mapassign": true,
+ "mapassign_fast32": true,
+ "mapassign_fast32ptr": true,
+ "mapassign_fast64": true,
+ "mapassign_fast64ptr": true,
+ "mapassign_faststr": true,
+ "mapiterinit": true,
+ "mapdelete": true,
+ "mapdelete_fast32": true,
+ "mapdelete_fast64": true,
+ "mapdelete_faststr": true,
+ "mapiternext": true,
+ "mapclear": true,
+ "makechan64": true,
+ "makechan": true,
+ "chanrecv1": true,
+ "chanrecv2": true,
+ "chansend1": true,
+ "closechan": true,
+ "writeBarrier": true,
+ "typedmemmove": true,
+ "typedmemclr": true,
+ "typedslicecopy": true,
+ "selectnbsend": true,
+ "selectnbrecv": true,
+ "selectnbrecv2": true,
+ "selectsetpc": true,
+ "selectgo": true,
+ "block": true,
+ "makeslice": true,
+ "makeslice64": true,
+ "growslice": true,
+ "memmove": true,
+ "memclrNoHeapPointers": true,
+ "memclrHasPointers": true,
+ "memequal": true,
+ "memequal8": true,
+ "memequal16": true,
+ "memequal32": true,
+ "memequal64": true,
+ "memequal128": true,
+ "int64div": true,
+ "uint64div": true,
+ "int64mod": true,
+ "uint64mod": true,
+ "float64toint64": true,
+ "float64touint64": true,
+ "float64touint32": true,
+ "int64tofloat64": true,
+ "uint64tofloat64": true,
+ "uint32tofloat64": true,
+ "complex128div": true,
+ "racefuncenter": true,
+ "racefuncenterfp": true,
+ "racefuncexit": true,
+ "raceread": true,
+ "racewrite": true,
+ "racereadrange": true,
+ "racewriterange": true,
+ "msanread": true,
+ "msanwrite": true,
+ "x86HasPOPCNT": true,
+ "x86HasSSE41": true,
+ "arm64HasATOMICS": true,
+
+ // The second part of the list is extracted from assembly code in
+ // the standard library, with the exception of the runtime package itself
+ "abort": true,
+ "aeshashbody": true,
+ "args": true,
+ "asminit": true,
+ "badctxt": true,
+ "badmcall2": true,
+ "badmcall": true,
+ "badmorestackg0": true,
+ "badmorestackgsignal": true,
+ "badsignal2": true,
+ "callbackasm1": true,
+ "callCfunction": true,
+ "cgocallback_gofunc": true,
+ "cgocallbackg": true,
+ "checkgoarm": true,
+ "check": true,
+ "debugCallCheck": true,
+ "debugCallWrap": true,
+ "emptyfunc": true,
+ "entersyscall": true,
+ "exit": true,
+ "exits": true,
+ "exitsyscall": true,
+ "externalthreadhandler": true,
+ "findnull": true,
+ "goexit1": true,
+ "gostring": true,
+ "i386_set_ldt": true,
+ "_initcgo": true,
+ "init_thread_tls": true,
+ "ldt0setup": true,
+ "libpreinit": true,
+ "load_g": true,
+ "morestack": true,
+ "mstart": true,
+ "nacl_sysinfo": true,
+ "nanotimeQPC": true,
+ "nanotime": true,
+ "newosproc0": true,
+ "newproc": true,
+ "newstack": true,
+ "noted": true,
+ "nowQPC": true,
+ "osinit": true,
+ "printf": true,
+ "racecallback": true,
+ "reflectcallmove": true,
+ "reginit": true,
+ "rt0_go": true,
+ "save_g": true,
+ "schedinit": true,
+ "setldt": true,
+ "settls": true,
+ "sighandler": true,
+ "sigprofNonGo": true,
+ "sigtrampgo": true,
+ "_sigtramp": true,
+ "sigtramp": true,
+ "stackcheck": true,
+ "syscall_chdir": true,
+ "syscall_chroot": true,
+ "syscall_close": true,
+ "syscall_dup2": true,
+ "syscall_execve": true,
+ "syscall_exit": true,
+ "syscall_fcntl": true,
+ "syscall_forkx": true,
+ "syscall_gethostname": true,
+ "syscall_getpid": true,
+ "syscall_ioctl": true,
+ "syscall_pipe": true,
+ "syscall_rawsyscall6": true,
+ "syscall_rawSyscall6": true,
+ "syscall_rawsyscall": true,
+ "syscall_RawSyscall": true,
+ "syscall_rawsysvicall6": true,
+ "syscall_setgid": true,
+ "syscall_setgroups": true,
+ "syscall_setpgid": true,
+ "syscall_setsid": true,
+ "syscall_setuid": true,
+ "syscall_syscall6": true,
+ "syscall_syscall": true,
+ "syscall_Syscall": true,
+ "syscall_sysvicall6": true,
+ "syscall_wait4": true,
+ "syscall_write": true,
+ "traceback": true,
+ "tstart": true,
+ "usplitR0": true,
+ "wbBufFlush": true,
+ "write": true,
+}
+
+type pkg struct {
+ Fset *token.FileSet
+ Files []*ast.File
+ Pkg *types.Package
+ TypesInfo *types.Info
+ TypesSizes types.Sizes
+ SSA *ssa.Package
+ SrcFuncs []*ssa.Function
+}
+
+type Checker struct {
+ WholeProgram bool
+ Debug io.Writer
+
+ mu sync.Mutex
+ initialPackages map[*types.Package]struct{}
+ allPackages map[*types.Package]struct{}
+ graph *Graph
+}
+
+func NewChecker(wholeProgram bool) *Checker {
+ return &Checker{
+ initialPackages: map[*types.Package]struct{}{},
+ allPackages: map[*types.Package]struct{}{},
+ WholeProgram: wholeProgram,
+ }
+}
+
+func (c *Checker) Analyzer() *analysis.Analyzer {
+ name := "U1000"
+ if c.WholeProgram {
+ name = "U1001"
+ }
+ return &analysis.Analyzer{
+ Name: name,
+ Doc: "Unused code",
+ Run: c.Run,
+ Requires: []*analysis.Analyzer{buildssa.Analyzer},
+ }
+}
+
+func (c *Checker) Run(pass *analysis.Pass) (interface{}, error) {
+ c.mu.Lock()
+ if c.graph == nil {
+ c.graph = NewGraph()
+ c.graph.wholeProgram = c.WholeProgram
+ c.graph.fset = pass.Fset
+ }
+
+ var visit func(pkg *types.Package)
+ visit = func(pkg *types.Package) {
+ if _, ok := c.allPackages[pkg]; ok {
+ return
+ }
+ c.allPackages[pkg] = struct{}{}
+ for _, imp := range pkg.Imports() {
+ visit(imp)
+ }
+ }
+ visit(pass.Pkg)
+
+ c.initialPackages[pass.Pkg] = struct{}{}
+ c.mu.Unlock()
+
+ ssapkg := pass.ResultOf[buildssa.Analyzer].(*buildssa.SSA)
+ pkg := &pkg{
+ Fset: pass.Fset,
+ Files: pass.Files,
+ Pkg: pass.Pkg,
+ TypesInfo: pass.TypesInfo,
+ TypesSizes: pass.TypesSizes,
+ SSA: ssapkg.Pkg,
+ SrcFuncs: ssapkg.SrcFuncs,
+ }
+
+ c.processPkg(c.graph, pkg)
+
+ return nil, nil
+}
+
+func (c *Checker) ProblemObject(fset *token.FileSet, obj types.Object) lint.Problem {
+ name := obj.Name()
+ if sig, ok := obj.Type().(*types.Signature); ok && sig.Recv() != nil {
+ switch sig.Recv().Type().(type) {
+ case *types.Named, *types.Pointer:
+ typ := types.TypeString(sig.Recv().Type(), func(*types.Package) string { return "" })
+ if len(typ) > 0 && typ[0] == '*' {
+ name = fmt.Sprintf("(%s).%s", typ, obj.Name())
+ } else if len(typ) > 0 {
+ name = fmt.Sprintf("%s.%s", typ, obj.Name())
+ }
+ }
+ }
+
+ checkName := "U1000"
+ if c.WholeProgram {
+ checkName = "U1001"
+ }
+ return lint.Problem{
+ Pos: lint.DisplayPosition(fset, obj.Pos()),
+ Message: fmt.Sprintf("%s %s is unused", typString(obj), name),
+ Check: checkName,
+ }
+}
+
+func (c *Checker) Result() []types.Object {
+ out := c.results()
+
+ out2 := make([]types.Object, 0, len(out))
+ for _, v := range out {
+ if _, ok := c.initialPackages[v.Pkg()]; !ok {
+ continue
+ }
+ out2 = append(out2, v)
+ }
+
+ return out2
+}
+
+func (c *Checker) debugf(f string, v ...interface{}) {
+ if c.Debug != nil {
+ fmt.Fprintf(c.Debug, f, v...)
+ }
+}
+
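+// quieten marks the children of an unused node (the methods of a
+// named type, the fields of a struct, the explicit methods of an
+// interface) as quiet, so they are not reported in addition to their
+// parent.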
+func (graph *Graph) quieten(node *Node) {
+ if node.seen {
+ return
+ }
+ switch obj := node.obj.(type) {
+ case *types.Named:
+ for i := 0; i < obj.NumMethods(); i++ {
+ m := obj.Method(i)
+ if node, ok := graph.nodeMaybe(m); ok {
+ node.quiet = true
+ }
+ }
+ case *types.Struct:
+ for i := 0; i < obj.NumFields(); i++ {
+ if node, ok := graph.nodeMaybe(obj.Field(i)); ok {
+ node.quiet = true
+ }
+ }
+ case *types.Interface:
+ for i := 0; i < obj.NumExplicitMethods(); i++ {
+ m := obj.ExplicitMethod(i)
+ if node, ok := graph.nodeMaybe(m); ok {
+ node.quiet = true
+ }
+ }
+ }
+}
+
+func (c *Checker) results() []types.Object {
+ if c.graph == nil {
+ // We never analyzed any packages
+ return nil
+ }
+
+ var out []types.Object
+
+ if c.WholeProgram {
+ var ifaces []*types.Interface
+ var notIfaces []types.Type
+
+ // implement as many interfaces as possible
+ c.graph.seenTypes.Iterate(func(t types.Type, _ interface{}) {
+ switch t := t.(type) {
+ case *types.Interface:
+ if t.NumMethods() > 0 {
+ ifaces = append(ifaces, t)
+ }
+ default:
+ if _, ok := t.Underlying().(*types.Interface); !ok {
+ notIfaces = append(notIfaces, t)
+ }
+ }
+ })
+
+ for pkg := range c.allPackages {
+ for _, iface := range interfacesFromExportData(pkg) {
+ if iface.NumMethods() > 0 {
+ ifaces = append(ifaces, iface)
+ }
+ }
+ }
+
+ ctx := &context{
+ g: c.graph,
+ seenTypes: &c.graph.seenTypes,
+ }
+ // (8.0) handle interfaces
+ // (e2) types aim to implement all exported interfaces from all packages
+ for _, t := range notIfaces {
+ // OPT(dh): it is unfortunate that we do not have access
+ // to a populated method set at this point.
+ ms := types.NewMethodSet(t)
+ for _, iface := range ifaces {
+ if sels, ok := c.graph.implements(t, iface, ms); ok {
+ for _, sel := range sels {
+ c.graph.useMethod(ctx, t, sel, t, edgeImplements)
+ }
+ }
+ }
+ }
+ }
+
+ if c.Debug != nil {
+ debugNode := func(node *Node) {
+ if node.obj == nil {
+ c.debugf("n%d [label=\"Root\"];\n", node.id)
+ } else {
+ c.debugf("n%d [label=%q];\n", node.id, fmt.Sprintf("(%T) %s", node.obj, node.obj))
+ }
+ for _, e := range node.used {
+ for i := edgeKind(1); i < 64; i++ {
+ if e.kind.is(1 << i) {
+ c.debugf("n%d -> n%d [label=%q];\n", node.id, e.node.id, edgeKind(1<<i))
+ }
+ }
+ }
+ }
+
+ c.debugf("digraph{\n")
+ debugNode(c.graph.Root)
+ c.graph.Nodes.Range(func(k, v interface{}) bool {
+ debugNode(v.(*Node))
+ return true
+ })
+ c.graph.TypeNodes.Iterate(func(key types.Type, value interface{}) {
+ debugNode(value.(*Node))
+ })
+
+ c.debugf("}\n")
+ }
+
+ c.graph.color(c.graph.Root)
+ // if a node is unused, don't report any of the node's
+ // children as unused. for example, if a function is unused,
+ // don't flag its receiver. if a named type is unused, don't
+ // flag its methods.
+
+ c.graph.Nodes.Range(func(k, v interface{}) bool {
+ c.graph.quieten(v.(*Node))
+ return true
+ })
+ c.graph.TypeNodes.Iterate(func(_ types.Type, value interface{}) {
+ c.graph.quieten(value.(*Node))
+ })
+
+ report := func(node *Node) {
+ if node.seen {
+ return
+ }
+ if node.quiet {
+ c.debugf("n%d [color=purple];\n", node.id)
+ return
+ }
+
+ c.debugf("n%d [color=red];\n", node.id)
+ switch obj := node.obj.(type) {
+ case *types.Var:
+ // don't report unnamed variables (interface embedding)
+ if obj.Name() != "" || obj.IsField() {
+ out = append(out, obj)
+ }
+ return
+ case types.Object:
+ if obj.Name() != "_" {
+ out = append(out, obj)
+ }
+ return
+ }
+ c.debugf("n%d [color=gray];\n", node.id)
+ }
+ c.graph.Nodes.Range(func(k, v interface{}) bool {
+ report(v.(*Node))
+ return true
+ })
+ c.graph.TypeNodes.Iterate(func(_ types.Type, value interface{}) {
+ report(value.(*Node))
+ })
+
+ return out
+}
+
+func (c *Checker) processPkg(graph *Graph, pkg *pkg) {
+ if pkg.Pkg.Path() == "unsafe" {
+ return
+ }
+ graph.entry(pkg)
+}
+
+func objNodeKeyFor(fset *token.FileSet, obj types.Object) objNodeKey {
+ var kind objType
+ switch obj.(type) {
+ case *types.PkgName:
+ kind = otPkgName
+ case *types.Const:
+ kind = otConst
+ case *types.TypeName:
+ kind = otTypeName
+ case *types.Var:
+ kind = otVar
+ case *types.Func:
+ kind = otFunc
+ case *types.Label:
+ kind = otLabel
+ case *types.Builtin:
+ kind = otBuiltin
+ case *types.Nil:
+ kind = otNil
+ default:
+ panic(fmt.Sprintf("unreachable: %T", obj))
+ }
+
+ position := fset.PositionFor(obj.Pos(), false)
+ position.Column = 0
+ position.Offset = 0
+ return objNodeKey{
+ position: position,
+ kind: kind,
+ name: obj.Name(),
+ }
+}
+
+type objType uint8
+
+const (
+ otPkgName objType = iota
+ otConst
+ otTypeName
+ otVar
+ otFunc
+ otLabel
+ otBuiltin
+ otNil
+)
+
+// An objNodeKey describes a types.Object node in the graph.
+//
+// Due to test variants we may end up with multiple instances of the
+// same object, which is why we have to deduplicate based on their
+// source position. And because export data lacks column information,
+// we also have to incorporate the object's string representation in
+// the key.
+//
+// Previously we used the object's full string representation
+// (types.ObjectString), but that causes a significant amount of
+// allocations. Currently we're using the object's type and name, in
+// the hope that it is impossible for two objects to have the same
+// type, name and file position.
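+//
+// As an illustration, a function Foo declared at foo.go line 10 in
+// both a package and its test variant yields the same key
+// {foo.go:10, otFunc, "Foo"} and is therefore shared by one node.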
+type objNodeKey struct {
+ position token.Position
+ kind objType
+ name string
+}
+
+type Graph struct {
+ // accessed atomically
+ nodeOffset uint64
+
+ // Safe for concurrent use
+ fset *token.FileSet
+ Root *Node
+ seenTypes typeutil.Map
+ Nodes sync.Map // map[interface{}]*Node
+ objNodes sync.Map // map[objNodeKey]*Node
+
+ // read-only
+ wholeProgram bool
+
+ // need synchronisation
+ mu sync.Mutex
+ TypeNodes typeutil.Map
+}
+
+type context struct {
+ g *Graph
+ pkg *pkg
+ seenFns map[string]struct{}
+ seenTypes *typeutil.Map
+ nodeCounter uint64
+
+ // local cache for the map in Graph
+ typeNodes typeutil.Map
+}
+
+func NewGraph() *Graph {
+ g := &Graph{}
+ g.Root = g.newNode(&context{}, nil)
+ return g
+}
+
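+// color marks root and every node reachable from it through use
+// edges as seen, using a depth-first walk.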
+func (g *Graph) color(root *Node) {
+ if root.seen {
+ return
+ }
+ root.seen = true
+ for _, e := range root.used {
+ g.color(e.node)
+ }
+}
+
+type ConstGroup struct {
+ // give the struct a size to get unique pointers
+ _ byte
+}
+
+func (ConstGroup) String() string { return "const group" }
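+
+// As a sketch of rule (10.1), in a hypothetical group
+//
+//	const (
+//		StateA = iota // referenced somewhere
+//		StateB        // kept alive only by the group
+//	)
+//
+// a use of StateA marks the ConstGroup node, which in turn marks
+// StateB as used.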
+
+type edge struct {
+ node *Node
+ kind edgeKind
+}
+
+type Node struct {
+ obj interface{}
+ id uint64
+
+ mu sync.Mutex
+ used []edge
+
+ // set during final graph walk if node is reachable
+ seen bool
+ // a parent node (e.g. the struct type containing a field) is
+ // already unused, don't report children
+ quiet bool
+}
+
+func (g *Graph) nodeMaybe(obj types.Object) (*Node, bool) {
+ if node, ok := g.Nodes.Load(obj); ok {
+ return node.(*Node), true
+ }
+ return nil, false
+}
+
+func (g *Graph) node(ctx *context, obj interface{}) (node *Node, new bool) {
+ if t, ok := obj.(types.Type); ok {
+ if v := ctx.typeNodes.At(t); v != nil {
+ return v.(*Node), false
+ }
+ g.mu.Lock()
+ defer g.mu.Unlock()
+
+ if v := g.TypeNodes.At(t); v != nil {
+ return v.(*Node), false
+ }
+ node := g.newNode(ctx, t)
+ g.TypeNodes.Set(t, node)
+ ctx.typeNodes.Set(t, node)
+ return node, true
+ }
+
+ if node, ok := g.Nodes.Load(obj); ok {
+ return node.(*Node), false
+ }
+
+ if obj, ok := obj.(types.Object); ok {
+ key := objNodeKeyFor(g.fset, obj)
+ if o, ok := g.objNodes.Load(key); ok {
+ onode := o.(*Node)
+ return onode, false
+ }
+
+ node = g.newNode(ctx, obj)
+ g.Nodes.Store(obj, node)
+ g.objNodes.Store(key, node)
+ return node, true
+ }
+
+ node = g.newNode(ctx, obj)
+ g.Nodes.Store(obj, node)
+ return node, true
+}
+
+func (g *Graph) newNode(ctx *context, obj interface{}) *Node {
+ ctx.nodeCounter++
+ return &Node{
+ obj: obj,
+ id: ctx.nodeCounter,
+ }
+}
+
+func (n *Node) use(node *Node, kind edgeKind) {
+ n.mu.Lock()
+ defer n.mu.Unlock()
+ assert(node != nil)
+ n.used = append(n.used, edge{node: node, kind: kind})
+}
+
+// isIrrelevant reports whether an object's presence in the graph is
+// of any relevance. A lot of objects will never have outgoing edges,
+// nor meaningful incoming ones. Examples are basic types and empty
+// signatures, among many others.
+//
+// Dropping these objects should have no effect on correctness, but
+// may improve performance. It also helps with debugging, as it
+// greatly reduces the size of the graph.
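+//
+// For example, a local variable of type int or a func() with neither
+// parameters nor results carries no information for the analysis,
+// while struct fields and package-level variables always do.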
+func isIrrelevant(obj interface{}) bool {
+ if obj, ok := obj.(types.Object); ok {
+ switch obj := obj.(type) {
+ case *types.Var:
+ if obj.IsField() {
+ // We need to track package fields
+			// We need to track struct fields
+ }
+ if obj.Pkg() != nil && obj.Parent() == obj.Pkg().Scope() {
+ // We need to track package-level variables
+ return false
+ }
+ return isIrrelevant(obj.Type())
+ default:
+ return false
+ }
+ }
+ if T, ok := obj.(types.Type); ok {
+ switch T := T.(type) {
+ case *types.Array:
+ return isIrrelevant(T.Elem())
+ case *types.Slice:
+ return isIrrelevant(T.Elem())
+ case *types.Basic:
+ return true
+ case *types.Tuple:
+ for i := 0; i < T.Len(); i++ {
+ if !isIrrelevant(T.At(i).Type()) {
+ return false
+ }
+ }
+ return true
+ case *types.Signature:
+ if T.Recv() != nil {
+ return false
+ }
+ for i := 0; i < T.Params().Len(); i++ {
+ if !isIrrelevant(T.Params().At(i)) {
+ return false
+ }
+ }
+ for i := 0; i < T.Results().Len(); i++ {
+ if !isIrrelevant(T.Results().At(i)) {
+ return false
+ }
+ }
+ return true
+ case *types.Interface:
+ return T.NumMethods() == 0 && T.NumEmbeddeds() == 0
+ case *types.Pointer:
+ return isIrrelevant(T.Elem())
+ case *types.Map:
+ return isIrrelevant(T.Key()) && isIrrelevant(T.Elem())
+ case *types.Struct:
+ return T.NumFields() == 0
+ case *types.Chan:
+ return isIrrelevant(T.Elem())
+ default:
+ return false
+ }
+ }
+ return false
+}
+
+func (ctx *context) see(obj interface{}) *Node {
+ if isIrrelevant(obj) {
+ return nil
+ }
+
+ assert(obj != nil)
+ // add new node to graph
+ node, _ := ctx.g.node(ctx, obj)
+ return node
+}
+
+func (ctx *context) use(used, by interface{}, kind edgeKind) {
+ if isIrrelevant(used) {
+ return
+ }
+
+ assert(used != nil)
+ if obj, ok := by.(types.Object); ok && obj.Pkg() != nil {
+ if !ctx.g.wholeProgram && obj.Pkg() != ctx.pkg.Pkg {
+ return
+ }
+ }
+ usedNode, new := ctx.g.node(ctx, used)
+ assert(!new)
+ if by == nil {
+ ctx.g.Root.use(usedNode, kind)
+ } else {
+ byNode, new := ctx.g.node(ctx, by)
+ assert(!new)
+ byNode.use(usedNode, kind)
+ }
+}
+
+func (ctx *context) seeAndUse(used, by interface{}, kind edgeKind) *Node {
+ node := ctx.see(used)
+ ctx.use(used, by, kind)
+ return node
+}
+
+// trackExportedIdentifier reports whether obj should be considered
+// used due to being exported, checking various conditions that affect
+// the decision.
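+//
+// For example, TestFoo in a _test.go file counts as used (the testing
+// framework invokes it via reflection), while an exported helper in
+// package main does not, since package main cannot be imported.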
+func (g *Graph) trackExportedIdentifier(ctx *context, obj types.Object) bool {
+ if !obj.Exported() {
+ // object isn't exported, the question is moot
+ return false
+ }
+ path := g.fset.Position(obj.Pos()).Filename
+ if g.wholeProgram {
+ // Example functions without "Output:" comments aren't being
+ // run and thus don't show up in the graph.
+ if strings.HasSuffix(path, "_test.go") && strings.HasPrefix(obj.Name(), "Example") {
+ return true
+ }
+ // whole program mode tracks exported identifiers accurately
+ return false
+ }
+
+ if ctx.pkg.Pkg.Name() == "main" && !strings.HasSuffix(path, "_test.go") {
+ // exported identifiers in package main can't be imported.
+ // However, test functions can be called, and xtest packages
+ // even have access to exported identifiers.
+ return false
+ }
+
+ if strings.HasSuffix(path, "_test.go") {
+ if strings.HasPrefix(obj.Name(), "Test") ||
+ strings.HasPrefix(obj.Name(), "Benchmark") ||
+ strings.HasPrefix(obj.Name(), "Example") {
+ return true
+ }
+ return false
+ }
+
+ return true
+}
+
+func (g *Graph) entry(pkg *pkg) {
+ no := atomic.AddUint64(&g.nodeOffset, 1)
+ ctx := &context{
+ g: g,
+ pkg: pkg,
+ nodeCounter: no * 1e9,
+ seenFns: map[string]struct{}{},
+ }
+ if g.wholeProgram {
+ ctx.seenTypes = &g.seenTypes
+ } else {
+ ctx.seenTypes = &typeutil.Map{}
+ }
+
+ scopes := map[*types.Scope]*ssa.Function{}
+ for _, fn := range pkg.SrcFuncs {
+ if fn.Object() != nil {
+ scope := fn.Object().(*types.Func).Scope()
+ scopes[scope] = fn
+ }
+ }
+
+ for _, f := range pkg.Files {
+ for _, cg := range f.Comments {
+ for _, c := range cg.List {
+ if strings.HasPrefix(c.Text, "//go:linkname ") {
+ // FIXME(dh): we're looking at all comments. The
+ // compiler only looks at comments in the
+ // left-most column. The intention probably is to
+ // only look at top-level comments.
+
+ // (1.8) packages use symbols linked via go:linkname
+ fields := strings.Fields(c.Text)
+ if len(fields) == 3 {
+ if m, ok := pkg.SSA.Members[fields[1]]; ok {
+ var obj types.Object
+ switch m := m.(type) {
+ case *ssa.Global:
+ obj = m.Object()
+ case *ssa.Function:
+ obj = m.Object()
+ default:
+ panic(fmt.Sprintf("unhandled type: %T", m))
+ }
+ assert(obj != nil)
+ ctx.seeAndUse(obj, nil, edgeLinkname)
+ }
+ }
+ }
+ }
+ }
+ }
+
+ surroundingFunc := func(obj types.Object) *ssa.Function {
+ scope := obj.Parent()
+ for scope != nil {
+ if fn := scopes[scope]; fn != nil {
+ return fn
+ }
+ scope = scope.Parent()
+ }
+ return nil
+ }
+
+ // SSA form won't tell us about locally scoped types that aren't
+ // being used. Walk the list of Defs to get all named types.
+ //
+ // SSA form also won't tell us about constants; use Defs and Uses
+ // to determine which constants exist and which are being used.
+ for _, obj := range pkg.TypesInfo.Defs {
+ switch obj := obj.(type) {
+ case *types.TypeName:
+ // types are being handled by walking the AST
+ case *types.Const:
+ ctx.see(obj)
+ fn := surroundingFunc(obj)
+ if fn == nil && g.trackExportedIdentifier(ctx, obj) {
+ // (1.4) packages use exported constants (unless in package main)
+ ctx.use(obj, nil, edgeExportedConstant)
+ }
+ g.typ(ctx, obj.Type(), nil)
+ ctx.seeAndUse(obj.Type(), obj, edgeType)
+ }
+ }
+
+ // Find constants being used inside functions, find sinks in tests
+ for _, fn := range pkg.SrcFuncs {
+ if fn.Object() != nil {
+ ctx.see(fn.Object())
+ }
+ node := fn.Syntax()
+ if node == nil {
+ continue
+ }
+ ast.Inspect(node, func(node ast.Node) bool {
+ switch node := node.(type) {
+ case *ast.Ident:
+ obj, ok := pkg.TypesInfo.Uses[node]
+ if !ok {
+ return true
+ }
+ switch obj := obj.(type) {
+ case *types.Const:
+ ctx.seeAndUse(obj, owningObject(fn), edgeUsedConstant)
+ }
+ case *ast.AssignStmt:
+ for _, expr := range node.Lhs {
+ ident, ok := expr.(*ast.Ident)
+ if !ok {
+ continue
+ }
+ obj := pkg.TypesInfo.ObjectOf(ident)
+ if obj == nil {
+ continue
+ }
+ path := g.fset.File(obj.Pos()).Name()
+ if strings.HasSuffix(path, "_test.go") {
+ if obj.Parent() != nil && obj.Parent().Parent() != nil && obj.Parent().Parent().Parent() == nil {
+ // object's scope is the package, whose
+ // parent is the file, whose parent is nil
+
+ // (4.9) functions use package-level variables they assign to iff in tests (sinks for benchmarks)
+ // (9.7) variable _reads_ use variables, writes do not, except in tests
+ ctx.seeAndUse(obj, owningObject(fn), edgeTestSink)
+ }
+ }
+ }
+ }
+
+ return true
+ })
+ }
+ // Find constants being used in non-function contexts
+ for _, obj := range pkg.TypesInfo.Uses {
+ _, ok := obj.(*types.Const)
+ if !ok {
+ continue
+ }
+ ctx.seeAndUse(obj, nil, edgeUsedConstant)
+ }
+
+ var fns []*types.Func
+ var fn *types.Func
+ var stack []ast.Node
+ for _, f := range pkg.Files {
+ ast.Inspect(f, func(n ast.Node) bool {
+ if n == nil {
+ pop := stack[len(stack)-1]
+ stack = stack[:len(stack)-1]
+ if _, ok := pop.(*ast.FuncDecl); ok {
+ fns = fns[:len(fns)-1]
+ if len(fns) == 0 {
+ fn = nil
+ } else {
+ fn = fns[len(fns)-1]
+ }
+ }
+ return true
+ }
+ stack = append(stack, n)
+ switch n := n.(type) {
+ case *ast.FuncDecl:
+ fn = pkg.TypesInfo.ObjectOf(n.Name).(*types.Func)
+ fns = append(fns, fn)
+ ctx.see(fn)
+ case *ast.GenDecl:
+ switch n.Tok {
+ case token.CONST:
+ groups := lintdsl.GroupSpecs(pkg.Fset, n.Specs)
+ for _, specs := range groups {
+ if len(specs) > 1 {
+ cg := &ConstGroup{}
+ ctx.see(cg)
+ for _, spec := range specs {
+ for _, name := range spec.(*ast.ValueSpec).Names {
+ obj := pkg.TypesInfo.ObjectOf(name)
+ // (10.1) const groups
+ ctx.seeAndUse(obj, cg, edgeConstGroup)
+ ctx.use(cg, obj, edgeConstGroup)
+ }
+ }
+ }
+ }
+ case token.VAR:
+ for _, spec := range n.Specs {
+ v := spec.(*ast.ValueSpec)
+ for _, name := range v.Names {
+ T := pkg.TypesInfo.TypeOf(name)
+ if fn != nil {
+ ctx.seeAndUse(T, fn, edgeVarDecl)
+ } else {
+ // TODO(dh): we likely want to make
+ // the type used by the variable, not
+ // the package containing the
+ // variable. But then we have to take
+ // special care of blank identifiers.
+ ctx.seeAndUse(T, nil, edgeVarDecl)
+ }
+ g.typ(ctx, T, nil)
+ }
+ }
+ case token.TYPE:
+ for _, spec := range n.Specs {
+ // go/types doesn't provide a way to go from a
+ // types.Named to the named type it was based on
+ // (the t1 in type t2 t1). Therefore we walk the
+ // AST and process GenDecls.
+ //
+ // (2.2) named types use the type they're based on
+ v := spec.(*ast.TypeSpec)
+ T := pkg.TypesInfo.TypeOf(v.Type)
+ obj := pkg.TypesInfo.ObjectOf(v.Name)
+ ctx.see(obj)
+ ctx.see(T)
+ ctx.use(T, obj, edgeType)
+ g.typ(ctx, obj.Type(), nil)
+ g.typ(ctx, T, nil)
+
+ if v.Assign != 0 {
+ aliasFor := obj.(*types.TypeName).Type()
+ // (2.3) named types use all their aliases. we can't easily track uses of aliases
+ if isIrrelevant(aliasFor) {
+ // We do not track the type this is an
+ // alias for (for example builtins), so
+ // just mark the alias used.
+ //
+ // FIXME(dh): what about aliases declared inside functions?
+ ctx.use(obj, nil, edgeAlias)
+ } else {
+ ctx.see(aliasFor)
+ ctx.seeAndUse(obj, aliasFor, edgeAlias)
+ }
+ }
+ }
+ }
+ }
+ return true
+ })
+ }
+
+ for _, m := range pkg.SSA.Members {
+ switch m := m.(type) {
+ case *ssa.NamedConst:
+ // nothing to do, we collect all constants from Defs
+ case *ssa.Global:
+ if m.Object() != nil {
+ ctx.see(m.Object())
+ if g.trackExportedIdentifier(ctx, m.Object()) {
+ // (1.3) packages use exported variables (unless in package main)
+ ctx.use(m.Object(), nil, edgeExportedVariable)
+ }
+ }
+ case *ssa.Function:
+ mObj := owningObject(m)
+ if mObj != nil {
+ ctx.see(mObj)
+ }
+ //lint:ignore SA9003 handled implicitly
+ if m.Name() == "init" {
+ // (1.5) packages use init functions
+ //
+ // This is handled implicitly. The generated init
+ // function has no object, thus everything in it will
+ // be owned by the package.
+ }
+ // This branch catches top-level functions, not methods.
+ if m.Object() != nil && g.trackExportedIdentifier(ctx, m.Object()) {
+ // (1.2) packages use exported functions (unless in package main)
+ ctx.use(mObj, nil, edgeExportedFunction)
+ }
+ if m.Name() == "main" && pkg.Pkg.Name() == "main" {
+ // (1.7) packages use the main function iff in the main package
+ ctx.use(mObj, nil, edgeMainFunction)
+ }
+ if pkg.Pkg.Path() == "runtime" && runtimeFuncs[m.Name()] {
+ // (9.8) runtime functions that may be called from user code via the compiler
+ ctx.use(mObj, nil, edgeRuntimeFunction)
+ }
+ if m.Syntax() != nil {
+ doc := m.Syntax().(*ast.FuncDecl).Doc
+ if doc != nil {
+ for _, cmt := range doc.List {
+ if strings.HasPrefix(cmt.Text, "//go:cgo_export_") {
+ // (1.6) packages use functions exported to cgo
+ ctx.use(mObj, nil, edgeCgoExported)
+ }
+ }
+ }
+ }
+ g.function(ctx, m)
+ case *ssa.Type:
+ if m.Object() != nil {
+ ctx.see(m.Object())
+ if g.trackExportedIdentifier(ctx, m.Object()) {
+ // (1.1) packages use exported named types (unless in package main)
+ ctx.use(m.Object(), nil, edgeExportedType)
+ }
+ }
+ g.typ(ctx, m.Type(), nil)
+ default:
+ panic(fmt.Sprintf("unreachable: %T", m))
+ }
+ }
+
+ if !g.wholeProgram {
+ // When not in whole program mode we reset seenTypes after each package,
+ // which means g.seenTypes only contains types of
+ // interest to us. In whole program mode, we're better off
+ // processing all interfaces at once, globally, both for
+ // performance reasons and because in whole program mode we
+ // actually care about all interfaces, not just the subset
+ // that has unexported methods.
+
+ var ifaces []*types.Interface
+ var notIfaces []types.Type
+
+ ctx.seenTypes.Iterate(func(t types.Type, _ interface{}) {
+ switch t := t.(type) {
+ case *types.Interface:
+ // OPT(dh): (8.1) we only need interfaces that have unexported methods
+ ifaces = append(ifaces, t)
+ default:
+ if _, ok := t.Underlying().(*types.Interface); !ok {
+ notIfaces = append(notIfaces, t)
+ }
+ }
+ })
+
+ // (8.0) handle interfaces
+ for _, t := range notIfaces {
+ ms := pkg.SSA.Prog.MethodSets.MethodSet(t)
+ for _, iface := range ifaces {
+ if sels, ok := g.implements(t, iface, ms); ok {
+ for _, sel := range sels {
+ g.useMethod(ctx, t, sel, t, edgeImplements)
+ }
+ }
+ }
+ }
+ }
+}
+
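+// useMethod records that the method selected by sel is used by 'by'.
+// If the method is promoted through embedded struct fields, every
+// embedded field along the selection path is marked as used too
+// (rule 6.3).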
+func (g *Graph) useMethod(ctx *context, t types.Type, sel *types.Selection, by interface{}, kind edgeKind) {
+ obj := sel.Obj()
+ path := sel.Index()
+ assert(obj != nil)
+ if len(path) > 1 {
+ base := lintdsl.Dereference(t).Underlying().(*types.Struct)
+ for _, idx := range path[:len(path)-1] {
+ next := base.Field(idx)
+ // (6.3) structs use embedded fields that help implement interfaces
+ ctx.see(base)
+ ctx.seeAndUse(next, base, edgeProvidesMethod)
+ base, _ = lintdsl.Dereference(next.Type()).Underlying().(*types.Struct)
+ }
+ }
+ ctx.seeAndUse(obj, by, kind)
+}
+
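+// owningObject returns the types.Object that fn belongs to. Anonymous
+// functions have no object of their own, so they resolve to their
+// closest named ancestor, or nil for synthetic functions.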
+func owningObject(fn *ssa.Function) types.Object {
+ if fn.Object() != nil {
+ return fn.Object()
+ }
+ if fn.Parent() != nil {
+ return owningObject(fn.Parent())
+ }
+ return nil
+}
+
+func (g *Graph) function(ctx *context, fn *ssa.Function) {
+ if fn.Package() != nil && fn.Package() != ctx.pkg.SSA {
+ return
+ }
+
+ name := fn.RelString(nil)
+ if _, ok := ctx.seenFns[name]; ok {
+ return
+ }
+ ctx.seenFns[name] = struct{}{}
+
+ // (4.1) functions use all their arguments, return parameters and receivers
+ g.signature(ctx, fn.Signature, owningObject(fn))
+ g.instructions(ctx, fn)
+ for _, anon := range fn.AnonFuncs {
+ // (4.2) functions use anonymous functions defined beneath them
+ //
+ // This fact is expressed implicitly. Anonymous functions have
+ // no types.Object, so their owner is the surrounding
+ // function.
+ g.function(ctx, anon)
+ }
+}
+
+func (g *Graph) typ(ctx *context, t types.Type, parent types.Type) {
+ if g.wholeProgram {
+ g.mu.Lock()
+ }
+ if ctx.seenTypes.At(t) != nil {
+ if g.wholeProgram {
+ g.mu.Unlock()
+ }
+ return
+ }
+ if g.wholeProgram {
+ g.mu.Unlock()
+ }
+ if t, ok := t.(*types.Named); ok && t.Obj().Pkg() != nil {
+ if t.Obj().Pkg() != ctx.pkg.Pkg {
+ return
+ }
+ }
+
+ if g.wholeProgram {
+ g.mu.Lock()
+ }
+ ctx.seenTypes.Set(t, struct{}{})
+ if g.wholeProgram {
+ g.mu.Unlock()
+ }
+ if isIrrelevant(t) {
+ return
+ }
+
+ ctx.see(t)
+ switch t := t.(type) {
+ case *types.Struct:
+ for i := 0; i < t.NumFields(); i++ {
+ ctx.see(t.Field(i))
+ if t.Field(i).Exported() {
+ // (6.2) structs use exported fields
+ ctx.use(t.Field(i), t, edgeExportedField)
+ } else if t.Field(i).Name() == "_" {
+ ctx.use(t.Field(i), t, edgeBlankField)
+ } else if isNoCopyType(t.Field(i).Type()) {
+ // (6.1) structs use fields of type NoCopy sentinel
+ ctx.use(t.Field(i), t, edgeNoCopySentinel)
+ } else if parent == nil {
+ // (11.1) anonymous struct types use all their fields.
+ ctx.use(t.Field(i), t, edgeAnonymousStruct)
+ }
+ if t.Field(i).Anonymous() {
+ // (e3) exported identifiers aren't automatically used.
+ if !g.wholeProgram {
+ // does the embedded field contribute exported methods to the method set?
+ T := t.Field(i).Type()
+ if _, ok := T.Underlying().(*types.Pointer); !ok {
+ // An embedded field is addressable, so check
+ // the pointer type to get the full method set
+ T = types.NewPointer(T)
+ }
+ ms := ctx.pkg.SSA.Prog.MethodSets.MethodSet(T)
+ for j := 0; j < ms.Len(); j++ {
+ if ms.At(j).Obj().Exported() {
+ // (6.4) structs use embedded fields that have exported methods (recursively)
+ ctx.use(t.Field(i), t, edgeExtendsExportedMethodSet)
+ break
+ }
+ }
+ }
+
+ seen := map[*types.Struct]struct{}{}
+ var hasExportedField func(t types.Type) bool
+ hasExportedField = func(T types.Type) bool {
+ t, ok := lintdsl.Dereference(T).Underlying().(*types.Struct)
+ if !ok {
+ return false
+ }
+ if _, ok := seen[t]; ok {
+ return false
+ }
+ seen[t] = struct{}{}
+ for i := 0; i < t.NumFields(); i++ {
+ field := t.Field(i)
+ if field.Exported() {
+ return true
+ }
+ if field.Embedded() && hasExportedField(field.Type()) {
+ return true
+ }
+ }
+ return false
+ }
+ // does the embedded field contribute exported fields?
+ if hasExportedField(t.Field(i).Type()) {
+ // (6.5) structs use embedded structs that have exported fields (recursively)
+ ctx.use(t.Field(i), t, edgeExtendsExportedFields)
+ }
+
+ }
+ g.variable(ctx, t.Field(i))
+ }
+ case *types.Basic:
+ // Nothing to do
+ case *types.Named:
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Underlying(), t, edgeUnderlyingType)
+ ctx.seeAndUse(t.Obj(), t, edgeTypeName)
+ ctx.seeAndUse(t, t.Obj(), edgeNamedType)
+
+ // (2.4) named types use the pointer type
+ if _, ok := t.Underlying().(*types.Interface); !ok && t.NumMethods() > 0 {
+ ctx.seeAndUse(types.NewPointer(t), t, edgePointerType)
+ }
+
+ for i := 0; i < t.NumMethods(); i++ {
+ ctx.see(t.Method(i))
+ // don't use trackExportedIdentifier here, we care about
+ // all exported methods, even in package main or in tests.
+ if t.Method(i).Exported() && !g.wholeProgram {
+ // (2.1) named types use exported methods
+ ctx.use(t.Method(i), t, edgeExportedMethod)
+ }
+ g.function(ctx, ctx.pkg.SSA.Prog.FuncValue(t.Method(i)))
+ }
+
+ g.typ(ctx, t.Underlying(), t)
+ case *types.Slice:
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Elem(), t, edgeElementType)
+ g.typ(ctx, t.Elem(), nil)
+ case *types.Map:
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Elem(), t, edgeElementType)
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Key(), t, edgeKeyType)
+ g.typ(ctx, t.Elem(), nil)
+ g.typ(ctx, t.Key(), nil)
+ case *types.Signature:
+ g.signature(ctx, t, nil)
+ case *types.Interface:
+ for i := 0; i < t.NumMethods(); i++ {
+ m := t.Method(i)
+ // (8.3) All interface methods are marked as used
+ ctx.seeAndUse(m, t, edgeInterfaceMethod)
+ ctx.seeAndUse(m.Type().(*types.Signature), m, edgeSignature)
+ g.signature(ctx, m.Type().(*types.Signature), nil)
+ }
+ for i := 0; i < t.NumEmbeddeds(); i++ {
+ tt := t.EmbeddedType(i)
+ // (8.4) All embedded interfaces are marked as used
+ ctx.seeAndUse(tt, t, edgeEmbeddedInterface)
+ }
+ case *types.Array:
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Elem(), t, edgeElementType)
+ g.typ(ctx, t.Elem(), nil)
+ case *types.Pointer:
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Elem(), t, edgeElementType)
+ g.typ(ctx, t.Elem(), nil)
+ case *types.Chan:
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.Elem(), t, edgeElementType)
+ g.typ(ctx, t.Elem(), nil)
+ case *types.Tuple:
+ for i := 0; i < t.Len(); i++ {
+ // (9.3) types use their underlying and element types
+ ctx.seeAndUse(t.At(i).Type(), t, edgeTupleElement|edgeType)
+ g.typ(ctx, t.At(i).Type(), nil)
+ }
+ default:
+ panic(fmt.Sprintf("unreachable: %T", t))
+ }
+}
+
+func (g *Graph) variable(ctx *context, v *types.Var) {
+ // (9.2) variables use their types
+ ctx.seeAndUse(v.Type(), v, edgeType)
+ g.typ(ctx, v.Type(), nil)
+}
+
+func (g *Graph) signature(ctx *context, sig *types.Signature, fn types.Object) {
+ var user interface{} = fn
+ if fn == nil {
+ user = sig
+ ctx.see(sig)
+ }
+ if sig.Recv() != nil {
+ ctx.seeAndUse(sig.Recv().Type(), user, edgeReceiver|edgeType)
+ g.typ(ctx, sig.Recv().Type(), nil)
+ }
+ for i := 0; i < sig.Params().Len(); i++ {
+ param := sig.Params().At(i)
+ ctx.seeAndUse(param.Type(), user, edgeFunctionArgument|edgeType)
+ g.typ(ctx, param.Type(), nil)
+ }
+ for i := 0; i < sig.Results().Len(); i++ {
+ param := sig.Results().At(i)
+ ctx.seeAndUse(param.Type(), user, edgeFunctionResult|edgeType)
+ g.typ(ctx, param.Type(), nil)
+ }
+}
+
+func (g *Graph) instructions(ctx *context, fn *ssa.Function) {
+ fnObj := owningObject(fn)
+ for _, b := range fn.Blocks {
+ for _, instr := range b.Instrs {
+ ops := instr.Operands(nil)
+ switch instr.(type) {
+ case *ssa.Store:
+ // (9.7) variable _reads_ use variables, writes do not
+ ops = ops[1:]
+ case *ssa.DebugRef:
+ ops = nil
+ }
+ for _, arg := range ops {
+ walkPhi(*arg, func(v ssa.Value) {
+ switch v := v.(type) {
+ case *ssa.Function:
+ // (4.3) functions use closures and bound methods.
+ // (4.5) functions use functions they call
+ // (9.5) instructions use their operands
+ // (4.4) functions use functions they return. we assume that someone else will call the returned function
+ if owningObject(v) != nil {
+ ctx.seeAndUse(owningObject(v), fnObj, edgeInstructionOperand)
+ }
+ g.function(ctx, v)
+ case *ssa.Const:
+ // (9.6) instructions use their operands' types
+ ctx.seeAndUse(v.Type(), fnObj, edgeType)
+ g.typ(ctx, v.Type(), nil)
+ case *ssa.Global:
+ if v.Object() != nil {
+ // (9.5) instructions use their operands
+ ctx.seeAndUse(v.Object(), fnObj, edgeInstructionOperand)
+ }
+ }
+ })
+ }
+ if v, ok := instr.(ssa.Value); ok {
+ if _, ok := v.(*ssa.Range); !ok {
+ // See https://github.com/golang/go/issues/19670
+
+ // (4.8) instructions use their types
+ // (9.4) conversions use the type they convert to
+ ctx.seeAndUse(v.Type(), fnObj, edgeType)
+ g.typ(ctx, v.Type(), nil)
+ }
+ }
+ switch instr := instr.(type) {
+ case *ssa.Field:
+ st := instr.X.Type().Underlying().(*types.Struct)
+ field := st.Field(instr.Field)
+ // (4.7) functions use fields they access
+ ctx.seeAndUse(field, fnObj, edgeFieldAccess)
+ case *ssa.FieldAddr:
+ st := lintdsl.Dereference(instr.X.Type()).Underlying().(*types.Struct)
+ field := st.Field(instr.Field)
+ // (4.7) functions use fields they access
+ ctx.seeAndUse(field, fnObj, edgeFieldAccess)
+ case *ssa.Store:
+ // nothing to do, handled generically by operands
+ case *ssa.Call:
+ c := instr.Common()
+ if !c.IsInvoke() {
+ // handled generically as an instruction operand
+
+ if g.wholeProgram {
+ // (e3) special case known reflection-based method callers
+ switch lintdsl.CallName(c) {
+ case "net/rpc.Register", "net/rpc.RegisterName", "(*net/rpc.Server).Register", "(*net/rpc.Server).RegisterName":
+ var arg ssa.Value
+ switch lintdsl.CallName(c) {
+ case "net/rpc.Register":
+ arg = c.Args[0]
+ case "net/rpc.RegisterName":
+ arg = c.Args[1]
+ case "(*net/rpc.Server).Register":
+ arg = c.Args[1]
+ case "(*net/rpc.Server).RegisterName":
+ arg = c.Args[2]
+ }
+ walkPhi(arg, func(v ssa.Value) {
+ if v, ok := v.(*ssa.MakeInterface); ok {
+ walkPhi(v.X, func(vv ssa.Value) {
+ ms := ctx.pkg.SSA.Prog.MethodSets.MethodSet(vv.Type())
+ for i := 0; i < ms.Len(); i++ {
+ if ms.At(i).Obj().Exported() {
+ g.useMethod(ctx, vv.Type(), ms.At(i), fnObj, edgeNetRPCRegister)
+ }
+ }
+ })
+ }
+ })
+ }
+ }
+ } else {
+ // (4.5) functions use functions/interface methods they call
+ ctx.seeAndUse(c.Method, fnObj, edgeInterfaceCall)
+ }
+ case *ssa.Return:
+ // nothing to do, handled generically by operands
+ case *ssa.ChangeType:
+ // conversion type handled generically
+
+ s1, ok1 := lintdsl.Dereference(instr.Type()).Underlying().(*types.Struct)
+ s2, ok2 := lintdsl.Dereference(instr.X.Type()).Underlying().(*types.Struct)
+ if ok1 && ok2 {
+ // Converting between two structs. The fields are
+ // relevant for the conversion, but only if the
+ // fields are also used outside of the conversion.
+ // Mark fields as used by each other.
+
+ assert(s1.NumFields() == s2.NumFields())
+ for i := 0; i < s1.NumFields(); i++ {
+ ctx.see(s1.Field(i))
+ ctx.see(s2.Field(i))
+ // (5.1) when converting between two equivalent structs, the fields in
+ // either struct use each other. the fields are relevant for the
+ // conversion, but only if the fields are also accessed outside the
+ // conversion.
+ ctx.seeAndUse(s1.Field(i), s2.Field(i), edgeStructConversion)
+ ctx.seeAndUse(s2.Field(i), s1.Field(i), edgeStructConversion)
+ }
+ }
+ case *ssa.MakeInterface:
+ // nothing to do, handled generically by operands
+ case *ssa.Slice:
+ // nothing to do, handled generically by operands
+ case *ssa.RunDefers:
+				// nothing to do, the deferred functions are already marked used by deferring them.
+ case *ssa.Convert:
+ // to unsafe.Pointer
+ if typ, ok := instr.Type().(*types.Basic); ok && typ.Kind() == types.UnsafePointer {
+ if ptr, ok := instr.X.Type().Underlying().(*types.Pointer); ok {
+ if st, ok := ptr.Elem().Underlying().(*types.Struct); ok {
+ for i := 0; i < st.NumFields(); i++ {
+ // (5.2) when converting to or from unsafe.Pointer, mark all fields as used.
+ ctx.seeAndUse(st.Field(i), fnObj, edgeUnsafeConversion)
+ }
+ }
+ }
+ }
+ // from unsafe.Pointer
+ if typ, ok := instr.X.Type().(*types.Basic); ok && typ.Kind() == types.UnsafePointer {
+ if ptr, ok := instr.Type().Underlying().(*types.Pointer); ok {
+ if st, ok := ptr.Elem().Underlying().(*types.Struct); ok {
+ for i := 0; i < st.NumFields(); i++ {
+ // (5.2) when converting to or from unsafe.Pointer, mark all fields as used.
+ ctx.seeAndUse(st.Field(i), fnObj, edgeUnsafeConversion)
+ }
+ }
+ }
+ }
+ case *ssa.TypeAssert:
+ // nothing to do, handled generically by instruction
+ // type (possibly a tuple, which contains the asserted
+ // to type). redundantly handled by the type of
+ // ssa.Extract, too
+ case *ssa.MakeClosure:
+ // nothing to do, handled generically by operands
+ case *ssa.Alloc:
+ // nothing to do
+ case *ssa.UnOp:
+ // nothing to do
+ case *ssa.BinOp:
+ // nothing to do
+ case *ssa.If:
+ // nothing to do
+ case *ssa.Jump:
+ // nothing to do
+ case *ssa.IndexAddr:
+ // nothing to do
+ case *ssa.Extract:
+ // nothing to do
+ case *ssa.Panic:
+ // nothing to do
+ case *ssa.DebugRef:
+ // nothing to do
+ case *ssa.BlankStore:
+ // nothing to do
+ case *ssa.Phi:
+ // nothing to do
+ case *ssa.MakeMap:
+ // nothing to do
+ case *ssa.MapUpdate:
+ // nothing to do
+ case *ssa.Lookup:
+ // nothing to do
+ case *ssa.MakeSlice:
+ // nothing to do
+ case *ssa.Send:
+ // nothing to do
+ case *ssa.MakeChan:
+ // nothing to do
+ case *ssa.Range:
+ // nothing to do
+ case *ssa.Next:
+ // nothing to do
+ case *ssa.Index:
+ // nothing to do
+ case *ssa.Select:
+ // nothing to do
+ case *ssa.ChangeInterface:
+ // nothing to do
+ case *ssa.Go:
+ // nothing to do, handled generically by operands
+ case *ssa.Defer:
+ // nothing to do, handled generically by operands
+ default:
+ panic(fmt.Sprintf("unreachable: %T", instr))
+ }
+ }
+ }
+}
+
+// isNoCopyType reports whether a type represents the NoCopy sentinel
+// type. The NoCopy type is a named struct with no fields and exactly
+// one method `func Lock()` that is empty.
+//
+// FIXME(dh): currently we're not checking that the function body is
+// empty.
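+//
+// A canonical shape (sketch) is the noCopy pattern from the standard
+// library:
+//
+//	type noCopy struct{}
+//
+//	func (*noCopy) Lock() {}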
+func isNoCopyType(typ types.Type) bool {
+ st, ok := typ.Underlying().(*types.Struct)
+ if !ok {
+ return false
+ }
+ if st.NumFields() != 0 {
+ return false
+ }
+
+ named, ok := typ.(*types.Named)
+ if !ok {
+ return false
+ }
+ if named.NumMethods() != 1 {
+ return false
+ }
+ meth := named.Method(0)
+ if meth.Name() != "Lock" {
+ return false
+ }
+ sig := meth.Type().(*types.Signature)
+ if sig.Params().Len() != 0 || sig.Results().Len() != 0 {
+ return false
+ }
+ return true
+}
+
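+// walkPhi applies fn to every non-phi value reachable through v,
+// flattening chains of phi nodes while guarding against cycles.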
+func walkPhi(v ssa.Value, fn func(v ssa.Value)) {
+ phi, ok := v.(*ssa.Phi)
+ if !ok {
+ fn(v)
+ return
+ }
+
+ seen := map[ssa.Value]struct{}{}
+ var impl func(v *ssa.Phi)
+ impl = func(v *ssa.Phi) {
+ if _, ok := seen[v]; ok {
+ return
+ }
+ seen[v] = struct{}{}
+ for _, e := range v.Edges {
+ if ev, ok := e.(*ssa.Phi); ok {
+ impl(ev)
+ } else {
+ fn(e)
+ }
+ }
+ }
+ impl(phi)
+}
+
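+// interfacesFromExportData collects interface types reachable from
+// the package's top-level scope: named interfaces themselves, plus
+// unnamed interfaces appearing in method and function signatures and
+// in variable types.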
+func interfacesFromExportData(pkg *types.Package) []*types.Interface {
+ var out []*types.Interface
+ scope := pkg.Scope()
+ for _, name := range scope.Names() {
+ obj := scope.Lookup(name)
+ out = append(out, interfacesFromObject(obj)...)
+ }
+ return out
+}
+
+func interfacesFromObject(obj types.Object) []*types.Interface {
+ var out []*types.Interface
+ switch obj := obj.(type) {
+ case *types.Func:
+ sig := obj.Type().(*types.Signature)
+ for i := 0; i < sig.Results().Len(); i++ {
+ out = append(out, interfacesFromObject(sig.Results().At(i))...)
+ }
+ for i := 0; i < sig.Params().Len(); i++ {
+ out = append(out, interfacesFromObject(sig.Params().At(i))...)
+ }
+ case *types.TypeName:
+ if named, ok := obj.Type().(*types.Named); ok {
+ for i := 0; i < named.NumMethods(); i++ {
+ out = append(out, interfacesFromObject(named.Method(i))...)
+ }
+
+ if iface, ok := named.Underlying().(*types.Interface); ok {
+ out = append(out, iface)
+ }
+ }
+ case *types.Var:
+ // No call to Underlying here. We want unnamed interfaces
+ // only. Named interfaces are gotten directly from the
+ // package's scope.
+ if iface, ok := obj.Type().(*types.Interface); ok {
+ out = append(out, iface)
+ }
+ case *types.Const:
+ case *types.Builtin:
+ default:
+ panic(fmt.Sprintf("unhandled type: %T", obj))
+ }
+ return out
+}
diff --git a/vendor/honnef.co/go/tools/version/buildinfo.go b/vendor/honnef.co/go/tools/version/buildinfo.go
new file mode 100644
index 0000000000000..b6034bb7dcd8d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/version/buildinfo.go
@@ -0,0 +1,46 @@
+// +build go1.12
+
+package version
+
+import (
+ "fmt"
+ "runtime/debug"
+)
+
+func printBuildInfo() {
+ if info, ok := debug.ReadBuildInfo(); ok {
+ fmt.Println("Main module:")
+ printModule(&info.Main)
+ fmt.Println("Dependencies:")
+ for _, dep := range info.Deps {
+ printModule(dep)
+ }
+ } else {
+ fmt.Println("Built without Go modules")
+ }
+}
+
+func buildInfoVersion() (string, bool) {
+ info, ok := debug.ReadBuildInfo()
+ if !ok {
+ return "", false
+ }
+ if info.Main.Version == "(devel)" {
+ return "", false
+ }
+ return info.Main.Version, true
+}
+
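+// printModule prints a single module from the build info: its path,
+// followed by its version, checksum, and replacement directive when
+// present.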
+func printModule(m *debug.Module) {
+ fmt.Printf("\t%s", m.Path)
+ if m.Version != "(devel)" {
+ fmt.Printf("@%s", m.Version)
+ }
+ if m.Sum != "" {
+ fmt.Printf(" (sum: %s)", m.Sum)
+ }
+ if m.Replace != nil {
+ fmt.Printf(" (replace: %s)", m.Replace.Path)
+ }
+ fmt.Println()
+}
diff --git a/vendor/honnef.co/go/tools/version/buildinfo111.go b/vendor/honnef.co/go/tools/version/buildinfo111.go
new file mode 100644
index 0000000000000..06aae1e65bb3d
--- /dev/null
+++ b/vendor/honnef.co/go/tools/version/buildinfo111.go
@@ -0,0 +1,6 @@
+// +build !go1.12
+
+package version
+
+func printBuildInfo() {}
+func buildInfoVersion() (string, bool) { return "", false }
diff --git a/vendor/honnef.co/go/tools/version/version.go b/vendor/honnef.co/go/tools/version/version.go
new file mode 100644
index 0000000000000..468e8efd6e1ca
--- /dev/null
+++ b/vendor/honnef.co/go/tools/version/version.go
@@ -0,0 +1,42 @@
+package version
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+ "runtime"
+)
+
+const Version = "2019.2.3"
+
+// version returns a version descriptor and reports whether the
+// version is a known release.
+func version() (string, bool) {
+ if Version != "devel" {
+ return Version, true
+ }
+ v, ok := buildInfoVersion()
+ if ok {
+ return v, false
+ }
+ return "devel", false
+}
+
+func Print() {
+ v, release := version()
+
+ if release {
+ fmt.Printf("%s %s\n", filepath.Base(os.Args[0]), v)
+ } else if v == "devel" {
+ fmt.Printf("%s (no version)\n", filepath.Base(os.Args[0]))
+ } else {
+ fmt.Printf("%s (devel, %s)\n", filepath.Base(os.Args[0]), v)
+ }
+}
+
+func Verbose() {
+ Print()
+ fmt.Println()
+ fmt.Println("Compiled with Go version:", runtime.Version())
+ printBuildInfo()
+}
diff --git a/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto b/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto
index c133192647d8c..086cbcc79867d 100644
--- a/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/admissionregistration/v1beta1/generated.proto
@@ -131,7 +131,7 @@ message MutatingWebhook {
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector objectSelector = 11;
- // SideEffects states whether this webhookk has side effects.
+ // SideEffects states whether this webhook has side effects.
// Acceptable values are: Unknown, None, Some, NoneOnDryRun
// Webhooks with side effects MUST implement a reconciliation system, since a request may be
// rejected by a future step in the admission change and the side effects therefore need to be undone.
@@ -179,8 +179,9 @@ message MutatingWebhook {
}
// MutatingWebhookConfiguration describes the configuration of an admission webhook that accepts or rejects and may change the object.
+// Deprecated in v1.16, planned for removal in v1.19. Use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration instead.
message MutatingWebhookConfiguration {
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -194,7 +195,7 @@ message MutatingWebhookConfiguration {
// MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration.
message MutatingWebhookConfigurationList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -384,7 +385,7 @@ message ValidatingWebhook {
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.LabelSelector objectSelector = 10;
- // SideEffects states whether this webhookk has side effects.
+ // SideEffects states whether this webhook has side effects.
// Acceptable values are: Unknown, None, Some, NoneOnDryRun
// Webhooks with side effects MUST implement a reconciliation system, since a request may be
// rejected by a future step in the admission change and the side effects therefore need to be undone.
@@ -414,8 +415,9 @@ message ValidatingWebhook {
}
// ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it.
+// Deprecated in v1.16, planned for removal in v1.19. Use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration instead.
message ValidatingWebhookConfiguration {
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -429,7 +431,7 @@ message ValidatingWebhookConfiguration {
// ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration.
message ValidatingWebhookConfigurationList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/admissionregistration/v1beta1/types.go b/vendor/k8s.io/api/admissionregistration/v1beta1/types.go
index 6b8c5a23a724c..37a993e3e5a18 100644
--- a/vendor/k8s.io/api/admissionregistration/v1beta1/types.go
+++ b/vendor/k8s.io/api/admissionregistration/v1beta1/types.go
@@ -115,9 +115,10 @@ const (
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it.
+// Deprecated in v1.16, planned for removal in v1.19. Use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration instead.
type ValidatingWebhookConfiguration struct {
metav1.TypeMeta `json:",inline"`
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Webhooks is a list of webhooks and the affected resources and operations.
@@ -133,7 +134,7 @@ type ValidatingWebhookConfiguration struct {
type ValidatingWebhookConfigurationList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// List of ValidatingWebhookConfiguration.
@@ -145,9 +146,10 @@ type ValidatingWebhookConfigurationList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object.
+// Deprecated in v1.16, planned for removal in v1.19. Use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration instead.
type MutatingWebhookConfiguration struct {
metav1.TypeMeta `json:",inline"`
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Webhooks is a list of webhooks and the affected resources and operations.
@@ -163,7 +165,7 @@ type MutatingWebhookConfiguration struct {
type MutatingWebhookConfigurationList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// List of MutatingWebhookConfiguration.
@@ -273,7 +275,7 @@ type ValidatingWebhook struct {
// +optional
ObjectSelector *metav1.LabelSelector `json:"objectSelector,omitempty" protobuf:"bytes,10,opt,name=objectSelector"`
- // SideEffects states whether this webhookk has side effects.
+ // SideEffects states whether this webhook has side effects.
// Acceptable values are: Unknown, None, Some, NoneOnDryRun
// Webhooks with side effects MUST implement a reconciliation system, since a request may be
// rejected by a future step in the admission change and the side effects therefore need to be undone.
@@ -405,7 +407,7 @@ type MutatingWebhook struct {
// +optional
ObjectSelector *metav1.LabelSelector `json:"objectSelector,omitempty" protobuf:"bytes,11,opt,name=objectSelector"`
- // SideEffects states whether this webhookk has side effects.
+ // SideEffects states whether this webhook has side effects.
// Acceptable values are: Unknown, None, Some, NoneOnDryRun
// Webhooks with side effects MUST implement a reconciliation system, since a request may be
// rejected by a future step in the admission change and the side effects therefore need to be undone.
diff --git a/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go
index 39e86db9769fd..d9fb5af8fcb7b 100644
--- a/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/admissionregistration/v1beta1/types_swagger_doc_generated.go
@@ -36,7 +36,7 @@ var map_MutatingWebhook = map[string]string{
"matchPolicy": "matchPolicy defines how the \"rules\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n\n- Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook.\n\n- Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook.\n\nDefaults to \"Exact\"",
"namespaceSelector": "NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook.\n\nFor example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\"; you will set the selector as follows: \"namespaceSelector\": {\n \"matchExpressions\": [\n {\n \"key\": \"runlevel\",\n \"operator\": \"NotIn\",\n \"values\": [\n \"0\",\n \"1\"\n ]\n }\n ]\n}\n\nIf instead you want to only run the webhook on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n \"matchExpressions\": [\n {\n \"key\": \"environment\",\n \"operator\": \"In\",\n \"values\": [\n \"prod\",\n \"staging\"\n ]\n }\n ]\n}\n\nSee https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ for more examples of label selectors.\n\nDefault to the empty LabelSelector, which matches everything.",
"objectSelector": "ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.",
- "sideEffects": "SideEffects states whether this webhookk has side effects. Acceptable values are: Unknown, None, Some, NoneOnDryRun Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission change and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Defaults to Unknown.",
+ "sideEffects": "SideEffects states whether this webhook has side effects. Acceptable values are: Unknown, None, Some, NoneOnDryRun Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission change and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Defaults to Unknown.",
"timeoutSeconds": "TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 30 seconds.",
"admissionReviewVersions": "AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. Default to `['v1beta1']`.",
"reinvocationPolicy": "reinvocationPolicy indicates whether this webhook should be called multiple times as part of a single admission evaluation. Allowed values are \"Never\" and \"IfNeeded\".\n\nNever: the webhook will not be called more than once in a single admission evaluation.\n\nIfNeeded: the webhook will be called at least one additional time as part of the admission evaluation if the object being admitted is modified by other admission plugins after the initial webhook call. Webhooks that specify this option *must* be idempotent, able to process objects they previously admitted. Note: * the number of additional invocations is not guaranteed to be exactly one. * if additional invocations result in further modifications to the object, webhooks are not guaranteed to be invoked again. * webhooks that use this option may be reordered to minimize the number of additional invocations. * to validate an object after all mutations are guaranteed complete, use a validating admission webhook instead.\n\nDefaults to \"Never\".",
@@ -47,8 +47,8 @@ func (MutatingWebhook) SwaggerDoc() map[string]string {
}
var map_MutatingWebhookConfiguration = map[string]string{
- "": "MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object.",
- "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.",
+ "": "MutatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and may change the object. Deprecated in v1.16, planned for removal in v1.19. Use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration instead.",
+ "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.",
"webhooks": "Webhooks is a list of webhooks and the affected resources and operations.",
}
@@ -58,7 +58,7 @@ func (MutatingWebhookConfiguration) SwaggerDoc() map[string]string {
var map_MutatingWebhookConfigurationList = map[string]string{
"": "MutatingWebhookConfigurationList is a list of MutatingWebhookConfiguration.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of MutatingWebhookConfiguration.",
}
@@ -108,7 +108,7 @@ var map_ValidatingWebhook = map[string]string{
"matchPolicy": "matchPolicy defines how the \"rules\" list is used to match incoming requests. Allowed values are \"Exact\" or \"Equivalent\".\n\n- Exact: match a request only if it exactly matches a specified rule. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, but \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps/v1beta1 or extensions/v1beta1 would not be sent to the webhook.\n\n- Equivalent: match a request if modifies a resource listed in rules, even via another API group or version. For example, if deployments can be modified via apps/v1, apps/v1beta1, and extensions/v1beta1, and \"rules\" only included `apiGroups:[\"apps\"], apiVersions:[\"v1\"], resources: [\"deployments\"]`, a request to apps/v1beta1 or extensions/v1beta1 would be converted to apps/v1 and sent to the webhook.\n\nDefaults to \"Exact\"",
"namespaceSelector": "NamespaceSelector decides whether to run the webhook on an object based on whether the namespace for that object matches the selector. If the object itself is a namespace, the matching is performed on object.metadata.labels. If the object is another cluster scoped resource, it never skips the webhook.\n\nFor example, to run the webhook on any objects whose namespace is not associated with \"runlevel\" of \"0\" or \"1\"; you will set the selector as follows: \"namespaceSelector\": {\n \"matchExpressions\": [\n {\n \"key\": \"runlevel\",\n \"operator\": \"NotIn\",\n \"values\": [\n \"0\",\n \"1\"\n ]\n }\n ]\n}\n\nIf instead you want to only run the webhook on any objects whose namespace is associated with the \"environment\" of \"prod\" or \"staging\"; you will set the selector as follows: \"namespaceSelector\": {\n \"matchExpressions\": [\n {\n \"key\": \"environment\",\n \"operator\": \"In\",\n \"values\": [\n \"prod\",\n \"staging\"\n ]\n }\n ]\n}\n\nSee https://kubernetes.io/docs/concepts/overview/working-with-objects/labels for more examples of label selectors.\n\nDefault to the empty LabelSelector, which matches everything.",
"objectSelector": "ObjectSelector decides whether to run the webhook based on if the object has matching labels. objectSelector is evaluated against both the oldObject and newObject that would be sent to the webhook, and is considered to match if either object matches the selector. A null object (oldObject in the case of create, or newObject in the case of delete) or an object that cannot have labels (like a DeploymentRollback or a PodProxyOptions object) is not considered to match. Use the object selector only if the webhook is opt-in, because end users may skip the admission webhook by setting the labels. Default to the empty LabelSelector, which matches everything.",
- "sideEffects": "SideEffects states whether this webhookk has side effects. Acceptable values are: Unknown, None, Some, NoneOnDryRun Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission change and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Defaults to Unknown.",
+ "sideEffects": "SideEffects states whether this webhook has side effects. Acceptable values are: Unknown, None, Some, NoneOnDryRun Webhooks with side effects MUST implement a reconciliation system, since a request may be rejected by a future step in the admission change and the side effects therefore need to be undone. Requests with the dryRun attribute will be auto-rejected if they match a webhook with sideEffects == Unknown or Some. Defaults to Unknown.",
"timeoutSeconds": "TimeoutSeconds specifies the timeout for this webhook. After the timeout passes, the webhook call will be ignored or the API call will fail based on the failure policy. The timeout value must be between 1 and 30 seconds. Default to 30 seconds.",
"admissionReviewVersions": "AdmissionReviewVersions is an ordered list of preferred `AdmissionReview` versions the Webhook expects. API server will try to use first version in the list which it supports. If none of the versions specified in this list supported by API server, validation will fail for this object. If a persisted webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail and be subject to the failure policy. Default to `['v1beta1']`.",
}
@@ -118,8 +118,8 @@ func (ValidatingWebhook) SwaggerDoc() map[string]string {
}
var map_ValidatingWebhookConfiguration = map[string]string{
- "": "ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it.",
- "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.",
+ "": "ValidatingWebhookConfiguration describes the configuration of and admission webhook that accept or reject and object without changing it. Deprecated in v1.16, planned for removal in v1.19. Use admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration instead.",
+ "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.",
"webhooks": "Webhooks is a list of webhooks and the affected resources and operations.",
}
@@ -129,7 +129,7 @@ func (ValidatingWebhookConfiguration) SwaggerDoc() map[string]string {
var map_ValidatingWebhookConfigurationList = map[string]string{
"": "ValidatingWebhookConfigurationList is a list of ValidatingWebhookConfiguration.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of ValidatingWebhookConfiguration.",
}
diff --git a/vendor/k8s.io/api/apps/v1/generated.proto b/vendor/k8s.io/api/apps/v1/generated.proto
index fea81922f3b6b..6c55279745761 100644
--- a/vendor/k8s.io/api/apps/v1/generated.proto
+++ b/vendor/k8s.io/api/apps/v1/generated.proto
@@ -41,7 +41,7 @@ option go_package = "v1";
// depend on its stability. It is primarily for internal use by controllers.
message ControllerRevision {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -54,7 +54,7 @@ message ControllerRevision {
// ControllerRevisionList is a resource containing a list of ControllerRevision objects.
message ControllerRevisionList {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -65,12 +65,12 @@ message ControllerRevisionList {
// DaemonSet represents the configuration of a daemon set.
message DaemonSet {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// The desired behavior of this daemon set.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional DaemonSetSpec spec = 2;
@@ -78,7 +78,7 @@ message DaemonSet {
// out of date by some window of time.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional DaemonSetStatus status = 3;
}
@@ -107,7 +107,7 @@ message DaemonSetCondition {
// DaemonSetList is a collection of daemon sets.
message DaemonSetList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -366,12 +366,12 @@ message DeploymentStrategy {
message ReplicaSet {
// If the Labels of a ReplicaSet are empty, they are defaulted to
// be the same as the Pod(s) that the ReplicaSet manages.
- // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the specification of the desired behavior of the ReplicaSet.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ReplicaSetSpec spec = 2;
@@ -379,7 +379,7 @@ message ReplicaSet {
// This data may be out of date by some window of time.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ReplicaSetStatus status = 3;
}
@@ -408,7 +408,7 @@ message ReplicaSetCondition {
// ReplicaSetList is a collection of ReplicaSets.
message ReplicaSetList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/apps/v1/types.go b/vendor/k8s.io/api/apps/v1/types.go
index 2fe8574581211..e003a0c4f7c05 100644
--- a/vendor/k8s.io/api/apps/v1/types.go
+++ b/vendor/k8s.io/api/apps/v1/types.go
@@ -95,13 +95,13 @@ const (
// ordering constraints. When a scale operation is performed with this
// strategy, new Pods will be created from the specification version indicated
// by the StatefulSet's updateRevision.
- RollingUpdateStatefulSetStrategyType = "RollingUpdate"
+ RollingUpdateStatefulSetStrategyType StatefulSetUpdateStrategyType = "RollingUpdate"
// OnDeleteStatefulSetStrategyType triggers the legacy behavior. Version
// tracking and ordered rolling restarts are disabled. Pods are recreated
// from the StatefulSetSpec when they are manually deleted. When a scale
// operation is performed with this strategy,specification version indicated
// by the StatefulSet's currentRevision.
- OnDeleteStatefulSetStrategyType = "OnDelete"
+ OnDeleteStatefulSetStrategyType StatefulSetUpdateStrategyType = "OnDelete"
)
// RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType.
@@ -618,12 +618,12 @@ type DaemonSetCondition struct {
type DaemonSet struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// The desired behavior of this daemon set.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec DaemonSetSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
@@ -631,7 +631,7 @@ type DaemonSet struct {
// out of date by some window of time.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status DaemonSetStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -649,7 +649,7 @@ const (
type DaemonSetList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -668,12 +668,12 @@ type ReplicaSet struct {
// If the Labels of a ReplicaSet are empty, they are defaulted to
// be the same as the Pod(s) that the ReplicaSet manages.
- // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the specification of the desired behavior of the ReplicaSet.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec ReplicaSetSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
@@ -681,7 +681,7 @@ type ReplicaSet struct {
// This data may be out of date by some window of time.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status ReplicaSetStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -692,7 +692,7 @@ type ReplicaSet struct {
type ReplicaSetList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -800,7 +800,7 @@ type ReplicaSetCondition struct {
type ControllerRevision struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -817,7 +817,7 @@ type ControllerRevision struct {
type ControllerRevisionList struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/apps/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/apps/v1/types_swagger_doc_generated.go
index 7e992c58469a1..3f0299d03335e 100644
--- a/vendor/k8s.io/api/apps/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/apps/v1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_ControllerRevision = map[string]string{
"": "ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"data": "Data is the serialized representation of the state.",
"revision": "Revision indicates the revision of the state represented by Data.",
}
@@ -40,7 +40,7 @@ func (ControllerRevision) SwaggerDoc() map[string]string {
var map_ControllerRevisionList = map[string]string{
"": "ControllerRevisionList is a resource containing a list of ControllerRevision objects.",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of ControllerRevisions",
}
@@ -50,9 +50,9 @@ func (ControllerRevisionList) SwaggerDoc() map[string]string {
var map_DaemonSet = map[string]string{
"": "DaemonSet represents the configuration of a daemon set.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "The desired behavior of this daemon set. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "The current status of this daemon set. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "The desired behavior of this daemon set. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "The current status of this daemon set. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (DaemonSet) SwaggerDoc() map[string]string {
@@ -74,7 +74,7 @@ func (DaemonSetCondition) SwaggerDoc() map[string]string {
var map_DaemonSetList = map[string]string{
"": "DaemonSetList is a collection of daemon sets.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "A list of daemon sets.",
}
@@ -202,9 +202,9 @@ func (DeploymentStrategy) SwaggerDoc() map[string]string {
var map_ReplicaSet = map[string]string{
"": "ReplicaSet ensures that a specified number of pod replicas are running at any given time.",
- "metadata": "If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the specification of the desired behavior of the ReplicaSet. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Status is the most recently observed status of the ReplicaSet. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the specification of the desired behavior of the ReplicaSet. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Status is the most recently observed status of the ReplicaSet. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (ReplicaSet) SwaggerDoc() map[string]string {
@@ -226,7 +226,7 @@ func (ReplicaSetCondition) SwaggerDoc() map[string]string {
var map_ReplicaSetList = map[string]string{
"": "ReplicaSetList is a collection of ReplicaSets.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of ReplicaSets. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller",
}
diff --git a/vendor/k8s.io/api/apps/v1beta1/generated.proto b/vendor/k8s.io/api/apps/v1beta1/generated.proto
index 7942b997b60dd..694f6570d8c42 100644
--- a/vendor/k8s.io/api/apps/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/apps/v1beta1/generated.proto
@@ -43,7 +43,7 @@ option go_package = "v1beta1";
// depend on its stability. It is primarily for internal use by controllers.
message ControllerRevision {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -56,7 +56,7 @@ message ControllerRevision {
// ControllerRevisionList is a resource containing a list of ControllerRevision objects.
message ControllerRevisionList {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -277,15 +277,15 @@ message RollingUpdateStatefulSetStrategy {
// Scale represents a scaling request for a resource.
message Scale {
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
- // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
optional ScaleSpec spec = 2;
- // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.
+ // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. Read-only.
// +optional
optional ScaleStatus status = 3;
}
diff --git a/vendor/k8s.io/api/apps/v1beta1/types.go b/vendor/k8s.io/api/apps/v1beta1/types.go
index cf6039df693df..b77fcf7af21a4 100644
--- a/vendor/k8s.io/api/apps/v1beta1/types.go
+++ b/vendor/k8s.io/api/apps/v1beta1/types.go
@@ -60,15 +60,15 @@ type ScaleStatus struct {
// Scale represents a scaling request for a resource.
type Scale struct {
metav1.TypeMeta `json:",inline"`
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
- // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
Spec ScaleSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
- // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.
+ // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. Read-only.
// +optional
Status ScaleStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -134,13 +134,13 @@ const (
// ordering constraints. When a scale operation is performed with this
// strategy, new Pods will be created from the specification version indicated
// by the StatefulSet's updateRevision.
- RollingUpdateStatefulSetStrategyType = "RollingUpdate"
+ RollingUpdateStatefulSetStrategyType StatefulSetUpdateStrategyType = "RollingUpdate"
// OnDeleteStatefulSetStrategyType triggers the legacy behavior. Version
// tracking and ordered rolling restarts are disabled. Pods are recreated
// from the StatefulSetSpec when they are manually deleted. When a scale
// operation is performed with this strategy,specification version indicated
// by the StatefulSet's currentRevision.
- OnDeleteStatefulSetStrategyType = "OnDelete"
+ OnDeleteStatefulSetStrategyType StatefulSetUpdateStrategyType = "OnDelete"
)
// RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType.
@@ -541,7 +541,7 @@ type DeploymentList struct {
type ControllerRevision struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -558,7 +558,7 @@ type ControllerRevision struct {
type ControllerRevisionList struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/apps/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/apps/v1beta1/types_swagger_doc_generated.go
index da1eb5996eb37..504b858639ea7 100644
--- a/vendor/k8s.io/api/apps/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/apps/v1beta1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1beta1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_ControllerRevision = map[string]string{
"": "DEPRECATED - This group version of ControllerRevision is deprecated by apps/v1beta2/ControllerRevision. See the release notes for more information. ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"data": "Data is the serialized representation of the state.",
"revision": "Revision indicates the revision of the state represented by Data.",
}
@@ -40,7 +40,7 @@ func (ControllerRevision) SwaggerDoc() map[string]string {
var map_ControllerRevisionList = map[string]string{
"": "ControllerRevisionList is a resource containing a list of ControllerRevision objects.",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of ControllerRevisions",
}
@@ -167,9 +167,9 @@ func (RollingUpdateStatefulSetStrategy) SwaggerDoc() map[string]string {
var map_Scale = map[string]string{
"": "Scale represents a scaling request for a resource.",
- "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.",
- "spec": "defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.",
- "status": "current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.",
+ "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.",
+ "spec": "defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.",
+ "status": "current status of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. Read-only.",
}
func (Scale) SwaggerDoc() map[string]string {
diff --git a/vendor/k8s.io/api/apps/v1beta2/types.go b/vendor/k8s.io/api/apps/v1beta2/types.go
index 39e07e278ae2e..d358455f0eb64 100644
--- a/vendor/k8s.io/api/apps/v1beta2/types.go
+++ b/vendor/k8s.io/api/apps/v1beta2/types.go
@@ -141,13 +141,13 @@ const (
// ordering constraints. When a scale operation is performed with this
// strategy, new Pods will be created from the specification version indicated
// by the StatefulSet's updateRevision.
- RollingUpdateStatefulSetStrategyType = "RollingUpdate"
+ RollingUpdateStatefulSetStrategyType StatefulSetUpdateStrategyType = "RollingUpdate"
// OnDeleteStatefulSetStrategyType triggers the legacy behavior. Version
// tracking and ordered rolling restarts are disabled. Pods are recreated
// from the StatefulSetSpec when they are manually deleted. When a scale
// operation is performed with this strategy,specification version indicated
// by the StatefulSet's currentRevision.
- OnDeleteStatefulSetStrategyType = "OnDelete"
+ OnDeleteStatefulSetStrategyType StatefulSetUpdateStrategyType = "OnDelete"
)
// RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType.
diff --git a/vendor/k8s.io/api/autoscaling/v1/generated.proto b/vendor/k8s.io/api/autoscaling/v1/generated.proto
index d1590b5ee576e..f50ed9d1f0906 100644
--- a/vendor/k8s.io/api/autoscaling/v1/generated.proto
+++ b/vendor/k8s.io/api/autoscaling/v1/generated.proto
@@ -32,7 +32,7 @@ option go_package = "v1";
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
message CrossVersionObjectReference {
- // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds"
+ // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"
optional string kind = 1;
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
@@ -88,11 +88,11 @@ message ExternalMetricStatus {
// configuration of a horizontal pod autoscaler.
message HorizontalPodAutoscaler {
- // Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
- // behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
optional HorizontalPodAutoscalerSpec spec = 2;
@@ -384,15 +384,15 @@ message ResourceMetricStatus {
// Scale represents a scaling request for a resource.
message Scale {
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
- // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
optional ScaleSpec spec = 2;
- // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.
+ // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. Read-only.
// +optional
optional ScaleStatus status = 3;
}
diff --git a/vendor/k8s.io/api/autoscaling/v1/types.go b/vendor/k8s.io/api/autoscaling/v1/types.go
index fa61bec76cb8f..55b2a0d6b68d3 100644
--- a/vendor/k8s.io/api/autoscaling/v1/types.go
+++ b/vendor/k8s.io/api/autoscaling/v1/types.go
@@ -24,7 +24,7 @@ import (
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
type CrossVersionObjectReference struct {
- // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds"
+ // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"
Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
@@ -82,11 +82,11 @@ type HorizontalPodAutoscalerStatus struct {
// configuration of a horizontal pod autoscaler.
type HorizontalPodAutoscaler struct {
metav1.TypeMeta `json:",inline"`
- // Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
- // behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
Spec HorizontalPodAutoscalerSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
@@ -113,15 +113,15 @@ type HorizontalPodAutoscalerList struct {
// Scale represents a scaling request for a resource.
type Scale struct {
metav1.TypeMeta `json:",inline"`
- // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.
+ // Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
- // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
Spec ScaleSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
- // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.
+ // current status of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. Read-only.
// +optional
Status ScaleStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -151,7 +151,7 @@ type ScaleStatus struct {
// MetricSourceType indicates the type of metric.
type MetricSourceType string
-var (
+const (
// ObjectMetricSourceType is a metric describing a kubernetes object
// (for example, hits-per-second on an Ingress object).
ObjectMetricSourceType MetricSourceType = "Object"
@@ -322,7 +322,7 @@ type MetricStatus struct {
// a HorizontalPodAutoscaler.
type HorizontalPodAutoscalerConditionType string
-var (
+const (
// ScalingActive indicates that the HPA controller is able to scale if necessary:
// it's correctly configured, can fetch the desired metrics, and isn't disabled.
ScalingActive HorizontalPodAutoscalerConditionType = "ScalingActive"
diff --git a/vendor/k8s.io/api/autoscaling/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/autoscaling/v1/types_swagger_doc_generated.go
index 4fffb46306e53..129ce2b484e9a 100644
--- a/vendor/k8s.io/api/autoscaling/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/autoscaling/v1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_CrossVersionObjectReference = map[string]string{
"": "CrossVersionObjectReference contains enough information to let you identify the referred resource.",
- "kind": "Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds\"",
+ "kind": "Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\"",
"name": "Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names",
"apiVersion": "API version of the referent",
}
@@ -64,8 +64,8 @@ func (ExternalMetricStatus) SwaggerDoc() map[string]string {
var map_HorizontalPodAutoscaler = map[string]string{
"": "configuration of a horizontal pod autoscaler.",
- "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.",
+ "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "behaviour of autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.",
"status": "current information about the autoscaler.",
}
@@ -219,9 +219,9 @@ func (ResourceMetricStatus) SwaggerDoc() map[string]string {
var map_Scale = map[string]string{
"": "Scale represents a scaling request for a resource.",
- "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata.",
- "spec": "defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.",
- "status": "current status of the scale. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status. Read-only.",
+ "metadata": "Standard object metadata; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata.",
+ "spec": "defines the behavior of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.",
+ "status": "current status of the scale. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status. Read-only.",
}
func (Scale) SwaggerDoc() map[string]string {
diff --git a/vendor/k8s.io/api/autoscaling/v2beta1/generated.proto b/vendor/k8s.io/api/autoscaling/v2beta1/generated.proto
index 9b4d1a9c036fc..f90f93f9f4026 100644
--- a/vendor/k8s.io/api/autoscaling/v2beta1/generated.proto
+++ b/vendor/k8s.io/api/autoscaling/v2beta1/generated.proto
@@ -32,7 +32,7 @@ option go_package = "v2beta1";
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
message CrossVersionObjectReference {
- // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds"
+ // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"
optional string kind = 1;
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
@@ -92,12 +92,12 @@ message ExternalMetricStatus {
// implementing the scale subresource based on the metrics specified.
message HorizontalPodAutoscaler {
// metadata is the standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// spec is the specification for the behaviour of the autoscaler.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
optional HorizontalPodAutoscalerSpec spec = 2;
diff --git a/vendor/k8s.io/api/autoscaling/v2beta1/types.go b/vendor/k8s.io/api/autoscaling/v2beta1/types.go
index f904a9ce3822a..53a53a3a9c7c2 100644
--- a/vendor/k8s.io/api/autoscaling/v2beta1/types.go
+++ b/vendor/k8s.io/api/autoscaling/v2beta1/types.go
@@ -24,7 +24,7 @@ import (
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
type CrossVersionObjectReference struct {
- // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds"
+ // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"
Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
@@ -62,7 +62,7 @@ type HorizontalPodAutoscalerSpec struct {
// MetricSourceType indicates the type of metric.
type MetricSourceType string
-var (
+const (
// ObjectMetricSourceType is a metric describing a kubernetes object
// (for example, hits-per-second on an Ingress object).
ObjectMetricSourceType MetricSourceType = "Object"
@@ -231,7 +231,7 @@ type HorizontalPodAutoscalerStatus struct {
// a HorizontalPodAutoscaler.
type HorizontalPodAutoscalerConditionType string
-var (
+const (
// ScalingActive indicates that the HPA controller is able to scale if necessary:
// it's correctly configured, can fetch the desired metrics, and isn't disabled.
ScalingActive HorizontalPodAutoscalerConditionType = "ScalingActive"
@@ -380,12 +380,12 @@ type ExternalMetricStatus struct {
type HorizontalPodAutoscaler struct {
metav1.TypeMeta `json:",inline"`
// metadata is the standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// spec is the specification for the behaviour of the autoscaler.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
Spec HorizontalPodAutoscalerSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
diff --git a/vendor/k8s.io/api/autoscaling/v2beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/autoscaling/v2beta1/types_swagger_doc_generated.go
index ce57089d9293c..a0d5f533758ec 100644
--- a/vendor/k8s.io/api/autoscaling/v2beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/autoscaling/v2beta1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v2beta1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_CrossVersionObjectReference = map[string]string{
"": "CrossVersionObjectReference contains enough information to let you identify the referred resource.",
- "kind": "Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds\"",
+ "kind": "Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\"",
"name": "Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names",
"apiVersion": "API version of the referent",
}
@@ -64,8 +64,8 @@ func (ExternalMetricStatus) SwaggerDoc() map[string]string {
var map_HorizontalPodAutoscaler = map[string]string{
"": "HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.",
- "metadata": "metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "spec is the specification for the behaviour of the autoscaler. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.",
+ "metadata": "metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "spec is the specification for the behaviour of the autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.",
"status": "status is the current information about the autoscaler.",
}
diff --git a/vendor/k8s.io/api/autoscaling/v2beta2/generated.proto b/vendor/k8s.io/api/autoscaling/v2beta2/generated.proto
index 6ddf9bf29fe7e..80f1d345d4411 100644
--- a/vendor/k8s.io/api/autoscaling/v2beta2/generated.proto
+++ b/vendor/k8s.io/api/autoscaling/v2beta2/generated.proto
@@ -32,7 +32,7 @@ option go_package = "v2beta2";
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
message CrossVersionObjectReference {
- // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds"
+ // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"
optional string kind = 1;
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
@@ -69,12 +69,12 @@ message ExternalMetricStatus {
// implementing the scale subresource based on the metrics specified.
message HorizontalPodAutoscaler {
// metadata is the standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// spec is the specification for the behaviour of the autoscaler.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
optional HorizontalPodAutoscalerSpec spec = 2;
diff --git a/vendor/k8s.io/api/autoscaling/v2beta2/types.go b/vendor/k8s.io/api/autoscaling/v2beta2/types.go
index d5a8669d650c8..4480c7da8d239 100644
--- a/vendor/k8s.io/api/autoscaling/v2beta2/types.go
+++ b/vendor/k8s.io/api/autoscaling/v2beta2/types.go
@@ -33,12 +33,12 @@ import (
type HorizontalPodAutoscaler struct {
metav1.TypeMeta `json:",inline"`
// metadata is the standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// spec is the specification for the behaviour of the autoscaler.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.
// +optional
Spec HorizontalPodAutoscalerSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
@@ -76,7 +76,7 @@ type HorizontalPodAutoscalerSpec struct {
// CrossVersionObjectReference contains enough information to let you identify the referred resource.
type CrossVersionObjectReference struct {
- // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds"
+ // Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds"
Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`
// Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names
Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
@@ -120,7 +120,7 @@ type MetricSpec struct {
// MetricSourceType indicates the type of metric.
type MetricSourceType string
-var (
+const (
// ObjectMetricSourceType is a metric describing a kubernetes object
// (for example, hits-per-second on an Ingress object).
ObjectMetricSourceType MetricSourceType = "Object"
@@ -221,7 +221,7 @@ type MetricTarget struct {
// "Value", "AverageValue", or "Utilization"
type MetricTargetType string
-var (
+const (
// UtilizationMetricType declares a MetricTarget is an AverageUtilization value
UtilizationMetricType MetricTargetType = "Utilization"
// ValueMetricType declares a MetricTarget is a raw value
@@ -262,7 +262,7 @@ type HorizontalPodAutoscalerStatus struct {
// a HorizontalPodAutoscaler.
type HorizontalPodAutoscalerConditionType string
-var (
+const (
// ScalingActive indicates that the HPA controller is able to scale if necessary:
// it's correctly configured, can fetch the desired metrics, and isn't disabled.
ScalingActive HorizontalPodAutoscalerConditionType = "ScalingActive"
diff --git a/vendor/k8s.io/api/autoscaling/v2beta2/types_swagger_doc_generated.go b/vendor/k8s.io/api/autoscaling/v2beta2/types_swagger_doc_generated.go
index 8a628d8844707..bb85b9f0f45af 100644
--- a/vendor/k8s.io/api/autoscaling/v2beta2/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/autoscaling/v2beta2/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v2beta2
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_CrossVersionObjectReference = map[string]string{
"": "CrossVersionObjectReference contains enough information to let you identify the referred resource.",
- "kind": "Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds\"",
+ "kind": "Kind of the referent; More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\"",
"name": "Name of the referent; More info: http://kubernetes.io/docs/user-guide/identifiers#names",
"apiVersion": "API version of the referent",
}
@@ -60,8 +60,8 @@ func (ExternalMetricStatus) SwaggerDoc() map[string]string {
var map_HorizontalPodAutoscaler = map[string]string{
"": "HorizontalPodAutoscaler is the configuration for a horizontal pod autoscaler, which automatically manages the replica count of any resource implementing the scale subresource based on the metrics specified.",
- "metadata": "metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "spec is the specification for the behaviour of the autoscaler. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status.",
+ "metadata": "metadata is the standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "spec is the specification for the behaviour of the autoscaler. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status.",
"status": "status is the current information about the autoscaler.",
}
diff --git a/vendor/k8s.io/api/batch/v1/generated.proto b/vendor/k8s.io/api/batch/v1/generated.proto
index 039149daba2b6..75de45f1710d6 100644
--- a/vendor/k8s.io/api/batch/v1/generated.proto
+++ b/vendor/k8s.io/api/batch/v1/generated.proto
@@ -32,17 +32,17 @@ option go_package = "v1";
// Job represents the configuration of a single job.
message Job {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of a job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional JobSpec spec = 2;
// Current status of a job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional JobStatus status = 3;
}
@@ -75,7 +75,7 @@ message JobCondition {
// JobList is a collection of jobs.
message JobList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/batch/v1/types.go b/vendor/k8s.io/api/batch/v1/types.go
index 8dad9043d5cb4..646a3cd7efa1c 100644
--- a/vendor/k8s.io/api/batch/v1/types.go
+++ b/vendor/k8s.io/api/batch/v1/types.go
@@ -28,17 +28,17 @@ import (
type Job struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of a job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec JobSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Current status of a job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status JobStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -49,7 +49,7 @@ type Job struct {
type JobList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go
index d8e2bdd780fc9..0120e07d45e28 100644
--- a/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/batch/v1/types_swagger_doc_generated.go
@@ -29,9 +29,9 @@ package v1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_Job = map[string]string{
"": "Job represents the configuration of a single job.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of a job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Current status of a job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of a job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Current status of a job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Job) SwaggerDoc() map[string]string {
@@ -54,7 +54,7 @@ func (JobCondition) SwaggerDoc() map[string]string {
var map_JobList = map[string]string{
"": "JobList is a collection of jobs.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of Jobs.",
}
diff --git a/vendor/k8s.io/api/batch/v1beta1/generated.proto b/vendor/k8s.io/api/batch/v1beta1/generated.proto
index 043b3551b08b9..995b4f3f9d93c 100644
--- a/vendor/k8s.io/api/batch/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/batch/v1beta1/generated.proto
@@ -33,17 +33,17 @@ option go_package = "v1beta1";
// CronJob represents the configuration of a single cron job.
message CronJob {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of a cron job, including the schedule.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional CronJobSpec spec = 2;
// Current status of a cron job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional CronJobStatus status = 3;
}
@@ -51,7 +51,7 @@ message CronJob {
// CronJobList is a collection of cron jobs.
message CronJobList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -112,12 +112,12 @@ message CronJobStatus {
// JobTemplate describes a template for creating copies of a predefined pod.
message JobTemplate {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Defines jobs that will be created from this template.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional JobTemplateSpec template = 2;
}
@@ -125,12 +125,12 @@ message JobTemplate {
// JobTemplateSpec describes the data a Job should have when created from a template
message JobTemplateSpec {
// Standard object's metadata of the jobs created from this template.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of the job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional k8s.io.api.batch.v1.JobSpec spec = 2;
}
diff --git a/vendor/k8s.io/api/batch/v1beta1/types.go b/vendor/k8s.io/api/batch/v1beta1/types.go
index cb5c9bad29464..2978747a488ec 100644
--- a/vendor/k8s.io/api/batch/v1beta1/types.go
+++ b/vendor/k8s.io/api/batch/v1beta1/types.go
@@ -28,12 +28,12 @@ import (
type JobTemplate struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Defines jobs that will be created from this template.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Template JobTemplateSpec `json:"template,omitempty" protobuf:"bytes,2,opt,name=template"`
}
@@ -41,12 +41,12 @@ type JobTemplate struct {
// JobTemplateSpec describes the data a Job should have when created from a template
type JobTemplateSpec struct {
// Standard object's metadata of the jobs created from this template.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of the job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec batchv1.JobSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}
@@ -58,17 +58,17 @@ type JobTemplateSpec struct {
type CronJob struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of a cron job, including the schedule.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec CronJobSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Current status of a cron job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status CronJobStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -80,7 +80,7 @@ type CronJobList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/batch/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/batch/v1beta1/types_swagger_doc_generated.go
index abbdfec010136..ecc914446067a 100644
--- a/vendor/k8s.io/api/batch/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/batch/v1beta1/types_swagger_doc_generated.go
@@ -29,9 +29,9 @@ package v1beta1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_CronJob = map[string]string{
"": "CronJob represents the configuration of a single cron job.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (CronJob) SwaggerDoc() map[string]string {
@@ -40,7 +40,7 @@ func (CronJob) SwaggerDoc() map[string]string {
var map_CronJobList = map[string]string{
"": "CronJobList is a collection of cron jobs.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of CronJobs.",
}
@@ -75,8 +75,8 @@ func (CronJobStatus) SwaggerDoc() map[string]string {
var map_JobTemplate = map[string]string{
"": "JobTemplate describes a template for creating copies of a predefined pod.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "template": "Defines jobs that will be created from this template. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "template": "Defines jobs that will be created from this template. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (JobTemplate) SwaggerDoc() map[string]string {
@@ -85,8 +85,8 @@ func (JobTemplate) SwaggerDoc() map[string]string {
var map_JobTemplateSpec = map[string]string{
"": "JobTemplateSpec describes the data a Job should have when created from a template",
- "metadata": "Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of the job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of the job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (JobTemplateSpec) SwaggerDoc() map[string]string {
diff --git a/vendor/k8s.io/api/batch/v2alpha1/generated.proto b/vendor/k8s.io/api/batch/v2alpha1/generated.proto
index 4321c3361e1a6..0bba13b86a8df 100644
--- a/vendor/k8s.io/api/batch/v2alpha1/generated.proto
+++ b/vendor/k8s.io/api/batch/v2alpha1/generated.proto
@@ -33,17 +33,17 @@ option go_package = "v2alpha1";
// CronJob represents the configuration of a single cron job.
message CronJob {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of a cron job, including the schedule.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional CronJobSpec spec = 2;
// Current status of a cron job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional CronJobStatus status = 3;
}
@@ -51,7 +51,7 @@ message CronJob {
// CronJobList is a collection of cron jobs.
message CronJobList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -110,12 +110,12 @@ message CronJobStatus {
// JobTemplate describes a template for creating copies of a predefined pod.
message JobTemplate {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Defines jobs that will be created from this template.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional JobTemplateSpec template = 2;
}
@@ -123,12 +123,12 @@ message JobTemplate {
// JobTemplateSpec describes the data a Job should have when created from a template
message JobTemplateSpec {
// Standard object's metadata of the jobs created from this template.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of the job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional k8s.io.api.batch.v1.JobSpec spec = 2;
}
diff --git a/vendor/k8s.io/api/batch/v2alpha1/types.go b/vendor/k8s.io/api/batch/v2alpha1/types.go
index cccff94ff2ace..465e614aec396 100644
--- a/vendor/k8s.io/api/batch/v2alpha1/types.go
+++ b/vendor/k8s.io/api/batch/v2alpha1/types.go
@@ -28,12 +28,12 @@ import (
type JobTemplate struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Defines jobs that will be created from this template.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Template JobTemplateSpec `json:"template,omitempty" protobuf:"bytes,2,opt,name=template"`
}
@@ -41,12 +41,12 @@ type JobTemplate struct {
// JobTemplateSpec describes the data a Job should have when created from a template
type JobTemplateSpec struct {
// Standard object's metadata of the jobs created from this template.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of the job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec batchv1.JobSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}
@@ -58,17 +58,17 @@ type JobTemplateSpec struct {
type CronJob struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of a cron job, including the schedule.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec CronJobSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Current status of a cron job.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status CronJobStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -80,7 +80,7 @@ type CronJobList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/batch/v2alpha1/types_swagger_doc_generated.go b/vendor/k8s.io/api/batch/v2alpha1/types_swagger_doc_generated.go
index f448a92cfe049..bc80eca48f0de 100644
--- a/vendor/k8s.io/api/batch/v2alpha1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/batch/v2alpha1/types_swagger_doc_generated.go
@@ -29,9 +29,9 @@ package v2alpha1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_CronJob = map[string]string{
"": "CronJob represents the configuration of a single cron job.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of a cron job, including the schedule. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Current status of a cron job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (CronJob) SwaggerDoc() map[string]string {
@@ -40,7 +40,7 @@ func (CronJob) SwaggerDoc() map[string]string {
var map_CronJobList = map[string]string{
"": "CronJobList is a collection of cron jobs.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of CronJobs.",
}
@@ -75,8 +75,8 @@ func (CronJobStatus) SwaggerDoc() map[string]string {
var map_JobTemplate = map[string]string{
"": "JobTemplate describes a template for creating copies of a predefined pod.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "template": "Defines jobs that will be created from this template. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "template": "Defines jobs that will be created from this template. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (JobTemplate) SwaggerDoc() map[string]string {
@@ -85,8 +85,8 @@ func (JobTemplate) SwaggerDoc() map[string]string {
var map_JobTemplateSpec = map[string]string{
"": "JobTemplateSpec describes the data a Job should have when created from a template",
- "metadata": "Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of the job. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of the job. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (JobTemplateSpec) SwaggerDoc() map[string]string {
diff --git a/vendor/k8s.io/api/certificates/v1beta1/types.go b/vendor/k8s.io/api/certificates/v1beta1/types.go
index bb9e82d30820e..93f81cd529b95 100644
--- a/vendor/k8s.io/api/certificates/v1beta1/types.go
+++ b/vendor/k8s.io/api/certificates/v1beta1/types.go
@@ -129,27 +129,27 @@ type CertificateSigningRequestList struct {
type KeyUsage string
const (
- UsageSigning KeyUsage = "signing"
- UsageDigitalSignature KeyUsage = "digital signature"
- UsageContentCommittment KeyUsage = "content commitment"
- UsageKeyEncipherment KeyUsage = "key encipherment"
- UsageKeyAgreement KeyUsage = "key agreement"
- UsageDataEncipherment KeyUsage = "data encipherment"
- UsageCertSign KeyUsage = "cert sign"
- UsageCRLSign KeyUsage = "crl sign"
- UsageEncipherOnly KeyUsage = "encipher only"
- UsageDecipherOnly KeyUsage = "decipher only"
- UsageAny KeyUsage = "any"
- UsageServerAuth KeyUsage = "server auth"
- UsageClientAuth KeyUsage = "client auth"
- UsageCodeSigning KeyUsage = "code signing"
- UsageEmailProtection KeyUsage = "email protection"
- UsageSMIME KeyUsage = "s/mime"
- UsageIPsecEndSystem KeyUsage = "ipsec end system"
- UsageIPsecTunnel KeyUsage = "ipsec tunnel"
- UsageIPsecUser KeyUsage = "ipsec user"
- UsageTimestamping KeyUsage = "timestamping"
- UsageOCSPSigning KeyUsage = "ocsp signing"
- UsageMicrosoftSGC KeyUsage = "microsoft sgc"
- UsageNetscapSGC KeyUsage = "netscape sgc"
+ UsageSigning KeyUsage = "signing"
+ UsageDigitalSignature KeyUsage = "digital signature"
+ UsageContentCommitment KeyUsage = "content commitment"
+ UsageKeyEncipherment KeyUsage = "key encipherment"
+ UsageKeyAgreement KeyUsage = "key agreement"
+ UsageDataEncipherment KeyUsage = "data encipherment"
+ UsageCertSign KeyUsage = "cert sign"
+ UsageCRLSign KeyUsage = "crl sign"
+ UsageEncipherOnly KeyUsage = "encipher only"
+ UsageDecipherOnly KeyUsage = "decipher only"
+ UsageAny KeyUsage = "any"
+ UsageServerAuth KeyUsage = "server auth"
+ UsageClientAuth KeyUsage = "client auth"
+ UsageCodeSigning KeyUsage = "code signing"
+ UsageEmailProtection KeyUsage = "email protection"
+ UsageSMIME KeyUsage = "s/mime"
+ UsageIPsecEndSystem KeyUsage = "ipsec end system"
+ UsageIPsecTunnel KeyUsage = "ipsec tunnel"
+ UsageIPsecUser KeyUsage = "ipsec user"
+ UsageTimestamping KeyUsage = "timestamping"
+ UsageOCSPSigning KeyUsage = "ocsp signing"
+ UsageMicrosoftSGC KeyUsage = "microsoft sgc"
+ UsageNetscapeSGC KeyUsage = "netscape sgc"
)
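
The `KeyUsage` block above is the common Go pattern of an enum-like set of typed string constants; declaring them in a `const` block (rather than `var`) makes the values immutable and lets them appear in constant expressions and `switch` cases. A minimal self-contained sketch of the pattern, independent of the vendored code (the `KeyUsage` name and `describe` helper here are reused purely for illustration):

```go
package main

import "fmt"

// KeyUsage is a defined string type, so arbitrary strings cannot be
// passed where a KeyUsage is expected without an explicit conversion.
type KeyUsage string

// Declared as const, not var: the values cannot be reassigned at
// runtime, which is why enum-like sets are conventionally const.
const (
	UsageSigning    KeyUsage = "signing"
	UsageServerAuth KeyUsage = "server auth"
	UsageClientAuth KeyUsage = "client auth"
)

// describe is a hypothetical consumer showing how the typed
// constants are matched in a switch.
func describe(u KeyUsage) string {
	switch u {
	case UsageSigning:
		return "certificate may be used for signing"
	case UsageServerAuth, UsageClientAuth:
		return "certificate may be used for TLS authentication"
	default:
		return fmt.Sprintf("unknown usage %q", u)
	}
}

func main() {
	fmt.Println(describe(UsageServerAuth))
}
```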
diff --git a/vendor/k8s.io/api/coordination/v1/generated.proto b/vendor/k8s.io/api/coordination/v1/generated.proto
index 99692e958d6fc..4206746d8323c 100644
--- a/vendor/k8s.io/api/coordination/v1/generated.proto
+++ b/vendor/k8s.io/api/coordination/v1/generated.proto
@@ -30,12 +30,12 @@ option go_package = "v1";
// Lease defines a lease concept.
message Lease {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the Lease.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional LeaseSpec spec = 2;
}
@@ -43,7 +43,7 @@ message Lease {
// LeaseList is a list of Lease objects.
message LeaseList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/coordination/v1/types.go b/vendor/k8s.io/api/coordination/v1/types.go
index 8f9f24d04ebfb..7a5605ace171d 100644
--- a/vendor/k8s.io/api/coordination/v1/types.go
+++ b/vendor/k8s.io/api/coordination/v1/types.go
@@ -26,12 +26,12 @@ import (
// Lease defines a lease concept.
type Lease struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the Lease.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec LeaseSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}
@@ -65,7 +65,7 @@ type LeaseSpec struct {
type LeaseList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/coordination/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/coordination/v1/types_swagger_doc_generated.go
index bd02ad5daa64e..0f14404308c72 100644
--- a/vendor/k8s.io/api/coordination/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/coordination/v1/types_swagger_doc_generated.go
@@ -29,8 +29,8 @@ package v1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_Lease = map[string]string{
"": "Lease defines a lease concept.",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the Lease. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the Lease. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Lease) SwaggerDoc() map[string]string {
@@ -39,7 +39,7 @@ func (Lease) SwaggerDoc() map[string]string {
var map_LeaseList = map[string]string{
"": "LeaseList is a list of Lease objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is a list of schema objects.",
}
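
The `map_Lease` / `map_LeaseList` variables above follow the generated swagger-doc convention: each API type exposes a `SwaggerDoc()` method returning a map from JSON field name to its documentation string, with the empty key `""` holding the type-level description. A minimal sketch of how such a map can be consumed; the type and method here stand in for generated code and are illustrative only:

```go
package main

import "fmt"

// map_Lease mirrors the generated pattern: the "" key documents the
// type itself; every other key documents one JSON field.
var map_Lease = map[string]string{
	"":         "Lease defines a lease concept.",
	"metadata": "Standard object metadata.",
	"spec":     "Specification of the Lease.",
}

// Lease stands in for the generated type; in the vendored packages
// this method is emitted by the doc generator, not written by hand.
type Lease struct{}

func (Lease) SwaggerDoc() map[string]string { return map_Lease }

func main() {
	docs := Lease{}.SwaggerDoc()
	fmt.Println("type:", docs[""])
	for field, doc := range docs {
		if field != "" {
			fmt.Printf("field %q: %s\n", field, doc)
		}
	}
}
```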
diff --git a/vendor/k8s.io/api/coordination/v1beta1/generated.proto b/vendor/k8s.io/api/coordination/v1beta1/generated.proto
index 918e0de1c74c4..cfc2711c677b8 100644
--- a/vendor/k8s.io/api/coordination/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/coordination/v1beta1/generated.proto
@@ -30,12 +30,12 @@ option go_package = "v1beta1";
// Lease defines a lease concept.
message Lease {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the Lease.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional LeaseSpec spec = 2;
}
@@ -43,7 +43,7 @@ message Lease {
// LeaseList is a list of Lease objects.
message LeaseList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/coordination/v1beta1/types.go b/vendor/k8s.io/api/coordination/v1beta1/types.go
index 846f72802808e..da88f675c0acd 100644
--- a/vendor/k8s.io/api/coordination/v1beta1/types.go
+++ b/vendor/k8s.io/api/coordination/v1beta1/types.go
@@ -26,12 +26,12 @@ import (
// Lease defines a lease concept.
type Lease struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the Lease.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec LeaseSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}
@@ -65,7 +65,7 @@ type LeaseSpec struct {
type LeaseList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/coordination/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/coordination/v1beta1/types_swagger_doc_generated.go
index 4532d322ab6f0..f557d265d4c52 100644
--- a/vendor/k8s.io/api/coordination/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/coordination/v1beta1/types_swagger_doc_generated.go
@@ -29,8 +29,8 @@ package v1beta1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_Lease = map[string]string{
"": "Lease defines a lease concept.",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the Lease. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the Lease. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Lease) SwaggerDoc() map[string]string {
@@ -39,7 +39,7 @@ func (Lease) SwaggerDoc() map[string]string {
var map_LeaseList = map[string]string{
"": "LeaseList is a list of Lease objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is a list of schema objects.",
}
diff --git a/vendor/k8s.io/api/core/v1/generated.pb.go b/vendor/k8s.io/api/core/v1/generated.pb.go
index 7e745891dfea0..b7291d28606bd 100644
--- a/vendor/k8s.io/api/core/v1/generated.pb.go
+++ b/vendor/k8s.io/api/core/v1/generated.pb.go
@@ -2373,10 +2373,38 @@ func (m *Namespace) XXX_DiscardUnknown() {
var xxx_messageInfo_Namespace proto.InternalMessageInfo
+func (m *NamespaceCondition) Reset() { *m = NamespaceCondition{} }
+func (*NamespaceCondition) ProtoMessage() {}
+func (*NamespaceCondition) Descriptor() ([]byte, []int) {
+ return fileDescriptor_83c10c24ec417dc9, []int{83}
+}
+func (m *NamespaceCondition) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *NamespaceCondition) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+}
+func (m *NamespaceCondition) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NamespaceCondition.Merge(m, src)
+}
+func (m *NamespaceCondition) XXX_Size() int {
+ return m.Size()
+}
+func (m *NamespaceCondition) XXX_DiscardUnknown() {
+ xxx_messageInfo_NamespaceCondition.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NamespaceCondition proto.InternalMessageInfo
+
func (m *NamespaceList) Reset() { *m = NamespaceList{} }
func (*NamespaceList) ProtoMessage() {}
func (*NamespaceList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{83}
+ return fileDescriptor_83c10c24ec417dc9, []int{84}
}
func (m *NamespaceList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2404,7 +2432,7 @@ var xxx_messageInfo_NamespaceList proto.InternalMessageInfo
func (m *NamespaceSpec) Reset() { *m = NamespaceSpec{} }
func (*NamespaceSpec) ProtoMessage() {}
func (*NamespaceSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{84}
+ return fileDescriptor_83c10c24ec417dc9, []int{85}
}
func (m *NamespaceSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2432,7 +2460,7 @@ var xxx_messageInfo_NamespaceSpec proto.InternalMessageInfo
func (m *NamespaceStatus) Reset() { *m = NamespaceStatus{} }
func (*NamespaceStatus) ProtoMessage() {}
func (*NamespaceStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{85}
+ return fileDescriptor_83c10c24ec417dc9, []int{86}
}
func (m *NamespaceStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2460,7 +2488,7 @@ var xxx_messageInfo_NamespaceStatus proto.InternalMessageInfo
func (m *Node) Reset() { *m = Node{} }
func (*Node) ProtoMessage() {}
func (*Node) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{86}
+ return fileDescriptor_83c10c24ec417dc9, []int{87}
}
func (m *Node) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2488,7 +2516,7 @@ var xxx_messageInfo_Node proto.InternalMessageInfo
func (m *NodeAddress) Reset() { *m = NodeAddress{} }
func (*NodeAddress) ProtoMessage() {}
func (*NodeAddress) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{87}
+ return fileDescriptor_83c10c24ec417dc9, []int{88}
}
func (m *NodeAddress) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2516,7 +2544,7 @@ var xxx_messageInfo_NodeAddress proto.InternalMessageInfo
func (m *NodeAffinity) Reset() { *m = NodeAffinity{} }
func (*NodeAffinity) ProtoMessage() {}
func (*NodeAffinity) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{88}
+ return fileDescriptor_83c10c24ec417dc9, []int{89}
}
func (m *NodeAffinity) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2544,7 +2572,7 @@ var xxx_messageInfo_NodeAffinity proto.InternalMessageInfo
func (m *NodeCondition) Reset() { *m = NodeCondition{} }
func (*NodeCondition) ProtoMessage() {}
func (*NodeCondition) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{89}
+ return fileDescriptor_83c10c24ec417dc9, []int{90}
}
func (m *NodeCondition) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2572,7 +2600,7 @@ var xxx_messageInfo_NodeCondition proto.InternalMessageInfo
func (m *NodeConfigSource) Reset() { *m = NodeConfigSource{} }
func (*NodeConfigSource) ProtoMessage() {}
func (*NodeConfigSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{90}
+ return fileDescriptor_83c10c24ec417dc9, []int{91}
}
func (m *NodeConfigSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2600,7 +2628,7 @@ var xxx_messageInfo_NodeConfigSource proto.InternalMessageInfo
func (m *NodeConfigStatus) Reset() { *m = NodeConfigStatus{} }
func (*NodeConfigStatus) ProtoMessage() {}
func (*NodeConfigStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{91}
+ return fileDescriptor_83c10c24ec417dc9, []int{92}
}
func (m *NodeConfigStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2628,7 +2656,7 @@ var xxx_messageInfo_NodeConfigStatus proto.InternalMessageInfo
func (m *NodeDaemonEndpoints) Reset() { *m = NodeDaemonEndpoints{} }
func (*NodeDaemonEndpoints) ProtoMessage() {}
func (*NodeDaemonEndpoints) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{92}
+ return fileDescriptor_83c10c24ec417dc9, []int{93}
}
func (m *NodeDaemonEndpoints) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2656,7 +2684,7 @@ var xxx_messageInfo_NodeDaemonEndpoints proto.InternalMessageInfo
func (m *NodeList) Reset() { *m = NodeList{} }
func (*NodeList) ProtoMessage() {}
func (*NodeList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{93}
+ return fileDescriptor_83c10c24ec417dc9, []int{94}
}
func (m *NodeList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2684,7 +2712,7 @@ var xxx_messageInfo_NodeList proto.InternalMessageInfo
func (m *NodeProxyOptions) Reset() { *m = NodeProxyOptions{} }
func (*NodeProxyOptions) ProtoMessage() {}
func (*NodeProxyOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{94}
+ return fileDescriptor_83c10c24ec417dc9, []int{95}
}
func (m *NodeProxyOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2712,7 +2740,7 @@ var xxx_messageInfo_NodeProxyOptions proto.InternalMessageInfo
func (m *NodeResources) Reset() { *m = NodeResources{} }
func (*NodeResources) ProtoMessage() {}
func (*NodeResources) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{95}
+ return fileDescriptor_83c10c24ec417dc9, []int{96}
}
func (m *NodeResources) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2740,7 +2768,7 @@ var xxx_messageInfo_NodeResources proto.InternalMessageInfo
func (m *NodeSelector) Reset() { *m = NodeSelector{} }
func (*NodeSelector) ProtoMessage() {}
func (*NodeSelector) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{96}
+ return fileDescriptor_83c10c24ec417dc9, []int{97}
}
func (m *NodeSelector) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2768,7 +2796,7 @@ var xxx_messageInfo_NodeSelector proto.InternalMessageInfo
func (m *NodeSelectorRequirement) Reset() { *m = NodeSelectorRequirement{} }
func (*NodeSelectorRequirement) ProtoMessage() {}
func (*NodeSelectorRequirement) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{97}
+ return fileDescriptor_83c10c24ec417dc9, []int{98}
}
func (m *NodeSelectorRequirement) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2796,7 +2824,7 @@ var xxx_messageInfo_NodeSelectorRequirement proto.InternalMessageInfo
func (m *NodeSelectorTerm) Reset() { *m = NodeSelectorTerm{} }
func (*NodeSelectorTerm) ProtoMessage() {}
func (*NodeSelectorTerm) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{98}
+ return fileDescriptor_83c10c24ec417dc9, []int{99}
}
func (m *NodeSelectorTerm) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2824,7 +2852,7 @@ var xxx_messageInfo_NodeSelectorTerm proto.InternalMessageInfo
func (m *NodeSpec) Reset() { *m = NodeSpec{} }
func (*NodeSpec) ProtoMessage() {}
func (*NodeSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{99}
+ return fileDescriptor_83c10c24ec417dc9, []int{100}
}
func (m *NodeSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2852,7 +2880,7 @@ var xxx_messageInfo_NodeSpec proto.InternalMessageInfo
func (m *NodeStatus) Reset() { *m = NodeStatus{} }
func (*NodeStatus) ProtoMessage() {}
func (*NodeStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{100}
+ return fileDescriptor_83c10c24ec417dc9, []int{101}
}
func (m *NodeStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2880,7 +2908,7 @@ var xxx_messageInfo_NodeStatus proto.InternalMessageInfo
func (m *NodeSystemInfo) Reset() { *m = NodeSystemInfo{} }
func (*NodeSystemInfo) ProtoMessage() {}
func (*NodeSystemInfo) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{101}
+ return fileDescriptor_83c10c24ec417dc9, []int{102}
}
func (m *NodeSystemInfo) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2908,7 +2936,7 @@ var xxx_messageInfo_NodeSystemInfo proto.InternalMessageInfo
func (m *ObjectFieldSelector) Reset() { *m = ObjectFieldSelector{} }
func (*ObjectFieldSelector) ProtoMessage() {}
func (*ObjectFieldSelector) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{102}
+ return fileDescriptor_83c10c24ec417dc9, []int{103}
}
func (m *ObjectFieldSelector) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2936,7 +2964,7 @@ var xxx_messageInfo_ObjectFieldSelector proto.InternalMessageInfo
func (m *ObjectReference) Reset() { *m = ObjectReference{} }
func (*ObjectReference) ProtoMessage() {}
func (*ObjectReference) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{103}
+ return fileDescriptor_83c10c24ec417dc9, []int{104}
}
func (m *ObjectReference) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2964,7 +2992,7 @@ var xxx_messageInfo_ObjectReference proto.InternalMessageInfo
func (m *PersistentVolume) Reset() { *m = PersistentVolume{} }
func (*PersistentVolume) ProtoMessage() {}
func (*PersistentVolume) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{104}
+ return fileDescriptor_83c10c24ec417dc9, []int{105}
}
func (m *PersistentVolume) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -2992,7 +3020,7 @@ var xxx_messageInfo_PersistentVolume proto.InternalMessageInfo
func (m *PersistentVolumeClaim) Reset() { *m = PersistentVolumeClaim{} }
func (*PersistentVolumeClaim) ProtoMessage() {}
func (*PersistentVolumeClaim) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{105}
+ return fileDescriptor_83c10c24ec417dc9, []int{106}
}
func (m *PersistentVolumeClaim) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3020,7 +3048,7 @@ var xxx_messageInfo_PersistentVolumeClaim proto.InternalMessageInfo
func (m *PersistentVolumeClaimCondition) Reset() { *m = PersistentVolumeClaimCondition{} }
func (*PersistentVolumeClaimCondition) ProtoMessage() {}
func (*PersistentVolumeClaimCondition) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{106}
+ return fileDescriptor_83c10c24ec417dc9, []int{107}
}
func (m *PersistentVolumeClaimCondition) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3048,7 +3076,7 @@ var xxx_messageInfo_PersistentVolumeClaimCondition proto.InternalMessageInfo
func (m *PersistentVolumeClaimList) Reset() { *m = PersistentVolumeClaimList{} }
func (*PersistentVolumeClaimList) ProtoMessage() {}
func (*PersistentVolumeClaimList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{107}
+ return fileDescriptor_83c10c24ec417dc9, []int{108}
}
func (m *PersistentVolumeClaimList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3076,7 +3104,7 @@ var xxx_messageInfo_PersistentVolumeClaimList proto.InternalMessageInfo
func (m *PersistentVolumeClaimSpec) Reset() { *m = PersistentVolumeClaimSpec{} }
func (*PersistentVolumeClaimSpec) ProtoMessage() {}
func (*PersistentVolumeClaimSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{108}
+ return fileDescriptor_83c10c24ec417dc9, []int{109}
}
func (m *PersistentVolumeClaimSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3104,7 +3132,7 @@ var xxx_messageInfo_PersistentVolumeClaimSpec proto.InternalMessageInfo
func (m *PersistentVolumeClaimStatus) Reset() { *m = PersistentVolumeClaimStatus{} }
func (*PersistentVolumeClaimStatus) ProtoMessage() {}
func (*PersistentVolumeClaimStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{109}
+ return fileDescriptor_83c10c24ec417dc9, []int{110}
}
func (m *PersistentVolumeClaimStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3132,7 +3160,7 @@ var xxx_messageInfo_PersistentVolumeClaimStatus proto.InternalMessageInfo
func (m *PersistentVolumeClaimVolumeSource) Reset() { *m = PersistentVolumeClaimVolumeSource{} }
func (*PersistentVolumeClaimVolumeSource) ProtoMessage() {}
func (*PersistentVolumeClaimVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{110}
+ return fileDescriptor_83c10c24ec417dc9, []int{111}
}
func (m *PersistentVolumeClaimVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3160,7 +3188,7 @@ var xxx_messageInfo_PersistentVolumeClaimVolumeSource proto.InternalMessageInfo
func (m *PersistentVolumeList) Reset() { *m = PersistentVolumeList{} }
func (*PersistentVolumeList) ProtoMessage() {}
func (*PersistentVolumeList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{111}
+ return fileDescriptor_83c10c24ec417dc9, []int{112}
}
func (m *PersistentVolumeList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3188,7 +3216,7 @@ var xxx_messageInfo_PersistentVolumeList proto.InternalMessageInfo
func (m *PersistentVolumeSource) Reset() { *m = PersistentVolumeSource{} }
func (*PersistentVolumeSource) ProtoMessage() {}
func (*PersistentVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{112}
+ return fileDescriptor_83c10c24ec417dc9, []int{113}
}
func (m *PersistentVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3216,7 +3244,7 @@ var xxx_messageInfo_PersistentVolumeSource proto.InternalMessageInfo
func (m *PersistentVolumeSpec) Reset() { *m = PersistentVolumeSpec{} }
func (*PersistentVolumeSpec) ProtoMessage() {}
func (*PersistentVolumeSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{113}
+ return fileDescriptor_83c10c24ec417dc9, []int{114}
}
func (m *PersistentVolumeSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3244,7 +3272,7 @@ var xxx_messageInfo_PersistentVolumeSpec proto.InternalMessageInfo
func (m *PersistentVolumeStatus) Reset() { *m = PersistentVolumeStatus{} }
func (*PersistentVolumeStatus) ProtoMessage() {}
func (*PersistentVolumeStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{114}
+ return fileDescriptor_83c10c24ec417dc9, []int{115}
}
func (m *PersistentVolumeStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3272,7 +3300,7 @@ var xxx_messageInfo_PersistentVolumeStatus proto.InternalMessageInfo
func (m *PhotonPersistentDiskVolumeSource) Reset() { *m = PhotonPersistentDiskVolumeSource{} }
func (*PhotonPersistentDiskVolumeSource) ProtoMessage() {}
func (*PhotonPersistentDiskVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{115}
+ return fileDescriptor_83c10c24ec417dc9, []int{116}
}
func (m *PhotonPersistentDiskVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3300,7 +3328,7 @@ var xxx_messageInfo_PhotonPersistentDiskVolumeSource proto.InternalMessageInfo
func (m *Pod) Reset() { *m = Pod{} }
func (*Pod) ProtoMessage() {}
func (*Pod) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{116}
+ return fileDescriptor_83c10c24ec417dc9, []int{117}
}
func (m *Pod) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3328,7 +3356,7 @@ var xxx_messageInfo_Pod proto.InternalMessageInfo
func (m *PodAffinity) Reset() { *m = PodAffinity{} }
func (*PodAffinity) ProtoMessage() {}
func (*PodAffinity) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{117}
+ return fileDescriptor_83c10c24ec417dc9, []int{118}
}
func (m *PodAffinity) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3356,7 +3384,7 @@ var xxx_messageInfo_PodAffinity proto.InternalMessageInfo
func (m *PodAffinityTerm) Reset() { *m = PodAffinityTerm{} }
func (*PodAffinityTerm) ProtoMessage() {}
func (*PodAffinityTerm) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{118}
+ return fileDescriptor_83c10c24ec417dc9, []int{119}
}
func (m *PodAffinityTerm) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3384,7 +3412,7 @@ var xxx_messageInfo_PodAffinityTerm proto.InternalMessageInfo
func (m *PodAntiAffinity) Reset() { *m = PodAntiAffinity{} }
func (*PodAntiAffinity) ProtoMessage() {}
func (*PodAntiAffinity) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{119}
+ return fileDescriptor_83c10c24ec417dc9, []int{120}
}
func (m *PodAntiAffinity) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3412,7 +3440,7 @@ var xxx_messageInfo_PodAntiAffinity proto.InternalMessageInfo
func (m *PodAttachOptions) Reset() { *m = PodAttachOptions{} }
func (*PodAttachOptions) ProtoMessage() {}
func (*PodAttachOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{120}
+ return fileDescriptor_83c10c24ec417dc9, []int{121}
}
func (m *PodAttachOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3440,7 +3468,7 @@ var xxx_messageInfo_PodAttachOptions proto.InternalMessageInfo
func (m *PodCondition) Reset() { *m = PodCondition{} }
func (*PodCondition) ProtoMessage() {}
func (*PodCondition) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{121}
+ return fileDescriptor_83c10c24ec417dc9, []int{122}
}
func (m *PodCondition) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3468,7 +3496,7 @@ var xxx_messageInfo_PodCondition proto.InternalMessageInfo
func (m *PodDNSConfig) Reset() { *m = PodDNSConfig{} }
func (*PodDNSConfig) ProtoMessage() {}
func (*PodDNSConfig) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{122}
+ return fileDescriptor_83c10c24ec417dc9, []int{123}
}
func (m *PodDNSConfig) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3496,7 +3524,7 @@ var xxx_messageInfo_PodDNSConfig proto.InternalMessageInfo
func (m *PodDNSConfigOption) Reset() { *m = PodDNSConfigOption{} }
func (*PodDNSConfigOption) ProtoMessage() {}
func (*PodDNSConfigOption) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{123}
+ return fileDescriptor_83c10c24ec417dc9, []int{124}
}
func (m *PodDNSConfigOption) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3524,7 +3552,7 @@ var xxx_messageInfo_PodDNSConfigOption proto.InternalMessageInfo
func (m *PodExecOptions) Reset() { *m = PodExecOptions{} }
func (*PodExecOptions) ProtoMessage() {}
func (*PodExecOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{124}
+ return fileDescriptor_83c10c24ec417dc9, []int{125}
}
func (m *PodExecOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3552,7 +3580,7 @@ var xxx_messageInfo_PodExecOptions proto.InternalMessageInfo
func (m *PodIP) Reset() { *m = PodIP{} }
func (*PodIP) ProtoMessage() {}
func (*PodIP) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{125}
+ return fileDescriptor_83c10c24ec417dc9, []int{126}
}
func (m *PodIP) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3580,7 +3608,7 @@ var xxx_messageInfo_PodIP proto.InternalMessageInfo
func (m *PodList) Reset() { *m = PodList{} }
func (*PodList) ProtoMessage() {}
func (*PodList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{126}
+ return fileDescriptor_83c10c24ec417dc9, []int{127}
}
func (m *PodList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3608,7 +3636,7 @@ var xxx_messageInfo_PodList proto.InternalMessageInfo
func (m *PodLogOptions) Reset() { *m = PodLogOptions{} }
func (*PodLogOptions) ProtoMessage() {}
func (*PodLogOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{127}
+ return fileDescriptor_83c10c24ec417dc9, []int{128}
}
func (m *PodLogOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3636,7 +3664,7 @@ var xxx_messageInfo_PodLogOptions proto.InternalMessageInfo
func (m *PodPortForwardOptions) Reset() { *m = PodPortForwardOptions{} }
func (*PodPortForwardOptions) ProtoMessage() {}
func (*PodPortForwardOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{128}
+ return fileDescriptor_83c10c24ec417dc9, []int{129}
}
func (m *PodPortForwardOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3664,7 +3692,7 @@ var xxx_messageInfo_PodPortForwardOptions proto.InternalMessageInfo
func (m *PodProxyOptions) Reset() { *m = PodProxyOptions{} }
func (*PodProxyOptions) ProtoMessage() {}
func (*PodProxyOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{129}
+ return fileDescriptor_83c10c24ec417dc9, []int{130}
}
func (m *PodProxyOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3692,7 +3720,7 @@ var xxx_messageInfo_PodProxyOptions proto.InternalMessageInfo
func (m *PodReadinessGate) Reset() { *m = PodReadinessGate{} }
func (*PodReadinessGate) ProtoMessage() {}
func (*PodReadinessGate) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{130}
+ return fileDescriptor_83c10c24ec417dc9, []int{131}
}
func (m *PodReadinessGate) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3720,7 +3748,7 @@ var xxx_messageInfo_PodReadinessGate proto.InternalMessageInfo
func (m *PodSecurityContext) Reset() { *m = PodSecurityContext{} }
func (*PodSecurityContext) ProtoMessage() {}
func (*PodSecurityContext) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{131}
+ return fileDescriptor_83c10c24ec417dc9, []int{132}
}
func (m *PodSecurityContext) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3748,7 +3776,7 @@ var xxx_messageInfo_PodSecurityContext proto.InternalMessageInfo
func (m *PodSignature) Reset() { *m = PodSignature{} }
func (*PodSignature) ProtoMessage() {}
func (*PodSignature) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{132}
+ return fileDescriptor_83c10c24ec417dc9, []int{133}
}
func (m *PodSignature) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3776,7 +3804,7 @@ var xxx_messageInfo_PodSignature proto.InternalMessageInfo
func (m *PodSpec) Reset() { *m = PodSpec{} }
func (*PodSpec) ProtoMessage() {}
func (*PodSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{133}
+ return fileDescriptor_83c10c24ec417dc9, []int{134}
}
func (m *PodSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3804,7 +3832,7 @@ var xxx_messageInfo_PodSpec proto.InternalMessageInfo
func (m *PodStatus) Reset() { *m = PodStatus{} }
func (*PodStatus) ProtoMessage() {}
func (*PodStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{134}
+ return fileDescriptor_83c10c24ec417dc9, []int{135}
}
func (m *PodStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3832,7 +3860,7 @@ var xxx_messageInfo_PodStatus proto.InternalMessageInfo
func (m *PodStatusResult) Reset() { *m = PodStatusResult{} }
func (*PodStatusResult) ProtoMessage() {}
func (*PodStatusResult) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{135}
+ return fileDescriptor_83c10c24ec417dc9, []int{136}
}
func (m *PodStatusResult) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3860,7 +3888,7 @@ var xxx_messageInfo_PodStatusResult proto.InternalMessageInfo
func (m *PodTemplate) Reset() { *m = PodTemplate{} }
func (*PodTemplate) ProtoMessage() {}
func (*PodTemplate) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{136}
+ return fileDescriptor_83c10c24ec417dc9, []int{137}
}
func (m *PodTemplate) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3888,7 +3916,7 @@ var xxx_messageInfo_PodTemplate proto.InternalMessageInfo
func (m *PodTemplateList) Reset() { *m = PodTemplateList{} }
func (*PodTemplateList) ProtoMessage() {}
func (*PodTemplateList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{137}
+ return fileDescriptor_83c10c24ec417dc9, []int{138}
}
func (m *PodTemplateList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3916,7 +3944,7 @@ var xxx_messageInfo_PodTemplateList proto.InternalMessageInfo
func (m *PodTemplateSpec) Reset() { *m = PodTemplateSpec{} }
func (*PodTemplateSpec) ProtoMessage() {}
func (*PodTemplateSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{138}
+ return fileDescriptor_83c10c24ec417dc9, []int{139}
}
func (m *PodTemplateSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3944,7 +3972,7 @@ var xxx_messageInfo_PodTemplateSpec proto.InternalMessageInfo
func (m *PortworxVolumeSource) Reset() { *m = PortworxVolumeSource{} }
func (*PortworxVolumeSource) ProtoMessage() {}
func (*PortworxVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{139}
+ return fileDescriptor_83c10c24ec417dc9, []int{140}
}
func (m *PortworxVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -3972,7 +4000,7 @@ var xxx_messageInfo_PortworxVolumeSource proto.InternalMessageInfo
func (m *Preconditions) Reset() { *m = Preconditions{} }
func (*Preconditions) ProtoMessage() {}
func (*Preconditions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{140}
+ return fileDescriptor_83c10c24ec417dc9, []int{141}
}
func (m *Preconditions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4000,7 +4028,7 @@ var xxx_messageInfo_Preconditions proto.InternalMessageInfo
func (m *PreferAvoidPodsEntry) Reset() { *m = PreferAvoidPodsEntry{} }
func (*PreferAvoidPodsEntry) ProtoMessage() {}
func (*PreferAvoidPodsEntry) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{141}
+ return fileDescriptor_83c10c24ec417dc9, []int{142}
}
func (m *PreferAvoidPodsEntry) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4028,7 +4056,7 @@ var xxx_messageInfo_PreferAvoidPodsEntry proto.InternalMessageInfo
func (m *PreferredSchedulingTerm) Reset() { *m = PreferredSchedulingTerm{} }
func (*PreferredSchedulingTerm) ProtoMessage() {}
func (*PreferredSchedulingTerm) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{142}
+ return fileDescriptor_83c10c24ec417dc9, []int{143}
}
func (m *PreferredSchedulingTerm) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4056,7 +4084,7 @@ var xxx_messageInfo_PreferredSchedulingTerm proto.InternalMessageInfo
func (m *Probe) Reset() { *m = Probe{} }
func (*Probe) ProtoMessage() {}
func (*Probe) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{143}
+ return fileDescriptor_83c10c24ec417dc9, []int{144}
}
func (m *Probe) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4084,7 +4112,7 @@ var xxx_messageInfo_Probe proto.InternalMessageInfo
func (m *ProjectedVolumeSource) Reset() { *m = ProjectedVolumeSource{} }
func (*ProjectedVolumeSource) ProtoMessage() {}
func (*ProjectedVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{144}
+ return fileDescriptor_83c10c24ec417dc9, []int{145}
}
func (m *ProjectedVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4112,7 +4140,7 @@ var xxx_messageInfo_ProjectedVolumeSource proto.InternalMessageInfo
func (m *QuobyteVolumeSource) Reset() { *m = QuobyteVolumeSource{} }
func (*QuobyteVolumeSource) ProtoMessage() {}
func (*QuobyteVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{145}
+ return fileDescriptor_83c10c24ec417dc9, []int{146}
}
func (m *QuobyteVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4140,7 +4168,7 @@ var xxx_messageInfo_QuobyteVolumeSource proto.InternalMessageInfo
func (m *RBDPersistentVolumeSource) Reset() { *m = RBDPersistentVolumeSource{} }
func (*RBDPersistentVolumeSource) ProtoMessage() {}
func (*RBDPersistentVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{146}
+ return fileDescriptor_83c10c24ec417dc9, []int{147}
}
func (m *RBDPersistentVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4168,7 +4196,7 @@ var xxx_messageInfo_RBDPersistentVolumeSource proto.InternalMessageInfo
func (m *RBDVolumeSource) Reset() { *m = RBDVolumeSource{} }
func (*RBDVolumeSource) ProtoMessage() {}
func (*RBDVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{147}
+ return fileDescriptor_83c10c24ec417dc9, []int{148}
}
func (m *RBDVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4196,7 +4224,7 @@ var xxx_messageInfo_RBDVolumeSource proto.InternalMessageInfo
func (m *RangeAllocation) Reset() { *m = RangeAllocation{} }
func (*RangeAllocation) ProtoMessage() {}
func (*RangeAllocation) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{148}
+ return fileDescriptor_83c10c24ec417dc9, []int{149}
}
func (m *RangeAllocation) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4224,7 +4252,7 @@ var xxx_messageInfo_RangeAllocation proto.InternalMessageInfo
func (m *ReplicationController) Reset() { *m = ReplicationController{} }
func (*ReplicationController) ProtoMessage() {}
func (*ReplicationController) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{149}
+ return fileDescriptor_83c10c24ec417dc9, []int{150}
}
func (m *ReplicationController) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4252,7 +4280,7 @@ var xxx_messageInfo_ReplicationController proto.InternalMessageInfo
func (m *ReplicationControllerCondition) Reset() { *m = ReplicationControllerCondition{} }
func (*ReplicationControllerCondition) ProtoMessage() {}
func (*ReplicationControllerCondition) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{150}
+ return fileDescriptor_83c10c24ec417dc9, []int{151}
}
func (m *ReplicationControllerCondition) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4280,7 +4308,7 @@ var xxx_messageInfo_ReplicationControllerCondition proto.InternalMessageInfo
func (m *ReplicationControllerList) Reset() { *m = ReplicationControllerList{} }
func (*ReplicationControllerList) ProtoMessage() {}
func (*ReplicationControllerList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{151}
+ return fileDescriptor_83c10c24ec417dc9, []int{152}
}
func (m *ReplicationControllerList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4308,7 +4336,7 @@ var xxx_messageInfo_ReplicationControllerList proto.InternalMessageInfo
func (m *ReplicationControllerSpec) Reset() { *m = ReplicationControllerSpec{} }
func (*ReplicationControllerSpec) ProtoMessage() {}
func (*ReplicationControllerSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{152}
+ return fileDescriptor_83c10c24ec417dc9, []int{153}
}
func (m *ReplicationControllerSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4336,7 +4364,7 @@ var xxx_messageInfo_ReplicationControllerSpec proto.InternalMessageInfo
func (m *ReplicationControllerStatus) Reset() { *m = ReplicationControllerStatus{} }
func (*ReplicationControllerStatus) ProtoMessage() {}
func (*ReplicationControllerStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{153}
+ return fileDescriptor_83c10c24ec417dc9, []int{154}
}
func (m *ReplicationControllerStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4364,7 +4392,7 @@ var xxx_messageInfo_ReplicationControllerStatus proto.InternalMessageInfo
func (m *ResourceFieldSelector) Reset() { *m = ResourceFieldSelector{} }
func (*ResourceFieldSelector) ProtoMessage() {}
func (*ResourceFieldSelector) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{154}
+ return fileDescriptor_83c10c24ec417dc9, []int{155}
}
func (m *ResourceFieldSelector) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4392,7 +4420,7 @@ var xxx_messageInfo_ResourceFieldSelector proto.InternalMessageInfo
func (m *ResourceQuota) Reset() { *m = ResourceQuota{} }
func (*ResourceQuota) ProtoMessage() {}
func (*ResourceQuota) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{155}
+ return fileDescriptor_83c10c24ec417dc9, []int{156}
}
func (m *ResourceQuota) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4420,7 +4448,7 @@ var xxx_messageInfo_ResourceQuota proto.InternalMessageInfo
func (m *ResourceQuotaList) Reset() { *m = ResourceQuotaList{} }
func (*ResourceQuotaList) ProtoMessage() {}
func (*ResourceQuotaList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{156}
+ return fileDescriptor_83c10c24ec417dc9, []int{157}
}
func (m *ResourceQuotaList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4448,7 +4476,7 @@ var xxx_messageInfo_ResourceQuotaList proto.InternalMessageInfo
func (m *ResourceQuotaSpec) Reset() { *m = ResourceQuotaSpec{} }
func (*ResourceQuotaSpec) ProtoMessage() {}
func (*ResourceQuotaSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{157}
+ return fileDescriptor_83c10c24ec417dc9, []int{158}
}
func (m *ResourceQuotaSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4476,7 +4504,7 @@ var xxx_messageInfo_ResourceQuotaSpec proto.InternalMessageInfo
func (m *ResourceQuotaStatus) Reset() { *m = ResourceQuotaStatus{} }
func (*ResourceQuotaStatus) ProtoMessage() {}
func (*ResourceQuotaStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{158}
+ return fileDescriptor_83c10c24ec417dc9, []int{159}
}
func (m *ResourceQuotaStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4504,7 +4532,7 @@ var xxx_messageInfo_ResourceQuotaStatus proto.InternalMessageInfo
func (m *ResourceRequirements) Reset() { *m = ResourceRequirements{} }
func (*ResourceRequirements) ProtoMessage() {}
func (*ResourceRequirements) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{159}
+ return fileDescriptor_83c10c24ec417dc9, []int{160}
}
func (m *ResourceRequirements) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4532,7 +4560,7 @@ var xxx_messageInfo_ResourceRequirements proto.InternalMessageInfo
func (m *SELinuxOptions) Reset() { *m = SELinuxOptions{} }
func (*SELinuxOptions) ProtoMessage() {}
func (*SELinuxOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{160}
+ return fileDescriptor_83c10c24ec417dc9, []int{161}
}
func (m *SELinuxOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4560,7 +4588,7 @@ var xxx_messageInfo_SELinuxOptions proto.InternalMessageInfo
func (m *ScaleIOPersistentVolumeSource) Reset() { *m = ScaleIOPersistentVolumeSource{} }
func (*ScaleIOPersistentVolumeSource) ProtoMessage() {}
func (*ScaleIOPersistentVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{161}
+ return fileDescriptor_83c10c24ec417dc9, []int{162}
}
func (m *ScaleIOPersistentVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4588,7 +4616,7 @@ var xxx_messageInfo_ScaleIOPersistentVolumeSource proto.InternalMessageInfo
func (m *ScaleIOVolumeSource) Reset() { *m = ScaleIOVolumeSource{} }
func (*ScaleIOVolumeSource) ProtoMessage() {}
func (*ScaleIOVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{162}
+ return fileDescriptor_83c10c24ec417dc9, []int{163}
}
func (m *ScaleIOVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4616,7 +4644,7 @@ var xxx_messageInfo_ScaleIOVolumeSource proto.InternalMessageInfo
func (m *ScopeSelector) Reset() { *m = ScopeSelector{} }
func (*ScopeSelector) ProtoMessage() {}
func (*ScopeSelector) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{163}
+ return fileDescriptor_83c10c24ec417dc9, []int{164}
}
func (m *ScopeSelector) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4644,7 +4672,7 @@ var xxx_messageInfo_ScopeSelector proto.InternalMessageInfo
func (m *ScopedResourceSelectorRequirement) Reset() { *m = ScopedResourceSelectorRequirement{} }
func (*ScopedResourceSelectorRequirement) ProtoMessage() {}
func (*ScopedResourceSelectorRequirement) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{164}
+ return fileDescriptor_83c10c24ec417dc9, []int{165}
}
func (m *ScopedResourceSelectorRequirement) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4672,7 +4700,7 @@ var xxx_messageInfo_ScopedResourceSelectorRequirement proto.InternalMessageInfo
func (m *Secret) Reset() { *m = Secret{} }
func (*Secret) ProtoMessage() {}
func (*Secret) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{165}
+ return fileDescriptor_83c10c24ec417dc9, []int{166}
}
func (m *Secret) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4700,7 +4728,7 @@ var xxx_messageInfo_Secret proto.InternalMessageInfo
func (m *SecretEnvSource) Reset() { *m = SecretEnvSource{} }
func (*SecretEnvSource) ProtoMessage() {}
func (*SecretEnvSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{166}
+ return fileDescriptor_83c10c24ec417dc9, []int{167}
}
func (m *SecretEnvSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4728,7 +4756,7 @@ var xxx_messageInfo_SecretEnvSource proto.InternalMessageInfo
func (m *SecretKeySelector) Reset() { *m = SecretKeySelector{} }
func (*SecretKeySelector) ProtoMessage() {}
func (*SecretKeySelector) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{167}
+ return fileDescriptor_83c10c24ec417dc9, []int{168}
}
func (m *SecretKeySelector) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4756,7 +4784,7 @@ var xxx_messageInfo_SecretKeySelector proto.InternalMessageInfo
func (m *SecretList) Reset() { *m = SecretList{} }
func (*SecretList) ProtoMessage() {}
func (*SecretList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{168}
+ return fileDescriptor_83c10c24ec417dc9, []int{169}
}
func (m *SecretList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4784,7 +4812,7 @@ var xxx_messageInfo_SecretList proto.InternalMessageInfo
func (m *SecretProjection) Reset() { *m = SecretProjection{} }
func (*SecretProjection) ProtoMessage() {}
func (*SecretProjection) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{169}
+ return fileDescriptor_83c10c24ec417dc9, []int{170}
}
func (m *SecretProjection) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4812,7 +4840,7 @@ var xxx_messageInfo_SecretProjection proto.InternalMessageInfo
func (m *SecretReference) Reset() { *m = SecretReference{} }
func (*SecretReference) ProtoMessage() {}
func (*SecretReference) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{170}
+ return fileDescriptor_83c10c24ec417dc9, []int{171}
}
func (m *SecretReference) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4840,7 +4868,7 @@ var xxx_messageInfo_SecretReference proto.InternalMessageInfo
func (m *SecretVolumeSource) Reset() { *m = SecretVolumeSource{} }
func (*SecretVolumeSource) ProtoMessage() {}
func (*SecretVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{171}
+ return fileDescriptor_83c10c24ec417dc9, []int{172}
}
func (m *SecretVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4868,7 +4896,7 @@ var xxx_messageInfo_SecretVolumeSource proto.InternalMessageInfo
func (m *SecurityContext) Reset() { *m = SecurityContext{} }
func (*SecurityContext) ProtoMessage() {}
func (*SecurityContext) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{172}
+ return fileDescriptor_83c10c24ec417dc9, []int{173}
}
func (m *SecurityContext) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4896,7 +4924,7 @@ var xxx_messageInfo_SecurityContext proto.InternalMessageInfo
func (m *SerializedReference) Reset() { *m = SerializedReference{} }
func (*SerializedReference) ProtoMessage() {}
func (*SerializedReference) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{173}
+ return fileDescriptor_83c10c24ec417dc9, []int{174}
}
func (m *SerializedReference) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4924,7 +4952,7 @@ var xxx_messageInfo_SerializedReference proto.InternalMessageInfo
func (m *Service) Reset() { *m = Service{} }
func (*Service) ProtoMessage() {}
func (*Service) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{174}
+ return fileDescriptor_83c10c24ec417dc9, []int{175}
}
func (m *Service) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4952,7 +4980,7 @@ var xxx_messageInfo_Service proto.InternalMessageInfo
func (m *ServiceAccount) Reset() { *m = ServiceAccount{} }
func (*ServiceAccount) ProtoMessage() {}
func (*ServiceAccount) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{175}
+ return fileDescriptor_83c10c24ec417dc9, []int{176}
}
func (m *ServiceAccount) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -4980,7 +5008,7 @@ var xxx_messageInfo_ServiceAccount proto.InternalMessageInfo
func (m *ServiceAccountList) Reset() { *m = ServiceAccountList{} }
func (*ServiceAccountList) ProtoMessage() {}
func (*ServiceAccountList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{176}
+ return fileDescriptor_83c10c24ec417dc9, []int{177}
}
func (m *ServiceAccountList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5008,7 +5036,7 @@ var xxx_messageInfo_ServiceAccountList proto.InternalMessageInfo
func (m *ServiceAccountTokenProjection) Reset() { *m = ServiceAccountTokenProjection{} }
func (*ServiceAccountTokenProjection) ProtoMessage() {}
func (*ServiceAccountTokenProjection) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{177}
+ return fileDescriptor_83c10c24ec417dc9, []int{178}
}
func (m *ServiceAccountTokenProjection) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5036,7 +5064,7 @@ var xxx_messageInfo_ServiceAccountTokenProjection proto.InternalMessageInfo
func (m *ServiceList) Reset() { *m = ServiceList{} }
func (*ServiceList) ProtoMessage() {}
func (*ServiceList) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{178}
+ return fileDescriptor_83c10c24ec417dc9, []int{179}
}
func (m *ServiceList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5064,7 +5092,7 @@ var xxx_messageInfo_ServiceList proto.InternalMessageInfo
func (m *ServicePort) Reset() { *m = ServicePort{} }
func (*ServicePort) ProtoMessage() {}
func (*ServicePort) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{179}
+ return fileDescriptor_83c10c24ec417dc9, []int{180}
}
func (m *ServicePort) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5092,7 +5120,7 @@ var xxx_messageInfo_ServicePort proto.InternalMessageInfo
func (m *ServiceProxyOptions) Reset() { *m = ServiceProxyOptions{} }
func (*ServiceProxyOptions) ProtoMessage() {}
func (*ServiceProxyOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{180}
+ return fileDescriptor_83c10c24ec417dc9, []int{181}
}
func (m *ServiceProxyOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5120,7 +5148,7 @@ var xxx_messageInfo_ServiceProxyOptions proto.InternalMessageInfo
func (m *ServiceSpec) Reset() { *m = ServiceSpec{} }
func (*ServiceSpec) ProtoMessage() {}
func (*ServiceSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{181}
+ return fileDescriptor_83c10c24ec417dc9, []int{182}
}
func (m *ServiceSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5148,7 +5176,7 @@ var xxx_messageInfo_ServiceSpec proto.InternalMessageInfo
func (m *ServiceStatus) Reset() { *m = ServiceStatus{} }
func (*ServiceStatus) ProtoMessage() {}
func (*ServiceStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{182}
+ return fileDescriptor_83c10c24ec417dc9, []int{183}
}
func (m *ServiceStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5176,7 +5204,7 @@ var xxx_messageInfo_ServiceStatus proto.InternalMessageInfo
func (m *SessionAffinityConfig) Reset() { *m = SessionAffinityConfig{} }
func (*SessionAffinityConfig) ProtoMessage() {}
func (*SessionAffinityConfig) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{183}
+ return fileDescriptor_83c10c24ec417dc9, []int{184}
}
func (m *SessionAffinityConfig) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5204,7 +5232,7 @@ var xxx_messageInfo_SessionAffinityConfig proto.InternalMessageInfo
func (m *StorageOSPersistentVolumeSource) Reset() { *m = StorageOSPersistentVolumeSource{} }
func (*StorageOSPersistentVolumeSource) ProtoMessage() {}
func (*StorageOSPersistentVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{184}
+ return fileDescriptor_83c10c24ec417dc9, []int{185}
}
func (m *StorageOSPersistentVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5232,7 +5260,7 @@ var xxx_messageInfo_StorageOSPersistentVolumeSource proto.InternalMessageInfo
func (m *StorageOSVolumeSource) Reset() { *m = StorageOSVolumeSource{} }
func (*StorageOSVolumeSource) ProtoMessage() {}
func (*StorageOSVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{185}
+ return fileDescriptor_83c10c24ec417dc9, []int{186}
}
func (m *StorageOSVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5260,7 +5288,7 @@ var xxx_messageInfo_StorageOSVolumeSource proto.InternalMessageInfo
func (m *Sysctl) Reset() { *m = Sysctl{} }
func (*Sysctl) ProtoMessage() {}
func (*Sysctl) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{186}
+ return fileDescriptor_83c10c24ec417dc9, []int{187}
}
func (m *Sysctl) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5288,7 +5316,7 @@ var xxx_messageInfo_Sysctl proto.InternalMessageInfo
func (m *TCPSocketAction) Reset() { *m = TCPSocketAction{} }
func (*TCPSocketAction) ProtoMessage() {}
func (*TCPSocketAction) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{187}
+ return fileDescriptor_83c10c24ec417dc9, []int{188}
}
func (m *TCPSocketAction) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5316,7 +5344,7 @@ var xxx_messageInfo_TCPSocketAction proto.InternalMessageInfo
func (m *Taint) Reset() { *m = Taint{} }
func (*Taint) ProtoMessage() {}
func (*Taint) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{188}
+ return fileDescriptor_83c10c24ec417dc9, []int{189}
}
func (m *Taint) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5344,7 +5372,7 @@ var xxx_messageInfo_Taint proto.InternalMessageInfo
func (m *Toleration) Reset() { *m = Toleration{} }
func (*Toleration) ProtoMessage() {}
func (*Toleration) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{189}
+ return fileDescriptor_83c10c24ec417dc9, []int{190}
}
func (m *Toleration) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5372,7 +5400,7 @@ var xxx_messageInfo_Toleration proto.InternalMessageInfo
func (m *TopologySelectorLabelRequirement) Reset() { *m = TopologySelectorLabelRequirement{} }
func (*TopologySelectorLabelRequirement) ProtoMessage() {}
func (*TopologySelectorLabelRequirement) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{190}
+ return fileDescriptor_83c10c24ec417dc9, []int{191}
}
func (m *TopologySelectorLabelRequirement) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5400,7 +5428,7 @@ var xxx_messageInfo_TopologySelectorLabelRequirement proto.InternalMessageInfo
func (m *TopologySelectorTerm) Reset() { *m = TopologySelectorTerm{} }
func (*TopologySelectorTerm) ProtoMessage() {}
func (*TopologySelectorTerm) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{191}
+ return fileDescriptor_83c10c24ec417dc9, []int{192}
}
func (m *TopologySelectorTerm) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5428,7 +5456,7 @@ var xxx_messageInfo_TopologySelectorTerm proto.InternalMessageInfo
func (m *TopologySpreadConstraint) Reset() { *m = TopologySpreadConstraint{} }
func (*TopologySpreadConstraint) ProtoMessage() {}
func (*TopologySpreadConstraint) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{192}
+ return fileDescriptor_83c10c24ec417dc9, []int{193}
}
func (m *TopologySpreadConstraint) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5456,7 +5484,7 @@ var xxx_messageInfo_TopologySpreadConstraint proto.InternalMessageInfo
func (m *TypedLocalObjectReference) Reset() { *m = TypedLocalObjectReference{} }
func (*TypedLocalObjectReference) ProtoMessage() {}
func (*TypedLocalObjectReference) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{193}
+ return fileDescriptor_83c10c24ec417dc9, []int{194}
}
func (m *TypedLocalObjectReference) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5484,7 +5512,7 @@ var xxx_messageInfo_TypedLocalObjectReference proto.InternalMessageInfo
func (m *Volume) Reset() { *m = Volume{} }
func (*Volume) ProtoMessage() {}
func (*Volume) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{194}
+ return fileDescriptor_83c10c24ec417dc9, []int{195}
}
func (m *Volume) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5512,7 +5540,7 @@ var xxx_messageInfo_Volume proto.InternalMessageInfo
func (m *VolumeDevice) Reset() { *m = VolumeDevice{} }
func (*VolumeDevice) ProtoMessage() {}
func (*VolumeDevice) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{195}
+ return fileDescriptor_83c10c24ec417dc9, []int{196}
}
func (m *VolumeDevice) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5540,7 +5568,7 @@ var xxx_messageInfo_VolumeDevice proto.InternalMessageInfo
func (m *VolumeMount) Reset() { *m = VolumeMount{} }
func (*VolumeMount) ProtoMessage() {}
func (*VolumeMount) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{196}
+ return fileDescriptor_83c10c24ec417dc9, []int{197}
}
func (m *VolumeMount) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5568,7 +5596,7 @@ var xxx_messageInfo_VolumeMount proto.InternalMessageInfo
func (m *VolumeNodeAffinity) Reset() { *m = VolumeNodeAffinity{} }
func (*VolumeNodeAffinity) ProtoMessage() {}
func (*VolumeNodeAffinity) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{197}
+ return fileDescriptor_83c10c24ec417dc9, []int{198}
}
func (m *VolumeNodeAffinity) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5596,7 +5624,7 @@ var xxx_messageInfo_VolumeNodeAffinity proto.InternalMessageInfo
func (m *VolumeProjection) Reset() { *m = VolumeProjection{} }
func (*VolumeProjection) ProtoMessage() {}
func (*VolumeProjection) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{198}
+ return fileDescriptor_83c10c24ec417dc9, []int{199}
}
func (m *VolumeProjection) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5624,7 +5652,7 @@ var xxx_messageInfo_VolumeProjection proto.InternalMessageInfo
func (m *VolumeSource) Reset() { *m = VolumeSource{} }
func (*VolumeSource) ProtoMessage() {}
func (*VolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{199}
+ return fileDescriptor_83c10c24ec417dc9, []int{200}
}
func (m *VolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5652,7 +5680,7 @@ var xxx_messageInfo_VolumeSource proto.InternalMessageInfo
func (m *VsphereVirtualDiskVolumeSource) Reset() { *m = VsphereVirtualDiskVolumeSource{} }
func (*VsphereVirtualDiskVolumeSource) ProtoMessage() {}
func (*VsphereVirtualDiskVolumeSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{200}
+ return fileDescriptor_83c10c24ec417dc9, []int{201}
}
func (m *VsphereVirtualDiskVolumeSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5680,7 +5708,7 @@ var xxx_messageInfo_VsphereVirtualDiskVolumeSource proto.InternalMessageInfo
func (m *WeightedPodAffinityTerm) Reset() { *m = WeightedPodAffinityTerm{} }
func (*WeightedPodAffinityTerm) ProtoMessage() {}
func (*WeightedPodAffinityTerm) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{201}
+ return fileDescriptor_83c10c24ec417dc9, []int{202}
}
func (m *WeightedPodAffinityTerm) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5708,7 +5736,7 @@ var xxx_messageInfo_WeightedPodAffinityTerm proto.InternalMessageInfo
func (m *WindowsSecurityContextOptions) Reset() { *m = WindowsSecurityContextOptions{} }
func (*WindowsSecurityContextOptions) ProtoMessage() {}
func (*WindowsSecurityContextOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_83c10c24ec417dc9, []int{202}
+ return fileDescriptor_83c10c24ec417dc9, []int{203}
}
func (m *WindowsSecurityContextOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -5828,6 +5856,7 @@ func init() {
proto.RegisterType((*LocalVolumeSource)(nil), "k8s.io.api.core.v1.LocalVolumeSource")
proto.RegisterType((*NFSVolumeSource)(nil), "k8s.io.api.core.v1.NFSVolumeSource")
proto.RegisterType((*Namespace)(nil), "k8s.io.api.core.v1.Namespace")
+ proto.RegisterType((*NamespaceCondition)(nil), "k8s.io.api.core.v1.NamespaceCondition")
proto.RegisterType((*NamespaceList)(nil), "k8s.io.api.core.v1.NamespaceList")
proto.RegisterType((*NamespaceSpec)(nil), "k8s.io.api.core.v1.NamespaceSpec")
proto.RegisterType((*NamespaceStatus)(nil), "k8s.io.api.core.v1.NamespaceStatus")
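
The long run of one-line hunks above all follows from the single substantive change in this hunk: the new `NamespaceCondition` registration. `Descriptor()` paths into the file descriptor are positional, so every message that comes after the insertion point has its index bumped by one — that is all the `[]int{N}` → `[]int{N+1}` churn amounts to. A minimal, self-contained Go sketch of that shift (the `before`/`after` slices are illustrative stand-ins, not taken from the generated file):

```go
package main

import "fmt"

// index returns the positional index of name in types, or -1 if absent,
// mirroring how a descriptor path identifies a message by its position.
func index(types []string, name string) int {
	for i, t := range types {
		if t == name {
			return i
		}
	}
	return -1
}

func main() {
	before := []string{"Namespace", "NamespaceList", "NamespaceSpec", "NamespaceStatus"}
	// NamespaceCondition is inserted right after Namespace, as in this diff.
	after := []string{"Namespace", "NamespaceCondition", "NamespaceList", "NamespaceSpec", "NamespaceStatus"}

	// Every type after the insertion point moves up by one position,
	// matching the []int{N} -> []int{N+1} changes in the hunks above.
	for _, name := range before {
		fmt.Printf("%-16s before=%d after=%d\n", name, index(before, name), index(after, name))
	}
}
```
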
@@ -5971,849 +6000,858 @@ func init() {
}
var fileDescriptor_83c10c24ec417dc9 = []byte{
- // 13467 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0x7d, 0x6b, 0x70, 0x64, 0x57,
- 0x5a, 0xd8, 0xde, 0x6e, 0x3d, 0xba, 0x3f, 0xbd, 0xcf, 0x3c, 0xac, 0x91, 0x3d, 0xa3, 0xf1, 0x9d,
- 0xdd, 0xf1, 0x78, 0x6d, 0x6b, 0xd6, 0x63, 0x7b, 0x6d, 0xd6, 0xbb, 0x06, 0x49, 0x2d, 0xcd, 0xc8,
- 0x33, 0xd2, 0xb4, 0x4f, 0x6b, 0x66, 0x76, 0x8d, 0x77, 0xd9, 0xab, 0xbe, 0x47, 0xd2, 0xb5, 0xba,
- 0xef, 0x6d, 0xdf, 0x7b, 0x5b, 0x1a, 0x39, 0x50, 0x21, 0x4b, 0x78, 0x6c, 0x80, 0xd4, 0x56, 0x42,
- 0xe5, 0x01, 0x14, 0xa9, 0x22, 0xa4, 0x80, 0x40, 0x52, 0x21, 0x4b, 0x80, 0xb0, 0x24, 0x21, 0x90,
- 0xa4, 0x48, 0x7e, 0x6c, 0x48, 0x2a, 0xa9, 0xa5, 0x8a, 0x8a, 0x02, 0x43, 0x2a, 0x14, 0x3f, 0x02,
- 0x54, 0xe0, 0x4f, 0x14, 0x2a, 0xa4, 0xce, 0xf3, 0x9e, 0x73, 0xfb, 0xde, 0xee, 0xd6, 0x58, 0xa3,
- 0x35, 0x5b, 0xfe, 0xd7, 0x7d, 0xbe, 0xef, 0x7c, 0xe7, 0xdc, 0xf3, 0xfc, 0xce, 0xf7, 0x84, 0x57,
- 0x77, 0x5e, 0x89, 0xe6, 0xbc, 0xe0, 0xea, 0x4e, 0x7b, 0x83, 0x84, 0x3e, 0x89, 0x49, 0x74, 0x75,
- 0x97, 0xf8, 0x6e, 0x10, 0x5e, 0x15, 0x00, 0xa7, 0xe5, 0x5d, 0xad, 0x07, 0x21, 0xb9, 0xba, 0xfb,
- 0xfc, 0xd5, 0x2d, 0xe2, 0x93, 0xd0, 0x89, 0x89, 0x3b, 0xd7, 0x0a, 0x83, 0x38, 0x40, 0x88, 0xe3,
- 0xcc, 0x39, 0x2d, 0x6f, 0x8e, 0xe2, 0xcc, 0xed, 0x3e, 0x3f, 0xf3, 0xdc, 0x96, 0x17, 0x6f, 0xb7,
- 0x37, 0xe6, 0xea, 0x41, 0xf3, 0xea, 0x56, 0xb0, 0x15, 0x5c, 0x65, 0xa8, 0x1b, 0xed, 0x4d, 0xf6,
- 0x8f, 0xfd, 0x61, 0xbf, 0x38, 0x89, 0x99, 0x17, 0x93, 0x66, 0x9a, 0x4e, 0x7d, 0xdb, 0xf3, 0x49,
- 0xb8, 0x7f, 0xb5, 0xb5, 0xb3, 0xc5, 0xda, 0x0d, 0x49, 0x14, 0xb4, 0xc3, 0x3a, 0x49, 0x37, 0xdc,
- 0xb5, 0x56, 0x74, 0xb5, 0x49, 0x62, 0x27, 0xa3, 0xbb, 0x33, 0x57, 0xf3, 0x6a, 0x85, 0x6d, 0x3f,
- 0xf6, 0x9a, 0x9d, 0xcd, 0x7c, 0xbc, 0x57, 0x85, 0xa8, 0xbe, 0x4d, 0x9a, 0x4e, 0x47, 0xbd, 0x17,
- 0xf2, 0xea, 0xb5, 0x63, 0xaf, 0x71, 0xd5, 0xf3, 0xe3, 0x28, 0x0e, 0xd3, 0x95, 0xec, 0xaf, 0x59,
- 0x70, 0x71, 0xfe, 0x5e, 0x6d, 0xa9, 0xe1, 0x44, 0xb1, 0x57, 0x5f, 0x68, 0x04, 0xf5, 0x9d, 0x5a,
- 0x1c, 0x84, 0xe4, 0x6e, 0xd0, 0x68, 0x37, 0x49, 0x8d, 0x0d, 0x04, 0x7a, 0x16, 0x4a, 0xbb, 0xec,
- 0xff, 0x4a, 0x65, 0xda, 0xba, 0x68, 0x5d, 0x29, 0x2f, 0x4c, 0xfe, 0xc6, 0xc1, 0xec, 0x87, 0x1e,
- 0x1c, 0xcc, 0x96, 0xee, 0x8a, 0x72, 0xac, 0x30, 0xd0, 0x65, 0x18, 0xda, 0x8c, 0xd6, 0xf7, 0x5b,
- 0x64, 0xba, 0xc0, 0x70, 0xc7, 0x05, 0xee, 0xd0, 0x72, 0x8d, 0x96, 0x62, 0x01, 0x45, 0x57, 0xa1,
- 0xdc, 0x72, 0xc2, 0xd8, 0x8b, 0xbd, 0xc0, 0x9f, 0x2e, 0x5e, 0xb4, 0xae, 0x0c, 0x2e, 0x4c, 0x09,
- 0xd4, 0x72, 0x55, 0x02, 0x70, 0x82, 0x43, 0xbb, 0x11, 0x12, 0xc7, 0xbd, 0xed, 0x37, 0xf6, 0xa7,
- 0x07, 0x2e, 0x5a, 0x57, 0x4a, 0x49, 0x37, 0xb0, 0x28, 0xc7, 0x0a, 0xc3, 0xfe, 0xe1, 0x02, 0x94,
- 0xe6, 0x37, 0x37, 0x3d, 0xdf, 0x8b, 0xf7, 0xd1, 0x5d, 0x18, 0xf5, 0x03, 0x97, 0xc8, 0xff, 0xec,
- 0x2b, 0x46, 0xae, 0x5d, 0x9c, 0xeb, 0x5c, 0x4a, 0x73, 0x6b, 0x1a, 0xde, 0xc2, 0xe4, 0x83, 0x83,
- 0xd9, 0x51, 0xbd, 0x04, 0x1b, 0x74, 0x10, 0x86, 0x91, 0x56, 0xe0, 0x2a, 0xb2, 0x05, 0x46, 0x76,
- 0x36, 0x8b, 0x6c, 0x35, 0x41, 0x5b, 0x98, 0x78, 0x70, 0x30, 0x3b, 0xa2, 0x15, 0x60, 0x9d, 0x08,
- 0xda, 0x80, 0x09, 0xfa, 0xd7, 0x8f, 0x3d, 0x45, 0xb7, 0xc8, 0xe8, 0x5e, 0xca, 0xa3, 0xab, 0xa1,
- 0x2e, 0x9c, 0x7a, 0x70, 0x30, 0x3b, 0x91, 0x2a, 0xc4, 0x69, 0x82, 0xf6, 0xbb, 0x30, 0x3e, 0x1f,
- 0xc7, 0x4e, 0x7d, 0x9b, 0xb8, 0x7c, 0x06, 0xd1, 0x8b, 0x30, 0xe0, 0x3b, 0x4d, 0x22, 0xe6, 0xf7,
- 0xa2, 0x18, 0xd8, 0x81, 0x35, 0xa7, 0x49, 0x0e, 0x0f, 0x66, 0x27, 0xef, 0xf8, 0xde, 0x3b, 0x6d,
- 0xb1, 0x2a, 0x68, 0x19, 0x66, 0xd8, 0xe8, 0x1a, 0x80, 0x4b, 0x76, 0xbd, 0x3a, 0xa9, 0x3a, 0xf1,
- 0xb6, 0x98, 0x6f, 0x24, 0xea, 0x42, 0x45, 0x41, 0xb0, 0x86, 0x65, 0xdf, 0x87, 0xf2, 0xfc, 0x6e,
- 0xe0, 0xb9, 0xd5, 0xc0, 0x8d, 0xd0, 0x0e, 0x4c, 0xb4, 0x42, 0xb2, 0x49, 0x42, 0x55, 0x34, 0x6d,
- 0x5d, 0x2c, 0x5e, 0x19, 0xb9, 0x76, 0x25, 0xf3, 0x63, 0x4d, 0xd4, 0x25, 0x3f, 0x0e, 0xf7, 0x17,
- 0x1e, 0x13, 0xed, 0x4d, 0xa4, 0xa0, 0x38, 0x4d, 0xd9, 0xfe, 0x37, 0x05, 0x38, 0x33, 0xff, 0x6e,
- 0x3b, 0x24, 0x15, 0x2f, 0xda, 0x49, 0xaf, 0x70, 0xd7, 0x8b, 0x76, 0xd6, 0x92, 0x11, 0x50, 0x4b,
- 0xab, 0x22, 0xca, 0xb1, 0xc2, 0x40, 0xcf, 0xc1, 0x30, 0xfd, 0x7d, 0x07, 0xaf, 0x88, 0x4f, 0x3e,
- 0x25, 0x90, 0x47, 0x2a, 0x4e, 0xec, 0x54, 0x38, 0x08, 0x4b, 0x1c, 0xb4, 0x0a, 0x23, 0x75, 0xb6,
- 0x21, 0xb7, 0x56, 0x03, 0x97, 0xb0, 0xc9, 0x2c, 0x2f, 0x3c, 0x43, 0xd1, 0x17, 0x93, 0xe2, 0xc3,
- 0x83, 0xd9, 0x69, 0xde, 0x37, 0x41, 0x42, 0x83, 0x61, 0xbd, 0x3e, 0xb2, 0xd5, 0xfe, 0x1a, 0x60,
- 0x94, 0x20, 0x63, 0x6f, 0x5d, 0xd1, 0xb6, 0xca, 0x20, 0xdb, 0x2a, 0xa3, 0xd9, 0xdb, 0x04, 0x3d,
- 0x0f, 0x03, 0x3b, 0x9e, 0xef, 0x4e, 0x0f, 0x31, 0x5a, 0xe7, 0xe9, 0x9c, 0xdf, 0xf4, 0x7c, 0xf7,
- 0xf0, 0x60, 0x76, 0xca, 0xe8, 0x0e, 0x2d, 0xc4, 0x0c, 0xd5, 0xfe, 0x13, 0x0b, 0x66, 0x19, 0x6c,
- 0xd9, 0x6b, 0x90, 0x2a, 0x09, 0x23, 0x2f, 0x8a, 0x89, 0x1f, 0x1b, 0x03, 0x7a, 0x0d, 0x20, 0x22,
- 0xf5, 0x90, 0xc4, 0xda, 0x90, 0xaa, 0x85, 0x51, 0x53, 0x10, 0xac, 0x61, 0xd1, 0x03, 0x21, 0xda,
- 0x76, 0x42, 0xb6, 0xbe, 0xc4, 0xc0, 0xaa, 0x03, 0xa1, 0x26, 0x01, 0x38, 0xc1, 0x31, 0x0e, 0x84,
- 0x62, 0xaf, 0x03, 0x01, 0x7d, 0x0a, 0x26, 0x92, 0xc6, 0xa2, 0x96, 0x53, 0x97, 0x03, 0xc8, 0xb6,
- 0x4c, 0xcd, 0x04, 0xe1, 0x34, 0xae, 0xfd, 0x0f, 0x2d, 0xb1, 0x78, 0xe8, 0x57, 0xbf, 0xcf, 0xbf,
- 0xd5, 0xfe, 0x25, 0x0b, 0x86, 0x17, 0x3c, 0xdf, 0xf5, 0xfc, 0x2d, 0xf4, 0x79, 0x28, 0xd1, 0xbb,
- 0xc9, 0x75, 0x62, 0x47, 0x9c, 0x7b, 0x1f, 0xd3, 0xf6, 0x96, 0xba, 0x2a, 0xe6, 0x5a, 0x3b, 0x5b,
- 0xb4, 0x20, 0x9a, 0xa3, 0xd8, 0x74, 0xb7, 0xdd, 0xde, 0x78, 0x9b, 0xd4, 0xe3, 0x55, 0x12, 0x3b,
- 0xc9, 0xe7, 0x24, 0x65, 0x58, 0x51, 0x45, 0x37, 0x61, 0x28, 0x76, 0xc2, 0x2d, 0x12, 0x8b, 0x03,
- 0x30, 0xf3, 0xa0, 0xe2, 0x35, 0x31, 0xdd, 0x91, 0xc4, 0xaf, 0x93, 0xe4, 0x5a, 0x58, 0x67, 0x55,
- 0xb1, 0x20, 0x61, 0xff, 0xe0, 0x30, 0x9c, 0x5b, 0xac, 0xad, 0xe4, 0xac, 0xab, 0xcb, 0x30, 0xe4,
- 0x86, 0xde, 0x2e, 0x09, 0xc5, 0x38, 0x2b, 0x2a, 0x15, 0x56, 0x8a, 0x05, 0x14, 0xbd, 0x02, 0xa3,
- 0xfc, 0x42, 0xba, 0xe1, 0xf8, 0x6e, 0x43, 0x0e, 0xf1, 0x69, 0x81, 0x3d, 0x7a, 0x57, 0x83, 0x61,
- 0x03, 0xf3, 0x88, 0x8b, 0xea, 0x72, 0x6a, 0x33, 0xe6, 0x5d, 0x76, 0x5f, 0xb4, 0x60, 0x92, 0x37,
- 0x33, 0x1f, 0xc7, 0xa1, 0xb7, 0xd1, 0x8e, 0x49, 0x34, 0x3d, 0xc8, 0x4e, 0xba, 0xc5, 0xac, 0xd1,
- 0xca, 0x1d, 0x81, 0xb9, 0xbb, 0x29, 0x2a, 0xfc, 0x10, 0x9c, 0x16, 0xed, 0x4e, 0xa6, 0xc1, 0xb8,
- 0xb9, 0x4f, 0xf6, 0xcc, 0x83, 0x83, 0xd9, 0xb3, 0x38, 0x13, 0x03, 0xe7, 0xd4, 0x44, 0x9f, 0x86,
- 0x69, 0xa7, 0xd1, 0x08, 0xf6, 0xd4, 0xa4, 0x2e, 0x45, 0x75, 0xa7, 0xc1, 0x45, 0xf9, 0xc3, 0x8c,
- 0xea, 0x13, 0x0f, 0x0e, 0x66, 0xa7, 0xe7, 0x73, 0x70, 0x70, 0x6e, 0x6d, 0xf4, 0x49, 0x28, 0xb7,
- 0xc2, 0xa0, 0xbe, 0xaa, 0xb9, 0xa9, 0x5d, 0xa0, 0x03, 0x58, 0x95, 0x85, 0x87, 0x07, 0xb3, 0x63,
- 0xea, 0x0f, 0xbb, 0xf0, 0x93, 0x0a, 0xf6, 0x0e, 0x9c, 0xaa, 0x91, 0xd0, 0x63, 0x89, 0x7b, 0xdd,
- 0xe4, 0xfc, 0x58, 0x87, 0x72, 0x98, 0x3a, 0x31, 0xfb, 0x8a, 0xed, 0xa6, 0xc5, 0x04, 0x97, 0x27,
- 0x64, 0x42, 0xc8, 0xfe, 0x3f, 0x16, 0x0c, 0x0b, 0x8f, 0x8c, 0x13, 0x60, 0xd4, 0xe6, 0x0d, 0x79,
- 0xf9, 0x6c, 0xf6, 0xad, 0xc2, 0x3a, 0x93, 0x2b, 0x29, 0x5f, 0x49, 0x49, 0xca, 0x9f, 0xec, 0x46,
- 0xa4, 0xbb, 0x8c, 0xfc, 0xef, 0x14, 0x61, 0xdc, 0x74, 0xdd, 0x3b, 0x81, 0x21, 0x58, 0x83, 0xe1,
- 0x48, 0xf8, 0xa6, 0x15, 0xf2, 0x2d, 0xb2, 0xd3, 0x93, 0x98, 0x58, 0x6b, 0x09, 0x6f, 0x34, 0x49,
- 0x24, 0xd3, 0xe9, 0xad, 0xf8, 0x08, 0x9d, 0xde, 0x7a, 0x79, 0x4f, 0x0e, 0x1c, 0x87, 0xf7, 0xa4,
- 0xfd, 0x15, 0x76, 0xb3, 0xe9, 0xe5, 0x27, 0xc0, 0xf4, 0x5c, 0x37, 0xef, 0x40, 0xbb, 0xcb, 0xca,
- 0x12, 0x9d, 0xca, 0x61, 0x7e, 0x7e, 0xde, 0x82, 0xf3, 0x19, 0x5f, 0xa5, 0x71, 0x42, 0xcf, 0x42,
- 0xc9, 0x69, 0xbb, 0x9e, 0xda, 0xcb, 0x9a, 0xd6, 0x6c, 0x5e, 0x94, 0x63, 0x85, 0x81, 0x16, 0x61,
- 0x8a, 0xdc, 0x6f, 0x79, 0x5c, 0x61, 0xa8, 0x9b, 0x54, 0x16, 0x79, 0xbc, 0xeb, 0xa5, 0x34, 0x10,
- 0x77, 0xe2, 0xab, 0x60, 0x0f, 0xc5, 0xdc, 0x60, 0x0f, 0xff, 0xc8, 0x82, 0x11, 0xe5, 0x9d, 0xf5,
- 0xc8, 0x47, 0xfb, 0x5b, 0xcc, 0xd1, 0x7e, 0xbc, 0xcb, 0x68, 0xe7, 0x0c, 0xf3, 0xdf, 0x2b, 0xa8,
- 0xfe, 0x56, 0x83, 0x30, 0xee, 0x83, 0xc3, 0x7a, 0x05, 0x4a, 0xad, 0x30, 0x88, 0x83, 0x7a, 0xd0,
- 0x10, 0x0c, 0xd6, 0x13, 0x49, 0x2c, 0x12, 0x5e, 0x7e, 0xa8, 0xfd, 0xc6, 0x0a, 0x9b, 0x8d, 0x5e,
- 0x10, 0xc6, 0x82, 0xa9, 0x49, 0x46, 0x2f, 0x08, 0x63, 0xcc, 0x20, 0xc8, 0x05, 0x88, 0x9d, 0x70,
- 0x8b, 0xc4, 0xb4, 0x4c, 0xc4, 0x3e, 0xca, 0x3f, 0x3c, 0xda, 0xb1, 0xd7, 0x98, 0xf3, 0xfc, 0x38,
- 0x8a, 0xc3, 0xb9, 0x15, 0x3f, 0xbe, 0x1d, 0xf2, 0xf7, 0x9a, 0x16, 0x5c, 0x44, 0xd1, 0xc2, 0x1a,
- 0x5d, 0xe9, 0x56, 0xcc, 0xda, 0x18, 0x34, 0xf5, 0xef, 0x6b, 0xa2, 0x1c, 0x2b, 0x0c, 0xfb, 0x65,
- 0x76, 0x95, 0xb0, 0x01, 0x3a, 0x5a, 0xdc, 0x8f, 0xaf, 0x96, 0xd4, 0xd0, 0x32, 0xe5, 0x5b, 0x45,
- 0x8f, 0x2e, 0xd2, 0xfd, 0xe4, 0xa6, 0x0d, 0xeb, 0x2e, 0x46, 0x49, 0x08, 0x12, 0xf4, 0xad, 0x1d,
- 0x36, 0x15, 0xcf, 0xf5, 0xb8, 0x02, 0x8e, 0x60, 0x45, 0xc1, 0x62, 0xf0, 0xb3, 0x08, 0xe5, 0x2b,
- 0x55, 0xb1, 0xc8, 0xb5, 0x18, 0xfc, 0x02, 0x80, 0x13, 0x1c, 0x74, 0x55, 0xbc, 0xc6, 0x07, 0x8c,
- 0xcc, 0x93, 0xf2, 0x35, 0x2e, 0x3f, 0x5f, 0x13, 0x66, 0x3f, 0x0f, 0x23, 0x2a, 0x03, 0x65, 0x95,
- 0x27, 0x36, 0x14, 0x91, 0xa0, 0x96, 0x92, 0x62, 0xac, 0xe3, 0xa0, 0x75, 0x98, 0x88, 0xb8, 0xa8,
- 0x47, 0x05, 0xfc, 0xe4, 0x22, 0xb3, 0x8f, 0x4a, 0x43, 0x94, 0x9a, 0x09, 0x3e, 0x64, 0x45, 0xfc,
- 0xe8, 0x90, 0xae, 0xbc, 0x69, 0x12, 0xe8, 0x35, 0x18, 0x6f, 0x04, 0x8e, 0xbb, 0xe0, 0x34, 0x1c,
- 0xbf, 0xce, 0xbe, 0xb7, 0x64, 0x26, 0x32, 0xbb, 0x65, 0x40, 0x71, 0x0a, 0x9b, 0x72, 0x3e, 0x7a,
- 0x89, 0x08, 0x52, 0xeb, 0xf8, 0x5b, 0x24, 0x12, 0xf9, 0x04, 0x19, 0xe7, 0x73, 0x2b, 0x07, 0x07,
- 0xe7, 0xd6, 0x46, 0xaf, 0xc0, 0xa8, 0xfc, 0x7c, 0xcd, 0xf3, 0x3d, 0xb1, 0xbd, 0xd7, 0x60, 0xd8,
- 0xc0, 0x44, 0x7b, 0x70, 0x46, 0xfe, 0x5f, 0x0f, 0x9d, 0xcd, 0x4d, 0xaf, 0x2e, 0xdc, 0x41, 0xb9,
- 0x63, 0xdc, 0xbc, 0xf4, 0xde, 0x5a, 0xca, 0x42, 0x3a, 0x3c, 0x98, 0xbd, 0x28, 0x46, 0x2d, 0x13,
- 0xce, 0x26, 0x31, 0x9b, 0x3e, 0x5a, 0x85, 0x53, 0xdb, 0xc4, 0x69, 0xc4, 0xdb, 0x8b, 0xdb, 0xa4,
- 0xbe, 0x23, 0x37, 0x11, 0xf3, 0xa7, 0xd7, 0x2c, 0xd6, 0x6f, 0x74, 0xa2, 0xe0, 0xac, 0x7a, 0xe8,
- 0x2d, 0x98, 0x6e, 0xb5, 0x37, 0x1a, 0x5e, 0xb4, 0xbd, 0x16, 0xc4, 0xcc, 0x1a, 0x45, 0x25, 0xb4,
- 0x14, 0x8e, 0xf7, 0x2a, 0x62, 0x41, 0x35, 0x07, 0x0f, 0xe7, 0x52, 0x40, 0xef, 0xc2, 0x99, 0xd4,
- 0x62, 0x10, 0xae, 0xc7, 0xe3, 0xf9, 0x21, 0xbf, 0x6b, 0x59, 0x15, 0x84, 0x17, 0x7f, 0x16, 0x08,
- 0x67, 0x37, 0xf1, 0xde, 0xec, 0x8a, 0xde, 0xa1, 0x95, 0x35, 0xa6, 0x0c, 0x7d, 0x1e, 0x46, 0xf5,
- 0x55, 0x24, 0x2e, 0x98, 0xcb, 0xd9, 0x3c, 0x8b, 0xb6, 0xda, 0x38, 0x4b, 0xa7, 0x56, 0x94, 0x0e,
- 0xc3, 0x06, 0x45, 0x9b, 0x40, 0xf6, 0xf7, 0xa1, 0x5b, 0x50, 0xaa, 0x37, 0x3c, 0xe2, 0xc7, 0x2b,
- 0xd5, 0x6e, 0x21, 0x85, 0x16, 0x05, 0x8e, 0x18, 0x30, 0x11, 0x23, 0x99, 0x97, 0x61, 0x45, 0xc1,
- 0xfe, 0xd5, 0x02, 0xcc, 0xf6, 0x08, 0xb8, 0x9d, 0x12, 0x7f, 0x5b, 0x7d, 0x89, 0xbf, 0xe7, 0x65,
- 0x7a, 0xce, 0xb5, 0x94, 0x4c, 0x20, 0x95, 0x7a, 0x33, 0x91, 0x0c, 0xa4, 0xf1, 0xfb, 0x36, 0x47,
- 0xd6, 0x25, 0xe8, 0x03, 0x3d, 0x0d, 0xea, 0x0d, 0xcd, 0xd9, 0x60, 0xff, 0x0f, 0x91, 0x5c, 0x2d,
- 0x88, 0xfd, 0x95, 0x02, 0x9c, 0x51, 0x43, 0xf8, 0x8d, 0x3b, 0x70, 0x77, 0x3a, 0x07, 0xee, 0x18,
- 0x74, 0x48, 0xf6, 0x6d, 0x18, 0xe2, 0x31, 0x92, 0xfa, 0x60, 0x80, 0x2e, 0x99, 0x41, 0xfe, 0xd4,
- 0x35, 0x6d, 0x04, 0xfa, 0xfb, 0x5e, 0x0b, 0x26, 0xd6, 0x17, 0xab, 0xb5, 0xa0, 0xbe, 0x43, 0xe2,
- 0x79, 0xce, 0xb0, 0x62, 0xc1, 0xff, 0x58, 0x0f, 0xc9, 0xd7, 0x64, 0x71, 0x4c, 0x17, 0x61, 0x60,
- 0x3b, 0x88, 0xe2, 0xb4, 0x82, 0xf9, 0x46, 0x10, 0xc5, 0x98, 0x41, 0xec, 0xdf, 0xb6, 0x60, 0x90,
- 0x65, 0xa4, 0xee, 0x95, 0x13, 0xbd, 0x9f, 0xef, 0x42, 0x2f, 0xc1, 0x10, 0xd9, 0xdc, 0x24, 0xf5,
- 0x58, 0xcc, 0xaa, 0xf4, 0x08, 0x1e, 0x5a, 0x62, 0xa5, 0xf4, 0xd2, 0x67, 0x8d, 0xf1, 0xbf, 0x58,
- 0x20, 0xa3, 0x7b, 0x50, 0x8e, 0xbd, 0x26, 0x99, 0x77, 0x5d, 0xa1, 0xa2, 0x7b, 0x08, 0xaf, 0xe6,
- 0x75, 0x49, 0x00, 0x27, 0xb4, 0xec, 0x2f, 0x15, 0x00, 0x92, 0x08, 0x19, 0xbd, 0x3e, 0x71, 0xa1,
- 0x43, 0x79, 0x73, 0x39, 0x43, 0x79, 0x83, 0x12, 0x82, 0x19, 0x9a, 0x1b, 0x35, 0x4c, 0xc5, 0xbe,
- 0x86, 0x69, 0xe0, 0x28, 0xc3, 0xb4, 0x08, 0x53, 0x49, 0x84, 0x0f, 0x33, 0xc0, 0x11, 0x7b, 0xa4,
- 0xac, 0xa7, 0x81, 0xb8, 0x13, 0xdf, 0x26, 0x70, 0x51, 0x05, 0x3a, 0x10, 0x77, 0x0d, 0xb3, 0x00,
- 0x3d, 0x42, 0x7a, 0xfc, 0x44, 0x3b, 0x55, 0xc8, 0xd5, 0x4e, 0xfd, 0xb8, 0x05, 0xa7, 0xd3, 0xed,
- 0x30, 0x97, 0xbc, 0x2f, 0x5a, 0x70, 0x86, 0xe9, 0xe8, 0x58, 0xab, 0x9d, 0x1a, 0xc1, 0x17, 0xbb,
- 0x06, 0x6f, 0xc8, 0xe9, 0x71, 0xe2, 0x7a, 0xbe, 0x9a, 0x45, 0x1a, 0x67, 0xb7, 0x68, 0xff, 0x97,
- 0x02, 0x4c, 0xe7, 0x45, 0x7d, 0x60, 0x06, 0xe2, 0xce, 0xfd, 0xda, 0x0e, 0xd9, 0x13, 0x66, 0xb8,
- 0x89, 0x81, 0x38, 0x2f, 0xc6, 0x12, 0x9e, 0x8e, 0xa1, 0x5c, 0xe8, 0x2f, 0x86, 0x32, 0xda, 0x86,
- 0xa9, 0xbd, 0x6d, 0xe2, 0xdf, 0xf1, 0x23, 0x27, 0xf6, 0xa2, 0x4d, 0x8f, 0xe5, 0x36, 0xe7, 0xeb,
- 0xe6, 0x13, 0xd2, 0x58, 0xf6, 0x5e, 0x1a, 0xe1, 0xf0, 0x60, 0xf6, 0xbc, 0x51, 0x90, 0x74, 0x99,
- 0x1f, 0x24, 0xb8, 0x93, 0x68, 0x67, 0x08, 0xea, 0x81, 0x47, 0x18, 0x82, 0xda, 0xfe, 0xa2, 0x05,
- 0xe7, 0x72, 0x53, 0xc4, 0xa1, 0x2b, 0x50, 0x72, 0x5a, 0x1e, 0x17, 0x66, 0x8a, 0x63, 0x94, 0x3d,
- 0xca, 0xab, 0x2b, 0x5c, 0x94, 0xa9, 0xa0, 0x2a, 0x75, 0x6d, 0x21, 0x37, 0x75, 0x6d, 0xcf, 0x4c,
- 0xb4, 0xf6, 0xf7, 0x58, 0x20, 0x9c, 0xdb, 0xfa, 0x38, 0xbb, 0xdf, 0x94, 0x99, 0xbf, 0x8d, 0x34,
- 0x15, 0x17, 0xf3, 0xbd, 0xfd, 0x44, 0x72, 0x0a, 0xc5, 0x2b, 0x19, 0x29, 0x29, 0x0c, 0x5a, 0xb6,
- 0x0b, 0x02, 0x5a, 0x21, 0x4c, 0x14, 0xd8, 0xbb, 0x37, 0xd7, 0x00, 0x5c, 0x86, 0xab, 0xe5, 0xff,
- 0x55, 0x37, 0x73, 0x45, 0x41, 0xb0, 0x86, 0x65, 0xff, 0x87, 0x02, 0x8c, 0xc8, 0xb4, 0x08, 0x6d,
- 0xbf, 0x9f, 0x07, 0xfb, 0x91, 0xf2, 0xa4, 0xb1, 0x84, 0xd9, 0x94, 0x70, 0x35, 0x91, 0x73, 0x24,
- 0x09, 0xb3, 0x25, 0x00, 0x27, 0x38, 0x74, 0x17, 0x45, 0xed, 0x0d, 0x86, 0x9e, 0x72, 0xc5, 0xaa,
- 0xf1, 0x62, 0x2c, 0xe1, 0xe8, 0xd3, 0x30, 0xc9, 0xeb, 0x85, 0x41, 0xcb, 0xd9, 0xe2, 0x52, 0xe2,
- 0x41, 0xe5, 0x43, 0x3d, 0xb9, 0x9a, 0x82, 0x1d, 0x1e, 0xcc, 0x9e, 0x4e, 0x97, 0x31, 0xf5, 0x47,
- 0x07, 0x15, 0x66, 0x52, 0xc1, 0x1b, 0xa1, 0xbb, 0xbf, 0xc3, 0x12, 0x23, 0x01, 0x61, 0x1d, 0xcf,
- 0xfe, 0x3c, 0xa0, 0xce, 0x04, 0x11, 0xe8, 0x75, 0x6e, 0x47, 0xe7, 0x85, 0xc4, 0xed, 0xa6, 0x0e,
- 0xd1, 0x3d, 0x85, 0xa5, 0x17, 0x05, 0xaf, 0x85, 0x55, 0x7d, 0xfb, 0xaf, 0x17, 0x61, 0x32, 0xed,
- 0x37, 0x8a, 0x6e, 0xc0, 0x10, 0x67, 0x3d, 0x04, 0xf9, 0x2e, 0xda, 0x76, 0xcd, 0xdb, 0x94, 0x1d,
- 0xc2, 0x82, 0x7b, 0x11, 0xf5, 0xd1, 0x5b, 0x30, 0xe2, 0x06, 0x7b, 0xfe, 0x9e, 0x13, 0xba, 0xf3,
- 0xd5, 0x15, 0xb1, 0x9c, 0x33, 0x5f, 0x30, 0x95, 0x04, 0x4d, 0xf7, 0x60, 0x65, 0x9a, 0xa5, 0x04,
- 0x84, 0x75, 0x72, 0x68, 0x9d, 0xc5, 0xb3, 0xdd, 0xf4, 0xb6, 0x56, 0x9d, 0x56, 0x37, 0xa3, 0xea,
- 0x45, 0x89, 0xa4, 0x51, 0x1e, 0x13, 0x41, 0x6f, 0x39, 0x00, 0x27, 0x84, 0xd0, 0xb7, 0xc3, 0xa9,
- 0x28, 0x47, 0xe8, 0x99, 0x97, 0x2f, 0xa8, 0x9b, 0x1c, 0x70, 0xe1, 0x31, 0xfa, 0xb6, 0xcc, 0x12,
- 0x8f, 0x66, 0x35, 0x63, 0xff, 0xda, 0x29, 0x30, 0x36, 0xb1, 0x91, 0x3e, 0xce, 0x3a, 0xa6, 0xf4,
- 0x71, 0x18, 0x4a, 0xa4, 0xd9, 0x8a, 0xf7, 0x2b, 0x5e, 0xd8, 0x2d, 0xff, 0xe8, 0x92, 0xc0, 0xe9,
- 0xa4, 0x29, 0x21, 0x58, 0xd1, 0xc9, 0xce, 0xf1, 0x57, 0xfc, 0x3a, 0xe6, 0xf8, 0x1b, 0x38, 0xc1,
- 0x1c, 0x7f, 0x6b, 0x30, 0xbc, 0xe5, 0xc5, 0x98, 0xb4, 0x02, 0xc1, 0xf4, 0x67, 0xae, 0xc3, 0xeb,
- 0x1c, 0xa5, 0x33, 0x9b, 0x94, 0x00, 0x60, 0x49, 0x04, 0xbd, 0xae, 0x76, 0xe0, 0x50, 0xfe, 0x9b,
- 0xb9, 0x53, 0x2d, 0x9c, 0xb9, 0x07, 0x45, 0x26, 0xbf, 0xe1, 0x87, 0xcd, 0xe4, 0xb7, 0x2c, 0xf3,
- 0xef, 0x95, 0xf2, 0x3d, 0x20, 0x58, 0x7a, 0xbd, 0x1e, 0x59, 0xf7, 0xee, 0xea, 0x39, 0x0b, 0xcb,
- 0xf9, 0x27, 0x81, 0x4a, 0x47, 0xd8, 0x67, 0xa6, 0xc2, 0xef, 0xb1, 0xe0, 0x4c, 0x2b, 0x2b, 0x7d,
- 0xa7, 0xd0, 0xa0, 0xbe, 0xd4, 0x77, 0x7e, 0x52, 0xa3, 0x41, 0x26, 0x3c, 0xc9, 0x44, 0xc3, 0xd9,
- 0xcd, 0xd1, 0x81, 0x0e, 0x37, 0x5c, 0x91, 0x6a, 0xef, 0x52, 0x4e, 0xca, 0xc3, 0x2e, 0x89, 0x0e,
- 0xd7, 0x33, 0xd2, 0xeb, 0x7d, 0x38, 0x2f, 0xbd, 0x5e, 0xdf, 0x49, 0xf5, 0x5e, 0x57, 0xc9, 0x0e,
- 0xc7, 0xf2, 0x97, 0x12, 0x4f, 0x65, 0xd8, 0x33, 0xc5, 0xe1, 0xeb, 0x2a, 0xc5, 0x61, 0x97, 0xd8,
- 0x8e, 0x3c, 0x81, 0x61, 0xcf, 0xc4, 0x86, 0x5a, 0x72, 0xc2, 0x89, 0xe3, 0x49, 0x4e, 0x68, 0x5c,
- 0x35, 0x3c, 0x3f, 0xde, 0x33, 0x3d, 0xae, 0x1a, 0x83, 0x6e, 0xf7, 0xcb, 0x86, 0x27, 0x62, 0x9c,
- 0x7a, 0xa8, 0x44, 0x8c, 0x77, 0xf5, 0xc4, 0x86, 0xa8, 0x47, 0xe6, 0x3e, 0x8a, 0xd4, 0x67, 0x3a,
- 0xc3, 0xbb, 0xfa, 0x05, 0x78, 0x2a, 0x9f, 0xae, 0xba, 0xe7, 0x3a, 0xe9, 0x66, 0x5e, 0x81, 0x1d,
- 0x69, 0x12, 0x4f, 0x9f, 0x4c, 0x9a, 0xc4, 0x33, 0xc7, 0x9e, 0x26, 0xf1, 0xec, 0x09, 0xa4, 0x49,
- 0x7c, 0xec, 0x04, 0xd3, 0x24, 0xde, 0x65, 0x66, 0x07, 0x3c, 0x44, 0x88, 0x88, 0x45, 0x99, 0x1d,
- 0xf7, 0x30, 0x2b, 0x8e, 0x08, 0xff, 0x38, 0x05, 0xc2, 0x09, 0xa9, 0x8c, 0xf4, 0x8b, 0xd3, 0x8f,
- 0x20, 0xfd, 0xe2, 0x5a, 0x92, 0x7e, 0xf1, 0x5c, 0xfe, 0x54, 0x67, 0x98, 0x7b, 0xe7, 0x24, 0x5d,
- 0xbc, 0xab, 0x27, 0x4b, 0x7c, 0xbc, 0x8b, 0x78, 0x3c, 0x4b, 0xf0, 0xd8, 0x25, 0x45, 0xe2, 0x6b,
- 0x3c, 0x45, 0xe2, 0x13, 0xf9, 0x27, 0x79, 0xfa, 0xba, 0x33, 0x13, 0x23, 0x7e, 0x5f, 0x01, 0x2e,
- 0x74, 0xdf, 0x17, 0x89, 0xd4, 0xb3, 0x9a, 0x68, 0xe9, 0x52, 0x52, 0x4f, 0xfe, 0xb6, 0x4a, 0xb0,
- 0xfa, 0x8e, 0x1e, 0x75, 0x1d, 0xa6, 0x94, 0x3d, 0x77, 0xc3, 0xab, 0xef, 0x6b, 0xb9, 0xe0, 0x95,
- 0x0f, 0x6c, 0x2d, 0x8d, 0x80, 0x3b, 0xeb, 0xa0, 0x79, 0x98, 0x30, 0x0a, 0x57, 0x2a, 0xe2, 0x0d,
- 0xa5, 0xc4, 0xac, 0x35, 0x13, 0x8c, 0xd3, 0xf8, 0xf6, 0x4f, 0x5b, 0xf0, 0x58, 0x4e, 0x06, 0xa2,
- 0xbe, 0x83, 0x23, 0x6d, 0xc2, 0x44, 0xcb, 0xac, 0xda, 0x23, 0x86, 0x9a, 0x91, 0xe7, 0x48, 0xf5,
- 0x35, 0x05, 0xc0, 0x69, 0xa2, 0xf6, 0x9f, 0x5a, 0x70, 0xbe, 0xab, 0x69, 0x15, 0xc2, 0x70, 0x76,
- 0xab, 0x19, 0x39, 0x8b, 0x21, 0x71, 0x89, 0x1f, 0x7b, 0x4e, 0xa3, 0xd6, 0x22, 0x75, 0x4d, 0x6e,
- 0xcd, 0x6c, 0x94, 0xae, 0xaf, 0xd6, 0xe6, 0x3b, 0x31, 0x70, 0x4e, 0x4d, 0xb4, 0x0c, 0xa8, 0x13,
- 0x22, 0x66, 0x98, 0xc5, 0x59, 0xed, 0xa4, 0x87, 0x33, 0x6a, 0xa0, 0x97, 0x61, 0x4c, 0x99, 0x6c,
- 0x69, 0x33, 0xce, 0x0e, 0x60, 0xac, 0x03, 0xb0, 0x89, 0xb7, 0x70, 0xe5, 0x37, 0x7e, 0xf7, 0xc2,
- 0x87, 0x7e, 0xf3, 0x77, 0x2f, 0x7c, 0xe8, 0xb7, 0x7e, 0xf7, 0xc2, 0x87, 0xbe, 0xf3, 0xc1, 0x05,
- 0xeb, 0x37, 0x1e, 0x5c, 0xb0, 0x7e, 0xf3, 0xc1, 0x05, 0xeb, 0xb7, 0x1e, 0x5c, 0xb0, 0x7e, 0xe7,
- 0xc1, 0x05, 0xeb, 0x4b, 0xbf, 0x77, 0xe1, 0x43, 0x6f, 0x16, 0x76, 0x9f, 0xff, 0xff, 0x01, 0x00,
- 0x00, 0xff, 0xff, 0xee, 0x7c, 0xe3, 0x20, 0x5f, 0xf9, 0x00, 0x00,
+ // 13601 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xec, 0xbd, 0x7b, 0x70, 0x24, 0x49,
+ 0x5a, 0x18, 0x7e, 0xd5, 0xad, 0x47, 0xf7, 0xa7, 0x77, 0xce, 0x63, 0x35, 0xda, 0x99, 0xd1, 0x6c,
+ 0xed, 0xdd, 0xec, 0xec, 0xed, 0xae, 0xe6, 0xf6, 0x75, 0xbb, 0xec, 0xee, 0x2d, 0x48, 0x6a, 0x69,
+ 0xa6, 0x77, 0x46, 0x9a, 0xde, 0x6c, 0xcd, 0xcc, 0xdd, 0xb2, 0x77, 0xbf, 0x2b, 0x75, 0xa5, 0xa4,
+ 0x5a, 0xb5, 0xaa, 0x7a, 0xab, 0xaa, 0xa5, 0xd1, 0xfe, 0x20, 0x8c, 0x8f, 0xe7, 0x19, 0x70, 0x5c,
+ 0xd8, 0x84, 0x1f, 0x40, 0x60, 0x07, 0xc6, 0x01, 0x18, 0xec, 0x30, 0x06, 0x03, 0xe6, 0xb0, 0x8d,
+ 0xc1, 0x76, 0x60, 0xff, 0x81, 0xb1, 0xc3, 0xf6, 0x11, 0x41, 0x58, 0x86, 0xc1, 0x61, 0xe2, 0xfe,
+ 0x30, 0x10, 0x06, 0xff, 0x61, 0x99, 0x30, 0x8e, 0x7c, 0x56, 0x66, 0x75, 0x55, 0x77, 0x6b, 0x56,
+ 0xa3, 0x5b, 0x2e, 0xf6, 0xbf, 0xee, 0xfc, 0xbe, 0xfc, 0x32, 0x2b, 0x9f, 0x5f, 0x7e, 0x4f, 0x78,
+ 0x75, 0xfb, 0xe5, 0x68, 0xce, 0x0b, 0xae, 0x6e, 0xb7, 0xd7, 0x49, 0xe8, 0x93, 0x98, 0x44, 0x57,
+ 0x77, 0x89, 0xef, 0x06, 0xe1, 0x55, 0x01, 0x70, 0x5a, 0xde, 0xd5, 0x46, 0x10, 0x92, 0xab, 0xbb,
+ 0xcf, 0x5e, 0xdd, 0x24, 0x3e, 0x09, 0x9d, 0x98, 0xb8, 0x73, 0xad, 0x30, 0x88, 0x03, 0x84, 0x38,
+ 0xce, 0x9c, 0xd3, 0xf2, 0xe6, 0x28, 0xce, 0xdc, 0xee, 0xb3, 0x33, 0xcf, 0x6c, 0x7a, 0xf1, 0x56,
+ 0x7b, 0x7d, 0xae, 0x11, 0xec, 0x5c, 0xdd, 0x0c, 0x36, 0x83, 0xab, 0x0c, 0x75, 0xbd, 0xbd, 0xc1,
+ 0xfe, 0xb1, 0x3f, 0xec, 0x17, 0x27, 0x31, 0xf3, 0x42, 0xd2, 0xcc, 0x8e, 0xd3, 0xd8, 0xf2, 0x7c,
+ 0x12, 0xee, 0x5f, 0x6d, 0x6d, 0x6f, 0xb2, 0x76, 0x43, 0x12, 0x05, 0xed, 0xb0, 0x41, 0xd2, 0x0d,
+ 0x77, 0xad, 0x15, 0x5d, 0xdd, 0x21, 0xb1, 0x93, 0xd1, 0xdd, 0x99, 0xab, 0x79, 0xb5, 0xc2, 0xb6,
+ 0x1f, 0x7b, 0x3b, 0x9d, 0xcd, 0x7c, 0xb2, 0x57, 0x85, 0xa8, 0xb1, 0x45, 0x76, 0x9c, 0x8e, 0x7a,
+ 0xcf, 0xe7, 0xd5, 0x6b, 0xc7, 0x5e, 0xf3, 0xaa, 0xe7, 0xc7, 0x51, 0x1c, 0xa6, 0x2b, 0xd9, 0x5f,
+ 0xb1, 0xe0, 0xd2, 0xfc, 0xdd, 0xfa, 0x52, 0xd3, 0x89, 0x62, 0xaf, 0xb1, 0xd0, 0x0c, 0x1a, 0xdb,
+ 0xf5, 0x38, 0x08, 0xc9, 0x9d, 0xa0, 0xd9, 0xde, 0x21, 0x75, 0x36, 0x10, 0xe8, 0x69, 0x28, 0xed,
+ 0xb2, 0xff, 0xd5, 0xca, 0xb4, 0x75, 0xc9, 0xba, 0x52, 0x5e, 0x98, 0xfc, 0xf5, 0x83, 0xd9, 0x8f,
+ 0xdc, 0x3f, 0x98, 0x2d, 0xdd, 0x11, 0xe5, 0x58, 0x61, 0xa0, 0xcb, 0x30, 0xb4, 0x11, 0xad, 0xed,
+ 0xb7, 0xc8, 0x74, 0x81, 0xe1, 0x8e, 0x0b, 0xdc, 0xa1, 0xe5, 0x3a, 0x2d, 0xc5, 0x02, 0x8a, 0xae,
+ 0x42, 0xb9, 0xe5, 0x84, 0xb1, 0x17, 0x7b, 0x81, 0x3f, 0x5d, 0xbc, 0x64, 0x5d, 0x19, 0x5c, 0x98,
+ 0x12, 0xa8, 0xe5, 0x9a, 0x04, 0xe0, 0x04, 0x87, 0x76, 0x23, 0x24, 0x8e, 0x7b, 0xcb, 0x6f, 0xee,
+ 0x4f, 0x0f, 0x5c, 0xb2, 0xae, 0x94, 0x92, 0x6e, 0x60, 0x51, 0x8e, 0x15, 0x86, 0xfd, 0x83, 0x05,
+ 0x28, 0xcd, 0x6f, 0x6c, 0x78, 0xbe, 0x17, 0xef, 0xa3, 0x3b, 0x30, 0xea, 0x07, 0x2e, 0x91, 0xff,
+ 0xd9, 0x57, 0x8c, 0x3c, 0x77, 0x69, 0xae, 0x73, 0x29, 0xcd, 0xad, 0x6a, 0x78, 0x0b, 0x93, 0xf7,
+ 0x0f, 0x66, 0x47, 0xf5, 0x12, 0x6c, 0xd0, 0x41, 0x18, 0x46, 0x5a, 0x81, 0xab, 0xc8, 0x16, 0x18,
+ 0xd9, 0xd9, 0x2c, 0xb2, 0xb5, 0x04, 0x6d, 0x61, 0xe2, 0xfe, 0xc1, 0xec, 0x88, 0x56, 0x80, 0x75,
+ 0x22, 0x68, 0x1d, 0x26, 0xe8, 0x5f, 0x3f, 0xf6, 0x14, 0xdd, 0x22, 0xa3, 0xfb, 0x78, 0x1e, 0x5d,
+ 0x0d, 0x75, 0xe1, 0xd4, 0xfd, 0x83, 0xd9, 0x89, 0x54, 0x21, 0x4e, 0x13, 0xb4, 0xdf, 0x83, 0xf1,
+ 0xf9, 0x38, 0x76, 0x1a, 0x5b, 0xc4, 0xe5, 0x33, 0x88, 0x5e, 0x80, 0x01, 0xdf, 0xd9, 0x21, 0x62,
+ 0x7e, 0x2f, 0x89, 0x81, 0x1d, 0x58, 0x75, 0x76, 0xc8, 0xe1, 0xc1, 0xec, 0xe4, 0x6d, 0xdf, 0x7b,
+ 0xb7, 0x2d, 0x56, 0x05, 0x2d, 0xc3, 0x0c, 0x1b, 0x3d, 0x07, 0xe0, 0x92, 0x5d, 0xaf, 0x41, 0x6a,
+ 0x4e, 0xbc, 0x25, 0xe6, 0x1b, 0x89, 0xba, 0x50, 0x51, 0x10, 0xac, 0x61, 0xd9, 0xf7, 0xa0, 0x3c,
+ 0xbf, 0x1b, 0x78, 0x6e, 0x2d, 0x70, 0x23, 0xb4, 0x0d, 0x13, 0xad, 0x90, 0x6c, 0x90, 0x50, 0x15,
+ 0x4d, 0x5b, 0x97, 0x8a, 0x57, 0x46, 0x9e, 0xbb, 0x92, 0xf9, 0xb1, 0x26, 0xea, 0x92, 0x1f, 0x87,
+ 0xfb, 0x0b, 0x8f, 0x88, 0xf6, 0x26, 0x52, 0x50, 0x9c, 0xa6, 0x6c, 0xff, 0xcb, 0x02, 0x9c, 0x99,
+ 0x7f, 0xaf, 0x1d, 0x92, 0x8a, 0x17, 0x6d, 0xa7, 0x57, 0xb8, 0xeb, 0x45, 0xdb, 0xab, 0xc9, 0x08,
+ 0xa8, 0xa5, 0x55, 0x11, 0xe5, 0x58, 0x61, 0xa0, 0x67, 0x60, 0x98, 0xfe, 0xbe, 0x8d, 0xab, 0xe2,
+ 0x93, 0x4f, 0x09, 0xe4, 0x91, 0x8a, 0x13, 0x3b, 0x15, 0x0e, 0xc2, 0x12, 0x07, 0xad, 0xc0, 0x48,
+ 0x83, 0x6d, 0xc8, 0xcd, 0x95, 0xc0, 0x25, 0x6c, 0x32, 0xcb, 0x0b, 0x4f, 0x51, 0xf4, 0xc5, 0xa4,
+ 0xf8, 0xf0, 0x60, 0x76, 0x9a, 0xf7, 0x4d, 0x90, 0xd0, 0x60, 0x58, 0xaf, 0x8f, 0x6c, 0xb5, 0xbf,
+ 0x06, 0x18, 0x25, 0xc8, 0xd8, 0x5b, 0x57, 0xb4, 0xad, 0x32, 0xc8, 0xb6, 0xca, 0x68, 0xf6, 0x36,
+ 0x41, 0xcf, 0xc2, 0xc0, 0xb6, 0xe7, 0xbb, 0xd3, 0x43, 0x8c, 0xd6, 0x05, 0x3a, 0xe7, 0x37, 0x3c,
+ 0xdf, 0x3d, 0x3c, 0x98, 0x9d, 0x32, 0xba, 0x43, 0x0b, 0x31, 0x43, 0xb5, 0xff, 0xd8, 0x82, 0x59,
+ 0x06, 0x5b, 0xf6, 0x9a, 0xa4, 0x46, 0xc2, 0xc8, 0x8b, 0x62, 0xe2, 0xc7, 0xc6, 0x80, 0x3e, 0x07,
+ 0x10, 0x91, 0x46, 0x48, 0x62, 0x6d, 0x48, 0xd5, 0xc2, 0xa8, 0x2b, 0x08, 0xd6, 0xb0, 0xe8, 0x81,
+ 0x10, 0x6d, 0x39, 0x21, 0x5b, 0x5f, 0x62, 0x60, 0xd5, 0x81, 0x50, 0x97, 0x00, 0x9c, 0xe0, 0x18,
+ 0x07, 0x42, 0xb1, 0xd7, 0x81, 0x80, 0x3e, 0x05, 0x13, 0x49, 0x63, 0x51, 0xcb, 0x69, 0xc8, 0x01,
+ 0x64, 0x5b, 0xa6, 0x6e, 0x82, 0x70, 0x1a, 0xd7, 0xfe, 0x7b, 0x96, 0x58, 0x3c, 0xf4, 0xab, 0x3f,
+ 0xe0, 0xdf, 0x6a, 0xff, 0xa2, 0x05, 0xc3, 0x0b, 0x9e, 0xef, 0x7a, 0xfe, 0x26, 0xfa, 0x3c, 0x94,
+ 0xe8, 0xdd, 0xe4, 0x3a, 0xb1, 0x23, 0xce, 0xbd, 0x4f, 0x68, 0x7b, 0x4b, 0x5d, 0x15, 0x73, 0xad,
+ 0xed, 0x4d, 0x5a, 0x10, 0xcd, 0x51, 0x6c, 0xba, 0xdb, 0x6e, 0xad, 0xbf, 0x43, 0x1a, 0xf1, 0x0a,
+ 0x89, 0x9d, 0xe4, 0x73, 0x92, 0x32, 0xac, 0xa8, 0xa2, 0x1b, 0x30, 0x14, 0x3b, 0xe1, 0x26, 0x89,
+ 0xc5, 0x01, 0x98, 0x79, 0x50, 0xf1, 0x9a, 0x98, 0xee, 0x48, 0xe2, 0x37, 0x48, 0x72, 0x2d, 0xac,
+ 0xb1, 0xaa, 0x58, 0x90, 0xb0, 0xbf, 0x7f, 0x18, 0xce, 0x2d, 0xd6, 0xab, 0x39, 0xeb, 0xea, 0x32,
+ 0x0c, 0xb9, 0xa1, 0xb7, 0x4b, 0x42, 0x31, 0xce, 0x8a, 0x4a, 0x85, 0x95, 0x62, 0x01, 0x45, 0x2f,
+ 0xc3, 0x28, 0xbf, 0x90, 0xae, 0x3b, 0xbe, 0xdb, 0x94, 0x43, 0x7c, 0x5a, 0x60, 0x8f, 0xde, 0xd1,
+ 0x60, 0xd8, 0xc0, 0x3c, 0xe2, 0xa2, 0xba, 0x9c, 0xda, 0x8c, 0x79, 0x97, 0xdd, 0x17, 0x2d, 0x98,
+ 0xe4, 0xcd, 0xcc, 0xc7, 0x71, 0xe8, 0xad, 0xb7, 0x63, 0x12, 0x4d, 0x0f, 0xb2, 0x93, 0x6e, 0x31,
+ 0x6b, 0xb4, 0x72, 0x47, 0x60, 0xee, 0x4e, 0x8a, 0x0a, 0x3f, 0x04, 0xa7, 0x45, 0xbb, 0x93, 0x69,
+ 0x30, 0xee, 0x68, 0x16, 0x7d, 0xbb, 0x05, 0x33, 0x8d, 0xc0, 0x8f, 0xc3, 0xa0, 0xd9, 0x24, 0x61,
+ 0xad, 0xbd, 0xde, 0xf4, 0xa2, 0x2d, 0xbe, 0x4e, 0x31, 0xd9, 0x60, 0x27, 0x41, 0xce, 0x1c, 0x2a,
+ 0x24, 0x31, 0x87, 0x17, 0xef, 0x1f, 0xcc, 0xce, 0x2c, 0xe6, 0x92, 0xc2, 0x5d, 0x9a, 0x41, 0xdb,
+ 0x80, 0xe8, 0x55, 0x5a, 0x8f, 0x9d, 0x4d, 0x92, 0x34, 0x3e, 0xdc, 0x7f, 0xe3, 0x67, 0xef, 0x1f,
+ 0xcc, 0xa2, 0xd5, 0x0e, 0x12, 0x38, 0x83, 0x2c, 0x7a, 0x17, 0x4e, 0xd3, 0xd2, 0x8e, 0x6f, 0x2d,
+ 0xf5, 0xdf, 0xdc, 0xf4, 0xfd, 0x83, 0xd9, 0xd3, 0xab, 0x19, 0x44, 0x70, 0x26, 0x69, 0xf4, 0x6d,
+ 0x16, 0x9c, 0x4b, 0x3e, 0x7f, 0xe9, 0x5e, 0xcb, 0xf1, 0xdd, 0xa4, 0xe1, 0x72, 0xff, 0x0d, 0xd3,
+ 0x33, 0xf9, 0xdc, 0x62, 0x1e, 0x25, 0x9c, 0xdf, 0xc8, 0xcc, 0x22, 0x9c, 0xc9, 0x5c, 0x2d, 0x68,
+ 0x12, 0x8a, 0xdb, 0x84, 0x73, 0x41, 0x65, 0x4c, 0x7f, 0xa2, 0xd3, 0x30, 0xb8, 0xeb, 0x34, 0xdb,
+ 0x62, 0xa3, 0x60, 0xfe, 0xe7, 0x95, 0xc2, 0xcb, 0x96, 0xfd, 0xaf, 0x8a, 0x30, 0xb1, 0x58, 0xaf,
+ 0x3e, 0xd0, 0x2e, 0xd4, 0xaf, 0xa1, 0x42, 0xd7, 0x6b, 0x28, 0xb9, 0xd4, 0x8a, 0xb9, 0x97, 0xda,
+ 0x5f, 0xc8, 0xd8, 0x42, 0x03, 0x6c, 0x0b, 0x7d, 0x43, 0xce, 0x16, 0x3a, 0xe6, 0x8d, 0xb3, 0x9b,
+ 0xb3, 0x8a, 0x06, 0xd9, 0x64, 0x66, 0x72, 0x2c, 0x37, 0x83, 0x86, 0xd3, 0x4c, 0x1f, 0x7d, 0x47,
+ 0x5c, 0x4a, 0xc7, 0x33, 0x8f, 0x0d, 0x18, 0x5d, 0x74, 0x5a, 0xce, 0xba, 0xd7, 0xf4, 0x62, 0x8f,
+ 0x44, 0xe8, 0x09, 0x28, 0x3a, 0xae, 0xcb, 0xb8, 0xad, 0xf2, 0xc2, 0x99, 0xfb, 0x07, 0xb3, 0xc5,
+ 0x79, 0x97, 0x5e, 0xfb, 0xa0, 0xb0, 0xf6, 0x31, 0xc5, 0x40, 0x1f, 0x87, 0x01, 0x37, 0x0c, 0x5a,
+ 0xd3, 0x05, 0x86, 0x49, 0x77, 0xdd, 0x40, 0x25, 0x0c, 0x5a, 0x29, 0x54, 0x86, 0x63, 0xff, 0x4a,
+ 0x01, 0xce, 0x2f, 0x92, 0xd6, 0xd6, 0x72, 0x3d, 0xe7, 0xfc, 0xbe, 0x02, 0xa5, 0x9d, 0xc0, 0xf7,
+ 0xe2, 0x20, 0x8c, 0x44, 0xd3, 0x6c, 0x45, 0xac, 0x88, 0x32, 0xac, 0xa0, 0xe8, 0x12, 0x0c, 0xb4,
+ 0x12, 0xa6, 0x72, 0x54, 0x32, 0xa4, 0x8c, 0x9d, 0x64, 0x10, 0x8a, 0xd1, 0x8e, 0x48, 0x28, 0x56,
+ 0x8c, 0xc2, 0xb8, 0x1d, 0x91, 0x10, 0x33, 0x48, 0x72, 0x33, 0xd3, 0x3b, 0x5b, 0x9c, 0xd0, 0xa9,
+ 0x9b, 0x99, 0x42, 0xb0, 0x86, 0x85, 0x6a, 0x50, 0x8e, 0x52, 0x33, 0xdb, 0xd7, 0x36, 0x1d, 0x63,
+ 0x57, 0xb7, 0x9a, 0xc9, 0x84, 0x88, 0x71, 0xa3, 0x0c, 0xf5, 0xbc, 0xba, 0xbf, 0x5c, 0x00, 0xc4,
+ 0x87, 0xf0, 0xcf, 0xd9, 0xc0, 0xdd, 0xee, 0x1c, 0xb8, 0xfe, 0xb7, 0xc4, 0x71, 0x8d, 0xde, 0x9f,
+ 0x58, 0x70, 0x7e, 0xd1, 0xf3, 0x5d, 0x12, 0xe6, 0x2c, 0xc0, 0x87, 0xf3, 0x96, 0x3d, 0x1a, 0xd3,
+ 0x60, 0x2c, 0xb1, 0x81, 0x63, 0x58, 0x62, 0xf6, 0x1f, 0x5a, 0x80, 0xf8, 0x67, 0x7f, 0xe0, 0x3e,
+ 0xf6, 0x76, 0xe7, 0xc7, 0x1e, 0xc3, 0xb2, 0xb0, 0x6f, 0xc2, 0xf8, 0x62, 0xd3, 0x23, 0x7e, 0x5c,
+ 0xad, 0x2d, 0x06, 0xfe, 0x86, 0xb7, 0x89, 0x5e, 0x81, 0xf1, 0xd8, 0xdb, 0x21, 0x41, 0x3b, 0xae,
+ 0x93, 0x46, 0xe0, 0xb3, 0x97, 0xa4, 0x75, 0x65, 0x70, 0x01, 0xdd, 0x3f, 0x98, 0x1d, 0x5f, 0x33,
+ 0x20, 0x38, 0x85, 0x69, 0xff, 0x36, 0x1d, 0xbf, 0x60, 0xa7, 0x15, 0xf8, 0xc4, 0x8f, 0x17, 0x03,
+ 0xdf, 0xe5, 0x12, 0x87, 0x57, 0x60, 0x20, 0xa6, 0xe3, 0xc1, 0xc7, 0xee, 0xb2, 0xdc, 0x28, 0x74,
+ 0x14, 0x0e, 0x0f, 0x66, 0xcf, 0x76, 0xd6, 0x60, 0xe3, 0xc4, 0xea, 0xa0, 0x6f, 0x80, 0xa1, 0x28,
+ 0x76, 0xe2, 0x76, 0x24, 0x46, 0xf3, 0x31, 0x39, 0x9a, 0x75, 0x56, 0x7a, 0x78, 0x30, 0x3b, 0xa1,
+ 0xaa, 0xf1, 0x22, 0x2c, 0x2a, 0xa0, 0x27, 0x61, 0x78, 0x87, 0x44, 0x91, 0xb3, 0x29, 0x6f, 0xc3,
+ 0x09, 0x51, 0x77, 0x78, 0x85, 0x17, 0x63, 0x09, 0x47, 0x8f, 0xc3, 0x20, 0x09, 0xc3, 0x20, 0x14,
+ 0x7b, 0x74, 0x4c, 0x20, 0x0e, 0x2e, 0xd1, 0x42, 0xcc, 0x61, 0xf6, 0xbf, 0xb3, 0x60, 0x42, 0xf5,
+ 0x95, 0xb7, 0x75, 0x02, 0xaf, 0x82, 0xb7, 0x00, 0x1a, 0xf2, 0x03, 0x23, 0x76, 0x7b, 0x8c, 0x3c,
+ 0x77, 0x39, 0xf3, 0xa2, 0xee, 0x18, 0xc6, 0x84, 0xb2, 0x2a, 0x8a, 0xb0, 0x46, 0xcd, 0xfe, 0xa7,
+ 0x16, 0x9c, 0x4a, 0x7d, 0xd1, 0x4d, 0x2f, 0x8a, 0xd1, 0xdb, 0x1d, 0x5f, 0x35, 0xd7, 0xdf, 0x57,
+ 0xd1, 0xda, 0xec, 0x9b, 0xd4, 0x52, 0x96, 0x25, 0xda, 0x17, 0x5d, 0x87, 0x41, 0x2f, 0x26, 0x3b,
+ 0xf2, 0x63, 0x1e, 0xef, 0xfa, 0x31, 0xbc, 0x57, 0xc9, 0x8c, 0x54, 0x69, 0x4d, 0xcc, 0x09, 0xd8,
+ 0x7f, 0xb5, 0x08, 0x65, 0xbe, 0x6c, 0x57, 0x9c, 0xd6, 0x09, 0xcc, 0x45, 0x15, 0x06, 0x18, 0x75,
+ 0xde, 0xf1, 0x27, 0xb2, 0x3b, 0x2e, 0xba, 0x33, 0x47, 0x9f, 0xfc, 0x9c, 0x39, 0x52, 0x57, 0x03,
+ 0x2d, 0xc2, 0x8c, 0x04, 0x72, 0x00, 0xd6, 0x3d, 0xdf, 0x09, 0xf7, 0x69, 0xd9, 0x74, 0x91, 0x11,
+ 0x7c, 0xa6, 0x3b, 0xc1, 0x05, 0x85, 0xcf, 0xc9, 0xaa, 0xbe, 0x26, 0x00, 0xac, 0x11, 0x9d, 0x79,
+ 0x09, 0xca, 0x0a, 0xf9, 0x28, 0x3c, 0xce, 0xcc, 0xa7, 0x60, 0x22, 0xd5, 0x56, 0xaf, 0xea, 0xa3,
+ 0x3a, 0x8b, 0xf4, 0x4b, 0xec, 0x14, 0x10, 0xbd, 0x5e, 0xf2, 0x77, 0xc5, 0x29, 0xfa, 0x1e, 0x9c,
+ 0x6e, 0x66, 0x1c, 0x4e, 0x62, 0xaa, 0xfa, 0x3f, 0xcc, 0xce, 0x8b, 0xcf, 0x3e, 0x9d, 0x05, 0xc5,
+ 0x99, 0x6d, 0xd0, 0x6b, 0x3f, 0x68, 0xd1, 0x35, 0xef, 0x34, 0x75, 0x0e, 0xfa, 0x96, 0x28, 0xc3,
+ 0x0a, 0x4a, 0x8f, 0xb0, 0xd3, 0xaa, 0xf3, 0x37, 0xc8, 0x7e, 0x9d, 0x34, 0x49, 0x23, 0x0e, 0xc2,
+ 0xaf, 0x69, 0xf7, 0x2f, 0xf0, 0xd1, 0xe7, 0x27, 0xe0, 0x88, 0x20, 0x50, 0xbc, 0x41, 0xf6, 0xf9,
+ 0x54, 0xe8, 0x5f, 0x57, 0xec, 0xfa, 0x75, 0x3f, 0x63, 0xc1, 0x98, 0xfa, 0xba, 0x13, 0xd8, 0xea,
+ 0x0b, 0xe6, 0x56, 0xbf, 0xd0, 0x75, 0x81, 0xe7, 0x6c, 0xf2, 0x2f, 0x17, 0xe0, 0x9c, 0xc2, 0xa1,
+ 0xec, 0x3e, 0xff, 0x23, 0x56, 0xd5, 0x55, 0x28, 0xfb, 0x4a, 0x10, 0x65, 0x99, 0x12, 0xa0, 0x44,
+ 0x0c, 0x95, 0xe0, 0x50, 0xae, 0xcd, 0x4f, 0xa4, 0x45, 0xa3, 0xba, 0x84, 0x56, 0x48, 0x63, 0x17,
+ 0xa0, 0xd8, 0xf6, 0x5c, 0x71, 0x67, 0x7c, 0x42, 0x8e, 0xf6, 0xed, 0x6a, 0xe5, 0xf0, 0x60, 0xf6,
+ 0xb1, 0x3c, 0xed, 0x00, 0xbd, 0xac, 0xa2, 0xb9, 0xdb, 0xd5, 0x0a, 0xa6, 0x95, 0xd1, 0x3c, 0x4c,
+ 0x48, 0x05, 0xc8, 0x1d, 0xca, 0x41, 0x05, 0xbe, 0xb8, 0x5a, 0x94, 0x98, 0x15, 0x9b, 0x60, 0x9c,
+ 0xc6, 0x47, 0x15, 0x98, 0xdc, 0x6e, 0xaf, 0x93, 0x26, 0x89, 0xf9, 0x07, 0xdf, 0x20, 0x5c, 0x08,
+ 0x59, 0x4e, 0x1e, 0x5b, 0x37, 0x52, 0x70, 0xdc, 0x51, 0xc3, 0xfe, 0x33, 0x76, 0xc4, 0x8b, 0xd1,
+ 0xab, 0x85, 0x01, 0x5d, 0x58, 0x94, 0xfa, 0xd7, 0x72, 0x39, 0xf7, 0xb3, 0x2a, 0x6e, 0x90, 0xfd,
+ 0xb5, 0x80, 0x32, 0xdb, 0xd9, 0xab, 0xc2, 0x58, 0xf3, 0x03, 0x5d, 0xd7, 0xfc, 0xcf, 0x15, 0xe0,
+ 0x8c, 0x1a, 0x01, 0x83, 0xaf, 0xfb, 0xf3, 0x3e, 0x06, 0xcf, 0xc2, 0x88, 0x4b, 0x36, 0x9c, 0x76,
+ 0x33, 0x56, 0x12, 0xf1, 0x41, 0xae, 0x15, 0xa9, 0x24, 0xc5, 0x58, 0xc7, 0x39, 0xc2, 0xb0, 0xfd,
+ 0xaf, 0x11, 0x76, 0xb7, 0xc6, 0x0e, 0x5d, 0xe3, 0x6a, 0xd7, 0x58, 0xb9, 0xbb, 0xe6, 0x71, 0x18,
+ 0xf4, 0x76, 0x28, 0xaf, 0x55, 0x30, 0x59, 0xa8, 0x2a, 0x2d, 0xc4, 0x1c, 0x86, 0x3e, 0x06, 0xc3,
+ 0x8d, 0x60, 0x67, 0xc7, 0xf1, 0x5d, 0x76, 0xe5, 0x95, 0x17, 0x46, 0x28, 0x3b, 0xb6, 0xc8, 0x8b,
+ 0xb0, 0x84, 0xa1, 0xf3, 0x30, 0xe0, 0x84, 0x9b, 0x5c, 0x2c, 0x51, 0x5e, 0x28, 0xd1, 0x96, 0xe6,
+ 0xc3, 0xcd, 0x08, 0xb3, 0x52, 0xfa, 0xaa, 0xda, 0x0b, 0xc2, 0x6d, 0xcf, 0xdf, 0xac, 0x78, 0xa1,
+ 0xd8, 0x12, 0xea, 0x2e, 0xbc, 0xab, 0x20, 0x58, 0xc3, 0x42, 0xcb, 0x30, 0xd8, 0x0a, 0xc2, 0x38,
+ 0x9a, 0x1e, 0x62, 0xc3, 0xfd, 0x58, 0xce, 0x41, 0xc4, 0xbf, 0xb6, 0x16, 0x84, 0x71, 0xf2, 0x01,
+ 0xf4, 0x5f, 0x84, 0x79, 0x75, 0x74, 0x13, 0x86, 0x89, 0xbf, 0xbb, 0x1c, 0x06, 0x3b, 0xd3, 0xa7,
+ 0xf2, 0x29, 0x2d, 0x71, 0x14, 0xbe, 0xcc, 0x12, 0xb6, 0x53, 0x14, 0x63, 0x49, 0x02, 0x7d, 0x03,
+ 0x14, 0x89, 0xbf, 0x3b, 0x3d, 0xcc, 0x28, 0xcd, 0xe4, 0x50, 0xba, 0xe3, 0x84, 0xc9, 0x99, 0xbf,
+ 0xe4, 0xef, 0x62, 0x5a, 0x07, 0x7d, 0x06, 0xca, 0xf2, 0xc0, 0x88, 0x84, 0xfc, 0x2d, 0x73, 0xc1,
+ 0xca, 0x63, 0x06, 0x93, 0x77, 0xdb, 0x5e, 0x48, 0x76, 0x88, 0x1f, 0x47, 0xc9, 0x09, 0x29, 0xa1,
+ 0x11, 0x4e, 0xa8, 0xa1, 0xcf, 0x48, 0xa1, 0xef, 0x4a, 0xd0, 0xf6, 0xe3, 0x68, 0xba, 0xcc, 0xba,
+ 0x97, 0xa9, 0x8e, 0xbb, 0x93, 0xe0, 0xa5, 0xa5, 0xc2, 0xbc, 0x32, 0x36, 0x48, 0xa1, 0xcf, 0xc2,
+ 0x18, 0xff, 0xcf, 0x95, 0x5a, 0xd1, 0xf4, 0x19, 0x46, 0xfb, 0x52, 0x3e, 0x6d, 0x8e, 0xb8, 0x70,
+ 0x46, 0x10, 0x1f, 0xd3, 0x4b, 0x23, 0x6c, 0x52, 0x43, 0x18, 0xc6, 0x9a, 0xde, 0x2e, 0xf1, 0x49,
+ 0x14, 0xd5, 0xc2, 0x60, 0x9d, 0x4c, 0x03, 0x1b, 0x98, 0x73, 0xd9, 0x4a, 0xb0, 0x60, 0x9d, 0x2c,
+ 0x4c, 0x51, 0x9a, 0x37, 0xf5, 0x3a, 0xd8, 0x24, 0x81, 0x6e, 0xc3, 0x38, 0x7d, 0x84, 0x79, 0x09,
+ 0xd1, 0x91, 0x5e, 0x44, 0xd9, 0x53, 0x09, 0x1b, 0x95, 0x70, 0x8a, 0x08, 0xba, 0x05, 0xa3, 0x51,
+ 0xec, 0x84, 0x71, 0xbb, 0xc5, 0x89, 0x9e, 0xed, 0x45, 0x94, 0xe9, 0x50, 0xeb, 0x5a, 0x15, 0x6c,
+ 0x10, 0x40, 0x6f, 0x40, 0xb9, 0xe9, 0x6d, 0x90, 0xc6, 0x7e, 0xa3, 0x49, 0xa6, 0x47, 0x19, 0xb5,
+ 0xcc, 0x43, 0xe5, 0xa6, 0x44, 0xe2, 0xaf, 0x42, 0xf5, 0x17, 0x27, 0xd5, 0xd1, 0x1d, 0x38, 0x1b,
+ 0x93, 0x70, 0xc7, 0xf3, 0x1d, 0x7a, 0x18, 0x88, 0xd7, 0x12, 0xd3, 0x4d, 0x8e, 0xb1, 0xdd, 0x76,
+ 0x51, 0xcc, 0xc6, 0xd9, 0xb5, 0x4c, 0x2c, 0x9c, 0x53, 0x1b, 0xdd, 0x83, 0xe9, 0x0c, 0x48, 0xd0,
+ 0xf4, 0x1a, 0xfb, 0xd3, 0xa7, 0x19, 0xe5, 0xd7, 0x04, 0xe5, 0xe9, 0xb5, 0x1c, 0xbc, 0xc3, 0x2e,
+ 0x30, 0x9c, 0x4b, 0x1d, 0xdd, 0x82, 0x09, 0x76, 0x02, 0xd5, 0xda, 0xcd, 0xa6, 0x68, 0x70, 0x9c,
+ 0x35, 0xf8, 0x31, 0x79, 0x1f, 0x57, 0x4d, 0xf0, 0xe1, 0xc1, 0x2c, 0x24, 0xff, 0x70, 0xba, 0x36,
+ 0x5a, 0x67, 0x6a, 0xb0, 0x76, 0xe8, 0xc5, 0xfb, 0xf4, 0xdc, 0x20, 0xf7, 0xe2, 0xe9, 0x89, 0xae,
+ 0x22, 0x08, 0x1d, 0x55, 0xe9, 0xca, 0xf4, 0x42, 0x9c, 0x26, 0x48, 0x8f, 0xd4, 0x28, 0x76, 0x3d,
+ 0x7f, 0x7a, 0x92, 0x9d, 0xd4, 0xea, 0x44, 0xaa, 0xd3, 0x42, 0xcc, 0x61, 0x4c, 0x05, 0x46, 0x7f,
+ 0xdc, 0xa2, 0x37, 0xd7, 0x14, 0x43, 0x4c, 0x54, 0x60, 0x12, 0x80, 0x13, 0x1c, 0xca, 0x4c, 0xc6,
+ 0xf1, 0xfe, 0x34, 0x62, 0xa8, 0xea, 0x60, 0x59, 0x5b, 0xfb, 0x0c, 0xa6, 0xe5, 0xf6, 0x3a, 0x8c,
+ 0xab, 0x83, 0x90, 0x8d, 0x09, 0x9a, 0x85, 0x41, 0xc6, 0x3e, 0x09, 0x81, 0x59, 0x99, 0x76, 0x81,
+ 0xb1, 0x56, 0x98, 0x97, 0xb3, 0x2e, 0x78, 0xef, 0x91, 0x85, 0xfd, 0x98, 0xf0, 0x67, 0x7a, 0x51,
+ 0xeb, 0x82, 0x04, 0xe0, 0x04, 0xc7, 0xfe, 0xbf, 0x9c, 0x0d, 0x4d, 0x4e, 0xdb, 0x3e, 0xee, 0x97,
+ 0xa7, 0xa1, 0xb4, 0x15, 0x44, 0x31, 0xc5, 0x66, 0x6d, 0x0c, 0x26, 0x8c, 0xe7, 0x75, 0x51, 0x8e,
+ 0x15, 0x06, 0x7a, 0x15, 0xc6, 0x1a, 0x7a, 0x03, 0xe2, 0x72, 0x54, 0xc7, 0x88, 0xd1, 0x3a, 0x36,
+ 0x71, 0xd1, 0xcb, 0x50, 0x62, 0x66, 0x1d, 0x8d, 0xa0, 0x29, 0xb8, 0x36, 0x79, 0xc3, 0x97, 0x6a,
+ 0xa2, 0xfc, 0x50, 0xfb, 0x8d, 0x15, 0x36, 0xba, 0x0c, 0x43, 0xb4, 0x0b, 0xd5, 0x9a, 0xb8, 0x96,
+ 0x94, 0xec, 0xe7, 0x3a, 0x2b, 0xc5, 0x02, 0x6a, 0xff, 0x95, 0x82, 0x36, 0xca, 0xf4, 0x89, 0x4b,
+ 0x50, 0x0d, 0x86, 0xf7, 0x1c, 0x2f, 0xf6, 0xfc, 0x4d, 0xc1, 0x7f, 0x3c, 0xd9, 0xf5, 0x8e, 0x62,
+ 0x95, 0xee, 0xf2, 0x0a, 0xfc, 0x16, 0x15, 0x7f, 0xb0, 0x24, 0x43, 0x29, 0x86, 0x6d, 0xdf, 0xa7,
+ 0x14, 0x0b, 0xfd, 0x52, 0xc4, 0xbc, 0x02, 0xa7, 0x28, 0xfe, 0x60, 0x49, 0x06, 0xbd, 0x0d, 0x20,
+ 0x77, 0x18, 0x71, 0x85, 0x39, 0xc5, 0xd3, 0xbd, 0x89, 0xae, 0xa9, 0x3a, 0x0b, 0xe3, 0xf4, 0x8e,
+ 0x4e, 0xfe, 0x63, 0x8d, 0x9e, 0x1d, 0x33, 0x3e, 0xad, 0xb3, 0x33, 0xe8, 0x9b, 0xe9, 0x12, 0x77,
+ 0xc2, 0x98, 0xb8, 0xf3, 0xb1, 0x18, 0x9c, 0x8f, 0xf7, 0xf7, 0x48, 0x59, 0xf3, 0x76, 0x88, 0xbe,
+ 0x1d, 0x04, 0x11, 0x9c, 0xd0, 0xb3, 0x7f, 0xa1, 0x08, 0xd3, 0x79, 0xdd, 0xa5, 0x8b, 0x8e, 0xdc,
+ 0xf3, 0xe2, 0x45, 0xca, 0x5e, 0x59, 0xe6, 0xa2, 0x5b, 0x12, 0xe5, 0x58, 0x61, 0xd0, 0xd9, 0x8f,
+ 0xbc, 0x4d, 0xf9, 0xc6, 0x1c, 0x4c, 0x66, 0xbf, 0xce, 0x4a, 0xb1, 0x80, 0x52, 0xbc, 0x90, 0x38,
+ 0x91, 0xb0, 0xd7, 0xd1, 0x56, 0x09, 0x66, 0xa5, 0x58, 0x40, 0x75, 0x01, 0xd6, 0x40, 0x0f, 0x01,
+ 0x96, 0x31, 0x44, 0x83, 0xc7, 0x3b, 0x44, 0xe8, 0x73, 0x00, 0x1b, 0x9e, 0xef, 0x45, 0x5b, 0x8c,
+ 0xfa, 0xd0, 0x91, 0xa9, 0x2b, 0xe6, 0x6c, 0x59, 0x51, 0xc1, 0x1a, 0x45, 0xf4, 0x22, 0x8c, 0xa8,
+ 0x0d, 0x58, 0xad, 0x30, 0xe5, 0xa5, 0x66, 0x0c, 0x92, 0x9c, 0x46, 0x15, 0xac, 0xe3, 0xd9, 0xef,
+ 0xa4, 0xd7, 0x8b, 0xd8, 0x01, 0xda, 0xf8, 0x5a, 0xfd, 0x8e, 0x6f, 0xa1, 0xfb, 0xf8, 0xda, 0x5f,
+ 0x2d, 0xc2, 0x84, 0xd1, 0x58, 0x3b, 0xea, 0xe3, 0xcc, 0xba, 0x46, 0x0f, 0x70, 0x27, 0x26, 0x62,
+ 0xff, 0xd9, 0xbd, 0xb7, 0x8a, 0x7e, 0xc8, 0xd3, 0x1d, 0xc0, 0xeb, 0xa3, 0xcf, 0x41, 0xb9, 0xe9,
+ 0x44, 0x4c, 0x18, 0x46, 0xc4, 0xbe, 0xeb, 0x87, 0x58, 0xf2, 0x30, 0x71, 0xa2, 0x58, 0xbb, 0x35,
+ 0x39, 0xed, 0x84, 0x24, 0xbd, 0x69, 0x28, 0x7f, 0x22, 0x0d, 0xc2, 0x54, 0x27, 0x28, 0x13, 0xb3,
+ 0x8f, 0x39, 0x0c, 0xbd, 0x0c, 0xa3, 0x21, 0x61, 0xab, 0x62, 0x91, 0x72, 0x73, 0x6c, 0x99, 0x0d,
+ 0x26, 0x6c, 0x1f, 0xd6, 0x60, 0xd8, 0xc0, 0x4c, 0xde, 0x06, 0x43, 0x5d, 0xde, 0x06, 0x4f, 0xc2,
+ 0x30, 0xfb, 0xa1, 0x56, 0x80, 0x9a, 0x8d, 0x2a, 0x2f, 0xc6, 0x12, 0x9e, 0x5e, 0x30, 0xa5, 0xfe,
+ 0x16, 0x0c, 0x7d, 0x7d, 0x88, 0x45, 0xcd, 0x14, 0xc7, 0x25, 0x7e, 0xca, 0x89, 0x25, 0x8f, 0x25,
+ 0xcc, 0xfe, 0x38, 0x8c, 0x57, 0x1c, 0xb2, 0x13, 0xf8, 0x4b, 0xbe, 0xdb, 0x0a, 0x3c, 0x3f, 0x46,
+ 0xd3, 0x30, 0xc0, 0x2e, 0x11, 0x7e, 0x04, 0x0c, 0xd0, 0x86, 0xf0, 0x00, 0x7d, 0x10, 0xd8, 0x9b,
+ 0x70, 0xa6, 0x12, 0xec, 0xf9, 0x7b, 0x4e, 0xe8, 0xce, 0xd7, 0xaa, 0xda, 0xfb, 0x7a, 0x55, 0xbe,
+ 0xef, 0xb8, 0x1d, 0x56, 0xe6, 0xd1, 0xab, 0xd5, 0xe4, 0x6c, 0xed, 0xb2, 0xd7, 0x24, 0x39, 0x52,
+ 0x90, 0xbf, 0x5e, 0x30, 0x5a, 0x4a, 0xf0, 0x95, 0xa2, 0xca, 0xca, 0x55, 0x54, 0xbd, 0x09, 0xa5,
+ 0x0d, 0x8f, 0x34, 0x5d, 0x4c, 0x36, 0xc4, 0x4a, 0x7c, 0x22, 0xdf, 0xb4, 0x64, 0x99, 0x62, 0x4a,
+ 0xa9, 0x17, 0x7f, 0x1d, 0x2e, 0x8b, 0xca, 0x58, 0x91, 0x41, 0xdb, 0x30, 0x29, 0x1f, 0x0c, 0x12,
+ 0x2a, 0xd6, 0xe5, 0x93, 0xdd, 0x5e, 0x21, 0x26, 0xf1, 0xd3, 0xf7, 0x0f, 0x66, 0x27, 0x71, 0x8a,
+ 0x0c, 0xee, 0x20, 0x4c, 0x9f, 0x83, 0x3b, 0xf4, 0x04, 0x1e, 0x60, 0xc3, 0xcf, 0x9e, 0x83, 0xec,
+ 0x65, 0xcb, 0x4a, 0xed, 0x1f, 0xb6, 0xe0, 0x91, 0x8e, 0x91, 0x11, 0x2f, 0xfc, 0x63, 0x9e, 0x85,
+ 0xf4, 0x8b, 0xbb, 0xd0, 0xfb, 0xc5, 0x6d, 0xff, 0xb4, 0x05, 0xa7, 0x97, 0x76, 0x5a, 0xf1, 0x7e,
+ 0xc5, 0x33, 0xb5, 0x4a, 0x2f, 0xc1, 0xd0, 0x0e, 0x71, 0xbd, 0xf6, 0x8e, 0x98, 0xb9, 0x59, 0x79,
+ 0x4a, 0xad, 0xb0, 0xd2, 0xc3, 0x83, 0xd9, 0xb1, 0x7a, 0x1c, 0x84, 0xce, 0x26, 0xe1, 0x05, 0x58,
+ 0xa0, 0xb3, 0xb3, 0xde, 0x7b, 0x8f, 0xdc, 0xf4, 0x76, 0x3c, 0x69, 0x2a, 0xd4, 0x55, 0x66, 0x37,
+ 0x27, 0x07, 0x74, 0xee, 0xcd, 0xb6, 0xe3, 0xc7, 0x5e, 0xbc, 0x2f, 0x14, 0x42, 0x92, 0x08, 0x4e,
+ 0xe8, 0xd9, 0x5f, 0xb1, 0x60, 0x42, 0xae, 0xfb, 0x79, 0xd7, 0x0d, 0x49, 0x14, 0xa1, 0x19, 0x28,
+ 0x78, 0x2d, 0xd1, 0x4b, 0x10, 0xbd, 0x2c, 0x54, 0x6b, 0xb8, 0xe0, 0xb5, 0x24, 0x5b, 0xc6, 0x0e,
+ 0xc2, 0xa2, 0xa9, 0x1b, 0xbb, 0x2e, 0xca, 0xb1, 0xc2, 0x40, 0x57, 0xa0, 0xe4, 0x07, 0x2e, 0x37,
+ 0xd7, 0xe2, 0x57, 0x1a, 0x5b, 0x60, 0xab, 0xa2, 0x0c, 0x2b, 0x28, 0xaa, 0x41, 0x99, 0x5b, 0x32,
+ 0x25, 0x8b, 0xb6, 0x2f, 0x7b, 0x28, 0xf6, 0x65, 0x6b, 0xb2, 0x26, 0x4e, 0x88, 0xd8, 0xdf, 0x67,
+ 0xc1, 0xa8, 0xfc, 0xb2, 0x3e, 0x79, 0x4e, 0xba, 0xb5, 0x12, 0x7e, 0x33, 0xd9, 0x5a, 0x94, 0x67,
+ 0x64, 0x10, 0x83, 0x55, 0x2c, 0x1e, 0x85, 0x55, 0xb4, 0x7f, 0xa8, 0x00, 0xe3, 0xb2, 0x3b, 0xf5,
+ 0xf6, 0x7a, 0x44, 0x62, 0xb4, 0x06, 0x65, 0x87, 0x0f, 0x39, 0x91, 0x2b, 0xf6, 0xf1, 0x6c, 0xa1,
+ 0x80, 0x31, 0x3f, 0xc9, 0xed, 0x3d, 0x2f, 0x6b, 0xe3, 0x84, 0x10, 0x6a, 0xc2, 0x94, 0x1f, 0xc4,
+ 0xec, 0x24, 0x57, 0xf0, 0x6e, 0xaa, 0x97, 0x34, 0xf5, 0x73, 0x82, 0xfa, 0xd4, 0x6a, 0x9a, 0x0a,
+ 0xee, 0x24, 0x8c, 0x96, 0xa4, 0xa0, 0xa5, 0x98, 0xff, 0xb2, 0xd7, 0x67, 0x21, 0x5b, 0xce, 0x62,
+ 0xff, 0xb2, 0x05, 0x65, 0x89, 0x76, 0x12, 0x5a, 0xb6, 0x15, 0x18, 0x8e, 0xd8, 0x24, 0xc8, 0xa1,
+ 0xb1, 0xbb, 0x75, 0x9c, 0xcf, 0x57, 0x72, 0x41, 0xf1, 0xff, 0x11, 0x96, 0x34, 0x98, 0x9c, 0x5d,
+ 0x75, 0xff, 0x03, 0x22, 0x67, 0x57, 0xfd, 0xc9, 0xb9, 0x61, 0x7e, 0x9f, 0xf5, 0x59, 0x13, 0x5c,
+ 0x51, 0x3e, 0xaa, 0x15, 0x92, 0x0d, 0xef, 0x5e, 0x9a, 0x8f, 0xaa, 0xb1, 0x52, 0x2c, 0xa0, 0xe8,
+ 0x6d, 0x18, 0x6d, 0x48, 0x01, 0x6b, 0xb2, 0x5d, 0x2f, 0x77, 0x15, 0xf6, 0x2b, 0xbd, 0x10, 0x17,
+ 0x6c, 0x2c, 0x6a, 0xf5, 0xb1, 0x41, 0xcd, 0x54, 0xf3, 0x17, 0x7b, 0xa9, 0xf9, 0x13, 0xba, 0xf9,
+ 0x4a, 0xef, 0x1f, 0xb1, 0x60, 0x88, 0x0b, 0xd6, 0xfa, 0x93, 0x6b, 0x6a, 0x6a, 0xb2, 0x64, 0xec,
+ 0xee, 0xd0, 0x42, 0xa1, 0xf6, 0x42, 0x2b, 0x50, 0x66, 0x3f, 0x98, 0x60, 0xb0, 0x98, 0x6f, 0x15,
+ 0xcf, 0x5b, 0xd5, 0x3b, 0x78, 0x47, 0x56, 0xc3, 0x09, 0x05, 0xfb, 0x07, 0x8a, 0xf4, 0xa8, 0x4a,
+ 0x50, 0x8d, 0x1b, 0xdc, 0x7a, 0x78, 0x37, 0x78, 0xe1, 0x61, 0xdd, 0xe0, 0x9b, 0x30, 0xd1, 0xd0,
+ 0x94, 0x6a, 0xc9, 0x4c, 0x5e, 0xe9, 0xba, 0x48, 0x34, 0xfd, 0x1b, 0x17, 0x99, 0x2c, 0x9a, 0x44,
+ 0x70, 0x9a, 0x2a, 0xfa, 0x66, 0x18, 0xe5, 0xf3, 0x2c, 0x5a, 0xe1, 0x96, 0x12, 0x1f, 0xcb, 0x5f,
+ 0x2f, 0x7a, 0x13, 0x5c, 0xc4, 0xa6, 0x55, 0xc7, 0x06, 0x31, 0xfb, 0x8f, 0x2c, 0x40, 0x4b, 0xad,
+ 0x2d, 0xb2, 0x43, 0x42, 0xa7, 0x99, 0xc8, 0xc6, 0xff, 0x92, 0x05, 0xd3, 0xa4, 0xa3, 0x78, 0x31,
+ 0xd8, 0xd9, 0x11, 0x2f, 0x90, 0x9c, 0x47, 0xf2, 0x52, 0x4e, 0x1d, 0xe5, 0x36, 0x30, 0x9d, 0x87,
+ 0x81, 0x73, 0xdb, 0x43, 0x2b, 0x70, 0x8a, 0x5f, 0x79, 0x0a, 0xa0, 0xd9, 0x46, 0x3f, 0x2a, 0x08,
+ 0x9f, 0x5a, 0xeb, 0x44, 0xc1, 0x59, 0xf5, 0xec, 0xef, 0x18, 0x85, 0xdc, 0x5e, 0x7c, 0xa8, 0x14,
+ 0xf8, 0x50, 0x29, 0xf0, 0xa1, 0x52, 0xe0, 0x43, 0xa5, 0xc0, 0x87, 0x4a, 0x81, 0xaf, 0x7b, 0xa5,
+ 0xc0, 0x1f, 0x58, 0x70, 0xaa, 0xf3, 0x1a, 0x38, 0x09, 0xc6, 0xbc, 0x0d, 0xa7, 0x3a, 0xef, 0xba,
+ 0xae, 0x76, 0x70, 0x9d, 0xfd, 0x4c, 0xee, 0xbd, 0x8c, 0x6f, 0xc0, 0x59, 0xf4, 0xed, 0x5f, 0x28,
+ 0xc1, 0xe0, 0xd2, 0x2e, 0xf1, 0xe3, 0x13, 0xf8, 0xc4, 0x06, 0x8c, 0x7b, 0xfe, 0x6e, 0xd0, 0xdc,
+ 0x25, 0x2e, 0x87, 0x1f, 0xe5, 0xbd, 0x7b, 0x56, 0x90, 0x1e, 0xaf, 0x1a, 0x24, 0x70, 0x8a, 0xe4,
+ 0xc3, 0x90, 0x39, 0x5f, 0x83, 0x21, 0x7e, 0x3b, 0x08, 0x81, 0x73, 0xe6, 0x65, 0xc0, 0x06, 0x51,
+ 0xdc, 0x79, 0x89, 0x3c, 0x9c, 0xdf, 0x3e, 0xa2, 0x3a, 0x7a, 0x07, 0xc6, 0x37, 0xbc, 0x30, 0x8a,
+ 0xd7, 0xbc, 0x1d, 0x12, 0xc5, 0xce, 0x4e, 0xeb, 0x01, 0x64, 0xcc, 0x6a, 0x1c, 0x96, 0x0d, 0x4a,
+ 0x38, 0x45, 0x19, 0x6d, 0xc2, 0x58, 0xd3, 0xd1, 0x9b, 0x1a, 0x3e, 0x72, 0x53, 0xea, 0xda, 0xb9,
+ 0xa9, 0x13, 0xc2, 0x26, 0x5d, 0xba, 0x4f, 0x1b, 0x4c, 0x4c, 0x5a, 0x62, 0xc2, 0x03, 0xb5, 0x4f,
+ 0xb9, 0x7c, 0x94, 0xc3, 0x28, 0x07, 0xc5, 0x2c, 0x63, 0xcb, 0x26, 0x07, 0xa5, 0xd9, 0xbf, 0x7e,
+ 0x1e, 0xca, 0x84, 0x0e, 0x21, 0x25, 0x2c, 0x6e, 0xae, 0xab, 0xfd, 0xf5, 0x75, 0xc5, 0x6b, 0x84,
+ 0x81, 0x29, 0xdd, 0x5f, 0x92, 0x94, 0x70, 0x42, 0x14, 0x2d, 0xc2, 0x50, 0x44, 0x42, 0x8f, 0x44,
+ 0xe2, 0x0e, 0xeb, 0x32, 0x8d, 0x0c, 0x8d, 0x3b, 0x95, 0xf0, 0xdf, 0x58, 0x54, 0xa5, 0xcb, 0xcb,
+ 0x61, 0x82, 0x4f, 0x76, 0xcb, 0x68, 0xcb, 0x6b, 0x9e, 0x95, 0x62, 0x01, 0x45, 0x6f, 0xc0, 0x70,
+ 0x48, 0x9a, 0x4c, 0x7d, 0x34, 0xd6, 0xff, 0x22, 0xe7, 0xda, 0x28, 0x5e, 0x0f, 0x4b, 0x02, 0xe8,
+ 0x06, 0xa0, 0x90, 0x50, 0x0e, 0xcc, 0xf3, 0x37, 0x95, 0xbd, 0xa8, 0x38, 0xc1, 0xd5, 0x8e, 0xc7,
+ 0x09, 0x86, 0xf4, 0xef, 0xc1, 0x19, 0xd5, 0xd0, 0x35, 0x98, 0x52, 0xa5, 0x55, 0x3f, 0x8a, 0x1d,
+ 0x7a, 0x72, 0x4e, 0x30, 0x5a, 0x4a, 0x00, 0x82, 0xd3, 0x08, 0xb8, 0xb3, 0x8e, 0xfd, 0x93, 0x16,
+ 0xf0, 0x71, 0x3e, 0x81, 0x67, 0xff, 0xeb, 0xe6, 0xb3, 0xff, 0x5c, 0xee, 0xcc, 0xe5, 0x3c, 0xf9,
+ 0xef, 0x5b, 0x30, 0xa2, 0xcd, 0x6c, 0xb2, 0x66, 0xad, 0x2e, 0x6b, 0xb6, 0x0d, 0x93, 0x74, 0xa5,
+ 0xdf, 0x5a, 0x8f, 0x48, 0xb8, 0x4b, 0x5c, 0xb6, 0x30, 0x0b, 0x0f, 0xb6, 0x30, 0x95, 0x21, 0xdb,
+ 0xcd, 0x14, 0x41, 0xdc, 0xd1, 0x04, 0x7a, 0x49, 0xea, 0x52, 0x8a, 0x86, 0x1d, 0x38, 0xd7, 0x93,
+ 0x1c, 0x1e, 0xcc, 0x4e, 0x6a, 0x1f, 0xa2, 0xeb, 0x4e, 0xec, 0xcf, 0xcb, 0x6f, 0x54, 0x06, 0x83,
+ 0x0d, 0xb5, 0x58, 0x52, 0x06, 0x83, 0x6a, 0x39, 0xe0, 0x04, 0x87, 0xee, 0xd1, 0xad, 0x20, 0x8a,
+ 0xd3, 0x06, 0x83, 0xd7, 0x83, 0x28, 0xc6, 0x0c, 0x62, 0x3f, 0x0f, 0xb0, 0x74, 0x8f, 0x34, 0xf8,
+ 0x52, 0xd7, 0x9f, 0x33, 0x56, 0xfe, 0x73, 0xc6, 0xfe, 0x0f, 0x16, 0x8c, 0x2f, 0x2f, 0x1a, 0x12,
+ 0xe1, 0x39, 0x00, 0xfe, 0x06, 0xbb, 0x7b, 0x77, 0x55, 0x6a, 0xdb, 0xb9, 0xc2, 0x54, 0x95, 0x62,
+ 0x0d, 0x03, 0x9d, 0x83, 0x62, 0xb3, 0xed, 0x0b, 0xe9, 0xe4, 0x30, 0xbd, 0xb0, 0x6f, 0xb6, 0x7d,
+ 0x4c, 0xcb, 0x34, 0x27, 0x84, 0x62, 0xdf, 0x4e, 0x08, 0x3d, 0x83, 0x01, 0xa0, 0x59, 0x18, 0xdc,
+ 0xdb, 0xf3, 0x5c, 0xee, 0x72, 0x29, 0x2c, 0x01, 0xee, 0xde, 0xad, 0x56, 0x22, 0xcc, 0xcb, 0xed,
+ 0x2f, 0x15, 0x61, 0x66, 0xb9, 0x49, 0xee, 0xbd, 0x4f, 0xb7, 0xd3, 0x7e, 0x5d, 0x28, 0x8e, 0x26,
+ 0x1a, 0x3a, 0xaa, 0x9b, 0x4c, 0xef, 0xf1, 0xd8, 0x80, 0x61, 0x6e, 0x2f, 0x27, 0x9d, 0x50, 0x5f,
+ 0xcd, 0x6a, 0x3d, 0x7f, 0x40, 0xe6, 0xb8, 0xdd, 0x9d, 0xf0, 0xa1, 0x53, 0x37, 0xad, 0x28, 0xc5,
+ 0x92, 0xf8, 0xcc, 0x2b, 0x30, 0xaa, 0x63, 0x1e, 0xc9, 0x61, 0xed, 0x2f, 0x16, 0x61, 0x92, 0xf6,
+ 0xe0, 0xa1, 0x4e, 0xc4, 0xed, 0xce, 0x89, 0x38, 0x6e, 0xa7, 0xa5, 0xde, 0xb3, 0xf1, 0x76, 0x7a,
+ 0x36, 0x9e, 0xcd, 0x9b, 0x8d, 0x93, 0x9e, 0x83, 0x6f, 0xb7, 0xe0, 0xd4, 0x72, 0x33, 0x68, 0x6c,
+ 0xa7, 0x1c, 0x8b, 0x5e, 0x84, 0x11, 0x7a, 0x8e, 0x47, 0x86, 0xcf, 0xbb, 0x11, 0x05, 0x41, 0x80,
+ 0xb0, 0x8e, 0xa7, 0x55, 0xbb, 0x7d, 0xbb, 0x5a, 0xc9, 0x0a, 0x9e, 0x20, 0x40, 0x58, 0xc7, 0xb3,
+ 0x7f, 0xc3, 0x82, 0x0b, 0xd7, 0x16, 0x97, 0x92, 0xa5, 0xd8, 0x11, 0xbf, 0xe1, 0x32, 0x0c, 0xb5,
+ 0x5c, 0xad, 0x2b, 0x89, 0xc0, 0xb7, 0xc2, 0x7a, 0x21, 0xa0, 0x1f, 0x94, 0xd8, 0x24, 0x3f, 0x61,
+ 0xc1, 0xa9, 0x6b, 0x5e, 0x4c, 0xaf, 0xe5, 0x74, 0x24, 0x01, 0x7a, 0x2f, 0x47, 0x5e, 0x1c, 0x84,
+ 0xfb, 0xe9, 0x48, 0x02, 0x58, 0x41, 0xb0, 0x86, 0xc5, 0x5b, 0xde, 0xf5, 0x98, 0xa5, 0x76, 0xc1,
+ 0xd4, 0x63, 0x61, 0x51, 0x8e, 0x15, 0x06, 0xfd, 0x30, 0xd7, 0x0b, 0x99, 0xd4, 0x70, 0x5f, 0x9c,
+ 0xb0, 0xea, 0xc3, 0x2a, 0x12, 0x80, 0x13, 0x1c, 0xfa, 0x80, 0x9a, 0xbd, 0xd6, 0x6c, 0x47, 0x31,
+ 0x09, 0x37, 0xa2, 0x9c, 0xd3, 0xf1, 0x79, 0x28, 0x13, 0x29, 0xa3, 0x17, 0xbd, 0x56, 0xac, 0xa6,
+ 0x12, 0xde, 0xf3, 0x80, 0x06, 0x0a, 0xaf, 0x0f, 0x37, 0xc5, 0xa3, 0xf9, 0x99, 0x2d, 0x03, 0x22,
+ 0x7a, 0x5b, 0x7a, 0x84, 0x07, 0xe6, 0x2a, 0xbe, 0xd4, 0x01, 0xc5, 0x19, 0x35, 0xec, 0x1f, 0xb6,
+ 0xe0, 0x8c, 0xfa, 0xe0, 0x0f, 0xdc, 0x67, 0xda, 0x3f, 0x5b, 0x80, 0xb1, 0xeb, 0x6b, 0x6b, 0xb5,
+ 0x6b, 0x24, 0x16, 0xd7, 0x76, 0x6f, 0x35, 0x3a, 0xd6, 0xb4, 0x81, 0xdd, 0x5e, 0x81, 0xed, 0xd8,
+ 0x6b, 0xce, 0xf1, 0x40, 0x41, 0x73, 0x55, 0x3f, 0xbe, 0x15, 0xd6, 0xe3, 0xd0, 0xf3, 0x37, 0x33,
+ 0xf5, 0x87, 0x92, 0xb9, 0x28, 0xe6, 0x31, 0x17, 0xe8, 0x79, 0x18, 0x62, 0x91, 0x8a, 0xe4, 0x24,
+ 0x3c, 0xaa, 0x1e, 0x51, 0xac, 0xf4, 0xf0, 0x60, 0xb6, 0x7c, 0x1b, 0x57, 0xf9, 0x1f, 0x2c, 0x50,
+ 0xd1, 0x6d, 0x18, 0xd9, 0x8a, 0xe3, 0xd6, 0x75, 0xe2, 0xb8, 0xf4, 0xb5, 0xcc, 0x8f, 0xc3, 0x8b,
+ 0x59, 0xc7, 0x21, 0x1d, 0x04, 0x8e, 0x96, 0x9c, 0x20, 0x49, 0x59, 0x84, 0x75, 0x3a, 0x76, 0x1d,
+ 0x20, 0x81, 0x1d, 0x93, 0xee, 0xc4, 0xfe, 0x3d, 0x0b, 0x86, 0x79, 0xd0, 0x88, 0x10, 0xbd, 0x06,
+ 0x03, 0xe4, 0x1e, 0x69, 0x08, 0x56, 0x39, 0xb3, 0xc3, 0x09, 0xa7, 0xc5, 0x65, 0xc0, 0xf4, 0x3f,
+ 0x66, 0xb5, 0xd0, 0x75, 0x18, 0xa6, 0xbd, 0xbd, 0xa6, 0x22, 0x68, 0x3c, 0x96, 0xf7, 0xc5, 0x6a,
+ 0xda, 0x39, 0x73, 0x26, 0x8a, 0xb0, 0xac, 0xce, 0xb4, 0xcf, 0x8d, 0x56, 0x9d, 0x9e, 0xd8, 0x71,
+ 0x37, 0xc6, 0x62, 0x6d, 0xb1, 0xc6, 0x91, 0x04, 0x35, 0xae, 0x7d, 0x96, 0x85, 0x38, 0x21, 0x62,
+ 0xaf, 0x41, 0x99, 0x4e, 0xea, 0x7c, 0xd3, 0x73, 0xba, 0x2b, 0xd4, 0x9f, 0x82, 0xb2, 0x54, 0x97,
+ 0x47, 0xc2, 0x59, 0x9c, 0x51, 0x95, 0xda, 0xf4, 0x08, 0x27, 0x70, 0x7b, 0x03, 0x4e, 0x33, 0xe3,
+ 0x47, 0x27, 0xde, 0x32, 0xf6, 0x58, 0xef, 0xc5, 0xfc, 0xb4, 0x78, 0x79, 0xf2, 0x99, 0x99, 0xd6,
+ 0xfc, 0x31, 0x47, 0x25, 0xc5, 0xe4, 0x15, 0x6a, 0x7f, 0x75, 0x00, 0x1e, 0xad, 0xd6, 0xf3, 0xe3,
+ 0x89, 0xbc, 0x0c, 0xa3, 0x9c, 0x2f, 0xa5, 0x4b, 0xdb, 0x69, 0x8a, 0x76, 0x95, 0xf0, 0x77, 0x4d,
+ 0x83, 0x61, 0x03, 0x13, 0x5d, 0x80, 0xa2, 0xf7, 0xae, 0x9f, 0x76, 0x6d, 0xaa, 0xbe, 0xb9, 0x8a,
+ 0x69, 0x39, 0x05, 0x53, 0x16, 0x97, 0xdf, 0x1d, 0x0a, 0xac, 0xd8, 0xdc, 0xd7, 0x61, 0xdc, 0x8b,
+ 0x1a, 0x91, 0x57, 0xf5, 0xe9, 0x39, 0xa3, 0x9d, 0x54, 0x4a, 0x2a, 0x42, 0x3b, 0xad, 0xa0, 0x38,
+ 0x85, 0xad, 0x5d, 0x64, 0x83, 0x7d, 0xb3, 0xc9, 0x3d, 0xbd, 0xa7, 0xe9, 0x0b, 0xa0, 0xc5, 0xbe,
+ 0x2e, 0x62, 0x52, 0x7c, 0xf1, 0x02, 0xe0, 0x1f, 0x1c, 0x61, 0x09, 0xa3, 0x4f, 0xce, 0xc6, 0x96,
+ 0xd3, 0x9a, 0x6f, 0xc7, 0x5b, 0x15, 0x2f, 0x6a, 0x04, 0xbb, 0x24, 0xdc, 0x67, 0xd2, 0x82, 0x52,
+ 0xf2, 0xe4, 0x54, 0x80, 0xc5, 0xeb, 0xf3, 0x35, 0x8a, 0x89, 0x3b, 0xeb, 0xa0, 0x79, 0x98, 0x90,
+ 0x85, 0x75, 0x12, 0xb1, 0x2b, 0x6c, 0x84, 0x91, 0x51, 0xce, 0x46, 0xa2, 0x58, 0x11, 0x49, 0xe3,
+ 0x9b, 0x9c, 0x34, 0x1c, 0x07, 0x27, 0xfd, 0x12, 0x8c, 0x79, 0xbe, 0x17, 0x7b, 0x4e, 0x1c, 0x70,
+ 0x15, 0x14, 0x17, 0x0c, 0x30, 0xd9, 0x7a, 0x55, 0x07, 0x60, 0x13, 0xcf, 0xfe, 0x6f, 0x03, 0x30,
+ 0xc5, 0xa6, 0xed, 0xc3, 0x15, 0xf6, 0xf5, 0xb4, 0xc2, 0x6e, 0x77, 0xae, 0xb0, 0xe3, 0x78, 0x22,
+ 0x3c, 0xf0, 0x32, 0x7b, 0x07, 0xca, 0xca, 0xbf, 0x4a, 0x3a, 0x58, 0x5a, 0x39, 0x0e, 0x96, 0xbd,
+ 0xb9, 0x0f, 0x69, 0xa2, 0x56, 0xcc, 0x34, 0x51, 0xfb, 0x9b, 0x16, 0x24, 0x3a, 0x15, 0x74, 0x1d,
+ 0xca, 0xad, 0x80, 0x59, 0x5e, 0x86, 0xd2, 0x9c, 0xf9, 0xd1, 0xcc, 0x8b, 0x8a, 0x5f, 0x8a, 0xfc,
+ 0xe3, 0x6b, 0xb2, 0x06, 0x4e, 0x2a, 0xa3, 0x05, 0x18, 0x6e, 0x85, 0xa4, 0x1e, 0xb3, 0xb0, 0x22,
+ 0x3d, 0xe9, 0xf0, 0x35, 0xc2, 0xf1, 0xb1, 0xac, 0x68, 0xff, 0x9c, 0x05, 0xc0, 0xad, 0xc0, 0x1c,
+ 0x7f, 0x93, 0x9c, 0x80, 0xb8, 0xbb, 0x02, 0x03, 0x51, 0x8b, 0x34, 0xba, 0xd9, 0xc4, 0x26, 0xfd,
+ 0xa9, 0xb7, 0x48, 0x23, 0x19, 0x70, 0xfa, 0x0f, 0xb3, 0xda, 0xf6, 0x77, 0x02, 0x8c, 0x27, 0x68,
+ 0xd5, 0x98, 0xec, 0xa0, 0x67, 0x8c, 0x30, 0x03, 0xe7, 0x52, 0x61, 0x06, 0xca, 0x0c, 0x5b, 0x93,
+ 0xac, 0xbe, 0x03, 0xc5, 0x1d, 0xe7, 0x9e, 0x10, 0x9d, 0x3d, 0xd5, 0xbd, 0x1b, 0x94, 0xfe, 0xdc,
+ 0x8a, 0x73, 0x8f, 0x3f, 0x12, 0x9f, 0x92, 0x0b, 0x64, 0xc5, 0xb9, 0x77, 0xc8, 0x2d, 0x5f, 0xd9,
+ 0x21, 0x75, 0xd3, 0x8b, 0xe2, 0x2f, 0xfc, 0xd7, 0xe4, 0x3f, 0x5b, 0x76, 0xb4, 0x11, 0xd6, 0x96,
+ 0xe7, 0x0b, 0x9b, 0xa8, 0xbe, 0xda, 0xf2, 0xfc, 0x74, 0x5b, 0x9e, 0xdf, 0x47, 0x5b, 0x9e, 0x8f,
+ 0xde, 0x83, 0x61, 0x61, 0x7f, 0x28, 0xc2, 0xfa, 0x5c, 0xed, 0xa3, 0x3d, 0x61, 0xbe, 0xc8, 0xdb,
+ 0xbc, 0x2a, 0x1f, 0xc1, 0xa2, 0xb4, 0x67, 0xbb, 0xb2, 0x41, 0xf4, 0xd7, 0x2c, 0x18, 0x17, 0xbf,
+ 0x31, 0x79, 0xb7, 0x4d, 0xa2, 0x58, 0xf0, 0x9e, 0x9f, 0xec, 0xbf, 0x0f, 0xa2, 0x22, 0xef, 0xca,
+ 0x27, 0xe5, 0x31, 0x6b, 0x02, 0x7b, 0xf6, 0x28, 0xd5, 0x0b, 0xf4, 0x0f, 0x2c, 0x38, 0xbd, 0xe3,
+ 0xdc, 0xe3, 0x2d, 0xf2, 0x32, 0xec, 0xc4, 0x5e, 0x20, 0x54, 0xff, 0xaf, 0xf5, 0x37, 0xfd, 0x1d,
+ 0xd5, 0x79, 0x27, 0xa5, 0x7e, 0xf2, 0x74, 0x16, 0x4a, 0xcf, 0xae, 0x66, 0xf6, 0x6b, 0x66, 0x03,
+ 0x4a, 0x72, 0xbd, 0x65, 0x88, 0x1a, 0x2a, 0x3a, 0x63, 0x7d, 0x64, 0xf3, 0x4f, 0xdd, 0xd7, 0x9f,
+ 0xb6, 0x23, 0xd6, 0xda, 0x43, 0x6d, 0xe7, 0x1d, 0x18, 0xd5, 0xd7, 0xd8, 0x43, 0x6d, 0xeb, 0x5d,
+ 0x38, 0x95, 0xb1, 0x96, 0x1e, 0x6a, 0x93, 0x7b, 0x70, 0x2e, 0x77, 0x7d, 0x3c, 0xcc, 0x86, 0xed,
+ 0x9f, 0xb5, 0xf4, 0x73, 0xf0, 0x04, 0x74, 0x0e, 0x8b, 0xa6, 0xce, 0xe1, 0x62, 0xf7, 0x9d, 0x93,
+ 0xa3, 0x78, 0x78, 0x5b, 0xef, 0x34, 0x3d, 0xd5, 0xd1, 0x1b, 0x30, 0xd4, 0xa4, 0x25, 0xd2, 0xf0,
+ 0xd5, 0xee, 0xbd, 0x23, 0x13, 0x5e, 0x8a, 0x95, 0x47, 0x58, 0x50, 0xb0, 0x7f, 0xd1, 0x82, 0x81,
+ 0x13, 0x18, 0x09, 0x6c, 0x8e, 0xc4, 0x33, 0xb9, 0xa4, 0x45, 0xc4, 0xe1, 0x39, 0xec, 0xec, 0x2d,
+ 0xdd, 0x8b, 0x89, 0x1f, 0xb1, 0xa7, 0x62, 0xe6, 0xc0, 0xfc, 0x7f, 0x70, 0xea, 0x66, 0xe0, 0xb8,
+ 0x0b, 0x4e, 0xd3, 0xf1, 0x1b, 0x24, 0xac, 0xfa, 0x9b, 0x47, 0xb2, 0xc0, 0x2e, 0xf4, 0xb2, 0xc0,
+ 0xb6, 0xb7, 0x00, 0xe9, 0x0d, 0x08, 0x57, 0x16, 0x0c, 0xc3, 0x1e, 0x6f, 0x4a, 0x0c, 0xff, 0x13,
+ 0xd9, 0xac, 0x59, 0x47, 0xcf, 0x34, 0x27, 0x0d, 0x5e, 0x80, 0x25, 0x21, 0xfb, 0x65, 0xc8, 0xf4,
+ 0x87, 0xef, 0x2d, 0x36, 0xb0, 0x3f, 0x03, 0x53, 0xac, 0xe6, 0x11, 0x9f, 0xb4, 0x76, 0x4a, 0x2a,
+ 0x99, 0x11, 0xfc, 0xce, 0xfe, 0xa2, 0x05, 0x13, 0xab, 0xa9, 0x98, 0x60, 0x97, 0x99, 0x02, 0x34,
+ 0x43, 0x18, 0x5e, 0x67, 0xa5, 0x58, 0x40, 0x8f, 0x5d, 0x06, 0xf5, 0x67, 0x16, 0x24, 0x21, 0x2a,
+ 0x4e, 0x80, 0xf1, 0x5a, 0x34, 0x18, 0xaf, 0x4c, 0xd9, 0x88, 0xea, 0x4e, 0x1e, 0xdf, 0x85, 0x6e,
+ 0xa8, 0x78, 0x4c, 0x5d, 0xc4, 0x22, 0x09, 0x19, 0x1e, 0xbd, 0x67, 0xdc, 0x0c, 0xda, 0x24, 0x23,
+ 0x34, 0xd9, 0xff, 0xb9, 0x00, 0x48, 0xe1, 0xf6, 0x1d, 0x2f, 0xaa, 0xb3, 0xc6, 0xf1, 0xc4, 0x8b,
+ 0xda, 0x05, 0xc4, 0x54, 0xf8, 0xa1, 0xe3, 0x47, 0x9c, 0xac, 0x27, 0xa4, 0x6e, 0x47, 0xb3, 0x0f,
+ 0x98, 0x11, 0x4d, 0xa2, 0x9b, 0x1d, 0xd4, 0x70, 0x46, 0x0b, 0x9a, 0x69, 0xc6, 0x60, 0xbf, 0xa6,
+ 0x19, 0x43, 0x3d, 0xdc, 0xd5, 0x7e, 0xc6, 0x82, 0x31, 0x35, 0x4c, 0x1f, 0x10, 0xfb, 0x73, 0xd5,
+ 0x9f, 0x9c, 0xa3, 0xaf, 0xa6, 0x75, 0x99, 0x5d, 0x09, 0xdf, 0xc8, 0xdc, 0x0e, 0x9d, 0xa6, 0xf7,
+ 0x1e, 0x51, 0xd1, 0xfa, 0x66, 0x85, 0x1b, 0xa1, 0x28, 0x3d, 0x3c, 0x98, 0x1d, 0x53, 0xff, 0x78,
+ 0x74, 0xe0, 0xa4, 0x8a, 0xfd, 0x63, 0x74, 0xb3, 0x9b, 0x4b, 0x11, 0xbd, 0x08, 0x83, 0xad, 0x2d,
+ 0x27, 0x22, 0x29, 0xa7, 0x9b, 0xc1, 0x1a, 0x2d, 0x3c, 0x3c, 0x98, 0x1d, 0x57, 0x15, 0x58, 0x09,
+ 0xe6, 0xd8, 0xfd, 0x47, 0xe1, 0xea, 0x5c, 0x9c, 0x3d, 0xa3, 0x70, 0xfd, 0x91, 0x05, 0x03, 0xab,
+ 0x81, 0x7b, 0x12, 0x47, 0xc0, 0xeb, 0xc6, 0x11, 0x70, 0x3e, 0x2f, 0x70, 0x7b, 0xee, 0xee, 0x5f,
+ 0x4e, 0xed, 0xfe, 0x8b, 0xb9, 0x14, 0xba, 0x6f, 0xfc, 0x1d, 0x18, 0x61, 0xe1, 0xe0, 0x85, 0x83,
+ 0xd1, 0xf3, 0xc6, 0x86, 0x9f, 0x4d, 0x6d, 0xf8, 0x09, 0x0d, 0x55, 0xdb, 0xe9, 0x4f, 0xc2, 0xb0,
+ 0x70, 0x72, 0x49, 0x7b, 0x6f, 0x0a, 0x5c, 0x2c, 0xe1, 0xf6, 0x8f, 0x14, 0xc1, 0x08, 0x3f, 0x8f,
+ 0x7e, 0xd9, 0x82, 0xb9, 0x90, 0x1b, 0xbf, 0xba, 0x95, 0x76, 0xe8, 0xf9, 0x9b, 0xf5, 0xc6, 0x16,
+ 0x71, 0xdb, 0x4d, 0xcf, 0xdf, 0xac, 0x6e, 0xfa, 0x81, 0x2a, 0x5e, 0xba, 0x47, 0x1a, 0x6d, 0xa6,
+ 0xbe, 0xea, 0x11, 0xeb, 0x5e, 0x19, 0x91, 0x3f, 0x77, 0xff, 0x60, 0x76, 0x0e, 0x1f, 0x89, 0x36,
+ 0x3e, 0x62, 0x5f, 0xd0, 0x6f, 0x58, 0x70, 0x95, 0x47, 0x65, 0xef, 0xbf, 0xff, 0x5d, 0xde, 0xb9,
+ 0x35, 0x49, 0x2a, 0x21, 0xb2, 0x46, 0xc2, 0x9d, 0x85, 0x97, 0xc4, 0x80, 0x5e, 0xad, 0x1d, 0xad,
+ 0x2d, 0x7c, 0xd4, 0xce, 0xd9, 0xff, 0xa2, 0x08, 0x63, 0x22, 0xb4, 0x93, 0xb8, 0x03, 0x5e, 0x34,
+ 0x96, 0xc4, 0x63, 0xa9, 0x25, 0x31, 0x65, 0x20, 0x1f, 0xcf, 0xf1, 0x1f, 0xc1, 0x14, 0x3d, 0x9c,
+ 0xaf, 0x13, 0x27, 0x8c, 0xd7, 0x89, 0xc3, 0x2d, 0xae, 0x8a, 0x47, 0x3e, 0xfd, 0x95, 0x60, 0xed,
+ 0x66, 0x9a, 0x18, 0xee, 0xa4, 0xff, 0xf5, 0x74, 0xe7, 0xf8, 0x30, 0xd9, 0x11, 0x9d, 0xeb, 0x2d,
+ 0x28, 0x2b, 0x0f, 0x0d, 0x71, 0xe8, 0x74, 0x0f, 0x72, 0x97, 0xa6, 0xc0, 0x85, 0x5f, 0x89, 0x77,
+ 0x50, 0x42, 0xce, 0xfe, 0x87, 0x05, 0xa3, 0x41, 0x3e, 0x89, 0xab, 0x50, 0x72, 0xa2, 0xc8, 0xdb,
+ 0xf4, 0x89, 0x2b, 0x76, 0xec, 0x47, 0xf3, 0x76, 0xac, 0xd1, 0x0c, 0xf3, 0x92, 0x99, 0x17, 0x35,
+ 0xb1, 0xa2, 0x81, 0xae, 0x73, 0xbb, 0xb6, 0x5d, 0xf9, 0x52, 0xeb, 0x8f, 0x1a, 0x48, 0xcb, 0xb7,
+ 0x5d, 0x82, 0x45, 0x7d, 0xf4, 0x59, 0x6e, 0x78, 0x78, 0xc3, 0x0f, 0xf6, 0xfc, 0x6b, 0x41, 0x20,
+ 0xc3, 0x27, 0xf4, 0x47, 0x70, 0x4a, 0x9a, 0x1b, 0xaa, 0xea, 0xd8, 0xa4, 0xd6, 0x5f, 0x04, 0xcb,
+ 0x6f, 0x81, 0x53, 0x94, 0xb4, 0xe9, 0xdd, 0x1c, 0x21, 0x02, 0x13, 0x22, 0x6e, 0x98, 0x2c, 0x13,
+ 0x63, 0x97, 0xf9, 0x08, 0x33, 0x6b, 0x27, 0x12, 0xe0, 0x1b, 0x26, 0x09, 0x9c, 0xa6, 0x69, 0xff,
+ 0xb8, 0x05, 0xcc, 0xd3, 0xf3, 0x04, 0xf8, 0x91, 0x4f, 0x99, 0xfc, 0xc8, 0x74, 0xde, 0x20, 0xe7,
+ 0xb0, 0x22, 0x2f, 0xf0, 0x95, 0x55, 0x0b, 0x83, 0x7b, 0xfb, 0xc2, 0xe8, 0xa3, 0xf7, 0xfb, 0xc3,
+ 0xfe, 0x3f, 0x16, 0x3f, 0xc4, 0x94, 0xff, 0x04, 0xfa, 0x56, 0x28, 0x35, 0x9c, 0x96, 0xd3, 0xe0,
+ 0xb9, 0x52, 0x72, 0x65, 0x71, 0x46, 0xa5, 0xb9, 0x45, 0x51, 0x83, 0xcb, 0x96, 0x64, 0xfc, 0xb9,
+ 0x92, 0x2c, 0xee, 0x29, 0x4f, 0x52, 0x4d, 0xce, 0x6c, 0xc3, 0x98, 0x41, 0xec, 0xa1, 0x0a, 0x22,
+ 0xbe, 0x95, 0x5f, 0xb1, 0x2a, 0x5e, 0xe2, 0x0e, 0x4c, 0xf9, 0xda, 0x7f, 0x7a, 0xa1, 0xc8, 0xc7,
+ 0xe5, 0x47, 0x7b, 0x5d, 0xa2, 0xec, 0xf6, 0xd1, 0xfc, 0x4e, 0x53, 0x64, 0x70, 0x27, 0x65, 0xfb,
+ 0x47, 0x2d, 0x78, 0x44, 0x47, 0xd4, 0x5c, 0x5b, 0x7a, 0x49, 0xf7, 0x2b, 0x50, 0x0a, 0x5a, 0x24,
+ 0x74, 0xe2, 0x20, 0x14, 0xb7, 0xc6, 0x15, 0x39, 0xe8, 0xb7, 0x44, 0xf9, 0xa1, 0x88, 0x34, 0x2e,
+ 0xa9, 0xcb, 0x72, 0xac, 0x6a, 0xd2, 0xd7, 0x27, 0x1b, 0x8c, 0x48, 0x38, 0x31, 0xb1, 0x33, 0x80,
+ 0x29, 0xba, 0x23, 0x2c, 0x20, 0xf6, 0x57, 0x2d, 0xbe, 0xb0, 0xf4, 0xae, 0xa3, 0x77, 0x61, 0x72,
+ 0xc7, 0x89, 0x1b, 0x5b, 0x4b, 0xf7, 0x5a, 0x21, 0xd7, 0x95, 0xc8, 0x71, 0x7a, 0xaa, 0xd7, 0x38,
+ 0x69, 0x1f, 0x99, 0xd8, 0x52, 0xae, 0xa4, 0x88, 0xe1, 0x0e, 0xf2, 0x68, 0x1d, 0x46, 0x58, 0x19,
+ 0xf3, 0xcf, 0x8b, 0xba, 0xb1, 0x06, 0x79, 0xad, 0x29, 0x5b, 0x81, 0x95, 0x84, 0x0e, 0xd6, 0x89,
+ 0xda, 0x3f, 0x55, 0xe4, 0xbb, 0x9d, 0xb1, 0xf2, 0x4f, 0xc2, 0x70, 0x2b, 0x70, 0x17, 0xab, 0x15,
+ 0x2c, 0x66, 0x41, 0x5d, 0x23, 0x35, 0x5e, 0x8c, 0x25, 0x1c, 0x5d, 0x81, 0x92, 0xf8, 0x29, 0x75,
+ 0x5b, 0xec, 0x6c, 0x16, 0x78, 0x11, 0x56, 0x50, 0xf4, 0x1c, 0x40, 0x2b, 0x0c, 0x76, 0x3d, 0x97,
+ 0x05, 0x81, 0x28, 0x9a, 0x66, 0x3e, 0x35, 0x05, 0xc1, 0x1a, 0x16, 0x7a, 0x15, 0xc6, 0xda, 0x7e,
+ 0xc4, 0xd9, 0x11, 0x67, 0x5d, 0x04, 0xe5, 0x2e, 0x25, 0x06, 0x28, 0xb7, 0x75, 0x20, 0x36, 0x71,
+ 0xd1, 0x3c, 0x0c, 0xc5, 0x0e, 0x33, 0x5b, 0x19, 0xcc, 0xb7, 0xb7, 0x5d, 0xa3, 0x18, 0x7a, 0x5a,
+ 0x0e, 0x5a, 0x01, 0x8b, 0x8a, 0xe8, 0x2d, 0xe9, 0x2a, 0xcb, 0x0f, 0x76, 0x61, 0xe8, 0xde, 0xdf,
+ 0x25, 0xa0, 0x39, 0xca, 0x0a, 0x03, 0x7a, 0x83, 0x16, 0x7a, 0x05, 0x80, 0xdc, 0x8b, 0x49, 0xe8,
+ 0x3b, 0x4d, 0x65, 0x15, 0xa6, 0xf8, 0x82, 0x4a, 0xb0, 0x1a, 0xc4, 0xb7, 0x23, 0xb2, 0xa4, 0x30,
+ 0xb0, 0x86, 0x6d, 0xff, 0x46, 0x19, 0x20, 0xe1, 0xdb, 0xd1, 0x7b, 0x1d, 0x07, 0xd7, 0xd3, 0xdd,
+ 0x39, 0xfd, 0xe3, 0x3b, 0xb5, 0xd0, 0x77, 0x59, 0x30, 0xe2, 0x34, 0x9b, 0x41, 0xc3, 0x89, 0xd9,
+ 0x0c, 0x15, 0xba, 0x1f, 0x9c, 0xa2, 0xfd, 0xf9, 0xa4, 0x06, 0xef, 0xc2, 0xf3, 0x72, 0x85, 0x6a,
+ 0x90, 0x9e, 0xbd, 0xd0, 0x1b, 0x46, 0x9f, 0x90, 0x4f, 0xc5, 0xa2, 0x31, 0x94, 0xea, 0xa9, 0x58,
+ 0x66, 0x77, 0x84, 0xfe, 0x4a, 0xbc, 0x6d, 0xbc, 0x12, 0x07, 0xf2, 0x7d, 0x01, 0x0d, 0xf6, 0xb5,
+ 0xd7, 0x03, 0x11, 0xd5, 0xf4, 0xb8, 0x00, 0x83, 0xf9, 0x8e, 0x77, 0xda, 0x3b, 0xa9, 0x47, 0x4c,
+ 0x80, 0x77, 0x60, 0xc2, 0x35, 0x99, 0x00, 0xb1, 0x12, 0x9f, 0xc8, 0xa3, 0x9b, 0xe2, 0x19, 0x92,
+ 0x6b, 0x3f, 0x05, 0xc0, 0x69, 0xc2, 0xa8, 0xc6, 0x63, 0x3e, 0x54, 0xfd, 0x8d, 0x40, 0x38, 0x5b,
+ 0xd8, 0xb9, 0x73, 0xb9, 0x1f, 0xc5, 0x64, 0x87, 0x62, 0x26, 0xb7, 0xfb, 0xaa, 0xa8, 0x8b, 0x15,
+ 0x15, 0xf4, 0x06, 0x0c, 0x31, 0xcf, 0xab, 0x68, 0xba, 0x94, 0x2f, 0x2b, 0x36, 0x83, 0x98, 0x25,
+ 0x1b, 0x92, 0xfd, 0x8d, 0xb0, 0xa0, 0x80, 0xae, 0x4b, 0xbf, 0xc6, 0xa8, 0xea, 0xdf, 0x8e, 0x08,
+ 0xf3, 0x6b, 0x2c, 0x2f, 0x7c, 0x34, 0x71, 0x59, 0xe4, 0xe5, 0x99, 0xc9, 0xbb, 0x8c, 0x9a, 0x94,
+ 0x8b, 0x12, 0xff, 0x65, 0x4e, 0xb0, 0x69, 0xc8, 0xef, 0x9e, 0x99, 0x37, 0x2c, 0x19, 0xce, 0x3b,
+ 0x26, 0x09, 0x9c, 0xa6, 0x49, 0x39, 0x52, 0xbe, 0xeb, 0x85, 0xbb, 0x46, 0xaf, 0xb3, 0x83, 0x3f,
+ 0xc4, 0xd9, 0x6d, 0xc4, 0x4b, 0xb0, 0xa8, 0x7f, 0xa2, 0xec, 0xc1, 0x8c, 0x0f, 0x93, 0xe9, 0x2d,
+ 0xfa, 0x50, 0xd9, 0x91, 0xdf, 0x1b, 0x80, 0x71, 0x73, 0x49, 0xa1, 0xab, 0x50, 0x16, 0x44, 0x54,
+ 0x1c, 0x7f, 0xb5, 0x4b, 0x56, 0x24, 0x00, 0x27, 0x38, 0x2c, 0x7d, 0x03, 0xab, 0xae, 0x99, 0xd9,
+ 0x26, 0xe9, 0x1b, 0x14, 0x04, 0x6b, 0x58, 0xf4, 0x61, 0xb5, 0x1e, 0x04, 0xb1, 0xba, 0x90, 0xd4,
+ 0xba, 0x5b, 0x60, 0xa5, 0x58, 0x40, 0xe9, 0x45, 0xb4, 0x4d, 0x42, 0x9f, 0x34, 0xcd, 0xf0, 0xc0,
+ 0xea, 0x22, 0xba, 0xa1, 0x03, 0xb1, 0x89, 0x4b, 0xaf, 0xd3, 0x20, 0x62, 0x0b, 0x59, 0x3c, 0xdf,
+ 0x12, 0xb3, 0xe5, 0x3a, 0x77, 0xad, 0x96, 0x70, 0xf4, 0x19, 0x78, 0x44, 0x85, 0x40, 0xc2, 0x5c,
+ 0x0f, 0x21, 0x5b, 0x1c, 0x32, 0xa4, 0x2d, 0x8f, 0x2c, 0x66, 0xa3, 0xe1, 0xbc, 0xfa, 0xe8, 0x75,
+ 0x18, 0x17, 0x2c, 0xbe, 0xa4, 0x38, 0x6c, 0x9a, 0xc6, 0xdc, 0x30, 0xa0, 0x38, 0x85, 0x2d, 0x03,
+ 0x1c, 0x33, 0x2e, 0x5b, 0x52, 0x28, 0x75, 0x06, 0x38, 0xd6, 0xe1, 0xb8, 0xa3, 0x06, 0x9a, 0x87,
+ 0x09, 0xce, 0x83, 0x79, 0xfe, 0x26, 0x9f, 0x13, 0xe1, 0x4d, 0xa5, 0xb6, 0xd4, 0x2d, 0x13, 0x8c,
+ 0xd3, 0xf8, 0xe8, 0x65, 0x18, 0x75, 0xc2, 0xc6, 0x96, 0x17, 0x93, 0x46, 0xdc, 0x0e, 0xb9, 0x9b,
+ 0x95, 0x66, 0x5b, 0x34, 0xaf, 0xc1, 0xb0, 0x81, 0x69, 0xbf, 0x07, 0xa7, 0x32, 0x62, 0x2e, 0xd0,
+ 0x85, 0xe3, 0xb4, 0x3c, 0xf9, 0x4d, 0x29, 0x03, 0xe4, 0xf9, 0x5a, 0x55, 0x7e, 0x8d, 0x86, 0x45,
+ 0x57, 0x27, 0x8b, 0xcd, 0xa0, 0xa5, 0x00, 0x54, 0xab, 0x73, 0x59, 0x02, 0x70, 0x82, 0x63, 0xff,
+ 0xcf, 0x02, 0x4c, 0x64, 0xe8, 0x56, 0x58, 0x1a, 0xba, 0xd4, 0x23, 0x25, 0xc9, 0x3a, 0x67, 0xc6,
+ 0xcb, 0x2e, 0x1c, 0x21, 0x5e, 0x76, 0xb1, 0x57, 0xbc, 0xec, 0x81, 0xf7, 0x13, 0x2f, 0xdb, 0x1c,
+ 0xb1, 0xc1, 0xbe, 0x46, 0x2c, 0x23, 0xc6, 0xf6, 0xd0, 0x11, 0x63, 0x6c, 0x1b, 0x83, 0x3e, 0xdc,
+ 0xc7, 0xa0, 0xff, 0x40, 0x01, 0x26, 0xd3, 0x36, 0x90, 0x27, 0x20, 0xb7, 0x7d, 0xc3, 0x90, 0xdb,
+ 0x66, 0x27, 0x75, 0x4c, 0x5b, 0x66, 0xe6, 0xc9, 0x70, 0x71, 0x4a, 0x86, 0xfb, 0xf1, 0xbe, 0xa8,
+ 0x75, 0x97, 0xe7, 0xfe, 0x9d, 0x02, 0x9c, 0x49, 0x57, 0x59, 0x6c, 0x3a, 0xde, 0xce, 0x09, 0x8c,
+ 0xcd, 0x2d, 0x63, 0x6c, 0x9e, 0xe9, 0xe7, 0x6b, 0x58, 0xd7, 0x72, 0x07, 0xe8, 0x6e, 0x6a, 0x80,
+ 0xae, 0xf6, 0x4f, 0xb2, 0xfb, 0x28, 0x7d, 0xa5, 0x08, 0x17, 0x33, 0xeb, 0x25, 0x62, 0xcf, 0x65,
+ 0x43, 0xec, 0xf9, 0x5c, 0x4a, 0xec, 0x69, 0x77, 0xaf, 0x7d, 0x3c, 0x72, 0x50, 0xe1, 0x21, 0xcb,
+ 0x02, 0x08, 0x3c, 0xa0, 0x0c, 0xd4, 0xf0, 0x90, 0x55, 0x84, 0xb0, 0x49, 0xf7, 0xeb, 0x49, 0xf6,
+ 0xf9, 0x6f, 0x2c, 0x38, 0x97, 0x39, 0x37, 0x27, 0x20, 0xeb, 0x5a, 0x35, 0x65, 0x5d, 0x4f, 0xf6,
+ 0xbd, 0x5a, 0x73, 0x84, 0x5f, 0xbf, 0x36, 0x90, 0xf3, 0x2d, 0xec, 0x25, 0x7f, 0x0b, 0x46, 0x9c,
+ 0x46, 0x83, 0x44, 0xd1, 0x4a, 0xe0, 0xaa, 0x90, 0xc0, 0xcf, 0xb0, 0x77, 0x56, 0x52, 0x7c, 0x78,
+ 0x30, 0x3b, 0x93, 0x26, 0x91, 0x80, 0xb1, 0x4e, 0x01, 0x7d, 0x16, 0x4a, 0x91, 0xb8, 0x37, 0xc5,
+ 0xdc, 0x3f, 0xdf, 0xe7, 0xe0, 0x38, 0xeb, 0xa4, 0x69, 0x86, 0x39, 0x52, 0x92, 0x0a, 0x45, 0xd2,
+ 0x0c, 0x89, 0x52, 0x38, 0xd6, 0x90, 0x28, 0xcf, 0x01, 0xec, 0xaa, 0xc7, 0x40, 0x5a, 0xfe, 0xa0,
+ 0x3d, 0x13, 0x34, 0x2c, 0xf4, 0x4d, 0x30, 0x19, 0xf1, 0xa0, 0x7e, 0x8b, 0x4d, 0x27, 0x62, 0x6e,
+ 0x2e, 0x62, 0x15, 0xb2, 0x50, 0x4a, 0xf5, 0x14, 0x0c, 0x77, 0x60, 0xa3, 0x65, 0xd9, 0x2a, 0x8b,
+ 0x40, 0xc8, 0x17, 0xe6, 0xe5, 0xa4, 0x45, 0x91, 0x04, 0xf7, 0x74, 0x7a, 0xf8, 0xd9, 0xc0, 0x6b,
+ 0x35, 0xd1, 0x67, 0x01, 0xe8, 0xf2, 0x11, 0x72, 0x88, 0xe1, 0xfc, 0xc3, 0x93, 0x9e, 0x2a, 0x6e,
+ 0xa6, 0x55, 0x2e, 0xf3, 0x4d, 0xad, 0x28, 0x22, 0x58, 0x23, 0x68, 0xff, 0xc0, 0x00, 0x3c, 0xda,
+ 0xe5, 0x8c, 0x44, 0xf3, 0xa6, 0x1e, 0xf6, 0xa9, 0xf4, 0xe3, 0x7a, 0x26, 0xb3, 0xb2, 0xf1, 0xda,
+ 0x4e, 0x2d, 0xc5, 0xc2, 0xfb, 0x5e, 0x8a, 0xdf, 0x6b, 0x69, 0x62, 0x0f, 0x6e, 0xab, 0xf9, 0xa9,
+ 0x23, 0x9e, 0xfd, 0xc7, 0x28, 0x07, 0xd9, 0xc8, 0x10, 0x26, 0x3c, 0xd7, 0x77, 0x77, 0xfa, 0x96,
+ 0x2e, 0x9c, 0xac, 0x94, 0xf8, 0x0b, 0x16, 0x3c, 0x96, 0xd9, 0x5f, 0xc3, 0x22, 0xe7, 0x2a, 0x94,
+ 0x1b, 0xb4, 0x50, 0x73, 0x45, 0x4c, 0x7c, 0xb4, 0x25, 0x00, 0x27, 0x38, 0x86, 0xe1, 0x4d, 0xa1,
+ 0xa7, 0xe1, 0xcd, 0x3f, 0xb7, 0xa0, 0x63, 0x7f, 0x9c, 0xc0, 0x41, 0x5d, 0x35, 0x0f, 0xea, 0x8f,
+ 0xf6, 0x33, 0x97, 0x39, 0x67, 0xf4, 0x1f, 0x4e, 0xc0, 0xd9, 0x1c, 0x57, 0x9c, 0x5d, 0x98, 0xda,
+ 0x6c, 0x10, 0xd3, 0xc9, 0x53, 0x7c, 0x4c, 0xa6, 0x3f, 0x6c, 0x57, 0x8f, 0x50, 0x96, 0xd1, 0x72,
+ 0xaa, 0x03, 0x05, 0x77, 0x36, 0x81, 0xbe, 0x60, 0xc1, 0x69, 0x67, 0x2f, 0xea, 0x48, 0x81, 0x2f,
+ 0xd6, 0xcc, 0x0b, 0x99, 0x42, 0x90, 0x1e, 0x29, 0xf3, 0x79, 0x8a, 0xcf, 0x2c, 0x2c, 0x9c, 0xd9,
+ 0x16, 0xc2, 0x22, 0x48, 0x3c, 0x65, 0xe7, 0xbb, 0xb8, 0x21, 0x67, 0xf9, 0x4c, 0xf1, 0x1b, 0x44,
+ 0x42, 0xb0, 0xa2, 0x83, 0x3e, 0x0f, 0xe5, 0x4d, 0xe9, 0xc8, 0x98, 0x71, 0x43, 0x25, 0x03, 0xd9,
+ 0xdd, 0xbd, 0x93, 0x6b, 0x32, 0x15, 0x12, 0x4e, 0x88, 0xa2, 0xd7, 0xa1, 0xe8, 0x6f, 0x44, 0xdd,
+ 0xb2, 0x64, 0xa6, 0x4c, 0xd6, 0xb8, 0xb3, 0xff, 0xea, 0x72, 0x1d, 0xd3, 0x8a, 0xe8, 0x3a, 0x14,
+ 0xc3, 0x75, 0x57, 0x48, 0xf0, 0x32, 0xcf, 0x70, 0xbc, 0x50, 0xc9, 0xe9, 0x15, 0xa3, 0x84, 0x17,
+ 0x2a, 0x98, 0x92, 0x40, 0x35, 0x18, 0x64, 0xfe, 0x2b, 0xe2, 0x3e, 0xc8, 0xe4, 0x7c, 0xbb, 0xf8,
+ 0x81, 0xf1, 0x88, 0x00, 0x0c, 0x01, 0x73, 0x42, 0x68, 0x0d, 0x86, 0x1a, 0x2c, 0xa3, 0xa2, 0x88,
+ 0x47, 0xf6, 0x89, 0x4c, 0x59, 0x5d, 0x97, 0x54, 0x93, 0x42, 0x74, 0xc5, 0x30, 0xb0, 0xa0, 0xc5,
+ 0xa8, 0x92, 0xd6, 0xd6, 0x46, 0x24, 0x32, 0x00, 0x67, 0x53, 0xed, 0x92, 0x41, 0x55, 0x50, 0x65,
+ 0x18, 0x58, 0xd0, 0x42, 0xaf, 0x40, 0x61, 0xa3, 0x21, 0x7c, 0x53, 0x32, 0x85, 0x76, 0x66, 0xbc,
+ 0x86, 0x85, 0xa1, 0xfb, 0x07, 0xb3, 0x85, 0xe5, 0x45, 0x5c, 0xd8, 0x68, 0xa0, 0x55, 0x18, 0xde,
+ 0xe0, 0x1e, 0xde, 0x42, 0x2e, 0xf7, 0x44, 0xb6, 0xf3, 0x79, 0x87, 0x13, 0x38, 0x77, 0xcb, 0x10,
+ 0x00, 0x2c, 0x89, 0xb0, 0x98, 0xeb, 0xca, 0x53, 0x5d, 0x84, 0xee, 0x9a, 0x3b, 0x5a, 0x74, 0x01,
+ 0x7e, 0x3f, 0x27, 0xfe, 0xee, 0x58, 0xa3, 0x48, 0x57, 0xb5, 0x23, 0xd3, 0xb0, 0x8b, 0x50, 0x2c,
+ 0x99, 0xab, 0xba, 0x47, 0x86, 0x7a, 0xbe, 0xaa, 0x15, 0x12, 0x4e, 0x88, 0xa2, 0x6d, 0x18, 0xdb,
+ 0x8d, 0x5a, 0x5b, 0x44, 0x6e, 0x69, 0x16, 0x99, 0x25, 0xe7, 0x0a, 0xbb, 0x23, 0x10, 0xbd, 0x30,
+ 0x6e, 0x3b, 0xcd, 0x8e, 0x53, 0x88, 0xa9, 0xbf, 0xef, 0xe8, 0xc4, 0xb0, 0x49, 0x9b, 0x0e, 0xff,
+ 0xbb, 0xed, 0x60, 0x7d, 0x3f, 0x26, 0x22, 0xe2, 0x56, 0xe6, 0xf0, 0xbf, 0xc9, 0x51, 0x3a, 0x87,
+ 0x5f, 0x00, 0xb0, 0x24, 0x82, 0xee, 0x88, 0xe1, 0x61, 0xa7, 0xe7, 0x64, 0x7e, 0x58, 0xcc, 0x79,
+ 0x89, 0x94, 0x33, 0x28, 0xec, 0xb4, 0x4c, 0x48, 0xb1, 0x53, 0xb2, 0xb5, 0x15, 0xc4, 0x81, 0x9f,
+ 0x3a, 0xa1, 0xa7, 0xf2, 0x4f, 0xc9, 0x5a, 0x06, 0x7e, 0xe7, 0x29, 0x99, 0x85, 0x85, 0x33, 0xdb,
+ 0x42, 0x2e, 0x8c, 0xb7, 0x82, 0x30, 0xde, 0x0b, 0x42, 0xb9, 0xbe, 0x50, 0x17, 0xb9, 0x82, 0x81,
+ 0x29, 0x5a, 0x64, 0xc1, 0xec, 0x4c, 0x08, 0x4e, 0xd1, 0x44, 0x9f, 0x86, 0xe1, 0xa8, 0xe1, 0x34,
+ 0x49, 0xf5, 0xd6, 0xf4, 0xa9, 0xfc, 0xeb, 0xa7, 0xce, 0x51, 0x72, 0x56, 0x17, 0x0f, 0xd0, 0xce,
+ 0x51, 0xb0, 0x24, 0x87, 0x96, 0x61, 0x90, 0xe5, 0xd4, 0x62, 0xe1, 0xe1, 0x72, 0xa2, 0x7b, 0x76,
+ 0x18, 0x10, 0xf3, 0xb3, 0x89, 0x15, 0x63, 0x5e, 0x9d, 0xee, 0x01, 0xc1, 0x5e, 0x07, 0xd1, 0xf4,
+ 0x99, 0xfc, 0x3d, 0x20, 0xb8, 0xf2, 0x5b, 0xf5, 0x6e, 0x7b, 0x40, 0x21, 0xe1, 0x84, 0x28, 0x3d,
+ 0x99, 0xe9, 0x69, 0x7a, 0xb6, 0x8b, 0xe5, 0x4b, 0xee, 0x59, 0xca, 0x4e, 0x66, 0x7a, 0x92, 0x52,
+ 0x12, 0xf6, 0xef, 0x0c, 0x77, 0xf2, 0x2c, 0xec, 0x41, 0xf6, 0x1d, 0x56, 0x87, 0xae, 0xee, 0x93,
+ 0xfd, 0xca, 0x87, 0x8e, 0x91, 0x5b, 0xfd, 0x82, 0x05, 0x67, 0x5b, 0x99, 0x1f, 0x22, 0x18, 0x80,
+ 0xfe, 0xc4, 0x4c, 0xfc, 0xd3, 0x55, 0x28, 0xc1, 0x6c, 0x38, 0xce, 0x69, 0x29, 0xfd, 0x22, 0x28,
+ 0xbe, 0xef, 0x17, 0xc1, 0x0a, 0x94, 0x18, 0x93, 0xd9, 0x23, 0xc3, 0x70, 0xfa, 0x61, 0xc4, 0x58,
+ 0x89, 0x45, 0x51, 0x11, 0x2b, 0x12, 0xe8, 0xfb, 0x2c, 0xb8, 0x90, 0xee, 0x3a, 0x26, 0x0c, 0x2c,
+ 0xe2, 0x0f, 0xf2, 0xb7, 0xe0, 0xb2, 0xf8, 0xfe, 0x0b, 0xb5, 0x6e, 0xc8, 0x87, 0xbd, 0x10, 0x70,
+ 0xf7, 0xc6, 0x50, 0x25, 0xe3, 0x31, 0x3a, 0x64, 0x0a, 0xe0, 0xfb, 0x78, 0x90, 0xbe, 0x00, 0xa3,
+ 0x3b, 0x41, 0xdb, 0x8f, 0x85, 0xa1, 0x8c, 0x50, 0xda, 0x33, 0x65, 0xf5, 0x8a, 0x56, 0x8e, 0x0d,
+ 0xac, 0xd4, 0x33, 0xb6, 0xf4, 0xc0, 0xcf, 0xd8, 0xb7, 0x61, 0xd4, 0xd7, 0x2c, 0x3b, 0x05, 0x3f,
+ 0x70, 0x39, 0x3f, 0x76, 0xa8, 0x6e, 0x07, 0xca, 0x7b, 0xa9, 0x97, 0x60, 0x83, 0xda, 0xc9, 0xbe,
+ 0x8d, 0x7e, 0xd2, 0xca, 0x60, 0xea, 0xf9, 0x6b, 0xf9, 0x35, 0xf3, 0xb5, 0x7c, 0x39, 0xfd, 0x5a,
+ 0xee, 0x10, 0xbe, 0x1a, 0x0f, 0xe5, 0xfe, 0xf3, 0x9c, 0xf4, 0x1b, 0x26, 0xd0, 0x6e, 0xc2, 0xa5,
+ 0x5e, 0xd7, 0x12, 0xb3, 0x98, 0x72, 0x95, 0xaa, 0x2d, 0xb1, 0x98, 0x72, 0xab, 0x15, 0xcc, 0x20,
+ 0xfd, 0xc6, 0x91, 0xb1, 0xff, 0x87, 0x05, 0xc5, 0x5a, 0xe0, 0x9e, 0x80, 0x30, 0xf9, 0x53, 0x86,
+ 0x30, 0xf9, 0xd1, 0xec, 0x0b, 0xd1, 0xcd, 0x15, 0x1d, 0x2f, 0xa5, 0x44, 0xc7, 0x17, 0xf2, 0x08,
+ 0x74, 0x17, 0x14, 0xff, 0x58, 0x11, 0x46, 0x6a, 0x81, 0xab, 0xcc, 0x95, 0x7f, 0xed, 0x41, 0xcc,
+ 0x95, 0x73, 0x03, 0xfc, 0x6b, 0x94, 0x99, 0xa1, 0x95, 0xf4, 0xb1, 0xfc, 0x73, 0x66, 0xb5, 0x7c,
+ 0x97, 0x78, 0x9b, 0x5b, 0x31, 0x71, 0xd3, 0x9f, 0x73, 0x72, 0x56, 0xcb, 0xff, 0xdd, 0x82, 0x89,
+ 0x54, 0xeb, 0xa8, 0x09, 0x63, 0x4d, 0x5d, 0x30, 0x29, 0xd6, 0xe9, 0x03, 0xc9, 0x34, 0x85, 0xd5,
+ 0xa7, 0x56, 0x84, 0x4d, 0xe2, 0x68, 0x0e, 0x40, 0x69, 0xea, 0xa4, 0x04, 0x8c, 0x71, 0xfd, 0x4a,
+ 0x95, 0x17, 0x61, 0x0d, 0x03, 0xbd, 0x08, 0x23, 0x71, 0xd0, 0x0a, 0x9a, 0xc1, 0xe6, 0xfe, 0x0d,
+ 0x22, 0x23, 0x17, 0x29, 0x5b, 0xae, 0xb5, 0x04, 0x84, 0x75, 0x3c, 0xfb, 0x27, 0x8a, 0xfc, 0x43,
+ 0xfd, 0xd8, 0xfb, 0x70, 0x4d, 0x7e, 0xb0, 0xd7, 0xe4, 0x57, 0x2c, 0x98, 0xa4, 0xad, 0x33, 0x73,
+ 0x11, 0x79, 0xd9, 0xaa, 0x98, 0xc1, 0x56, 0x97, 0x98, 0xc1, 0x97, 0xe9, 0xd9, 0xe5, 0x06, 0xed,
+ 0x58, 0x48, 0xd0, 0xb4, 0xc3, 0x89, 0x96, 0x62, 0x01, 0x15, 0x78, 0x24, 0x0c, 0x85, 0x8b, 0x9b,
+ 0x8e, 0x47, 0xc2, 0x10, 0x0b, 0xa8, 0x0c, 0x29, 0x3c, 0x90, 0x1d, 0x52, 0x98, 0xc7, 0x61, 0x14,
+ 0x86, 0x05, 0x82, 0xed, 0xd1, 0xe2, 0x30, 0x4a, 0x8b, 0x83, 0x04, 0xc7, 0xfe, 0xd9, 0x22, 0x8c,
+ 0xd6, 0x02, 0x37, 0xd1, 0x95, 0xbd, 0x60, 0xe8, 0xca, 0x2e, 0xa5, 0x74, 0x65, 0x93, 0x3a, 0xee,
+ 0x87, 0x9a, 0xb1, 0xaf, 0x95, 0x66, 0xec, 0x9f, 0x59, 0x6c, 0xd6, 0x2a, 0xab, 0x75, 0x6e, 0x7d,
+ 0x84, 0x9e, 0x85, 0x11, 0x76, 0x20, 0x31, 0x9f, 0x4a, 0xa9, 0x40, 0x62, 0x29, 0x94, 0x56, 0x93,
+ 0x62, 0xac, 0xe3, 0xa0, 0x2b, 0x50, 0x8a, 0x88, 0x13, 0x36, 0xb6, 0xd4, 0x19, 0x27, 0xb4, 0x3d,
+ 0xbc, 0x0c, 0x2b, 0x28, 0x7a, 0x33, 0x09, 0x01, 0x58, 0xcc, 0xf7, 0xd1, 0xd2, 0xfb, 0xc3, 0xb7,
+ 0x48, 0x7e, 0xdc, 0x3f, 0xfb, 0x2e, 0xa0, 0x4e, 0xfc, 0x3e, 0x62, 0x5f, 0xcd, 0x9a, 0xb1, 0xaf,
+ 0xca, 0x1d, 0x71, 0xaf, 0xfe, 0xd4, 0x82, 0xf1, 0x5a, 0xe0, 0xd2, 0xad, 0xfb, 0xf5, 0xb4, 0x4f,
+ 0xf5, 0xf8, 0xa7, 0x43, 0x5d, 0xe2, 0x9f, 0x3e, 0x0e, 0x83, 0xb5, 0xc0, 0xad, 0xd6, 0xba, 0xf9,
+ 0x36, 0xdb, 0x7f, 0xd7, 0x82, 0xe1, 0x5a, 0xe0, 0x9e, 0x80, 0x70, 0xfe, 0x35, 0x53, 0x38, 0xff,
+ 0x48, 0xce, 0xba, 0xc9, 0x91, 0xc7, 0xff, 0xed, 0x01, 0x18, 0xa3, 0xfd, 0x0c, 0x36, 0xe5, 0x54,
+ 0x1a, 0xc3, 0x66, 0xf5, 0x31, 0x6c, 0x94, 0x17, 0x0e, 0x9a, 0xcd, 0x60, 0x2f, 0x3d, 0xad, 0xcb,
+ 0xac, 0x14, 0x0b, 0x28, 0x7a, 0x1a, 0x4a, 0xad, 0x90, 0xec, 0x7a, 0x81, 0x60, 0x32, 0x35, 0x55,
+ 0x47, 0x4d, 0x94, 0x63, 0x85, 0x41, 0x1f, 0x67, 0x91, 0xe7, 0x37, 0x48, 0x9d, 0x34, 0x02, 0xdf,
+ 0xe5, 0xf2, 0xeb, 0xa2, 0x48, 0x1b, 0xa0, 0x95, 0x63, 0x03, 0x0b, 0xdd, 0x85, 0x32, 0xfb, 0xcf,
+ 0x8e, 0x9d, 0xa3, 0x67, 0x93, 0x14, 0xd9, 0xc5, 0x04, 0x01, 0x9c, 0xd0, 0x42, 0xcf, 0x01, 0xc4,
+ 0x32, 0x42, 0x76, 0x24, 0xe2, 0x1c, 0x29, 0x86, 0x5c, 0xc5, 0xce, 0x8e, 0xb0, 0x86, 0x85, 0x9e,
+ 0x82, 0x72, 0xec, 0x78, 0xcd, 0x9b, 0x9e, 0x4f, 0x22, 0x26, 0x97, 0x2e, 0xca, 0x24, 0x5f, 0xa2,
+ 0x10, 0x27, 0x70, 0xca, 0x10, 0xb1, 0x20, 0x00, 0x3c, 0x17, 0x6d, 0x89, 0x61, 0x33, 0x86, 0xe8,
+ 0xa6, 0x2a, 0xc5, 0x1a, 0x06, 0xda, 0x82, 0xf3, 0x9e, 0xcf, 0x42, 0xec, 0x93, 0xfa, 0xb6, 0xd7,
+ 0x5a, 0xbb, 0x59, 0xbf, 0x43, 0x42, 0x6f, 0x63, 0x7f, 0xc1, 0x69, 0x6c, 0x13, 0x5f, 0xe6, 0x09,
+ 0xfc, 0xa8, 0xe8, 0xe2, 0xf9, 0x6a, 0x17, 0x5c, 0xdc, 0x95, 0x92, 0xfd, 0x32, 0x9c, 0xa9, 0x05,
+ 0x6e, 0x2d, 0x08, 0xe3, 0xe5, 0x20, 0xdc, 0x73, 0x42, 0x57, 0xae, 0x94, 0x59, 0x99, 0x85, 0x84,
+ 0x1e, 0x85, 0x83, 0xfc, 0xa0, 0x30, 0x72, 0x61, 0x3d, 0xcf, 0x98, 0xaf, 0x23, 0x3a, 0xa3, 0x34,
+ 0x18, 0x1b, 0xa0, 0xf2, 0x4d, 0x5c, 0x73, 0x62, 0x82, 0x6e, 0xb1, 0xa4, 0xb8, 0xc9, 0x8d, 0x28,
+ 0xaa, 0x3f, 0xa9, 0x25, 0xc5, 0x4d, 0x80, 0x99, 0x57, 0xa8, 0x59, 0xdf, 0xfe, 0xe9, 0x01, 0x76,
+ 0x38, 0xa6, 0x72, 0x16, 0xa0, 0xcf, 0xc1, 0x78, 0x44, 0x6e, 0x7a, 0x7e, 0xfb, 0x9e, 0x94, 0x09,
+ 0x74, 0x71, 0x27, 0xaa, 0x2f, 0xe9, 0x98, 0x5c, 0xb2, 0x68, 0x96, 0xe1, 0x14, 0x35, 0xb4, 0x03,
+ 0xe3, 0x7b, 0x9e, 0xef, 0x06, 0x7b, 0x91, 0xa4, 0x5f, 0xca, 0x17, 0x30, 0xde, 0xe5, 0x98, 0xa9,
+ 0x3e, 0x1a, 0xcd, 0xdd, 0x35, 0x88, 0xe1, 0x14, 0x71, 0xba, 0x00, 0xc3, 0xb6, 0x3f, 0x1f, 0xdd,
+ 0x8e, 0x48, 0x28, 0xd2, 0x1b, 0xb3, 0x05, 0x88, 0x65, 0x21, 0x4e, 0xe0, 0x74, 0x01, 0xb2, 0x3f,
+ 0xd7, 0xc2, 0xa0, 0xcd, 0xe3, 0xd8, 0x8b, 0x05, 0x88, 0x55, 0x29, 0xd6, 0x30, 0xe8, 0x06, 0x65,
+ 0xff, 0x56, 0x03, 0x1f, 0x07, 0x41, 0x2c, 0xb7, 0x34, 0x4b, 0xa8, 0xa9, 0x95, 0x63, 0x03, 0x0b,
+ 0x2d, 0x03, 0x8a, 0xda, 0xad, 0x56, 0x93, 0xd9, 0x29, 0x38, 0x4d, 0x46, 0x8a, 0xeb, 0x88, 0x8b,
+ 0x3c, 0x4a, 0x67, 0xbd, 0x03, 0x8a, 0x33, 0x6a, 0xd0, 0xb3, 0x7a, 0x43, 0x74, 0x75, 0x90, 0x75,
+ 0x95, 0x2b, 0x23, 0xea, 0xbc, 0x9f, 0x12, 0x86, 0x96, 0x60, 0x38, 0xda, 0x8f, 0x1a, 0xb1, 0x08,
+ 0x37, 0x96, 0x93, 0x96, 0xa6, 0xce, 0x50, 0xb4, 0xac, 0x68, 0xbc, 0x0a, 0x96, 0x75, 0xed, 0x6f,
+ 0x65, 0xac, 0x00, 0x4b, 0x86, 0x1b, 0xb7, 0x43, 0x82, 0x76, 0x60, 0xac, 0xc5, 0x56, 0x98, 0x08,
+ 0xcc, 0x2e, 0x96, 0xc9, 0x0b, 0x7d, 0xbe, 0xe9, 0xf7, 0xe8, 0x09, 0xaa, 0x64, 0x6e, 0xec, 0xb1,
+ 0x54, 0xd3, 0xc9, 0x61, 0x93, 0xba, 0xfd, 0x95, 0xb3, 0xec, 0x32, 0xa9, 0xf3, 0x87, 0xfa, 0xb0,
+ 0x30, 0xac, 0x16, 0xaf, 0x92, 0x99, 0x7c, 0x89, 0x51, 0xf2, 0x45, 0xc2, 0x38, 0x1b, 0xcb, 0xba,
+ 0xe8, 0xb3, 0x30, 0x4e, 0x99, 0x7c, 0x2d, 0x31, 0xc5, 0xe9, 0x7c, 0x07, 0xf8, 0x24, 0x1f, 0x85,
+ 0x96, 0xb4, 0x41, 0xaf, 0x8c, 0x53, 0xc4, 0xd0, 0x9b, 0xcc, 0x04, 0xc0, 0xcc, 0x79, 0xd1, 0x83,
+ 0xb4, 0xae, 0xed, 0x97, 0x64, 0x35, 0x22, 0x79, 0xf9, 0x34, 0xec, 0x87, 0x9b, 0x4f, 0x03, 0xdd,
+ 0x84, 0x31, 0x91, 0x11, 0x56, 0x08, 0x3a, 0x8b, 0x86, 0x20, 0x6b, 0x0c, 0xeb, 0xc0, 0xc3, 0x74,
+ 0x01, 0x36, 0x2b, 0xa3, 0x4d, 0xb8, 0xa0, 0x25, 0x75, 0xb9, 0x16, 0x3a, 0x4c, 0x1b, 0xed, 0xb1,
+ 0x93, 0x48, 0xbb, 0xe6, 0x1e, 0xbb, 0x7f, 0x30, 0x7b, 0x61, 0xad, 0x1b, 0x22, 0xee, 0x4e, 0x07,
+ 0xdd, 0x82, 0x33, 0xdc, 0x7d, 0xb3, 0x42, 0x1c, 0xb7, 0xe9, 0xf9, 0xea, 0x1e, 0xe5, 0xbb, 0xe5,
+ 0xdc, 0xfd, 0x83, 0xd9, 0x33, 0xf3, 0x59, 0x08, 0x38, 0xbb, 0x1e, 0x7a, 0x0d, 0xca, 0xae, 0x1f,
+ 0x89, 0x31, 0x18, 0x32, 0xf2, 0xe6, 0x94, 0x2b, 0xab, 0x75, 0xf5, 0xfd, 0xc9, 0x1f, 0x9c, 0x54,
+ 0x40, 0x9b, 0x5c, 0xd8, 0xa9, 0x64, 0x0b, 0xc3, 0x1d, 0x81, 0x67, 0xd2, 0x52, 0x2a, 0xc3, 0x81,
+ 0x8b, 0x4b, 0xf9, 0x95, 0x5d, 0xb3, 0xe1, 0xdb, 0x65, 0x10, 0x46, 0x6f, 0x00, 0xa2, 0xcc, 0xb7,
+ 0xd7, 0x20, 0xf3, 0x0d, 0x16, 0xf5, 0x9f, 0xc9, 0x86, 0x4b, 0xa6, 0x4b, 0x51, 0xbd, 0x03, 0x03,
+ 0x67, 0xd4, 0x42, 0xd7, 0xe9, 0x6d, 0xa0, 0x97, 0x0a, 0xfb, 0x6c, 0x95, 0xe5, 0xac, 0x42, 0x5a,
+ 0x21, 0x69, 0x38, 0x31, 0x71, 0x4d, 0x8a, 0x38, 0x55, 0x0f, 0xb9, 0x70, 0xde, 0x69, 0xc7, 0x01,
+ 0x93, 0x23, 0x9b, 0xa8, 0x6b, 0xc1, 0x36, 0xf1, 0x99, 0x0a, 0xa7, 0xb4, 0x70, 0x89, 0x5e, 0xd4,
+ 0xf3, 0x5d, 0xf0, 0x70, 0x57, 0x2a, 0x94, 0xc1, 0x52, 0x39, 0x4a, 0xc1, 0x8c, 0xa7, 0x93, 0x91,
+ 0xa7, 0xf4, 0x45, 0x18, 0xd9, 0x0a, 0xa2, 0x78, 0x95, 0xc4, 0x7b, 0x41, 0xb8, 0x2d, 0xa2, 0x22,
+ 0x26, 0x91, 0x74, 0x13, 0x10, 0xd6, 0xf1, 0xe8, 0x0b, 0x8a, 0x19, 0x18, 0x54, 0x2b, 0x4c, 0xb7,
+ 0x5b, 0x4a, 0xce, 0x98, 0xeb, 0xbc, 0x18, 0x4b, 0xb8, 0x44, 0xad, 0xd6, 0x16, 0x99, 0x9e, 0x36,
+ 0x85, 0x5a, 0xad, 0x2d, 0x62, 0x09, 0xa7, 0xcb, 0x35, 0xda, 0x72, 0x42, 0x52, 0x0b, 0x83, 0x06,
+ 0x89, 0xb4, 0xf8, 0xcd, 0x8f, 0xf2, 0x98, 0x8f, 0x74, 0xb9, 0xd6, 0xb3, 0x10, 0x70, 0x76, 0x3d,
+ 0x44, 0x3a, 0x13, 0x1a, 0x8d, 0xe7, 0x0b, 0xd8, 0x3b, 0x59, 0x81, 0x3e, 0x73, 0x1a, 0xf9, 0x30,
+ 0xa9, 0x52, 0x29, 0xf1, 0x28, 0x8f, 0xd1, 0xf4, 0x04, 0x5b, 0xdb, 0xfd, 0x87, 0x88, 0x54, 0x2a,
+ 0x8b, 0x6a, 0x8a, 0x12, 0xee, 0xa0, 0x6d, 0x84, 0x4c, 0x9a, 0xec, 0x99, 0xb4, 0xf6, 0x2a, 0x94,
+ 0xa3, 0xf6, 0xba, 0x1b, 0xec, 0x38, 0x9e, 0xcf, 0xf4, 0xb4, 0x1a, 0x2b, 0x5f, 0x97, 0x00, 0x9c,
+ 0xe0, 0xa0, 0x65, 0x28, 0x39, 0x52, 0x1f, 0x81, 0xf2, 0x23, 0x6d, 0x28, 0x2d, 0x04, 0x77, 0x3e,
+ 0x97, 0x1a, 0x08, 0x55, 0x17, 0xbd, 0x0a, 0x63, 0xc2, 0xfd, 0x50, 0x64, 0xf1, 0x3b, 0x65, 0xfa,
+ 0x88, 0xd4, 0x75, 0x20, 0x36, 0x71, 0xd1, 0x6d, 0x18, 0x89, 0x83, 0x26, 0x73, 0x74, 0xa0, 0x1c,
+ 0xd2, 0xd9, 0xfc, 0x68, 0x5d, 0x6b, 0x0a, 0x4d, 0x17, 0x05, 0xaa, 0xaa, 0x58, 0xa7, 0x83, 0xd6,
+ 0xf8, 0x7a, 0x67, 0x71, 0x8c, 0x49, 0x34, 0xfd, 0x48, 0xfe, 0x9d, 0xa4, 0xc2, 0x1d, 0x9b, 0xdb,
+ 0x41, 0xd4, 0xc4, 0x3a, 0x19, 0x74, 0x0d, 0xa6, 0x5a, 0xa1, 0x17, 0xb0, 0x35, 0xa1, 0x54, 0x51,
+ 0xd3, 0x66, 0xf6, 0x95, 0x5a, 0x1a, 0x01, 0x77, 0xd6, 0x61, 0xde, 0xa3, 0xa2, 0x70, 0xfa, 0x1c,
+ 0xcf, 0xda, 0xcb, 0x5f, 0x46, 0xbc, 0x0c, 0x2b, 0x28, 0x5a, 0x61, 0x27, 0x31, 0x7f, 0xd4, 0x4f,
+ 0xcf, 0xe4, 0x07, 0xf7, 0xd0, 0x1f, 0xff, 0x9c, 0xef, 0x53, 0x7f, 0x71, 0x42, 0x01, 0xb9, 0x5a,
+ 0x46, 0x38, 0xca, 0x6c, 0x47, 0xd3, 0xe7, 0xbb, 0x58, 0x79, 0xa5, 0x38, 0xf3, 0x84, 0x21, 0x30,
+ 0x8a, 0x23, 0x9c, 0xa2, 0x89, 0xbe, 0x09, 0x26, 0x45, 0x30, 0xb1, 0x64, 0x98, 0x2e, 0x24, 0xe6,
+ 0xa3, 0x38, 0x05, 0xc3, 0x1d, 0xd8, 0x3c, 0xbe, 0xbb, 0xb3, 0xde, 0x24, 0xe2, 0xe8, 0xbb, 0xe9,
+ 0xf9, 0xdb, 0xd1, 0xf4, 0x45, 0x76, 0x3e, 0x88, 0xf8, 0xee, 0x69, 0x28, 0xce, 0xa8, 0x81, 0xd6,
+ 0x60, 0xb2, 0x15, 0x12, 0xb2, 0xc3, 0x78, 0x64, 0x71, 0x9f, 0xcd, 0x72, 0xe7, 0x69, 0xda, 0x93,
+ 0x5a, 0x0a, 0x76, 0x98, 0x51, 0x86, 0x3b, 0x28, 0xa0, 0x3d, 0x28, 0x05, 0xbb, 0x24, 0xdc, 0x22,
+ 0x8e, 0x3b, 0x7d, 0xa9, 0x8b, 0x39, 0xb3, 0xb8, 0xdc, 0x6e, 0x09, 0xdc, 0x94, 0xfa, 0x5a, 0x16,
+ 0xf7, 0x56, 0x5f, 0xcb, 0xc6, 0xd0, 0xf7, 0x5b, 0x70, 0x4e, 0x4a, 0xbc, 0xeb, 0x2d, 0x3a, 0xea,
+ 0x8b, 0x81, 0x1f, 0xc5, 0x21, 0x77, 0xf7, 0x7d, 0x2c, 0xdf, 0x05, 0x76, 0x2d, 0xa7, 0x92, 0x92,
+ 0x2b, 0x9e, 0xcb, 0xc3, 0x88, 0x70, 0x7e, 0x8b, 0x33, 0xdf, 0x08, 0x53, 0x1d, 0x37, 0xf7, 0x51,
+ 0x52, 0x4e, 0xcc, 0x6c, 0xc3, 0x98, 0x31, 0x3a, 0x0f, 0x55, 0x73, 0xf9, 0xaf, 0x87, 0xa1, 0xac,
+ 0xb4, 0x5a, 0xe8, 0xaa, 0xa9, 0xac, 0x3c, 0x97, 0x56, 0x56, 0x96, 0xe8, 0x6b, 0x56, 0xd7, 0x4f,
+ 0xae, 0x65, 0x04, 0x57, 0xca, 0xdb, 0x8b, 0xfd, 0x7b, 0xcd, 0x6a, 0x42, 0xca, 0x62, 0xdf, 0x5a,
+ 0xcf, 0x81, 0xae, 0x72, 0xcf, 0x6b, 0x30, 0xe5, 0x07, 0x8c, 0x5d, 0x24, 0xae, 0xe4, 0x05, 0xd8,
+ 0x95, 0x5f, 0xd6, 0xa3, 0x15, 0xa4, 0x10, 0x70, 0x67, 0x1d, 0xda, 0x20, 0xbf, 0xb3, 0xd3, 0x82,
+ 0x56, 0x7e, 0xa5, 0x63, 0x01, 0x45, 0x8f, 0xc3, 0x60, 0x2b, 0x70, 0xab, 0x35, 0xc1, 0x2a, 0x6a,
+ 0xe9, 0x47, 0xdd, 0x6a, 0x0d, 0x73, 0x18, 0x9a, 0x87, 0x21, 0xf6, 0x23, 0x9a, 0x1e, 0xcd, 0x77,
+ 0x4b, 0x67, 0x35, 0xb4, 0x84, 0x1e, 0xac, 0x02, 0x16, 0x15, 0x99, 0xc0, 0x87, 0xf2, 0xd7, 0x4c,
+ 0xe0, 0x33, 0xfc, 0x80, 0x02, 0x1f, 0x49, 0x00, 0x27, 0xb4, 0xd0, 0x3d, 0x38, 0x63, 0xbc, 0x69,
+ 0xf8, 0x12, 0x21, 0x91, 0x70, 0x8d, 0x7d, 0xbc, 0xeb, 0x63, 0x46, 0x68, 0x49, 0x2f, 0x88, 0x4e,
+ 0x9f, 0xa9, 0x66, 0x51, 0xc2, 0xd9, 0x0d, 0xa0, 0x26, 0x4c, 0x35, 0x3a, 0x5a, 0x2d, 0xf5, 0xdf,
+ 0xaa, 0x9a, 0xd0, 0xce, 0x16, 0x3b, 0x09, 0xa3, 0x57, 0xa1, 0xf4, 0x6e, 0x10, 0xb1, 0x63, 0x56,
+ 0xb0, 0xb7, 0xd2, 0xaf, 0xb2, 0xf4, 0xe6, 0xad, 0x3a, 0x2b, 0x3f, 0x3c, 0x98, 0x1d, 0xa9, 0x05,
+ 0xae, 0xfc, 0x8b, 0x55, 0x05, 0xf4, 0xdd, 0x16, 0xcc, 0x74, 0x3e, 0x9a, 0x54, 0xa7, 0xc7, 0xfa,
+ 0xef, 0xb4, 0x2d, 0x1a, 0x9d, 0x59, 0xca, 0x25, 0x87, 0xbb, 0x34, 0x65, 0xff, 0x12, 0xd7, 0x68,
+ 0x0a, 0xbd, 0x07, 0x89, 0xda, 0xcd, 0x93, 0x48, 0x80, 0xb8, 0x64, 0xa8, 0x64, 0x1e, 0x58, 0x6b,
+ 0xfe, 0xab, 0x16, 0xd3, 0x9a, 0xaf, 0x91, 0x9d, 0x56, 0xd3, 0x89, 0x4f, 0xc2, 0x2d, 0xef, 0x4d,
+ 0x28, 0xc5, 0xa2, 0xb5, 0x6e, 0x39, 0x1b, 0xb5, 0x4e, 0x31, 0xcb, 0x01, 0xc5, 0x6c, 0xca, 0x52,
+ 0xac, 0xc8, 0xd8, 0xff, 0x98, 0xcf, 0x80, 0x84, 0x9c, 0x80, 0xe4, 0xbb, 0x62, 0x4a, 0xbe, 0x67,
+ 0x7b, 0x7c, 0x41, 0x8e, 0x04, 0xfc, 0x1f, 0x99, 0xfd, 0x66, 0x42, 0x96, 0x0f, 0xba, 0xb9, 0x86,
+ 0xfd, 0x83, 0x16, 0x9c, 0xce, 0xb2, 0x6f, 0xa4, 0x0f, 0x04, 0x2e, 0xe2, 0x51, 0xe6, 0x2b, 0x6a,
+ 0x04, 0xef, 0x88, 0x72, 0xac, 0x30, 0xfa, 0x4e, 0x87, 0x74, 0xb4, 0xf0, 0xa0, 0xb7, 0x60, 0xac,
+ 0x16, 0x12, 0xed, 0x42, 0x7b, 0x9d, 0xfb, 0xd9, 0xf2, 0xfe, 0x3c, 0x7d, 0x64, 0x1f, 0x5b, 0xfb,
+ 0xa7, 0x0a, 0x70, 0x9a, 0xeb, 0x9f, 0xe7, 0x77, 0x03, 0xcf, 0xad, 0x05, 0xae, 0x48, 0x65, 0xf5,
+ 0x16, 0x8c, 0xb6, 0x34, 0xb9, 0x5c, 0xb7, 0x50, 0x77, 0xba, 0xfc, 0x2e, 0x91, 0x24, 0xe8, 0xa5,
+ 0xd8, 0xa0, 0x85, 0x5c, 0x18, 0x25, 0xbb, 0x5e, 0x43, 0x29, 0x31, 0x0b, 0x47, 0xbe, 0x5c, 0x54,
+ 0x2b, 0x4b, 0x1a, 0x1d, 0x6c, 0x50, 0x7d, 0x08, 0xd9, 0x4d, 0xed, 0x1f, 0xb2, 0xe0, 0x91, 0x9c,
+ 0xc0, 0x78, 0xb4, 0xb9, 0x3d, 0xa6, 0xe9, 0x17, 0x89, 0x12, 0x55, 0x73, 0x5c, 0xff, 0x8f, 0x05,
+ 0x14, 0x7d, 0x1a, 0x80, 0xeb, 0xef, 0xe9, 0x0b, 0xb5, 0x57, 0x04, 0x31, 0x23, 0xf8, 0x91, 0x16,
+ 0xc7, 0x46, 0xd6, 0xc7, 0x1a, 0x2d, 0xfb, 0xc7, 0x8b, 0x30, 0xc8, 0x53, 0x3c, 0x2f, 0xc3, 0xf0,
+ 0x16, 0x0f, 0xf0, 0xdf, 0x4f, 0x2e, 0x81, 0x44, 0x76, 0xc0, 0x0b, 0xb0, 0xac, 0x8c, 0x56, 0xe0,
+ 0x14, 0x4f, 0x90, 0xd0, 0xac, 0x90, 0xa6, 0xb3, 0x2f, 0x05, 0x5d, 0x3c, 0xb9, 0xa0, 0x12, 0xf8,
+ 0x55, 0x3b, 0x51, 0x70, 0x56, 0x3d, 0xf4, 0x3a, 0x8c, 0xd3, 0x87, 0x47, 0xd0, 0x8e, 0x25, 0x25,
+ 0x9e, 0x1a, 0x41, 0xbd, 0x74, 0xd6, 0x0c, 0x28, 0x4e, 0x61, 0xd3, 0xb7, 0x6f, 0xab, 0x43, 0xa4,
+ 0x37, 0x98, 0xbc, 0x7d, 0x4d, 0x31, 0x9e, 0x89, 0xcb, 0x0c, 0x1b, 0xdb, 0xcc, 0x8c, 0x73, 0x6d,
+ 0x2b, 0x24, 0xd1, 0x56, 0xd0, 0x74, 0x19, 0xa3, 0x35, 0xa8, 0x19, 0x36, 0xa6, 0xe0, 0xb8, 0xa3,
+ 0x06, 0xa5, 0xb2, 0xe1, 0x78, 0xcd, 0x76, 0x48, 0x12, 0x2a, 0x43, 0x26, 0x95, 0xe5, 0x14, 0x1c,
+ 0x77, 0xd4, 0xa0, 0xeb, 0xe8, 0x4c, 0x2d, 0x0c, 0xe8, 0xe1, 0x25, 0xa3, 0x7d, 0x28, 0x6b, 0xd5,
+ 0x61, 0xe9, 0x98, 0xd8, 0x25, 0x2e, 0x96, 0xb0, 0xe7, 0xe3, 0x14, 0x0c, 0x55, 0x75, 0x5d, 0xb8,
+ 0x24, 0x4a, 0x2a, 0xe8, 0x59, 0x18, 0x11, 0x61, 0xef, 0x99, 0x51, 0x25, 0x9f, 0x3a, 0xa6, 0x5a,
+ 0xaf, 0x24, 0xc5, 0x58, 0xc7, 0xb1, 0xbf, 0xa7, 0x00, 0xa7, 0x32, 0xac, 0xe2, 0xf9, 0x51, 0xb5,
+ 0xe9, 0x45, 0xb1, 0x4a, 0xa0, 0xa6, 0x1d, 0x55, 0xbc, 0x1c, 0x2b, 0x0c, 0xba, 0x1f, 0xf8, 0x61,
+ 0x98, 0x3e, 0x00, 0x85, 0xd5, 0xa9, 0x80, 0x1e, 0x31, 0x15, 0xd9, 0x25, 0x18, 0x68, 0x47, 0x44,
+ 0x46, 0xb4, 0x53, 0xe7, 0x37, 0xd3, 0xb8, 0x30, 0x08, 0x65, 0x8f, 0x37, 0x95, 0xf2, 0x42, 0x63,
+ 0x8f, 0xb9, 0xfa, 0x82, 0xc3, 0x68, 0xe7, 0x62, 0xe2, 0x3b, 0x7e, 0x2c, 0x98, 0xe8, 0x24, 0x34,
+ 0x13, 0x2b, 0xc5, 0x02, 0x6a, 0x7f, 0xa9, 0x08, 0xe7, 0x72, 0xfd, 0x64, 0x68, 0xd7, 0x77, 0x02,
+ 0xdf, 0x8b, 0x03, 0x65, 0xb3, 0xc0, 0xc3, 0x31, 0x91, 0xd6, 0xd6, 0x8a, 0x28, 0xc7, 0x0a, 0x03,
+ 0x5d, 0x86, 0x41, 0x26, 0x74, 0xea, 0x48, 0x25, 0xb7, 0x50, 0xe1, 0xf1, 0x39, 0x38, 0xb8, 0xef,
+ 0x34, 0x9d, 0x8f, 0xc3, 0x40, 0x2b, 0x08, 0x9a, 0xe9, 0x43, 0x8b, 0x76, 0x37, 0x08, 0x9a, 0x98,
+ 0x01, 0xd1, 0xc7, 0xc4, 0x78, 0xa5, 0x94, 0xf4, 0xd8, 0x71, 0x83, 0x48, 0x1b, 0xb4, 0x27, 0x61,
+ 0x78, 0x9b, 0xec, 0x87, 0x9e, 0xbf, 0x99, 0x36, 0xde, 0xb8, 0xc1, 0x8b, 0xb1, 0x84, 0x9b, 0x59,
+ 0x81, 0x86, 0x8f, 0x3b, 0xbf, 0x66, 0xa9, 0xe7, 0x15, 0xf8, 0xbd, 0x45, 0x98, 0xc0, 0x0b, 0x95,
+ 0x0f, 0x27, 0xe2, 0x76, 0xe7, 0x44, 0x1c, 0x77, 0x7e, 0xcd, 0xde, 0xb3, 0xf1, 0xf3, 0x16, 0x4c,
+ 0xb0, 0xe0, 0xfb, 0x22, 0x90, 0x8f, 0x17, 0xf8, 0x27, 0xc0, 0xe2, 0x3d, 0x0e, 0x83, 0x21, 0x6d,
+ 0x34, 0x9d, 0x43, 0x8e, 0xf5, 0x04, 0x73, 0x18, 0x3a, 0x0f, 0x03, 0xac, 0x0b, 0x74, 0xf2, 0x46,
+ 0x79, 0xfa, 0x9d, 0x8a, 0x13, 0x3b, 0x98, 0x95, 0xb2, 0xe8, 0x14, 0x98, 0xb4, 0x9a, 0x1e, 0xef,
+ 0x74, 0xa2, 0x12, 0xfc, 0x60, 0x44, 0xa7, 0xc8, 0xec, 0xda, 0xfb, 0x8b, 0x4e, 0x91, 0x4d, 0xb2,
+ 0xfb, 0xf3, 0xe9, 0x0f, 0x0a, 0x70, 0x31, 0xb3, 0x5e, 0xdf, 0xd1, 0x29, 0xba, 0xd7, 0x7e, 0x98,
+ 0x41, 0xda, 0x8b, 0x27, 0x68, 0x1a, 0x37, 0xd0, 0x2f, 0x87, 0x39, 0xd8, 0x47, 0xd0, 0x88, 0xcc,
+ 0x21, 0xfb, 0x80, 0x04, 0x8d, 0xc8, 0xec, 0x5b, 0xce, 0xf3, 0xef, 0xcf, 0x0a, 0x39, 0xdf, 0xc2,
+ 0x1e, 0x82, 0x57, 0xe8, 0x39, 0xc3, 0x80, 0x91, 0xe0, 0x98, 0x47, 0xf9, 0x19, 0xc3, 0xcb, 0xb0,
+ 0x82, 0xa2, 0x79, 0x98, 0xd8, 0xf1, 0x7c, 0x7a, 0xf8, 0xec, 0x9b, 0x8c, 0x9f, 0x8a, 0xe9, 0xb3,
+ 0x62, 0x82, 0x71, 0x1a, 0x1f, 0x79, 0x5a, 0x40, 0x89, 0x42, 0x7e, 0x56, 0xe6, 0xdc, 0xde, 0xce,
+ 0x99, 0xea, 0x52, 0x35, 0x8a, 0x19, 0xc1, 0x25, 0x56, 0xb4, 0xf7, 0x7f, 0xb1, 0xff, 0xf7, 0xff,
+ 0x68, 0xf6, 0xdb, 0x7f, 0xe6, 0x55, 0x18, 0x7b, 0x60, 0x81, 0xaf, 0xfd, 0x95, 0x22, 0x3c, 0xda,
+ 0x65, 0xdb, 0xf3, 0xb3, 0xde, 0x98, 0x03, 0xed, 0xac, 0xef, 0x98, 0x87, 0x1a, 0x9c, 0xde, 0x68,
+ 0x37, 0x9b, 0xfb, 0xcc, 0xfa, 0x9c, 0xb8, 0x12, 0x43, 0xf0, 0x94, 0xe7, 0x65, 0xc2, 0xa3, 0xe5,
+ 0x0c, 0x1c, 0x9c, 0x59, 0x93, 0x32, 0xf4, 0xf4, 0x26, 0xd9, 0x57, 0xa4, 0x52, 0x0c, 0x3d, 0xd6,
+ 0x81, 0xd8, 0xc4, 0x45, 0xd7, 0x60, 0xca, 0xd9, 0x75, 0x3c, 0x1e, 0x95, 0x53, 0x12, 0xe0, 0x1c,
+ 0xbd, 0x92, 0xd3, 0xcd, 0xa7, 0x11, 0x70, 0x67, 0x1d, 0xf4, 0x06, 0xa0, 0x40, 0x64, 0x95, 0xbf,
+ 0x46, 0x7c, 0xa1, 0xd5, 0x62, 0x73, 0x57, 0x4c, 0x8e, 0x84, 0x5b, 0x1d, 0x18, 0x38, 0xa3, 0x56,
+ 0x2a, 0x40, 0xc3, 0x50, 0x7e, 0x80, 0x86, 0xee, 0xe7, 0x62, 0xcf, 0xfc, 0x00, 0xff, 0xc5, 0xa2,
+ 0xd7, 0x17, 0x67, 0xf2, 0xcd, 0x38, 0x63, 0xaf, 0x32, 0x83, 0x2e, 0x2e, 0xc3, 0xd3, 0x62, 0x25,
+ 0x9c, 0xd1, 0x0c, 0xba, 0x12, 0x20, 0x36, 0x71, 0xf9, 0x82, 0x88, 0x12, 0x17, 0x3d, 0x83, 0xc5,
+ 0x17, 0xc1, 0x50, 0x14, 0x06, 0xfa, 0x0c, 0x0c, 0xbb, 0xde, 0xae, 0x17, 0x05, 0xa1, 0x58, 0xe9,
+ 0x47, 0x54, 0x17, 0x24, 0xe7, 0x60, 0x85, 0x93, 0xc1, 0x92, 0x9e, 0xfd, 0xbd, 0x05, 0x18, 0x93,
+ 0x2d, 0xbe, 0xd9, 0x0e, 0x62, 0xe7, 0x04, 0xae, 0xe5, 0x6b, 0xc6, 0xb5, 0xfc, 0xb1, 0x6e, 0x11,
+ 0x61, 0x58, 0x97, 0x72, 0xaf, 0xe3, 0x5b, 0xa9, 0xeb, 0xf8, 0x89, 0xde, 0xa4, 0xba, 0x5f, 0xc3,
+ 0xff, 0xc4, 0x82, 0x29, 0x03, 0xff, 0x04, 0x6e, 0x83, 0x65, 0xf3, 0x36, 0x78, 0xac, 0xe7, 0x37,
+ 0xe4, 0xdc, 0x02, 0xdf, 0x59, 0x4c, 0xf5, 0x9d, 0x9d, 0xfe, 0xef, 0xc2, 0xc0, 0x96, 0x13, 0xba,
+ 0xdd, 0x22, 0x60, 0x77, 0x54, 0x9a, 0xbb, 0xee, 0x84, 0x42, 0xad, 0xf7, 0xb4, 0x4a, 0x8a, 0xec,
+ 0x84, 0xbd, 0x55, 0x7a, 0xac, 0x29, 0xf4, 0x32, 0x0c, 0x45, 0x8d, 0xa0, 0xa5, 0xec, 0xc5, 0x2f,
+ 0xf1, 0x84, 0xc9, 0xb4, 0xe4, 0xf0, 0x60, 0x16, 0x99, 0xcd, 0xd1, 0x62, 0x2c, 0xf0, 0xd1, 0x5b,
+ 0x30, 0xc6, 0x7e, 0x29, 0x1b, 0x9b, 0x62, 0x7e, 0xb6, 0x9c, 0xba, 0x8e, 0xc8, 0x0d, 0xd0, 0x8c,
+ 0x22, 0x6c, 0x92, 0x9a, 0xd9, 0x84, 0xb2, 0xfa, 0xac, 0x87, 0xaa, 0x8f, 0xfb, 0xf7, 0x45, 0x38,
+ 0x95, 0xb1, 0xe6, 0x50, 0x64, 0xcc, 0xc4, 0xb3, 0x7d, 0x2e, 0xd5, 0xf7, 0x39, 0x17, 0x11, 0x7b,
+ 0x0d, 0xb9, 0x62, 0x6d, 0xf5, 0xdd, 0xe8, 0xed, 0x88, 0xa4, 0x1b, 0xa5, 0x45, 0xbd, 0x1b, 0xa5,
+ 0x8d, 0x9d, 0xd8, 0x50, 0xd3, 0x86, 0x54, 0x4f, 0x1f, 0xea, 0x9c, 0xfe, 0x71, 0x11, 0x4e, 0x67,
+ 0x05, 0xa9, 0x42, 0xdf, 0x92, 0xca, 0x9c, 0xf6, 0x42, 0xbf, 0xe1, 0xad, 0x78, 0x3a, 0x35, 0x2e,
+ 0x03, 0x5e, 0x98, 0x33, 0x73, 0xa9, 0xf5, 0x1c, 0x66, 0xd1, 0x26, 0x73, 0x3f, 0x0f, 0x79, 0xc6,
+ 0x3b, 0x79, 0x7c, 0x7c, 0xb2, 0xef, 0x0e, 0x88, 0x54, 0x79, 0x51, 0x4a, 0x7f, 0x2f, 0x8b, 0x7b,
+ 0xeb, 0xef, 0x65, 0xcb, 0x33, 0x1e, 0x8c, 0x68, 0x5f, 0xf3, 0x50, 0x67, 0x7c, 0x9b, 0xde, 0x56,
+ 0x5a, 0xbf, 0x1f, 0xea, 0xac, 0xff, 0x90, 0x05, 0x29, 0x6b, 0x68, 0x25, 0x16, 0xb3, 0x72, 0xc5,
+ 0x62, 0x97, 0x60, 0x20, 0x0c, 0x9a, 0x24, 0x9d, 0xa8, 0x0c, 0x07, 0x4d, 0x82, 0x19, 0x84, 0x62,
+ 0xc4, 0x89, 0xb0, 0x63, 0x54, 0x7f, 0xc8, 0x89, 0x27, 0xda, 0xe3, 0x30, 0xd8, 0x24, 0xbb, 0xa4,
+ 0x99, 0xce, 0x27, 0x71, 0x93, 0x16, 0x62, 0x0e, 0xb3, 0x7f, 0x7e, 0x00, 0x2e, 0x74, 0x0d, 0xe0,
+ 0x40, 0x9f, 0x43, 0x9b, 0x4e, 0x4c, 0xf6, 0x9c, 0xfd, 0x74, 0xe0, 0xf7, 0x6b, 0xbc, 0x18, 0x4b,
+ 0x38, 0xf3, 0x57, 0xe1, 0xf1, 0x5b, 0x53, 0x42, 0x44, 0x11, 0xb6, 0x55, 0x40, 0x4d, 0xa1, 0x54,
+ 0xf1, 0x38, 0x84, 0x52, 0xcf, 0x01, 0x44, 0x51, 0x93, 0x1b, 0xbe, 0xb8, 0xc2, 0x11, 0x26, 0x89,
+ 0xf3, 0x5b, 0xbf, 0x29, 0x20, 0x58, 0xc3, 0x42, 0x15, 0x98, 0x6c, 0x85, 0x41, 0xcc, 0x65, 0xb2,
+ 0x15, 0x6e, 0x1b, 0x36, 0x68, 0xfa, 0xce, 0xd7, 0x52, 0x70, 0xdc, 0x51, 0x03, 0xbd, 0x08, 0x23,
+ 0xc2, 0x9f, 0xbe, 0x16, 0x04, 0x4d, 0x21, 0x06, 0x52, 0xe6, 0x52, 0xf5, 0x04, 0x84, 0x75, 0x3c,
+ 0xad, 0x1a, 0x13, 0xf4, 0x0e, 0x67, 0x56, 0xe3, 0xc2, 0x5e, 0x0d, 0x2f, 0x15, 0xb0, 0xae, 0xd4,
+ 0x57, 0xc0, 0xba, 0x44, 0x30, 0x56, 0xee, 0x5b, 0xb7, 0x05, 0x3d, 0x45, 0x49, 0x3f, 0x33, 0x00,
+ 0xa7, 0xc4, 0xc2, 0x79, 0xd8, 0xcb, 0xe5, 0x76, 0xe7, 0x72, 0x39, 0x0e, 0xd1, 0xd9, 0x87, 0x6b,
+ 0xe6, 0xa4, 0xd7, 0xcc, 0xf7, 0x59, 0x60, 0xb2, 0x57, 0xe8, 0xff, 0xcf, 0xcd, 0x9c, 0xf1, 0x62,
+ 0x2e, 0xbb, 0xe6, 0xca, 0x0b, 0xe4, 0x7d, 0xe6, 0xd0, 0xb0, 0xff, 0x93, 0x05, 0x8f, 0xf5, 0xa4,
+ 0x88, 0x96, 0xa0, 0xcc, 0x78, 0x40, 0xed, 0x75, 0xf6, 0x84, 0xb2, 0x1d, 0x95, 0x80, 0x1c, 0x96,
+ 0x34, 0xa9, 0x89, 0x96, 0x3a, 0x52, 0x94, 0x3c, 0x99, 0x91, 0xa2, 0xe4, 0x8c, 0x31, 0x3c, 0x0f,
+ 0x98, 0xa3, 0xe4, 0x97, 0x8a, 0x30, 0xc4, 0x57, 0xfc, 0x09, 0x3c, 0xc3, 0x96, 0x85, 0xdc, 0xb6,
+ 0x4b, 0x44, 0x3c, 0xde, 0x97, 0xb9, 0x8a, 0x13, 0x3b, 0x9c, 0x4d, 0x50, 0xb7, 0x55, 0x22, 0xe1,
+ 0x45, 0x9f, 0x03, 0x88, 0xe2, 0xd0, 0xf3, 0x37, 0x69, 0x99, 0x88, 0x95, 0xf8, 0xf1, 0x2e, 0xd4,
+ 0xea, 0x0a, 0x99, 0xd3, 0x4c, 0x76, 0xae, 0x02, 0x60, 0x8d, 0x22, 0x9a, 0x33, 0xee, 0xcb, 0x99,
+ 0x94, 0xe0, 0x13, 0x38, 0xd5, 0xe4, 0xf6, 0x9c, 0x79, 0x09, 0xca, 0x8a, 0x78, 0x2f, 0x29, 0xce,
+ 0xa8, 0xce, 0x5c, 0x7c, 0x0a, 0x26, 0x52, 0x7d, 0x3b, 0x92, 0x10, 0xe8, 0x17, 0x2c, 0x98, 0xe0,
+ 0x9d, 0x59, 0xf2, 0x77, 0xc5, 0x99, 0xfa, 0x1e, 0x9c, 0x6e, 0x66, 0x9c, 0x6d, 0x62, 0x46, 0xfb,
+ 0x3f, 0x0b, 0x95, 0xd0, 0x27, 0x0b, 0x8a, 0x33, 0xdb, 0x40, 0x57, 0xe8, 0xba, 0xa5, 0x67, 0x97,
+ 0xd3, 0x14, 0x6e, 0x8d, 0xa3, 0x7c, 0xcd, 0xf2, 0x32, 0xac, 0xa0, 0xf6, 0x6f, 0x59, 0x30, 0xc5,
+ 0x7b, 0x7e, 0x83, 0xec, 0xab, 0x1d, 0xfe, 0xb5, 0xec, 0xbb, 0xc8, 0x1a, 0x54, 0xc8, 0xc9, 0x1a,
+ 0xa4, 0x7f, 0x5a, 0xb1, 0xeb, 0xa7, 0xfd, 0x94, 0x05, 0x62, 0x85, 0x9c, 0xc0, 0x53, 0xfe, 0x1b,
+ 0xcd, 0xa7, 0xfc, 0x4c, 0xfe, 0x26, 0xc8, 0x79, 0xc3, 0xff, 0xa9, 0x05, 0x93, 0x1c, 0x21, 0xd1,
+ 0x39, 0x7f, 0x4d, 0xe7, 0xa1, 0x9f, 0xdc, 0xa2, 0x37, 0xc8, 0xfe, 0x5a, 0x50, 0x73, 0xe2, 0xad,
+ 0xec, 0x8f, 0x32, 0x26, 0x6b, 0xa0, 0xeb, 0x64, 0xb9, 0x72, 0x03, 0x1d, 0x21, 0x61, 0xf1, 0x91,
+ 0x83, 0xea, 0xdb, 0x5f, 0xb5, 0x00, 0xf1, 0x66, 0x0c, 0xf6, 0x87, 0x32, 0x15, 0xac, 0x54, 0xbb,
+ 0x2e, 0x92, 0xa3, 0x49, 0x41, 0xb0, 0x86, 0x75, 0x2c, 0xc3, 0x93, 0x32, 0x1c, 0x28, 0xf6, 0x36,
+ 0x1c, 0x38, 0xc2, 0x88, 0xfe, 0xfe, 0x20, 0xa4, 0x3d, 0x40, 0xd0, 0x1d, 0x18, 0x6d, 0x38, 0x2d,
+ 0x67, 0xdd, 0x6b, 0x7a, 0xb1, 0x47, 0xa2, 0x6e, 0x16, 0x47, 0x8b, 0x1a, 0x9e, 0x50, 0xf5, 0x6a,
+ 0x25, 0xd8, 0xa0, 0x83, 0xe6, 0x00, 0x5a, 0xa1, 0xb7, 0xeb, 0x35, 0xc9, 0x26, 0x93, 0x38, 0x30,
+ 0x47, 0x6a, 0x6e, 0x46, 0x23, 0x4b, 0xb1, 0x86, 0x91, 0xe1, 0xa9, 0x5a, 0x7c, 0xc8, 0x9e, 0xaa,
+ 0x70, 0x62, 0x9e, 0xaa, 0x03, 0x47, 0xf2, 0x54, 0x2d, 0x1d, 0xd9, 0x53, 0x75, 0xb0, 0x2f, 0x4f,
+ 0x55, 0x0c, 0x67, 0x25, 0x07, 0x47, 0xff, 0x2f, 0x7b, 0x4d, 0x22, 0xd8, 0x76, 0xee, 0xfd, 0x3d,
+ 0x73, 0xff, 0x60, 0xf6, 0x2c, 0xce, 0xc4, 0xc0, 0x39, 0x35, 0xd1, 0xa7, 0x61, 0xda, 0x69, 0x36,
+ 0x83, 0x3d, 0x35, 0xa9, 0x4b, 0x51, 0xc3, 0x69, 0x72, 0x51, 0xfe, 0x30, 0xa3, 0x7a, 0xfe, 0xfe,
+ 0xc1, 0xec, 0xf4, 0x7c, 0x0e, 0x0e, 0xce, 0xad, 0x8d, 0x5e, 0x83, 0x72, 0x2b, 0x0c, 0x1a, 0x2b,
+ 0x9a, 0x9b, 0xda, 0x45, 0x3a, 0x80, 0x35, 0x59, 0x78, 0x78, 0x30, 0x3b, 0xa6, 0xfe, 0xb0, 0x0b,
+ 0x3f, 0xa9, 0x60, 0x6f, 0xc3, 0xa9, 0x3a, 0x09, 0x3d, 0x96, 0x7e, 0xd8, 0x4d, 0xce, 0x8f, 0x35,
+ 0x28, 0x87, 0xa9, 0x13, 0xb3, 0xaf, 0x28, 0x72, 0x5a, 0xf4, 0x71, 0x79, 0x42, 0x26, 0x84, 0xec,
+ 0xff, 0x6d, 0xc1, 0xb0, 0xf0, 0xc8, 0x38, 0x01, 0x46, 0x6d, 0xde, 0x90, 0x97, 0xcf, 0x66, 0xdf,
+ 0x2a, 0xac, 0x33, 0xb9, 0x92, 0xf2, 0x6a, 0x4a, 0x52, 0xfe, 0x58, 0x37, 0x22, 0xdd, 0x65, 0xe4,
+ 0x7f, 0xa3, 0x08, 0xe3, 0xa6, 0xeb, 0xde, 0x09, 0x0c, 0xc1, 0x2a, 0x0c, 0x47, 0xc2, 0x37, 0xad,
+ 0x90, 0x6f, 0x91, 0x9d, 0x9e, 0xc4, 0xc4, 0x5a, 0x4b, 0x78, 0xa3, 0x49, 0x22, 0x99, 0x4e, 0x6f,
+ 0xc5, 0x87, 0xe8, 0xf4, 0xd6, 0xcb, 0x7b, 0x72, 0xe0, 0x38, 0xbc, 0x27, 0xed, 0x2f, 0xb3, 0x9b,
+ 0x4d, 0x2f, 0x3f, 0x01, 0xa6, 0xe7, 0x9a, 0x79, 0x07, 0xda, 0x5d, 0x56, 0x96, 0xe8, 0x54, 0x0e,
+ 0xf3, 0xf3, 0x73, 0x16, 0x5c, 0xc8, 0xf8, 0x2a, 0x8d, 0x13, 0x7a, 0x1a, 0x4a, 0x4e, 0xdb, 0xf5,
+ 0xd4, 0x5e, 0xd6, 0xb4, 0x66, 0xf3, 0xa2, 0x1c, 0x2b, 0x0c, 0xb4, 0x08, 0x53, 0xe4, 0x5e, 0xcb,
+ 0xe3, 0x0a, 0x43, 0xdd, 0xa4, 0xb2, 0xc8, 0x23, 0x6b, 0x2f, 0xa5, 0x81, 0xb8, 0x13, 0x5f, 0x05,
+ 0x7b, 0x28, 0xe6, 0x06, 0x7b, 0xf8, 0xfb, 0x16, 0x8c, 0x28, 0xef, 0xac, 0x87, 0x3e, 0xda, 0xdf,
+ 0x64, 0x8e, 0xf6, 0xa3, 0x5d, 0x46, 0x3b, 0x67, 0x98, 0xff, 0x56, 0x41, 0xf5, 0xb7, 0x16, 0x84,
+ 0x71, 0x1f, 0x1c, 0xd6, 0xcb, 0x50, 0x6a, 0x85, 0x41, 0x1c, 0x34, 0x82, 0xa6, 0x60, 0xb0, 0xce,
+ 0x27, 0x51, 0x4f, 0x78, 0xf9, 0xa1, 0xf6, 0x1b, 0x2b, 0x6c, 0x36, 0x7a, 0x41, 0x18, 0x0b, 0xa6,
+ 0x26, 0x19, 0xbd, 0x20, 0x8c, 0x31, 0x83, 0x20, 0x17, 0x20, 0x76, 0xc2, 0x4d, 0x12, 0xd3, 0x32,
+ 0x11, 0x65, 0x29, 0xff, 0xf0, 0x68, 0xc7, 0x5e, 0x73, 0xce, 0xf3, 0xe3, 0x28, 0x0e, 0xe7, 0xaa,
+ 0x7e, 0x7c, 0x2b, 0xe4, 0xef, 0x35, 0x2d, 0x8c, 0x89, 0xa2, 0x85, 0x35, 0xba, 0xd2, 0xad, 0x98,
+ 0xb5, 0x31, 0x68, 0xea, 0xdf, 0x57, 0x45, 0x39, 0x56, 0x18, 0xf6, 0x4b, 0xec, 0x2a, 0x61, 0x03,
+ 0x74, 0xb4, 0xb8, 0x1f, 0xdf, 0x51, 0x56, 0x43, 0xcb, 0x94, 0x6f, 0x15, 0x3d, 0xba, 0x48, 0xf7,
+ 0x93, 0x9b, 0x36, 0xac, 0xbb, 0x18, 0x25, 0x21, 0x48, 0xd0, 0x37, 0x77, 0xd8, 0x54, 0x3c, 0xd3,
+ 0xe3, 0x0a, 0x38, 0x82, 0x15, 0x05, 0x8b, 0xf6, 0xcf, 0x62, 0xa1, 0x57, 0x6b, 0x62, 0x91, 0x6b,
+ 0xd1, 0xfe, 0x05, 0x00, 0x27, 0x38, 0xe8, 0xaa, 0x78, 0x8d, 0x73, 0xd1, 0xf4, 0xa3, 0xa9, 0xd7,
+ 0xb8, 0xfc, 0x7c, 0x4d, 0x98, 0xfd, 0x2c, 0x8c, 0xa8, 0x5c, 0x97, 0x35, 0x9e, 0x42, 0x51, 0xc4,
+ 0x9c, 0x5a, 0x4a, 0x8a, 0xb1, 0x8e, 0x83, 0xd6, 0x60, 0x22, 0xe2, 0xa2, 0x1e, 0x15, 0x5a, 0x94,
+ 0x8b, 0xcc, 0x3e, 0x2e, 0x0d, 0x51, 0xea, 0x26, 0xf8, 0x90, 0x15, 0xf1, 0xa3, 0x43, 0xba, 0xf2,
+ 0xa6, 0x49, 0xa0, 0xd7, 0x61, 0xbc, 0x19, 0x38, 0xee, 0x82, 0xd3, 0x74, 0xfc, 0x06, 0xfb, 0xde,
+ 0x92, 0x99, 0x32, 0xed, 0xa6, 0x01, 0xc5, 0x29, 0x6c, 0xca, 0xf9, 0xe8, 0x25, 0x22, 0x1c, 0xae,
+ 0xe3, 0x6f, 0x92, 0x48, 0x64, 0x2e, 0x64, 0x9c, 0xcf, 0xcd, 0x1c, 0x1c, 0x9c, 0x5b, 0x1b, 0xbd,
+ 0x0c, 0xa3, 0xf2, 0xf3, 0x35, 0xcf, 0xf7, 0xc4, 0xf6, 0x5e, 0x83, 0x61, 0x03, 0x13, 0xed, 0xc1,
+ 0x19, 0xf9, 0x7f, 0x2d, 0x74, 0x36, 0x36, 0xbc, 0x86, 0x70, 0x07, 0xe5, 0x8e, 0x71, 0xf3, 0xd2,
+ 0x7b, 0x6b, 0x29, 0x0b, 0xe9, 0xf0, 0x60, 0xf6, 0x92, 0x18, 0xb5, 0x4c, 0x38, 0x9b, 0xc4, 0x6c,
+ 0xfa, 0x68, 0x05, 0x4e, 0x6d, 0x11, 0xa7, 0x19, 0x6f, 0x2d, 0x6e, 0x91, 0xc6, 0xb6, 0xdc, 0x44,
+ 0xcc, 0x9f, 0x5e, 0xb3, 0x58, 0xbf, 0xde, 0x89, 0x82, 0xb3, 0xea, 0xa1, 0xb7, 0x61, 0xba, 0xd5,
+ 0x5e, 0x6f, 0x7a, 0xd1, 0xd6, 0x6a, 0x10, 0x33, 0x6b, 0x14, 0x95, 0x3a, 0x53, 0x38, 0xde, 0xab,
+ 0x88, 0x05, 0xb5, 0x1c, 0x3c, 0x9c, 0x4b, 0x01, 0xbd, 0x07, 0x67, 0x52, 0x8b, 0x41, 0xb8, 0x1e,
+ 0x8f, 0xe7, 0x07, 0x17, 0xaf, 0x67, 0x55, 0x10, 0x5e, 0xfc, 0x59, 0x20, 0x9c, 0xdd, 0x04, 0x7a,
+ 0x01, 0x4a, 0x5e, 0x6b, 0xd9, 0xd9, 0xf1, 0x9a, 0xfb, 0x2c, 0x3a, 0x7a, 0x99, 0x45, 0x0c, 0x2f,
+ 0x55, 0x6b, 0xbc, 0xec, 0x50, 0xfb, 0x8d, 0x15, 0xe6, 0xfb, 0xb3, 0x46, 0x7a, 0x97, 0x56, 0xd6,
+ 0x58, 0x39, 0xf4, 0x79, 0x18, 0xd5, 0xd7, 0x9e, 0xb8, 0x96, 0x2e, 0x67, 0x73, 0x3a, 0xda, 0x1a,
+ 0xe5, 0x8c, 0xa0, 0x5a, 0x87, 0x3a, 0x0c, 0x1b, 0x14, 0x6d, 0x02, 0xd9, 0xa3, 0x82, 0x6e, 0x42,
+ 0xa9, 0xd1, 0xf4, 0x88, 0x1f, 0x57, 0x6b, 0xdd, 0x02, 0x11, 0x2d, 0x0a, 0x1c, 0x31, 0xcc, 0x22,
+ 0x86, 0x33, 0x2f, 0xc3, 0x8a, 0x82, 0xfd, 0x2b, 0x05, 0x98, 0xed, 0x11, 0x10, 0x3c, 0x25, 0x34,
+ 0xb7, 0xfa, 0x12, 0x9a, 0xcf, 0xcb, 0xf4, 0xa1, 0xab, 0x29, 0x49, 0x42, 0x2a, 0x35, 0x68, 0x22,
+ 0x4f, 0x48, 0xe3, 0xf7, 0x6d, 0xc4, 0xac, 0xcb, 0xdd, 0x07, 0x7a, 0x9a, 0xe1, 0x1b, 0xfa, 0xb6,
+ 0xc1, 0xfe, 0x9f, 0x2f, 0xb9, 0xba, 0x13, 0xfb, 0xcb, 0x05, 0x38, 0xa3, 0x86, 0xf0, 0xeb, 0x77,
+ 0xe0, 0x6e, 0x77, 0x0e, 0xdc, 0x31, 0x68, 0x9e, 0xec, 0x5b, 0x30, 0xc4, 0x23, 0x2b, 0xf5, 0xc1,
+ 0x36, 0x3d, 0x6e, 0x06, 0x21, 0x54, 0x97, 0xbb, 0x11, 0x88, 0xf0, 0xbb, 0x2d, 0x98, 0x58, 0x5b,
+ 0xac, 0xd5, 0x83, 0xc6, 0x36, 0x89, 0xe7, 0x39, 0x9b, 0x8b, 0x05, 0xd7, 0x64, 0x3d, 0x20, 0x37,
+ 0x94, 0xc5, 0x67, 0x5d, 0x82, 0x81, 0xad, 0x20, 0x8a, 0xd3, 0x6a, 0xe9, 0xeb, 0x41, 0x14, 0x63,
+ 0x06, 0xb1, 0x7f, 0xdb, 0x82, 0x41, 0x96, 0x30, 0xbb, 0x57, 0xca, 0xf6, 0x7e, 0xbe, 0x0b, 0xbd,
+ 0x08, 0x43, 0x64, 0x63, 0x83, 0x34, 0x62, 0x31, 0xab, 0xd2, 0x8f, 0x78, 0x68, 0x89, 0x95, 0x52,
+ 0x56, 0x81, 0x35, 0xc6, 0xff, 0x62, 0x81, 0x8c, 0xee, 0x42, 0x39, 0xf6, 0x76, 0xc8, 0xbc, 0xeb,
+ 0x0a, 0xc5, 0xde, 0x03, 0xf8, 0x42, 0xaf, 0x49, 0x02, 0x38, 0xa1, 0x65, 0x7f, 0xa9, 0x00, 0x90,
+ 0xc4, 0xd5, 0xe8, 0xf5, 0x89, 0x0b, 0x1d, 0x2a, 0x9f, 0xcb, 0x19, 0x2a, 0x1f, 0x94, 0x10, 0xcc,
+ 0xd0, 0xf7, 0xa8, 0x61, 0x2a, 0xf6, 0x35, 0x4c, 0x03, 0x47, 0x19, 0xa6, 0x45, 0x98, 0x4a, 0xe2,
+ 0x82, 0x98, 0x61, 0x91, 0xd8, 0xd3, 0x66, 0x2d, 0x0d, 0xc4, 0x9d, 0xf8, 0x36, 0x81, 0x4b, 0x2a,
+ 0x3c, 0x82, 0xb8, 0x6b, 0x98, 0xdd, 0xe8, 0x11, 0xb2, 0xf7, 0x27, 0x3a, 0xad, 0x42, 0xae, 0x4e,
+ 0xeb, 0x47, 0x2d, 0x38, 0x9d, 0x6e, 0x87, 0x39, 0xf2, 0x7d, 0xd1, 0x82, 0x33, 0x4c, 0xb3, 0xc7,
+ 0x5a, 0xed, 0xd4, 0x23, 0xbe, 0xd0, 0x35, 0xe4, 0x43, 0x4e, 0x8f, 0x13, 0x87, 0xf5, 0x95, 0x2c,
+ 0xd2, 0x38, 0xbb, 0x45, 0xfb, 0x3f, 0x16, 0x60, 0x3a, 0x2f, 0x56, 0x04, 0x33, 0x2b, 0x77, 0xee,
+ 0xd5, 0xb7, 0xc9, 0x9e, 0x30, 0xde, 0x4d, 0xcc, 0xca, 0x79, 0x31, 0x96, 0xf0, 0x74, 0x8c, 0xe7,
+ 0x42, 0x7f, 0x31, 0x9e, 0xd1, 0x16, 0x4c, 0xed, 0x6d, 0x11, 0xff, 0xb6, 0x1f, 0x39, 0xb1, 0x17,
+ 0x6d, 0x78, 0x2c, 0xf7, 0x3a, 0x5f, 0x37, 0xaf, 0x48, 0x13, 0xdb, 0xbb, 0x69, 0x84, 0xc3, 0x83,
+ 0xd9, 0x0b, 0x46, 0x41, 0xd2, 0x65, 0x7e, 0x90, 0xe0, 0x4e, 0xa2, 0x9d, 0x21, 0xb2, 0x07, 0x1e,
+ 0x62, 0x88, 0x6c, 0xfb, 0x8b, 0x16, 0x9c, 0xcb, 0x4d, 0x61, 0x87, 0xae, 0x40, 0xc9, 0x69, 0x79,
+ 0x5c, 0x04, 0x2a, 0x8e, 0x51, 0xf6, 0x94, 0xaf, 0x55, 0xb9, 0x00, 0x54, 0x41, 0x55, 0x6a, 0xdd,
+ 0x42, 0x6e, 0x6a, 0xdd, 0x9e, 0x99, 0x72, 0xed, 0xef, 0xb2, 0x40, 0xb8, 0xc4, 0xf5, 0x71, 0x76,
+ 0xbf, 0x25, 0x33, 0x93, 0x1b, 0x69, 0x34, 0x2e, 0xe5, 0xfb, 0x08, 0x8a, 0xe4, 0x19, 0x8a, 0x57,
+ 0x32, 0x52, 0x66, 0x18, 0xb4, 0x6c, 0x17, 0x04, 0xb4, 0x42, 0x98, 0x00, 0xb1, 0x77, 0x6f, 0x9e,
+ 0x03, 0x70, 0x19, 0xae, 0x96, 0x9f, 0x58, 0xdd, 0xcc, 0x15, 0x05, 0xc1, 0x1a, 0x96, 0xfd, 0x6f,
+ 0x0b, 0x30, 0x22, 0xd3, 0x36, 0xb4, 0xfd, 0x7e, 0x9e, 0xf9, 0x47, 0xca, 0xe3, 0xc6, 0x12, 0x7a,
+ 0x53, 0xc2, 0xb5, 0x44, 0x3a, 0x92, 0x24, 0xf4, 0x96, 0x00, 0x9c, 0xe0, 0xd0, 0x5d, 0x14, 0xb5,
+ 0xd7, 0x19, 0x7a, 0xca, 0x81, 0xab, 0xce, 0x8b, 0xb1, 0x84, 0xa3, 0x4f, 0xc3, 0x24, 0xaf, 0x17,
+ 0x06, 0x2d, 0x67, 0x93, 0xcb, 0x96, 0x07, 0x95, 0xe7, 0xf5, 0xe4, 0x4a, 0x0a, 0x76, 0x78, 0x30,
+ 0x7b, 0x3a, 0x5d, 0xc6, 0x94, 0x26, 0x1d, 0x54, 0x98, 0x21, 0x06, 0x6f, 0x84, 0xee, 0xfe, 0x0e,
+ 0xfb, 0x8d, 0x04, 0x84, 0x75, 0x3c, 0xfb, 0xf3, 0x80, 0x3a, 0x13, 0x58, 0xa0, 0x37, 0xb8, 0xf5,
+ 0x9d, 0x17, 0x12, 0xb7, 0x9b, 0x12, 0x45, 0xf7, 0x2f, 0x96, 0xbe, 0x17, 0xbc, 0x16, 0x56, 0xf5,
+ 0xed, 0xbf, 0x5c, 0x84, 0xc9, 0xb4, 0xb7, 0x29, 0xba, 0x0e, 0x43, 0x9c, 0xf5, 0x10, 0xe4, 0xbb,
+ 0xe8, 0xe8, 0x35, 0x1f, 0x55, 0x76, 0x08, 0x0b, 0xee, 0x45, 0xd4, 0x47, 0x6f, 0xc3, 0x88, 0x1b,
+ 0xec, 0xf9, 0x7b, 0x4e, 0xe8, 0xce, 0xd7, 0xaa, 0x62, 0x39, 0x67, 0xbe, 0x7b, 0x2a, 0x09, 0x9a,
+ 0xee, 0xf7, 0xca, 0xf4, 0x51, 0x09, 0x08, 0xeb, 0xe4, 0xd0, 0x1a, 0x8b, 0xb7, 0xbb, 0xe1, 0x6d,
+ 0xae, 0x38, 0xad, 0x6e, 0xa6, 0xd8, 0x8b, 0x12, 0x49, 0xa3, 0x3c, 0x26, 0x82, 0xf2, 0x72, 0x00,
+ 0x4e, 0x08, 0xa1, 0x6f, 0x81, 0x53, 0x51, 0x8e, 0xa8, 0x34, 0x2f, 0x9f, 0x51, 0x37, 0xe9, 0xe1,
+ 0xc2, 0x23, 0xf4, 0x45, 0x9a, 0x25, 0x54, 0xcd, 0x6a, 0xc6, 0xfe, 0xd5, 0x53, 0x60, 0x6c, 0x62,
+ 0x23, 0xbd, 0x9d, 0x75, 0x4c, 0xe9, 0xed, 0x30, 0x94, 0xc8, 0x4e, 0x2b, 0xde, 0xaf, 0x78, 0x61,
+ 0xb7, 0xfc, 0xa8, 0x4b, 0x02, 0xa7, 0x93, 0xa6, 0x84, 0x60, 0x45, 0x27, 0x3b, 0x07, 0x61, 0xf1,
+ 0x6b, 0x98, 0x83, 0x70, 0xe0, 0x04, 0x73, 0x10, 0xae, 0xc2, 0xf0, 0xa6, 0x17, 0x63, 0xd2, 0x0a,
+ 0x04, 0xd3, 0x9f, 0xb9, 0x0e, 0xaf, 0x71, 0x94, 0xce, 0x6c, 0x57, 0x02, 0x80, 0x25, 0x11, 0xf4,
+ 0x86, 0xda, 0x81, 0x43, 0xf9, 0x6f, 0xe6, 0x4e, 0x65, 0x72, 0xe6, 0x1e, 0x14, 0x99, 0x06, 0x87,
+ 0x1f, 0x34, 0xd3, 0xe0, 0xb2, 0xcc, 0x0f, 0x58, 0xca, 0xf7, 0x9b, 0x60, 0xe9, 0xff, 0x7a, 0x64,
+ 0x05, 0xbc, 0xa3, 0xe7, 0x54, 0x2c, 0xe7, 0x9f, 0x04, 0x2a, 0x5d, 0x62, 0x9f, 0x99, 0x14, 0xbf,
+ 0xcb, 0x82, 0x33, 0xad, 0xac, 0xf4, 0xa2, 0x42, 0xef, 0xfa, 0x62, 0xdf, 0xf9, 0x53, 0x8d, 0x06,
+ 0x99, 0xc8, 0x25, 0x13, 0x0d, 0x67, 0x37, 0x47, 0x07, 0x3a, 0x5c, 0x77, 0x45, 0x2a, 0xc0, 0xc7,
+ 0x73, 0x52, 0x32, 0x76, 0x49, 0xc4, 0xb8, 0x96, 0x91, 0xfe, 0xef, 0xa3, 0x79, 0xe9, 0xff, 0xfa,
+ 0x4e, 0xfa, 0xf7, 0x86, 0x4a, 0xc6, 0x38, 0x96, 0xbf, 0x94, 0x78, 0xaa, 0xc5, 0x9e, 0x29, 0x18,
+ 0xdf, 0x50, 0x29, 0x18, 0xbb, 0x44, 0x84, 0xe4, 0x09, 0x16, 0x7b, 0x26, 0x5e, 0xd4, 0x92, 0x27,
+ 0x4e, 0x1c, 0x4f, 0xf2, 0x44, 0xe3, 0xaa, 0xe1, 0xf9, 0xfb, 0x9e, 0xea, 0x71, 0xd5, 0x18, 0x74,
+ 0xbb, 0x5f, 0x36, 0x3c, 0x51, 0xe4, 0xd4, 0x03, 0x25, 0x8a, 0xbc, 0xa3, 0x27, 0x5e, 0x44, 0x3d,
+ 0x32, 0x0b, 0x52, 0xa4, 0x3e, 0xd3, 0x2d, 0xde, 0xd1, 0x2f, 0xc0, 0x53, 0xf9, 0x74, 0xd5, 0x3d,
+ 0xd7, 0x49, 0x37, 0xf3, 0x0a, 0xec, 0x48, 0xe3, 0x78, 0xfa, 0x64, 0xd2, 0x38, 0x9e, 0x39, 0xf6,
+ 0x34, 0x8e, 0x67, 0x4f, 0x20, 0x8d, 0xe3, 0x23, 0x27, 0x98, 0xc6, 0xf1, 0x0e, 0x33, 0x56, 0xe0,
+ 0x81, 0x45, 0x44, 0x04, 0xcb, 0xec, 0x68, 0x89, 0x59, 0xd1, 0x47, 0xf8, 0xc7, 0x29, 0x10, 0x4e,
+ 0x48, 0x65, 0xa4, 0x87, 0x9c, 0x7e, 0x08, 0xe9, 0x21, 0x57, 0x93, 0xf4, 0x90, 0xe7, 0xf2, 0xa7,
+ 0x3a, 0xc3, 0x48, 0x3c, 0x27, 0x29, 0xe4, 0x1d, 0x3d, 0x99, 0xe3, 0xa3, 0x5d, 0x84, 0xea, 0x59,
+ 0x82, 0xc7, 0x2e, 0x29, 0x1c, 0x5f, 0xe7, 0x29, 0x1c, 0xcf, 0xe7, 0x9f, 0xe4, 0xe9, 0xeb, 0xce,
+ 0x4c, 0xdc, 0xf8, 0x3d, 0x05, 0xb8, 0xd8, 0x7d, 0x5f, 0x24, 0x52, 0xcf, 0x5a, 0xa2, 0xdb, 0x4b,
+ 0x49, 0x3d, 0xf9, 0xdb, 0x2a, 0xc1, 0xea, 0x3b, 0xe6, 0xd4, 0x35, 0x98, 0x52, 0x56, 0xe0, 0x4d,
+ 0xaf, 0xb1, 0xaf, 0xe5, 0xaa, 0x57, 0x9e, 0xb3, 0xf5, 0x34, 0x02, 0xee, 0xac, 0x83, 0xe6, 0x61,
+ 0xc2, 0x28, 0xac, 0x56, 0xc4, 0x1b, 0x4a, 0x89, 0x59, 0xeb, 0x26, 0x18, 0xa7, 0xf1, 0xed, 0x9f,
+ 0xb4, 0xe0, 0x91, 0x9c, 0x0c, 0x49, 0x7d, 0x87, 0x54, 0xda, 0x80, 0x89, 0x96, 0x59, 0xb5, 0x47,
+ 0xe4, 0x35, 0x23, 0x0f, 0x93, 0xea, 0x6b, 0x0a, 0x80, 0xd3, 0x44, 0xed, 0x3f, 0xb1, 0xe0, 0x42,
+ 0x57, 0x83, 0x2c, 0x84, 0xe1, 0xec, 0xe6, 0x4e, 0xe4, 0x2c, 0x86, 0xc4, 0x25, 0x7e, 0xec, 0x39,
+ 0xcd, 0x7a, 0x8b, 0x34, 0x34, 0xb9, 0x35, 0xb3, 0x6c, 0xba, 0xb6, 0x52, 0x9f, 0xef, 0xc4, 0xc0,
+ 0x39, 0x35, 0xd1, 0x32, 0xa0, 0x4e, 0x88, 0x98, 0x61, 0x16, 0x9d, 0xb5, 0x93, 0x1e, 0xce, 0xa8,
+ 0x81, 0x5e, 0x82, 0x31, 0x65, 0xe8, 0xa5, 0xcd, 0x38, 0x3b, 0x80, 0xb1, 0x0e, 0xc0, 0x26, 0xde,
+ 0xc2, 0x95, 0x5f, 0xff, 0xdd, 0x8b, 0x1f, 0xf9, 0xcd, 0xdf, 0xbd, 0xf8, 0x91, 0xdf, 0xfa, 0xdd,
+ 0x8b, 0x1f, 0xf9, 0xb6, 0xfb, 0x17, 0xad, 0x5f, 0xbf, 0x7f, 0xd1, 0xfa, 0xcd, 0xfb, 0x17, 0xad,
+ 0xdf, 0xba, 0x7f, 0xd1, 0xfa, 0x9d, 0xfb, 0x17, 0xad, 0x2f, 0xfd, 0xde, 0xc5, 0x8f, 0xbc, 0x55,
+ 0xd8, 0x7d, 0xf6, 0xff, 0x05, 0x00, 0x00, 0xff, 0xff, 0xb2, 0x8d, 0x5d, 0x90, 0x7d, 0xfc, 0x00,
+ 0x00,
}
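// The hex block that ends above is the compressed FileDescriptorProto that
// gogo/protobuf embeds in every generated file; it is opaque data, not
// hand-written code. A minimal sketch of how such a blob could be inflated
// back into raw descriptor bytes, assuming it is gzip-compressed as gogo
// normally emits (the function name and the imports "bytes", "compress/gzip"
// and "io/ioutil" are illustrative additions, not part of the generated file):
func decodeCompressedDescriptorSketch(blob []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(blob))
	if err != nil {
		return nil, err // not a gzip stream
	}
	defer zr.Close()
	// The result is a wire-format descriptor.FileDescriptorProto.
	return ioutil.ReadAll(zr)
}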
func (m *AWSElasticBlockStoreVolumeSource) Marshal() (dAtA []byte, err error) {
@@ -8238,6 +8276,20 @@ func (m *Container) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ if m.StartupProbe != nil {
+ {
+ size, err := m.StartupProbe.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xb2
+ }
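+ // The two bytes above are the protobuf key for StartupProbe written
+ // back-to-front: field 22 with wire type 2 gives (22<<3)|2 = 0x1b2, whose
+ // little-endian varint is 0xb2, 0x01. Because the buffer is filled from the
+ // end, 0x01 is stored first and 0xb2 second, so a forward read sees the
+ // correct byte order.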
if len(m.VolumeDevices) > 0 {
for iNdEx := len(m.VolumeDevices) - 1; iNdEx >= 0; iNdEx-- {
{
@@ -8741,6 +8793,16 @@ func (m *ContainerStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ if m.Started != nil {
+ i--
+ if *m.Started {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x48
+ }
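+ // Started is field 9 with wire type 0 (varint): the key is (9<<3)|0 = 0x48
+ // and the bool payload is a single 0/1 byte, so the whole field costs
+ // exactly two bytes — matching the `n += 2` this change adds to
+ // ContainerStatus.Size() further down.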
i -= len(m.ContainerID)
copy(dAtA[i:], m.ContainerID)
i = encodeVarintGenerated(dAtA, i, uint64(len(m.ContainerID)))
@@ -9470,6 +9532,20 @@ func (m *EphemeralContainerCommon) MarshalToSizedBuffer(dAtA []byte) (int, error
_ = i
var l int
_ = l
+ if m.StartupProbe != nil {
+ {
+ size, err := m.StartupProbe.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xb2
+ }
if len(m.VolumeDevices) > 0 {
for iNdEx := len(m.VolumeDevices) - 1; iNdEx >= 0; iNdEx-- {
{
@@ -11523,6 +11599,59 @@ func (m *Namespace) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
+func (m *NamespaceCondition) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *NamespaceCondition) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *NamespaceCondition) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ i -= len(m.Message)
+ copy(dAtA[i:], m.Message)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.Message)))
+ i--
+ dAtA[i] = 0x32
+ i -= len(m.Reason)
+ copy(dAtA[i:], m.Reason)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.Reason)))
+ i--
+ dAtA[i] = 0x2a
+ {
+ size, err := m.LastTransitionTime.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ i -= len(m.Status)
+ copy(dAtA[i:], m.Status)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.Status)))
+ i--
+ dAtA[i] = 0x12
+ i -= len(m.Type)
+ copy(dAtA[i:], m.Type)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.Type)))
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
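+ // Field keys in NamespaceCondition above, decoded as fieldNum = key>>3 and
+ // wireType = key&7: 0xa→1 (Type), 0x12→2 (Status), 0x22→4
+ // (LastTransitionTime), 0x2a→5 (Reason), 0x32→6 (Message). Field 3 is never
+ // written, which suggests the .proto skips or reserves that number. Fields
+ // appear in descending order because MarshalToSizedBuffer fills dAtA from
+ // the end of the slice.
+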
func (m *NamespaceList) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -11622,6 +11751,20 @@ func (m *NamespaceStatus) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ if len(m.Conditions) > 0 {
+ for iNdEx := len(m.Conditions) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Conditions[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
i -= len(m.Phase)
copy(dAtA[i:], m.Phase)
i = encodeVarintGenerated(dAtA, i, uint64(len(m.Phase)))
@@ -12285,9 +12428,9 @@ func (m *NodeSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i = encodeVarintGenerated(dAtA, i, uint64(len(m.ProviderID)))
i--
dAtA[i] = 0x1a
- i -= len(m.DoNotUse_ExternalID)
- copy(dAtA[i:], m.DoNotUse_ExternalID)
- i = encodeVarintGenerated(dAtA, i, uint64(len(m.DoNotUse_ExternalID)))
+ i -= len(m.DoNotUseExternalID)
+ copy(dAtA[i:], m.DoNotUseExternalID)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.DoNotUseExternalID)))
i--
dAtA[i] = 0x12
i -= len(m.PodCIDR)
@@ -14183,6 +14326,14 @@ func (m *PodLogOptions) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ i--
+ if m.InsecureSkipTLSVerifyBackend {
+ dAtA[i] = 1
+ } else {
+ dAtA[i] = 0
+ }
+ i--
+ dAtA[i] = 0x48
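+ // Unlike the optional fields around it, InsecureSkipTLSVerifyBackend is a
+ // plain bool rather than a *bool, so it is serialized unconditionally; 0x48
+ // is again field 9, wire type 0. The matching Size() change below adds an
+ // unconditional `n += 2` for the same reason.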
if m.LimitBytes != nil {
i = encodeVarintGenerated(dAtA, i, uint64(*m.LimitBytes))
i--
@@ -17500,6 +17651,13 @@ func (m *ServiceSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ if m.IPFamily != nil {
+ i -= len(*m.IPFamily)
+ copy(dAtA[i:], *m.IPFamily)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(*m.IPFamily)))
+ i--
+ dAtA[i] = 0x7a
+ }
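+ // IPFamily is a *string: field 15, wire type 2 (length-delimited), so the
+ // key is (15<<3)|2 = 0x7a. Field numbers up to 15 still fit in a single key
+ // byte, which is why no continuation byte is written here.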
if m.SessionAffinityConfig != nil {
{
size, err := m.SessionAffinityConfig.MarshalToSizedBuffer(dAtA[:i])
@@ -19481,6 +19639,10 @@ func (m *Container) Size() (n int) {
n += 2 + l + sovGenerated(uint64(l))
}
}
+ if m.StartupProbe != nil {
+ l = m.StartupProbe.Size()
+ n += 2 + l + sovGenerated(uint64(l))
+ }
return n
}
@@ -19603,6 +19765,9 @@ func (m *ContainerStatus) Size() (n int) {
n += 1 + l + sovGenerated(uint64(l))
l = len(m.ContainerID)
n += 1 + l + sovGenerated(uint64(l))
+ if m.Started != nil {
+ n += 2
+ }
return n
}
@@ -19937,6 +20102,10 @@ func (m *EphemeralContainerCommon) Size() (n int) {
n += 2 + l + sovGenerated(uint64(l))
}
}
+ if m.StartupProbe != nil {
+ l = m.StartupProbe.Size()
+ n += 2 + l + sovGenerated(uint64(l))
+ }
return n
}
@@ -20605,6 +20774,25 @@ func (m *Namespace) Size() (n int) {
return n
}
+func (m *NamespaceCondition) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Type)
+ n += 1 + l + sovGenerated(uint64(l))
+ l = len(m.Status)
+ n += 1 + l + sovGenerated(uint64(l))
+ l = m.LastTransitionTime.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ l = len(m.Reason)
+ n += 1 + l + sovGenerated(uint64(l))
+ l = len(m.Message)
+ n += 1 + l + sovGenerated(uint64(l))
+ return n
+}
+
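+ // Each `1 + l + sovGenerated(uint64(l))` term above is one key byte, the
+ // varint-encoded length, and the payload itself. A minimal sketch of that
+ // varint-size helper (the real sovGenerated is defined elsewhere in this
+ // file; sovSketch is an illustrative name):
+ func sovSketch(x uint64) (n int) {
+ 	for {
+ 		n++
+ 		x >>= 7
+ 		if x == 0 {
+ 			return n // e.g. sovSketch(178) == 2
+ 		}
+ 	}
+ }
+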
func (m *NamespaceList) Size() (n int) {
if m == nil {
return 0
@@ -20645,6 +20833,12 @@ func (m *NamespaceStatus) Size() (n int) {
_ = l
l = len(m.Phase)
n += 1 + l + sovGenerated(uint64(l))
+ if len(m.Conditions) > 0 {
+ for _, e := range m.Conditions {
+ l = e.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ }
return n
}
@@ -20872,7 +21066,7 @@ func (m *NodeSpec) Size() (n int) {
_ = l
l = len(m.PodCIDR)
n += 1 + l + sovGenerated(uint64(l))
- l = len(m.DoNotUse_ExternalID)
+ l = len(m.DoNotUseExternalID)
n += 1 + l + sovGenerated(uint64(l))
l = len(m.ProviderID)
n += 1 + l + sovGenerated(uint64(l))
@@ -21594,6 +21788,7 @@ func (m *PodLogOptions) Size() (n int) {
if m.LimitBytes != nil {
n += 1 + sovGenerated(uint64(*m.LimitBytes))
}
+ n += 2
return n
}
@@ -22818,6 +23013,10 @@ func (m *ServiceSpec) Size() (n int) {
l = m.SessionAffinityConfig.Size()
n += 1 + l + sovGenerated(uint64(l))
}
+ if m.IPFamily != nil {
+ l = len(*m.IPFamily)
+ n += 1 + l + sovGenerated(uint64(l))
+ }
return n
}
@@ -23727,6 +23926,7 @@ func (this *Container) String() string {
`EnvFrom:` + repeatedStringForEnvFrom + `,`,
`TerminationMessagePolicy:` + fmt.Sprintf("%v", this.TerminationMessagePolicy) + `,`,
`VolumeDevices:` + repeatedStringForVolumeDevices + `,`,
+ `StartupProbe:` + strings.Replace(this.StartupProbe.String(), "Probe", "Probe", 1) + `,`,
`}`,
}, "")
return s
@@ -23818,6 +24018,7 @@ func (this *ContainerStatus) String() string {
`Image:` + fmt.Sprintf("%v", this.Image) + `,`,
`ImageID:` + fmt.Sprintf("%v", this.ImageID) + `,`,
`ContainerID:` + fmt.Sprintf("%v", this.ContainerID) + `,`,
+ `Started:` + valueToStringGenerated(this.Started) + `,`,
`}`,
}, "")
return s
@@ -24070,6 +24271,7 @@ func (this *EphemeralContainerCommon) String() string {
`EnvFrom:` + repeatedStringForEnvFrom + `,`,
`TerminationMessagePolicy:` + fmt.Sprintf("%v", this.TerminationMessagePolicy) + `,`,
`VolumeDevices:` + repeatedStringForVolumeDevices + `,`,
+ `StartupProbe:` + strings.Replace(this.StartupProbe.String(), "Probe", "Probe", 1) + `,`,
`}`,
}, "")
return s
@@ -24607,6 +24809,20 @@ func (this *Namespace) String() string {
}, "")
return s
}
+func (this *NamespaceCondition) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&NamespaceCondition{`,
+ `Type:` + fmt.Sprintf("%v", this.Type) + `,`,
+ `Status:` + fmt.Sprintf("%v", this.Status) + `,`,
+ `LastTransitionTime:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.LastTransitionTime), "Time", "v1.Time", 1), `&`, ``, 1) + `,`,
+ `Reason:` + fmt.Sprintf("%v", this.Reason) + `,`,
+ `Message:` + fmt.Sprintf("%v", this.Message) + `,`,
+ `}`,
+ }, "")
+ return s
+}
func (this *NamespaceList) String() string {
if this == nil {
return "nil"
@@ -24637,8 +24853,14 @@ func (this *NamespaceStatus) String() string {
if this == nil {
return "nil"
}
+ repeatedStringForConditions := "[]NamespaceCondition{"
+ for _, f := range this.Conditions {
+ repeatedStringForConditions += strings.Replace(strings.Replace(f.String(), "NamespaceCondition", "NamespaceCondition", 1), `&`, ``, 1) + ","
+ }
+ repeatedStringForConditions += "}"
s := strings.Join([]string{`&NamespaceStatus{`,
`Phase:` + fmt.Sprintf("%v", this.Phase) + `,`,
+ `Conditions:` + repeatedStringForConditions + `,`,
`}`,
}, "")
return s
@@ -24835,7 +25057,7 @@ func (this *NodeSpec) String() string {
repeatedStringForTaints += "}"
s := strings.Join([]string{`&NodeSpec{`,
`PodCIDR:` + fmt.Sprintf("%v", this.PodCIDR) + `,`,
- `DoNotUse_ExternalID:` + fmt.Sprintf("%v", this.DoNotUse_ExternalID) + `,`,
+ `DoNotUseExternalID:` + fmt.Sprintf("%v", this.DoNotUseExternalID) + `,`,
`ProviderID:` + fmt.Sprintf("%v", this.ProviderID) + `,`,
`Unschedulable:` + fmt.Sprintf("%v", this.Unschedulable) + `,`,
`Taints:` + repeatedStringForTaints + `,`,
@@ -25336,6 +25558,7 @@ func (this *PodLogOptions) String() string {
`Timestamps:` + fmt.Sprintf("%v", this.Timestamps) + `,`,
`TailLines:` + valueToStringGenerated(this.TailLines) + `,`,
`LimitBytes:` + valueToStringGenerated(this.LimitBytes) + `,`,
+ `InsecureSkipTLSVerifyBackend:` + fmt.Sprintf("%v", this.InsecureSkipTLSVerifyBackend) + `,`,
`}`,
}, "")
return s
@@ -26314,6 +26537,7 @@ func (this *ServiceSpec) String() string {
`HealthCheckNodePort:` + fmt.Sprintf("%v", this.HealthCheckNodePort) + `,`,
`PublishNotReadyAddresses:` + fmt.Sprintf("%v", this.PublishNotReadyAddresses) + `,`,
`SessionAffinityConfig:` + strings.Replace(this.SessionAffinityConfig.String(), "SessionAffinityConfig", "SessionAffinityConfig", 1) + `,`,
+ `IPFamily:` + valueToStringGenerated(this.IPFamily) + `,`,
`}`,
}, "")
return s
@@ -31878,6 +32102,42 @@ func (m *Container) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 22:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field StartupProbe", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.StartupProbe == nil {
+ m.StartupProbe = &Probe{}
+ }
+ if err := m.StartupProbe.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
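+ // The loop above is the standard varint decoder: each byte contributes its
+ // low 7 bits, the 0x80 continuation bit says whether more bytes follow, and
+ // the shift >= 64 check rejects overlong encodings that would overflow a
+ // uint64.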
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -33072,64 +33332,11 @@ func (m *ContainerStatus) Unmarshal(dAtA []byte) error {
}
m.ContainerID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipGenerated(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if skippy < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *DaemonEndpoint) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: DaemonEndpoint: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: DaemonEndpoint: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ case 9:
if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Started", wireType)
}
- m.Port = 0
+ var v int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
@@ -33139,11 +33346,13 @@ func (m *DaemonEndpoint) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- m.Port |= int32(b&0x7F) << shift
+ v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
+ b := bool(v != 0)
+ m.Started = &b
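+ // A nonzero varint decodes to true. The value is stored through a pointer
+ // so that an absent Started field remains distinguishable from an explicit
+ // false.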
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -33168,7 +33377,7 @@ func (m *DaemonEndpoint) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *DownwardAPIProjection) Unmarshal(dAtA []byte) error {
+func (m *DaemonEndpoint) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -33191,17 +33400,17 @@ func (m *DownwardAPIProjection) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: DownwardAPIProjection: wiretype end group for non-group")
+ return fmt.Errorf("proto: DaemonEndpoint: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: DownwardAPIProjection: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: DaemonEndpoint: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType)
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Port", wireType)
}
- var msglen int
+ m.Port = 0
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
@@ -33211,26 +33420,98 @@ func (m *DownwardAPIProjection) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ m.Port |= int32(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
- return ErrInvalidLengthGenerated
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthGenerated
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Items = append(m.Items, DownwardAPIVolumeFile{})
- if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *DownwardAPIProjection) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: DownwardAPIProjection: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: DownwardAPIProjection: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Items = append(m.Items, DownwardAPIVolumeFile{})
+ if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -35694,6 +35975,42 @@ func (m *EphemeralContainerCommon) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 22:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field StartupProbe", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.StartupProbe == nil {
+ m.StartupProbe = &Probe{}
+ }
+ if err := m.StartupProbe.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -41704,11 +42021,397 @@ func (m *NFSVolumeSource) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Server = string(dAtA[iNdEx:postIndex])
+ m.Server = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Path = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ReadOnly", wireType)
+ }
+ var v int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.ReadOnly = bool(v != 0)
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *Namespace) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Namespace: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Namespace: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *NamespaceCondition) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: NamespaceCondition: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: NamespaceCondition: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Type", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Type = NamespaceConditionType(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Status = ConditionStatus(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field LastTransitionTime", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.LastTransitionTime.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Reason", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Reason = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 2:
+ case 6:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Path", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Message", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -41736,28 +42439,8 @@ func (m *NFSVolumeSource) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Path = string(dAtA[iNdEx:postIndex])
+ m.Message = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- case 3:
- if wireType != 0 {
- return fmt.Errorf("proto: wrong wireType = %d for field ReadOnly", wireType)
- }
- var v int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- v |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- m.ReadOnly = bool(v != 0)
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -41782,7 +42465,7 @@ func (m *NFSVolumeSource) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *Namespace) Unmarshal(dAtA []byte) error {
+func (m *NamespaceList) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -41805,15 +42488,15 @@ func (m *Namespace) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: Namespace: wiretype end group for non-group")
+ return fmt.Errorf("proto: NamespaceList: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: Namespace: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: NamespaceList: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -41840,46 +42523,13 @@ func (m *Namespace) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthGenerated
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthGenerated
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 3:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Status", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -41906,7 +42556,8 @@ func (m *Namespace) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if err := m.Status.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ m.Items = append(m.Items, Namespace{})
+ if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -41934,7 +42585,7 @@ func (m *Namespace) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *NamespaceList) Unmarshal(dAtA []byte) error {
+func (m *NamespaceSpec) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -41957,50 +42608,17 @@ func (m *NamespaceList) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: NamespaceList: wiretype end group for non-group")
+ return fmt.Errorf("proto: NamespaceSpec: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: NamespaceList: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: NamespaceSpec: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthGenerated
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthGenerated
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Finalizers", wireType)
}
- var msglen int
+ var stringLen uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
@@ -42010,25 +42628,23 @@ func (m *NamespaceList) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- msglen |= int(b&0x7F) << shift
+ stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
- if msglen < 0 {
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
return ErrInvalidLengthGenerated
}
- postIndex := iNdEx + msglen
+ postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthGenerated
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Items = append(m.Items, Namespace{})
- if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
- return err
- }
+ m.Finalizers = append(m.Finalizers, FinalizerName(dAtA[iNdEx:postIndex]))
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -42054,7 +42670,7 @@ func (m *NamespaceList) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *NamespaceSpec) Unmarshal(dAtA []byte) error {
+func (m *NamespaceStatus) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
for iNdEx < l {
@@ -42077,15 +42693,15 @@ func (m *NamespaceSpec) Unmarshal(dAtA []byte) error {
fieldNum := int32(wire >> 3)
wireType := int(wire & 0x7)
if wireType == 4 {
- return fmt.Errorf("proto: NamespaceSpec: wiretype end group for non-group")
+ return fmt.Errorf("proto: NamespaceStatus: wiretype end group for non-group")
}
if fieldNum <= 0 {
- return fmt.Errorf("proto: NamespaceSpec: illegal tag %d (wire type %d)", fieldNum, wire)
+ return fmt.Errorf("proto: NamespaceStatus: illegal tag %d (wire type %d)", fieldNum, wire)
}
switch fieldNum {
case 1:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Finalizers", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Phase", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -42113,66 +42729,13 @@ func (m *NamespaceSpec) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Finalizers = append(m.Finalizers, FinalizerName(dAtA[iNdEx:postIndex]))
+ m.Phase = NamespacePhase(dAtA[iNdEx:postIndex])
iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipGenerated(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if skippy < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
-func (m *NamespaceStatus) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: NamespaceStatus: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: NamespaceStatus: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
+ case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Phase", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field Conditions", wireType)
}
- var stringLen uint64
+ var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowGenerated
@@ -42182,23 +42745,25 @@ func (m *NamespaceStatus) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
- stringLen |= uint64(b&0x7F) << shift
+ msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
- intStringLen := int(stringLen)
- if intStringLen < 0 {
+ if msglen < 0 {
return ErrInvalidLengthGenerated
}
- postIndex := iNdEx + intStringLen
+ postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthGenerated
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.Phase = NamespacePhase(dAtA[iNdEx:postIndex])
+ m.Conditions = append(m.Conditions, NamespaceCondition{})
+ if err := m.Conditions[len(m.Conditions)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
iNdEx = postIndex
default:
iNdEx = preIndex
@@ -44038,7 +44603,7 @@ func (m *NodeSpec) Unmarshal(dAtA []byte) error {
iNdEx = postIndex
case 2:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field DoNotUse_ExternalID", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field DoNotUseExternalID", wireType)
}
var stringLen uint64
for shift := uint(0); ; shift += 7 {
@@ -44066,7 +44631,7 @@ func (m *NodeSpec) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- m.DoNotUse_ExternalID = string(dAtA[iNdEx:postIndex])
+ m.DoNotUseExternalID = string(dAtA[iNdEx:postIndex])
iNdEx = postIndex
case 3:
if wireType != 2 {
@@ -50496,6 +51061,26 @@ func (m *PodLogOptions) Unmarshal(dAtA []byte) error {
}
}
m.LimitBytes = &v
+ case 9:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InsecureSkipTLSVerifyBackend", wireType)
+ }
+ var v int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.InsecureSkipTLSVerifyBackend = bool(v != 0)
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
@@ -61629,6 +62214,39 @@ func (m *ServiceSpec) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
+ case 15:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IPFamily", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ s := IPFamily(dAtA[iNdEx:postIndex])
+ m.IPFamily = &s
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
diff --git a/vendor/k8s.io/api/core/v1/generated.proto b/vendor/k8s.io/api/core/v1/generated.proto
index 814bf5ca40cbf..86d6c5091110a 100644
--- a/vendor/k8s.io/api/core/v1/generated.proto
+++ b/vendor/k8s.io/api/core/v1/generated.proto
@@ -161,7 +161,7 @@ message AzureFileVolumeSource {
// Deprecated in 1.7, please use the bindings subresource of pods instead.
message Binding {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -426,7 +426,7 @@ message ComponentCondition {
// ComponentStatus (and ComponentStatusList) holds the cluster validation info.
message ComponentStatus {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -440,7 +440,7 @@ message ComponentStatus {
// Status of all the conditions for the component as a list of ComponentStatus objects.
message ComponentStatusList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -451,7 +451,7 @@ message ComponentStatusList {
// ConfigMap holds configuration data for pods to consume.
message ConfigMap {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -503,7 +503,7 @@ message ConfigMapKeySelector {
// ConfigMapList is a resource containing a list of ConfigMap objects.
message ConfigMapList {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -701,6 +701,17 @@ message Container {
// +optional
optional Probe readinessProbe = 11;
+ // StartupProbe indicates that the Pod has successfully initialized.
+ // If specified, no other probes are executed until this completes successfully.
+ // If this probe fails, the Pod will be restarted, just as if the livenessProbe failed.
+ // This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
+ // when it might take a long time to load data or warm a cache, than during steady-state operation.
+ // This cannot be updated.
+ // This is an alpha feature enabled by the StartupProbe feature flag.
+ // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ // +optional
+ optional Probe startupProbe = 22;
+
// Actions that the management system should take in response to container lifecycle events.
// Cannot be updated.
// +optional
@@ -901,6 +912,13 @@ message ContainerStatus {
// Container's ID in the format 'docker://<container_id>'.
// +optional
optional string containerID = 8;
+
+ // Specifies whether the container has passed its startup probe.
+ // Initialized as false, becomes true after startupProbe is considered successful.
+ // Resets to false when the container is restarted, or if kubelet loses state temporarily.
+ // Is always true when no startupProbe is defined.
+ // +optional
+ optional bool started = 9;
}
// DaemonEndpoint contains information about a single Daemon endpoint.
@@ -1001,7 +1019,8 @@ message EndpointAddress {
// EndpointPort is a tuple that describes a single port.
message EndpointPort {
- // The name of this port (corresponds to ServicePort.Name).
+ // The name of this port. This must match the 'name' field in the
+ // corresponding ServicePort.
// Must be a DNS_LABEL.
// Optional only if one port is defined.
// +optional
@@ -1058,7 +1077,7 @@ message EndpointSubset {
// ]
message Endpoints {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -1076,7 +1095,7 @@ message Endpoints {
// EndpointsList is a list of endpoints.
message EndpointsList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -1123,7 +1142,7 @@ message EnvVar {
// EnvVarSource represents a source for the value of an EnvVar.
message EnvVarSource {
// Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels, metadata.annotations,
- // spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP.
+ // spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
// +optional
optional ObjectFieldSelector fieldRef = 1;
@@ -1141,16 +1160,20 @@ message EnvVarSource {
optional SecretKeySelector secretKeyRef = 4;
}
-// An EphemeralContainer is a special type of container which doesn't come with any resource
-// or scheduling guarantees but can be added to a pod that has already been created. They are
-// intended for user-initiated activities such as troubleshooting a running pod.
-// Ephemeral containers will not be restarted when they exit, and they will be killed if the
-// pod is removed or restarted. If an ephemeral container causes a pod to exceed its resource
+// An EphemeralContainer is a container that may be added temporarily to an existing pod for
+// user-initiated activities such as debugging. Ephemeral containers have no resource or
+// scheduling guarantees, and they will not be restarted when they exit or when a pod is
+// removed or restarted. If an ephemeral container causes a pod to exceed its resource
// allocation, the pod may be evicted.
-// Ephemeral containers are added via a pod's ephemeralcontainers subresource and will appear
-// in the pod spec once added. No fields in EphemeralContainer may be changed once added.
+// Ephemeral containers may not be added by directly updating the pod spec. They must be added
+// via the pod's ephemeralcontainers subresource, and they will appear in the pod spec
+// once added.
// This is an alpha feature enabled by the EphemeralContainers feature flag.
message EphemeralContainer {
+ // Ephemeral containers have all of the fields of Container, plus additional fields
+ // specific to ephemeral containers. Fields in common with Container are in the
+ // following inlined struct so that an EphemeralContainer may easily be converted
+ // to a Container.
optional EphemeralContainerCommon ephemeralContainerCommon = 1;
// If set, the name of the container from PodSpec that this ephemeral container targets.
@@ -1161,6 +1184,10 @@ message EphemeralContainer {
optional string targetContainerName = 2;
}
+// EphemeralContainerCommon is a copy of all fields in Container to be inlined in
+// EphemeralContainer. This separate type allows easy conversion from EphemeralContainer
+// to Container and allows separate documentation for the fields of EphemeralContainer.
+// When a new field is added to Container it must be added here as well.
message EphemeralContainerCommon {
// Name of the ephemeral container specified as a DNS_LABEL.
// This name must be unique among all containers, init containers and ephemeral containers.
@@ -1245,6 +1272,10 @@ message EphemeralContainerCommon {
// +optional
optional Probe readinessProbe = 11;
+ // Probes are not allowed for ephemeral containers.
+ // +optional
+ optional Probe startupProbe = 22;
+
// Lifecycle is not allowed for ephemeral containers.
// +optional
optional Lifecycle lifecycle = 12;
@@ -1303,12 +1334,14 @@ message EphemeralContainerCommon {
optional bool tty = 18;
}
-// A list of ephemeral containers used in API operations
+// A list of ephemeral containers used with the Pod ephemeralcontainers subresource.
message EphemeralContainers {
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
- // The new set of ephemeral containers to use for a pod.
+ // A list of ephemeral containers associated with this pod. New ephemeral containers
+ // may be appended to this list, but existing ephemeral containers may not be removed
+ // or modified.
// +patchMergeKey=name
// +patchStrategy=merge
repeated EphemeralContainer ephemeralContainers = 2;
@@ -1317,7 +1350,7 @@ message EphemeralContainers {
// Event is a report of an event somewhere in the cluster.
message Event {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// The object that this event is about.
@@ -1382,7 +1415,7 @@ message Event {
// EventList is a list of events.
message EventList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -1851,7 +1884,7 @@ message Lifecycle {
optional Handler postStart = 1;
// PreStop is called immediately before a container is terminated due to an
- // API request or management event such as liveness probe failure,
+ // API request or management event such as liveness/startup probe failure,
// preemption, resource contention, etc. The handler is not called if the
// container crashes or exits. The reason for termination is passed to the
// handler. The Pod's termination grace period countdown begins before the
@@ -1867,12 +1900,12 @@ message Lifecycle {
// LimitRange sets resource usage limits for each kind of resource in a Namespace.
message LimitRange {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the limits enforced.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional LimitRangeSpec spec = 2;
}
@@ -1907,7 +1940,7 @@ message LimitRangeItem {
// LimitRangeList is a list of LimitRange items.
message LimitRangeList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -1925,7 +1958,7 @@ message LimitRangeSpec {
// List holds a list of objects, which may not be known by the server.
message List {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -2002,25 +2035,43 @@ message NFSVolumeSource {
// Use of multiple namespaces is optional.
message Namespace {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the behavior of the Namespace.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional NamespaceSpec spec = 2;
// Status describes the current status of a Namespace.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional NamespaceStatus status = 3;
}
+// NamespaceCondition contains details about state of namespace.
+message NamespaceCondition {
+ // Type of namespace controller condition.
+ optional string type = 1;
+
+ // Status of the condition, one of True, False, Unknown.
+ optional string status = 2;
+
+ // +optional
+ optional k8s.io.apimachinery.pkg.apis.meta.v1.Time lastTransitionTime = 4;
+
+ // +optional
+ optional string reason = 5;
+
+ // +optional
+ optional string message = 6;
+}
+
// NamespaceList is a list of Namespaces.
message NamespaceList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -2043,25 +2094,31 @@ message NamespaceStatus {
// More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
// +optional
optional string phase = 1;
+
+ // Represents the latest available observations of a namespace's current state.
+ // +optional
+ // +patchMergeKey=type
+ // +patchStrategy=merge
+ repeated NamespaceCondition conditions = 2;
}
// Node is a worker node in Kubernetes.
// Each node will have a unique identifier in the cache (i.e. in etcd).
message Node {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the behavior of a node.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional NodeSpec spec = 2;
// Most recently observed status of the node.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional NodeStatus status = 3;
}
@@ -2189,7 +2246,7 @@ message NodeDaemonEndpoints {
// NodeList is the whole list of all Nodes which have been registered with master.
message NodeList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -2401,7 +2458,7 @@ message ObjectFieldSelector {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
message ObjectReference {
// Kind of the referent.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional string kind = 1;
@@ -2425,7 +2482,7 @@ message ObjectReference {
optional string apiVersion = 5;
// Specific resourceVersion to which this reference is made, if any.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
// +optional
optional string resourceVersion = 6;
@@ -2446,7 +2503,7 @@ message ObjectReference {
// More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes
message PersistentVolume {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -2467,7 +2524,7 @@ message PersistentVolume {
// PersistentVolumeClaim is a user's request for and claim to a persistent volume
message PersistentVolumeClaim {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -2511,7 +2568,7 @@ message PersistentVolumeClaimCondition {
// PersistentVolumeClaimList is a list of PersistentVolumeClaim items.
message PersistentVolumeClaimList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -2605,7 +2662,7 @@ message PersistentVolumeClaimVolumeSource {
// PersistentVolumeList is a list of PersistentVolume items.
message PersistentVolumeList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -2806,12 +2863,12 @@ message PhotonPersistentDiskVolumeSource {
// by clients and scheduled onto hosts.
message Pod {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of the pod.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional PodSpec spec = 2;
@@ -2819,7 +2876,7 @@ message Pod {
// This data may not be up to date.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional PodStatus status = 3;
}
@@ -3036,12 +3093,12 @@ message PodIP {
// PodList is a list of Pods.
message PodList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
// List of pods.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
repeated Pod items = 2;
}
@@ -3088,6 +3145,15 @@ message PodLogOptions {
// slightly more or slightly less than the specified limit.
// +optional
optional int64 limitBytes = 8;
+
+ // insecureSkipTLSVerifyBackend indicates that the apiserver should not confirm the validity of the
+ // serving certificate of the backend it is connecting to. This will make the HTTPS connection between the apiserver
+ // and the backend insecure. This means the apiserver cannot verify the log data it is receiving came from the real
+ // kubelet. If the kubelet is configured to verify the apiserver's TLS credentials, it does not mean the
+ // connection to the real kubelet is vulnerable to a man in the middle attack (e.g. an attacker could not intercept
+ // the actual log data coming from the real kubelet).
+ // +optional
+ optional bool insecureSkipTLSVerifyBackend = 9;
}
// PodPortForwardOptions is the query options to a Pod's port forward call
@@ -3205,7 +3271,7 @@ message PodSpec {
// init container fails, the pod is considered to have failed and is handled according
// to its restartPolicy. The name for an init container or normal container must be
// unique among all containers.
- // Init containers may not have Lifecycle actions, Readiness probes, or Liveness probes.
+ // Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
// The resourceRequirements of an init container are taken into account during scheduling
// by finding the highest request/limit for each resource type, and then using the max of
// of that value or the sum of the normal containers. Limits are applied to init containers
@@ -3225,12 +3291,10 @@ message PodSpec {
// +patchStrategy=merge
repeated Container containers = 2;
- // EphemeralContainers is the list of ephemeral containers that run in this pod. Ephemeral containers
- // are added to an existing pod as a result of a user-initiated action such as troubleshooting.
- // This list is read-only in the pod spec. It may not be specified in a create or modified in an
- // update of a pod or pod template.
- // To add an ephemeral container use the pod's ephemeralcontainers subresource, which allows update
- // using the EphemeralContainers kind.
+ // List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing
+ // pod to perform user-initiated actions such as debugging. This list cannot be specified when
+ // creating a pod, and it cannot be modified by updating the pod spec. In order to add an
+ // ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.
// This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.
// +optional
// +patchMergeKey=name
@@ -3320,7 +3384,6 @@ message PodSpec {
// in the same pod, and the first process in each container will not be assigned PID 1.
// HostPID and ShareProcessNamespace cannot both be set.
// Optional: Default to false.
- // This field is beta-level and may be disabled with the PodShareProcessNamespace feature.
// +k8s:conversion-gen=false
// +optional
optional bool shareProcessNamespace = 27;
@@ -3537,8 +3600,8 @@ message PodStatus {
// +optional
optional string qosClass = 9;
- // Status for any ephemeral containers that running in this pod.
- // This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.
+ // Status for any ephemeral containers that have run in this pod.
+ // This field is alpha-level and is only populated by servers that enable the EphemeralContainers feature.
// +optional
repeated ContainerStatus ephemeralContainerStatuses = 13;
}
@@ -3546,7 +3609,7 @@ message PodStatus {
// PodStatusResult is a wrapper for PodStatus returned by kubelet that can be encode/decoded
message PodStatusResult {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -3554,7 +3617,7 @@ message PodStatusResult {
// This data may not be up to date.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional PodStatus status = 2;
}
@@ -3562,12 +3625,12 @@ message PodStatusResult {
// PodTemplate describes a template for creating copies of a predefined pod.
message PodTemplate {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Template defines the pods that will be created from this pod template.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional PodTemplateSpec template = 2;
}
@@ -3575,7 +3638,7 @@ message PodTemplate {
// PodTemplateList is a list of PodTemplates.
message PodTemplateList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -3586,12 +3649,12 @@ message PodTemplateList {
// PodTemplateSpec describes the data a pod should have when created from a template
message PodTemplateSpec {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the desired behavior of the pod.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional PodSpec spec = 2;
}
@@ -3671,7 +3734,7 @@ message Probe {
optional int32 periodSeconds = 4;
// Minimum consecutive successes for the probe to be considered successful after having failed.
- // Defaults to 1. Must be 1 for liveness. Minimum value is 1.
+ // Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
// +optional
optional int32 successThreshold = 5;
@@ -3832,7 +3895,7 @@ message RBDVolumeSource {
// RangeAllocation is not a public type.
message RangeAllocation {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -3847,12 +3910,12 @@ message RangeAllocation {
message ReplicationController {
// If the Labels of a ReplicationController are empty, they are defaulted to
// be the same as the Pod(s) that the replication controller manages.
- // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the specification of the desired behavior of the replication controller.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ReplicationControllerSpec spec = 2;
@@ -3860,7 +3923,7 @@ message ReplicationController {
// This data may be out of date by some window of time.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ReplicationControllerStatus status = 3;
}
@@ -3889,7 +3952,7 @@ message ReplicationControllerCondition {
// ReplicationControllerList is a collection of replication controllers.
message ReplicationControllerList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -3975,17 +4038,17 @@ message ResourceFieldSelector {
// ResourceQuota sets aggregate quota restrictions enforced per namespace
message ResourceQuota {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the desired quota.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ResourceQuotaSpec spec = 2;
// Status defines the actual enforced quota and its current usage.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ResourceQuotaStatus status = 3;
}
@@ -3993,7 +4056,7 @@ message ResourceQuota {
// ResourceQuotaList is a list of ResourceQuota items.
message ResourceQuotaList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -4189,7 +4252,7 @@ message ScopedResourceSelectorRequirement {
// the Data field must be less than MaxSecretSize bytes.
message Secret {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -4243,7 +4306,7 @@ message SecretKeySelector {
// SecretList is a list of Secret.
message SecretList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -4407,19 +4470,19 @@ message SerializedReference {
// will answer requests sent through the proxy.
message Service {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec defines the behavior of a service.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ServiceSpec spec = 2;
// Most recently observed status of the service.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional ServiceStatus status = 3;
}
@@ -4430,7 +4493,7 @@ message Service {
// * a set of secrets
message ServiceAccount {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -4457,7 +4520,7 @@ message ServiceAccount {
// ServiceAccountList is a list of ServiceAccount objects
message ServiceAccountList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -4495,7 +4558,7 @@ message ServiceAccountTokenProjection {
// ServiceList holds a list of services.
message ServiceList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -4506,8 +4569,9 @@ message ServiceList {
// ServicePort contains information on service's port.
message ServicePort {
// The name of this port within the service. This must be a DNS_LABEL.
- // All ports within a ServiceSpec must have unique names. This maps to
- // the 'Name' field in EndpointPort objects.
+ // All ports within a ServiceSpec must have unique names. When considering
+ // the endpoints for a Service, this must match the 'name' field in the
+ // EndpointPort.
// Optional if only one ServicePort is defined on this service.
// +optional
optional string name = 1;
@@ -4667,6 +4731,16 @@ message ServiceSpec {
// sessionAffinityConfig contains the configurations of session affinity.
// +optional
optional SessionAffinityConfig sessionAffinityConfig = 14;
+
+ // ipFamily specifies whether this Service has a preference for a particular IP family (e.g. IPv4 vs.
+ // IPv6). If a specific IP family is requested, the clusterIP field will be allocated from that family, if it is
+ // available in the cluster. If no IP family is requested, the cluster's primary IP family will be used.
+ // Other IP fields (loadBalancerIP, loadBalancerSourceRanges, externalIPs) and controllers which
+ // allocate external load-balancers should use the same IP family. Endpoints for this Service will be of
+ // this family. This field is immutable after creation. Assigning a ServiceIPFamily not available in the
+ // cluster (e.g. IPv6 in IPv4 only cluster) is an error condition and will fail during clusterIP assignment.
+ // +optional
+ optional string ipFamily = 15;
}
// ServiceStatus represents the current status of a service.
@@ -4966,7 +5040,6 @@ message VolumeMount {
// Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment.
// Defaults to "" (volume's root).
// SubPathExpr and SubPath are mutually exclusive.
- // This field is beta in 1.15.
// +optional
optional string subPathExpr = 6;
}
@@ -5183,7 +5256,7 @@ message WindowsSecurityContextOptions {
// Defaults to the user specified in image metadata if unspecified.
// May also be set in PodSecurityContext. If set in both SecurityContext and
// PodSecurityContext, the value specified in SecurityContext takes precedence.
- // This field is alpha-level and it is only honored by servers that enable the WindowsRunAsUserName feature flag.
+ // This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag.
// +optional
optional string runAsUserName = 3;
}
diff --git a/vendor/k8s.io/api/core/v1/types.go b/vendor/k8s.io/api/core/v1/types.go
index 98e7b093f826d..2cf88566f6e81 100644
--- a/vendor/k8s.io/api/core/v1/types.go
+++ b/vendor/k8s.io/api/core/v1/types.go
@@ -275,7 +275,7 @@ const (
type PersistentVolume struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -390,7 +390,7 @@ type PersistentVolumeStatus struct {
type PersistentVolumeList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// List of persistent volumes.
@@ -405,7 +405,7 @@ type PersistentVolumeList struct {
type PersistentVolumeClaim struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -427,7 +427,7 @@ type PersistentVolumeClaim struct {
type PersistentVolumeClaimList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// A list of persistent volume claims.
@@ -1784,7 +1784,6 @@ type VolumeMount struct {
// Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment.
// Defaults to "" (volume's root).
// SubPathExpr and SubPath are mutually exclusive.
- // This field is beta in 1.15.
// +optional
SubPathExpr string `json:"subPathExpr,omitempty" protobuf:"bytes,6,opt,name=subPathExpr"`
}
@@ -1847,7 +1846,7 @@ type EnvVar struct {
// EnvVarSource represents a source for the value of an EnvVar.
type EnvVarSource struct {
// Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels, metadata.annotations,
- // spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP.
+ // spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
// +optional
FieldRef *ObjectFieldSelector `json:"fieldRef,omitempty" protobuf:"bytes,1,opt,name=fieldRef"`
// Selects a resource of the container: only resources limits and requests
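Not part of the vendored diff: a minimal Go sketch of consuming the field paths documented above (`status.podIP`, and the newly documented `status.podIPs`) through the downward API, using only `k8s.io/api/core/v1` types present at this vintage.

```go
package example

import corev1 "k8s.io/api/core/v1"

// podIPEnv exposes the pod's primary IP to the container via the downward API.
var podIPEnv = corev1.EnvVar{
	Name: "POD_IP",
	ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"},
	},
}
```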
@@ -2025,7 +2024,7 @@ type Probe struct {
// +optional
PeriodSeconds int32 `json:"periodSeconds,omitempty" protobuf:"varint,4,opt,name=periodSeconds"`
// Minimum consecutive successes for the probe to be considered successful after having failed.
- // Defaults to 1. Must be 1 for liveness. Minimum value is 1.
+ // Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.
// +optional
SuccessThreshold int32 `json:"successThreshold,omitempty" protobuf:"varint,5,opt,name=successThreshold"`
// Minimum consecutive failures for the probe to be considered failed after having succeeded.
@@ -2196,6 +2195,16 @@ type Container struct {
// More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
// +optional
ReadinessProbe *Probe `json:"readinessProbe,omitempty" protobuf:"bytes,11,opt,name=readinessProbe"`
+ // StartupProbe indicates that the Pod has successfully initialized.
+ // If specified, no other probes are executed until this completes successfully.
+ // If this probe fails, the Pod will be restarted, just as if the livenessProbe failed.
+ // This can be used to provide different probe parameters at the beginning of a Pod's lifecycle,
+ // when it might take a long time to load data or warm a cache, than during steady-state operation.
+ // This cannot be updated.
+ // This is an alpha feature enabled by the StartupProbe feature flag.
+ // More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes
+ // +optional
+ StartupProbe *Probe `json:"startupProbe,omitempty" protobuf:"bytes,22,opt,name=startupProbe"`
// Actions that the management system should take in response to container lifecycle events.
// Cannot be updated.
// +optional
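Not part of the vendored diff: a minimal sketch of the new `StartupProbe` field, assuming the v1.17-era `Probe` shape (handler embedded as `Handler`) and a cluster with the alpha StartupProbe feature gate enabled; the image name is a placeholder.

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// slowStarter gets up to 30 * 10s to come up; while the startup probe is
// pending, no other probes run, so liveness restarts are held off.
var slowStarter = corev1.Container{
	Name:  "app",
	Image: "example/app:latest", // placeholder
	StartupProbe: &corev1.Probe{
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
		},
		FailureThreshold: 30,
		PeriodSeconds:    10,
	},
}
```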
@@ -2282,7 +2291,7 @@ type Lifecycle struct {
// +optional
PostStart *Handler `json:"postStart,omitempty" protobuf:"bytes,1,opt,name=postStart"`
// PreStop is called immediately before a container is terminated due to an
- // API request or management event such as liveness probe failure,
+ // API request or management event such as liveness/startup probe failure,
// preemption, resource contention, etc. The handler is not called if the
// container crashes or exits. The reason for termination is passed to the
// handler. The Pod's termination grace period countdown begins before the
@@ -2390,6 +2399,12 @@ type ContainerStatus struct {
// Container's ID in the format 'docker://<container_id>'.
// +optional
ContainerID string `json:"containerID,omitempty" protobuf:"bytes,8,opt,name=containerID"`
+ // Specifies whether the container has passed its startup probe.
+ // Initialized as false, becomes true after startupProbe is considered successful.
+ // Resets to false when the container is restarted, or if kubelet loses state temporarily.
+ // Is always true when no startupProbe is defined.
+ // +optional
+ Started *bool `json:"started,omitempty" protobuf:"varint,9,opt,name=started"`
}
// PodPhase is a label for the condition of a pod at the current time.
@@ -2825,7 +2840,7 @@ type PodSpec struct {
// init container fails, the pod is considered to have failed and is handled according
// to its restartPolicy. The name for an init container or normal container must be
// unique among all containers.
- // Init containers may not have Lifecycle actions, Readiness probes, or Liveness probes.
+ // Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes.
// The resourceRequirements of an init container are taken into account during scheduling
// by finding the highest request/limit for each resource type, and then using the max of
// of that value or the sum of the normal containers. Limits are applied to init containers
@@ -2843,12 +2858,10 @@ type PodSpec struct {
// +patchMergeKey=name
// +patchStrategy=merge
Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,2,rep,name=containers"`
- // EphemeralContainers is the list of ephemeral containers that run in this pod. Ephemeral containers
- // are added to an existing pod as a result of a user-initiated action such as troubleshooting.
- // This list is read-only in the pod spec. It may not be specified in a create or modified in an
- // update of a pod or pod template.
- // To add an ephemeral container use the pod's ephemeralcontainers subresource, which allows update
- // using the EphemeralContainers kind.
+ // List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing
+ // pod to perform user-initiated actions such as debugging. This list cannot be specified when
+ // creating a pod, and it cannot be modified by updating the pod spec. In order to add an
+ // ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource.
// This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.
// +optional
// +patchMergeKey=name
@@ -2927,7 +2940,6 @@ type PodSpec struct {
// in the same pod, and the first process in each container will not be assigned PID 1.
// HostPID and ShareProcessNamespace cannot both be set.
// Optional: Default to false.
- // This field is beta-level and may be disabled with the PodShareProcessNamespace feature.
// +k8s:conversion-gen=false
// +optional
ShareProcessNamespace *bool `json:"shareProcessNamespace,omitempty" protobuf:"varint,27,opt,name=shareProcessNamespace"`
@@ -3220,6 +3232,10 @@ type PodIP struct {
IP string `json:"ip,omitempty" protobuf:"bytes,1,opt,name=ip"`
}
+// EphemeralContainerCommon is a copy of all fields in Container to be inlined in
+// EphemeralContainer. This separate type allows easy conversion from EphemeralContainer
+// to Container and allows separate documentation for the fields of EphemeralContainer.
+// When a new field is added to Container it must be added here as well.
type EphemeralContainerCommon struct {
// Name of the ephemeral container specified as a DNS_LABEL.
// This name must be unique among all containers, init containers and ephemeral containers.
@@ -3291,6 +3307,9 @@ type EphemeralContainerCommon struct {
// Probes are not allowed for ephemeral containers.
// +optional
ReadinessProbe *Probe `json:"readinessProbe,omitempty" protobuf:"bytes,11,opt,name=readinessProbe"`
+ // Probes are not allowed for ephemeral containers.
+ // +optional
+ StartupProbe *Probe `json:"startupProbe,omitempty" protobuf:"bytes,22,opt,name=startupProbe"`
// Lifecycle is not allowed for ephemeral containers.
// +optional
Lifecycle *Lifecycle `json:"lifecycle,omitempty" protobuf:"bytes,12,opt,name=lifecycle"`
@@ -3350,16 +3369,20 @@ type EphemeralContainerCommon struct {
// these two types.
var _ = Container(EphemeralContainerCommon{})
-// An EphemeralContainer is a special type of container which doesn't come with any resource
-// or scheduling guarantees but can be added to a pod that has already been created. They are
-// intended for user-initiated activities such as troubleshooting a running pod.
-// Ephemeral containers will not be restarted when they exit, and they will be killed if the
-// pod is removed or restarted. If an ephemeral container causes a pod to exceed its resource
+// An EphemeralContainer is a container that may be added temporarily to an existing pod for
+// user-initiated activities such as debugging. Ephemeral containers have no resource or
+// scheduling guarantees, and they will not be restarted when they exit or when a pod is
+// removed or restarted. If an ephemeral container causes a pod to exceed its resource
// allocation, the pod may be evicted.
-// Ephemeral containers are added via a pod's ephemeralcontainers subresource and will appear
-// in the pod spec once added. No fields in EphemeralContainer may be changed once added.
+// Ephemeral containers may not be added by directly updating the pod spec. They must be added
+// via the pod's ephemeralcontainers subresource, and they will appear in the pod spec
+// once added.
// This is an alpha feature enabled by the EphemeralContainers feature flag.
type EphemeralContainer struct {
+ // Ephemeral containers have all of the fields of Container, plus additional fields
+ // specific to ephemeral containers. Fields in common with Container are in the
+	// following inlined struct so that an EphemeralContainer may easily be converted
+ // to a Container.
EphemeralContainerCommon `json:",inline" protobuf:"bytes,1,req"`
// If set, the name of the container from PodSpec that this ephemeral container targets.
@@ -3454,8 +3477,8 @@ type PodStatus struct {
// More info: https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md
// +optional
QOSClass PodQOSClass `json:"qosClass,omitempty" protobuf:"bytes,9,rep,name=qosClass"`
- // Status for any ephemeral containers that running in this pod.
- // This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.
+ // Status for any ephemeral containers that have run in this pod.
+ // This field is alpha-level and is only populated by servers that enable the EphemeralContainers feature.
// +optional
EphemeralContainerStatuses []ContainerStatus `json:"ephemeralContainerStatuses,omitempty" protobuf:"bytes,13,rep,name=ephemeralContainerStatuses"`
}
@@ -3466,14 +3489,14 @@ type PodStatus struct {
type PodStatusResult struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Most recently observed status of the pod.
// This data may not be up to date.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status PodStatus `json:"status,omitempty" protobuf:"bytes,2,opt,name=status"`
}
@@ -3488,12 +3511,12 @@ type PodStatusResult struct {
type Pod struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of the pod.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec PodSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
@@ -3501,7 +3524,7 @@ type Pod struct {
// This data may not be up to date.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status PodStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -3512,24 +3535,24 @@ type Pod struct {
type PodList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// List of pods.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md
Items []Pod `json:"items" protobuf:"bytes,2,rep,name=items"`
}
// PodTemplateSpec describes the data a pod should have when created from a template
type PodTemplateSpec struct {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the desired behavior of the pod.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec PodSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}
@@ -3541,12 +3564,12 @@ type PodTemplateSpec struct {
type PodTemplate struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Template defines the pods that will be created from this pod template.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Template PodTemplateSpec `json:"template,omitempty" protobuf:"bytes,2,opt,name=template"`
}
@@ -3557,7 +3580,7 @@ type PodTemplate struct {
type PodTemplateList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -3669,12 +3692,12 @@ type ReplicationController struct {
// If the Labels of a ReplicationController are empty, they are defaulted to
// be the same as the Pod(s) that the replication controller manages.
- // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the specification of the desired behavior of the replication controller.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec ReplicationControllerSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
@@ -3682,7 +3705,7 @@ type ReplicationController struct {
// This data may be out of date by some window of time.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status ReplicationControllerStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -3693,7 +3716,7 @@ type ReplicationController struct {
type ReplicationControllerList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -3794,6 +3817,17 @@ type LoadBalancerIngress struct {
Hostname string `json:"hostname,omitempty" protobuf:"bytes,2,opt,name=hostname"`
}
+// IPFamily represents the IP Family (IPv4 or IPv6). This type is used
+// to express the family of an IP expressed by a type (i.e. service.Spec.IPFamily)
+type IPFamily string
+
+const (
+ // IPv4Protocol indicates that this IP is IPv4 protocol
+ IPv4Protocol IPFamily = "IPv4"
+ // IPv6Protocol indicates that this IP is IPv6 protocol
+ IPv6Protocol IPFamily = "IPv6"
+)
+
// ServiceSpec describes the attributes that a user creates on a service.
type ServiceSpec struct {
// The list of ports that are exposed by this service.
@@ -3909,13 +3943,24 @@ type ServiceSpec struct {
// sessionAffinityConfig contains the configurations of session affinity.
// +optional
SessionAffinityConfig *SessionAffinityConfig `json:"sessionAffinityConfig,omitempty" protobuf:"bytes,14,opt,name=sessionAffinityConfig"`
+
+ // ipFamily specifies whether this Service has a preference for a particular IP family (e.g. IPv4 vs.
+ // IPv6). If a specific IP family is requested, the clusterIP field will be allocated from that family, if it is
+ // available in the cluster. If no IP family is requested, the cluster's primary IP family will be used.
+ // Other IP fields (loadBalancerIP, loadBalancerSourceRanges, externalIPs) and controllers which
+ // allocate external load-balancers should use the same IP family. Endpoints for this Service will be of
+ // this family. This field is immutable after creation. Assigning a ServiceIPFamily not available in the
+ // cluster (e.g. IPv6 in IPv4 only cluster) is an error condition and will fail during clusterIP assignment.
+ // +optional
+ IPFamily *IPFamily `json:"ipFamily,omitempty" protobuf:"bytes,15,opt,name=ipFamily,Configcasttype=IPFamily"`
}
// ServicePort contains information on service's port.
type ServicePort struct {
// The name of this port within the service. This must be a DNS_LABEL.
- // All ports within a ServiceSpec must have unique names. This maps to
- // the 'Name' field in EndpointPort objects.
+ // All ports within a ServiceSpec must have unique names. When considering
+ // the endpoints for a Service, this must match the 'name' field in the
+ // EndpointPort.
// Optional if only one ServicePort is defined on this service.
// +optional
Name string `json:"name,omitempty" protobuf:"bytes,1,opt,name=name"`
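Not part of the vendored diff: a minimal sketch of requesting an IP family through the new (alpha at this vintage) `ipFamily` field. Later releases replaced this single field with `ipFamilies`, so this applies only to the API shown here.

```go
package example

import corev1 "k8s.io/api/core/v1"

var v6 = corev1.IPv6Protocol

// svc asks for its ClusterIP to be allocated from the IPv6 family.
// Per the comment above, this fails at clusterIP assignment if the
// cluster has no IPv6 service range.
var svc = corev1.Service{
	Spec: corev1.ServiceSpec{
		IPFamily: &v6,
		Ports:    []corev1.ServicePort{{Name: "http", Port: 80}},
	},
}
```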
@@ -3958,19 +4003,19 @@ type ServicePort struct {
type Service struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the behavior of a service.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec ServiceSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Most recently observed status of the service.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status ServiceStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -3987,7 +4032,7 @@ const (
type ServiceList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4005,7 +4050,7 @@ type ServiceList struct {
type ServiceAccount struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4035,7 +4080,7 @@ type ServiceAccount struct {
type ServiceAccountList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4062,7 +4107,7 @@ type ServiceAccountList struct {
type Endpoints struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4124,7 +4169,8 @@ type EndpointAddress struct {
// EndpointPort is a tuple that describes a single port.
type EndpointPort struct {
- // The name of this port (corresponds to ServicePort.Name).
+ // The name of this port. This must match the 'name' field in the
+ // corresponding ServicePort.
// Must be a DNS_LABEL.
// Optional only if one port is defined.
// +optional
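Not part of the vendored diff: a sketch of the name correspondence the reworded comments describe. For manually managed Endpoints, each `EndpointPort` is matched to a `ServicePort` by the `name` field; the numeric ports may differ.

```go
package example

import corev1 "k8s.io/api/core/v1"

// The "http" ServicePort is wired to its backends through the
// EndpointPort carrying the same name.
var (
	svcPort = corev1.ServicePort{Name: "http", Port: 80}
	epPort  = corev1.EndpointPort{Name: "http", Port: 8080}
)
```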
@@ -4146,7 +4192,7 @@ type EndpointPort struct {
type EndpointsList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4185,7 +4231,7 @@ type NodeSpec struct {
// Deprecated. Not all kubelets will set this field. Remove field after 1.13.
// see: https://issues.k8s.io/61966
// +optional
	DoNotUse_ExternalID string `json:"externalID,omitempty" protobuf:"bytes,2,opt,name=externalID"`
}
// NodeConfigSource specifies a source of node configuration. Exactly one subfield (excluding metadata) must be non-nil.
@@ -4540,19 +4586,19 @@ type ResourceList map[ResourceName]resource.Quantity
type Node struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the behavior of a node.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec NodeSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Most recently observed status of the node.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status NodeStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -4563,7 +4609,7 @@ type Node struct {
type NodeList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4594,6 +4640,12 @@ type NamespaceStatus struct {
// More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/
// +optional
Phase NamespacePhase `json:"phase,omitempty" protobuf:"bytes,1,opt,name=phase,casttype=NamespacePhase"`
+
+ // Represents the latest available observations of a namespace's current state.
+ // +optional
+ // +patchMergeKey=type
+ // +patchStrategy=merge
+ Conditions []NamespaceCondition `json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,2,rep,name=conditions"`
}
type NamespacePhase string
@@ -4606,6 +4658,42 @@ const (
NamespaceTerminating NamespacePhase = "Terminating"
)
+const (
+ // NamespaceTerminatingCause is returned as a defaults.cause item when a change is
+ // forbidden due to the namespace being terminated.
+ NamespaceTerminatingCause metav1.CauseType = "NamespaceTerminating"
+)
+
+type NamespaceConditionType string
+
+// These are valid conditions of a namespace.
+const (
+ // NamespaceDeletionDiscoveryFailure contains information about namespace deleter errors during resource discovery.
+ NamespaceDeletionDiscoveryFailure NamespaceConditionType = "NamespaceDeletionDiscoveryFailure"
+ // NamespaceDeletionContentFailure contains information about namespace deleter errors during deletion of resources.
+ NamespaceDeletionContentFailure NamespaceConditionType = "NamespaceDeletionContentFailure"
+ // NamespaceDeletionGVParsingFailure contains information about namespace deleter errors parsing GV for legacy types.
+ NamespaceDeletionGVParsingFailure NamespaceConditionType = "NamespaceDeletionGroupVersionParsingFailure"
+ // NamespaceContentRemaining contains information about resources remaining in a namespace.
+ NamespaceContentRemaining NamespaceConditionType = "NamespaceContentRemaining"
+ // NamespaceFinalizersRemaining contains information about which finalizers are on resources remaining in a namespace.
+ NamespaceFinalizersRemaining NamespaceConditionType = "NamespaceFinalizersRemaining"
+)
+
+// NamespaceCondition contains details about state of namespace.
+type NamespaceCondition struct {
+ // Type of namespace controller condition.
+ Type NamespaceConditionType `json:"type" protobuf:"bytes,1,opt,name=type,casttype=NamespaceConditionType"`
+ // Status of the condition, one of True, False, Unknown.
+ Status ConditionStatus `json:"status" protobuf:"bytes,2,opt,name=status,casttype=ConditionStatus"`
+ // +optional
+ LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty" protobuf:"bytes,4,opt,name=lastTransitionTime"`
+ // +optional
+ Reason string `json:"reason,omitempty" protobuf:"bytes,5,opt,name=reason"`
+ // +optional
+ Message string `json:"message,omitempty" protobuf:"bytes,6,opt,name=message"`
+}
+
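Not part of the vendored diff: a minimal sketch of reading the new namespace conditions, e.g. to see why a namespace is stuck in `Terminating`; the condition types are the constants defined above.

```go
package example

import corev1 "k8s.io/api/core/v1"

// remainingContent reports whether deletion is blocked on leftover resources
// and returns the controller's message if so.
func remainingContent(ns *corev1.Namespace) (string, bool) {
	for _, c := range ns.Status.Conditions {
		if c.Type == corev1.NamespaceContentRemaining && c.Status == corev1.ConditionTrue {
			return c.Message, true
		}
	}
	return "", false
}
```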
// +genclient
// +genclient:nonNamespaced
// +genclient:skipVerbs=deleteCollection
@@ -4616,17 +4704,17 @@ const (
type Namespace struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the behavior of the Namespace.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec NamespaceSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Status describes the current status of a Namespace.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status NamespaceStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -4637,7 +4725,7 @@ type Namespace struct {
type NamespaceList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4653,7 +4741,7 @@ type NamespaceList struct {
type Binding struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -4663,13 +4751,15 @@ type Binding struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// A list of ephemeral containers used in API operations
+// A list of ephemeral containers used with the Pod ephemeralcontainers subresource.
type EphemeralContainers struct {
metav1.TypeMeta `json:",inline"`
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
- // The new set of ephemeral containers to use for a pod.
+ // A list of ephemeral containers associated with this pod. New ephemeral containers
+ // may be appended to this list, but existing ephemeral containers may not be removed
+ // or modified.
// +patchMergeKey=name
// +patchStrategy=merge
EphemeralContainers []EphemeralContainer `json:"ephemeralContainers" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,2,rep,name=ephemeralContainers"`
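Not part of the vendored diff: a minimal sketch of an entry that would be appended to this list. Per the comments above, the updated list must be sent through the pod's `ephemeralcontainers` subresource (alpha at this vintage), and existing entries may not be removed or modified.

```go
package example

import corev1 "k8s.io/api/core/v1"

// debugShell is an interactive debugging container to append to a pod's
// ephemeral container list.
var debugShell = corev1.EphemeralContainer{
	EphemeralContainerCommon: corev1.EphemeralContainerCommon{
		Name:    "debugger",
		Image:   "busybox",
		Command: []string{"sh"},
		Stdin:   true,
		TTY:     true,
	},
}
```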
@@ -4683,6 +4773,7 @@ type Preconditions struct {
UID *types.UID `json:"uid,omitempty" protobuf:"bytes,1,opt,name=uid,casttype=k8s.io/apimachinery/pkg/types.UID"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodLogOptions is the query options for a Pod's logs REST call.
@@ -4723,8 +4814,18 @@ type PodLogOptions struct {
// slightly more or slightly less than the specified limit.
// +optional
LimitBytes *int64 `json:"limitBytes,omitempty" protobuf:"varint,8,opt,name=limitBytes"`
+
+ // insecureSkipTLSVerifyBackend indicates that the apiserver should not confirm the validity of the
+ // serving certificate of the backend it is connecting to. This will make the HTTPS connection between the apiserver
+ // and the backend insecure. This means the apiserver cannot verify the log data it is receiving came from the real
+ // kubelet. If the kubelet is configured to verify the apiserver's TLS credentials, it does not mean the
+ // connection to the real kubelet is vulnerable to a man in the middle attack (e.g. an attacker could not intercept
+ // the actual log data coming from the real kubelet).
+ // +optional
+ InsecureSkipTLSVerifyBackend bool `json:"insecureSkipTLSVerifyBackend,omitempty" protobuf:"varint,9,opt,name=insecureSkipTLSVerifyBackend"`
}
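Not part of the vendored diff: a minimal sketch of fetching logs with the new escape hatch for kubelets serving unverifiable certificates, assuming the context-free client-go request API of this vintage (`DoRaw()` without a context argument).

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// tailLogs reads logs even when the kubelet's serving cert can't be verified.
// Trade-off per the comment above: the apiserver can no longer prove the log
// data came from the real kubelet.
func tailLogs(cs kubernetes.Interface, ns, pod string) ([]byte, error) {
	opts := &corev1.PodLogOptions{InsecureSkipTLSVerifyBackend: true}
	return cs.CoreV1().Pods(ns).GetLogs(pod, opts).DoRaw()
}
```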
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodAttachOptions is the query options to a Pod's remote attach call.
@@ -4762,6 +4863,7 @@ type PodAttachOptions struct {
Container string `json:"container,omitempty" protobuf:"bytes,5,opt,name=container"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodExecOptions is the query options to a Pod's remote exec call.
@@ -4800,6 +4902,7 @@ type PodExecOptions struct {
Command []string `json:"command" protobuf:"bytes,6,rep,name=command"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodPortForwardOptions is the query options to a Pod's port forward call
@@ -4817,6 +4920,7 @@ type PodPortForwardOptions struct {
Ports []int32 `json:"ports,omitempty" protobuf:"varint,1,rep,name=ports"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PodProxyOptions is the query options to a Pod's proxy call.
@@ -4828,6 +4932,7 @@ type PodProxyOptions struct {
Path string `json:"path,omitempty" protobuf:"bytes,1,opt,name=path"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// NodeProxyOptions is the query options to a Node's proxy call.
@@ -4839,6 +4944,7 @@ type NodeProxyOptions struct {
Path string `json:"path,omitempty" protobuf:"bytes,1,opt,name=path"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ServiceProxyOptions is the query options to a Service's proxy call.
@@ -4858,7 +4964,7 @@ type ServiceProxyOptions struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type ObjectReference struct {
// Kind of the referent.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
Kind string `json:"kind,omitempty" protobuf:"bytes,1,opt,name=kind"`
// Namespace of the referent.
@@ -4877,7 +4983,7 @@ type ObjectReference struct {
// +optional
APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,5,opt,name=apiVersion"`
// Specific resourceVersion to which this reference is made, if any.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
// +optional
ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,6,opt,name=resourceVersion"`
@@ -4952,7 +5058,7 @@ const (
type Event struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metav1.ObjectMeta `json:"metadata" protobuf:"bytes,1,opt,name=metadata"`
// The object that this event is about.
@@ -5040,7 +5146,7 @@ const (
type EventList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5100,12 +5206,12 @@ type LimitRangeSpec struct {
type LimitRange struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the limits enforced.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec LimitRangeSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
}
@@ -5116,7 +5222,7 @@ type LimitRange struct {
type LimitRangeList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5256,17 +5362,17 @@ type ResourceQuotaStatus struct {
type ResourceQuota struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec defines the desired quota.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec ResourceQuotaSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Status defines the actual enforced quota and its current usage.
- // https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status ResourceQuotaStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -5277,7 +5383,7 @@ type ResourceQuota struct {
type ResourceQuotaList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5294,7 +5400,7 @@ type ResourceQuotaList struct {
type Secret struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5411,7 +5517,7 @@ const (
type SecretList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5427,7 +5533,7 @@ type SecretList struct {
type ConfigMap struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5456,7 +5562,7 @@ type ConfigMap struct {
type ConfigMapList struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5498,7 +5604,7 @@ type ComponentCondition struct {
type ComponentStatus struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5515,7 +5621,7 @@ type ComponentStatus struct {
type ComponentStatusList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -5682,7 +5788,7 @@ type WindowsSecurityContextOptions struct {
// Defaults to the user specified in image metadata if unspecified.
// May also be set in PodSecurityContext. If set in both SecurityContext and
// PodSecurityContext, the value specified in SecurityContext takes precedence.
- // This field is alpha-level and it is only honored by servers that enable the WindowsRunAsUserName feature flag.
+ // This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag.
// +optional
RunAsUserName *string `json:"runAsUserName,omitempty" protobuf:"bytes,3,opt,name=runAsUserName"`
}
@@ -5693,7 +5799,7 @@ type WindowsSecurityContextOptions struct {
type RangeAllocation struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go
index abc2e9b99b454..7ad0595323862 100644
--- a/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go
@@ -108,7 +108,7 @@ func (AzureFileVolumeSource) SwaggerDoc() map[string]string {
var map_Binding = map[string]string{
"": "Binding ties one object to another; for example, a pod is bound to a node by a scheduler. Deprecated in 1.7, please use the bindings subresource of pods instead.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"target": "The target object that you want to bind to the standard object.",
}
@@ -231,7 +231,7 @@ func (ComponentCondition) SwaggerDoc() map[string]string {
var map_ComponentStatus = map[string]string{
"": "ComponentStatus (and ComponentStatusList) holds the cluster validation info.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"conditions": "List of component conditions observed",
}
@@ -241,7 +241,7 @@ func (ComponentStatus) SwaggerDoc() map[string]string {
var map_ComponentStatusList = map[string]string{
"": "Status of all the conditions for the component as a list of ComponentStatus objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of ComponentStatus objects.",
}
@@ -251,7 +251,7 @@ func (ComponentStatusList) SwaggerDoc() map[string]string {
var map_ConfigMap = map[string]string{
"": "ConfigMap holds configuration data for pods to consume.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"data": "Data contains the configuration data. Each key must consist of alphanumeric characters, '-', '_' or '.'. Values with non-UTF-8 byte sequences must use the BinaryData field. The keys stored in Data must not overlap with the keys in the BinaryData field, this is enforced during validation process.",
"binaryData": "BinaryData contains the binary data. Each key must consist of alphanumeric characters, '-', '_' or '.'. BinaryData can contain byte sequences that are not in the UTF-8 range. The keys stored in BinaryData must not overlap with the ones in the Data field, this is enforced during validation process. Using this field will require 1.10+ apiserver and kubelet.",
}
@@ -281,7 +281,7 @@ func (ConfigMapKeySelector) SwaggerDoc() map[string]string {
var map_ConfigMapList = map[string]string{
"": "ConfigMapList is a resource containing a list of ConfigMap objects.",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of ConfigMaps.",
}
@@ -338,6 +338,7 @@ var map_Container = map[string]string{
"volumeDevices": "volumeDevices is the list of block devices to be used by the container. This is a beta feature.",
"livenessProbe": "Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
"readinessProbe": "Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
+ "startupProbe": "StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod's lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. This is an alpha feature enabled by the StartupProbe feature flag. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
"lifecycle": "Actions that the management system should take in response to container lifecycle events. Cannot be updated.",
"terminationMessagePath": "Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.",
"terminationMessagePolicy": "Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.",
@@ -430,6 +431,7 @@ var map_ContainerStatus = map[string]string{
"image": "The image the container is running. More info: https://kubernetes.io/docs/concepts/containers/images",
"imageID": "ImageID of the container's image.",
"containerID": "Container's ID in the format 'docker://<container_id>'.",
+ "started": "Specifies whether the container has passed its startup probe. Initialized as false, becomes true after startupProbe is considered successful. Resets to false when the container is restarted, or if kubelet loses state temporarily. Is always true when no startupProbe is defined.",
}
func (ContainerStatus) SwaggerDoc() map[string]string {
@@ -500,7 +502,7 @@ func (EndpointAddress) SwaggerDoc() map[string]string {
var map_EndpointPort = map[string]string{
"": "EndpointPort is a tuple that describes a single port.",
- "name": "The name of this port (corresponds to ServicePort.Name). Must be a DNS_LABEL. Optional only if one port is defined.",
+ "name": "The name of this port. This must match the 'name' field in the corresponding ServicePort. Must be a DNS_LABEL. Optional only if one port is defined.",
"port": "The port number of the endpoint.",
"protocol": "The IP protocol for this port. Must be UDP, TCP, or SCTP. Default is TCP.",
}
@@ -522,7 +524,7 @@ func (EndpointSubset) SwaggerDoc() map[string]string {
var map_Endpoints = map[string]string{
"": "Endpoints is a collection of endpoints that implement the actual service. Example:\n Name: \"mysvc\",\n Subsets: [\n {\n Addresses: [{\"ip\": \"10.10.1.1\"}, {\"ip\": \"10.10.2.2\"}],\n Ports: [{\"name\": \"a\", \"port\": 8675}, {\"name\": \"b\", \"port\": 309}]\n },\n {\n Addresses: [{\"ip\": \"10.10.3.3\"}],\n Ports: [{\"name\": \"a\", \"port\": 93}, {\"name\": \"b\", \"port\": 76}]\n },\n ]",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"subsets": "The set of all endpoints is the union of all subsets. Addresses are placed into subsets according to the IPs they share. A single address with multiple ports, some of which are ready and some of which are not (because they come from different containers) will result in the address being displayed in different subsets for the different ports. No address will appear in both Addresses and NotReadyAddresses in the same subset. Sets of addresses and ports that comprise a service.",
}
@@ -532,7 +534,7 @@ func (Endpoints) SwaggerDoc() map[string]string {
var map_EndpointsList = map[string]string{
"": "EndpointsList is a list of endpoints.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of endpoints.",
}
@@ -564,7 +566,7 @@ func (EnvVar) SwaggerDoc() map[string]string {
var map_EnvVarSource = map[string]string{
"": "EnvVarSource represents a source for the value of an EnvVar.",
- "fieldRef": "Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP.",
+ "fieldRef": "Selects a field of the pod: supports metadata.name, metadata.namespace, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.",
"resourceFieldRef": "Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.",
"configMapKeyRef": "Selects a key of a ConfigMap.",
"secretKeyRef": "Selects a key of a secret in the pod's namespace",
@@ -575,7 +577,7 @@ func (EnvVarSource) SwaggerDoc() map[string]string {
}
var map_EphemeralContainer = map[string]string{
- "": "An EphemeralContainer is a special type of container which doesn't come with any resource or scheduling guarantees but can be added to a pod that has already been created. They are intended for user-initiated activities such as troubleshooting a running pod. Ephemeral containers will not be restarted when they exit, and they will be killed if the pod is removed or restarted. If an ephemeral container causes a pod to exceed its resource allocation, the pod may be evicted. Ephemeral containers are added via a pod's ephemeralcontainers subresource and will appear in the pod spec once added. No fields in EphemeralContainer may be changed once added. This is an alpha feature enabled by the EphemeralContainers feature flag.",
+ "": "An EphemeralContainer is a container that may be added temporarily to an existing pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a pod is removed or restarted. If an ephemeral container causes a pod to exceed its resource allocation, the pod may be evicted. Ephemeral containers may not be added by directly updating the pod spec. They must be added via the pod's ephemeralcontainers subresource, and they will appear in the pod spec once added. This is an alpha feature enabled by the EphemeralContainers feature flag.",
"targetContainerName": "If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container is run in whatever namespaces are shared for the pod. Note that the container runtime must support this feature.",
}
@@ -584,6 +586,7 @@ func (EphemeralContainer) SwaggerDoc() map[string]string {
}
var map_EphemeralContainerCommon = map[string]string{
+ "": "EphemeralContainerCommon is a copy of all fields in Container to be inlined in EphemeralContainer. This separate type allows easy conversion from EphemeralContainer to Container and allows separate documentation for the fields of EphemeralContainer. When a new field is added to Container it must be added here as well.",
"name": "Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers.",
"image": "Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images",
"command": "Entrypoint array. Not executed within a shell. The docker image's ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container's environment. If a variable cannot be resolved, the reference in the input string will be unchanged. The $(VAR_NAME) syntax can be escaped with a double $$, ie: $$(VAR_NAME). Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell",
@@ -597,6 +600,7 @@ var map_EphemeralContainerCommon = map[string]string{
"volumeDevices": "volumeDevices is the list of block devices to be used by the container. This is a beta feature.",
"livenessProbe": "Probes are not allowed for ephemeral containers.",
"readinessProbe": "Probes are not allowed for ephemeral containers.",
+ "startupProbe": "Probes are not allowed for ephemeral containers.",
"lifecycle": "Lifecycle is not allowed for ephemeral containers.",
"terminationMessagePath": "Optional: Path at which the file to which the container's termination message will be written is mounted into the container's filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.",
"terminationMessagePolicy": "Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated.",
@@ -612,8 +616,8 @@ func (EphemeralContainerCommon) SwaggerDoc() map[string]string {
}
var map_EphemeralContainers = map[string]string{
- "": "A list of ephemeral containers used in API operations",
- "ephemeralContainers": "The new set of ephemeral containers to use for a pod.",
+ "": "A list of ephemeral containers used with the Pod ephemeralcontainers subresource.",
+ "ephemeralContainers": "A list of ephemeral containers associated with this pod. New ephemeral containers may be appended to this list, but existing ephemeral containers may not be removed or modified.",
}
func (EphemeralContainers) SwaggerDoc() map[string]string {
@@ -622,7 +626,7 @@ func (EphemeralContainers) SwaggerDoc() map[string]string {
var map_Event = map[string]string{
"": "Event is a report of an event somewhere in the cluster.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"involvedObject": "The object that this event is about.",
"reason": "This should be a short, machine understandable string that gives the reason for the transition into the object's current status.",
"message": "A human-readable description of the status of this operation.",
@@ -645,7 +649,7 @@ func (Event) SwaggerDoc() map[string]string {
var map_EventList = map[string]string{
"": "EventList is a list of events.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of events",
}
@@ -884,7 +888,7 @@ func (KeyToPath) SwaggerDoc() map[string]string {
var map_Lifecycle = map[string]string{
"": "Lifecycle describes actions that the management system should take in response to container lifecycle events. For the PostStart and PreStop lifecycle handlers, management of the container blocks until the action is complete, unless the container process fails, in which case the handler is aborted.",
"postStart": "PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks",
- "preStop": "PreStop is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The reason for termination is passed to the handler. The Pod's termination grace period countdown begins before the PreStop hooked is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period. Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks",
+ "preStop": "PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The reason for termination is passed to the handler. The Pod's termination grace period countdown begins before the PreStop hooked is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod's termination grace period. Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks",
}
func (Lifecycle) SwaggerDoc() map[string]string {
@@ -893,8 +897,8 @@ func (Lifecycle) SwaggerDoc() map[string]string {
var map_LimitRange = map[string]string{
"": "LimitRange sets resource usage limits for each kind of resource in a Namespace.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the limits enforced. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the limits enforced. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (LimitRange) SwaggerDoc() map[string]string {
@@ -917,7 +921,7 @@ func (LimitRangeItem) SwaggerDoc() map[string]string {
var map_LimitRangeList = map[string]string{
"": "LimitRangeList is a list of LimitRange items.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "Items is a list of LimitRange objects. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/",
}
@@ -985,18 +989,28 @@ func (NFSVolumeSource) SwaggerDoc() map[string]string {
var map_Namespace = map[string]string{
"": "Namespace provides a scope for Names. Use of multiple namespaces is optional.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the behavior of the Namespace. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Status describes the current status of a Namespace. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the behavior of the Namespace. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Status describes the current status of a Namespace. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Namespace) SwaggerDoc() map[string]string {
return map_Namespace
}
+var map_NamespaceCondition = map[string]string{
+ "": "NamespaceCondition contains details about state of namespace.",
+ "type": "Type of namespace controller condition.",
+ "status": "Status of the condition, one of True, False, Unknown.",
+}
+
+func (NamespaceCondition) SwaggerDoc() map[string]string {
+ return map_NamespaceCondition
+}
+
var map_NamespaceList = map[string]string{
"": "NamespaceList is a list of Namespaces.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "Items is the list of Namespace objects in the list. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/",
}
@@ -1014,8 +1028,9 @@ func (NamespaceSpec) SwaggerDoc() map[string]string {
}
var map_NamespaceStatus = map[string]string{
- "": "NamespaceStatus is information about the current status of a Namespace.",
- "phase": "Phase is the current lifecycle phase of the namespace. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/",
+ "": "NamespaceStatus is information about the current status of a Namespace.",
+ "phase": "Phase is the current lifecycle phase of the namespace. More info: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/",
+ "conditions": "Represents the latest available observations of a namespace's current state.",
}
func (NamespaceStatus) SwaggerDoc() map[string]string {
@@ -1024,9 +1039,9 @@ func (NamespaceStatus) SwaggerDoc() map[string]string {
var map_Node = map[string]string{
"": "Node is a worker node in Kubernetes. Each node will have a unique identifier in the cache (i.e. in etcd).",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the behavior of a node. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Most recently observed status of the node. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the behavior of a node. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Most recently observed status of the node. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Node) SwaggerDoc() map[string]string {
@@ -1099,7 +1114,7 @@ func (NodeDaemonEndpoints) SwaggerDoc() map[string]string {
var map_NodeList = map[string]string{
"": "NodeList is the whole list of all Nodes which have been registered with master.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of nodes",
}
@@ -1219,12 +1234,12 @@ func (ObjectFieldSelector) SwaggerDoc() map[string]string {
var map_ObjectReference = map[string]string{
"": "ObjectReference contains enough information to let you inspect or modify the referred object.",
- "kind": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "kind": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"namespace": "Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/",
"name": "Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names",
"uid": "UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids",
"apiVersion": "API version of the referent.",
- "resourceVersion": "Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency",
+ "resourceVersion": "Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency",
"fieldPath": "If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: \"spec.containers{name}\" (where \"name\" refers to the name of the container that triggered the event) or if no container name is specified \"spec.containers[2]\" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object.",
}
@@ -1234,7 +1249,7 @@ func (ObjectReference) SwaggerDoc() map[string]string {
var map_PersistentVolume = map[string]string{
"": "PersistentVolume (PV) is a storage resource provisioned by an administrator. It is analogous to a node. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "Spec defines a specification of a persistent volume owned by the cluster. Provisioned by an administrator. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes",
"status": "Status represents the current information/status for the persistent volume. Populated by the system. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistent-volumes",
}
@@ -1245,7 +1260,7 @@ func (PersistentVolume) SwaggerDoc() map[string]string {
var map_PersistentVolumeClaim = map[string]string{
"": "PersistentVolumeClaim is a user's request for and claim to a persistent volume",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "Spec defines the desired characteristics of a volume requested by a pod author. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims",
"status": "Status represents the current information/status of a persistent volume claim. Read-only. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims",
}
@@ -1268,7 +1283,7 @@ func (PersistentVolumeClaimCondition) SwaggerDoc() map[string]string {
var map_PersistentVolumeClaimList = map[string]string{
"": "PersistentVolumeClaimList is a list of PersistentVolumeClaim items.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "A list of persistent volume claims. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims",
}
@@ -1315,7 +1330,7 @@ func (PersistentVolumeClaimVolumeSource) SwaggerDoc() map[string]string {
var map_PersistentVolumeList = map[string]string{
"": "PersistentVolumeList is a list of PersistentVolume items.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of persistent volumes. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes",
}
@@ -1392,9 +1407,9 @@ func (PhotonPersistentDiskVolumeSource) SwaggerDoc() map[string]string {
var map_Pod = map[string]string{
"": "Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Pod) SwaggerDoc() map[string]string {
@@ -1504,8 +1519,8 @@ func (PodIP) SwaggerDoc() map[string]string {
var map_PodList = map[string]string{
"": "PodList is a list of Pods.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
- "items": "List of pods. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
+ "items": "List of pods. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md",
}
func (PodList) SwaggerDoc() map[string]string {
@@ -1513,15 +1528,16 @@ func (PodList) SwaggerDoc() map[string]string {
}
var map_PodLogOptions = map[string]string{
- "": "PodLogOptions is the query options for a Pod's logs REST call.",
- "container": "The container for which to stream logs. Defaults to only container if there is one container in the pod.",
- "follow": "Follow the log stream of the pod. Defaults to false.",
- "previous": "Return previous terminated container logs. Defaults to false.",
- "sinceSeconds": "A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified.",
- "sinceTime": "An RFC3339 timestamp from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified.",
- "timestamps": "If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. Defaults to false.",
- "tailLines": "If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime",
- "limitBytes": "If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit.",
+ "": "PodLogOptions is the query options for a Pod's logs REST call.",
+ "container": "The container for which to stream logs. Defaults to only container if there is one container in the pod.",
+ "follow": "Follow the log stream of the pod. Defaults to false.",
+ "previous": "Return previous terminated container logs. Defaults to false.",
+ "sinceSeconds": "A relative time in seconds before the current time from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified.",
+ "sinceTime": "An RFC3339 timestamp from which to show logs. If this value precedes the time a pod was started, only logs since the pod start will be returned. If this value is in the future, no logs will be returned. Only one of sinceSeconds or sinceTime may be specified.",
+ "timestamps": "If true, add an RFC3339 or RFC3339Nano timestamp at the beginning of every line of log output. Defaults to false.",
+ "tailLines": "If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime",
+ "limitBytes": "If set, the number of bytes to read from the server before terminating the log output. This may not display a complete final line of logging, and may return slightly more or slightly less than the specified limit.",
+ "insecureSkipTLSVerifyBackend": "insecureSkipTLSVerifyBackend indicates that the apiserver should not confirm the validity of the serving certificate of the backend it is connecting to. This will make the HTTPS connection between the apiserver and the backend insecure. This means the apiserver cannot verify the log data it is receiving came from the real kubelet. If the kubelet is configured to verify the apiserver's TLS credentials, it does not mean the connection to the real kubelet is vulnerable to a man in the middle attack (e.g. an attacker could not intercept the actual log data coming from the real kubelet).",
}
func (PodLogOptions) SwaggerDoc() map[string]string {
@@ -1583,9 +1599,9 @@ func (PodSignature) SwaggerDoc() map[string]string {
var map_PodSpec = map[string]string{
"": "PodSpec is a description of a pod.",
"volumes": "List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes",
- "initContainers": "List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, or Liveness probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/",
+ "initContainers": "List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/",
"containers": "List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated.",
- "ephemeralContainers": "EphemeralContainers is the list of ephemeral containers that run in this pod. Ephemeral containers are added to an existing pod as a result of a user-initiated action such as troubleshooting. This list is read-only in the pod spec. It may not be specified in a create or modified in an update of a pod or pod template. To add an ephemeral container use the pod's ephemeralcontainers subresource, which allows update using the EphemeralContainers kind. This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.",
+ "ephemeralContainers": "List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.",
"restartPolicy": "Restart policy for all containers within the pod. One of Always, OnFailure, Never. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy",
"terminationGracePeriodSeconds": "Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds.",
"activeDeadlineSeconds": "Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer.",
@@ -1598,7 +1614,7 @@ var map_PodSpec = map[string]string{
"hostNetwork": "Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false.",
"hostPID": "Use the host's pid namespace. Optional: Default to false.",
"hostIPC": "Use the host's ipc namespace. Optional: Default to false.",
- "shareProcessNamespace": "Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false. This field is beta-level and may be disabled with the PodShareProcessNamespace feature.",
+ "shareProcessNamespace": "Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false.",
"securityContext": "SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.",
"imagePullSecrets": "ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. For example, in the case of docker, only DockerConfig type secrets are honored. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod",
"hostname": "Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value.",
@@ -1636,7 +1652,7 @@ var map_PodStatus = map[string]string{
"initContainerStatuses": "The list has one entry per init container in the manifest. The most recent successful init container will have ready = true, the most recently started container will have startTime set. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status",
"containerStatuses": "The list has one entry per container in the manifest. Each entry is currently the output of `docker inspect`. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#pod-and-container-status",
"qosClass": "The Quality of Service (QOS) classification assigned to the pod based on resource requirements See PodQOSClass type for available QOS classes More info: https://git.k8s.io/community/contributors/design-proposals/node/resource-qos.md",
- "ephemeralContainerStatuses": "Status for any ephemeral containers that running in this pod. This field is alpha-level and is only honored by servers that enable the EphemeralContainers feature.",
+ "ephemeralContainerStatuses": "Status for any ephemeral containers that have run in this pod. This field is alpha-level and is only populated by servers that enable the EphemeralContainers feature.",
}
func (PodStatus) SwaggerDoc() map[string]string {
@@ -1645,8 +1661,8 @@ func (PodStatus) SwaggerDoc() map[string]string {
var map_PodStatusResult = map[string]string{
"": "PodStatusResult is a wrapper for PodStatus returned by kubelet that can be encode/decoded",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "status": "Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "status": "Most recently observed status of the pod. This data may not be up to date. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (PodStatusResult) SwaggerDoc() map[string]string {
@@ -1655,8 +1671,8 @@ func (PodStatusResult) SwaggerDoc() map[string]string {
var map_PodTemplate = map[string]string{
"": "PodTemplate describes a template for creating copies of a predefined pod.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "template": "Template defines the pods that will be created from this pod template. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "template": "Template defines the pods that will be created from this pod template. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (PodTemplate) SwaggerDoc() map[string]string {
@@ -1665,7 +1681,7 @@ func (PodTemplate) SwaggerDoc() map[string]string {
var map_PodTemplateList = map[string]string{
"": "PodTemplateList is a list of PodTemplates.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of pod templates",
}
@@ -1675,8 +1691,8 @@ func (PodTemplateList) SwaggerDoc() map[string]string {
var map_PodTemplateSpec = map[string]string{
"": "PodTemplateSpec describes the data a pod should have when created from a template",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the desired behavior of the pod. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (PodTemplateSpec) SwaggerDoc() map[string]string {
@@ -1730,7 +1746,7 @@ var map_Probe = map[string]string{
"initialDelaySeconds": "Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
"timeoutSeconds": "Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes",
"periodSeconds": "How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.",
- "successThreshold": "Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness. Minimum value is 1.",
+ "successThreshold": "Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1.",
"failureThreshold": "Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1.",
}
@@ -1796,7 +1812,7 @@ func (RBDVolumeSource) SwaggerDoc() map[string]string {
var map_RangeAllocation = map[string]string{
"": "RangeAllocation is not a public type.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"range": "Range is string that identifies the range represented by 'data'.",
"data": "Data is a bit array containing all allocated addresses in the previous segment.",
}
@@ -1807,9 +1823,9 @@ func (RangeAllocation) SwaggerDoc() map[string]string {
var map_ReplicationController = map[string]string{
"": "ReplicationController represents the configuration of a replication controller.",
- "metadata": "If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the specification of the desired behavior of the replication controller. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Status is the most recently observed status of the replication controller. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "If the Labels of a ReplicationController are empty, they are defaulted to be the same as the Pod(s) that the replication controller manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the specification of the desired behavior of the replication controller. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Status is the most recently observed status of the replication controller. This data may be out of date by some window of time. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (ReplicationController) SwaggerDoc() map[string]string {
@@ -1831,7 +1847,7 @@ func (ReplicationControllerCondition) SwaggerDoc() map[string]string {
var map_ReplicationControllerList = map[string]string{
"": "ReplicationControllerList is a collection of replication controllers.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of replication controllers. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller",
}
@@ -1878,9 +1894,9 @@ func (ResourceFieldSelector) SwaggerDoc() map[string]string {
var map_ResourceQuota = map[string]string{
"": "ResourceQuota sets aggregate quota restrictions enforced per namespace",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the desired quota. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Status defines the actual enforced quota and its current usage. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the desired quota. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Status defines the actual enforced quota and its current usage. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (ResourceQuota) SwaggerDoc() map[string]string {
@@ -1889,7 +1905,7 @@ func (ResourceQuota) SwaggerDoc() map[string]string {
var map_ResourceQuotaList = map[string]string{
"": "ResourceQuotaList is a list of ResourceQuota items.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "Items is a list of ResourceQuota objects. More info: https://kubernetes.io/docs/concepts/policy/resource-quotas/",
}
@@ -1998,7 +2014,7 @@ func (ScopedResourceSelectorRequirement) SwaggerDoc() map[string]string {
var map_Secret = map[string]string{
"": "Secret holds secret data of a certain type. The total bytes of the values in the Data field must be less than MaxSecretSize bytes.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"data": "Data contains the secret data. Each key must consist of alphanumeric characters, '-', '_' or '.'. The serialized form of the secret data is a base64 encoded string, representing the arbitrary (possibly non-string) data value here. Described in https://tools.ietf.org/html/rfc4648#section-4",
"stringData": "stringData allows specifying non-binary secret data in string form. It is provided as a write-only convenience method. All keys and values are merged into the data field on write, overwriting any existing values. It is never output when reading from the API.",
"type": "Used to facilitate programmatic handling of secret data.",
@@ -2029,7 +2045,7 @@ func (SecretKeySelector) SwaggerDoc() map[string]string {
var map_SecretList = map[string]string{
"": "SecretList is a list of Secret.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "Items is a list of secret objects. More info: https://kubernetes.io/docs/concepts/configuration/secret",
}
@@ -2098,9 +2114,9 @@ func (SerializedReference) SwaggerDoc() map[string]string {
var map_Service = map[string]string{
"": "Service is a named abstraction of software service (for example, mysql) consisting of local port (for example 3306) that the proxy listens on, and the selector that determines which pods will answer requests sent through the proxy.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec defines the behavior of a service. https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Most recently observed status of the service. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec defines the behavior of a service. https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Most recently observed status of the service. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Service) SwaggerDoc() map[string]string {
@@ -2109,7 +2125,7 @@ func (Service) SwaggerDoc() map[string]string {
var map_ServiceAccount = map[string]string{
"": "ServiceAccount binds together: * a name, understood by users, and perhaps by peripheral systems, for an identity * a principal that can be authenticated and authorized * a set of secrets",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"secrets": "Secrets is the list of secrets allowed to be used by pods running using this ServiceAccount. More info: https://kubernetes.io/docs/concepts/configuration/secret",
"imagePullSecrets": "ImagePullSecrets is a list of references to secrets in the same namespace to use for pulling any images in pods that reference this ServiceAccount. ImagePullSecrets are distinct from Secrets because Secrets can be mounted in the pod, but ImagePullSecrets are only accessed by the kubelet. More info: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod",
"automountServiceAccountToken": "AutomountServiceAccountToken indicates whether pods running as this service account should have an API token automatically mounted. Can be overridden at the pod level.",
@@ -2121,7 +2137,7 @@ func (ServiceAccount) SwaggerDoc() map[string]string {
var map_ServiceAccountList = map[string]string{
"": "ServiceAccountList is a list of ServiceAccount objects",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of ServiceAccounts. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/",
}
@@ -2142,7 +2158,7 @@ func (ServiceAccountTokenProjection) SwaggerDoc() map[string]string {
var map_ServiceList = map[string]string{
"": "ServiceList holds a list of services.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of services",
}
@@ -2152,7 +2168,7 @@ func (ServiceList) SwaggerDoc() map[string]string {
var map_ServicePort = map[string]string{
"": "ServicePort contains information on service's port.",
- "name": "The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. This maps to the 'Name' field in EndpointPort objects. Optional if only one ServicePort is defined on this service.",
+ "name": "The name of this port within the service. This must be a DNS_LABEL. All ports within a ServiceSpec must have unique names. When considering the endpoints for a Service, this must match the 'name' field in the EndpointPort. Optional if only one ServicePort is defined on this service.",
"protocol": "The IP protocol for this port. Supports \"TCP\", \"UDP\", and \"SCTP\". Default is TCP.",
"port": "The port that will be exposed by this service.",
"targetPort": "Number or name of the port to access on the pods targeted by the service. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. If this is a string, it will be looked up as a named port in the target Pod's container ports. If this is not specified, the value of the 'port' field is used (an identity map). This field is ignored for services with clusterIP=None, and should be omitted or set equal to the 'port' field. More info: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service",
@@ -2187,6 +2203,7 @@ var map_ServiceSpec = map[string]string{
"healthCheckNodePort": "healthCheckNodePort specifies the healthcheck nodePort for the service. If not specified, HealthCheckNodePort is created by the service api backend with the allocated nodePort. Will use user-specified nodePort value if specified by the client. Only effects when Type is set to LoadBalancer and ExternalTrafficPolicy is set to Local.",
"publishNotReadyAddresses": "publishNotReadyAddresses, when set to true, indicates that DNS implementations must publish the notReadyAddresses of subsets for the Endpoints associated with the Service. The default value is false. The primary use case for setting this field is to use a StatefulSet's Headless Service to propagate SRV records for its Pods without respect to their readiness for purpose of peer discovery.",
"sessionAffinityConfig": "sessionAffinityConfig contains the configurations of session affinity.",
+ "ipFamily": "ipFamily specifies whether this Service has a preference for a particular IP family (e.g. IPv4 vs. IPv6). If a specific IP family is requested, the clusterIP field will be allocated from that family, if it is available in the cluster. If no IP family is requested, the cluster's primary IP family will be used. Other IP fields (loadBalancerIP, loadBalancerSourceRanges, externalIPs) and controllers which allocate external load-balancers should use the same IP family. Endpoints for this Service will be of this family. This field is immutable after creation. Assigning a ServiceIPFamily not available in the cluster (e.g. IPv6 in IPv4 only cluster) is an error condition and will fail during clusterIP assignment.",
}
func (ServiceSpec) SwaggerDoc() map[string]string {
@@ -2350,7 +2367,7 @@ var map_VolumeMount = map[string]string{
"mountPath": "Path within the container at which the volume should be mounted. Must not contain ':'.",
"subPath": "Path within the volume from which the container's volume should be mounted. Defaults to \"\" (volume's root).",
"mountPropagation": "mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10.",
- "subPathExpr": "Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive. This field is beta in 1.15.",
+ "subPathExpr": "Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to \"\" (volume's root). SubPathExpr and SubPath are mutually exclusive.",
}
func (VolumeMount) SwaggerDoc() map[string]string {
@@ -2440,7 +2457,7 @@ var map_WindowsSecurityContextOptions = map[string]string{
"": "WindowsSecurityContextOptions contain Windows-specific options and credentials.",
"gmsaCredentialSpecName": "GMSACredentialSpecName is the name of the GMSA credential spec to use. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag.",
"gmsaCredentialSpec": "GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. This field is alpha-level and is only honored by servers that enable the WindowsGMSA feature flag.",
- "runAsUserName": "The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is alpha-level and it is only honored by servers that enable the WindowsRunAsUserName feature flag.",
+ "runAsUserName": "The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. This field is beta-level and may be disabled with the WindowsRunAsUserName feature flag.",
}
func (WindowsSecurityContextOptions) SwaggerDoc() map[string]string {
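The `startupProbe` plumbing above (the new field on `Container` and `EphemeralContainerCommon`, its doc strings, and the deepcopy additions further down) is the main functional addition in this vendored API bump. As a rough, illustrative sketch of how a consumer of `k8s.io/api/core/v1` might populate the field — the name, image, path, port, and thresholds below are placeholder values, not anything configured by this repository:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// exampleContainer gives a slow-starting container up to
// failureThreshold * periodSeconds (30 * 10s = 300s here) to pass its
// startup probe before the kubelet applies liveness/readiness checks.
func exampleContainer() v1.Container {
	return v1.Container{
		Name:  "app",                // placeholder name
		Image: "example/app:latest", // placeholder image
		StartupProbe: &v1.Probe{
			Handler: v1.Handler{
				HTTPGet: &v1.HTTPGetAction{
					Path: "/healthz",
					Port: intstr.FromInt(8080),
				},
			},
			PeriodSeconds:    10,
			FailureThreshold: 30,
			SuccessThreshold: 1, // must be 1 for startup probes, per the doc string above
		},
	}
}

func main() {
	fmt.Println(exampleContainer().Name)
}
```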
diff --git a/vendor/k8s.io/api/core/v1/well_known_labels.go b/vendor/k8s.io/api/core/v1/well_known_labels.go
index 4497760d3f6b4..22aa55b911195 100644
--- a/vendor/k8s.io/api/core/v1/well_known_labels.go
+++ b/vendor/k8s.io/api/core/v1/well_known_labels.go
@@ -17,15 +17,23 @@ limitations under the License.
package v1
const (
- LabelHostname = "kubernetes.io/hostname"
- LabelZoneFailureDomain = "failure-domain.beta.kubernetes.io/zone"
- LabelZoneRegion = "failure-domain.beta.kubernetes.io/region"
+ LabelHostname = "kubernetes.io/hostname"
- LabelInstanceType = "beta.kubernetes.io/instance-type"
+ LabelZoneFailureDomain = "failure-domain.beta.kubernetes.io/zone"
+ LabelZoneRegion = "failure-domain.beta.kubernetes.io/region"
+ LabelZoneFailureDomainStable = "topology.kubernetes.io/zone"
+ LabelZoneRegionStable = "topology.kubernetes.io/region"
+
+ LabelInstanceType = "beta.kubernetes.io/instance-type"
+ LabelInstanceTypeStable = "node.kubernetes.io/instance-type"
LabelOSStable = "kubernetes.io/os"
LabelArchStable = "kubernetes.io/arch"
+ // LabelWindowsBuild is used on Windows nodes to specify the Windows build number starting with v1.17.0.
+ // It's in the format MajorVersion.MinorVersion.BuildNumber (for ex: 10.0.17763)
+ LabelWindowsBuild = "node.kubernetes.io/windows-build"
+
// LabelNamespaceSuffixKubelet is an allowed label namespace suffix kubelets can self-set ([*.]kubelet.kubernetes.io/*)
LabelNamespaceSuffixKubelet = "kubelet.kubernetes.io"
// LabelNamespaceSuffixNode is an allowed label namespace suffix kubelets can self-set ([*.]node.kubernetes.io/*)
@@ -33,4 +41,10 @@ const (
// LabelNamespaceNodeRestriction is a forbidden label namespace that kubelets may not self-set when the NodeRestriction admission plugin is enabled
LabelNamespaceNodeRestriction = "node-restriction.kubernetes.io"
+
+ // IsHeadlessService is added by Controller to an Endpoint denoting if its parent
+ // Service is Headless. The existence of this label can be used further by other
+ // controllers and kube-proxy to check if the Endpoint objects should be replicated when
+ // using Headless Services
+ IsHeadlessService = "service.kubernetes.io/headless"
)
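The stable `topology.kubernetes.io/zone` and `topology.kubernetes.io/region` names introduced here supersede the `failure-domain.beta.kubernetes.io/*` forms. A minimal sketch of pinning a pod to a zone via the new constant — the zone value and container fields are placeholders:

```go
package main

import v1 "k8s.io/api/core/v1"

// zonePinnedSpec selects nodes by the stable topology label rather than
// the deprecated failure-domain.beta.kubernetes.io/zone form.
func zonePinnedSpec() v1.PodSpec {
	return v1.PodSpec{
		NodeSelector: map[string]string{
			v1.LabelZoneFailureDomainStable: "us-east-1a", // placeholder zone
		},
		Containers: []v1.Container{
			{Name: "app", Image: "example/app:latest"}, // placeholder container
		},
	}
}

func main() { _ = zonePinnedSpec() }
```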
diff --git a/vendor/k8s.io/api/core/v1/well_known_taints.go b/vendor/k8s.io/api/core/v1/well_known_taints.go
new file mode 100644
index 0000000000000..e390519280f4d
--- /dev/null
+++ b/vendor/k8s.io/api/core/v1/well_known_taints.go
@@ -0,0 +1,55 @@
+/*
+Copyright 2019 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1
+
+const (
+ // TaintNodeNotReady will be added when node is not ready
+ // and feature-gate for TaintBasedEvictions flag is enabled,
+ // and removed when node becomes ready.
+ TaintNodeNotReady = "node.kubernetes.io/not-ready"
+
+ // TaintNodeUnreachable will be added when node becomes unreachable
+ // (corresponding to NodeReady status ConditionUnknown)
+ // and feature-gate for TaintBasedEvictions flag is enabled,
+ // and removed when node becomes reachable (NodeReady status ConditionTrue).
+ TaintNodeUnreachable = "node.kubernetes.io/unreachable"
+
+ // TaintNodeUnschedulable will be added when node becomes unschedulable
+ // and feature-gate for TaintNodesByCondition flag is enabled,
+ // and removed when node becomes schedulable.
+ TaintNodeUnschedulable = "node.kubernetes.io/unschedulable"
+
+ // TaintNodeMemoryPressure will be added when node has memory pressure
+ // and feature-gate for TaintNodesByCondition flag is enabled,
+ // and removed when node has enough memory.
+ TaintNodeMemoryPressure = "node.kubernetes.io/memory-pressure"
+
+ // TaintNodeDiskPressure will be added when node has disk pressure
+ // and feature-gate for TaintNodesByCondition flag is enabled,
+ // and removed when node has enough disk.
+ TaintNodeDiskPressure = "node.kubernetes.io/disk-pressure"
+
+ // TaintNodeNetworkUnavailable will be added when node's network is unavailable
+ // and feature-gate for TaintNodesByCondition flag is enabled,
+ // and removed when network becomes ready.
+ TaintNodeNetworkUnavailable = "node.kubernetes.io/network-unavailable"
+
+ // TaintNodePIDPressure will be added when node has pid pressure
+ // and feature-gate for TaintNodesByCondition flag is enabled,
+ // and removed when node has enough disk.
+ TaintNodePIDPressure = "node.kubernetes.io/pid-pressure"
+)
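
Illustrative sketch (mine, not from the diff) of consuming the new well-known taint constants: a toleration that keeps a pod on a not-ready node for a while before eviction. The 60-second grace period is an arbitrary example value.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	graceSeconds := int64(60) // example value, not from the diff
	tol := v1.Toleration{
		Key:               v1.TaintNodeNotReady, // "node.kubernetes.io/not-ready"
		Operator:          v1.TolerationOpExists,
		Effect:            v1.TaintEffectNoExecute,
		TolerationSeconds: &graceSeconds,
	}
	fmt.Printf("%+v\n", tol)
}
```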
diff --git a/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go b/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go
index 02d36058b17a3..fd47019c0344e 100644
--- a/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go
+++ b/vendor/k8s.io/api/core/v1/zz_generated.deepcopy.go
@@ -773,6 +773,11 @@ func (in *Container) DeepCopyInto(out *Container) {
*out = new(Probe)
(*in).DeepCopyInto(*out)
}
+ if in.StartupProbe != nil {
+ in, out := &in.StartupProbe, &out.StartupProbe
+ *out = new(Probe)
+ (*in).DeepCopyInto(*out)
+ }
if in.Lifecycle != nil {
in, out := &in.Lifecycle, &out.Lifecycle
*out = new(Lifecycle)
@@ -920,6 +925,11 @@ func (in *ContainerStatus) DeepCopyInto(out *ContainerStatus) {
*out = *in
in.State.DeepCopyInto(&out.State)
in.LastTerminationState.DeepCopyInto(&out.LastTerminationState)
+ if in.Started != nil {
+ in, out := &in.Started, &out.Started
+ *out = new(bool)
+ **out = **in
+ }
return
}
@@ -1350,6 +1360,11 @@ func (in *EphemeralContainerCommon) DeepCopyInto(out *EphemeralContainerCommon)
*out = new(Probe)
(*in).DeepCopyInto(*out)
}
+ if in.StartupProbe != nil {
+ in, out := &in.StartupProbe, &out.StartupProbe
+ *out = new(Probe)
+ (*in).DeepCopyInto(*out)
+ }
if in.Lifecycle != nil {
in, out := &in.Lifecycle, &out.Lifecycle
*out = new(Lifecycle)
@@ -2189,7 +2204,7 @@ func (in *Namespace) DeepCopyInto(out *Namespace) {
out.TypeMeta = in.TypeMeta
in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
in.Spec.DeepCopyInto(&out.Spec)
- out.Status = in.Status
+ in.Status.DeepCopyInto(&out.Status)
return
}
@@ -2211,6 +2226,23 @@ func (in *Namespace) DeepCopyObject() runtime.Object {
return nil
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NamespaceCondition) DeepCopyInto(out *NamespaceCondition) {
+ *out = *in
+ in.LastTransitionTime.DeepCopyInto(&out.LastTransitionTime)
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NamespaceCondition.
+func (in *NamespaceCondition) DeepCopy() *NamespaceCondition {
+ if in == nil {
+ return nil
+ }
+ out := new(NamespaceCondition)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NamespaceList) DeepCopyInto(out *NamespaceList) {
*out = *in
@@ -2268,6 +2300,13 @@ func (in *NamespaceSpec) DeepCopy() *NamespaceSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NamespaceStatus) DeepCopyInto(out *NamespaceStatus) {
*out = *in
+ if in.Conditions != nil {
+ in, out := &in.Conditions, &out.Conditions
+ *out = make([]NamespaceCondition, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
return
}
@@ -5142,6 +5181,11 @@ func (in *ServiceSpec) DeepCopyInto(out *ServiceSpec) {
*out = new(SessionAffinityConfig)
(*in).DeepCopyInto(*out)
}
+ if in.IPFamily != nil {
+ in, out := &in.IPFamily, &out.IPFamily
+ *out = new(IPFamily)
+ **out = **in
+ }
return
}
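
A quick sanity sketch (not part of the diff) of what the regenerated deepcopy buys: `Container.DeepCopy` now clones the new `StartupProbe` field, so mutating the copy no longer aliases the original.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	orig := v1.Container{
		Name:         "app",
		StartupProbe: &v1.Probe{FailureThreshold: 30},
	}
	cp := orig.DeepCopy()
	cp.StartupProbe.FailureThreshold = 1
	fmt.Println(orig.StartupProbe.FailureThreshold) // prints 30: the probe was deep-copied, not shared
}
```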
diff --git a/vendor/k8s.io/api/events/v1beta1/generated.proto b/vendor/k8s.io/api/events/v1beta1/generated.proto
index 04eacbb280c70..58f5aa422f638 100644
--- a/vendor/k8s.io/api/events/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/events/v1beta1/generated.proto
@@ -98,7 +98,7 @@ message Event {
// EventList is a list of Event objects.
message EventList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/events/v1beta1/types.go b/vendor/k8s.io/api/events/v1beta1/types.go
index eef45645323a2..0571fbb2e826c 100644
--- a/vendor/k8s.io/api/events/v1beta1/types.go
+++ b/vendor/k8s.io/api/events/v1beta1/types.go
@@ -43,11 +43,11 @@ type Event struct {
// ID of the controller instance, e.g. `kubelet-xyzf`.
// +optional
- ReportingInstance string `json:"reportingInstance,omitemtpy" protobuf:"bytes,5,opt,name=reportingInstance"`
+ ReportingInstance string `json:"reportingInstance,omitempty" protobuf:"bytes,5,opt,name=reportingInstance"`
// What action was taken/failed regarding to the regarding object.
// +optional
- Action string `json:"action,omitemtpy" protobuf:"bytes,6,name=action"`
+ Action string `json:"action,omitempty" protobuf:"bytes,6,name=action"`
// Why the action was taken.
Reason string `json:"reason,omitempty" protobuf:"bytes,7,name=reason"`
@@ -114,7 +114,7 @@ const (
type EventList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
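
The `omitemtpy` -> `omitempty` tag fix above is more than cosmetic: encoding/json silently ignores unknown tag options, so the misspelled tag caused empty fields to be serialized. A self-contained sketch (hypothetical types, not the real Event struct):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type withTypo struct {
	Action string `json:"action,omitemtpy"` // unknown option, silently ignored by encoding/json
}

type fixed struct {
	Action string `json:"action,omitempty"`
}

func main() {
	a, _ := json.Marshal(withTypo{})
	b, _ := json.Marshal(fixed{})
	fmt.Println(string(a)) // {"action":""}
	fmt.Println(string(b)) // {}
}
```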
diff --git a/vendor/k8s.io/api/events/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/events/v1beta1/types_swagger_doc_generated.go
index bbc91ed9b3a70..639daca6daef1 100644
--- a/vendor/k8s.io/api/events/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/events/v1beta1/types_swagger_doc_generated.go
@@ -51,7 +51,7 @@ func (Event) SwaggerDoc() map[string]string {
var map_EventList = map[string]string{
"": "EventList is a list of Event objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is a list of schema objects.",
}
diff --git a/vendor/k8s.io/api/extensions/v1beta1/generated.proto b/vendor/k8s.io/api/extensions/v1beta1/generated.proto
index 18d4813671d03..6c90cb3c453fb 100644
--- a/vendor/k8s.io/api/extensions/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/extensions/v1beta1/generated.proto
@@ -873,7 +873,6 @@ message PodSecurityPolicySpec {
// AllowedCSIDrivers is a whitelist of inline CSI drivers that must be explicitly set to be embedded within a pod spec.
// An empty value indicates that any CSI driver can be used for inline ephemeral volumes.
- // This is an alpha field, and is only honored if the API server enables the CSIInlineVolume feature gate.
// +optional
repeated AllowedCSIDriver allowedCSIDrivers = 23;
diff --git a/vendor/k8s.io/api/extensions/v1beta1/types.go b/vendor/k8s.io/api/extensions/v1beta1/types.go
index 6e9b58d76bf8c..eb255341f40a2 100644
--- a/vendor/k8s.io/api/extensions/v1beta1/types.go
+++ b/vendor/k8s.io/api/extensions/v1beta1/types.go
@@ -930,7 +930,6 @@ type PodSecurityPolicySpec struct {
AllowedFlexVolumes []AllowedFlexVolume `json:"allowedFlexVolumes,omitempty" protobuf:"bytes,18,rep,name=allowedFlexVolumes"`
// AllowedCSIDrivers is a whitelist of inline CSI drivers that must be explicitly set to be embedded within a pod spec.
// An empty value indicates that any CSI driver can be used for inline ephemeral volumes.
- // This is an alpha field, and is only honored if the API server enables the CSIInlineVolume feature gate.
// +optional
AllowedCSIDrivers []AllowedCSIDriver `json:"allowedCSIDrivers,omitempty" protobuf:"bytes,23,rep,name=allowedCSIDrivers"`
// allowedUnsafeSysctls is a list of explicitly allowed unsafe sysctls, defaults to none.
@@ -986,7 +985,7 @@ type AllowedHostPath struct {
// Deprecated: use FSType from policy API Group instead.
type FSType string
-var (
+const (
AzureFile FSType = "azureFile"
Flocker FSType = "flocker"
FlexVolume FSType = "flexVolume"
diff --git a/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go
index 719c1548bc882..a7eb2ec907ee3 100644
--- a/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/extensions/v1beta1/types_swagger_doc_generated.go
@@ -470,7 +470,7 @@ var map_PodSecurityPolicySpec = map[string]string{
"allowPrivilegeEscalation": "allowPrivilegeEscalation determines if a pod can request to allow privilege escalation. If unspecified, defaults to true.",
"allowedHostPaths": "allowedHostPaths is a white list of allowed host paths. Empty indicates that all host paths may be used.",
"allowedFlexVolumes": "allowedFlexVolumes is a whitelist of allowed Flexvolumes. Empty or nil indicates that all Flexvolumes may be used. This parameter is effective only when the usage of the Flexvolumes is allowed in the \"volumes\" field.",
- "allowedCSIDrivers": "AllowedCSIDrivers is a whitelist of inline CSI drivers that must be explicitly set to be embedded within a pod spec. An empty value indicates that any CSI driver can be used for inline ephemeral volumes. This is an alpha field, and is only honored if the API server enables the CSIInlineVolume feature gate.",
+ "allowedCSIDrivers": "AllowedCSIDrivers is a whitelist of inline CSI drivers that must be explicitly set to be embedded within a pod spec. An empty value indicates that any CSI driver can be used for inline ephemeral volumes.",
"allowedUnsafeSysctls": "allowedUnsafeSysctls is a list of explicitly allowed unsafe sysctls, defaults to none. Each entry is either a plain sysctl name or ends in \"*\" in which case it is considered as a prefix of allowed sysctls. Single * means all unsafe sysctls are allowed. Kubelet has to whitelist all allowed unsafe sysctls explicitly to avoid rejection.\n\nExamples: e.g. \"foo/*\" allows \"foo/bar\", \"foo/baz\", etc. e.g. \"foo.*\" allows \"foo.bar\", \"foo.baz\", etc.",
"forbiddenSysctls": "forbiddenSysctls is a list of explicitly forbidden sysctls, defaults to none. Each entry is either a plain sysctl name or ends in \"*\" in which case it is considered as a prefix of forbidden sysctls. Single * means all sysctls are forbidden.\n\nExamples: e.g. \"foo/*\" forbids \"foo/bar\", \"foo/baz\", etc. e.g. \"foo.*\" forbids \"foo.bar\", \"foo.baz\", etc.",
"allowedProcMountTypes": "AllowedProcMountTypes is a whitelist of allowed ProcMountTypes. Empty or nil indicates that only the DefaultProcMountType may be used. This requires the ProcMountType feature flag to be enabled.",
diff --git a/vendor/k8s.io/api/networking/v1beta1/generated.proto b/vendor/k8s.io/api/networking/v1beta1/generated.proto
index 7df19138e242c..a72545c81342c 100644
--- a/vendor/k8s.io/api/networking/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/networking/v1beta1/generated.proto
@@ -64,17 +64,17 @@ message HTTPIngressRuleValue {
// based virtual hosting etc.
message Ingress {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Spec is the desired state of the Ingress.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional IngressSpec spec = 2;
// Status is the current state of the Ingress.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional IngressStatus status = 3;
}
@@ -91,7 +91,7 @@ message IngressBackend {
// IngressList is a collection of Ingress.
message IngressList {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/networking/v1beta1/types.go b/vendor/k8s.io/api/networking/v1beta1/types.go
index 63bf2d52a3d1f..37277bf8169a9 100644
--- a/vendor/k8s.io/api/networking/v1beta1/types.go
+++ b/vendor/k8s.io/api/networking/v1beta1/types.go
@@ -32,17 +32,17 @@ import (
type Ingress struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Spec is the desired state of the Ingress.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Spec IngressSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"`
// Status is the current state of the Ingress.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status IngressStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
}
@@ -53,7 +53,7 @@ type Ingress struct {
type IngressList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/networking/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/networking/v1beta1/types_swagger_doc_generated.go
index 9e05b7f1bbfbf..4ae5e32d01413 100644
--- a/vendor/k8s.io/api/networking/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/networking/v1beta1/types_swagger_doc_generated.go
@@ -48,9 +48,9 @@ func (HTTPIngressRuleValue) SwaggerDoc() map[string]string {
var map_Ingress = map[string]string{
"": "Ingress is a collection of rules that allow inbound connections to reach the endpoints defined by a backend. An Ingress can be configured to give services externally-reachable urls, load balance traffic, terminate SSL, offer name based virtual hosting etc.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Spec is the desired state of the Ingress. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
- "status": "Status is the current state of the Ingress. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Spec is the desired state of the Ingress. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
+ "status": "Status is the current state of the Ingress. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (Ingress) SwaggerDoc() map[string]string {
@@ -69,7 +69,7 @@ func (IngressBackend) SwaggerDoc() map[string]string {
var map_IngressList = map[string]string{
"": "IngressList is a collection of Ingress.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of Ingress.",
}
diff --git a/vendor/k8s.io/api/node/v1alpha1/generated.proto b/vendor/k8s.io/api/node/v1alpha1/generated.proto
index 25d3515643335..ac05f839eba80 100644
--- a/vendor/k8s.io/api/node/v1alpha1/generated.proto
+++ b/vendor/k8s.io/api/node/v1alpha1/generated.proto
@@ -45,19 +45,19 @@ message Overhead {
// pod. For more details, see
// https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md
message RuntimeClass {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the RuntimeClass
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
optional RuntimeClassSpec spec = 2;
}
// RuntimeClassList is a list of RuntimeClass objects.
message RuntimeClassList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/node/v1alpha1/types.go b/vendor/k8s.io/api/node/v1alpha1/types.go
index 035aac4aa023c..b59767107c455 100644
--- a/vendor/k8s.io/api/node/v1alpha1/types.go
+++ b/vendor/k8s.io/api/node/v1alpha1/types.go
@@ -34,12 +34,12 @@ import (
// https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md
type RuntimeClass struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the RuntimeClass
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
Spec RuntimeClassSpec `json:"spec" protobuf:"bytes,2,name=spec"`
}
@@ -107,7 +107,7 @@ type Scheduling struct {
type RuntimeClassList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/node/v1alpha1/types_swagger_doc_generated.go b/vendor/k8s.io/api/node/v1alpha1/types_swagger_doc_generated.go
index 6868fec220d74..3900001723580 100644
--- a/vendor/k8s.io/api/node/v1alpha1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/node/v1alpha1/types_swagger_doc_generated.go
@@ -38,8 +38,8 @@ func (Overhead) SwaggerDoc() map[string]string {
var map_RuntimeClass = map[string]string{
"": "RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are (currently) manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "spec": "Specification of the RuntimeClass More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "spec": "Specification of the RuntimeClass More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
}
func (RuntimeClass) SwaggerDoc() map[string]string {
@@ -48,7 +48,7 @@ func (RuntimeClass) SwaggerDoc() map[string]string {
var map_RuntimeClassList = map[string]string{
"": "RuntimeClassList is a list of RuntimeClass objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is a list of schema objects.",
}
diff --git a/vendor/k8s.io/api/node/v1beta1/generated.proto b/vendor/k8s.io/api/node/v1beta1/generated.proto
index 07ff350e63674..49166798dad57 100644
--- a/vendor/k8s.io/api/node/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/node/v1beta1/generated.proto
@@ -45,7 +45,7 @@ message Overhead {
// pod. For more details, see
// https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md
message RuntimeClass {
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -79,7 +79,7 @@ message RuntimeClass {
// RuntimeClassList is a list of RuntimeClass objects.
message RuntimeClassList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/node/v1beta1/types.go b/vendor/k8s.io/api/node/v1beta1/types.go
index fbf9461b1c020..793a48f62b64f 100644
--- a/vendor/k8s.io/api/node/v1beta1/types.go
+++ b/vendor/k8s.io/api/node/v1beta1/types.go
@@ -34,7 +34,7 @@ import (
// https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md
type RuntimeClass struct {
metav1.TypeMeta `json:",inline"`
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -97,7 +97,7 @@ type Scheduling struct {
type RuntimeClassList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/node/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/node/v1beta1/types_swagger_doc_generated.go
index 9c16c9e7bebd8..681f73f23c832 100644
--- a/vendor/k8s.io/api/node/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/node/v1beta1/types_swagger_doc_generated.go
@@ -38,7 +38,7 @@ func (Overhead) SwaggerDoc() map[string]string {
var map_RuntimeClass = map[string]string{
"": "RuntimeClass defines a class of container runtime supported in the cluster. The RuntimeClass is used to determine which container runtime is used to run all containers in a pod. RuntimeClasses are (currently) manually defined by a user or cluster provisioner, and referenced in the PodSpec. The Kubelet is responsible for resolving the RuntimeClassName reference before running the pod. For more details, see https://git.k8s.io/enhancements/keps/sig-node/runtime-class.md",
- "metadata": "More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"handler": "Handler specifies the underlying runtime and configuration that the CRI implementation will use to handle pods of this class. The possible values are specific to the node & CRI configuration. It is assumed that all handlers are available on every node, and handlers of the same name are equivalent on every node. For example, a handler called \"runc\" might specify that the runc OCI runtime (using native Linux containers) will be used to run the containers in a pod. The Handler must conform to the DNS Label (RFC 1123) requirements, and is immutable.",
"overhead": "Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. For more details, see https://git.k8s.io/enhancements/keps/sig-node/20190226-pod-overhead.md This field is alpha-level as of Kubernetes v1.15, and is only honored by servers that enable the PodOverhead feature.",
"scheduling": "Scheduling holds the scheduling constraints to ensure that pods running with this RuntimeClass are scheduled to nodes that support it. If scheduling is nil, this RuntimeClass is assumed to be supported by all nodes.",
@@ -50,7 +50,7 @@ func (RuntimeClass) SwaggerDoc() map[string]string {
var map_RuntimeClassList = map[string]string{
"": "RuntimeClassList is a list of RuntimeClass objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is a list of schema objects.",
}
diff --git a/vendor/k8s.io/api/policy/v1beta1/generated.proto b/vendor/k8s.io/api/policy/v1beta1/generated.proto
index a1173a61c6f85..9679dafc51052 100644
--- a/vendor/k8s.io/api/policy/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/policy/v1beta1/generated.proto
@@ -151,7 +151,7 @@ message PodDisruptionBudgetSpec {
// PodDisruptionBudget. Status may trail the actual state of a system.
message PodDisruptionBudgetStatus {
// Most recent generation observed when updating this PDB status. PodDisruptionsAllowed and other
- // status informatio is valid only if observedGeneration equals to PDB's object generation.
+ // status information is valid only if observedGeneration equals to PDB's object generation.
// +optional
optional int64 observedGeneration = 1;
@@ -186,7 +186,7 @@ message PodDisruptionBudgetStatus {
// that will be applied to a pod and container.
message PodSecurityPolicy {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -198,7 +198,7 @@ message PodSecurityPolicy {
// PodSecurityPolicyList is a list of PodSecurityPolicy objects.
message PodSecurityPolicyList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/policy/v1beta1/types.go b/vendor/k8s.io/api/policy/v1beta1/types.go
index a59df9840d546..d8e417ab2635c 100644
--- a/vendor/k8s.io/api/policy/v1beta1/types.go
+++ b/vendor/k8s.io/api/policy/v1beta1/types.go
@@ -48,7 +48,7 @@ type PodDisruptionBudgetSpec struct {
// PodDisruptionBudget. Status may trail the actual state of a system.
type PodDisruptionBudgetStatus struct {
// Most recent generation observed when updating this PDB status. PodDisruptionsAllowed and other
- // status informatio is valid only if observedGeneration equals to PDB's object generation.
+ // status information is valid only if observedGeneration equals to PDB's object generation.
// +optional
ObservedGeneration int64 `json:"observedGeneration,omitempty" protobuf:"varint,1,opt,name=observedGeneration"`
@@ -134,7 +134,7 @@ type Eviction struct {
type PodSecurityPolicy struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -276,7 +276,7 @@ var AllowAllCapabilities v1.Capability = "*"
// FSType gives strong typing to different file systems that are used by volumes.
type FSType string
-var (
+const (
AzureFile FSType = "azureFile"
Flocker FSType = "flocker"
FlexVolume FSType = "flexVolume"
@@ -480,7 +480,7 @@ const AllowAllRuntimeClassNames = "*"
type PodSecurityPolicyList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/policy/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/policy/v1beta1/types_swagger_doc_generated.go
index eb2eec9333f1c..40a951c417cce 100644
--- a/vendor/k8s.io/api/policy/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/policy/v1beta1/types_swagger_doc_generated.go
@@ -126,7 +126,7 @@ func (PodDisruptionBudgetSpec) SwaggerDoc() map[string]string {
var map_PodDisruptionBudgetStatus = map[string]string{
"": "PodDisruptionBudgetStatus represents information about the status of a PodDisruptionBudget. Status may trail the actual state of a system.",
- "observedGeneration": "Most recent generation observed when updating this PDB status. PodDisruptionsAllowed and other status informatio is valid only if observedGeneration equals to PDB's object generation.",
+ "observedGeneration": "Most recent generation observed when updating this PDB status. PodDisruptionsAllowed and other status information is valid only if observedGeneration equals to PDB's object generation.",
"disruptedPods": "DisruptedPods contains information about pods whose eviction was processed by the API server eviction subresource handler but has not yet been observed by the PodDisruptionBudget controller. A pod will be in this map from the time when the API server processed the eviction request to the time when the pod is seen by PDB controller as having been marked for deletion (or after a timeout). The key in the map is the name of the pod and the value is the time when the API server processed the eviction request. If the deletion didn't occur and a pod is still there it will be removed from the list automatically by PodDisruptionBudget controller after some time. If everything goes smooth this map should be empty for the most of the time. Large number of entries in the map may indicate problems with pod deletions.",
"disruptionsAllowed": "Number of pod disruptions that are currently allowed.",
"currentHealthy": "current number of healthy pods",
@@ -140,7 +140,7 @@ func (PodDisruptionBudgetStatus) SwaggerDoc() map[string]string {
var map_PodSecurityPolicy = map[string]string{
"": "PodSecurityPolicy governs the ability to make requests that affect the Security Context that will be applied to a pod and container.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "spec defines the policy enforced.",
}
@@ -150,7 +150,7 @@ func (PodSecurityPolicy) SwaggerDoc() map[string]string {
var map_PodSecurityPolicyList = map[string]string{
"": "PodSecurityPolicyList is a list of PodSecurityPolicy objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is a list of schema objects.",
}
diff --git a/vendor/k8s.io/api/rbac/v1alpha1/generated.proto b/vendor/k8s.io/api/rbac/v1alpha1/generated.proto
index b16715bc492d8..5c50a87eed4d8 100644
--- a/vendor/k8s.io/api/rbac/v1alpha1/generated.proto
+++ b/vendor/k8s.io/api/rbac/v1alpha1/generated.proto
@@ -37,6 +37,7 @@ message AggregationRule {
}
// ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRole, and will no longer be served in v1.20.
message ClusterRole {
// Standard object's metadata.
// +optional
@@ -55,6 +56,7 @@ message ClusterRole {
// ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace,
// and adds who information via Subject.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBinding, and will no longer be served in v1.20.
message ClusterRoleBinding {
// Standard object's metadata.
// +optional
@@ -69,7 +71,8 @@ message ClusterRoleBinding {
optional RoleRef roleRef = 3;
}
-// ClusterRoleBindingList is a collection of ClusterRoleBindings
+// ClusterRoleBindingList is a collection of ClusterRoleBindings.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBindings, and will no longer be served in v1.20.
message ClusterRoleBindingList {
// Standard object's metadata.
// +optional
@@ -79,7 +82,8 @@ message ClusterRoleBindingList {
repeated ClusterRoleBinding items = 2;
}
-// ClusterRoleList is a collection of ClusterRoles
+// ClusterRoleList is a collection of ClusterRoles.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoles, and will no longer be served in v1.20.
message ClusterRoleList {
// Standard object's metadata.
// +optional
@@ -117,6 +121,7 @@ message PolicyRule {
}
// Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and will no longer be served in v1.20.
message Role {
// Standard object's metadata.
// +optional
@@ -130,6 +135,7 @@ message Role {
// RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace.
// It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given
// namespace only have effect in that namespace.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBinding, and will no longer be served in v1.20.
message RoleBinding {
// Standard object's metadata.
// +optional
@@ -145,6 +151,7 @@ message RoleBinding {
}
// RoleBindingList is a collection of RoleBindings
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBindingList, and will no longer be served in v1.20.
message RoleBindingList {
// Standard object's metadata.
// +optional
@@ -154,7 +161,8 @@ message RoleBindingList {
repeated RoleBinding items = 2;
}
-// RoleList is a collection of Roles
+// RoleList is a collection of Roles.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleList, and will no longer be served in v1.20.
message RoleList {
// Standard object's metadata.
// +optional
diff --git a/vendor/k8s.io/api/rbac/v1alpha1/types.go b/vendor/k8s.io/api/rbac/v1alpha1/types.go
index 521cce4f31d3f..a5d3e38f65787 100644
--- a/vendor/k8s.io/api/rbac/v1alpha1/types.go
+++ b/vendor/k8s.io/api/rbac/v1alpha1/types.go
@@ -103,6 +103,7 @@ type RoleRef struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and will no longer be served in v1.20.
type Role struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -120,6 +121,7 @@ type Role struct {
// RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace.
// It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given
// namespace only have effect in that namespace.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBinding, and will no longer be served in v1.20.
type RoleBinding struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -138,6 +140,7 @@ type RoleBinding struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// RoleBindingList is a collection of RoleBindings
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBindingList, and will no longer be served in v1.20.
type RoleBindingList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -150,7 +153,8 @@ type RoleBindingList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// RoleList is a collection of Roles
+// RoleList is a collection of Roles.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleList, and will no longer be served in v1.20.
type RoleList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -166,6 +170,7 @@ type RoleList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRole, and will no longer be served in v1.20.
type ClusterRole struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -197,6 +202,7 @@ type AggregationRule struct {
// ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace,
// and adds who information via Subject.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBinding, and will no longer be served in v1.20.
type ClusterRoleBinding struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -214,7 +220,8 @@ type ClusterRoleBinding struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// ClusterRoleBindingList is a collection of ClusterRoleBindings
+// ClusterRoleBindingList is a collection of ClusterRoleBindings.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBindings, and will no longer be served in v1.20.
type ClusterRoleBindingList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -227,7 +234,8 @@ type ClusterRoleBindingList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// ClusterRoleList is a collection of ClusterRoles
+// ClusterRoleList is a collection of ClusterRoles.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoles, and will no longer be served in v1.20.
type ClusterRoleList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
diff --git a/vendor/k8s.io/api/rbac/v1alpha1/types_swagger_doc_generated.go b/vendor/k8s.io/api/rbac/v1alpha1/types_swagger_doc_generated.go
index d7b194ae40740..8238de21d69aa 100644
--- a/vendor/k8s.io/api/rbac/v1alpha1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/rbac/v1alpha1/types_swagger_doc_generated.go
@@ -37,7 +37,7 @@ func (AggregationRule) SwaggerDoc() map[string]string {
}
var map_ClusterRole = map[string]string{
- "": "ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.",
+ "": "ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRole, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"rules": "Rules holds all the PolicyRules for this ClusterRole",
"aggregationRule": "AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller.",
@@ -48,7 +48,7 @@ func (ClusterRole) SwaggerDoc() map[string]string {
}
var map_ClusterRoleBinding = map[string]string{
- "": "ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject.",
+ "": "ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBinding, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"subjects": "Subjects holds references to the objects the role applies to.",
"roleRef": "RoleRef can only reference a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error.",
@@ -59,7 +59,7 @@ func (ClusterRoleBinding) SwaggerDoc() map[string]string {
}
var map_ClusterRoleBindingList = map[string]string{
- "": "ClusterRoleBindingList is a collection of ClusterRoleBindings",
+ "": "ClusterRoleBindingList is a collection of ClusterRoleBindings. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBindings, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of ClusterRoleBindings",
}
@@ -69,7 +69,7 @@ func (ClusterRoleBindingList) SwaggerDoc() map[string]string {
}
var map_ClusterRoleList = map[string]string{
- "": "ClusterRoleList is a collection of ClusterRoles",
+ "": "ClusterRoleList is a collection of ClusterRoles. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoles, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of ClusterRoles",
}
@@ -92,7 +92,7 @@ func (PolicyRule) SwaggerDoc() map[string]string {
}
var map_Role = map[string]string{
- "": "Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.",
+ "": "Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"rules": "Rules holds all the PolicyRules for this Role",
}
@@ -102,7 +102,7 @@ func (Role) SwaggerDoc() map[string]string {
}
var map_RoleBinding = map[string]string{
- "": "RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace.",
+ "": "RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBinding, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"subjects": "Subjects holds references to the objects the role applies to.",
"roleRef": "RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error.",
@@ -113,7 +113,7 @@ func (RoleBinding) SwaggerDoc() map[string]string {
}
var map_RoleBindingList = map[string]string{
- "": "RoleBindingList is a collection of RoleBindings",
+ "": "RoleBindingList is a collection of RoleBindings Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBindingList, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of RoleBindings",
}
@@ -123,7 +123,7 @@ func (RoleBindingList) SwaggerDoc() map[string]string {
}
var map_RoleList = map[string]string{
- "": "RoleList is a collection of Roles",
+ "": "RoleList is a collection of Roles. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleList, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of Roles",
}
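
The deprecation notes added throughout this hunk point users at rbac.authorization.k8s.io/v1. As a migration sketch (example names, not taken from the diff), the v1 equivalent of a simple ClusterRole looks like this:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	cr := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader"}, // example name
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""}, // "" is the core API group
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}
	fmt.Println(cr.Name)
}
```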
diff --git a/vendor/k8s.io/api/rbac/v1beta1/generated.proto b/vendor/k8s.io/api/rbac/v1beta1/generated.proto
index 07bf735a06bfc..87e0dbdfdd348 100644
--- a/vendor/k8s.io/api/rbac/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/rbac/v1beta1/generated.proto
@@ -37,6 +37,7 @@ message AggregationRule {
}
// ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRole, and will no longer be served in v1.20.
message ClusterRole {
// Standard object's metadata.
// +optional
@@ -55,6 +56,7 @@ message ClusterRole {
// ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace,
// and adds who information via Subject.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBinding, and will no longer be served in v1.20.
message ClusterRoleBinding {
// Standard object's metadata.
// +optional
@@ -69,7 +71,8 @@ message ClusterRoleBinding {
optional RoleRef roleRef = 3;
}
-// ClusterRoleBindingList is a collection of ClusterRoleBindings
+// ClusterRoleBindingList is a collection of ClusterRoleBindings.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBindingList, and will no longer be served in v1.20.
message ClusterRoleBindingList {
// Standard object's metadata.
// +optional
@@ -79,7 +82,8 @@ message ClusterRoleBindingList {
repeated ClusterRoleBinding items = 2;
}
-// ClusterRoleList is a collection of ClusterRoles
+// ClusterRoleList is a collection of ClusterRoles.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoles, and will no longer be served in v1.20.
message ClusterRoleList {
// Standard object's metadata.
// +optional
@@ -117,6 +121,7 @@ message PolicyRule {
}
// Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and will no longer be served in v1.20.
message Role {
// Standard object's metadata.
// +optional
@@ -130,6 +135,7 @@ message Role {
// RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace.
// It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given
// namespace only have effect in that namespace.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBinding, and will no longer be served in v1.20.
message RoleBinding {
// Standard object's metadata.
// +optional
@@ -145,6 +151,7 @@ message RoleBinding {
}
// RoleBindingList is a collection of RoleBindings
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBindingList, and will no longer be served in v1.20.
message RoleBindingList {
// Standard object's metadata.
// +optional
@@ -155,6 +162,7 @@ message RoleBindingList {
}
// RoleList is a collection of Roles
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleList, and will no longer be served in v1.20.
message RoleList {
// Standard object's metadata.
// +optional
diff --git a/vendor/k8s.io/api/rbac/v1beta1/types.go b/vendor/k8s.io/api/rbac/v1beta1/types.go
index 35843c90d182d..74c70936a43b1 100644
--- a/vendor/k8s.io/api/rbac/v1beta1/types.go
+++ b/vendor/k8s.io/api/rbac/v1beta1/types.go
@@ -102,6 +102,7 @@ type RoleRef struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and will no longer be served in v1.20.
type Role struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -119,6 +120,7 @@ type Role struct {
// RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace.
// It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given
// namespace only have effect in that namespace.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBinding, and will no longer be served in v1.20.
type RoleBinding struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -137,6 +139,7 @@ type RoleBinding struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// RoleBindingList is a collection of RoleBindings
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBindingList, and will no longer be served in v1.20.
type RoleBindingList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -150,6 +153,7 @@ type RoleBindingList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// RoleList is a collection of Roles
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleList, and will no longer be served in v1.20.
type RoleList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -165,6 +169,7 @@ type RoleList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRole, and will no longer be served in v1.20.
type ClusterRole struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -195,6 +200,7 @@ type AggregationRule struct {
// ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace,
// and adds who information via Subject.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBinding, and will no longer be served in v1.20.
type ClusterRoleBinding struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -212,7 +218,8 @@ type ClusterRoleBinding struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// ClusterRoleBindingList is a collection of ClusterRoleBindings
+// ClusterRoleBindingList is a collection of ClusterRoleBindings.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBindingList, and will no longer be served in v1.20.
type ClusterRoleBindingList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
@@ -225,7 +232,8 @@ type ClusterRoleBindingList struct {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// ClusterRoleList is a collection of ClusterRoles
+// ClusterRoleList is a collection of ClusterRoles.
+// Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoles, and will no longer be served in v1.20.
type ClusterRoleList struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
diff --git a/vendor/k8s.io/api/rbac/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/rbac/v1beta1/types_swagger_doc_generated.go
index c80327593d789..8e9d7ace79555 100644
--- a/vendor/k8s.io/api/rbac/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/rbac/v1beta1/types_swagger_doc_generated.go
@@ -37,7 +37,7 @@ func (AggregationRule) SwaggerDoc() map[string]string {
}
var map_ClusterRole = map[string]string{
- "": "ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding.",
+ "": "ClusterRole is a cluster level, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding or ClusterRoleBinding. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRole, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"rules": "Rules holds all the PolicyRules for this ClusterRole",
"aggregationRule": "AggregationRule is an optional field that describes how to build the Rules for this ClusterRole. If AggregationRule is set, then the Rules are controller managed and direct changes to Rules will be stomped by the controller.",
@@ -48,7 +48,7 @@ func (ClusterRole) SwaggerDoc() map[string]string {
}
var map_ClusterRoleBinding = map[string]string{
- "": "ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject.",
+ "": "ClusterRoleBinding references a ClusterRole, but not contain it. It can reference a ClusterRole in the global namespace, and adds who information via Subject. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBinding, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"subjects": "Subjects holds references to the objects the role applies to.",
"roleRef": "RoleRef can only reference a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error.",
@@ -59,7 +59,7 @@ func (ClusterRoleBinding) SwaggerDoc() map[string]string {
}
var map_ClusterRoleBindingList = map[string]string{
- "": "ClusterRoleBindingList is a collection of ClusterRoleBindings",
+ "": "ClusterRoleBindingList is a collection of ClusterRoleBindings. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoleBindingList, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of ClusterRoleBindings",
}
@@ -69,7 +69,7 @@ func (ClusterRoleBindingList) SwaggerDoc() map[string]string {
}
var map_ClusterRoleList = map[string]string{
- "": "ClusterRoleList is a collection of ClusterRoles",
+ "": "ClusterRoleList is a collection of ClusterRoles. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 ClusterRoles, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of ClusterRoles",
}
@@ -92,7 +92,7 @@ func (PolicyRule) SwaggerDoc() map[string]string {
}
var map_Role = map[string]string{
- "": "Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding.",
+ "": "Role is a namespaced, logical grouping of PolicyRules that can be referenced as a unit by a RoleBinding. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 Role, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"rules": "Rules holds all the PolicyRules for this Role",
}
@@ -102,7 +102,7 @@ func (Role) SwaggerDoc() map[string]string {
}
var map_RoleBinding = map[string]string{
- "": "RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace.",
+ "": "RoleBinding references a role, but does not contain it. It can reference a Role in the same namespace or a ClusterRole in the global namespace. It adds who information via Subjects and namespace information by which namespace it exists in. RoleBindings in a given namespace only have effect in that namespace. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBinding, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"subjects": "Subjects holds references to the objects the role applies to.",
"roleRef": "RoleRef can reference a Role in the current namespace or a ClusterRole in the global namespace. If the RoleRef cannot be resolved, the Authorizer must return an error.",
@@ -113,7 +113,7 @@ func (RoleBinding) SwaggerDoc() map[string]string {
}
var map_RoleBindingList = map[string]string{
- "": "RoleBindingList is a collection of RoleBindings",
+	"":         "RoleBindingList is a collection of RoleBindings. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleBindingList, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of RoleBindings",
}
@@ -123,7 +123,7 @@ func (RoleBindingList) SwaggerDoc() map[string]string {
}
var map_RoleList = map[string]string{
- "": "RoleList is a collection of Roles",
+	"":         "RoleList is a collection of Roles. Deprecated in v1.17 in favor of rbac.authorization.k8s.io/v1 RoleList, and will no longer be served in v1.20.",
"metadata": "Standard object's metadata.",
"items": "Items is a list of Roles",
}
diff --git a/vendor/k8s.io/api/scheduling/v1/generated.proto b/vendor/k8s.io/api/scheduling/v1/generated.proto
index ada9eaf85b038..82f6e0a21a54b 100644
--- a/vendor/k8s.io/api/scheduling/v1/generated.proto
+++ b/vendor/k8s.io/api/scheduling/v1/generated.proto
@@ -33,7 +33,7 @@ option go_package = "v1";
// integer value. The value can be any valid integer.
message PriorityClass {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -65,7 +65,7 @@ message PriorityClass {
// PriorityClassList is a collection of priority classes.
message PriorityClassList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/scheduling/v1/types.go b/vendor/k8s.io/api/scheduling/v1/types.go
index e91842ec4da26..087ee10d83390 100644
--- a/vendor/k8s.io/api/scheduling/v1/types.go
+++ b/vendor/k8s.io/api/scheduling/v1/types.go
@@ -30,7 +30,7 @@ import (
type PriorityClass struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -65,7 +65,7 @@ type PriorityClass struct {
type PriorityClassList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/scheduling/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/scheduling/v1/types_swagger_doc_generated.go
index 853f255d52610..4cfb9d3e35343 100644
--- a/vendor/k8s.io/api/scheduling/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/scheduling/v1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_PriorityClass = map[string]string{
"": "PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"value": "The value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec.",
"globalDefault": "globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as `globalDefault`. However, if more than one PriorityClasses exists with their `globalDefault` field set to true, the smallest value of such global default PriorityClasses will be used as the default priority.",
"description": "description is an arbitrary string that usually provides guidelines on when this priority class should be used.",
@@ -42,7 +42,7 @@ func (PriorityClass) SwaggerDoc() map[string]string {
var map_PriorityClassList = map[string]string{
"": "PriorityClassList is a collection of priority classes.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of PriorityClasses",
}
diff --git a/vendor/k8s.io/api/scheduling/v1alpha1/generated.proto b/vendor/k8s.io/api/scheduling/v1alpha1/generated.proto
index 584a2918a263c..682fb873636ce 100644
--- a/vendor/k8s.io/api/scheduling/v1alpha1/generated.proto
+++ b/vendor/k8s.io/api/scheduling/v1alpha1/generated.proto
@@ -34,7 +34,7 @@ option go_package = "v1alpha1";
// integer value. The value can be any valid integer.
message PriorityClass {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -66,7 +66,7 @@ message PriorityClass {
// PriorityClassList is a collection of priority classes.
message PriorityClassList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/scheduling/v1alpha1/types.go b/vendor/k8s.io/api/scheduling/v1alpha1/types.go
index c1a09bce8eb09..86a2c5130e997 100644
--- a/vendor/k8s.io/api/scheduling/v1alpha1/types.go
+++ b/vendor/k8s.io/api/scheduling/v1alpha1/types.go
@@ -31,7 +31,7 @@ import (
type PriorityClass struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -66,7 +66,7 @@ type PriorityClass struct {
type PriorityClassList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/scheduling/v1alpha1/types_swagger_doc_generated.go b/vendor/k8s.io/api/scheduling/v1alpha1/types_swagger_doc_generated.go
index f9880922a13df..63a9a353cbe44 100644
--- a/vendor/k8s.io/api/scheduling/v1alpha1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/scheduling/v1alpha1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1alpha1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_PriorityClass = map[string]string{
"": "DEPRECATED - This group version of PriorityClass is deprecated by scheduling.k8s.io/v1/PriorityClass. PriorityClass defines mapping from a priority class name to the priority integer value. The value can be any valid integer.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"value": "The value of this priority class. This is the actual priority that pods receive when they have the name of this class in their pod spec.",
"globalDefault": "globalDefault specifies whether this PriorityClass should be considered as the default priority for pods that do not have any priority class. Only one PriorityClass can be marked as `globalDefault`. However, if more than one PriorityClasses exists with their `globalDefault` field set to true, the smallest value of such global default PriorityClasses will be used as the default priority.",
"description": "description is an arbitrary string that usually provides guidelines on when this priority class should be used.",
@@ -42,7 +42,7 @@ func (PriorityClass) SwaggerDoc() map[string]string {
var map_PriorityClassList = map[string]string{
"": "PriorityClassList is a collection of priority classes.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of PriorityClasses",
}
diff --git a/vendor/k8s.io/api/storage/v1/generated.pb.go b/vendor/k8s.io/api/storage/v1/generated.pb.go
index 782f50693f3ff..3d09ee7e44974 100644
--- a/vendor/k8s.io/api/storage/v1/generated.pb.go
+++ b/vendor/k8s.io/api/storage/v1/generated.pb.go
@@ -46,10 +46,122 @@ var _ = math.Inf
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
+func (m *CSINode) Reset() { *m = CSINode{} }
+func (*CSINode) ProtoMessage() {}
+func (*CSINode) Descriptor() ([]byte, []int) {
+ return fileDescriptor_3b530c1983504d8d, []int{0}
+}
+func (m *CSINode) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CSINode) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+}
+func (m *CSINode) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CSINode.Merge(m, src)
+}
+func (m *CSINode) XXX_Size() int {
+ return m.Size()
+}
+func (m *CSINode) XXX_DiscardUnknown() {
+ xxx_messageInfo_CSINode.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CSINode proto.InternalMessageInfo
+
+func (m *CSINodeDriver) Reset() { *m = CSINodeDriver{} }
+func (*CSINodeDriver) ProtoMessage() {}
+func (*CSINodeDriver) Descriptor() ([]byte, []int) {
+ return fileDescriptor_3b530c1983504d8d, []int{1}
+}
+func (m *CSINodeDriver) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CSINodeDriver) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+}
+func (m *CSINodeDriver) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CSINodeDriver.Merge(m, src)
+}
+func (m *CSINodeDriver) XXX_Size() int {
+ return m.Size()
+}
+func (m *CSINodeDriver) XXX_DiscardUnknown() {
+ xxx_messageInfo_CSINodeDriver.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CSINodeDriver proto.InternalMessageInfo
+
+func (m *CSINodeList) Reset() { *m = CSINodeList{} }
+func (*CSINodeList) ProtoMessage() {}
+func (*CSINodeList) Descriptor() ([]byte, []int) {
+ return fileDescriptor_3b530c1983504d8d, []int{2}
+}
+func (m *CSINodeList) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CSINodeList) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+}
+func (m *CSINodeList) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CSINodeList.Merge(m, src)
+}
+func (m *CSINodeList) XXX_Size() int {
+ return m.Size()
+}
+func (m *CSINodeList) XXX_DiscardUnknown() {
+ xxx_messageInfo_CSINodeList.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CSINodeList proto.InternalMessageInfo
+
+func (m *CSINodeSpec) Reset() { *m = CSINodeSpec{} }
+func (*CSINodeSpec) ProtoMessage() {}
+func (*CSINodeSpec) Descriptor() ([]byte, []int) {
+ return fileDescriptor_3b530c1983504d8d, []int{3}
+}
+func (m *CSINodeSpec) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CSINodeSpec) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+}
+func (m *CSINodeSpec) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CSINodeSpec.Merge(m, src)
+}
+func (m *CSINodeSpec) XXX_Size() int {
+ return m.Size()
+}
+func (m *CSINodeSpec) XXX_DiscardUnknown() {
+ xxx_messageInfo_CSINodeSpec.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CSINodeSpec proto.InternalMessageInfo
+
func (m *StorageClass) Reset() { *m = StorageClass{} }
func (*StorageClass) ProtoMessage() {}
func (*StorageClass) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{0}
+ return fileDescriptor_3b530c1983504d8d, []int{4}
}
func (m *StorageClass) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -77,7 +189,7 @@ var xxx_messageInfo_StorageClass proto.InternalMessageInfo
func (m *StorageClassList) Reset() { *m = StorageClassList{} }
func (*StorageClassList) ProtoMessage() {}
func (*StorageClassList) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{1}
+ return fileDescriptor_3b530c1983504d8d, []int{5}
}
func (m *StorageClassList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -105,7 +217,7 @@ var xxx_messageInfo_StorageClassList proto.InternalMessageInfo
func (m *VolumeAttachment) Reset() { *m = VolumeAttachment{} }
func (*VolumeAttachment) ProtoMessage() {}
func (*VolumeAttachment) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{2}
+ return fileDescriptor_3b530c1983504d8d, []int{6}
}
func (m *VolumeAttachment) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -133,7 +245,7 @@ var xxx_messageInfo_VolumeAttachment proto.InternalMessageInfo
func (m *VolumeAttachmentList) Reset() { *m = VolumeAttachmentList{} }
func (*VolumeAttachmentList) ProtoMessage() {}
func (*VolumeAttachmentList) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{3}
+ return fileDescriptor_3b530c1983504d8d, []int{7}
}
func (m *VolumeAttachmentList) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -161,7 +273,7 @@ var xxx_messageInfo_VolumeAttachmentList proto.InternalMessageInfo
func (m *VolumeAttachmentSource) Reset() { *m = VolumeAttachmentSource{} }
func (*VolumeAttachmentSource) ProtoMessage() {}
func (*VolumeAttachmentSource) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{4}
+ return fileDescriptor_3b530c1983504d8d, []int{8}
}
func (m *VolumeAttachmentSource) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -189,7 +301,7 @@ var xxx_messageInfo_VolumeAttachmentSource proto.InternalMessageInfo
func (m *VolumeAttachmentSpec) Reset() { *m = VolumeAttachmentSpec{} }
func (*VolumeAttachmentSpec) ProtoMessage() {}
func (*VolumeAttachmentSpec) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{5}
+ return fileDescriptor_3b530c1983504d8d, []int{9}
}
func (m *VolumeAttachmentSpec) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -217,7 +329,7 @@ var xxx_messageInfo_VolumeAttachmentSpec proto.InternalMessageInfo
func (m *VolumeAttachmentStatus) Reset() { *m = VolumeAttachmentStatus{} }
func (*VolumeAttachmentStatus) ProtoMessage() {}
func (*VolumeAttachmentStatus) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{6}
+ return fileDescriptor_3b530c1983504d8d, []int{10}
}
func (m *VolumeAttachmentStatus) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -245,7 +357,7 @@ var xxx_messageInfo_VolumeAttachmentStatus proto.InternalMessageInfo
func (m *VolumeError) Reset() { *m = VolumeError{} }
func (*VolumeError) ProtoMessage() {}
func (*VolumeError) Descriptor() ([]byte, []int) {
- return fileDescriptor_3b530c1983504d8d, []int{7}
+ return fileDescriptor_3b530c1983504d8d, []int{11}
}
func (m *VolumeError) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -270,7 +382,39 @@ func (m *VolumeError) XXX_DiscardUnknown() {
var xxx_messageInfo_VolumeError proto.InternalMessageInfo
+func (m *VolumeNodeResources) Reset() { *m = VolumeNodeResources{} }
+func (*VolumeNodeResources) ProtoMessage() {}
+func (*VolumeNodeResources) Descriptor() ([]byte, []int) {
+ return fileDescriptor_3b530c1983504d8d, []int{12}
+}
+func (m *VolumeNodeResources) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *VolumeNodeResources) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+}
+func (m *VolumeNodeResources) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_VolumeNodeResources.Merge(m, src)
+}
+func (m *VolumeNodeResources) XXX_Size() int {
+ return m.Size()
+}
+func (m *VolumeNodeResources) XXX_DiscardUnknown() {
+ xxx_messageInfo_VolumeNodeResources.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_VolumeNodeResources proto.InternalMessageInfo
+
func init() {
+ proto.RegisterType((*CSINode)(nil), "k8s.io.api.storage.v1.CSINode")
+ proto.RegisterType((*CSINodeDriver)(nil), "k8s.io.api.storage.v1.CSINodeDriver")
+ proto.RegisterType((*CSINodeList)(nil), "k8s.io.api.storage.v1.CSINodeList")
+ proto.RegisterType((*CSINodeSpec)(nil), "k8s.io.api.storage.v1.CSINodeSpec")
proto.RegisterType((*StorageClass)(nil), "k8s.io.api.storage.v1.StorageClass")
proto.RegisterMapType((map[string]string)(nil), "k8s.io.api.storage.v1.StorageClass.ParametersEntry")
proto.RegisterType((*StorageClassList)(nil), "k8s.io.api.storage.v1.StorageClassList")
@@ -281,6 +425,7 @@ func init() {
proto.RegisterType((*VolumeAttachmentStatus)(nil), "k8s.io.api.storage.v1.VolumeAttachmentStatus")
proto.RegisterMapType((map[string]string)(nil), "k8s.io.api.storage.v1.VolumeAttachmentStatus.AttachmentMetadataEntry")
proto.RegisterType((*VolumeError)(nil), "k8s.io.api.storage.v1.VolumeError")
+ proto.RegisterType((*VolumeNodeResources)(nil), "k8s.io.api.storage.v1.VolumeNodeResources")
}
func init() {
@@ -288,71 +433,264 @@ func init() {
}
var fileDescriptor_3b530c1983504d8d = []byte{
- // 1018 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x56, 0x3d, 0x6f, 0x23, 0xc5,
- 0x1b, 0xcf, 0xc6, 0x79, 0x71, 0xc6, 0xc9, 0xff, 0x9c, 0xf9, 0x07, 0x30, 0x2e, 0xec, 0xc8, 0x14,
- 0x98, 0x83, 0xdb, 0xbd, 0x84, 0x03, 0x9d, 0x90, 0x40, 0xf2, 0x82, 0x25, 0x4e, 0x8a, 0xef, 0xa2,
- 0x49, 0x38, 0x21, 0x44, 0xc1, 0x64, 0xf7, 0x61, 0xb3, 0x67, 0xef, 0xce, 0x32, 0x33, 0x36, 0xa4,
- 0xa3, 0xa2, 0x43, 0x82, 0x96, 0x8f, 0x42, 0x49, 0x15, 0xba, 0x13, 0xd5, 0x55, 0x16, 0x59, 0x6a,
- 0xbe, 0x40, 0x2a, 0x34, 0xb3, 0x13, 0x7b, 0x63, 0x6f, 0xc0, 0x69, 0xae, 0xf3, 0xf3, 0xf2, 0xfb,
- 0x3d, 0xef, 0xb3, 0x46, 0x1f, 0xf5, 0x1f, 0x0a, 0x3b, 0x64, 0x4e, 0x7f, 0x78, 0x02, 0x3c, 0x06,
- 0x09, 0xc2, 0x19, 0x41, 0xec, 0x33, 0xee, 0x18, 0x03, 0x4d, 0x42, 0x47, 0x48, 0xc6, 0x69, 0x00,
- 0xce, 0x68, 0xcf, 0x09, 0x20, 0x06, 0x4e, 0x25, 0xf8, 0x76, 0xc2, 0x99, 0x64, 0xf8, 0x95, 0xcc,
- 0xcd, 0xa6, 0x49, 0x68, 0x1b, 0x37, 0x7b, 0xb4, 0x57, 0xbf, 0x17, 0x84, 0xf2, 0x74, 0x78, 0x62,
- 0x7b, 0x2c, 0x72, 0x02, 0x16, 0x30, 0x47, 0x7b, 0x9f, 0x0c, 0xbf, 0xd6, 0x92, 0x16, 0xf4, 0xaf,
- 0x8c, 0xa5, 0xde, 0xca, 0x05, 0xf3, 0x18, 0x2f, 0x8a, 0x54, 0x7f, 0x30, 0xf5, 0x89, 0xa8, 0x77,
- 0x1a, 0xc6, 0xc0, 0xcf, 0x9c, 0xa4, 0x1f, 0x28, 0x85, 0x70, 0x22, 0x90, 0xb4, 0x08, 0xe5, 0xdc,
- 0x84, 0xe2, 0xc3, 0x58, 0x86, 0x11, 0xcc, 0x01, 0xde, 0xff, 0x2f, 0x80, 0xf0, 0x4e, 0x21, 0xa2,
- 0xb3, 0xb8, 0xd6, 0x8f, 0x6b, 0x68, 0xf3, 0x28, 0x6b, 0xc0, 0xc7, 0x03, 0x2a, 0x04, 0xfe, 0x0a,
- 0x95, 0x55, 0x52, 0x3e, 0x95, 0xb4, 0x66, 0xed, 0x5a, 0xed, 0xca, 0xfe, 0x7d, 0x7b, 0xda, 0xac,
- 0x09, 0xb7, 0x9d, 0xf4, 0x03, 0xa5, 0x10, 0xb6, 0xf2, 0xb6, 0x47, 0x7b, 0xf6, 0x93, 0x93, 0x67,
- 0xe0, 0xc9, 0x1e, 0x48, 0xea, 0xe2, 0xf3, 0x71, 0x73, 0x29, 0x1d, 0x37, 0xd1, 0x54, 0x47, 0x26,
- 0xac, 0xf8, 0x3d, 0x54, 0x49, 0x38, 0x1b, 0x85, 0x22, 0x64, 0x31, 0xf0, 0xda, 0xf2, 0xae, 0xd5,
- 0xde, 0x70, 0xff, 0x6f, 0x20, 0x95, 0xc3, 0xa9, 0x89, 0xe4, 0xfd, 0x70, 0x80, 0x50, 0x42, 0x39,
- 0x8d, 0x40, 0x02, 0x17, 0xb5, 0xd2, 0x6e, 0xa9, 0x5d, 0xd9, 0x7f, 0xd7, 0x2e, 0x9c, 0xa3, 0x9d,
- 0xaf, 0xc8, 0x3e, 0x9c, 0xa0, 0xba, 0xb1, 0xe4, 0x67, 0xd3, 0xec, 0xa6, 0x06, 0x92, 0xa3, 0xc6,
- 0x7d, 0xb4, 0xc5, 0xc1, 0x1b, 0xd0, 0x30, 0x3a, 0x64, 0x83, 0xd0, 0x3b, 0xab, 0xad, 0xe8, 0x0c,
- 0xbb, 0xe9, 0xb8, 0xb9, 0x45, 0xf2, 0x86, 0xcb, 0x71, 0xf3, 0xfe, 0xfc, 0x06, 0xd8, 0x87, 0xc0,
- 0x45, 0x28, 0x24, 0xc4, 0xf2, 0x29, 0x1b, 0x0c, 0x23, 0xb8, 0x86, 0x21, 0xd7, 0xb9, 0xf1, 0x03,
- 0xb4, 0x19, 0xb1, 0x61, 0x2c, 0x9f, 0x24, 0x32, 0x64, 0xb1, 0xa8, 0xad, 0xee, 0x96, 0xda, 0x1b,
- 0x6e, 0x35, 0x1d, 0x37, 0x37, 0x7b, 0x39, 0x3d, 0xb9, 0xe6, 0x85, 0x0f, 0xd0, 0x0e, 0x1d, 0x0c,
- 0xd8, 0xb7, 0x59, 0x80, 0xee, 0x77, 0x09, 0x8d, 0x55, 0x97, 0x6a, 0x6b, 0xbb, 0x56, 0xbb, 0xec,
- 0xd6, 0xd2, 0x71, 0x73, 0xa7, 0x53, 0x60, 0x27, 0x85, 0x28, 0xfc, 0x39, 0xda, 0x1e, 0x69, 0x95,
- 0x1b, 0xc6, 0x7e, 0x18, 0x07, 0x3d, 0xe6, 0x43, 0x6d, 0x5d, 0x17, 0x7d, 0x37, 0x1d, 0x37, 0xb7,
- 0x9f, 0xce, 0x1a, 0x2f, 0x8b, 0x94, 0x64, 0x9e, 0x04, 0x7f, 0x83, 0xb6, 0x75, 0x44, 0xf0, 0x8f,
- 0x59, 0xc2, 0x06, 0x2c, 0x08, 0x41, 0xd4, 0xca, 0x7a, 0x74, 0xed, 0xfc, 0xe8, 0x54, 0xeb, 0xd4,
- 0xdc, 0x8c, 0xd7, 0xd9, 0x11, 0x0c, 0xc0, 0x93, 0x8c, 0x1f, 0x03, 0x8f, 0xdc, 0xd7, 0xcd, 0xbc,
- 0xb6, 0x3b, 0xb3, 0x54, 0x64, 0x9e, 0xbd, 0xfe, 0x21, 0xba, 0x33, 0x33, 0x70, 0x5c, 0x45, 0xa5,
- 0x3e, 0x9c, 0xe9, 0x6d, 0xde, 0x20, 0xea, 0x27, 0xde, 0x41, 0xab, 0x23, 0x3a, 0x18, 0x42, 0xb6,
- 0x7c, 0x24, 0x13, 0x3e, 0x58, 0x7e, 0x68, 0xb5, 0x7e, 0xb5, 0x50, 0x35, 0xbf, 0x3d, 0x07, 0xa1,
- 0x90, 0xf8, 0xcb, 0xb9, 0x9b, 0xb0, 0x17, 0xbb, 0x09, 0x85, 0xd6, 0x17, 0x51, 0x35, 0x35, 0x94,
- 0xaf, 0x34, 0xb9, 0x7b, 0xf8, 0x14, 0xad, 0x86, 0x12, 0x22, 0x51, 0x5b, 0xd6, 0x8d, 0x79, 0x63,
- 0x81, 0x9d, 0x76, 0xb7, 0x0c, 0xdf, 0xea, 0x23, 0x85, 0x24, 0x19, 0x41, 0xeb, 0x97, 0x65, 0x54,
- 0xcd, 0xe6, 0xd2, 0x91, 0x92, 0x7a, 0xa7, 0x11, 0xc4, 0xf2, 0x25, 0x1c, 0x74, 0x0f, 0xad, 0x88,
- 0x04, 0x3c, 0xdd, 0xcc, 0xca, 0xfe, 0xdb, 0x37, 0xe4, 0x3f, 0x9b, 0xd8, 0x51, 0x02, 0x9e, 0xbb,
- 0x69, 0x88, 0x57, 0x94, 0x44, 0x34, 0x0d, 0xfe, 0x0c, 0xad, 0x09, 0x49, 0xe5, 0x50, 0x1d, 0xb9,
- 0x22, 0xbc, 0xb7, 0x28, 0xa1, 0x06, 0xb9, 0xff, 0x33, 0x94, 0x6b, 0x99, 0x4c, 0x0c, 0x59, 0xeb,
- 0x37, 0x0b, 0xed, 0xcc, 0x42, 0x5e, 0xc2, 0x74, 0x0f, 0xae, 0x4f, 0xf7, 0xcd, 0x05, 0x8b, 0xb9,
- 0x61, 0xc2, 0x7f, 0x58, 0xe8, 0xd5, 0xb9, 0xba, 0xd9, 0x90, 0x7b, 0xa0, 0xde, 0x84, 0x64, 0xe6,
- 0xe5, 0x79, 0x4c, 0x23, 0xc8, 0xd6, 0x3e, 0x7b, 0x13, 0x0e, 0x0b, 0xec, 0xa4, 0x10, 0x85, 0x9f,
- 0xa1, 0x6a, 0x18, 0x0f, 0xc2, 0x18, 0x32, 0xdd, 0xd1, 0x74, 0xbe, 0x85, 0x87, 0x3b, 0xcb, 0xac,
- 0x87, 0xbb, 0x93, 0x8e, 0x9b, 0xd5, 0x47, 0x33, 0x2c, 0x64, 0x8e, 0xb7, 0xf5, 0x7b, 0xc1, 0x64,
- 0x94, 0x01, 0xbf, 0x83, 0xca, 0x54, 0x6b, 0x80, 0x9b, 0x32, 0x26, 0x9d, 0xee, 0x18, 0x3d, 0x99,
- 0x78, 0xe8, 0xbd, 0xd1, 0xad, 0x30, 0x89, 0x2e, 0xbc, 0x37, 0x1a, 0x94, 0xdb, 0x1b, 0x2d, 0x13,
- 0x43, 0xa6, 0x92, 0x88, 0x99, 0x9f, 0xf5, 0xb2, 0x74, 0x3d, 0x89, 0xc7, 0x46, 0x4f, 0x26, 0x1e,
- 0xad, 0xbf, 0x4b, 0x05, 0x03, 0xd2, 0x0b, 0x98, 0xab, 0xc6, 0xd7, 0xd5, 0x94, 0xe7, 0xaa, 0xf1,
- 0x27, 0xd5, 0xf8, 0xf8, 0x67, 0x0b, 0x61, 0x3a, 0xa1, 0xe8, 0x5d, 0x2d, 0x68, 0xb6, 0x45, 0xdd,
- 0x5b, 0x9d, 0x84, 0xdd, 0x99, 0xe3, 0xc9, 0xbe, 0x84, 0x75, 0x13, 0x1f, 0xcf, 0x3b, 0x90, 0x82,
- 0xe0, 0xd8, 0x47, 0x95, 0x4c, 0xdb, 0xe5, 0x9c, 0x71, 0x73, 0x9e, 0xad, 0x7f, 0xcd, 0x45, 0x7b,
- 0xba, 0x0d, 0xf5, 0x65, 0xef, 0x4c, 0xa1, 0x97, 0xe3, 0x66, 0x25, 0x67, 0x27, 0x79, 0x5a, 0x15,
- 0xc5, 0x87, 0x69, 0x94, 0x95, 0xdb, 0x45, 0xf9, 0x04, 0x6e, 0x8e, 0x92, 0xa3, 0xad, 0x77, 0xd1,
- 0x6b, 0x37, 0xb4, 0xe5, 0x56, 0xdf, 0x8b, 0x1f, 0x2c, 0x94, 0x8f, 0x81, 0x0f, 0xd0, 0x8a, 0xfa,
- 0xbb, 0x65, 0x1e, 0x92, 0xbb, 0x8b, 0x3d, 0x24, 0xc7, 0x61, 0x04, 0xd3, 0xa7, 0x50, 0x49, 0x44,
- 0xb3, 0xe0, 0xb7, 0xd0, 0x7a, 0x04, 0x42, 0xd0, 0xc0, 0x44, 0x76, 0xef, 0x18, 0xa7, 0xf5, 0x5e,
- 0xa6, 0x26, 0x57, 0x76, 0xb7, 0x7d, 0x7e, 0xd1, 0x58, 0x7a, 0x7e, 0xd1, 0x58, 0x7a, 0x71, 0xd1,
- 0x58, 0xfa, 0x3e, 0x6d, 0x58, 0xe7, 0x69, 0xc3, 0x7a, 0x9e, 0x36, 0xac, 0x17, 0x69, 0xc3, 0xfa,
- 0x33, 0x6d, 0x58, 0x3f, 0xfd, 0xd5, 0x58, 0xfa, 0x62, 0x79, 0xb4, 0xf7, 0x4f, 0x00, 0x00, 0x00,
- 0xff, 0xff, 0xe2, 0xd4, 0x42, 0x3d, 0x3c, 0x0b, 0x00, 0x00,
+ // 1212 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x57, 0x41, 0x6f, 0xe3, 0x44,
+ 0x14, 0xae, 0x9b, 0xa4, 0x4d, 0x27, 0x2d, 0x9b, 0xce, 0x16, 0x08, 0x39, 0x24, 0x95, 0x41, 0x10,
+ 0x0a, 0xeb, 0x6c, 0x97, 0x65, 0xb5, 0x42, 0x02, 0x29, 0x6e, 0x23, 0x51, 0xd1, 0xb4, 0xd5, 0xb4,
+ 0xac, 0x10, 0x02, 0xc4, 0xd4, 0x1e, 0x52, 0x6f, 0x62, 0x8f, 0xf1, 0x4c, 0x02, 0xb9, 0x71, 0xe2,
+ 0x86, 0x04, 0x57, 0x7e, 0x05, 0x5c, 0x39, 0x72, 0x2a, 0xb7, 0x15, 0xa7, 0x3d, 0x45, 0xd4, 0x9c,
+ 0xe1, 0x07, 0xf4, 0x84, 0x66, 0x3c, 0x8d, 0x9d, 0xc4, 0x29, 0xe9, 0xa5, 0xb7, 0xcc, 0x9b, 0xf7,
+ 0x7d, 0xef, 0xbd, 0xf9, 0xde, 0xbc, 0x71, 0xc0, 0x07, 0x9d, 0xc7, 0xcc, 0x70, 0x68, 0xbd, 0xd3,
+ 0x3b, 0x25, 0x81, 0x47, 0x38, 0x61, 0xf5, 0x3e, 0xf1, 0x6c, 0x1a, 0xd4, 0xd5, 0x06, 0xf6, 0x9d,
+ 0x3a, 0xe3, 0x34, 0xc0, 0x6d, 0x52, 0xef, 0x6f, 0xd7, 0xdb, 0xc4, 0x23, 0x01, 0xe6, 0xc4, 0x36,
+ 0xfc, 0x80, 0x72, 0x0a, 0x5f, 0x8c, 0xdc, 0x0c, 0xec, 0x3b, 0x86, 0x72, 0x33, 0xfa, 0xdb, 0xe5,
+ 0x7b, 0x6d, 0x87, 0x9f, 0xf5, 0x4e, 0x0d, 0x8b, 0xba, 0xf5, 0x36, 0x6d, 0xd3, 0xba, 0xf4, 0x3e,
+ 0xed, 0x7d, 0x25, 0x57, 0x72, 0x21, 0x7f, 0x45, 0x2c, 0x65, 0x3d, 0x11, 0xcc, 0xa2, 0x41, 0x5a,
+ 0xa4, 0xf2, 0xc3, 0xd8, 0xc7, 0xc5, 0xd6, 0x99, 0xe3, 0x91, 0x60, 0x50, 0xf7, 0x3b, 0x6d, 0x61,
+ 0x60, 0x75, 0x97, 0x70, 0x9c, 0x86, 0xaa, 0xcf, 0x42, 0x05, 0x3d, 0x8f, 0x3b, 0x2e, 0x99, 0x02,
+ 0x3c, 0xfa, 0x3f, 0x00, 0xb3, 0xce, 0x88, 0x8b, 0x27, 0x71, 0xfa, 0xaf, 0x1a, 0x58, 0xde, 0x39,
+ 0xde, 0x3b, 0xa0, 0x36, 0x81, 0x5f, 0x82, 0xbc, 0xc8, 0xc7, 0xc6, 0x1c, 0x97, 0xb4, 0x4d, 0xad,
+ 0x56, 0x78, 0x70, 0xdf, 0x88, 0xcf, 0x69, 0x44, 0x6b, 0xf8, 0x9d, 0xb6, 0x30, 0x30, 0x43, 0x78,
+ 0x1b, 0xfd, 0x6d, 0xe3, 0xf0, 0xf4, 0x29, 0xb1, 0x78, 0x8b, 0x70, 0x6c, 0xc2, 0xf3, 0x61, 0x75,
+ 0x21, 0x1c, 0x56, 0x41, 0x6c, 0x43, 0x23, 0x56, 0xb8, 0x0b, 0xb2, 0xcc, 0x27, 0x56, 0x69, 0x51,
+ 0xb2, 0xeb, 0x46, 0xaa, 0x0a, 0x86, 0xca, 0xe7, 0xd8, 0x27, 0x96, 0xb9, 0xaa, 0xf8, 0xb2, 0x62,
+ 0x85, 0x24, 0x5a, 0xff, 0x57, 0x03, 0x6b, 0xca, 0x67, 0x37, 0x70, 0xfa, 0x24, 0x80, 0x9b, 0x20,
+ 0xeb, 0x61, 0x97, 0xc8, 0xac, 0x57, 0x62, 0xcc, 0x01, 0x76, 0x09, 0x92, 0x3b, 0xf0, 0x75, 0xb0,
+ 0xe4, 0x51, 0x9b, 0xec, 0xed, 0xca, 0xd8, 0x2b, 0xe6, 0x0b, 0xca, 0x67, 0xe9, 0x40, 0x5a, 0x91,
+ 0xda, 0x85, 0x0f, 0xc1, 0x2a, 0xa7, 0x3e, 0xed, 0xd2, 0xf6, 0xe0, 0x23, 0x32, 0x60, 0xa5, 0xcc,
+ 0x66, 0xa6, 0xb6, 0x62, 0x16, 0xc3, 0x61, 0x75, 0xf5, 0x24, 0x61, 0x47, 0x63, 0x5e, 0xf0, 0x73,
+ 0x50, 0xc0, 0xdd, 0x2e, 0xb5, 0x30, 0xc7, 0xa7, 0x5d, 0x52, 0xca, 0xca, 0xf2, 0xb6, 0x66, 0x94,
+ 0xf7, 0x84, 0x76, 0x7b, 0x2e, 0x11, 0x71, 0x11, 0x61, 0xb4, 0x17, 0x58, 0x84, 0x99, 0x77, 0xc2,
+ 0x61, 0xb5, 0xd0, 0x88, 0x29, 0x50, 0x92, 0x4f, 0xff, 0x45, 0x03, 0x05, 0x55, 0xf0, 0xbe, 0xc3,
+ 0x38, 0xfc, 0x6c, 0x4a, 0x28, 0x63, 0x3e, 0xa1, 0x04, 0x5a, 0xca, 0x54, 0x54, 0xe5, 0xe7, 0xaf,
+ 0x2c, 0x09, 0x91, 0x76, 0x40, 0xce, 0xe1, 0xc4, 0x65, 0xa5, 0xc5, 0xcd, 0x4c, 0xad, 0xf0, 0xa0,
+ 0x72, 0xbd, 0x4a, 0xe6, 0x9a, 0xa2, 0xca, 0xed, 0x09, 0x10, 0x8a, 0xb0, 0xfa, 0x17, 0xa3, 0x8c,
+ 0x85, 0x70, 0xf0, 0x10, 0x2c, 0xdb, 0x52, 0x2a, 0x56, 0xd2, 0x24, 0xeb, 0x6b, 0xd7, 0xb3, 0x46,
+ 0xba, 0x9a, 0x77, 0x14, 0xf7, 0x72, 0xb4, 0x66, 0xe8, 0x8a, 0x45, 0xff, 0x61, 0x09, 0xac, 0x1e,
+ 0x47, 0xb0, 0x9d, 0x2e, 0x66, 0xec, 0x16, 0x9a, 0xf7, 0x5d, 0x50, 0xf0, 0x03, 0xda, 0x77, 0x98,
+ 0x43, 0x3d, 0x12, 0xa8, 0x3e, 0xba, 0xab, 0x20, 0x85, 0xa3, 0x78, 0x0b, 0x25, 0xfd, 0x60, 0x1b,
+ 0x00, 0x1f, 0x07, 0xd8, 0x25, 0x5c, 0x54, 0x9f, 0x91, 0xd5, 0xbf, 0x33, 0xa3, 0xfa, 0x64, 0x45,
+ 0xc6, 0xd1, 0x08, 0xd5, 0xf4, 0x78, 0x30, 0x88, 0xb3, 0x8b, 0x37, 0x50, 0x82, 0x1a, 0x76, 0xc0,
+ 0x5a, 0x40, 0xac, 0x2e, 0x76, 0xdc, 0x23, 0xda, 0x75, 0xac, 0x81, 0x6c, 0xc3, 0x15, 0xb3, 0x19,
+ 0x0e, 0xab, 0x6b, 0x28, 0xb9, 0x71, 0x39, 0xac, 0xde, 0x9f, 0x9e, 0x5c, 0xc6, 0x11, 0x09, 0x98,
+ 0xc3, 0x38, 0xf1, 0x78, 0xd4, 0xa1, 0x63, 0x18, 0x34, 0xce, 0x2d, 0xee, 0x89, 0x4b, 0x7b, 0x1e,
+ 0x3f, 0xf4, 0xb9, 0x43, 0x3d, 0x56, 0xca, 0xc5, 0xf7, 0xa4, 0x95, 0xb0, 0xa3, 0x31, 0x2f, 0xb8,
+ 0x0f, 0x36, 0x44, 0x5f, 0x7f, 0x13, 0x05, 0x68, 0x7e, 0xeb, 0x63, 0x4f, 0x9c, 0x52, 0x69, 0x69,
+ 0x53, 0xab, 0xe5, 0xcd, 0x52, 0x38, 0xac, 0x6e, 0x34, 0x52, 0xf6, 0x51, 0x2a, 0x0a, 0x7e, 0x02,
+ 0xd6, 0xfb, 0xd2, 0x64, 0x3a, 0x9e, 0xed, 0x78, 0xed, 0x16, 0xb5, 0x49, 0x69, 0x59, 0x16, 0xbd,
+ 0x15, 0x0e, 0xab, 0xeb, 0x4f, 0x26, 0x37, 0x2f, 0xd3, 0x8c, 0x68, 0x9a, 0x04, 0x7e, 0x0d, 0xd6,
+ 0x65, 0x44, 0x62, 0xab, 0x4b, 0xef, 0x10, 0x56, 0xca, 0x4b, 0xe9, 0x6a, 0x49, 0xe9, 0xc4, 0xd1,
+ 0x09, 0xdd, 0xae, 0x46, 0xc3, 0x31, 0xe9, 0x12, 0x8b, 0xd3, 0xe0, 0x84, 0x04, 0xae, 0xf9, 0x8a,
+ 0xd2, 0x6b, 0xbd, 0x31, 0x49, 0x85, 0xa6, 0xd9, 0xcb, 0xef, 0x83, 0x3b, 0x13, 0x82, 0xc3, 0x22,
+ 0xc8, 0x74, 0xc8, 0x20, 0x1a, 0x6a, 0x48, 0xfc, 0x84, 0x1b, 0x20, 0xd7, 0xc7, 0xdd, 0x1e, 0x89,
+ 0x9a, 0x0f, 0x45, 0x8b, 0xf7, 0x16, 0x1f, 0x6b, 0xfa, 0x6f, 0x1a, 0x28, 0x26, 0xbb, 0xe7, 0x16,
+ 0xe6, 0xc4, 0x87, 0xe3, 0x73, 0xe2, 0xd5, 0x39, 0x7a, 0x7a, 0xc6, 0xb0, 0xf8, 0x79, 0x11, 0x14,
+ 0x23, 0x5d, 0x1a, 0x9c, 0x63, 0xeb, 0xcc, 0x25, 0x1e, 0xbf, 0x85, 0x0b, 0xdd, 0x1a, 0x7b, 0x8d,
+ 0xde, 0xba, 0x76, 0x5c, 0xc7, 0x89, 0xcd, 0x7a, 0x96, 0xe0, 0xc7, 0x60, 0x89, 0x71, 0xcc, 0x7b,
+ 0xe2, 0x92, 0x0b, 0xc2, 0x7b, 0xf3, 0x12, 0x4a, 0x50, 0xfc, 0x22, 0x45, 0x6b, 0xa4, 0xc8, 0xf4,
+ 0xdf, 0x35, 0xb0, 0x31, 0x09, 0xb9, 0x05, 0x75, 0xf7, 0xc7, 0xd5, 0x7d, 0x63, 0xce, 0x62, 0x66,
+ 0x28, 0xfc, 0xa7, 0x06, 0x5e, 0x9a, 0xaa, 0x5b, 0xbe, 0x7d, 0x62, 0x26, 0xf8, 0x13, 0x93, 0xe7,
+ 0x20, 0x7e, 0xcb, 0xe5, 0x4c, 0x38, 0x4a, 0xd9, 0x47, 0xa9, 0x28, 0xf8, 0x14, 0x14, 0x1d, 0xaf,
+ 0xeb, 0x78, 0x24, 0xb2, 0x1d, 0xc7, 0xfa, 0xa6, 0x5e, 0xdc, 0x49, 0x66, 0x29, 0xee, 0x46, 0x38,
+ 0xac, 0x16, 0xf7, 0x26, 0x58, 0xd0, 0x14, 0xaf, 0xfe, 0x47, 0x8a, 0x32, 0xf2, 0xb5, 0x7b, 0x1b,
+ 0xe4, 0xb1, 0xb4, 0x90, 0x40, 0x95, 0x31, 0x3a, 0xe9, 0x86, 0xb2, 0xa3, 0x91, 0x87, 0xec, 0x1b,
+ 0x79, 0x14, 0x2a, 0xd1, 0xb9, 0xfb, 0x46, 0x82, 0x12, 0x7d, 0x23, 0xd7, 0x48, 0x91, 0x89, 0x24,
+ 0xc4, 0x37, 0x8d, 0x3c, 0xcb, 0xcc, 0x78, 0x12, 0x07, 0xca, 0x8e, 0x46, 0x1e, 0xfa, 0x3f, 0x99,
+ 0x14, 0x81, 0x64, 0x03, 0x26, 0xaa, 0xb1, 0x65, 0x35, 0xf9, 0xa9, 0x6a, 0xec, 0x51, 0x35, 0x36,
+ 0xfc, 0x49, 0x03, 0x10, 0x8f, 0x28, 0x5a, 0x57, 0x0d, 0x1a, 0x75, 0x51, 0xf3, 0x46, 0x57, 0xc2,
+ 0x68, 0x4c, 0xf1, 0x44, 0x2f, 0x61, 0x59, 0xc5, 0x87, 0xd3, 0x0e, 0x28, 0x25, 0x38, 0xb4, 0x41,
+ 0x21, 0xb2, 0x36, 0x83, 0x80, 0x06, 0xea, 0x7a, 0xea, 0xd7, 0xe6, 0x22, 0x3d, 0xcd, 0x8a, 0xfc,
+ 0x2c, 0x8b, 0xa1, 0x97, 0xc3, 0x6a, 0x21, 0xb1, 0x8f, 0x92, 0xb4, 0x22, 0x8a, 0x4d, 0xe2, 0x28,
+ 0xd9, 0x9b, 0x45, 0xd9, 0x25, 0xb3, 0xa3, 0x24, 0x68, 0xcb, 0x4d, 0xf0, 0xf2, 0x8c, 0x63, 0xb9,
+ 0xd1, 0x7b, 0xf1, 0xbd, 0x06, 0x92, 0x31, 0xe0, 0x3e, 0xc8, 0x8a, 0xbf, 0x09, 0x6a, 0x90, 0x6c,
+ 0xcd, 0x37, 0x48, 0x4e, 0x1c, 0x97, 0xc4, 0xa3, 0x50, 0xac, 0x90, 0x64, 0x81, 0x6f, 0x82, 0x65,
+ 0x97, 0x30, 0x86, 0xdb, 0x2a, 0x72, 0xfc, 0x21, 0xd7, 0x8a, 0xcc, 0xe8, 0x6a, 0x5f, 0x7f, 0x04,
+ 0xee, 0xa6, 0x7c, 0x10, 0xc3, 0x2a, 0xc8, 0x59, 0xe2, 0xcb, 0x41, 0x26, 0x94, 0x33, 0x57, 0xc4,
+ 0x44, 0xd9, 0x11, 0x06, 0x14, 0xd9, 0xcd, 0xda, 0xf9, 0x45, 0x65, 0xe1, 0xd9, 0x45, 0x65, 0xe1,
+ 0xf9, 0x45, 0x65, 0xe1, 0xbb, 0xb0, 0xa2, 0x9d, 0x87, 0x15, 0xed, 0x59, 0x58, 0xd1, 0x9e, 0x87,
+ 0x15, 0xed, 0xaf, 0xb0, 0xa2, 0xfd, 0xf8, 0x77, 0x65, 0xe1, 0xd3, 0xc5, 0xfe, 0xf6, 0x7f, 0x01,
+ 0x00, 0x00, 0xff, 0xff, 0x5c, 0x59, 0x23, 0xb9, 0x2c, 0x0e, 0x00, 0x00,
+}
+
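The `fileDescriptor_3b530c1983504d8d` blob above is the gzip-compressed `FileDescriptorProto` for this package; it is regenerated here because the new CSINode messages grow it from 1018 to 1212 bytes. A minimal sketch of decompressing such a blob for inspection (not part of the generated file; the variable is unexported, so the slice must be passed in from its own package):

```go
package inspect

import (
	"bytes"
	"compress/gzip"
	"io/ioutil"
)

// DescriptorBytes decompresses a gzipped FileDescriptorProto blob like
// fileDescriptor_3b530c1983504d8d above, returning the raw descriptor.
func DescriptorBytes(gz []byte) ([]byte, error) {
	r, err := gzip.NewReader(bytes.NewReader(gz))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	return ioutil.ReadAll(r)
}
```
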
+func (m *CSINode) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CSINode) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CSINode) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ {
+ size, err := m.Spec.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ {
+ size, err := m.ObjectMeta.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
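The generated marshalers write back-to-front: `MarshalToSizedBuffer` starts with `i` at the end of a buffer sized by `Size()`, and each field emits its payload, then its varint length, then its one-byte key, so every length prefix is known before it is written and no second sizing pass is needed. A standalone sketch of that layout for two short string fields, using the same keys `CSINodeDriver` emits below (`0x0a` is field 1 with wire type 2, `0x12` is field 2); the driver name is made up, and single-byte lengths assume strings shorter than 128 bytes:

```go
package main

import "fmt"

// marshalTwoFields sketches the generated back-to-front layout for two
// length-delimited string fields. Each field writes payload, then length,
// then key, moving i toward the front of the buffer.
func marshalTwoFields(name, nodeID string) []byte {
	buf := make([]byte, 2+len(name)+2+len(nodeID))
	i := len(buf)

	// Field 2 (nodeID) is written first so it lands at the back.
	i -= len(nodeID)
	copy(buf[i:], nodeID)
	i--
	buf[i] = byte(len(nodeID))
	i--
	buf[i] = 0x12 // field 2, wire type 2

	// Field 1 (name) is written last so it ends up first in the buffer.
	i -= len(name)
	copy(buf[i:], name)
	i--
	buf[i] = byte(len(name))
	i--
	buf[i] = 0x0a // field 1, wire type 2

	return buf[i:]
}

func main() {
	fmt.Printf("%% x -> % x\n", marshalTwoFields("ebs.csi.example.com", "node-1"))
}
```
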
+func (m *CSINodeDriver) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CSINodeDriver) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CSINodeDriver) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.Allocatable != nil {
+ {
+ size, err := m.Allocatable.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ if len(m.TopologyKeys) > 0 {
+ for iNdEx := len(m.TopologyKeys) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.TopologyKeys[iNdEx])
+ copy(dAtA[i:], m.TopologyKeys[iNdEx])
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.TopologyKeys[iNdEx])))
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ i -= len(m.NodeID)
+ copy(dAtA[i:], m.NodeID)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.NodeID)))
+ i--
+ dAtA[i] = 0x12
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.Name)))
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
+func (m *CSINodeList) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CSINodeList) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CSINodeList) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Items) > 0 {
+ for iNdEx := len(m.Items) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Items[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ {
+ size, err := m.ListMeta.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ return len(dAtA) - i, nil
+}
+
+func (m *CSINodeSpec) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CSINodeSpec) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CSINodeSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if len(m.Drivers) > 0 {
+ for iNdEx := len(m.Drivers) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Drivers[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintGenerated(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
}
func (m *StorageClass) Marshal() (dAtA []byte, err error) {
@@ -813,17 +1151,113 @@ func (m *VolumeError) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int {
- offset -= sovGenerated(v)
- base := offset
- for v >= 1<<7 {
- dAtA[offset] = uint8(v&0x7f | 0x80)
- v >>= 7
- offset++
- }
- dAtA[offset] = uint8(v)
+func (m *VolumeNodeResources) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *VolumeNodeResources) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *VolumeNodeResources) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.Count != nil {
+ i = encodeVarintGenerated(dAtA, i, uint64(*m.Count))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func encodeVarintGenerated(dAtA []byte, offset int, v uint64) int {
+ offset -= sovGenerated(v)
+ base := offset
+ for v >= 1<<7 {
+ dAtA[offset] = uint8(v&0x7f | 0x80)
+ v >>= 7
+ offset++
+ }
+ dAtA[offset] = uint8(v)
return base
}
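`encodeVarintGenerated` writes standard protobuf base-128 varints backwards from a precomputed offset: seven payload bits per byte, with the high bit set on every byte except the last. The same encoding in its usual forward form, as a minimal sketch:

```go
package main

import "fmt"

// encodeVarint is the forward form of the base-128 encoding that
// encodeVarintGenerated writes backwards: seven payload bits per byte,
// high bit set while more bytes follow.
func encodeVarint(v uint64) []byte {
	var out []byte
	for v >= 1<<7 {
		out = append(out, byte(v&0x7f|0x80))
		v >>= 7
	}
	return append(out, byte(v))
}

func main() {
	fmt.Printf("% x\n", encodeVarint(300)) // ac 02
}
```
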
+func (m *CSINode) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = m.ObjectMeta.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ l = m.Spec.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ return n
+}
+
+func (m *CSINodeDriver) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Name)
+ n += 1 + l + sovGenerated(uint64(l))
+ l = len(m.NodeID)
+ n += 1 + l + sovGenerated(uint64(l))
+ if len(m.TopologyKeys) > 0 {
+ for _, s := range m.TopologyKeys {
+ l = len(s)
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ }
+ if m.Allocatable != nil {
+ l = m.Allocatable.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ return n
+}
+
+func (m *CSINodeList) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = m.ListMeta.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ if len(m.Items) > 0 {
+ for _, e := range m.Items {
+ l = e.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ }
+ return n
+}
+
+func (m *CSINodeSpec) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Drivers) > 0 {
+ for _, e := range m.Drivers {
+ l = e.Size()
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ }
+ return n
+}
+
func (m *StorageClass) Size() (n int) {
if m == nil {
return 0
@@ -988,12 +1422,79 @@ func (m *VolumeError) Size() (n int) {
return n
}
+func (m *VolumeNodeResources) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Count != nil {
+ n += 1 + sovGenerated(uint64(*m.Count))
+ }
+ return n
+}
+
func sovGenerated(x uint64) (n int) {
return (math_bits.Len64(x|1) + 6) / 7
}
func sozGenerated(x uint64) (n int) {
return sovGenerated(uint64((x << 1) ^ uint64((int64(x) >> 63))))
}
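`sovGenerated` is the size counterpart of the encoder above, and `sozGenerated` first zigzag-maps signed values so small negatives stay small on the wire. A short sketch of both:

```go
package main

import (
	"fmt"
	"math/bits"
)

// sov mirrors sovGenerated: a value with b significant bits needs
// ceil(b/7) varint bytes; x|1 keeps zero at one byte.
func sov(x uint64) int { return (bits.Len64(x|1) + 6) / 7 }

// zigzag mirrors the mapping inside sozGenerated: 0->0, -1->1, 1->2,
// -2->3, so small magnitudes encode to short varints.
func zigzag(x int64) uint64 { return uint64((x << 1) ^ (x >> 63)) }

func main() {
	fmt.Println(sov(0), sov(127), sov(128)) // 1 1 2
	fmt.Println(zigzag(-1), zigzag(1))      // 1 2
}
```
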
+func (this *CSINode) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&CSINode{`,
+ `ObjectMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ObjectMeta), "ObjectMeta", "v1.ObjectMeta", 1), `&`, ``, 1) + `,`,
+ `Spec:` + strings.Replace(strings.Replace(this.Spec.String(), "CSINodeSpec", "CSINodeSpec", 1), `&`, ``, 1) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CSINodeDriver) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&CSINodeDriver{`,
+ `Name:` + fmt.Sprintf("%v", this.Name) + `,`,
+ `NodeID:` + fmt.Sprintf("%v", this.NodeID) + `,`,
+ `TopologyKeys:` + fmt.Sprintf("%v", this.TopologyKeys) + `,`,
+ `Allocatable:` + strings.Replace(this.Allocatable.String(), "VolumeNodeResources", "VolumeNodeResources", 1) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CSINodeList) String() string {
+ if this == nil {
+ return "nil"
+ }
+ repeatedStringForItems := "[]CSINode{"
+ for _, f := range this.Items {
+ repeatedStringForItems += strings.Replace(strings.Replace(f.String(), "CSINode", "CSINode", 1), `&`, ``, 1) + ","
+ }
+ repeatedStringForItems += "}"
+ s := strings.Join([]string{`&CSINodeList{`,
+ `ListMeta:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.ListMeta), "ListMeta", "v1.ListMeta", 1), `&`, ``, 1) + `,`,
+ `Items:` + repeatedStringForItems + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CSINodeSpec) String() string {
+ if this == nil {
+ return "nil"
+ }
+ repeatedStringForDrivers := "[]CSINodeDriver{"
+ for _, f := range this.Drivers {
+ repeatedStringForDrivers += strings.Replace(strings.Replace(f.String(), "CSINodeDriver", "CSINodeDriver", 1), `&`, ``, 1) + ","
+ }
+ repeatedStringForDrivers += "}"
+ s := strings.Join([]string{`&CSINodeSpec{`,
+ `Drivers:` + repeatedStringForDrivers + `,`,
+ `}`,
+ }, "")
+ return s
+}
func (this *StorageClass) String() string {
if this == nil {
return "nil"
@@ -1101,39 +1602,560 @@ func (this *VolumeAttachmentStatus) String() string {
for k := range this.AttachmentMetadata {
keysForAttachmentMetadata = append(keysForAttachmentMetadata, k)
}
- github_com_gogo_protobuf_sortkeys.Strings(keysForAttachmentMetadata)
- mapStringForAttachmentMetadata := "map[string]string{"
- for _, k := range keysForAttachmentMetadata {
- mapStringForAttachmentMetadata += fmt.Sprintf("%v: %v,", k, this.AttachmentMetadata[k])
+ github_com_gogo_protobuf_sortkeys.Strings(keysForAttachmentMetadata)
+ mapStringForAttachmentMetadata := "map[string]string{"
+ for _, k := range keysForAttachmentMetadata {
+ mapStringForAttachmentMetadata += fmt.Sprintf("%v: %v,", k, this.AttachmentMetadata[k])
+ }
+ mapStringForAttachmentMetadata += "}"
+ s := strings.Join([]string{`&VolumeAttachmentStatus{`,
+ `Attached:` + fmt.Sprintf("%v", this.Attached) + `,`,
+ `AttachmentMetadata:` + mapStringForAttachmentMetadata + `,`,
+ `AttachError:` + strings.Replace(this.AttachError.String(), "VolumeError", "VolumeError", 1) + `,`,
+ `DetachError:` + strings.Replace(this.DetachError.String(), "VolumeError", "VolumeError", 1) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *VolumeError) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&VolumeError{`,
+ `Time:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Time), "Time", "v1.Time", 1), `&`, ``, 1) + `,`,
+ `Message:` + fmt.Sprintf("%v", this.Message) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *VolumeNodeResources) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&VolumeNodeResources{`,
+ `Count:` + valueToStringGenerated(this.Count) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func valueToStringGenerated(v interface{}) string {
+ rv := reflect.ValueOf(v)
+ if rv.IsNil() {
+ return "nil"
+ }
+ pv := reflect.Indirect(rv).Interface()
+ return fmt.Sprintf("*%v", pv)
+}
+func (m *CSINode) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CSINode: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CSINode: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ObjectMeta", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.ObjectMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Spec", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.Spec.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
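On the decode side, every generated `Unmarshal` loop begins by reading a key varint and splitting it exactly as above: field number in the high bits, wire type in the low three. A tiny sketch of that split for the keys used in this file:

```go
package main

import "fmt"

// splitTag mirrors the first step of every generated Unmarshal loop:
// the key varint packs the field number in the high bits and the wire
// type in the low three bits.
func splitTag(wire uint64) (fieldNum int32, wireType int) {
	return int32(wire >> 3), int(wire & 0x7)
}

func main() {
	fmt.Println(splitTag(0x0a)) // 1 2 (ObjectMeta: field 1, length-delimited)
	fmt.Println(splitTag(0x12)) // 2 2 (Spec: field 2, length-delimited)
	fmt.Println(splitTag(0x08)) // 1 0 (Count: field 1, varint)
}
```
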
+func (m *CSINodeDriver) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CSINodeDriver: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CSINodeDriver: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Name = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NodeID", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.NodeID = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TopologyKeys", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.TopologyKeys = append(m.TopologyKeys, string(dAtA[iNdEx:postIndex]))
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Allocatable", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Allocatable == nil {
+ m.Allocatable = &VolumeNodeResources{}
+ }
+ if err := m.Allocatable.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *CSINodeList) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CSINodeList: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CSINodeList: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ListMeta", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if err := m.ListMeta.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Items", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Items = append(m.Items, CSINode{})
+ if err := m.Items[len(m.Items)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
}
- mapStringForAttachmentMetadata += "}"
- s := strings.Join([]string{`&VolumeAttachmentStatus{`,
- `Attached:` + fmt.Sprintf("%v", this.Attached) + `,`,
- `AttachmentMetadata:` + mapStringForAttachmentMetadata + `,`,
- `AttachError:` + strings.Replace(this.AttachError.String(), "VolumeError", "VolumeError", 1) + `,`,
- `DetachError:` + strings.Replace(this.DetachError.String(), "VolumeError", "VolumeError", 1) + `,`,
- `}`,
- }, "")
- return s
+ return nil
}
-func (this *VolumeError) String() string {
- if this == nil {
- return "nil"
+func (m *CSINodeSpec) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CSINodeSpec: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CSINodeSpec: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Drivers", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Drivers = append(m.Drivers, CSINodeDriver{})
+ if err := m.Drivers[len(m.Drivers)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
}
- s := strings.Join([]string{`&VolumeError{`,
- `Time:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Time), "Time", "v1.Time", 1), `&`, ``, 1) + `,`,
- `Message:` + fmt.Sprintf("%v", this.Message) + `,`,
- `}`,
- }, "")
- return s
-}
-func valueToStringGenerated(v interface{}) string {
- rv := reflect.ValueOf(v)
- if rv.IsNil() {
- return "nil"
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
}
- pv := reflect.Indirect(rv).Interface()
- return fmt.Sprintf("*%v", pv)
+ return nil
}
func (m *StorageClass) Unmarshal(dAtA []byte) error {
l := len(dAtA)
@@ -2587,6 +3609,79 @@ func (m *VolumeError) Unmarshal(dAtA []byte) error {
}
return nil
}
+func (m *VolumeNodeResources) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: VolumeNodeResources: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: VolumeNodeResources: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Count", wireType)
+ }
+ var v int32
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= int32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.Count = &v
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
func skipGenerated(dAtA []byte) (n int, err error) {
l := len(dAtA)
iNdEx := 0
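
The generated `Unmarshal` functions above all repeat one hand-rolled wire-format loop: read a varint tag, split it into field number and wire type (`wire >> 3` and `wire & 0x7`), then dispatch on the field. A minimal standalone sketch of that pattern follows — illustrative only; the generated code additionally guards against overflow and truncated input via `ErrIntOverflowGenerated` and `io.ErrUnexpectedEOF`.

```go
// Minimal sketch of the protobuf wire-format decoding that the generated
// Unmarshal functions repeat for every field.
package main

import "fmt"

// readVarint decodes a base-128 varint: 7 payload bits per byte,
// high bit set on every byte except the last.
func readVarint(data []byte) (value uint64, n int) {
	for shift := uint(0); n < len(data); shift += 7 {
		b := data[n]
		n++
		value |= uint64(b&0x7F) << shift
		if b < 0x80 {
			return value, n
		}
	}
	return 0, 0 // truncated input
}

func main() {
	// Tag for field number 1, wire type 2 (length-delimited), followed by
	// a 3-byte payload -- the same shape as e.g. CSINodeSpec.drivers.
	msg := []byte{0x0A, 0x03, 'f', 'o', 'o'}

	wire, n := readVarint(msg)
	fieldNum := int32(wire >> 3) // upper bits: field number
	wireType := int(wire & 0x7)  // lower 3 bits: wire type
	length, m := readVarint(msg[n:])
	payload := msg[n+m : n+m+int(length)]

	fmt.Println(fieldNum, wireType, string(payload)) // 1 2 foo
}
```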
diff --git a/vendor/k8s.io/api/storage/v1/generated.proto b/vendor/k8s.io/api/storage/v1/generated.proto
index df7823593e3c1..e5004c84243de 100644
--- a/vendor/k8s.io/api/storage/v1/generated.proto
+++ b/vendor/k8s.io/api/storage/v1/generated.proto
@@ -29,6 +29,80 @@ import "k8s.io/apimachinery/pkg/runtime/schema/generated.proto";
// Package-wide variables from generator "generated".
option go_package = "v1";
+// CSINode holds information about all CSI drivers installed on a node.
+// CSI drivers do not need to create the CSINode object directly. As long as
+// they use the node-driver-registrar sidecar container, the kubelet will
+// automatically populate the CSINode object for the CSI driver as part of
+// kubelet plugin registration.
+// CSINode has the same name as a node. If the object is missing, it means either
+// there are no CSI Drivers available on the node, or the Kubelet version is low
+// enough that it doesn't create this object.
+// CSINode has an OwnerReference that points to the corresponding node object.
+message CSINode {
+ // metadata.name must be the Kubernetes node name.
+ optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
+
+ // spec is the specification of CSINode
+ optional CSINodeSpec spec = 2;
+}
+
+// CSINodeDriver holds information about the specification of one CSI driver installed on a node
+message CSINodeDriver {
+ // This is the name of the CSI driver that this object refers to.
+ // This MUST be the same name returned by the CSI GetPluginName() call for
+ // that driver.
+ optional string name = 1;
+
+ // nodeID of the node from the driver point of view.
+ // This field enables Kubernetes to communicate with storage systems that do
+ // not share the same nomenclature for nodes. For example, Kubernetes may
+ // refer to a given node as "node1", but the storage system may refer to
+ // the same node as "nodeA". When Kubernetes issues a command to the storage
+ // system to attach a volume to a specific node, it can use this field to
+ // refer to the node name using the ID that the storage system will
+ // understand, e.g. "nodeA" instead of "node1". This field is required.
+ optional string nodeID = 2;
+
+ // topologyKeys is the list of keys supported by the driver.
+ // When a driver is initialized on a cluster, it provides a set of topology
+ // keys that it understands (e.g. "company.com/zone", "company.com/region").
+ // When a driver is initialized on a node, it provides the same topology keys
+ // along with values. Kubelet will expose these topology keys as labels
+ // on its own node object.
+ // When Kubernetes does topology aware provisioning, it can use this list to
+ // determine which labels it should retrieve from the node object and pass
+ // back to the driver.
+ // It is possible for different nodes to use different topology keys.
+ // This can be empty if driver does not support topology.
+ // +optional
+ repeated string topologyKeys = 3;
+
+ // allocatable represents the volume resources of a node that are available for scheduling.
+ // This field is beta.
+ // +optional
+ optional VolumeNodeResources allocatable = 4;
+}
+
+// CSINodeList is a collection of CSINode objects.
+message CSINodeList {
+ // Standard list metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+ // +optional
+ optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
+
+ // items is the list of CSINode
+ repeated CSINode items = 2;
+}
+
+// CSINodeSpec holds information about the specification of all CSI drivers installed on a node
+message CSINodeSpec {
+ // drivers is a list of information of all CSI Drivers existing on a node.
+ // If all drivers in the list are uninstalled, this can become empty.
+ // +patchMergeKey=name
+ // +patchStrategy=merge
+ repeated CSINodeDriver drivers = 1;
+}
+
// StorageClass describes the parameters for a class of storage for
// which PersistentVolumes can be dynamically provisioned.
//
@@ -36,7 +110,7 @@ option go_package = "v1";
// according to etcd is in ObjectMeta.Name.
message StorageClass {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -80,7 +154,7 @@ message StorageClass {
// StorageClassList is a collection of storage classes.
message StorageClassList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -94,7 +168,7 @@ message StorageClassList {
// VolumeAttachment objects are non-namespaced.
message VolumeAttachment {
// Standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -112,7 +186,7 @@ message VolumeAttachment {
// VolumeAttachmentList is a collection of VolumeAttachment objects.
message VolumeAttachmentList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -193,3 +267,13 @@ message VolumeError {
optional string message = 2;
}
+// VolumeNodeResources is a set of resource limits for scheduling of volumes.
+message VolumeNodeResources {
+ // Maximum number of unique volumes managed by the CSI driver that can be used on a node.
+ // A volume that is both attached and mounted on a node is considered to be used once, not twice.
+ // The same rule applies for a unique volume that is shared among multiple pods on the same node.
+ // If this field is not specified, then the supported number of volumes on this node is unbounded.
+ // +optional
+ optional int32 count = 1;
+}
+
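
The `count` semantics documented above — unset means unbounded, and a volume that is both attached and mounted counts once — are exactly what a scheduler-side capacity check has to encode. A hedged sketch using the Go types from this vendored package; the driver name is hypothetical:

```go
// Illustrative sketch (not part of the vendored code) of consuming the new
// CSINode / VolumeNodeResources types for a capacity check.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
)

// volumeFits reports whether one more volume of the given CSI driver can be
// placed on the node, given how many unique volumes of that driver it already uses.
func volumeFits(node *storagev1.CSINode, driverName string, inUse int32) bool {
	for _, d := range node.Spec.Drivers {
		if d.Name != driverName {
			continue
		}
		if d.Allocatable == nil || d.Allocatable.Count == nil {
			return true // no limit reported for this driver
		}
		return inUse < *d.Allocatable.Count
	}
	return false // driver not installed on this node
}

func main() {
	limit := int32(16)
	node := &storagev1.CSINode{
		Spec: storagev1.CSINodeSpec{
			Drivers: []storagev1.CSINodeDriver{{
				Name:        "ebs.csi.example.com", // hypothetical driver name
				NodeID:      "nodeA",
				Allocatable: &storagev1.VolumeNodeResources{Count: &limit},
			}},
		},
	}
	fmt.Println(volumeFits(node, "ebs.csi.example.com", 15)) // true
	fmt.Println(volumeFits(node, "ebs.csi.example.com", 16)) // false
}
```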
diff --git a/vendor/k8s.io/api/storage/v1/register.go b/vendor/k8s.io/api/storage/v1/register.go
index 473c687278b94..67493fd0fab5e 100644
--- a/vendor/k8s.io/api/storage/v1/register.go
+++ b/vendor/k8s.io/api/storage/v1/register.go
@@ -49,6 +49,9 @@ func addKnownTypes(scheme *runtime.Scheme) error {
&VolumeAttachment{},
&VolumeAttachmentList{},
+
+ &CSINode{},
+ &CSINodeList{},
)
metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
diff --git a/vendor/k8s.io/api/storage/v1/types.go b/vendor/k8s.io/api/storage/v1/types.go
index 21531c9e14657..86cb78b6401c8 100644
--- a/vendor/k8s.io/api/storage/v1/types.go
+++ b/vendor/k8s.io/api/storage/v1/types.go
@@ -17,7 +17,7 @@ limitations under the License.
package v1
import (
- "k8s.io/api/core/v1"
+ v1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -33,7 +33,7 @@ import (
type StorageClass struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -80,7 +80,7 @@ type StorageClass struct {
type StorageClassList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -115,7 +115,7 @@ type VolumeAttachment struct {
metav1.TypeMeta `json:",inline"`
// Standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -136,7 +136,7 @@ type VolumeAttachment struct {
type VolumeAttachmentList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -216,3 +216,97 @@ type VolumeError struct {
// +optional
Message string `json:"message,omitempty" protobuf:"bytes,2,opt,name=message"`
}
+
+// +genclient
+// +genclient:nonNamespaced
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// CSINode holds information about all CSI drivers installed on a node.
+// CSI drivers do not need to create the CSINode object directly. As long as
+// they use the node-driver-registrar sidecar container, the kubelet will
+// automatically populate the CSINode object for the CSI driver as part of
+// kubelet plugin registration.
+// CSINode has the same name as a node. If the object is missing, it means either
+// there are no CSI Drivers available on the node, or the Kubelet version is low
+// enough that it doesn't create this object.
+// CSINode has an OwnerReference that points to the corresponding node object.
+type CSINode struct {
+ metav1.TypeMeta `json:",inline"`
+
+ // metadata.name must be the Kubernetes node name.
+ metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
+
+ // spec is the specification of CSINode
+ Spec CSINodeSpec `json:"spec" protobuf:"bytes,2,opt,name=spec"`
+}
+
+// CSINodeSpec holds information about the specification of all CSI drivers installed on a node
+type CSINodeSpec struct {
+ // drivers is a list of information of all CSI Drivers existing on a node.
+ // If all drivers in the list are uninstalled, this can become empty.
+ // +patchMergeKey=name
+ // +patchStrategy=merge
+ Drivers []CSINodeDriver `json:"drivers" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,1,rep,name=drivers"`
+}
+
+// CSINodeDriver holds information about the specification of one CSI driver installed on a node
+type CSINodeDriver struct {
+ // This is the name of the CSI driver that this object refers to.
+ // This MUST be the same name returned by the CSI GetPluginName() call for
+ // that driver.
+ Name string `json:"name" protobuf:"bytes,1,opt,name=name"`
+
+ // nodeID of the node from the driver point of view.
+ // This field enables Kubernetes to communicate with storage systems that do
+ // not share the same nomenclature for nodes. For example, Kubernetes may
+ // refer to a given node as "node1", but the storage system may refer to
+ // the same node as "nodeA". When Kubernetes issues a command to the storage
+ // system to attach a volume to a specific node, it can use this field to
+ // refer to the node name using the ID that the storage system will
+ // understand, e.g. "nodeA" instead of "node1". This field is required.
+ NodeID string `json:"nodeID" protobuf:"bytes,2,opt,name=nodeID"`
+
+ // topologyKeys is the list of keys supported by the driver.
+ // When a driver is initialized on a cluster, it provides a set of topology
+ // keys that it understands (e.g. "company.com/zone", "company.com/region").
+ // When a driver is initialized on a node, it provides the same topology keys
+ // along with values. Kubelet will expose these topology keys as labels
+ // on its own node object.
+ // When Kubernetes does topology aware provisioning, it can use this list to
+ // determine which labels it should retrieve from the node object and pass
+ // back to the driver.
+ // It is possible for different nodes to use different topology keys.
+ // This can be empty if driver does not support topology.
+ // +optional
+ TopologyKeys []string `json:"topologyKeys" protobuf:"bytes,3,rep,name=topologyKeys"`
+
+ // allocatable represents the volume resources of a node that are available for scheduling.
+ // This field is beta.
+ // +optional
+ Allocatable *VolumeNodeResources `json:"allocatable,omitempty" protobuf:"bytes,4,opt,name=allocatable"`
+}
+
+// VolumeNodeResources is a set of resource limits for scheduling of volumes.
+type VolumeNodeResources struct {
+ // Maximum number of unique volumes managed by the CSI driver that can be used on a node.
+ // A volume that is both attached and mounted on a node is considered to be used once, not twice.
+ // The same rule applies for a unique volume that is shared among multiple pods on the same node.
+ // If this field is not specified, then the supported number of volumes on this node is unbounded.
+ // +optional
+ Count *int32 `json:"count,omitempty" protobuf:"varint,1,opt,name=count"`
+}
+
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+// CSINodeList is a collection of CSINode objects.
+type CSINodeList struct {
+ metav1.TypeMeta `json:",inline"`
+
+ // Standard list metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
+ // +optional
+ metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
+
+ // items is the list of CSINode
+ Items []CSINode `json:"items" protobuf:"bytes,2,rep,name=items"`
+}
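
Because CSINode objects share their name with the node they describe and are populated by the kubelet, listing them is the usual way to discover which CSI drivers each node runs. A minimal sketch, under the assumption that the client-go matching this vendor bump exposes `StorageV1().CSINodes()` (the exact `List` signature varies across client-go versions — newer releases also take a `context.Context`):

```go
// Sketch of reading the new storage/v1 CSINode objects from a cluster.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the program runs in-cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Each CSINode is named after its node.
	nodes, err := client.StorageV1().CSINodes().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, d := range n.Spec.Drivers {
			fmt.Printf("node %s: driver %s (nodeID %s)\n", n.Name, d.Name, d.NodeID)
		}
	}
}
```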
diff --git a/vendor/k8s.io/api/storage/v1/types_swagger_doc_generated.go b/vendor/k8s.io/api/storage/v1/types_swagger_doc_generated.go
index e31dd7f712b47..d6e3a16293d8f 100644
--- a/vendor/k8s.io/api/storage/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/storage/v1/types_swagger_doc_generated.go
@@ -27,9 +27,50 @@ package v1
// Those methods can be generated by using hack/update-generated-swagger-docs.sh
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
+var map_CSINode = map[string]string{
+ "": "CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object.",
+ "metadata": "metadata.name must be the Kubernetes node name.",
+ "spec": "spec is the specification of CSINode",
+}
+
+func (CSINode) SwaggerDoc() map[string]string {
+ return map_CSINode
+}
+
+var map_CSINodeDriver = map[string]string{
+ "": "CSINodeDriver holds information about the specification of one CSI driver installed on a node",
+ "name": "This is the name of the CSI driver that this object refers to. This MUST be the same name returned by the CSI GetPluginName() call for that driver.",
+ "nodeID": "nodeID of the node from the driver point of view. This field enables Kubernetes to communicate with storage systems that do not share the same nomenclature for nodes. For example, Kubernetes may refer to a given node as \"node1\", but the storage system may refer to the same node as \"nodeA\". When Kubernetes issues a command to the storage system to attach a volume to a specific node, it can use this field to refer to the node name using the ID that the storage system will understand, e.g. \"nodeA\" instead of \"node1\". This field is required.",
+ "topologyKeys": "topologyKeys is the list of keys supported by the driver. When a driver is initialized on a cluster, it provides a set of topology keys that it understands (e.g. \"company.com/zone\", \"company.com/region\"). When a driver is initialized on a node, it provides the same topology keys along with values. Kubelet will expose these topology keys as labels on its own node object. When Kubernetes does topology aware provisioning, it can use this list to determine which labels it should retrieve from the node object and pass back to the driver. It is possible for different nodes to use different topology keys. This can be empty if driver does not support topology.",
+ "allocatable": "allocatable represents the volume resources of a node that are available for scheduling. This field is beta.",
+}
+
+func (CSINodeDriver) SwaggerDoc() map[string]string {
+ return map_CSINodeDriver
+}
+
+var map_CSINodeList = map[string]string{
+ "": "CSINodeList is a collection of CSINode objects.",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "items": "items is the list of CSINode",
+}
+
+func (CSINodeList) SwaggerDoc() map[string]string {
+ return map_CSINodeList
+}
+
+var map_CSINodeSpec = map[string]string{
+ "": "CSINodeSpec holds information about the specification of all CSI drivers installed on a node",
+ "drivers": "drivers is a list of information of all CSI Drivers existing on a node. If all drivers in the list are uninstalled, this can become empty.",
+}
+
+func (CSINodeSpec) SwaggerDoc() map[string]string {
+ return map_CSINodeSpec
+}
+
var map_StorageClass = map[string]string{
"": "StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.\n\nStorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"provisioner": "Provisioner indicates the type of the provisioner.",
"parameters": "Parameters holds the parameters for the provisioner that should create volumes of this storage class.",
"reclaimPolicy": "Dynamically provisioned PersistentVolumes of this storage class are created with this reclaimPolicy. Defaults to Delete.",
@@ -45,7 +86,7 @@ func (StorageClass) SwaggerDoc() map[string]string {
var map_StorageClassList = map[string]string{
"": "StorageClassList is a collection of storage classes.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of StorageClasses",
}
@@ -55,7 +96,7 @@ func (StorageClassList) SwaggerDoc() map[string]string {
var map_VolumeAttachment = map[string]string{
"": "VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node.\n\nVolumeAttachment objects are non-namespaced.",
- "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "Specification of the desired attach/detach volume behavior. Populated by the Kubernetes system.",
"status": "Status of the VolumeAttachment request. Populated by the entity completing the attach or detach operation, i.e. the external-attacher.",
}
@@ -66,7 +107,7 @@ func (VolumeAttachment) SwaggerDoc() map[string]string {
var map_VolumeAttachmentList = map[string]string{
"": "VolumeAttachmentList is a collection of VolumeAttachment objects.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of VolumeAttachments",
}
@@ -116,4 +157,13 @@ func (VolumeError) SwaggerDoc() map[string]string {
return map_VolumeError
}
+var map_VolumeNodeResources = map[string]string{
+ "": "VolumeNodeResources is a set of resource limits for scheduling of volumes.",
+ "count": "Maximum number of unique volumes managed by the CSI driver that can be used on a node. A volume that is both attached and mounted on a node is considered to be used once, not twice. The same rule applies for a unique volume that is shared among multiple pods on the same node. If this field is not specified, then the supported number of volumes on this node is unbounded.",
+}
+
+func (VolumeNodeResources) SwaggerDoc() map[string]string {
+ return map_VolumeNodeResources
+}
+
// AUTO-GENERATED FUNCTIONS END HERE
diff --git a/vendor/k8s.io/api/storage/v1/zz_generated.deepcopy.go b/vendor/k8s.io/api/storage/v1/zz_generated.deepcopy.go
index eb8626e6e01b7..76255a0af741c 100644
--- a/vendor/k8s.io/api/storage/v1/zz_generated.deepcopy.go
+++ b/vendor/k8s.io/api/storage/v1/zz_generated.deepcopy.go
@@ -25,6 +25,115 @@ import (
runtime "k8s.io/apimachinery/pkg/runtime"
)
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *CSINode) DeepCopyInto(out *CSINode) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+ in.Spec.DeepCopyInto(&out.Spec)
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSINode.
+func (in *CSINode) DeepCopy() *CSINode {
+ if in == nil {
+ return nil
+ }
+ out := new(CSINode)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *CSINode) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *CSINodeDriver) DeepCopyInto(out *CSINodeDriver) {
+ *out = *in
+ if in.TopologyKeys != nil {
+ in, out := &in.TopologyKeys, &out.TopologyKeys
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+ if in.Allocatable != nil {
+ in, out := &in.Allocatable, &out.Allocatable
+ *out = new(VolumeNodeResources)
+ (*in).DeepCopyInto(*out)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSINodeDriver.
+func (in *CSINodeDriver) DeepCopy() *CSINodeDriver {
+ if in == nil {
+ return nil
+ }
+ out := new(CSINodeDriver)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *CSINodeList) DeepCopyInto(out *CSINodeList) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ListMeta.DeepCopyInto(&out.ListMeta)
+ if in.Items != nil {
+ in, out := &in.Items, &out.Items
+ *out = make([]CSINode, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSINodeList.
+func (in *CSINodeList) DeepCopy() *CSINodeList {
+ if in == nil {
+ return nil
+ }
+ out := new(CSINodeList)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *CSINodeList) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *CSINodeSpec) DeepCopyInto(out *CSINodeSpec) {
+ *out = *in
+ if in.Drivers != nil {
+ in, out := &in.Drivers, &out.Drivers
+ *out = make([]CSINodeDriver, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CSINodeSpec.
+func (in *CSINodeSpec) DeepCopy() *CSINodeSpec {
+ if in == nil {
+ return nil
+ }
+ out := new(CSINodeSpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *StorageClass) DeepCopyInto(out *StorageClass) {
*out = *in
@@ -271,3 +380,24 @@ func (in *VolumeError) DeepCopy() *VolumeError {
in.DeepCopyInto(out)
return out
}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *VolumeNodeResources) DeepCopyInto(out *VolumeNodeResources) {
+ *out = *in
+ if in.Count != nil {
+ in, out := &in.Count, &out.Count
+ *out = new(int32)
+ **out = **in
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VolumeNodeResources.
+func (in *VolumeNodeResources) DeepCopy() *VolumeNodeResources {
+ if in == nil {
+ return nil
+ }
+ out := new(VolumeNodeResources)
+ in.DeepCopyInto(out)
+ return out
+}
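
Note that the generated `VolumeNodeResources.DeepCopyInto` above allocates a fresh `int32` for `Count` rather than copying the pointer. A short sketch of why that matters — a plain struct copy aliases the pointer, so mutating the copy would silently mutate the original (for example, an object held in an informer cache):

```go
// Demonstrates pointer aliasing, the hazard the generated deepcopy avoids.
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
)

func main() {
	limit := int32(16)
	orig := &storagev1.VolumeNodeResources{Count: &limit}

	cp := orig.DeepCopy()
	*cp.Count = 32 // touch only the deep copy

	fmt.Println(*orig.Count, *cp.Count) // 16 32 -- original unchanged

	shallow := *orig // plain struct copy aliases the Count pointer
	*shallow.Count = 99
	fmt.Println(*orig.Count) // 99 -- the bug deepcopy exists to prevent
}
```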
diff --git a/vendor/k8s.io/api/storage/v1alpha1/generated.proto b/vendor/k8s.io/api/storage/v1alpha1/generated.proto
index 57a8357384731..7601963924071 100644
--- a/vendor/k8s.io/api/storage/v1alpha1/generated.proto
+++ b/vendor/k8s.io/api/storage/v1alpha1/generated.proto
@@ -35,7 +35,7 @@ option go_package = "v1alpha1";
// VolumeAttachment objects are non-namespaced.
message VolumeAttachment {
// Standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -53,7 +53,7 @@ message VolumeAttachment {
// VolumeAttachmentList is a collection of VolumeAttachment objects.
message VolumeAttachmentList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
diff --git a/vendor/k8s.io/api/storage/v1alpha1/types.go b/vendor/k8s.io/api/storage/v1alpha1/types.go
index 76ad6dc0dd8af..39408857c26bf 100644
--- a/vendor/k8s.io/api/storage/v1alpha1/types.go
+++ b/vendor/k8s.io/api/storage/v1alpha1/types.go
@@ -33,7 +33,7 @@ type VolumeAttachment struct {
metav1.TypeMeta `json:",inline"`
// Standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -54,7 +54,7 @@ type VolumeAttachment struct {
type VolumeAttachmentList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/storage/v1alpha1/types_swagger_doc_generated.go b/vendor/k8s.io/api/storage/v1alpha1/types_swagger_doc_generated.go
index 3701b08640d53..2e821616649d0 100644
--- a/vendor/k8s.io/api/storage/v1alpha1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/storage/v1alpha1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1alpha1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_VolumeAttachment = map[string]string{
"": "VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node.\n\nVolumeAttachment objects are non-namespaced.",
- "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "Specification of the desired attach/detach volume behavior. Populated by the Kubernetes system.",
"status": "Status of the VolumeAttachment request. Populated by the entity completing the attach or detach operation, i.e. the external-attacher.",
}
@@ -40,7 +40,7 @@ func (VolumeAttachment) SwaggerDoc() map[string]string {
var map_VolumeAttachmentList = map[string]string{
"": "VolumeAttachmentList is a collection of VolumeAttachment objects.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of VolumeAttachments",
}
diff --git a/vendor/k8s.io/api/storage/v1beta1/generated.pb.go b/vendor/k8s.io/api/storage/v1beta1/generated.pb.go
index 677d366f5efea..cd35af34f8621 100644
--- a/vendor/k8s.io/api/storage/v1beta1/generated.pb.go
+++ b/vendor/k8s.io/api/storage/v1beta1/generated.pb.go
@@ -520,89 +520,91 @@ func init() {
}
var fileDescriptor_7d2980599fd0de80 = []byte{
- // 1311 bytes of a gzipped FileDescriptorProto
- 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x57, 0x4d, 0x6f, 0x1b, 0x45,
- 0x18, 0xce, 0xc6, 0xf9, 0x1c, 0x27, 0xad, 0x33, 0x8d, 0xc0, 0xf8, 0x60, 0x47, 0x46, 0xd0, 0xb4,
- 0x6a, 0xd7, 0x6d, 0x55, 0xaa, 0xaa, 0x12, 0x87, 0x6c, 0x1a, 0x09, 0xb7, 0x75, 0x1a, 0x26, 0x51,
- 0x85, 0x2a, 0x0e, 0x8c, 0x77, 0xdf, 0x3a, 0xdb, 0x78, 0x77, 0xb6, 0x33, 0x63, 0x43, 0x6e, 0x9c,
- 0xe0, 0x8a, 0x38, 0xf0, 0x0b, 0xf8, 0x0b, 0x20, 0xc1, 0x85, 0x23, 0x3d, 0xa1, 0x8a, 0x53, 0x4f,
- 0x16, 0x5d, 0x7e, 0x02, 0xb7, 0x88, 0x03, 0x9a, 0xd9, 0x89, 0x77, 0xfd, 0xd5, 0x24, 0x1c, 0x72,
- 0xf3, 0xbc, 0x1f, 0xcf, 0xfb, 0xf5, 0xcc, 0x3b, 0x6b, 0xb4, 0x79, 0x70, 0x57, 0xd8, 0x3e, 0xab,
- 0x1d, 0x74, 0x9a, 0xc0, 0x43, 0x90, 0x20, 0x6a, 0x5d, 0x08, 0x3d, 0xc6, 0x6b, 0x46, 0x41, 0x23,
- 0xbf, 0x26, 0x24, 0xe3, 0xb4, 0x05, 0xb5, 0xee, 0xcd, 0x26, 0x48, 0x7a, 0xb3, 0xd6, 0x82, 0x10,
- 0x38, 0x95, 0xe0, 0xd9, 0x11, 0x67, 0x92, 0xe1, 0x52, 0x62, 0x6b, 0xd3, 0xc8, 0xb7, 0x8d, 0xad,
- 0x6d, 0x6c, 0x4b, 0xd7, 0x5b, 0xbe, 0xdc, 0xef, 0x34, 0x6d, 0x97, 0x05, 0xb5, 0x16, 0x6b, 0xb1,
- 0x9a, 0x76, 0x69, 0x76, 0x9e, 0xe9, 0x93, 0x3e, 0xe8, 0x5f, 0x09, 0x54, 0xa9, 0x9a, 0x09, 0xeb,
- 0x32, 0xae, 0x62, 0x0e, 0x87, 0x2b, 0xdd, 0x4e, 0x6d, 0x02, 0xea, 0xee, 0xfb, 0x21, 0xf0, 0xc3,
- 0x5a, 0x74, 0xd0, 0x52, 0x02, 0x51, 0x0b, 0x40, 0xd2, 0x71, 0x5e, 0xb5, 0x49, 0x5e, 0xbc, 0x13,
- 0x4a, 0x3f, 0x80, 0x11, 0x87, 0x3b, 0x27, 0x39, 0x08, 0x77, 0x1f, 0x02, 0x3a, 0xec, 0x57, 0xfd,
- 0xd5, 0x42, 0x8b, 0x9b, 0xbb, 0xf5, 0xfb, 0xdc, 0xef, 0x02, 0xc7, 0x5f, 0xa0, 0x05, 0x95, 0x91,
- 0x47, 0x25, 0x2d, 0x5a, 0x6b, 0xd6, 0x7a, 0xfe, 0xd6, 0x0d, 0x3b, 0x6d, 0x57, 0x1f, 0xd8, 0x8e,
- 0x0e, 0x5a, 0x4a, 0x20, 0x6c, 0x65, 0x6d, 0x77, 0x6f, 0xda, 0x8f, 0x9b, 0xcf, 0xc1, 0x95, 0x0d,
- 0x90, 0xd4, 0xc1, 0x2f, 0x7b, 0x95, 0xa9, 0xb8, 0x57, 0x41, 0xa9, 0x8c, 0xf4, 0x51, 0xf1, 0x43,
- 0x34, 0x23, 0x22, 0x70, 0x8b, 0xd3, 0x1a, 0xfd, 0x8a, 0x3d, 0x79, 0x18, 0x76, 0x3f, 0xad, 0xdd,
- 0x08, 0x5c, 0x67, 0xc9, 0xc0, 0xce, 0xa8, 0x13, 0xd1, 0x20, 0xd5, 0x5f, 0x2c, 0xb4, 0xdc, 0xb7,
- 0x7a, 0xe4, 0x0b, 0x89, 0x3f, 0x1f, 0x29, 0xc0, 0x3e, 0x5d, 0x01, 0xca, 0x5b, 0xa7, 0x5f, 0x30,
- 0x71, 0x16, 0x8e, 0x25, 0x99, 0xe4, 0x1f, 0xa0, 0x59, 0x5f, 0x42, 0x20, 0x8a, 0xd3, 0x6b, 0xb9,
- 0xf5, 0xfc, 0xad, 0x0f, 0x4e, 0x95, 0xbd, 0xb3, 0x6c, 0x10, 0x67, 0xeb, 0xca, 0x97, 0x24, 0x10,
- 0xd5, 0x6f, 0xb3, 0xb9, 0xab, 0x9a, 0xf0, 0x3d, 0x74, 0x81, 0x4a, 0x49, 0xdd, 0x7d, 0x02, 0x2f,
- 0x3a, 0x3e, 0x07, 0x4f, 0x57, 0xb0, 0xe0, 0xe0, 0xb8, 0x57, 0xb9, 0xb0, 0x31, 0xa0, 0x21, 0x43,
- 0x96, 0xca, 0x37, 0x62, 0x5e, 0x3d, 0x7c, 0xc6, 0x1e, 0x87, 0x0d, 0xd6, 0x09, 0xa5, 0x6e, 0xb0,
- 0xf1, 0xdd, 0x19, 0xd0, 0x90, 0x21, 0xcb, 0xea, 0xcf, 0x16, 0x9a, 0xdf, 0xdc, 0xad, 0x6f, 0x33,
- 0x0f, 0xce, 0x81, 0x00, 0xf5, 0x01, 0x02, 0x5c, 0x3e, 0xa1, 0x85, 0x2a, 0xa9, 0x89, 0xe3, 0xff,
- 0x27, 0x69, 0xa1, 0xb2, 0x31, 0xfc, 0x5d, 0x43, 0x33, 0x21, 0x0d, 0x40, 0xa7, 0xbe, 0x98, 0xfa,
- 0x6c, 0xd3, 0x00, 0x88, 0xd6, 0xe0, 0x0f, 0xd1, 0x5c, 0xc8, 0x3c, 0xa8, 0xdf, 0xd7, 0x09, 0x2c,
- 0x3a, 0x17, 0x8c, 0xcd, 0xdc, 0xb6, 0x96, 0x12, 0xa3, 0xc5, 0xb7, 0xd1, 0x92, 0x64, 0x11, 0x6b,
- 0xb3, 0xd6, 0xe1, 0x43, 0x38, 0x14, 0xc5, 0xdc, 0x5a, 0x6e, 0x7d, 0xd1, 0x29, 0xc4, 0xbd, 0xca,
- 0xd2, 0x5e, 0x46, 0x4e, 0x06, 0xac, 0x70, 0x13, 0xe5, 0x69, 0xbb, 0xcd, 0x5c, 0x2a, 0x69, 0xb3,
- 0x0d, 0xc5, 0x19, 0x5d, 0x63, 0xed, 0x6d, 0x35, 0x3e, 0x61, 0xed, 0x4e, 0x00, 0x2a, 0x38, 0x01,
- 0xc1, 0x3a, 0xdc, 0x05, 0xe1, 0x5c, 0x8c, 0x7b, 0x95, 0xfc, 0x46, 0x8a, 0x43, 0xb2, 0xa0, 0xd5,
- 0x9f, 0x2c, 0x94, 0x37, 0x55, 0x9f, 0x03, 0xe5, 0x3f, 0x19, 0xa4, 0xfc, 0xfb, 0xa7, 0x98, 0xd7,
- 0x04, 0xc2, 0xbb, 0xfd, 0xb4, 0x35, 0xdb, 0xf7, 0xd0, 0xbc, 0xa7, 0x87, 0x26, 0x8a, 0x96, 0x86,
- 0xbe, 0x72, 0x0a, 0x68, 0x73, 0xa3, 0x2e, 0x9a, 0x00, 0xf3, 0xc9, 0x59, 0x90, 0x63, 0xa8, 0xea,
- 0xf7, 0x73, 0x68, 0x69, 0x37, 0xf1, 0xdd, 0x6c, 0x53, 0x21, 0xce, 0x81, 0xd0, 0x1f, 0xa1, 0x7c,
- 0xc4, 0x59, 0xd7, 0x17, 0x3e, 0x0b, 0x81, 0x1b, 0x5a, 0x5d, 0x32, 0x2e, 0xf9, 0x9d, 0x54, 0x45,
- 0xb2, 0x76, 0xb8, 0x8d, 0x50, 0x44, 0x39, 0x0d, 0x40, 0xaa, 0x16, 0xe4, 0x74, 0x0b, 0xee, 0xbe,
- 0xad, 0x05, 0xd9, 0xb2, 0xec, 0x9d, 0xbe, 0xeb, 0x56, 0x28, 0xf9, 0x61, 0x9a, 0x62, 0xaa, 0x20,
- 0x19, 0x7c, 0x7c, 0x80, 0x96, 0x39, 0xb8, 0x6d, 0xea, 0x07, 0x3b, 0xac, 0xed, 0xbb, 0x87, 0x9a,
- 0x9a, 0x8b, 0xce, 0x56, 0xdc, 0xab, 0x2c, 0x93, 0xac, 0xe2, 0xa8, 0x57, 0xb9, 0x31, 0xfa, 0xaa,
- 0xd9, 0x3b, 0xc0, 0x85, 0x2f, 0x24, 0x84, 0x32, 0x21, 0xec, 0x80, 0x0f, 0x19, 0xc4, 0x56, 0x77,
- 0x27, 0x50, 0x9b, 0xe5, 0x71, 0x24, 0x7d, 0x16, 0x8a, 0xe2, 0x6c, 0x7a, 0x77, 0x1a, 0x19, 0x39,
- 0x19, 0xb0, 0xc2, 0x8f, 0xd0, 0xaa, 0xa2, 0xf9, 0x97, 0x49, 0x80, 0xad, 0xaf, 0x22, 0x1a, 0xaa,
- 0x56, 0x15, 0xe7, 0xf4, 0x22, 0x2b, 0xc6, 0xbd, 0xca, 0xea, 0xc6, 0x18, 0x3d, 0x19, 0xeb, 0x85,
- 0x3f, 0x43, 0x2b, 0x5d, 0x2d, 0x72, 0xfc, 0xd0, 0xf3, 0xc3, 0x56, 0x83, 0x79, 0x50, 0x9c, 0xd7,
- 0x45, 0x5f, 0x8d, 0x7b, 0x95, 0x95, 0x27, 0xc3, 0xca, 0xa3, 0x71, 0x42, 0x32, 0x0a, 0x82, 0x5f,
- 0xa0, 0x15, 0x1d, 0x11, 0x3c, 0xb3, 0x08, 0x7c, 0x10, 0xc5, 0x05, 0x3d, 0xbf, 0xf5, 0xec, 0xfc,
- 0x54, 0xeb, 0x14, 0x91, 0x8e, 0xd7, 0xc5, 0x2e, 0xb4, 0xc1, 0x95, 0x8c, 0xef, 0x01, 0x0f, 0x9c,
- 0xf7, 0xcc, 0xbc, 0x56, 0x36, 0x86, 0xa1, 0xc8, 0x28, 0x7a, 0xe9, 0x63, 0x74, 0x71, 0x68, 0xe0,
- 0xb8, 0x80, 0x72, 0x07, 0x70, 0x98, 0x2c, 0x3a, 0xa2, 0x7e, 0xe2, 0x55, 0x34, 0xdb, 0xa5, 0xed,
- 0x0e, 0x24, 0x0c, 0x24, 0xc9, 0xe1, 0xde, 0xf4, 0x5d, 0xab, 0xfa, 0x9b, 0x85, 0x0a, 0x59, 0xf6,
- 0x9c, 0xc3, 0xda, 0x68, 0x0c, 0xae, 0x8d, 0xf5, 0xd3, 0x12, 0x7b, 0xc2, 0xee, 0xf8, 0x71, 0x1a,
- 0x15, 0x92, 0xe1, 0x24, 0xef, 0x60, 0x00, 0xa1, 0x3c, 0x87, 0xab, 0x4d, 0x06, 0xde, 0xaa, 0x1b,
- 0x27, 0xef, 0xf1, 0x34, 0xbb, 0x49, 0x8f, 0x16, 0x7e, 0x8a, 0xe6, 0x84, 0xa4, 0xb2, 0xa3, 0xee,
- 0xbc, 0x42, 0xbd, 0x75, 0x26, 0x54, 0xed, 0x99, 0x3e, 0x5a, 0xc9, 0x99, 0x18, 0xc4, 0xea, 0xef,
- 0x16, 0x5a, 0x1d, 0x76, 0x39, 0x87, 0x61, 0x7f, 0x3a, 0x38, 0xec, 0x6b, 0x67, 0xa9, 0x68, 0xc2,
- 0xc0, 0xff, 0xb4, 0xd0, 0x3b, 0x23, 0xc5, 0xeb, 0xe7, 0x51, 0xed, 0x89, 0x68, 0x68, 0x1b, 0x6d,
- 0xa7, 0x6f, 0xbe, 0xde, 0x13, 0x3b, 0x63, 0xf4, 0x64, 0xac, 0x17, 0x7e, 0x8e, 0x0a, 0x7e, 0xd8,
- 0xf6, 0x43, 0x48, 0x64, 0xbb, 0xe9, 0xb8, 0xc7, 0x5e, 0xe6, 0x61, 0x64, 0x3d, 0xe6, 0xd5, 0xb8,
- 0x57, 0x29, 0xd4, 0x87, 0x50, 0xc8, 0x08, 0x6e, 0xf5, 0x8f, 0x31, 0xe3, 0xd1, 0x6f, 0xe1, 0x35,
- 0xb4, 0x90, 0x7c, 0xcf, 0x01, 0x37, 0x65, 0xf4, 0xdb, 0xbd, 0x61, 0xe4, 0xa4, 0x6f, 0xa1, 0x19,
- 0xa4, 0x5b, 0x61, 0x12, 0x3d, 0x1b, 0x83, 0xb4, 0x67, 0x86, 0x41, 0xfa, 0x4c, 0x0c, 0xa2, 0xca,
- 0x44, 0x7d, 0x00, 0xe9, 0x86, 0xe6, 0x06, 0x33, 0xd9, 0x36, 0x72, 0xd2, 0xb7, 0xa8, 0xfe, 0x9b,
- 0x1b, 0x33, 0x25, 0x4d, 0xc5, 0x4c, 0x49, 0xc7, 0x9f, 0xb1, 0xc3, 0x25, 0x79, 0xfd, 0x92, 0x3c,
- 0xfc, 0x83, 0x85, 0x30, 0xed, 0x43, 0x34, 0x8e, 0xa9, 0x9a, 0xf0, 0xe9, 0xc1, 0xd9, 0x6f, 0x88,
- 0xbd, 0x31, 0x02, 0x96, 0xbc, 0x93, 0x25, 0x93, 0x04, 0x1e, 0x35, 0x20, 0x63, 0x32, 0xc0, 0x3e,
- 0xca, 0x27, 0xd2, 0x2d, 0xce, 0x19, 0x37, 0x57, 0xf6, 0xf2, 0xc9, 0x09, 0x69, 0x73, 0xa7, 0xac,
- 0x3f, 0xe4, 0x52, 0xff, 0xa3, 0x5e, 0x25, 0x9f, 0xd1, 0x93, 0x2c, 0xb6, 0x0a, 0xe5, 0x41, 0x1a,
- 0x6a, 0xe6, 0x7f, 0x84, 0xba, 0x0f, 0x93, 0x43, 0x65, 0xb0, 0x4b, 0x5b, 0xe8, 0xdd, 0x09, 0x0d,
- 0x3a, 0xd3, 0xbb, 0xf2, 0x8d, 0x85, 0xb2, 0x31, 0xf0, 0x23, 0x34, 0xa3, 0xfe, 0x6a, 0x9a, 0x0d,
- 0x73, 0xf5, 0x74, 0x1b, 0x66, 0xcf, 0x0f, 0x20, 0x5d, 0x94, 0xea, 0x44, 0x34, 0x0a, 0xbe, 0x82,
- 0xe6, 0x03, 0x10, 0x82, 0xb6, 0x4c, 0xe4, 0xf4, 0xab, 0xaf, 0x91, 0x88, 0xc9, 0xb1, 0xbe, 0x7a,
- 0x07, 0x5d, 0x1a, 0xf3, 0x1d, 0x8d, 0x2b, 0x68, 0xd6, 0xd5, 0xff, 0x85, 0x54, 0x42, 0xb3, 0xce,
- 0xa2, 0xda, 0x32, 0x9b, 0xfa, 0x2f, 0x50, 0x22, 0x77, 0xae, 0xbf, 0x7c, 0x53, 0x9e, 0x7a, 0xf5,
- 0xa6, 0x3c, 0xf5, 0xfa, 0x4d, 0x79, 0xea, 0xeb, 0xb8, 0x6c, 0xbd, 0x8c, 0xcb, 0xd6, 0xab, 0xb8,
- 0x6c, 0xbd, 0x8e, 0xcb, 0xd6, 0x5f, 0x71, 0xd9, 0xfa, 0xee, 0xef, 0xf2, 0xd4, 0xd3, 0x79, 0xd3,
- 0xef, 0xff, 0x02, 0x00, 0x00, 0xff, 0xff, 0xce, 0x65, 0xbb, 0xc7, 0x7f, 0x10, 0x00, 0x00,
+ // 1344 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xbc, 0x57, 0xbd, 0x6f, 0xdb, 0x46,
+ 0x1b, 0x37, 0x2d, 0x7f, 0x9e, 0xec, 0x44, 0xbe, 0x18, 0xef, 0xab, 0x57, 0x83, 0x64, 0xe8, 0x45,
+ 0x1b, 0x27, 0x48, 0xc8, 0x24, 0x48, 0x83, 0x20, 0x40, 0x07, 0xd3, 0x31, 0x50, 0x25, 0x96, 0xe3,
+ 0x9e, 0x8d, 0xa0, 0x08, 0x3a, 0xf4, 0x44, 0x3e, 0x91, 0x19, 0x93, 0x3c, 0x86, 0x3c, 0xa9, 0xd5,
+ 0xd6, 0xa9, 0x73, 0xd1, 0xa1, 0x7f, 0x41, 0xff, 0x85, 0x16, 0x68, 0x97, 0x8e, 0xcd, 0x54, 0x04,
+ 0x9d, 0x32, 0x09, 0x0d, 0xbb, 0x76, 0xeb, 0x66, 0x74, 0x28, 0xee, 0x78, 0x12, 0x29, 0x89, 0x8a,
+ 0xed, 0x0e, 0xde, 0x78, 0xcf, 0xc7, 0xef, 0xf9, 0x7e, 0xee, 0x88, 0xb6, 0x8f, 0xef, 0x47, 0xba,
+ 0xc3, 0x8c, 0xe3, 0x4e, 0x0b, 0x42, 0x1f, 0x38, 0x44, 0x46, 0x17, 0x7c, 0x9b, 0x85, 0x86, 0x62,
+ 0xd0, 0xc0, 0x31, 0x22, 0xce, 0x42, 0xda, 0x06, 0xa3, 0x7b, 0xbb, 0x05, 0x9c, 0xde, 0x36, 0xda,
+ 0xe0, 0x43, 0x48, 0x39, 0xd8, 0x7a, 0x10, 0x32, 0xce, 0x70, 0x25, 0x91, 0xd5, 0x69, 0xe0, 0xe8,
+ 0x4a, 0x56, 0x57, 0xb2, 0x95, 0x9b, 0x6d, 0x87, 0x1f, 0x75, 0x5a, 0xba, 0xc5, 0x3c, 0xa3, 0xcd,
+ 0xda, 0xcc, 0x90, 0x2a, 0xad, 0xce, 0x73, 0x79, 0x92, 0x07, 0xf9, 0x95, 0x40, 0x55, 0xea, 0x19,
+ 0xb3, 0x16, 0x0b, 0x85, 0xcd, 0x71, 0x73, 0x95, 0xbb, 0xa9, 0x8c, 0x47, 0xad, 0x23, 0xc7, 0x87,
+ 0xb0, 0x67, 0x04, 0xc7, 0x6d, 0x41, 0x88, 0x0c, 0x0f, 0x38, 0xcd, 0xd3, 0x32, 0xa6, 0x69, 0x85,
+ 0x1d, 0x9f, 0x3b, 0x1e, 0x4c, 0x28, 0xdc, 0x3b, 0x4d, 0x21, 0xb2, 0x8e, 0xc0, 0xa3, 0xe3, 0x7a,
+ 0xf5, 0x9f, 0x34, 0xb4, 0xbc, 0x7d, 0xd0, 0x78, 0x18, 0x3a, 0x5d, 0x08, 0xf1, 0x67, 0x68, 0x49,
+ 0x78, 0x64, 0x53, 0x4e, 0xcb, 0xda, 0x86, 0xb6, 0x59, 0xbc, 0x73, 0x4b, 0x4f, 0xd3, 0x35, 0x04,
+ 0xd6, 0x83, 0xe3, 0xb6, 0x20, 0x44, 0xba, 0x90, 0xd6, 0xbb, 0xb7, 0xf5, 0x27, 0xad, 0x17, 0x60,
+ 0xf1, 0x26, 0x70, 0x6a, 0xe2, 0x57, 0xfd, 0xda, 0x4c, 0xdc, 0xaf, 0xa1, 0x94, 0x46, 0x86, 0xa8,
+ 0xf8, 0x31, 0x9a, 0x8b, 0x02, 0xb0, 0xca, 0xb3, 0x12, 0xfd, 0x9a, 0x3e, 0xbd, 0x18, 0xfa, 0xd0,
+ 0xad, 0x83, 0x00, 0x2c, 0x73, 0x45, 0xc1, 0xce, 0x89, 0x13, 0x91, 0x20, 0xf5, 0x1f, 0x35, 0xb4,
+ 0x3a, 0x94, 0xda, 0x75, 0x22, 0x8e, 0x3f, 0x9d, 0x08, 0x40, 0x3f, 0x5b, 0x00, 0x42, 0x5b, 0xba,
+ 0x5f, 0x52, 0x76, 0x96, 0x06, 0x94, 0x8c, 0xf3, 0x8f, 0xd0, 0xbc, 0xc3, 0xc1, 0x8b, 0xca, 0xb3,
+ 0x1b, 0x85, 0xcd, 0xe2, 0x9d, 0xf7, 0xce, 0xe4, 0xbd, 0xb9, 0xaa, 0x10, 0xe7, 0x1b, 0x42, 0x97,
+ 0x24, 0x10, 0xf5, 0x3f, 0xb3, 0xbe, 0x8b, 0x98, 0xf0, 0x03, 0x74, 0x89, 0x72, 0x4e, 0xad, 0x23,
+ 0x02, 0x2f, 0x3b, 0x4e, 0x08, 0xb6, 0x8c, 0x60, 0xc9, 0xc4, 0x71, 0xbf, 0x76, 0x69, 0x6b, 0x84,
+ 0x43, 0xc6, 0x24, 0x85, 0x6e, 0xc0, 0xec, 0x86, 0xff, 0x9c, 0x3d, 0xf1, 0x9b, 0xac, 0xe3, 0x73,
+ 0x99, 0x60, 0xa5, 0xbb, 0x3f, 0xc2, 0x21, 0x63, 0x92, 0xd8, 0x42, 0xeb, 0x5d, 0xe6, 0x76, 0x3c,
+ 0xd8, 0x75, 0x9e, 0x83, 0xd5, 0xb3, 0x5c, 0x68, 0x32, 0x1b, 0xa2, 0x72, 0x61, 0xa3, 0xb0, 0xb9,
+ 0x6c, 0x1a, 0x71, 0xbf, 0xb6, 0xfe, 0x34, 0x87, 0x7f, 0xd2, 0xaf, 0x5d, 0xc9, 0xa1, 0x93, 0x5c,
+ 0xb0, 0xfa, 0x0f, 0x1a, 0x5a, 0xdc, 0x3e, 0x68, 0xec, 0x31, 0x1b, 0x2e, 0xa0, 0xcb, 0x1a, 0x23,
+ 0x5d, 0x76, 0xf5, 0x94, 0x3a, 0x09, 0xa7, 0xa6, 0xf6, 0xd8, 0x5f, 0x49, 0x9d, 0x84, 0x8c, 0x1a,
+ 0x92, 0x0d, 0x34, 0xe7, 0x53, 0x0f, 0xa4, 0xeb, 0xcb, 0xa9, 0xce, 0x1e, 0xf5, 0x80, 0x48, 0x0e,
+ 0x7e, 0x1f, 0x2d, 0xf8, 0xcc, 0x86, 0xc6, 0x43, 0xe9, 0xc0, 0xb2, 0x79, 0x49, 0xc9, 0x2c, 0xec,
+ 0x49, 0x2a, 0x51, 0x5c, 0x7c, 0x17, 0xad, 0x70, 0x16, 0x30, 0x97, 0xb5, 0x7b, 0x8f, 0xa1, 0x37,
+ 0xc8, 0x78, 0x29, 0xee, 0xd7, 0x56, 0x0e, 0x33, 0x74, 0x32, 0x22, 0x85, 0x5b, 0xa8, 0x48, 0x5d,
+ 0x97, 0x59, 0x94, 0xd3, 0x96, 0x0b, 0xe5, 0x39, 0x19, 0xa3, 0xf1, 0xae, 0x18, 0x93, 0x32, 0x09,
+ 0xe3, 0x04, 0x22, 0xd6, 0x09, 0x2d, 0x88, 0xcc, 0xcb, 0x71, 0xbf, 0x56, 0xdc, 0x4a, 0x71, 0x48,
+ 0x16, 0xb4, 0xfe, 0xbd, 0x86, 0x8a, 0x2a, 0xea, 0x0b, 0x98, 0xab, 0x8f, 0x46, 0xe7, 0xea, 0xff,
+ 0x67, 0xa8, 0xd7, 0x94, 0xa9, 0xb2, 0x86, 0x6e, 0xcb, 0x91, 0x3a, 0x44, 0x8b, 0xb6, 0x2c, 0x5a,
+ 0x54, 0xd6, 0x24, 0xf4, 0xb5, 0x33, 0x40, 0xab, 0xb1, 0xbd, 0xac, 0x0c, 0x2c, 0x26, 0xe7, 0x88,
+ 0x0c, 0xa0, 0xea, 0xdf, 0x2c, 0xa0, 0x95, 0x83, 0x44, 0x77, 0xdb, 0xa5, 0x51, 0x74, 0x01, 0x0d,
+ 0xfd, 0x01, 0x2a, 0x06, 0x21, 0xeb, 0x3a, 0x91, 0xc3, 0x7c, 0x08, 0x55, 0x5b, 0x5d, 0x51, 0x2a,
+ 0xc5, 0xfd, 0x94, 0x45, 0xb2, 0x72, 0xd8, 0x45, 0x28, 0xa0, 0x21, 0xf5, 0x80, 0x8b, 0x14, 0x14,
+ 0x64, 0x0a, 0xee, 0xbf, 0x2b, 0x05, 0xd9, 0xb0, 0xf4, 0xfd, 0xa1, 0xea, 0x8e, 0xcf, 0xc3, 0x5e,
+ 0xea, 0x62, 0xca, 0x20, 0x19, 0x7c, 0x7c, 0x8c, 0x56, 0x43, 0xb0, 0x5c, 0xea, 0x78, 0xfb, 0xcc,
+ 0x75, 0xac, 0x9e, 0x6c, 0xcd, 0x65, 0x73, 0x27, 0xee, 0xd7, 0x56, 0x49, 0x96, 0x71, 0xd2, 0xaf,
+ 0xdd, 0x9a, 0xbc, 0x3a, 0xf5, 0x7d, 0x08, 0x23, 0x27, 0xe2, 0xe0, 0xf3, 0xa4, 0x61, 0x47, 0x74,
+ 0xc8, 0x28, 0xb6, 0x98, 0x1d, 0x4f, 0xac, 0xaf, 0x27, 0x01, 0x77, 0x98, 0x1f, 0x95, 0xe7, 0xd3,
+ 0xd9, 0x69, 0x66, 0xe8, 0x64, 0x44, 0x0a, 0xef, 0xa2, 0x75, 0xd1, 0xe6, 0x9f, 0x27, 0x06, 0x76,
+ 0xbe, 0x08, 0xa8, 0x2f, 0x52, 0x55, 0x5e, 0x90, 0xdb, 0xb2, 0x2c, 0x76, 0xdd, 0x56, 0x0e, 0x9f,
+ 0xe4, 0x6a, 0xe1, 0x4f, 0xd0, 0x5a, 0xb2, 0xec, 0x4c, 0xc7, 0xb7, 0x1d, 0xbf, 0x2d, 0x56, 0x5d,
+ 0x79, 0x51, 0x06, 0x7d, 0x3d, 0xee, 0xd7, 0xd6, 0x9e, 0x8e, 0x33, 0x4f, 0xf2, 0x88, 0x64, 0x12,
+ 0x04, 0xbf, 0x44, 0x6b, 0xd2, 0x22, 0xd8, 0x6a, 0x11, 0x38, 0x10, 0x95, 0x97, 0x64, 0xfd, 0x36,
+ 0xb3, 0xf5, 0x13, 0xa9, 0x13, 0x8d, 0x34, 0x58, 0x17, 0x07, 0xe0, 0x82, 0xc5, 0x59, 0x78, 0x08,
+ 0xa1, 0x67, 0xfe, 0x4f, 0xd5, 0x6b, 0x6d, 0x6b, 0x1c, 0x8a, 0x4c, 0xa2, 0x57, 0x3e, 0x44, 0x97,
+ 0xc7, 0x0a, 0x8e, 0x4b, 0xa8, 0x70, 0x0c, 0xbd, 0x64, 0xd1, 0x11, 0xf1, 0x89, 0xd7, 0xd1, 0x7c,
+ 0x97, 0xba, 0x1d, 0x48, 0x3a, 0x90, 0x24, 0x87, 0x07, 0xb3, 0xf7, 0xb5, 0xfa, 0xcf, 0x1a, 0x2a,
+ 0x65, 0xbb, 0xe7, 0x02, 0xd6, 0x46, 0x73, 0x74, 0x6d, 0x6c, 0x9e, 0xb5, 0xb1, 0xa7, 0xec, 0x8e,
+ 0xef, 0x66, 0x51, 0x29, 0x29, 0x4e, 0x72, 0xd9, 0x7a, 0xe0, 0xf3, 0x0b, 0x18, 0x6d, 0x32, 0x72,
+ 0x57, 0xdd, 0x3a, 0x7d, 0x8f, 0xa7, 0xde, 0x4d, 0xbb, 0xb4, 0xf0, 0x33, 0xb4, 0x10, 0x71, 0xca,
+ 0x3b, 0x62, 0xe6, 0x05, 0xea, 0x9d, 0x73, 0xa1, 0x4a, 0xcd, 0xf4, 0xd2, 0x4a, 0xce, 0x44, 0x21,
+ 0xd6, 0x7f, 0xd1, 0xd0, 0xfa, 0xb8, 0xca, 0x05, 0x14, 0xfb, 0xe3, 0xd1, 0x62, 0xdf, 0x38, 0x4f,
+ 0x44, 0x53, 0x0a, 0xfe, 0x9b, 0x86, 0xfe, 0x33, 0x11, 0xbc, 0xbc, 0x1e, 0xc5, 0x9e, 0x08, 0xc6,
+ 0xb6, 0xd1, 0x5e, 0x7a, 0xe7, 0xcb, 0x3d, 0xb1, 0x9f, 0xc3, 0x27, 0xb9, 0x5a, 0xf8, 0x05, 0x2a,
+ 0x39, 0xbe, 0xeb, 0xf8, 0x90, 0xd0, 0x0e, 0xd2, 0x72, 0xe7, 0x0e, 0xf3, 0x38, 0xb2, 0x2c, 0xf3,
+ 0x7a, 0xdc, 0xaf, 0x95, 0x1a, 0x63, 0x28, 0x64, 0x02, 0xb7, 0xfe, 0x6b, 0x4e, 0x79, 0xe4, 0x5d,
+ 0x78, 0x03, 0x2d, 0x25, 0x8f, 0x46, 0x08, 0x55, 0x18, 0xc3, 0x74, 0x6f, 0x29, 0x3a, 0x19, 0x4a,
+ 0xc8, 0x0e, 0x92, 0xa9, 0x50, 0x8e, 0x9e, 0xaf, 0x83, 0xa4, 0x66, 0xa6, 0x83, 0xe4, 0x99, 0x28,
+ 0x44, 0xe1, 0x89, 0x78, 0x00, 0xc9, 0x84, 0x16, 0x46, 0x3d, 0xd9, 0x53, 0x74, 0x32, 0x94, 0xa8,
+ 0xff, 0x5d, 0xc8, 0xa9, 0x92, 0x6c, 0xc5, 0x4c, 0x48, 0x83, 0xb7, 0xf2, 0x78, 0x48, 0xf6, 0x30,
+ 0x24, 0x1b, 0x7f, 0xab, 0x21, 0x4c, 0x87, 0x10, 0xcd, 0x41, 0xab, 0x26, 0xfd, 0xf4, 0xe8, 0xfc,
+ 0x13, 0xa2, 0x6f, 0x4d, 0x80, 0x25, 0xf7, 0x64, 0x45, 0x39, 0x81, 0x27, 0x05, 0x48, 0x8e, 0x07,
+ 0xd8, 0x41, 0xc5, 0x84, 0xba, 0x13, 0x86, 0x2c, 0x54, 0x23, 0x7b, 0xf5, 0x74, 0x87, 0xa4, 0xb8,
+ 0x59, 0x95, 0x0f, 0xb9, 0x54, 0xff, 0xa4, 0x5f, 0x2b, 0x66, 0xf8, 0x24, 0x8b, 0x2d, 0x4c, 0xd9,
+ 0x90, 0x9a, 0x9a, 0xfb, 0x17, 0xa6, 0x1e, 0xc2, 0x74, 0x53, 0x19, 0xec, 0xca, 0x0e, 0xfa, 0xef,
+ 0x94, 0x04, 0x9d, 0xeb, 0x5e, 0xf9, 0x4a, 0x43, 0x59, 0x1b, 0x78, 0x17, 0xcd, 0x89, 0xff, 0x59,
+ 0xb5, 0x61, 0xae, 0x9f, 0x6d, 0xc3, 0x1c, 0x3a, 0x1e, 0xa4, 0x8b, 0x52, 0x9c, 0x88, 0x44, 0xc1,
+ 0xd7, 0xd0, 0xa2, 0x07, 0x51, 0x44, 0xdb, 0xca, 0x72, 0xfa, 0xea, 0x6b, 0x26, 0x64, 0x32, 0xe0,
+ 0xd7, 0xef, 0xa1, 0x2b, 0x39, 0xef, 0x68, 0x5c, 0x43, 0xf3, 0x96, 0xfc, 0xe1, 0x12, 0x0e, 0xcd,
+ 0x9b, 0xcb, 0x62, 0xcb, 0x6c, 0xcb, 0xff, 0xac, 0x84, 0x6e, 0xde, 0x7c, 0xf5, 0xb6, 0x3a, 0xf3,
+ 0xfa, 0x6d, 0x75, 0xe6, 0xcd, 0xdb, 0xea, 0xcc, 0x97, 0x71, 0x55, 0x7b, 0x15, 0x57, 0xb5, 0xd7,
+ 0x71, 0x55, 0x7b, 0x13, 0x57, 0xb5, 0xdf, 0xe3, 0xaa, 0xf6, 0xf5, 0x1f, 0xd5, 0x99, 0x67, 0x8b,
+ 0x2a, 0xdf, 0xff, 0x04, 0x00, 0x00, 0xff, 0xff, 0x72, 0xff, 0xde, 0x2e, 0xe4, 0x10, 0x00, 0x00,
}
func (m *CSIDriver) Marshal() (dAtA []byte, err error) {
@@ -715,6 +717,15 @@ func (m *CSIDriverSpec) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
+ if len(m.VolumeLifecycleModes) > 0 {
+ for iNdEx := len(m.VolumeLifecycleModes) - 1; iNdEx >= 0; iNdEx-- {
+ i -= len(m.VolumeLifecycleModes[iNdEx])
+ copy(dAtA[i:], m.VolumeLifecycleModes[iNdEx])
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.VolumeLifecycleModes[iNdEx])))
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
if m.PodInfoOnMount != nil {
i--
if *m.PodInfoOnMount {
@@ -1458,6 +1469,12 @@ func (m *CSIDriverSpec) Size() (n int) {
if m.PodInfoOnMount != nil {
n += 2
}
+ if len(m.VolumeLifecycleModes) > 0 {
+ for _, s := range m.VolumeLifecycleModes {
+ l = len(s)
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ }
return n
}
@@ -1745,6 +1762,7 @@ func (this *CSIDriverSpec) String() string {
s := strings.Join([]string{`&CSIDriverSpec{`,
`AttachRequired:` + valueToStringGenerated(this.AttachRequired) + `,`,
`PodInfoOnMount:` + valueToStringGenerated(this.PodInfoOnMount) + `,`,
+ `VolumeLifecycleModes:` + fmt.Sprintf("%v", this.VolumeLifecycleModes) + `,`,
`}`,
}, "")
return s
@@ -2265,6 +2283,38 @@ func (m *CSIDriverSpec) Unmarshal(dAtA []byte) error {
}
b := bool(v != 0)
m.PodInfoOnMount = &b
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field VolumeLifecycleModes", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.VolumeLifecycleModes = append(m.VolumeLifecycleModes, VolumeLifecycleMode(dAtA[iNdEx:postIndex]))
+ iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipGenerated(dAtA[iNdEx:])
diff --git a/vendor/k8s.io/api/storage/v1beta1/generated.proto b/vendor/k8s.io/api/storage/v1beta1/generated.proto
index 3bcc2139cadb6..373a154b11249 100644
--- a/vendor/k8s.io/api/storage/v1beta1/generated.proto
+++ b/vendor/k8s.io/api/storage/v1beta1/generated.proto
@@ -45,7 +45,7 @@ message CSIDriver {
// The driver name must be 63 characters or less, beginning and ending with
// an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and
// alphanumerics between.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
// Specification of the CSI Driver.
@@ -55,7 +55,7 @@ message CSIDriver {
// CSIDriverList is a collection of CSIDriver objects.
message CSIDriverList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -93,10 +93,36 @@ message CSIDriverSpec {
// "csi.storage.k8s.io/pod.name": pod.Name
// "csi.storage.k8s.io/pod.namespace": pod.Namespace
// "csi.storage.k8s.io/pod.uid": string(pod.UID)
+ // "csi.storage.k8s.io/ephemeral": "true" iff the volume is an ephemeral inline volume
+ // defined by a CSIVolumeSource, otherwise "false"
+ //
+ // "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only
+ // required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode.
+ // Other drivers can leave pod info disabled and/or ignore this field.
+ // As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when
+ // deployed on such a cluster and the deployment determines which mode that is, for example
+ // via a command line parameter of the driver.
// +optional
optional bool podInfoOnMount = 2;
+
+ // VolumeLifecycleModes defines what kind of volumes this CSI volume driver supports.
+ // The default if the list is empty is "Persistent", which is the usage
+ // defined by the CSI specification and implemented in Kubernetes via the usual
+ // PV/PVC mechanism.
+ // The other mode is "Ephemeral". In this mode, volumes are defined inline
+ // inside the pod spec with CSIVolumeSource and their lifecycle is tied to
+ // the lifecycle of that pod. A driver has to be aware of this
+ // because it is only going to get a NodePublishVolume call for such a volume.
+ // For more information about implementing this mode, see
+ // https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html
+ // A driver can support one or more of these modes and
+ // more modes may be added in the future.
+ // +optional
+ repeated string volumeLifecycleModes = 3;
}
+// DEPRECATED - This group version of CSINode is deprecated by storage/v1/CSINode.
+// See the release notes for more information.
// CSINode holds information about all CSI drivers installed on a node.
// CSI drivers do not need to create the CSINode object directly. As long as
// they use the node-driver-registrar sidecar container, the kubelet will
@@ -153,7 +179,7 @@ message CSINodeDriver {
// CSINodeList is a collection of CSINode objects.
message CSINodeList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -177,7 +203,7 @@ message CSINodeSpec {
// according to etcd is in ObjectMeta.Name.
message StorageClass {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -221,7 +247,7 @@ message StorageClass {
// StorageClassList is a collection of storage classes.
message StorageClassList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
@@ -235,7 +261,7 @@ message StorageClassList {
// VolumeAttachment objects are non-namespaced.
message VolumeAttachment {
// Standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ObjectMeta metadata = 1;
@@ -253,7 +279,7 @@ message VolumeAttachment {
// VolumeAttachmentList is a collection of VolumeAttachment objects.
message VolumeAttachmentList {
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 1;
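
A small illustrative helper encoding the rule the `volumeLifecycleModes` comment above describes: an empty list means the driver supports only the default `"Persistent"` mode. The `VolumeLifecycleEphemeral` constant is assumed to follow the naming of `VolumeLifecyclePersistent` defined in the types file below:

```go
// Sketch (not part of the vendored code) of checking a v1beta1 CSIDriverSpec
// for ephemeral inline volume support.
package main

import (
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
)

func supportsEphemeral(spec storagev1beta1.CSIDriverSpec) bool {
	for _, m := range spec.VolumeLifecycleModes {
		if m == storagev1beta1.VolumeLifecycleEphemeral {
			return true
		}
	}
	return false // empty list defaults to Persistent only
}

func main() {
	persistentOnly := storagev1beta1.CSIDriverSpec{} // no modes listed
	both := storagev1beta1.CSIDriverSpec{
		VolumeLifecycleModes: []storagev1beta1.VolumeLifecycleMode{
			storagev1beta1.VolumeLifecyclePersistent,
			storagev1beta1.VolumeLifecycleEphemeral,
		},
	}
	fmt.Println(supportsEphemeral(persistentOnly)) // false
	fmt.Println(supportsEphemeral(both))           // true
}
```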
diff --git a/vendor/k8s.io/api/storage/v1beta1/types.go b/vendor/k8s.io/api/storage/v1beta1/types.go
index 762fcfcd001ab..a8faeb9d13075 100644
--- a/vendor/k8s.io/api/storage/v1beta1/types.go
+++ b/vendor/k8s.io/api/storage/v1beta1/types.go
@@ -33,7 +33,7 @@ import (
type StorageClass struct {
metav1.TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -80,7 +80,7 @@ type StorageClass struct {
type StorageClassList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -115,7 +115,7 @@ type VolumeAttachment struct {
metav1.TypeMeta `json:",inline"`
// Standard object metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -136,7 +136,7 @@ type VolumeAttachment struct {
type VolumeAttachmentList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -239,7 +239,7 @@ type CSIDriver struct {
// The driver name must be 63 characters or less, beginning and ending with
// an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and
// alphanumerics between.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Specification of the CSI Driver.
@@ -253,7 +253,7 @@ type CSIDriverList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -291,14 +291,65 @@ type CSIDriverSpec struct {
// "csi.storage.k8s.io/pod.name": pod.Name
// "csi.storage.k8s.io/pod.namespace": pod.Namespace
// "csi.storage.k8s.io/pod.uid": string(pod.UID)
+ // "csi.storage.k8s.io/ephemeral": "true" iff the volume is an ephemeral inline volume
+ // defined by a CSIVolumeSource, otherwise "false"
+ //
+ // "csi.storage.k8s.io/ephemeral" is a new feature in Kubernetes 1.16. It is only
+ // required for drivers which support both the "Persistent" and "Ephemeral" VolumeLifecycleMode.
+ // Other drivers can leave pod info disabled and/or ignore this field.
+ // As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when
+ // deployed on such a cluster and the deployment determines which mode that is, for example
+ // via a command line parameter of the driver.
// +optional
PodInfoOnMount *bool `json:"podInfoOnMount,omitempty" protobuf:"bytes,2,opt,name=podInfoOnMount"`
+
+ // VolumeLifecycleModes defines what kind of volumes this CSI volume driver supports.
+ // The default if the list is empty is "Persistent", which is the usage
+ // defined by the CSI specification and implemented in Kubernetes via the usual
+ // PV/PVC mechanism.
+ // The other mode is "Ephemeral". In this mode, volumes are defined inline
+ // inside the pod spec with CSIVolumeSource and their lifecycle is tied to
+ // the lifecycle of that pod. A driver has to be aware of this
+ // because it is only going to get a NodePublishVolume call for such a volume.
+ // For more information about implementing this mode, see
+ // https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html
+ // A driver can support one or more of these modes and
+ // more modes may be added in the future.
+ // +optional
+ VolumeLifecycleModes []VolumeLifecycleMode `json:"volumeLifecycleModes,omitempty" protobuf:"bytes,3,opt,name=volumeLifecycleModes"`
}
+// VolumeLifecycleMode is an enumeration of possible usage modes for a volume
+// provided by a CSI driver. More modes may be added in the future.
+type VolumeLifecycleMode string
+
+const (
+ // VolumeLifecyclePersistent explicitly confirms that the driver implements
+ // the full CSI spec. It is the default when CSIDriverSpec.VolumeLifecycleModes is not
+ // set. Such volumes are managed in Kubernetes via the persistent volume
+ // claim mechanism and have a lifecycle that is independent of the pods which
+ // use them.
+ VolumeLifecyclePersistent VolumeLifecycleMode = "Persistent"
+
+ // VolumeLifecycleEphemeral indicates that the driver can be used for
+ // ephemeral inline volumes. Such volumes are specified inside the pod
+ // spec with a CSIVolumeSource and, as far as Kubernetes is concerned, have
+ // a lifecycle that is tied to the lifecycle of the pod. For example, such
+ // a volume might contain data that gets created specifically for that pod,
+ // like secrets.
+ // But how the volume actually gets created and managed is entirely up to
+ // the driver. It might also use reference counting to share the same volume
+ // instance among different pods if the CSIVolumeSource of those pods is
+ // identical.
+ VolumeLifecycleEphemeral VolumeLifecycleMode = "Ephemeral"
+)
+
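
Tying the two new pieces together: a driver that lists both lifecycle modes should also enable pod info so it can read the `"csi.storage.k8s.io/ephemeral"` context on mount. A minimal sketch of such a CSIDriver object, with a hypothetical driver name and only the fields this hunk introduces:

```go
package main

import (
	"fmt"

	storagev1beta1 "k8s.io/api/storage/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	podInfo := true
	driver := storagev1beta1.CSIDriver{
		// Hypothetical driver name; it must match the CSI GetPluginName().
		ObjectMeta: metav1.ObjectMeta{Name: "hypothetical.csi.example.com"},
		Spec: storagev1beta1.CSIDriverSpec{
			// Needed so the kubelet passes the ephemeral/persistent hint
			// via the "csi.storage.k8s.io/ephemeral" volume context.
			PodInfoOnMount: &podInfo,
			VolumeLifecycleModes: []storagev1beta1.VolumeLifecycleMode{
				storagev1beta1.VolumeLifecyclePersistent,
				storagev1beta1.VolumeLifecycleEphemeral,
			},
		},
	}
	fmt.Println(driver.Name, driver.Spec.VolumeLifecycleModes)
}
```
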
// +genclient
// +genclient:nonNamespaced
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+// DEPRECATED - This group version of CSINode is deprecated by storage/v1/CSINode.
+// See the release notes for more information.
// CSINode holds information about all CSI drivers installed on a node.
// CSI drivers do not need to create the CSINode object directly. As long as
// they use the node-driver-registrar sidecar container, the kubelet will
@@ -380,7 +431,7 @@ type CSINodeList struct {
metav1.TypeMeta `json:",inline"`
// Standard list metadata
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
metav1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
diff --git a/vendor/k8s.io/api/storage/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/api/storage/v1beta1/types_swagger_doc_generated.go
index 0bc3456b97746..53fa666ba0ac0 100644
--- a/vendor/k8s.io/api/storage/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/api/storage/v1beta1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1beta1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_CSIDriver = map[string]string{
"": "CSIDriver captures information about a Container Storage Interface (CSI) volume driver deployed on the cluster. CSI drivers do not need to create the CSIDriver object directly. Instead they may use the cluster-driver-registrar sidecar container. When deployed with a CSI driver it automatically creates a CSIDriver object representing the driver. Kubernetes attach detach controller uses this object to determine whether attach is required. Kubelet uses this object to determine whether pod information needs to be passed on mount. CSIDriver objects are non-namespaced.",
- "metadata": "Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object metadata. metadata.Name indicates the name of the CSI driver that this object refers to; it MUST be the same name returned by the CSI GetPluginName() call for that driver. The driver name must be 63 characters or less, beginning and ending with an alphanumeric character ([a-z0-9A-Z]) with dashes (-), dots (.), and alphanumerics between. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "Specification of the CSI Driver.",
}
@@ -39,7 +39,7 @@ func (CSIDriver) SwaggerDoc() map[string]string {
var map_CSIDriverList = map[string]string{
"": "CSIDriverList is a collection of CSIDriver objects.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of CSIDriver",
}
@@ -48,9 +48,10 @@ func (CSIDriverList) SwaggerDoc() map[string]string {
}
var map_CSIDriverSpec = map[string]string{
- "": "CSIDriverSpec is the specification of a CSIDriver.",
- "attachRequired": "attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called.",
- "podInfoOnMount": "If set to true, podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. The following VolumeConext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. \"csi.storage.k8s.io/pod.name\": pod.Name \"csi.storage.k8s.io/pod.namespace\": pod.Namespace \"csi.storage.k8s.io/pod.uid\": string(pod.UID)",
+ "": "CSIDriverSpec is the specification of a CSIDriver.",
+ "attachRequired": "attachRequired indicates this CSI volume driver requires an attach operation (because it implements the CSI ControllerPublishVolume() method), and that the Kubernetes attach detach controller should call the attach volume interface which checks the volumeattachment status and waits until the volume is attached before proceeding to mounting. The CSI external-attacher coordinates with CSI volume driver and updates the volumeattachment status when the attach operation is complete. If the CSIDriverRegistry feature gate is enabled and the value is specified to false, the attach operation will be skipped. Otherwise the attach operation will be called.",
+ "podInfoOnMount": "If set to true, podInfoOnMount indicates this CSI volume driver requires additional pod information (like podName, podUID, etc.) during mount operations. If set to false, pod information will not be passed on mount. Default is false. The CSI driver specifies podInfoOnMount as part of driver deployment. If true, Kubelet will pass pod information as VolumeContext in the CSI NodePublishVolume() calls. The CSI driver is responsible for parsing and validating the information passed in as VolumeContext. The following VolumeConext will be passed if podInfoOnMount is set to true. This list might grow, but the prefix will be used. \"csi.storage.k8s.io/pod.name\": pod.Name \"csi.storage.k8s.io/pod.namespace\": pod.Namespace \"csi.storage.k8s.io/pod.uid\": string(pod.UID) \"csi.storage.k8s.io/ephemeral\": \"true\" iff the volume is an ephemeral inline volume\n defined by a CSIVolumeSource, otherwise \"false\"\n\n\"csi.storage.k8s.io/ephemeral\" is a new feature in Kubernetes 1.16. It is only required for drivers which support both the \"Persistent\" and \"Ephemeral\" VolumeLifecycleMode. Other drivers can leave pod info disabled and/or ignore this field. As Kubernetes 1.15 doesn't support this field, drivers can only support one mode when deployed on such a cluster and the deployment determines which mode that is, for example via a command line parameter of the driver.",
+ "volumeLifecycleModes": "VolumeLifecycleModes defines what kind of volumes this CSI volume driver supports. The default if the list is empty is \"Persistent\", which is the usage defined by the CSI specification and implemented in Kubernetes via the usual PV/PVC mechanism. The other mode is \"Ephemeral\". In this mode, volumes are defined inline inside the pod spec with CSIVolumeSource and their lifecycle is tied to the lifecycle of that pod. A driver has to be aware of this because it is only going to get a NodePublishVolume call for such a volume. For more information about implementing this mode, see https://kubernetes-csi.github.io/docs/ephemeral-local-volumes.html A driver can support one or more of these modes and more modes may be added in the future.",
}
func (CSIDriverSpec) SwaggerDoc() map[string]string {
@@ -58,7 +59,7 @@ func (CSIDriverSpec) SwaggerDoc() map[string]string {
}
var map_CSINode = map[string]string{
- "": "CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object.",
+ "": "DEPRECATED - This group version of CSINode is deprecated by storage/v1/CSINode. See the release notes for more information. CSINode holds information about all CSI drivers installed on a node. CSI drivers do not need to create the CSINode object directly. As long as they use the node-driver-registrar sidecar container, the kubelet will automatically populate the CSINode object for the CSI driver as part of kubelet plugin registration. CSINode has the same name as a node. If the object is missing, it means either there are no CSI Drivers available on the node, or the Kubelet version is low enough that it doesn't create this object. CSINode has an OwnerReference that points to the corresponding node object.",
"metadata": "metadata.name must be the Kubernetes node name.",
"spec": "spec is the specification of CSINode",
}
@@ -81,7 +82,7 @@ func (CSINodeDriver) SwaggerDoc() map[string]string {
var map_CSINodeList = map[string]string{
"": "CSINodeList is a collection of CSINode objects.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "items is the list of CSINode",
}
@@ -100,7 +101,7 @@ func (CSINodeSpec) SwaggerDoc() map[string]string {
var map_StorageClass = map[string]string{
"": "StorageClass describes the parameters for a class of storage for which PersistentVolumes can be dynamically provisioned.\n\nStorageClasses are non-namespaced; the name of the storage class according to etcd is in ObjectMeta.Name.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"provisioner": "Provisioner indicates the type of the provisioner.",
"parameters": "Parameters holds the parameters for the provisioner that should create volumes of this storage class.",
"reclaimPolicy": "Dynamically provisioned PersistentVolumes of this storage class are created with this reclaimPolicy. Defaults to Delete.",
@@ -116,7 +117,7 @@ func (StorageClass) SwaggerDoc() map[string]string {
var map_StorageClassList = map[string]string{
"": "StorageClassList is a collection of storage classes.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of StorageClasses",
}
@@ -126,7 +127,7 @@ func (StorageClassList) SwaggerDoc() map[string]string {
var map_VolumeAttachment = map[string]string{
"": "VolumeAttachment captures the intent to attach or detach the specified volume to/from the specified node.\n\nVolumeAttachment objects are non-namespaced.",
- "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"spec": "Specification of the desired attach/detach volume behavior. Populated by the Kubernetes system.",
"status": "Status of the VolumeAttachment request. Populated by the entity completing the attach or detach operation, i.e. the external-attacher.",
}
@@ -137,7 +138,7 @@ func (VolumeAttachment) SwaggerDoc() map[string]string {
var map_VolumeAttachmentList = map[string]string{
"": "VolumeAttachmentList is a collection of VolumeAttachment objects.",
- "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard list metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"items": "Items is the list of VolumeAttachments",
}
diff --git a/vendor/k8s.io/api/storage/v1beta1/zz_generated.deepcopy.go b/vendor/k8s.io/api/storage/v1beta1/zz_generated.deepcopy.go
index 6b4726559fa84..52433fcdf2c77 100644
--- a/vendor/k8s.io/api/storage/v1beta1/zz_generated.deepcopy.go
+++ b/vendor/k8s.io/api/storage/v1beta1/zz_generated.deepcopy.go
@@ -98,6 +98,11 @@ func (in *CSIDriverSpec) DeepCopyInto(out *CSIDriverSpec) {
*out = new(bool)
**out = **in
}
+ if in.VolumeLifecycleModes != nil {
+ in, out := &in.VolumeLifecycleModes, &out.VolumeLifecycleModes
+ *out = make([]VolumeLifecycleMode, len(*in))
+ copy(*out, *in)
+ }
return
}
diff --git a/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS b/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS
index 63434030ca1ce..435297a8d5131 100644
--- a/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS
+++ b/vendor/k8s.io/apimachinery/pkg/api/errors/OWNERS
@@ -23,4 +23,3 @@ reviewers:
- krousey
- cjcullen
- david-mcmahon
-- goltermann
diff --git a/vendor/k8s.io/apimachinery/pkg/api/errors/errors.go b/vendor/k8s.io/apimachinery/pkg/api/errors/errors.go
index f4201eb691036..e53c3e61fd146 100644
--- a/vendor/k8s.io/apimachinery/pkg/api/errors/errors.go
+++ b/vendor/k8s.io/apimachinery/pkg/api/errors/errors.go
@@ -32,7 +32,9 @@ import (
const (
// StatusTooManyRequests means the server experienced too many requests within a
// given window and that the client must wait to perform the action again.
- StatusTooManyRequests = 429
+ // DEPRECATED: please use http.StatusTooManyRequests; this will be removed
+ // in a future version.
+ StatusTooManyRequests = http.StatusTooManyRequests
)
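
Aliasing the constant to `net/http` means callers can now compare status codes against the stdlib value directly. A small sketch using `NewTooManyRequestsError`, which is defined later in this same file:

```go
package main

import (
	"fmt"
	"net/http"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

func main() {
	err := apierrors.NewTooManyRequestsError("retry the request later")
	// The Status carries the stdlib constant now, not a package-local 429.
	fmt.Println(err.Status().Code == http.StatusTooManyRequests) // true
}
```
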
// StatusError is an error intended for consumption by a REST API server; it can also be
@@ -68,6 +70,28 @@ func (e *StatusError) DebugError() (string, []interface{}) {
return "server response object: %#v", []interface{}{e.ErrStatus}
}
+// HasStatusCause returns true if the provided error has a details cause
+// with the provided type name.
+func HasStatusCause(err error, name metav1.CauseType) bool {
+ _, ok := StatusCause(err, name)
+ return ok
+}
+
+// StatusCause returns the named cause from the provided error if it exists and
+// the error is of the type APIStatus. Otherwise it returns false.
+func StatusCause(err error, name metav1.CauseType) (metav1.StatusCause, bool) {
+ apierr, ok := err.(APIStatus)
+ if !ok || apierr == nil || apierr.Status().Details == nil {
+ return metav1.StatusCause{}, false
+ }
+ for _, cause := range apierr.Status().Details.Causes {
+ if cause.Type == name {
+ return cause, true
+ }
+ }
+ return metav1.StatusCause{}, false
+}
+
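
A quick usage sketch for the two new helpers; the `StatusError` is hand-built here purely for illustration, where in practice it would come back from an API call:

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	err := &apierrors.StatusError{ErrStatus: metav1.Status{
		Status: metav1.StatusFailure,
		Details: &metav1.StatusDetails{
			Causes: []metav1.StatusCause{
				{Type: metav1.CauseTypeFieldValueInvalid, Message: "spec.replicas must be >= 0"},
			},
		},
	}}

	// StatusCause retrieves the matching cause, HasStatusCause only probes.
	if cause, ok := apierrors.StatusCause(err, metav1.CauseTypeFieldValueInvalid); ok {
		fmt.Println("invalid field:", cause.Message)
	}
	fmt.Println(apierrors.HasStatusCause(err, metav1.CauseTypeUnexpectedServerResponse)) // false
}
```
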
// UnexpectedObjectError can be returned by FromObject if it's passed a non-status object.
type UnexpectedObjectError struct {
Object runtime.Object
@@ -199,6 +223,7 @@ func NewApplyConflict(causes []metav1.StatusCause, message string) *StatusError
}
// NewGone returns an error indicating the item no longer available at the server and no forwarding address is known.
+// DEPRECATED: Please use NewResourceExpired instead.
func NewGone(message string) *StatusError {
return &StatusError{metav1.Status{
Status: metav1.StatusFailure,
@@ -349,7 +374,7 @@ func NewTimeoutError(message string, retryAfterSeconds int) *StatusError {
func NewTooManyRequestsError(message string) *StatusError {
return &StatusError{metav1.Status{
Status: metav1.StatusFailure,
- Code: StatusTooManyRequests,
+ Code: http.StatusTooManyRequests,
Reason: metav1.StatusReasonTooManyRequests,
Message: fmt.Sprintf("Too many requests: %s", message),
}}
diff --git a/vendor/k8s.io/apimachinery/pkg/api/meta/OWNERS b/vendor/k8s.io/apimachinery/pkg/api/meta/OWNERS
index dd2c0cb614278..96bccff1b23bc 100644
--- a/vendor/k8s.io/apimachinery/pkg/api/meta/OWNERS
+++ b/vendor/k8s.io/apimachinery/pkg/api/meta/OWNERS
@@ -17,11 +17,7 @@ reviewers:
- eparis
- dims
- krousey
-- markturansky
-- fabioy
- resouer
- david-mcmahon
- mfojtik
- jianhuiz
-- feihujiang
-- ghodss
diff --git a/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS b/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS
index 8454be55ed015..dc7740190ae9b 100644
--- a/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS
+++ b/vendor/k8s.io/apimachinery/pkg/api/resource/OWNERS
@@ -11,8 +11,6 @@ reviewers:
- janetkuo
- tallclair
- eparis
-- jbeda
- xiang90
- mbohlool
- david-mcmahon
-- goltermann
diff --git a/vendor/k8s.io/apimachinery/pkg/api/resource/generated.proto b/vendor/k8s.io/apimachinery/pkg/api/resource/generated.proto
index acc9044452287..18a6c7cd6812a 100644
--- a/vendor/k8s.io/apimachinery/pkg/api/resource/generated.proto
+++ b/vendor/k8s.io/apimachinery/pkg/api/resource/generated.proto
@@ -26,7 +26,7 @@ option go_package = "resource";
// Quantity is a fixed-point representation of a number.
// It provides convenient marshaling/unmarshaling in JSON and YAML,
-// in addition to String() and Int64() accessors.
+// in addition to String() and AsInt64() accessors.
//
// The serialization format is:
//
diff --git a/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go b/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go
index b73b3b1414b3c..516d041dafdda 100644
--- a/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go
+++ b/vendor/k8s.io/apimachinery/pkg/api/resource/quantity.go
@@ -29,7 +29,7 @@ import (
// Quantity is a fixed-point representation of a number.
// It provides convenient marshaling/unmarshaling in JSON and YAML,
-// in addition to String() and Int64() accessors.
+// in addition to String() and AsInt64() accessors.
//
// The serialization format is:
//
@@ -726,21 +726,3 @@ func (q *Quantity) SetScaled(value int64, scale Scale) {
q.d.Dec = nil
q.i = int64Amount{value: value, scale: scale}
}
-
-// Copy is a convenience function that makes a deep copy for you. Non-deep
-// copies of quantities share pointers and you will regret that.
-func (q *Quantity) Copy() *Quantity {
- if q.d.Dec == nil {
- return &Quantity{
- s: q.s,
- i: q.i,
- Format: q.Format,
- }
- }
- tmp := &inf.Dec{}
- return &Quantity{
- s: q.s,
- d: infDecAmount{tmp.Set(q.d.Dec)},
- Format: q.Format,
- }
-}
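
Code that relied on the removed `Copy` can use the `DeepCopy` method that `Quantity` already provides, which likewise returns an independent value. A migration sketch:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	q := resource.MustParse("500Mi")

	// DeepCopy yields an independent value, so scaling the copy does
	// not disturb the original quantity.
	cp := q.DeepCopy()
	cp.Add(resource.MustParse("100Mi"))

	fmt.Println(q.String(), cp.String()) // 500Mi 600Mi
}
```
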
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/internalversion/register.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/internalversion/register.go
index 3fea2c380e533..b56140de5fce5 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/internalversion/register.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/internalversion/register.go
@@ -21,15 +21,11 @@ import (
metav1beta1 "k8s.io/apimachinery/pkg/apis/meta/v1beta1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
- "k8s.io/apimachinery/pkg/runtime/serializer"
)
// GroupName is the group name for this API.
const GroupName = "meta.k8s.io"
-// Scheme is the registry for any type that adheres to the meta API spec.
-var scheme = runtime.NewScheme()
-
var (
// TODO: move SchemeBuilder with zz_generated.deepcopy.go to k8s.io/api.
// localSchemeBuilder and AddToScheme will stay in k8s.io/kubernetes.
@@ -38,22 +34,16 @@ var (
AddToScheme = localSchemeBuilder.AddToScheme
)
-// Codecs provides access to encoding and decoding for the scheme.
-var Codecs = serializer.NewCodecFactory(scheme)
-
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: runtime.APIVersionInternal}
-// ParameterCodec handles versioning of objects that are converted to query parameters.
-var ParameterCodec = runtime.NewParameterCodec(scheme)
-
// Kind takes an unqualified kind and returns a Group qualified GroupKind
func Kind(kind string) schema.GroupKind {
return SchemeGroupVersion.WithKind(kind).GroupKind()
}
// addToGroupVersion registers common meta types into schemas.
-func addToGroupVersion(scheme *runtime.Scheme, groupVersion schema.GroupVersion) error {
+func addToGroupVersion(scheme *runtime.Scheme) error {
if err := scheme.AddIgnoredConversionType(&metav1.TypeMeta{}, &metav1.TypeMeta{}); err != nil {
return err
}
@@ -104,7 +94,6 @@ func addToGroupVersion(scheme *runtime.Scheme, groupVersion schema.GroupVersion)
// Unlike other API groups, meta internal knows about all meta external versions, but keeps
// the logic for conversion private.
func init() {
- if err := addToGroupVersion(scheme, SchemeGroupVersion); err != nil {
- panic(err)
- }
+ localSchemeBuilder.Register(addToGroupVersion)
+ localSchemeBuilder.Register(metav1.RegisterConversions)
}
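
With the package-level scheme, `Codecs`, and `ParameterCodec` gone, callers construct their own scheme and register explicitly. A sketch, assuming the usual `runtime.Scheme` API (`NewScheme`, `IsVersionRegistered`):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/internalversion"
	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// Registration is now explicit; the package no longer hides a
	// private global scheme behind Codecs/ParameterCodec.
	scheme := runtime.NewScheme()
	if err := internalversion.AddToScheme(scheme); err != nil {
		panic(err)
	}
	fmt.Println(scheme.IsVersionRegistered(internalversion.SchemeGroupVersion))
}
```
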
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/OWNERS b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/OWNERS
index 44929b1c002f0..77cfb0c1aa369 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/OWNERS
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/OWNERS
@@ -30,4 +30,3 @@ reviewers:
- mqliang
- kevin-wangzefeng
- jianhuiz
-- feihujiang
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/controller_ref.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/controller_ref.go
index 042cd5b9c5587..15b45ffa84b82 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/controller_ref.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/controller_ref.go
@@ -22,7 +22,7 @@ import (
// IsControlledBy checks if the object has a controllerRef set to the given owner
func IsControlledBy(obj Object, owner Object) bool {
- ref := GetControllerOf(obj)
+ ref := GetControllerOfNoCopy(obj)
if ref == nil {
return false
}
@@ -31,9 +31,20 @@ func IsControlledBy(obj Object, owner Object) bool {
// GetControllerOf returns a pointer to a copy of the controllerRef if controllee has a controller
func GetControllerOf(controllee Object) *OwnerReference {
- for _, ref := range controllee.GetOwnerReferences() {
- if ref.Controller != nil && *ref.Controller {
- return &ref
+ ref := GetControllerOfNoCopy(controllee)
+ if ref == nil {
+ return nil
+ }
+ cp := *ref
+ return &cp
+}
+
+// GetControllerOfNoCopy returns a pointer to the controllerRef if controllee has a controller
+func GetControllerOfNoCopy(controllee Object) *OwnerReference {
+ refs := controllee.GetOwnerReferences()
+ for i := range refs {
+ if refs[i].Controller != nil && *refs[i].Controller {
+ return &refs[i]
}
}
return nil
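
`IsControlledBy` now goes through the allocation-free variant, while `GetControllerOf` keeps its copy semantics. A sketch of the difference, using `ObjectMeta` directly since a pointer to it satisfies the `metav1.Object` interface:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	isController := true
	child := &metav1.ObjectMeta{
		Name: "pod-1",
		OwnerReferences: []metav1.OwnerReference{
			{Name: "rs-1", UID: types.UID("abc-123"), Controller: &isController},
		},
	}

	// GetControllerOf hands back a copy; GetControllerOfNoCopy would
	// alias the slice element instead.
	ref := metav1.GetControllerOf(child)
	ref.Name = "scratch" // mutate only the copy
	fmt.Println(child.OwnerReferences[0].Name) // still "rs-1"
}
```
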
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/conversion.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/conversion.go
index d07069ef245c1..285a41a422849 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/conversion.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/conversion.go
@@ -18,6 +18,7 @@ package v1
import (
"fmt"
+ "net/url"
"strconv"
"strings"
@@ -26,6 +27,7 @@ import (
"k8s.io/apimachinery/pkg/fields"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/intstr"
)
@@ -35,12 +37,17 @@ func AddConversionFuncs(scheme *runtime.Scheme) error {
Convert_v1_ListMeta_To_v1_ListMeta,
+ Convert_v1_DeleteOptions_To_v1_DeleteOptions,
+
Convert_intstr_IntOrString_To_intstr_IntOrString,
+ Convert_Pointer_intstr_IntOrString_To_intstr_IntOrString,
+ Convert_intstr_IntOrString_To_Pointer_intstr_IntOrString,
Convert_Pointer_v1_Duration_To_v1_Duration,
Convert_v1_Duration_To_Pointer_v1_Duration,
Convert_Slice_string_To_v1_Time,
+ Convert_Slice_string_To_Pointer_v1_Time,
Convert_v1_Time_To_v1_Time,
Convert_v1_MicroTime_To_v1_MicroTime,
@@ -76,7 +83,7 @@ func AddConversionFuncs(scheme *runtime.Scheme) error {
Convert_Slice_string_To_Slice_int32,
- Convert_Slice_string_To_v1_DeletionPropagation,
+ Convert_Slice_string_To_Pointer_v1_DeletionPropagation,
Convert_Slice_string_To_v1_IncludeObjectPolicy,
)
@@ -194,12 +201,33 @@ func Convert_v1_ListMeta_To_v1_ListMeta(in, out *ListMeta, s conversion.Scope) e
return nil
}
+// +k8s:conversion-fn=copy-only
+func Convert_v1_DeleteOptions_To_v1_DeleteOptions(in, out *DeleteOptions, s conversion.Scope) error {
+ *out = *in
+ return nil
+}
+
// +k8s:conversion-fn=copy-only
func Convert_intstr_IntOrString_To_intstr_IntOrString(in, out *intstr.IntOrString, s conversion.Scope) error {
*out = *in
return nil
}
+func Convert_Pointer_intstr_IntOrString_To_intstr_IntOrString(in **intstr.IntOrString, out *intstr.IntOrString, s conversion.Scope) error {
+ if *in == nil {
+ *out = intstr.IntOrString{} // zero value
+ return nil
+ }
+ *out = **in // copy
+ return nil
+}
+
+func Convert_intstr_IntOrString_To_Pointer_intstr_IntOrString(in *intstr.IntOrString, out **intstr.IntOrString, s conversion.Scope) error {
+ temp := *in // copy
+ *out = &temp
+ return nil
+}
+
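
The pointer-aware IntOrString conversions exist so optional fields survive the round trip; the nil branch boils down to a zero-value fallback. Sketched here without the conversion `Scope` plumbing:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

// deref mirrors Convert_Pointer_intstr_IntOrString_To_intstr_IntOrString:
// nil pointers flatten to the zero IntOrString instead of panicking.
func deref(in *intstr.IntOrString) intstr.IntOrString {
	if in == nil {
		return intstr.IntOrString{}
	}
	return *in
}

func main() {
	v := intstr.FromString("25%")
	fmt.Println(deref(&v).String()) // 25%
	fmt.Println(deref(nil).String()) // 0
}
```
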
// +k8s:conversion-fn=copy-only
func Convert_v1_Time_To_v1_Time(in *Time, out *Time, s conversion.Scope) error {
// Cannot deep copy these, because time.Time has unexported fields.
@@ -230,14 +258,30 @@ func Convert_v1_Duration_To_Pointer_v1_Duration(in *Duration, out **Duration, s
}
// Convert_Slice_string_To_v1_Time allows converting a URL query parameter value
-func Convert_Slice_string_To_v1_Time(input *[]string, out *Time, s conversion.Scope) error {
+func Convert_Slice_string_To_v1_Time(in *[]string, out *Time, s conversion.Scope) error {
str := ""
- if len(*input) > 0 {
- str = (*input)[0]
+ if len(*in) > 0 {
+ str = (*in)[0]
}
return out.UnmarshalQueryParameter(str)
}
+func Convert_Slice_string_To_Pointer_v1_Time(in *[]string, out **Time, s conversion.Scope) error {
+ if in == nil {
+ return nil
+ }
+ str := ""
+ if len(*in) > 0 {
+ str = (*in)[0]
+ }
+ temp := Time{}
+ if err := temp.UnmarshalQueryParameter(str); err != nil {
+ return err
+ }
+ *out = &temp
+ return nil
+}
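
`Convert_Slice_string_To_Pointer_v1_Time` always allocates the result, even for an empty parameter, and defers the actual parsing to `UnmarshalQueryParameter`. A short sketch of that underlying call:

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	var t metav1.Time
	// Query-parameter times are RFC3339; an empty value parses to the
	// zero Time rather than erroring.
	if err := t.UnmarshalQueryParameter("2019-08-01T12:00:00Z"); err != nil {
		panic(err)
	}
	fmt.Println(t.UTC())
}
```
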
+
func Convert_string_To_labels_Selector(in *string, out *labels.Selector, s conversion.Scope) error {
selector, err := labels.Parse(*in)
if err != nil {
@@ -310,20 +354,53 @@ func Convert_Slice_string_To_Slice_int32(in *[]string, out *[]int32, s conversio
return nil
}
-// Convert_Slice_string_To_v1_DeletionPropagation allows converting a URL query parameter propagationPolicy
-func Convert_Slice_string_To_v1_DeletionPropagation(input *[]string, out *DeletionPropagation, s conversion.Scope) error {
- if len(*input) > 0 {
- *out = DeletionPropagation((*input)[0])
+// Convert_Slice_string_To_Pointer_v1_DeletionPropagation allows converting a URL query parameter propagationPolicy
+func Convert_Slice_string_To_Pointer_v1_DeletionPropagation(in *[]string, out **DeletionPropagation, s conversion.Scope) error {
+ var str string
+ if len(*in) > 0 {
+ str = (*in)[0]
} else {
- *out = ""
+ str = ""
}
+ temp := DeletionPropagation(str)
+ *out = &temp
return nil
}
// Convert_Slice_string_To_v1_IncludeObjectPolicy allows converting a URL query parameter value
-func Convert_Slice_string_To_v1_IncludeObjectPolicy(input *[]string, out *IncludeObjectPolicy, s conversion.Scope) error {
- if len(*input) > 0 {
- *out = IncludeObjectPolicy((*input)[0])
+func Convert_Slice_string_To_v1_IncludeObjectPolicy(in *[]string, out *IncludeObjectPolicy, s conversion.Scope) error {
+ if len(*in) > 0 {
+ *out = IncludeObjectPolicy((*in)[0])
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_DeleteOptions allows converting a URL to DeleteOptions.
+func Convert_url_Values_To_v1_DeleteOptions(in *url.Values, out *DeleteOptions, s conversion.Scope) error {
+ if err := autoConvert_url_Values_To_v1_DeleteOptions(in, out, s); err != nil {
+ return err
+ }
+
+ uid := types.UID("")
+ if values, ok := (*in)["uid"]; ok && len(values) > 0 {
+ uid = types.UID(values[0])
+ }
+
+ resourceVersion := ""
+ if values, ok := (*in)["resourceVersion"]; ok && len(values) > 0 {
+ resourceVersion = values[0]
+ }
+
+ if len(uid) > 0 || len(resourceVersion) > 0 {
+ if out.Preconditions == nil {
+ out.Preconditions = &Preconditions{}
+ }
+ if len(uid) > 0 {
+ out.Preconditions.UID = &uid
+ }
+ if len(resourceVersion) > 0 {
+ out.Preconditions.ResourceVersion = &resourceVersion
+ }
}
return nil
}
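
Only `uid` and `resourceVersion` receive special handling here, folding into `Preconditions`. Invoking the generated converter needs a conversion `Scope`, so this sketch re-implements just that mapping by hand:

```go
package main

import (
	"fmt"
	"net/url"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// fromQuery mirrors the uid/resourceVersion handling above: both
// parameters fold into DeleteOptions.Preconditions when present.
func fromQuery(values url.Values) *metav1.DeleteOptions {
	out := &metav1.DeleteOptions{}
	uid := types.UID(values.Get("uid"))
	rv := values.Get("resourceVersion")
	if len(uid) > 0 || len(rv) > 0 {
		out.Preconditions = &metav1.Preconditions{}
		if len(uid) > 0 {
			out.Preconditions.UID = &uid
		}
		if len(rv) > 0 {
			out.Preconditions.ResourceVersion = &rv
		}
	}
	return out
}

func main() {
	v, _ := url.ParseQuery("uid=abc-123&resourceVersion=42")
	opts := fromQuery(v)
	fmt.Println(*opts.Preconditions.UID, *opts.Preconditions.ResourceVersion)
}
```
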
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/doc.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/doc.go
index dbaa87c879fc1..7736753d6682d 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/doc.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/doc.go
@@ -14,6 +14,7 @@ See the License for the specific language governing permissions and
limitations under the License.
*/
+// +k8s:conversion-gen=false
// +k8s:deepcopy-gen=package
// +k8s:openapi-gen=true
// +k8s:defaulter-gen=TypeMeta
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/fields_proto.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/fields_proto.go
deleted file mode 100644
index d403e76a41c18..0000000000000
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/fields_proto.go
+++ /dev/null
@@ -1,88 +0,0 @@
-/*
-Copyright 2019 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package v1
-
-import (
- "encoding/json"
-)
-
-// Fields is declared in types.go
-
-// ProtoFields is a struct that is equivalent to Fields, but intended for
-// protobuf marshalling/unmarshalling. It is generated into a serialization
-// that matches Fields. Do not use in Go structs.
-type ProtoFields struct {
- // Map is the representation used in the alpha version of this API
- Map map[string]Fields `json:"-" protobuf:"bytes,1,rep,name=map"`
-
- // Raw is the underlying serialization of this object.
- Raw []byte `json:"-" protobuf:"bytes,2,opt,name=raw"`
-}
-
-// ProtoFields returns the Fields as a new ProtoFields value.
-func (m *Fields) ProtoFields() *ProtoFields {
- if m == nil {
- return &ProtoFields{}
- }
- return &ProtoFields{
- Raw: m.Raw,
- }
-}
-
-// Size implements the protobuf marshalling interface.
-func (m *Fields) Size() (n int) {
- return m.ProtoFields().Size()
-}
-
-// Unmarshal implements the protobuf marshalling interface.
-func (m *Fields) Unmarshal(data []byte) error {
- if len(data) == 0 {
- return nil
- }
- p := ProtoFields{}
- if err := p.Unmarshal(data); err != nil {
- return err
- }
- if len(p.Map) == 0 {
- return json.Unmarshal(p.Raw, &m)
- }
- b, err := json.Marshal(&p.Map)
- if err != nil {
- return err
- }
- return json.Unmarshal(b, &m)
-}
-
-// Marshal implements the protobuf marshaling interface.
-func (m *Fields) Marshal() (data []byte, err error) {
- return m.ProtoFields().Marshal()
-}
-
-// MarshalTo implements the protobuf marshaling interface.
-func (m *Fields) MarshalTo(data []byte) (int, error) {
- return m.ProtoFields().MarshalTo(data)
-}
-
-// MarshalToSizedBuffer implements the protobuf reverse marshaling interface.
-func (m *Fields) MarshalToSizedBuffer(data []byte) (int, error) {
- return m.ProtoFields().MarshalToSizedBuffer(data)
-}
-
-// String implements the protobuf goproto_stringer interface.
-func (m *Fields) String() string {
- return m.ProtoFields().String()
-}
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.pb.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.pb.go
index f03ded7a4e7cb..31b1d955ececd 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.pb.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.pb.go
@@ -301,28 +301,33 @@ func (m *ExportOptions) XXX_DiscardUnknown() {
var xxx_messageInfo_ExportOptions proto.InternalMessageInfo
-func (m *Fields) Reset() { *m = Fields{} }
-func (*Fields) ProtoMessage() {}
-func (*Fields) Descriptor() ([]byte, []int) {
+func (m *FieldsV1) Reset() { *m = FieldsV1{} }
+func (*FieldsV1) ProtoMessage() {}
+func (*FieldsV1) Descriptor() ([]byte, []int) {
return fileDescriptor_cf52fa777ced5367, []int{9}
}
-func (m *Fields) XXX_Unmarshal(b []byte) error {
- return xxx_messageInfo_Fields.Unmarshal(m, b)
+func (m *FieldsV1) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
}
-func (m *Fields) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- return xxx_messageInfo_Fields.Marshal(b, m, deterministic)
+func (m *FieldsV1) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
}
-func (m *Fields) XXX_Merge(src proto.Message) {
- xxx_messageInfo_Fields.Merge(m, src)
+func (m *FieldsV1) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_FieldsV1.Merge(m, src)
}
-func (m *Fields) XXX_Size() int {
- return xxx_messageInfo_Fields.Size(m)
+func (m *FieldsV1) XXX_Size() int {
+ return m.Size()
}
-func (m *Fields) XXX_DiscardUnknown() {
- xxx_messageInfo_Fields.DiscardUnknown(m)
+func (m *FieldsV1) XXX_DiscardUnknown() {
+ xxx_messageInfo_FieldsV1.DiscardUnknown(m)
}
-var xxx_messageInfo_Fields proto.InternalMessageInfo
+var xxx_messageInfo_FieldsV1 proto.InternalMessageInfo
func (m *GetOptions) Reset() { *m = GetOptions{} }
func (*GetOptions) ProtoMessage() {}
@@ -907,38 +912,10 @@ func (m *Preconditions) XXX_DiscardUnknown() {
var xxx_messageInfo_Preconditions proto.InternalMessageInfo
-func (m *ProtoFields) Reset() { *m = ProtoFields{} }
-func (*ProtoFields) ProtoMessage() {}
-func (*ProtoFields) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{31}
-}
-func (m *ProtoFields) XXX_Unmarshal(b []byte) error {
- return m.Unmarshal(b)
-}
-func (m *ProtoFields) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
- b = b[:cap(b)]
- n, err := m.MarshalToSizedBuffer(b)
- if err != nil {
- return nil, err
- }
- return b[:n], nil
-}
-func (m *ProtoFields) XXX_Merge(src proto.Message) {
- xxx_messageInfo_ProtoFields.Merge(m, src)
-}
-func (m *ProtoFields) XXX_Size() int {
- return m.Size()
-}
-func (m *ProtoFields) XXX_DiscardUnknown() {
- xxx_messageInfo_ProtoFields.DiscardUnknown(m)
-}
-
-var xxx_messageInfo_ProtoFields proto.InternalMessageInfo
-
func (m *RootPaths) Reset() { *m = RootPaths{} }
func (*RootPaths) ProtoMessage() {}
func (*RootPaths) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{32}
+ return fileDescriptor_cf52fa777ced5367, []int{31}
}
func (m *RootPaths) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -966,7 +943,7 @@ var xxx_messageInfo_RootPaths proto.InternalMessageInfo
func (m *ServerAddressByClientCIDR) Reset() { *m = ServerAddressByClientCIDR{} }
func (*ServerAddressByClientCIDR) ProtoMessage() {}
func (*ServerAddressByClientCIDR) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{33}
+ return fileDescriptor_cf52fa777ced5367, []int{32}
}
func (m *ServerAddressByClientCIDR) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -994,7 +971,7 @@ var xxx_messageInfo_ServerAddressByClientCIDR proto.InternalMessageInfo
func (m *Status) Reset() { *m = Status{} }
func (*Status) ProtoMessage() {}
func (*Status) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{34}
+ return fileDescriptor_cf52fa777ced5367, []int{33}
}
func (m *Status) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1022,7 +999,7 @@ var xxx_messageInfo_Status proto.InternalMessageInfo
func (m *StatusCause) Reset() { *m = StatusCause{} }
func (*StatusCause) ProtoMessage() {}
func (*StatusCause) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{35}
+ return fileDescriptor_cf52fa777ced5367, []int{34}
}
func (m *StatusCause) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1050,7 +1027,7 @@ var xxx_messageInfo_StatusCause proto.InternalMessageInfo
func (m *StatusDetails) Reset() { *m = StatusDetails{} }
func (*StatusDetails) ProtoMessage() {}
func (*StatusDetails) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{36}
+ return fileDescriptor_cf52fa777ced5367, []int{35}
}
func (m *StatusDetails) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1078,7 +1055,7 @@ var xxx_messageInfo_StatusDetails proto.InternalMessageInfo
func (m *TableOptions) Reset() { *m = TableOptions{} }
func (*TableOptions) ProtoMessage() {}
func (*TableOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{37}
+ return fileDescriptor_cf52fa777ced5367, []int{36}
}
func (m *TableOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1106,7 +1083,7 @@ var xxx_messageInfo_TableOptions proto.InternalMessageInfo
func (m *Time) Reset() { *m = Time{} }
func (*Time) ProtoMessage() {}
func (*Time) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{38}
+ return fileDescriptor_cf52fa777ced5367, []int{37}
}
func (m *Time) XXX_Unmarshal(b []byte) error {
return xxx_messageInfo_Time.Unmarshal(m, b)
@@ -1129,7 +1106,7 @@ var xxx_messageInfo_Time proto.InternalMessageInfo
func (m *Timestamp) Reset() { *m = Timestamp{} }
func (*Timestamp) ProtoMessage() {}
func (*Timestamp) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{39}
+ return fileDescriptor_cf52fa777ced5367, []int{38}
}
func (m *Timestamp) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1157,7 +1134,7 @@ var xxx_messageInfo_Timestamp proto.InternalMessageInfo
func (m *TypeMeta) Reset() { *m = TypeMeta{} }
func (*TypeMeta) ProtoMessage() {}
func (*TypeMeta) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{40}
+ return fileDescriptor_cf52fa777ced5367, []int{39}
}
func (m *TypeMeta) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1185,7 +1162,7 @@ var xxx_messageInfo_TypeMeta proto.InternalMessageInfo
func (m *UpdateOptions) Reset() { *m = UpdateOptions{} }
func (*UpdateOptions) ProtoMessage() {}
func (*UpdateOptions) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{41}
+ return fileDescriptor_cf52fa777ced5367, []int{40}
}
func (m *UpdateOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1213,7 +1190,7 @@ var xxx_messageInfo_UpdateOptions proto.InternalMessageInfo
func (m *Verbs) Reset() { *m = Verbs{} }
func (*Verbs) ProtoMessage() {}
func (*Verbs) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{42}
+ return fileDescriptor_cf52fa777ced5367, []int{41}
}
func (m *Verbs) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1241,7 +1218,7 @@ var xxx_messageInfo_Verbs proto.InternalMessageInfo
func (m *WatchEvent) Reset() { *m = WatchEvent{} }
func (*WatchEvent) ProtoMessage() {}
func (*WatchEvent) Descriptor() ([]byte, []int) {
- return fileDescriptor_cf52fa777ced5367, []int{43}
+ return fileDescriptor_cf52fa777ced5367, []int{42}
}
func (m *WatchEvent) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
@@ -1276,8 +1253,7 @@ func init() {
proto.RegisterType((*DeleteOptions)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.DeleteOptions")
proto.RegisterType((*Duration)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.Duration")
proto.RegisterType((*ExportOptions)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.ExportOptions")
- proto.RegisterType((*Fields)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.Fields")
- proto.RegisterMapType((map[string]Fields)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.Fields.MapEntry")
+ proto.RegisterType((*FieldsV1)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.FieldsV1")
proto.RegisterType((*GetOptions)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.GetOptions")
proto.RegisterType((*GroupKind)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.GroupKind")
proto.RegisterType((*GroupResource)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.GroupResource")
@@ -1302,8 +1278,6 @@ func init() {
proto.RegisterType((*Patch)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.Patch")
proto.RegisterType((*PatchOptions)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.PatchOptions")
proto.RegisterType((*Preconditions)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.Preconditions")
- proto.RegisterType((*ProtoFields)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.ProtoFields")
- proto.RegisterMapType((map[string]Fields)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.ProtoFields.MapEntry")
proto.RegisterType((*RootPaths)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.RootPaths")
proto.RegisterType((*ServerAddressByClientCIDR)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.ServerAddressByClientCIDR")
proto.RegisterType((*Status)(nil), "k8s.io.apimachinery.pkg.apis.meta.v1.Status")
@@ -1323,181 +1297,177 @@ func init() {
}
var fileDescriptor_cf52fa777ced5367 = []byte{
- // 2779 bytes of a gzipped FileDescriptorProto
- (gzipped descriptor bytes elided: machine-generated binary data with no human-readable content)
- 0x33, 0x8b, 0xe5, 0xdb, 0xf2, 0xf3, 0x6e, 0x29, 0xc9, 0x2c, 0xab, 0xbe, 0x4d, 0xb1, 0xc0, 0x98,
- 0xef, 0x18, 0x50, 0x93, 0x92, 0x56, 0x49, 0x2f, 0xa4, 0xe8, 0x5a, 0xbc, 0x0b, 0x69, 0x6e, 0x5d,
- 0xb5, 0x4e, 0xf3, 0x3e, 0xef, 0x64, 0xd0, 0xac, 0x0a, 0x32, 0xd1, 0xf4, 0xe9, 0x0d, 0xa4, 0xce,
- 0xa8, 0x70, 0xc6, 0x19, 0x3d, 0x09, 0x25, 0x11, 0x29, 0xd4, 0x61, 0xc6, 0x71, 0x4d, 0xb8, 0x31,
- 0x96, 0x38, 0xf3, 0xa3, 0x02, 0xd4, 0x33, 0x9b, 0xcb, 0xd1, 0xf7, 0xc4, 0xa3, 0xd0, 0x42, 0x8e,
- 0xf1, 0xfa, 0xf8, 0xcf, 0xc6, 0x2a, 0xcf, 0x96, 0x1f, 0x27, 0xcf, 0xbe, 0x06, 0x65, 0x8b, 0x9f,
- 0x91, 0xfe, 0x0b, 0xe1, 0xda, 0x24, 0xe6, 0x14, 0xa7, 0x9b, 0x78, 0xa3, 0x78, 0x0d, 0xb1, 0x12,
- 0x88, 0x6e, 0xc3, 0x3c, 0xa3, 0x11, 0xeb, 0xaf, 0xec, 0x45, 0x94, 0xa5, 0xc7, 0x0f, 0xa5, 0xa4,
- 0xbb, 0xc0, 0xc3, 0x04, 0x78, 0x94, 0xc7, 0xdc, 0x85, 0xd9, 0xbb, 0x64, 0xd7, 0x8d, 0x3f, 0x0a,
- 0x62, 0xa8, 0x3b, 0x9e, 0xe5, 0xf6, 0x6c, 0x2a, 0x33, 0x8f, 0x8e, 0xd4, 0xfa, 0xd2, 0x6e, 0xa4,
- 0x91, 0x27, 0x83, 0xe6, 0x85, 0x0c, 0x40, 0x7e, 0x05, 0xc3, 0x59, 0x11, 0xa6, 0x0b, 0xd3, 0x9f,
- 0x62, 0xa7, 0xfc, 0x5d, 0xa8, 0x26, 0xbd, 0xcc, 0x27, 0xac, 0xd2, 0x7c, 0x03, 0x2a, 0xdc, 0xe3,
- 0x75, 0x0f, 0x7e, 0x46, 0x39, 0x97, 0x2d, 0x12, 0x0b, 0x79, 0x8a, 0x44, 0xb3, 0x0b, 0xf5, 0x7b,
- 0x81, 0xfd, 0x98, 0x9f, 0x85, 0x0b, 0xb9, 0x33, 0xf4, 0x75, 0x90, 0x3f, 0x38, 0xf0, 0x04, 0x21,
- 0xab, 0x94, 0x54, 0x82, 0x48, 0x17, 0x19, 0xa9, 0x29, 0xff, 0x4f, 0x0c, 0x00, 0x31, 0x4e, 0x5b,
- 0x3f, 0xa4, 0x5e, 0xc4, 0xcf, 0x81, 0x3b, 0xfe, 0xf0, 0x39, 0x88, 0xc8, 0x20, 0x30, 0xe8, 0x1e,
- 0x94, 0x7d, 0xe9, 0x4d, 0x32, 0xa5, 0x4d, 0x38, 0xb3, 0x8d, 0x2f, 0x81, 0xf4, 0x27, 0xac, 0x84,
- 0xb5, 0xaf, 0xbc, 0xff, 0xf1, 0xe2, 0xd4, 0x07, 0x1f, 0x2f, 0x4e, 0x7d, 0xf8, 0xf1, 0xe2, 0xd4,
- 0x5b, 0xc7, 0x8b, 0xc6, 0xfb, 0xc7, 0x8b, 0xc6, 0x07, 0xc7, 0x8b, 0xc6, 0x87, 0xc7, 0x8b, 0xc6,
- 0x47, 0xc7, 0x8b, 0xc6, 0x3b, 0x7f, 0x5b, 0x9c, 0x7a, 0x50, 0x38, 0xbc, 0xf6, 0xdf, 0x00, 0x00,
- 0x00, 0xff, 0xff, 0xe2, 0xeb, 0x45, 0x81, 0x56, 0x26, 0x00, 0x00,
+ // 2713 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xcc, 0x19, 0xcd, 0x6f, 0x1b, 0x59,
+ 0x3d, 0x63, 0xc7, 0x8e, 0xfd, 0x73, 0x9c, 0x8f, 0x97, 0x16, 0xdc, 0x00, 0x71, 0x76, 0x16, 0xad,
+ 0x52, 0xe8, 0x3a, 0x9b, 0x02, 0xab, 0xd2, 0x65, 0x0b, 0x71, 0x9c, 0x74, 0xc3, 0x36, 0x4d, 0xf4,
+ 0xd2, 0x16, 0x28, 0x15, 0xea, 0x64, 0xe6, 0xc5, 0x19, 0x32, 0x9e, 0xf1, 0xbe, 0x19, 0x27, 0x35,
+ 0x1c, 0xd8, 0x03, 0x08, 0x90, 0x60, 0xd5, 0x23, 0xe2, 0x80, 0xb6, 0x82, 0xbf, 0x80, 0x13, 0x7f,
+ 0x00, 0x12, 0xbd, 0x20, 0xad, 0xc4, 0x65, 0x25, 0x90, 0xb5, 0x0d, 0x07, 0x8e, 0x88, 0x6b, 0x4e,
+ 0xe8, 0x7d, 0xcd, 0x87, 0x1d, 0x37, 0x63, 0xba, 0xac, 0xf6, 0xe6, 0xf9, 0x7d, 0xff, 0xde, 0xfb,
+ 0xbd, 0xdf, 0x97, 0x61, 0xeb, 0xf0, 0x9a, 0x5f, 0xb3, 0xbd, 0xe5, 0xc3, 0xce, 0x1e, 0xa1, 0x2e,
+ 0x09, 0x88, 0xbf, 0x7c, 0x44, 0x5c, 0xcb, 0xa3, 0xcb, 0x12, 0x61, 0xb4, 0xed, 0x96, 0x61, 0x1e,
+ 0xd8, 0x2e, 0xa1, 0xdd, 0xe5, 0xf6, 0x61, 0x93, 0x01, 0xfc, 0xe5, 0x16, 0x09, 0x8c, 0xe5, 0xa3,
+ 0x95, 0xe5, 0x26, 0x71, 0x09, 0x35, 0x02, 0x62, 0xd5, 0xda, 0xd4, 0x0b, 0x3c, 0xf4, 0x45, 0xc1,
+ 0x55, 0x8b, 0x73, 0xd5, 0xda, 0x87, 0x4d, 0x06, 0xf0, 0x6b, 0x8c, 0xab, 0x76, 0xb4, 0x32, 0xff,
+ 0x6a, 0xd3, 0x0e, 0x0e, 0x3a, 0x7b, 0x35, 0xd3, 0x6b, 0x2d, 0x37, 0xbd, 0xa6, 0xb7, 0xcc, 0x99,
+ 0xf7, 0x3a, 0xfb, 0xfc, 0x8b, 0x7f, 0xf0, 0x5f, 0x42, 0xe8, 0xfc, 0x50, 0x53, 0x68, 0xc7, 0x0d,
+ 0xec, 0x16, 0xe9, 0xb7, 0x62, 0xfe, 0xf5, 0xf3, 0x18, 0x7c, 0xf3, 0x80, 0xb4, 0x8c, 0x7e, 0x3e,
+ 0xfd, 0x2f, 0x59, 0x28, 0xac, 0xee, 0x6c, 0xde, 0xa4, 0x5e, 0xa7, 0x8d, 0x16, 0x61, 0xdc, 0x35,
+ 0x5a, 0xa4, 0xa2, 0x2d, 0x6a, 0x4b, 0xc5, 0xfa, 0xe4, 0xd3, 0x5e, 0x75, 0xec, 0xa4, 0x57, 0x1d,
+ 0xbf, 0x6d, 0xb4, 0x08, 0xe6, 0x18, 0xe4, 0x40, 0xe1, 0x88, 0x50, 0xdf, 0xf6, 0x5c, 0xbf, 0x92,
+ 0x59, 0xcc, 0x2e, 0x95, 0xae, 0xde, 0xa8, 0xa5, 0xf1, 0xbf, 0xc6, 0x15, 0xdc, 0x13, 0xac, 0x1b,
+ 0x1e, 0x6d, 0xd8, 0xbe, 0xe9, 0x1d, 0x11, 0xda, 0xad, 0xcf, 0x48, 0x2d, 0x05, 0x89, 0xf4, 0x71,
+ 0xa8, 0x01, 0xfd, 0x54, 0x83, 0x99, 0x36, 0x25, 0xfb, 0x84, 0x52, 0x62, 0x49, 0x7c, 0x25, 0xbb,
+ 0xa8, 0x7d, 0x0c, 0x6a, 0x2b, 0x52, 0xed, 0xcc, 0x4e, 0x9f, 0x7c, 0x3c, 0xa0, 0x11, 0xfd, 0x5e,
+ 0x83, 0x79, 0x9f, 0xd0, 0x23, 0x42, 0x57, 0x2d, 0x8b, 0x12, 0xdf, 0xaf, 0x77, 0xd7, 0x1c, 0x9b,
+ 0xb8, 0xc1, 0xda, 0x66, 0x03, 0xfb, 0x95, 0x71, 0x7e, 0x0e, 0xdf, 0x4c, 0x67, 0xd0, 0xee, 0x30,
+ 0x39, 0x75, 0x5d, 0x5a, 0x34, 0x3f, 0x94, 0xc4, 0xc7, 0xcf, 0x31, 0x43, 0xdf, 0x87, 0x49, 0x75,
+ 0x91, 0xb7, 0x6c, 0x3f, 0x40, 0xf7, 0x20, 0xdf, 0x64, 0x1f, 0x7e, 0x45, 0xe3, 0x06, 0xd6, 0xd2,
+ 0x19, 0xa8, 0x64, 0xd4, 0xa7, 0xa4, 0x3d, 0x79, 0xfe, 0xe9, 0x63, 0x29, 0x4d, 0xff, 0xe5, 0x38,
+ 0x94, 0x56, 0x77, 0x36, 0x31, 0xf1, 0xbd, 0x0e, 0x35, 0x49, 0x8a, 0xa0, 0xb9, 0x06, 0x93, 0xbe,
+ 0xed, 0x36, 0x3b, 0x8e, 0x41, 0x19, 0xb4, 0x92, 0xe7, 0x94, 0x17, 0x24, 0xe5, 0xe4, 0x6e, 0x0c,
+ 0x87, 0x13, 0x94, 0xe8, 0x2a, 0x00, 0x93, 0xe0, 0xb7, 0x0d, 0x93, 0x58, 0x95, 0xcc, 0xa2, 0xb6,
+ 0x54, 0xa8, 0x23, 0xc9, 0x07, 0xb7, 0x43, 0x0c, 0x8e, 0x51, 0xa1, 0x97, 0x21, 0xc7, 0x2d, 0xad,
+ 0x14, 0xb8, 0x9a, 0xb2, 0x24, 0xcf, 0x71, 0x37, 0xb0, 0xc0, 0xa1, 0xcb, 0x30, 0x21, 0xa3, 0xac,
+ 0x52, 0xe4, 0x64, 0xd3, 0x92, 0x6c, 0x42, 0x85, 0x81, 0xc2, 0x33, 0xff, 0x0e, 0x6d, 0xd7, 0xe2,
+ 0x71, 0x17, 0xf3, 0xef, 0x6d, 0xdb, 0xb5, 0x30, 0xc7, 0xa0, 0x5b, 0x90, 0x3b, 0x22, 0x74, 0x8f,
+ 0x45, 0x02, 0x0b, 0xcd, 0x2f, 0xa7, 0x3b, 0xe8, 0x7b, 0x8c, 0xa5, 0x5e, 0x64, 0xa6, 0xf1, 0x9f,
+ 0x58, 0x08, 0x41, 0x35, 0x00, 0xff, 0xc0, 0xa3, 0x01, 0x77, 0xaf, 0x92, 0x5b, 0xcc, 0x2e, 0x15,
+ 0xeb, 0x53, 0xcc, 0xdf, 0xdd, 0x10, 0x8a, 0x63, 0x14, 0x8c, 0xde, 0x34, 0x02, 0xd2, 0xf4, 0xa8,
+ 0x4d, 0xfc, 0xca, 0x44, 0x44, 0xbf, 0x16, 0x42, 0x71, 0x8c, 0x02, 0x7d, 0x1b, 0x90, 0x1f, 0x78,
+ 0xd4, 0x68, 0x12, 0xe9, 0xea, 0x5b, 0x86, 0x7f, 0x50, 0x01, 0xee, 0xdd, 0xbc, 0xf4, 0x0e, 0xed,
+ 0x0e, 0x50, 0xe0, 0x33, 0xb8, 0xf4, 0x3f, 0x6a, 0x30, 0x1d, 0x8b, 0x05, 0x1e, 0x77, 0xd7, 0x60,
+ 0xb2, 0x19, 0x7b, 0x75, 0x32, 0x2e, 0xc2, 0xdb, 0x8e, 0xbf, 0x48, 0x9c, 0xa0, 0x44, 0x04, 0x8a,
+ 0x54, 0x4a, 0x52, 0xd9, 0x65, 0x25, 0x75, 0xd0, 0x2a, 0x1b, 0x22, 0x4d, 0x31, 0xa0, 0x8f, 0x23,
+ 0xc9, 0xfa, 0xbf, 0x34, 0x1e, 0xc0, 0x2a, 0xdf, 0xa0, 0xa5, 0x58, 0x4e, 0xd3, 0xf8, 0xf1, 0x4d,
+ 0x0e, 0xc9, 0x47, 0xe7, 0x24, 0x82, 0xcc, 0xa7, 0x22, 0x11, 0x5c, 0x2f, 0xfc, 0xe6, 0xfd, 0xea,
+ 0xd8, 0xbb, 0xff, 0x58, 0x1c, 0xd3, 0x5b, 0x50, 0x5e, 0xa3, 0xc4, 0x08, 0xc8, 0x76, 0x3b, 0xe0,
+ 0x0e, 0xe8, 0x90, 0xb7, 0x68, 0x17, 0x77, 0x5c, 0xe9, 0x28, 0xb0, 0xf7, 0xdd, 0xe0, 0x10, 0x2c,
+ 0x31, 0xec, 0xfe, 0xf6, 0x6d, 0xe2, 0x58, 0x5b, 0x86, 0x6b, 0x34, 0x09, 0x95, 0x71, 0x1f, 0x9e,
+ 0xea, 0x46, 0x0c, 0x87, 0x13, 0x94, 0xfa, 0xcf, 0xb3, 0x50, 0x6e, 0x10, 0x87, 0x44, 0xfa, 0x36,
+ 0x00, 0x35, 0xa9, 0x61, 0x92, 0x1d, 0x42, 0x6d, 0xcf, 0xda, 0x25, 0xa6, 0xe7, 0x5a, 0x3e, 0x8f,
+ 0x88, 0x6c, 0xfd, 0x33, 0x2c, 0xce, 0x6e, 0x0e, 0x60, 0xf1, 0x19, 0x1c, 0xc8, 0x81, 0x72, 0x9b,
+ 0xf2, 0xdf, 0x76, 0x20, 0x6b, 0x0f, 0x7b, 0x69, 0x5f, 0x49, 0x77, 0xd4, 0x3b, 0x71, 0xd6, 0xfa,
+ 0xec, 0x49, 0xaf, 0x5a, 0x4e, 0x80, 0x70, 0x52, 0x38, 0xfa, 0x16, 0xcc, 0x78, 0xb4, 0x7d, 0x60,
+ 0xb8, 0x0d, 0xd2, 0x26, 0xae, 0x45, 0xdc, 0xc0, 0xe7, 0xa7, 0x50, 0xa8, 0x5f, 0x60, 0x15, 0x63,
+ 0xbb, 0x0f, 0x87, 0x07, 0xa8, 0xd1, 0x7d, 0x98, 0x6d, 0x53, 0xaf, 0x6d, 0x34, 0x0d, 0x26, 0x71,
+ 0xc7, 0x73, 0x6c, 0xb3, 0xcb, 0xb3, 0x43, 0xb1, 0x7e, 0xe5, 0xa4, 0x57, 0x9d, 0xdd, 0xe9, 0x47,
+ 0x9e, 0xf6, 0xaa, 0x73, 0xfc, 0xe8, 0x18, 0x24, 0x42, 0xe2, 0x41, 0x31, 0xb1, 0x3b, 0xcc, 0x0d,
+ 0xbb, 0x43, 0x7d, 0x13, 0x0a, 0x8d, 0x0e, 0xe5, 0x5c, 0xe8, 0x4d, 0x28, 0x58, 0xf2, 0xb7, 0x3c,
+ 0xf9, 0x97, 0x54, 0xc9, 0x55, 0x34, 0xa7, 0xbd, 0x6a, 0x99, 0x35, 0x09, 0x35, 0x05, 0xc0, 0x21,
+ 0x8b, 0xfe, 0x00, 0xca, 0xeb, 0x8f, 0xda, 0x1e, 0x0d, 0xd4, 0x9d, 0xbe, 0x02, 0x79, 0xc2, 0x01,
+ 0x5c, 0x5a, 0x21, 0xaa, 0x13, 0x82, 0x0c, 0x4b, 0x2c, 0xcb, 0xc3, 0xe4, 0x91, 0x61, 0x06, 0x32,
+ 0x6d, 0x87, 0x79, 0x78, 0x9d, 0x01, 0xb1, 0xc0, 0xe9, 0x9f, 0x87, 0x02, 0x0f, 0x28, 0xff, 0xde,
+ 0x0a, 0x9a, 0x81, 0x2c, 0x36, 0x8e, 0xb9, 0xd4, 0x49, 0x9c, 0xa5, 0xc6, 0xb1, 0xbe, 0x0d, 0x70,
+ 0x93, 0x84, 0x8a, 0x57, 0x61, 0x5a, 0x3d, 0xe2, 0x64, 0x6e, 0xf9, 0xac, 0x14, 0x3d, 0x8d, 0x93,
+ 0x68, 0xdc, 0x4f, 0xaf, 0x3f, 0x80, 0x22, 0xcf, 0x3f, 0x2c, 0x79, 0x47, 0x85, 0x42, 0x7b, 0x4e,
+ 0xa1, 0x50, 0xd9, 0x3f, 0x33, 0x2c, 0xfb, 0xc7, 0x9e, 0x9b, 0x03, 0x65, 0xc1, 0xab, 0x4a, 0x63,
+ 0x2a, 0x0d, 0x57, 0xa0, 0xa0, 0xcc, 0x94, 0x5a, 0xc2, 0x96, 0x48, 0x09, 0xc2, 0x21, 0x45, 0x4c,
+ 0xdb, 0x01, 0x24, 0x72, 0x69, 0x3a, 0x65, 0xb1, 0xba, 0x97, 0x79, 0x7e, 0xdd, 0x8b, 0x69, 0xfa,
+ 0x09, 0x54, 0x86, 0xf5, 0x51, 0x2f, 0x90, 0xed, 0xd3, 0x9b, 0xa2, 0xbf, 0xa7, 0xc1, 0x4c, 0x5c,
+ 0x52, 0xfa, 0xeb, 0x4b, 0xaf, 0xe4, 0xfc, 0x3a, 0x1f, 0x3b, 0x91, 0xdf, 0x69, 0x70, 0x21, 0xe1,
+ 0xda, 0x48, 0x37, 0x3e, 0x82, 0x51, 0xf1, 0xe0, 0xc8, 0x8e, 0x10, 0x1c, 0x7f, 0xcb, 0x40, 0xf9,
+ 0x96, 0xb1, 0x47, 0x9c, 0x5d, 0xe2, 0x10, 0x33, 0xf0, 0x28, 0xfa, 0x31, 0x94, 0x5a, 0x46, 0x60,
+ 0x1e, 0x70, 0xa8, 0xea, 0x09, 0x1b, 0xe9, 0x12, 0x68, 0x42, 0x52, 0x6d, 0x2b, 0x12, 0xb3, 0xee,
+ 0x06, 0xb4, 0x5b, 0x9f, 0x93, 0x26, 0x95, 0x62, 0x18, 0x1c, 0xd7, 0xc6, 0x1b, 0x79, 0xfe, 0xbd,
+ 0xfe, 0xa8, 0xcd, 0x0a, 0xd6, 0xe8, 0xf3, 0x43, 0xc2, 0x04, 0x4c, 0xde, 0xe9, 0xd8, 0x94, 0xb4,
+ 0x88, 0x1b, 0x44, 0x8d, 0xfc, 0x56, 0x9f, 0x7c, 0x3c, 0xa0, 0x71, 0xfe, 0x06, 0xcc, 0xf4, 0x1b,
+ 0xcf, 0xb2, 0xce, 0x21, 0xe9, 0x8a, 0xfb, 0xc2, 0xec, 0x27, 0xba, 0x00, 0xb9, 0x23, 0xc3, 0xe9,
+ 0xc8, 0xd7, 0x88, 0xc5, 0xc7, 0xf5, 0xcc, 0x35, 0x4d, 0xff, 0x83, 0x06, 0x95, 0x61, 0x86, 0xa0,
+ 0x2f, 0xc4, 0x04, 0xd5, 0x4b, 0xd2, 0xaa, 0xec, 0xdb, 0xa4, 0x2b, 0xa4, 0xae, 0x43, 0xc1, 0x6b,
+ 0xb3, 0xd1, 0xcb, 0xa3, 0xf2, 0xd6, 0x2f, 0xab, 0x9b, 0xdc, 0x96, 0xf0, 0xd3, 0x5e, 0xf5, 0x62,
+ 0x42, 0xbc, 0x42, 0xe0, 0x90, 0x95, 0x65, 0x7f, 0x6e, 0x0f, 0xab, 0x48, 0x61, 0xf6, 0xbf, 0xc7,
+ 0x21, 0x58, 0x62, 0xf4, 0x3f, 0x69, 0x30, 0xce, 0x5b, 0xb1, 0x07, 0x50, 0x60, 0xe7, 0x67, 0x19,
+ 0x81, 0xc1, 0xed, 0x4a, 0x3d, 0x04, 0x30, 0xee, 0x2d, 0x12, 0x18, 0x51, 0xb4, 0x29, 0x08, 0x0e,
+ 0x25, 0x22, 0x0c, 0x39, 0x3b, 0x20, 0x2d, 0x75, 0x91, 0xaf, 0x0e, 0x15, 0x2d, 0x47, 0xd0, 0x1a,
+ 0x36, 0x8e, 0xd7, 0x1f, 0x05, 0xc4, 0x65, 0x97, 0x11, 0x3d, 0x8d, 0x4d, 0x26, 0x03, 0x0b, 0x51,
+ 0xfa, 0x7f, 0x34, 0x08, 0x55, 0xb1, 0xe0, 0xf7, 0x89, 0xb3, 0x7f, 0xcb, 0x76, 0x0f, 0xe5, 0xb1,
+ 0x86, 0xe6, 0xec, 0x4a, 0x38, 0x0e, 0x29, 0xce, 0x2a, 0x0f, 0x99, 0xd1, 0xca, 0x03, 0x53, 0x68,
+ 0x7a, 0x6e, 0x60, 0xbb, 0x9d, 0x81, 0xd7, 0xb6, 0x26, 0xe1, 0x38, 0xa4, 0x60, 0xcd, 0x0d, 0x25,
+ 0x2d, 0xc3, 0x76, 0x6d, 0xb7, 0xc9, 0x9c, 0x58, 0xf3, 0x3a, 0x6e, 0xc0, 0xab, 0xbc, 0x6c, 0x6e,
+ 0xf0, 0x00, 0x16, 0x9f, 0xc1, 0xa1, 0xff, 0x35, 0x0b, 0x25, 0xe6, 0xb3, 0xaa, 0x73, 0x6f, 0x40,
+ 0xd9, 0x89, 0x47, 0x81, 0xf4, 0xfd, 0xa2, 0x34, 0x25, 0xf9, 0xae, 0x71, 0x92, 0x96, 0x31, 0xf3,
+ 0x9e, 0x2c, 0x64, 0xce, 0x24, 0x99, 0x37, 0xe2, 0x48, 0x9c, 0xa4, 0x65, 0xd9, 0xeb, 0x98, 0xbd,
+ 0x0f, 0xd9, 0xed, 0x84, 0x57, 0xf4, 0x1d, 0x06, 0xc4, 0x02, 0x87, 0xb6, 0x60, 0xce, 0x70, 0x1c,
+ 0xef, 0x98, 0x03, 0xeb, 0x9e, 0x77, 0xd8, 0x32, 0xe8, 0xa1, 0xcf, 0xc7, 0xa8, 0x42, 0xfd, 0x73,
+ 0x92, 0x65, 0x6e, 0x75, 0x90, 0x04, 0x9f, 0xc5, 0x77, 0xd6, 0xb5, 0x8d, 0x8f, 0x78, 0x6d, 0xd7,
+ 0x61, 0x8a, 0xc5, 0x97, 0xd7, 0x09, 0x54, 0x87, 0x99, 0xe3, 0x97, 0x80, 0x4e, 0x7a, 0xd5, 0xa9,
+ 0x3b, 0x09, 0x0c, 0xee, 0xa3, 0x64, 0x2e, 0x3b, 0x76, 0xcb, 0x0e, 0x2a, 0x13, 0x9c, 0x25, 0x74,
+ 0xf9, 0x16, 0x03, 0x62, 0x81, 0x4b, 0xc4, 0x45, 0xe1, 0xbc, 0xb8, 0xd0, 0x7f, 0x9b, 0x05, 0x24,
+ 0x5a, 0x62, 0x4b, 0xf4, 0x36, 0x22, 0xd1, 0x5c, 0x86, 0x89, 0x96, 0x6c, 0xa9, 0xb5, 0x64, 0xd6,
+ 0x57, 0xdd, 0xb4, 0xc2, 0xa3, 0x2d, 0x28, 0x8a, 0x07, 0x1f, 0x05, 0xf1, 0xb2, 0x24, 0x2e, 0x6e,
+ 0x2b, 0xc4, 0x69, 0xaf, 0x3a, 0x9f, 0x50, 0x13, 0x62, 0xee, 0x74, 0xdb, 0x04, 0x47, 0x12, 0xd8,
+ 0x14, 0x6d, 0xb4, 0xed, 0xf8, 0xfe, 0xa4, 0x18, 0x4d, 0xd1, 0xd1, 0x24, 0x84, 0x63, 0x54, 0xe8,
+ 0x2d, 0x18, 0x67, 0x27, 0x25, 0x47, 0xda, 0x2f, 0xa5, 0x4b, 0x1b, 0xec, 0xac, 0xeb, 0x05, 0x56,
+ 0x35, 0xd9, 0x2f, 0xcc, 0x25, 0x30, 0xed, 0x3c, 0xca, 0x7c, 0x66, 0x96, 0x9c, 0xfd, 0x43, 0xed,
+ 0x1b, 0x21, 0x06, 0xc7, 0xa8, 0xd0, 0x77, 0xa1, 0xb0, 0x2f, 0xdb, 0x42, 0x7e, 0x31, 0xa9, 0x13,
+ 0x97, 0x6a, 0x26, 0xc5, 0x08, 0xa7, 0xbe, 0x70, 0x28, 0x4d, 0x7f, 0x07, 0x8a, 0x5b, 0xb6, 0x49,
+ 0x3d, 0x66, 0x20, 0xbb, 0x12, 0x3f, 0x31, 0x93, 0x84, 0x57, 0xa2, 0xc2, 0x45, 0xe1, 0x59, 0x9c,
+ 0xb8, 0x86, 0xeb, 0x89, 0xc9, 0x23, 0x17, 0xc5, 0xc9, 0x6d, 0x06, 0xc4, 0x02, 0x77, 0xfd, 0x02,
+ 0xab, 0xbf, 0xbf, 0x78, 0x52, 0x1d, 0x7b, 0xfc, 0xa4, 0x3a, 0xf6, 0xfe, 0x13, 0x59, 0x8b, 0x4f,
+ 0x01, 0x60, 0x7b, 0xef, 0x87, 0xc4, 0x14, 0x59, 0x2d, 0xd5, 0xbe, 0x44, 0xad, 0xe9, 0xf8, 0xbe,
+ 0x24, 0xd3, 0xd7, 0x53, 0xc5, 0x70, 0x38, 0x41, 0x89, 0x96, 0xa1, 0x18, 0x6e, 0x42, 0xe4, 0x45,
+ 0xcf, 0xaa, 0xc0, 0x09, 0xd7, 0x25, 0x38, 0xa2, 0x49, 0xa4, 0xd8, 0xf1, 0x73, 0x53, 0x6c, 0x1d,
+ 0xb2, 0x1d, 0xdb, 0xe2, 0xaf, 0xab, 0x58, 0x7f, 0x4d, 0x95, 0xb8, 0xbb, 0x9b, 0x8d, 0xd3, 0x5e,
+ 0xf5, 0xa5, 0x61, 0x0b, 0xc8, 0xa0, 0xdb, 0x26, 0x7e, 0xed, 0xee, 0x66, 0x03, 0x33, 0xe6, 0xb3,
+ 0xde, 0x7b, 0x7e, 0xc4, 0xf7, 0x7e, 0x15, 0x40, 0x7a, 0xcd, 0xb8, 0xc5, 0xc3, 0x0d, 0x23, 0xea,
+ 0x66, 0x88, 0xc1, 0x31, 0x2a, 0xe4, 0xc3, 0xac, 0xc9, 0x46, 0x61, 0xf6, 0x3c, 0xec, 0x16, 0xf1,
+ 0x03, 0xa3, 0x25, 0x36, 0x44, 0xa3, 0x05, 0xf7, 0x25, 0xa9, 0x66, 0x76, 0xad, 0x5f, 0x18, 0x1e,
+ 0x94, 0x8f, 0x3c, 0x98, 0xb5, 0xe4, 0x50, 0x17, 0x29, 0x2d, 0x8e, 0xac, 0xf4, 0x22, 0x53, 0xd8,
+ 0xe8, 0x17, 0x84, 0x07, 0x65, 0xa3, 0x1f, 0xc0, 0xbc, 0x02, 0x0e, 0x4e, 0xd6, 0x7c, 0xc7, 0x93,
+ 0xad, 0x2f, 0x9c, 0xf4, 0xaa, 0xf3, 0x8d, 0xa1, 0x54, 0xf8, 0x39, 0x12, 0x90, 0x05, 0x79, 0x47,
+ 0xf4, 0x8f, 0x25, 0x5e, 0xf3, 0xbf, 0x91, 0xce, 0x8b, 0x28, 0xfa, 0x6b, 0xf1, 0xbe, 0x31, 0x9c,
+ 0x1c, 0x65, 0xcb, 0x28, 0x65, 0xa3, 0x47, 0x50, 0x32, 0x5c, 0xd7, 0x0b, 0x0c, 0x31, 0xeb, 0x4f,
+ 0x72, 0x55, 0xab, 0x23, 0xab, 0x5a, 0x8d, 0x64, 0xf4, 0xf5, 0xa9, 0x31, 0x0c, 0x8e, 0xab, 0x42,
+ 0xc7, 0x30, 0xed, 0x1d, 0xbb, 0x84, 0x62, 0xb2, 0x4f, 0x28, 0x71, 0x4d, 0xe2, 0x57, 0xca, 0x5c,
+ 0xfb, 0x57, 0x53, 0x6a, 0x4f, 0x30, 0x47, 0x21, 0x9d, 0x84, 0xfb, 0xb8, 0x5f, 0x0b, 0xaa, 0xb1,
+ 0x24, 0xe9, 0x1a, 0x8e, 0xfd, 0x23, 0x42, 0xfd, 0xca, 0x54, 0xb4, 0xc4, 0xdb, 0x08, 0xa1, 0x38,
+ 0x46, 0x81, 0xbe, 0x06, 0x25, 0xd3, 0xe9, 0xf8, 0x01, 0x11, 0x1b, 0xd5, 0x69, 0xfe, 0x82, 0x42,
+ 0xff, 0xd6, 0x22, 0x14, 0x8e, 0xd3, 0xa1, 0x0e, 0x94, 0x5b, 0xf1, 0x92, 0x51, 0x99, 0xe5, 0xde,
+ 0x5d, 0x4b, 0xe7, 0xdd, 0x60, 0x51, 0x8b, 0xfa, 0x8a, 0x04, 0x0e, 0x27, 0xb5, 0xcc, 0x7f, 0x1d,
+ 0x4a, 0xff, 0x63, 0xcb, 0xcd, 0x5a, 0xf6, 0xfe, 0x7b, 0x1c, 0xa9, 0x65, 0xff, 0x73, 0x06, 0xa6,
+ 0x92, 0xa7, 0xdf, 0x57, 0x0e, 0x73, 0xa9, 0xca, 0xa1, 0x1a, 0x0e, 0xb5, 0xa1, 0x4b, 0x60, 0x95,
+ 0xd6, 0xb3, 0x43, 0xd3, 0xba, 0xcc, 0x9e, 0xe3, 0x2f, 0x92, 0x3d, 0x6b, 0x00, 0xac, 0xcf, 0xa0,
+ 0x9e, 0xe3, 0x10, 0xca, 0x13, 0x67, 0x41, 0x2e, 0x7b, 0x43, 0x28, 0x8e, 0x51, 0xb0, 0x1e, 0x75,
+ 0xcf, 0xf1, 0xcc, 0x43, 0x7e, 0x04, 0xea, 0xd1, 0xf3, 0x94, 0x59, 0x10, 0x3d, 0x6a, 0x7d, 0x00,
+ 0x8b, 0xcf, 0xe0, 0xd0, 0xbb, 0x70, 0x71, 0xc7, 0xa0, 0x81, 0x6d, 0x38, 0xd1, 0x03, 0xe3, 0x43,
+ 0xc0, 0xc3, 0x81, 0x11, 0xe3, 0xb5, 0x51, 0x1f, 0x6a, 0x74, 0xf8, 0x11, 0x2c, 0x1a, 0x33, 0xf4,
+ 0xbf, 0x6b, 0x70, 0xe9, 0x4c, 0xdd, 0x9f, 0xc0, 0x88, 0xf3, 0x30, 0x39, 0xe2, 0xbc, 0x91, 0x72,
+ 0xdf, 0x78, 0x96, 0xb5, 0x43, 0x06, 0x9e, 0x09, 0xc8, 0xed, 0xb0, 0x86, 0x58, 0xff, 0xb5, 0x06,
+ 0x93, 0xfc, 0xd7, 0x28, 0xbb, 0xda, 0x2a, 0xe4, 0xf6, 0x3d, 0xb5, 0x38, 0x2a, 0x88, 0x3f, 0x13,
+ 0x36, 0x18, 0x00, 0x0b, 0xf8, 0x0b, 0x2c, 0x73, 0xdf, 0xd3, 0x20, 0xb9, 0x25, 0x45, 0x37, 0x44,
+ 0xfc, 0x6a, 0xe1, 0x1a, 0x73, 0xc4, 0xd8, 0x7d, 0x73, 0xd8, 0x80, 0x36, 0x97, 0x6a, 0x77, 0x77,
+ 0x05, 0x8a, 0xd8, 0xf3, 0x82, 0x1d, 0x23, 0x38, 0xf0, 0x99, 0xe3, 0x6d, 0xf6, 0x43, 0x9e, 0x0d,
+ 0x77, 0x9c, 0x63, 0xb0, 0x80, 0xeb, 0xbf, 0xd2, 0xe0, 0xd2, 0xd0, 0xfd, 0x39, 0x4b, 0x01, 0x66,
+ 0xf8, 0x25, 0x3d, 0x0a, 0xa3, 0x30, 0xa2, 0xc3, 0x31, 0x2a, 0x36, 0x59, 0x25, 0x96, 0xee, 0xfd,
+ 0x93, 0x55, 0x42, 0x1b, 0x4e, 0xd2, 0xea, 0xff, 0xce, 0x40, 0x7e, 0x37, 0x30, 0x82, 0x8e, 0xff,
+ 0x7f, 0x8e, 0xd8, 0x57, 0x20, 0xef, 0x73, 0x3d, 0xd2, 0xbc, 0xb0, 0xc6, 0x0a, 0xed, 0x58, 0x62,
+ 0xf9, 0x34, 0x42, 0x7c, 0xdf, 0x68, 0xaa, 0x8c, 0x15, 0x4d, 0x23, 0x02, 0x8c, 0x15, 0x1e, 0xbd,
+ 0x0e, 0x79, 0x4a, 0x0c, 0x3f, 0x1c, 0xcc, 0x16, 0x94, 0x48, 0xcc, 0xa1, 0xa7, 0xbd, 0xea, 0xa4,
+ 0x14, 0xce, 0xbf, 0xb1, 0xa4, 0x46, 0xf7, 0x61, 0xc2, 0x22, 0x81, 0x61, 0x3b, 0x62, 0x1e, 0x4b,
+ 0xbd, 0xae, 0x17, 0xc2, 0x1a, 0x82, 0xb5, 0x5e, 0x62, 0x36, 0xc9, 0x0f, 0xac, 0x04, 0xb2, 0x6c,
+ 0x6b, 0x7a, 0x96, 0x18, 0x27, 0x72, 0x51, 0xb6, 0x5d, 0xf3, 0x2c, 0x82, 0x39, 0x46, 0x7f, 0xac,
+ 0x41, 0x49, 0x48, 0x5a, 0x33, 0x3a, 0x3e, 0x41, 0x2b, 0xa1, 0x17, 0xe2, 0xba, 0x55, 0x27, 0x37,
+ 0xce, 0x06, 0x8e, 0xd3, 0x5e, 0xb5, 0xc8, 0xc9, 0xf8, 0x24, 0xa2, 0x1c, 0x88, 0x9d, 0x51, 0xe6,
+ 0x9c, 0x33, 0x7a, 0x19, 0x72, 0xfc, 0xf5, 0xc8, 0xc3, 0x0c, 0xdf, 0x3a, 0x7f, 0x60, 0x58, 0xe0,
+ 0xf4, 0x8f, 0x32, 0x50, 0x4e, 0x38, 0x97, 0x62, 0x16, 0x08, 0x17, 0x8a, 0x99, 0x14, 0x4b, 0xea,
+ 0xe1, 0x7f, 0x51, 0xca, 0xda, 0x93, 0x7f, 0x91, 0xda, 0xf3, 0x3d, 0xc8, 0x9b, 0xec, 0x8c, 0xd4,
+ 0x3f, 0xde, 0x2b, 0xa3, 0x5c, 0x27, 0x3f, 0xdd, 0x28, 0x1a, 0xf9, 0xa7, 0x8f, 0xa5, 0x40, 0x74,
+ 0x13, 0x66, 0x29, 0x09, 0x68, 0x77, 0x75, 0x3f, 0x20, 0x34, 0x3e, 0xc4, 0xe7, 0xa2, 0x8e, 0x1b,
+ 0xf7, 0x13, 0xe0, 0x41, 0x1e, 0x7d, 0x0f, 0x26, 0xef, 0x18, 0x7b, 0x4e, 0xf8, 0x07, 0x14, 0x86,
+ 0xb2, 0xed, 0x9a, 0x4e, 0xc7, 0x22, 0x22, 0x1b, 0xab, 0xec, 0xa5, 0x1e, 0xed, 0x66, 0x1c, 0x79,
+ 0xda, 0xab, 0xce, 0x25, 0x00, 0xe2, 0x1f, 0x17, 0x9c, 0x14, 0xa1, 0x3b, 0x30, 0xfe, 0x09, 0x4e,
+ 0x8f, 0xdf, 0x87, 0x62, 0xd4, 0xdf, 0x7f, 0xcc, 0x2a, 0xf5, 0x87, 0x50, 0x60, 0x11, 0xaf, 0xe6,
+ 0xd2, 0x73, 0x5a, 0x9c, 0x64, 0xe3, 0x94, 0x49, 0xd3, 0x38, 0xe9, 0x2d, 0x28, 0xdf, 0x6d, 0x5b,
+ 0x2f, 0xf8, 0x17, 0x64, 0x26, 0x75, 0xd5, 0xba, 0x0a, 0xe2, 0xcf, 0x74, 0x56, 0x20, 0x44, 0xe5,
+ 0x8e, 0x15, 0x88, 0x78, 0xe1, 0x8d, 0xed, 0xca, 0x7f, 0xa6, 0x01, 0xf0, 0xa5, 0xd4, 0xfa, 0x11,
+ 0x71, 0x03, 0x76, 0x0e, 0x2c, 0xf0, 0xfb, 0xcf, 0x81, 0x67, 0x06, 0x8e, 0x41, 0x77, 0x21, 0xef,
+ 0x89, 0x68, 0x12, 0x7f, 0x43, 0x8e, 0xb8, 0xf9, 0x0c, 0x1f, 0x81, 0x88, 0x27, 0x2c, 0x85, 0xd5,
+ 0x97, 0x9e, 0x3e, 0x5b, 0x18, 0xfb, 0xe0, 0xd9, 0xc2, 0xd8, 0x87, 0xcf, 0x16, 0xc6, 0xde, 0x3d,
+ 0x59, 0xd0, 0x9e, 0x9e, 0x2c, 0x68, 0x1f, 0x9c, 0x2c, 0x68, 0x1f, 0x9e, 0x2c, 0x68, 0x1f, 0x9d,
+ 0x2c, 0x68, 0x8f, 0xff, 0xb9, 0x30, 0x76, 0x3f, 0x73, 0xb4, 0xf2, 0xdf, 0x00, 0x00, 0x00, 0xff,
+ 0xff, 0x61, 0xb7, 0xc5, 0x7c, 0xc2, 0x24, 0x00, 0x00,
}
func (m *APIGroup) Marshal() (dAtA []byte, err error) {
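Reviewer note: the byte array closed above is this file's own FileDescriptorProto, gzip-compressed by the generator (the regenerated comment even states the new size). A minimal sketch of how such a blob can be inflated back into raw descriptor bytes; the package and function names here are hypothetical, not part of this diff:

```
package descutil // hypothetical helper, for illustration only

import (
	"bytes"
	"compress/gzip"
	"io"
)

// inflateDescriptor reverses the compression applied by the generator:
// it gunzips a registered descriptor blob back into the raw
// FileDescriptorProto bytes.
func inflateDescriptor(gz []byte) ([]byte, error) {
	zr, err := gzip.NewReader(bytes.NewReader(gz))
	if err != nil {
		return nil, err
	}
	defer zr.Close()
	return io.ReadAll(zr)
}
```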
@@ -1950,6 +1920,36 @@ func (m *ExportOptions) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
+func (m *FieldsV1) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *FieldsV1) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *FieldsV1) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.Raw != nil {
+ i -= len(m.Raw)
+ copy(dAtA[i:], m.Raw)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.Raw)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
func (m *GetOptions) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
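Reviewer note: the `dAtA[i] = 0xa` written by the new FieldsV1.MarshalToSizedBuffer above is the protobuf tag for field 1 with wire type 2 (length-delimited). A quick runnable check of the arithmetic, illustrative only:

```
package main

import "fmt"

func main() {
	const fieldNum, wireType = 1, 2 // Raw is field 1, length-delimited
	fmt.Printf("0x%x\n", fieldNum<<3|wireType) // prints 0xa
}
```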
@@ -2466,9 +2466,9 @@ func (m *ManagedFieldsEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
_ = i
var l int
_ = l
- if m.Fields != nil {
+ if m.FieldsV1 != nil {
{
- size, err := m.Fields.MarshalToSizedBuffer(dAtA[:i])
+ size, err := m.FieldsV1.MarshalToSizedBuffer(dAtA[:i])
if err != nil {
return 0, err
}
@@ -2476,8 +2476,13 @@ func (m *ManagedFieldsEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
i = encodeVarintGenerated(dAtA, i, uint64(size))
}
i--
- dAtA[i] = 0x2a
+ dAtA[i] = 0x3a
}
+ i -= len(m.FieldsType)
+ copy(dAtA[i:], m.FieldsType)
+ i = encodeVarintGenerated(dAtA, i, uint64(len(m.FieldsType)))
+ i--
+ dAtA[i] = 0x32
if m.Time != nil {
{
size, err := m.Time.MarshalToSizedBuffer(dAtA[:i])
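Reviewer note: the renumbering in this hunk is visible straight in the tag bytes. FieldsV1 moved from the removed field 5 to field 7, and the new FieldsType string took field 6; the same fieldNum<<3|wireType arithmetic applies. Illustrative constants, not part of the generated file:

```
const (
	tagOldFields  = 5<<3 | 2 // 0x2a, the tag this hunk removes
	tagFieldsType = 6<<3 | 2 // 0x32, written for the new FieldsType string
	tagFieldsV1   = 7<<3 | 2 // 0x3a, written for the renumbered FieldsV1
)
```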
@@ -2933,65 +2938,6 @@ func (m *Preconditions) MarshalToSizedBuffer(dAtA []byte) (int, error) {
return len(dAtA) - i, nil
}
-func (m *ProtoFields) Marshal() (dAtA []byte, err error) {
- size := m.Size()
- dAtA = make([]byte, size)
- n, err := m.MarshalToSizedBuffer(dAtA[:size])
- if err != nil {
- return nil, err
- }
- return dAtA[:n], nil
-}
-
-func (m *ProtoFields) MarshalTo(dAtA []byte) (int, error) {
- size := m.Size()
- return m.MarshalToSizedBuffer(dAtA[:size])
-}
-
-func (m *ProtoFields) MarshalToSizedBuffer(dAtA []byte) (int, error) {
- i := len(dAtA)
- _ = i
- var l int
- _ = l
- if m.Raw != nil {
- i -= len(m.Raw)
- copy(dAtA[i:], m.Raw)
- i = encodeVarintGenerated(dAtA, i, uint64(len(m.Raw)))
- i--
- dAtA[i] = 0x12
- }
- if len(m.Map) > 0 {
- keysForMap := make([]string, 0, len(m.Map))
- for k := range m.Map {
- keysForMap = append(keysForMap, string(k))
- }
- github_com_gogo_protobuf_sortkeys.Strings(keysForMap)
- for iNdEx := len(keysForMap) - 1; iNdEx >= 0; iNdEx-- {
- v := m.Map[string(keysForMap[iNdEx])]
- baseI := i
- {
- size, err := (&v).MarshalToSizedBuffer(dAtA[:i])
- if err != nil {
- return 0, err
- }
- i -= size
- i = encodeVarintGenerated(dAtA, i, uint64(size))
- }
- i--
- dAtA[i] = 0x12
- i -= len(keysForMap[iNdEx])
- copy(dAtA[i:], keysForMap[iNdEx])
- i = encodeVarintGenerated(dAtA, i, uint64(len(keysForMap[iNdEx])))
- i--
- dAtA[i] = 0xa
- i = encodeVarintGenerated(dAtA, i, uint64(baseI-i))
- i--
- dAtA[i] = 0xa
- }
- }
- return len(dAtA) - i, nil
-}
-
func (m *RootPaths) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -3609,6 +3555,19 @@ func (m *ExportOptions) Size() (n int) {
return n
}
+func (m *FieldsV1) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Raw != nil {
+ l = len(m.Raw)
+ n += 1 + l + sovGenerated(uint64(l))
+ }
+ return n
+}
+
func (m *GetOptions) Size() (n int) {
if m == nil {
return 0
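Reviewer note: FieldsV1.Size above charges one tag byte, the payload length, and sovGenerated(len) bytes for the length varint. A hypothetical standalone version of that length helper; the generated sovGenerated elsewhere in this file computes the same thing:

```
// varintLen returns how many bytes the unsigned varint encoding of x
// needs: one byte per 7 significant bits.
func varintLen(x uint64) (n int) {
	for {
		n++
		x >>= 7
		if x == 0 {
			return n
		}
	}
}
```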
@@ -3818,8 +3777,10 @@ func (m *ManagedFieldsEntry) Size() (n int) {
l = m.Time.Size()
n += 1 + l + sovGenerated(uint64(l))
}
- if m.Fields != nil {
- l = m.Fields.Size()
+ l = len(m.FieldsType)
+ n += 1 + l + sovGenerated(uint64(l))
+ if m.FieldsV1 != nil {
+ l = m.FieldsV1.Size()
n += 1 + l + sovGenerated(uint64(l))
}
return n
@@ -3989,28 +3950,6 @@ func (m *Preconditions) Size() (n int) {
return n
}
-func (m *ProtoFields) Size() (n int) {
- if m == nil {
- return 0
- }
- var l int
- _ = l
- if len(m.Map) > 0 {
- for k, v := range m.Map {
- _ = k
- _ = v
- l = v.Size()
- mapEntrySize := 1 + len(k) + sovGenerated(uint64(len(k))) + 1 + l + sovGenerated(uint64(l))
- n += mapEntrySize + 1 + sovGenerated(uint64(mapEntrySize))
- }
- }
- if m.Raw != nil {
- l = len(m.Raw)
- n += 1 + l + sovGenerated(uint64(l))
- }
- return n
-}
-
func (m *RootPaths) Size() (n int) {
if m == nil {
return 0
@@ -4305,6 +4244,16 @@ func (this *ExportOptions) String() string {
}, "")
return s
}
+func (this *FieldsV1) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&FieldsV1{`,
+ `Raw:` + valueToStringGenerated(this.Raw) + `,`,
+ `}`,
+ }, "")
+ return s
+}
func (this *GetOptions) String() string {
if this == nil {
return "nil"
@@ -4419,7 +4368,8 @@ func (this *ManagedFieldsEntry) String() string {
`Operation:` + fmt.Sprintf("%v", this.Operation) + `,`,
`APIVersion:` + fmt.Sprintf("%v", this.APIVersion) + `,`,
`Time:` + strings.Replace(fmt.Sprintf("%v", this.Time), "Time", "Time", 1) + `,`,
- `Fields:` + strings.Replace(fmt.Sprintf("%v", this.Fields), "Fields", "Fields", 1) + `,`,
+ `FieldsType:` + fmt.Sprintf("%v", this.FieldsType) + `,`,
+ `FieldsV1:` + strings.Replace(this.FieldsV1.String(), "FieldsV1", "FieldsV1", 1) + `,`,
`}`,
}, "")
return s
@@ -4552,27 +4502,6 @@ func (this *Preconditions) String() string {
}, "")
return s
}
-func (this *ProtoFields) String() string {
- if this == nil {
- return "nil"
- }
- keysForMap := make([]string, 0, len(this.Map))
- for k := range this.Map {
- keysForMap = append(keysForMap, k)
- }
- github_com_gogo_protobuf_sortkeys.Strings(keysForMap)
- mapStringForMap := "map[string]Fields{"
- for _, k := range keysForMap {
- mapStringForMap += fmt.Sprintf("%v: %v,", k, this.Map[k])
- }
- mapStringForMap += "}"
- s := strings.Join([]string{`&ProtoFields{`,
- `Map:` + mapStringForMap + `,`,
- `Raw:` + valueToStringGenerated(this.Raw) + `,`,
- `}`,
- }, "")
- return s
-}
func (this *RootPaths) String() string {
if this == nil {
return "nil"
@@ -6056,6 +5985,93 @@ func (m *ExportOptions) Unmarshal(dAtA []byte) error {
}
return nil
}
+func (m *FieldsV1) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: FieldsV1: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: FieldsV1: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Raw", wireType)
+ }
+ var byteLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ byteLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if byteLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + byteLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Raw = append(m.Raw[:0], dAtA[iNdEx:postIndex]...)
+ if m.Raw == nil {
+ m.Raw = []byte{}
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipGenerated(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if skippy < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
func (m *GetOptions) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
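Reviewer note: every field in these generated Unmarshal methods starts with the same inlined loop, accumulating 7 bits per byte until the continuation bit (0x80) clears. Factored out as a standalone sketch with assumed names, not part of the generated code:

```
package varint // hypothetical

import (
	"errors"
	"io"
)

// readUvarint mirrors the inlined decoding loop used by the generated
// Unmarshal methods: little-endian base 128, high bit means continue.
func readUvarint(dAtA []byte) (v uint64, n int, err error) {
	for shift := uint(0); ; shift += 7 {
		if shift >= 64 {
			return 0, 0, errors.New("varint overflows uint64")
		}
		if n >= len(dAtA) {
			return 0, 0, io.ErrUnexpectedEOF
		}
		b := dAtA[n]
		n++
		v |= uint64(b&0x7F) << shift
		if b < 0x80 {
			return v, n, nil
		}
	}
}
```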
@@ -7980,9 +7996,41 @@ func (m *ManagedFieldsEntry) Unmarshal(dAtA []byte) error {
return err
}
iNdEx = postIndex
- case 5:
+ case 6:
if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Fields", wireType)
+ return fmt.Errorf("proto: wrong wireType = %d for field FieldsType", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowGenerated
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthGenerated
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.FieldsType = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 7:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field FieldsV1", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
@@ -8009,10 +8057,10 @@ func (m *ManagedFieldsEntry) Unmarshal(dAtA []byte) error {
if postIndex > l {
return io.ErrUnexpectedEOF
}
- if m.Fields == nil {
- m.Fields = &Fields{}
+ if m.FieldsV1 == nil {
+ m.FieldsV1 = &FieldsV1{}
}
- if err := m.Fields.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ if err := m.FieldsV1.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
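Reviewer note: on the decode side the tag is split back apart (`wire >> 3` recovers the field number, `wire & 0x7` the wire type), which is why the cases above moved from 5 to 6 and 7. A tiny runnable check, illustrative only:

```
package main

import "fmt"

func main() {
	wire := uint64(0x3a) // the FieldsV1 tag byte written by Marshal
	fieldNum := wire >> 3
	wireType := wire & 0x7
	fmt.Println(fieldNum, wireType) // 7 2: field number, then length-delimited
}
```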
@@ -9518,222 +9566,6 @@ func (m *Preconditions) Unmarshal(dAtA []byte) error {
}
return nil
}
-func (m *ProtoFields) Unmarshal(dAtA []byte) error {
- l := len(dAtA)
- iNdEx := 0
- for iNdEx < l {
- preIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- wireType := int(wire & 0x7)
- if wireType == 4 {
- return fmt.Errorf("proto: ProtoFields: wiretype end group for non-group")
- }
- if fieldNum <= 0 {
- return fmt.Errorf("proto: ProtoFields: illegal tag %d (wire type %d)", fieldNum, wire)
- }
- switch fieldNum {
- case 1:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Map", wireType)
- }
- var msglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- msglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if msglen < 0 {
- return ErrInvalidLengthGenerated
- }
- postIndex := iNdEx + msglen
- if postIndex < 0 {
- return ErrInvalidLengthGenerated
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- if m.Map == nil {
- m.Map = make(map[string]Fields)
- }
- var mapkey string
- mapvalue := &Fields{}
- for iNdEx < postIndex {
- entryPreIndex := iNdEx
- var wire uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- wire |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- fieldNum := int32(wire >> 3)
- if fieldNum == 1 {
- var stringLenmapkey uint64
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- stringLenmapkey |= uint64(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- intStringLenmapkey := int(stringLenmapkey)
- if intStringLenmapkey < 0 {
- return ErrInvalidLengthGenerated
- }
- postStringIndexmapkey := iNdEx + intStringLenmapkey
- if postStringIndexmapkey < 0 {
- return ErrInvalidLengthGenerated
- }
- if postStringIndexmapkey > l {
- return io.ErrUnexpectedEOF
- }
- mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
- iNdEx = postStringIndexmapkey
- } else if fieldNum == 2 {
- var mapmsglen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- mapmsglen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if mapmsglen < 0 {
- return ErrInvalidLengthGenerated
- }
- postmsgIndex := iNdEx + mapmsglen
- if postmsgIndex < 0 {
- return ErrInvalidLengthGenerated
- }
- if postmsgIndex > l {
- return io.ErrUnexpectedEOF
- }
- mapvalue = &Fields{}
- if err := mapvalue.Unmarshal(dAtA[iNdEx:postmsgIndex]); err != nil {
- return err
- }
- iNdEx = postmsgIndex
- } else {
- iNdEx = entryPreIndex
- skippy, err := skipGenerated(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if skippy < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) > postIndex {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
- m.Map[mapkey] = *mapvalue
- iNdEx = postIndex
- case 2:
- if wireType != 2 {
- return fmt.Errorf("proto: wrong wireType = %d for field Raw", wireType)
- }
- var byteLen int
- for shift := uint(0); ; shift += 7 {
- if shift >= 64 {
- return ErrIntOverflowGenerated
- }
- if iNdEx >= l {
- return io.ErrUnexpectedEOF
- }
- b := dAtA[iNdEx]
- iNdEx++
- byteLen |= int(b&0x7F) << shift
- if b < 0x80 {
- break
- }
- }
- if byteLen < 0 {
- return ErrInvalidLengthGenerated
- }
- postIndex := iNdEx + byteLen
- if postIndex < 0 {
- return ErrInvalidLengthGenerated
- }
- if postIndex > l {
- return io.ErrUnexpectedEOF
- }
- m.Raw = append(m.Raw[:0], dAtA[iNdEx:postIndex]...)
- if m.Raw == nil {
- m.Raw = []byte{}
- }
- iNdEx = postIndex
- default:
- iNdEx = preIndex
- skippy, err := skipGenerated(dAtA[iNdEx:])
- if err != nil {
- return err
- }
- if skippy < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) < 0 {
- return ErrInvalidLengthGenerated
- }
- if (iNdEx + skippy) > l {
- return io.ErrUnexpectedEOF
- }
- iNdEx += skippy
- }
- }
-
- if iNdEx > l {
- return io.ErrUnexpectedEOF
- }
- return nil
-}
func (m *RootPaths) Unmarshal(dAtA []byte) error {
l := len(dAtA)
iNdEx := 0
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto
index ba36c3cd25e3f..ba1194dcc56ca 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/generated.proto
@@ -163,6 +163,7 @@ message DeleteOptions {
// Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be
// returned.
+ // +k8s:conversion-gen=false
// +optional
optional Preconditions preconditions = 2;
@@ -212,7 +213,7 @@ message ExportOptions {
optional bool exact = 2;
}
-// Fields stores a set of fields in a data structure like a Trie.
+// FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.
//
// Each key is either a '.' representing the field itself, and will always map to an empty set,
// or a string representing a sub-field or item. The string will follow one of these four formats:
@@ -223,15 +224,9 @@ message ExportOptions {
// If a key maps to an empty Fields value, the field that key represents is part of the set.
//
// The exact format is defined in sigs.k8s.io/structured-merge-diff
-// +protobuf.options.marshal=false
-// +protobuf.as=ProtoFields
-// +protobuf.options.(gogoproto.goproto_stringer)=false
-message Fields {
- // Map is the representation used in the alpha version of this API
- map<string, Fields> map = 1;
-
+message FieldsV1 {
// Raw is the underlying serialization of this object.
- optional bytes raw = 2;
+ optional bytes Raw = 1;
}
// GetOptions is the standard query options to the standard REST get call.
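Reviewer note: to make the Trie-of-keys description above concrete, here is an illustrative FieldsV1 payload, hand-written for this note rather than taken from the diff; the 'f:' and 'k:' key prefixes are the formats defined in sigs.k8s.io/structured-merge-diff:

```
{
  "f:metadata": {
    "f:labels": {
      ".": {},
      "f:app": {}
    }
  },
  "f:spec": {
    "f:containers": {
      "k:{\"name\":\"nginx\"}": {
        "f:image": {}
      }
    }
  }
}
```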
@@ -345,7 +340,7 @@ message LabelSelectorRequirement {
// List holds a list of objects, which may not be known by the server.
message List {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional ListMeta metadata = 1;
@@ -371,7 +366,7 @@ message ListMeta {
// Value must be treated as opaque by clients and passed unmodified back to the server.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
// +optional
optional string resourceVersion = 2;
@@ -393,9 +388,6 @@ message ListMeta {
// Servers older than v1.15 do not set this field.
// The intended use of the remainingItemCount is *estimating* the size of a collection. Clients
// should not rely on the remainingItemCount to be set or to be exact.
- //
- // This field is alpha and can be changed or removed without notice.
- //
// +optional
optional int64 remainingItemCount = 4;
}
@@ -425,9 +417,6 @@ message ListOptions {
// If this is not a watch, this field is ignored.
// If the feature gate WatchBookmarks is not enabled in apiserver,
// this field is ignored.
- //
- // This field is beta.
- //
// +optional
optional bool allowWatchBookmarks = 9;
@@ -500,9 +489,13 @@ message ManagedFieldsEntry {
// +optional
optional Time time = 4;
- // Fields identifies a set of fields.
+ // FieldsType is the discriminator for the different fields format and version.
+ // There is currently only one possible value: "FieldsV1"
+ optional string fieldsType = 6;
+
+ // FieldsV1 holds the first JSON version format as described in the "FieldsV1" type.
// +optional
- optional Fields fields = 5;
+ optional FieldsV1 fieldsV1 = 7;
}
// MicroTime is version of Time with microsecond level precision.
@@ -549,7 +542,7 @@ message ObjectMeta {
// should retry (optionally after the time indicated in the Retry-After header).
//
// Applied only if Name is not specified.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
// +optional
optional string generateName = 2;
@@ -593,7 +586,7 @@ message ObjectMeta {
// Populated by the system.
// Read-only.
  // Value must be treated as opaque by clients.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
// +optional
optional string resourceVersion = 6;
@@ -609,7 +602,7 @@ message ObjectMeta {
// Populated by the system.
// Read-only.
// Null for lists.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional Time creationTimestamp = 8;
@@ -630,7 +623,7 @@ message ObjectMeta {
//
// Populated by the system when a graceful deletion is requested.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional Time deletionTimestamp = 9;
@@ -668,6 +661,15 @@ message ObjectMeta {
// is an identifier for the responsible component that will remove the entry
// from the list. If the deletionTimestamp of the object is non-nil, entries
// in this list can only be removed.
+ // Finalizers may be processed and removed in any order. Order is NOT enforced
+ // because it introduces significant risk of stuck finalizers.
+  // finalizers is a shared field; any actor with permission can reorder it.
+ // If the finalizer list is processed in order, then this can lead to a situation
+ // in which the component responsible for the first finalizer in the list is
+ // waiting for a signal (field value, external system, or other) produced by a
+ // component responsible for a finalizer later in the list, resulting in a deadlock.
+ // Without enforced ordering finalizers are free to order amongst themselves and
+ // are not vulnerable to ordering changes in the list.
// +optional
// +patchStrategy=merge
repeated string finalizers = 14;
@@ -686,8 +688,6 @@ message ObjectMeta {
// "ci-cd". The set of fields is always in the version that the
// workflow used when modifying the object.
//
- // This field is alpha and can be changed or removed without notice.
- //
// +optional
repeated ManagedFieldsEntry managedFields = 17;
}
@@ -700,7 +700,7 @@ message OwnerReference {
optional string apiVersion = 5;
// Kind of the referent.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
optional string kind = 1;
// Name of the referent.
@@ -730,7 +730,7 @@ message OwnerReference {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
message PartialObjectMetadata {
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
optional ObjectMeta metadata = 1;
}
@@ -739,7 +739,7 @@ message PartialObjectMetadata {
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
message PartialObjectMetadataList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional ListMeta metadata = 1;
@@ -790,17 +790,6 @@ message Preconditions {
optional string resourceVersion = 2;
}
-// ProtoFields is a struct that is equivalent to Fields, but intended for
-// protobuf marshalling/unmarshalling. It is generated into a serialization
-// that matches Fields. Do not use in Go structs.
-message ProtoFields {
- // Map is the representation used in the alpha version of this API
- map<string, Fields> map = 1;
-
- // Raw is the underlying serialization of this object.
- optional bytes raw = 2;
-}
-
// RootPaths lists the paths available at root.
// For example: "/healthz", "/apis".
message RootPaths {
@@ -821,13 +810,13 @@ message ServerAddressByClientCIDR {
// Status is a return value for calls that don't return other objects.
message Status {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional ListMeta metadata = 1;
// Status of the operation.
// One of: "Success" or "Failure".
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
optional string status = 2;
@@ -898,7 +887,7 @@ message StatusDetails {
// The kind attribute of the resource associated with the status StatusReason.
// On some operations may differ from the requested resource Kind.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional string kind = 3;
@@ -976,14 +965,14 @@ message TypeMeta {
// Servers may infer this from the endpoint the client submits requests to.
// Cannot be updated.
// In CamelCase.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional string kind = 1;
// APIVersion defines the versioned schema of this representation of an object.
// Servers should convert recognized schemas to the latest internal value, and
// may reject unrecognized values.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
// +optional
optional string apiVersion = 2;
}
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/helpers.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/helpers.go
index 843cd3b15bf9e..ec016fd3c8da4 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/helpers.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/helpers.go
@@ -258,7 +258,7 @@ func ResetObjectMetaForStatus(meta, existingMeta Object) {
// MarshalJSON implements json.Marshaler
// MarshalJSON may get called on pointers or values, so implement MarshalJSON on value.
// http://stackoverflow.com/questions/21390979/custom-marshaljson-never-gets-called-in-go
-func (f Fields) MarshalJSON() ([]byte, error) {
+func (f FieldsV1) MarshalJSON() ([]byte, error) {
if f.Raw == nil {
return []byte("null"), nil
}
@@ -266,7 +266,7 @@ func (f Fields) MarshalJSON() ([]byte, error) {
}
// UnmarshalJSON implements json.Unmarshaler
-func (f *Fields) UnmarshalJSON(b []byte) error {
+func (f *FieldsV1) UnmarshalJSON(b []byte) error {
if f == nil {
return errors.New("metav1.Fields: UnmarshalJSON on nil pointer")
}
@@ -276,5 +276,5 @@ func (f *Fields) UnmarshalJSON(b []byte) error {
return nil
}
-var _ json.Marshaler = Fields{}
-var _ json.Unmarshaler = &Fields{}
+var _ json.Marshaler = FieldsV1{}
+var _ json.Unmarshaler = &FieldsV1{}
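Reviewer note: since FieldsV1 serializes as raw JSON (or "null" when Raw is nil), a round trip is a straight byte copy. A minimal sketch, assuming only the import path shown in this diff:

```
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	f := metav1.FieldsV1{Raw: []byte(`{"f:metadata":{"f:labels":{".":{}}}}`)}

	out, err := json.Marshal(f) // MarshalJSON has a value receiver, so a value works
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // the Raw bytes, emitted verbatim

	var back metav1.FieldsV1
	if err := json.Unmarshal(out, &back); err != nil {
		panic(err)
	}
	fmt.Println(string(back.Raw)) // round-trips unchanged
}
```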
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/register.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/register.go
index 368efe1efd976..a7b8aa34f9e8b 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/register.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/register.go
@@ -25,6 +25,13 @@ import (
// GroupName is the group name for this API.
const GroupName = "meta.k8s.io"
+var (
+	// localSchemeBuilder is used to make the compiler happy for autogenerated
+ // conversions. However, it's not used.
+ schemeBuilder runtime.SchemeBuilder
+ localSchemeBuilder = &schemeBuilder
+)
+
// SchemeGroupVersion is group version used to register these objects
var SchemeGroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1"}
@@ -40,6 +47,31 @@ func Kind(kind string) schema.GroupKind {
return SchemeGroupVersion.WithKind(kind).GroupKind()
}
+// scheme is the registry for the common types that adhere to the meta v1 API spec.
+var scheme = runtime.NewScheme()
+
+// ParameterCodec knows about query parameters used with the meta v1 API spec.
+var ParameterCodec = runtime.NewParameterCodec(scheme)
+
+func addEventConversionFuncs(scheme *runtime.Scheme) error {
+ return scheme.AddConversionFuncs(
+ Convert_v1_WatchEvent_To_watch_Event,
+ Convert_v1_InternalEvent_To_v1_WatchEvent,
+ Convert_watch_Event_To_v1_WatchEvent,
+ Convert_v1_WatchEvent_To_v1_InternalEvent,
+ )
+}
+
+var optionsTypes = []runtime.Object{
+ &ListOptions{},
+ &ExportOptions{},
+ &GetOptions{},
+ &DeleteOptions{},
+ &CreateOptions{},
+ &UpdateOptions{},
+ &PatchOptions{},
+}
+
// AddToGroupVersion registers common meta types into schemas.
func AddToGroupVersion(scheme *runtime.Scheme, groupVersion schema.GroupVersion) {
scheme.AddKnownTypeWithName(groupVersion.WithKind(WatchEventKind), &WatchEvent{})
@@ -48,21 +80,7 @@ func AddToGroupVersion(scheme *runtime.Scheme, groupVersion schema.GroupVersion)
&InternalEvent{},
)
// Supports legacy code paths, most callers should use metav1.ParameterCodec for now
- scheme.AddKnownTypes(groupVersion,
- &ListOptions{},
- &ExportOptions{},
- &GetOptions{},
- &DeleteOptions{},
- &CreateOptions{},
- &UpdateOptions{},
- &PatchOptions{},
- )
- utilruntime.Must(scheme.AddConversionFuncs(
- Convert_v1_WatchEvent_To_watch_Event,
- Convert_v1_InternalEvent_To_v1_WatchEvent,
- Convert_watch_Event_To_v1_WatchEvent,
- Convert_v1_WatchEvent_To_v1_InternalEvent,
- ))
+ scheme.AddKnownTypes(groupVersion, optionsTypes...)
// Register Unversioned types under their own special group
scheme.AddUnversionedTypes(Unversioned,
&Status{},
@@ -72,36 +90,14 @@ func AddToGroupVersion(scheme *runtime.Scheme, groupVersion schema.GroupVersion)
&APIResourceList{},
)
- // register manually. This usually goes through the SchemeBuilder, which we cannot use here.
- utilruntime.Must(AddConversionFuncs(scheme))
- utilruntime.Must(RegisterDefaults(scheme))
-}
-
-// scheme is the registry for the common types that adhere to the meta v1 API spec.
-var scheme = runtime.NewScheme()
-
-// ParameterCodec knows about query parameters used with the meta v1 API spec.
-var ParameterCodec = runtime.NewParameterCodec(scheme)
-
-func init() {
- scheme.AddUnversionedTypes(SchemeGroupVersion,
- &ListOptions{},
- &ExportOptions{},
- &GetOptions{},
- &DeleteOptions{},
- &CreateOptions{},
- &UpdateOptions{},
- &PatchOptions{},
- )
-
- if err := AddMetaToScheme(scheme); err != nil {
- panic(err)
- }
+ utilruntime.Must(addEventConversionFuncs(scheme))
// register manually. This usually goes through the SchemeBuilder, which we cannot use here.
+ utilruntime.Must(AddConversionFuncs(scheme))
utilruntime.Must(RegisterDefaults(scheme))
}
+// AddMetaToScheme registers base meta types into schemas.
func AddMetaToScheme(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion,
&Table{},
@@ -114,3 +110,12 @@ func AddMetaToScheme(scheme *runtime.Scheme) error {
Convert_Slice_string_To_v1_IncludeObjectPolicy,
)
}
+
+func init() {
+ scheme.AddUnversionedTypes(SchemeGroupVersion, optionsTypes...)
+
+ utilruntime.Must(AddMetaToScheme(scheme))
+
+ // register manually. This usually goes through the SchemeBuilder, which we cannot use here.
+ utilruntime.Must(RegisterDefaults(scheme))
+}
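Reviewer note: after this refactor both AddToGroupVersion and the package's own init() register from the single shared optionsTypes slice. A minimal usage sketch; the group name is hypothetical, the import paths are those shown in this diff:

```
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	s := runtime.NewScheme()
	gv := schema.GroupVersion{Group: "example.dev", Version: "v1"} // hypothetical group
	metav1.AddToGroupVersion(s, gv)
	// ListOptions, GetOptions, DeleteOptions, etc. from the shared
	// optionsTypes slice are now registered under example.dev/v1.
}
```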
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go
index b1e92ed66a0a7..bf125b62a73f1 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types.go
@@ -43,14 +43,14 @@ type TypeMeta struct {
// Servers may infer this from the endpoint the client submits requests to.
// Cannot be updated.
// In CamelCase.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
Kind string `json:"kind,omitempty" protobuf:"bytes,1,opt,name=kind"`
// APIVersion defines the versioned schema of this representation of an object.
// Servers should convert recognized schemas to the latest internal value, and
// may reject unrecognized values.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
// +optional
APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,2,opt,name=apiVersion"`
}
@@ -73,7 +73,7 @@ type ListMeta struct {
// Value must be treated as opaque by clients and passed unmodified back to the server.
// Populated by the system.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
// +optional
ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,2,opt,name=resourceVersion"`
@@ -95,9 +95,6 @@ type ListMeta struct {
// Servers older than v1.15 do not set this field.
// The intended use of the remainingItemCount is *estimating* the size of a collection. Clients
// should not rely on the remainingItemCount to be set or to be exact.
- //
- // This field is alpha and can be changed or removed without notice.
- //
// +optional
RemainingItemCount *int64 `json:"remainingItemCount,omitempty" protobuf:"bytes,4,opt,name=remainingItemCount"`
}
@@ -134,7 +131,7 @@ type ObjectMeta struct {
// should retry (optionally after the time indicated in the Retry-After header).
//
// Applied only if Name is not specified.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency
// +optional
GenerateName string `json:"generateName,omitempty" protobuf:"bytes,2,opt,name=generateName"`
@@ -178,7 +175,7 @@ type ObjectMeta struct {
// Populated by the system.
// Read-only.
	// Value must be treated as opaque by clients.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency
// +optional
ResourceVersion string `json:"resourceVersion,omitempty" protobuf:"bytes,6,opt,name=resourceVersion"`
@@ -194,7 +191,7 @@ type ObjectMeta struct {
// Populated by the system.
// Read-only.
// Null for lists.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
CreationTimestamp Time `json:"creationTimestamp,omitempty" protobuf:"bytes,8,opt,name=creationTimestamp"`
@@ -215,7 +212,7 @@ type ObjectMeta struct {
//
// Populated by the system when a graceful deletion is requested.
// Read-only.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
DeletionTimestamp *Time `json:"deletionTimestamp,omitempty" protobuf:"bytes,9,opt,name=deletionTimestamp"`
@@ -253,6 +250,15 @@ type ObjectMeta struct {
// is an identifier for the responsible component that will remove the entry
// from the list. If the deletionTimestamp of the object is non-nil, entries
// in this list can only be removed.
+ // Finalizers may be processed and removed in any order. Order is NOT enforced
+ // because it introduces significant risk of stuck finalizers.
+	// finalizers is a shared field; any actor with permission can reorder it.
+ // If the finalizer list is processed in order, then this can lead to a situation
+ // in which the component responsible for the first finalizer in the list is
+ // waiting for a signal (field value, external system, or other) produced by a
+ // component responsible for a finalizer later in the list, resulting in a deadlock.
+ // Without enforced ordering finalizers are free to order amongst themselves and
+ // are not vulnerable to ordering changes in the list.
// +optional
// +patchStrategy=merge
Finalizers []string `json:"finalizers,omitempty" patchStrategy:"merge" protobuf:"bytes,14,rep,name=finalizers"`
@@ -271,8 +277,6 @@ type ObjectMeta struct {
// "ci-cd". The set of fields is always in the version that the
// workflow used when modifying the object.
//
- // This field is alpha and can be changed or removed without notice.
- //
// +optional
ManagedFields []ManagedFieldsEntry `json:"managedFields,omitempty" protobuf:"bytes,17,rep,name=managedFields"`
}
@@ -297,7 +301,7 @@ type OwnerReference struct {
// API version of the referent.
APIVersion string `json:"apiVersion" protobuf:"bytes,5,opt,name=apiVersion"`
// Kind of the referent.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`
// Name of the referent.
// More info: http://kubernetes.io/docs/user-guide/identifiers#names
@@ -318,6 +322,7 @@ type OwnerReference struct {
BlockOwnerDeletion *bool `json:"blockOwnerDeletion,omitempty" protobuf:"varint,7,opt,name=blockOwnerDeletion"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ListOptions is the query options to a standard REST list call.
@@ -347,9 +352,6 @@ type ListOptions struct {
// If this is not a watch, this field is ignored.
// If the feature gate WatchBookmarks is not enabled in apiserver,
// this field is ignored.
- //
- // This field is beta.
- //
// +optional
AllowWatchBookmarks bool `json:"allowWatchBookmarks,omitempty" protobuf:"varint,9,opt,name=allowWatchBookmarks"`
@@ -400,6 +402,7 @@ type ListOptions struct {
Continue string `json:"continue,omitempty" protobuf:"bytes,8,opt,name=continue"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// ExportOptions is the query options to the standard REST get call.
@@ -414,6 +417,7 @@ type ExportOptions struct {
Exact bool `json:"exact" protobuf:"varint,2,opt,name=exact"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// GetOptions is the standard query options to the standard REST get call.
@@ -451,6 +455,7 @@ const (
DryRunAll = "All"
)
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// DeleteOptions may be provided when deleting an API object.
@@ -466,6 +471,7 @@ type DeleteOptions struct {
// Must be fulfilled before a deletion is carried out. If not possible, a 409 Conflict status will be
// returned.
+ // +k8s:conversion-gen=false
// +optional
Preconditions *Preconditions `json:"preconditions,omitempty" protobuf:"bytes,2,opt,name=preconditions"`
@@ -496,6 +502,7 @@ type DeleteOptions struct {
DryRun []string `json:"dryRun,omitempty" protobuf:"bytes,5,rep,name=dryRun"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// CreateOptions may be provided when creating an API object.
@@ -519,6 +526,7 @@ type CreateOptions struct {
FieldManager string `json:"fieldManager,omitempty" protobuf:"bytes,3,name=fieldManager"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// PatchOptions may be provided when patching an API object.
@@ -551,6 +559,7 @@ type PatchOptions struct {
FieldManager string `json:"fieldManager,omitempty" protobuf:"bytes,3,name=fieldManager"`
}
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
// UpdateOptions may be provided when updating an API object.
@@ -590,13 +599,13 @@ type Preconditions struct {
type Status struct {
TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
// Status of the operation.
// One of: "Success" or "Failure".
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
// +optional
Status string `json:"status,omitempty" protobuf:"bytes,2,opt,name=status"`
// A human-readable description of the status of this operation.
@@ -635,7 +644,7 @@ type StatusDetails struct {
Group string `json:"group,omitempty" protobuf:"bytes,2,opt,name=group"`
// The kind attribute of the resource associated with the status StatusReason.
// On some operations may differ from the requested resource Kind.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
Kind string `json:"kind,omitempty" protobuf:"bytes,3,opt,name=kind"`
// UID of the resource.
@@ -767,11 +776,13 @@ const (
// doesn't make any sense, for example deleting a read-only object. This is different than
// StatusReasonInvalid above which indicates that the API call could possibly succeed, but the
// data was invalid. API calls that return BadRequest can never succeed.
+ // Status code 400
StatusReasonBadRequest StatusReason = "BadRequest"
// StatusReasonMethodNotAllowed means that the action the client attempted to perform on the
// resource was not supported by the code - for instance, attempting to delete a resource that
// can only be created. API calls that return MethodNotAllowed can never succeed.
+ // Status code 405
StatusReasonMethodNotAllowed StatusReason = "MethodNotAllowed"
// StatusReasonNotAcceptable means that the accept types indicated by the client were not acceptable
@@ -870,7 +881,7 @@ const (
type List struct {
TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
@@ -1103,9 +1114,16 @@ type ManagedFieldsEntry struct {
// Time is timestamp of when these fields were set. It should always be empty if Operation is 'Apply'
// +optional
Time *Time `json:"time,omitempty" protobuf:"bytes,4,opt,name=time"`
- // Fields identifies a set of fields.
+
+ // Fields is tombstoned to show why 5 is a reserved protobuf tag.
+ //Fields *Fields `json:"fields,omitempty" protobuf:"bytes,5,opt,name=fields,casttype=Fields"`
+
+ // FieldsType is the discriminator for the different fields format and version.
+ // There is currently only one possible value: "FieldsV1"
+ FieldsType string `json:"fieldsType,omitempty" protobuf:"bytes,6,opt,name=fieldsType"`
+ // FieldsV1 holds the first JSON version format as described in the "FieldsV1" type.
// +optional
- Fields *Fields `json:"fields,omitempty" protobuf:"bytes,5,opt,name=fields,casttype=Fields"`
+ FieldsV1 *FieldsV1 `json:"fieldsV1,omitempty" protobuf:"bytes,7,opt,name=fieldsV1"`
}
// ManagedFieldsOperationType is the type of operation which led to a ManagedFieldsEntry being created.
@@ -1116,7 +1134,7 @@ const (
ManagedFieldsOperationUpdate ManagedFieldsOperationType = "Update"
)
-// Fields stores a set of fields in a data structure like a Trie.
+// FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.
//
// Each key is either a '.' representing the field itself, and will always map to an empty set,
// or a string representing a sub-field or item. The string will follow one of these four formats:
@@ -1127,12 +1145,9 @@ const (
// If a key maps to an empty Fields value, the field that key represents is part of the set.
//
// The exact format is defined in sigs.k8s.io/structured-merge-diff
-// +protobuf.options.marshal=false
-// +protobuf.as=ProtoFields
-// +protobuf.options.(gogoproto.goproto_stringer)=false
-type Fields struct {
+type FieldsV1 struct {
// Raw is the underlying serialization of this object.
- Raw []byte `json:"-" protobuf:"-"`
+ Raw []byte `json:"-" protobuf:"bytes,1,opt,name=Raw"`
}
// TODO: Table does not generate to protobuf because of the interface{} - fix protobuf
@@ -1147,7 +1162,7 @@ type Fields struct {
type Table struct {
TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
ListMeta `json:"metadata,omitempty"`
@@ -1257,6 +1272,7 @@ const (
)
// TableOptions are used when a Table is requested by the caller.
+// +k8s:conversion-gen:explicit-from=net/url.Values
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
type TableOptions struct {
TypeMeta `json:",inline"`
@@ -1278,7 +1294,7 @@ type TableOptions struct {
type PartialObjectMetadata struct {
TypeMeta `json:",inline"`
// Standard object's metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
// +optional
ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
@@ -1288,7 +1304,7 @@ type PartialObjectMetadata struct {
type PartialObjectMetadataList struct {
TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
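Illustrative aside, not part of the vendored change: a minimal sketch of how the new FieldsType/FieldsV1 pair on ManagedFieldsEntry fits together, assuming the usual metav1 alias for k8s.io/apimachinery/pkg/apis/meta/v1; the manager name and label path below are hypothetical.

	entry := metav1.ManagedFieldsEntry{
		Manager:    "example-controller", // hypothetical manager name
		Operation:  metav1.ManagedFieldsOperationUpdate,
		APIVersion: "v1",
		FieldsType: "FieldsV1", // the only value currently defined
		FieldsV1: &metav1.FieldsV1{
			// 'f:<name>' keys, following the trie format documented on the type.
			Raw: []byte(`{"f:metadata":{"f:labels":{"f:app":{}}}}`),
		},
	}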
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types_swagger_doc_generated.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types_swagger_doc_generated.go
index 5bbc110f979a9..b62e591ee8779 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/types_swagger_doc_generated.go
@@ -119,12 +119,12 @@ func (ExportOptions) SwaggerDoc() map[string]string {
return map_ExportOptions
}
-var map_Fields = map[string]string{
- "": "Fields stores a set of fields in a data structure like a Trie.\n\nEach key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is position of a item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set.\n\nThe exact format is defined in sigs.k8s.io/structured-merge-diff",
+var map_FieldsV1 = map[string]string{
+	"": "FieldsV1 stores a set of fields in a data structure like a Trie, in JSON format.\n\nEach key is either a '.' representing the field itself, and will always map to an empty set, or a string representing a sub-field or item. The string will follow one of these four formats: 'f:<name>', where <name> is the name of a field in a struct, or key in a map 'v:<value>', where <value> is the exact json formatted value of a list item 'i:<index>', where <index> is position of an item in a list 'k:<keys>', where <keys> is a map of a list item's key fields to their unique values If a key maps to an empty Fields value, the field that key represents is part of the set.\n\nThe exact format is defined in sigs.k8s.io/structured-merge-diff",
}
-func (Fields) SwaggerDoc() map[string]string {
- return map_Fields
+func (FieldsV1) SwaggerDoc() map[string]string {
+ return map_FieldsV1
}
var map_GetOptions = map[string]string{
@@ -169,7 +169,7 @@ func (LabelSelectorRequirement) SwaggerDoc() map[string]string {
var map_List = map[string]string{
"": "List holds a list of objects, which may not be known by the server.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "List of objects",
}
@@ -180,9 +180,9 @@ func (List) SwaggerDoc() map[string]string {
var map_ListMeta = map[string]string{
"": "ListMeta describes metadata that synthetic resources must have, including lists and various status objects. A resource may have only one of {ObjectMeta, ListMeta}.",
"selfLink": "selfLink is a URL representing this object. Populated by the system. Read-only.\n\nDEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release.",
- "resourceVersion": "String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency",
+ "resourceVersion": "String that identifies the server's internal version of this object that can be used by clients to determine when objects have changed. Value must be treated as opaque by clients and passed unmodified back to the server. Populated by the system. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency",
"continue": "continue may be set if the user set a limit on the number of items returned, and indicates that the server has more data available. The value is opaque and may be used to issue another request to the endpoint that served this list to retrieve the next set of available objects. Continuing a consistent list may not be possible if the server configuration has changed or more than a few minutes have passed. The resourceVersion field returned when using this continue value will be identical to the value in the first response, unless you have received this token from an error message.",
- "remainingItemCount": "remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is *estimating* the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact.\n\nThis field is alpha and can be changed or removed without notice.",
+ "remainingItemCount": "remainingItemCount is the number of subsequent items in the list which are not included in this list response. If the list request contained label or field selectors, then the number of remaining items is unknown and the field will be left unset and omitted during serialization. If the list is complete (either because it is not chunking or because this is the last chunk), then there are no more remaining items and this field will be left unset and omitted during serialization. Servers older than v1.15 do not set this field. The intended use of the remainingItemCount is *estimating* the size of a collection. Clients should not rely on the remainingItemCount to be set or to be exact.",
}
func (ListMeta) SwaggerDoc() map[string]string {
@@ -194,7 +194,7 @@ var map_ListOptions = map[string]string{
"labelSelector": "A selector to restrict the list of returned objects by their labels. Defaults to everything.",
"fieldSelector": "A selector to restrict the list of returned objects by their fields. Defaults to everything.",
"watch": "Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion.",
- "allowWatchBookmarks": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored.\n\nThis field is beta.",
+ "allowWatchBookmarks": "allowWatchBookmarks requests watch events with type \"BOOKMARK\". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. If the feature gate WatchBookmarks is not enabled in apiserver, this field is ignored.",
"resourceVersion": "When specified with a watch call, shows changes that occur after that particular version of a resource. Defaults to changes from the beginning of history. When specified for list: - if unset, then the result is returned from remote storage based on quorum-read flag; - if it's 0, then we simply return what we currently have in cache, no guarantee; - if set to non zero, then the result is at least as fresh as given rv.",
"timeoutSeconds": "Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity.",
"limit": "limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true.\n\nThe server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned.",
@@ -211,7 +211,8 @@ var map_ManagedFieldsEntry = map[string]string{
"operation": "Operation is the type of operation which lead to this ManagedFieldsEntry being created. The only valid values for this field are 'Apply' and 'Update'.",
"apiVersion": "APIVersion defines the version of this resource that this field set applies to. The format is \"group/version\" just like the top-level APIVersion field. It is necessary to track the version of a field set because it cannot be automatically converted.",
"time": "Time is timestamp of when these fields were set. It should always be empty if Operation is 'Apply'",
- "fields": "Fields identifies a set of fields.",
+ "fieldsType": "FieldsType is the discriminator for the different fields format and version. There is currently only one possible value: \"FieldsV1\"",
+ "fieldsV1": "FieldsV1 holds the first JSON version format as described in the \"FieldsV1\" type.",
}
func (ManagedFieldsEntry) SwaggerDoc() map[string]string {
@@ -221,21 +222,21 @@ func (ManagedFieldsEntry) SwaggerDoc() map[string]string {
var map_ObjectMeta = map[string]string{
"": "ObjectMeta is metadata that all persisted resources must have, which includes all objects users must create.",
"name": "Name must be unique within a namespace. Is required when creating resources, although some resources may allow a client to request the generation of an appropriate name automatically. Name is primarily intended for creation idempotence and configuration definition. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/identifiers#names",
- "generateName": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#idempotency",
+ "generateName": "GenerateName is an optional prefix, used by the server, to generate a unique name ONLY IF the Name field has not been provided. If this field is used, the name returned to the client will be different than the name passed. This value will also be combined with a unique suffix. The provided value has the same validation rules as the Name field, and may be truncated by the length of the suffix required to make the value unique on the server.\n\nIf this field is specified and the generated name exists, the server will NOT return a 409 - instead, it will either return 201 Created or 500 with Reason ServerTimeout indicating a unique name could not be found in the time allotted, and the client should retry (optionally after the time indicated in the Retry-After header).\n\nApplied only if Name is not specified. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency",
"namespace": "Namespace defines the space within each name must be unique. An empty namespace is equivalent to the \"default\" namespace, but \"default\" is the canonical representation. Not all objects are required to be scoped to a namespace - the value of this field for those objects will be empty.\n\nMust be a DNS_LABEL. Cannot be updated. More info: http://kubernetes.io/docs/user-guide/namespaces",
"selfLink": "SelfLink is a URL representing this object. Populated by the system. Read-only.\n\nDEPRECATED Kubernetes will stop propagating this field in 1.20 release and the field is planned to be removed in 1.21 release.",
"uid": "UID is the unique in time and space value for this object. It is typically generated by the server on successful creation of a resource and is not allowed to change on PUT operations.\n\nPopulated by the system. Read-only. More info: http://kubernetes.io/docs/user-guide/identifiers#uids",
- "resourceVersion": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients and . More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#concurrency-control-and-consistency",
+	"resourceVersion": "An opaque value that represents the internal version of this object that can be used by clients to determine when objects have changed. May be used for optimistic concurrency, change detection, and the watch operation on a resource or set of resources. Clients must treat these values as opaque and passed unmodified back to the server. They may only be valid for a particular resource or set of resources.\n\nPopulated by the system. Read-only. Value must be treated as opaque by clients. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency",
"generation": "A sequence number representing a specific generation of the desired state. Populated by the system. Read-only.",
- "creationTimestamp": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
- "deletionTimestamp": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "creationTimestamp": "CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\n\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
+ "deletionTimestamp": "DeletionTimestamp is RFC 3339 date and time at which this resource will be deleted. This field is set by the server when a graceful deletion is requested by the user, and is not directly settable by a client. The resource is expected to be deleted (no longer visible from resource lists, and not reachable by name) after the time in this field, once the finalizers list is empty. As long as the finalizers list contains items, deletion is blocked. Once the deletionTimestamp is set, this value may not be unset or be set further into the future, although it may be shortened or the resource may be deleted prior to this time. For example, a user may request that a pod is deleted in 30 seconds. The Kubelet will react by sending a graceful termination signal to the containers in the pod. After that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL) to the container and after cleanup, remove the pod from the API. In the presence of network partitions, this object may still exist after this timestamp, until an administrator or automated process can determine the resource is fully terminated. If not set, graceful deletion of the object has not been requested.\n\nPopulated by the system when a graceful deletion is requested. Read-only. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
"deletionGracePeriodSeconds": "Number of seconds allowed for this object to gracefully terminate before it will be removed from the system. Only set when deletionTimestamp is also set. May only be shortened. Read-only.",
"labels": "Map of string keys and values that can be used to organize and categorize (scope and select) objects. May match selectors of replication controllers and services. More info: http://kubernetes.io/docs/user-guide/labels",
"annotations": "Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. They are not queryable and should be preserved when modifying objects. More info: http://kubernetes.io/docs/user-guide/annotations",
"ownerReferences": "List of objects depended by this object. If ALL objects in the list have been deleted, this object will be garbage collected. If this object is managed by a controller, then an entry in this list will point to this controller, with the controller field set to true. There cannot be more than one managing controller.",
- "finalizers": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed.",
+ "finalizers": "Must be empty before the object is deleted from the registry. Each entry is an identifier for the responsible component that will remove the entry from the list. If the deletionTimestamp of the object is non-nil, entries in this list can only be removed. Finalizers may be processed and removed in any order. Order is NOT enforced because it introduces significant risk of stuck finalizers. finalizers is a shared field, any actor with permission can reorder it. If the finalizer list is processed in order, then this can lead to a situation in which the component responsible for the first finalizer in the list is waiting for a signal (field value, external system, or other) produced by a component responsible for a finalizer later in the list, resulting in a deadlock. Without enforced ordering finalizers are free to order amongst themselves and are not vulnerable to ordering changes in the list.",
"clusterName": "The name of the cluster which the object belongs to. This is used to distinguish resources with same name and namespace in different clusters. This field is not set anywhere right now and apiserver is going to ignore it if set in create or update request.",
- "managedFields": "ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object.\n\nThis field is alpha and can be changed or removed without notice.",
+ "managedFields": "ManagedFields maps workflow-id and version to the set of fields that are managed by that workflow. This is mostly for internal housekeeping, and users typically shouldn't need to set or understand this field. A workflow can be the user's name, a controller's name, or the name of a specific apply path like \"ci-cd\". The set of fields is always in the version that the workflow used when modifying the object.",
}
func (ObjectMeta) SwaggerDoc() map[string]string {
@@ -245,7 +246,7 @@ func (ObjectMeta) SwaggerDoc() map[string]string {
var map_OwnerReference = map[string]string{
"": "OwnerReference contains enough information to let you identify an owning object. An owning object must be in the same namespace as the dependent, or be cluster-scoped, so there is no namespace field.",
"apiVersion": "API version of the referent.",
- "kind": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "kind": "Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"name": "Name of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#names",
"uid": "UID of the referent. More info: http://kubernetes.io/docs/user-guide/identifiers#uids",
"controller": "If true, this reference points to the managing controller.",
@@ -258,7 +259,7 @@ func (OwnerReference) SwaggerDoc() map[string]string {
var map_PartialObjectMetadata = map[string]string{
"": "PartialObjectMetadata is a generic representation of any object with ObjectMeta. It allows clients to get access to a particular ObjectMeta schema without knowing the details of the version.",
- "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata",
+ "metadata": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata",
}
func (PartialObjectMetadata) SwaggerDoc() map[string]string {
@@ -267,7 +268,7 @@ func (PartialObjectMetadata) SwaggerDoc() map[string]string {
var map_PartialObjectMetadataList = map[string]string{
"": "PartialObjectMetadataList contains a list of objects containing only their metadata",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "items contains each of the included items.",
}
@@ -325,8 +326,8 @@ func (ServerAddressByClientCIDR) SwaggerDoc() map[string]string {
var map_Status = map[string]string{
"": "Status is a return value for calls that don't return other objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
- "status": "Status of the operation. One of: \"Success\" or \"Failure\". More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
+ "status": "Status of the operation. One of: \"Success\" or \"Failure\". More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status",
"message": "A human-readable description of the status of this operation.",
"reason": "A machine-readable description of why this operation is in the \"Failure\" status. If this value is empty there is no information available. A Reason clarifies an HTTP status code but does not override it.",
"details": "Extended data associated with the reason. Each reason may define its own extended details. This field is optional and the data returned is not guaranteed to conform to any schema except that defined by the reason type.",
@@ -352,7 +353,7 @@ var map_StatusDetails = map[string]string{
"": "StatusDetails is a set of additional properties that MAY be set by the server to provide additional information about a response. The Reason field of a Status object defines what attributes will be set. Clients must ignore fields that do not match the defined type of each attribute, and should assume that any attribute may be empty, invalid, or under defined.",
"name": "The name attribute of the resource associated with the status StatusReason (when there is a single name which can be described).",
"group": "The group attribute of the resource associated with the status StatusReason.",
- "kind": "The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "kind": "The kind attribute of the resource associated with the status StatusReason. On some operations may differ from the requested resource Kind. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"uid": "UID of the resource. (when there is a single resource which can be described). More info: http://kubernetes.io/docs/user-guide/identifiers#uids",
"causes": "The Causes array includes more details associated with the StatusReason failure. Not all StatusReasons may provide detailed causes.",
"retryAfterSeconds": "If specified, the time in seconds before the operation should be retried. Some errors may indicate the client must take an alternate action - for those errors this field may indicate how long to wait before taking the alternate action.",
@@ -364,7 +365,7 @@ func (StatusDetails) SwaggerDoc() map[string]string {
var map_Table = map[string]string{
"": "Table is a tabular representation of a set of API resources. The server transforms the object into a set of preferred columns for quickly reviewing the objects.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"columnDefinitions": "columnDefinitions describes each column in the returned items array. The number of cells per row will always match the number of column definitions.",
"rows": "rows is the list of items in the table.",
}
@@ -420,8 +421,8 @@ func (TableRowCondition) SwaggerDoc() map[string]string {
var map_TypeMeta = map[string]string{
"": "TypeMeta describes an individual object in an API response or request with strings representing the type of the object and its API schema version. Structures that are versioned or persisted should inline TypeMeta.",
- "kind": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
- "apiVersion": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources",
+ "kind": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
+ "apiVersion": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
}
func (TypeMeta) SwaggerDoc() map[string]string {
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/helpers.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/helpers.go
index 3b07e86db8f3c..4244b8a6df1d1 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/helpers.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/unstructured/helpers.go
@@ -27,11 +27,15 @@ import (
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/apimachinery/pkg/types"
"k8s.io/apimachinery/pkg/util/json"
+ "k8s.io/klog"
)
// NestedFieldCopy returns a deep copy of the value of a nested field.
// Returns false if the value is missing.
// No error is returned for a nil field.
+//
+// Note: fields passed to this function are treated as keys within the passed
+// object; no array/slice syntax is supported.
func NestedFieldCopy(obj map[string]interface{}, fields ...string) (interface{}, bool, error) {
val, found, err := NestedFieldNoCopy(obj, fields...)
if !found || err != nil {
@@ -43,6 +47,9 @@ func NestedFieldCopy(obj map[string]interface{}, fields ...string) (interface{},
// NestedFieldNoCopy returns a reference to a nested field.
// Returns false if value is not found and an error if unable
// to traverse obj.
+//
+// Note: fields passed to this function are treated as keys within the passed
+// object; no array/slice syntax is supported.
func NestedFieldNoCopy(obj map[string]interface{}, fields ...string) (interface{}, bool, error) {
var val interface{} = obj
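Illustrative aside, not part of the vendored change: the key-only lookup the new doc comments call out. Each variadic field is matched as a plain map key, so array/slice syntax such as "containers[0]" is never interpreted; the object below is hypothetical.

	obj := map[string]interface{}{
		"spec": map[string]interface{}{
			"replicas": int64(3),
		},
	}
	val, found, err := unstructured.NestedFieldNoCopy(obj, "spec", "replicas")
	// val == int64(3), found == true, err == nil. A field such as
	// "spec.containers[0]" would be looked up verbatim as a key and not found.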
@@ -323,6 +330,8 @@ var UnstructuredJSONScheme runtime.Codec = unstructuredJSONScheme{}
type unstructuredJSONScheme struct{}
+const unstructuredJSONSchemeIdentifier runtime.Identifier = "unstructuredJSON"
+
func (s unstructuredJSONScheme) Decode(data []byte, _ *schema.GroupVersionKind, obj runtime.Object) (runtime.Object, *schema.GroupVersionKind, error) {
var err error
if obj != nil {
@@ -343,7 +352,14 @@ func (s unstructuredJSONScheme) Decode(data []byte, _ *schema.GroupVersionKind,
return obj, &gvk, nil
}
-func (unstructuredJSONScheme) Encode(obj runtime.Object, w io.Writer) error {
+func (s unstructuredJSONScheme) Encode(obj runtime.Object, w io.Writer) error {
+ if co, ok := obj.(runtime.CacheableObject); ok {
+ return co.CacheEncode(s.Identifier(), s.doEncode, w)
+ }
+ return s.doEncode(obj, w)
+}
+
+func (unstructuredJSONScheme) doEncode(obj runtime.Object, w io.Writer) error {
switch t := obj.(type) {
case *Unstructured:
return json.NewEncoder(w).Encode(t.Object)
@@ -367,6 +383,11 @@ func (unstructuredJSONScheme) Encode(obj runtime.Object, w io.Writer) error {
}
}
+// Identifier implements runtime.Encoder interface.
+func (unstructuredJSONScheme) Identifier() runtime.Identifier {
+ return unstructuredJSONSchemeIdentifier
+}
+
func (s unstructuredJSONScheme) decode(data []byte) (runtime.Object, error) {
type detector struct {
Items gojson.RawMessage
@@ -394,12 +415,6 @@ func (s unstructuredJSONScheme) decodeInto(data []byte, obj runtime.Object) erro
return s.decodeToUnstructured(data, x)
case *UnstructuredList:
return s.decodeToList(data, x)
- case *runtime.VersionedObjects:
- o, err := s.decode(data)
- if err == nil {
- x.Objects = []runtime.Object{o}
- }
- return err
default:
return json.Unmarshal(data, x)
}
@@ -454,12 +469,30 @@ func (s unstructuredJSONScheme) decodeToList(data []byte, list *UnstructuredList
return nil
}
-type JSONFallbackEncoder struct {
- runtime.Encoder
+type jsonFallbackEncoder struct {
+ encoder runtime.Encoder
+ identifier runtime.Identifier
}
-func (c JSONFallbackEncoder) Encode(obj runtime.Object, w io.Writer) error {
- err := c.Encoder.Encode(obj, w)
+func NewJSONFallbackEncoder(encoder runtime.Encoder) runtime.Encoder {
+ result := map[string]string{
+ "name": "fallback",
+ "base": string(encoder.Identifier()),
+ }
+ identifier, err := gojson.Marshal(result)
+ if err != nil {
+ klog.Fatalf("Failed marshaling identifier for jsonFallbackEncoder: %v", err)
+ }
+ return &jsonFallbackEncoder{
+ encoder: encoder,
+ identifier: runtime.Identifier(identifier),
+ }
+}
+
+func (c *jsonFallbackEncoder) Encode(obj runtime.Object, w io.Writer) error {
+ // There is no need to handle runtime.CacheableObject, as we only
+ // fallback to other encoders here.
+ err := c.encoder.Encode(obj, w)
if runtime.IsNotRegisteredError(err) {
switch obj.(type) {
case *Unstructured, *UnstructuredList:
@@ -468,3 +501,8 @@ func (c JSONFallbackEncoder) Encode(obj runtime.Object, w io.Writer) error {
}
return err
}
+
+// Identifier implements runtime.Encoder interface.
+func (c *jsonFallbackEncoder) Identifier() runtime.Identifier {
+ return c.identifier
+}
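Illustrative aside, not part of the vendored change: a sketch of the exported constructor that replaces the previously public JSONFallbackEncoder struct. The base encoder is an assumption; any runtime.Encoder with an Identifier can be wrapped.

	func wrapWithFallback(base runtime.Encoder) runtime.Encoder {
		// Unregistered *Unstructured / *UnstructuredList values fall back to
		// plain JSON encoding; the returned encoder's Identifier() reports a
		// composite value of the form {"name":"fallback","base":"<base id>"}.
		return unstructured.NewJSONFallbackEncoder(base)
	}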
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.conversion.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.conversion.go
new file mode 100644
index 0000000000000..2ade69dd9eb29
--- /dev/null
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.conversion.go
@@ -0,0 +1,523 @@
+// +build !ignore_autogenerated
+
+/*
+Copyright The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+// Code generated by conversion-gen. DO NOT EDIT.
+
+package v1
+
+import (
+ url "net/url"
+ unsafe "unsafe"
+
+ resource "k8s.io/apimachinery/pkg/api/resource"
+ conversion "k8s.io/apimachinery/pkg/conversion"
+ fields "k8s.io/apimachinery/pkg/fields"
+ labels "k8s.io/apimachinery/pkg/labels"
+ runtime "k8s.io/apimachinery/pkg/runtime"
+ intstr "k8s.io/apimachinery/pkg/util/intstr"
+ watch "k8s.io/apimachinery/pkg/watch"
+)
+
+func init() {
+ localSchemeBuilder.Register(RegisterConversions)
+}
+
+// RegisterConversions adds conversion functions to the given scheme.
+// Public to allow building arbitrary schemes.
+func RegisterConversions(s *runtime.Scheme) error {
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*CreateOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_CreateOptions(a.(*url.Values), b.(*CreateOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*DeleteOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_DeleteOptions(a.(*url.Values), b.(*DeleteOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*ExportOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_ExportOptions(a.(*url.Values), b.(*ExportOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*GetOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_GetOptions(a.(*url.Values), b.(*GetOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*ListOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_ListOptions(a.(*url.Values), b.(*ListOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*PatchOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_PatchOptions(a.(*url.Values), b.(*PatchOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*TableOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_TableOptions(a.(*url.Values), b.(*TableOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddGeneratedConversionFunc((*url.Values)(nil), (*UpdateOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_UpdateOptions(a.(*url.Values), b.(*UpdateOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*map[string]string)(nil), (*LabelSelector)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Map_string_To_string_To_v1_LabelSelector(a.(*map[string]string), b.(*LabelSelector), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**bool)(nil), (*bool)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_bool_To_bool(a.(**bool), b.(*bool), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**float64)(nil), (*float64)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_float64_To_float64(a.(**float64), b.(*float64), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**int32)(nil), (*int32)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_int32_To_int32(a.(**int32), b.(*int32), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**int64)(nil), (*int)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_int64_To_int(a.(**int64), b.(*int), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**int64)(nil), (*int64)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_int64_To_int64(a.(**int64), b.(*int64), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**intstr.IntOrString)(nil), (*intstr.IntOrString)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_intstr_IntOrString_To_intstr_IntOrString(a.(**intstr.IntOrString), b.(*intstr.IntOrString), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**string)(nil), (*string)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_string_To_string(a.(**string), b.(*string), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((**Duration)(nil), (*Duration)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Pointer_v1_Duration_To_v1_Duration(a.(**Duration), b.(*Duration), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*[]string)(nil), (**DeletionPropagation)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Slice_string_To_Pointer_v1_DeletionPropagation(a.(*[]string), b.(**DeletionPropagation), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*[]string)(nil), (**Time)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Slice_string_To_Pointer_v1_Time(a.(*[]string), b.(**Time), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*[]string)(nil), (*[]int32)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Slice_string_To_Slice_int32(a.(*[]string), b.(*[]int32), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*[]string)(nil), (*IncludeObjectPolicy)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Slice_string_To_v1_IncludeObjectPolicy(a.(*[]string), b.(*IncludeObjectPolicy), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*[]string)(nil), (*Time)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_Slice_string_To_v1_Time(a.(*[]string), b.(*Time), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*bool)(nil), (**bool)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_bool_To_Pointer_bool(a.(*bool), b.(**bool), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*fields.Selector)(nil), (*string)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_fields_Selector_To_string(a.(*fields.Selector), b.(*string), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*float64)(nil), (**float64)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_float64_To_Pointer_float64(a.(*float64), b.(**float64), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*int32)(nil), (**int32)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_int32_To_Pointer_int32(a.(*int32), b.(**int32), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*int64)(nil), (**int64)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_int64_To_Pointer_int64(a.(*int64), b.(**int64), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*int)(nil), (**int64)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_int_To_Pointer_int64(a.(*int), b.(**int64), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*intstr.IntOrString)(nil), (**intstr.IntOrString)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_intstr_IntOrString_To_Pointer_intstr_IntOrString(a.(*intstr.IntOrString), b.(**intstr.IntOrString), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*intstr.IntOrString)(nil), (*intstr.IntOrString)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_intstr_IntOrString_To_intstr_IntOrString(a.(*intstr.IntOrString), b.(*intstr.IntOrString), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*labels.Selector)(nil), (*string)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_labels_Selector_To_string(a.(*labels.Selector), b.(*string), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*resource.Quantity)(nil), (*resource.Quantity)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_resource_Quantity_To_resource_Quantity(a.(*resource.Quantity), b.(*resource.Quantity), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*string)(nil), (**string)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_string_To_Pointer_string(a.(*string), b.(**string), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*string)(nil), (*fields.Selector)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_string_To_fields_Selector(a.(*string), b.(*fields.Selector), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*string)(nil), (*labels.Selector)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_string_To_labels_Selector(a.(*string), b.(*labels.Selector), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*url.Values)(nil), (*DeleteOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_url_Values_To_v1_DeleteOptions(a.(*url.Values), b.(*DeleteOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*DeleteOptions)(nil), (*DeleteOptions)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_DeleteOptions_To_v1_DeleteOptions(a.(*DeleteOptions), b.(*DeleteOptions), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*Duration)(nil), (**Duration)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_Duration_To_Pointer_v1_Duration(a.(*Duration), b.(**Duration), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*InternalEvent)(nil), (*WatchEvent)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_InternalEvent_To_v1_WatchEvent(a.(*InternalEvent), b.(*WatchEvent), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*LabelSelector)(nil), (*map[string]string)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_LabelSelector_To_Map_string_To_string(a.(*LabelSelector), b.(*map[string]string), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*ListMeta)(nil), (*ListMeta)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_ListMeta_To_v1_ListMeta(a.(*ListMeta), b.(*ListMeta), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*MicroTime)(nil), (*MicroTime)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_MicroTime_To_v1_MicroTime(a.(*MicroTime), b.(*MicroTime), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*Time)(nil), (*Time)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_Time_To_v1_Time(a.(*Time), b.(*Time), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*TypeMeta)(nil), (*TypeMeta)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_TypeMeta_To_v1_TypeMeta(a.(*TypeMeta), b.(*TypeMeta), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*WatchEvent)(nil), (*InternalEvent)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_WatchEvent_To_v1_InternalEvent(a.(*WatchEvent), b.(*InternalEvent), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*WatchEvent)(nil), (*watch.Event)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_v1_WatchEvent_To_watch_Event(a.(*WatchEvent), b.(*watch.Event), scope)
+ }); err != nil {
+ return err
+ }
+ if err := s.AddConversionFunc((*watch.Event)(nil), (*WatchEvent)(nil), func(a, b interface{}, scope conversion.Scope) error {
+ return Convert_watch_Event_To_v1_WatchEvent(a.(*watch.Event), b.(*WatchEvent), scope)
+ }); err != nil {
+ return err
+ }
+ return nil
+}
+
+func autoConvert_url_Values_To_v1_CreateOptions(in *url.Values, out *CreateOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["dryRun"]; ok && len(values) > 0 {
+ out.DryRun = *(*[]string)(unsafe.Pointer(&values))
+ } else {
+ out.DryRun = nil
+ }
+ if values, ok := map[string][]string(*in)["fieldManager"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.FieldManager, s); err != nil {
+ return err
+ }
+ } else {
+ out.FieldManager = ""
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_CreateOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_CreateOptions(in *url.Values, out *CreateOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_CreateOptions(in, out, s)
+}
+
+func autoConvert_url_Values_To_v1_DeleteOptions(in *url.Values, out *DeleteOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["gracePeriodSeconds"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_Pointer_int64(&values, &out.GracePeriodSeconds, s); err != nil {
+ return err
+ }
+ } else {
+ out.GracePeriodSeconds = nil
+ }
+ // INFO: in.Preconditions opted out of conversion generation
+ if values, ok := map[string][]string(*in)["orphanDependents"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_Pointer_bool(&values, &out.OrphanDependents, s); err != nil {
+ return err
+ }
+ } else {
+ out.OrphanDependents = nil
+ }
+ if values, ok := map[string][]string(*in)["propagationPolicy"]; ok && len(values) > 0 {
+ if err := Convert_Slice_string_To_Pointer_v1_DeletionPropagation(&values, &out.PropagationPolicy, s); err != nil {
+ return err
+ }
+ } else {
+ out.PropagationPolicy = nil
+ }
+ if values, ok := map[string][]string(*in)["dryRun"]; ok && len(values) > 0 {
+ out.DryRun = *(*[]string)(unsafe.Pointer(&values))
+ } else {
+ out.DryRun = nil
+ }
+ return nil
+}
+
+func autoConvert_url_Values_To_v1_ExportOptions(in *url.Values, out *ExportOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["export"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_bool(&values, &out.Export, s); err != nil {
+ return err
+ }
+ } else {
+ out.Export = false
+ }
+ if values, ok := map[string][]string(*in)["exact"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_bool(&values, &out.Exact, s); err != nil {
+ return err
+ }
+ } else {
+ out.Exact = false
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_ExportOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_ExportOptions(in *url.Values, out *ExportOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_ExportOptions(in, out, s)
+}
+
+func autoConvert_url_Values_To_v1_GetOptions(in *url.Values, out *GetOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["resourceVersion"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.ResourceVersion, s); err != nil {
+ return err
+ }
+ } else {
+ out.ResourceVersion = ""
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_GetOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_GetOptions(in *url.Values, out *GetOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_GetOptions(in, out, s)
+}
+
+func autoConvert_url_Values_To_v1_ListOptions(in *url.Values, out *ListOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["labelSelector"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.LabelSelector, s); err != nil {
+ return err
+ }
+ } else {
+ out.LabelSelector = ""
+ }
+ if values, ok := map[string][]string(*in)["fieldSelector"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.FieldSelector, s); err != nil {
+ return err
+ }
+ } else {
+ out.FieldSelector = ""
+ }
+ if values, ok := map[string][]string(*in)["watch"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_bool(&values, &out.Watch, s); err != nil {
+ return err
+ }
+ } else {
+ out.Watch = false
+ }
+ if values, ok := map[string][]string(*in)["allowWatchBookmarks"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_bool(&values, &out.AllowWatchBookmarks, s); err != nil {
+ return err
+ }
+ } else {
+ out.AllowWatchBookmarks = false
+ }
+ if values, ok := map[string][]string(*in)["resourceVersion"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.ResourceVersion, s); err != nil {
+ return err
+ }
+ } else {
+ out.ResourceVersion = ""
+ }
+ if values, ok := map[string][]string(*in)["timeoutSeconds"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_Pointer_int64(&values, &out.TimeoutSeconds, s); err != nil {
+ return err
+ }
+ } else {
+ out.TimeoutSeconds = nil
+ }
+ if values, ok := map[string][]string(*in)["limit"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_int64(&values, &out.Limit, s); err != nil {
+ return err
+ }
+ } else {
+ out.Limit = 0
+ }
+ if values, ok := map[string][]string(*in)["continue"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.Continue, s); err != nil {
+ return err
+ }
+ } else {
+ out.Continue = ""
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_ListOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_ListOptions(in *url.Values, out *ListOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_ListOptions(in, out, s)
+}
+
+func autoConvert_url_Values_To_v1_PatchOptions(in *url.Values, out *PatchOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["dryRun"]; ok && len(values) > 0 {
+ out.DryRun = *(*[]string)(unsafe.Pointer(&values))
+ } else {
+ out.DryRun = nil
+ }
+ if values, ok := map[string][]string(*in)["force"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_Pointer_bool(&values, &out.Force, s); err != nil {
+ return err
+ }
+ } else {
+ out.Force = nil
+ }
+ if values, ok := map[string][]string(*in)["fieldManager"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.FieldManager, s); err != nil {
+ return err
+ }
+ } else {
+ out.FieldManager = ""
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_PatchOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_PatchOptions(in *url.Values, out *PatchOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_PatchOptions(in, out, s)
+}
+
+func autoConvert_url_Values_To_v1_TableOptions(in *url.Values, out *TableOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["-"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_bool(&values, &out.NoHeaders, s); err != nil {
+ return err
+ }
+ } else {
+ out.NoHeaders = false
+ }
+ if values, ok := map[string][]string(*in)["includeObject"]; ok && len(values) > 0 {
+ if err := Convert_Slice_string_To_v1_IncludeObjectPolicy(&values, &out.IncludeObject, s); err != nil {
+ return err
+ }
+ } else {
+ out.IncludeObject = ""
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_TableOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_TableOptions(in *url.Values, out *TableOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_TableOptions(in, out, s)
+}
+
+func autoConvert_url_Values_To_v1_UpdateOptions(in *url.Values, out *UpdateOptions, s conversion.Scope) error {
+ // WARNING: Field TypeMeta does not have json tag, skipping.
+
+ if values, ok := map[string][]string(*in)["dryRun"]; ok && len(values) > 0 {
+ out.DryRun = *(*[]string)(unsafe.Pointer(&values))
+ } else {
+ out.DryRun = nil
+ }
+ if values, ok := map[string][]string(*in)["fieldManager"]; ok && len(values) > 0 {
+ if err := runtime.Convert_Slice_string_To_string(&values, &out.FieldManager, s); err != nil {
+ return err
+ }
+ } else {
+ out.FieldManager = ""
+ }
+ return nil
+}
+
+// Convert_url_Values_To_v1_UpdateOptions is an autogenerated conversion function.
+func Convert_url_Values_To_v1_UpdateOptions(in *url.Values, out *UpdateOptions, s conversion.Scope) error {
+ return autoConvert_url_Values_To_v1_UpdateOptions(in, out, s)
+}
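
The generated converters above give each query parameter a typed destination on the options struct. A minimal sketch of driving one of them directly, assuming a standalone main package; the scope argument is unused by the string helpers these converters call, so nil is passed here:

```
package main

import (
	"fmt"
	"net/url"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Query string as it would arrive on a LIST request.
	values, err := url.ParseQuery("labelSelector=app%3Dloki&watch=true&timeoutSeconds=30&limit=500")
	if err != nil {
		panic(err)
	}
	var opts metav1.ListOptions
	// The converter reads each known key; unknown keys are ignored.
	if err := metav1.Convert_url_Values_To_v1_ListOptions(&values, &opts, nil); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", opts) // Watch, TimeoutSeconds, and Limit are now typed fields
}
```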
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.deepcopy.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.deepcopy.go
index eddbbe3389487..b82fdf202f2cd 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.deepcopy.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1/zz_generated.deepcopy.go
@@ -313,7 +313,7 @@ func (in *ExportOptions) DeepCopyObject() runtime.Object {
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *Fields) DeepCopyInto(out *Fields) {
+func (in *FieldsV1) DeepCopyInto(out *FieldsV1) {
*out = *in
if in.Raw != nil {
in, out := &in.Raw, &out.Raw
@@ -323,12 +323,12 @@ func (in *Fields) DeepCopyInto(out *Fields) {
return
}
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Fields.
-func (in *Fields) DeepCopy() *Fields {
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new FieldsV1.
+func (in *FieldsV1) DeepCopy() *FieldsV1 {
if in == nil {
return nil
}
- out := new(Fields)
+ out := new(FieldsV1)
in.DeepCopyInto(out)
return out
}
@@ -615,9 +615,9 @@ func (in *ManagedFieldsEntry) DeepCopyInto(out *ManagedFieldsEntry) {
in, out := &in.Time, &out.Time
*out = (*in).DeepCopy()
}
- if in.Fields != nil {
- in, out := &in.Fields, &out.Fields
- *out = new(Fields)
+ if in.FieldsV1 != nil {
+ in, out := &in.FieldsV1, &out.FieldsV1
+ *out = new(FieldsV1)
(*in).DeepCopyInto(*out)
}
return
@@ -864,34 +864,6 @@ func (in *Preconditions) DeepCopy() *Preconditions {
return out
}
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *ProtoFields) DeepCopyInto(out *ProtoFields) {
- *out = *in
- if in.Map != nil {
- in, out := &in.Map, &out.Map
- *out = make(map[string]Fields, len(*in))
- for key, val := range *in {
- (*out)[key] = *val.DeepCopy()
- }
- }
- if in.Raw != nil {
- in, out := &in.Raw, &out.Raw
- *out = make([]byte, len(*in))
- copy(*out, *in)
- }
- return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ProtoFields.
-func (in *ProtoFields) DeepCopy() *ProtoFields {
- if in == nil {
- return nil
- }
- out := new(ProtoFields)
- in.DeepCopyInto(out)
- return out
-}
-
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *RootPaths) DeepCopyInto(out *RootPaths) {
*out = *in
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/generated.proto b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/generated.proto
index 6339e719ad3a0..19606666f8a3c 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/generated.proto
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/generated.proto
@@ -32,7 +32,7 @@ option go_package = "v1beta1";
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
message PartialObjectMetadataList {
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
optional k8s.io.apimachinery.pkg.apis.meta.v1.ListMeta metadata = 2;
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/register.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/register.go
index 108a0764e72b3..4b4acd72f120f 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/register.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/register.go
@@ -19,6 +19,7 @@ package v1beta1
import (
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)
// GroupName is the group name for this API.
@@ -38,12 +39,7 @@ var scheme = runtime.NewScheme()
// ParameterCodec knows about query parameters used with the meta v1beta1 API spec.
var ParameterCodec = runtime.NewParameterCodec(scheme)
-func init() {
- if err := AddMetaToScheme(scheme); err != nil {
- panic(err)
- }
-}
-
+// AddMetaToScheme registers base meta types into schemas.
func AddMetaToScheme(scheme *runtime.Scheme) error {
scheme.AddKnownTypes(SchemeGroupVersion,
&Table{},
@@ -55,7 +51,11 @@ func AddMetaToScheme(scheme *runtime.Scheme) error {
return scheme.AddConversionFuncs(
Convert_Slice_string_To_v1beta1_IncludeObjectPolicy,
)
+}
+
+func init() {
+ utilruntime.Must(AddMetaToScheme(scheme))
// register manually. This usually goes through the SchemeBuilder, which we cannot use here.
- //scheme.AddGeneratedDeepCopyFuncs(GetGeneratedDeepCopyFuncs()...)
+ utilruntime.Must(RegisterDefaults(scheme))
}
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types.go
index 87895a5b5f270..f16170a37b2e7 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types.go
@@ -63,7 +63,7 @@ type PartialObjectMetadata = v1.PartialObjectMetadata
type PartialObjectMetadataList struct {
v1.TypeMeta `json:",inline"`
// Standard list metadata.
- // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
+ // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
// +optional
v1.ListMeta `json:"metadata,omitempty" protobuf:"bytes,2,opt,name=metadata"`
diff --git a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types_swagger_doc_generated.go b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types_swagger_doc_generated.go
index 26d13f5d91c4b..ef7e7c1e9017a 100644
--- a/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types_swagger_doc_generated.go
+++ b/vendor/k8s.io/apimachinery/pkg/apis/meta/v1beta1/types_swagger_doc_generated.go
@@ -29,7 +29,7 @@ package v1beta1
// AUTO-GENERATED FUNCTIONS START HERE. DO NOT EDIT.
var map_PartialObjectMetadataList = map[string]string{
"": "PartialObjectMetadataList contains a list of objects containing only their metadata.",
- "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds",
+ "metadata": "Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
"items": "items contains each of the included items.",
}
diff --git a/vendor/k8s.io/apimachinery/pkg/labels/selector.go b/vendor/k8s.io/apimachinery/pkg/labels/selector.go
index 9be9e57d3affa..2f8e1e2b0c4fe 100644
--- a/vendor/k8s.io/apimachinery/pkg/labels/selector.go
+++ b/vendor/k8s.io/apimachinery/pkg/labels/selector.go
@@ -54,6 +54,11 @@ type Selector interface {
// Make a deep copy of the selector.
DeepCopySelector() Selector
+
+ // RequiresExactMatch allows a caller to introspect whether a given selector
+ // requires a single specific label to be set, and if so returns the value it
+ // requires.
+ RequiresExactMatch(label string) (value string, found bool)
}
// Everything returns a selector that matches all labels.
@@ -63,12 +68,13 @@ func Everything() Selector {
type nothingSelector struct{}
-func (n nothingSelector) Matches(_ Labels) bool { return false }
-func (n nothingSelector) Empty() bool { return false }
-func (n nothingSelector) String() string { return "" }
-func (n nothingSelector) Add(_ ...Requirement) Selector { return n }
-func (n nothingSelector) Requirements() (Requirements, bool) { return nil, false }
-func (n nothingSelector) DeepCopySelector() Selector { return n }
+func (n nothingSelector) Matches(_ Labels) bool { return false }
+func (n nothingSelector) Empty() bool { return false }
+func (n nothingSelector) String() string { return "" }
+func (n nothingSelector) Add(_ ...Requirement) Selector { return n }
+func (n nothingSelector) Requirements() (Requirements, bool) { return nil, false }
+func (n nothingSelector) DeepCopySelector() Selector { return n }
+func (n nothingSelector) RequiresExactMatch(label string) (value string, found bool) { return "", false }
// Nothing returns a selector that matches no labels
func Nothing() Selector {
@@ -358,6 +364,23 @@ func (lsel internalSelector) String() string {
return strings.Join(reqs, ",")
}
+// RequiresExactMatch introspect whether a given selector requires a single specific field
+// to be set, and if so returns the value it requires.
+func (lsel internalSelector) RequiresExactMatch(label string) (value string, found bool) {
+ for ix := range lsel {
+ if lsel[ix].key == label {
+ switch lsel[ix].operator {
+ case selection.Equals, selection.DoubleEquals, selection.In:
+ if len(lsel[ix].strValues) == 1 {
+ return lsel[ix].strValues[0], true
+ }
+ }
+ return "", false
+ }
+ }
+ return "", false
+}
+
// Token represents constant definition for lexer token
type Token int
@@ -850,7 +873,7 @@ func SelectorFromSet(ls Set) Selector {
if ls == nil || len(ls) == 0 {
return internalSelector{}
}
- var requirements internalSelector
+ requirements := make([]Requirement, 0, len(ls))
for label, value := range ls {
r, err := NewRequirement(label, selection.Equals, []string{value})
if err == nil {
@@ -862,7 +885,7 @@ func SelectorFromSet(ls Set) Selector {
}
// sort to have deterministic string representation
sort.Sort(ByKey(requirements))
- return requirements
+ return internalSelector(requirements)
}
// SelectorFromValidatedSet returns a Selector which will match exactly the given Set.
@@ -872,13 +895,13 @@ func SelectorFromValidatedSet(ls Set) Selector {
if ls == nil || len(ls) == 0 {
return internalSelector{}
}
- var requirements internalSelector
+ requirements := make([]Requirement, 0, len(ls))
for label, value := range ls {
requirements = append(requirements, Requirement{key: label, operator: selection.Equals, strValues: []string{value}})
}
// sort to have deterministic string representation
sort.Sort(ByKey(requirements))
- return requirements
+ return internalSelector(requirements)
}
// ParseToRequirements takes a string representing a selector and returns a list of
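
A short usage sketch for the new RequiresExactMatch method, assuming a standalone main package; both `=` and a single-valued `in` qualify as exact matches, while multi-valued operators do not:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	sel, err := labels.Parse("app=loki,tier in (ingester)")
	if err != nil {
		panic(err)
	}
	if v, ok := sel.RequiresExactMatch("app"); ok {
		fmt.Println("app must equal", v) // app must equal loki
	}
	if v, ok := sel.RequiresExactMatch("tier"); ok {
		fmt.Println("tier must equal", v) // tier must equal ingester
	}
}
```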
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/codec.go b/vendor/k8s.io/apimachinery/pkg/runtime/codec.go
index 284e32bc3cb8e..0bccf9dd95bdf 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/codec.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/codec.go
@@ -19,13 +19,17 @@ package runtime
import (
"bytes"
"encoding/base64"
+ "encoding/json"
"fmt"
"io"
"net/url"
"reflect"
+ "strconv"
+ "strings"
"k8s.io/apimachinery/pkg/conversion/queryparams"
"k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/klog"
)
// codec binds an encoder and decoder.
@@ -100,10 +104,19 @@ type NoopEncoder struct {
var _ Serializer = NoopEncoder{}
+const noopEncoderIdentifier Identifier = "noop"
+
func (n NoopEncoder) Encode(obj Object, w io.Writer) error {
+ // There is no need to handle runtime.CacheableObject, as we don't
+ // process the obj at all.
return fmt.Errorf("encoding is not allowed for this codec: %v", reflect.TypeOf(n.Decoder))
}
+// Identifier implements runtime.Encoder interface.
+func (n NoopEncoder) Identifier() Identifier {
+ return noopEncoderIdentifier
+}
+
// NoopDecoder converts an Encoder to a Serializer or Codec for code that expects them but only uses encoding.
type NoopDecoder struct {
Encoder
@@ -193,19 +206,51 @@ func (c *parameterCodec) EncodeParameters(obj Object, to schema.GroupVersion) (u
type base64Serializer struct {
Encoder
Decoder
+
+ identifier Identifier
}
func NewBase64Serializer(e Encoder, d Decoder) Serializer {
- return &base64Serializer{e, d}
+ return &base64Serializer{
+ Encoder: e,
+ Decoder: d,
+ identifier: identifier(e),
+ }
+}
+
+func identifier(e Encoder) Identifier {
+ result := map[string]string{
+ "name": "base64",
+ }
+ if e != nil {
+ result["encoder"] = string(e.Identifier())
+ }
+ identifier, err := json.Marshal(result)
+ if err != nil {
+ klog.Fatalf("Failed marshaling identifier for base64Serializer: %v", err)
+ }
+ return Identifier(identifier)
}
func (s base64Serializer) Encode(obj Object, stream io.Writer) error {
+ if co, ok := obj.(CacheableObject); ok {
+ return co.CacheEncode(s.Identifier(), s.doEncode, stream)
+ }
+ return s.doEncode(obj, stream)
+}
+
+func (s base64Serializer) doEncode(obj Object, stream io.Writer) error {
e := base64.NewEncoder(base64.StdEncoding, stream)
err := s.Encoder.Encode(obj, e)
e.Close()
return err
}
+// Identifier implements runtime.Encoder interface.
+func (s base64Serializer) Identifier() Identifier {
+ return s.identifier
+}
+
func (s base64Serializer) Decode(data []byte, defaults *schema.GroupVersionKind, into Object) (Object, *schema.GroupVersionKind, error) {
out := make([]byte, base64.StdEncoding.DecodedLen(len(data)))
n, err := base64.StdEncoding.Decode(out, data)
@@ -238,6 +283,11 @@ var (
DisabledGroupVersioner GroupVersioner = disabledGroupVersioner{}
)
+const (
+ internalGroupVersionerIdentifier = "internal"
+ disabledGroupVersionerIdentifier = "disabled"
+)
+
type internalGroupVersioner struct{}
// KindForGroupVersionKinds returns an internal Kind if one is found, or converts the first provided kind to the internal version.
@@ -253,6 +303,11 @@ func (internalGroupVersioner) KindForGroupVersionKinds(kinds []schema.GroupVersi
return schema.GroupVersionKind{}, false
}
+// Identifier implements GroupVersioner interface.
+func (internalGroupVersioner) Identifier() string {
+ return internalGroupVersionerIdentifier
+}
+
type disabledGroupVersioner struct{}
// KindForGroupVersionKinds returns false for any input.
@@ -260,19 +315,9 @@ func (disabledGroupVersioner) KindForGroupVersionKinds(kinds []schema.GroupVersi
return schema.GroupVersionKind{}, false
}
-// GroupVersioners implements GroupVersioner and resolves to the first exact match for any kind.
-type GroupVersioners []GroupVersioner
-
-// KindForGroupVersionKinds returns the first match of any of the group versioners, or false if no match occurred.
-func (gvs GroupVersioners) KindForGroupVersionKinds(kinds []schema.GroupVersionKind) (schema.GroupVersionKind, bool) {
- for _, gv := range gvs {
- target, ok := gv.KindForGroupVersionKinds(kinds)
- if !ok {
- continue
- }
- return target, true
- }
- return schema.GroupVersionKind{}, false
+// Identifier implements GroupVersioner interface.
+func (disabledGroupVersioner) Identifier() string {
+ return disabledGroupVersionerIdentifier
}
// Assert that schema.GroupVersion and GroupVersions implement GroupVersioner
@@ -330,3 +375,22 @@ func (v multiGroupVersioner) KindForGroupVersionKinds(kinds []schema.GroupVersio
}
return schema.GroupVersionKind{}, false
}
+
+// Identifier implements GroupVersioner interface.
+func (v multiGroupVersioner) Identifier() string {
+ groupKinds := make([]string, 0, len(v.acceptedGroupKinds))
+ for _, gk := range v.acceptedGroupKinds {
+ groupKinds = append(groupKinds, gk.String())
+ }
+ result := map[string]string{
+ "name": "multi",
+ "target": v.target.String(),
+ "accepted": strings.Join(groupKinds, ","),
+ "coerce": strconv.FormatBool(v.coerce),
+ }
+ identifier, err := json.Marshal(result)
+ if err != nil {
+ klog.Fatalf("Failed marshaling Identifier for %#v: %v", v, err)
+ }
+ return string(identifier)
+}
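
Since identifiers compose, wrapping one encoder in another (as base64Serializer does above) embeds the delegate's identifier in the wrapper's. A sketch, assuming the json serializer from this same package tree as the delegate and that nil creater/typer is acceptable when only the identifier is inspected:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/serializer/json"
)

func main() {
	js := json.NewSerializerWithOptions(json.DefaultMetaFactory, nil, nil, json.SerializerOptions{})
	b64 := runtime.NewBase64Serializer(js, js)
	// Two base64 stacks are cache-equivalent only if their inner encoders are.
	fmt.Println(b64.Identifier())
}
```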
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/conversion.go b/vendor/k8s.io/apimachinery/pkg/runtime/conversion.go
index 08d2abfe687d4..0947dce73563a 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/conversion.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/conversion.go
@@ -61,19 +61,21 @@ var DefaultStringConversions = []interface{}{
Convert_Slice_string_To_int64,
}
-func Convert_Slice_string_To_string(input *[]string, out *string, s conversion.Scope) error {
- if len(*input) == 0 {
+func Convert_Slice_string_To_string(in *[]string, out *string, s conversion.Scope) error {
+ if len(*in) == 0 {
*out = ""
+ return nil
}
- *out = (*input)[0]
+ *out = (*in)[0]
return nil
}
-func Convert_Slice_string_To_int(input *[]string, out *int, s conversion.Scope) error {
- if len(*input) == 0 {
+func Convert_Slice_string_To_int(in *[]string, out *int, s conversion.Scope) error {
+ if len(*in) == 0 {
*out = 0
+ return nil
}
- str := (*input)[0]
+ str := (*in)[0]
i, err := strconv.Atoi(str)
if err != nil {
return err
@@ -83,15 +85,16 @@ func Convert_Slice_string_To_int(input *[]string, out *int, s conversion.Scope)
}
// Convert_Slice_string_To_bool will convert a string parameter to boolean.
-// Only the absence of a value, a value of "false", or a value of "0" resolve to false.
+// Only the absence of a value (i.e. zero-length slice), a value of "false", or a
+// value of "0" resolve to false.
// Any other value (including empty string) resolves to true.
-func Convert_Slice_string_To_bool(input *[]string, out *bool, s conversion.Scope) error {
- if len(*input) == 0 {
+func Convert_Slice_string_To_bool(in *[]string, out *bool, s conversion.Scope) error {
+ if len(*in) == 0 {
*out = false
return nil
}
- switch strings.ToLower((*input)[0]) {
- case "false", "0":
+ switch {
+ case (*in)[0] == "0", strings.EqualFold((*in)[0], "false"):
*out = false
default:
*out = true
@@ -99,15 +102,79 @@ func Convert_Slice_string_To_bool(input *[]string, out *bool, s conversion.Scope
return nil
}
-func Convert_Slice_string_To_int64(input *[]string, out *int64, s conversion.Scope) error {
- if len(*input) == 0 {
+// Convert_Slice_string_To_Pointer_bool will convert a string parameter to a boolean pointer.
+// Only the absence of a value (i.e. zero-length slice), a value of "false", or a
+// value of "0" resolve to false.
+// Any other value (including empty string) resolves to true.
+func Convert_Slice_string_To_Pointer_bool(in *[]string, out **bool, s conversion.Scope) error {
+ if len(*in) == 0 {
+ boolVar := false
+ *out = &boolVar
+ return nil
+ }
+ switch {
+ case (*in)[0] == "0", strings.EqualFold((*in)[0], "false"):
+ boolVar := false
+ *out = &boolVar
+ default:
+ boolVar := true
+ *out = &boolVar
+ }
+ return nil
+}
+
+func string_to_int64(in string) (int64, error) {
+ return strconv.ParseInt(in, 10, 64)
+}
+
+func Convert_string_To_int64(in *string, out *int64, s conversion.Scope) error {
+ if in == nil {
+ *out = 0
+ return nil
+ }
+ i, err := string_to_int64(*in)
+ if err != nil {
+ return err
+ }
+ *out = i
+ return nil
+}
+
+func Convert_Slice_string_To_int64(in *[]string, out *int64, s conversion.Scope) error {
+ if len(*in) == 0 {
*out = 0
+ return nil
}
- str := (*input)[0]
- i, err := strconv.ParseInt(str, 10, 64)
+ i, err := string_to_int64((*in)[0])
if err != nil {
return err
}
*out = i
return nil
}
+
+func Convert_string_To_Pointer_int64(in *string, out **int64, s conversion.Scope) error {
+ if in == nil {
+ *out = nil
+ return nil
+ }
+ i, err := string_to_int64(*in)
+ if err != nil {
+ return err
+ }
+ *out = &i
+ return nil
+}
+
+func Convert_Slice_string_To_Pointer_int64(in *[]string, out **int64, s conversion.Scope) error {
+ if len(*in) == 0 {
+ *out = nil
+ return nil
+ }
+ i, err := string_to_int64((*in)[0])
+ if err != nil {
+ return err
+ }
+ *out = &i
+ return nil
+}
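
The string and integer converters now return early on an empty slice instead of indexing past it, and boolean matching of "false" is case-insensitive. A table-style sketch of the resulting truthiness rules, assuming a standalone main package:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	for _, in := range [][]string{nil, {"0"}, {"False"}, {""}, {"anything"}} {
		var out bool
		// The scope argument is unused by this helper, so nil is fine here.
		_ = runtime.Convert_Slice_string_To_bool(&in, &out, nil)
		fmt.Printf("%q -> %v\n", in, out)
	}
	// Output: [] -> false, ["0"] -> false, ["False"] -> false,
	// [""] -> true, ["anything"] -> true
}
```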
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go b/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go
index bded5bf1591f7..f44693c0c6f84 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/interfaces.go
@@ -37,13 +37,36 @@ type GroupVersioner interface {
// Scheme.New(target) and then perform a conversion between the current Go type and the destination Go type.
// Sophisticated implementations may use additional information about the input kinds to pick a destination kind.
KindForGroupVersionKinds(kinds []schema.GroupVersionKind) (target schema.GroupVersionKind, ok bool)
+	// Identifier returns a string representation of the object.
+	// Identifiers of two different group versioners should be equal only if for
+	// every input set of kinds they return the same result.
+ Identifier() string
}
+// Identifier represents an identifier.
+// Identifiers of two different objects should be equal if and only if for every
+// input the output they produce is exactly the same.
+type Identifier string
+
// Encoder writes objects to a serialized form
type Encoder interface {
// Encode writes an object to a stream. Implementations may return errors if the versions are
// incompatible, or if no conversion is defined.
Encode(obj Object, w io.Writer) error
+ // Identifier returns an identifier of the encoder.
+ // Identifiers of two different encoders should be equal if and only if for every input
+ // object it will be encoded to the same representation by both of them.
+ //
+	// Identifier is intended for use with the CacheableObject#CacheEncode method. In order to
+	// correctly handle CacheableObject, the Encode() method should look similar to below, where
+ // doEncode() is the encoding logic of implemented encoder:
+ // func (e *MyEncoder) Encode(obj Object, w io.Writer) error {
+ // if co, ok := obj.(CacheableObject); ok {
+ // return co.CacheEncode(e.Identifier(), e.doEncode, w)
+ // }
+ // return e.doEncode(obj, w)
+ // }
+ Identifier() Identifier
}
// Decoder attempts to load an object from data.
@@ -132,6 +155,28 @@ type NegotiatedSerializer interface {
DecoderToVersion(serializer Decoder, gv GroupVersioner) Decoder
}
+// ClientNegotiator handles turning an HTTP content type into the appropriate encoder.
+// Use NewClientNegotiator or NewVersionedClientNegotiator to create this interface from
+// a NegotiatedSerializer.
+type ClientNegotiator interface {
+ // Encoder returns the appropriate encoder for the provided contentType (e.g. application/json)
+ // and any optional mediaType parameters (e.g. pretty=1), or an error. If no serializer is found
+ // a NegotiateError will be returned. The current client implementations consider params to be
+ // optional modifiers to the contentType and will ignore unrecognized parameters.
+ Encoder(contentType string, params map[string]string) (Encoder, error)
+ // Decoder returns the appropriate decoder for the provided contentType (e.g. application/json)
+ // and any optional mediaType parameters (e.g. pretty=1), or an error. If no serializer is found
+ // a NegotiateError will be returned. The current client implementations consider params to be
+ // optional modifiers to the contentType and will ignore unrecognized parameters.
+ Decoder(contentType string, params map[string]string) (Decoder, error)
+ // StreamDecoder returns the appropriate stream decoder for the provided contentType (e.g.
+ // application/json) and any optional mediaType parameters (e.g. pretty=1), or an error. If no
+ // serializer is found a NegotiateError will be returned. The Serializer and Framer will always
+ // be returned if a Decoder is returned. The current client implementations consider params to be
+ // optional modifiers to the contentType and will ignore unrecognized parameters.
+ StreamDecoder(contentType string, params map[string]string) (Decoder, Serializer, Framer, error)
+}
+
// StorageSerializer is an interface used for obtaining encoders, decoders, and serializers
// that can read and write data at rest. This would commonly be used by client tools that must
// read files, or server side storage interfaces that persist restful objects.
@@ -256,6 +301,27 @@ type Object interface {
DeepCopyObject() Object
}
+// CacheableObject allows an object to cache its different serializations
+// to avoid performing the same serialization multiple times.
+type CacheableObject interface {
+ // CacheEncode writes an object to a stream. The <encode> function will
+ // be used in case of cache miss. The <encode> function takes ownership
+ // of the object.
+	// If CacheableObject is a wrapper, then a deep-copy of the wrapped object
+	// should be passed to the <encode> function.
+ // CacheEncode assumes that for two different calls with the same <id>,
+ // <encode> function will also be the same.
+ CacheEncode(id Identifier, encode func(Object, io.Writer) error, w io.Writer) error
+ // GetObject returns a deep-copy of an object to be encoded - the caller of
+	// GetObject() is the owner of the returned object. The reason for making a copy
+ // is to avoid bugs, where caller modifies the object and forgets to copy it,
+ // thus modifying the object for everyone.
+ // The object returned by GetObject should be the same as the one that is supposed
+ // to be passed to <encode> function in CacheEncode method.
+ // If CacheableObject is a wrapper, the copy of wrapped object should be returned.
+ GetObject() Object
+}
+
// Unstructured objects store values as map[string]interface{}, with only values that can be serialized
// to JSON allowed.
type Unstructured interface {
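
A hypothetical delegating encoder following the contract described in the comments above; the wrapper name, identifier prefix, and structure here are illustrative only:

```
package example

import (
	"io"

	"k8s.io/apimachinery/pkg/runtime"
)

// identifierEncoder delegates encoding but offers CacheableObject the
// Identifier plus the raw encode path before any work is done.
type identifierEncoder struct {
	delegate runtime.Encoder
}

func (e identifierEncoder) Encode(obj runtime.Object, w io.Writer) error {
	if co, ok := obj.(runtime.CacheableObject); ok {
		// Cache hit skips doEncode entirely; a miss invokes it once.
		return co.CacheEncode(e.Identifier(), e.doEncode, w)
	}
	return e.doEncode(obj, w)
}

func (e identifierEncoder) doEncode(obj runtime.Object, w io.Writer) error {
	return e.delegate.Encode(obj, w)
}

// Identifier embeds the delegate's identifier so equal stacks share caches.
func (e identifierEncoder) Identifier() runtime.Identifier {
	return runtime.Identifier("wrapper:" + string(e.delegate.Identifier()))
}
```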
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/negotiate.go b/vendor/k8s.io/apimachinery/pkg/runtime/negotiate.go
new file mode 100644
index 0000000000000..159b301206a7b
--- /dev/null
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/negotiate.go
@@ -0,0 +1,146 @@
+/*
+Copyright 2019 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package runtime
+
+import (
+ "fmt"
+
+ "k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+// NegotiateError is returned when a ClientNegotiator is unable to locate
+// a serializer for the requested operation.
+type NegotiateError struct {
+ ContentType string
+ Stream bool
+}
+
+func (e NegotiateError) Error() string {
+ if e.Stream {
+ return fmt.Sprintf("no stream serializers registered for %s", e.ContentType)
+ }
+ return fmt.Sprintf("no serializers registered for %s", e.ContentType)
+}
+
+type clientNegotiator struct {
+ serializer NegotiatedSerializer
+ encode, decode GroupVersioner
+}
+
+func (n *clientNegotiator) Encoder(contentType string, params map[string]string) (Encoder, error) {
+ // TODO: `pretty=1` is handled in NegotiateOutputMediaType, consider moving it to this method
+ // if client negotiators truly need to use it
+ mediaTypes := n.serializer.SupportedMediaTypes()
+ info, ok := SerializerInfoForMediaType(mediaTypes, contentType)
+ if !ok {
+ if len(contentType) != 0 || len(mediaTypes) == 0 {
+ return nil, NegotiateError{ContentType: contentType}
+ }
+ info = mediaTypes[0]
+ }
+ return n.serializer.EncoderForVersion(info.Serializer, n.encode), nil
+}
+
+func (n *clientNegotiator) Decoder(contentType string, params map[string]string) (Decoder, error) {
+ mediaTypes := n.serializer.SupportedMediaTypes()
+ info, ok := SerializerInfoForMediaType(mediaTypes, contentType)
+ if !ok {
+ if len(contentType) != 0 || len(mediaTypes) == 0 {
+ return nil, NegotiateError{ContentType: contentType}
+ }
+ info = mediaTypes[0]
+ }
+ return n.serializer.DecoderToVersion(info.Serializer, n.decode), nil
+}
+
+func (n *clientNegotiator) StreamDecoder(contentType string, params map[string]string) (Decoder, Serializer, Framer, error) {
+ mediaTypes := n.serializer.SupportedMediaTypes()
+ info, ok := SerializerInfoForMediaType(mediaTypes, contentType)
+ if !ok {
+ if len(contentType) != 0 || len(mediaTypes) == 0 {
+ return nil, nil, nil, NegotiateError{ContentType: contentType, Stream: true}
+ }
+ info = mediaTypes[0]
+ }
+ if info.StreamSerializer == nil {
+ return nil, nil, nil, NegotiateError{ContentType: info.MediaType, Stream: true}
+ }
+ return n.serializer.DecoderToVersion(info.Serializer, n.decode), info.StreamSerializer.Serializer, info.StreamSerializer.Framer, nil
+}
+
+// NewClientNegotiator will attempt to retrieve the appropriate encoder, decoder, or
+// stream decoder for a given content type. Does not perform any conversion, but will
+// encode the object to the desired group, version, and kind. Use when creating a client.
+func NewClientNegotiator(serializer NegotiatedSerializer, gv schema.GroupVersion) ClientNegotiator {
+ return &clientNegotiator{
+ serializer: serializer,
+ encode: gv,
+ }
+}
+
+// NewInternalClientNegotiator applies the default client rules for connecting to a Kubernetes apiserver
+// where objects are converted to gv prior to sending and decoded to their internal representation prior
+// to retrieval.
+//
+// DEPRECATED: Internal clients are deprecated and will be removed in a future Kubernetes release.
+func NewInternalClientNegotiator(serializer NegotiatedSerializer, gv schema.GroupVersion) ClientNegotiator {
+ decode := schema.GroupVersions{
+ {
+ Group: gv.Group,
+ Version: APIVersionInternal,
+ },
+ // always include the legacy group as a decoding target to handle non-error `Status` return types
+ {
+ Group: "",
+ Version: APIVersionInternal,
+ },
+ }
+ return &clientNegotiator{
+ encode: gv,
+ decode: decode,
+ serializer: serializer,
+ }
+}
+
+// NewSimpleClientNegotiator will negotiate for a single serializer. This should only be used
+// for testing or when the caller is taking responsibility for setting the GVK on encoded objects.
+func NewSimpleClientNegotiator(info SerializerInfo, gv schema.GroupVersion) ClientNegotiator {
+ return &clientNegotiator{
+ serializer: &simpleNegotiatedSerializer{info: info},
+ encode: gv,
+ }
+}
+
+type simpleNegotiatedSerializer struct {
+ info SerializerInfo
+}
+
+func NewSimpleNegotiatedSerializer(info SerializerInfo) NegotiatedSerializer {
+ return &simpleNegotiatedSerializer{info: info}
+}
+
+func (n *simpleNegotiatedSerializer) SupportedMediaTypes() []SerializerInfo {
+ return []SerializerInfo{n.info}
+}
+
+func (n *simpleNegotiatedSerializer) EncoderForVersion(e Encoder, _ GroupVersioner) Encoder {
+ return e
+}
+
+func (n *simpleNegotiatedSerializer) DecoderToVersion(d Decoder, _gv GroupVersioner) Decoder {
+ return d
+}
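
A minimal sketch of the new negotiator in use, assuming a scheme-backed CodecFactory; unregistered content types come back as NegotiateError:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/runtime/serializer"
)

func main() {
	scheme := runtime.NewScheme()
	codecs := serializer.NewCodecFactory(scheme)
	neg := runtime.NewClientNegotiator(codecs, schema.GroupVersion{Version: "v1"})

	if dec, err := neg.Decoder("application/json", nil); err == nil {
		fmt.Printf("got decoder: %T\n", dec)
	}
	if _, err := neg.Decoder("text/plain", nil); err != nil {
		fmt.Println(err) // no serializers registered for text/plain
	}
}
```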
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/register.go b/vendor/k8s.io/apimachinery/pkg/runtime/register.go
index eeb380c3dc39e..1cd2e4c3871bc 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/register.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/register.go
@@ -29,33 +29,3 @@ func (obj *TypeMeta) GroupVersionKind() schema.GroupVersionKind {
}
func (obj *TypeMeta) GetObjectKind() schema.ObjectKind { return obj }
-
-// GetObjectKind implements Object for VersionedObjects, returning an empty ObjectKind
-// interface if no objects are provided, or the ObjectKind interface of the object in the
-// highest array position.
-func (obj *VersionedObjects) GetObjectKind() schema.ObjectKind {
- last := obj.Last()
- if last == nil {
- return schema.EmptyObjectKind
- }
- return last.GetObjectKind()
-}
-
-// First returns the leftmost object in the VersionedObjects array, which is usually the
-// object as serialized on the wire.
-func (obj *VersionedObjects) First() Object {
- if len(obj.Objects) == 0 {
- return nil
- }
- return obj.Objects[0]
-}
-
-// Last is the rightmost object in the VersionedObjects array, which is the object after
-// all transformations have been applied. This is the same object that would be returned
-// by Decode in a normal invocation (without VersionedObjects in the into argument).
-func (obj *VersionedObjects) Last() Object {
- if len(obj.Objects) == 0 {
- return nil
- }
- return obj.Objects[len(obj.Objects)-1]
-}
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go b/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go
index 4c67ed59801be..636103312f0fa 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/schema/group_version.go
@@ -191,6 +191,11 @@ func (gv GroupVersion) String() string {
return gv.Version
}
+// Identifier implements runtime.GroupVersioner interface.
+func (gv GroupVersion) Identifier() string {
+ return gv.String()
+}
+
// KindForGroupVersionKinds identifies the preferred GroupVersionKind out of a list. It returns ok false
// if none of the options match the group. It prefers a match to group and version over just group.
// TODO: Move GroupVersion to a package under pkg/runtime, since it's used by scheme.
@@ -246,6 +251,15 @@ func (gv GroupVersion) WithResource(resource string) GroupVersionResource {
// in fewer places.
type GroupVersions []GroupVersion
+// Identifier implements runtime.GroupVersioner interface.
+func (gv GroupVersions) Identifier() string {
+ groupVersions := make([]string, 0, len(gv))
+ for i := range gv {
+ groupVersions = append(groupVersions, gv[i].String())
+ }
+ return fmt.Sprintf("[%s]", strings.Join(groupVersions, ","))
+}
+
// KindForGroupVersionKinds identifies the preferred GroupVersionKind out of a list. It returns ok false
// if none of the options match the group.
func (gvs GroupVersions) KindForGroupVersionKinds(kinds []GroupVersionKind) (GroupVersionKind, bool) {
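
The identifier strings reuse the existing String() forms, so they stay human-readable. A quick sketch, assuming a standalone main package:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	gv := schema.GroupVersion{Group: "apps", Version: "v1"}
	fmt.Println(gv.Identifier()) // apps/v1

	gvs := schema.GroupVersions{gv, {Version: "v1"}}
	fmt.Println(gvs.Identifier()) // [apps/v1,v1]
}
```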
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/codec_factory.go b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/codec_factory.go
index d1d4073978315..f21b0ef19dfb7 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/codec_factory.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/codec_factory.go
@@ -322,7 +322,3 @@ func (f WithoutConversionCodecFactory) DecoderToVersion(serializer runtime.Decod
Decoder: serializer,
}
}
-
-// DirectCodecFactory was renamed to WithoutConversionCodecFactory in 1.15.
-// TODO: remove in 1.16.
-type DirectCodecFactory = WithoutConversionCodecFactory
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go
index 69ada8ecf9c5d..9d17f09e54c78 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go
@@ -31,6 +31,7 @@ import (
"k8s.io/apimachinery/pkg/runtime/serializer/recognizer"
"k8s.io/apimachinery/pkg/util/framer"
utilyaml "k8s.io/apimachinery/pkg/util/yaml"
+ "k8s.io/klog"
)
// NewSerializer creates a JSON serializer that handles encoding versioned objects into the proper JSON form. If typer
@@ -53,13 +54,28 @@ func NewYAMLSerializer(meta MetaFactory, creater runtime.ObjectCreater, typer ru
// and are immutable.
func NewSerializerWithOptions(meta MetaFactory, creater runtime.ObjectCreater, typer runtime.ObjectTyper, options SerializerOptions) *Serializer {
return &Serializer{
- meta: meta,
- creater: creater,
- typer: typer,
- options: options,
+ meta: meta,
+ creater: creater,
+ typer: typer,
+ options: options,
+ identifier: identifier(options),
}
}
+// identifier computes Identifier of Encoder based on the given options.
+func identifier(options SerializerOptions) runtime.Identifier {
+ result := map[string]string{
+ "name": "json",
+ "yaml": strconv.FormatBool(options.Yaml),
+ "pretty": strconv.FormatBool(options.Pretty),
+ }
+ identifier, err := json.Marshal(result)
+ if err != nil {
+ klog.Fatalf("Failed marshaling identifier for json Serializer: %v", err)
+ }
+ return runtime.Identifier(identifier)
+}
+
// SerializerOptions holds the options which are used to configure a JSON/YAML serializer.
// example:
// (1) To configure a JSON serializer, set `Yaml` to `false`.
@@ -85,6 +101,8 @@ type Serializer struct {
options SerializerOptions
creater runtime.ObjectCreater
typer runtime.ObjectTyper
+
+ identifier runtime.Identifier
}
// Serializer implements Serializer
@@ -188,16 +206,6 @@ func gvkWithDefaults(actual, defaultGVK schema.GroupVersionKind) schema.GroupVer
// On success or most errors, the method will return the calculated schema kind.
// The gvk calculate priority will be originalData > default gvk > into
func (s *Serializer) Decode(originalData []byte, gvk *schema.GroupVersionKind, into runtime.Object) (runtime.Object, *schema.GroupVersionKind, error) {
- if versioned, ok := into.(*runtime.VersionedObjects); ok {
- into = versioned.Last()
- obj, actual, err := s.Decode(originalData, gvk, into)
- if err != nil {
- return nil, actual, err
- }
- versioned.Objects = []runtime.Object{obj}
- return versioned, actual, nil
- }
-
data := originalData
if s.options.Yaml {
altered, err := yaml.YAMLToJSON(data)
@@ -286,6 +294,13 @@ func (s *Serializer) Decode(originalData []byte, gvk *schema.GroupVersionKind, i
// Encode serializes the provided object to the given writer.
func (s *Serializer) Encode(obj runtime.Object, w io.Writer) error {
+ if co, ok := obj.(runtime.CacheableObject); ok {
+ return co.CacheEncode(s.Identifier(), s.doEncode, w)
+ }
+ return s.doEncode(obj, w)
+}
+
+func (s *Serializer) doEncode(obj runtime.Object, w io.Writer) error {
if s.options.Yaml {
json, err := caseSensitiveJsonIterator.Marshal(obj)
if err != nil {
@@ -311,6 +326,11 @@ func (s *Serializer) Encode(obj runtime.Object, w io.Writer) error {
return encoder.Encode(obj)
}
+// Identifier implements runtime.Encoder interface.
+func (s *Serializer) Identifier() runtime.Identifier {
+ return s.identifier
+}
+
// RecognizesData implements the RecognizingDecoder interface.
func (s *Serializer) RecognizesData(peek io.Reader) (ok, unknown bool, err error) {
if s.options.Yaml {
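
Because the identifier is derived from SerializerOptions, serializers that would produce different bytes get distinct cache keys. A sketch comparing a JSON and a YAML instance, assuming nil creater/typer is acceptable when only the identifier is inspected:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/serializer/json"
)

func main() {
	jsonS := json.NewSerializerWithOptions(json.DefaultMetaFactory, nil, nil, json.SerializerOptions{})
	yamlS := json.NewSerializerWithOptions(json.DefaultMetaFactory, nil, nil, json.SerializerOptions{Yaml: true})
	fmt.Println(jsonS.Identifier()) // {"name":"json","pretty":"false","yaml":"false"}
	fmt.Println(yamlS.Identifier()) // {"name":"json","pretty":"false","yaml":"true"}
}
```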
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/protobuf/protobuf.go b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/protobuf/protobuf.go
index 0f33e1d821fac..f606b7d728b12 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/protobuf/protobuf.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/protobuf/protobuf.go
@@ -86,6 +86,8 @@ type Serializer struct {
var _ runtime.Serializer = &Serializer{}
var _ recognizer.RecognizingDecoder = &Serializer{}
+const serializerIdentifier runtime.Identifier = "protobuf"
+
// Decode attempts to convert the provided data into a protobuf message, extract the stored schema kind, apply the provided default
// gvk, and then load that data into an object matching the desired schema kind or the provided into. If into is *runtime.Unknown,
// the raw data will be extracted and no decoding will be performed. If into is not registered with the typer, then the object will
@@ -93,23 +95,6 @@ var _ recognizer.RecognizingDecoder = &Serializer{}
// not fully qualified with kind/version/group, the type of the into will be used to alter the returned gvk. On success or most
// errors, the method will return the calculated schema kind.
func (s *Serializer) Decode(originalData []byte, gvk *schema.GroupVersionKind, into runtime.Object) (runtime.Object, *schema.GroupVersionKind, error) {
- if versioned, ok := into.(*runtime.VersionedObjects); ok {
- into = versioned.Last()
- obj, actual, err := s.Decode(originalData, gvk, into)
- if err != nil {
- return nil, actual, err
- }
- // the last item in versioned becomes into, so if versioned was not originally empty we reset the object
- // array so the first position is the decoded object and the second position is the outermost object.
- // if there were no objects in the versioned list passed to us, only add ourselves.
- if into != nil && into != obj {
- versioned.Objects = []runtime.Object{obj, into}
- } else {
- versioned.Objects = []runtime.Object{obj}
- }
- return versioned, actual, err
- }
-
prefixLen := len(s.prefix)
switch {
case len(originalData) == 0:
@@ -176,6 +161,13 @@ func (s *Serializer) Decode(originalData []byte, gvk *schema.GroupVersionKind, i
// Encode serializes the provided object to the given writer.
func (s *Serializer) Encode(obj runtime.Object, w io.Writer) error {
+ if co, ok := obj.(runtime.CacheableObject); ok {
+ return co.CacheEncode(s.Identifier(), s.doEncode, w)
+ }
+ return s.doEncode(obj, w)
+}
+
+func (s *Serializer) doEncode(obj runtime.Object, w io.Writer) error {
prefixSize := uint64(len(s.prefix))
var unk runtime.Unknown
@@ -245,6 +237,11 @@ func (s *Serializer) Encode(obj runtime.Object, w io.Writer) error {
}
}
+// Identifier implements runtime.Encoder interface.
+func (s *Serializer) Identifier() runtime.Identifier {
+ return serializerIdentifier
+}
+
// RecognizesData implements the RecognizingDecoder interface.
func (s *Serializer) RecognizesData(peek io.Reader) (bool, bool, error) {
prefix := make([]byte, 4)
@@ -321,6 +318,8 @@ type RawSerializer struct {
var _ runtime.Serializer = &RawSerializer{}
+const rawSerializerIdentifier runtime.Identifier = "raw-protobuf"
+
// Decode attempts to convert the provided data into a protobuf message, extract the stored schema kind, apply the provided default
// gvk, and then load that data into an object matching the desired schema kind or the provided into. If into is *runtime.Unknown,
// the raw data will be extracted and no decoding will be performed. If into is not registered with the typer, then the object will
@@ -332,20 +331,6 @@ func (s *RawSerializer) Decode(originalData []byte, gvk *schema.GroupVersionKind
return nil, nil, fmt.Errorf("this serializer requires an object to decode into: %#v", s)
}
- if versioned, ok := into.(*runtime.VersionedObjects); ok {
- into = versioned.Last()
- obj, actual, err := s.Decode(originalData, gvk, into)
- if err != nil {
- return nil, actual, err
- }
- if into != nil && into != obj {
- versioned.Objects = []runtime.Object{obj, into}
- } else {
- versioned.Objects = []runtime.Object{obj}
- }
- return versioned, actual, err
- }
-
if len(originalData) == 0 {
// TODO: treat like decoding {} from JSON with defaulting
return nil, nil, fmt.Errorf("empty data")
@@ -419,6 +404,13 @@ func unmarshalToObject(typer runtime.ObjectTyper, creater runtime.ObjectCreater,
// Encode serializes the provided object to the given writer. Overrides is ignored.
func (s *RawSerializer) Encode(obj runtime.Object, w io.Writer) error {
+ if co, ok := obj.(runtime.CacheableObject); ok {
+ return co.CacheEncode(s.Identifier(), s.doEncode, w)
+ }
+ return s.doEncode(obj, w)
+}
+
+func (s *RawSerializer) doEncode(obj runtime.Object, w io.Writer) error {
switch t := obj.(type) {
case bufferedReverseMarshaller:
// this path performs a single allocation during write but requires the caller to implement
@@ -460,6 +452,11 @@ func (s *RawSerializer) Encode(obj runtime.Object, w io.Writer) error {
}
}
+// Identifier implements runtime.Encoder interface.
+func (s *RawSerializer) Identifier() runtime.Identifier {
+ return rawSerializerIdentifier
+}
+
var LengthDelimitedFramer = lengthDelimitedFramer{}
type lengthDelimitedFramer struct{}
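
Unlike the JSON serializer, the protobuf serializers have no options that change their output, so their identifiers are the fixed constants declared above; a sketch:

```
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/serializer/protobuf"
)

func main() {
	var s protobuf.Serializer
	var rs protobuf.RawSerializer
	fmt.Println(s.Identifier())  // protobuf
	fmt.Println(rs.Identifier()) // raw-protobuf
}
```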
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go
index ee5cb86f7e640..ced184c91e5e6 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go
@@ -17,12 +17,15 @@ limitations under the License.
package versioning
import (
+ "encoding/json"
"io"
"reflect"
+ "sync"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/klog"
)
// NewDefaultingCodecForScheme is a convenience method for callers that are using a scheme.
@@ -62,6 +65,8 @@ func NewCodec(
encodeVersion: encodeVersion,
decodeVersion: decodeVersion,
+ identifier: identifier(encodeVersion, encoder),
+
originalSchemeName: originalSchemeName,
}
return internal
@@ -78,19 +83,47 @@ type codec struct {
encodeVersion runtime.GroupVersioner
decodeVersion runtime.GroupVersioner
+ identifier runtime.Identifier
+
// originalSchemeName is optional, but when filled in it holds the name of the scheme from which this codec originates
originalSchemeName string
}
+var identifiersMap sync.Map
+
+type codecIdentifier struct {
+ EncodeGV string `json:"encodeGV,omitempty"`
+ Encoder string `json:"encoder,omitempty"`
+ Name string `json:"name,omitempty"`
+}
+
+// identifier computes Identifier of Encoder based on codec parameters.
+func identifier(encodeGV runtime.GroupVersioner, encoder runtime.Encoder) runtime.Identifier {
+ result := codecIdentifier{
+ Name: "versioning",
+ }
+
+ if encodeGV != nil {
+ result.EncodeGV = encodeGV.Identifier()
+ }
+ if encoder != nil {
+ result.Encoder = string(encoder.Identifier())
+ }
+ if id, ok := identifiersMap.Load(result); ok {
+ return id.(runtime.Identifier)
+ }
+ identifier, err := json.Marshal(result)
+ if err != nil {
+ klog.Fatalf("Failed marshaling identifier for codec: %v", err)
+ }
+ identifiersMap.Store(result, runtime.Identifier(identifier))
+ return runtime.Identifier(identifier)
+}
+
// Decode attempts a decode of the object, then tries to convert it to the internal version. If into is provided and the decoding is
// successful, the returned runtime.Object will be the value passed as into. Note that this may bypass conversion if you pass an
// into that matches the serialized version.
func (c *codec) Decode(data []byte, defaultGVK *schema.GroupVersionKind, into runtime.Object) (runtime.Object, *schema.GroupVersionKind, error) {
- versioned, isVersioned := into.(*runtime.VersionedObjects)
- if isVersioned {
- into = versioned.Last()
- }
-
// If the into object is unstructured and expresses an opinion about its group/version,
// create a new instance of the type so we always exercise the conversion path (skips short-circuiting on `into == obj`)
decodeInto := into
@@ -115,22 +148,11 @@ func (c *codec) Decode(data []byte, defaultGVK *schema.GroupVersionKind, into ru
if into != nil {
// perform defaulting if requested
if c.defaulter != nil {
- // create a copy to ensure defaulting is not applied to the original versioned objects
- if isVersioned {
- versioned.Objects = []runtime.Object{obj.DeepCopyObject()}
- }
c.defaulter.Default(obj)
- } else {
- if isVersioned {
- versioned.Objects = []runtime.Object{obj}
- }
}
// Short-circuit conversion if the into object is same object
if into == obj {
- if isVersioned {
- return versioned, gvk, nil
- }
return into, gvk, nil
}
@@ -138,19 +160,9 @@ func (c *codec) Decode(data []byte, defaultGVK *schema.GroupVersionKind, into ru
return nil, gvk, err
}
- if isVersioned {
- versioned.Objects = append(versioned.Objects, into)
- return versioned, gvk, nil
- }
return into, gvk, nil
}
- // Convert if needed.
- if isVersioned {
- // create a copy, because ConvertToVersion does not guarantee non-mutation of objects
- versioned.Objects = []runtime.Object{obj.DeepCopyObject()}
- }
-
// perform defaulting if requested
if c.defaulter != nil {
c.defaulter.Default(obj)
@@ -160,18 +172,19 @@ func (c *codec) Decode(data []byte, defaultGVK *schema.GroupVersionKind, into ru
if err != nil {
return nil, gvk, err
}
- if isVersioned {
- if versioned.Last() != out {
- versioned.Objects = append(versioned.Objects, out)
- }
- return versioned, gvk, nil
- }
return out, gvk, nil
}
// Encode ensures the provided object is output in the appropriate group and version, invoking
// conversion if necessary. Unversioned objects (according to the ObjectTyper) are output as is.
func (c *codec) Encode(obj runtime.Object, w io.Writer) error {
+ if co, ok := obj.(runtime.CacheableObject); ok {
+ return co.CacheEncode(c.Identifier(), c.doEncode, w)
+ }
+ return c.doEncode(obj, w)
+}
+
+func (c *codec) doEncode(obj runtime.Object, w io.Writer) error {
switch obj := obj.(type) {
case *runtime.Unknown:
return c.encoder.Encode(obj, w)
@@ -230,3 +243,8 @@ func (c *codec) Encode(obj runtime.Object, w io.Writer) error {
// Conversion is responsible for setting the proper group, version, and kind onto the outgoing object
return c.encoder.Encode(out, w)
}
+
+// Identifier implements runtime.Encoder interface.
+func (c *codec) Identifier() runtime.Identifier {
+ return c.identifier
+}
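
The identifier plumbing added above lets a `CacheableObject` reuse a previously computed encoding keyed by the codec's identifier, which is computed once per (encode group-version, encoder) pair and memoized in a `sync.Map`. A minimal standalone sketch of that memoization pattern; the names here are illustrative, not part of the apimachinery API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

// key mirrors codecIdentifier above: a comparable struct capturing the
// parameters that make an encoder's output distinct.
type key struct {
	EncodeGV string `json:"encodeGV,omitempty"`
	Encoder  string `json:"encoder,omitempty"`
	Name     string `json:"name,omitempty"`
}

var ids sync.Map

// identifierFor marshals the key to JSON once and memoizes the result,
// so repeated lookups for the same parameters are just a map load.
func identifierFor(gv, enc string) string {
	k := key{Name: "versioning", EncodeGV: gv, Encoder: enc}
	if v, ok := ids.Load(k); ok {
		return v.(string)
	}
	b, err := json.Marshal(k)
	if err != nil {
		panic(err) // the vendored code calls klog.Fatalf here instead
	}
	ids.Store(k, string(b))
	return string(b)
}

func main() {
	fmt.Println(identifierFor("apps/v1", "json"))
	fmt.Println(identifierFor("apps/v1", "json")) // second call hits the cache
}
```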
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/types.go b/vendor/k8s.io/apimachinery/pkg/runtime/types.go
index 3d3ebe5f9d1b4..31359f35f4512 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/types.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/types.go
@@ -95,7 +95,7 @@ type RawExtension struct {
// Raw is the underlying serialization of this object.
//
// TODO: Determine how to detect ContentType and ContentEncoding of 'Raw' data.
- Raw []byte `protobuf:"bytes,1,opt,name=raw"`
+ Raw []byte `json:"-" protobuf:"bytes,1,opt,name=raw"`
// Object can hold a representation of this extension - useful for working with versioned
// structs.
Object Object `json:"-"`
@@ -124,16 +124,3 @@ type Unknown struct {
// Unspecified means ContentTypeJSON.
ContentType string `protobuf:"bytes,4,opt,name=contentType"`
}
-
-// VersionedObjects is used by Decoders to give callers a way to access all versions
-// of an object during the decoding process.
-//
-// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// +k8s:deepcopy-gen=true
-type VersionedObjects struct {
- // Objects is the set of objects retrieved during decoding, in order of conversion.
- // The 0 index is the object as serialized on the wire. If conversion has occurred,
- // other objects may be present. The right most object is the same as would be returned
- // by a normal Decode call.
- Objects []Object
-}
diff --git a/vendor/k8s.io/apimachinery/pkg/runtime/zz_generated.deepcopy.go b/vendor/k8s.io/apimachinery/pkg/runtime/zz_generated.deepcopy.go
index 8b9182f359d44..b0393839e1f40 100644
--- a/vendor/k8s.io/apimachinery/pkg/runtime/zz_generated.deepcopy.go
+++ b/vendor/k8s.io/apimachinery/pkg/runtime/zz_generated.deepcopy.go
@@ -73,36 +73,3 @@ func (in *Unknown) DeepCopyObject() Object {
}
return nil
}
-
-// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
-func (in *VersionedObjects) DeepCopyInto(out *VersionedObjects) {
- *out = *in
- if in.Objects != nil {
- in, out := &in.Objects, &out.Objects
- *out = make([]Object, len(*in))
- for i := range *in {
- if (*in)[i] != nil {
- (*out)[i] = (*in)[i].DeepCopyObject()
- }
- }
- }
- return
-}
-
-// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new VersionedObjects.
-func (in *VersionedObjects) DeepCopy() *VersionedObjects {
- if in == nil {
- return nil
- }
- out := new(VersionedObjects)
- in.DeepCopyInto(out)
- return out
-}
-
-// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new Object.
-func (in *VersionedObjects) DeepCopyObject() Object {
- if c := in.DeepCopy(); c != nil {
- return c
- }
- return nil
-}
diff --git a/vendor/k8s.io/apimachinery/pkg/util/cache/cache.go b/vendor/k8s.io/apimachinery/pkg/util/cache/cache.go
deleted file mode 100644
index 9a09fe54d6eb6..0000000000000
--- a/vendor/k8s.io/apimachinery/pkg/util/cache/cache.go
+++ /dev/null
@@ -1,83 +0,0 @@
-/*
-Copyright 2014 The Kubernetes Authors.
-
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-*/
-
-package cache
-
-import (
- "sync"
-)
-
-const (
- shardsCount int = 32
-)
-
-type Cache []*cacheShard
-
-func NewCache(maxSize int) Cache {
- if maxSize < shardsCount {
- maxSize = shardsCount
- }
- cache := make(Cache, shardsCount)
- for i := 0; i < shardsCount; i++ {
- cache[i] = &cacheShard{
- items: make(map[uint64]interface{}),
- maxSize: maxSize / shardsCount,
- }
- }
- return cache
-}
-
-func (c Cache) getShard(index uint64) *cacheShard {
- return c[index%uint64(shardsCount)]
-}
-
-// Returns true if object already existed, false otherwise.
-func (c *Cache) Add(index uint64, obj interface{}) bool {
- return c.getShard(index).add(index, obj)
-}
-
-func (c *Cache) Get(index uint64) (obj interface{}, found bool) {
- return c.getShard(index).get(index)
-}
-
-type cacheShard struct {
- items map[uint64]interface{}
- sync.RWMutex
- maxSize int
-}
-
-// Returns true if object already existed, false otherwise.
-func (s *cacheShard) add(index uint64, obj interface{}) bool {
- s.Lock()
- defer s.Unlock()
- _, isOverwrite := s.items[index]
- if !isOverwrite && len(s.items) >= s.maxSize {
- var randomKey uint64
- for randomKey = range s.items {
- break
- }
- delete(s.items, randomKey)
- }
- s.items[index] = obj
- return isOverwrite
-}
-
-func (s *cacheShard) get(index uint64) (obj interface{}, found bool) {
- s.RLock()
- defer s.RUnlock()
- obj, found = s.items[index]
- return
-}
diff --git a/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go b/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go
new file mode 100644
index 0000000000000..e9ebfc877dac8
--- /dev/null
+++ b/vendor/k8s.io/apimachinery/pkg/util/cache/expiring.go
@@ -0,0 +1,208 @@
+/*
+Copyright 2019 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package cache
+
+import (
+ "container/heap"
+ "context"
+ "sync"
+ "time"
+
+ utilclock "k8s.io/apimachinery/pkg/util/clock"
+)
+
+// NewExpiring returns an initialized expiring cache. Users must call
+// (*Expiring).Run() to begin the GC goroutine.
+func NewExpiring() *Expiring {
+ return NewExpiringWithClock(utilclock.RealClock{})
+}
+
+// NewExpiringWithClock is like NewExpiring but allows passing in a custom
+// clock for testing.
+func NewExpiringWithClock(clock utilclock.Clock) *Expiring {
+ return &Expiring{
+ clock: clock,
+ cache: make(map[interface{}]entry),
+ }
+}
+
+// Expiring is a map whose entries expire after a per-entry timeout.
+type Expiring struct {
+ clock utilclock.Clock
+
+ // mu protects the below fields
+ mu sync.RWMutex
+ // cache is the internal map that backs the cache.
+ cache map[interface{}]entry
+ // generation is used as a cheap resource version for cache entries. Cleanups
+ // are scheduled with a key and generation. When the cleanup runs, it first
+ // compares its generation with the current generation of the entry. It
+ // deletes the entry iff the generation matches. This prevents cleanups
+ // scheduled for earlier versions of an entry from deleting later versions of
+ // an entry when Set() is called multiple times with the same key.
+ //
+ // The integer value of the generation of an entry is meaningless.
+ generation uint64
+
+ heap expiringHeap
+}
+
+type entry struct {
+ val interface{}
+ expiry time.Time
+ generation uint64
+}
+
+// Get looks up an entry in the cache.
+func (c *Expiring) Get(key interface{}) (val interface{}, ok bool) {
+ c.mu.RLock()
+ defer c.mu.RUnlock()
+ e, ok := c.cache[key]
+ if !ok || c.clock.Now().After(e.expiry) {
+ return nil, false
+ }
+ return e.val, true
+}
+
+// Set sets a key/value/expiry entry in the map, overwriting any previous entry
+// with the same key. The entry expires at the given expiry time, but its TTL
+// may be lengthened or shortened by additional calls to Set().
+func (c *Expiring) Set(key interface{}, val interface{}, ttl time.Duration) {
+ expiry := c.clock.Now().Add(ttl)
+
+ c.mu.Lock()
+ defer c.mu.Unlock()
+
+ c.generation++
+
+ c.cache[key] = entry{
+ val: val,
+ expiry: expiry,
+ generation: c.generation,
+ }
+
+ heap.Push(&c.heap, &expiringHeapEntry{
+ key: key,
+ generation: c.generation,
+ expiry: expiry,
+ })
+}
+
+// Delete deletes an entry in the map.
+func (c *Expiring) Delete(key interface{}) {
+ c.mu.Lock()
+ defer c.mu.Unlock()
+ c.del(key, 0)
+}
+
+// del deletes the entry for the given key. The generation argument is the
+// generation of the entry that should be deleted. If the generation has been
+// changed (e.g. if a set has occurred on an existing element but the old
+// cleanup still runs), this is a noop. If the generation argument is 0, the
+// entry's generation is ignored and the entry is deleted.
+//
+// del must be called under the write lock.
+func (c *Expiring) del(key interface{}, generation uint64) {
+ e, ok := c.cache[key]
+ if !ok {
+ return
+ }
+ if generation != 0 && generation != e.generation {
+ return
+ }
+ delete(c.cache, key)
+}
+
+// Len returns the number of items in the cache.
+func (c *Expiring) Len() int {
+ c.mu.RLock()
+ defer c.mu.RUnlock()
+ return len(c.cache)
+}
+
+const gcInterval = 50 * time.Millisecond
+
+// Run runs the GC goroutine. The goroutine exits when the passed in context is
+// cancelled.
+func (c *Expiring) Run(ctx context.Context) {
+ t := c.clock.NewTicker(gcInterval)
+ defer t.Stop()
+ for {
+ select {
+ case <-t.C():
+ c.gc()
+ case <-ctx.Done():
+ return
+ }
+ }
+}
+
+func (c *Expiring) gc() {
+ now := c.clock.Now()
+
+ c.mu.Lock()
+ defer c.mu.Unlock()
+ for {
+ // Return from gc if the heap is empty or the next element is not yet
+ // expired.
+ //
+ // heap[0] is a peek at the next element in the heap, which is not obvious
+ // from looking at the (*expiringHeap).Pop() implmentation below.
+ // heap.Pop() swaps the first entry with the last entry of the heap, then
+ // calls (*expiringHeap).Pop() which returns the last element.
+ if len(c.heap) == 0 || now.After(c.heap[0].expiry) {
+ return
+ }
+ cleanup := heap.Pop(&c.heap).(*expiringHeapEntry)
+ c.del(cleanup.key, cleanup.generation)
+ }
+}
+
+type expiringHeapEntry struct {
+ key interface{}
+ generation uint64
+ expiry time.Time
+}
+
+// expiringHeap is a min-heap ordered by expiration time of it's entries. The
+// expiring cache uses this as a priority queue efficiently organize entries to
+// be garbage collected once they expire.
+type expiringHeap []*expiringHeapEntry
+
+var _ heap.Interface = &expiringHeap{}
+
+func (cq expiringHeap) Len() int {
+ return len(cq)
+}
+
+func (cq expiringHeap) Less(i, j int) bool {
+ return cq[i].expiry.Before(cq[j].expiry)
+}
+
+func (cq expiringHeap) Swap(i, j int) {
+ cq[i], cq[j] = cq[j], cq[i]
+}
+
+func (cq *expiringHeap) Push(c interface{}) {
+ *cq = append(*cq, c.(*expiringHeapEntry))
+}
+
+func (cq *expiringHeap) Pop() interface{} {
+ c := (*cq)[cq.Len()-1]
+ *cq = (*cq)[:cq.Len()-1]
+ return c
+}
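
Since this file is entirely new, a short usage sketch may help reviewers: entries expire lazily in `Get` and eagerly via the GC goroutine started by `Run`. The key, value, and TTL below are illustrative.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/cache"
)

func main() {
	c := cache.NewExpiring()

	// Run starts the GC goroutine; it stops when the context is cancelled.
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	go c.Run(ctx)

	c.Set("token", "abc123", 50*time.Millisecond)
	if v, ok := c.Get("token"); ok {
		fmt.Println("fresh:", v)
	}

	time.Sleep(100 * time.Millisecond)
	// Get checks expiry itself, so this reports a miss even if the GC
	// goroutine has not collected the entry yet.
	if _, ok := c.Get("token"); !ok {
		fmt.Println("expired")
	}
}
```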
diff --git a/vendor/k8s.io/apimachinery/pkg/util/clock/clock.go b/vendor/k8s.io/apimachinery/pkg/util/clock/clock.go
index 0d739d961f1f5..1689e62e82912 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/clock/clock.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/clock/clock.go
@@ -21,11 +21,18 @@ import (
"time"
)
+// PassiveClock allows for injecting fake or real clocks into code
+// that needs to read the current time but does not support scheduling
+// activity in the future.
+type PassiveClock interface {
+ Now() time.Time
+ Since(time.Time) time.Duration
+}
+
// Clock allows for injecting fake or real clocks into code that
// needs to do arbitrary things based on time.
type Clock interface {
- Now() time.Time
- Since(time.Time) time.Duration
+ PassiveClock
After(time.Duration) <-chan time.Time
NewTimer(time.Duration) Timer
Sleep(time.Duration)
@@ -66,10 +73,15 @@ func (RealClock) Sleep(d time.Duration) {
time.Sleep(d)
}
-// FakeClock implements Clock, but returns an arbitrary time.
-type FakeClock struct {
+// FakePassiveClock implements PassiveClock, but returns an arbitrary time.
+type FakePassiveClock struct {
lock sync.RWMutex
time time.Time
+}
+
+// FakeClock implements Clock, but returns an arbitrary time.
+type FakeClock struct {
+ FakePassiveClock
// waiters are waiting for the fake time to pass their specified time
waiters []fakeClockWaiter
@@ -82,26 +94,39 @@ type fakeClockWaiter struct {
destChan chan time.Time
}
+func NewFakePassiveClock(t time.Time) *FakePassiveClock {
+ return &FakePassiveClock{
+ time: t,
+ }
+}
+
func NewFakeClock(t time.Time) *FakeClock {
return &FakeClock{
- time: t,
+ FakePassiveClock: *NewFakePassiveClock(t),
}
}
// Now returns f's time.
-func (f *FakeClock) Now() time.Time {
+func (f *FakePassiveClock) Now() time.Time {
f.lock.RLock()
defer f.lock.RUnlock()
return f.time
}
// Since returns time since the time in f.
-func (f *FakeClock) Since(ts time.Time) time.Duration {
+func (f *FakePassiveClock) Since(ts time.Time) time.Duration {
f.lock.RLock()
defer f.lock.RUnlock()
return f.time.Sub(ts)
}
+// Sets the time.
+func (f *FakePassiveClock) SetTime(t time.Time) {
+ f.lock.Lock()
+ defer f.lock.Unlock()
+ f.time = t
+}
+
// Fake version of time.After(d).
func (f *FakeClock) After(d time.Duration) <-chan time.Time {
f.lock.Lock()
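
The split into `FakePassiveClock` gives tests a clock they can read and set without pulling in timer and ticker machinery. A small illustrative sketch:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/clock"
)

func main() {
	start := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
	fc := clock.NewFakePassiveClock(start)

	fmt.Println(fc.Since(start)) // 0s

	// Advance time deterministically instead of sleeping.
	fc.SetTime(start.Add(90 * time.Second))
	fmt.Println(fc.Since(start)) // 1m30s
}
```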
diff --git a/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go b/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go
index a006b925a9e62..fa9ffa51b74fd 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/diff/diff.go
@@ -19,6 +19,7 @@ package diff
import (
"bytes"
"fmt"
+ "reflect"
"strings"
"text/tabwriter"
@@ -116,3 +117,41 @@ func ObjectGoPrintSideBySide(a, b interface{}) string {
w.Flush()
return buf.String()
}
+
+// IgnoreUnset is an option that ignores fields that are unset on the right
+// hand side of a comparison. This is useful in testing to assert that an
+// object is a derivative.
+func IgnoreUnset() cmp.Option {
+ return cmp.Options{
+ // ignore unset fields in v2
+ cmp.FilterPath(func(path cmp.Path) bool {
+ _, v2 := path.Last().Values()
+ switch v2.Kind() {
+ case reflect.Slice, reflect.Map:
+ if v2.IsNil() || v2.Len() == 0 {
+ return true
+ }
+ case reflect.String:
+ if v2.Len() == 0 {
+ return true
+ }
+ case reflect.Interface, reflect.Ptr:
+ if v2.IsNil() {
+ return true
+ }
+ }
+ return false
+ }, cmp.Ignore()),
+ // ignore map entries that aren't set in v2
+ cmp.FilterPath(func(path cmp.Path) bool {
+ switch i := path.Last().(type) {
+ case cmp.MapIndex:
+ if _, v2 := i.Values(); !v2.IsValid() {
+ fmt.Println("E")
+ return true
+ }
+ }
+ return false
+ }, cmp.Ignore()),
+ }
+}
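
`IgnoreUnset` makes it possible to assert that one object is a derivative of another: fields left empty on the right-hand side are skipped. A sketch with an illustrative struct (go-cmp is the `cmp` package this file imports):

```go
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	"k8s.io/apimachinery/pkg/util/diff"
)

type spec struct {
	Name   string
	Labels map[string]string
}

func main() {
	full := spec{Name: "a", Labels: map[string]string{"k": "v"}}
	partial := spec{Name: "a"} // Labels deliberately unset

	// The unset Labels map on the right-hand side is ignored...
	fmt.Println(cmp.Equal(full, partial, diff.IgnoreUnset())) // true
	// ...but a plain comparison still sees the difference.
	fmt.Println(cmp.Equal(full, partial)) // false
}
```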
diff --git a/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go b/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go
index 12c8a7b6cbed9..2df62955538f8 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/intstr/intstr.go
@@ -45,7 +45,7 @@ type IntOrString struct {
}
// Type represents the stored type of IntOrString.
-type Type int
+type Type int64
const (
Int Type = iota // The IntOrString holds an int.
diff --git a/vendor/k8s.io/apimachinery/pkg/util/json/json.go b/vendor/k8s.io/apimachinery/pkg/util/json/json.go
index 10c8cb837ed5f..0e2e301754763 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/json/json.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/json/json.go
@@ -19,6 +19,7 @@ package json
import (
"bytes"
"encoding/json"
+ "fmt"
"io"
)
@@ -34,6 +35,9 @@ func Marshal(v interface{}) ([]byte, error) {
return json.Marshal(v)
}
+// limit recursive depth to prevent stack overflow errors
+const maxDepth = 10000
+
// Unmarshal unmarshals the given data
// If v is a *map[string]interface{}, numbers are converted to int64 or float64
func Unmarshal(data []byte, v interface{}) error {
@@ -48,7 +52,7 @@ func Unmarshal(data []byte, v interface{}) error {
return err
}
// If the decode succeeds, post-process the map to convert json.Number objects to int64 or float64
- return convertMapNumbers(*v)
+ return convertMapNumbers(*v, 0)
case *[]interface{}:
// Build a decoder from the given data
@@ -60,7 +64,7 @@ func Unmarshal(data []byte, v interface{}) error {
return err
}
// If the decode succeeds, post-process the map to convert json.Number objects to int64 or float64
- return convertSliceNumbers(*v)
+ return convertSliceNumbers(*v, 0)
default:
return json.Unmarshal(data, v)
@@ -69,16 +73,20 @@ func Unmarshal(data []byte, v interface{}) error {
// convertMapNumbers traverses the map, converting any json.Number values to int64 or float64.
// values which are map[string]interface{} or []interface{} are recursively visited
-func convertMapNumbers(m map[string]interface{}) error {
+func convertMapNumbers(m map[string]interface{}, depth int) error {
+ if depth > maxDepth {
+ return fmt.Errorf("exceeded max depth of %d", maxDepth)
+ }
+
var err error
for k, v := range m {
switch v := v.(type) {
case json.Number:
m[k], err = convertNumber(v)
case map[string]interface{}:
- err = convertMapNumbers(v)
+ err = convertMapNumbers(v, depth+1)
case []interface{}:
- err = convertSliceNumbers(v)
+ err = convertSliceNumbers(v, depth+1)
}
if err != nil {
return err
@@ -89,16 +97,20 @@ func convertMapNumbers(m map[string]interface{}) error {
// convertSliceNumbers traverses the slice, converting any json.Number values to int64 or float64.
// values which are map[string]interface{} or []interface{} are recursively visited
-func convertSliceNumbers(s []interface{}) error {
+func convertSliceNumbers(s []interface{}, depth int) error {
+ if depth > maxDepth {
+ return fmt.Errorf("exceeded max depth of %d", maxDepth)
+ }
+
var err error
for i, v := range s {
switch v := v.(type) {
case json.Number:
s[i], err = convertNumber(v)
case map[string]interface{}:
- err = convertMapNumbers(v)
+ err = convertMapNumbers(v, depth+1)
case []interface{}:
- err = convertSliceNumbers(v)
+ err = convertSliceNumbers(v, depth+1)
}
if err != nil {
return err
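
The depth guard above only changes behavior for pathologically nested inputs; the package's number handling is unchanged and remains the reason to prefer it over `encoding/json` when decoding into untyped maps. An illustrative round trip:

```go
package main

import (
	"fmt"

	utiljson "k8s.io/apimachinery/pkg/util/json"
)

func main() {
	var m map[string]interface{}
	// Whole numbers come back as int64 rather than float64, so large
	// integers survive decoding into an untyped map.
	if err := utiljson.Unmarshal([]byte(`{"replicas": 3, "ratio": 0.5}`), &m); err != nil {
		panic(err)
	}
	fmt.Printf("%T %T\n", m["replicas"], m["ratio"]) // int64 float64
}
```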
diff --git a/vendor/k8s.io/apimachinery/pkg/util/naming/from_stack.go b/vendor/k8s.io/apimachinery/pkg/util/naming/from_stack.go
index 2965d5a8bc523..d69bf32caa8b5 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/naming/from_stack.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/naming/from_stack.go
@@ -82,7 +82,7 @@ var stackCreator = regexp.MustCompile(`(?m)^created by (.*)\n\s+(.*):(\d+) \+0x[
func extractStackCreator() (string, int, bool) {
stack := debug.Stack()
matches := stackCreator.FindStringSubmatch(string(stack))
- if matches == nil || len(matches) != 4 {
+ if len(matches) != 4 {
return "", 0, false
}
line, err := strconv.Atoi(matches[3])
diff --git a/vendor/k8s.io/apimachinery/pkg/util/net/http.go b/vendor/k8s.io/apimachinery/pkg/util/net/http.go
index 078f00d9b979d..f9540c63bb25b 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/net/http.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/net/http.go
@@ -101,6 +101,9 @@ func SetOldTransportDefaults(t *http.Transport) *http.Transport {
if t.TLSHandshakeTimeout == 0 {
t.TLSHandshakeTimeout = defaultTransport.TLSHandshakeTimeout
}
+ if t.IdleConnTimeout == 0 {
+ t.IdleConnTimeout = defaultTransport.IdleConnTimeout
+ }
return t
}
@@ -111,7 +114,7 @@ func SetTransportDefaults(t *http.Transport) *http.Transport {
// Allow clients to disable http2 if needed.
if s := os.Getenv("DISABLE_HTTP2"); len(s) > 0 {
klog.Infof("HTTP2 has been explicitly disabled")
- } else {
+ } else if allowsHTTP2(t) {
if err := http2.ConfigureTransport(t); err != nil {
klog.Warningf("Transport failed http2 configuration: %v", err)
}
@@ -119,6 +122,21 @@ func SetTransportDefaults(t *http.Transport) *http.Transport {
return t
}
+func allowsHTTP2(t *http.Transport) bool {
+ if t.TLSClientConfig == nil || len(t.TLSClientConfig.NextProtos) == 0 {
+ // the transport expressed no NextProto preference, allow
+ return true
+ }
+ for _, p := range t.TLSClientConfig.NextProtos {
+ if p == http2.NextProtoTLS {
+ // the transport explicitly allowed http/2
+ return true
+ }
+ }
+ // the transport explicitly set NextProtos and excluded http/2
+ return false
+}
+
type RoundTripperWrapper interface {
http.RoundTripper
WrappedRoundTripper() http.RoundTripper
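
With `allowsHTTP2`, a transport that pins ALPN to HTTP/1.1 is now left alone instead of being force-upgraded. An illustrative check (the transport below is hypothetical):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"

	utilnet "k8s.io/apimachinery/pkg/util/net"
)

func main() {
	t := &http.Transport{
		// Explicitly excluding "h2" from NextProtos opts out of HTTP/2.
		TLSClientConfig: &tls.Config{NextProtos: []string{"http/1.1"}},
	}
	t = utilnet.SetTransportDefaults(t)
	fmt.Println(t.TLSClientConfig.NextProtos) // still [http/1.1]
}
```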
diff --git a/vendor/k8s.io/apimachinery/pkg/util/net/interface.go b/vendor/k8s.io/apimachinery/pkg/util/net/interface.go
index daf5d24964559..836494d579aad 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/net/interface.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/net/interface.go
@@ -36,6 +36,18 @@ const (
familyIPv6 AddressFamily = 6
)
+type AddressFamilyPreference []AddressFamily
+
+var (
+ preferIPv4 = AddressFamilyPreference{familyIPv4, familyIPv6}
+ preferIPv6 = AddressFamilyPreference{familyIPv6, familyIPv4}
+)
+
+const (
+ // LoopbackInterfaceName is the default name of the loopback interface
+ LoopbackInterfaceName = "lo"
+)
+
const (
ipv4RouteFile = "/proc/net/route"
ipv6RouteFile = "/proc/net/ipv6_route"
@@ -53,7 +65,7 @@ type RouteFile struct {
parse func(input io.Reader) ([]Route, error)
}
-// noRoutesError can be returned by ChooseBindAddress() in case of no routes
+// noRoutesError can be returned in case of no routes
type noRoutesError struct {
message string
}
@@ -254,7 +266,7 @@ func getIPFromInterface(intfName string, forFamily AddressFamily, nw networkInte
return nil, nil
}
-// memberOF tells if the IP is of the desired family. Used for checking interface addresses.
+// memberOf tells if the IP is of the desired family. Used for checking interface addresses.
func memberOf(ip net.IP, family AddressFamily) bool {
if ip.To4() != nil {
return family == familyIPv4
@@ -265,8 +277,8 @@ func memberOf(ip net.IP, family AddressFamily) bool {
// chooseIPFromHostInterfaces looks at all system interfaces, trying to find one that is up that
// has a global unicast address (non-loopback, non-link local, non-point2point), and returns the IP.
-// Searches for IPv4 addresses, and then IPv6 addresses.
-func chooseIPFromHostInterfaces(nw networkInterfacer) (net.IP, error) {
+// addressFamilies determines whether it prefers IPv4 or IPv6
+func chooseIPFromHostInterfaces(nw networkInterfacer, addressFamilies AddressFamilyPreference) (net.IP, error) {
intfs, err := nw.Interfaces()
if err != nil {
return nil, err
@@ -274,7 +286,7 @@ func chooseIPFromHostInterfaces(nw networkInterfacer) (net.IP, error) {
if len(intfs) == 0 {
return nil, fmt.Errorf("no interfaces found on host.")
}
- for _, family := range []AddressFamily{familyIPv4, familyIPv6} {
+ for _, family := range addressFamilies {
klog.V(4).Infof("Looking for system interface with a global IPv%d address", uint(family))
for _, intf := range intfs {
if !isInterfaceUp(&intf) {
@@ -321,15 +333,19 @@ func chooseIPFromHostInterfaces(nw networkInterfacer) (net.IP, error) {
// IP of the interface with a gateway on it (with priority given to IPv4). For a node
// with no internet connection, it returns error.
func ChooseHostInterface() (net.IP, error) {
+ return chooseHostInterface(preferIPv4)
+}
+
+func chooseHostInterface(addressFamilies AddressFamilyPreference) (net.IP, error) {
var nw networkInterfacer = networkInterface{}
if _, err := os.Stat(ipv4RouteFile); os.IsNotExist(err) {
- return chooseIPFromHostInterfaces(nw)
+ return chooseIPFromHostInterfaces(nw, addressFamilies)
}
routes, err := getAllDefaultRoutes()
if err != nil {
return nil, err
}
- return chooseHostInterfaceFromRoute(routes, nw)
+ return chooseHostInterfaceFromRoute(routes, nw, addressFamilies)
}
// networkInterfacer defines an interface for several net library functions. Production
@@ -377,10 +393,10 @@ func getAllDefaultRoutes() ([]Route, error) {
}
// chooseHostInterfaceFromRoute cycles through each default route provided, looking for a
-// global IP address from the interface for the route. Will first look all each IPv4 route for
-// an IPv4 IP, and then will look at each IPv6 route for an IPv6 IP.
-func chooseHostInterfaceFromRoute(routes []Route, nw networkInterfacer) (net.IP, error) {
- for _, family := range []AddressFamily{familyIPv4, familyIPv6} {
+// global IP address from the interface for the route. addressFamilies determines whether it
+// prefers IPv4 or IPv6
+func chooseHostInterfaceFromRoute(routes []Route, nw networkInterfacer, addressFamilies AddressFamilyPreference) (net.IP, error) {
+ for _, family := range addressFamilies {
klog.V(4).Infof("Looking for default routes with IPv%d addresses", uint(family))
for _, route := range routes {
if route.Family != family {
@@ -401,12 +417,19 @@ func chooseHostInterfaceFromRoute(routes []Route, nw networkInterfacer) (net.IP,
return nil, fmt.Errorf("unable to select an IP from default routes.")
}
-// If bind-address is usable, return it directly
-// If bind-address is not usable (unset, 0.0.0.0, or loopback), we will use the host's default
-// interface.
-func ChooseBindAddress(bindAddress net.IP) (net.IP, error) {
+// ResolveBindAddress returns the IP address of a daemon, based on the given bindAddress:
+// If bindAddress is unset, it returns the host's default IP, as with ChooseHostInterface().
+// If bindAddress is unspecified or loopback, it returns the default IP of the same
+// address family as bindAddress.
+// Otherwise, it just returns bindAddress.
+func ResolveBindAddress(bindAddress net.IP) (net.IP, error) {
+ addressFamilies := preferIPv4
+ if bindAddress != nil && memberOf(bindAddress, familyIPv6) {
+ addressFamilies = preferIPv6
+ }
+
if bindAddress == nil || bindAddress.IsUnspecified() || bindAddress.IsLoopback() {
- hostIP, err := ChooseHostInterface()
+ hostIP, err := chooseHostInterface(addressFamilies)
if err != nil {
return nil, err
}
@@ -414,3 +437,21 @@ func ChooseBindAddress(bindAddress net.IP) (net.IP, error) {
}
return bindAddress, nil
}
+
+// ChooseBindAddressForInterface chooses a global IP for a specific interface, with priority given to IPv4.
+// This is required in case of network setups where default routes are present, but network
+// interfaces use only link-local addresses (e.g. as described in RFC5549),
+// e.g. when using BGP to announce a host IP over link-local IP addresses and this IP address is attached to the lo interface.
+func ChooseBindAddressForInterface(intfName string) (net.IP, error) {
+ var nw networkInterfacer = networkInterface{}
+ for _, family := range preferIPv4 {
+ ip, err := getIPFromInterface(intfName, family, nw)
+ if err != nil {
+ return nil, err
+ }
+ if ip != nil {
+ return ip, nil
+ }
+ }
+ return nil, fmt.Errorf("unable to select an IP from %s network interface", intfName)
+}
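
A usage sketch for the renamed `ResolveBindAddress`; results depend on the host's routing table, so treat the output as illustrative:

```go
package main

import (
	"fmt"
	"net"

	utilnet "k8s.io/apimachinery/pkg/util/net"
)

func main() {
	// Unspecified or loopback input resolves to the host's default IP,
	// preferring the address family of the input.
	ip, err := utilnet.ResolveBindAddress(net.ParseIP("0.0.0.0"))
	fmt.Println(ip, err)

	// A concrete, routable address is returned unchanged.
	fmt.Println(utilnet.ResolveBindAddress(net.ParseIP("192.0.2.10")))
}
```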
diff --git a/vendor/k8s.io/apimachinery/pkg/util/net/util.go b/vendor/k8s.io/apimachinery/pkg/util/net/util.go
index 8344d10c83ae9..2e7cb9499465e 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/net/util.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/net/util.go
@@ -54,3 +54,20 @@ func IsConnectionReset(err error) bool {
}
return false
}
+
+// IsConnectionRefused returns true if the given err is a "connection refused" error.
+func IsConnectionRefused(err error) bool {
+ if urlErr, ok := err.(*url.Error); ok {
+ err = urlErr.Err
+ }
+ if opErr, ok := err.(*net.OpError); ok {
+ err = opErr.Err
+ }
+ if osErr, ok := err.(*os.SyscallError); ok {
+ err = osErr.Err
+ }
+ if errno, ok := err.(syscall.Errno); ok && errno == syscall.ECONNREFUSED {
+ return true
+ }
+ return false
+}
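
`IsConnectionRefused` peels `*url.Error`, `*net.OpError`, and `*os.SyscallError` wrappers before checking for `ECONNREFUSED`. An illustrative check; it assumes nothing is listening on the dialed port, which is typical for port 1 on localhost:

```go
package main

import (
	"fmt"
	"net"

	utilnet "k8s.io/apimachinery/pkg/util/net"
)

func main() {
	// Dialing a closed local port normally fails with ECONNREFUSED.
	_, err := net.Dial("tcp", "127.0.0.1:1")
	fmt.Println(utilnet.IsConnectionRefused(err)) // true on a typical host
}
```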
diff --git a/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go b/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go
index c7348129ae816..1428443f54491 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go
@@ -18,6 +18,7 @@ package runtime
import (
"fmt"
+ "net/http"
"runtime"
"sync"
"time"
@@ -56,8 +57,16 @@ func HandleCrash(additionalHandlers ...func(interface{})) {
}
}
-// logPanic logs the caller tree when a panic occurs.
+// logPanic logs the caller tree when a panic occurs (except in the special case of http.ErrAbortHandler).
func logPanic(r interface{}) {
+ if r == http.ErrAbortHandler {
+ // honor the http.ErrAbortHandler sentinel panic value:
+ // ErrAbortHandler is a sentinel panic value to abort a handler.
+ // While any panic from ServeHTTP aborts the response to the client,
+ // panicking with ErrAbortHandler also suppresses logging of a stack trace to the server's error log.
+ return
+ }
+
// Same as stdlib http server code. Manually allocate stack trace buffer size
// to prevent excessively large logs
const size = 64 << 10
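
The effect of the `http.ErrAbortHandler` special case: the panic is still recovered, but no stack trace is logged. A demo sketch; note it flips `ReallyCrash` off, since `HandleCrash` re-raises the panic by default:

```go
package main

import (
	"fmt"
	"net/http"

	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)

func main() {
	utilruntime.ReallyCrash = false // demo only; the default re-raises the panic

	func() {
		defer utilruntime.HandleCrash()
		// With the change above, this sentinel panic is swallowed silently
		// instead of dumping a stack trace to the error log.
		panic(http.ErrAbortHandler)
	}()
	fmt.Println("no stack trace was logged")
}
```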
diff --git a/vendor/k8s.io/apimachinery/pkg/util/sets/byte.go b/vendor/k8s.io/apimachinery/pkg/util/sets/byte.go
index 766f4501e0f28..9bfa85d43d411 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/sets/byte.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/sets/byte.go
@@ -46,17 +46,19 @@ func ByteKeySet(theMap interface{}) Byte {
}
// Insert adds items to the set.
-func (s Byte) Insert(items ...byte) {
+func (s Byte) Insert(items ...byte) Byte {
for _, item := range items {
s[item] = Empty{}
}
+ return s
}
// Delete removes all items from the set.
-func (s Byte) Delete(items ...byte) {
+func (s Byte) Delete(items ...byte) Byte {
for _, item := range items {
delete(s, item)
}
+ return s
}
// Has returns true if and only if item is contained in the set.
diff --git a/vendor/k8s.io/apimachinery/pkg/util/sets/int.go b/vendor/k8s.io/apimachinery/pkg/util/sets/int.go
index a0a513cd9b511..88bd7096791e6 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/sets/int.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/sets/int.go
@@ -46,17 +46,19 @@ func IntKeySet(theMap interface{}) Int {
}
// Insert adds items to the set.
-func (s Int) Insert(items ...int) {
+func (s Int) Insert(items ...int) Int {
for _, item := range items {
s[item] = Empty{}
}
+ return s
}
// Delete removes all items from the set.
-func (s Int) Delete(items ...int) {
+func (s Int) Delete(items ...int) Int {
for _, item := range items {
delete(s, item)
}
+ return s
}
// Has returns true if and only if item is contained in the set.
diff --git a/vendor/k8s.io/apimachinery/pkg/util/sets/int32.go b/vendor/k8s.io/apimachinery/pkg/util/sets/int32.go
index 584eabc8b7618..96a48555426f3 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/sets/int32.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/sets/int32.go
@@ -46,17 +46,19 @@ func Int32KeySet(theMap interface{}) Int32 {
}
// Insert adds items to the set.
-func (s Int32) Insert(items ...int32) {
+func (s Int32) Insert(items ...int32) Int32 {
for _, item := range items {
s[item] = Empty{}
}
+ return s
}
// Delete removes all items from the set.
-func (s Int32) Delete(items ...int32) {
+func (s Int32) Delete(items ...int32) Int32 {
for _, item := range items {
delete(s, item)
}
+ return s
}
// Has returns true if and only if item is contained in the set.
diff --git a/vendor/k8s.io/apimachinery/pkg/util/sets/int64.go b/vendor/k8s.io/apimachinery/pkg/util/sets/int64.go
index 9ca9af0c59187..b375a1b065c9c 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/sets/int64.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/sets/int64.go
@@ -46,17 +46,19 @@ func Int64KeySet(theMap interface{}) Int64 {
}
// Insert adds items to the set.
-func (s Int64) Insert(items ...int64) {
+func (s Int64) Insert(items ...int64) Int64 {
for _, item := range items {
s[item] = Empty{}
}
+ return s
}
// Delete removes all items from the set.
-func (s Int64) Delete(items ...int64) {
+func (s Int64) Delete(items ...int64) Int64 {
for _, item := range items {
delete(s, item)
}
+ return s
}
// Has returns true if and only if item is contained in the set.
diff --git a/vendor/k8s.io/apimachinery/pkg/util/sets/string.go b/vendor/k8s.io/apimachinery/pkg/util/sets/string.go
index ba00ad7df4e7e..e6f37db8874b9 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/sets/string.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/sets/string.go
@@ -46,17 +46,19 @@ func StringKeySet(theMap interface{}) String {
}
// Insert adds items to the set.
-func (s String) Insert(items ...string) {
+func (s String) Insert(items ...string) String {
for _, item := range items {
s[item] = Empty{}
}
+ return s
}
// Delete removes all items from the set.
-func (s String) Delete(items ...string) {
+func (s String) Delete(items ...string) String {
for _, item := range items {
delete(s, item)
}
+ return s
}
// Has returns true if and only if item is contained in the set.
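
All five typed sets get the same change, which exists to allow chained construction. For example, with the string set:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

func main() {
	// Insert and Delete now return the set, so calls can be chained.
	s := sets.NewString("a").Insert("b", "c").Delete("a")
	fmt.Println(s.List()) // [b c]
}
```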
diff --git a/vendor/k8s.io/apimachinery/pkg/util/validation/field/errors.go b/vendor/k8s.io/apimachinery/pkg/util/validation/field/errors.go
index 4767fd1dda104..0cd5d65775a70 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/validation/field/errors.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/validation/field/errors.go
@@ -116,6 +116,10 @@ const (
// This is similar to ErrorTypeInvalid, but the error will not include the
// too-long value. See TooLong().
ErrorTypeTooLong ErrorType = "FieldValueTooLong"
+ // ErrorTypeTooMany is used to report "too many". This is used to
+ // report that a given list has too many items. This is similar to FieldValueTooLong,
+ // but the error indicates quantity instead of length.
+ ErrorTypeTooMany ErrorType = "FieldValueTooMany"
// ErrorTypeInternal is used to report other errors that are not related
// to user input. See InternalError().
ErrorTypeInternal ErrorType = "InternalError"
@@ -138,6 +142,8 @@ func (t ErrorType) String() string {
return "Forbidden"
case ErrorTypeTooLong:
return "Too long"
+ case ErrorTypeTooMany:
+ return "Too many"
case ErrorTypeInternal:
return "Internal error"
default:
@@ -198,7 +204,14 @@ func Forbidden(field *Path, detail string) *Error {
// Invalid, but the returned error will not include the too-long
// value.
func TooLong(field *Path, value interface{}, maxLength int) *Error {
- return &Error{ErrorTypeTooLong, field.String(), value, fmt.Sprintf("must have at most %d characters", maxLength)}
+ return &Error{ErrorTypeTooLong, field.String(), value, fmt.Sprintf("must have at most %d bytes", maxLength)}
+}
+
+// TooMany returns a *Error indicating "too many". This is used to
+// report that a given list has too many items. This is similar to TooLong,
+// but the returned error indicates quantity instead of length.
+func TooMany(field *Path, actualQuantity, maxQuantity int) *Error {
+ return &Error{ErrorTypeTooMany, field.String(), actualQuantity, fmt.Sprintf("must have at most %d items", maxQuantity)}
}
// InternalError returns a *Error indicating "internal error". This is used
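
A quick illustration of the new error type; the field path and quantities are made up:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation/field"
)

func main() {
	err := field.TooMany(field.NewPath("spec", "ports"), 12, 10)
	// Prints the field path, the "Too many" type, the actual quantity,
	// and the detail, roughly:
	//   spec.ports: Too many: 12: must have at most 10 items
	fmt.Println(err)
}
```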
diff --git a/vendor/k8s.io/apimachinery/pkg/util/validation/validation.go b/vendor/k8s.io/apimachinery/pkg/util/validation/validation.go
index 2dd99992dcad6..8e1907c2a6994 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/validation/validation.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/validation/validation.go
@@ -70,7 +70,11 @@ func IsQualifiedName(value string) []string {
return errs
}
-// IsFullyQualifiedName checks if the name is fully qualified.
+// IsFullyQualifiedName checks if the name is fully qualified. This is similar
+// to IsFullyQualifiedDomainName but requires a minimum of 3 segments instead of
+// 2 and does not accept a trailing . as valid.
+// TODO: This function is deprecated and preserved until all callers migrate to
+// IsFullyQualifiedDomainName; please don't add new callers.
func IsFullyQualifiedName(fldPath *field.Path, name string) field.ErrorList {
var allErrors field.ErrorList
if len(name) == 0 {
@@ -85,6 +89,26 @@ func IsFullyQualifiedName(fldPath *field.Path, name string) field.ErrorList {
return allErrors
}
+// IsFullyQualifiedDomainName checks if the domain name is fully qualified. This
+// is similar to IsFullyQualifiedName but only requires a minimum of 2 segments
+// instead of 3 and accepts a trailing . as valid.
+func IsFullyQualifiedDomainName(fldPath *field.Path, name string) field.ErrorList {
+ var allErrors field.ErrorList
+ if len(name) == 0 {
+ return append(allErrors, field.Required(fldPath, ""))
+ }
+ if strings.HasSuffix(name, ".") {
+ name = name[:len(name)-1]
+ }
+ if errs := IsDNS1123Subdomain(name); len(errs) > 0 {
+ return append(allErrors, field.Invalid(fldPath, name, strings.Join(errs, ",")))
+ }
+ if len(strings.Split(name, ".")) < 2 {
+ return append(allErrors, field.Invalid(fldPath, name, "should be a domain with at least two segments separated by dots"))
+ }
+ return allErrors
+}
+
const labelValueFmt string = "(" + qualifiedNameFmt + ")?"
const labelValueErrMsg string = "a valid label must be an empty string or consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character"
@@ -285,6 +309,26 @@ func IsValidIP(value string) []string {
return nil
}
+// IsValidIPv4Address tests that the argument is a valid IPv4 address.
+func IsValidIPv4Address(fldPath *field.Path, value string) field.ErrorList {
+ var allErrors field.ErrorList
+ ip := net.ParseIP(value)
+ if ip == nil || ip.To4() == nil {
+ allErrors = append(allErrors, field.Invalid(fldPath, value, "must be a valid IPv4 address"))
+ }
+ return allErrors
+}
+
+// IsValidIPv6Address tests that the argument is a valid IPv6 address.
+func IsValidIPv6Address(fldPath *field.Path, value string) field.ErrorList {
+ var allErrors field.ErrorList
+ ip := net.ParseIP(value)
+ if ip == nil || ip.To4() != nil {
+ allErrors = append(allErrors, field.Invalid(fldPath, value, "must be a valid IPv6 address"))
+ }
+ return allErrors
+}
+
const percentFmt string = "[0-9]+%"
const percentErrMsg string = "a valid percent string must be a numeric string followed by an ending '%'"
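
Illustrative calls to the new validators; field paths and values are made up:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
	"k8s.io/apimachinery/pkg/util/validation/field"
)

func main() {
	p := field.NewPath("spec", "clusterIP")
	fmt.Println(validation.IsValidIPv4Address(p, "10.0.0.1")) // no errors
	fmt.Println(validation.IsValidIPv4Address(p, "::1"))      // one Invalid error

	d := field.NewPath("spec", "externalName")
	// Unlike IsFullyQualifiedName, two segments and a trailing dot are accepted.
	fmt.Println(validation.IsFullyQualifiedDomainName(d, "example.com."))
}
```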
diff --git a/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go b/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go
index bc6b18d2b4630..386c3e7ea0252 100644
--- a/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go
+++ b/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go
@@ -207,23 +207,31 @@ type ConditionFunc func() (done bool, err error)
type Backoff struct {
// The initial duration.
Duration time.Duration
- // Duration is multiplied by factor each iteration. Must be greater
- // than or equal to zero.
+ // Duration is multiplied by factor each iteration, if factor is not zero
+ // and the limits imposed by Steps and Cap have not been reached.
+ // Should not be negative.
+ // The jitter does not contribute to the updates to the duration parameter.
Factor float64
- // The amount of jitter applied each iteration. Jitter is applied after
- // cap.
+ // The sleep at each iteration is the duration plus an additional
+ // amount chosen uniformly at random from the interval between
+ // zero and `jitter*duration`.
Jitter float64
- // The number of steps before duration stops changing. If zero, initial
- // duration is always used. Used for exponential backoff in combination
- // with Factor.
+ // The remaining number of iterations in which the duration
+ // parameter may change (but progress can be stopped earlier by
+ // hitting the cap). If not positive, the duration is not
+ // changed. Used for exponential backoff in combination with
+ // Factor and Cap.
Steps int
- // The returned duration will never be greater than cap *before* jitter
- // is applied. The actual maximum cap is `cap * (1.0 + jitter)`.
+ // A limit on revised values of the duration parameter. If a
+ // multiplication by the factor parameter would make the duration
+ // exceed the cap then the duration is set to the cap and the
+ // steps parameter is set to zero.
Cap time.Duration
}
-// Step returns the next interval in the exponential backoff. This method
-// will mutate the provided backoff.
+// Step (1) returns an amount of time to sleep determined by the
+// original Duration and Jitter and (2) mutates the provided Backoff
+// to update its Steps and Duration.
func (b *Backoff) Step() time.Duration {
if b.Steps < 1 {
if b.Jitter > 0 {
@@ -271,14 +279,14 @@ func contextForChannel(parentCh <-chan struct{}) (context.Context, context.Cance
// ExponentialBackoff repeats a condition check with exponential backoff.
//
-// It checks the condition up to Steps times, increasing the wait by multiplying
-// the previous duration by Factor.
-//
-// If Jitter is greater than zero, a random amount of each duration is added
-// (between duration and duration*(1+jitter)).
-//
-// If the condition never returns true, ErrWaitTimeout is returned. All other
-// errors terminate immediately.
+// It repeatedly checks the condition and then sleeps, using `backoff.Step()`
+// to determine the length of the sleep and adjust Duration and Steps.
+// Stops and returns as soon as:
+// 1. the condition check returns true or an error,
+// 2. `backoff.Steps` checks of the condition have been done, or
+// 3. a sleep truncated by the cap on duration has been completed.
+// In case (1) the returned error is what the condition function returned.
+// In all other cases, ErrWaitTimeout is returned.
func ExponentialBackoff(backoff Backoff, condition ConditionFunc) error {
for backoff.Steps > 0 {
if ok, err := condition(); err != nil || ok {
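
Since the reworded docs lean on `Backoff.Step()` semantics, a concrete example of the resulting behavior (durations are illustrative):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	attempts := 0
	backoff := wait.Backoff{
		Duration: 10 * time.Millisecond, // initial sleep
		Factor:   2.0,                   // sleep doubles each step
		Jitter:   0.1,                   // plus up to 10% random slack
		Steps:    5,                     // at most 5 condition checks
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		attempts++
		return attempts == 3, nil // succeed on the third try
	})
	fmt.Println(attempts, err) // 3 <nil>
}
```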
diff --git a/vendor/k8s.io/klog/klog.go b/vendor/k8s.io/klog/klog.go
index 2520ebdaa7412..2712ce0afc11e 100644
--- a/vendor/k8s.io/klog/klog.go
+++ b/vendor/k8s.io/klog/klog.go
@@ -142,7 +142,7 @@ func (s *severity) Set(value string) error {
if v, ok := severityByName(value); ok {
threshold = v
} else {
- v, err := strconv.Atoi(value)
+ v, err := strconv.ParseInt(value, 10, 32)
if err != nil {
return err
}
@@ -226,7 +226,7 @@ func (l *Level) Get() interface{} {
// Set is part of the flag.Value interface.
func (l *Level) Set(value string) error {
- v, err := strconv.Atoi(value)
+ v, err := strconv.ParseInt(value, 10, 32)
if err != nil {
return err
}
@@ -294,7 +294,7 @@ func (m *moduleSpec) Set(value string) error {
return errVmoduleSyntax
}
pattern := patLev[0]
- v, err := strconv.Atoi(patLev[1])
+ v, err := strconv.ParseInt(patLev[1], 10, 32)
if err != nil {
return errors.New("syntax error: expect comma-separated list of filename=N")
}
@@ -396,31 +396,23 @@ type flushSyncWriter interface {
io.Writer
}
+// init sets up the defaults and runs flushDaemon.
func init() {
- // Default stderrThreshold is ERROR.
- logging.stderrThreshold = errorLog
-
+ logging.stderrThreshold = errorLog // Default stderrThreshold is ERROR.
logging.setVState(0, nil, false)
+ logging.logDir = ""
+ logging.logFile = ""
+ logging.logFileMaxSizeMB = 1800
+ logging.toStderr = true
+ logging.alsoToStderr = false
+ logging.skipHeaders = false
+ logging.addDirHeader = false
+ logging.skipLogHeaders = false
go logging.flushDaemon()
}
-var initDefaultsOnce sync.Once
-
// InitFlags is for explicitly initializing the flags.
func InitFlags(flagset *flag.FlagSet) {
-
- // Initialize defaults.
- initDefaultsOnce.Do(func() {
- logging.logDir = ""
- logging.logFile = ""
- logging.logFileMaxSizeMB = 1800
- logging.toStderr = true
- logging.alsoToStderr = false
- logging.skipHeaders = false
- logging.addDirHeader = false
- logging.skipLogHeaders = false
- })
-
if flagset == nil {
flagset = flag.CommandLine
}
diff --git a/vendor/modules.txt b/vendor/modules.txt
index 45eb152aebc14..b928e26e54f1e 100644
--- a/vendor/modules.txt
+++ b/vendor/modules.txt
@@ -1,7 +1,5 @@
-# cloud.google.com/go v0.44.1
-cloud.google.com/go/bigtable
-cloud.google.com/go/bigtable/bttest
-cloud.google.com/go/bigtable/internal/option
+# cloud.google.com/go v0.49.0
+cloud.google.com/go
cloud.google.com/go/compute/metadata
cloud.google.com/go/iam
cloud.google.com/go/internal
@@ -10,10 +8,15 @@ cloud.google.com/go/internal/trace
cloud.google.com/go/internal/version
cloud.google.com/go/longrunning
cloud.google.com/go/longrunning/autogen
+# cloud.google.com/go/bigtable v1.1.0
+cloud.google.com/go/bigtable
+cloud.google.com/go/bigtable/bttest
+cloud.google.com/go/bigtable/internal/option
+# cloud.google.com/go/storage v1.3.0
cloud.google.com/go/storage
-# github.com/Azure/azure-pipeline-go v0.2.1
+# github.com/Azure/azure-pipeline-go v0.2.2
github.com/Azure/azure-pipeline-go/pipeline
-# github.com/Azure/azure-sdk-for-go v23.2.0+incompatible => github.com/Azure/azure-sdk-for-go v36.2.0+incompatible
+# github.com/Azure/azure-sdk-for-go v36.1.0+incompatible => github.com/Azure/azure-sdk-for-go v36.2.0+incompatible
github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2018-10-01/compute
github.com/Azure/azure-sdk-for-go/services/network/mgmt/2018-10-01/network
github.com/Azure/azure-sdk-for-go/version
@@ -22,21 +25,23 @@ github.com/Azure/azure-storage-blob-go/azblob
# github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78
github.com/Azure/go-ansiterm
github.com/Azure/go-ansiterm/winterm
-# github.com/Azure/go-autorest/autorest v0.9.2
+# github.com/Azure/go-autorest/autorest v0.9.3-0.20191028180845-3492b2aff503
github.com/Azure/go-autorest/autorest
github.com/Azure/go-autorest/autorest/azure
-# github.com/Azure/go-autorest/autorest/adal v0.8.0
+# github.com/Azure/go-autorest/autorest/adal v0.8.1-0.20191028180845-3492b2aff503
github.com/Azure/go-autorest/autorest/adal
# github.com/Azure/go-autorest/autorest/date v0.2.0
github.com/Azure/go-autorest/autorest/date
-# github.com/Azure/go-autorest/autorest/to v0.3.0
+# github.com/Azure/go-autorest/autorest/to v0.3.1-0.20191028180845-3492b2aff503
github.com/Azure/go-autorest/autorest/to
-# github.com/Azure/go-autorest/autorest/validation v0.2.0
+# github.com/Azure/go-autorest/autorest/validation v0.2.1-0.20191028180845-3492b2aff503
github.com/Azure/go-autorest/autorest/validation
# github.com/Azure/go-autorest/logger v0.1.0
github.com/Azure/go-autorest/logger
# github.com/Azure/go-autorest/tracing v0.5.0
github.com/Azure/go-autorest/tracing
+# github.com/BurntSushi/toml v0.3.1
+github.com/BurntSushi/toml
# github.com/Microsoft/go-winio v0.4.12
github.com/Microsoft/go-winio
# github.com/NYTimes/gziphandler v1.1.1
@@ -44,12 +49,12 @@ github.com/NYTimes/gziphandler
# github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751
github.com/alecthomas/template
github.com/alecthomas/template/parse
-# github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4
+# github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d
github.com/alecthomas/units
-# github.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878
+# github.com/armon/go-metrics v0.3.0
github.com/armon/go-metrics
github.com/armon/go-metrics/prometheus
-# github.com/aws/aws-sdk-go v1.25.22
+# github.com/aws/aws-sdk-go v1.25.35
github.com/aws/aws-sdk-go/aws
github.com/aws/aws-sdk-go/aws/awserr
github.com/aws/aws-sdk-go/aws/awsutil
@@ -106,6 +111,8 @@ github.com/bmatcuk/doublestar
github.com/bradfitz/gomemcache/memcache
# github.com/cespare/xxhash v1.1.0
github.com/cespare/xxhash
+# github.com/cespare/xxhash/v2 v2.1.1
+github.com/cespare/xxhash/v2
# github.com/containerd/containerd v1.3.2
github.com/containerd/containerd/errdefs
# github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448
@@ -119,7 +126,7 @@ github.com/coreos/go-systemd/sdjournal
# github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f
github.com/coreos/pkg/capnslog
github.com/coreos/pkg/dlopen
-# github.com/cortexproject/cortex v0.4.1-0.20191217132644-cd4009e2f8e7
+# github.com/cortexproject/cortex v0.4.1-0.20200122092731-ab3e8360fe30
github.com/cortexproject/cortex/pkg/chunk
github.com/cortexproject/cortex/pkg/chunk/aws
github.com/cortexproject/cortex/pkg/chunk/azure
@@ -149,6 +156,7 @@ github.com/cortexproject/cortex/pkg/util/flagext
github.com/cortexproject/cortex/pkg/util/grpcclient
github.com/cortexproject/cortex/pkg/util/limiter
github.com/cortexproject/cortex/pkg/util/middleware
+github.com/cortexproject/cortex/pkg/util/runtimeconfig
github.com/cortexproject/cortex/pkg/util/spanlogger
github.com/cortexproject/cortex/pkg/util/test
github.com/cortexproject/cortex/pkg/util/validation
@@ -232,7 +240,7 @@ github.com/gocql/gocql/internal/murmur
github.com/gocql/gocql/internal/streams
# github.com/gogo/googleapis v1.1.0
github.com/gogo/googleapis/google/rpc
-# github.com/gogo/protobuf v1.3.0
+# github.com/gogo/protobuf v1.3.1
github.com/gogo/protobuf/gogoproto
github.com/gogo/protobuf/io
github.com/gogo/protobuf/proto
@@ -241,14 +249,17 @@ github.com/gogo/protobuf/sortkeys
github.com/gogo/protobuf/types
# github.com/gogo/status v1.0.3
github.com/gogo/status
-# github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6
+# github.com/golang/groupcache v0.0.0-20191027212112-611e8accdfc9
github.com/golang/groupcache/lru
# github.com/golang/protobuf v1.3.2
+github.com/golang/protobuf/descriptor
github.com/golang/protobuf/jsonpb
github.com/golang/protobuf/proto
+github.com/golang/protobuf/protoc-gen-go
github.com/golang/protobuf/protoc-gen-go/descriptor
github.com/golang/protobuf/protoc-gen-go/generator
github.com/golang/protobuf/protoc-gen-go/generator/internal/remap
+github.com/golang/protobuf/protoc-gen-go/grpc
github.com/golang/protobuf/protoc-gen-go/plugin
github.com/golang/protobuf/ptypes
github.com/golang/protobuf/ptypes/any
@@ -276,11 +287,11 @@ github.com/google/gofuzz
github.com/google/uuid
# github.com/googleapis/gax-go/v2 v2.0.5
github.com/googleapis/gax-go/v2
-# github.com/googleapis/gnostic v0.3.0
+# github.com/googleapis/gnostic v0.3.1
github.com/googleapis/gnostic/OpenAPIv2
github.com/googleapis/gnostic/compiler
github.com/googleapis/gnostic/extensions
-# github.com/gophercloud/gophercloud v0.3.0
+# github.com/gophercloud/gophercloud v0.6.0
github.com/gophercloud/gophercloud
github.com/gophercloud/gophercloud/openstack
github.com/gophercloud/gophercloud/openstack/compute/v2/extensions/floatingips
@@ -302,7 +313,7 @@ github.com/grpc-ecosystem/go-grpc-middleware
# github.com/grpc-ecosystem/go-grpc-prometheus v1.2.1-0.20191002090509-6af20e3a5340
github.com/grpc-ecosystem/go-grpc-prometheus
github.com/grpc-ecosystem/go-grpc-prometheus/packages/grpcstatus
-# github.com/grpc-ecosystem/grpc-gateway v1.9.6
+# github.com/grpc-ecosystem/grpc-gateway v1.12.1
github.com/grpc-ecosystem/grpc-gateway/internal
github.com/grpc-ecosystem/grpc-gateway/runtime
github.com/grpc-ecosystem/grpc-gateway/utilities
@@ -310,7 +321,7 @@ github.com/grpc-ecosystem/grpc-gateway/utilities
github.com/grpc-ecosystem/grpc-opentracing/go/otgrpc
# github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed
github.com/hailocab/go-hostpool
-# github.com/hashicorp/consul/api v1.1.0
+# github.com/hashicorp/consul/api v1.3.0
github.com/hashicorp/consul/api
# github.com/hashicorp/errwrap v1.0.0
github.com/hashicorp/errwrap
@@ -329,9 +340,9 @@ github.com/hashicorp/go-sockaddr
# github.com/hashicorp/golang-lru v0.5.3
github.com/hashicorp/golang-lru
github.com/hashicorp/golang-lru/simplelru
-# github.com/hashicorp/memberlist v0.1.4
+# github.com/hashicorp/memberlist v0.1.5
github.com/hashicorp/memberlist
-# github.com/hashicorp/serf v0.8.3
+# github.com/hashicorp/serf v0.8.5
github.com/hashicorp/serf/coordinate
# github.com/hpcloud/tail v1.0.0 => github.com/grafana/tail v0.0.0-20191024143944-0b54ddf21fe7
github.com/hpcloud/tail
@@ -349,10 +360,14 @@ github.com/influxdata/go-syslog/v2/rfc5424
github.com/jmespath/go-jmespath
# github.com/jonboulle/clockwork v0.1.0
github.com/jonboulle/clockwork
-# github.com/jpillora/backoff v0.0.0-20180909062703-3050d21c67d7
+# github.com/jpillora/backoff v1.0.0
github.com/jpillora/backoff
# github.com/json-iterator/go v1.1.9
github.com/json-iterator/go
+# github.com/jstemmer/go-junit-report v0.9.1
+github.com/jstemmer/go-junit-report
+github.com/jstemmer/go-junit-report/formatter
+github.com/jstemmer/go-junit-report/parser
# github.com/klauspost/compress v1.9.4
github.com/klauspost/compress/flate
github.com/klauspost/compress/gzip
@@ -365,13 +380,13 @@ github.com/leodido/ragel-machinery
github.com/leodido/ragel-machinery/parser
# github.com/mattn/go-colorable v0.0.9
github.com/mattn/go-colorable
-# github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb
+# github.com/mattn/go-ieproxy v0.0.0-20191113090002-7c0f6868bffe
github.com/mattn/go-ieproxy
# github.com/mattn/go-isatty v0.0.4
github.com/mattn/go-isatty
# github.com/matttproud/golang_protobuf_extensions v1.0.1
github.com/matttproud/golang_protobuf_extensions/pbutil
-# github.com/miekg/dns v1.1.19
+# github.com/miekg/dns v1.1.22
github.com/miekg/dns
# github.com/mitchellh/go-homedir v1.1.0
github.com/mitchellh/go-homedir
@@ -409,13 +424,14 @@ github.com/pierrec/lz4/internal/xxh32
github.com/pkg/errors
# github.com/pmezard/go-difflib v1.0.0
github.com/pmezard/go-difflib/difflib
-# github.com/prometheus/client_golang v1.1.0
+# github.com/prometheus/client_golang v1.2.1
github.com/prometheus/client_golang/api
github.com/prometheus/client_golang/api/prometheus/v1
github.com/prometheus/client_golang/prometheus
github.com/prometheus/client_golang/prometheus/internal
github.com/prometheus/client_golang/prometheus/promauto
github.com/prometheus/client_golang/prometheus/promhttp
+github.com/prometheus/client_golang/prometheus/push
github.com/prometheus/client_golang/prometheus/testutil
# github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4
github.com/prometheus/client_model/go
@@ -425,10 +441,11 @@ github.com/prometheus/common/expfmt
github.com/prometheus/common/internal/bitbucket.org/ww/goautoneg
github.com/prometheus/common/model
github.com/prometheus/common/version
-# github.com/prometheus/procfs v0.0.3
+# github.com/prometheus/procfs v0.0.6
github.com/prometheus/procfs
github.com/prometheus/procfs/internal/fs
-# github.com/prometheus/prometheus v1.8.2-0.20190918104050-8744afdd1ea0
+github.com/prometheus/procfs/internal/util
+# github.com/prometheus/prometheus v1.8.2-0.20191126064551-80ba03c67da1
github.com/prometheus/prometheus/discovery
github.com/prometheus/prometheus/discovery/azure
github.com/prometheus/prometheus/discovery/config
@@ -444,6 +461,7 @@ github.com/prometheus/prometheus/discovery/refresh
github.com/prometheus/prometheus/discovery/targetgroup
github.com/prometheus/prometheus/discovery/triton
github.com/prometheus/prometheus/discovery/zookeeper
+github.com/prometheus/prometheus/pkg/exemplar
github.com/prometheus/prometheus/pkg/gate
github.com/prometheus/prometheus/pkg/labels
github.com/prometheus/prometheus/pkg/modtimevfs
@@ -464,14 +482,15 @@ github.com/prometheus/prometheus/tsdb/errors
github.com/prometheus/prometheus/tsdb/fileutil
github.com/prometheus/prometheus/tsdb/goversion
github.com/prometheus/prometheus/tsdb/index
-github.com/prometheus/prometheus/tsdb/labels
+github.com/prometheus/prometheus/tsdb/record
+github.com/prometheus/prometheus/tsdb/tombstones
github.com/prometheus/prometheus/tsdb/wal
github.com/prometheus/prometheus/util/stats
github.com/prometheus/prometheus/util/strutil
github.com/prometheus/prometheus/util/teststorage
github.com/prometheus/prometheus/util/testutil
github.com/prometheus/prometheus/util/treecache
-# github.com/samuel/go-zookeeper v0.0.0-20190810000440-0ceca61e4d75
+# github.com/samuel/go-zookeeper v0.0.0-20190923202752-2cc03de413da
github.com/samuel/go-zookeeper/zk
# github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529
github.com/sean-/seed
@@ -487,7 +506,7 @@ github.com/shurcooL/vfsgen
github.com/sirupsen/logrus
# github.com/soheilhy/cmux v0.1.4
github.com/soheilhy/cmux
-# github.com/spf13/pflag v1.0.3
+# github.com/spf13/pflag v1.0.5
github.com/spf13/pflag
# github.com/stretchr/objx v0.2.0
github.com/stretchr/objx
@@ -501,7 +520,9 @@ github.com/tinylib/msgp/msgp
github.com/tmc/grpc-websocket-proxy/wsproxy
# github.com/tonistiigi/fifo v0.0.0-20190226154929-a9fb20d87448
github.com/tonistiigi/fifo
-# github.com/uber/jaeger-client-go v2.20.0+incompatible
+# github.com/uber-go/atomic v1.4.0
+github.com/uber-go/atomic
+# github.com/uber/jaeger-client-go v2.20.1+incompatible
github.com/uber/jaeger-client-go
github.com/uber/jaeger-client-go/config
github.com/uber/jaeger-client-go/internal/baggage
@@ -622,7 +643,7 @@ go.etcd.io/etcd/raft/tracker
go.etcd.io/etcd/version
go.etcd.io/etcd/wal
go.etcd.io/etcd/wal/walpb
-# go.opencensus.io v0.22.1
+# go.opencensus.io v0.22.2
go.opencensus.io
go.opencensus.io/internal
go.opencensus.io/internal/tagencoding
@@ -640,7 +661,7 @@ go.opencensus.io/trace
go.opencensus.io/trace/internal
go.opencensus.io/trace/propagation
go.opencensus.io/trace/tracestate
-# go.uber.org/atomic v1.4.0
+# go.uber.org/atomic v1.5.0
go.uber.org/atomic
# go.uber.org/multierr v1.1.0
go.uber.org/multierr
@@ -651,13 +672,19 @@ go.uber.org/zap/internal/bufferpool
go.uber.org/zap/internal/color
go.uber.org/zap/internal/exit
go.uber.org/zap/zapcore
-# golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc
+# golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708
golang.org/x/crypto/bcrypt
golang.org/x/crypto/blowfish
golang.org/x/crypto/ed25519
golang.org/x/crypto/ed25519/internal/edwards25519
golang.org/x/crypto/ssh/terminal
-# golang.org/x/net v0.0.0-20190923162816-aa69164e4478 => golang.org/x/net v0.0.0-20190923162816-aa69164e4478
+# golang.org/x/exp v0.0.0-20191030013958-a1ab85dbe136
+golang.org/x/exp/apidiff
+golang.org/x/exp/cmd/apidiff
+# golang.org/x/lint v0.0.0-20190930215403-16217165b5de
+golang.org/x/lint
+golang.org/x/lint/golint
+# golang.org/x/net v0.0.0-20191112182307-2180aed22343 => golang.org/x/net v0.0.0-20190923162816-aa69164e4478
golang.org/x/net/bpf
golang.org/x/net/context
golang.org/x/net/context/ctxhttp
@@ -693,14 +720,33 @@ golang.org/x/text/unicode/bidi
golang.org/x/text/unicode/norm
# golang.org/x/time v0.0.0-20191024005414-555d28b269f0
golang.org/x/time/rate
-# google.golang.org/api v0.11.0
+# golang.org/x/tools v0.0.0-20191115202509-3a792d9c32b2
+golang.org/x/tools/cmd/goimports
+golang.org/x/tools/go/analysis
+golang.org/x/tools/go/analysis/passes/inspect
+golang.org/x/tools/go/ast/astutil
+golang.org/x/tools/go/ast/inspector
+golang.org/x/tools/go/buildutil
+golang.org/x/tools/go/gcexportdata
+golang.org/x/tools/go/internal/gcimporter
+golang.org/x/tools/go/internal/packagesdriver
+golang.org/x/tools/go/packages
+golang.org/x/tools/go/types/objectpath
+golang.org/x/tools/go/types/typeutil
+golang.org/x/tools/internal/fastwalk
+golang.org/x/tools/internal/gopathwalk
+golang.org/x/tools/internal/imports
+golang.org/x/tools/internal/module
+golang.org/x/tools/internal/semver
+golang.org/x/tools/internal/span
+# google.golang.org/api v0.14.0
google.golang.org/api/cloudresourcemanager/v1
google.golang.org/api/compute/v1
google.golang.org/api/googleapi
-google.golang.org/api/googleapi/internal/uritemplates
google.golang.org/api/googleapi/transport
google.golang.org/api/internal
google.golang.org/api/internal/gensupport
+google.golang.org/api/internal/third_party/uritemplates
google.golang.org/api/iterator
google.golang.org/api/option
google.golang.org/api/storage/v1
@@ -708,7 +754,7 @@ google.golang.org/api/transport
google.golang.org/api/transport/grpc
google.golang.org/api/transport/http
google.golang.org/api/transport/http/internal/propagation
-# google.golang.org/appengine v1.6.3
+# google.golang.org/appengine v1.6.5
google.golang.org/appengine
google.golang.org/appengine/internal
google.golang.org/appengine/internal/app_identity
@@ -721,7 +767,7 @@ google.golang.org/appengine/internal/socket
google.golang.org/appengine/internal/urlfetch
google.golang.org/appengine/socket
google.golang.org/appengine/urlfetch
-# google.golang.org/genproto v0.0.0-20190916214212-f660b8655731
+# google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9
google.golang.org/genproto/googleapis/api/annotations
google.golang.org/genproto/googleapis/api/httpbody
google.golang.org/genproto/googleapis/bigtable/admin/v2
@@ -794,9 +840,35 @@ gopkg.in/fsnotify/fsnotify.v1
gopkg.in/inf.v0
# gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7
gopkg.in/tomb.v1
-# gopkg.in/yaml.v2 v2.2.2
+# gopkg.in/yaml.v2 v2.2.5
gopkg.in/yaml.v2
-# k8s.io/api v0.0.0-20190813020757-36bff7324fb7
+# honnef.co/go/tools v0.0.1-2019.2.3
+honnef.co/go/tools/arg
+honnef.co/go/tools/cmd/staticcheck
+honnef.co/go/tools/config
+honnef.co/go/tools/deprecated
+honnef.co/go/tools/facts
+honnef.co/go/tools/functions
+honnef.co/go/tools/go/types/typeutil
+honnef.co/go/tools/internal/cache
+honnef.co/go/tools/internal/passes/buildssa
+honnef.co/go/tools/internal/renameio
+honnef.co/go/tools/internal/sharedcheck
+honnef.co/go/tools/lint
+honnef.co/go/tools/lint/lintdsl
+honnef.co/go/tools/lint/lintutil
+honnef.co/go/tools/lint/lintutil/format
+honnef.co/go/tools/loader
+honnef.co/go/tools/printf
+honnef.co/go/tools/simple
+honnef.co/go/tools/ssa
+honnef.co/go/tools/ssautil
+honnef.co/go/tools/staticcheck
+honnef.co/go/tools/staticcheck/vrp
+honnef.co/go/tools/stylecheck
+honnef.co/go/tools/unused
+honnef.co/go/tools/version
+# k8s.io/api v0.0.0-20191115095533-47f6de673b26
k8s.io/api/admissionregistration/v1beta1
k8s.io/api/apps/v1
k8s.io/api/apps/v1beta1
@@ -833,7 +905,7 @@ k8s.io/api/settings/v1alpha1
k8s.io/api/storage/v1
k8s.io/api/storage/v1alpha1
k8s.io/api/storage/v1beta1
-# k8s.io/apimachinery v0.0.0-20190809020650-423f5d784010
+# k8s.io/apimachinery v0.0.0-20191115015347-3c7067801da2
k8s.io/apimachinery/pkg/api/errors
k8s.io/apimachinery/pkg/api/meta
k8s.io/apimachinery/pkg/api/resource
@@ -932,9 +1004,9 @@ k8s.io/client-go/util/flowcontrol
k8s.io/client-go/util/keyutil
k8s.io/client-go/util/retry
k8s.io/client-go/util/workqueue
-# k8s.io/klog v0.4.0
+# k8s.io/klog v1.0.0
k8s.io/klog
-# k8s.io/utils v0.0.0-20190809000727-6c36bc71fc4a
+# k8s.io/utils v0.0.0-20191114200735-6ca3b61696b6
k8s.io/utils/buffer
k8s.io/utils/integer
k8s.io/utils/trace
|
loki
|
use new runtimeconfig package from Cortex (#1484)
|
43e214038cb7322d7e190d4696d249f74971e4d3
|
2023-07-27 22:36:09
|
Periklis Tsirakidis
|
operator: Prepare community release v0.4.0 (#10050)
| false
|
diff --git a/operator/CHANGELOG.md b/operator/CHANGELOG.md
index 2437c68a8e608..ff1d265a76613 100644
--- a/operator/CHANGELOG.md
+++ b/operator/CHANGELOG.md
@@ -1,5 +1,7 @@
## Main
+## 0.4.0 (2023-07-27)
+
- [10019](https://github.com/grafana/loki/pull/10019) **periklis**: Update Loki operand to v2.8.3
- [9972](https://github.com/grafana/loki/pull/9972) **JoaoBraveCoding**: Fix OIDC.IssuerCAPath by updating it to type CASpec
- [9931](https://github.com/grafana/loki/pull/9931) **aminesnow**: Custom configuration for LokiStack admin groups
@@ -37,7 +39,7 @@
- [9188](https://github.com/grafana/loki/pull/9188) **aminesnow**: Add PodDisruptionBudgets to the query path
- [9162](https://github.com/grafana/loki/pull/9162) **aminesnow**: Add a PodDisruptionBudget to lokistack-gateway
-# 0.3.0 (2023-04-20)
+## 0.3.0 (2023-04-20)
- [9049](https://github.com/grafana/loki/pull/9049) **alanconway**: Revert 1x.extra-small changes, add 1x.demo
- [8661](https://github.com/grafana/loki/pull/8661) **xuanyunhui**: Add a new Object Storage Type for AlibabaCloud OSS
diff --git a/operator/Makefile b/operator/Makefile
index 35561b6baa526..d8d3bba81dced 100644
--- a/operator/Makefile
+++ b/operator/Makefile
@@ -21,7 +21,7 @@ LOKI_OPERATOR_NS ?= kubernetes-operators
# To re-generate a bundle for another specific version without changing the standard setup, you can:
# - use the VERSION as arg of the bundle target (e.g make bundle VERSION=0.0.2)
# - use environment variables to overwrite this value (e.g export VERSION=0.0.2)
-VERSION ?= v0.3.0
+VERSION ?= 0.4.0
CHANNELS ?= "alpha"
DEFAULT_CHANNEL ?= "alpha"
SUPPORTED_OCP_VERSIONS="v4.10"
@@ -32,24 +32,20 @@ REGISTRY_BASE_COMMUNITY = docker.io/grafana
REGISTRY_BASE_OPENSHIFT = quay.io/openshift-logging
REGISTRY_BASE ?= $(REGISTRY_BASE_COMMUNITY)
-# TODO(@periklis): Replace this image tag with VERSION once we have GH tags
-MAIN_IMAGE_TAG = main-ac1c1fd
-
# Customize for variants: community, community-openshift or openshift
VARIANT ?= community
ifeq ($(VARIANT), openshift)
ifeq ($(REGISTRY_BASE), $(REGISTRY_BASE_COMMUNITY))
REGISTRY_BASE = $(REGISTRY_BASE_OPENSHIFT)
endif
- VERSION = v0.1.0
+ VERSION = 0.1.0
CHANNELS = stable
DEFAULT_CHANNEL = stable
LOKI_OPERATOR_NS = openshift-operators-redhat
- MAIN_IMAGE_TAG = $(VERSION)
endif
# Image URL to use all building/pushing image targets
-IMG ?= $(REGISTRY_BASE)/loki-operator:$(MAIN_IMAGE_TAG)
+IMG ?= $(REGISTRY_BASE)/loki-operator:$(VERSION)
# CHANNELS define the bundle channels used in the bundle.
# Add a new line here if you would like to change its default config. (E.g CHANNELS = "preview,fast,stable")
@@ -75,7 +71,7 @@ BUNDLE_METADATA_OPTS ?= $(BUNDLE_CHANNELS) $(BUNDLE_DEFAULT_CHANNEL)
BUNDLE_IMG ?= $(REGISTRY_BASE)/loki-operator-bundle:$(VERSION)
# BUNDLE_GEN_FLAGS are the flags passed to the operator-sdk generate bundle command
-BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(subst v,,$(VERSION)) $(BUNDLE_METADATA_OPTS)
+BUNDLE_GEN_FLAGS ?= -q --overwrite --version $(VERSION) $(BUNDLE_METADATA_OPTS)
MANIFESTS_DIR = config/manifests/$(VARIANT)
BUNDLE_DIR = ./bundle/$(VARIANT)
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
index e0995a63d8b59..7101ca5af48a5 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
@@ -5,11 +5,11 @@ metadata:
service.beta.openshift.io/serving-cert-secret-name: loki-operator-metrics
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-controller-manager-metrics-service
spec:
ports:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml b/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml
index a6300086fbc71..d082624e03dec 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-manager-config_v1_configmap.yaml
@@ -59,9 +59,9 @@ data:
kind: ConfigMap
metadata:
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-manager-config
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml b/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
index 73e71585fecd9..97128e4175d9b 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-metrics-monitor_monitoring.coreos.com_v1_servicemonitor.yaml
@@ -2,11 +2,11 @@ apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator
name: loki-operator-metrics-monitor
spec:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
index 3ef257165399d..96738fd625cc7 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
@@ -3,11 +3,11 @@ kind: ClusterRole
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-metrics-reader
rules:
- nonResourceURLs:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
index ec36553071312..5472ebb497895 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-prometheus
rules:
- apiGroups:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
index 1e6bf9abe1801..d37772946f2d7 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
diff --git a/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml b/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml
index 325ac8e0e167d..5979829a9e38c 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator-webhook-service_v1_service.yaml
@@ -3,11 +3,11 @@ kind: Service
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-webhook-service
spec:
ports:
diff --git a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
index 14e9457e48e14..b3b6fb340086f 100644
--- a/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community-openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -149,8 +149,8 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:main-ac1c1fd
- createdAt: "2023-07-24T11:44:09Z"
+ containerImage: docker.io/grafana/loki-operator:0.4.0
+ createdAt: "2023-07-27T16:51:28Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
@@ -160,7 +160,7 @@ metadata:
labels:
operatorframework.io/arch.amd64: supported
operatorframework.io/arch.arm64: supported
- name: loki-operator.v0.3.0
+ name: loki-operator.v0.4.0
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -1616,11 +1616,11 @@ spec:
serviceAccountName: default
deployments:
- label:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
control-plane: controller-manager
name: loki-operator-controller-manager
spec:
@@ -1654,7 +1654,7 @@ spec:
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
value: quay.io/observatorium/opa-openshift:latest
- image: docker.io/grafana/loki-operator:main-ac1c1fd
+ image: docker.io/grafana/loki-operator:0.4.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -1778,8 +1778,8 @@ spec:
name: gateway
- image: quay.io/observatorium/opa-openshift:latest
name: opa
- replaces: loki-operator.v0.2.0
- version: 0.3.0
+ replaces: loki-operator.v0.3.0
+ version: 0.4.0
webhookdefinitions:
- admissionReviewVersions:
- v1
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
index 99a860f5b80ab..f524144151873 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_alertingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: alertingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
index 04c471aa321a6..44c471d1c8c21 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_lokistacks.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: lokistacks.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
index ef1ffe1d0e6d3..07eac8b94e801 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_recordingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: recordingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
index 279d36569d012..2fd20cc1ce5a9 100644
--- a/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/community-openshift/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: rulerconfigs.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml b/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
index c2f5eadadc670..b7f50405c453f 100644
--- a/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
+++ b/operator/bundle/community/manifests/loki-operator-controller-manager-metrics-service_v1_service.yaml
@@ -3,11 +3,11 @@ kind: Service
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-controller-manager-metrics-service
spec:
ports:
diff --git a/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml b/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml
index 2acb5dccc0fb1..a5e6538f4f1f7 100644
--- a/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml
+++ b/operator/bundle/community/manifests/loki-operator-manager-config_v1_configmap.yaml
@@ -24,9 +24,9 @@ data:
kind: ConfigMap
metadata:
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-manager-config
diff --git a/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml b/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
index 3ef257165399d..96738fd625cc7 100644
--- a/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
+++ b/operator/bundle/community/manifests/loki-operator-metrics-reader_rbac.authorization.k8s.io_v1_clusterrole.yaml
@@ -3,11 +3,11 @@ kind: ClusterRole
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-metrics-reader
rules:
- nonResourceURLs:
diff --git a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
index ec36553071312..5472ebb497895 100644
--- a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
+++ b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_role.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-prometheus
rules:
- apiGroups:
diff --git a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
index 1e6bf9abe1801..d37772946f2d7 100644
--- a/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
+++ b/operator/bundle/community/manifests/loki-operator-prometheus_rbac.authorization.k8s.io_v1_rolebinding.yaml
@@ -6,11 +6,11 @@ metadata:
include.release.openshift.io/single-node-developer: "true"
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-prometheus
roleRef:
apiGroup: rbac.authorization.k8s.io
diff --git a/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml b/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml
index 325ac8e0e167d..5979829a9e38c 100644
--- a/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml
+++ b/operator/bundle/community/manifests/loki-operator-webhook-service_v1_service.yaml
@@ -3,11 +3,11 @@ kind: Service
metadata:
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: loki-operator-webhook-service
spec:
ports:
diff --git a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
index 4f0787e96470a..1c94304138416 100644
--- a/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/community/manifests/loki-operator.clusterserviceversion.yaml
@@ -149,8 +149,8 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:main-ac1c1fd
- createdAt: "2023-07-24T11:44:07Z"
+ containerImage: docker.io/grafana/loki-operator:0.4.0
+ createdAt: "2023-07-27T16:51:26Z"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
operators.operatorframework.io/builder: operator-sdk-unknown
@@ -160,7 +160,7 @@ metadata:
labels:
operatorframework.io/arch.amd64: supported
operatorframework.io/arch.arm64: supported
- name: loki-operator.v0.3.0
+ name: loki-operator.v0.4.0
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -1603,11 +1603,11 @@ spec:
serviceAccountName: default
deployments:
- label:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
control-plane: controller-manager
name: loki-operator-controller-manager
spec:
@@ -1641,7 +1641,7 @@ spec:
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
value: quay.io/observatorium/opa-openshift:latest
- image: docker.io/grafana/loki-operator:main-ac1c1fd
+ image: docker.io/grafana/loki-operator:0.4.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
@@ -1753,8 +1753,8 @@ spec:
name: gateway
- image: quay.io/observatorium/opa-openshift:latest
name: opa
- replaces: loki-operator.v0.2.0
- version: 0.3.0
+ replaces: loki-operator.v0.3.0
+ version: 0.4.0
webhookdefinitions:
- admissionReviewVersions:
- v1
diff --git a/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml b/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
index 6b3638872eb23..2e17035252284 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_alertingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: alertingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
index edc09496c1d3c..d989d147d2d40 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_lokistacks.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: lokistacks.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml b/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
index 6b998ab449941..2c6e34792d28a 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_recordingrules.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: recordingrules.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml b/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
index 4d05b766f2ab5..451e24beb5bf5 100644
--- a/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
+++ b/operator/bundle/community/manifests/loki.grafana.com_rulerconfigs.yaml
@@ -5,11 +5,11 @@ metadata:
controller-gen.kubebuilder.io/version: v0.11.3
creationTimestamp: null
labels:
- app.kubernetes.io/instance: loki-operator-v0.3.0
+ app.kubernetes.io/instance: loki-operator-v0.4.0
app.kubernetes.io/managed-by: operator-lifecycle-manager
app.kubernetes.io/name: loki-operator
app.kubernetes.io/part-of: loki-operator
- app.kubernetes.io/version: 0.3.0
+ app.kubernetes.io/version: 0.4.0
name: rulerconfigs.loki.grafana.com
spec:
conversion:
diff --git a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
index 9e5ab66320e99..8f7a8e5a9ac6b 100644
--- a/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
+++ b/operator/bundle/openshift/manifests/loki-operator.clusterserviceversion.yaml
@@ -150,7 +150,7 @@ metadata:
categories: OpenShift Optional, Logging & Tracing
certified: "false"
containerImage: quay.io/openshift-logging/loki-operator:v0.1.0
- createdAt: "2023-07-24T11:44:11Z"
+ createdAt: "2023-07-27T16:51:31Z"
description: |
The Loki Operator for OCP provides a means for configuring and managing a Loki stack for cluster logging.
## Prerequisites and Requirements
@@ -1639,7 +1639,7 @@ spec:
value: quay.io/observatorium/api:latest
- name: RELATED_IMAGE_OPA
value: quay.io/observatorium/opa-openshift:latest
- image: quay.io/openshift-logging/loki-operator:v0.1.0
+ image: quay.io/openshift-logging/loki-operator:0.1.0
imagePullPolicy: IfNotPresent
livenessProbe:
httpGet:
diff --git a/operator/config/manager/kustomization.yaml b/operator/config/manager/kustomization.yaml
index 86cb69733899b..d68a66a255237 100644
--- a/operator/config/manager/kustomization.yaml
+++ b/operator/config/manager/kustomization.yaml
@@ -6,4 +6,4 @@ kind: Kustomization
images:
- name: controller
newName: quay.io/openshift-logging/loki-operator
- newTag: v0.1.0
+ newTag: 0.1.0
diff --git a/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
index 880800ee3e1eb..c7fc1d1e7c660 100644
--- a/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/community-openshift/bases/loki-operator.clusterserviceversion.yaml
@@ -6,7 +6,7 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:main-ac1c1fd
+ containerImage: docker.io/grafana/loki-operator:0.4.0
createdAt: "2022-12-22T13:28:40+00:00"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
@@ -2211,5 +2211,5 @@ spec:
minKubeVersion: 1.21.1
provider:
name: Grafana Loki SIG Operator
- replaces: loki-operator.v0.2.0
+ replaces: loki-operator.v0.3.0
version: 0.0.0
diff --git a/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml b/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
index bdf142ed65689..9a46ca69db84b 100644
--- a/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
+++ b/operator/config/manifests/community/bases/loki-operator.clusterserviceversion.yaml
@@ -6,7 +6,7 @@ metadata:
capabilities: Full Lifecycle
categories: OpenShift Optional, Logging & Tracing
certified: "false"
- containerImage: docker.io/grafana/loki-operator:main-ac1c1fd
+ containerImage: docker.io/grafana/loki-operator:0.4.0
createdAt: "2022-12-22T13:28:40+00:00"
description: The Community Loki Operator provides Kubernetes native deployment
and management of Loki and related logging components.
@@ -2198,5 +2198,5 @@ spec:
minKubeVersion: 1.21.1
provider:
name: Grafana Loki SIG Operator
- replaces: loki-operator.v0.2.0
+ replaces: loki-operator.v0.3.0
version: 0.0.0
diff --git a/operator/config/overlays/community-openshift/kustomization.yaml b/operator/config/overlays/community-openshift/kustomization.yaml
index 4053794775cb4..2981d73c458dc 100644
--- a/operator/config/overlays/community-openshift/kustomization.yaml
+++ b/operator/config/overlays/community-openshift/kustomization.yaml
@@ -11,8 +11,8 @@ labels:
app.kubernetes.io/managed-by: operator-lifecycle-manager
includeSelectors: true
- pairs:
- app.kubernetes.io/instance: loki-operator-v0.3.0
- app.kubernetes.io/version: "0.3.0"
+ app.kubernetes.io/instance: loki-operator-v0.4.0
+ app.kubernetes.io/version: "0.4.0"
configMapGenerator:
- files:
@@ -27,4 +27,4 @@ patchesStrategicMerge:
images:
- name: controller
newName: docker.io/grafana/loki-operator
- newTag: main-ac1c1fd
+ newTag: 0.4.0
diff --git a/operator/config/overlays/community/kustomization.yaml b/operator/config/overlays/community/kustomization.yaml
index 211c751e59485..7aa216f1f7166 100644
--- a/operator/config/overlays/community/kustomization.yaml
+++ b/operator/config/overlays/community/kustomization.yaml
@@ -22,8 +22,8 @@ labels:
app.kubernetes.io/managed-by: operator-lifecycle-manager
includeSelectors: true
- pairs:
- app.kubernetes.io/instance: loki-operator-v0.3.0
- app.kubernetes.io/version: "0.3.0"
+ app.kubernetes.io/instance: loki-operator-v0.4.0
+ app.kubernetes.io/version: "0.4.0"
generatorOptions:
disableNameSuffixHash: true
@@ -43,7 +43,7 @@ patchesStrategicMerge:
images:
- name: controller
newName: docker.io/grafana/loki-operator
- newTag: main-ac1c1fd
+ newTag: 0.4.0
# the following config is for teaching kustomize how to do var substitution
vars:
diff --git a/operator/hack/operatorhub.sh b/operator/hack/operatorhub.sh
index 3fd13c37d8b5a..3e279f4fd009b 100755
--- a/operator/hack/operatorhub.sh
+++ b/operator/hack/operatorhub.sh
@@ -18,7 +18,6 @@ fi
SOURCE_DIR=$(pwd)
VERSION=$(grep "VERSION ?= " Makefile | awk -F= '{print $2}' | xargs)
-INT_VERSION="${VERSION#v}"
for dest in ${COMMUNITY_OPERATORS_REPOSITORY} ${UPSTREAM_REPOSITORY}; do
(
@@ -34,21 +33,21 @@ for dest in ${COMMUNITY_OPERATORS_REPOSITORY} ${UPSTREAM_REPOSITORY}; do
git checkout -q main
git rebase -q upstream/main
- mkdir -p "operators/loki-operator/${INT_VERSION}"
+ mkdir -p "operators/loki-operator/${VERSION}"
if [[ "${dest}" = "${UPSTREAM_REPOSITORY}" ]]; then
- cp -r "${SOURCE_DIR}/bundle/community-openshift"/* "operators/loki-operator/${INT_VERSION}/"
+ cp -r "${SOURCE_DIR}/bundle/community-openshift"/* "operators/loki-operator/${VERSION}/"
else
- cp -r "${SOURCE_DIR}/bundle/community"/* "operators/loki-operator/${INT_VERSION}/"
+ cp -r "${SOURCE_DIR}/bundle/community"/* "operators/loki-operator/${VERSION}/"
fi
- rm "operators/loki-operator/${INT_VERSION}/bundle.Dockerfile"
+ rm "operators/loki-operator/${VERSION}/bundle.Dockerfile"
if [[ "${dest}" = "${UPSTREAM_REPOSITORY}" ]]; then
python3 - << END
import os, yaml
-with open("./operators/loki-operator/${INT_VERSION}/metadata/annotations.yaml", 'r') as f:
+with open("./operators/loki-operator/${VERSION}/metadata/annotations.yaml", 'r') as f:
y=yaml.safe_load(f) or {}
y['annotations']['com.redhat.openshift.versions'] = os.getenv('SUPPORTED_OCP_VERSIONS')
-with open("./operators/loki-operator/${INT_VERSION}/metadata/annotations.yaml", 'w') as f:
+with open("./operators/loki-operator/${VERSION}/metadata/annotations.yaml", 'w') as f:
yaml.dump(y, f)
END
fi
|
operator
|
Prepare community release v0.4.0 (#10050)
|
76c2e910cb53eb0962402e02afdfd7369b1eadbd
|
2023-03-20 15:35:38
|
René Scheibe
|
chore: Fix spelling of "syncing" (#8829)
| false
|
diff --git a/pkg/storage/stores/series/index/table_manager.go b/pkg/storage/stores/series/index/table_manager.go
index 4db98b851f279..095a4b7f9ac1a 100644
--- a/pkg/storage/stores/series/index/table_manager.go
+++ b/pkg/storage/stores/series/index/table_manager.go
@@ -43,7 +43,7 @@ func newTableManagerMetrics(r prometheus.Registerer) *tableManagerMetrics {
m.syncTableDuration = promauto.With(r).NewHistogramVec(prometheus.HistogramOpts{
Namespace: "loki",
Name: "table_manager_sync_duration_seconds",
- Help: "Time spent synching tables.",
+ Help: "Time spent syncing tables.",
Buckets: prometheus.DefBuckets,
}, []string{"operation", "status_code"})
@@ -325,7 +325,7 @@ func (m *TableManager) SyncTables(ctx context.Context) error {
}
expected := m.calculateExpectedTables()
- level.Debug(util_log.Logger).Log("msg", "synching tables", "expected_tables", len(expected))
+ level.Debug(util_log.Logger).Log("msg", "syncing tables", "expected_tables", len(expected))
toCreate, toCheckThroughput, toDelete, err := m.partitionTables(ctx, expected)
if err != nil {
|
chore
|
Fix spelling of "syncing" (#8829)
|
4768b6d997dfdf611aac290589c1c88c5b50fcd8
|
2022-12-14 02:46:22
|
Kaviraj Kanagaraj
|
doc(api): Default value for `delete_ring_tokens` on `/ingester/shutdown` endpoint (#7921)
| false
|
diff --git a/docs/sources/api/_index.md b/docs/sources/api/_index.md
index 696ae7a4af9ab..c20e426fb6aff 100644
--- a/docs/sources/api/_index.md
+++ b/docs/sources/api/_index.md
@@ -220,7 +220,7 @@ gave this response:
}
```
-If your cluster has
+If your cluster has
[Grafana Loki Multi-Tenancy](../operations/multi-tenancy/) enabled,
set the `X-Scope-OrgID` header to identify the tenant you want to query.
Here is the same example query for the single tenant called `Tenant1`:
@@ -637,7 +637,7 @@ It accepts three URL query parameters `flush`, `delete_ring_tokens`, and `termin
* `flush=<bool>`:
Flag to control whether to flush any in-memory chunks the ingester holds. Defaults to `true`.
* `delete_ring_tokens=<bool>`:
- Flag to control whether to delete the file that contains the ingester ring tokens of the instance if the `-ingester.token-file-path` is specified.
+  Flag to control whether to delete the file that contains the ingester ring tokens of the instance if the `-ingester.token-file-path` is specified. Defaults to `false`.
* `terminate=<bool>`:
Flag to control whether to terminate the Loki process after service shutdown. Defaults to `true`.
@@ -1385,4 +1385,3 @@ This is helpful for scaling down WAL-enabled ingesters where we want to ensure o
but instead flushed to our chunk backend.
In microservices mode, the `/ingester/flush_shutdown` endpoint is exposed by the ingester.
-
|
doc
|
Default value for `delete_ring_tokens` on `/ingester/shutdown` endpoint (#7921)
|